Fix completion in indented lines dropping prefix when jedi is disabled (#14474)
M Bussonnier - r28832:5c8bc514 merge
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now support a wide variety of completion mechanism both available
7 This module now support a wide variety of completion mechanism both available
8 for normal classic Python code, as well as completer for IPython specific
8 for normal classic Python code, as well as completer for IPython specific
9 Syntax like magics.
9 Syntax like magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends not only can complete your code, but can help
14 IPython and compatible frontends not only can complete your code, but can help
15 you to input a wide range of characters. In particular we allow you to insert
15 you to input a wide range of characters. In particular we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name, or unicode long description. To do so type a backslash follow by the
22 name, or unicode long description. To do so type a backslash follow by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 Ξ±
31 Ξ±
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\GREEK SMALL LETTER ALPHA<tab>
38 \\GREEK SMALL LETTER ALPHA<tab>
39 Ξ±
39 Ξ±
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrow or
42 Only valid Python identifiers will complete. Combining characters (like arrow or
43 dots) are also available, unlike latex they need to be put after the their
43 dots) are also available, unlike latex they need to be put after the their
44 counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
44 counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometime challenging to know how to type a character, if you are using
51 It is sometime challenging to know how to type a character, if you are using
52 IPython, or any compatible frontend you can prepend backslash to the character
52 IPython, or any compatible frontend you can prepend backslash to the character
53 and press :kbd:`Tab` to expand it to its latex form.
53 and press :kbd:`Tab` to expand it to its latex form.
54
54
55 .. code::
55 .. code::
56
56
57 \\Ξ±<tab>
57 \\Ξ±<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 :std:configtrait:`Completer.backslash_combining_completions` option to
62 :std:configtrait:`Completer.backslash_combining_completions` option to
63 ``False``.
63 ``False``.
64
64
65
65
66 Experimental
66 Experimental
67 ============
67 ============
68
68
69 Starting with IPython 6.0, this module can make use of the Jedi library to
69 Starting with IPython 6.0, this module can make use of the Jedi library to
70 generate completions both using static analysis of the code, and dynamically
70 generate completions both using static analysis of the code, and dynamically
71 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
71 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
72 for Python. The APIs attached to this new mechanism is unstable and will
72 for Python. The APIs attached to this new mechanism is unstable and will
73 raise unless use in an :any:`provisionalcompleter` context manager.
73 raise unless use in an :any:`provisionalcompleter` context manager.
74
74
75 You will find that the following are experimental:
75 You will find that the following are experimental:
76
76
77 - :any:`provisionalcompleter`
77 - :any:`provisionalcompleter`
78 - :any:`IPCompleter.completions`
78 - :any:`IPCompleter.completions`
79 - :any:`Completion`
79 - :any:`Completion`
80 - :any:`rectify_completions`
80 - :any:`rectify_completions`
81
81
82 .. note::
82 .. note::
83
83
84 better name for :any:`rectify_completions` ?
84 better name for :any:`rectify_completions` ?
85
85
86 We welcome any feedback on these new API, and we also encourage you to try this
86 We welcome any feedback on these new API, and we also encourage you to try this
87 module in debug mode (start IPython with ``--Completer.debug=True``) in order
87 module in debug mode (start IPython with ``--Completer.debug=True``) in order
88 to have extra logging information if :any:`jedi` is crashing, or if current
88 to have extra logging information if :any:`jedi` is crashing, or if current
89 IPython completer pending deprecations are returning results not yet handled
89 IPython completer pending deprecations are returning results not yet handled
90 by :any:`jedi`
90 by :any:`jedi`
91
91
92 Using Jedi for tab completion allow snippets like the following to work without
92 Using Jedi for tab completion allow snippets like the following to work without
93 having to execute any code:
93 having to execute any code:
94
94
95 >>> myvar = ['hello', 42]
95 >>> myvar = ['hello', 42]
96 ... myvar[1].bi<tab>
96 ... myvar[1].bi<tab>
97
97
98 Tab completion will be able to infer that ``myvar[1]`` is a real number without
98 Tab completion will be able to infer that ``myvar[1]`` is a real number without
99 executing almost any code unlike the deprecated :any:`IPCompleter.greedy`
99 executing almost any code unlike the deprecated :any:`IPCompleter.greedy`
100 option.
100 option.
101
101
102 Be sure to update :any:`jedi` to the latest stable version or to try the
102 Be sure to update :any:`jedi` to the latest stable version or to try the
103 current development version to get better completions.
103 current development version to get better completions.
104
104
105 Matchers
105 Matchers
106 ========
106 ========
107
107
108 All completions routines are implemented using unified *Matchers* API.
108 All completions routines are implemented using unified *Matchers* API.
109 The matchers API is provisional and subject to change without notice.
109 The matchers API is provisional and subject to change without notice.
110
110
111 The built-in matchers include:
111 The built-in matchers include:
112
112
113 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
113 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
114 - :any:`IPCompleter.magic_matcher`: completions for magics,
114 - :any:`IPCompleter.magic_matcher`: completions for magics,
115 - :any:`IPCompleter.unicode_name_matcher`,
115 - :any:`IPCompleter.unicode_name_matcher`,
116 :any:`IPCompleter.fwd_unicode_matcher`
116 :any:`IPCompleter.fwd_unicode_matcher`
117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
118 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
118 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
119 - :any:`IPCompleter.file_matcher`: paths to files and directories,
119 - :any:`IPCompleter.file_matcher`: paths to files and directories,
120 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
120 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
121 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
121 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
122 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
122 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
123 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
123 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
124 implementation in :any:`InteractiveShell` which uses IPython hooks system
124 implementation in :any:`InteractiveShell` which uses IPython hooks system
125 (`complete_command`) with string dispatch (including regular expressions).
125 (`complete_command`) with string dispatch (including regular expressions).
126 Differently to other matchers, ``custom_completer_matcher`` will not suppress
126 Differently to other matchers, ``custom_completer_matcher`` will not suppress
127 Jedi results to match behaviour in earlier IPython versions.
127 Jedi results to match behaviour in earlier IPython versions.
128
128
129 Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list.
129 Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list.
130
130
131 Matcher API
131 Matcher API
132 -----------
132 -----------
133
133
134 Simplifying some details, the ``Matcher`` interface can described as
134 Simplifying some details, the ``Matcher`` interface can described as
135
135
136 .. code-block::
136 .. code-block::
137
137
138 MatcherAPIv1 = Callable[[str], list[str]]
138 MatcherAPIv1 = Callable[[str], list[str]]
139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
140
140
141 Matcher = MatcherAPIv1 | MatcherAPIv2
141 Matcher = MatcherAPIv1 | MatcherAPIv2
142
142
143 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
143 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
144 and remains supported as a simplest way for generating completions. This is also
144 and remains supported as a simplest way for generating completions. This is also
145 currently the only API supported by the IPython hooks system `complete_command`.
145 currently the only API supported by the IPython hooks system `complete_command`.
146
146
147 To distinguish between matcher versions ``matcher_api_version`` attribute is used.
147 To distinguish between matcher versions ``matcher_api_version`` attribute is used.
148 More precisely, the API allows to omit ``matcher_api_version`` for v1 Matchers,
148 More precisely, the API allows to omit ``matcher_api_version`` for v1 Matchers,
149 and requires a literal ``2`` for v2 Matchers.
149 and requires a literal ``2`` for v2 Matchers.
150
150
151 Once the API stabilises future versions may relax the requirement for specifying
151 Once the API stabilises future versions may relax the requirement for specifying
152 ``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore
152 ``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore
153 please do not rely on the presence of ``matcher_api_version`` for any purposes.
153 please do not rely on the presence of ``matcher_api_version`` for any purposes.
154
154
155 Suppression of competing matchers
155 Suppression of competing matchers
156 ---------------------------------
156 ---------------------------------
157
157
158 By default results from all matchers are combined, in the order determined by
158 By default results from all matchers are combined, in the order determined by
159 their priority. Matchers can request to suppress results from subsequent
159 their priority. Matchers can request to suppress results from subsequent
160 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
160 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
161
161
162 When multiple matchers simultaneously request surpression, the results from of
162 When multiple matchers simultaneously request surpression, the results from of
163 the matcher with higher priority will be returned.
163 the matcher with higher priority will be returned.
164
164
165 Sometimes it is desirable to suppress most but not all other matchers;
165 Sometimes it is desirable to suppress most but not all other matchers;
166 this can be achieved by adding a set of identifiers of matchers which
166 this can be achieved by adding a set of identifiers of matchers which
167 should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key.
167 should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key.
168
168
169 The suppression behaviour can is user-configurable via
169 The suppression behaviour can is user-configurable via
170 :std:configtrait:`IPCompleter.suppress_competing_matchers`.
170 :std:configtrait:`IPCompleter.suppress_competing_matchers`.
171 """
171 """


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org

from __future__ import annotations
import builtins as builtin_mod
import enum
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import tokenize
import time
import unicodedata
import uuid
import warnings
from ast import literal_eval
from collections import defaultdict
from contextlib import contextmanager
from dataclasses import dataclass
from functools import cached_property, partial
from types import SimpleNamespace
from typing import (
    Iterable,
    Iterator,
    List,
    Tuple,
    Union,
    Any,
    Sequence,
    Dict,
    Optional,
    TYPE_CHECKING,
    Set,
    Sized,
    TypeVar,
    Literal,
)

from IPython.core.guarded_eval import guarded_eval, EvaluationContext
from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
from IPython.utils.decorators import sphinx_options
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.docs import GENERATING_DOCUMENTATION
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import (
    Bool,
    Enum,
    Int,
    List as ListTrait,
    Unicode,
    Dict as DictTrait,
    Union as UnionTrait,
    observe,
)
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
__skip_doctest__ = True


try:
    import jedi

    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes

    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False


if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
    from typing import cast
    from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
else:
    from typing import Generic

    def cast(type_, obj):
        """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
        return obj

    # do not require at runtime
    NotRequired = Tuple  # requires Python >=3.11
    TypedDict = Dict  # by extension of `NotRequired` requires 3.11 too
    Protocol = object  # requires Python >=3.8
    TypeAlias = Any  # requires Python >=3.10
    TypeGuard = Generic  # requires Python >=3.10
    if GENERATING_DOCUMENTATION:
        from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
# -----------------------------------------------------------------------------

# Ranges in which most of the valid unicode names are found. We could be finer
# grained, but is it worth it for performance? While unicode has code points in
# the range 0..0x110000, only about 10% of those have names (131808 as I write
# this). The ranges below cover them all, with a density of ~67%; the biggest
# next gap we considered only adds about 1% density, and there are 600 gaps
# that would need hard-coding.
_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

# sentinel value to signal lack of a match
not_found = object()


class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in the :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things()  # works

    >>> completer.do_experimental_things()  # raises

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should
        report a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements
        on any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply checks whether the count of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False


def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s


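As a quick check of the escaping rule above, here is a standalone copy of the POSIX branch (the Windows branch simply wraps the whole string in double quotes); the helper name is illustrative only:

```python
PROTECTABLES_POSIX = ' ()[]{}?=\\|;:\'#*"^&'  # the non-win32 set from above

def protect_filename_posix(s, protectables=PROTECTABLES_POSIX):
    # Standalone restatement of the non-Windows branch, for illustration.
    if set(s) & set(protectables):
        return "".join(("\\" + c if c in protectables else c) for c in s)
    return s

print(protect_filename_posix("My Documents/file (1).txt"))
# → My\ Documents/file\ \(1\).txt
```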
def expand_user(path: str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as
        the input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val


def compress_user(path: str, tilde_expand: bool, tilde_val: str) -> str:
    """Does the opposite of expand_user, with its outputs."""
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path


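A small deterministic sketch of the ``expand_user``/``compress_user`` round trip, with the logic restated inline rather than imported; ``HOME`` is pinned so the values are predictable (assumes a POSIX ``expanduser``):

```python
import os

os.environ["HOME"] = "/home/alice"  # pin for a deterministic demo (POSIX only)

path = "~/notebooks"
newpath = os.path.expanduser(path)  # '/home/alice/notebooks'
rest = len(path) - 1                # characters after the leading '~'
tilde_val = newpath[:-rest]         # '/home/alice'

assert (newpath, tilde_val) == ("/home/alice/notebooks", "/home/alice")
# compress_user inverts the expansion:
assert newpath.replace(tilde_val, "~") == path
```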
def completions_sorting_key(word):
    """key for sorting completions

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in the alphabetical order
      by their name
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if not "%" in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if not "%" in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2


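To see the effect of the key above, here is a standalone demonstration; the key is restated inline so the snippet runs on its own:

```python
def completions_sorting_key(word):
    # Standalone restatement of the key above, for demonstration only.
    prio1, prio2 = 0, 0
    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1
    if word.endswith('='):
        prio1 = -1
    if word.startswith('%%'):
        if "%" not in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if "%" not in word[1:]:
            word = word[1:]
            prio2 = 1
    return prio1, word, prio2

# Magics sort alphabetically by bare name; underscored names sink to the end.
words = ["_private", "%timeit", "alpha", "__dunder__", "beta"]
print(sorted(words, key=completions_sorting_key))
# → ['alpha', 'beta', '%timeit', '_private', '__dunder__']
```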
class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0, so should likely be removed for 7.0

    """

    def __init__(self, name):
        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ""
        self._origin = "fake"
        self.text = "crashed"

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]


class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in the proper context manager.

    This acts as a middle ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how
    the code should be run/inspected, Prompt Toolkit (and other frontends)
    mostly need user-facing information:

    - Which range should be replaced by what.
    - Some metadata (like the completion type), or meta information to be
      displayed to the user.

    For debugging purposes we can also store the origin of the completion
    (``jedi``, ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(
        self,
        start: int,
        end: int,
        text: str,
        *,
        type: Optional[str] = None,
        _origin="",
        signature="",
    ) -> None:
        warnings.warn(
            "``Completion`` is a provisional API (as of IPython 6.0). "
            "It may change without warnings. "
            "Use in corresponding context manager.",
            category=ProvisionalCompleterWarning,
            stacklevel=2,
        )

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
            (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other) -> bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer the type), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing, as it depends on the surrounding text, which Completions
        are not aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))


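The equality/hash semantics above can be exercised with a minimal standalone sketch. ``CompletionSketch`` below is a hypothetical stand-in (not the provisional ``Completion`` class itself, so no ``ProvisionalCompleterWarning`` is involved): equality and hashing intentionally ignore ``type`` so that candidates from different matchers de-duplicate on ``(start, end, text)``.

```python
# Hypothetical minimal sketch of Completion's equality semantics.
class CompletionSketch:
    def __init__(self, start, end, text, type=None):
        self.start, self.end, self.text, self.type = start, end, text, type

    def __eq__(self, other):
        # type is deliberately excluded: some matchers cannot infer it.
        return (self.start, self.end, self.text) == (other.start, other.end, other.text)

    def __hash__(self):
        return hash((self.start, self.end, self.text))


a = CompletionSketch(0, 3, "print", type="function")
b = CompletionSketch(0, 3, "print", type=None)  # type differs, still equal
assert a == b
assert len({a, b}) == 1  # de-duplicated in a set
```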
class SimpleCompletion:
    """Completion item to be included in the dictionary returned by new-style Matcher (API v2).

    .. warning::

        Provisional

        This class is used to describe the currently supported attributes of
        simple completion items, and any additional implementation details
        should not be relied on. Additional attributes may be included in
        future versions, and the meaning of text disambiguated from the
        current dual meaning of "text to insert" and "text to use as a label".
    """

    __slots__ = ["text", "type"]

    def __init__(self, text: str, *, type: Optional[str] = None):
        self.text = text
        self.type = type

    def __repr__(self):
        return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"


class _MatcherResultBase(TypedDict):
    """Definition of dictionary to be returned by new-style Matcher (API v2)."""

    #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
    matched_fragment: NotRequired[str]

    #: Whether to suppress results from all other matchers (True), some
    #: matchers (set of identifiers) or none (False); default is False.
    suppress: NotRequired[Union[bool, Set[str]]]

    #: Identifiers of matchers which should NOT be suppressed when this matcher
    #: requests to suppress all other matchers; defaults to an empty set.
    do_not_suppress: NotRequired[Set[str]]

    #: Are completions already ordered and should be left as-is? default is False.
    ordered: NotRequired[bool]


@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
class SimpleMatcherResult(_MatcherResultBase, TypedDict):
    """Result of new-style completion matcher."""

    # note: TypedDict is added again to the inheritance chain
    # in order to get __orig_bases__ for documentation

    #: List of candidate completions
    completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]


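A v2 matcher returns a plain dictionary of this shape. The sketch below builds one by hand; the ``SimpleCompletion`` stub stands in for the class defined above, and the ``"numpy"``/``"num"`` values are invented for illustration.

```python
from typing import Optional


# Stand-in for the SimpleCompletion class defined above.
class SimpleCompletion:
    def __init__(self, text: str, *, type: Optional[str] = None):
        self.text, self.type = text, type


# Shape of a SimpleMatcherResult: only "completions" is required,
# the other keys are NotRequired and shown here with their defaults.
result = {
    "completions": [SimpleCompletion("numpy", type="module")],
    "matched_fragment": "num",  # suffix of the token that was matched
    "suppress": False,          # do not silence other matchers
    "ordered": False,           # let the completer sort the candidates
}

assert result["completions"][0].text == "numpy"
assert result["matched_fragment"] == "num"
```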
class _JediMatcherResult(_MatcherResultBase):
    """Matching result returned by Jedi (will be processed differently)"""

    #: list of candidate completions
    completions: Iterator[_JediCompletionLike]


AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)


@dataclass
class CompletionContext:
    """Completion context provided as an argument to matchers in the Matcher API v2."""

    # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
    # which was not explicitly visible as an argument of the matcher, making any refactor
    # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
    # from the completer, and make substituting them in sub-classes easier.

    #: Relevant fragment of code directly preceding the cursor.
    #: The extraction of token is implemented via splitter heuristic
    #: (following readline behaviour for legacy reasons), which is user configurable
    #: (by switching the greedy mode).
    token: str

    #: The full available content of the editor or buffer
    full_text: str

    #: Cursor position in the line (the same for ``full_text`` and ``text``).
    cursor_position: int

    #: Cursor line in ``full_text``.
    cursor_line: int

    #: The maximum number of completions that will be used downstream.
    #: Matchers can use this information to abort early.
    #: The built-in Jedi matcher is currently exempted from this limit.
    #: If not given, return all possible completions.
    limit: Optional[int]

    @cached_property
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

    @cached_property
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]


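The two cached helpers can be sketched without the dataclass machinery: ``cursor_line`` selects a line of the buffer, and ``cursor_position`` indexes into that line rather than into the whole buffer. The buffer contents below are invented for illustration.

```python
# Hypothetical buffer: two lines, cursor on the second line at column 6.
full_text = "import numpy\nnumpy.arr"
cursor_line, cursor_position = 1, 6

# Mirrors CompletionContext.line_with_cursor and .text_until_cursor.
line_with_cursor = full_text.split("\n")[cursor_line]
text_until_cursor = line_with_cursor[:cursor_position]

assert line_with_cursor == "numpy.arr"
assert text_until_cursor == "numpy."
```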
#: Matcher results for API v2.
MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]


class _MatcherAPIv1Base(Protocol):
    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
    #: API version
    matcher_api_version: Optional[Literal[1]]

    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...


#: Protocol describing Matcher API v1.
MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]


class MatcherAPIv2(Protocol):
    """Protocol describing Matcher API v2."""

    #: API version
    matcher_api_version: Literal[2] = 2

    def __call__(self, context: CompletionContext) -> MatcherResult:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]


def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 1


def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 2


def _is_sizable(value: Any) -> TypeGuard[Sized]:
    """Determine whether the object is sizable"""
    return hasattr(value, "__len__")


def _is_iterator(value: Any) -> TypeGuard[Iterator]:
    """Determine whether the object is an iterator"""
    return hasattr(value, "__next__")


def has_any_completions(result: MatcherResult) -> bool:
    """Check if any result includes any completions."""
    completions = result["completions"]
    if _is_sizable(completions):
        return len(completions) != 0
    if _is_iterator(completions):
        try:
            old_iterator = completions
            first = next(old_iterator)
            result["completions"] = cast(
                Iterator[SimpleCompletion],
                itertools.chain([first], old_iterator),
            )
            return True
        except StopIteration:
            return False
    raise ValueError(
        "Completions returned by matcher need to be an Iterator or a Sizable"
    )


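The iterator branch above relies on a peek-then-chain trick: it consumes one element to test for emptiness, then re-attaches it with ``itertools.chain`` so downstream consumers still see the full stream. A standalone sketch (the ``peek_nonempty`` helper name is made up for illustration):

```python
import itertools


def peek_nonempty(result):
    """Return True if result["completions"] yields anything, without losing items."""
    completions = result["completions"]
    if hasattr(completions, "__len__"):       # sizable: just check the length
        return len(completions) != 0
    try:
        first = next(completions)             # peek: consumes one element
    except StopIteration:
        return False
    # Re-attach the consumed element in front of the rest of the iterator.
    result["completions"] = itertools.chain([first], completions)
    return True


r = {"completions": iter(["a", "b"])}
assert peek_nonempty(r) is True
assert list(r["completions"]) == ["a", "b"]   # nothing was lost
assert peek_nonempty({"completions": iter([])}) is False
```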
def completion_matcher(
    *,
    priority: Optional[float] = None,
    identifier: Optional[str] = None,
    api_version: int = 1,
):
    """Adds attributes describing the matcher.

    Parameters
    ----------
    priority : Optional[float]
        The priority of the matcher, which determines the order of execution
        of matchers. Higher priority means that the matcher will be executed
        first. Defaults to 0.
    identifier : Optional[str]
        identifier of the matcher allowing users to modify the behaviour via
        traitlets, and also used for debugging (will be passed as ``origin``
        with the completions).

        Defaults to the matcher function's ``__qualname__`` (for example,
        ``IPCompleter.file_matcher`` for the built-in matcher defined
        as a ``file_matcher`` method of the ``IPCompleter`` class).
    api_version : Optional[int]
        version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.
    """

    def wrapper(func: Matcher):
        func.matcher_priority = priority or 0  # type: ignore
        func.matcher_identifier = identifier or func.__qualname__  # type: ignore
        func.matcher_api_version = api_version  # type: ignore
        if TYPE_CHECKING:
            if api_version == 1:
                func = cast(MatcherAPIv1, func)
            elif api_version == 2:
                func = cast(MatcherAPIv2, func)
        return func

    return wrapper


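Decorating a custom matcher then looks like the sketch below. The decorator body is reproduced inline (without the typing casts) so the example is self-contained; ``unicode_name_matcher`` and its candidate list are hypothetical.

```python
# Inline reproduction of the attribute-setting done by completion_matcher.
def completion_matcher(*, priority=None, identifier=None, api_version=1):
    def wrapper(func):
        func.matcher_priority = priority or 0
        func.matcher_identifier = identifier or func.__qualname__
        func.matcher_api_version = api_version
        return func
    return wrapper


@completion_matcher(identifier="my_ext.unicode_names", api_version=1)
def unicode_name_matcher(text):
    # A v1 matcher: str -> list of candidate strings (hypothetical logic).
    return [n for n in ("alpha", "beta") if n.startswith(text)]


assert unicode_name_matcher.matcher_identifier == "my_ext.unicode_names"
assert unicode_name_matcher.matcher_api_version == 1
assert unicode_name_matcher("al") == ["alpha"]
```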
def _get_matcher_priority(matcher: Matcher):
    return getattr(matcher, "matcher_priority", 0)


def _get_matcher_id(matcher: Matcher):
    return getattr(matcher, "matcher_identifier", matcher.__qualname__)


def _get_matcher_api_version(matcher):
    return getattr(matcher, "matcher_api_version", 1)


context_matcher = partial(completion_matcher, api_version=2)


_IC = Iterable[Completion]


def _deduplicate_completions(text: str, completions: _IC) -> _IC:
    """
    Deduplicate a set of completions.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to deduplicate

    Yields
    ------
    `Completions` objects
        Completions coming from multiple sources may be different but end up
        having the same effect when applied to ``text``. If this is the case,
        this will consider the completions equal and only emit the first one
        encountered. Not folded into `completions()` yet for debugging
        purposes, and to detect when the IPython completer returns things that
        Jedi does not, but should be at some point.
    """
    completions = list(completions)
    if not completions:
        return

    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)

    seen = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            yield c
            seen.add(new_text)


def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
    """
    Rectify a set of completions to all have the same ``start`` and ``end``

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in the proper context manager.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to rectify
    _debug : bool
        Log failed completion

    Notes
    -----
    :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same
    start and end, though the Jupyter Protocol requires them to behave like so.
    This will readjust the completions to have the same ``start`` and ``end``
    by padding both extremities with the surrounding text.

    During stabilisation this should support a ``_debug`` option to log which
    completions are returned by the IPython completer and not found in Jedi,
    in order to make upstream bug reports.
    """
    warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
                  "It may change without warnings. "
                  "Use in corresponding context manager.",
                  category=ProvisionalCompleterWarning, stacklevel=2)

    completions = list(completions)
    if not completions:
        return
    starts = (c.start for c in completions)
    ends = (c.end for c in completions)

    new_start = min(starts)
    new_end = max(ends)

    seen_jedi = set()
    seen_python_matches = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if c._origin == 'jedi':
            seen_jedi.add(new_text)
        elif c._origin == "IPCompleter.python_matcher":
            seen_python_matches.add(new_text)
        yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
    diff = seen_python_matches.difference(seen_jedi)
    if diff and _debug:
        print('IPython.python matches have extras:', diff)


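The padding rule from the Notes can be sketched standalone with ``(start, end, replacement)`` tuples: every candidate is widened to the common ``(start, end)`` span by borrowing the surrounding text from the original buffer. The ``rectify`` helper and the ``"foo.ba"`` buffer below are illustrative, not the IPython API.

```python
def rectify(text, completions):
    """completions: list of (start, end, replacement) tuples."""
    new_start = min(c[0] for c in completions)
    new_end = max(c[1] for c in completions)
    # Pad each candidate on both sides with the buffer text it did not cover.
    return [
        (new_start, new_end, text[new_start:start] + repl + text[end:new_end])
        for start, end, repl in completions
    ]


# "foo.ba": one candidate replaces "ba" -> "bar", another replaces
# ".ba" -> ".baz"; after rectification both span columns 3..6.
out = rectify("foo.ba", [(4, 6, "bar"), (3, 6, ".baz")])
assert out == [(3, 6, ".bar"), (3, 6, ".baz")]
```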
if sys.platform == 'win32':
    DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
else:
    DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'

GREEDY_DELIMS = ' =\r\n'


class CompletionSplitter(object):
    """An object to split an input line in a manner similar to readline.

    By having our own implementation, we can expose readline-like completion in
    a uniform manner to all frontends. This object only needs to be given the
    line of text to be split and the cursor position on said line, and it
    returns the 'word' to be completed on at the cursor after splitting the
    entire line.

    What characters are used as splitting delimiters can be controlled by
    setting the ``delims`` attribute (this is a property that internally
    automatically builds the necessary regular expression)"""

    # Private interface

    # A string of delimiter characters. The default value makes sense for
    # IPython's most typical usage patterns.
    _delims = DELIMS

    # The expression (a normal string) to be compiled into a regular expression
    # for actual splitting. We store it as an attribute mostly for ease of
    # debugging, since this type of code can be so tricky to debug.
    _delim_expr = None

    # The regular expression that does the actual splitting
    _delim_re = None

    def __init__(self, delims=None):
        delims = CompletionSplitter._delims if delims is None else delims
        self.delims = delims

    @property
    def delims(self):
        """Return the string of delimiter characters."""
        return self._delims

    @delims.setter
    def delims(self, delims):
        """Set the delimiters for line splitting."""
        expr = '[' + ''.join('\\' + c for c in delims) + ']'
        self._delim_re = re.compile(expr)
        self._delims = delims
        self._delim_expr = expr

    def split_line(self, line, cursor_pos=None):
        """Split a line of text with a cursor at the given position."""
        l = line if cursor_pos is None else line[:cursor_pos]
        return self._delim_re.split(l)[-1]



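The ``split_line`` behaviour can be sketched with the non-Windows ``DELIMS`` shown above: split everything left of the cursor on delimiter characters and keep the last fragment. Note that ``.`` is deliberately not a delimiter, so dotted attribute paths stay in one token.

```python
import re

# Non-Windows delimiter set from above; each character is escaped and
# placed inside a regex character class, exactly as the setter does.
DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
delim_re = re.compile('[' + ''.join('\\' + c for c in DELIMS) + ']')


def split_line(line, cursor_pos=None):
    # Only the text left of the cursor matters for completion.
    left = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(left)[-1]


assert split_line("print(np.arr") == "np.arr"        # '(' splits, '.' does not
assert split_line("a = np.arr", cursor_pos=6) == "np"  # only text left of cursor
```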
969 class Completer(Configurable):
969 class Completer(Configurable):
970
970
971 greedy = Bool(
971 greedy = Bool(
972 False,
972 False,
973 help="""Activate greedy completion.
973 help="""Activate greedy completion.
974
974
975 .. deprecated:: 8.8
975 .. deprecated:: 8.8
976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977
977
978 When enabled in IPython 8.8 or newer, changes configuration as follows:
978 When enabled in IPython 8.8 or newer, changes configuration as follows:
979
979
980 - ``Completer.evaluation = 'unsafe'``
980 - ``Completer.evaluation = 'unsafe'``
981 - ``Completer.auto_close_dict_keys = True``
981 - ``Completer.auto_close_dict_keys = True``
982 """,
982 """,
983 ).tag(config=True)
983 ).tag(config=True)
984
984
985 evaluation = Enum(
985 evaluation = Enum(
986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 default_value="limited",
987 default_value="limited",
988 help="""Policy for code evaluation under completion.
988 help="""Policy for code evaluation under completion.
989
989
990 Successive options allow enabling more eager evaluation for better
990 Successive options allow enabling more eager evaluation for better
991 completion suggestions, including for nested dictionaries, nested lists,
991 completion suggestions, including for nested dictionaries, nested lists,
992 or even results of function calls.
992 or even results of function calls.
993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995
995
996 Allowed values are:
996 Allowed values are:
997
997
998 - ``forbidden``: no evaluation of code is permitted,
998 - ``forbidden``: no evaluation of code is permitted,
999 - ``minimal``: evaluation of literals and access to built-in namespace;
999 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 no item/attribute evaluation, no access to locals/globals,
1000 no item/attribute evaluation, no access to locals/globals,
1001 no evaluation of any operations or comparisons.
1001 no evaluation of any operations or comparisons.
1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 - ``unsafe``: evaluation of all methods and function calls but not of
1006 - ``unsafe``: evaluation of all methods and function calls but not of
1007 syntax with side-effects like `del x`,
1007 syntax with side-effects like `del x`,
1008 - ``dangerous``: completely arbitrary evaluation.
1008 - ``dangerous``: completely arbitrary evaluation.
1009 """,
1009 """,
1010 ).tag(config=True)
1010 ).tag(config=True)
1011
1011
1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 help="Experimental: Use Jedi to generate autocompletions. "
1013 help="Experimental: Use Jedi to generate autocompletions. "
1014 "Defaults to True if jedi is installed.").tag(config=True)
1014 "Defaults to True if jedi is installed.").tag(config=True)
1015
1015
1016 jedi_compute_type_timeout = Int(default_value=400,
1016 jedi_compute_type_timeout = Int(default_value=400,
1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1018 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1019 performance by preventing jedi from building its cache.
1019 performance by preventing jedi from building its cache.
1020 """).tag(config=True)
1020 """).tag(config=True)
1021
1021
1022 debug = Bool(default_value=False,
1022 debug = Bool(default_value=False,
1023 help='Enable debug for the Completer. Mostly print extra '
1023 help='Enable debug for the Completer. Mostly print extra '
1024 'information for experimental jedi integration.')\
1024 'information for experimental jedi integration.')\
1025 .tag(config=True)
1025 .tag(config=True)
1026
1026
1027 backslash_combining_completions = Bool(True,
1027 backslash_combining_completions = Bool(True,
1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 "Includes completion of latex commands, unicode names, and expanding "
1029 "Includes completion of latex commands, unicode names, and expanding "
1030 "unicode characters back to latex commands.").tag(config=True)
1030 "unicode characters back to latex commands.").tag(config=True)
1031
1031
1032 auto_close_dict_keys = Bool(
1032 auto_close_dict_keys = Bool(
1033 False,
1033 False,
1034 help="""
1034 help="""
1035 Enable auto-closing dictionary keys.
1035 Enable auto-closing dictionary keys.
1036
1036
1037 When enabled, string keys will be suffixed with a final quote
1037 When enabled, string keys will be suffixed with a final quote
1038 (matching the opening quote), tuple keys will also receive a
1038 (matching the opening quote), tuple keys will also receive a
1039 separating comma if needed, and keys which are final will
1039 separating comma if needed, and keys which are final will
1040 receive a closing bracket (``]``).
1040 receive a closing bracket (``]``).
1041 """,
1041 """,
1042 ).tag(config=True)
1042 ).tag(config=True)
1043
1043
1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 """Create a new completer for the command line.
1045 """Create a new completer for the command line.
1046
1046
1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048
1048
1049 If unspecified, the default namespace where completions are performed
1049 If unspecified, the default namespace where completions are performed
1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 given as dictionaries.
1051 given as dictionaries.
1052
1052
1053 An optional second namespace can be given. This allows the completer
1053 An optional second namespace can be given. This allows the completer
1054 to handle cases where both the local and global scopes need to be
1054 to handle cases where both the local and global scopes need to be
1055 distinguished.
1055 distinguished.
1056 """
1056 """
1057
1057
1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 # specific namespace or to use __main__.__dict__. This will allow us
1059 # specific namespace or to use __main__.__dict__. This will allow us
1060 # to bind to __main__.__dict__ at completion time, not now.
1060 # to bind to __main__.__dict__ at completion time, not now.
1061 if namespace is None:
1061 if namespace is None:
1062 self.use_main_ns = True
1062 self.use_main_ns = True
1063 else:
1063 else:
1064 self.use_main_ns = False
1064 self.use_main_ns = False
1065 self.namespace = namespace
1065 self.namespace = namespace
1066
1066
1067 # The global namespace, if given, can be bound directly
1067 # The global namespace, if given, can be bound directly
1068 if global_namespace is None:
1068 if global_namespace is None:
1069 self.global_namespace = {}
1069 self.global_namespace = {}
1070 else:
1070 else:
1071 self.global_namespace = global_namespace
1071 self.global_namespace = global_namespace
1072
1072
1073 self.custom_matchers = []
1073 self.custom_matchers = []
1074
1074
1075 super(Completer, self).__init__(**kwargs)
1075 super(Completer, self).__init__(**kwargs)
1076
1076
1077 def complete(self, text, state):
1077 def complete(self, text, state):
1078 """Return the next possible completion for 'text'.
1078 """Return the next possible completion for 'text'.
1079
1079
1080 This is called successively with state == 0, 1, 2, ... until it
1080 This is called successively with state == 0, 1, 2, ... until it
1081 returns None. The completion should begin with 'text'.
1081 returns None. The completion should begin with 'text'.
1082
1082
1083 """
1083 """
1084 if self.use_main_ns:
1084 if self.use_main_ns:
1085 self.namespace = __main__.__dict__
1085 self.namespace = __main__.__dict__
1086
1086
1087 if state == 0:
1087 if state == 0:
1088 if "." in text:
1088 if "." in text:
1089 self.matches = self.attr_matches(text)
1089 self.matches = self.attr_matches(text)
1090 else:
1090 else:
1091 self.matches = self.global_matches(text)
1091 self.matches = self.global_matches(text)
1092 try:
1092 try:
1093 return self.matches[state]
1093 return self.matches[state]
1094 except IndexError:
1094 except IndexError:
1095 return None
1095 return None
1096
1096
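The readline-style state protocol used by `complete` (state 0 computes the match list, later states index into it) can be illustrated with a toy completer; `ToyCompleter` is a hypothetical name, not part of IPython:

```python
class ToyCompleter:
    """Minimal illustration of the readline completion protocol:
    state 0 recomputes matches; subsequent states walk the cached list."""

    def __init__(self, words):
        self.words = words

    def complete(self, text, state):
        if state == 0:
            self.matches = [w for w in self.words if w.startswith(text)]
        try:
            return self.matches[state]
        except IndexError:
            return None


tc = ToyCompleter(["apple", "apricot", "banana"])
results = []
state = 0
while True:
    m = tc.complete("ap", state)
    if m is None:
        break
    results.append(m)
    state += 1
```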
1097 def global_matches(self, text):
1097 def global_matches(self, text):
1098 """Compute matches when text is a simple name.
1098 """Compute matches when text is a simple name.
1099
1099
1100 Return a list of all keywords, built-in functions and names currently
1100 Return a list of all keywords, built-in functions and names currently
1101 defined in self.namespace or self.global_namespace that match.
1101 defined in self.namespace or self.global_namespace that match.
1102
1102
1103 """
1103 """
1104 matches = []
1104 matches = []
1105 match_append = matches.append
1105 match_append = matches.append
1106 n = len(text)
1106 n = len(text)
1107 for lst in [
1107 for lst in [
1108 keyword.kwlist,
1108 keyword.kwlist,
1109 builtin_mod.__dict__.keys(),
1109 builtin_mod.__dict__.keys(),
1110 list(self.namespace.keys()),
1110 list(self.namespace.keys()),
1111 list(self.global_namespace.keys()),
1111 list(self.global_namespace.keys()),
1112 ]:
1112 ]:
1113 for word in lst:
1113 for word in lst:
1114 if word[:n] == text and word != "__builtins__":
1114 if word[:n] == text and word != "__builtins__":
1115 match_append(word)
1115 match_append(word)
1116
1116
1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 shortened = {
1119 shortened = {
1120 "_".join([sub[0] for sub in word.split("_")]): word
1120 "_".join([sub[0] for sub in word.split("_")]): word
1121 for word in lst
1121 for word in lst
1122 if snake_case_re.match(word)
1122 if snake_case_re.match(word)
1123 }
1123 }
1124 for word in shortened.keys():
1124 for word in shortened.keys():
1125 if word[:n] == text and word != "__builtins__":
1125 if word[:n] == text and word != "__builtins__":
1126 match_append(shortened[word])
1126 match_append(shortened[word])
1127 return matches
1127 return matches
1128
1128
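The snake_case abbreviation matching above (completing `d_f` to `data_frame`) can be sketched in isolation; `abbrev_matches` is an illustrative helper using the same regex as the completer:

```python
import re

# Same pattern as the completer: at least two underscore-separated parts.
snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

def abbrev_matches(text, names):
    """Match an abbreviation like 'd_f' against snake_case names such as
    'data_frame' by joining the first letter of each part (toy sketch)."""
    shortened = {
        "_".join(sub[0] for sub in name.split("_")): name
        for name in names
        if snake_case_re.match(name)
    }
    n = len(text)
    return [full for abbrev, full in shortened.items() if abbrev[:n] == text]
```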
1129 def attr_matches(self, text):
1129 def attr_matches(self, text):
1130 """Compute matches when text contains a dot.
1130 """Compute matches when text contains a dot.
1131
1131
1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 evaluatable in self.namespace or self.global_namespace, it will be
1133 evaluatable in self.namespace or self.global_namespace, it will be
1134 evaluated and its attributes (as revealed by dir()) are used as
1134 evaluated and its attributes (as revealed by dir()) are used as
1135 possible completions. (For class instances, class members are
1135 possible completions. (For class instances, class members are
1136 also considered.)
1136 also considered.)
1137
1137
1138 WARNING: this can still invoke arbitrary C code, if an object
1138 WARNING: this can still invoke arbitrary C code, if an object
1139 with a __getattr__ hook is evaluated.
1139 with a __getattr__ hook is evaluated.
1140
1140
1141 """
1141 """
1142 return self._attr_matches(text)[0]
1143
1144 def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
1142 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1145 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1143 if not m2:
1146 if not m2:
1144 return []
1147 return [], ""
1145 expr, attr = m2.group(1, 2)
1148 expr, attr = m2.group(1, 2)
1146
1149
1147 obj = self._evaluate_expr(expr)
1150 obj = self._evaluate_expr(expr)
1148
1151
1149 if obj is not_found:
1152 if obj is not_found:
1150 return []
1153 return [], ""
1151
1154
1152 if self.limit_to__all__ and hasattr(obj, '__all__'):
1155 if self.limit_to__all__ and hasattr(obj, '__all__'):
1153 words = get__all__entries(obj)
1156 words = get__all__entries(obj)
1154 else:
1157 else:
1155 words = dir2(obj)
1158 words = dir2(obj)
1156
1159
1157 try:
1160 try:
1158 words = generics.complete_object(obj, words)
1161 words = generics.complete_object(obj, words)
1159 except TryNext:
1162 except TryNext:
1160 pass
1163 pass
1161 except AssertionError:
1164 except AssertionError:
1162 raise
1165 raise
1163 except Exception:
1166 except Exception:
1164 # Silence errors from completion function
1167 # Silence errors from completion function
1165 pass
1168 pass
1166 # Build match list to return
1169 # Build match list to return
1167 n = len(attr)
1170 n = len(attr)
1168
1171
1169 # Note: ideally we would just return words here and the prefix
1172 # Note: ideally we would just return words here and the prefix
1170 # reconciliator would know that we intend to append to rather than
1173 # reconciliator would know that we intend to append to rather than
1171 # replace the input text; this requires refactoring to return range
1174 # replace the input text; this requires refactoring to return range
1172 # which ought to be replaced (as does jedi).
1175 # which ought to be replaced (as does jedi).
1173 tokens = _parse_tokens(expr)
1176 if include_prefix:
1174 rev_tokens = reversed(tokens)
1177 tokens = _parse_tokens(expr)
1175 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1178 rev_tokens = reversed(tokens)
1176 name_turn = True
1179 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1177
1180 name_turn = True
1178 parts = []
1181
1179 for token in rev_tokens:
1182 parts = []
1180 if token.type in skip_over:
1183 for token in rev_tokens:
1181 continue
1184 if token.type in skip_over:
1182 if token.type == tokenize.NAME and name_turn:
1185 continue
1183 parts.append(token.string)
1186 if token.type == tokenize.NAME and name_turn:
1184 name_turn = False
1187 parts.append(token.string)
1185 elif token.type == tokenize.OP and token.string == "." and not name_turn:
1188 name_turn = False
1186 parts.append(token.string)
1189 elif (
1187 name_turn = True
1190 token.type == tokenize.OP and token.string == "." and not name_turn
1188 else:
1191 ):
1189 # short-circuit if not empty nor name token
1192 parts.append(token.string)
1190 break
1193 name_turn = True
1194 else:
1195 # short-circuit if not empty nor name token
1196 break
1191
1197
1192 prefix_after_space = "".join(reversed(parts))
1198 prefix_after_space = "".join(reversed(parts))
1199 else:
1200 prefix_after_space = ""
1193
1201
1194 return ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr]
1202 return (
1203 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1204 "." + attr,
1205 )
1195
1206
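The reverse token walk that reconstructs the dotted prefix can be sketched standalone; `dotted_prefix` is a simplified illustrative helper that assumes complete, tokenizable input:

```python
import tokenize
from io import StringIO

def dotted_prefix(expr):
    """Walk tokens right-to-left, collecting the trailing NAME.NAME... chain,
    alternating between expecting a name and expecting a dot."""
    tokens = list(tokenize.generate_tokens(StringIO(expr).readline))
    skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
    parts = []
    name_turn = True
    for tok in reversed(tokens):
        if tok.type in skip_over:
            continue
        if tok.type == tokenize.NAME and name_turn:
            parts.append(tok.string)
            name_turn = False
        elif tok.type == tokenize.OP and tok.string == "." and not name_turn:
            parts.append(tok.string)
            name_turn = True
        else:
            # short-circuit on anything that is not part of a dotted chain
            break
    return "".join(reversed(parts))
```

Walking backwards lets surrounding syntax (`a + foo.bar`) be discarded while the trailing `foo.bar` chain is kept.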
1196 def _evaluate_expr(self, expr):
1207 def _evaluate_expr(self, expr):
1197 obj = not_found
1208 obj = not_found
1198 done = False
1209 done = False
1199 while not done and expr:
1210 while not done and expr:
1200 try:
1211 try:
1201 obj = guarded_eval(
1212 obj = guarded_eval(
1202 expr,
1213 expr,
1203 EvaluationContext(
1214 EvaluationContext(
1204 globals=self.global_namespace,
1215 globals=self.global_namespace,
1205 locals=self.namespace,
1216 locals=self.namespace,
1206 evaluation=self.evaluation,
1217 evaluation=self.evaluation,
1207 ),
1218 ),
1208 )
1219 )
1209 done = True
1220 done = True
1210 except Exception as e:
1221 except Exception as e:
1211 if self.debug:
1222 if self.debug:
1212 print("Evaluation exception", e)
1223 print("Evaluation exception", e)
1213 # trim the expression to remove any invalid prefix
1224 # trim the expression to remove any invalid prefix
1214 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1225 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1215 # where parenthesis is not closed.
1226 # where parenthesis is not closed.
1216 # TODO: make this faster by reusing parts of the computation?
1227 # TODO: make this faster by reusing parts of the computation?
1217 expr = expr[1:]
1228 expr = expr[1:]
1218 return obj
1229 return obj
1219
1230
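The trim-and-retry loop of `_evaluate_expr` can be illustrated with plain `eval` standing in for `guarded_eval`; this is unsafe and for demonstration only:

```python
def trim_and_eval(expr, namespace):
    """Drop leading characters until the expression evaluates (toy stand-in
    for the guarded_eval retry loop; plain eval is NOT safe for real use)."""
    while expr:
        try:
            return eval(expr, {"__builtins__": {}}, namespace)
        except Exception:
            expr = expr[1:]  # trim invalid prefix, e.g. an unclosed '('
    return None


obj = trim_and_eval("(d", {"d": {"a": 1}})  # '(d' fails to parse, 'd' succeeds
```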
1220 def get__all__entries(obj):
1231 def get__all__entries(obj):
1221 """returns the strings in the __all__ attribute"""
1232 """returns the strings in the __all__ attribute"""
1222 try:
1233 try:
1223 words = getattr(obj, '__all__')
1234 words = getattr(obj, '__all__')
1224 except Exception:
1235 except Exception:
1225 return []
1236 return []
1226
1237
1227 return [w for w in words if isinstance(w, str)]
1238 return [w for w in words if isinstance(w, str)]
1228
1239
1229
1240
1230 class _DictKeyState(enum.Flag):
1241 class _DictKeyState(enum.Flag):
1231 """Represent state of the key match in context of other possible matches.
1242 """Represent state of the key match in context of other possible matches.
1232
1243
1233 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1244 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1234 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there is no tuple members to add beyond `'b'`.
1245 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there is no tuple members to add beyond `'b'`.
1235 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1246 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1236 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | END_OF_TUPLE}` (flags are combined with `|=`)
1247 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | END_OF_TUPLE}` (flags are combined with `|=`)
1237 """
1248 """
1238
1249
1239 BASELINE = 0
1250 BASELINE = 0
1240 END_OF_ITEM = enum.auto()
1251 END_OF_ITEM = enum.auto()
1241 END_OF_TUPLE = enum.auto()
1252 END_OF_TUPLE = enum.auto()
1242 IN_TUPLE = enum.auto()
1253 IN_TUPLE = enum.auto()
1243
1254
1244
1255
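The flag accumulation described in the `_DictKeyState` docstring relies on `enum.Flag` OR-semantics, sketched here with an illustrative copy of the enum:

```python
import enum

class KeyState(enum.Flag):
    BASELINE = 0
    END_OF_ITEM = enum.auto()
    END_OF_TUPLE = enum.auto()
    IN_TUPLE = enum.auto()

# A key seen both as a plain key and as the last element of a tuple key
# accumulates both flags via |=, as in the `d4` example above.
state = KeyState.BASELINE
state |= KeyState.END_OF_ITEM
state |= KeyState.END_OF_TUPLE
```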
1245 def _parse_tokens(c):
1256 def _parse_tokens(c):
1246 """Parse tokens even if there is an error."""
1257 """Parse tokens even if there is an error."""
1247 tokens = []
1258 tokens = []
1248 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1259 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1249 while True:
1260 while True:
1250 try:
1261 try:
1251 tokens.append(next(token_generator))
1262 tokens.append(next(token_generator))
1252 except tokenize.TokenError:
1263 except tokenize.TokenError:
1253 return tokens
1264 return tokens
1254 except StopIteration:
1265 except StopIteration:
1255 return tokens
1266 return tokens
1256
1267
1257
1268
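`_parse_tokens` works because `tokenize.generate_tokens` yields tokens lazily and only raises `TokenError` when it reaches the incomplete tail; a standalone sketch:

```python
import tokenize

def parse_tokens(code):
    """Collect tokens from possibly incomplete code, swallowing the
    TokenError raised at an unexpected end of input."""
    tokens = []
    gen = tokenize.generate_tokens(iter(code.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(gen))
        except (tokenize.TokenError, StopIteration):
            return tokens


toks = parse_tokens("d[(1,")  # unclosed brackets: keep the tokens seen so far
```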
1258 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1269 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1259 """Match any valid Python numeric literal in a prefix of dictionary keys.
1270 """Match any valid Python numeric literal in a prefix of dictionary keys.
1260
1271
1261 References:
1272 References:
1262 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1273 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1263 - https://docs.python.org/3/library/tokenize.html
1274 - https://docs.python.org/3/library/tokenize.html
1264 """
1275 """
1265 if prefix[-1].isspace():
1276 if prefix[-1].isspace():
1266 # if user typed a space we do not have anything to complete
1277 # if user typed a space we do not have anything to complete
1267 # even if there was a valid number token before
1278 # even if there was a valid number token before
1268 return None
1279 return None
1269 tokens = _parse_tokens(prefix)
1280 tokens = _parse_tokens(prefix)
1270 rev_tokens = reversed(tokens)
1281 rev_tokens = reversed(tokens)
1271 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1282 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1272 number = None
1283 number = None
1273 for token in rev_tokens:
1284 for token in rev_tokens:
1274 if token.type in skip_over:
1285 if token.type in skip_over:
1275 continue
1286 continue
1276 if number is None:
1287 if number is None:
1277 if token.type == tokenize.NUMBER:
1288 if token.type == tokenize.NUMBER:
1278 number = token.string
1289 number = token.string
1279 continue
1290 continue
1280 else:
1291 else:
1281 # we did not match a number
1292 # we did not match a number
1282 return None
1293 return None
1283 if token.type == tokenize.OP:
1294 if token.type == tokenize.OP:
1284 if token.string == ",":
1295 if token.string == ",":
1285 break
1296 break
1286 if token.string in {"+", "-"}:
1297 if token.string in {"+", "-"}:
1287 number = token.string + number
1298 number = token.string + number
1288 else:
1299 else:
1289 return None
1300 return None
1290 return number
1301 return number
1291
1302
1292
1303
1293 _INT_FORMATS = {
1304 _INT_FORMATS = {
1294 "0b": bin,
1305 "0b": bin,
1295 "0o": oct,
1306 "0o": oct,
1296 "0x": hex,
1307 "0x": hex,
1297 }
1308 }
1298
1309
1299
1310
1300 def match_dict_keys(
1311 def match_dict_keys(
1301 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1312 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1302 prefix: str,
1313 prefix: str,
1303 delims: str,
1314 delims: str,
1304 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1315 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1305 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1316 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1306 """Used by dict_key_matches, matching the prefix to a list of keys
1317 """Used by dict_key_matches, matching the prefix to a list of keys
1307
1318
1308 Parameters
1319 Parameters
1309 ----------
1320 ----------
1310 keys
1321 keys
1311 list of keys in dictionary currently being completed.
1322 list of keys in dictionary currently being completed.
1312 prefix
1323 prefix
1313 Part of the text already typed by the user. E.g. `mydict[b'fo`
1324 Part of the text already typed by the user. E.g. `mydict[b'fo`
1314 delims
1325 delims
1315 String of delimiters to consider when finding the current key.
1326 String of delimiters to consider when finding the current key.
1316 extra_prefix : optional
1327 extra_prefix : optional
1317 Part of the text already typed in multi-key index cases. E.g. for
1328 Part of the text already typed in multi-key index cases. E.g. for
1318 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1329 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1319
1330
1320 Returns
1331 Returns
1321 -------
1332 -------
1322 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1333 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1323 ``quote`` being the quote that needs to be used to close the current string.
1334 ``quote`` being the quote that needs to be used to close the current string.
1324 ``token_start`` the position where the replacement should start occurring,
1335 ``token_start`` the position where the replacement should start occurring,
1325 ``matched`` a dictionary whose keys are replacement/completion strings and
1336 ``matched`` a dictionary whose keys are replacement/completion strings and
1326 whose values indicate the state of each match.
1337 whose values indicate the state of each match.
1327 """
1338 """
1328 prefix_tuple = extra_prefix if extra_prefix else ()
1339 prefix_tuple = extra_prefix if extra_prefix else ()
1329
1340
1330 prefix_tuple_size = sum(
1341 prefix_tuple_size = sum(
1331 [
1342 [
1332 # for pandas, do not count slices as taking space
1343 # for pandas, do not count slices as taking space
1333 not isinstance(k, slice)
1344 not isinstance(k, slice)
1334 for k in prefix_tuple
1345 for k in prefix_tuple
1335 ]
1346 ]
1336 )
1347 )
1337 text_serializable_types = (str, bytes, int, float, slice)
1348 text_serializable_types = (str, bytes, int, float, slice)
1338
1349
1339 def filter_prefix_tuple(key):
1350 def filter_prefix_tuple(key):
1340 # Reject too short keys
1351 # Reject too short keys
1341 if len(key) <= prefix_tuple_size:
1352 if len(key) <= prefix_tuple_size:
1342 return False
1353 return False
1343 # Reject keys which cannot be serialised to text
1354 # Reject keys which cannot be serialised to text
1344 for k in key:
1355 for k in key:
1345 if not isinstance(k, text_serializable_types):
1356 if not isinstance(k, text_serializable_types):
1346 return False
1357 return False
1347 # Reject keys that do not match the prefix
1358 # Reject keys that do not match the prefix
1348 for k, pt in zip(key, prefix_tuple):
1359 for k, pt in zip(key, prefix_tuple):
1349 if k != pt and not isinstance(pt, slice):
1360 if k != pt and not isinstance(pt, slice):
1350 return False
1361 return False
1351 # All checks passed!
1362 # All checks passed!
1352 return True
1363 return True
1353
1364
1354 filtered_key_is_final: Dict[
1365 filtered_key_is_final: Dict[
1355 Union[str, bytes, int, float], _DictKeyState
1366 Union[str, bytes, int, float], _DictKeyState
1356 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1367 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1357
1368
1358 for k in keys:
1369 for k in keys:
1359 # If at least one of the matches is not final, mark as undetermined.
1370 # If at least one of the matches is not final, mark as undetermined.
1360 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1371 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1361 # `111` appears final on first match but is not final on the second.
1372 # `111` appears final on first match but is not final on the second.
1362
1373
1363 if isinstance(k, tuple):
1374 if isinstance(k, tuple):
1364 if filter_prefix_tuple(k):
1375 if filter_prefix_tuple(k):
1365 key_fragment = k[prefix_tuple_size]
1376 key_fragment = k[prefix_tuple_size]
1366 filtered_key_is_final[key_fragment] |= (
1377 filtered_key_is_final[key_fragment] |= (
1367 _DictKeyState.END_OF_TUPLE
1378 _DictKeyState.END_OF_TUPLE
1368 if len(k) == prefix_tuple_size + 1
1379 if len(k) == prefix_tuple_size + 1
1369 else _DictKeyState.IN_TUPLE
1380 else _DictKeyState.IN_TUPLE
1370 )
1381 )
1371 elif prefix_tuple_size > 0:
1382 elif prefix_tuple_size > 0:
1372 # we are completing a tuple but this key is not a tuple,
1383 # we are completing a tuple but this key is not a tuple,
1373 # so we should ignore it
1384 # so we should ignore it
1374 pass
1385 pass
1375 else:
1386 else:
1376 if isinstance(k, text_serializable_types):
1387 if isinstance(k, text_serializable_types):
1377 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1388 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1378
1389
1379 filtered_keys = filtered_key_is_final.keys()
1390 filtered_keys = filtered_key_is_final.keys()
1380
1391
1381 if not prefix:
1392 if not prefix:
1382 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1393 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1383
1394
1384 quote_match = re.search("(?:\"|')", prefix)
1395 quote_match = re.search("(?:\"|')", prefix)
1385 is_user_prefix_numeric = False
1396 is_user_prefix_numeric = False
1386
1397
1387 if quote_match:
1398 if quote_match:
1388 quote = quote_match.group()
1399 quote = quote_match.group()
1389 valid_prefix = prefix + quote
1400 valid_prefix = prefix + quote
1390 try:
1401 try:
1391 prefix_str = literal_eval(valid_prefix)
1402 prefix_str = literal_eval(valid_prefix)
1392 except Exception:
1403 except Exception:
1393 return "", 0, {}
1404 return "", 0, {}
1394 else:
1405 else:
1395 # If it does not look like a string, let's assume
1406 # If it does not look like a string, let's assume
1396 # we are dealing with a number or variable.
1407 # we are dealing with a number or variable.
1397 number_match = _match_number_in_dict_key_prefix(prefix)
1408 number_match = _match_number_in_dict_key_prefix(prefix)
1398
1409
1399 # We do not want the key matcher to suggest variable names so we yield:
1410 # We do not want the key matcher to suggest variable names so we yield:
1400 if number_match is None:
1411 if number_match is None:
1401 # The alternative would be to assume that the user forgot the quote
1412 # The alternative would be to assume that the user forgot the quote
1402 # and if the substring matches, suggest adding it at the start.
1413 # and if the substring matches, suggest adding it at the start.
1403 return "", 0, {}
1414 return "", 0, {}
1404
1415
1405 prefix_str = number_match
1416 prefix_str = number_match
1406 is_user_prefix_numeric = True
1417 is_user_prefix_numeric = True
1407 quote = ""
1418 quote = ""
1408
1419
1409 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1420 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1410 token_match = re.search(pattern, prefix, re.UNICODE)
1421 token_match = re.search(pattern, prefix, re.UNICODE)
1411 assert token_match is not None # silence mypy
1422 assert token_match is not None # silence mypy
1412 token_start = token_match.start()
1423 token_start = token_match.start()
1413 token_prefix = token_match.group()
1424 token_prefix = token_match.group()
1414
1425
1415 matched: Dict[str, _DictKeyState] = {}
1426 matched: Dict[str, _DictKeyState] = {}
1416
1427
1417 str_key: Union[str, bytes]
1428 str_key: Union[str, bytes]
1418
1429
1419 for key in filtered_keys:
1430 for key in filtered_keys:
1420 if isinstance(key, (int, float)):
1431 if isinstance(key, (int, float)):
1421 # User typed a number but this key is not a number.
1432 # User typed a number but this key is not a number.
1422 if not is_user_prefix_numeric:
1433 if not is_user_prefix_numeric:
1423 continue
1434 continue
1424 str_key = str(key)
1435 str_key = str(key)
1425 if isinstance(key, int):
1436 if isinstance(key, int):
1426 int_base = prefix_str[:2].lower()
1437 int_base = prefix_str[:2].lower()
1427 # if user typed integer using binary/oct/hex notation:
1438 # if user typed integer using binary/oct/hex notation:
1428 if int_base in _INT_FORMATS:
1439 if int_base in _INT_FORMATS:
1429 int_format = _INT_FORMATS[int_base]
1440 int_format = _INT_FORMATS[int_base]
1430 str_key = int_format(key)
1441 str_key = int_format(key)
1431 else:
1442 else:
1432 # User typed a string but this key is a number.
1443 # User typed a string but this key is a number.
1433 if is_user_prefix_numeric:
1444 if is_user_prefix_numeric:
1434 continue
1445 continue
1435 str_key = key
1446 str_key = key
1436 try:
1447 try:
1437 if not str_key.startswith(prefix_str):
1448 if not str_key.startswith(prefix_str):
1438 continue
1449 continue
1439 except (AttributeError, TypeError, UnicodeError) as e:
1450 except (AttributeError, TypeError, UnicodeError) as e:
1440 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1451 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1441 continue
1452 continue
1442
1453
1443 # reformat remainder of key to begin with prefix
1454 # reformat remainder of key to begin with prefix
1444 rem = str_key[len(prefix_str) :]
1455 rem = str_key[len(prefix_str) :]
1445 # force repr wrapped in '
1456 # force repr wrapped in '
1446 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1457 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1447 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1458 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1448 if quote == '"':
1459 if quote == '"':
1449 # The entered prefix is quoted with ",
1460 # The entered prefix is quoted with ",
1450 # but the match is quoted with '.
1461 # but the match is quoted with '.
1451 # A contained " hence needs escaping for comparison:
1462 # A contained " hence needs escaping for comparison:
1452 rem_repr = rem_repr.replace('"', '\\"')
1463 rem_repr = rem_repr.replace('"', '\\"')
1453
1464
1454 # then reinsert prefix from start of token
1465 # then reinsert prefix from start of token
1455 match = "%s%s" % (token_prefix, rem_repr)
1466 match = "%s%s" % (token_prefix, rem_repr)
1456
1467
1457 matched[match] = filtered_key_is_final[key]
1468 matched[match] = filtered_key_is_final[key]
1458 return quote, token_start, matched
1469 return quote, token_start, matched
1459
1470
1460
1471
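The repr-based quoting above relies on a small trick: appending a double quote guarantees `repr()` wraps the string in single quotes, after which the wrapper and the appended character can be sliced off. Isolated as a sketch (the helper name is an illustration, not part of the module):

```python
def quote_with_single(rem: str) -> str:
    # appending a double quote guarantees the string contains a '"',
    # which makes repr() choose single quotes as the outer wrapper;
    # then strip the leading quote and the trailing '"' plus quote
    rem_repr = repr(rem + '"')
    return rem_repr[1 + rem_repr.index("'"):-2]

print(quote_with_single("plain"))   # plain
print(quote_with_single("it's"))    # it\'s
```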
1461 def cursor_to_position(text:str, line:int, column:int)->int:
1472 def cursor_to_position(text:str, line:int, column:int)->int:
1462 """
1473 """
1463 Convert the (line,column) position of the cursor in text to an offset in a
1474 Convert the (line,column) position of the cursor in text to an offset in a
1464 string.
1475 string.
1465
1476
1466 Parameters
1477 Parameters
1467 ----------
1478 ----------
1468 text : str
1479 text : str
1469 The text in which to calculate the cursor offset
1480 The text in which to calculate the cursor offset
1470 line : int
1481 line : int
1471 Line of the cursor; 0-indexed
1482 Line of the cursor; 0-indexed
1472 column : int
1483 column : int
1473 Column of the cursor; 0-indexed
1484 Column of the cursor; 0-indexed
1474
1485
1475 Returns
1486 Returns
1476 -------
1487 -------
1477 Position of the cursor in ``text``, 0-indexed.
1488 Position of the cursor in ``text``, 0-indexed.
1478
1489
1479 See Also
1490 See Also
1480 --------
1491 --------
1481 position_to_cursor : reciprocal of this function
1492 position_to_cursor : reciprocal of this function
1482
1493
1483 """
1494 """
1484 lines = text.split('\n')
1495 lines = text.split('\n')
1485 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1496 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1486
1497
1487 return sum(len(l) + 1 for l in lines[:line]) + column
1498 return sum(len(l) + 1 for l in lines[:line]) + column
1488
1499
1489 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1500 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1490 """
1501 """
1491 Convert the position of the cursor in text (0-indexed) to a line
1502 Convert the position of the cursor in text (0-indexed) to a line
1492 number (0-indexed) and a column number (0-indexed) pair.
1503 number (0-indexed) and a column number (0-indexed) pair.
1493
1504
1494 Position should be a valid position in ``text``.
1505 Position should be a valid position in ``text``.
1495
1506
1496 Parameters
1507 Parameters
1497 ----------
1508 ----------
1498 text : str
1509 text : str
1499 The text in which to calculate the cursor offset
1510 The text in which to calculate the cursor offset
1500 offset : int
1511 offset : int
1501 Position of the cursor in ``text``, 0-indexed.
1512 Position of the cursor in ``text``, 0-indexed.
1502
1513
1503 Returns
1514 Returns
1504 -------
1515 -------
1505 (line, column) : (int, int)
1516 (line, column) : (int, int)
1506 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1517 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1507
1518
1508 See Also
1519 See Also
1509 --------
1520 --------
1510 cursor_to_position : reciprocal of this function
1521 cursor_to_position : reciprocal of this function
1511
1522
1512 """
1523 """
1513
1524
1514 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1525 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1515
1526
1516 before = text[:offset]
1527 before = text[:offset]
1517 blines = before.split('\n') # ! splitlines would trim the trailing \n
1528 blines = before.split('\n') # ! splitlines would trim the trailing \n
1518 line = before.count('\n')
1529 line = before.count('\n')
1519 col = len(blines[-1])
1530 col = len(blines[-1])
1520 return line, col
1531 return line, col
1521
1532
1522
1533
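The two conversions above are inverses of each other. A standalone sketch of the same logic (reimplemented here so the snippet is self-contained, not an import of this module):

```python
from typing import Tuple

def cursor_to_position(text: str, line: int, column: int) -> int:
    # each line before the cursor contributes its length plus one for '\n'
    lines = text.split('\n')
    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text: str, offset: int) -> Tuple[int, int]:
    # newlines before the offset give the line; the column is the length
    # of the text after the last newline
    before = text[:offset]
    return before.count('\n'), len(before.split('\n')[-1])

text = "ab\ncd\nef"
offset = cursor_to_position(text, 1, 1)   # cursor sitting on 'd'
print(offset)                             # 4
print(position_to_cursor(text, offset))   # (1, 1)
```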
1523 def _safe_isinstance(obj, module, class_name, *attrs):
1534 def _safe_isinstance(obj, module, class_name, *attrs):
1524 """Checks if obj is an instance of module.class_name if loaded
1535 """Checks if obj is an instance of module.class_name if loaded
1525 """
1536 """
1526 if module in sys.modules:
1537 if module in sys.modules:
1527 m = sys.modules[module]
1538 m = sys.modules[module]
1528 for attr in [class_name, *attrs]:
1539 for attr in [class_name, *attrs]:
1529 m = getattr(m, attr)
1540 m = getattr(m, attr)
1530 return isinstance(obj, m)
1541 return isinstance(obj, m)
1531
1542
1532
1543
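`_safe_isinstance` avoids importing a module merely to type-check against it. A simplified sketch of the idea (single attribute lookup, and returning False rather than None when the module is absent):

```python
import sys

def safe_isinstance(obj, module: str, class_name: str) -> bool:
    # only resolve the class if its module is already imported; an object
    # cannot be an instance of a class from a module that was never loaded
    if module in sys.modules:
        return isinstance(obj, getattr(sys.modules[module], class_name))
    return False

import collections
print(safe_isinstance(collections.OrderedDict(), "collections", "OrderedDict"))  # True
print(safe_isinstance({}, "collections", "OrderedDict"))  # False
```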
1533 @context_matcher()
1544 @context_matcher()
1534 def back_unicode_name_matcher(context: CompletionContext):
1545 def back_unicode_name_matcher(context: CompletionContext):
1535 """Match Unicode characters back to Unicode name
1546 """Match Unicode characters back to Unicode name
1536
1547
1537 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1548 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1538 """
1549 """
1539 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1550 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1540 return _convert_matcher_v1_result_to_v2(
1551 return _convert_matcher_v1_result_to_v2(
1541 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1552 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1542 )
1553 )
1543
1554
1544
1555
1545 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1556 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1546 """Match Unicode characters back to Unicode name
1557 """Match Unicode characters back to Unicode name
1547
1558
1548 This does ``β˜ƒ`` -> ``\\snowman``
1559 This does ``β˜ƒ`` -> ``\\snowman``
1549
1560
1550 Note that snowman is not a valid Python 3 combining character, but it will still be expanded.
1561 Note that snowman is not a valid Python 3 combining character, but it will still be expanded.
1551 It will not, however, be recombined back into the snowman character by the completion machinery.
1562 It will not, however, be recombined back into the snowman character by the completion machinery.
1552
1563
1553 Nor will this back-complete standard escape sequences like \\n, \\b ...
1564 Nor will this back-complete standard escape sequences like \\n, \\b ...
1554
1565
1555 .. deprecated:: 8.6
1566 .. deprecated:: 8.6
1556 You can use :meth:`back_unicode_name_matcher` instead.
1567 You can use :meth:`back_unicode_name_matcher` instead.
1557
1568
1558 Returns
1569 Returns
1559 =======
1570 =======
1560
1571
1561 Return a tuple with two elements:
1572 Return a tuple with two elements:
1562
1573
1563 - The Unicode character that was matched (preceded with a backslash), or
1574 - The Unicode character that was matched (preceded with a backslash), or
1564 empty string,
1575 empty string,
1565 - a sequence (of 1) with the name of the matched Unicode character, preceded by
1576 - a sequence (of 1) with the name of the matched Unicode character, preceded by
1566 a backslash, or empty if no match.
1577 a backslash, or empty if no match.
1567 """
1578 """
1568 if len(text)<2:
1579 if len(text)<2:
1569 return '', ()
1580 return '', ()
1570 maybe_slash = text[-2]
1581 maybe_slash = text[-2]
1571 if maybe_slash != '\\':
1582 if maybe_slash != '\\':
1572 return '', ()
1583 return '', ()
1573
1584
1574 char = text[-1]
1585 char = text[-1]
1575 # no expand on quote for completion in strings.
1586 # no expand on quote for completion in strings.
1576 # nor backcomplete standard ascii keys
1587 # nor backcomplete standard ascii keys
1577 if char in string.ascii_letters or char in ('"',"'"):
1588 if char in string.ascii_letters or char in ('"',"'"):
1578 return '', ()
1589 return '', ()
1579 try :
1590 try :
1580 unic = unicodedata.name(char)
1591 unic = unicodedata.name(char)
1581 return '\\'+char,('\\'+unic,)
1592 return '\\'+char,('\\'+unic,)
1582 except ValueError: # unicodedata.name raises ValueError, not KeyError
1593 except ValueError: # unicodedata.name raises ValueError, not KeyError
1583 pass
1594 pass
1584 return '', ()
1595 return '', ()
1585
1596
1586
1597
1587 @context_matcher()
1598 @context_matcher()
1588 def back_latex_name_matcher(context: CompletionContext):
1599 def back_latex_name_matcher(context: CompletionContext):
1589 """Match latex characters back to unicode name
1600 """Match latex characters back to unicode name
1590
1601
1600 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1611 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1592 """
1603 """
1593 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1604 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1594 return _convert_matcher_v1_result_to_v2(
1605 return _convert_matcher_v1_result_to_v2(
1595 matches, type="latex", fragment=fragment, suppress_if_matches=True
1606 matches, type="latex", fragment=fragment, suppress_if_matches=True
1596 )
1607 )
1597
1608
1598
1609
1599 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1610 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1600 """Match latex characters back to unicode name
1611 """Match latex characters back to unicode name
1601
1612
1602 This does ``\\β„΅`` -> ``\\aleph``
1613 This does ``\\β„΅`` -> ``\\aleph``
1603
1614
1604 .. deprecated:: 8.6
1615 .. deprecated:: 8.6
1605 You can use :meth:`back_latex_name_matcher` instead.
1616 You can use :meth:`back_latex_name_matcher` instead.
1606 """
1617 """
1607 if len(text)<2:
1618 if len(text)<2:
1608 return '', ()
1619 return '', ()
1609 maybe_slash = text[-2]
1620 maybe_slash = text[-2]
1610 if maybe_slash != '\\':
1621 if maybe_slash != '\\':
1611 return '', ()
1622 return '', ()
1612
1623
1613
1624
1614 char = text[-1]
1625 char = text[-1]
1615 # no expand on quote for completion in strings.
1626 # no expand on quote for completion in strings.
1616 # nor backcomplete standard ascii keys
1627 # nor backcomplete standard ascii keys
1617 if char in string.ascii_letters or char in ('"',"'"):
1628 if char in string.ascii_letters or char in ('"',"'"):
1618 return '', ()
1629 return '', ()
1619 try :
1630 try :
1620 latex = reverse_latex_symbol[char]
1631 latex = reverse_latex_symbol[char]
1621 # '\\' replaces the \ as well
1632 # '\\' replaces the \ as well
1622 return '\\'+char,[latex]
1633 return '\\'+char,[latex]
1623 except KeyError:
1634 except KeyError:
1624 pass
1635 pass
1625 return '', ()
1636 return '', ()
1626
1637
1627
1638
1628 def _formatparamchildren(parameter) -> str:
1639 def _formatparamchildren(parameter) -> str:
1629 """
1640 """
1630 Get parameter name and value from Jedi Private API
1641 Get parameter name and value from Jedi Private API
1631
1642
1632 Jedi does not expose a simple way to get `param=value` from its API.
1643 Jedi does not expose a simple way to get `param=value` from its API.
1633
1644
1634 Parameters
1645 Parameters
1635 ----------
1646 ----------
1636 parameter
1647 parameter
1637 Jedi's function `Param`
1648 Jedi's function `Param`
1638
1649
1639 Returns
1650 Returns
1640 -------
1651 -------
1641 A string like 'a', 'b=1', '*args', '**kwargs'
1652 A string like 'a', 'b=1', '*args', '**kwargs'
1642
1653
1643 """
1654 """
1644 description = parameter.description
1655 description = parameter.description
1645 if not description.startswith('param '):
1656 if not description.startswith('param '):
1646 raise ValueError('Jedi function parameter description has changed format. '
1657 raise ValueError('Jedi function parameter description has changed format. '
1647 'Expected "param ...", found %r.' % description)
1658 'Expected "param ...", found %r.' % description)
1648 return description[6:]
1659 return description[6:]
1649
1660
1650 def _make_signature(completion)-> str:
1661 def _make_signature(completion)-> str:
1651 """
1662 """
1652 Make the signature from a jedi completion
1663 Make the signature from a jedi completion
1653
1664
1654 Parameters
1665 Parameters
1655 ----------
1666 ----------
1656 completion : jedi.Completion
1667 completion : jedi.Completion
1657 the Jedi completion object to extract a signature from
1668 the Jedi completion object to extract a signature from
1658
1669
1659 Returns
1670 Returns
1660 -------
1671 -------
1661 a string consisting of the function signature, with the parentheses but
1672 a string consisting of the function signature, with the parentheses but
1662 without the function name. Example:
1673 without the function name. Example:
1663 `(a, *args, b=1, **kwargs)`
1674 `(a, *args, b=1, **kwargs)`
1664
1675
1665 """
1676 """
1666
1677
1667 # it looks like this might work on jedi 0.17
1678 # it looks like this might work on jedi 0.17
1668 if hasattr(completion, 'get_signatures'):
1679 if hasattr(completion, 'get_signatures'):
1669 signatures = completion.get_signatures()
1680 signatures = completion.get_signatures()
1670 if not signatures:
1681 if not signatures:
1671 return '(?)'
1682 return '(?)'
1672
1683
1673 c0 = completion.get_signatures()[0]
1684 c0 = completion.get_signatures()[0]
1674 return '('+c0.to_string().split('(', maxsplit=1)[1]
1685 return '('+c0.to_string().split('(', maxsplit=1)[1]
1675
1686
1676 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1687 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1677 for p in signature.defined_names()) if f])
1688 for p in signature.defined_names()) if f])
1678
1689
1679
1690
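The signature branch keeps only the parameter list of a rendered signature by splitting on the first `(` and re-attaching it. In isolation (the input string shape is an assumption about Jedi's `to_string()` output):

```python
def strip_name(sig_string: str) -> str:
    # drop everything before the first '(' and re-attach the parenthesis
    return '(' + sig_string.split('(', maxsplit=1)[1]

print(strip_name("my_func(a, *args, b=1, **kwargs)"))  # (a, *args, b=1, **kwargs)
```

Because `maxsplit=1` splits only at the first `(`, nested parentheses in default values survive intact.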
1680 _CompleteResult = Dict[str, MatcherResult]
1691 _CompleteResult = Dict[str, MatcherResult]
1681
1692
1682
1693
1683 DICT_MATCHER_REGEX = re.compile(
1694 DICT_MATCHER_REGEX = re.compile(
1684 r"""(?x)
1695 r"""(?x)
1685 ( # match dict-referring - or any get item object - expression
1696 ( # match dict-referring - or any get item object - expression
1686 .+
1697 .+
1687 )
1698 )
1688 \[ # open bracket
1699 \[ # open bracket
1689 \s* # and optional whitespace
1700 \s* # and optional whitespace
1690 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1701 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1691 # and slices
1702 # and slices
1692 ((?:(?:
1703 ((?:(?:
1693 (?: # closed string
1704 (?: # closed string
1694 [uUbB]? # string prefix (r not handled)
1705 [uUbB]? # string prefix (r not handled)
1695 (?:
1706 (?:
1696 '(?:[^']|(?<!\\)\\')*'
1707 '(?:[^']|(?<!\\)\\')*'
1697 |
1708 |
1698 "(?:[^"]|(?<!\\)\\")*"
1709 "(?:[^"]|(?<!\\)\\")*"
1699 )
1710 )
1700 )
1711 )
1701 |
1712 |
1702 # capture integers and slices
1713 # capture integers and slices
1703 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1714 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1704 |
1715 |
1705 # integer in bin/hex/oct notation
1716 # integer in bin/hex/oct notation
1706 0[bBxXoO]_?(?:\w|\d)+
1717 0[bBxXoO]_?(?:\w|\d)+
1707 )
1718 )
1708 \s*,\s*
1719 \s*,\s*
1709 )*)
1720 )*)
1710 ((?:
1721 ((?:
1711 (?: # unclosed string
1722 (?: # unclosed string
1712 [uUbB]? # string prefix (r not handled)
1723 [uUbB]? # string prefix (r not handled)
1713 (?:
1724 (?:
1714 '(?:[^']|(?<!\\)\\')*
1725 '(?:[^']|(?<!\\)\\')*
1715 |
1726 |
1716 "(?:[^"]|(?<!\\)\\")*
1727 "(?:[^"]|(?<!\\)\\")*
1717 )
1728 )
1718 )
1729 )
1719 |
1730 |
1720 # unfinished integer
1731 # unfinished integer
1721 (?:[-+]?\d+)
1732 (?:[-+]?\d+)
1722 |
1733 |
1723 # integer in bin/hex/oct notation
1734 # integer in bin/hex/oct notation
1724 0[bBxXoO]_?(?:\w|\d)+
1735 0[bBxXoO]_?(?:\w|\d)+
1725 )
1736 )
1726 )?
1737 )?
1727 $
1738 $
1728 """
1739 """
1729 )
1740 )
1730
1741
1731
1742
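A heavily cut-down sketch of what this pattern extracts from a subscript expression; the real pattern above additionally handles double quotes, string prefixes, slices, and bin/hex/oct integers:

```python
import re

# hypothetical, cut-down pattern: single-quoted keys only
SIMPLE_DICT_KEY = re.compile(
    r"""(?x)
    (.+)                       # the subscripted expression
    \[\s*                      # opening bracket and optional whitespace
    ((?:'[^']*'\s*,\s*)*)      # zero or more closed, comma-separated keys
    ('[^']*)?                  # an optional unclosed key still being typed
    $
    """
)

m = SIMPLE_DICT_KEY.match("data['first', 'sec")
print(m.group(1))  # data
print(m.group(3))  # 'sec
```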
1732 def _convert_matcher_v1_result_to_v2(
1743 def _convert_matcher_v1_result_to_v2(
1733 matches: Sequence[str],
1744 matches: Sequence[str],
1734 type: str,
1745 type: str,
1735 fragment: Optional[str] = None,
1746 fragment: Optional[str] = None,
1736 suppress_if_matches: bool = False,
1747 suppress_if_matches: bool = False,
1737 ) -> SimpleMatcherResult:
1748 ) -> SimpleMatcherResult:
1738 """Utility to help with transition"""
1749 """Utility to help with transition"""
1739 result = {
1750 result = {
1740 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1751 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1741 "suppress": (True if matches else False) if suppress_if_matches else False,
1752 "suppress": (True if matches else False) if suppress_if_matches else False,
1742 }
1753 }
1743 if fragment is not None:
1754 if fragment is not None:
1744 result["matched_fragment"] = fragment
1755 result["matched_fragment"] = fragment
1745 return cast(SimpleMatcherResult, result)
1756 return cast(SimpleMatcherResult, result)
1746
1757
1747
1758
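The shape of the v1-to-v2 conversion can be sketched without IPython's types (the `SimpleCompletion` dataclass below is a hypothetical stand-in for the real class):

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class SimpleCompletion:  # hypothetical stand-in for IPython's class
    text: str
    type: str

def convert_v1_to_v2(matches: Sequence[str], type: str,
                     fragment: Optional[str] = None,
                     suppress_if_matches: bool = False) -> dict:
    # wrap bare match strings in completion objects and attach metadata
    result = {
        "completions": [SimpleCompletion(text=m, type=type) for m in matches],
        "suppress": bool(matches) if suppress_if_matches else False,
    }
    if fragment is not None:
        result["matched_fragment"] = fragment
    return result

r = convert_v1_to_v2(["\\SNOWMAN"], type="unicode", fragment="\\β˜ƒ",
                     suppress_if_matches=True)
print(r["suppress"])             # True
print(r["completions"][0].text)  # \SNOWMAN
```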
1748 class IPCompleter(Completer):
1759 class IPCompleter(Completer):
1749 """Extension of the completer class with IPython-specific features"""
1760 """Extension of the completer class with IPython-specific features"""
1750
1761
1751 @observe('greedy')
1762 @observe('greedy')
1752 def _greedy_changed(self, change):
1763 def _greedy_changed(self, change):
1753 """update the splitter and readline delims when greedy is changed"""
1764 """update the splitter and readline delims when greedy is changed"""
1754 if change["new"]:
1765 if change["new"]:
1755 self.evaluation = "unsafe"
1766 self.evaluation = "unsafe"
1756 self.auto_close_dict_keys = True
1767 self.auto_close_dict_keys = True
1757 self.splitter.delims = GREEDY_DELIMS
1768 self.splitter.delims = GREEDY_DELIMS
1758 else:
1769 else:
1759 self.evaluation = "limited"
1770 self.evaluation = "limited"
1760 self.auto_close_dict_keys = False
1771 self.auto_close_dict_keys = False
1761 self.splitter.delims = DELIMS
1772 self.splitter.delims = DELIMS
1762
1773
1763 dict_keys_only = Bool(
1774 dict_keys_only = Bool(
1764 False,
1775 False,
1765 help="""
1776 help="""
1766 Whether to show dict key matches only.
1777 Whether to show dict key matches only.
1767
1778
1768 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1779 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1769 """,
1780 """,
1770 )
1781 )
1771
1782
1772 suppress_competing_matchers = UnionTrait(
1783 suppress_competing_matchers = UnionTrait(
1773 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1784 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1774 default_value=None,
1785 default_value=None,
1775 help="""
1786 help="""
1776 Whether to suppress completions from other *Matchers*.
1787 Whether to suppress completions from other *Matchers*.
1777
1788
1778 When set to ``None`` (default) the matchers will attempt to auto-detect
1789 When set to ``None`` (default) the matchers will attempt to auto-detect
1779 whether suppression of other matchers is desirable. For example, at
1790 whether suppression of other matchers is desirable. For example, at
1780 the beginning of a line followed by `%` we expect a magic completion
1791 the beginning of a line followed by `%` we expect a magic completion
1781 to be the only applicable option, and after ``my_dict['`` we usually
1792 to be the only applicable option, and after ``my_dict['`` we usually
1782 expect a completion with an existing dictionary key.
1793 expect a completion with an existing dictionary key.
1783
1794
1784 If you want to disable this heuristic and see completions from all matchers,
1795 If you want to disable this heuristic and see completions from all matchers,
1785 set ``IPCompleter.suppress_competing_matchers = False``.
1796 set ``IPCompleter.suppress_competing_matchers = False``.
1786 To disable the heuristic for specific matchers provide a dictionary mapping:
1797 To disable the heuristic for specific matchers provide a dictionary mapping:
1787 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1798 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1788
1799
1789 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1800 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1790 completions to the set of matchers with the highest priority;
1801 completions to the set of matchers with the highest priority;
1791 this is equivalent to ``IPCompleter.merge_completions`` and
1802 this is equivalent to ``IPCompleter.merge_completions`` and
1792 can be beneficial for performance, but will sometimes omit relevant
1803 can be beneficial for performance, but will sometimes omit relevant
1793 candidates from matchers further down the priority list.
1804 candidates from matchers further down the priority list.
1794 """,
1805 """,
1795 ).tag(config=True)
1806 ).tag(config=True)
1796
1807
1797 merge_completions = Bool(
1808 merge_completions = Bool(
1798 True,
1809 True,
1799 help="""Whether to merge completion results into a single list
1810 help="""Whether to merge completion results into a single list
1800
1811
1801 If False, only the completion results from the first non-empty
1812 If False, only the completion results from the first non-empty
1802 completer will be returned.
1813 completer will be returned.
1803
1814
1804 As of version 8.6.0, setting the value to ``False`` is an alias for:
1815 As of version 8.6.0, setting the value to ``False`` is an alias for:
1805 ``IPCompleter.suppress_competing_matchers = True``.
1816 ``IPCompleter.suppress_competing_matchers = True``.
1806 """,
1817 """,
1807 ).tag(config=True)
1818 ).tag(config=True)
1808
1819
1809 disable_matchers = ListTrait(
1820 disable_matchers = ListTrait(
1810 Unicode(),
1821 Unicode(),
1811 help="""List of matchers to disable.
1822 help="""List of matchers to disable.
1812
1823
1813 The list should contain matcher identifiers (see :any:`completion_matcher`).
1824 The list should contain matcher identifiers (see :any:`completion_matcher`).
1814 """,
1825 """,
1815 ).tag(config=True)
1826 ).tag(config=True)
1816
1827
1817 omit__names = Enum(
1828 omit__names = Enum(
1818 (0, 1, 2),
1829 (0, 1, 2),
1819 default_value=2,
1830 default_value=2,
1820 help="""Instruct the completer to omit private method names
1831 help="""Instruct the completer to omit private method names
1821
1832
1822 Specifically, when completing on ``object.<tab>``.
1833 Specifically, when completing on ``object.<tab>``.
1823
1834
1824 When 2 [default]: all names that start with '_' will be excluded.
1835 When 2 [default]: all names that start with '_' will be excluded.
1825
1836
1826 When 1: all 'magic' names (``__foo__``) will be excluded.
1837 When 1: all 'magic' names (``__foo__``) will be excluded.
1827
1838
1828 When 0: nothing will be excluded.
1839 When 0: nothing will be excluded.
1829 """
1840 """
1830 ).tag(config=True)
1841 ).tag(config=True)
1831 limit_to__all__ = Bool(False,
1842 limit_to__all__ = Bool(False,
1832 help="""
1843 help="""
1833 DEPRECATED as of version 5.0.
1844 DEPRECATED as of version 5.0.
1834
1845
1835 Instruct the completer to use __all__ for the completion
1846 Instruct the completer to use __all__ for the completion
1836
1847
1837 Specifically, when completing on ``object.<tab>``.
1848 Specifically, when completing on ``object.<tab>``.
1838
1849
1839 When True: only those names in obj.__all__ will be included.
1850 When True: only those names in obj.__all__ will be included.
1840
1851
1841 When False [default]: the __all__ attribute is ignored
1852 When False [default]: the __all__ attribute is ignored
1842 """,
1853 """,
1843 ).tag(config=True)
1854 ).tag(config=True)
1844
1855
1845 profile_completions = Bool(
1856 profile_completions = Bool(
1846 default_value=False,
1857 default_value=False,
1847 help="If True, emit profiling data for completion subsystem using cProfile."
1858 help="If True, emit profiling data for completion subsystem using cProfile."
1848 ).tag(config=True)
1859 ).tag(config=True)
1849
1860
1850 profiler_output_dir = Unicode(
1861 profiler_output_dir = Unicode(
1851 default_value=".completion_profiles",
1862 default_value=".completion_profiles",
1852 help="Template for path at which to output profile data for completions."
1863 help="Template for path at which to output profile data for completions."
1853 ).tag(config=True)
1864 ).tag(config=True)
1854
1865
1855 @observe('limit_to__all__')
1866 @observe('limit_to__all__')
1856 def _limit_to_all_changed(self, change):
1867 def _limit_to_all_changed(self, change):
1857 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1868 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1858 'value has been deprecated since IPython 5.0, will be made to have '
1869 'value has been deprecated since IPython 5.0, will be made to have '
1859 'no effect, and will then be removed in a future version of IPython.',
1870 'no effect, and will then be removed in a future version of IPython.',
1860 UserWarning)
1871 UserWarning)
1861
1872
1862 def __init__(
1873 def __init__(
1863 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1874 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1864 ):
1875 ):
1865 """IPCompleter() -> completer
1876 """IPCompleter() -> completer
1866
1877
1867 Return a completer object.
1878 Return a completer object.
1868
1879
1869 Parameters
1880 Parameters
1870 ----------
1881 ----------
1871 shell
1882 shell
1872 a pointer to the ipython shell itself. This is needed
1883 a pointer to the ipython shell itself. This is needed
1873 because this completer knows about magic functions, and those can
1884 because this completer knows about magic functions, and those can
1874 only be accessed via the ipython instance.
1885 only be accessed via the ipython instance.
1875 namespace : dict, optional
1886 namespace : dict, optional
1876 an optional dict where completions are performed.
1887 an optional dict where completions are performed.
1877 global_namespace : dict, optional
1888 global_namespace : dict, optional
1878 secondary optional dict for completions, to
1889 secondary optional dict for completions, to
1879 handle cases (such as IPython embedded inside functions) where
1890 handle cases (such as IPython embedded inside functions) where
1880 both Python scopes are visible.
1891 both Python scopes are visible.
1881 config : Config
1892 config : Config
1882 traitlet's config object
1893 traitlet's config object
1883 **kwargs
1894 **kwargs
1884 passed to super class unmodified.
1895 passed to super class unmodified.
1885 """
1896 """
1886
1897
1887 self.magic_escape = ESC_MAGIC
1898 self.magic_escape = ESC_MAGIC
1888 self.splitter = CompletionSplitter()
1899 self.splitter = CompletionSplitter()
1889
1900
1890 # _greedy_changed() depends on splitter and readline being defined:
1901 # _greedy_changed() depends on splitter and readline being defined:
1891 super().__init__(
1902 super().__init__(
1892 namespace=namespace,
1903 namespace=namespace,
1893 global_namespace=global_namespace,
1904 global_namespace=global_namespace,
1894 config=config,
1905 config=config,
1895 **kwargs,
1906 **kwargs,
1896 )
1907 )
1897
1908
1898 # List where completion matches will be stored
1909 # List where completion matches will be stored
1899 self.matches = []
1910 self.matches = []
1900 self.shell = shell
1911 self.shell = shell
1901 # Regexp to split filenames with spaces in them
1912 # Regexp to split filenames with spaces in them
1902 self.space_name_re = re.compile(r'([^\\] )')
1913 self.space_name_re = re.compile(r'([^\\] )')
1903 # Hold a local ref. to glob.glob for speed
1914 # Hold a local ref. to glob.glob for speed
1904 self.glob = glob.glob
1915 self.glob = glob.glob
1905
1916
1906 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1917 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1907 # buffers, to avoid completion problems.
1918 # buffers, to avoid completion problems.
1908 term = os.environ.get('TERM','xterm')
1919 term = os.environ.get('TERM','xterm')
1909 self.dumb_terminal = term in ['dumb','emacs']
1920 self.dumb_terminal = term in ['dumb','emacs']
1910
1921
1911 # Special handling of backslashes needed in win32 platforms
1922 # Special handling of backslashes needed in win32 platforms
1912 if sys.platform == "win32":
1923 if sys.platform == "win32":
1913 self.clean_glob = self._clean_glob_win32
1924 self.clean_glob = self._clean_glob_win32
1914 else:
1925 else:
1915 self.clean_glob = self._clean_glob
1926 self.clean_glob = self._clean_glob
1916
1927
1917 #regexp to parse docstring for function signature
1928 #regexp to parse docstring for function signature
1918 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1929 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1919 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1930 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1920 #use this if positional argument name is also needed
1931 #use this if positional argument name is also needed
1921 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1932 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1922
1933
1923 self.magic_arg_matchers = [
1934 self.magic_arg_matchers = [
1924 self.magic_config_matcher,
1935 self.magic_config_matcher,
1925 self.magic_color_matcher,
1936 self.magic_color_matcher,
1926 ]
1937 ]
1927
1938
1928 # This is set externally by InteractiveShell
1939 # This is set externally by InteractiveShell
1929 self.custom_completers = None
1940 self.custom_completers = None
1930
1941
1931 # This is a list of names of unicode characters that can be completed
1942 # This is a list of names of unicode characters that can be completed
1932 # into their corresponding unicode value. The list is large, so we
1943 # into their corresponding unicode value. The list is large, so we
1933 # lazily initialize it on first use. Consuming code should access this
1944 # lazily initialize it on first use. Consuming code should access this
1934 # attribute through the `@unicode_names` property.
1945 # attribute through the `@unicode_names` property.
1935 self._unicode_names = None
1946 self._unicode_names = None
1936
1947
1937 self._backslash_combining_matchers = [
1948 self._backslash_combining_matchers = [
1938 self.latex_name_matcher,
1949 self.latex_name_matcher,
1939 self.unicode_name_matcher,
1950 self.unicode_name_matcher,
1940 back_latex_name_matcher,
1951 back_latex_name_matcher,
1941 back_unicode_name_matcher,
1952 back_unicode_name_matcher,
1942 self.fwd_unicode_matcher,
1953 self.fwd_unicode_matcher,
1943 ]
1954 ]
1944
1955
1945 if not self.backslash_combining_completions:
1956 if not self.backslash_combining_completions:
1946 for matcher in self._backslash_combining_matchers:
1957 for matcher in self._backslash_combining_matchers:
1947 self.disable_matchers.append(_get_matcher_id(matcher))
1958 self.disable_matchers.append(_get_matcher_id(matcher))
1948
1959
1949 if not self.merge_completions:
1960 if not self.merge_completions:
1950 self.suppress_competing_matchers = True
1961 self.suppress_competing_matchers = True
1951
1962
    @property
    def matchers(self) -> List[Matcher]:
        """All active matcher routines for completion"""
        if self.dict_keys_only:
            return [self.dict_key_matcher]

        if self.use_jedi:
            return [
                *self.custom_matchers,
                *self._backslash_combining_matchers,
                *self.magic_arg_matchers,
                self.custom_completer_matcher,
                self.magic_matcher,
                self._jedi_matcher,
                self.dict_key_matcher,
                self.file_matcher,
            ]
        else:
            return [
                *self.custom_matchers,
                *self._backslash_combining_matchers,
                *self.magic_arg_matchers,
                self.custom_completer_matcher,
                self.dict_key_matcher,
                self.magic_matcher,
                self.python_matcher,
                self.file_matcher,
                self.python_func_kw_matcher,
            ]

    def all_completions(self, text: str) -> List[str]:
        """
        Wrapper around the completion methods for the benefit of emacs.
        """
        prefix = text.rpartition('.')[0]
        with provisionalcompleter():
            return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
                    for c in self.completions(text, len(text))]

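The prefix handling in `all_completions` above hinges on `str.rpartition('.')`: everything before the last dot is kept and re-joined with each completion text, since jedi completions carry only the final fragment. A minimal sketch (the completion text is a made-up example):

```python
# For a dotted token, rpartition('.') yields everything before the last dot.
prefix = "np.linalg.ei".rpartition(".")[0]
completion_text = "eigh"  # hypothetical completion fragment
full = ".".join([prefix, completion_text])

assert prefix == "np.linalg"
assert full == "np.linalg.eigh"

# A token without a dot has an empty prefix, so the completion text is used as-is.
assert "eigh".rpartition(".")[0] == ""
```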
    def _clean_glob(self, text: str):
        return self.glob("%s*" % text)

    def _clean_glob_win32(self, text: str):
        return [f.replace("\\", "/")
                for f in self.glob("%s*" % text)]

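The win32 variant above normalizes backslashes to forward slashes so completions paste cleanly into Python strings; the one-liner can be exercised without touching the filesystem:

```python
# Backslash-to-slash normalization, as done by _clean_glob_win32 above.
paths = [r"C:\Users\me\notebook.ipynb", r"C:\tmp"]
cleaned = [f.replace("\\", "/") for f in paths]
assert cleaned == ["C:/Users/me/notebook.ipynb", "C:/tmp"]
```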
    @context_matcher()
    def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Same as :any:`file_matches`, but adapted to the new Matcher API."""
        matches = self.file_matches(context.token)
        # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
        # starts with `/home/`, `C:\`, etc)
        return _convert_matcher_v1_result_to_v2(matches, type="path")

    def file_matches(self, text: str) -> List[str]:
        """Match filenames, expanding ~USER type strings.

        Most of the seemingly convoluted logic in this completer is an
        attempt to handle filenames with spaces in them. And yet it's not
        quite perfect, because Python's readline doesn't expose all of the
        GNU readline details needed for this to be done correctly.

        For a filename with a space in it, the printed completions will be
        only the parts after what's already been typed (instead of the
        full completions, as is normally done). I don't think with the
        current (as of Python 2.3) Python readline it's possible to do
        better.

        .. deprecated:: 8.6
            You can use :meth:`file_matcher` instead.
        """

        # chars that require escaping with backslash - i.e. chars
        # that readline treats incorrectly as delimiters, but we
        # don't want to treat as delimiters in filename matching
        # when escaped with backslash
        if text.startswith('!'):
            text = text[1:]
            text_prefix = u'!'
        else:
            text_prefix = u''

        text_until_cursor = self.text_until_cursor
        # track strings with open quotes
        open_quotes = has_open_quotes(text_until_cursor)

        if '(' in text_until_cursor or '[' in text_until_cursor:
            lsplit = text
        else:
            try:
                # arg_split ~ shlex.split, but with unicode bugs fixed by us
                lsplit = arg_split(text_until_cursor)[-1]
            except ValueError:
                # typically an unmatched ", or backslash without escaped char.
                if open_quotes:
                    lsplit = text_until_cursor.split(open_quotes)[-1]
                else:
                    return []
            except IndexError:
                # tab pressed on empty line
                lsplit = ""

        if not open_quotes and lsplit != protect_filename(lsplit):
            # if protectables are found, do matching on the whole escaped name
            has_protectables = True
            text0, text = text, lsplit
        else:
            has_protectables = False
            text = os.path.expanduser(text)

        if text == "":
            return [text_prefix + protect_filename(f) for f in self.glob("*")]

        # Compute the matches from the filesystem
        if sys.platform == 'win32':
            m0 = self.clean_glob(text)
        else:
            m0 = self.clean_glob(text.replace('\\', ''))

        if has_protectables:
            # If we had protectables, we need to revert our changes to the
            # beginning of filename so that we don't double-write the part
            # of the filename we have so far
            len_lsplit = len(lsplit)
            matches = [text_prefix + text0 +
                       protect_filename(f[len_lsplit:]) for f in m0]
        else:
            if open_quotes:
                # if we have a string with an open quote, we don't need to
                # protect the names beyond the quote (and we _shouldn't_, as
                # it would cause bugs when the filesystem call is made).
                matches = m0 if sys.platform == "win32" else \
                    [protect_filename(f, open_quotes) for f in m0]
            else:
                matches = [text_prefix +
                           protect_filename(f) for f in m0]

        # Mark directories in input list by appending '/' to their names.
        return [x + '/' if os.path.isdir(x) else x for x in matches]

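The final step of `file_matches` above marks directories by appending `/` so the user can keep completing into them. A self-contained sketch of that step using a throwaway temp directory:

```python
import os
import tempfile

# Build a small tree, then mark directories with a trailing '/'
# exactly as the last line of file_matches does.
with tempfile.TemporaryDirectory() as root:
    os.mkdir(os.path.join(root, "subdir"))
    open(os.path.join(root, "file.txt"), "w").close()
    matches = sorted(os.path.join(root, name) for name in os.listdir(root))
    marked = [x + "/" if os.path.isdir(x) else x for x in matches]

assert any(m.endswith("file.txt") for m in marked)
assert any(m.endswith("subdir/") for m in marked)
```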
    @context_matcher()
    def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match magics."""
        text = context.token
        matches = self.magic_matches(text)
        result = _convert_matcher_v1_result_to_v2(matches, type="magic")
        is_magic_prefix = len(text) > 0 and text[0] == "%"
        result["suppress"] = is_magic_prefix and bool(result["completions"])
        return result

    def magic_matches(self, text: str):
        """Match magics.

        .. deprecated:: 8.6
            You can use :meth:`magic_matcher` instead.
        """
        # Get all shell magics now rather than statically, so magics loaded at
        # runtime show up too.
        lsm = self.shell.magics_manager.lsmagic()
        line_magics = lsm['line']
        cell_magics = lsm['cell']
        pre = self.magic_escape
        pre2 = pre + pre

        explicit_magic = text.startswith(pre)

        # Completion logic:
        # - user gives %%: only do cell magics
        # - user gives %: do both line and cell magics
        # - no prefix: do both
        # In other words, line magics are skipped if the user gives %% explicitly
        #
        # We also exclude magics that match any currently visible names:
        # https://github.com/ipython/ipython/issues/4877, unless the user has
        # typed a %:
        # https://github.com/ipython/ipython/issues/10754
        bare_text = text.lstrip(pre)
        global_matches = self.global_matches(bare_text)
        if not explicit_magic:
            def matches(magic):
                """
                Filter magics, in particular remove magics that match
                a name present in global namespace.
                """
                return (magic.startswith(bare_text) and
                        magic not in global_matches)
        else:
            def matches(magic):
                return magic.startswith(bare_text)

        comp = [pre2 + m for m in cell_magics if matches(m)]
        if not text.startswith(pre2):
            comp += [pre + m for m in line_magics if matches(m)]

        return comp

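The `%`/`%%` prefix logic above can be isolated into a self-contained sketch. The real method pulls line and cell magics from the shell's magics manager at call time and also filters against visible global names; the magic names below are made up for illustration:

```python
# Miniature version of the magic_matches prefix logic above
# (hypothetical magic names; no global-namespace filtering).
line_magics = {"timeit", "time", "matplotlib"}
cell_magics = {"timeit", "writefile"}
pre, pre2 = "%", "%%"

def complete_magics(text):
    bare = text.lstrip(pre)
    # cell magics always qualify; line magics only if '%%' was not explicit
    comp = [pre2 + m for m in cell_magics if m.startswith(bare)]
    if not text.startswith(pre2):
        comp += [pre + m for m in line_magics if m.startswith(bare)]
    return sorted(comp)

assert complete_magics("%%time") == ["%%timeit"]
assert complete_magics("%tim") == ["%%timeit", "%time", "%timeit"]
```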
    @context_matcher()
    def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match class names and attributes for %config magic."""
        # NOTE: uses `line_buffer` equivalent for compatibility
        matches = self.magic_config_matches(context.line_with_cursor)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def magic_config_matches(self, text: str) -> List[str]:
        """Match class names and attributes for %config magic.

        .. deprecated:: 8.6
            You can use :meth:`magic_config_matcher` instead.
        """
        texts = text.strip().split()

        if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
            # get all configuration classes
            classes = sorted(set([c for c in self.shell.configurables
                                  if c.__class__.class_traits(config=True)
                                  ]), key=lambda x: x.__class__.__name__)
            classnames = [c.__class__.__name__ for c in classes]

            # return all classnames if config or %config is given
            if len(texts) == 1:
                return classnames

            # match classname
            classname_texts = texts[1].split('.')
            classname = classname_texts[0]
            classname_matches = [c for c in classnames
                                 if c.startswith(classname)]

            # return matched classes or the matched class with attributes
            if texts[1].find('.') < 0:
                return classname_matches
            elif len(classname_matches) == 1 and \
                    classname_matches[0] == classname:
                cls = classes[classnames.index(classname)].__class__
                help = cls.class_get_help()
                # strip leading '--' from cl-args:
                help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
                return [attr.split('=')[0]
                        for attr in help.strip().splitlines()
                        if attr.startswith(texts[1])]
        return []

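The attribute branch above strips a leading `--` from every line of the class help text before offering attribute names; `re.MULTILINE` makes `^` match at each line start. A short demonstration on a made-up two-line help string:

```python
import re

# '^--' with re.MULTILINE strips the option dashes from every line,
# mirroring the re.sub call in magic_config_matches above.
help_text = "--TerminalInteractiveShell.editor=<...>\n--TerminalInteractiveShell.banner=<...>"
stripped = re.sub(re.compile(r"^--", re.MULTILINE), "", help_text)
attrs = [line.split("=")[0] for line in stripped.splitlines()]
assert attrs == [
    "TerminalInteractiveShell.editor",
    "TerminalInteractiveShell.banner",
]
```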
    @context_matcher()
    def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match color schemes for %colors magic."""
        # NOTE: uses `line_buffer` equivalent for compatibility
        matches = self.magic_color_matches(context.line_with_cursor)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def magic_color_matches(self, text: str) -> List[str]:
        """Match color schemes for %colors magic.

        .. deprecated:: 8.6
            You can use :meth:`magic_color_matcher` instead.
        """
        texts = text.split()
        if text.endswith(' '):
            # .split() strips off the trailing whitespace. Add '' back
            # so that: '%colors ' -> ['%colors', '']
            texts.append('')

        if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
            prefix = texts[1]
            return [color for color in InspectColors.keys()
                    if color.startswith(prefix)]
        return []

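The empty-token fix above matters because `str.split()` discards trailing whitespace, so `'%colors '` would otherwise look like a bare `'%colors'` with nothing to complete:

```python
# Without the append, len(texts) == 1 and the empty-prefix case
# (complete all schemes) would never trigger.
text = "%colors "
texts = text.split()
if text.endswith(" "):
    texts.append("")
assert texts == ["%colors", ""]
assert "%colors nocolor".split() == ["%colors", "nocolor"]
```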
    @context_matcher(identifier="IPCompleter.jedi_matcher")
    def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
        matches = self._jedi_matches(
            cursor_column=context.cursor_position,
            cursor_line=context.cursor_line,
            text=context.full_text,
        )
        return {
            "completions": matches,
            # static analysis should not suppress other matchers
            "suppress": False,
        }

    def _jedi_matches(
        self, cursor_column: int, cursor_line: int, text: str
    ) -> Iterator[_JediCompletionLike]:
        """
        Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
        cursor position.

        Parameters
        ----------
        cursor_column : int
            column position of the cursor in ``text``, 0-indexed.
        cursor_line : int
            line position of the cursor in ``text``, 0-indexed
        text : str
            text to complete

        Notes
        -----
        If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
        object containing a string with the Jedi debug information attached.

        .. deprecated:: 8.6
            You can use :meth:`_jedi_matcher` instead.
        """
        namespaces = [self.namespace]
        if self.global_namespace is not None:
            namespaces.append(self.global_namespace)

        completion_filter = lambda x: x
        offset = cursor_to_position(text, cursor_line, cursor_column)
        # filter output if we are completing for object members
        if offset:
            pre = text[offset - 1]
            if pre == '.':
                if self.omit__names == 2:
                    completion_filter = lambda c: not c.name.startswith('_')
                elif self.omit__names == 1:
                    completion_filter = lambda c: not (c.name.startswith('__') and c.name.endswith('__'))
                elif self.omit__names == 0:
                    completion_filter = lambda x: x
                else:
                    raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))

        interpreter = jedi.Interpreter(text[:offset], namespaces)
        try_jedi = True

        try:
            # find the first token in the current tree -- if it is a ' or " then we are in a string
            completing_string = False
            try:
                first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
            except StopIteration:
                pass
            else:
                # note the value may be ', ", or it may also be ''' or """, or
                # in some cases, """what/you/typed..., but all of these are
                # strings.
                completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}

            # if we are in a string, jedi is likely not the right candidate
            # for now. Skip it.
            try_jedi = not completing_string
        except Exception as e:
            # many things can go wrong here; we are using a private API, just don't crash.
            if self.debug:
                print("Error detecting if completing a non-finished string :", e, '|')

        if not try_jedi:
            return iter([])
        try:
            return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
        except Exception as e:
            if self.debug:
                return iter(
                    [
                        _FakeJediCompletion(
                            'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\ns"""'
                            % (e)
                        )
                    ]
                )
            else:
                return iter([])

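`_jedi_matches` above relies on `cursor_to_position` to turn a 0-indexed `(line, column)` cursor into a flat string offset before slicing `text[:offset]`. A simplified re-implementation (for illustration only, not the actual IPython helper):

```python
# Simplified sketch of (line, column) -> offset conversion:
# each preceding line contributes its length plus one for the stripped '\n'.
def cursor_to_offset(text, line, column):
    lines = text.split("\n")
    return sum(len(l) + 1 for l in lines[:line]) + column

text = "import os\nos.pa"
offset = cursor_to_offset(text, 1, 5)
assert offset == len(text)
assert text[:offset] == "import os\nos.pa"
```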
    @context_matcher()
    def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match attributes or global python names"""
        text = context.line_with_cursor
        if "." in text:
            try:
                matches, fragment = self._attr_matches(text, include_prefix=False)
                if text.endswith(".") and self.omit__names:
                    if self.omit__names == 1:
                        # true if txt is _not_ a __ name, false otherwise:
                        no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
                    else:
                        # true if txt is _not_ a _ name, false otherwise:
                        no__name = (
                            lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
                            is None
                        )
                    matches = filter(no__name, matches)
                return _convert_matcher_v1_result_to_v2(
                    matches, type="attribute", fragment=fragment
                )
            except NameError:
                # catches <undefined attributes>.<tab>
                matches = []
                return _convert_matcher_v1_result_to_v2(matches, type="attribute")
        else:
            matches = self.global_matches(context.token)
            # TODO: maybe distinguish between functions, modules and just "variables"
            return _convert_matcher_v1_result_to_v2(matches, type="variable")

    @completion_matcher(api_version=1)
    def python_matches(self, text: str) -> Iterable[str]:
        """Match attributes or global python names.

        .. deprecated:: 8.27
            You can use :meth:`python_matcher` instead."""
        if "." in text:
            try:
                matches = self.attr_matches(text)
                if text.endswith('.') and self.omit__names:
                    if self.omit__names == 1:
                        # true if txt is _not_ a __ name, false otherwise:
                        no__name = (lambda txt:
                                    re.match(r'.*\.__.*?__', txt) is None)
                    else:
                        # true if txt is _not_ a _ name, false otherwise:
                        no__name = (lambda txt:
                                    re.match(r'\._.*?', txt[txt.rindex('.'):]) is None)
                    matches = filter(no__name, matches)
            except NameError:
                # catches <undefined attributes>.<tab>
                matches = []
        else:
            matches = self.global_matches(text)
        return matches

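The two `omit__names` filters above distinguish dunder attributes from single-underscore ones with the regexes shown in the methods; their behavior on typical attribute strings:

```python
import re

# The same predicates as in python_matcher/python_matches above:
# True means "keep the completion", False means "filter it out".
is_not_dunder = lambda txt: re.match(r".*\.__.*?__", txt) is None
is_not_private = lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :]) is None

assert not is_not_dunder("obj.__init__")   # dunder: filtered out
assert is_not_dunder("obj.value")          # plain attribute: kept
assert not is_not_private("obj._cache")    # single underscore: filtered out
assert is_not_private("obj.value")
```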
    def _default_arguments_from_docstring(self, doc):
        """Parse the first line of docstring for call signature.

        Docstring should be of the form 'min(iterable[, key=func])\n'.
        It can also parse cython docstring of the form
        'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
        """
        if doc is None:
            return []

        # care only about the first line
        line = doc.lstrip().splitlines()[0]

        # p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
        # 'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
        sig = self.docstring_sig_re.search(line)
        if sig is None:
            return []
        # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
        sig = sig.groups()[0].split(',')
        ret = []
        for s in sig:
            # re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
            ret += self.docstring_kwd_re.findall(s)
        return ret

2410
2368 def _default_arguments(self, obj):
2411 def _default_arguments(self, obj):
2369 """Return the list of default arguments of obj if it is callable,
2412 """Return the list of default arguments of obj if it is callable,
2370 or empty list otherwise."""
2413 or empty list otherwise."""
2371 call_obj = obj
2414 call_obj = obj
2372 ret = []
2415 ret = []
2373 if inspect.isbuiltin(obj):
2416 if inspect.isbuiltin(obj):
2374 pass
2417 pass
2375 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2418 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2376 if inspect.isclass(obj):
2419 if inspect.isclass(obj):
2377 #for cython embedsignature=True the constructor docstring
2420 #for cython embedsignature=True the constructor docstring
2378 #belongs to the object itself not __init__
2421 #belongs to the object itself not __init__
2379 ret += self._default_arguments_from_docstring(
2422 ret += self._default_arguments_from_docstring(
2380 getattr(obj, '__doc__', ''))
2423 getattr(obj, '__doc__', ''))
2381 # for classes, check for __init__,__new__
2424 # for classes, check for __init__,__new__
2382 call_obj = (getattr(obj, '__init__', None) or
2425 call_obj = (getattr(obj, '__init__', None) or
2383 getattr(obj, '__new__', None))
2426 getattr(obj, '__new__', None))
2384 # for all others, check if they are __call__able
2427 # for all others, check if they are __call__able
2385 elif hasattr(obj, '__call__'):
2428 elif hasattr(obj, '__call__'):
2386 call_obj = obj.__call__
2429 call_obj = obj.__call__
2387 ret += self._default_arguments_from_docstring(
2430 ret += self._default_arguments_from_docstring(
2388 getattr(call_obj, '__doc__', ''))
2431 getattr(call_obj, '__doc__', ''))
2389
2432
2390 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2433 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2391 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2434 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2392
2435
2393 try:
2436 try:
2394 sig = inspect.signature(obj)
2437 sig = inspect.signature(obj)
2395 ret.extend(k for k, v in sig.parameters.items() if
2438 ret.extend(k for k, v in sig.parameters.items() if
2396 v.kind in _keeps)
2439 v.kind in _keeps)
2397 except ValueError:
2440 except ValueError:
2398 pass
2441 pass
2399
2442
2400 return list(set(ret))
2443 return list(set(ret))
2401
2444
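The signature-based branch above offers only keyword-capable parameters as completions: `KEYWORD_ONLY` and `POSITIONAL_OR_KEYWORD` kinds are kept, while `*args` and `**kwargs` are skipped. A minimal sketch:

```python
import inspect

# Filter a signature's parameters the same way _default_arguments above does.
def example(a, b=1, *args, c=2, **kwargs):
    pass

_keeps = (inspect.Parameter.KEYWORD_ONLY,
          inspect.Parameter.POSITIONAL_OR_KEYWORD)
sig = inspect.signature(example)
names = [k for k, v in sig.parameters.items() if v.kind in _keeps]
assert names == ["a", "b", "c"]  # *args and **kwargs are dropped
```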
    @context_matcher()
    def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match named parameters (kwargs) of the last open function."""
        matches = self.python_func_kw_matches(context.token)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def python_func_kw_matches(self, text):
        """Match named parameters (kwargs) of the last open function.

        .. deprecated:: 8.6
            You can use :meth:`python_func_kw_matcher` instead.
        """

        if "." in text: # a parameter cannot be dotted
            return []
        try: regexp = self.__funcParamsRegex
        except AttributeError:
            regexp = self.__funcParamsRegex = re.compile(r'''
                '.*?(?<!\\)' |    # single quoted strings or
                ".*?(?<!\\)" |    # double quoted strings or
                \w+          |    # identifier
                \S                # other characters
                ''', re.VERBOSE | re.DOTALL)
        # 1. find the nearest identifier that comes before an unclosed
        # parenthesis before the cursor
        # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
        tokens = regexp.findall(self.text_until_cursor)
        iterTokens = reversed(tokens); openPar = 0

        for token in iterTokens:
            if token == ')':
                openPar -= 1
            elif token == '(':
                openPar += 1
                if openPar > 0:
                    # found the last unclosed parenthesis
                    break
        else:
            return []
        # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
        ids = []
        isId = re.compile(r'\w+$').match

        while True:
            try:
                ids.append(next(iterTokens))
                if not isId(ids[-1]):
                    ids.pop(); break
                if not next(iterTokens) == '.':
                    break
            except StopIteration:
                break

        # Find all named arguments already assigned to, as to avoid suggesting
        # them again
        usedNamedArgs = set()
        par_level = -1
        for token, next_token in zip(tokens, tokens[1:]):
            if token == '(':
                par_level += 1
            elif token == ')':
                par_level -= 1

            if par_level != 0:
                continue

            if next_token != '=':
                continue

            usedNamedArgs.add(token)

        argMatches = []
        try:
            callableObj = '.'.join(ids[::-1])
            namedArgs = self._default_arguments(eval(callableObj,
                                                     self.namespace))

            # Remove used named arguments from the list, no need to show twice
            for namedArg in set(namedArgs) - usedNamedArgs:
                if namedArg.startswith(text):
                    argMatches.append("%s=" %namedArg)
        except:
            pass

        return argMatches

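Step 1 above (finding the call whose keywords should be completed) can be sketched in isolation. The helper `open_call_candidate` is illustrative only; it uses the same verbose token pattern and right-to-left parenthesis count as the method:

```python
import re

# Same verbose pattern as above: quoted strings, identifiers, then
# any other single non-space character.
FUNC_PARAMS_REGEX = re.compile(r'''
    '.*?(?<!\\)' |    # single quoted strings or
    ".*?(?<!\\)" |    # double quoted strings or
    \w+          |    # identifier
    \S                # other characters
    ''', re.VERBOSE | re.DOTALL)

def open_call_candidate(text_until_cursor):
    # Scan tokens right-to-left, counting parentheses, until the last
    # unclosed '(' is found; the identifier just before it names the
    # function whose keyword arguments we would complete.
    tokens = FUNC_PARAMS_REGEX.findall(text_until_cursor)
    open_par = 0
    it = reversed(tokens)
    for token in it:
        if token == ')':
            open_par -= 1
        elif token == '(':
            open_par += 1
            if open_par > 0:
                break
    else:
        return None  # no unclosed parenthesis before the cursor
    for token in it:
        if re.match(r'\w+$', token):
            return token
        return None

print(open_call_candidate("foo (1+bar(x), pa"))  # foo
```

Closed calls such as `"foo(1)"` yield no candidate, matching the early `return []` above.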
    @staticmethod
    def _get_keys(obj: Any) -> List[Any]:
        # Objects can define their own completions by defining an
        # _ipython_key_completions_() method.
        method = get_real_method(obj, '_ipython_key_completions_')
        if method is not None:
            return method()

        # Special case some common in-memory dict-like types
        if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
            try:
                return list(obj.keys())
            except Exception:
                return []
        elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
            try:
                return list(obj.obj.keys())
            except Exception:
                return []
        elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
             _safe_isinstance(obj, 'numpy', 'void'):
            return obj.dtype.names or []
        return []

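The `_ipython_key_completions_` protocol checked first above can be implemented by any user class. A minimal sketch (the `Config` class is hypothetical):

```python
# A class can advertise its own subscript completions by defining
# _ipython_key_completions_; _get_keys above calls it before any of
# the dict/pandas/numpy special cases.
class Config:
    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        # Return the keys to offer after `cfg[`
        return list(self._data)

cfg = Config({"host": "localhost", "port": 8080})
print(cfg._ipython_key_completions_())  # ['host', 'port']
```

With such a class, typing ``cfg[<tab>`` in IPython offers ``'host'`` and ``'port'``.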
    @context_matcher()
    def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match string keys in a dictionary, after e.g. ``foo[``."""
        matches = self.dict_key_matches(context.token)
        return _convert_matcher_v1_result_to_v2(
            matches, type="dict key", suppress_if_matches=True
        )

    def dict_key_matches(self, text: str) -> List[str]:
        """Match string keys in a dictionary, after e.g. ``foo[``.

        .. deprecated:: 8.6
            You can use :meth:`dict_key_matcher` instead.
        """

        # Short-circuit on closed dictionary (regular expression would
        # not match anyway, but would take quite a while).
        if self.text_until_cursor.strip().endswith("]"):
            return []

        match = DICT_MATCHER_REGEX.search(self.text_until_cursor)

        if match is None:
            return []

        expr, prior_tuple_keys, key_prefix = match.groups()

        obj = self._evaluate_expr(expr)

        if obj is not_found:
            return []

        keys = self._get_keys(obj)
        if not keys:
            return keys

        tuple_prefix = guarded_eval(
            prior_tuple_keys,
            EvaluationContext(
                globals=self.global_namespace,
                locals=self.namespace,
                evaluation=self.evaluation,  # type: ignore
                in_subscript=True,
            ),
        )

        closing_quote, token_offset, matches = match_dict_keys(
            keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
        )
        if not matches:
            return []

        # get the cursor position of
        # - the text being completed
        # - the start of the key text
        # - the start of the completion
        text_start = len(self.text_until_cursor) - len(text)
        if key_prefix:
            key_start = match.start(3)
            completion_start = key_start + token_offset
        else:
            key_start = completion_start = match.end()

        # grab the leading prefix, to make sure all completions start with `text`
        if text_start > key_start:
            leading = ''
        else:
            leading = text[text_start:completion_start]

        # append closing quote and bracket as appropriate
        # this is *not* appropriate if the opening quote or bracket is outside
        # the text given to this method, e.g. `d["""a\nt
        can_close_quote = False
        can_close_bracket = False

        continuation = self.line_buffer[len(self.text_until_cursor) :].strip()

        if continuation.startswith(closing_quote):
            # do not close if already closed, e.g. `d['a<tab>'`
            continuation = continuation[len(closing_quote) :]
        else:
            can_close_quote = True

        continuation = continuation.strip()

        # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
        # handling it is out of scope, so let's avoid appending suffixes.
        has_known_tuple_handling = isinstance(obj, dict)

        can_close_bracket = (
            not continuation.startswith("]") and self.auto_close_dict_keys
        )
        can_close_tuple_item = (
            not continuation.startswith(",")
            and has_known_tuple_handling
            and self.auto_close_dict_keys
        )
        can_close_quote = can_close_quote and self.auto_close_dict_keys

        # fast path if a closing quote should be appended but no suffix is allowed
        if not can_close_quote and not can_close_bracket and closing_quote:
            return [leading + k for k in matches]

        results = []

        end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM

        for k, state_flag in matches.items():
            result = leading + k
            if can_close_quote and closing_quote:
                result += closing_quote

            if state_flag == end_of_tuple_or_item:
                # We do not know which suffix to add,
                # e.g. both tuple item and string
                # match this item.
                pass

            if state_flag in end_of_tuple_or_item and can_close_bracket:
                result += "]"
            if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
                result += ", "
            results.append(result)
        return results

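The quote-closing behaviour above can be illustrated with a much simpler sketch. The helper `complete_str_keys` is hypothetical and ignores tuple keys, cursor offsets, and bracket auto-closing; it only shows prefix matching plus the "do not close if already closed" rule:

```python
def complete_str_keys(obj, key_prefix, already_closed=False):
    # Simplified take on the logic above: prefix-match string keys and
    # append the closing quote unless the user already typed it.
    quoted = key_prefix[:1] in "'\""
    quote = key_prefix[:1] if quoted else "'"
    prefix = key_prefix[1:] if quoted else key_prefix
    out = []
    for k in obj.keys():
        if isinstance(k, str) and k.startswith(prefix):
            out.append(quote + k + ("" if already_closed else quote))
    return out

d = {"alpha": 1, "beta": 2}
print(complete_str_keys(d, "'al"))  # ["'alpha'"]
```

The real method additionally consults `auto_close_dict_keys` and the text after the cursor before appending any suffix.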
    @context_matcher()
    def unicode_name_matcher(self, context: CompletionContext):
        """Same as :any:`unicode_name_matches`, but adapted to the new Matcher API."""
        fragment, matches = self.unicode_name_matches(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

    @staticmethod
    def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
        """Match Latex-like syntax for unicode characters based
        on the name of the character.

        This does ``\\GREEK SMALL LETTER ETA`` -> ``η``

        Works only on valid python 3 identifiers, or on combining characters that
        will combine to form a valid identifier.
        """
        slashpos = text.rfind('\\')
        if slashpos > -1:
            s = text[slashpos+1:]
            try :
                unic = unicodedata.lookup(s)
                # allow combining chars
                if ('a'+unic).isidentifier():
                    return '\\'+s,[unic]
            except KeyError:
                pass
        return '', []

    @context_matcher()
    def latex_name_matcher(self, context: CompletionContext):
        """Match Latex syntax for unicode characters.

        This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
        """
        fragment, matches = self.latex_matches(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="latex", fragment=fragment, suppress_if_matches=True
        )

    def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
        """Match Latex syntax for unicode characters.

        This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``

        .. deprecated:: 8.6
            You can use :meth:`latex_name_matcher` instead.
        """
        slashpos = text.rfind('\\')
        if slashpos > -1:
            s = text[slashpos:]
            if s in latex_symbols:
                # Try to complete a full latex symbol to unicode
                # \\alpha -> α
                return s, [latex_symbols[s]]
            else:
                # If a user has partially typed a latex symbol, give them
                # a full list of options \al -> [\aleph, \alpha]
                matches = [k for k in latex_symbols if k.startswith(s)]
                if matches:
                    return s, matches
        return '', ()

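The unicode-name lookup above leans entirely on the standard library. A self-contained sketch of the same idea (the helper `unicode_name_complete` is illustrative, not the module's API):

```python
import unicodedata

def unicode_name_complete(text):
    # As in unicode_name_matches above: take everything after the last
    # backslash as a Unicode character name and look it up.
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", []
    name = text[slashpos + 1:]
    try:
        char = unicodedata.lookup(name)
    except KeyError:
        return "", []
    # allow combining chars, as long as the result forms an identifier
    if ("a" + char).isidentifier():
        return "\\" + name, [char]
    return "", []

fragment, matches = unicode_name_complete("x = \\GREEK SMALL LETTER ETA")
print(matches)  # ['η']
```

The identifier check matters for combining characters, which are not identifiers on their own but become valid when attached to a letter.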
    @context_matcher()
    def custom_completer_matcher(self, context):
        """Dispatch custom completer.

        If a match is found, suppresses all other matchers except for Jedi.
        """
        matches = self.dispatch_custom_completer(context.token) or []
        result = _convert_matcher_v1_result_to_v2(
            matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
        )
        result["ordered"] = True
        result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
        return result

    def dispatch_custom_completer(self, text):
        """
        .. deprecated:: 8.6
            You can use :meth:`custom_completer_matcher` instead.
        """
        if not self.custom_completers:
            return

        line = self.line_buffer
        if not line.strip():
            return None

        # Create a little structure to pass all the relevant information about
        # the current completion to any custom completer.
        event = SimpleNamespace()
        event.line = line
        event.symbol = text
        cmd = line.split(None,1)[0]
        event.command = cmd
        event.text_until_cursor = self.text_until_cursor

        # for foo etc, try also to find completer for %foo
        if not cmd.startswith(self.magic_escape):
            try_magic = self.custom_completers.s_matches(
                self.magic_escape + cmd)
        else:
            try_magic = []

        for c in itertools.chain(self.custom_completers.s_matches(cmd),
                                 try_magic,
                                 self.custom_completers.flat_matches(self.text_until_cursor)):
            try:
                res = c(event)
                if res:
                    # first, try case sensitive match
                    withcase = [r for r in res if r.startswith(text)]
                    if withcase:
                        return withcase
                    # if none, then case insensitive ones are ok too
                    text_low = text.lower()
                    return [r for r in res if r.lower().startswith(text_low)]
            except TryNext:
                pass
            except KeyboardInterrupt:
                """
                If a custom completer takes too long,
                let the keyboard interrupt abort it and return nothing.
                """
                break

        return None

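A custom completer receives the event structure built above (`line`, `symbol`, `command`, `text_until_cursor`). A minimal sketch of what such a callback might look like; the `%checkout` magic and branch names here are hypothetical, and registration details (hooks) are omitted:

```python
from types import SimpleNamespace

def checkout_completer(event):
    # Hypothetical completer for a fictional `%checkout` magic:
    # offer branch names matching the symbol being completed.
    branches = ["main", "maint-8.x", "docs"]
    return [b for b in branches if b.startswith(event.symbol)]

# Simulate the event dispatch_custom_completer would build.
event = SimpleNamespace(
    line="%checkout ma",
    symbol="ma",
    command="%checkout",
    text_until_cursor="%checkout ma",
)
print(checkout_completer(event))  # ['main', 'maint-8.x']
```

Returning an empty list (or raising `TryNext`) lets dispatch fall through to the next registered completer.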
    def completions(self, text: str, offset: int)->Iterator[Completion]:
        """
        Returns an iterator over the possible completions

        .. warning::

            Unstable

            This function is unstable, API may change without warning.
            It will also raise unless used in a proper context manager.

        Parameters
        ----------
        text : str
            Full text of the current input, multi line string.
        offset : int
            Integer representing the position of the cursor in ``text``. Offset
            is 0-based indexed.

        Yields
        ------
        Completion

        Notes
        -----
        The cursor on a text can either be seen as being "in between"
        characters or "on" a character depending on the interface visible to
        the user. For consistency, the cursor being "in between" characters X
        and Y is equivalent to the cursor being "on" character Y, that is to say
        the character the cursor is on is considered as being after the cursor.

        Combining characters may span more than one position in the
        text.

        .. note::

            If ``IPCompleter.debug`` is :any:`True`, this will yield a
            ``--jedi/ipython--`` fake Completion token to distinguish
            completions returned by Jedi from usual IPython completions.

        .. note::

            Completions are not completely deduplicated yet. If identical
            completions are coming from different sources this function does not
            ensure that each completion object will only be present once.
        """
        warnings.warn("_complete is a provisional API (as of IPython 6.0). "
                      "It may change without warnings. "
                      "Use in corresponding context manager.",
                      category=ProvisionalCompleterWarning, stacklevel=2)

        seen = set()
        profiler:Optional[cProfile.Profile]
        try:
            if self.profile_completions:
                import cProfile
                profiler = cProfile.Profile()
                profiler.enable()
            else:
                profiler = None

            for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
                if c and (c in seen):
                    continue
                yield c
                seen.add(c)
        except KeyboardInterrupt:
            """if completions take too long and users send keyboard interrupt,
            do not crash and return ASAP. """
            pass
        finally:
            if profiler is not None:
                profiler.disable()
                ensure_dir_exists(self.profiler_output_dir)
                output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
                print("Writing profiler output to", output_path)
                profiler.dump_stats(output_path)

    def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
        """
        Core completion module. Same signature as :any:`completions`, with the
        extra `timeout` parameter (in seconds).

        Computing jedi's completion ``.type`` can be quite expensive (it is a
        lazy property) and can require some warm-up, more warm-up than just
        computing the ``name`` of a completion. The warm-up can be:

        - Long warm-up the first time a module is encountered after
          install/update: actually build parse/inference tree.

        - first time the module is encountered in a session: load tree from
          disk.

        We don't want to block completions for tens of seconds so we give the
        completer a "budget" of ``_timeout`` seconds per invocation to compute
        completions types, the completions that have not yet been computed will
        be marked as "unknown" and will have a chance to be computed next round
        as things get cached.

        Keep in mind that Jedi is not the only thing treating the completion so
        keep the timeout short-ish, as if we take more than 0.3 seconds we still
        have lots of processing to do.

        """
        deadline = time.monotonic() + _timeout

        before = full_text[:offset]
        cursor_line, cursor_column = position_to_cursor(full_text, offset)

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        def is_non_jedi_result(
            result: MatcherResult, identifier: str
        ) -> TypeGuard[SimpleMatcherResult]:
            return identifier != jedi_matcher_id

        results = self._complete(
            full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
        )

        non_jedi_results: Dict[str, SimpleMatcherResult] = {
            identifier: result
            for identifier, result in results.items()
            if is_non_jedi_result(result, identifier)
        }

        jedi_matches = (
            cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
            if jedi_matcher_id in results
            else ()
        )

        iter_jm = iter(jedi_matches)
        if _timeout:
            for jm in iter_jm:
                try:
                    type_ = jm.type
                except Exception:
                    if self.debug:
                        print("Error in Jedi getting type of ", jm)
                    type_ = None
                delta = len(jm.name_with_symbols) - len(jm.complete)
                if type_ == 'function':
                    signature = _make_signature(jm)
                else:
                    signature = ''
                yield Completion(start=offset - delta,
                                 end=offset,
                                 text=jm.name_with_symbols,
                                 type=type_,
                                 signature=signature,
                                 _origin='jedi')

                if time.monotonic() > deadline:
                    break

        for jm in iter_jm:
            delta = len(jm.name_with_symbols) - len(jm.complete)
            yield Completion(
                start=offset - delta,
                end=offset,
                text=jm.name_with_symbols,
                type=_UNKNOWN_TYPE,  # don't compute type for speed
                _origin="jedi",
                signature="",
            )

        # TODO:
        # Suppress this, right now just for debug.
2978 # Suppress this, right now just for debug.
2936 if jedi_matches and non_jedi_results and self.debug:
2979 if jedi_matches and non_jedi_results and self.debug:
2937 some_start_offset = before.rfind(
2980 some_start_offset = before.rfind(
2938 next(iter(non_jedi_results.values()))["matched_fragment"]
2981 next(iter(non_jedi_results.values()))["matched_fragment"]
2939 )
2982 )
2940 yield Completion(
2983 yield Completion(
2941 start=some_start_offset,
2984 start=some_start_offset,
2942 end=offset,
2985 end=offset,
2943 text="--jedi/ipython--",
2986 text="--jedi/ipython--",
2944 _origin="debug",
2987 _origin="debug",
2945 type="none",
2988 type="none",
2946 signature="",
2989 signature="",
2947 )
2990 )
2948
2991
2949 ordered: List[Completion] = []
2992 ordered: List[Completion] = []
2950 sortable: List[Completion] = []
2993 sortable: List[Completion] = []
2951
2994
2952 for origin, result in non_jedi_results.items():
2995 for origin, result in non_jedi_results.items():
2953 matched_text = result["matched_fragment"]
2996 matched_text = result["matched_fragment"]
2954 start_offset = before.rfind(matched_text)
2997 start_offset = before.rfind(matched_text)
2955 is_ordered = result.get("ordered", False)
2998 is_ordered = result.get("ordered", False)
2956 container = ordered if is_ordered else sortable
2999 container = ordered if is_ordered else sortable
2957
3000
2958 # I'm unsure if this is always true, so let's assert and see if it
3001 # I'm unsure if this is always true, so let's assert and see if it
2959 # crashes
3002 # crashes
2960 assert before.endswith(matched_text)
3003 assert before.endswith(matched_text)
2961
3004
2962 for simple_completion in result["completions"]:
3005 for simple_completion in result["completions"]:
2963 completion = Completion(
3006 completion = Completion(
2964 start=start_offset,
3007 start=start_offset,
2965 end=offset,
3008 end=offset,
2966 text=simple_completion.text,
3009 text=simple_completion.text,
2967 _origin=origin,
3010 _origin=origin,
2968 signature="",
3011 signature="",
2969 type=simple_completion.type or _UNKNOWN_TYPE,
3012 type=simple_completion.type or _UNKNOWN_TYPE,
2970 )
3013 )
2971 container.append(completion)
3014 container.append(completion)
2972
3015
2973 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
3016 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2974 :MATCHES_LIMIT
3017 :MATCHES_LIMIT
2975 ]
3018 ]
2976
3019
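The deadline logic above (typed Jedi completions until the timeout expires, then cheap untyped ones for the rest) follows a general pattern that can be sketched in isolation. The helper below is a hypothetical illustration, not IPython's API; `enrich` stands in for the expensive per-item work such as computing a completion's type:

```python
import time

def enrich_with_deadline(items, enrich, timeout=0.3, default="<unknown>"):
    """Yield (item, metadata) pairs. Expensive metadata is computed only
    until the deadline passes; remaining items get a cheap default."""
    deadline = time.monotonic() + timeout
    it = iter(items)
    for item in it:
        yield item, enrich(item)      # expensive call, bounded by the deadline
        if time.monotonic() > deadline:
            break
    for item in it:                   # leftovers: skip the expensive call
        yield item, default

pairs = list(enrich_with_deadline(["a", "b"], str.upper, timeout=1.0))
```

Sharing one iterator between the two loops is what makes the hand-off seamless: the second loop resumes exactly where the first stopped.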
2977 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
3020 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2978 """Find completions for the given text and line context.
3021 """Find completions for the given text and line context.
2979
3022
2980 Note that both the text and the line_buffer are optional, but at least
3023 Note that both the text and the line_buffer are optional, but at least
2981 one of them must be given.
3024 one of them must be given.
2982
3025
2983 Parameters
3026 Parameters
2984 ----------
3027 ----------
2985 text : string, optional
3028 text : string, optional
2986 Text to perform the completion on. If not given, the line buffer
3029 Text to perform the completion on. If not given, the line buffer
2987 is split using the instance's CompletionSplitter object.
3030 is split using the instance's CompletionSplitter object.
2988 line_buffer : string, optional
3031 line_buffer : string, optional
2989 If not given, the completer attempts to obtain the current line
3032 If not given, the completer attempts to obtain the current line
2990 buffer via readline. This keyword allows clients which are
3033 buffer via readline. This keyword allows clients which are
2991 requesting text completions in non-readline contexts to inform
3034 requesting text completions in non-readline contexts to inform
2992 the completer of the entire text.
3035 the completer of the entire text.
2993 cursor_pos : int, optional
3036 cursor_pos : int, optional
2994 Index of the cursor in the full line buffer. Should be provided by
3037 Index of the cursor in the full line buffer. Should be provided by
2995 remote frontends where the kernel has no access to frontend state.
3038 remote frontends where the kernel has no access to frontend state.
2996
3039
2997 Returns
3040 Returns
2998 -------
3041 -------
2999 Tuple of two items:
3042 Tuple of two items:
3000 text : str
3043 text : str
3001 Text that was actually used in the completion.
3044 Text that was actually used in the completion.
3002 matches : list
3045 matches : list
3003 A list of completion matches.
3046 A list of completion matches.
3004
3047
3005 Notes
3048 Notes
3006 -----
3049 -----
3007 This API is likely to be deprecated and replaced by
3050 This API is likely to be deprecated and replaced by
3008 :any:`IPCompleter.completions` in the future.
3051 :any:`IPCompleter.completions` in the future.
3009
3052
3010 """
3053 """
3011 warnings.warn('`Completer.complete` is pending deprecation since '
3054 warnings.warn('`Completer.complete` is pending deprecation since '
3012 'IPython 6.0 and will be replaced by `Completer.completions`.',
3055 'IPython 6.0 and will be replaced by `Completer.completions`.',
3013 PendingDeprecationWarning)
3056 PendingDeprecationWarning)
3014 # potential todo: fold the 3rd throw-away argument of _complete
3057 # potential todo: fold the 3rd throw-away argument of _complete
3015 # into the first two.
3058 # into the first two.
3016 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3059 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3017 # TODO: should we deprecate now, or does it stay?
3060 # TODO: should we deprecate now, or does it stay?
3018
3061
3019 results = self._complete(
3062 results = self._complete(
3020 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3063 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3021 )
3064 )
3022
3065
3023 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3066 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3024
3067
3025 return self._arrange_and_extract(
3068 return self._arrange_and_extract(
3026 results,
3069 results,
3027 # TODO: can we confirm that excluding Jedi here was a deliberate choice in the previous version?
3070 # TODO: can we confirm that excluding Jedi here was a deliberate choice in the previous version?
3028 skip_matchers={jedi_matcher_id},
3071 skip_matchers={jedi_matcher_id},
3029 # this API does not support different start/end positions (fragments of token).
3072 # this API does not support different start/end positions (fragments of token).
3030 abort_if_offset_changes=True,
3073 abort_if_offset_changes=True,
3031 )
3074 )
3032
3075
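Because `complete` emits a `PendingDeprecationWarning` on every call, a client that must keep using the legacy tuple-returning API may want to silence it locally. The wrapper below is a hypothetical sketch, not part of IPython; `completer` is any object exposing the `complete` signature documented above:

```python
import warnings

def legacy_complete(completer, line, pos=None):
    """Call the pending-deprecated stateful `complete` API without
    letting its deprecation warning leak to the user."""
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", PendingDeprecationWarning)
        return completer.complete(line_buffer=line, cursor_pos=pos)
```

`catch_warnings` restores the previous filter state on exit, so the suppression stays scoped to this one call.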
3033 def _arrange_and_extract(
3076 def _arrange_and_extract(
3034 self,
3077 self,
3035 results: Dict[str, MatcherResult],
3078 results: Dict[str, MatcherResult],
3036 skip_matchers: Set[str],
3079 skip_matchers: Set[str],
3037 abort_if_offset_changes: bool,
3080 abort_if_offset_changes: bool,
3038 ):
3081 ):
3039 sortable: List[AnyMatcherCompletion] = []
3082 sortable: List[AnyMatcherCompletion] = []
3040 ordered: List[AnyMatcherCompletion] = []
3083 ordered: List[AnyMatcherCompletion] = []
3041 most_recent_fragment = None
3084 most_recent_fragment = None
3042 for identifier, result in results.items():
3085 for identifier, result in results.items():
3043 if identifier in skip_matchers:
3086 if identifier in skip_matchers:
3044 continue
3087 continue
3045 if not result["completions"]:
3088 if not result["completions"]:
3046 continue
3089 continue
3047 if not most_recent_fragment:
3090 if not most_recent_fragment:
3048 most_recent_fragment = result["matched_fragment"]
3091 most_recent_fragment = result["matched_fragment"]
3049 if (
3092 if (
3050 abort_if_offset_changes
3093 abort_if_offset_changes
3051 and result["matched_fragment"] != most_recent_fragment
3094 and result["matched_fragment"] != most_recent_fragment
3052 ):
3095 ):
3053 break
3096 break
3054 if result.get("ordered", False):
3097 if result.get("ordered", False):
3055 ordered.extend(result["completions"])
3098 ordered.extend(result["completions"])
3056 else:
3099 else:
3057 sortable.extend(result["completions"])
3100 sortable.extend(result["completions"])
3058
3101
3059 if not most_recent_fragment:
3102 if not most_recent_fragment:
3060 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3103 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3061
3104
3062 return most_recent_fragment, [
3105 return most_recent_fragment, [
3063 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3106 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3064 ]
3107 ]
3065
3108
3066 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3109 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3067 full_text=None) -> _CompleteResult:
3110 full_text=None) -> _CompleteResult:
3068 """
3111 """
3069 Like complete but can also return raw jedi completions as well as the
3112 Like complete but can also return raw jedi completions as well as the
3070 origin of the completion text. This could (and should) be made much
3113 origin of the completion text. This could (and should) be made much
3071 cleaner but that will be simpler once we drop the old (and stateful)
3114 cleaner but that will be simpler once we drop the old (and stateful)
3072 :any:`complete` API.
3115 :any:`complete` API.
3073
3116
3074 With the current provisional API, cursor_pos acts (depending on the
3117 With the current provisional API, cursor_pos acts (depending on the
3075 caller) either as the offset in the ``text`` or ``line_buffer``, or as
3118 caller) either as the offset in the ``text`` or ``line_buffer``, or as
3076 the ``column`` when passing multiline strings; this could/should be
3119 the ``column`` when passing multiline strings; this could/should be
3077 renamed but would add extra noise.
3120 renamed but would add extra noise.
3078
3121
3079 Parameters
3122 Parameters
3080 ----------
3123 ----------
3081 cursor_line
3124 cursor_line
3082 Index of the line the cursor is on. 0 indexed.
3125 Index of the line the cursor is on. 0 indexed.
3083 cursor_pos
3126 cursor_pos
3084 Position of the cursor in the current line/line_buffer/text. 0
3127 Position of the cursor in the current line/line_buffer/text. 0
3085 indexed.
3128 indexed.
3086 line_buffer : optional, str
3129 line_buffer : optional, str
3087 The current line the cursor is in; this is mostly for legacy
3130 The current line the cursor is in; this is mostly for legacy
3088 reasons, as readline could only give us the single current line.
3131 reasons, as readline could only give us the single current line.
3089 Prefer `full_text`.
3132 Prefer `full_text`.
3090 text : str
3133 text : str
3091 The current "token" the cursor is in, mostly also for historical
3134 The current "token" the cursor is in, mostly also for historical
3092 reasons, as the completer would trigger only after the current line
3135 reasons, as the completer would trigger only after the current line
3093 was parsed.
3136 was parsed.
3094 full_text : str
3137 full_text : str
3095 Full text of the current cell.
3138 Full text of the current cell.
3096
3139
3097 Returns
3140 Returns
3098 -------
3141 -------
3099 An ordered dictionary where keys are identifiers of completion
3142 An ordered dictionary where keys are identifiers of completion
3100 matchers and values are ``MatcherResult``s.
3143 matchers and values are ``MatcherResult``s.
3101 """
3144 """
3102
3145
3103 # if the cursor position isn't given, the only sane assumption we can
3146 # if the cursor position isn't given, the only sane assumption we can
3104 # make is that it's at the end of the line (the common case)
3147 # make is that it's at the end of the line (the common case)
3105 if cursor_pos is None:
3148 if cursor_pos is None:
3106 cursor_pos = len(line_buffer) if text is None else len(text)
3149 cursor_pos = len(line_buffer) if text is None else len(text)
3107
3150
3108 if self.use_main_ns:
3151 if self.use_main_ns:
3109 self.namespace = __main__.__dict__
3152 self.namespace = __main__.__dict__
3110
3153
3111 # if text is either None or an empty string, rely on the line buffer
3154 # if text is either None or an empty string, rely on the line buffer
3112 if (not line_buffer) and full_text:
3155 if (not line_buffer) and full_text:
3113 line_buffer = full_text.split('\n')[cursor_line]
3156 line_buffer = full_text.split('\n')[cursor_line]
3114 if not text: # issue #11508: check line_buffer before calling split_line
3157 if not text: # issue #11508: check line_buffer before calling split_line
3115 text = (
3158 text = (
3116 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3159 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3117 )
3160 )
3118
3161
3119 # If no line buffer is given, assume the input text is all there was
3162 # If no line buffer is given, assume the input text is all there was
3120 if line_buffer is None:
3163 if line_buffer is None:
3121 line_buffer = text
3164 line_buffer = text
3122
3165
3123 # deprecated - do not use `line_buffer` in new code.
3166 # deprecated - do not use `line_buffer` in new code.
3124 self.line_buffer = line_buffer
3167 self.line_buffer = line_buffer
3125 self.text_until_cursor = self.line_buffer[:cursor_pos]
3168 self.text_until_cursor = self.line_buffer[:cursor_pos]
3126
3169
3127 if not full_text:
3170 if not full_text:
3128 full_text = line_buffer
3171 full_text = line_buffer
3129
3172
3130 context = CompletionContext(
3173 context = CompletionContext(
3131 full_text=full_text,
3174 full_text=full_text,
3132 cursor_position=cursor_pos,
3175 cursor_position=cursor_pos,
3133 cursor_line=cursor_line,
3176 cursor_line=cursor_line,
3134 token=text,
3177 token=text,
3135 limit=MATCHES_LIMIT,
3178 limit=MATCHES_LIMIT,
3136 )
3179 )
3137
3180
3138 # Start with a clean slate of completions
3181 # Start with a clean slate of completions
3139 results: Dict[str, MatcherResult] = {}
3182 results: Dict[str, MatcherResult] = {}
3140
3183
3141 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3184 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3142
3185
3143 suppressed_matchers: Set[str] = set()
3186 suppressed_matchers: Set[str] = set()
3144
3187
3145 matchers = {
3188 matchers = {
3146 _get_matcher_id(matcher): matcher
3189 _get_matcher_id(matcher): matcher
3147 for matcher in sorted(
3190 for matcher in sorted(
3148 self.matchers, key=_get_matcher_priority, reverse=True
3191 self.matchers, key=_get_matcher_priority, reverse=True
3149 )
3192 )
3150 }
3193 }
3151
3194
3152 for matcher_id, matcher in matchers.items():
3195 for matcher_id, matcher in matchers.items():
3153 matcher_id = _get_matcher_id(matcher)
3196 matcher_id = _get_matcher_id(matcher)
3154
3197
3155 if matcher_id in self.disable_matchers:
3198 if matcher_id in self.disable_matchers:
3156 continue
3199 continue
3157
3200
3158 if matcher_id in results:
3201 if matcher_id in results:
3159 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3202 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3160
3203
3161 if matcher_id in suppressed_matchers:
3204 if matcher_id in suppressed_matchers:
3162 continue
3205 continue
3163
3206
3164 result: MatcherResult
3207 result: MatcherResult
3165 try:
3208 try:
3166 if _is_matcher_v1(matcher):
3209 if _is_matcher_v1(matcher):
3167 result = _convert_matcher_v1_result_to_v2(
3210 result = _convert_matcher_v1_result_to_v2(
3168 matcher(text), type=_UNKNOWN_TYPE
3211 matcher(text), type=_UNKNOWN_TYPE
3169 )
3212 )
3170 elif _is_matcher_v2(matcher):
3213 elif _is_matcher_v2(matcher):
3171 result = matcher(context)
3214 result = matcher(context)
3172 else:
3215 else:
3173 api_version = _get_matcher_api_version(matcher)
3216 api_version = _get_matcher_api_version(matcher)
3174 raise ValueError(f"Unsupported API version {api_version}")
3217 raise ValueError(f"Unsupported API version {api_version}")
3175 except:
3218 except:
3176 # Show the ugly traceback if the matcher causes an
3219 # Show the ugly traceback if the matcher causes an
3177 # exception, but do NOT crash the kernel!
3220 # exception, but do NOT crash the kernel!
3178 sys.excepthook(*sys.exc_info())
3221 sys.excepthook(*sys.exc_info())
3179 continue
3222 continue
3180
3223
3181 # set default value for matched fragment if suffix was not selected.
3224 # set default value for matched fragment if suffix was not selected.
3182 result["matched_fragment"] = result.get("matched_fragment", context.token)
3225 result["matched_fragment"] = result.get("matched_fragment", context.token)
3183
3226
3184 if not suppressed_matchers:
3227 if not suppressed_matchers:
3185 suppression_recommended: Union[bool, Set[str]] = result.get(
3228 suppression_recommended: Union[bool, Set[str]] = result.get(
3186 "suppress", False
3229 "suppress", False
3187 )
3230 )
3188
3231
3189 suppression_config = (
3232 suppression_config = (
3190 self.suppress_competing_matchers.get(matcher_id, None)
3233 self.suppress_competing_matchers.get(matcher_id, None)
3191 if isinstance(self.suppress_competing_matchers, dict)
3234 if isinstance(self.suppress_competing_matchers, dict)
3192 else self.suppress_competing_matchers
3235 else self.suppress_competing_matchers
3193 )
3236 )
3194 should_suppress = (
3237 should_suppress = (
3195 (suppression_config is True)
3238 (suppression_config is True)
3196 or (suppression_recommended and (suppression_config is not False))
3239 or (suppression_recommended and (suppression_config is not False))
3197 ) and has_any_completions(result)
3240 ) and has_any_completions(result)
3198
3241
3199 if should_suppress:
3242 if should_suppress:
3200 suppression_exceptions: Set[str] = result.get(
3243 suppression_exceptions: Set[str] = result.get(
3201 "do_not_suppress", set()
3244 "do_not_suppress", set()
3202 )
3245 )
3203 if isinstance(suppression_recommended, Iterable):
3246 if isinstance(suppression_recommended, Iterable):
3204 to_suppress = set(suppression_recommended)
3247 to_suppress = set(suppression_recommended)
3205 else:
3248 else:
3206 to_suppress = set(matchers)
3249 to_suppress = set(matchers)
3207 suppressed_matchers = to_suppress - suppression_exceptions
3250 suppressed_matchers = to_suppress - suppression_exceptions
3208
3251
3209 new_results = {}
3252 new_results = {}
3210 for previous_matcher_id, previous_result in results.items():
3253 for previous_matcher_id, previous_result in results.items():
3211 if previous_matcher_id not in suppressed_matchers:
3254 if previous_matcher_id not in suppressed_matchers:
3212 new_results[previous_matcher_id] = previous_result
3255 new_results[previous_matcher_id] = previous_result
3213 results = new_results
3256 results = new_results
3214
3257
3215 results[matcher_id] = result
3258 results[matcher_id] = result
3216
3259
3217 _, matches = self._arrange_and_extract(
3260 _, matches = self._arrange_and_extract(
3218 results,
3261 results,
3219 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3262 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3220 # if it was an omission, we can remove the filtering step; otherwise remove this comment.
3263 # if it was an omission, we can remove the filtering step; otherwise remove this comment.
3221 skip_matchers={jedi_matcher_id},
3264 skip_matchers={jedi_matcher_id},
3222 abort_if_offset_changes=False,
3265 abort_if_offset_changes=False,
3223 )
3266 )
3224
3267
3225 # populate legacy stateful API
3268 # populate legacy stateful API
3226 self.matches = matches
3269 self.matches = matches
3227
3270
3228 return results
3271 return results
3229
3272
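The suppression decision inside the matcher loop above combines a matcher's own `suppress` recommendation with the user's `suppress_competing_matchers` override. A simplified model of that decision (the names are illustrative, not IPython's helpers, and the "has any completions" guard is omitted here):

```python
def resolve_suppression(recommended, config, all_matchers, exceptions=frozenset()):
    """Return the set of matcher ids to suppress.

    recommended -- the matcher's own 'suppress' value: bool or iterable of ids
    config      -- user override: True forces, False forbids, None defers
    """
    should = (config is True) or (bool(recommended) and config is not False)
    if not should:
        return set()
    if isinstance(recommended, bool):
        to_suppress = set(all_matchers)   # suppress everyone else
    else:
        to_suppress = set(recommended)    # suppress only the named matchers
    return to_suppress - set(exceptions)  # honor 'do_not_suppress'
```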
3230 @staticmethod
3273 @staticmethod
3231 def _deduplicate(
3274 def _deduplicate(
3232 matches: Sequence[AnyCompletion],
3275 matches: Sequence[AnyCompletion],
3233 ) -> Iterable[AnyCompletion]:
3276 ) -> Iterable[AnyCompletion]:
3234 filtered_matches: Dict[str, AnyCompletion] = {}
3277 filtered_matches: Dict[str, AnyCompletion] = {}
3235 for match in matches:
3278 for match in matches:
3236 text = match.text
3279 text = match.text
3237 if (
3280 if (
3238 text not in filtered_matches
3281 text not in filtered_matches
3239 or filtered_matches[text].type == _UNKNOWN_TYPE
3282 or filtered_matches[text].type == _UNKNOWN_TYPE
3240 ):
3283 ):
3241 filtered_matches[text] = match
3284 filtered_matches[text] = match
3242
3285
3243 return filtered_matches.values()
3286 return filtered_matches.values()
3244
3287
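`_deduplicate` above keeps the first match per text, but lets a later match with a known type replace one whose type was still unknown. The same idea on bare (text, type) pairs:

```python
UNKNOWN = "<unknown>"

def dedupe_prefer_typed(matches):
    """Deduplicate (text, type) pairs by text, upgrading unknown types."""
    out = {}
    for text, type_ in matches:
        # Accept a new entry, or overwrite one whose type was unknown.
        if text not in out or out[text] == UNKNOWN:
            out[text] = type_
    return list(out.items())
```

Dict insertion order (guaranteed since Python 3.7) preserves the relative order of first occurrences.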
3245 @staticmethod
3288 @staticmethod
3246 def _sort(matches: Sequence[AnyCompletion]):
3289 def _sort(matches: Sequence[AnyCompletion]):
3247 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3290 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3248
3291
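`completions_sorting_key` used by `_sort` above is defined elsewhere in this module. As a rough illustration of the kind of ordering such a key produces, the simplified key below (an assumption, not the real implementation) pushes private and dunder names to the end:

```python
def sorting_key(text):
    """Simplified sort key: plain names, then _private, then __dunder__."""
    if text.startswith("__"):
        rank = 2
    elif text.startswith("_"):
        rank = 1
    else:
        rank = 0
    return (rank, text)  # tuple compares rank first, then alphabetically

ordered = sorted(["__str__", "_cache", "append", "count"], key=sorting_key)
```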
3249 @context_matcher()
3292 @context_matcher()
3250 def fwd_unicode_matcher(self, context: CompletionContext):
3293 def fwd_unicode_matcher(self, context: CompletionContext):
3251 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3294 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3252 # TODO: use `context.limit` to terminate early once we matched the maximum
3295 # TODO: use `context.limit` to terminate early once we matched the maximum
3253 # number that will be used downstream; can be added as an optional to
3296 # number that will be used downstream; can be added as an optional to
3254 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3297 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3255 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3298 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3256 return _convert_matcher_v1_result_to_v2(
3299 return _convert_matcher_v1_result_to_v2(
3257 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3300 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3258 )
3301 )
3259
3302
3260 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3303 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3261 """
3304 """
3262 Forward match a string starting with a backslash with a list of
3305 Forward match a string starting with a backslash with a list of
3263 potential Unicode completions.
3306 potential Unicode completions.
3264
3307
3265 Will compute list of Unicode character names on first call and cache it.
3308 Will compute list of Unicode character names on first call and cache it.
3266
3309
3267 .. deprecated:: 8.6
3310 .. deprecated:: 8.6
3268 You can use :meth:`fwd_unicode_matcher` instead.
3311 You can use :meth:`fwd_unicode_matcher` instead.
3269
3312
3270 Returns
3313 Returns
3271 -------
3314 -------
3272 A tuple with:
3315 A tuple with:
3273 - matched text (empty if no matches)
3316 - matched text (empty if no matches)
3274 - list of potential completions (empty tuple if none)
3317 - list of potential completions (empty tuple if none)
3275 """
3318 """
3276 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3319 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3277 # We could do a faster match using a Trie.
3320 # We could do a faster match using a Trie.
3278
3321
3279 # Using pygtrie the following seems to work:
3322 # Using pygtrie the following seems to work:
3280
3323
3281 # s = PrefixSet()
3324 # s = PrefixSet()
3282
3325
3283 # for c in range(0,0x10FFFF + 1):
3326 # for c in range(0,0x10FFFF + 1):
3284 # try:
3327 # try:
3285 # s.add(unicodedata.name(chr(c)))
3328 # s.add(unicodedata.name(chr(c)))
3286 # except ValueError:
3329 # except ValueError:
3287 # pass
3330 # pass
3288 # [''.join(k) for k in s.iter(prefix)]
3331 # [''.join(k) for k in s.iter(prefix)]
3289
3332
3290 # But need to be timed and adds an extra dependency.
3333 # But need to be timed and adds an extra dependency.
3291
3334
3292 slashpos = text.rfind('\\')
3335 slashpos = text.rfind('\\')
3293 # if text contains a backslash
3336 # if text contains a backslash
3294 if slashpos > -1:
3337 if slashpos > -1:
3295 # PERF: It's important that we don't access self._unicode_names
3338 # PERF: It's important that we don't access self._unicode_names
3296 # until we're inside this if-block. _unicode_names is lazily
3339 # until we're inside this if-block. _unicode_names is lazily
3297 # initialized, and it takes a user-noticeable amount of time to
3340 # initialized, and it takes a user-noticeable amount of time to
3298 # initialize it, so we don't want to initialize it unless we're
3341 # initialize it, so we don't want to initialize it unless we're
3299 # actually going to use it.
3342 # actually going to use it.
3300 s = text[slashpos + 1 :]
3343 s = text[slashpos + 1 :]
3301 sup = s.upper()
3344 sup = s.upper()
3302 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3345 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3303 if candidates:
3346 if candidates:
3304 return s, candidates
3347 return s, candidates
3305 candidates = [x for x in self.unicode_names if sup in x]
3348 candidates = [x for x in self.unicode_names if sup in x]
3306 if candidates:
3349 if candidates:
3307 return s, candidates
3350 return s, candidates
3308 splitsup = sup.split(" ")
3351 splitsup = sup.split(" ")
3309 candidates = [
3352 candidates = [
3310 x for x in self.unicode_names if all(u in x for u in splitsup)
3353 x for x in self.unicode_names if all(u in x for u in splitsup)
3311 ]
3354 ]
3312 if candidates:
3355 if candidates:
3313 return s, candidates
3356 return s, candidates
3314
3357
3315 return "", ()
3358 return "", ()
3316
3359
3317 # if text does not contain a backslash
3360 # if text does not contain a backslash
3318 else:
3361 else:
3319 return '', ()
3362 return '', ()
3320
3363
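`fwd_unicode_match` falls back through three strategies of decreasing strictness: prefix match, substring match, then all space-separated words present. That fallback can be isolated as follows (assuming upper-cased names, as the property below provides):

```python
def tiered_match(query, names):
    """Return candidates from the first tier that matches anything:
    prefix, substring, or all-words-present."""
    q = query.upper()
    tiers = (
        lambda n: n.startswith(q),                       # strictest
        lambda n: q in n,
        lambda n: all(word in n for word in q.split(" ")),  # loosest
    )
    for predicate in tiers:
        hits = [n for n in names if predicate(n)]
        if hits:
            return hits
    return []
```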
3321 @property
3364 @property
3322 def unicode_names(self) -> List[str]:
3365 def unicode_names(self) -> List[str]:
3323 """List of names of unicode code points that can be completed.
3366 """List of names of unicode code points that can be completed.
3324
3367
3325 The list is lazily initialized on first access.
3368 The list is lazily initialized on first access.
3326 """
3369 """
3327 if self._unicode_names is None:
3370 if self._unicode_names is None:
3334 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3377 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3335
3378
3336 return self._unicode_names
3379 return self._unicode_names
3337
3380
3338 def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
3381 def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
3339 names = []
3382 names = []
3340 for start, stop in ranges:
3383 for start, stop in ranges:
3341 for c in range(start, stop):
3384 for c in range(start, stop):
3342 try:
3385 try:
3343 names.append(unicodedata.name(chr(c)))
3386 names.append(unicodedata.name(chr(c)))
3344 except ValueError:
3387 except ValueError:
3345 pass
3388 pass
3346 return names
3389 return names
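A quick sanity check of the range-based name computation above, run over a tiny range (the ASCII capital letters) instead of the full ~1.1 million code points:

```python
import unicodedata

def names_in_ranges(ranges):
    """Same shape as _unicode_name_compute: collect assigned character
    names, skipping unassigned code points (which raise ValueError)."""
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:
                pass
    return names

ascii_capitals = names_in_ranges([(0x41, 0x5B)])  # 'A'..'Z'
```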
@@ -1,1759 +1,1769 b''
1 # encoding: utf-8
1 # encoding: utf-8
2 """Tests for the IPython tab-completion machinery."""
2 """Tests for the IPython tab-completion machinery."""
3
3
4 # Copyright (c) IPython Development Team.
4 # Copyright (c) IPython Development Team.
5 # Distributed under the terms of the Modified BSD License.
5 # Distributed under the terms of the Modified BSD License.
6
6
7 import os
7 import os
8 import pytest
8 import pytest
9 import sys
9 import sys
10 import textwrap
10 import textwrap
11 import unittest
11 import unittest
12
12
13 from importlib.metadata import version
13 from importlib.metadata import version
14
14
15
15
16 from contextlib import contextmanager
16 from contextlib import contextmanager
17
17
18 from traitlets.config.loader import Config
18 from traitlets.config.loader import Config
19 from IPython import get_ipython
19 from IPython import get_ipython
20 from IPython.core import completer
20 from IPython.core import completer
21 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
21 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
22 from IPython.utils.generics import complete_object
22 from IPython.utils.generics import complete_object
23 from IPython.testing import decorators as dec
23 from IPython.testing import decorators as dec
24
24
25 from IPython.core.completer import (
25 from IPython.core.completer import (
26 Completion,
26 Completion,
27 provisionalcompleter,
27 provisionalcompleter,
28 match_dict_keys,
28 match_dict_keys,
29 _deduplicate_completions,
29 _deduplicate_completions,
30 _match_number_in_dict_key_prefix,
30 _match_number_in_dict_key_prefix,
31 completion_matcher,
31 completion_matcher,
32 SimpleCompletion,
32 SimpleCompletion,
33 CompletionContext,
33 CompletionContext,
34 )
34 )
35
35
36 from packaging.version import parse
36 from packaging.version import parse
37
37
38
38
39 # -----------------------------------------------------------------------------
39 # -----------------------------------------------------------------------------
40 # Test functions
40 # Test functions
41 # -----------------------------------------------------------------------------
41 # -----------------------------------------------------------------------------
42
42
43
43
44 def recompute_unicode_ranges():
44 def recompute_unicode_ranges():
45 """
45 """
46 Utility to recompute the largest unicode range without any named characters.
46 Utility to recompute the largest unicode range without any named characters.
47
47
48 Use it to recompute the gap in the global _UNICODE_RANGES of completer.py.
48 Use it to recompute the gap in the global _UNICODE_RANGES of completer.py.
49 """
49 """
50 import itertools
50 import itertools
51 import unicodedata
51 import unicodedata
52
52
53 valid = []
53 valid = []
54 for c in range(0, 0x10FFFF + 1):
54 for c in range(0, 0x10FFFF + 1):
55 try:
55 try:
56 unicodedata.name(chr(c))
56 unicodedata.name(chr(c))
57 except ValueError:
57 except ValueError:
58 continue
58 continue
59 valid.append(c)
59 valid.append(c)
60
60
61 def ranges(i):
61 def ranges(i):
62 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
62 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
63 b = list(b)
63 b = list(b)
64 yield b[0][1], b[-1][1]
64 yield b[0][1], b[-1][1]
65
65
66 rg = list(ranges(valid))
66 rg = list(ranges(valid))
67 lens = []
67 lens = []
68 gap_lens = []
68 gap_lens = []
69 pstart, pstop = 0, 0
69 pstart, pstop = 0, 0
70 for start, stop in rg:
70 for start, stop in rg:
71 lens.append(stop - start)
71 lens.append(stop - start)
72 gap_lens.append(
72 gap_lens.append(
73 (
73 (
74 start - pstop,
74 start - pstop,
75 hex(pstop + 1),
75 hex(pstop + 1),
76 hex(start),
76 hex(start),
77 f"{round((start - pstop)/0xe01f0*100)}%",
77 f"{round((start - pstop)/0xe01f0*100)}%",
78 )
78 )
79 )
79 )
80 pstart, pstop = start, stop
80 pstart, pstop = start, stop
81
81
82 return sorted(gap_lens)[-1]
82 return sorted(gap_lens)[-1]
83
83
84
84
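The `ranges` helper above uses a classic `itertools.groupby` idiom: within a run of consecutive integers, `value - index` is constant, so grouping a sorted sequence by that difference splits it into runs. A minimal standalone sketch (the `consecutive_ranges` name is illustrative, not part of IPython):

```python
import itertools


def consecutive_ranges(values):
    """Yield (start, stop) pairs for runs of consecutive integers."""
    # enumerate pairs each value with its index; within a consecutive run,
    # value - index stays constant, so groupby cuts the sequence at each jump.
    for _, group in itertools.groupby(
        enumerate(values), lambda pair: pair[1] - pair[0]
    ):
        group = list(group)
        yield group[0][1], group[-1][1]


print(list(consecutive_ranges([1, 2, 3, 7, 8, 10])))  # -> [(1, 3), (7, 8), (10, 10)]
```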
def test_unicode_range():
    """
    Test that the ranges we test for unicode names give the same number of
    results as testing the full length.
    """
    from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES

    expected_list = _unicode_name_compute([(0, 0x110000)])
    test = _unicode_name_compute(_UNICODE_RANGES)
    len_exp = len(expected_list)
    len_test = len(test)

    # do not inline the len() or on error pytest will try to print the 130 000 +
    # elements.
    message = None
    if len_exp != len_test or len_exp > 131808:
        size, start, stop, prct = recompute_unicode_ranges()
        message = f"""_UNICODE_RANGES likely wrong and need updating. This is
        likely due to a new release of Python. We've found that the biggest gap
        in unicode characters has reduced in size to {size} characters
        ({prct}), from {start} to {stop}. In completer.py you likely need to update to

            _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]

        And update the assertion below to use

            len_exp <= {len_exp}
        """
    assert len_exp == len_test, message

    # fail if new unicode symbols have been added.
    assert len_exp <= 143668, message

@contextmanager
def greedy_completion():
    ip = get_ipython()
    greedy_original = ip.Completer.greedy
    try:
        ip.Completer.greedy = True
        yield
    finally:
        ip.Completer.greedy = greedy_original


@contextmanager
def evaluation_policy(evaluation: str):
    ip = get_ipython()
    evaluation_original = ip.Completer.evaluation
    try:
        ip.Completer.evaluation = evaluation
        yield
    finally:
        ip.Completer.evaluation = evaluation_original


@contextmanager
def custom_matchers(matchers):
    ip = get_ipython()
    try:
        ip.Completer.custom_matchers.extend(matchers)
        yield
    finally:
        ip.Completer.custom_matchers.clear()

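The fixtures above all follow a save/set/restore shape built on `contextlib.contextmanager`. A generic standalone version of that pattern (the `temporary_attr` helper and `Cfg` class are illustrative, not part of IPython):

```python
from contextlib import contextmanager


@contextmanager
def temporary_attr(obj, name, value):
    """Temporarily set obj.<name> to value, restoring the original on exit."""
    original = getattr(obj, name)
    try:
        setattr(obj, name, value)
        yield obj
    finally:
        # runs even if the with-body raised, so the attribute is always restored
        setattr(obj, name, original)


class Cfg:
    greedy = False


c = Cfg()
with temporary_attr(c, "greedy", True):
    assert c.greedy is True
assert c.greedy is False  # restored after the block
```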
def test_protect_filename():
    if sys.platform == "win32":
        pairs = [
            ("abc", "abc"),
            (" abc", '" abc"'),
            ("a bc", '"a bc"'),
            ("a  bc", '"a  bc"'),
            ("  bc", '"  bc"'),
        ]
    else:
        pairs = [
            ("abc", "abc"),
            (" abc", r"\ abc"),
            ("a bc", r"a\ bc"),
            ("a  bc", r"a\ \ bc"),
            ("  bc", r"\ \ bc"),
            # On posix, we also protect parens and other special characters.
            ("a(bc", r"a\(bc"),
            ("a)bc", r"a\)bc"),
            ("a( )bc", r"a\(\ \)bc"),
            ("a[1]bc", r"a\[1\]bc"),
            ("a{1}bc", r"a\{1\}bc"),
            ("a#bc", r"a\#bc"),
            ("a?bc", r"a\?bc"),
            ("a=bc", r"a\=bc"),
            ("a\\bc", r"a\\bc"),
            ("a|bc", r"a\|bc"),
            ("a;bc", r"a\;bc"),
            ("a:bc", r"a\:bc"),
            ("a'bc", r"a\'bc"),
            ("a*bc", r"a\*bc"),
            ('a"bc', r"a\"bc"),
            ("a^bc", r"a\^bc"),
            ("a&bc", r"a\&bc"),
        ]
    # run the actual tests
    for s1, s2 in pairs:
        s1p = completer.protect_filename(s1)
        assert s1p == s2


def check_line_split(splitter, test_specs):
    for part1, part2, split in test_specs:
        cursor_pos = len(part1)
        line = part1 + part2
        out = splitter.split_line(line, cursor_pos)
        assert out == split

def test_line_split():
    """Basic line splitter test with default specs."""
    sp = completer.CompletionSplitter()
    # The format of the test specs is: part1, part2, expected answer. Parts 1
    # and 2 are joined into the 'line' sent to the splitter, as if the cursor
    # was at the end of part1. So an empty part2 represents someone hitting
    # tab at the end of the line, the most common case.
    t = [
        ("run some/scrip", "", "some/scrip"),
        ("run scripts/er", "ror.py foo", "scripts/er"),
        ("echo $HOM", "", "HOM"),
        ("print sys.pa", "", "sys.pa"),
        ("print(sys.pa", "", "sys.pa"),
        ("execfile('scripts/er", "", "scripts/er"),
        ("a[x.", "", "x."),
        ("a[x.", "y", "x."),
        ('cd "some_file/', "", "some_file/"),
    ]
    check_line_split(sp, t)
    # Ensure splitting works OK with unicode by re-running the tests with
    # all inputs turned into unicode
    check_line_split(sp, [map(str, p) for p in t])


class NamedInstanceClass:
    instances = {}

    def __init__(self, name):
        self.instances[name] = self

    @classmethod
    def _ipython_key_completions_(cls):
        return cls.instances.keys()


class KeyCompletable:
    def __init__(self, things=()):
        self.things = things

    def _ipython_key_completions_(self):
        return list(self.things)

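The two classes above exercise IPython's `_ipython_key_completions_` protocol: an object exposing that method can supply candidate keys for `obj["<TAB>` completion. A minimal sketch calling the hook directly, outside of IPython (the `Catalog` class is illustrative):

```python
class Catalog:
    def __init__(self, entries):
        self._entries = dict(entries)

    def __getitem__(self, key):
        return self._entries[key]

    def _ipython_key_completions_(self):
        # IPython's completer calls this hook to list candidate keys.
        return list(self._entries)


cat = Catalog({"alpha": 1, "beta": 2})
print(cat._ipython_key_completions_())  # -> ['alpha', 'beta']
```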
class TestCompleter(unittest.TestCase):
    def setUp(self):
        """
        We want to silence all PendingDeprecationWarning when testing the completer.
        """
        self._assertwarns = self.assertWarns(PendingDeprecationWarning)
        self._assertwarns.__enter__()

    def tearDown(self):
        try:
            self._assertwarns.__exit__(None, None, None)
        except AssertionError:
            pass

    def test_custom_completion_error(self):
        """Test that errors from custom attribute completers are silenced."""
        ip = get_ipython()

        class A:
            pass

        ip.user_ns["x"] = A()

        @complete_object.register(A)
        def complete_A(a, existing_completions):
            raise TypeError("this should be silenced")

        ip.complete("x.")

    def test_custom_completion_ordering(self):
        """Test that completion matches are returned in the expected order."""
        ip = get_ipython()

        _, matches = ip.complete('in')
        assert matches.index('input') < matches.index('int')

        def complete_example(a):
            return ['example2', 'example1']

        ip.Completer.custom_completers.add_re('ex*', complete_example)
        _, matches = ip.complete('ex')
        assert matches.index('example2') < matches.index('example1')

    def test_unicode_completions(self):
        ip = get_ipython()
        # Some strings that trigger different types of completion. Check them both
        # in str and unicode forms
        s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
        for t in s + list(map(str, s)):
            # We don't need to check exact completion values (they may change
            # depending on the state of the namespace), but at least no exceptions
            # should be thrown and the return value should be a pair of text, list
            # values.
            text, matches = ip.complete(t)
            self.assertIsInstance(text, str)
            self.assertIsInstance(matches, list)

    def test_latex_completions(self):
        from IPython.core.latex_symbols import latex_symbols
        import random

        ip = get_ipython()
        # Test some random unicode symbols
        keys = random.sample(sorted(latex_symbols), 10)
        for k in keys:
            text, matches = ip.complete(k)
            self.assertEqual(text, k)
            self.assertEqual(matches, [latex_symbols[k]])
        # Test a more complex line
        text, matches = ip.complete("print(\\alpha")
        self.assertEqual(text, "\\alpha")
        self.assertEqual(matches[0], latex_symbols["\\alpha"])
        # Test multiple matching latex symbols
        text, matches = ip.complete("\\al")
        self.assertIn("\\alpha", matches)
        self.assertIn("\\aleph", matches)

    def test_latex_no_results(self):
        """
        Forward latex should really return nothing in either field if nothing is found.
        """
        ip = get_ipython()
        text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
        self.assertEqual(text, "")
        self.assertEqual(matches, ())

    def test_back_latex_completion(self):
        ip = get_ipython()

        # do not return more than 1 match for \beta, only the latex one.
        name, matches = ip.complete("\\β")
        self.assertEqual(matches, ["\\beta"])

    def test_back_unicode_completion(self):
        ip = get_ipython()

        name, matches = ip.complete("\\Ⅴ")
        self.assertEqual(matches, ["\\ROMAN NUMERAL FIVE"])

    def test_forward_unicode_completion(self):
        ip = get_ipython()

        name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
        self.assertEqual(matches, ["Ⅴ"])  # This is not a V
        self.assertEqual(matches, ["\u2164"])  # same as above but explicit.

    def test_delim_setting(self):
        sp = completer.CompletionSplitter()
        sp.delims = " "
        self.assertEqual(sp.delims, " ")
        self.assertEqual(sp._delim_expr, r"[\ ]")

    def test_spaces(self):
        """Test with only spaces as split chars."""
        sp = completer.CompletionSplitter()
        sp.delims = " "
        t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
        check_line_split(sp, t)

    def test_has_open_quotes1(self):
        for s in ["'", "'''", "'hi' '"]:
            self.assertEqual(completer.has_open_quotes(s), "'")

    def test_has_open_quotes2(self):
        for s in ['"', '"""', '"hi" "']:
            self.assertEqual(completer.has_open_quotes(s), '"')

    def test_has_open_quotes3(self):
        for s in ["''", "''' '''", "'hi' 'ipython'"]:
            self.assertFalse(completer.has_open_quotes(s))

    def test_has_open_quotes4(self):
        for s in ['""', '""" """', '"hi" "ipython"']:
            self.assertFalse(completer.has_open_quotes(s))

    @pytest.mark.xfail(
        sys.platform == "win32", reason="abspath completions fail on Windows"
    )
    def test_abspath_file_completions(self):
        ip = get_ipython()
        with TemporaryDirectory() as tmpdir:
            prefix = os.path.join(tmpdir, "foo")
            suffixes = ["1", "2"]
            names = [prefix + s for s in suffixes]
            for n in names:
                open(n, "w", encoding="utf-8").close()

            # Check simple completion
            c = ip.complete(prefix)[1]
            self.assertEqual(c, names)

            # Now check with a function call
            cmd = 'a = f("%s' % prefix
            c = ip.complete(prefix, cmd)[1]
            comp = [prefix + s for s in suffixes]
            self.assertEqual(c, comp)

    def test_local_file_completions(self):
        ip = get_ipython()
        with TemporaryWorkingDirectory():
            prefix = "./foo"
            suffixes = ["1", "2"]
            names = [prefix + s for s in suffixes]
            for n in names:
                open(n, "w", encoding="utf-8").close()

            # Check simple completion
            c = ip.complete(prefix)[1]
            self.assertEqual(c, names)

            # Now check with a function call
            cmd = 'a = f("%s' % prefix
            c = ip.complete(prefix, cmd)[1]
            comp = {prefix + s for s in suffixes}
            self.assertTrue(comp.issubset(set(c)))

    def test_quoted_file_completions(self):
        ip = get_ipython()

        def _(text):
            return ip.Completer._complete(
                cursor_line=0, cursor_pos=len(text), full_text=text
            )["IPCompleter.file_matcher"]["completions"]

        with TemporaryWorkingDirectory():
            name = "foo'bar"
            open(name, "w", encoding="utf-8").close()

            # Don't escape Windows
            escaped = name if sys.platform == "win32" else "foo\\'bar"

            # Single quote matches embedded single quote
            c = _("open('foo")[0]
            self.assertEqual(c.text, escaped)

            # Double quote requires no escape
            c = _('open("foo')[0]
            self.assertEqual(c.text, name)

            # No quote requires an escape
            c = _("%ls foo")[0]
            self.assertEqual(c.text, escaped)

    @pytest.mark.xfail(
        sys.version_info.releaselevel in ("alpha",),
        reason="Parso does not yet parse 3.13",
    )
    def test_all_completions_dups(self):
        """
        Make sure the output of `IPCompleter.all_completions` does not have
        duplicated prefixes.
        """
        ip = get_ipython()
        c = ip.Completer
        ip.ex("class TestClass():\n\ta=1\n\ta1=2")
        for jedi_status in [True, False]:
            with provisionalcompleter():
                ip.Completer.use_jedi = jedi_status
                matches = c.all_completions("TestCl")
                assert matches == ["TestClass"], (jedi_status, matches)
                matches = c.all_completions("TestClass.")
                assert len(matches) > 2, (jedi_status, matches)
                matches = c.all_completions("TestClass.a")
                if jedi_status:
                    assert matches == ["TestClass.a", "TestClass.a1"], jedi_status
                else:
                    assert matches == [".a", ".a1"], jedi_status

467 @pytest.mark.xfail(
470 @pytest.mark.xfail(
468 sys.version_info.releaselevel in ("alpha",),
471 sys.version_info.releaselevel in ("alpha",),
469 reason="Parso does not yet parse 3.13",
472 reason="Parso does not yet parse 3.13",
470 )
473 )
471 def test_jedi(self):
474 def test_jedi(self):
472 """
475 """
473 A couple of issue we had with Jedi
476 A couple of issue we had with Jedi
474 """
477 """
475 ip = get_ipython()
478 ip = get_ipython()
476
479
477 def _test_complete(reason, s, comp, start=None, end=None):
480 def _test_complete(reason, s, comp, start=None, end=None):
478 l = len(s)
481 l = len(s)
479 start = start if start is not None else l
482 start = start if start is not None else l
480 end = end if end is not None else l
483 end = end if end is not None else l
481 with provisionalcompleter():
484 with provisionalcompleter():
482 ip.Completer.use_jedi = True
485 ip.Completer.use_jedi = True
483 completions = set(ip.Completer.completions(s, l))
486 completions = set(ip.Completer.completions(s, l))
484 ip.Completer.use_jedi = False
487 ip.Completer.use_jedi = False
485 assert Completion(start, end, comp) in completions, reason
488 assert Completion(start, end, comp) in completions, reason
486
489
487 def _test_not_complete(reason, s, comp):
490 def _test_not_complete(reason, s, comp):
488 l = len(s)
491 l = len(s)
489 with provisionalcompleter():
492 with provisionalcompleter():
490 ip.Completer.use_jedi = True
493 ip.Completer.use_jedi = True
491 completions = set(ip.Completer.completions(s, l))
494 completions = set(ip.Completer.completions(s, l))
492 ip.Completer.use_jedi = False
495 ip.Completer.use_jedi = False
493 assert Completion(l, l, comp) not in completions, reason
496 assert Completion(l, l, comp) not in completions, reason
494
497
495 import jedi
498 import jedi
496
499
497 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
500 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
498 if jedi_version > (0, 10):
501 if jedi_version > (0, 10):
499 _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
502 _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
500 _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
503 _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
501 _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
504 _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
502 _test_complete("cover duplicate completions", "im", "import", 0, 2)
505 _test_complete("cover duplicate completions", "im", "import", 0, 2)
503
506
504 _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")
507 _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")
505
508
506 @pytest.mark.xfail(
509 @pytest.mark.xfail(
507 sys.version_info.releaselevel in ("alpha",),
510 sys.version_info.releaselevel in ("alpha",),
508 reason="Parso does not yet parse 3.13",
511 reason="Parso does not yet parse 3.13",
509 )
512 )
510 def test_completion_have_signature(self):
513 def test_completion_have_signature(self):
511 """
514 """
512 Lets make sure jedi is capable of pulling out the signature of the function we are completing.
515 Lets make sure jedi is capable of pulling out the signature of the function we are completing.
513 """
516 """
514 ip = get_ipython()
517 ip = get_ipython()
515 with provisionalcompleter():
518 with provisionalcompleter():
516 ip.Completer.use_jedi = True
519 ip.Completer.use_jedi = True
517 completions = ip.Completer.completions("ope", 3)
520 completions = ip.Completer.completions("ope", 3)
518 c = next(completions) # should be `open`
521 c = next(completions) # should be `open`
519 ip.Completer.use_jedi = False
522 ip.Completer.use_jedi = False
520 assert "file" in c.signature, "Signature of function was not found by completer"
523 assert "file" in c.signature, "Signature of function was not found by completer"
521 assert (
524 assert (
522 "encoding" in c.signature
525 "encoding" in c.signature
523 ), "Signature of function was not found by completer"
526 ), "Signature of function was not found by completer"
524
527
525 @pytest.mark.xfail(
528 @pytest.mark.xfail(
526 sys.version_info.releaselevel in ("alpha",),
529 sys.version_info.releaselevel in ("alpha",),
527 reason="Parso does not yet parse 3.13",
530 reason="Parso does not yet parse 3.13",
528 )
531 )
529 def test_completions_have_type(self):
532 def test_completions_have_type(self):
530 """
533 """
531 Lets make sure matchers provide completion type.
534 Lets make sure matchers provide completion type.
532 """
535 """
533 ip = get_ipython()
536 ip = get_ipython()
534 with provisionalcompleter():
537 with provisionalcompleter():
535 ip.Completer.use_jedi = False
538 ip.Completer.use_jedi = False
536 completions = ip.Completer.completions("%tim", 3)
539 completions = ip.Completer.completions("%tim", 3)
537 c = next(completions) # should be `%time` or similar
540 c = next(completions) # should be `%time` or similar
538 assert c.type == "magic", "Type of magic was not assigned by completer"
541 assert c.type == "magic", "Type of magic was not assigned by completer"
539
542
540 @pytest.mark.xfail(
543 @pytest.mark.xfail(
541 parse(version("jedi")) <= parse("0.18.0"),
544 parse(version("jedi")) <= parse("0.18.0"),
542 reason="Known failure on jedi<=0.18.0",
545 reason="Known failure on jedi<=0.18.0",
543 strict=True,
546 strict=True,
544 )
547 )
545 def test_deduplicate_completions(self):
548 def test_deduplicate_completions(self):
546 """
549 """
547 Test that completions are correctly deduplicated (even if ranges are not the same)
550 Test that completions are correctly deduplicated (even if ranges are not the same)
548 """
551 """
549 ip = get_ipython()
552 ip = get_ipython()
550 ip.ex(
553 ip.ex(
551 textwrap.dedent(
554 textwrap.dedent(
552 """
555 """
553 class Z:
556 class Z:
554 zoo = 1
557 zoo = 1
555 """
558 """
556 )
559 )
557 )
560 )
558 with provisionalcompleter():
561 with provisionalcompleter():
559 ip.Completer.use_jedi = True
562 ip.Completer.use_jedi = True
560 l = list(
563 l = list(
561 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
564 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
562 )
565 )
563 ip.Completer.use_jedi = False
566 ip.Completer.use_jedi = False
564
567
565 assert len(l) == 1, "Completions (Z.z<tab>) correctly deduplicate: %s " % l
568 assert len(l) == 1, "Completions (Z.z<tab>) correctly deduplicate: %s " % l
566 assert l[0].text == "zoo" # and not `it.accumulate`
569 assert l[0].text == "zoo" # and not `it.accumulate`
567
570
568 @pytest.mark.xfail(
571 @pytest.mark.xfail(
569 sys.version_info.releaselevel in ("alpha",),
572 sys.version_info.releaselevel in ("alpha",),
570 reason="Parso does not yet parse 3.13",
573 reason="Parso does not yet parse 3.13",
571 )
574 )
572 def test_greedy_completions(self):
575 def test_greedy_completions(self):
573 """
576 """
574 Test the capability of the Greedy completer.
577 Test the capability of the Greedy completer.
575
578
576 Most of the tests here do not really show off the greedy completer; as proof,
579 Most of the tests here do not really show off the greedy completer; as proof,
577 each of the texts below now passes with Jedi. The greedy completer is capable of more.
580 each of the texts below now passes with Jedi. The greedy completer is capable of more.
578
581
579 See the :any:`test_dict_key_completion_contexts`
582 See the :any:`test_dict_key_completion_contexts`
580
583
581 """
584 """
582 ip = get_ipython()
585 ip = get_ipython()
583 ip.ex("a=list(range(5))")
586 ip.ex("a=list(range(5))")
584 ip.ex("d = {'a b': str}")
587 ip.ex("d = {'a b': str}")
585 _, c = ip.complete(".", line="a[0].")
588 _, c = ip.complete(".", line="a[0].")
586 self.assertFalse(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
589 self.assertFalse(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
587
590
588 def _(line, cursor_pos, expect, message, completion):
591 def _(line, cursor_pos, expect, message, completion):
589 with greedy_completion(), provisionalcompleter():
592 with greedy_completion(), provisionalcompleter():
590 ip.Completer.use_jedi = False
593 ip.Completer.use_jedi = False
591 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
594 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
592 self.assertIn(expect, c, message % c)
595 self.assertIn(expect, c, message % c)
593
596
594 ip.Completer.use_jedi = True
597 ip.Completer.use_jedi = True
595 with provisionalcompleter():
598 with provisionalcompleter():
596 completions = ip.Completer.completions(line, cursor_pos)
599 completions = ip.Completer.completions(line, cursor_pos)
597 self.assertIn(completion, completions)
600 self.assertIn(completion, list(completions))
598
601
599 with provisionalcompleter():
602 with provisionalcompleter():
600 _(
603 _(
601 "a[0].",
604 "a[0].",
602 5,
605 5,
603 ".real",
606 ".real",
604 "Should have completed on a[0].: %s",
607 "Should have completed on a[0].: %s",
605 Completion(5, 5, "real"),
608 Completion(5, 5, "real"),
606 )
609 )
607 _(
610 _(
608 "a[0].r",
611 "a[0].r",
609 6,
612 6,
610 ".real",
613 ".real",
611 "Should have completed on a[0].r: %s",
614 "Should have completed on a[0].r: %s",
612 Completion(5, 6, "real"),
615 Completion(5, 6, "real"),
613 )
616 )
614
617
615 _(
618 _(
616 "a[0].from_",
619 "a[0].from_",
617 10,
620 10,
618 ".from_bytes",
621 ".from_bytes",
619 "Should have completed on a[0].from_: %s",
622 "Should have completed on a[0].from_: %s",
620 Completion(5, 10, "from_bytes"),
623 Completion(5, 10, "from_bytes"),
621 )
624 )
622 _(
625 _(
623 "assert str.star",
626 "assert str.star",
624 14,
627 14,
625 "str.startswith",
628 ".startswith",
626 "Should have completed on `assert str.star`: %s",
629 "Should have completed on `assert str.star`: %s",
627 Completion(11, 14, "startswith"),
630 Completion(11, 14, "startswith"),
628 )
631 )
629 _(
632 _(
630 "d['a b'].str",
633 "d['a b'].str",
631 12,
634 12,
632 ".strip",
635 ".strip",
633 "Should have completed on `d['a b'].str`: %s",
636 "Should have completed on `d['a b'].str`: %s",
634 Completion(9, 12, "strip"),
637 Completion(9, 12, "strip"),
635 )
638 )
639 _(
640 "a.app",
641 4,
642 ".append",
643 "Should have completed on `a.app`: %s",
644 Completion(2, 4, "append"),
645 )
636
646
637 def test_omit__names(self):
647 def test_omit__names(self):
638 # also happens to test IPCompleter as a configurable
648 # also happens to test IPCompleter as a configurable
639 ip = get_ipython()
649 ip = get_ipython()
640 ip._hidden_attr = 1
650 ip._hidden_attr = 1
641 ip._x = {}
651 ip._x = {}
642 c = ip.Completer
652 c = ip.Completer
643 ip.ex("ip=get_ipython()")
653 ip.ex("ip=get_ipython()")
644 cfg = Config()
654 cfg = Config()
645 cfg.IPCompleter.omit__names = 0
655 cfg.IPCompleter.omit__names = 0
646 c.update_config(cfg)
656 c.update_config(cfg)
647 with provisionalcompleter():
657 with provisionalcompleter():
648 c.use_jedi = False
658 c.use_jedi = False
649 s, matches = c.complete("ip.")
659 s, matches = c.complete("ip.")
650 self.assertIn("ip.__str__", matches)
660 self.assertIn(".__str__", matches)
651 self.assertIn("ip._hidden_attr", matches)
661 self.assertIn("._hidden_attr", matches)
652
662
653 # c.use_jedi = True
663 # c.use_jedi = True
654 # completions = set(c.completions('ip.', 3))
664 # completions = set(c.completions('ip.', 3))
655 # self.assertIn(Completion(3, 3, '__str__'), completions)
665 # self.assertIn(Completion(3, 3, '__str__'), completions)
656 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
666 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
657
667
658 cfg = Config()
668 cfg = Config()
659 cfg.IPCompleter.omit__names = 1
669 cfg.IPCompleter.omit__names = 1
660 c.update_config(cfg)
670 c.update_config(cfg)
661 with provisionalcompleter():
671 with provisionalcompleter():
662 c.use_jedi = False
672 c.use_jedi = False
663 s, matches = c.complete("ip.")
673 s, matches = c.complete("ip.")
664 self.assertNotIn("ip.__str__", matches)
674 self.assertNotIn(".__str__", matches)
665 # self.assertIn('ip._hidden_attr', matches)
675 # self.assertIn('ip._hidden_attr', matches)
666
676
667 # c.use_jedi = True
677 # c.use_jedi = True
668 # completions = set(c.completions('ip.', 3))
678 # completions = set(c.completions('ip.', 3))
669 # self.assertNotIn(Completion(3,3,'__str__'), completions)
679 # self.assertNotIn(Completion(3,3,'__str__'), completions)
670 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
680 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
671
681
672 cfg = Config()
682 cfg = Config()
673 cfg.IPCompleter.omit__names = 2
683 cfg.IPCompleter.omit__names = 2
674 c.update_config(cfg)
684 c.update_config(cfg)
675 with provisionalcompleter():
685 with provisionalcompleter():
676 c.use_jedi = False
686 c.use_jedi = False
677 s, matches = c.complete("ip.")
687 s, matches = c.complete("ip.")
678 self.assertNotIn("ip.__str__", matches)
688 self.assertNotIn(".__str__", matches)
679 self.assertNotIn("ip._hidden_attr", matches)
689 self.assertNotIn("._hidden_attr", matches)
680
690
681 # c.use_jedi = True
691 # c.use_jedi = True
682 # completions = set(c.completions('ip.', 3))
692 # completions = set(c.completions('ip.', 3))
683 # self.assertNotIn(Completion(3,3,'__str__'), completions)
693 # self.assertNotIn(Completion(3,3,'__str__'), completions)
684 # self.assertNotIn(Completion(3,3, "_hidden_attr"), completions)
694 # self.assertNotIn(Completion(3,3, "_hidden_attr"), completions)
685
695
686 with provisionalcompleter():
696 with provisionalcompleter():
687 c.use_jedi = False
697 c.use_jedi = False
688 s, matches = c.complete("ip._x.")
698 s, matches = c.complete("ip._x.")
689 self.assertIn("ip._x.keys", matches)
699 self.assertIn(".keys", matches)
690
700
691 # c.use_jedi = True
701 # c.use_jedi = True
692 # completions = set(c.completions('ip._x.', 6))
702 # completions = set(c.completions('ip._x.', 6))
693 # self.assertIn(Completion(6,6, "keys"), completions)
703 # self.assertIn(Completion(6,6, "keys"), completions)
694
704
695 del ip._hidden_attr
705 del ip._hidden_attr
696 del ip._x
706 del ip._x
697
707
698 def test_limit_to__all__False_ok(self):
708 def test_limit_to__all__False_ok(self):
699 """
709 """
700 Limit to __all__ is deprecated; once we remove it, this test can go away.
710 Limit to __all__ is deprecated; once we remove it, this test can go away.
701 """
711 """
702 ip = get_ipython()
712 ip = get_ipython()
703 c = ip.Completer
713 c = ip.Completer
704 c.use_jedi = False
714 c.use_jedi = False
705 ip.ex("class D: x=24")
715 ip.ex("class D: x=24")
706 ip.ex("d=D()")
716 ip.ex("d=D()")
707 cfg = Config()
717 cfg = Config()
708 cfg.IPCompleter.limit_to__all__ = False
718 cfg.IPCompleter.limit_to__all__ = False
709 c.update_config(cfg)
719 c.update_config(cfg)
710 s, matches = c.complete("d.")
720 s, matches = c.complete("d.")
711 self.assertIn("d.x", matches)
721 self.assertIn(".x", matches)
712
722
713 def test_get__all__entries_ok(self):
723 def test_get__all__entries_ok(self):
714 class A:
724 class A:
715 __all__ = ["x", 1]
725 __all__ = ["x", 1]
716
726
717 words = completer.get__all__entries(A())
727 words = completer.get__all__entries(A())
718 self.assertEqual(words, ["x"])
728 self.assertEqual(words, ["x"])
719
729
720 def test_get__all__entries_no__all__ok(self):
730 def test_get__all__entries_no__all__ok(self):
721 class A:
731 class A:
722 pass
732 pass
723
733
724 words = completer.get__all__entries(A())
734 words = completer.get__all__entries(A())
725 self.assertEqual(words, [])
735 self.assertEqual(words, [])
726
736
727 def test_func_kw_completions(self):
737 def test_func_kw_completions(self):
728 ip = get_ipython()
738 ip = get_ipython()
729 c = ip.Completer
739 c = ip.Completer
730 c.use_jedi = False
740 c.use_jedi = False
731 ip.ex("def myfunc(a=1,b=2): return a+b")
741 ip.ex("def myfunc(a=1,b=2): return a+b")
732 s, matches = c.complete(None, "myfunc(1,b")
742 s, matches = c.complete(None, "myfunc(1,b")
733 self.assertIn("b=", matches)
743 self.assertIn("b=", matches)
734 # Simulate completing with cursor right after b (pos==10):
744 # Simulate completing with cursor right after b (pos==10):
735 s, matches = c.complete(None, "myfunc(1,b)", 10)
745 s, matches = c.complete(None, "myfunc(1,b)", 10)
736 self.assertIn("b=", matches)
746 self.assertIn("b=", matches)
737 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
747 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
738 self.assertIn("b=", matches)
748 self.assertIn("b=", matches)
739 # builtin function
749 # builtin function
740 s, matches = c.complete(None, "min(k, k")
750 s, matches = c.complete(None, "min(k, k")
741 self.assertIn("key=", matches)
751 self.assertIn("key=", matches)
742
752
743 def test_default_arguments_from_docstring(self):
753 def test_default_arguments_from_docstring(self):
744 ip = get_ipython()
754 ip = get_ipython()
745 c = ip.Completer
755 c = ip.Completer
746 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
756 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
747 self.assertEqual(kwd, ["key"])
757 self.assertEqual(kwd, ["key"])
748 # with cython type etc
758 # with cython type etc
749 kwd = c._default_arguments_from_docstring(
759 kwd = c._default_arguments_from_docstring(
750 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
760 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
751 )
761 )
752 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
762 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
753 # white spaces
763 # white spaces
754 kwd = c._default_arguments_from_docstring(
764 kwd = c._default_arguments_from_docstring(
755 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
765 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
756 )
766 )
757 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
767 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
758
768
759 def test_line_magics(self):
769 def test_line_magics(self):
760 ip = get_ipython()
770 ip = get_ipython()
761 c = ip.Completer
771 c = ip.Completer
762 s, matches = c.complete(None, "lsmag")
772 s, matches = c.complete(None, "lsmag")
763 self.assertIn("%lsmagic", matches)
773 self.assertIn("%lsmagic", matches)
764 s, matches = c.complete(None, "%lsmag")
774 s, matches = c.complete(None, "%lsmag")
765 self.assertIn("%lsmagic", matches)
775 self.assertIn("%lsmagic", matches)
766
776
767 def test_cell_magics(self):
777 def test_cell_magics(self):
768 from IPython.core.magic import register_cell_magic
778 from IPython.core.magic import register_cell_magic
769
779
770 @register_cell_magic
780 @register_cell_magic
771 def _foo_cellm(line, cell):
781 def _foo_cellm(line, cell):
772 pass
782 pass
773
783
774 ip = get_ipython()
784 ip = get_ipython()
775 c = ip.Completer
785 c = ip.Completer
776
786
777 s, matches = c.complete(None, "_foo_ce")
787 s, matches = c.complete(None, "_foo_ce")
778 self.assertIn("%%_foo_cellm", matches)
788 self.assertIn("%%_foo_cellm", matches)
779 s, matches = c.complete(None, "%%_foo_ce")
789 s, matches = c.complete(None, "%%_foo_ce")
780 self.assertIn("%%_foo_cellm", matches)
790 self.assertIn("%%_foo_cellm", matches)
781
791
782 def test_line_cell_magics(self):
792 def test_line_cell_magics(self):
783 from IPython.core.magic import register_line_cell_magic
793 from IPython.core.magic import register_line_cell_magic
784
794
785 @register_line_cell_magic
795 @register_line_cell_magic
786 def _bar_cellm(line, cell):
796 def _bar_cellm(line, cell):
787 pass
797 pass
788
798
789 ip = get_ipython()
799 ip = get_ipython()
790 c = ip.Completer
800 c = ip.Completer
791
801
792 # The policy here is trickier, see comments in completion code. The
802 # The policy here is trickier, see comments in completion code. The
793 # returned values depend on whether the user passes %% or not explicitly,
803 # returned values depend on whether the user passes %% or not explicitly,
794 # and this will show a difference if the same name is both a line and cell
804 # and this will show a difference if the same name is both a line and cell
795 # magic.
805 # magic.
796 s, matches = c.complete(None, "_bar_ce")
806 s, matches = c.complete(None, "_bar_ce")
797 self.assertIn("%_bar_cellm", matches)
807 self.assertIn("%_bar_cellm", matches)
798 self.assertIn("%%_bar_cellm", matches)
808 self.assertIn("%%_bar_cellm", matches)
799 s, matches = c.complete(None, "%_bar_ce")
809 s, matches = c.complete(None, "%_bar_ce")
800 self.assertIn("%_bar_cellm", matches)
810 self.assertIn("%_bar_cellm", matches)
801 self.assertIn("%%_bar_cellm", matches)
811 self.assertIn("%%_bar_cellm", matches)
802 s, matches = c.complete(None, "%%_bar_ce")
812 s, matches = c.complete(None, "%%_bar_ce")
803 self.assertNotIn("%_bar_cellm", matches)
813 self.assertNotIn("%_bar_cellm", matches)
804 self.assertIn("%%_bar_cellm", matches)
814 self.assertIn("%%_bar_cellm", matches)
805
815
806 def test_magic_completion_order(self):
816 def test_magic_completion_order(self):
807 ip = get_ipython()
817 ip = get_ipython()
808 c = ip.Completer
818 c = ip.Completer
809
819
810 # Test ordering of line and cell magics.
820 # Test ordering of line and cell magics.
811 text, matches = c.complete("timeit")
821 text, matches = c.complete("timeit")
812 self.assertEqual(matches, ["%timeit", "%%timeit"])
822 self.assertEqual(matches, ["%timeit", "%%timeit"])
813
823
814 def test_magic_completion_shadowing(self):
824 def test_magic_completion_shadowing(self):
815 ip = get_ipython()
825 ip = get_ipython()
816 c = ip.Completer
826 c = ip.Completer
817 c.use_jedi = False
827 c.use_jedi = False
818
828
819 # Before importing matplotlib, %matplotlib magic should be the only option.
829 # Before importing matplotlib, %matplotlib magic should be the only option.
820 text, matches = c.complete("mat")
830 text, matches = c.complete("mat")
821 self.assertEqual(matches, ["%matplotlib"])
831 self.assertEqual(matches, ["%matplotlib"])
822
832
823 # The newly introduced name should shadow the magic.
833 # The newly introduced name should shadow the magic.
824 ip.run_cell("matplotlib = 1")
834 ip.run_cell("matplotlib = 1")
825 text, matches = c.complete("mat")
835 text, matches = c.complete("mat")
826 self.assertEqual(matches, ["matplotlib"])
836 self.assertEqual(matches, ["matplotlib"])
827
837
828 # After removing matplotlib from namespace, the magic should again be
838 # After removing matplotlib from namespace, the magic should again be
829 # the only option.
839 # the only option.
830 del ip.user_ns["matplotlib"]
840 del ip.user_ns["matplotlib"]
831 text, matches = c.complete("mat")
841 text, matches = c.complete("mat")
832 self.assertEqual(matches, ["%matplotlib"])
842 self.assertEqual(matches, ["%matplotlib"])
833
843
834 def test_magic_completion_shadowing_explicit(self):
844 def test_magic_completion_shadowing_explicit(self):
835 """
845 """
836 If the user tries to complete a shadowed magic, an explicit % start should
846 If the user tries to complete a shadowed magic, an explicit % start should
837 still return the completions.
847 still return the completions.
838 """
848 """
839 ip = get_ipython()
849 ip = get_ipython()
840 c = ip.Completer
850 c = ip.Completer
841
851
842 # Before importing matplotlib, %matplotlib magic should be the only option.
852 # Before importing matplotlib, %matplotlib magic should be the only option.
843 text, matches = c.complete("%mat")
853 text, matches = c.complete("%mat")
844 self.assertEqual(matches, ["%matplotlib"])
854 self.assertEqual(matches, ["%matplotlib"])
845
855
846 ip.run_cell("matplotlib = 1")
856 ip.run_cell("matplotlib = 1")
847
857
848 # Even with matplotlib shadowing the name, the magic should still be
858 # Even with matplotlib shadowing the name, the magic should still be
849 # the only option when an explicit % is used.
859 # the only option when an explicit % is used.
850 text, matches = c.complete("%mat")
860 text, matches = c.complete("%mat")
851 self.assertEqual(matches, ["%matplotlib"])
861 self.assertEqual(matches, ["%matplotlib"])
852
862
853 def test_magic_config(self):
863 def test_magic_config(self):
854 ip = get_ipython()
864 ip = get_ipython()
855 c = ip.Completer
865 c = ip.Completer
856
866
857 s, matches = c.complete(None, "conf")
867 s, matches = c.complete(None, "conf")
858 self.assertIn("%config", matches)
868 self.assertIn("%config", matches)
859 s, matches = c.complete(None, "conf")
869 s, matches = c.complete(None, "conf")
860 self.assertNotIn("AliasManager", matches)
870 self.assertNotIn("AliasManager", matches)
861 s, matches = c.complete(None, "config ")
871 s, matches = c.complete(None, "config ")
862 self.assertIn("AliasManager", matches)
872 self.assertIn("AliasManager", matches)
863 s, matches = c.complete(None, "%config ")
873 s, matches = c.complete(None, "%config ")
864 self.assertIn("AliasManager", matches)
874 self.assertIn("AliasManager", matches)
865 s, matches = c.complete(None, "config Ali")
875 s, matches = c.complete(None, "config Ali")
866 self.assertListEqual(["AliasManager"], matches)
876 self.assertListEqual(["AliasManager"], matches)
867 s, matches = c.complete(None, "%config Ali")
877 s, matches = c.complete(None, "%config Ali")
868 self.assertListEqual(["AliasManager"], matches)
878 self.assertListEqual(["AliasManager"], matches)
869 s, matches = c.complete(None, "config AliasManager")
879 s, matches = c.complete(None, "config AliasManager")
870 self.assertListEqual(["AliasManager"], matches)
880 self.assertListEqual(["AliasManager"], matches)
871 s, matches = c.complete(None, "%config AliasManager")
881 s, matches = c.complete(None, "%config AliasManager")
872 self.assertListEqual(["AliasManager"], matches)
882 self.assertListEqual(["AliasManager"], matches)
873 s, matches = c.complete(None, "config AliasManager.")
883 s, matches = c.complete(None, "config AliasManager.")
874 self.assertIn("AliasManager.default_aliases", matches)
884 self.assertIn("AliasManager.default_aliases", matches)
875 s, matches = c.complete(None, "%config AliasManager.")
885 s, matches = c.complete(None, "%config AliasManager.")
876 self.assertIn("AliasManager.default_aliases", matches)
886 self.assertIn("AliasManager.default_aliases", matches)
877 s, matches = c.complete(None, "config AliasManager.de")
887 s, matches = c.complete(None, "config AliasManager.de")
878 self.assertListEqual(["AliasManager.default_aliases"], matches)
888 self.assertListEqual(["AliasManager.default_aliases"], matches)
879 s, matches = c.complete(None, "config AliasManager.de")
889 s, matches = c.complete(None, "config AliasManager.de")
880 self.assertListEqual(["AliasManager.default_aliases"], matches)
890 self.assertListEqual(["AliasManager.default_aliases"], matches)
881
891
882 def test_magic_color(self):
892 def test_magic_color(self):
883 ip = get_ipython()
893 ip = get_ipython()
884 c = ip.Completer
894 c = ip.Completer
885
895
886 s, matches = c.complete(None, "colo")
896 s, matches = c.complete(None, "colo")
887 self.assertIn("%colors", matches)
897 self.assertIn("%colors", matches)
888 s, matches = c.complete(None, "colo")
898 s, matches = c.complete(None, "colo")
889 self.assertNotIn("NoColor", matches)
899 self.assertNotIn("NoColor", matches)
890 s, matches = c.complete(None, "%colors") # No trailing space
900 s, matches = c.complete(None, "%colors") # No trailing space
891 self.assertNotIn("NoColor", matches)
901 self.assertNotIn("NoColor", matches)
892 s, matches = c.complete(None, "colors ")
902 s, matches = c.complete(None, "colors ")
893 self.assertIn("NoColor", matches)
903 self.assertIn("NoColor", matches)
894 s, matches = c.complete(None, "%colors ")
904 s, matches = c.complete(None, "%colors ")
895 self.assertIn("NoColor", matches)
905 self.assertIn("NoColor", matches)
896 s, matches = c.complete(None, "colors NoCo")
906 s, matches = c.complete(None, "colors NoCo")
897 self.assertListEqual(["NoColor"], matches)
907 self.assertListEqual(["NoColor"], matches)
898 s, matches = c.complete(None, "%colors NoCo")
908 s, matches = c.complete(None, "%colors NoCo")
899 self.assertListEqual(["NoColor"], matches)
909 self.assertListEqual(["NoColor"], matches)
900
910
901 def test_match_dict_keys(self):
911 def test_match_dict_keys(self):
902 """
912 """
903 Test that match_dict_keys works on a couple of use cases, returns what is
913 Test that match_dict_keys works on a couple of use cases, returns what is
904 expected, and does not crash
914 expected, and does not crash
905 """
915 """
906 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
916 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
907
917
908 def match(*args, **kwargs):
918 def match(*args, **kwargs):
909 quote, offset, matches = match_dict_keys(*args, delims=delims, **kwargs)
919 quote, offset, matches = match_dict_keys(*args, delims=delims, **kwargs)
910 return quote, offset, list(matches)
920 return quote, offset, list(matches)
911
921
912 keys = ["foo", b"far"]
922 keys = ["foo", b"far"]
913 assert match(keys, "b'") == ("'", 2, ["far"])
923 assert match(keys, "b'") == ("'", 2, ["far"])
914 assert match(keys, "b'f") == ("'", 2, ["far"])
924 assert match(keys, "b'f") == ("'", 2, ["far"])
915 assert match(keys, 'b"') == ('"', 2, ["far"])
925 assert match(keys, 'b"') == ('"', 2, ["far"])
916 assert match(keys, 'b"f') == ('"', 2, ["far"])
926 assert match(keys, 'b"f') == ('"', 2, ["far"])
917
927
918 assert match(keys, "'") == ("'", 1, ["foo"])
928 assert match(keys, "'") == ("'", 1, ["foo"])
919 assert match(keys, "'f") == ("'", 1, ["foo"])
929 assert match(keys, "'f") == ("'", 1, ["foo"])
920 assert match(keys, '"') == ('"', 1, ["foo"])
930 assert match(keys, '"') == ('"', 1, ["foo"])
921 assert match(keys, '"f') == ('"', 1, ["foo"])
931 assert match(keys, '"f') == ('"', 1, ["foo"])
922
932
923 # Completion on first item of tuple
933 # Completion on first item of tuple
924 keys = [("foo", 1111), ("foo", 2222), (3333, "bar"), (3333, "test")]
934 keys = [("foo", 1111), ("foo", 2222), (3333, "bar"), (3333, "test")]
925 assert match(keys, "'f") == ("'", 1, ["foo"])
935 assert match(keys, "'f") == ("'", 1, ["foo"])
926 assert match(keys, "33") == ("", 0, ["3333"])
936 assert match(keys, "33") == ("", 0, ["3333"])
927
937
928 # Completion on numbers
938 # Completion on numbers
929 keys = [
939 keys = [
930 0xDEADBEEF,
940 0xDEADBEEF,
931 1111,
941 1111,
932 1234,
942 1234,
933 "1999",
943 "1999",
934 0b10101,
944 0b10101,
935 22,
945 22,
936 ] # 0xDEADBEEF = 3735928559; 0b10101 = 21
946 ] # 0xDEADBEEF = 3735928559; 0b10101 = 21
937 assert match(keys, "0xdead") == ("", 0, ["0xdeadbeef"])
947 assert match(keys, "0xdead") == ("", 0, ["0xdeadbeef"])
938 assert match(keys, "1") == ("", 0, ["1111", "1234"])
948 assert match(keys, "1") == ("", 0, ["1111", "1234"])
939 assert match(keys, "2") == ("", 0, ["21", "22"])
949 assert match(keys, "2") == ("", 0, ["21", "22"])
940 assert match(keys, "0b101") == ("", 0, ["0b10101", "0b10110"])
950 assert match(keys, "0b101") == ("", 0, ["0b10101", "0b10110"])
941
951
942 # Should yield on variables
952 # Should yield on variables
943 assert match(keys, "a_variable") == ("", 0, [])
953 assert match(keys, "a_variable") == ("", 0, [])
944
954
945 # Should pass over invalid literals
955 # Should pass over invalid literals
946 assert match(keys, "'' ''") == ("", 0, [])
956 assert match(keys, "'' ''") == ("", 0, [])
947
957
948 def test_match_dict_keys_tuple(self):
958 def test_match_dict_keys_tuple(self):
949 """
959 """
950 Test that match_dict_keys called with extra prefix works on a couple of use cases,
960 Test that match_dict_keys called with extra prefix works on a couple of use cases,
951 returns what is expected, and does not crash.
961 returns what is expected, and does not crash.
952 """
962 """
953 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
963 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
954
964
955 keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ('other', 'test')]
965 keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ('other', 'test')]
956
966
957 def match(*args, extra=None, **kwargs):
967 def match(*args, extra=None, **kwargs):
958 quote, offset, matches = match_dict_keys(
968 quote, offset, matches = match_dict_keys(
959 *args, delims=delims, extra_prefix=extra, **kwargs
969 *args, delims=delims, extra_prefix=extra, **kwargs
960 )
970 )
961 return quote, offset, list(matches)
971 return quote, offset, list(matches)
962
972
963 # Completion on first key == "foo"
973 # Completion on first key == "foo"
964 assert match(keys, "'", extra=("foo",)) == ("'", 1, ["bar", "oof"])
974 assert match(keys, "'", extra=("foo",)) == ("'", 1, ["bar", "oof"])
965 assert match(keys, '"', extra=("foo",)) == ('"', 1, ["bar", "oof"])
975 assert match(keys, '"', extra=("foo",)) == ('"', 1, ["bar", "oof"])
966 assert match(keys, "'o", extra=("foo",)) == ("'", 1, ["oof"])
976 assert match(keys, "'o", extra=("foo",)) == ("'", 1, ["oof"])
967 assert match(keys, '"o', extra=("foo",)) == ('"', 1, ["oof"])
977 assert match(keys, '"o', extra=("foo",)) == ('"', 1, ["oof"])
968 assert match(keys, "b'", extra=("foo",)) == ("'", 2, ["bar"])
978 assert match(keys, "b'", extra=("foo",)) == ("'", 2, ["bar"])
969 assert match(keys, 'b"', extra=("foo",)) == ('"', 2, ["bar"])
979 assert match(keys, 'b"', extra=("foo",)) == ('"', 2, ["bar"])
970 assert match(keys, "b'b", extra=("foo",)) == ("'", 2, ["bar"])
980 assert match(keys, "b'b", extra=("foo",)) == ("'", 2, ["bar"])
971 assert match(keys, 'b"b', extra=("foo",)) == ('"', 2, ["bar"])
981 assert match(keys, 'b"b', extra=("foo",)) == ('"', 2, ["bar"])
972
982
973 # No Completion
983 # No Completion
974 assert match(keys, "'", extra=("no_foo",)) == ("'", 1, [])
984 assert match(keys, "'", extra=("no_foo",)) == ("'", 1, [])
975 assert match(keys, "'", extra=("fo",)) == ("'", 1, [])
985 assert match(keys, "'", extra=("fo",)) == ("'", 1, [])
976
986
977 keys = [("foo1", "foo2", "foo3", "foo4"), ("foo1", "foo2", "bar", "foo4")]
987 keys = [("foo1", "foo2", "foo3", "foo4"), ("foo1", "foo2", "bar", "foo4")]
978 assert match(keys, "'foo", extra=("foo1",)) == ("'", 1, ["foo2"])
988 assert match(keys, "'foo", extra=("foo1",)) == ("'", 1, ["foo2"])
979 assert match(keys, "'foo", extra=("foo1", "foo2")) == ("'", 1, ["foo3"])
989 assert match(keys, "'foo", extra=("foo1", "foo2")) == ("'", 1, ["foo3"])
980 assert match(keys, "'foo", extra=("foo1", "foo2", "foo3")) == ("'", 1, ["foo4"])
990 assert match(keys, "'foo", extra=("foo1", "foo2", "foo3")) == ("'", 1, ["foo4"])
981 assert match(keys, "'foo", extra=("foo1", "foo2", "foo3", "foo4")) == (
991 assert match(keys, "'foo", extra=("foo1", "foo2", "foo3", "foo4")) == (
982 "'",
992 "'",
983 1,
993 1,
984 [],
994 [],
985 )
995 )
986
996
987 keys = [("foo", 1111), ("foo", "2222"), (3333, "bar"), (3333, 4444)]
997 keys = [("foo", 1111), ("foo", "2222"), (3333, "bar"), (3333, 4444)]
988 assert match(keys, "'", extra=("foo",)) == ("'", 1, ["2222"])
998 assert match(keys, "'", extra=("foo",)) == ("'", 1, ["2222"])
989 assert match(keys, "", extra=("foo",)) == ("", 0, ["1111", "'2222'"])
999 assert match(keys, "", extra=("foo",)) == ("", 0, ["1111", "'2222'"])
990 assert match(keys, "'", extra=(3333,)) == ("'", 1, ["bar"])
1000 assert match(keys, "'", extra=(3333,)) == ("'", 1, ["bar"])
991 assert match(keys, "", extra=(3333,)) == ("", 0, ["'bar'", "4444"])
1001 assert match(keys, "", extra=(3333,)) == ("", 0, ["'bar'", "4444"])
992 assert match(keys, "'", extra=("3333",)) == ("'", 1, [])
1002 assert match(keys, "'", extra=("3333",)) == ("'", 1, [])
993 assert match(keys, "33") == ("", 0, ["3333"])
1003 assert match(keys, "33") == ("", 0, ["3333"])
994
1004
995 def test_dict_key_completion_closures(self):
1005 def test_dict_key_completion_closures(self):
996 ip = get_ipython()
1006 ip = get_ipython()
997 complete = ip.Completer.complete
1007 complete = ip.Completer.complete
998 ip.Completer.auto_close_dict_keys = True
1008 ip.Completer.auto_close_dict_keys = True
999
1009
1000 ip.user_ns["d"] = {
1010 ip.user_ns["d"] = {
1001 # tuple only
1011 # tuple only
1002 ("aa", 11): None,
1012 ("aa", 11): None,
1003 # tuple and non-tuple
1013 # tuple and non-tuple
1004 ("bb", 22): None,
1014 ("bb", 22): None,
1005 "bb": None,
1015 "bb": None,
1006 # non-tuple only
1016 # non-tuple only
1007 "cc": None,
1017 "cc": None,
1008 # numeric tuple only
1018 # numeric tuple only
1009 (77, "x"): None,
1019 (77, "x"): None,
1010 # numeric tuple and non-tuple
1020 # numeric tuple and non-tuple
1011 (88, "y"): None,
1021 (88, "y"): None,
1012 88: None,
1022 88: None,
1013 # numeric non-tuple only
1023 # numeric non-tuple only
1014 99: None,
1024 99: None,
1015 }
1025 }
1016
1026
1017 _, matches = complete(line_buffer="d[")
1027 _, matches = complete(line_buffer="d[")
1018 # should append `, ` if matches a tuple only
1028 # should append `, ` if matches a tuple only
1019 self.assertIn("'aa', ", matches)
1029 self.assertIn("'aa', ", matches)
1020 # should not append anything if matches a tuple and an item
1030 # should not append anything if matches a tuple and an item
1021 self.assertIn("'bb'", matches)
1031 self.assertIn("'bb'", matches)
1022 # should append `]` if matches an item only
1032 # should append `]` if matches an item only
1023 self.assertIn("'cc']", matches)
1033 self.assertIn("'cc']", matches)
1024
1034
1025 # should append `, ` if matches a tuple only
1035 # should append `, ` if matches a tuple only
1026 self.assertIn("77, ", matches)
1036 self.assertIn("77, ", matches)
1027 # should not append anything if matches a tuple and an item
1037 # should not append anything if matches a tuple and an item
1028 self.assertIn("88", matches)
1038 self.assertIn("88", matches)
1029 # should append `]` if matches an item only
1039 # should append `]` if matches an item only
1030 self.assertIn("99]", matches)
1040 self.assertIn("99]", matches)
1031
1041
1032 _, matches = complete(line_buffer="d['aa', ")
1042 _, matches = complete(line_buffer="d['aa', ")
1033 # should restrict matches to those matching tuple prefix
1043 # should restrict matches to those matching tuple prefix
1034 self.assertIn("11]", matches)
1044 self.assertIn("11]", matches)
1035 self.assertNotIn("'bb'", matches)
1045 self.assertNotIn("'bb'", matches)
1036 self.assertNotIn("'bb', ", matches)
1046 self.assertNotIn("'bb', ", matches)
1037 self.assertNotIn("'bb']", matches)
1047 self.assertNotIn("'bb']", matches)
1038 self.assertNotIn("'cc'", matches)
1048 self.assertNotIn("'cc'", matches)
1039 self.assertNotIn("'cc', ", matches)
1049 self.assertNotIn("'cc', ", matches)
1040 self.assertNotIn("'cc']", matches)
1050 self.assertNotIn("'cc']", matches)
1041 ip.Completer.auto_close_dict_keys = False
1051 ip.Completer.auto_close_dict_keys = False
1042
1052
    def test_dict_key_completion_string(self):
        """Test dictionary key completion for string keys"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {"abc": None}

        # check completion at different stages
        _, matches = complete(line_buffer="d[")
        self.assertIn("'abc'", matches)
        self.assertNotIn("'abc']", matches)

        _, matches = complete(line_buffer="d['")
        self.assertIn("abc", matches)
        self.assertNotIn("abc']", matches)

        _, matches = complete(line_buffer="d['a")
        self.assertIn("abc", matches)
        self.assertNotIn("abc']", matches)

        # check use of different quoting
        _, matches = complete(line_buffer='d["')
        self.assertIn("abc", matches)
        self.assertNotIn('abc"]', matches)

        _, matches = complete(line_buffer='d["a')
        self.assertIn("abc", matches)
        self.assertNotIn('abc"]', matches)

        # check sensitivity to following context
        _, matches = complete(line_buffer="d[]", cursor_pos=2)
        self.assertIn("'abc'", matches)

        _, matches = complete(line_buffer="d['']", cursor_pos=3)
        self.assertIn("abc", matches)
        self.assertNotIn("abc'", matches)
        self.assertNotIn("abc']", matches)

        # check that multiple solutions are correctly returned and that noise is not
        ip.user_ns["d"] = {
            "abc": None,
            "abd": None,
            "bad": None,
            object(): None,
            5: None,
            ("abe", None): None,
            (None, "abf"): None,
        }

        _, matches = complete(line_buffer="d['a")
        self.assertIn("abc", matches)
        self.assertIn("abd", matches)
        self.assertNotIn("bad", matches)
        self.assertNotIn("abe", matches)
        self.assertNotIn("abf", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        # check escaping and whitespace
        ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
        _, matches = complete(line_buffer="d['a")
        self.assertIn("a\\nb", matches)
        self.assertIn("a\\'b", matches)
        self.assertIn('a"b', matches)
        self.assertIn("a word", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        # - can complete on non-initial word of the string
        _, matches = complete(line_buffer="d['a w")
        self.assertIn("word", matches)

        # - understands quote escaping
        _, matches = complete(line_buffer="d['a\\'")
        self.assertIn("b", matches)

        # - default quoting should work like repr
        _, matches = complete(line_buffer="d[")
        self.assertIn('"a\'b"', matches)

        # - when opening quote with ", possible to match with unescaped apostrophe
        _, matches = complete(line_buffer="d[\"a'")
        self.assertIn("b", matches)

        # need to not split at delims that readline won't split at
        if "-" not in ip.Completer.splitter.delims:
            ip.user_ns["d"] = {"before-after": None}
            _, matches = complete(line_buffer="d['before-af")
            self.assertIn("before-after", matches)

        # check completion on tuple-of-string keys at different stages - on first key
        ip.user_ns["d"] = {("foo", "bar"): None}
        _, matches = complete(line_buffer="d[")
        self.assertIn("'foo'", matches)
        self.assertNotIn("'foo']", matches)
        self.assertNotIn("'bar'", matches)
        self.assertNotIn("foo", matches)
        self.assertNotIn("bar", matches)

        # - match the prefix
        _, matches = complete(line_buffer="d['f")
        self.assertIn("foo", matches)
        self.assertNotIn("foo']", matches)
        self.assertNotIn('foo"]', matches)
        _, matches = complete(line_buffer="d['foo")
        self.assertIn("foo", matches)

        # - can complete on second key
        _, matches = complete(line_buffer="d['foo', ")
        self.assertIn("'bar'", matches)
        _, matches = complete(line_buffer="d['foo', 'b")
        self.assertIn("bar", matches)
        self.assertNotIn("foo", matches)

        # - does not propose missing keys
        _, matches = complete(line_buffer="d['foo', 'f")
        self.assertNotIn("bar", matches)
        self.assertNotIn("foo", matches)

        # check sensitivity to following context
        _, matches = complete(line_buffer="d['foo',]", cursor_pos=8)
        self.assertIn("'bar'", matches)
        self.assertNotIn("bar", matches)
        self.assertNotIn("'foo'", matches)
        self.assertNotIn("foo", matches)

        _, matches = complete(line_buffer="d['']", cursor_pos=3)
        self.assertIn("foo", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        _, matches = complete(line_buffer='d[""]', cursor_pos=3)
        self.assertIn("foo", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        _, matches = complete(line_buffer='d["foo","]', cursor_pos=9)
        self.assertIn("bar", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        _, matches = complete(line_buffer='d["foo",]', cursor_pos=8)
        self.assertIn("'bar'", matches)
        self.assertNotIn("bar", matches)

        # can complete with longer tuple keys
        ip.user_ns["d"] = {("foo", "bar", "foobar"): None}

        # - can complete second key
        _, matches = complete(line_buffer="d['foo', 'b")
        self.assertIn("bar", matches)
        self.assertNotIn("foo", matches)
        self.assertNotIn("foobar", matches)

        # - can complete third key
        _, matches = complete(line_buffer="d['foo', 'bar', 'fo")
        self.assertIn("foobar", matches)
        self.assertNotIn("foo", matches)
        self.assertNotIn("bar", matches)

    def test_dict_key_completion_numbers(self):
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {
            0xDEADBEEF: None,  # 3735928559
            1111: None,
            1234: None,
            "1999": None,
            0b10101: None,  # 21
            22: None,
        }
        _, matches = complete(line_buffer="d[1")
        self.assertIn("1111", matches)
        self.assertIn("1234", matches)
        self.assertNotIn("1999", matches)
        self.assertNotIn("'1999'", matches)

        _, matches = complete(line_buffer="d[0xdead")
        self.assertIn("0xdeadbeef", matches)

        _, matches = complete(line_buffer="d[2")
        self.assertIn("21", matches)
        self.assertIn("22", matches)

        _, matches = complete(line_buffer="d[0b101")
        self.assertIn("0b10101", matches)
        self.assertIn("0b10110", matches)

    def test_dict_key_completion_contexts(self):
        """Test expression contexts in which dict key completion occurs"""
        ip = get_ipython()
        complete = ip.Completer.complete
        d = {"abc": None}
        ip.user_ns["d"] = d

        class C:
            data = d

        ip.user_ns["C"] = C
        ip.user_ns["get"] = lambda: d
        ip.user_ns["nested"] = {"x": d}

        def assert_no_completion(**kwargs):
            _, matches = complete(**kwargs)
            self.assertNotIn("abc", matches)
            self.assertNotIn("abc'", matches)
            self.assertNotIn("abc']", matches)
            self.assertNotIn("'abc'", matches)
            self.assertNotIn("'abc']", matches)

        def assert_completion(**kwargs):
            _, matches = complete(**kwargs)
            self.assertIn("'abc'", matches)
            self.assertNotIn("'abc']", matches)

        # no completion after string closed, even if reopened
        assert_no_completion(line_buffer="d['a'")
        assert_no_completion(line_buffer='d["a"')
        assert_no_completion(line_buffer="d['a' + ")
        assert_no_completion(line_buffer="d['a' + '")

        # completion in non-trivial expressions
        assert_completion(line_buffer="+ d[")
        assert_completion(line_buffer="(d[")
        assert_completion(line_buffer="C.data[")

        # nested dict completion
        assert_completion(line_buffer="nested['x'][")

        with evaluation_policy("minimal"):
            with pytest.raises(AssertionError):
                assert_completion(line_buffer="nested['x'][")

        # greedy flag
        def assert_completion(**kwargs):
            _, matches = complete(**kwargs)
            self.assertIn("get()['abc']", matches)

        assert_no_completion(line_buffer="get()[")
        with greedy_completion():
            assert_completion(line_buffer="get()[")
            assert_completion(line_buffer="get()['")
            assert_completion(line_buffer="get()['a")
            assert_completion(line_buffer="get()['ab")
            assert_completion(line_buffer="get()['abc")

    def test_dict_key_completion_bytes(self):
        """Test handling of bytes in dict key completion"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {"abc": None, b"abd": None}

        _, matches = complete(line_buffer="d[")
        self.assertIn("'abc'", matches)
        self.assertIn("b'abd'", matches)

        if False:  # not currently implemented
            _, matches = complete(line_buffer="d[b")
            self.assertIn("b'abd'", matches)
            self.assertNotIn("b'abc'", matches)

            _, matches = complete(line_buffer="d[b'")
            self.assertIn("abd", matches)
            self.assertNotIn("abc", matches)

            _, matches = complete(line_buffer="d[B'")
            self.assertIn("abd", matches)
            self.assertNotIn("abc", matches)

            _, matches = complete(line_buffer="d['")
            self.assertIn("abc", matches)
            self.assertNotIn("abd", matches)

    def test_dict_key_completion_unicode_py3(self):
        """Test handling of unicode in dict key completion"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {"a\u05d0": None}

        # query using escape
        if sys.platform != "win32":
            # Known failure on Windows
            _, matches = complete(line_buffer="d['a\\u05d0")
            self.assertIn("u05d0", matches)  # tokenized after \\

        # query using character
        _, matches = complete(line_buffer="d['a\u05d0")
        self.assertIn("a\u05d0", matches)

        with greedy_completion():
            # query using escape
            _, matches = complete(line_buffer="d['a\\u05d0")
            self.assertIn("d['a\\u05d0']", matches)  # tokenized after \\

            # query using character
            _, matches = complete(line_buffer="d['a\u05d0")
            self.assertIn("d['a\u05d0']", matches)

    @dec.skip_without("numpy")
    def test_struct_array_key_completion(self):
        """Test dict key completion applies to numpy struct arrays"""
        import numpy

        ip = get_ipython()
        complete = ip.Completer.complete
        ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
        _, matches = complete(line_buffer="d['")
        self.assertIn("hello", matches)
        self.assertIn("world", matches)
        # complete on the numpy struct itself
        dt = numpy.dtype(
            [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
        )
        x = numpy.zeros(2, dtype=dt)
        ip.user_ns["d"] = x[1]
        _, matches = complete(line_buffer="d['")
        self.assertIn("my_head", matches)
        self.assertIn("my_data", matches)

        def completes_on_nested():
            ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
            _, matches = complete(line_buffer="d[1]['my_head']['")
            self.assertTrue(any("my_dt" in m for m in matches))
            self.assertTrue(any("my_df" in m for m in matches))

        # complete on a nested level
        with greedy_completion():
            completes_on_nested()

        with evaluation_policy("limited"):
            completes_on_nested()

        with evaluation_policy("minimal"):
            with pytest.raises(AssertionError):
                completes_on_nested()

    @dec.skip_without("pandas")
    def test_dataframe_key_completion(self):
        """Test dict key completion applies to pandas DataFrames"""
        import pandas

        ip = get_ipython()
        complete = ip.Completer.complete
        ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
        _, matches = complete(line_buffer="d['")
        self.assertIn("hello", matches)
        self.assertIn("world", matches)
        _, matches = complete(line_buffer="d.loc[:, '")
        self.assertIn("hello", matches)
        self.assertIn("world", matches)
        _, matches = complete(line_buffer="d.loc[1:, '")
        self.assertIn("hello", matches)
        _, matches = complete(line_buffer="d.loc[1:1, '")
        self.assertIn("hello", matches)
        _, matches = complete(line_buffer="d.loc[1:1:-1, '")
        self.assertIn("hello", matches)
        _, matches = complete(line_buffer="d.loc[::, '")
        self.assertIn("hello", matches)

    def test_dict_key_completion_invalids(self):
        """Smoke test for cases that dict key completion can't handle"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["no_getitem"] = None
        ip.user_ns["no_keys"] = []
        ip.user_ns["cant_call_keys"] = dict
        ip.user_ns["empty"] = {}
        ip.user_ns["d"] = {"abc": 5}

        _, matches = complete(line_buffer="no_getitem['")
        _, matches = complete(line_buffer="no_keys['")
        _, matches = complete(line_buffer="cant_call_keys['")
        _, matches = complete(line_buffer="empty['")
        _, matches = complete(line_buffer="name_error['")
        _, matches = complete(line_buffer="d['\\")  # incomplete escape

    def test_object_key_completion(self):
        ip = get_ipython()
        ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])

        _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
        self.assertIn("qwerty", matches)
        self.assertIn("qwick", matches)

    def test_class_key_completion(self):
        ip = get_ipython()
        NamedInstanceClass("qwerty")
        NamedInstanceClass("qwick")
        ip.user_ns["named_instance_class"] = NamedInstanceClass

        _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
        self.assertIn("qwerty", matches)
        self.assertIn("qwick", matches)

    def test_tryimport(self):
        """
        Test that try_import doesn't crash on a trailing dot, and imports modules beforehand
        """
        from IPython.core.completerlib import try_import

        assert try_import("IPython.")

    def test_aimport_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("i", "%aimport i")
        self.assertIn("io", matches)
        self.assertNotIn("int", matches)

    def test_nested_import_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete(None, "import IPython.co", 17)
        self.assertIn("IPython.core", matches)
        self.assertNotIn("import IPython.core", matches)
        self.assertNotIn("IPython.display", matches)

    def test_import_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("i", "import i")
        self.assertIn("io", matches)
        self.assertNotIn("int", matches)

    def test_from_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("B", "from io import B", 16)
        self.assertIn("BytesIO", matches)
        self.assertNotIn("BaseException", matches)

    def test_snake_case_completion(self):
        ip = get_ipython()
        ip.Completer.use_jedi = False
        ip.user_ns["some_three"] = 3
        ip.user_ns["some_four"] = 4
        _, matches = ip.complete("s_", "print(s_f")
        self.assertIn("some_three", matches)
        self.assertIn("some_four", matches)

    def test_mix_terms(self):
        ip = get_ipython()
        from textwrap import dedent

        ip.Completer.use_jedi = False
        ip.ex(
            dedent(
                """
                class Test:
                    def meth(self, meth_arg1):
                        print("meth")

                    def meth_1(self, meth1_arg1, meth1_arg2):
                        print("meth1")

                    def meth_2(self, meth2_arg1, meth2_arg2):
                        print("meth2")
                test = Test()
                """
            )
        )
        _, matches = ip.complete(None, "test.meth(")
        self.assertIn("meth_arg1=", matches)
        self.assertNotIn("meth2_arg1=", matches)

    def test_percent_symbol_restrict_to_magic_completions(self):
        ip = get_ipython()
        completer = ip.Completer
        text = "%a"

        with provisionalcompleter():
            completer.use_jedi = True
            completions = completer.completions(text, len(text))
            for c in completions:
                self.assertEqual(c.text[0], "%")

    def test_fwd_unicode_restricts(self):
        ip = get_ipython()
        completer = ip.Completer
        text = "\\ROMAN NUMERAL FIVE"

        with provisionalcompleter():
            completer.use_jedi = True
            completions = [
                completion.text for completion in completer.completions(text, len(text))
            ]
        self.assertEqual(completions, ["\u2164"])

    def test_dict_key_restrict_to_dicts(self):
        """Test that dict key suppresses non-dict completion items"""
        ip = get_ipython()
        c = ip.Completer
        d = {"abc": None}
        ip.user_ns["d"] = d

        text = 'd["a'

        def _():
            with provisionalcompleter():
                c.use_jedi = True
                return [
                    completion.text for completion in c.completions(text, len(text))
                ]

        completions = _()
        self.assertEqual(completions, ["abc"])

1544 # check that it can be disabled in granular manner:
1554 # check that it can be disabled in granular manner:
1545 cfg = Config()
1555 cfg = Config()
1546 cfg.IPCompleter.suppress_competing_matchers = {
1556 cfg.IPCompleter.suppress_competing_matchers = {
1547 "IPCompleter.dict_key_matcher": False
1557 "IPCompleter.dict_key_matcher": False
1548 }
1558 }
1549 c.update_config(cfg)
1559 c.update_config(cfg)
1550
1560
1551 completions = _()
1561 completions = _()
1552 self.assertIn("abc", completions)
1562 self.assertIn("abc", completions)
1553 self.assertGreater(len(completions), 1)
1563 self.assertGreater(len(completions), 1)
1554
1564
1555 def test_matcher_suppression(self):
1565 def test_matcher_suppression(self):
1556 @completion_matcher(identifier="a_matcher")
1566 @completion_matcher(identifier="a_matcher")
1557 def a_matcher(text):
1567 def a_matcher(text):
1558 return ["completion_a"]
1568 return ["completion_a"]
1559
1569
1560 @completion_matcher(identifier="b_matcher", api_version=2)
1570 @completion_matcher(identifier="b_matcher", api_version=2)
1561 def b_matcher(context: CompletionContext):
1571 def b_matcher(context: CompletionContext):
1562 text = context.token
1572 text = context.token
1563 result = {"completions": [SimpleCompletion("completion_b")]}
1573 result = {"completions": [SimpleCompletion("completion_b")]}
1564
1574
1565 if text == "suppress c":
1575 if text == "suppress c":
1566 result["suppress"] = {"c_matcher"}
1576 result["suppress"] = {"c_matcher"}
1567
1577
1568 if text.startswith("suppress all"):
1578 if text.startswith("suppress all"):
1569 result["suppress"] = True
1579 result["suppress"] = True
1570 if text == "suppress all but c":
1580 if text == "suppress all but c":
1571 result["do_not_suppress"] = {"c_matcher"}
1581 result["do_not_suppress"] = {"c_matcher"}
1572 if text == "suppress all but a":
1582 if text == "suppress all but a":
1573 result["do_not_suppress"] = {"a_matcher"}
1583 result["do_not_suppress"] = {"a_matcher"}
1574
1584
1575 return result
1585 return result
1576
1586
1577 @completion_matcher(identifier="c_matcher")
1587 @completion_matcher(identifier="c_matcher")
1578 def c_matcher(text):
1588 def c_matcher(text):
1579 return ["completion_c"]
1589 return ["completion_c"]
1580
1590
1581 with custom_matchers([a_matcher, b_matcher, c_matcher]):
1591 with custom_matchers([a_matcher, b_matcher, c_matcher]):
1582 ip = get_ipython()
1592 ip = get_ipython()
1583 c = ip.Completer
1593 c = ip.Completer
1584
1594
1585 def _(text, expected):
1595 def _(text, expected):
1586 c.use_jedi = False
1596 c.use_jedi = False
1587 s, matches = c.complete(text)
1597 s, matches = c.complete(text)
1588 self.assertEqual(expected, matches)
1598 self.assertEqual(expected, matches)
1589
1599
1590 _("do not suppress", ["completion_a", "completion_b", "completion_c"])
1600 _("do not suppress", ["completion_a", "completion_b", "completion_c"])
1591 _("suppress all", ["completion_b"])
1601 _("suppress all", ["completion_b"])
1592 _("suppress all but a", ["completion_a", "completion_b"])
1602 _("suppress all but a", ["completion_a", "completion_b"])
1593 _("suppress all but c", ["completion_b", "completion_c"])
1603 _("suppress all but c", ["completion_b", "completion_c"])
1594
1604
1595 def configure(suppression_config):
1605 def configure(suppression_config):
1596 cfg = Config()
1606 cfg = Config()
1597 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1607 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1598 c.update_config(cfg)
1608 c.update_config(cfg)
1599
1609
1600 # test that configuration takes priority over the run-time decisions
1610 # test that configuration takes priority over the run-time decisions
1601
1611
1602 configure(False)
1612 configure(False)
1603 _("suppress all", ["completion_a", "completion_b", "completion_c"])
1613 _("suppress all", ["completion_a", "completion_b", "completion_c"])
1604
1614
1605 configure({"b_matcher": False})
1615 configure({"b_matcher": False})
1606 _("suppress all", ["completion_a", "completion_b", "completion_c"])
1616 _("suppress all", ["completion_a", "completion_b", "completion_c"])
1607
1617
1608 configure({"a_matcher": False})
1618 configure({"a_matcher": False})
1609 _("suppress all", ["completion_b"])
1619 _("suppress all", ["completion_b"])
1610
1620
1611 configure({"b_matcher": True})
1621 configure({"b_matcher": True})
1612 _("do not suppress", ["completion_b"])
1622 _("do not suppress", ["completion_b"])
1613
1623
1614 configure(True)
1624 configure(True)
1615 _("do not suppress", ["completion_a"])
1625 _("do not suppress", ["completion_a"])
1616
1626
1617 def test_matcher_suppression_with_iterator(self):
1627 def test_matcher_suppression_with_iterator(self):
1618 @completion_matcher(identifier="matcher_returning_iterator")
1628 @completion_matcher(identifier="matcher_returning_iterator")
1619 def matcher_returning_iterator(text):
1629 def matcher_returning_iterator(text):
1620 return iter(["completion_iter"])
1630 return iter(["completion_iter"])
1621
1631
1622 @completion_matcher(identifier="matcher_returning_list")
1632 @completion_matcher(identifier="matcher_returning_list")
1623 def matcher_returning_list(text):
1633 def matcher_returning_list(text):
1624 return ["completion_list"]
1634 return ["completion_list"]
1625
1635
1626 with custom_matchers([matcher_returning_iterator, matcher_returning_list]):
1636 with custom_matchers([matcher_returning_iterator, matcher_returning_list]):
1627 ip = get_ipython()
1637 ip = get_ipython()
1628 c = ip.Completer
1638 c = ip.Completer
1629
1639
1630 def _(text, expected):
1640 def _(text, expected):
1631 c.use_jedi = False
1641 c.use_jedi = False
1632 s, matches = c.complete(text)
1642 s, matches = c.complete(text)
1633 self.assertEqual(expected, matches)
1643 self.assertEqual(expected, matches)
1634
1644
1635 def configure(suppression_config):
1645 def configure(suppression_config):
1636 cfg = Config()
1646 cfg = Config()
1637 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1647 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1638 c.update_config(cfg)
1648 c.update_config(cfg)
1639
1649
1640 configure(False)
1650 configure(False)
1641 _("---", ["completion_iter", "completion_list"])
1651 _("---", ["completion_iter", "completion_list"])
1642
1652
1643 configure(True)
1653 configure(True)
1644 _("---", ["completion_iter"])
1654 _("---", ["completion_iter"])
1645
1655
1646 configure(None)
1656 configure(None)
1647 _("--", ["completion_iter", "completion_list"])
1657 _("--", ["completion_iter", "completion_list"])
1648
1658
1649 @pytest.mark.xfail(
1659 @pytest.mark.xfail(
1650 sys.version_info.releaselevel in ("alpha",),
1660 sys.version_info.releaselevel in ("alpha",),
1651 reason="Parso does not yet parse 3.13",
1661 reason="Parso does not yet parse 3.13",
1652 )
1662 )
1653 def test_matcher_suppression_with_jedi(self):
1663 def test_matcher_suppression_with_jedi(self):
1654 ip = get_ipython()
1664 ip = get_ipython()
1655 c = ip.Completer
1665 c = ip.Completer
1656 c.use_jedi = True
1666 c.use_jedi = True
1657
1667
1658 def configure(suppression_config):
1668 def configure(suppression_config):
1659 cfg = Config()
1669 cfg = Config()
1660 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1670 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1661 c.update_config(cfg)
1671 c.update_config(cfg)
1662
1672
1663 def _():
1673 def _():
1664 with provisionalcompleter():
1674 with provisionalcompleter():
1665 matches = [completion.text for completion in c.completions("dict.", 5)]
1675 matches = [completion.text for completion in c.completions("dict.", 5)]
1666 self.assertIn("keys", matches)
1676 self.assertIn("keys", matches)
1667
1677
1668 configure(False)
1678 configure(False)
1669 _()
1679 _()
1670
1680
1671 configure(True)
1681 configure(True)
1672 _()
1682 _()
1673
1683
1674 configure(None)
1684 configure(None)
1675 _()
1685 _()
1676
1686
1677 def test_matcher_disabling(self):
1687 def test_matcher_disabling(self):
1678 @completion_matcher(identifier="a_matcher")
1688 @completion_matcher(identifier="a_matcher")
1679 def a_matcher(text):
1689 def a_matcher(text):
1680 return ["completion_a"]
1690 return ["completion_a"]
1681
1691
1682 @completion_matcher(identifier="b_matcher")
1692 @completion_matcher(identifier="b_matcher")
1683 def b_matcher(text):
1693 def b_matcher(text):
1684 return ["completion_b"]
1694 return ["completion_b"]
1685
1695
1686 def _(expected):
1696 def _(expected):
1687 s, matches = c.complete("completion_")
1697 s, matches = c.complete("completion_")
1688 self.assertEqual(expected, matches)
1698 self.assertEqual(expected, matches)
1689
1699
1690 with custom_matchers([a_matcher, b_matcher]):
1700 with custom_matchers([a_matcher, b_matcher]):
1691 ip = get_ipython()
1701 ip = get_ipython()
1692 c = ip.Completer
1702 c = ip.Completer
1693
1703
1694 _(["completion_a", "completion_b"])
1704 _(["completion_a", "completion_b"])
1695
1705
1696 cfg = Config()
1706 cfg = Config()
1697 cfg.IPCompleter.disable_matchers = ["b_matcher"]
1707 cfg.IPCompleter.disable_matchers = ["b_matcher"]
1698 c.update_config(cfg)
1708 c.update_config(cfg)
1699
1709
1700 _(["completion_a"])
1710 _(["completion_a"])
1701
1711
1702 cfg.IPCompleter.disable_matchers = []
1712 cfg.IPCompleter.disable_matchers = []
1703 c.update_config(cfg)
1713 c.update_config(cfg)
1704
1714
1705 def test_matcher_priority(self):
1715 def test_matcher_priority(self):
1706 @completion_matcher(identifier="a_matcher", priority=0, api_version=2)
1716 @completion_matcher(identifier="a_matcher", priority=0, api_version=2)
1707 def a_matcher(text):
1717 def a_matcher(text):
1708 return {"completions": [SimpleCompletion("completion_a")], "suppress": True}
1718 return {"completions": [SimpleCompletion("completion_a")], "suppress": True}
1709
1719
1710 @completion_matcher(identifier="b_matcher", priority=2, api_version=2)
1720 @completion_matcher(identifier="b_matcher", priority=2, api_version=2)
1711 def b_matcher(text):
1721 def b_matcher(text):
1712 return {"completions": [SimpleCompletion("completion_b")], "suppress": True}
1722 return {"completions": [SimpleCompletion("completion_b")], "suppress": True}
1713
1723
1714 def _(expected):
1724 def _(expected):
1715 s, matches = c.complete("completion_")
1725 s, matches = c.complete("completion_")
1716 self.assertEqual(expected, matches)
1726 self.assertEqual(expected, matches)
1717
1727
1718 with custom_matchers([a_matcher, b_matcher]):
1728 with custom_matchers([a_matcher, b_matcher]):
1719 ip = get_ipython()
1729 ip = get_ipython()
1720 c = ip.Completer
1730 c = ip.Completer
1721
1731
1722 _(["completion_b"])
1732 _(["completion_b"])
1723 a_matcher.matcher_priority = 3
1733 a_matcher.matcher_priority = 3
1724 _(["completion_a"])
1734 _(["completion_a"])
1725
1735
1726
1736
1727 @pytest.mark.parametrize(
1737 @pytest.mark.parametrize(
1728 "input, expected",
1738 "input, expected",
1729 [
1739 [
1730 ["1.234", "1.234"],
1740 ["1.234", "1.234"],
1731 # should match signed numbers
1741 # should match signed numbers
1732 ["+1", "+1"],
1742 ["+1", "+1"],
1733 ["-1", "-1"],
1743 ["-1", "-1"],
1734 ["-1.0", "-1.0"],
1744 ["-1.0", "-1.0"],
1735 ["-1.", "-1."],
1745 ["-1.", "-1."],
1736 ["+1.", "+1."],
1746 ["+1.", "+1."],
1737 [".1", ".1"],
1747 [".1", ".1"],
1738 # should not match non-numbers
1748 # should not match non-numbers
1739 ["1..", None],
1749 ["1..", None],
1740 ["..", None],
1750 ["..", None],
1741 [".1.", None],
1751 [".1.", None],
1742 # should match after comma
1752 # should match after comma
1743 [",1", "1"],
1753 [",1", "1"],
1744 [", 1", "1"],
1754 [", 1", "1"],
1745 [", .1", ".1"],
1755 [", .1", ".1"],
1746 [", +.1", "+.1"],
1756 [", +.1", "+.1"],
1747 # should not match after trailing spaces
1757 # should not match after trailing spaces
1748 [".1 ", None],
1758 [".1 ", None],
1749 # some complex cases
1759 # some complex cases
1750 ["0b_0011_1111_0100_1110", "0b_0011_1111_0100_1110"],
1760 ["0b_0011_1111_0100_1110", "0b_0011_1111_0100_1110"],
1751 ["0xdeadbeef", "0xdeadbeef"],
1761 ["0xdeadbeef", "0xdeadbeef"],
1752 ["0b_1110_0101", "0b_1110_0101"],
1762 ["0b_1110_0101", "0b_1110_0101"],
1753 # should not match if in an operation
1763 # should not match if in an operation
1754 ["1 + 1", None],
1764 ["1 + 1", None],
1755 [", 1 + 1", None],
1765 [", 1 + 1", None],
1756 ],
1766 ],
1757 )
1767 )
1758 def test_match_numeric_literal_for_dict_key(input, expected):
1768 def test_match_numeric_literal_for_dict_key(input, expected):
1759 assert _match_number_in_dict_key_prefix(input) == expected
1769 assert _match_number_in_dict_key_prefix(input) == expected
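

# The matcher-suppression tests above exercise the "suppress" /
# "do_not_suppress" keys of v2 matcher results. The helper below is a
# hypothetical, simplified sketch of those semantics -- it is NOT IPython's
# actual resolution code (the real logic also handles priorities and
# configuration overrides); it only illustrates how the keys combine.
def _resolve_suppression_sketch(results):
    """results: mapping of matcher identifier -> result dict, where each dict
    has a "completions" list and optionally "suppress" (True or a set of
    identifiers) and "do_not_suppress" (a set of exempt identifiers)."""
    suppressed = set()
    for ident, res in results.items():
        sup = res.get("suppress")
        if sup is True:
            # suppress every other matcher, except the exempt ones and itself
            exempt = set(res.get("do_not_suppress", ())) | {ident}
            suppressed |= set(results) - exempt
        elif isinstance(sup, (set, frozenset)):
            # suppress only the explicitly named matchers
            suppressed |= sup
    out = []
    for ident, res in results.items():
        if ident not in suppressed:
            out.extend(res["completions"])
    return out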