type hints
M Bussonnier
@@ -1,3420 +1,3421
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now supports a wide variety of completion mechanisms, both
7 This module now supports a wide variety of completion mechanisms, both
8 for normal classic Python code and for completing IPython-specific
8 for normal classic Python code and for completing IPython-specific
9 syntax such as magics.
9 syntax such as magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends can not only complete your code, but can also
14 IPython and compatible frontends can not only complete your code, but can also
15 help you to input a wide range of characters. In particular, we allow you to
15 help you to input a wide range of characters. In particular, we allow you to
16 insert a unicode character using the tab completion mechanism.
16 insert a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name, or its unicode long description. To do so, type a backslash followed by the
22 name, or its unicode long description. To do so, type a backslash followed by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 α
31 α
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\GREEK SMALL LETTER ALPHA<tab>
38 \\GREEK SMALL LETTER ALPHA<tab>
39 α
39 α
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrows or
42 Only valid Python identifiers will complete. Combining characters (like arrows or
43 dots) are also available; unlike in latex, they need to be put after their
43 dots) are also available; unlike in latex, they need to be put after their
44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometimes challenging to know how to type a character. If you are using
51 It is sometimes challenging to know how to type a character. If you are using
52 IPython or any compatible frontend, you can prepend a backslash to the character
52 IPython or any compatible frontend, you can prepend a backslash to the character
53 and press :kbd:`Tab` to expand it to its latex form.
53 and press :kbd:`Tab` to expand it to its latex form.
54
54
55 .. code::
55 .. code::
56
56
57 \\α<tab>
57 \\α<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 :std:configtrait:`Completer.backslash_combining_completions` option to
62 :std:configtrait:`Completer.backslash_combining_completions` option to
63 ``False``.
63 ``False``.
64
64
65
65
66 Experimental
66 Experimental
67 ============
67 ============
68
68
69 Starting with IPython 6.0, this module can make use of the Jedi library to
69 Starting with IPython 6.0, this module can make use of the Jedi library to
70 generate completions both using static analysis of the code, and dynamically
70 generate completions both using static analysis of the code, and dynamically
71 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
71 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
72 library for Python. The APIs attached to this new mechanism are unstable and will
72 library for Python. The APIs attached to this new mechanism are unstable and will
73 raise unless used in a :any:`provisionalcompleter` context manager.
73 raise unless used in a :any:`provisionalcompleter` context manager.
74
74
75 You will find that the following are experimental:
75 You will find that the following are experimental:
76
76
77 - :any:`provisionalcompleter`
77 - :any:`provisionalcompleter`
78 - :any:`IPCompleter.completions`
78 - :any:`IPCompleter.completions`
79 - :any:`Completion`
79 - :any:`Completion`
80 - :any:`rectify_completions`
80 - :any:`rectify_completions`
81
81
82 .. note::
82 .. note::
83
83
84 better name for :any:`rectify_completions` ?
84 better name for :any:`rectify_completions` ?
85
85
86 We welcome any feedback on these new APIs, and we also encourage you to try this
86 We welcome any feedback on these new APIs, and we also encourage you to try this
87 module in debug mode (start IPython with ``--Completer.debug=True``) in order
87 module in debug mode (start IPython with ``--Completer.debug=True``) in order
88 to have extra logging information if :any:`jedi` is crashing, or if the current
88 to have extra logging information if :any:`jedi` is crashing, or if the current
89 IPython completer's pending deprecations are returning results not yet handled
89 IPython completer's pending deprecations are returning results not yet handled
90 by :any:`jedi`.
90 by :any:`jedi`.
91
91
92 Using Jedi for tab completion allows snippets like the following to work without
92 Using Jedi for tab completion allows snippets like the following to work without
93 having to execute any code:
93 having to execute any code:
94
94
95 >>> myvar = ['hello', 42]
95 >>> myvar = ['hello', 42]
96 ... myvar[1].bi<tab>
96 ... myvar[1].bi<tab>
97
97
98 Tab completion will be able to infer that ``myvar[1]`` is an integer without
98 Tab completion will be able to infer that ``myvar[1]`` is an integer without
99 executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
99 executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
100 option.
100 option.
101
101
102 Be sure to update :any:`jedi` to the latest stable version or to try the
102 Be sure to update :any:`jedi` to the latest stable version or to try the
103 current development version to get better completions.
103 current development version to get better completions.
104
104
105 Matchers
105 Matchers
106 ========
106 ========
107
107
108 All completions routines are implemented using unified *Matchers* API.
108 All completions routines are implemented using unified *Matchers* API.
109 The matchers API is provisional and subject to change without notice.
109 The matchers API is provisional and subject to change without notice.
110
110
111 The built-in matchers include:
111 The built-in matchers include:
112
112
113 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
113 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
114 - :any:`IPCompleter.magic_matcher`: completions for magics,
114 - :any:`IPCompleter.magic_matcher`: completions for magics,
115 - :any:`IPCompleter.unicode_name_matcher`,
115 - :any:`IPCompleter.unicode_name_matcher`,
116 :any:`IPCompleter.fwd_unicode_matcher`
116 :any:`IPCompleter.fwd_unicode_matcher`
117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
118 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
118 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
119 - :any:`IPCompleter.file_matcher`: paths to files and directories,
119 - :any:`IPCompleter.file_matcher`: paths to files and directories,
120 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
120 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
121 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
121 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
122 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
122 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
123 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
123 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
124 implementation in :any:`InteractiveShell` which uses IPython hooks system
124 implementation in :any:`InteractiveShell` which uses IPython hooks system
125 (`complete_command`) with string dispatch (including regular expressions).
125 (`complete_command`) with string dispatch (including regular expressions).
126 Unlike other matchers, ``custom_completer_matcher`` will not suppress
126 Unlike other matchers, ``custom_completer_matcher`` will not suppress
127 Jedi results, in order to match the behaviour of earlier IPython versions.
127 Jedi results, in order to match the behaviour of earlier IPython versions.
128
128
129 Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list.
129 Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list.
130
130
131 Matcher API
131 Matcher API
132 -----------
132 -----------
133
133
134 Simplifying some details, the ``Matcher`` interface can be described as
134 Simplifying some details, the ``Matcher`` interface can be described as
135
135
136 .. code-block::
136 .. code-block::
137
137
138 MatcherAPIv1 = Callable[[str], list[str]]
138 MatcherAPIv1 = Callable[[str], list[str]]
139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
140
140
141 Matcher = MatcherAPIv1 | MatcherAPIv2
141 Matcher = MatcherAPIv1 | MatcherAPIv2
142
142
143 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
143 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
144 and remains supported as the simplest way of generating completions. This is also
144 and remains supported as the simplest way of generating completions. This is also
145 currently the only API supported by the IPython hooks system `complete_command`.
145 currently the only API supported by the IPython hooks system `complete_command`.
146
146
147 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
147 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
148 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
148 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
149 and requires a literal ``2`` for v2 Matchers.
149 and requires a literal ``2`` for v2 Matchers.
150
150
151 Once the API stabilises, future versions may relax the requirement for specifying
151 Once the API stabilises, future versions may relax the requirement for specifying
152 ``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore,
152 ``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore,
153 please do not rely on the presence of ``matcher_api_version`` for any purpose.
153 please do not rely on the presence of ``matcher_api_version`` for any purpose.
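
For illustration only, here is a minimal sketch of a v2 matcher written against
this provisional API (``echo_matcher`` is a hypothetical example, not part of
IPython; it simply echoes the current token back as a single completion):

.. code-block:: python

    from IPython.core.completer import (
        CompletionContext,
        SimpleCompletion,
        SimpleMatcherResult,
        context_matcher,
    )

    @context_matcher()
    def echo_matcher(context: CompletionContext) -> SimpleMatcherResult:
        """Hypothetical matcher: complete the current token with itself."""
        return {
            "completions": [SimpleCompletion(context.token, type="text")],
            "suppress": False,
        }

Such a matcher can then be registered by appending it to ``IPCompleter.custom_matchers``.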
154
154
155 Suppression of competing matchers
155 Suppression of competing matchers
156 ---------------------------------
156 ---------------------------------
157
157
158 By default results from all matchers are combined, in the order determined by
158 By default results from all matchers are combined, in the order determined by
159 their priority. Matchers can request to suppress results from subsequent
159 their priority. Matchers can request to suppress results from subsequent
160 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
160 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
161
161
162 When multiple matchers simultaneously request suppression, the results from
162 When multiple matchers simultaneously request suppression, the results from
163 the matcher with the highest priority will be returned.
163 the matcher with the highest priority will be returned.
164
164
165 Sometimes it is desirable to suppress most but not all other matchers;
165 Sometimes it is desirable to suppress most but not all other matchers;
166 this can be achieved by adding a set of identifiers of matchers which
166 this can be achieved by adding a set of identifiers of matchers which
167 should not be suppressed to the ``MatcherResult`` under the ``do_not_suppress`` key.
167 should not be suppressed to the ``MatcherResult`` under the ``do_not_suppress`` key.
168
168
169 The suppression behaviour is user-configurable via
169 The suppression behaviour is user-configurable via
170 :std:configtrait:`IPCompleter.suppress_competing_matchers`.
170 :std:configtrait:`IPCompleter.suppress_competing_matchers`.
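
As a sketch (the matcher producing this result is omitted, and the Jedi matcher
identifier ``IPCompleter.jedi_matcher`` is assumed), a result that suppresses
every other matcher except Jedi could look like:

.. code-block:: python

    result: SimpleMatcherResult = {
        "completions": [SimpleCompletion("%%timeit", type="magic")],
        # ask to suppress the results of all other matchers...
        "suppress": True,
        # ...except for the Jedi matcher
        "do_not_suppress": {"IPCompleter.jedi_matcher"},
    }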
171 """
171 """
172
172
173
173
174 # Copyright (c) IPython Development Team.
174 # Copyright (c) IPython Development Team.
175 # Distributed under the terms of the Modified BSD License.
175 # Distributed under the terms of the Modified BSD License.
176 #
176 #
177 # Some of this code originated from rlcompleter in the Python standard library
177 # Some of this code originated from rlcompleter in the Python standard library
178 # Copyright (C) 2001 Python Software Foundation, www.python.org
178 # Copyright (C) 2001 Python Software Foundation, www.python.org
179
179
180 from __future__ import annotations
180 from __future__ import annotations
181 import builtins as builtin_mod
181 import builtins as builtin_mod
182 import enum
182 import enum
183 import glob
183 import glob
184 import inspect
184 import inspect
185 import itertools
185 import itertools
186 import keyword
186 import keyword
187 import ast
187 import ast
188 import os
188 import os
189 import re
189 import re
190 import string
190 import string
191 import sys
191 import sys
192 import tokenize
192 import tokenize
193 import time
193 import time
194 import unicodedata
194 import unicodedata
195 import uuid
195 import uuid
196 import warnings
196 import warnings
197 from ast import literal_eval
197 from ast import literal_eval
198 from collections import defaultdict
198 from collections import defaultdict
199 from contextlib import contextmanager
199 from contextlib import contextmanager
200 from dataclasses import dataclass
200 from dataclasses import dataclass
201 from functools import cached_property, partial
201 from functools import cached_property, partial
202 from types import SimpleNamespace
202 from types import SimpleNamespace
203 from typing import (
203 from typing import (
204 Iterable,
204 Iterable,
205 Iterator,
205 Iterator,
206 List,
206 List,
207 Tuple,
207 Tuple,
208 Union,
208 Union,
209 Any,
209 Any,
210 Sequence,
210 Sequence,
211 Dict,
211 Dict,
212 Optional,
212 Optional,
213 TYPE_CHECKING,
213 TYPE_CHECKING,
214 Set,
214 Set,
215 Sized,
215 Sized,
216 TypeVar,
216 TypeVar,
217 Literal,
217 Literal,
218 )
218 )
219
219
220 from IPython.core.guarded_eval import guarded_eval, EvaluationContext
220 from IPython.core.guarded_eval import guarded_eval, EvaluationContext
221 from IPython.core.error import TryNext
221 from IPython.core.error import TryNext
222 from IPython.core.inputtransformer2 import ESC_MAGIC
222 from IPython.core.inputtransformer2 import ESC_MAGIC
223 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
223 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
224 from IPython.core.oinspect import InspectColors
224 from IPython.core.oinspect import InspectColors
225 from IPython.testing.skipdoctest import skip_doctest
225 from IPython.testing.skipdoctest import skip_doctest
226 from IPython.utils import generics
226 from IPython.utils import generics
227 from IPython.utils.decorators import sphinx_options
227 from IPython.utils.decorators import sphinx_options
228 from IPython.utils.dir2 import dir2, get_real_method
228 from IPython.utils.dir2 import dir2, get_real_method
229 from IPython.utils.docs import GENERATING_DOCUMENTATION
229 from IPython.utils.docs import GENERATING_DOCUMENTATION
230 from IPython.utils.path import ensure_dir_exists
230 from IPython.utils.path import ensure_dir_exists
231 from IPython.utils.process import arg_split
231 from IPython.utils.process import arg_split
232 from traitlets import (
232 from traitlets import (
233 Bool,
233 Bool,
234 Enum,
234 Enum,
235 Int,
235 Int,
236 List as ListTrait,
236 List as ListTrait,
237 Unicode,
237 Unicode,
238 Dict as DictTrait,
238 Dict as DictTrait,
239 Union as UnionTrait,
239 Union as UnionTrait,
240 observe,
240 observe,
241 )
241 )
242 from traitlets.config.configurable import Configurable
242 from traitlets.config.configurable import Configurable
243
243
244 import __main__
244 import __main__
245
245
246 # skip module doctests
246 # skip module doctests
247 __skip_doctest__ = True
247 __skip_doctest__ = True
248
248
249
249
250 try:
250 try:
251 import jedi
251 import jedi
252 jedi.settings.case_insensitive_completion = False
252 jedi.settings.case_insensitive_completion = False
253 import jedi.api.helpers
253 import jedi.api.helpers
254 import jedi.api.classes
254 import jedi.api.classes
255 JEDI_INSTALLED = True
255 JEDI_INSTALLED = True
256 except ImportError:
256 except ImportError:
257 JEDI_INSTALLED = False
257 JEDI_INSTALLED = False
258
258
259
259
260 if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
260 if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
261 from typing import cast
261 from typing import cast
262 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
262 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
263 else:
263 else:
264 from typing import Generic
264 from typing import Generic
265
265
266 def cast(type_, obj):
266 def cast(type_, obj):
267 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
267 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
268 return obj
268 return obj
269
269
270 # do not require at runtime
270 # do not require at runtime
271 NotRequired = Tuple # requires Python >=3.11
271 NotRequired = Tuple # requires Python >=3.11
272 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
272 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
273 Protocol = object # requires Python >=3.8
273 Protocol = object # requires Python >=3.8
274 TypeAlias = Any # requires Python >=3.10
274 TypeAlias = Any # requires Python >=3.10
275 TypeGuard = Generic # requires Python >=3.10
275 TypeGuard = Generic # requires Python >=3.10
276 if GENERATING_DOCUMENTATION:
276 if GENERATING_DOCUMENTATION:
277 from typing import TypedDict
277 from typing import TypedDict
278
278
279 # -----------------------------------------------------------------------------
279 # -----------------------------------------------------------------------------
280 # Globals
280 # Globals
281 #-----------------------------------------------------------------------------
281 #-----------------------------------------------------------------------------
282
282
283 # Ranges where we have most of the valid unicode names. We could be finer
283 # Ranges where we have most of the valid unicode names. We could be finer
284 # grained, but is it worth it for performance? While unicode has characters in
284 # grained, but is it worth it for performance? While unicode has characters in
285 # the range 0-0x110000, we seem to have names for only about 10% of those
285 # the range 0-0x110000, we seem to have names for only about 10% of those
286 # (131808 as I write this). With the ranges below we cover them all, with a
286 # (131808 as I write this). With the ranges below we cover them all, with a
287 # density of ~67%; the biggest next gap we could consider only adds about 1%
287 # density of ~67%; the biggest next gap we could consider only adds about 1%
288 # density and there are 600 gaps that would need hard coding.
288 # density and there are 600 gaps that would need hard coding.
289 _UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]
289 _UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]
290
290
291 # Public API
291 # Public API
292 __all__ = ["Completer", "IPCompleter"]
292 __all__ = ["Completer", "IPCompleter"]
293
293
294 if sys.platform == 'win32':
294 if sys.platform == 'win32':
295 PROTECTABLES = ' '
295 PROTECTABLES = ' '
296 else:
296 else:
297 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
297 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
298
298
299 # Protect against returning an enormous number of completions which the frontend
299 # Protect against returning an enormous number of completions which the frontend
300 # may have trouble processing.
300 # may have trouble processing.
301 MATCHES_LIMIT = 500
301 MATCHES_LIMIT = 500
302
302
303 # Completion type reported when no type can be inferred.
303 # Completion type reported when no type can be inferred.
304 _UNKNOWN_TYPE = "<unknown>"
304 _UNKNOWN_TYPE = "<unknown>"
305
305
306 # sentinel value to signal lack of a match
306 # sentinel value to signal lack of a match
307 not_found = object()
307 not_found = object()
308
308
309 class ProvisionalCompleterWarning(FutureWarning):
309 class ProvisionalCompleterWarning(FutureWarning):
310 """
310 """
311 Exception raised by an experimental feature in this module.
311 Exception raised by an experimental feature in this module.
312
312
313 Wrap code in :any:`provisionalcompleter` context manager if you
313 Wrap code in :any:`provisionalcompleter` context manager if you
314 are certain you want to use an unstable feature.
314 are certain you want to use an unstable feature.
315 """
315 """
316 pass
316 pass
317
317
318 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
318 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
319
319
320
320
321 @skip_doctest
321 @skip_doctest
322 @contextmanager
322 @contextmanager
323 def provisionalcompleter(action='ignore'):
323 def provisionalcompleter(action='ignore'):
324 """
324 """
325 This context manager has to be used in any place where unstable completer
325 This context manager has to be used in any place where unstable completer
326 behavior or APIs may be called.
326 behavior or APIs may be called.
327
327
328 >>> with provisionalcompleter():
328 >>> with provisionalcompleter():
329 ... completer.do_experimental_things() # works
329 ... completer.do_experimental_things() # works
330
330
331 >>> completer.do_experimental_things() # raises.
331 >>> completer.do_experimental_things() # raises.
332
332
333 .. note::
333 .. note::
334
334
335 Unstable
335 Unstable
336
336
337 By using this context manager you agree that the APIs in use may change
337 By using this context manager you agree that the APIs in use may change
338 without warning, and that you won't complain if they do so.
338 without warning, and that you won't complain if they do so.
339
339
340 You also understand that, if the API is not to your liking, you should report
340 You also understand that, if the API is not to your liking, you should report
341 a bug to explain your use case upstream.
341 a bug to explain your use case upstream.
342
342
343 We'll be happy to get your feedback, feature requests, and improvements on
343 We'll be happy to get your feedback, feature requests, and improvements on
344 any of the unstable APIs!
344 any of the unstable APIs!
345 """
345 """
346 with warnings.catch_warnings():
346 with warnings.catch_warnings():
347 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
347 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
348 yield
348 yield
349
349
350
350
351 def has_open_quotes(s):
351 def has_open_quotes(s: str) -> Union[str, bool]:
352 """Return whether a string has open quotes.
352 """Return whether a string has open quotes.
353
353
354 This simply counts whether the number of quote characters of either type in
354 This simply counts whether the number of quote characters of either type in
355 the string is odd.
355 the string is odd.
356
356
357 Returns
357 Returns
358 -------
358 -------
359 If there is an open quote, the quote character is returned. Else, return
359 If there is an open quote, the quote character is returned. Else, return
360 False.
360 False.
361 """
361 """
362 # We check " first, then ', so complex cases with nested quotes will get
362 # We check " first, then ', so complex cases with nested quotes will get
363 # the " to take precedence.
363 # the " to take precedence.
364 if s.count('"') % 2:
364 if s.count('"') % 2:
365 return '"'
365 return '"'
366 elif s.count("'") % 2:
366 elif s.count("'") % 2:
367 return "'"
367 return "'"
368 else:
368 else:
369 return False
369 return False
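# Usage sketch for illustration (not from the original module): the return
# value is the open quote character, or False when all quotes are balanced.
#
#   >>> has_open_quotes('print("hello')
#   '"'
#   >>> has_open_quotes("a = 'done'")
#   False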
370
370
371
371
372 def protect_filename(s, protectables=PROTECTABLES):
372 def protect_filename(s: str, protectables: str = PROTECTABLES) -> str:
373 """Escape a string to protect certain characters."""
373 """Escape a string to protect certain characters."""
374 if set(s) & set(protectables):
374 if set(s) & set(protectables):
375 if sys.platform == "win32":
375 if sys.platform == "win32":
376 return '"' + s + '"'
376 return '"' + s + '"'
377 else:
377 else:
378 return "".join(("\\" + c if c in protectables else c) for c in s)
378 return "".join(("\\" + c if c in protectables else c) for c in s)
379 else:
379 else:
380 return s
380 return s
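# Usage sketch for illustration (not from the original module): on POSIX
# platforms each protectable character is backslash-escaped, while on Windows
# the whole string is wrapped in double quotes instead.
#
#   >>> protect_filename("my file(1).txt")   # on a POSIX platform
#   'my\\ file\\(1\\).txt'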
381
381
382
382
383 def expand_user(path:str) -> Tuple[str, bool, str]:
383 def expand_user(path:str) -> Tuple[str, bool, str]:
384 """Expand ``~``-style usernames in strings.
384 """Expand ``~``-style usernames in strings.
385
385
386 This is similar to :func:`os.path.expanduser`, but it computes and returns
386 This is similar to :func:`os.path.expanduser`, but it computes and returns
387 extra information that will be useful if the input was being used in
387 extra information that will be useful if the input was being used in
388 computing completions, and you wish to return the completions with the
388 computing completions, and you wish to return the completions with the
389 original '~' instead of its expanded value.
389 original '~' instead of its expanded value.
390
390
391 Parameters
391 Parameters
392 ----------
392 ----------
393 path : str
393 path : str
394 String to be expanded. If no ~ is present, the output is the same as the
394 String to be expanded. If no ~ is present, the output is the same as the
395 input.
395 input.
396
396
397 Returns
397 Returns
398 -------
398 -------
399 newpath : str
399 newpath : str
400 Result of ~ expansion in the input path.
400 Result of ~ expansion in the input path.
401 tilde_expand : bool
401 tilde_expand : bool
402 Whether any expansion was performed or not.
402 Whether any expansion was performed or not.
403 tilde_val : str
403 tilde_val : str
404 The value that ~ was replaced with.
404 The value that ~ was replaced with.
405 """
405 """
406 # Default values
406 # Default values
407 tilde_expand = False
407 tilde_expand = False
408 tilde_val = ''
408 tilde_val = ''
409 newpath = path
409 newpath = path
410
410
411 if path.startswith('~'):
411 if path.startswith('~'):
412 tilde_expand = True
412 tilde_expand = True
413 rest = len(path)-1
413 rest = len(path)-1
414 newpath = os.path.expanduser(path)
414 newpath = os.path.expanduser(path)
415 if rest:
415 if rest:
416 tilde_val = newpath[:-rest]
416 tilde_val = newpath[:-rest]
417 else:
417 else:
418 tilde_val = newpath
418 tilde_val = newpath
419
419
420 return newpath, tilde_expand, tilde_val
420 return newpath, tilde_expand, tilde_val
421
421
422
422
423 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
423 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
424 """Does the opposite of expand_user, with its outputs.
424 """Does the opposite of expand_user, with its outputs.
425 """
425 """
426 if tilde_expand:
426 if tilde_expand:
427 return path.replace(tilde_val, '~')
427 return path.replace(tilde_val, '~')
428 else:
428 else:
429 return path
429 return path
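# Round-trip sketch for illustration (not from the original module): a path
# expanded for completion can be mapped back to the user's original "~" form.
#
#   >>> newpath, expanded, tilde_val = expand_user("~/notebooks")
#   >>> compress_user(newpath, expanded, tilde_val)
#   '~/notebooks'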
430
430
431
431
432 def completions_sorting_key(word):
432 def completions_sorting_key(word):
433 """key for sorting completions
433 """key for sorting completions
434
434
435 This does several things:
435 This does several things:
436
436
437 - Demote any completions starting with underscores to the end
437 - Demote any completions starting with underscores to the end
438 - Insert any %magic and %%cellmagic completions in the alphabetical order
438 - Insert any %magic and %%cellmagic completions in the alphabetical order
439 by their name
439 by their name
440 """
440 """
441 prio1, prio2 = 0, 0
441 prio1, prio2 = 0, 0
442
442
443 if word.startswith('__'):
443 if word.startswith('__'):
444 prio1 = 2
444 prio1 = 2
445 elif word.startswith('_'):
445 elif word.startswith('_'):
446 prio1 = 1
446 prio1 = 1
447
447
448 if word.endswith('='):
448 if word.endswith('='):
449 prio1 = -1
449 prio1 = -1
450
450
451 if word.startswith('%%'):
451 if word.startswith('%%'):
452 # If there's another % in there, this is something else, so leave it alone
452 # If there's another % in there, this is something else, so leave it alone
453 if "%" not in word[2:]:
453 if "%" not in word[2:]:
454 word = word[2:]
454 word = word[2:]
455 prio2 = 2
455 prio2 = 2
456 elif word.startswith('%'):
456 elif word.startswith('%'):
457 if "%" not in word[1:]:
457 if "%" not in word[1:]:
458 word = word[1:]
458 word = word[1:]
459 prio2 = 1
459 prio2 = 1
460
460
461 return prio1, word, prio2
461 return prio1, word, prio2
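# Sorting sketch for illustration (not from the original module): dunder names
# sink to the end while magics are interleaved alphabetically by bare name.
#
#   >>> sorted(["__len__", "_private", "%timeit", "append"],
#   ...        key=completions_sorting_key)
#   ['append', '%timeit', '_private', '__len__']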
462
462
463
463
464 class _FakeJediCompletion:
464 class _FakeJediCompletion:
465 """
465 """
466 This is a workaround to communicate to the UI that Jedi has crashed and to
466 This is a workaround to communicate to the UI that Jedi has crashed and to
467 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
467 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
468
468
469 Added in IPython 6.0 so should likely be removed for 7.0
469 Added in IPython 6.0 so should likely be removed for 7.0
470
470
471 """
471 """
472
472
473 def __init__(self, name):
473 def __init__(self, name):
474
474
475 self.name = name
475 self.name = name
476 self.complete = name
476 self.complete = name
477 self.type = 'crashed'
477 self.type = 'crashed'
478 self.name_with_symbols = name
478 self.name_with_symbols = name
479 self.signature = ""
479 self.signature = ""
480 self._origin = "fake"
480 self._origin = "fake"
481 self.text = "crashed"
481 self.text = "crashed"
482
482
483 def __repr__(self):
483 def __repr__(self):
484 return '<Fake completion object jedi has crashed>'
484 return '<Fake completion object jedi has crashed>'
485
485
486
486
487 _JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
487 _JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
488
488
489
489
490 class Completion:
490 class Completion:
491 """
491 """
492 Completion object used and returned by IPython completers.
492 Completion object used and returned by IPython completers.
493
493
494 .. warning::
494 .. warning::
495
495
496 Unstable
496 Unstable
497
497
498 This class is unstable; its API may change without warning.
498 This class is unstable; its API may change without warning.
499 It will also raise unless used in the proper context manager.
499 It will also raise unless used in the proper context manager.
500
500
501 This acts as a middle-ground :any:`Completion` object between the
501 This acts as a middle-ground :any:`Completion` object between the
502 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
502 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
503 object. While Jedi needs a lot of information about the evaluator and how the
503 object. While Jedi needs a lot of information about the evaluator and how the
504 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
504 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
505 needs user-facing information:
505 needs user-facing information:
506
506
507 - Which range should be replaced by what.
507 - Which range should be replaced by what.
508 - Some metadata (like the completion type), or meta information to be displayed to
508 - Some metadata (like the completion type), or meta information to be displayed to
509 the user.
509 the user.
510
510
511 For debugging purposes we can also store the origin of the completion (``jedi``,
511 For debugging purposes we can also store the origin of the completion (``jedi``,
512 ``IPython.python_matches``, ``IPython.magics_matches``...).
512 ``IPython.python_matches``, ``IPython.magics_matches``...).
513 """
513 """
514
514
515 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
515 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
516
516
517 def __init__(
517 def __init__(
518 self,
518 self,
519 start: int,
519 start: int,
520 end: int,
520 end: int,
521 text: str,
521 text: str,
522 *,
522 *,
523 type: Optional[str] = None,
523 type: Optional[str] = None,
524 _origin="",
524 _origin="",
525 signature="",
525 signature="",
526 ) -> None:
526 ) -> None:
527 warnings.warn(
527 warnings.warn(
528 "``Completion`` is a provisional API (as of IPython 6.0). "
528 "``Completion`` is a provisional API (as of IPython 6.0). "
529 "It may change without warnings. "
529 "It may change without warnings. "
530 "Use in corresponding context manager.",
530 "Use in corresponding context manager.",
531 category=ProvisionalCompleterWarning,
531 category=ProvisionalCompleterWarning,
532 stacklevel=2,
532 stacklevel=2,
533 )
533 )
534
534
535 self.start = start
535 self.start = start
536 self.end = end
536 self.end = end
537 self.text = text
537 self.text = text
538 self.type = type
538 self.type = type
539 self.signature = signature
539 self.signature = signature
540 self._origin = _origin
540 self._origin = _origin
541
541
542 def __repr__(self):
542 def __repr__(self):
543 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
543 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
544 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
544 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
545
545
546 def __eq__(self, other) -> bool:
546 def __eq__(self, other) -> bool:
547 """
547 """
548 Equality and hash do not hash the type (as some completers may not be
548 Equality and hash do not hash the type (as some completers may not be
549 able to infer the type), but are used to (partially) de-duplicate
549 able to infer the type), but are used to (partially) de-duplicate
550 completions.
550 completions.
551
551
552 Completely de-duplicating completions is a bit trickier than just
552 Completely de-duplicating completions is a bit trickier than just
553 comparing, as it depends on surrounding text, which Completions are not
553 comparing, as it depends on surrounding text, which Completions are not
554 aware of.
554 aware of.
555 """
555 """
556 return self.start == other.start and \
556 return self.start == other.start and \
557 self.end == other.end and \
557 self.end == other.end and \
558 self.text == other.text
558 self.text == other.text
559
559
560 def __hash__(self):
560 def __hash__(self):
561 return hash((self.start, self.end, self.text))
561 return hash((self.start, self.end, self.text))
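# Equality sketch for illustration (not from the original module): `type` does
# not take part in equality or hashing, so the two completions below compare
# equal and de-duplicate to a single entry.
#
#   >>> with provisionalcompleter():
#   ...     a = Completion(0, 2, "abs", type="function")
#   ...     b = Completion(0, 2, "abs", type=None)
#   >>> a == b, len({a, b})
#   (True, 1)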
562
562
563
563
564 class SimpleCompletion:
564 class SimpleCompletion:
565 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
565 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
566
566
567 .. warning::
567 .. warning::
568
568
569 Provisional
569 Provisional
570
570
571 This class is used to describe the currently supported attributes of
571 This class is used to describe the currently supported attributes of
572 simple completion items, and any additional implementation details
572 simple completion items, and any additional implementation details
573 should not be relied on. Additional attributes may be included in
573 should not be relied on. Additional attributes may be included in
574 future versions, and the meaning of text disambiguated from the current
574 future versions, and the meaning of text disambiguated from the current
575 dual meaning of "text to insert" and "text to be used as a label".
575 dual meaning of "text to insert" and "text to be used as a label".
576 """
576 """
577
577
578 __slots__ = ["text", "type"]
578 __slots__ = ["text", "type"]
579
579
580 def __init__(self, text: str, *, type: Optional[str] = None):
580 def __init__(self, text: str, *, type: Optional[str] = None):
581 self.text = text
581 self.text = text
582 self.type = type
582 self.type = type
583
583
584 def __repr__(self):
584 def __repr__(self):
585 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
585 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
586
586
587
587
588 class _MatcherResultBase(TypedDict):
588 class _MatcherResultBase(TypedDict):
589 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
589 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
590
590
591 #: Suffix of the provided ``CompletionContext.token``; if not given, defaults to the full token.
591 #: Suffix of the provided ``CompletionContext.token``; if not given, defaults to the full token.
592 matched_fragment: NotRequired[str]
592 matched_fragment: NotRequired[str]
593
593
594 #: Whether to suppress results from all other matchers (True), some
594 #: Whether to suppress results from all other matchers (True), some
595 #: matchers (set of identifiers) or none (False); default is False.
595 #: matchers (set of identifiers) or none (False); default is False.
596 suppress: NotRequired[Union[bool, Set[str]]]
596 suppress: NotRequired[Union[bool, Set[str]]]
597
597
598 #: Identifiers of matchers which should NOT be suppressed when this matcher
598 #: Identifiers of matchers which should NOT be suppressed when this matcher
599 #: requests to suppress all other matchers; defaults to an empty set.
599 #: requests to suppress all other matchers; defaults to an empty set.
600 do_not_suppress: NotRequired[Set[str]]
600 do_not_suppress: NotRequired[Set[str]]
601
601
602 #: Are completions already ordered and should be left as-is? default is False.
602 #: Are completions already ordered and should be left as-is? default is False.
603 ordered: NotRequired[bool]
603 ordered: NotRequired[bool]
604
604
605
605
606 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
606 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
607 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
607 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
608 """Result of new-style completion matcher."""
608 """Result of new-style completion matcher."""
609
609
610 # note: TypedDict is added again to the inheritance chain
610 # note: TypedDict is added again to the inheritance chain
611 # in order to get __orig_bases__ for documentation
611 # in order to get __orig_bases__ for documentation
612
612
613 #: List of candidate completions
613 #: List of candidate completions
614 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
614 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
615
615
616
616
617 class _JediMatcherResult(_MatcherResultBase):
617 class _JediMatcherResult(_MatcherResultBase):
618 """Matching result returned by Jedi (will be processed differently)"""
618 """Matching result returned by Jedi (will be processed differently)"""
619
619
620 #: list of candidate completions
620 #: list of candidate completions
621 completions: Iterator[_JediCompletionLike]
621 completions: Iterator[_JediCompletionLike]
622
622
623
623
624 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
624 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
625 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
625 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
626
626
627
627
628 @dataclass
628 @dataclass
629 class CompletionContext:
629 class CompletionContext:
630 """Completion context provided as an argument to matchers in the Matcher API v2."""
630 """Completion context provided as an argument to matchers in the Matcher API v2."""
631
631
632 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
632 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
633 # which was not explicitly visible as an argument of the matcher, making any refactor
633 # which was not explicitly visible as an argument of the matcher, making any refactor
634 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
634 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
635 # from the completer, and make substituting them in sub-classes easier.
635 # from the completer, and make substituting them in sub-classes easier.
636
636
637 #: Relevant fragment of code directly preceding the cursor.
637 #: Relevant fragment of code directly preceding the cursor.
638 #: The extraction of token is implemented via splitter heuristic
638 #: The extraction of token is implemented via splitter heuristic
639 #: (following readline behaviour for legacy reasons), which is user configurable
639 #: (following readline behaviour for legacy reasons), which is user configurable
640 #: (by switching the greedy mode).
640 #: (by switching the greedy mode).
641 token: str
641 token: str
642
642
643 #: The full available content of the editor or buffer
643 #: The full available content of the editor or buffer
644 full_text: str
644 full_text: str
645
645
646 #: Cursor position in the line (the same for ``full_text`` and ``text``).
646 #: Cursor position in the line (the same for ``full_text`` and ``text``).
647 cursor_position: int
647 cursor_position: int
648
648
649 #: Cursor line in ``full_text``.
649 #: Cursor line in ``full_text``.
650 cursor_line: int
650 cursor_line: int
651
651
652 #: The maximum number of completions that will be used downstream.
652 #: The maximum number of completions that will be used downstream.
653 #: Matchers can use this information to abort early.
653 #: Matchers can use this information to abort early.
654 #: The built-in Jedi matcher is currently excepted from this limit.
654 #: The built-in Jedi matcher is currently excepted from this limit.
655 # If not given, return all possible completions.
655 # If not given, return all possible completions.
656 limit: Optional[int]
656 limit: Optional[int]
657
657
658 @cached_property
658 @cached_property
659 def text_until_cursor(self) -> str:
659 def text_until_cursor(self) -> str:
660 return self.line_with_cursor[: self.cursor_position]
660 return self.line_with_cursor[: self.cursor_position]
661
661
662 @cached_property
662 @cached_property
663 def line_with_cursor(self) -> str:
663 def line_with_cursor(self) -> str:
664 return self.full_text.split("\n")[self.cursor_line]
664 return self.full_text.split("\n")[self.cursor_line]
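# Construction sketch for illustration (not from the original module; the
# values below are made up): the derived properties expose the cursor line and
# the text up to the cursor.
#
#   >>> ctx = CompletionContext(
#   ...     token="np.arr", full_text="import numpy as np\nnp.arr",
#   ...     cursor_position=6, cursor_line=1, limit=None,
#   ... )
#   >>> ctx.line_with_cursor, ctx.text_until_cursor
#   ('np.arr', 'np.arr')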
665
665
666
666
667 #: Matcher results for API v2.
667 #: Matcher results for API v2.
668 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
668 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
669
669
670
670
671 class _MatcherAPIv1Base(Protocol):
671 class _MatcherAPIv1Base(Protocol):
672 def __call__(self, text: str) -> List[str]:
672 def __call__(self, text: str) -> List[str]:
673 """Call signature."""
673 """Call signature."""
674 ...
674 ...
675
675
676 #: Used to construct the default matcher identifier
676 #: Used to construct the default matcher identifier
677 __qualname__: str
677 __qualname__: str
678
678
679
679
680 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
680 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
681 #: API version
681 #: API version
682 matcher_api_version: Optional[Literal[1]]
682 matcher_api_version: Optional[Literal[1]]
683
683
684 def __call__(self, text: str) -> List[str]:
684 def __call__(self, text: str) -> List[str]:
685 """Call signature."""
685 """Call signature."""
686 ...
686 ...
687
687
688
688
689 #: Protocol describing Matcher API v1.
689 #: Protocol describing Matcher API v1.
690 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
690 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
691
691
692
692
693 class MatcherAPIv2(Protocol):
693 class MatcherAPIv2(Protocol):
694 """Protocol describing Matcher API v2."""
694 """Protocol describing Matcher API v2."""
695
695
696 #: API version
696 #: API version
697 matcher_api_version: Literal[2] = 2
697 matcher_api_version: Literal[2] = 2
698
698
699 def __call__(self, context: CompletionContext) -> MatcherResult:
699 def __call__(self, context: CompletionContext) -> MatcherResult:
700 """Call signature."""
700 """Call signature."""
701 ...
701 ...
702
702
703 #: Used to construct the default matcher identifier
703 #: Used to construct the default matcher identifier
704 __qualname__: str
704 __qualname__: str
705
705
706
706
707 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
707 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
708
708
709
709
710 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
710 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
711 api_version = _get_matcher_api_version(matcher)
711 api_version = _get_matcher_api_version(matcher)
712 return api_version == 1
712 return api_version == 1
713
713
714
714
715 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
715 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
716 api_version = _get_matcher_api_version(matcher)
716 api_version = _get_matcher_api_version(matcher)
717 return api_version == 2
717 return api_version == 2
718
718
719
719
720 def _is_sizable(value: Any) -> TypeGuard[Sized]:
720 def _is_sizable(value: Any) -> TypeGuard[Sized]:
721 """Determines whether objects is sizable"""
721 """Determines whether objects is sizable"""
722 return hasattr(value, "__len__")
722 return hasattr(value, "__len__")
723
723
724
724
725 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
725 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
726 """Determines whether objects is sizable"""
726 """Determines whether objects is sizable"""
727 return hasattr(value, "__next__")
727 return hasattr(value, "__next__")
728
728
729
729
730 def has_any_completions(result: MatcherResult) -> bool:
730 def has_any_completions(result: MatcherResult) -> bool:
731 """Check if any result includes any completions."""
731 """Check if any result includes any completions."""
732 completions = result["completions"]
732 completions = result["completions"]
733 if _is_sizable(completions):
733 if _is_sizable(completions):
734 return len(completions) != 0
734 return len(completions) != 0
735 if _is_iterator(completions):
735 if _is_iterator(completions):
736 try:
736 try:
737 old_iterator = completions
737 old_iterator = completions
738 first = next(old_iterator)
738 first = next(old_iterator)
739 result["completions"] = cast(
739 result["completions"] = cast(
740 Iterator[SimpleCompletion],
740 Iterator[SimpleCompletion],
741 itertools.chain([first], old_iterator),
741 itertools.chain([first], old_iterator),
742 )
742 )
743 return True
743 return True
744 except StopIteration:
744 except StopIteration:
745 return False
745 return False
746 raise ValueError(
746 raise ValueError(
747 "Completions returned by matcher need to be an Iterator or a Sizable"
747 "Completions returned by matcher need to be an Iterator or a Sizable"
748 )
748 )
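# Behaviour sketch for illustration (not from the original module): works for
# both sized containers and iterators; for an iterator, the consumed element is
# chained back into the result.
#
#   >>> has_any_completions({"completions": []})
#   False
#   >>> has_any_completions({"completions": iter([SimpleCompletion("x")])})
#   True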
749
749
750
750
751 def completion_matcher(
751 def completion_matcher(
752 *,
752 *,
753 priority: Optional[float] = None,
753 priority: Optional[float] = None,
754 identifier: Optional[str] = None,
754 identifier: Optional[str] = None,
755 api_version: int = 1,
755 api_version: int = 1,
756 ):
756 ) -> Callable[[Matcher], Matcher]:
757 """Adds attributes describing the matcher.
757 """Adds attributes describing the matcher.
758
758
759 Parameters
759 Parameters
760 ----------
760 ----------
761 priority : Optional[float]
761 priority : Optional[float]
762 The priority of the matcher, determines the order of execution of matchers.
762 The priority of the matcher, determines the order of execution of matchers.
763 Higher priority means that the matcher will be executed first. Defaults to 0.
763 Higher priority means that the matcher will be executed first. Defaults to 0.
764 identifier : Optional[str]
764 identifier : Optional[str]
765 Identifier of the matcher, allowing users to modify the behaviour via traitlets,
765 Identifier of the matcher, allowing users to modify the behaviour via traitlets,
766 and also used for debugging (will be passed as ``origin`` with the completions).
766 and also used for debugging (will be passed as ``origin`` with the completions).
767
767
768 Defaults to matcher function's ``__qualname__`` (for example,
768 Defaults to matcher function's ``__qualname__`` (for example,
769 ``IPCompleter.file_matcher`` for the built-in matcher defined
769 ``IPCompleter.file_matcher`` for the built-in matcher defined
770 as a ``file_matcher`` method of the ``IPCompleter`` class).
770 as a ``file_matcher`` method of the ``IPCompleter`` class).
771 api_version : int
771 api_version : int
772 Version of the Matcher API used by this matcher.
772 Version of the Matcher API used by this matcher.
773 Currently supported values are 1 and 2.
773 Currently supported values are 1 and 2.
774 Defaults to 1.
774 Defaults to 1.
775 """
775 """
776
776
777 def wrapper(func: Matcher):
777 def wrapper(func: Matcher):
778 func.matcher_priority = priority or 0 # type: ignore
778 func.matcher_priority = priority or 0 # type: ignore
779 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
779 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
780 func.matcher_api_version = api_version # type: ignore
780 func.matcher_api_version = api_version # type: ignore
781 if TYPE_CHECKING:
781 if TYPE_CHECKING:
782 if api_version == 1:
782 if api_version == 1:
783 func = cast(MatcherAPIv1, func)
783 func = cast(MatcherAPIv1, func)
784 elif api_version == 2:
784 elif api_version == 2:
785 func = cast(MatcherAPIv2, func)
785 func = cast(MatcherAPIv2, func)
786 return func
786 return func
787
787
788 return wrapper
788 return wrapper
789
789
790
790
791 def _get_matcher_priority(matcher: Matcher):
791 def _get_matcher_priority(matcher: Matcher):
792 return getattr(matcher, "matcher_priority", 0)
792 return getattr(matcher, "matcher_priority", 0)
793
793
794
794
795 def _get_matcher_id(matcher: Matcher):
795 def _get_matcher_id(matcher: Matcher):
796 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
796 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
797
797
798
798
799 def _get_matcher_api_version(matcher):
799 def _get_matcher_api_version(matcher):
800 return getattr(matcher, "matcher_api_version", 1)
800 return getattr(matcher, "matcher_api_version", 1)
801
801
802
802
803 context_matcher = partial(completion_matcher, api_version=2)
803 context_matcher = partial(completion_matcher, api_version=2)
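# Usage sketch for illustration (the matcher below is hypothetical, not part of
# IPython): a v1-style matcher decorated with metadata. It receives the token
# as a plain string and returns a list of string completions; it could then be
# registered with ``get_ipython().Completer.custom_matchers.append(...)``.
#
#   @completion_matcher(identifier="my_ext.unit_matcher", priority=0.5)
#   def unit_matcher(text: str) -> List[str]:
#       units = ["meters", "seconds", "kilograms"]
#       return [u for u in units if u.startswith(text)]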
804
804
805
805
806 _IC = Iterable[Completion]
806 _IC = Iterable[Completion]
807
807
808
808
809 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
809 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
810 """
810 """
811 Deduplicate a set of completions.
811 Deduplicate a set of completions.
812
812
813 .. warning::
813 .. warning::
814
814
815 Unstable
815 Unstable
816
816
817 This function is unstable, API may change without warning.
817 This function is unstable, API may change without warning.
818
818
819 Parameters
819 Parameters
820 ----------
820 ----------
821 text : str
821 text : str
822 text that should be completed.
822 text that should be completed.
823 completions : Iterator[Completion]
823 completions : Iterator[Completion]
824 iterator over the completions to deduplicate
824 iterator over the completions to deduplicate
825
825
826 Yields
826 Yields
827 ------
827 ------
828 `Completion` objects
828 `Completion` objects
829 Completions coming from multiple sources may be different but end up having
829 Completions coming from multiple sources may be different but end up having
830 the same effect when applied to ``text``. If this is the case, this will
830 the same effect when applied to ``text``. If this is the case, this will
831 consider completions as equal and only emit the first encountered.
831 consider completions as equal and only emit the first encountered.
832 Not folded into `completions()` yet for debugging purposes, and to detect when
832 Not folded into `completions()` yet for debugging purposes, and to detect when
833 the IPython completer returns things that Jedi does not, but it should be
833 the IPython completer returns things that Jedi does not, but it should be
834 folded in at some point.
834 folded in at some point.
835 """
835 """
836 completions = list(completions)
836 completions = list(completions)
837 if not completions:
837 if not completions:
838 return
838 return
839
839
840 new_start = min(c.start for c in completions)
840 new_start = min(c.start for c in completions)
841 new_end = max(c.end for c in completions)
841 new_end = max(c.end for c in completions)
842
842
843 seen = set()
843 seen = set()
844 for c in completions:
844 for c in completions:
845 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
845 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
846 if new_text not in seen:
846 if new_text not in seen:
847 yield c
847 yield c
848 seen.add(new_text)
848 seen.add(new_text)
849
849
850
850
851 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
851 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
852 """
852 """
853 Rectify a set of completions to all have the same ``start`` and ``end``
853 Rectify a set of completions to all have the same ``start`` and ``end``
854
854
855 .. warning::
855 .. warning::
856
856
857 Unstable
857 Unstable
858
858
859 This function is unstable, API may change without warning.
859 This function is unstable, API may change without warning.
860 It will also raise unless use in proper context manager.
860 It will also raise unless use in proper context manager.
861
861
862 Parameters
862 Parameters
863 ----------
863 ----------
864 text : str
864 text : str
865 text that should be completed.
865 text that should be completed.
866 completions : Iterator[Completion]
866 completions : Iterator[Completion]
867 iterator over the completions to rectify
867 iterator over the completions to rectify
868 _debug : bool
868 _debug : bool
869 Log failed completion
869 Log failed completion
870
870
871 Notes
871 Notes
872 -----
872 -----
873 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
873 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
874 the Jupyter Protocol requires them to be the same. This will readjust
874 the Jupyter Protocol requires them to be the same. This will readjust
875 the completions to have the same ``start`` and ``end`` by padding both
875 the completions to have the same ``start`` and ``end`` by padding both
876 extremities with surrounding text.
876 extremities with surrounding text.
877
877
878 During stabilisation this should support a ``_debug`` option to log which
878 During stabilisation this should support a ``_debug`` option to log which
879 completions are returned by the IPython completer and not found in Jedi, in
879 completions are returned by the IPython completer and not found in Jedi, in
880 order to make upstream bug reports.
880 order to make upstream bug reports.
881 """
881 """
882 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
882 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
883 "It may change without warnings. "
883 "It may change without warnings. "
884 "Use in corresponding context manager.",
884 "Use in corresponding context manager.",
885 category=ProvisionalCompleterWarning, stacklevel=2)
885 category=ProvisionalCompleterWarning, stacklevel=2)
886
886
887 completions = list(completions)
887 completions = list(completions)
888 if not completions:
888 if not completions:
889 return
889 return
890 starts = (c.start for c in completions)
890 starts = (c.start for c in completions)
891 ends = (c.end for c in completions)
891 ends = (c.end for c in completions)
892
892
893 new_start = min(starts)
893 new_start = min(starts)
894 new_end = max(ends)
894 new_end = max(ends)
895
895
896 seen_jedi = set()
896 seen_jedi = set()
897 seen_python_matches = set()
897 seen_python_matches = set()
898 for c in completions:
898 for c in completions:
899 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
899 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
900 if c._origin == 'jedi':
900 if c._origin == 'jedi':
901 seen_jedi.add(new_text)
901 seen_jedi.add(new_text)
902 elif c._origin == "IPCompleter.python_matcher":
902 elif c._origin == "IPCompleter.python_matcher":
903 seen_python_matches.add(new_text)
903 seen_python_matches.add(new_text)
904 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
904 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
905 diff = seen_python_matches.difference(seen_jedi)
905 diff = seen_python_matches.difference(seen_jedi)
906 if diff and _debug:
906 if diff and _debug:
907 print('IPython.python matches have extras:', diff)
907 print('IPython.python matches have extras:', diff)
908
908
909
909
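# A minimal sketch of the rectification described above (assuming the ``Completion``
# class and the ``provisionalcompleter()`` context manager defined earlier in this
# module; the offsets are made up for the example):
#
#     >>> text = "d.foo"
#     >>> comps = [Completion(start=2, end=5, text="food", _origin="jedi"),
#     ...          Completion(start=0, end=5, text="d.foot", _origin="jedi")]
#     >>> with provisionalcompleter():
#     ...     [c.text for c in rectify_completions(text, comps)]
#     ['d.food', 'd.foot']
#
# Every rectified completion now spans the same (0, 5) range over ``text``.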
910 if sys.platform == 'win32':
910 if sys.platform == 'win32':
911 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
911 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
912 else:
912 else:
913 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
913 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
914
914
915 GREEDY_DELIMS = ' =\r\n'
915 GREEDY_DELIMS = ' =\r\n'
916
916
917
917
918 class CompletionSplitter(object):
918 class CompletionSplitter(object):
919 """An object to split an input line in a manner similar to readline.
919 """An object to split an input line in a manner similar to readline.
920
920
921 By having our own implementation, we can expose readline-like completion in
921 By having our own implementation, we can expose readline-like completion in
922 a uniform manner to all frontends. This object only needs to be given the
922 a uniform manner to all frontends. This object only needs to be given the
923 line of text to be split and the cursor position on said line, and it
923 line of text to be split and the cursor position on said line, and it
924 returns the 'word' to be completed on at the cursor after splitting the
924 returns the 'word' to be completed on at the cursor after splitting the
925 entire line.
925 entire line.
926
926
927 What characters are used as splitting delimiters can be controlled by
927 What characters are used as splitting delimiters can be controlled by
928 setting the ``delims`` attribute (this is a property that internally
928 setting the ``delims`` attribute (this is a property that internally
929 automatically builds the necessary regular expression)"""
929 automatically builds the necessary regular expression)"""
930
930
931 # Private interface
931 # Private interface
932
932
933 # A string of delimiter characters. The default value makes sense for
933 # A string of delimiter characters. The default value makes sense for
934 # IPython's most typical usage patterns.
934 # IPython's most typical usage patterns.
935 _delims = DELIMS
935 _delims = DELIMS
936
936
937 # The expression (a normal string) to be compiled into a regular expression
937 # The expression (a normal string) to be compiled into a regular expression
938 # for actual splitting. We store it as an attribute mostly for ease of
938 # for actual splitting. We store it as an attribute mostly for ease of
939 # debugging, since this type of code can be so tricky to debug.
939 # debugging, since this type of code can be so tricky to debug.
940 _delim_expr = None
940 _delim_expr = None
941
941
942 # The regular expression that does the actual splitting
942 # The regular expression that does the actual splitting
943 _delim_re = None
943 _delim_re = None
944
944
945 def __init__(self, delims=None):
945 def __init__(self, delims=None):
946 delims = CompletionSplitter._delims if delims is None else delims
946 delims = CompletionSplitter._delims if delims is None else delims
947 self.delims = delims
947 self.delims = delims
948
948
949 @property
949 @property
950 def delims(self):
950 def delims(self):
951 """Return the string of delimiter characters."""
951 """Return the string of delimiter characters."""
952 return self._delims
952 return self._delims
953
953
954 @delims.setter
954 @delims.setter
955 def delims(self, delims):
955 def delims(self, delims):
956 """Set the delimiters for line splitting."""
956 """Set the delimiters for line splitting."""
957 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
957 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
958 self._delim_re = re.compile(expr)
958 self._delim_re = re.compile(expr)
959 self._delims = delims
959 self._delims = delims
960 self._delim_expr = expr
960 self._delim_expr = expr
961
961
962 def split_line(self, line, cursor_pos=None):
962 def split_line(self, line, cursor_pos=None):
963 """Split a line of text with a cursor at the given position.
963 """Split a line of text with a cursor at the given position.
964 """
964 """
965 cut_line = line if cursor_pos is None else line[:cursor_pos]
965 cut_line = line if cursor_pos is None else line[:cursor_pos]
966 return self._delim_re.split(cut_line)[-1]
966 return self._delim_re.split(cut_line)[-1]
967
967
968
968
969
969
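# Illustrative sketch (invented inputs, not part of the module): how
# CompletionSplitter picks the word to complete.
#
#     >>> splitter = CompletionSplitter()
#     >>> splitter.split_line("print(foo.ba")    # "(" is a delimiter, "." is not
#     'foo.ba'
#     >>> splitter.split_line("x = np.arr and more", cursor_pos=10)
#     'np.arr'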
970 class Completer(Configurable):
970 class Completer(Configurable):
971
971
972 greedy = Bool(
972 greedy = Bool(
973 False,
973 False,
974 help="""Activate greedy completion.
974 help="""Activate greedy completion.
975
975
976 .. deprecated:: 8.8
976 .. deprecated:: 8.8
977 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
978
978
979 When enabled in IPython 8.8 or newer, changes configuration as follows:
979 When enabled in IPython 8.8 or newer, changes configuration as follows:
980
980
981 - ``Completer.evaluation = 'unsafe'``
981 - ``Completer.evaluation = 'unsafe'``
982 - ``Completer.auto_close_dict_keys = True``
982 - ``Completer.auto_close_dict_keys = True``
983 """,
983 """,
984 ).tag(config=True)
984 ).tag(config=True)
985
985
986 evaluation = Enum(
986 evaluation = Enum(
987 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
988 default_value="limited",
988 default_value="limited",
989 help="""Policy for code evaluation under completion.
989 help="""Policy for code evaluation under completion.
990
990
991 Each successive option enables more eager evaluation for better
991 Each successive option enables more eager evaluation for better
992 completion suggestions, including for nested dictionaries, nested lists,
992 completion suggestions, including for nested dictionaries, nested lists,
993 or even results of function calls.
993 or even results of function calls.
994 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
995 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
996
996
997 Allowed values are:
997 Allowed values are:
998
998
999 - ``forbidden``: no evaluation of code is permitted,
999 - ``forbidden``: no evaluation of code is permitted,
1000 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 - ``minimal``: evaluation of literals and access to built-in namespace;
1001 no item/attribute evaluation, no access to locals/globals,
1001 no item/attribute evaluation, no access to locals/globals,
1002 no evaluation of any operations or comparisons.
1002 no evaluation of any operations or comparisons.
1003 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1004 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1005 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 :any:`object.__getitem__`) on allow-listed objects (for example:
1006 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1007 - ``unsafe``: evaluation of all methods and function calls but not of
1007 - ``unsafe``: evaluation of all methods and function calls but not of
1008 syntax with side-effects like ``del x``,
1008 syntax with side-effects like ``del x``,
1009 - ``dangerous``: completely arbitrary evaluation.
1009 - ``dangerous``: completely arbitrary evaluation.
1010 """,
1010 """,
1011 ).tag(config=True)
1011 ).tag(config=True)
1012
1012
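# Example (configuration sketch, not part of this module): ``evaluation`` is a
# traitlets option, so it can be set from a configuration file such as
# ``ipython_config.py``; the value below is only illustrative.
#
#     c.IPCompleter.evaluation = "minimal"   # never evaluate user objects on <Tab>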
1013 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 use_jedi = Bool(default_value=JEDI_INSTALLED,
1014 help="Experimental: Use Jedi to generate autocompletions. "
1014 help="Experimental: Use Jedi to generate autocompletions. "
1015 "Default to True if jedi is installed.").tag(config=True)
1015 "Default to True if jedi is installed.").tag(config=True)
1016
1016
1017 jedi_compute_type_timeout = Int(default_value=400,
1017 jedi_compute_type_timeout = Int(default_value=400,
1018 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1019 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1019 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1020 performance by preventing jedi from building its cache.
1020 performance by preventing jedi from building its cache.
1021 """).tag(config=True)
1021 """).tag(config=True)
1022
1022
1023 debug = Bool(default_value=False,
1023 debug = Bool(default_value=False,
1024 help='Enable debug for the Completer. Mostly prints extra '
1024 help='Enable debug for the Completer. Mostly prints extra '
1025 'information for experimental jedi integration.')\
1025 'information for experimental jedi integration.')\
1026 .tag(config=True)
1026 .tag(config=True)
1027
1027
1028 backslash_combining_completions = Bool(True,
1028 backslash_combining_completions = Bool(True,
1029 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 help="Enable unicode completions, e.g. \\alpha<tab> . "
1030 "Includes completion of latex commands, unicode names, and expanding "
1030 "Includes completion of latex commands, unicode names, and expanding "
1031 "unicode characters back to latex commands.").tag(config=True)
1031 "unicode characters back to latex commands.").tag(config=True)
1032
1032
1033 auto_close_dict_keys = Bool(
1033 auto_close_dict_keys = Bool(
1034 False,
1034 False,
1035 help="""
1035 help="""
1036 Enable auto-closing dictionary keys.
1036 Enable auto-closing dictionary keys.
1037
1037
1038 When enabled, string keys will be suffixed with a final quote
1038 When enabled, string keys will be suffixed with a final quote
1039 (matching the opening quote), tuple keys will also receive a
1039 (matching the opening quote), tuple keys will also receive a
1040 separating comma if needed, and keys which are final will
1040 separating comma if needed, and keys which are final will
1041 receive a closing bracket (``]``).
1041 receive a closing bracket (``]``).
1042 """,
1042 """,
1043 ).tag(config=True)
1043 ).tag(config=True)
1044
1044
1045 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1046 """Create a new completer for the command line.
1046 """Create a new completer for the command line.
1047
1047
1048 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1049
1049
1050 If unspecified, the default namespace where completions are performed
1050 If unspecified, the default namespace where completions are performed
1051 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 is __main__ (technically, __main__.__dict__). Namespaces should be
1052 given as dictionaries.
1052 given as dictionaries.
1053
1053
1054 An optional second namespace can be given. This allows the completer
1054 An optional second namespace can be given. This allows the completer
1055 to handle cases where both the local and global scopes need to be
1055 to handle cases where both the local and global scopes need to be
1056 distinguished.
1056 distinguished.
1057 """
1057 """
1058
1058
1059 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 # Don't bind to namespace quite yet, but flag whether the user wants a
1060 # specific namespace or to use __main__.__dict__. This will allow us
1060 # specific namespace or to use __main__.__dict__. This will allow us
1061 # to bind to __main__.__dict__ at completion time, not now.
1061 # to bind to __main__.__dict__ at completion time, not now.
1062 if namespace is None:
1062 if namespace is None:
1063 self.use_main_ns = True
1063 self.use_main_ns = True
1064 else:
1064 else:
1065 self.use_main_ns = False
1065 self.use_main_ns = False
1066 self.namespace = namespace
1066 self.namespace = namespace
1067
1067
1068 # The global namespace, if given, can be bound directly
1068 # The global namespace, if given, can be bound directly
1069 if global_namespace is None:
1069 if global_namespace is None:
1070 self.global_namespace = {}
1070 self.global_namespace = {}
1071 else:
1071 else:
1072 self.global_namespace = global_namespace
1072 self.global_namespace = global_namespace
1073
1073
1074 self.custom_matchers = []
1074 self.custom_matchers = []
1075
1075
1076 super(Completer, self).__init__(**kwargs)
1076 super(Completer, self).__init__(**kwargs)
1077
1077
1078 def complete(self, text, state):
1078 def complete(self, text, state):
1079 """Return the next possible completion for 'text'.
1079 """Return the next possible completion for 'text'.
1080
1080
1081 This is called successively with state == 0, 1, 2, ... until it
1081 This is called successively with state == 0, 1, 2, ... until it
1082 returns None. The completion should begin with 'text'.
1082 returns None. The completion should begin with 'text'.
1083
1083
1084 """
1084 """
1085 if self.use_main_ns:
1085 if self.use_main_ns:
1086 self.namespace = __main__.__dict__
1086 self.namespace = __main__.__dict__
1087
1087
1088 if state == 0:
1088 if state == 0:
1089 if "." in text:
1089 if "." in text:
1090 self.matches = self.attr_matches(text)
1090 self.matches = self.attr_matches(text)
1091 else:
1091 else:
1092 self.matches = self.global_matches(text)
1092 self.matches = self.global_matches(text)
1093 try:
1093 try:
1094 return self.matches[state]
1094 return self.matches[state]
1095 except IndexError:
1095 except IndexError:
1096 return None
1096 return None
1097
1097
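# Illustrative sketch (invented namespace) of the readline-style protocol described
# above: ``complete`` is called with state 0, 1, 2, ... until it returns None.
#
#     >>> c = Completer(namespace={"alpha": 1, "alphabet": 2})
#     >>> matches, state = [], 0
#     >>> while (m := c.complete("alph", state)) is not None:
#     ...     matches.append(m)
#     ...     state += 1
#     >>> matches
#     ['alpha', 'alphabet']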
1098 def global_matches(self, text):
1098 def global_matches(self, text):
1099 """Compute matches when text is a simple name.
1099 """Compute matches when text is a simple name.
1100
1100
1101 Return a list of all keywords, built-in functions and names currently
1101 Return a list of all keywords, built-in functions and names currently
1102 defined in self.namespace or self.global_namespace that match.
1102 defined in self.namespace or self.global_namespace that match.
1103
1103
1104 """
1104 """
1105 matches = []
1105 matches = []
1106 match_append = matches.append
1106 match_append = matches.append
1107 n = len(text)
1107 n = len(text)
1108 for lst in [
1108 for lst in [
1109 keyword.kwlist,
1109 keyword.kwlist,
1110 builtin_mod.__dict__.keys(),
1110 builtin_mod.__dict__.keys(),
1111 list(self.namespace.keys()),
1111 list(self.namespace.keys()),
1112 list(self.global_namespace.keys()),
1112 list(self.global_namespace.keys()),
1113 ]:
1113 ]:
1114 for word in lst:
1114 for word in lst:
1115 if word[:n] == text and word != "__builtins__":
1115 if word[:n] == text and word != "__builtins__":
1116 match_append(word)
1116 match_append(word)
1117
1117
1118 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1119 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1120 shortened = {
1120 shortened = {
1121 "_".join([sub[0] for sub in word.split("_")]): word
1121 "_".join([sub[0] for sub in word.split("_")]): word
1122 for word in lst
1122 for word in lst
1123 if snake_case_re.match(word)
1123 if snake_case_re.match(word)
1124 }
1124 }
1125 for word in shortened.keys():
1125 for word in shortened.keys():
1126 if word[:n] == text and word != "__builtins__":
1126 if word[:n] == text and word != "__builtins__":
1127 match_append(shortened[word])
1127 match_append(shortened[word])
1128 return matches
1128 return matches
1129
1129
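# Sketch (invented name) of the snake_case abbreviation matching implemented above:
# typing the first letter of each underscore-separated part also matches.
#
#     >>> c = Completer(namespace={"display_vertical_margin": 1})
#     >>> c.global_matches("d_v")
#     ['display_vertical_margin']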
1130 def attr_matches(self, text):
1130 def attr_matches(self, text):
1131 """Compute matches when text contains a dot.
1131 """Compute matches when text contains a dot.
1132
1132
1133 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 Assuming the text is of the form NAME.NAME....[NAME], and is
1134 evaluatable in self.namespace or self.global_namespace, it will be
1134 evaluatable in self.namespace or self.global_namespace, it will be
1135 evaluated and its attributes (as revealed by dir()) are used as
1135 evaluated and its attributes (as revealed by dir()) are used as
1136 possible completions. (For class instances, class members are
1136 possible completions. (For class instances, class members are
1137 also considered.)
1137 also considered.)
1138
1138
1139 WARNING: this can still invoke arbitrary C code, if an object
1139 WARNING: this can still invoke arbitrary C code, if an object
1140 with a __getattr__ hook is evaluated.
1140 with a __getattr__ hook is evaluated.
1141
1141
1142 """
1142 """
1143 return self._attr_matches(text)[0]
1143 return self._attr_matches(text)[0]
1144
1144
1145 # we do simple attribute matching with normal identifiers.
1145 # we do simple attribute matching with normal identifiers.
1146 _ATTR_MATCH_RE = re.compile(r"(.+)\.(\w*)$")
1146 _ATTR_MATCH_RE = re.compile(r"(.+)\.(\w*)$")
1147
1147
1148 def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
1148 def _attr_matches(
1149
1149 self, text: str, include_prefix: bool = True
1150 ) -> Tuple[Sequence[str], str]:
1150 m2 = self._ATTR_MATCH_RE.match(self.line_buffer)
1151 m2 = self._ATTR_MATCH_RE.match(self.line_buffer)
1151 if not m2:
1152 if not m2:
1152 return [], ""
1153 return [], ""
1153 expr, attr = m2.group(1, 2)
1154 expr, attr = m2.group(1, 2)
1154
1155
1155 obj = self._evaluate_expr(expr)
1156 obj = self._evaluate_expr(expr)
1156
1157
1157 if obj is not_found:
1158 if obj is not_found:
1158 return [], ""
1159 return [], ""
1159
1160
1160 if self.limit_to__all__ and hasattr(obj, '__all__'):
1161 if self.limit_to__all__ and hasattr(obj, '__all__'):
1161 words = get__all__entries(obj)
1162 words = get__all__entries(obj)
1162 else:
1163 else:
1163 words = dir2(obj)
1164 words = dir2(obj)
1164
1165
1165 try:
1166 try:
1166 words = generics.complete_object(obj, words)
1167 words = generics.complete_object(obj, words)
1167 except TryNext:
1168 except TryNext:
1168 pass
1169 pass
1169 except AssertionError:
1170 except AssertionError:
1170 raise
1171 raise
1171 except Exception:
1172 except Exception:
1172 # Silence errors from completion function
1173 # Silence errors from completion function
1173 pass
1174 pass
1174 # Build match list to return
1175 # Build match list to return
1175 n = len(attr)
1176 n = len(attr)
1176
1177
1177 # Note: ideally we would just return words here and the prefix
1178 # Note: ideally we would just return words here and the prefix
1178 # reconciliator would know that we intend to append to rather than
1179 # reconciliator would know that we intend to append to rather than
1179 # replace the input text; this requires refactoring to return the range
1180 # replace the input text; this requires refactoring to return the range
1180 # which ought to be replaced (as jedi does).
1181 # which ought to be replaced (as jedi does).
1181 if include_prefix:
1182 if include_prefix:
1182 tokens = _parse_tokens(expr)
1183 tokens = _parse_tokens(expr)
1183 rev_tokens = reversed(tokens)
1184 rev_tokens = reversed(tokens)
1184 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1185 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1185 name_turn = True
1186 name_turn = True
1186
1187
1187 parts = []
1188 parts = []
1188 for token in rev_tokens:
1189 for token in rev_tokens:
1189 if token.type in skip_over:
1190 if token.type in skip_over:
1190 continue
1191 continue
1191 if token.type == tokenize.NAME and name_turn:
1192 if token.type == tokenize.NAME and name_turn:
1192 parts.append(token.string)
1193 parts.append(token.string)
1193 name_turn = False
1194 name_turn = False
1194 elif (
1195 elif (
1195 token.type == tokenize.OP and token.string == "." and not name_turn
1196 token.type == tokenize.OP and token.string == "." and not name_turn
1196 ):
1197 ):
1197 parts.append(token.string)
1198 parts.append(token.string)
1198 name_turn = True
1199 name_turn = True
1199 else:
1200 else:
1200 # stop at the first token that is neither a name nor a dot
1201 # stop at the first token that is neither a name nor a dot
1201 break
1202 break
1202
1203
1203 prefix_after_space = "".join(reversed(parts))
1204 prefix_after_space = "".join(reversed(parts))
1204 else:
1205 else:
1205 prefix_after_space = ""
1206 prefix_after_space = ""
1206
1207
1207 return (
1208 return (
1208 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1209 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1209 "." + attr,
1210 "." + attr,
1210 )
1211 )
1211
1212
1212 def _trim_expr(self, code: str) -> str:
1213 def _trim_expr(self, code: str) -> str:
1213 """
1214 """
1214 Trim the code until it is a valid expression and not a tuple;
1215 Trim the code until it is a valid expression and not a tuple;
1215
1216
1216 return the trimmed expression for guarded_eval.
1217 return the trimmed expression for guarded_eval.
1217 """
1218 """
1218 while code:
1219 while code:
1219 code = code[1:]
1220 code = code[1:]
1220 try:
1221 try:
1221 res = ast.parse(code)
1222 res = ast.parse(code)
1222 except SyntaxError:
1223 except SyntaxError:
1223 continue
1224 continue
1224
1225
1225 assert res is not None
1226 assert res is not None
1226 if len(res.body) != 1:
1227 if len(res.body) != 1:
1227 continue
1228 continue
1228 expr = res.body[0].value
1229 expr = res.body[0].value
1229 if isinstance(expr, ast.Tuple) and not code[-1] == ")":
1230 if isinstance(expr, ast.Tuple) and not code[-1] == ")":
1230 # we skip an implicit tuple, like when trimming `fun(a,b`<completion>
1231 # we skip an implicit tuple, like when trimming `fun(a,b`<completion>
1231 # as `a,b` would be a tuple, and we actually expect to get only `b`
1232 # as `a,b` would be a tuple, and we actually expect to get only `b`
1232 continue
1233 continue
1233 return code
1234 return code
1234 return ""
1235 return ""
1235
1236
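# Sketch (invented inputs) of the trimming above: characters are dropped from the
# left until what remains parses as a single, non-tuple expression.
#
#     >>> Completer()._trim_expr("(d")         # unclosed "(" is stripped
#     'd'
#     >>> Completer()._trim_expr("fun(a,b")    # the implicit tuple "a,b" is skipped
#     'b'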
1236 def _evaluate_expr(self, expr):
1237 def _evaluate_expr(self, expr):
1237 obj = not_found
1238 obj = not_found
1238 done = False
1239 done = False
1239 while not done and expr:
1240 while not done and expr:
1240 try:
1241 try:
1241 obj = guarded_eval(
1242 obj = guarded_eval(
1242 expr,
1243 expr,
1243 EvaluationContext(
1244 EvaluationContext(
1244 globals=self.global_namespace,
1245 globals=self.global_namespace,
1245 locals=self.namespace,
1246 locals=self.namespace,
1246 evaluation=self.evaluation,
1247 evaluation=self.evaluation,
1247 ),
1248 ),
1248 )
1249 )
1249 done = True
1250 done = True
1250 except Exception as e:
1251 except Exception as e:
1251 if self.debug:
1252 if self.debug:
1252 print("Evaluation exception", e)
1253 print("Evaluation exception", e)
1253 # trim the expression to remove any invalid prefix
1254 # trim the expression to remove any invalid prefix
1254 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1255 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1255 # where parenthesis is not closed.
1256 # where parenthesis is not closed.
1256 # TODO: make this faster by reusing parts of the computation?
1257 # TODO: make this faster by reusing parts of the computation?
1257 expr = self._trim_expr(expr)
1258 expr = self._trim_expr(expr)
1258 return obj
1259 return obj
1259
1260
1260 def get__all__entries(obj):
1261 def get__all__entries(obj):
1261 """returns the strings in the __all__ attribute"""
1262 """returns the strings in the __all__ attribute"""
1262 try:
1263 try:
1263 words = getattr(obj, '__all__')
1264 words = getattr(obj, '__all__')
1264 except Exception:
1265 except Exception:
1265 return []
1266 return []
1266
1267
1267 return [w for w in words if isinstance(w, str)]
1268 return [w for w in words if isinstance(w, str)]
1268
1269
1269
1270
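# Sketch (invented module-like object) of the helper above:
#
#     >>> import types
#     >>> mod = types.SimpleNamespace(__all__=["foo", "bar", 3])
#     >>> get__all__entries(mod)          # non-string entries are dropped
#     ['foo', 'bar']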
1270 class _DictKeyState(enum.Flag):
1271 class _DictKeyState(enum.Flag):
1271 """Represent state of the key match in context of other possible matches.
1272 """Represent state of the key match in context of other possible matches.
1272
1273
1273 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1274 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1274 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1275 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1275 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1276 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1276 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | END_OF_TUPLE}`
1277 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | END_OF_TUPLE}`
1277 """
1278 """
1278
1279
1279 BASELINE = 0
1280 BASELINE = 0
1280 END_OF_ITEM = enum.auto()
1281 END_OF_ITEM = enum.auto()
1281 END_OF_TUPLE = enum.auto()
1282 END_OF_TUPLE = enum.auto()
1282 IN_TUPLE = enum.auto()
1283 IN_TUPLE = enum.auto()
1283
1284
1284
1285
1285 def _parse_tokens(c):
1286 def _parse_tokens(c):
1286 """Parse tokens even if there is an error."""
1287 """Parse tokens even if there is an error."""
1287 tokens = []
1288 tokens = []
1288 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1289 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1289 while True:
1290 while True:
1290 try:
1291 try:
1291 tokens.append(next(token_generator))
1292 tokens.append(next(token_generator))
1292 except tokenize.TokenError:
1293 except tokenize.TokenError:
1293 return tokens
1294 return tokens
1294 except StopIteration:
1295 except StopIteration:
1295 return tokens
1296 return tokens
1296
1297
1297
1298
1298 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1299 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1299 """Match any valid Python numeric literal in a prefix of dictionary keys.
1300 """Match any valid Python numeric literal in a prefix of dictionary keys.
1300
1301
1301 References:
1302 References:
1302 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1303 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1303 - https://docs.python.org/3/library/tokenize.html
1304 - https://docs.python.org/3/library/tokenize.html
1304 """
1305 """
1305 if prefix[-1].isspace():
1306 if prefix[-1].isspace():
1306 # if user typed a space we do not have anything to complete
1307 # if user typed a space we do not have anything to complete
1307 # even if there was a valid number token before
1308 # even if there was a valid number token before
1308 return None
1309 return None
1309 tokens = _parse_tokens(prefix)
1310 tokens = _parse_tokens(prefix)
1310 rev_tokens = reversed(tokens)
1311 rev_tokens = reversed(tokens)
1311 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1312 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1312 number = None
1313 number = None
1313 for token in rev_tokens:
1314 for token in rev_tokens:
1314 if token.type in skip_over:
1315 if token.type in skip_over:
1315 continue
1316 continue
1316 if number is None:
1317 if number is None:
1317 if token.type == tokenize.NUMBER:
1318 if token.type == tokenize.NUMBER:
1318 number = token.string
1319 number = token.string
1319 continue
1320 continue
1320 else:
1321 else:
1321 # we did not match a number
1322 # we did not match a number
1322 return None
1323 return None
1323 if token.type == tokenize.OP:
1324 if token.type == tokenize.OP:
1324 if token.string == ",":
1325 if token.string == ",":
1325 break
1326 break
1326 if token.string in {"+", "-"}:
1327 if token.string in {"+", "-"}:
1327 number = token.string + number
1328 number = token.string + number
1328 else:
1329 else:
1329 return None
1330 return None
1330 return number
1331 return number
1331
1332
1332
1333
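# Sketch (invented prefixes) of the matching above: the trailing number token,
# with an optional sign, is extracted from a dict-key prefix.
#
#     >>> _match_number_in_dict_key_prefix("-12")
#     '-12'
#     >>> _match_number_in_dict_key_prefix("foo") is None    # not a number
#     True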
1333 _INT_FORMATS = {
1334 _INT_FORMATS = {
1334 "0b": bin,
1335 "0b": bin,
1335 "0o": oct,
1336 "0o": oct,
1336 "0x": hex,
1337 "0x": hex,
1337 }
1338 }
1338
1339
1339
1340
1340 def match_dict_keys(
1341 def match_dict_keys(
1341 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1342 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1342 prefix: str,
1343 prefix: str,
1343 delims: str,
1344 delims: str,
1344 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1345 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1345 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1346 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1346 """Used by dict_key_matches, matching the prefix to a list of keys
1347 """Used by dict_key_matches, matching the prefix to a list of keys
1347
1348
1348 Parameters
1349 Parameters
1349 ----------
1350 ----------
1350 keys
1351 keys
1351 list of keys in dictionary currently being completed.
1352 list of keys in dictionary currently being completed.
1352 prefix
1353 prefix
1353 Part of the text already typed by the user. E.g. `mydict[b'fo`
1354 Part of the text already typed by the user. E.g. `mydict[b'fo`
1354 delims
1355 delims
1355 String of delimiters to consider when finding the current key.
1356 String of delimiters to consider when finding the current key.
1356 extra_prefix : optional
1357 extra_prefix : optional
1357 Part of the text already typed in multi-key index cases. E.g. for
1358 Part of the text already typed in multi-key index cases. E.g. for
1358 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1359 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1359
1360
1360 Returns
1361 Returns
1361 -------
1362 -------
1362 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1363 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1363 ``quote`` being the quote that needs to be used to close the current string,
1364 ``quote`` being the quote that needs to be used to close the current string,
1364 ``token_start`` the position where the replacement should start occurring,
1365 ``token_start`` the position where the replacement should start occurring,
1365 ``matched`` a dictionary mapping each replacement/completion string to a
1366 ``matched`` a dictionary mapping each replacement/completion string to a
1366 value indicating its key state.
1367 value indicating its key state.
1367 """
1368 """
1368 prefix_tuple = extra_prefix if extra_prefix else ()
1369 prefix_tuple = extra_prefix if extra_prefix else ()
1369
1370
1370 prefix_tuple_size = sum(
1371 prefix_tuple_size = sum(
1371 [
1372 [
1372 # for pandas, do not count slices as taking space
1373 # for pandas, do not count slices as taking space
1373 not isinstance(k, slice)
1374 not isinstance(k, slice)
1374 for k in prefix_tuple
1375 for k in prefix_tuple
1375 ]
1376 ]
1376 )
1377 )
1377 text_serializable_types = (str, bytes, int, float, slice)
1378 text_serializable_types = (str, bytes, int, float, slice)
1378
1379
1379 def filter_prefix_tuple(key):
1380 def filter_prefix_tuple(key):
1380 # Reject too short keys
1381 # Reject too short keys
1381 if len(key) <= prefix_tuple_size:
1382 if len(key) <= prefix_tuple_size:
1382 return False
1383 return False
1383 # Reject keys which cannot be serialised to text
1384 # Reject keys which cannot be serialised to text
1384 for k in key:
1385 for k in key:
1385 if not isinstance(k, text_serializable_types):
1386 if not isinstance(k, text_serializable_types):
1386 return False
1387 return False
1387 # Reject keys that do not match the prefix
1388 # Reject keys that do not match the prefix
1388 for k, pt in zip(key, prefix_tuple):
1389 for k, pt in zip(key, prefix_tuple):
1389 if k != pt and not isinstance(pt, slice):
1390 if k != pt and not isinstance(pt, slice):
1390 return False
1391 return False
1391 # All checks passed!
1392 # All checks passed!
1392 return True
1393 return True
1393
1394
1394 filtered_key_is_final: Dict[Union[str, bytes, int, float], _DictKeyState] = (
1395 filtered_key_is_final: Dict[Union[str, bytes, int, float], _DictKeyState] = (
1395 defaultdict(lambda: _DictKeyState.BASELINE)
1396 defaultdict(lambda: _DictKeyState.BASELINE)
1396 )
1397 )
1397
1398
1398 for k in keys:
1399 for k in keys:
1399 # If at least one of the matches is not final, mark as undetermined.
1400 # If at least one of the matches is not final, mark as undetermined.
1400 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1401 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1401 # `111` appears final on first match but is not final on the second.
1402 # `111` appears final on first match but is not final on the second.
1402
1403
1403 if isinstance(k, tuple):
1404 if isinstance(k, tuple):
1404 if filter_prefix_tuple(k):
1405 if filter_prefix_tuple(k):
1405 key_fragment = k[prefix_tuple_size]
1406 key_fragment = k[prefix_tuple_size]
1406 filtered_key_is_final[key_fragment] |= (
1407 filtered_key_is_final[key_fragment] |= (
1407 _DictKeyState.END_OF_TUPLE
1408 _DictKeyState.END_OF_TUPLE
1408 if len(k) == prefix_tuple_size + 1
1409 if len(k) == prefix_tuple_size + 1
1409 else _DictKeyState.IN_TUPLE
1410 else _DictKeyState.IN_TUPLE
1410 )
1411 )
1411 elif prefix_tuple_size > 0:
1412 elif prefix_tuple_size > 0:
1412 # we are completing a tuple but this key is not a tuple,
1413 # we are completing a tuple but this key is not a tuple,
1413 # so we should ignore it
1414 # so we should ignore it
1414 pass
1415 pass
1415 else:
1416 else:
1416 if isinstance(k, text_serializable_types):
1417 if isinstance(k, text_serializable_types):
1417 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1418 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1418
1419
1419 filtered_keys = filtered_key_is_final.keys()
1420 filtered_keys = filtered_key_is_final.keys()
1420
1421
1421 if not prefix:
1422 if not prefix:
1422 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1423 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1423
1424
1424 quote_match = re.search("(?:\"|')", prefix)
1425 quote_match = re.search("(?:\"|')", prefix)
1425 is_user_prefix_numeric = False
1426 is_user_prefix_numeric = False
1426
1427
1427 if quote_match:
1428 if quote_match:
1428 quote = quote_match.group()
1429 quote = quote_match.group()
1429 valid_prefix = prefix + quote
1430 valid_prefix = prefix + quote
1430 try:
1431 try:
1431 prefix_str = literal_eval(valid_prefix)
1432 prefix_str = literal_eval(valid_prefix)
1432 except Exception:
1433 except Exception:
1433 return "", 0, {}
1434 return "", 0, {}
1434 else:
1435 else:
1435 # If it does not look like a string, let's assume
1436 # If it does not look like a string, let's assume
1436 # we are dealing with a number or variable.
1437 # we are dealing with a number or variable.
1437 number_match = _match_number_in_dict_key_prefix(prefix)
1438 number_match = _match_number_in_dict_key_prefix(prefix)
1438
1439
1439 # We do not want the key matcher to suggest variable names, so bail out:
1440 # We do not want the key matcher to suggest variable names, so bail out:
1440 if number_match is None:
1441 if number_match is None:
1441 # The alternative would be to assume that the user forgot the quote
1442 # The alternative would be to assume that the user forgot the quote
1442 # and if the substring matches, suggest adding it at the start.
1443 # and if the substring matches, suggest adding it at the start.
1443 return "", 0, {}
1444 return "", 0, {}
1444
1445
1445 prefix_str = number_match
1446 prefix_str = number_match
1446 is_user_prefix_numeric = True
1447 is_user_prefix_numeric = True
1447 quote = ""
1448 quote = ""
1448
1449
1449 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1450 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1450 token_match = re.search(pattern, prefix, re.UNICODE)
1451 token_match = re.search(pattern, prefix, re.UNICODE)
1451 assert token_match is not None # silence mypy
1452 assert token_match is not None # silence mypy
1452 token_start = token_match.start()
1453 token_start = token_match.start()
1453 token_prefix = token_match.group()
1454 token_prefix = token_match.group()
1454
1455
1455 matched: Dict[str, _DictKeyState] = {}
1456 matched: Dict[str, _DictKeyState] = {}
1456
1457
1457 str_key: Union[str, bytes]
1458 str_key: Union[str, bytes]
1458
1459
1459 for key in filtered_keys:
1460 for key in filtered_keys:
1460 if isinstance(key, (int, float)):
1461 if isinstance(key, (int, float)):
1461 # The key is a number: skip it unless the user also typed a number.
1462 # The key is a number: skip it unless the user also typed a number.
1462 if not is_user_prefix_numeric:
1463 if not is_user_prefix_numeric:
1463 continue
1464 continue
1464 str_key = str(key)
1465 str_key = str(key)
1465 if isinstance(key, int):
1466 if isinstance(key, int):
1466 int_base = prefix_str[:2].lower()
1467 int_base = prefix_str[:2].lower()
1467 # if user typed integer using binary/oct/hex notation:
1468 # if user typed integer using binary/oct/hex notation:
1468 if int_base in _INT_FORMATS:
1469 if int_base in _INT_FORMATS:
1469 int_format = _INT_FORMATS[int_base]
1470 int_format = _INT_FORMATS[int_base]
1470 str_key = int_format(key)
1471 str_key = int_format(key)
1471 else:
1472 else:
1472 # The key is a string: skip it if the user typed a number.
1473 # The key is a string: skip it if the user typed a number.
1473 if is_user_prefix_numeric:
1474 if is_user_prefix_numeric:
1474 continue
1475 continue
1475 str_key = key
1476 str_key = key
1476 try:
1477 try:
1477 if not str_key.startswith(prefix_str):
1478 if not str_key.startswith(prefix_str):
1478 continue
1479 continue
1479 except (AttributeError, TypeError, UnicodeError):
1480 except (AttributeError, TypeError, UnicodeError):
1480 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1481 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1481 continue
1482 continue
1482
1483
1483 # reformat remainder of key to begin with prefix
1484 # reformat remainder of key to begin with prefix
1484 rem = str_key[len(prefix_str) :]
1485 rem = str_key[len(prefix_str) :]
1485 # force repr wrapped in '
1486 # force repr wrapped in '
1486 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1487 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1487 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1488 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1488 if quote == '"':
1489 if quote == '"':
1489 # The entered prefix is quoted with ",
1490 # The entered prefix is quoted with ",
1490 # but the match is quoted with '.
1491 # but the match is quoted with '.
1491 # A contained " hence needs escaping for comparison:
1492 # A contained " hence needs escaping for comparison:
1492 rem_repr = rem_repr.replace('"', '\\"')
1493 rem_repr = rem_repr.replace('"', '\\"')
1493
1494
1494 # then reinsert prefix from start of token
1495 # then reinsert prefix from start of token
1495 match = "%s%s" % (token_prefix, rem_repr)
1496 match = "%s%s" % (token_prefix, rem_repr)
1496
1497
1497 matched[match] = filtered_key_is_final[key]
1498 matched[match] = filtered_key_is_final[key]
1498 return quote, token_start, matched
1499 return quote, token_start, matched
1499
1500
1500
1501
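# Illustrative sketch (invented keys): given existing keys and the text typed after
# the opening bracket, match_dict_keys returns the quote needed to close the
# string, where the replacement starts, and the candidate completions.
#
#     >>> quote, start, matched = match_dict_keys(["foo", "food"], "'fo", DELIMS)
#     >>> quote, start, sorted(matched)
#     ("'", 1, ['foo', 'food'])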
1501 def cursor_to_position(text:str, line:int, column:int)->int:
1502 def cursor_to_position(text:str, line:int, column:int)->int:
1502 """
1503 """
1503 Convert the (line,column) position of the cursor in text to an offset in a
1504 Convert the (line,column) position of the cursor in text to an offset in a
1504 string.
1505 string.
1505
1506
1506 Parameters
1507 Parameters
1507 ----------
1508 ----------
1508 text : str
1509 text : str
1509 The text in which to calculate the cursor offset
1510 The text in which to calculate the cursor offset
1510 line : int
1511 line : int
1511 Line of the cursor; 0-indexed
1512 Line of the cursor; 0-indexed
1512 column : int
1513 column : int
1513 Column of the cursor; 0-indexed
1514 Column of the cursor; 0-indexed
1514
1515
1515 Returns
1516 Returns
1516 -------
1517 -------
1517 Position of the cursor in ``text``, 0-indexed.
1518 Position of the cursor in ``text``, 0-indexed.
1518
1519
1519 See Also
1520 See Also
1520 --------
1521 --------
1521 position_to_cursor : reciprocal of this function
1522 position_to_cursor : reciprocal of this function
1522
1523
1523 """
1524 """
1524 lines = text.split('\n')
1525 lines = text.split('\n')
1525 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1526 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1526
1527
1527 return sum(len(line) + 1 for line in lines[:line]) + column
1528 return sum(len(line) + 1 for line in lines[:line]) + column
1528
1529
1529 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1530 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1530 """
1531 """
1531 Convert the position of the cursor in text (0 indexed) to a line
1532 Convert the position of the cursor in text (0 indexed) to a line
1532 number (0-indexed) and a column number (0-indexed) pair
1533 number (0-indexed) and a column number (0-indexed) pair
1533
1534
1534 Position should be a valid position in ``text``.
1535 Position should be a valid position in ``text``.
1535
1536
1536 Parameters
1537 Parameters
1537 ----------
1538 ----------
1538 text : str
1539 text : str
1539 The text in which to calculate the cursor offset
1540 The text in which to calculate the cursor offset
1540 offset : int
1541 offset : int
1541 Position of the cursor in ``text``, 0-indexed.
1542 Position of the cursor in ``text``, 0-indexed.
1542
1543
1543 Returns
1544 Returns
1544 -------
1545 -------
1545 (line, column) : (int, int)
1546 (line, column) : (int, int)
1546 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1547 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1547
1548
1548 See Also
1549 See Also
1549 --------
1550 --------
1550 cursor_to_position : reciprocal of this function
1551 cursor_to_position : reciprocal of this function
1551
1552
1552 """
1553 """
1553
1554
1554 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1555 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1555
1556
1556 before = text[:offset]
1557 before = text[:offset]
1557 blines = before.split('\n')  # note: splitlines() would drop a trailing '\n'
1558 blines = before.split('\n')  # note: splitlines() would drop a trailing '\n'
1558 line = before.count('\n')
1559 line = before.count('\n')
1559 col = len(blines[-1])
1560 col = len(blines[-1])
1560 return line, col
1561 return line, col
1561
1562
1562
1563
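# Sketch (invented text) showing that the two helpers above are reciprocal:
#
#     >>> text = "ab\ncd"
#     >>> cursor_to_position(text, 1, 1)    # line 1, column 1 is the offset of "d"
#     4
#     >>> position_to_cursor(text, 4)
#     (1, 1)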
1563 def _safe_isinstance(obj, module, class_name, *attrs):
1564 def _safe_isinstance(obj, module, class_name, *attrs):
1564 """Checks if obj is an instance of module.class_name if loaded
1565 """Checks if obj is an instance of module.class_name if loaded
1565 """
1566 """
1566 if module in sys.modules:
1567 if module in sys.modules:
1567 m = sys.modules[module]
1568 m = sys.modules[module]
1568 for attr in [class_name, *attrs]:
1569 for attr in [class_name, *attrs]:
1569 m = getattr(m, attr)
1570 m = getattr(m, attr)
1570 return isinstance(obj, m)
1571 return isinstance(obj, m)
1571
1572
1572
1573
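# Sketch of the guarded check above: the isinstance test only runs for modules that
# are already imported, so completion never triggers an import.
#
#     >>> _safe_isinstance({}, "builtins", "dict")
#     True
#     >>> _safe_isinstance({}, "module_that_was_never_imported", "Anything")  # None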
1573 @context_matcher()
1574 @context_matcher()
1574 def back_unicode_name_matcher(context: CompletionContext):
1575 def back_unicode_name_matcher(context: CompletionContext):
1575 """Match Unicode characters back to Unicode name
1576 """Match Unicode characters back to Unicode name
1576
1577
1577 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1578 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1578 """
1579 """
1579 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1580 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1580 return _convert_matcher_v1_result_to_v2(
1581 return _convert_matcher_v1_result_to_v2(
1581 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1582 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1582 )
1583 )
1583
1584
1584
1585
1585 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1586 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1586 """Match Unicode characters back to Unicode name
1587 """Match Unicode characters back to Unicode name
1587
1588
1588 This does ``β˜ƒ`` -> ``\\snowman``
1589 This does ``β˜ƒ`` -> ``\\snowman``
1589
1590
1590 Note that snowman is not a valid python3 combining character but will be expanded.
1591 Note that snowman is not a valid python3 combining character but will be expanded.
1591 It will, however, not be recombined into the snowman character by the completion machinery.
1592 It will, however, not be recombined into the snowman character by the completion machinery.
1592 
1593 
1593 This will not back-complete standard escape sequences like \\n, \\b ... either.
1594 This will not back-complete standard escape sequences like \\n, \\b ... either.
1594
1595
1595 .. deprecated:: 8.6
1596 .. deprecated:: 8.6
1596 You can use :meth:`back_unicode_name_matcher` instead.
1597 You can use :meth:`back_unicode_name_matcher` instead.
1597
1598
1598 Returns
1599 Returns
1599 -------
1600 -------
1600
1601
1601 Return a tuple with two elements:
1602 Return a tuple with two elements:
1602
1603
1603 - The Unicode character that was matched (preceded with a backslash), or
1604 - The Unicode character that was matched (preceded with a backslash), or
1604 empty string,
1605 empty string,
1605 - a sequence (of length 1) with the name of the matched Unicode character,
1606 - a sequence (of length 1) with the name of the matched Unicode character,
1606 preceded by a backslash, or empty if there is no match.
1607 preceded by a backslash, or empty if there is no match.
1607 """
1608 """
1608 if len(text)<2:
1609 if len(text)<2:
1609 return '', ()
1610 return '', ()
1610 maybe_slash = text[-2]
1611 maybe_slash = text[-2]
1611 if maybe_slash != '\\':
1612 if maybe_slash != '\\':
1612 return '', ()
1613 return '', ()
1613
1614
1614 char = text[-1]
1615 char = text[-1]
1615 # no expand on quote for completion in strings.
1616 # no expand on quote for completion in strings.
1616 # nor backcomplete standard ascii keys
1617 # nor backcomplete standard ascii keys
1617 if char in string.ascii_letters or char in ('"',"'"):
1618 if char in string.ascii_letters or char in ('"',"'"):
1618 return '', ()
1619 return '', ()
1619 try :
1620 try :
1620 unic = unicodedata.name(char)
1621 unic = unicodedata.name(char)
1621 return '\\'+char,('\\'+unic,)
1622 return '\\'+char,('\\'+unic,)
1622 except KeyError:
1623 except KeyError:
1623 pass
1624 pass
1624 return '', ()
1625 return '', ()
1625
1626
1626
1627
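# Sketch of the backward completion above, using the snowman example from the
# docstring (U+2603 is named SNOWMAN):
#
#     >>> back_unicode_name_matches("\\β˜ƒ")
#     ('\\β˜ƒ', ('\\SNOWMAN',))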
1627 @context_matcher()
1628 @context_matcher()
1628 def back_latex_name_matcher(context: CompletionContext):
1629 def back_latex_name_matcher(context: CompletionContext):
1629 """Match latex characters back to unicode name
1630 """Match latex characters back to unicode name
1630
1631
1631 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1632 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1632 """
1633 """
1633 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1634 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1634 return _convert_matcher_v1_result_to_v2(
1635 return _convert_matcher_v1_result_to_v2(
1635 matches, type="latex", fragment=fragment, suppress_if_matches=True
1636 matches, type="latex", fragment=fragment, suppress_if_matches=True
1636 )
1637 )
1637
1638
1638
1639
1639 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1640 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1640 """Match latex characters back to unicode name
1641 """Match latex characters back to unicode name
1641
1642
1642 This does ``\\β„΅`` -> ``\\aleph``
1643 This does ``\\β„΅`` -> ``\\aleph``
1643
1644
1644 .. deprecated:: 8.6
1645 .. deprecated:: 8.6
1645 You can use :meth:`back_latex_name_matcher` instead.
1646 You can use :meth:`back_latex_name_matcher` instead.
1646 """
1647 """
1647 if len(text)<2:
1648 if len(text)<2:
1648 return '', ()
1649 return '', ()
1649 maybe_slash = text[-2]
1650 maybe_slash = text[-2]
1650 if maybe_slash != '\\':
1651 if maybe_slash != '\\':
1651 return '', ()
1652 return '', ()
1652
1653
1653
1654
1654 char = text[-1]
1655 char = text[-1]
1655 # no expand on quote for completion in strings.
1656 # no expand on quote for completion in strings.
1656 # nor backcomplete standard ascii keys
1657 # nor backcomplete standard ascii keys
1657 if char in string.ascii_letters or char in ('"',"'"):
1658 if char in string.ascii_letters or char in ('"',"'"):
1658 return '', ()
1659 return '', ()
1659 try :
1660 try :
1660 latex = reverse_latex_symbol[char]
1661 latex = reverse_latex_symbol[char]
1661 # prefix with '\\' so that the backslash is replaced as well
1662 # prefix with '\\' so that the backslash is replaced as well
1662 return '\\'+char,[latex]
1663 return '\\'+char,[latex]
1663 except KeyError:
1664 except KeyError:
1664 pass
1665 pass
1665 return '', ()
1666 return '', ()
1666
1667
1667
1668
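# Sketch of the backward latex completion above, using the aleph example from the
# docstring:
#
#     >>> back_latex_name_matches("\\β„΅")
#     ('\\β„΅', ['\\aleph'])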
1668 def _formatparamchildren(parameter) -> str:
1669 def _formatparamchildren(parameter) -> str:
1669 """
1670 """
1670 Get parameter name and value from Jedi Private API
1671 Get parameter name and value from Jedi Private API
1671
1672
1672 Jedi does not expose a simple way to get `param=value` from its API.
1673 Jedi does not expose a simple way to get `param=value` from its API.
1673
1674
1674 Parameters
1675 Parameters
1675 ----------
1676 ----------
1676 parameter
1677 parameter
1677 Jedi's function `Param`
1678 Jedi's function `Param`
1678
1679
1679 Returns
1680 Returns
1680 -------
1681 -------
1681 A string like 'a', 'b=1', '*args', '**kwargs'
1682 A string like 'a', 'b=1', '*args', '**kwargs'
1682
1683
1683 """
1684 """
1684 description = parameter.description
1685 description = parameter.description
1685 if not description.startswith('param '):
1686 if not description.startswith('param '):
1686 raise ValueError('Jedi function parameter description has changed format. '
1687 raise ValueError('Jedi function parameter description has changed format. '
1687 'Expected "param ...", found %r.' % description)
1688 'Expected "param ...", found %r.' % description)
1688 return description[6:]
1689 return description[6:]
1689
1690
1690 def _make_signature(completion)-> str:
1691 def _make_signature(completion)-> str:
1691 """
1692 """
1692 Make the signature from a jedi completion
1693 Make the signature from a jedi completion
1693
1694
1694 Parameters
1695 Parameters
1695 ----------
1696 ----------
1696 completion : jedi.Completion
1697 completion : jedi.Completion
1697 the completion object to build the signature from
1698 the completion object to build the signature from
1698
1699
1699 Returns
1700 Returns
1700 -------
1701 -------
1702 a string consisting of the function signature, with the parentheses but
1703 a string consisting of the function signature, with the parentheses but
1703 without the function name. Example:
1704 without the function name. Example:
1703 `(a, *args, b=1, **kwargs)`
1704 `(a, *args, b=1, **kwargs)`
1704
1705
1705 """
1706 """
1706
1707
1707 # it looks like this might work on jedi 0.17
1708 # it looks like this might work on jedi 0.17
1708 if hasattr(completion, 'get_signatures'):
1709 if hasattr(completion, 'get_signatures'):
1709 signatures = completion.get_signatures()
1710 signatures = completion.get_signatures()
1710 if not signatures:
1711 if not signatures:
1711 return '(?)'
1712 return '(?)'
1712
1713
1713 c0 = completion.get_signatures()[0]
1714 c0 = completion.get_signatures()[0]
1714 return '('+c0.to_string().split('(', maxsplit=1)[1]
1715 return '('+c0.to_string().split('(', maxsplit=1)[1]
1715
1716
1716 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1717 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1717 for p in signature.defined_names()) if f])
1718 for p in signature.defined_names()) if f])
1718
1719
1719
1720
1720 _CompleteResult = Dict[str, MatcherResult]
1721 _CompleteResult = Dict[str, MatcherResult]
1721
1722
1722
1723
1723 DICT_MATCHER_REGEX = re.compile(
1724 DICT_MATCHER_REGEX = re.compile(
1724 r"""(?x)
1725 r"""(?x)
1725 ( # match dict-referring - or any get item object - expression
1726 ( # match dict-referring - or any get item object - expression
1726 .+
1727 .+
1727 )
1728 )
1728 \[ # open bracket
1729 \[ # open bracket
1729 \s* # and optional whitespace
1730 \s* # and optional whitespace
1730 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1731 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1731 # and slices
1732 # and slices
1732 ((?:(?:
1733 ((?:(?:
1733 (?: # closed string
1734 (?: # closed string
1734 [uUbB]? # string prefix (r not handled)
1735 [uUbB]? # string prefix (r not handled)
1735 (?:
1736 (?:
1736 '(?:[^']|(?<!\\)\\')*'
1737 '(?:[^']|(?<!\\)\\')*'
1737 |
1738 |
1738 "(?:[^"]|(?<!\\)\\")*"
1739 "(?:[^"]|(?<!\\)\\")*"
1739 )
1740 )
1740 )
1741 )
1741 |
1742 |
1742 # capture integers and slices
1743 # capture integers and slices
1743 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1744 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1744 |
1745 |
1745 # integer in bin/hex/oct notation
1746 # integer in bin/hex/oct notation
1746 0[bBxXoO]_?(?:\w|\d)+
1747 0[bBxXoO]_?(?:\w|\d)+
1747 )
1748 )
1748 \s*,\s*
1749 \s*,\s*
1749 )*)
1750 )*)
1750 ((?:
1751 ((?:
1751 (?: # unclosed string
1752 (?: # unclosed string
1752 [uUbB]? # string prefix (r not handled)
1753 [uUbB]? # string prefix (r not handled)
1753 (?:
1754 (?:
1754 '(?:[^']|(?<!\\)\\')*
1755 '(?:[^']|(?<!\\)\\')*
1755 |
1756 |
1756 "(?:[^"]|(?<!\\)\\")*
1757 "(?:[^"]|(?<!\\)\\")*
1757 )
1758 )
1758 )
1759 )
1759 |
1760 |
1760 # unfinished integer
1761 # unfinished integer
1761 (?:[-+]?\d+)
1762 (?:[-+]?\d+)
1762 |
1763 |
1763 # integer in bin/hex/oct notation
1764 # integer in bin/hex/oct notation
1764 0[bBxXoO]_?(?:\w|\d)+
1765 0[bBxXoO]_?(?:\w|\d)+
1765 )
1766 )
1766 )?
1767 )?
1767 $
1768 $
1768 """
1769 """
1769 )
1770 )
1770
1771
1771
1772
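# Sketch (invented expression) of what the regex above captures: the object being
# indexed, the already-closed keys, and the unfinished key fragment.
#
#     >>> DICT_MATCHER_REGEX.match("d['a', 'b").groups()
#     ('d', "'a', ", "'b")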
1772 def _convert_matcher_v1_result_to_v2(
1773 def _convert_matcher_v1_result_to_v2(
1773 matches: Sequence[str],
1774 matches: Sequence[str],
1774 type: str,
1775 type: str,
1775 fragment: Optional[str] = None,
1776 fragment: Optional[str] = None,
1776 suppress_if_matches: bool = False,
1777 suppress_if_matches: bool = False,
1777 ) -> SimpleMatcherResult:
1778 ) -> SimpleMatcherResult:
1778 """Utility to help with transition"""
1779 """Utility to help with transition"""
1779 result = {
1780 result = {
1780 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1781 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1781 "suppress": (True if matches else False) if suppress_if_matches else False,
1782 "suppress": (True if matches else False) if suppress_if_matches else False,
1782 }
1783 }
1783 if fragment is not None:
1784 if fragment is not None:
1784 result["matched_fragment"] = fragment
1785 result["matched_fragment"] = fragment
1785 return cast(SimpleMatcherResult, result)
1786 return cast(SimpleMatcherResult, result)
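# Minimal usage sketch (values are illustrative): wrap a legacy list of string
# matches into the v2 result structure:
#
#     _convert_matcher_v1_result_to_v2(["%time", "%timeit"], type="magic")
#     # -> {'completions': [SimpleCompletion(text='%time', type='magic'),
#     #                     SimpleCompletion(text='%timeit', type='magic')],
#     #     'suppress': False}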
1786
1787
1787
1788
1788 class IPCompleter(Completer):
1789 class IPCompleter(Completer):
1789 """Extension of the completer class with IPython-specific features"""
1790 """Extension of the completer class with IPython-specific features"""
1790
1791
1791 @observe('greedy')
1792 @observe('greedy')
1792 def _greedy_changed(self, change):
1793 def _greedy_changed(self, change):
1793 """update the splitter and readline delims when greedy is changed"""
1794 """update the splitter and readline delims when greedy is changed"""
1794 if change["new"]:
1795 if change["new"]:
1795 self.evaluation = "unsafe"
1796 self.evaluation = "unsafe"
1796 self.auto_close_dict_keys = True
1797 self.auto_close_dict_keys = True
1797 self.splitter.delims = GREEDY_DELIMS
1798 self.splitter.delims = GREEDY_DELIMS
1798 else:
1799 else:
1799 self.evaluation = "limited"
1800 self.evaluation = "limited"
1800 self.auto_close_dict_keys = False
1801 self.auto_close_dict_keys = False
1801 self.splitter.delims = DELIMS
1802 self.splitter.delims = DELIMS
1802
1803
1803 dict_keys_only = Bool(
1804 dict_keys_only = Bool(
1804 False,
1805 False,
1805 help="""
1806 help="""
1806 Whether to show dict key matches only.
1807 Whether to show dict key matches only.
1807
1808
1808 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1809 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1809 """,
1810 """,
1810 )
1811 )
1811
1812
1812 suppress_competing_matchers = UnionTrait(
1813 suppress_competing_matchers = UnionTrait(
1813 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1814 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1814 default_value=None,
1815 default_value=None,
1815 help="""
1816 help="""
1816 Whether to suppress completions from other *Matchers*.
1817 Whether to suppress completions from other *Matchers*.
1817
1818
1818 When set to ``None`` (default) the matchers will attempt to auto-detect
1819 When set to ``None`` (default) the matchers will attempt to auto-detect
1819 whether suppression of other matchers is desirable. For example, when
1820 whether suppression of other matchers is desirable. For example, when
1820 a line starts with ``%`` we expect a magic completion
1821 a line starts with ``%`` we expect a magic completion
1821 to be the only applicable option, and after ``my_dict['`` we usually
1822 to be the only applicable option, and after ``my_dict['`` we usually
1822 expect a completion with an existing dictionary key.
1823 expect a completion with an existing dictionary key.
1823
1824
1824 If you want to disable this heuristic and see completions from all matchers,
1825 If you want to disable this heuristic and see completions from all matchers,
1825 set ``IPCompleter.suppress_competing_matchers = False``.
1826 set ``IPCompleter.suppress_competing_matchers = False``.
1826 To disable the heuristic for specific matchers provide a dictionary mapping:
1827 To disable the heuristic for specific matchers provide a dictionary mapping:
1827 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1828 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1828
1829
1829 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1830 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1830 completions to the set of matchers with the highest priority;
1831 completions to the set of matchers with the highest priority;
1831 this is equivalent to ``IPCompleter.merge_completions`` and
1832 this is equivalent to ``IPCompleter.merge_completions`` and
1832 can be beneficial for performance, but will sometimes omit relevant
1833 can be beneficial for performance, but will sometimes omit relevant
1833 candidates from matchers further down the priority list.
1834 candidates from matchers further down the priority list.
1834 """,
1835 """,
1835 ).tag(config=True)
1836 ).tag(config=True)
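# Configuration sketch (e.g. in ``ipython_config.py``; the matcher identifier
# below is taken from the help text above and is shown for illustration only):
#
#     c.IPCompleter.suppress_competing_matchers = False
#     # or disable the heuristic for a single matcher:
#     c.IPCompleter.suppress_competing_matchers = {
#         "IPCompleter.dict_key_matcher": False,
#     }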
1836
1837
1837 merge_completions = Bool(
1838 merge_completions = Bool(
1838 True,
1839 True,
1839 help="""Whether to merge completion results into a single list
1840 help="""Whether to merge completion results into a single list
1840
1841
1841 If False, only the completion results from the first non-empty
1842 If False, only the completion results from the first non-empty
1842 completer will be returned.
1843 completer will be returned.
1843
1844
1844 As of version 8.6.0, setting the value to ``False`` is an alias for:
1845 As of version 8.6.0, setting the value to ``False`` is an alias for:
1845 ``IPCompleter.suppress_competing_matchers = True``.
1846 ``IPCompleter.suppress_competing_matchers = True``.
1846 """,
1847 """,
1847 ).tag(config=True)
1848 ).tag(config=True)
1848
1849
1849 disable_matchers = ListTrait(
1850 disable_matchers = ListTrait(
1850 Unicode(),
1851 Unicode(),
1851 help="""List of matchers to disable.
1852 help="""List of matchers to disable.
1852
1853
1853 The list should contain matcher identifiers (see :any:`completion_matcher`).
1854 The list should contain matcher identifiers (see :any:`completion_matcher`).
1854 """,
1855 """,
1855 ).tag(config=True)
1856 ).tag(config=True)
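# Configuration sketch (the matcher identifier is illustrative; see
# :any:`completion_matcher` for how identifiers are assigned):
#
#     c.IPCompleter.disable_matchers = ["IPCompleter.dict_key_matcher"]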
1856
1857
1857 omit__names = Enum(
1858 omit__names = Enum(
1858 (0, 1, 2),
1859 (0, 1, 2),
1859 default_value=2,
1860 default_value=2,
1860 help="""Instruct the completer to omit private method names
1861 help="""Instruct the completer to omit private method names
1861
1862
1862 Specifically, when completing on ``object.<tab>``.
1863 Specifically, when completing on ``object.<tab>``.
1863
1864
1864 When 2 [default]: all names that start with '_' will be excluded.
1865 When 2 [default]: all names that start with '_' will be excluded.
1865
1866
1866 When 1: all 'magic' names (``__foo__``) will be excluded.
1867 When 1: all 'magic' names (``__foo__``) will be excluded.
1867
1868
1868 When 0: nothing will be excluded.
1869 When 0: nothing will be excluded.
1869 """
1870 """
1870 ).tag(config=True)
1871 ).tag(config=True)
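# Behaviour sketch (assuming an object exposing both kinds of private names):
#
#     obj.<tab>  with omit__names == 2  ->  hides obj._internal and obj.__dunder__
#     obj.<tab>  with omit__names == 1  ->  hides only obj.__dunder__ style names
#     obj.<tab>  with omit__names == 0  ->  offers every attribute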
1871 limit_to__all__ = Bool(False,
1872 limit_to__all__ = Bool(False,
1872 help="""
1873 help="""
1873 DEPRECATED as of version 5.0.
1874 DEPRECATED as of version 5.0.
1874
1875
1875 Instruct the completer to use __all__ for the completion
1876 Instruct the completer to use __all__ for the completion
1876
1877
1877 Specifically, when completing on ``object.<tab>``.
1878 Specifically, when completing on ``object.<tab>``.
1878
1879
1879 When True: only those names in obj.__all__ will be included.
1880 When True: only those names in obj.__all__ will be included.
1880
1881
1881 When False [default]: the __all__ attribute is ignored
1882 When False [default]: the __all__ attribute is ignored
1882 """,
1883 """,
1883 ).tag(config=True)
1884 ).tag(config=True)
1884
1885
1885 profile_completions = Bool(
1886 profile_completions = Bool(
1886 default_value=False,
1887 default_value=False,
1887 help="If True, emit profiling data for completion subsystem using cProfile."
1888 help="If True, emit profiling data for completion subsystem using cProfile."
1888 ).tag(config=True)
1889 ).tag(config=True)
1889
1890
1890 profiler_output_dir = Unicode(
1891 profiler_output_dir = Unicode(
1891 default_value=".completion_profiles",
1892 default_value=".completion_profiles",
1892 help="Template for path at which to output profile data for completions."
1893 help="Template for path at which to output profile data for completions."
1893 ).tag(config=True)
1894 ).tag(config=True)
1894
1895
1895 @observe('limit_to__all__')
1896 @observe('limit_to__all__')
1896 def _limit_to_all_changed(self, change):
1897 def _limit_to_all_changed(self, change):
1897 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1898 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1898 'value has been deprecated since IPython 5.0; it will be made to have '
1899 'value has been deprecated since IPython 5.0; it will be made to have '
1899 'no effect and then removed in a future version of IPython.',
1900 'no effect and then removed in a future version of IPython.',
1900 UserWarning)
1901 UserWarning)
1901
1902
1902 def __init__(
1903 def __init__(
1903 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1904 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1904 ):
1905 ):
1905 """IPCompleter() -> completer
1906 """IPCompleter() -> completer
1906
1907
1907 Return a completer object.
1908 Return a completer object.
1908
1909
1909 Parameters
1910 Parameters
1910 ----------
1911 ----------
1911 shell
1912 shell
1912 a pointer to the ipython shell itself. This is needed
1913 a pointer to the ipython shell itself. This is needed
1913 because this completer knows about magic functions, and those can
1914 because this completer knows about magic functions, and those can
1914 only be accessed via the ipython instance.
1915 only be accessed via the ipython instance.
1915 namespace : dict, optional
1916 namespace : dict, optional
1916 an optional dict where completions are performed.
1917 an optional dict where completions are performed.
1917 global_namespace : dict, optional
1918 global_namespace : dict, optional
1918 secondary optional dict for completions, to
1919 secondary optional dict for completions, to
1919 handle cases (such as IPython embedded inside functions) where
1920 handle cases (such as IPython embedded inside functions) where
1920 both Python scopes are visible.
1921 both Python scopes are visible.
1921 config : Config
1922 config : Config
1922 traitlets config object
1923 traitlets config object
1923 **kwargs
1924 **kwargs
1924 passed to super class unmodified.
1925 passed to super class unmodified.
1925 """
1926 """
1926
1927
1927 self.magic_escape = ESC_MAGIC
1928 self.magic_escape = ESC_MAGIC
1928 self.splitter = CompletionSplitter()
1929 self.splitter = CompletionSplitter()
1929
1930
1930 # _greedy_changed() depends on splitter and readline being defined:
1931 # _greedy_changed() depends on splitter and readline being defined:
1931 super().__init__(
1932 super().__init__(
1932 namespace=namespace,
1933 namespace=namespace,
1933 global_namespace=global_namespace,
1934 global_namespace=global_namespace,
1934 config=config,
1935 config=config,
1935 **kwargs,
1936 **kwargs,
1936 )
1937 )
1937
1938
1938 # List where completion matches will be stored
1939 # List where completion matches will be stored
1939 self.matches = []
1940 self.matches = []
1940 self.shell = shell
1941 self.shell = shell
1941 # Regexp to split filenames with spaces in them
1942 # Regexp to split filenames with spaces in them
1942 self.space_name_re = re.compile(r'([^\\] )')
1943 self.space_name_re = re.compile(r'([^\\] )')
1943 # Hold a local ref. to glob.glob for speed
1944 # Hold a local ref. to glob.glob for speed
1944 self.glob = glob.glob
1945 self.glob = glob.glob
1945
1946
1946 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1947 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1947 # buffers, to avoid completion problems.
1948 # buffers, to avoid completion problems.
1948 term = os.environ.get('TERM','xterm')
1949 term = os.environ.get('TERM','xterm')
1949 self.dumb_terminal = term in ['dumb','emacs']
1950 self.dumb_terminal = term in ['dumb','emacs']
1950
1951
1951 # Special handling of backslashes needed in win32 platforms
1952 # Special handling of backslashes needed in win32 platforms
1952 if sys.platform == "win32":
1953 if sys.platform == "win32":
1953 self.clean_glob = self._clean_glob_win32
1954 self.clean_glob = self._clean_glob_win32
1954 else:
1955 else:
1955 self.clean_glob = self._clean_glob
1956 self.clean_glob = self._clean_glob
1956
1957
1957 #regexp to parse docstring for function signature
1958 #regexp to parse docstring for function signature
1958 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1959 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1959 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1960 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1960 #use this if positional argument name is also needed
1961 #use this if positional argument name is also needed
1961 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1962 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1962
1963
1963 self.magic_arg_matchers = [
1964 self.magic_arg_matchers = [
1964 self.magic_config_matcher,
1965 self.magic_config_matcher,
1965 self.magic_color_matcher,
1966 self.magic_color_matcher,
1966 ]
1967 ]
1967
1968
1968 # This is set externally by InteractiveShell
1969 # This is set externally by InteractiveShell
1969 self.custom_completers = None
1970 self.custom_completers = None
1970
1971
1971 # This is a list of names of unicode characters that can be completed
1972 # This is a list of names of unicode characters that can be completed
1972 # into their corresponding unicode value. The list is large, so we
1973 # into their corresponding unicode value. The list is large, so we
1973 # lazily initialize it on first use. Consuming code should access this
1974 # lazily initialize it on first use. Consuming code should access this
1974 # attribute through the `@unicode_names` property.
1975 # attribute through the `@unicode_names` property.
1975 self._unicode_names = None
1976 self._unicode_names = None
1976
1977
1977 self._backslash_combining_matchers = [
1978 self._backslash_combining_matchers = [
1978 self.latex_name_matcher,
1979 self.latex_name_matcher,
1979 self.unicode_name_matcher,
1980 self.unicode_name_matcher,
1980 back_latex_name_matcher,
1981 back_latex_name_matcher,
1981 back_unicode_name_matcher,
1982 back_unicode_name_matcher,
1982 self.fwd_unicode_matcher,
1983 self.fwd_unicode_matcher,
1983 ]
1984 ]
1984
1985
1985 if not self.backslash_combining_completions:
1986 if not self.backslash_combining_completions:
1986 for matcher in self._backslash_combining_matchers:
1987 for matcher in self._backslash_combining_matchers:
1987 self.disable_matchers.append(_get_matcher_id(matcher))
1988 self.disable_matchers.append(_get_matcher_id(matcher))
1988
1989
1989 if not self.merge_completions:
1990 if not self.merge_completions:
1990 self.suppress_competing_matchers = True
1991 self.suppress_competing_matchers = True
1991
1992
1992 @property
1993 @property
1993 def matchers(self) -> List[Matcher]:
1994 def matchers(self) -> List[Matcher]:
1994 """All active matcher routines for completion"""
1995 """All active matcher routines for completion"""
1995 if self.dict_keys_only:
1996 if self.dict_keys_only:
1996 return [self.dict_key_matcher]
1997 return [self.dict_key_matcher]
1997
1998
1998 if self.use_jedi:
1999 if self.use_jedi:
1999 return [
2000 return [
2000 *self.custom_matchers,
2001 *self.custom_matchers,
2001 *self._backslash_combining_matchers,
2002 *self._backslash_combining_matchers,
2002 *self.magic_arg_matchers,
2003 *self.magic_arg_matchers,
2003 self.custom_completer_matcher,
2004 self.custom_completer_matcher,
2004 self.magic_matcher,
2005 self.magic_matcher,
2005 self._jedi_matcher,
2006 self._jedi_matcher,
2006 self.dict_key_matcher,
2007 self.dict_key_matcher,
2007 self.file_matcher,
2008 self.file_matcher,
2008 ]
2009 ]
2009 else:
2010 else:
2010 return [
2011 return [
2011 *self.custom_matchers,
2012 *self.custom_matchers,
2012 *self._backslash_combining_matchers,
2013 *self._backslash_combining_matchers,
2013 *self.magic_arg_matchers,
2014 *self.magic_arg_matchers,
2014 self.custom_completer_matcher,
2015 self.custom_completer_matcher,
2015 self.dict_key_matcher,
2016 self.dict_key_matcher,
2016 self.magic_matcher,
2017 self.magic_matcher,
2017 self.python_matcher,
2018 self.python_matcher,
2018 self.file_matcher,
2019 self.file_matcher,
2019 self.python_func_kw_matcher,
2020 self.python_func_kw_matcher,
2020 ]
2021 ]
2021
2022
2022 def all_completions(self, text:str) -> List[str]:
2023 def all_completions(self, text:str) -> List[str]:
2023 """
2024 """
2024 Wrapper around the completion methods for the benefit of emacs.
2025 Wrapper around the completion methods for the benefit of emacs.
2025 """
2026 """
2026 prefix = text.rpartition('.')[0]
2027 prefix = text.rpartition('.')[0]
2027 with provisionalcompleter():
2028 with provisionalcompleter():
2028 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
2029 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
2029 for c in self.completions(text, len(text))]
2030 for c in self.completions(text, len(text))]
2030
2031
2031 return self.complete(text)[1]
2032 return self.complete(text)[1]
2032
2033
2033 def _clean_glob(self, text:str):
2034 def _clean_glob(self, text:str):
2034 return self.glob("%s*" % text)
2035 return self.glob("%s*" % text)
2035
2036
2036 def _clean_glob_win32(self, text:str):
2037 def _clean_glob_win32(self, text:str):
2037 return [f.replace("\\","/")
2038 return [f.replace("\\","/")
2038 for f in self.glob("%s*" % text)]
2039 for f in self.glob("%s*" % text)]
2039
2040
2040 @context_matcher()
2041 @context_matcher()
2041 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2042 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2042 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2043 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2043 matches = self.file_matches(context.token)
2044 matches = self.file_matches(context.token)
2044 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2045 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2045 # starts with `/home/`, `C:\`, etc)
2046 # starts with `/home/`, `C:\`, etc)
2046 return _convert_matcher_v1_result_to_v2(matches, type="path")
2047 return _convert_matcher_v1_result_to_v2(matches, type="path")
2047
2048
2048 def file_matches(self, text: str) -> List[str]:
2049 def file_matches(self, text: str) -> List[str]:
2049 """Match filenames, expanding ~USER type strings.
2050 """Match filenames, expanding ~USER type strings.
2050
2051
2051 Most of the seemingly convoluted logic in this completer is an
2052 Most of the seemingly convoluted logic in this completer is an
2052 attempt to handle filenames with spaces in them. And yet it's not
2053 attempt to handle filenames with spaces in them. And yet it's not
2053 quite perfect, because Python's readline doesn't expose all of the
2054 quite perfect, because Python's readline doesn't expose all of the
2054 GNU readline details needed for this to be done correctly.
2055 GNU readline details needed for this to be done correctly.
2055
2056
2056 For a filename with a space in it, the printed completions will be
2057 For a filename with a space in it, the printed completions will be
2057 only the parts after what's already been typed (instead of the
2058 only the parts after what's already been typed (instead of the
2058 full completions, as is normally done). I don't think with the
2059 full completions, as is normally done). I don't think with the
2059 current (as of Python 2.3) Python readline it's possible to do
2060 current (as of Python 2.3) Python readline it's possible to do
2060 better.
2061 better.
2061
2062
2062 .. deprecated:: 8.6
2063 .. deprecated:: 8.6
2063 You can use :meth:`file_matcher` instead.
2064 You can use :meth:`file_matcher` instead.
2064 """
2065 """
2065
2066
2066 # chars that require escaping with backslash - i.e. chars
2067 # chars that require escaping with backslash - i.e. chars
2067 # that readline treats incorrectly as delimiters, but we
2068 # that readline treats incorrectly as delimiters, but we
2068 # don't want to treat as delimiters in filename matching
2069 # don't want to treat as delimiters in filename matching
2069 # when escaped with backslash
2070 # when escaped with backslash
2070 if text.startswith('!'):
2071 if text.startswith('!'):
2071 text = text[1:]
2072 text = text[1:]
2072 text_prefix = u'!'
2073 text_prefix = u'!'
2073 else:
2074 else:
2074 text_prefix = u''
2075 text_prefix = u''
2075
2076
2076 text_until_cursor = self.text_until_cursor
2077 text_until_cursor = self.text_until_cursor
2077 # track strings with open quotes
2078 # track strings with open quotes
2078 open_quotes = has_open_quotes(text_until_cursor)
2079 open_quotes = has_open_quotes(text_until_cursor)
2079
2080
2080 if '(' in text_until_cursor or '[' in text_until_cursor:
2081 if '(' in text_until_cursor or '[' in text_until_cursor:
2081 lsplit = text
2082 lsplit = text
2082 else:
2083 else:
2083 try:
2084 try:
2084 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2085 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2085 lsplit = arg_split(text_until_cursor)[-1]
2086 lsplit = arg_split(text_until_cursor)[-1]
2086 except ValueError:
2087 except ValueError:
2087 # typically an unmatched ", or backslash without escaped char.
2088 # typically an unmatched ", or backslash without escaped char.
2088 if open_quotes:
2089 if open_quotes:
2089 lsplit = text_until_cursor.split(open_quotes)[-1]
2090 lsplit = text_until_cursor.split(open_quotes)[-1]
2090 else:
2091 else:
2091 return []
2092 return []
2092 except IndexError:
2093 except IndexError:
2093 # tab pressed on empty line
2094 # tab pressed on empty line
2094 lsplit = ""
2095 lsplit = ""
2095
2096
2096 if not open_quotes and lsplit != protect_filename(lsplit):
2097 if not open_quotes and lsplit != protect_filename(lsplit):
2097 # if protectables are found, do matching on the whole escaped name
2098 # if protectables are found, do matching on the whole escaped name
2098 has_protectables = True
2099 has_protectables = True
2099 text0,text = text,lsplit
2100 text0,text = text,lsplit
2100 else:
2101 else:
2101 has_protectables = False
2102 has_protectables = False
2102 text = os.path.expanduser(text)
2103 text = os.path.expanduser(text)
2103
2104
2104 if text == "":
2105 if text == "":
2105 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2106 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2106
2107
2107 # Compute the matches from the filesystem
2108 # Compute the matches from the filesystem
2108 if sys.platform == 'win32':
2109 if sys.platform == 'win32':
2109 m0 = self.clean_glob(text)
2110 m0 = self.clean_glob(text)
2110 else:
2111 else:
2111 m0 = self.clean_glob(text.replace('\\', ''))
2112 m0 = self.clean_glob(text.replace('\\', ''))
2112
2113
2113 if has_protectables:
2114 if has_protectables:
2114 # If we had protectables, we need to revert our changes to the
2115 # If we had protectables, we need to revert our changes to the
2115 # beginning of filename so that we don't double-write the part
2116 # beginning of filename so that we don't double-write the part
2116 # of the filename we have so far
2117 # of the filename we have so far
2117 len_lsplit = len(lsplit)
2118 len_lsplit = len(lsplit)
2118 matches = [text_prefix + text0 +
2119 matches = [text_prefix + text0 +
2119 protect_filename(f[len_lsplit:]) for f in m0]
2120 protect_filename(f[len_lsplit:]) for f in m0]
2120 else:
2121 else:
2121 if open_quotes:
2122 if open_quotes:
2122 # if we have a string with an open quote, we don't need to
2123 # if we have a string with an open quote, we don't need to
2123 # protect the names beyond the quote (and we _shouldn't_, as
2124 # protect the names beyond the quote (and we _shouldn't_, as
2124 # it would cause bugs when the filesystem call is made).
2125 # it would cause bugs when the filesystem call is made).
2125 matches = m0 if sys.platform == "win32" else\
2126 matches = m0 if sys.platform == "win32" else\
2126 [protect_filename(f, open_quotes) for f in m0]
2127 [protect_filename(f, open_quotes) for f in m0]
2127 else:
2128 else:
2128 matches = [text_prefix +
2129 matches = [text_prefix +
2129 protect_filename(f) for f in m0]
2130 protect_filename(f) for f in m0]
2130
2131
2131 # Mark directories in input list by appending '/' to their names.
2132 # Mark directories in input list by appending '/' to their names.
2132 return [x+'/' if os.path.isdir(x) else x for x in matches]
2133 return [x+'/' if os.path.isdir(x) else x for x in matches]
2133
2134
2134 @context_matcher()
2135 @context_matcher()
2135 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2136 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2136 """Match magics."""
2137 """Match magics."""
2137 text = context.token
2138 text = context.token
2138 matches = self.magic_matches(text)
2139 matches = self.magic_matches(text)
2139 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2140 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2140 is_magic_prefix = len(text) > 0 and text[0] == "%"
2141 is_magic_prefix = len(text) > 0 and text[0] == "%"
2141 result["suppress"] = is_magic_prefix and bool(result["completions"])
2142 result["suppress"] = is_magic_prefix and bool(result["completions"])
2142 return result
2143 return result
2143
2144
2144 def magic_matches(self, text: str):
2145 def magic_matches(self, text: str) -> List[str]:
2145 """Match magics.
2146 """Match magics.
2146
2147
2147 .. deprecated:: 8.6
2148 .. deprecated:: 8.6
2148 You can use :meth:`magic_matcher` instead.
2149 You can use :meth:`magic_matcher` instead.
2149 """
2150 """
2150 # Get all shell magics now rather than statically, so magics loaded at
2151 # Get all shell magics now rather than statically, so magics loaded at
2151 # runtime show up too.
2152 # runtime show up too.
2152 lsm = self.shell.magics_manager.lsmagic()
2153 lsm = self.shell.magics_manager.lsmagic()
2153 line_magics = lsm['line']
2154 line_magics = lsm['line']
2154 cell_magics = lsm['cell']
2155 cell_magics = lsm['cell']
2155 pre = self.magic_escape
2156 pre = self.magic_escape
2156 pre2 = pre+pre
2157 pre2 = pre+pre
2157
2158
2158 explicit_magic = text.startswith(pre)
2159 explicit_magic = text.startswith(pre)
2159
2160
2160 # Completion logic:
2161 # Completion logic:
2161 # - user gives %%: only do cell magics
2162 # - user gives %%: only do cell magics
2162 # - user gives %: do both line and cell magics
2163 # - user gives %: do both line and cell magics
2163 # - no prefix: do both
2164 # - no prefix: do both
2164 # In other words, line magics are skipped if the user gives %% explicitly
2165 # In other words, line magics are skipped if the user gives %% explicitly
2165 #
2166 #
2166 # We also exclude magics that match any currently visible names:
2167 # We also exclude magics that match any currently visible names:
2167 # https://github.com/ipython/ipython/issues/4877, unless the user has
2168 # https://github.com/ipython/ipython/issues/4877, unless the user has
2168 # typed a %:
2169 # typed a %:
2169 # https://github.com/ipython/ipython/issues/10754
2170 # https://github.com/ipython/ipython/issues/10754
2170 bare_text = text.lstrip(pre)
2171 bare_text = text.lstrip(pre)
2171 global_matches = self.global_matches(bare_text)
2172 global_matches = self.global_matches(bare_text)
2172 if not explicit_magic:
2173 if not explicit_magic:
2173 def matches(magic):
2174 def matches(magic):
2174 """
2175 """
2175 Filter magics, in particular remove magics that match
2176 Filter magics, in particular remove magics that match
2176 a name present in global namespace.
2177 a name present in global namespace.
2177 """
2178 """
2178 return ( magic.startswith(bare_text) and
2179 return ( magic.startswith(bare_text) and
2179 magic not in global_matches )
2180 magic not in global_matches )
2180 else:
2181 else:
2181 def matches(magic):
2182 def matches(magic):
2182 return magic.startswith(bare_text)
2183 return magic.startswith(bare_text)
2183
2184
2184 comp = [ pre2+m for m in cell_magics if matches(m)]
2185 comp = [ pre2+m for m in cell_magics if matches(m)]
2185 if not text.startswith(pre2):
2186 if not text.startswith(pre2):
2186 comp += [ pre+m for m in line_magics if matches(m)]
2187 comp += [ pre+m for m in line_magics if matches(m)]
2187
2188
2188 return comp
2189 return comp
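# Illustrative sketch of the logic above (magic names are examples; actual
# results depend on the magics currently registered and on the user namespace):
#
#     "%%ti"  ->  cell magics only, e.g. ["%%time", "%%timeit"]
#     "%ti"   ->  cell and line magics, e.g. ["%%time", "%%timeit", "%time", "%timeit"]
#     "ti"    ->  both kinds, minus magics shadowed by names visible in the namespace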
2189
2190
2190 @context_matcher()
2191 @context_matcher()
2191 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2192 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2192 """Match class names and attributes for %config magic."""
2193 """Match class names and attributes for %config magic."""
2193 # NOTE: uses `line_buffer` equivalent for compatibility
2194 # NOTE: uses `line_buffer` equivalent for compatibility
2194 matches = self.magic_config_matches(context.line_with_cursor)
2195 matches = self.magic_config_matches(context.line_with_cursor)
2195 return _convert_matcher_v1_result_to_v2(matches, type="param")
2196 return _convert_matcher_v1_result_to_v2(matches, type="param")
2196
2197
2197 def magic_config_matches(self, text: str) -> List[str]:
2198 def magic_config_matches(self, text: str) -> List[str]:
2198 """Match class names and attributes for %config magic.
2199 """Match class names and attributes for %config magic.
2199
2200
2200 .. deprecated:: 8.6
2201 .. deprecated:: 8.6
2201 You can use :meth:`magic_config_matcher` instead.
2202 You can use :meth:`magic_config_matcher` instead.
2202 """
2203 """
2203 texts = text.strip().split()
2204 texts = text.strip().split()
2204
2205
2205 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2206 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2206 # get all configuration classes
2207 # get all configuration classes
2207 classes = sorted(set([ c for c in self.shell.configurables
2208 classes = sorted(set([ c for c in self.shell.configurables
2208 if c.__class__.class_traits(config=True)
2209 if c.__class__.class_traits(config=True)
2209 ]), key=lambda x: x.__class__.__name__)
2210 ]), key=lambda x: x.__class__.__name__)
2210 classnames = [ c.__class__.__name__ for c in classes ]
2211 classnames = [ c.__class__.__name__ for c in classes ]
2211
2212
2212 # return all classnames if config or %config is given
2213 # return all classnames if config or %config is given
2213 if len(texts) == 1:
2214 if len(texts) == 1:
2214 return classnames
2215 return classnames
2215
2216
2216 # match classname
2217 # match classname
2217 classname_texts = texts[1].split('.')
2218 classname_texts = texts[1].split('.')
2218 classname = classname_texts[0]
2219 classname = classname_texts[0]
2219 classname_matches = [ c for c in classnames
2220 classname_matches = [ c for c in classnames
2220 if c.startswith(classname) ]
2221 if c.startswith(classname) ]
2221
2222
2222 # return matched classes or the matched class with attributes
2223 # return matched classes or the matched class with attributes
2223 if texts[1].find('.') < 0:
2224 if texts[1].find('.') < 0:
2224 return classname_matches
2225 return classname_matches
2225 elif len(classname_matches) == 1 and \
2226 elif len(classname_matches) == 1 and \
2226 classname_matches[0] == classname:
2227 classname_matches[0] == classname:
2227 cls = classes[classnames.index(classname)].__class__
2228 cls = classes[classnames.index(classname)].__class__
2228 help = cls.class_get_help()
2229 help = cls.class_get_help()
2229 # strip leading '--' from cl-args:
2230 # strip leading '--' from cl-args:
2230 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2231 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2231 return [ attr.split('=')[0]
2232 return [ attr.split('=')[0]
2232 for attr in help.strip().splitlines()
2233 for attr in help.strip().splitlines()
2233 if attr.startswith(texts[1]) ]
2234 if attr.startswith(texts[1]) ]
2234 return []
2235 return []
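# Illustrative behaviour sketch (class and trait names are examples):
#
#     "%config "                 ->  all configurable class names
#     "%config IPComp"           ->  ["IPCompleter"]
#     "%config IPCompleter.gre"  ->  ["IPCompleter.greedy"]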
2235
2236
2236 @context_matcher()
2237 @context_matcher()
2237 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2238 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2238 """Match color schemes for %colors magic."""
2239 """Match color schemes for %colors magic."""
2239 # NOTE: uses `line_buffer` equivalent for compatibility
2240 # NOTE: uses `line_buffer` equivalent for compatibility
2240 matches = self.magic_color_matches(context.line_with_cursor)
2241 matches = self.magic_color_matches(context.line_with_cursor)
2241 return _convert_matcher_v1_result_to_v2(matches, type="param")
2242 return _convert_matcher_v1_result_to_v2(matches, type="param")
2242
2243
2243 def magic_color_matches(self, text: str) -> List[str]:
2244 def magic_color_matches(self, text: str) -> List[str]:
2244 """Match color schemes for %colors magic.
2245 """Match color schemes for %colors magic.
2245
2246
2246 .. deprecated:: 8.6
2247 .. deprecated:: 8.6
2247 You can use :meth:`magic_color_matcher` instead.
2248 You can use :meth:`magic_color_matcher` instead.
2248 """
2249 """
2249 texts = text.split()
2250 texts = text.split()
2250 if text.endswith(' '):
2251 if text.endswith(' '):
2251 # .split() strips off the trailing whitespace. Add '' back
2252 # .split() strips off the trailing whitespace. Add '' back
2252 # so that: '%colors ' -> ['%colors', '']
2253 # so that: '%colors ' -> ['%colors', '']
2253 texts.append('')
2254 texts.append('')
2254
2255
2255 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2256 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2256 prefix = texts[1]
2257 prefix = texts[1]
2257 return [ color for color in InspectColors.keys()
2258 return [ color for color in InspectColors.keys()
2258 if color.startswith(prefix) ]
2259 if color.startswith(prefix) ]
2259 return []
2260 return []
2260
2261
2261 @context_matcher(identifier="IPCompleter.jedi_matcher")
2262 @context_matcher(identifier="IPCompleter.jedi_matcher")
2262 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2263 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2263 matches = self._jedi_matches(
2264 matches = self._jedi_matches(
2264 cursor_column=context.cursor_position,
2265 cursor_column=context.cursor_position,
2265 cursor_line=context.cursor_line,
2266 cursor_line=context.cursor_line,
2266 text=context.full_text,
2267 text=context.full_text,
2267 )
2268 )
2268 return {
2269 return {
2269 "completions": matches,
2270 "completions": matches,
2270 # static analysis should not suppress other matchers
2271 # static analysis should not suppress other matchers
2271 "suppress": False,
2272 "suppress": False,
2272 }
2273 }
2273
2274
2274 def _jedi_matches(
2275 def _jedi_matches(
2275 self, cursor_column: int, cursor_line: int, text: str
2276 self, cursor_column: int, cursor_line: int, text: str
2276 ) -> Iterator[_JediCompletionLike]:
2277 ) -> Iterator[_JediCompletionLike]:
2277 """
2278 """
2278 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
2279 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
2279 cursor position.
2280 cursor position.
2280
2281
2281 Parameters
2282 Parameters
2282 ----------
2283 ----------
2283 cursor_column : int
2284 cursor_column : int
2284 column position of the cursor in ``text``, 0-indexed.
2285 column position of the cursor in ``text``, 0-indexed.
2285 cursor_line : int
2286 cursor_line : int
2286 line position of the cursor in ``text``, 0-indexed
2287 line position of the cursor in ``text``, 0-indexed
2287 text : str
2288 text : str
2288 text to complete
2289 text to complete
2289
2290
2290 Notes
2291 Notes
2291 -----
2292 -----
2292 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2293 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2293 object containing a string with the Jedi debug information attached.
2294 object containing a string with the Jedi debug information attached.
2294
2295
2295 .. deprecated:: 8.6
2296 .. deprecated:: 8.6
2296 You can use :meth:`_jedi_matcher` instead.
2297 You can use :meth:`_jedi_matcher` instead.
2297 """
2298 """
2298 namespaces = [self.namespace]
2299 namespaces = [self.namespace]
2299 if self.global_namespace is not None:
2300 if self.global_namespace is not None:
2300 namespaces.append(self.global_namespace)
2301 namespaces.append(self.global_namespace)
2301
2302
2302 completion_filter = lambda x:x
2303 completion_filter = lambda x:x
2303 offset = cursor_to_position(text, cursor_line, cursor_column)
2304 offset = cursor_to_position(text, cursor_line, cursor_column)
2304 # filter output if we are completing for object members
2305 # filter output if we are completing for object members
2305 if offset:
2306 if offset:
2306 pre = text[offset-1]
2307 pre = text[offset-1]
2307 if pre == '.':
2308 if pre == '.':
2308 if self.omit__names == 2:
2309 if self.omit__names == 2:
2309 completion_filter = lambda c:not c.name.startswith('_')
2310 completion_filter = lambda c:not c.name.startswith('_')
2310 elif self.omit__names == 1:
2311 elif self.omit__names == 1:
2311 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2312 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2312 elif self.omit__names == 0:
2313 elif self.omit__names == 0:
2313 completion_filter = lambda x:x
2314 completion_filter = lambda x:x
2314 else:
2315 else:
2315 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2316 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2316
2317
2317 interpreter = jedi.Interpreter(text[:offset], namespaces)
2318 interpreter = jedi.Interpreter(text[:offset], namespaces)
2318 try_jedi = True
2319 try_jedi = True
2319
2320
2320 try:
2321 try:
2321 # find the first token in the current tree -- if it is a ' or " then we are in a string
2322 # find the first token in the current tree -- if it is a ' or " then we are in a string
2322 completing_string = False
2323 completing_string = False
2323 try:
2324 try:
2324 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2325 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2325 except StopIteration:
2326 except StopIteration:
2326 pass
2327 pass
2327 else:
2328 else:
2328 # note the value may be ', ", or it may also be ''' or """, or
2329 # note the value may be ', ", or it may also be ''' or """, or
2329 # in some cases, """what/you/typed..., but all of these are
2330 # in some cases, """what/you/typed..., but all of these are
2330 # strings.
2331 # strings.
2331 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2332 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2332
2333
2333 # if we are in a string jedi is likely not the right candidate for
2334 # if we are in a string jedi is likely not the right candidate for
2334 # now. Skip it.
2335 # now. Skip it.
2335 try_jedi = not completing_string
2336 try_jedi = not completing_string
2336 except Exception as e:
2337 except Exception as e:
2337 # many things can go wrong; we are using a private API, so just don't crash.
2338 # many things can go wrong; we are using a private API, so just don't crash.
2338 if self.debug:
2339 if self.debug:
2339 print("Error detecting if completing a non-finished string :", e, '|')
2340 print("Error detecting if completing a non-finished string :", e, '|')
2340
2341
2341 if not try_jedi:
2342 if not try_jedi:
2342 return iter([])
2343 return iter([])
2343 try:
2344 try:
2344 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2345 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2345 except Exception as e:
2346 except Exception as e:
2346 if self.debug:
2347 if self.debug:
2347 return iter(
2348 return iter(
2348 [
2349 [
2349 _FakeJediCompletion(
2350 _FakeJediCompletion(
2350 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2351 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2351 % (e)
2352 % (e)
2352 )
2353 )
2353 ]
2354 ]
2354 )
2355 )
2355 else:
2356 else:
2356 return iter([])
2357 return iter([])
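# Usage sketch (values are illustrative). The cursor is 0-indexed here, while
# jedi itself uses 1-indexed lines (hence ``line=cursor_line + 1`` above):
#
#     text = "import os\nos.pa"
#     completions = self._jedi_matches(cursor_column=5, cursor_line=1, text=text)
#     # yields jedi completions such as ``path``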
2357
2358
2358 @context_matcher()
2359 @context_matcher()
2359 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2360 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2360 """Match attributes or global python names"""
2361 """Match attributes or global python names"""
2361 text = context.line_with_cursor
2362 text = context.line_with_cursor
2362 if "." in text:
2363 if "." in text:
2363 try:
2364 try:
2364 matches, fragment = self._attr_matches(text, include_prefix=False)
2365 matches, fragment = self._attr_matches(text, include_prefix=False)
2365 if text.endswith(".") and self.omit__names:
2366 if text.endswith(".") and self.omit__names:
2366 if self.omit__names == 1:
2367 if self.omit__names == 1:
2367 # true if txt is _not_ a __ name, false otherwise:
2368 # true if txt is _not_ a __ name, false otherwise:
2368 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2369 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2369 else:
2370 else:
2370 # true if txt is _not_ a _ name, false otherwise:
2371 # true if txt is _not_ a _ name, false otherwise:
2371 no__name = (
2372 no__name = (
2372 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2373 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2373 is None
2374 is None
2374 )
2375 )
2375 matches = filter(no__name, matches)
2376 matches = filter(no__name, matches)
2376 return _convert_matcher_v1_result_to_v2(
2377 return _convert_matcher_v1_result_to_v2(
2377 matches, type="attribute", fragment=fragment
2378 matches, type="attribute", fragment=fragment
2378 )
2379 )
2379 except NameError:
2380 except NameError:
2380 # catches <undefined attributes>.<tab>
2381 # catches <undefined attributes>.<tab>
2381 matches = []
2382 matches = []
2382 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2383 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2383 else:
2384 else:
2384 matches = self.global_matches(context.token)
2385 matches = self.global_matches(context.token)
2385 # TODO: maybe distinguish between functions, modules and just "variables"
2386 # TODO: maybe distinguish between functions, modules and just "variables"
2386 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2387 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2387
2388
2388 @completion_matcher(api_version=1)
2389 @completion_matcher(api_version=1)
2389 def python_matches(self, text: str) -> Iterable[str]:
2390 def python_matches(self, text: str) -> Iterable[str]:
2390 """Match attributes or global python names.
2391 """Match attributes or global python names.
2391
2392
2392 .. deprecated:: 8.27
2393 .. deprecated:: 8.27
2393 You can use :meth:`python_matcher` instead."""
2394 You can use :meth:`python_matcher` instead."""
2394 if "." in text:
2395 if "." in text:
2395 try:
2396 try:
2396 matches = self.attr_matches(text)
2397 matches = self.attr_matches(text)
2397 if text.endswith('.') and self.omit__names:
2398 if text.endswith('.') and self.omit__names:
2398 if self.omit__names == 1:
2399 if self.omit__names == 1:
2399 # true if txt is _not_ a __ name, false otherwise:
2400 # true if txt is _not_ a __ name, false otherwise:
2400 no__name = (lambda txt:
2401 no__name = (lambda txt:
2401 re.match(r'.*\.__.*?__',txt) is None)
2402 re.match(r'.*\.__.*?__',txt) is None)
2402 else:
2403 else:
2403 # true if txt is _not_ a _ name, false otherwise:
2404 # true if txt is _not_ a _ name, false otherwise:
2404 no__name = (lambda txt:
2405 no__name = (lambda txt:
2405 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2406 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2406 matches = filter(no__name, matches)
2407 matches = filter(no__name, matches)
2407 except NameError:
2408 except NameError:
2408 # catches <undefined attributes>.<tab>
2409 # catches <undefined attributes>.<tab>
2409 matches = []
2410 matches = []
2410 else:
2411 else:
2411 matches = self.global_matches(text)
2412 matches = self.global_matches(text)
2412 return matches
2413 return matches
2413
2414
2414 def _default_arguments_from_docstring(self, doc):
2415 def _default_arguments_from_docstring(self, doc):
2415 """Parse the first line of docstring for call signature.
2416 """Parse the first line of docstring for call signature.
2416
2417
2417 Docstring should be of the form 'min(iterable[, key=func])\n'.
2418 Docstring should be of the form 'min(iterable[, key=func])\n'.
2418 It can also parse cython docstrings of the form
2419 It can also parse cython docstrings of the form
2419 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2420 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2420 """
2421 """
2421 if doc is None:
2422 if doc is None:
2422 return []
2423 return []
2423
2424
2424 # care only about the first line
2425 # care only about the first line
2425 line = doc.lstrip().splitlines()[0]
2426 line = doc.lstrip().splitlines()[0]
2426
2427
2427 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2428 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2428 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2429 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2429 sig = self.docstring_sig_re.search(line)
2430 sig = self.docstring_sig_re.search(line)
2430 if sig is None:
2431 if sig is None:
2431 return []
2432 return []
2432 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2433 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2433 sig = sig.groups()[0].split(',')
2434 sig = sig.groups()[0].split(',')
2434 ret = []
2435 ret = []
2435 for s in sig:
2436 for s in sig:
2436 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2437 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2437 ret += self.docstring_kwd_re.findall(s)
2438 ret += self.docstring_kwd_re.findall(s)
2438 return ret
2439 return ret
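# Doctest-style sketch of the parsing above (docstrings taken from the examples
# mentioned in the method docstring):
#
#     self._default_arguments_from_docstring('min(iterable[, key=func])\n')
#     # -> ['key']
#     self._default_arguments_from_docstring(
#         'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)')
#     # -> ['ncall', 'resume', 'nsplit']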
2439
2440
2440 def _default_arguments(self, obj):
2441 def _default_arguments(self, obj):
2441 """Return the list of default arguments of obj if it is callable,
2442 """Return the list of default arguments of obj if it is callable,
2442 or empty list otherwise."""
2443 or empty list otherwise."""
2443 call_obj = obj
2444 call_obj = obj
2444 ret = []
2445 ret = []
2445 if inspect.isbuiltin(obj):
2446 if inspect.isbuiltin(obj):
2446 pass
2447 pass
2447 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2448 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2448 if inspect.isclass(obj):
2449 if inspect.isclass(obj):
2449 #for cython embedsignature=True the constructor docstring
2450 #for cython embedsignature=True the constructor docstring
2450 #belongs to the object itself not __init__
2451 #belongs to the object itself not __init__
2451 ret += self._default_arguments_from_docstring(
2452 ret += self._default_arguments_from_docstring(
2452 getattr(obj, '__doc__', ''))
2453 getattr(obj, '__doc__', ''))
2453 # for classes, check for __init__,__new__
2454 # for classes, check for __init__,__new__
2454 call_obj = (getattr(obj, '__init__', None) or
2455 call_obj = (getattr(obj, '__init__', None) or
2455 getattr(obj, '__new__', None))
2456 getattr(obj, '__new__', None))
2456 # for all others, check if they are __call__able
2457 # for all others, check if they are __call__able
2457 elif hasattr(obj, '__call__'):
2458 elif hasattr(obj, '__call__'):
2458 call_obj = obj.__call__
2459 call_obj = obj.__call__
2459 ret += self._default_arguments_from_docstring(
2460 ret += self._default_arguments_from_docstring(
2460 getattr(call_obj, '__doc__', ''))
2461 getattr(call_obj, '__doc__', ''))
2461
2462
2462 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2463 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2463 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2464 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2464
2465
2465 try:
2466 try:
2466 sig = inspect.signature(obj)
2467 sig = inspect.signature(obj)
2467 ret.extend(k for k, v in sig.parameters.items() if
2468 ret.extend(k for k, v in sig.parameters.items() if
2468 v.kind in _keeps)
2469 v.kind in _keeps)
2469 except ValueError:
2470 except ValueError:
2470 pass
2471 pass
2471
2472
2472 return list(set(ret))
2473 return list(set(ret))
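# Sketch (the function is hypothetical): keyword-completable names are the
# positional-or-keyword and keyword-only parameters, deduplicated via ``set``:
#
#     def f(a, b=1, *args, c=2, **kw): ...
#     self._default_arguments(f)   # -> ['a', 'b', 'c'] (in arbitrary order)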
2473
2474
2474 @context_matcher()
2475 @context_matcher()
2475 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2476 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2476 """Match named parameters (kwargs) of the last open function."""
2477 """Match named parameters (kwargs) of the last open function."""
2477 matches = self.python_func_kw_matches(context.token)
2478 matches = self.python_func_kw_matches(context.token)
2478 return _convert_matcher_v1_result_to_v2(matches, type="param")
2479 return _convert_matcher_v1_result_to_v2(matches, type="param")
2479
2480
2480 def python_func_kw_matches(self, text):
2481 def python_func_kw_matches(self, text):
2481 """Match named parameters (kwargs) of the last open function.
2482 """Match named parameters (kwargs) of the last open function.
2482
2483
2483 .. deprecated:: 8.6
2484 .. deprecated:: 8.6
2484 You can use :meth:`python_func_kw_matcher` instead.
2485 You can use :meth:`python_func_kw_matcher` instead.
2485 """
2486 """
2486
2487
2487 if "." in text: # a parameter cannot be dotted
2488 if "." in text: # a parameter cannot be dotted
2488 return []
2489 return []
2489 try: regexp = self.__funcParamsRegex
2490 try: regexp = self.__funcParamsRegex
2490 except AttributeError:
2491 except AttributeError:
2491 regexp = self.__funcParamsRegex = re.compile(r'''
2492 regexp = self.__funcParamsRegex = re.compile(r'''
2492 '.*?(?<!\\)' | # single quoted strings or
2493 '.*?(?<!\\)' | # single quoted strings or
2493 ".*?(?<!\\)" | # double quoted strings or
2494 ".*?(?<!\\)" | # double quoted strings or
2494 \w+ | # identifier
2495 \w+ | # identifier
2495 \S # other characters
2496 \S # other characters
2496 ''', re.VERBOSE | re.DOTALL)
2497 ''', re.VERBOSE | re.DOTALL)
2497 # 1. find the nearest identifier that comes before an unclosed
2498 # 1. find the nearest identifier that comes before an unclosed
2498 # parenthesis before the cursor
2499 # parenthesis before the cursor
2499 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2500 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2500 tokens = regexp.findall(self.text_until_cursor)
2501 tokens = regexp.findall(self.text_until_cursor)
2501 iterTokens = reversed(tokens)
2502 iterTokens = reversed(tokens)
2502 openPar = 0
2503 openPar = 0
2503
2504
2504 for token in iterTokens:
2505 for token in iterTokens:
2505 if token == ')':
2506 if token == ')':
2506 openPar -= 1
2507 openPar -= 1
2507 elif token == '(':
2508 elif token == '(':
2508 openPar += 1
2509 openPar += 1
2509 if openPar > 0:
2510 if openPar > 0:
2510 # found the last unclosed parenthesis
2511 # found the last unclosed parenthesis
2511 break
2512 break
2512 else:
2513 else:
2513 return []
2514 return []
2514 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2515 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2515 ids = []
2516 ids = []
2516 isId = re.compile(r'\w+$').match
2517 isId = re.compile(r'\w+$').match
2517
2518
2518 while True:
2519 while True:
2519 try:
2520 try:
2520 ids.append(next(iterTokens))
2521 ids.append(next(iterTokens))
2521 if not isId(ids[-1]):
2522 if not isId(ids[-1]):
2522 ids.pop()
2523 ids.pop()
2523 break
2524 break
2524 if not next(iterTokens) == '.':
2525 if not next(iterTokens) == '.':
2525 break
2526 break
2526 except StopIteration:
2527 except StopIteration:
2527 break
2528 break
2528
2529
2529 # Find all named arguments already assigned to, so as to avoid suggesting
2530 # Find all named arguments already assigned to, so as to avoid suggesting
2530 # them again
2531 # them again
2531 usedNamedArgs = set()
2532 usedNamedArgs = set()
2532 par_level = -1
2533 par_level = -1
2533 for token, next_token in zip(tokens, tokens[1:]):
2534 for token, next_token in zip(tokens, tokens[1:]):
2534 if token == '(':
2535 if token == '(':
2535 par_level += 1
2536 par_level += 1
2536 elif token == ')':
2537 elif token == ')':
2537 par_level -= 1
2538 par_level -= 1
2538
2539
2539 if par_level != 0:
2540 if par_level != 0:
2540 continue
2541 continue
2541
2542
2542 if next_token != '=':
2543 if next_token != '=':
2543 continue
2544 continue
2544
2545
2545 usedNamedArgs.add(token)
2546 usedNamedArgs.add(token)
2546
2547
2547 argMatches = []
2548 argMatches = []
2548 try:
2549 try:
2549 callableObj = '.'.join(ids[::-1])
2550 callableObj = '.'.join(ids[::-1])
2550 namedArgs = self._default_arguments(eval(callableObj,
2551 namedArgs = self._default_arguments(eval(callableObj,
2551 self.namespace))
2552 self.namespace))
2552
2553
2553 # Remove used named arguments from the list, no need to show twice
2554 # Remove used named arguments from the list, no need to show twice
2554 for namedArg in set(namedArgs) - usedNamedArgs:
2555 for namedArg in set(namedArgs) - usedNamedArgs:
2555 if namedArg.startswith(text):
2556 if namedArg.startswith(text):
2556 argMatches.append("%s=" %namedArg)
2557 argMatches.append("%s=" %namedArg)
2557 except:
2558 except:
2558 pass
2559 pass
2559
2560
2560 return argMatches
2561 return argMatches
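# Illustrative sketch (function and buffer are hypothetical): with
# ``def foo(x, verbose=False)`` defined in the user namespace and the buffer
# ``foo(1, ve`` before the cursor, completing the token ``ve`` yields:
#
#     ["verbose="]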
2561
2562
2562 @staticmethod
2563 @staticmethod
2563 def _get_keys(obj: Any) -> List[Any]:
2564 def _get_keys(obj: Any) -> List[Any]:
2564 # Objects can define their own completions by defining an
2565 # Objects can define their own completions by defining an
2565 # _ipython_key_completions_() method.
2566 # _ipython_key_completions_() method.
2566 method = get_real_method(obj, '_ipython_key_completions_')
2567 method = get_real_method(obj, '_ipython_key_completions_')
2567 if method is not None:
2568 if method is not None:
2568 return method()
2569 return method()
2569
2570
2570 # Special case some common in-memory dict-like types
2571 # Special case some common in-memory dict-like types
2571 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2572 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2572 try:
2573 try:
2573 return list(obj.keys())
2574 return list(obj.keys())
2574 except Exception:
2575 except Exception:
2575 return []
2576 return []
2576 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2577 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2577 try:
2578 try:
2578 return list(obj.obj.keys())
2579 return list(obj.obj.keys())
2579 except Exception:
2580 except Exception:
2580 return []
2581 return []
2581 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2582 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2582 _safe_isinstance(obj, 'numpy', 'void'):
2583 _safe_isinstance(obj, 'numpy', 'void'):
2583 return obj.dtype.names or []
2584 return obj.dtype.names or []
2584 return []
2585 return []
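# Sketch of the object-level hook consumed above (the method name is the real
# protocol; the class itself is purely illustrative):
#
#     class Config:
#         def __init__(self):
#             self._data = {"host": "localhost", "port": 8080}
#         def __getitem__(self, key):
#             return self._data[key]
#         def _ipython_key_completions_(self):
#             return list(self._data)
#
#     # cfg = Config(); completing ``cfg["`` then offers "host" and "port"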
2585
2586
2586 @context_matcher()
2587 @context_matcher()
2587 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2588 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2588 """Match string keys in a dictionary, after e.g. ``foo[``."""
2589 """Match string keys in a dictionary, after e.g. ``foo[``."""
2589 matches = self.dict_key_matches(context.token)
2590 matches = self.dict_key_matches(context.token)
2590 return _convert_matcher_v1_result_to_v2(
2591 return _convert_matcher_v1_result_to_v2(
2591 matches, type="dict key", suppress_if_matches=True
2592 matches, type="dict key", suppress_if_matches=True
2592 )
2593 )
2593
2594
    def dict_key_matches(self, text: str) -> List[str]:
        """Match string keys in a dictionary, after e.g. ``foo[``.

        .. deprecated:: 8.6
            You can use :meth:`dict_key_matcher` instead.
        """

        # Short-circuit on closed dictionary (regular expression would
        # not match anyway, but would take quite a while).
        if self.text_until_cursor.strip().endswith("]"):
            return []

        match = DICT_MATCHER_REGEX.search(self.text_until_cursor)

        if match is None:
            return []

        expr, prior_tuple_keys, key_prefix = match.groups()

        obj = self._evaluate_expr(expr)

        if obj is not_found:
            return []

        keys = self._get_keys(obj)
        if not keys:
            return keys

        tuple_prefix = guarded_eval(
            prior_tuple_keys,
            EvaluationContext(
                globals=self.global_namespace,
                locals=self.namespace,
                evaluation=self.evaluation,  # type: ignore
                in_subscript=True,
            ),
        )

        closing_quote, token_offset, matches = match_dict_keys(
            keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
        )
        if not matches:
            return []

        # get the cursor position of
        # - the text being completed
        # - the start of the key text
        # - the start of the completion
        text_start = len(self.text_until_cursor) - len(text)
        if key_prefix:
            key_start = match.start(3)
            completion_start = key_start + token_offset
        else:
            key_start = completion_start = match.end()

        # grab the leading prefix, to make sure all completions start with `text`
        if text_start > key_start:
            leading = ''
        else:
            leading = text[text_start:completion_start]

        # append closing quote and bracket as appropriate
        # this is *not* appropriate if the opening quote or bracket is outside
        # the text given to this method, e.g. `d["""a\nt
        can_close_quote = False
        can_close_bracket = False

        continuation = self.line_buffer[len(self.text_until_cursor) :].strip()

        if continuation.startswith(closing_quote):
            # do not close if already closed, e.g. `d['a<tab>'`
            continuation = continuation[len(closing_quote) :]
        else:
            can_close_quote = True

        continuation = continuation.strip()

        # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
        # handling it is out of scope, so let's avoid appending suffixes.
        has_known_tuple_handling = isinstance(obj, dict)

        can_close_bracket = (
            not continuation.startswith("]") and self.auto_close_dict_keys
        )
        can_close_tuple_item = (
            not continuation.startswith(",")
            and has_known_tuple_handling
            and self.auto_close_dict_keys
        )
        can_close_quote = can_close_quote and self.auto_close_dict_keys

        # fast path if a closing quote should be appended but no suffix is allowed
        if not can_close_quote and not can_close_bracket and closing_quote:
            return [leading + k for k in matches]

        results = []

        end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM

        for k, state_flag in matches.items():
            result = leading + k
            if can_close_quote and closing_quote:
                result += closing_quote

            if state_flag == end_of_tuple_or_item:
                # We do not know which suffix to add,
                # e.g. both tuple item and string
                # match this item.
                pass

            if state_flag in end_of_tuple_or_item and can_close_bracket:
                result += "]"
            if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
                result += ", "
            results.append(result)
        return results

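    # --- Illustrative sketch (not part of the original file) ----------------
    # Rough behaviour of the dict-key matcher above, as seen from a session
    # (outputs are indicative; the exact closing quote/bracket suffixes depend
    # on the `auto_close_dict_keys` option):
    #
    #     data = {"apple": 1, "apricot": 2, ("a", "b"): 3}
    #     # data["ap<tab>      -> offers "apple" / "apricot", optionally
    #     #                       appending the closing quote and bracket
    #     # data["a", "<tab>   -> completes the second element of the tuple key
    # -------------------------------------------------------------------------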
    @context_matcher()
    def unicode_name_matcher(self, context: CompletionContext):
        """Same as :any:`unicode_name_matches`, but adapted to the new Matcher API."""
        fragment, matches = self.unicode_name_matches(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

    @staticmethod
    def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
        """Match Latex-like syntax for unicode characters based
        on the name of the character.

        This does ``\\GREEK SMALL LETTER ETA`` -> ``η``

        Works only on valid python 3 identifiers, or on combining characters that
        will combine to form a valid identifier.
        """
        slashpos = text.rfind('\\')
        if slashpos > -1:
            s = text[slashpos+1:]
            try:
                unic = unicodedata.lookup(s)
                # allow combining chars
                if ('a'+unic).isidentifier():
                    return '\\'+s,[unic]
            except KeyError:
                pass
        return '', []

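    # --- Illustrative sketch (not part of the original file) ----------------
    # Expected shape of the return value (the matched fragment including the
    # backslash, plus the looked-up character); values shown are indicative:
    #
    #     IPCompleter.unicode_name_matches("\\GREEK SMALL LETTER ALPHA")
    #     # -> ('\\GREEK SMALL LETTER ALPHA', ['α'])
    #     IPCompleter.unicode_name_matches("x = \\NO SUCH NAME")
    #     # -> ('', [])
    # -------------------------------------------------------------------------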
    @context_matcher()
    def latex_name_matcher(self, context: CompletionContext):
        """Match Latex syntax for unicode characters.

        This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
        """
        fragment, matches = self.latex_matches(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="latex", fragment=fragment, suppress_if_matches=True
        )

    def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
        """Match Latex syntax for unicode characters.

        This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``

        .. deprecated:: 8.6
            You can use :meth:`latex_name_matcher` instead.
        """
        slashpos = text.rfind('\\')
        if slashpos > -1:
            s = text[slashpos:]
            if s in latex_symbols:
                # Try to complete a full latex symbol to unicode
                # \\alpha -> α
                return s, [latex_symbols[s]]
            else:
                # If a user has partially typed a latex symbol, give them
                # a full list of options \al -> [\aleph, \alpha]
                matches = [k for k in latex_symbols if k.startswith(s)]
                if matches:
                    return s, matches
        return '', ()

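    # --- Illustrative sketch (not part of the original file) ----------------
    # Indicative behaviour of `latex_matches`, assuming `ip = get_ipython()`
    # in a running session (the candidate list grows with the installed
    # symbol table, so only a few entries are shown):
    #
    #     ip.Completer.latex_matches("\\alpha")
    #     # -> ('\\alpha', ['α'])                      # exact symbol: expand
    #     ip.Completer.latex_matches("\\al")
    #     # -> ('\\al', ['\\aleph', '\\alpha', ...])   # prefix: list options
    # -------------------------------------------------------------------------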
    @context_matcher()
    def custom_completer_matcher(self, context):
        """Dispatch custom completer.

        If a match is found, suppresses all other matchers except for Jedi.
        """
        matches = self.dispatch_custom_completer(context.token) or []
        result = _convert_matcher_v1_result_to_v2(
            matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
        )
        result["ordered"] = True
        result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
        return result

    def dispatch_custom_completer(self, text):
        """
        .. deprecated:: 8.6
            You can use :meth:`custom_completer_matcher` instead.
        """
        if not self.custom_completers:
            return

        line = self.line_buffer
        if not line.strip():
            return None

        # Create a little structure to pass all the relevant information about
        # the current completion to any custom completer.
        event = SimpleNamespace()
        event.line = line
        event.symbol = text
        cmd = line.split(None,1)[0]
        event.command = cmd
        event.text_until_cursor = self.text_until_cursor

        # for foo etc, try also to find completer for %foo
        if not cmd.startswith(self.magic_escape):
            try_magic = self.custom_completers.s_matches(
                self.magic_escape + cmd)
        else:
            try_magic = []

        for c in itertools.chain(self.custom_completers.s_matches(cmd),
                                 try_magic,
                                 self.custom_completers.flat_matches(self.text_until_cursor)):
            try:
                res = c(event)
                if res:
                    # first, try case sensitive match
                    withcase = [r for r in res if r.startswith(text)]
                    if withcase:
                        return withcase
                    # if none, then case insensitive ones are ok too
                    text_low = text.lower()
                    return [r for r in res if r.lower().startswith(text_low)]
            except TryNext:
                pass
            except KeyboardInterrupt:
                """
                If a custom completer takes too long,
                let the keyboard interrupt abort it and return nothing.
                """
                break

        return None

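    # --- Illustrative sketch (not part of the original file) ----------------
    # Custom completers reached through this dispatcher are typically
    # registered with the ``complete_command`` hook; the ``event`` argument
    # carries the fields set on the SimpleNamespace above. A minimal example
    # (the names `apt_completer` and `%apt` are hypothetical):
    #
    #     def apt_completer(self, event):
    #         """Offer sub-commands after `%apt ...`."""
    #         return ["install", "remove", "update"]
    #
    #     ip = get_ipython()
    #     ip.set_hook("complete_command", apt_completer, str_key="%apt")
    # -------------------------------------------------------------------------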
    def completions(self, text: str, offset: int)->Iterator[Completion]:
        """
        Returns an iterator over the possible completions

        .. warning::

            Unstable

            This function is unstable, API may change without warning.
            It will also raise unless used in the proper context manager.

        Parameters
        ----------
        text : str
            Full text of the current input, multi line string.
        offset : int
            Integer representing the position of the cursor in ``text``. Offset
            is 0-based indexed.

        Yields
        ------
        Completion

        Notes
        -----
        The cursor on a text can either be seen as being "in between"
        characters or "on" a character depending on the interface visible to
        the user. For consistency, the cursor being "in between" characters X
        and Y is equivalent to the cursor being "on" character Y, that is to say
        the character the cursor is on is considered as being after the cursor.

        Combining characters may span more than one position in the
        text.

        .. note::

            If ``IPCompleter.debug`` is :any:`True`, this will yield a
            ``--jedi/ipython--`` fake Completion token to distinguish
            completions returned by Jedi from usual IPython completion.

        .. note::

            Completions are not completely deduplicated yet. If identical
            completions are coming from different sources this function does not
            ensure that each completion object will only be present once.
        """
        warnings.warn("_complete is a provisional API (as of IPython 6.0). "
                      "It may change without warnings. "
                      "Use in corresponding context manager.",
                      category=ProvisionalCompleterWarning, stacklevel=2)

        seen = set()
        profiler:Optional[cProfile.Profile]
        try:
            if self.profile_completions:
                import cProfile
                profiler = cProfile.Profile()
                profiler.enable()
            else:
                profiler = None

            for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
                if c and (c in seen):
                    continue
                yield c
                seen.add(c)
        except KeyboardInterrupt:
            """if completions take too long and users send keyboard interrupt,
            do not crash and return ASAP. """
            pass
        finally:
            if profiler is not None:
                profiler.disable()
                ensure_dir_exists(self.profiler_output_dir)
                output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
                print("Writing profiler output to", output_path)
                profiler.dump_stats(output_path)

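    # --- Illustrative sketch (not part of the original file) ----------------
    # Per the docstring above, this provisional API is meant to be used inside
    # the module's `provisionalcompleter` context manager. Indicative usage,
    # assuming `ip = get_ipython()` in a running session:
    #
    #     from IPython.core.completer import provisionalcompleter
    #
    #     code = "import os\nos.pa"
    #     with provisionalcompleter():
    #         comps = list(ip.Completer.completions(code, len(code)))
    #     # each item is a `Completion` with .start/.end/.text/.type/.signature
    # -------------------------------------------------------------------------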
    def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
        """
        Core completion method. Same signature as :any:`completions`, with the
        extra `timeout` parameter (in seconds).

        Computing jedi's completion ``.type`` can be quite expensive (it is a
        lazy property) and can require some warm-up, more warm-up than just
        computing the ``name`` of a completion. The warm-up can be:

        - Long warm-up the first time a module is encountered after
          install/update: actually build parse/inference tree.

        - first time the module is encountered in a session: load tree from
          disk.

        We don't want to block completions for tens of seconds so we give the
        completer a "budget" of ``_timeout`` seconds per invocation to compute
        completions types, the completions that have not yet been computed will
        be marked as "unknown" and will have a chance to be computed next round
        as things get cached.

        Keep in mind that Jedi is not the only thing treating the completion so
        keep the timeout short-ish, as if we take more than 0.3 seconds we still
        have lots of processing to do.

        """
        deadline = time.monotonic() + _timeout

        before = full_text[:offset]
        cursor_line, cursor_column = position_to_cursor(full_text, offset)

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        def is_non_jedi_result(
            result: MatcherResult, identifier: str
        ) -> TypeGuard[SimpleMatcherResult]:
            return identifier != jedi_matcher_id

        results = self._complete(
            full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
        )

        non_jedi_results: Dict[str, SimpleMatcherResult] = {
            identifier: result
            for identifier, result in results.items()
            if is_non_jedi_result(result, identifier)
        }

        jedi_matches = (
            cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
            if jedi_matcher_id in results
            else ()
        )

        iter_jm = iter(jedi_matches)
        if _timeout:
            for jm in iter_jm:
                try:
                    type_ = jm.type
                except Exception:
                    if self.debug:
                        print("Error in Jedi getting type of ", jm)
                    type_ = None
                delta = len(jm.name_with_symbols) - len(jm.complete)
                if type_ == 'function':
                    signature = _make_signature(jm)
                else:
                    signature = ''
                yield Completion(start=offset - delta,
                                 end=offset,
                                 text=jm.name_with_symbols,
                                 type=type_,
                                 signature=signature,
                                 _origin='jedi')

                if time.monotonic() > deadline:
                    break

        for jm in iter_jm:
            delta = len(jm.name_with_symbols) - len(jm.complete)
            yield Completion(
                start=offset - delta,
                end=offset,
                text=jm.name_with_symbols,
                type=_UNKNOWN_TYPE,  # don't compute type for speed
                _origin="jedi",
                signature="",
            )

        # TODO:
        # Suppress this, right now just for debug.
        if jedi_matches and non_jedi_results and self.debug:
            some_start_offset = before.rfind(
                next(iter(non_jedi_results.values()))["matched_fragment"]
            )
            yield Completion(
                start=some_start_offset,
                end=offset,
                text="--jedi/ipython--",
                _origin="debug",
                type="none",
                signature="",
            )

        ordered: List[Completion] = []
        sortable: List[Completion] = []

        for origin, result in non_jedi_results.items():
            matched_text = result["matched_fragment"]
            start_offset = before.rfind(matched_text)
            is_ordered = result.get("ordered", False)
            container = ordered if is_ordered else sortable

            # I'm unsure if this is always true, so let's assert and see if it
            # crashes
            assert before.endswith(matched_text)

            for simple_completion in result["completions"]:
                completion = Completion(
                    start=start_offset,
                    end=offset,
                    text=simple_completion.text,
                    _origin=origin,
                    signature="",
                    type=simple_completion.type or _UNKNOWN_TYPE,
                )
                container.append(completion)

        yield from list(self._deduplicate(ordered + self._sort(sortable)))[
            :MATCHES_LIMIT
        ]

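    # --- Illustrative sketch (not part of the original file) ----------------
    # The per-invocation type-computation budget used above comes from the
    # `jedi_compute_type_timeout` setting (milliseconds), referenced in
    # `completions` when calling this method. Indicative ways to tune it:
    #
    #     # ipython_config.py
    #     c.IPCompleter.jedi_compute_type_timeout = 400
    #
    #     # or interactively
    #     get_ipython().Completer.jedi_compute_type_timeout = 400
    # -------------------------------------------------------------------------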
    def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
        """Find completions for the given text and line context.

        Note that both the text and the line_buffer are optional, but at least
        one of them must be given.

        Parameters
        ----------
        text : string, optional
            Text to perform the completion on. If not given, the line buffer
            is split using the instance's CompletionSplitter object.
        line_buffer : string, optional
            If not given, the completer attempts to obtain the current line
            buffer via readline. This keyword allows clients which are
            requesting text completions in non-readline contexts to inform
            the completer of the entire text.
        cursor_pos : int, optional
            Index of the cursor in the full line buffer. Should be provided by
            remote frontends where the kernel has no access to frontend state.

        Returns
        -------
        Tuple of two items:
        text : str
            Text that was actually used in the completion.
        matches : list
            A list of completion matches.

        Notes
        -----
        This API is likely to be deprecated and replaced by
        :any:`IPCompleter.completions` in the future.

        """
        warnings.warn('`Completer.complete` is pending deprecation since '
                      'IPython 6.0 and will be replaced by `Completer.completions`.',
                      PendingDeprecationWarning)
        # potential todo: FOLD the 3rd throw-away argument of _complete
        # into the first two.
        # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
        # TODO: should we deprecate now, or does it stay?

        results = self._complete(
            line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
        )

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        return self._arrange_and_extract(
            results,
            # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
            skip_matchers={jedi_matcher_id},
            # this API does not support different start/end positions (fragments of token).
            abort_if_offset_changes=True,
        )

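    # --- Illustrative sketch (not part of the original file) ----------------
    # Indicative use of the legacy stateful API above (the returned matches
    # are plain strings, unlike the `Completion` objects from `completions`);
    # assumes `ip = get_ipython()` and contents vary by environment:
    #
    #     text, matches = ip.Completer.complete(line_buffer="import o", cursor_pos=8)
    #     # text    -> token actually completed, e.g. "o"
    #     # matches -> ['os', 'operator', ...]
    # -------------------------------------------------------------------------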
    def _arrange_and_extract(
        self,
        results: Dict[str, MatcherResult],
        skip_matchers: Set[str],
        abort_if_offset_changes: bool,
    ):
        sortable: List[AnyMatcherCompletion] = []
        ordered: List[AnyMatcherCompletion] = []
        most_recent_fragment = None
        for identifier, result in results.items():
            if identifier in skip_matchers:
                continue
            if not result["completions"]:
                continue
            if not most_recent_fragment:
                most_recent_fragment = result["matched_fragment"]
            if (
                abort_if_offset_changes
                and result["matched_fragment"] != most_recent_fragment
            ):
                break
            if result.get("ordered", False):
                ordered.extend(result["completions"])
            else:
                sortable.extend(result["completions"])

        if not most_recent_fragment:
            most_recent_fragment = ""  # to satisfy typechecker (and just in case)

        return most_recent_fragment, [
            m.text for m in self._deduplicate(ordered + self._sort(sortable))
        ]

    def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
                  full_text=None) -> _CompleteResult:
        """
        Like complete but can also return raw jedi completions as well as the
        origin of the completion text. This could (and should) be made much
        cleaner but that will be simpler once we drop the old (and stateful)
        :any:`complete` API.

        With the current provisional API, cursor_pos acts both (depending on the
        caller) as the offset in the ``text`` or ``line_buffer``, or as the
        ``column`` when passing multiline strings; this could/should be renamed
        but would add extra noise.

        Parameters
        ----------
        cursor_line
            Index of the line the cursor is on. 0 indexed.
        cursor_pos
            Position of the cursor in the current line/line_buffer/text. 0
            indexed.
        line_buffer : optional, str
            The current line the cursor is in; this is mostly here for legacy
            reasons, as readline could only give us the single current line.
            Prefer `full_text`.
        text : str
            The current "token" the cursor is in, mostly also for historical
            reasons, as the completer would trigger only after the current line
            was parsed.
        full_text : str
            Full text of the current cell.

        Returns
        -------
        An ordered dictionary where keys are identifiers of completion
        matchers and values are ``MatcherResult``s.
        """

        # if the cursor position isn't given, the only sane assumption we can
        # make is that it's at the end of the line (the common case)
        if cursor_pos is None:
            cursor_pos = len(line_buffer) if text is None else len(text)

        if self.use_main_ns:
            self.namespace = __main__.__dict__

        # if text is either None or an empty string, rely on the line buffer
        if (not line_buffer) and full_text:
            line_buffer = full_text.split('\n')[cursor_line]
        if not text:  # issue #11508: check line_buffer before calling split_line
            text = (
                self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
            )

        # If no line buffer is given, assume the input text is all there was
        if line_buffer is None:
            line_buffer = text

        # deprecated - do not use `line_buffer` in new code.
        self.line_buffer = line_buffer
        self.text_until_cursor = self.line_buffer[:cursor_pos]

        if not full_text:
            full_text = line_buffer

        context = CompletionContext(
            full_text=full_text,
            cursor_position=cursor_pos,
            cursor_line=cursor_line,
            token=text,
            limit=MATCHES_LIMIT,
        )

        # Start with a clean slate of completions
        results: Dict[str, MatcherResult] = {}

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        suppressed_matchers: Set[str] = set()

        matchers = {
            _get_matcher_id(matcher): matcher
            for matcher in sorted(
                self.matchers, key=_get_matcher_priority, reverse=True
            )
        }

        for matcher_id, matcher in matchers.items():
            matcher_id = _get_matcher_id(matcher)

            if matcher_id in self.disable_matchers:
                continue

            if matcher_id in results:
                warnings.warn(f"Duplicate matcher ID: {matcher_id}.")

            if matcher_id in suppressed_matchers:
                continue

            result: MatcherResult
            try:
                if _is_matcher_v1(matcher):
                    result = _convert_matcher_v1_result_to_v2(
                        matcher(text), type=_UNKNOWN_TYPE
                    )
                elif _is_matcher_v2(matcher):
                    result = matcher(context)
                else:
                    api_version = _get_matcher_api_version(matcher)
                    raise ValueError(f"Unsupported API version {api_version}")
            except BaseException:
                # Show the ugly traceback if the matcher causes an
                # exception, but do NOT crash the kernel!
                sys.excepthook(*sys.exc_info())
                continue

            # set default value for matched fragment if suffix was not selected.
            result["matched_fragment"] = result.get("matched_fragment", context.token)

            if not suppressed_matchers:
                suppression_recommended: Union[bool, Set[str]] = result.get(
                    "suppress", False
                )

                suppression_config = (
                    self.suppress_competing_matchers.get(matcher_id, None)
                    if isinstance(self.suppress_competing_matchers, dict)
                    else self.suppress_competing_matchers
                )
                should_suppress = (
                    (suppression_config is True)
                    or (suppression_recommended and (suppression_config is not False))
                ) and has_any_completions(result)

                if should_suppress:
                    suppression_exceptions: Set[str] = result.get(
                        "do_not_suppress", set()
                    )
                    if isinstance(suppression_recommended, Iterable):
                        to_suppress = set(suppression_recommended)
                    else:
                        to_suppress = set(matchers)
                    suppressed_matchers = to_suppress - suppression_exceptions

                    new_results = {}
                    for previous_matcher_id, previous_result in results.items():
                        if previous_matcher_id not in suppressed_matchers:
                            new_results[previous_matcher_id] = previous_result
                    results = new_results

            results[matcher_id] = result

        _, matches = self._arrange_and_extract(
            results,
            # TODO: Jedi completions not included in legacy stateful API; was this deliberate or an omission?
            # if it was an omission, we can remove the filtering step, otherwise remove this comment.
            skip_matchers={jedi_matcher_id},
            abort_if_offset_changes=False,
        )

        # populate legacy stateful API
        self.matches = matches

        return results

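    # --- Illustrative sketch (not part of the original file) ----------------
    # The suppression logic above is driven by the `suppress_competing_matchers`
    # setting, which can be a single boolean or a per-matcher mapping keyed by
    # the identifiers produced by `_get_matcher_id`. Indicative configuration
    # (the identifier string shown is an assumption; check `_get_matcher_id`
    # for the exact value in your version):
    #
    #     # ipython_config.py
    #     # never let any matcher suppress the others:
    #     c.IPCompleter.suppress_competing_matchers = False
    #
    #     # or control a single matcher:
    #     c.IPCompleter.suppress_competing_matchers = {
    #         "IPCompleter.latex_name_matcher": False,
    #     }
    # -------------------------------------------------------------------------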
    @staticmethod
    def _deduplicate(
        matches: Sequence[AnyCompletion],
    ) -> Iterable[AnyCompletion]:
        filtered_matches: Dict[str, AnyCompletion] = {}
        for match in matches:
            text = match.text
            if (
                text not in filtered_matches
                or filtered_matches[text].type == _UNKNOWN_TYPE
            ):
                filtered_matches[text] = match

        return filtered_matches.values()

    @staticmethod
    def _sort(matches: Sequence[AnyCompletion]):
        return sorted(matches, key=lambda x: completions_sorting_key(x.text))

    @context_matcher()
    def fwd_unicode_matcher(self, context: CompletionContext):
        """Same as :any:`fwd_unicode_match`, but adapted to the new Matcher API."""
        # TODO: use `context.limit` to terminate early once we matched the maximum
        # number that will be used downstream; can be added as an optional to
        # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
        fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

    def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
        """
        Forward match a string starting with a backslash with a list of
        potential Unicode completions.

        Will compute the list of Unicode character names on first call and cache it.

        .. deprecated:: 8.6
            You can use :meth:`fwd_unicode_matcher` instead.

        Returns
        -------
        A tuple with:
        - matched text (empty if no matches)
        - list of potential completions (an empty tuple if there are none)
        """
        # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
        # We could do a faster match using a Trie.

        # Using pygtrie the following seems to work:

        # s = PrefixSet()

        # for c in range(0,0x10FFFF + 1):
        #     try:
        #         s.add(unicodedata.name(chr(c)))
        #     except ValueError:
        #         pass
        # [''.join(k) for k in s.iter(prefix)]

        # But this needs to be timed and adds an extra dependency.

        slashpos = text.rfind('\\')
        # if there is a backslash in the text
        if slashpos > -1:
            # PERF: It's important that we don't access self._unicode_names
            # until we're inside this if-block. _unicode_names is lazily
            # initialized, and it takes a user-noticeable amount of time to
            # initialize it, so we don't want to initialize it unless we're
            # actually going to use it.
            s = text[slashpos + 1 :]
            sup = s.upper()
            candidates = [x for x in self.unicode_names if x.startswith(sup)]
            if candidates:
                return s, candidates
            candidates = [x for x in self.unicode_names if sup in x]
            if candidates:
                return s, candidates
            splitsup = sup.split(" ")
            candidates = [
                x for x in self.unicode_names if all(u in x for u in splitsup)
            ]
            if candidates:
                return s, candidates

            return "", ()

        # if there is no backslash in the text
        else:
            return '', ()

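    # --- Illustrative sketch (not part of the original file) ----------------
    # Indicative behaviour of the forward unicode matcher, assuming
    # `ip = get_ipython()` (the candidate list is long; only a few entries
    # are shown):
    #
    #     ip.Completer.fwd_unicode_match("print('\\GREEK SMALL LETTER AL")
    #     # -> ('GREEK SMALL LETTER AL',
    #     #     ['GREEK SMALL LETTER ALPHA',
    #     #      'GREEK SMALL LETTER ALPHA WITH TONOS', ...])
    #     ip.Completer.fwd_unicode_match("no backslash here")
    #     # -> ('', ())
    # -------------------------------------------------------------------------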
    @property
    def unicode_names(self) -> List[str]:
        """List of names of unicode code points that can be completed.

        The list is lazily initialized on first access.
        """
        if self._unicode_names is None:
            # computed from the curated code-point ranges rather than by
            # scanning every possible code point
            self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)

        return self._unicode_names


def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:
                pass
    return names
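
# --- Illustrative sketch (not part of the original file) --------------------
# Rough shape of the data produced by `_unicode_name_compute`; guarded so the
# module is unaffected when imported. The chosen range (the Greek and Coptic
# block) is only an example:
if __name__ == "__main__":
    demo_names = _unicode_name_compute([(0x370, 0x400)])
    print(len(demo_names), "names, e.g.:", demo_names[:3])
    # indicative output: a few dozen names such as 'GREEK CAPITAL LETTER ALPHA'
# -----------------------------------------------------------------------------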