Fix exception on missing attribute matches (krassowski)
"""Completion for IPython.

This module started as a fork of the rlcompleter module in the Python standard
library. The original enhancements made to rlcompleter have been sent
upstream and were accepted as of Python 2.3.

This module now supports a wide variety of completion mechanisms, both for
normal classic Python code and for IPython-specific syntax such as magics.

Latex and Unicode completion
============================

IPython and compatible frontends not only complete your code, but can also help
you to input a wide range of characters. In particular, we allow you to insert
a unicode character using the tab completion mechanism.

Forward latex/unicode completion
--------------------------------

Forward completion allows you to easily type a unicode character using its latex
name, or its unicode long description. To do so, type a backslash followed by the
relevant name and press :kbd:`Tab`:


Using latex completion:

.. code::

    \\alpha<tab>
    α

or using unicode completion:


.. code::

    \\GREEK SMALL LETTER ALPHA<tab>
    α


Only valid Python identifiers will complete. Combining characters (like arrows or
dots) are also available, but unlike latex they need to be put after their
counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.

Some browsers are known to display combining characters incorrectly.

Backward latex completion
-------------------------

It is sometimes challenging to know how to type a character. If you are using
IPython, or any compatible frontend, you can prepend a backslash to the character
and press :kbd:`Tab` to expand it to its latex form.

.. code::

    \\α<tab>
    \\alpha


Both forward and backward completions can be deactivated by setting the
:std:configtrait:`Completer.backslash_combining_completions` option to
``False``.


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions both using static analysis of the code and by dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and will
raise unless used in a :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if the current
IPython completer pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is an integer without
executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
option.

Be sure to update :any:`jedi` to the latest stable version or to try the
current development version to get better completions.
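
For example, assuming ``ip`` is your running :any:`InteractiveShell` instance
(an illustrative sketch, not an official recipe), the experimental API can be
exercised explicitly:

.. code::

    from IPython.core.completer import provisionalcompleter

    with provisionalcompleter():
        # ``completions(text, offset)`` is one of the experimental entry points
        completions = list(ip.Completer.completions("myvar[1].bi", 11))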

Matchers
========

All completion routines are implemented using a unified *Matchers* API.
The matchers API is provisional and subject to change without notice.

The built-in matchers include:

- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
- :any:`IPCompleter.magic_matcher`: completions for magics,
- :any:`IPCompleter.unicode_name_matcher`,
  :any:`IPCompleter.fwd_unicode_matcher`
  and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
- :any:`IPCompleter.file_matcher`: paths to files and directories,
- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
  implementation in :any:`InteractiveShell` which uses the IPython hooks system
  (`complete_command`) with string dispatch (including regular expressions).
  Unlike other matchers, ``custom_completer_matcher`` will not suppress
  Jedi results, to match behaviour in earlier IPython versions.

Custom matchers can be added by appending to the ``IPCompleter.custom_matchers``
list, for example:
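
A minimal sketch (the ``color_matcher`` name and the ``ip`` shell instance are
illustrative assumptions, not part of IPython itself):

.. code-block::

    from IPython.core.completer import (
        CompletionContext,
        SimpleCompletion,
        context_matcher,
    )

    @context_matcher()
    def color_matcher(context: CompletionContext):
        """Complete a few colour names (Matcher API v2)."""
        names = ["red", "green", "blue"]
        return {
            "completions": [
                SimpleCompletion(text=name, type="param")
                for name in names
                if name.startswith(context.token)
            ]
        }

    ip.Completer.custom_matchers.append(color_matcher)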

Matcher API
-----------

Simplifying some details, the ``Matcher`` interface can be described as

.. code-block::

    MatcherAPIv1 = Callable[[str], list[str]]
    MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]

    Matcher = MatcherAPIv1 | MatcherAPIv2

The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
and remains supported as the simplest way of generating completions. This is also
currently the only API supported by the IPython hooks system `complete_command`.

To distinguish between matcher versions, the ``matcher_api_version`` attribute is
used. More precisely, the API allows ``matcher_api_version`` to be omitted for
v1 Matchers, and requires a literal ``2`` for v2 Matchers.
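
For instance, a v1 matcher can be as simple as the following sketch (the
``frequent_words`` list is a made-up example):

.. code-block::

    frequent_words = ["import", "include", "initial"]

    def frequent_word_matcher(text: str) -> list[str]:
        # v1 matchers receive the token before the cursor and return strings;
        # ``matcher_api_version`` may be omitted and defaults to 1.
        return [word for word in frequent_words if word.startswith(text)]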

Once the API stabilises, future versions may relax the requirement for specifying
``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore
please do not rely on the presence of ``matcher_api_version`` for any purposes.

Suppression of competing matchers
---------------------------------

By default, results from all matchers are combined, in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results from
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a set of identifiers of matchers which
should not be suppressed to the ``MatcherResult`` under the ``do_not_suppress`` key.

The suppression behaviour is user-configurable via
:std:configtrait:`IPCompleter.suppress_competing_matchers`.
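
For example, a v2 matcher could return a result along these lines (a sketch;
the identifier used in ``do_not_suppress`` is taken from the built-in matcher
list above and may differ in your configuration):

.. code-block::

    {
        "completions": [SimpleCompletion("only_this")],
        # hide results from all other matchers...
        "suppress": True,
        # ...except for the built-in Jedi matcher
        "do_not_suppress": {"IPCompleter.jedi_matcher"},
    }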
"""


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org

from __future__ import annotations
import builtins as builtin_mod
import enum
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import tokenize
import time
import unicodedata
import uuid
import warnings
from ast import literal_eval
from collections import defaultdict
from contextlib import contextmanager
from dataclasses import dataclass
from functools import cached_property, partial
from types import SimpleNamespace
from typing import (
    Iterable,
    Iterator,
    List,
    Tuple,
    Union,
    Any,
    Sequence,
    Dict,
    Optional,
    TYPE_CHECKING,
    Set,
    Sized,
    TypeVar,
    Literal,
)

from IPython.core.guarded_eval import guarded_eval, EvaluationContext
from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
from IPython.utils.decorators import sphinx_options
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.docs import GENERATING_DOCUMENTATION
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import (
    Bool,
    Enum,
    Int,
    List as ListTrait,
    Unicode,
    Dict as DictTrait,
    Union as UnionTrait,
    observe,
)
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
__skip_doctest__ = True

try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False


if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
    from typing import cast
    from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
else:
    from typing import Generic

    def cast(type_, obj):
        """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
        return obj

    # do not require on runtime
    NotRequired = Tuple  # requires Python >=3.11
    TypedDict = Dict  # by extension of `NotRequired` requires 3.11 too
    Protocol = object  # requires Python >=3.8
    TypeAlias = Any  # requires Python >=3.10
    TypeGuard = Generic  # requires Python >=3.10
    if GENERATING_DOCUMENTATION:
        from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
# -----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be more
# fine-grained, but is it worth it for performance? While unicode has characters
# in the range 0, 0x110000, we seem to have names for only about 10% of those
# (131808 as I write this). With the ranges below we cover them all, with a
# density of ~67%; the biggest next gap we considered would only add about 1%
# density and there are 600 gaps that would need hard coding.
_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

# sentinel value to signal lack of a match
not_found = object()

class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in a :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)

@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things()  # works

    >>> completer.do_experimental_things()  # raises.

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should
        report a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements on
        any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned.  Else, return
    False.
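
    Examples
    --------
    Illustrative only (doctests in this module are skipped):

    >>> has_open_quotes('print("hello')
    '"'
    >>> has_open_quotes("it's")
    "'"
    >>> has_open_quotes('"done"')
    False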
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False


def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s


def expand_user(path: str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val


def compress_user(path: str, tilde_expand: bool, tilde_val: str) -> str:
    """Does the opposite of expand_user, with its outputs."""
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path


def completions_sorting_key(word):
    """key for sorting completions

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in the alphabetical order
      by their name
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if not "%" in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if not "%" in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2


class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0, so should likely be removed for 7.0.

    """

    def __init__(self, name):

        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ""
        self._origin = "fake"
        self.text = "crashed"

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]


class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in the proper context manager.

    This acts as a middle-ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how the
    code should be run/inspected, Prompt Toolkit (and other frontends) mostly
    need user-facing information.

    - Which range should be replaced by what.
    - Some metadata (like completion type), or meta information to be displayed to
      the user.

    For debugging purposes we can also store the origin of the completion (``jedi``,
    ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(
        self,
        start: int,
        end: int,
        text: str,
        *,
        type: Optional[str] = None,
        _origin="",
        signature="",
    ) -> None:
        warnings.warn(
            "``Completion`` is a provisional API (as of IPython 6.0). "
            "It may change without warnings. "
            "Use in corresponding context manager.",
            category=ProvisionalCompleterWarning,
            stacklevel=2,
        )

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
            (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other) -> bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer the type), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing, as it depends on surrounding text, which Completions are not
        aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))


class SimpleCompletion:
    """Completion item to be included in the dictionary returned by new-style Matcher (API v2).

    .. warning::

        Provisional

        This class is used to describe the currently supported attributes of
        simple completion items, and any additional implementation details
        should not be relied on. Additional attributes may be included in
        future versions, and the meaning of text disambiguated from the current
        dual meaning of "text to insert" and "text to be used as a label".
    """

    __slots__ = ["text", "type"]

    def __init__(self, text: str, *, type: Optional[str] = None):
        self.text = text
        self.type = type

    def __repr__(self):
        return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"


class _MatcherResultBase(TypedDict):
    """Definition of dictionary to be returned by new-style Matcher (API v2)."""

    #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
    matched_fragment: NotRequired[str]

    #: Whether to suppress results from all other matchers (True), some
    #: matchers (set of identifiers) or none (False); default is False.
    suppress: NotRequired[Union[bool, Set[str]]]

    #: Identifiers of matchers which should NOT be suppressed when this matcher
    #: requests to suppress all other matchers; defaults to an empty set.
    do_not_suppress: NotRequired[Set[str]]

    #: Are completions already ordered and should be left as-is? default is False.
    ordered: NotRequired[bool]


@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
class SimpleMatcherResult(_MatcherResultBase, TypedDict):
    """Result of new-style completion matcher."""

    # note: TypedDict is added again to the inheritance chain
    # in order to get __orig_bases__ for documentation

    #: List of candidate completions
    completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]


class _JediMatcherResult(_MatcherResultBase):
    """Matching result returned by Jedi (will be processed differently)"""

    #: list of candidate completions
    completions: Iterator[_JediCompletionLike]


AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)


@dataclass
class CompletionContext:
    """Completion context provided as an argument to matchers in the Matcher API v2."""

    # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
    # which was not explicitly visible as an argument of the matcher, making any refactor
    # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
    # from the completer, and make substituting them in sub-classes easier.

    #: Relevant fragment of code directly preceding the cursor.
    #: The extraction of token is implemented via splitter heuristic
    #: (following readline behaviour for legacy reasons), which is user configurable
    #: (by switching the greedy mode).
    token: str

    #: The full available content of the editor or buffer
    full_text: str

    #: Cursor position in the line (the same for ``full_text`` and ``text``).
    cursor_position: int

    #: Cursor line in ``full_text``.
    cursor_line: int

    #: The maximum number of completions that will be used downstream.
    #: Matchers can use this information to abort early.
    #: The built-in Jedi matcher is currently excepted from this limit.
    # If not given, return all possible completions.
    limit: Optional[int]

    @cached_property
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

    @cached_property
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]


#: Matcher results for API v2.
MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]


class _MatcherAPIv1Base(Protocol):
    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
    #: API version
    matcher_api_version: Optional[Literal[1]]

    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...


#: Protocol describing Matcher API v1.
MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]


class MatcherAPIv2(Protocol):
    """Protocol describing Matcher API v2."""

    #: API version
    matcher_api_version: Literal[2] = 2

    def __call__(self, context: CompletionContext) -> MatcherResult:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]


def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 1


def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 2


def _is_sizable(value: Any) -> TypeGuard[Sized]:
    """Determines whether the object is sizable"""
    return hasattr(value, "__len__")


def _is_iterator(value: Any) -> TypeGuard[Iterator]:
    """Determines whether the object is an iterator"""
    return hasattr(value, "__next__")


def has_any_completions(result: MatcherResult) -> bool:
    """Check if any result includes any completions."""
    completions = result["completions"]
    if _is_sizable(completions):
        return len(completions) != 0
    if _is_iterator(completions):
        try:
            old_iterator = completions
            first = next(old_iterator)
            result["completions"] = cast(
                Iterator[SimpleCompletion],
                itertools.chain([first], old_iterator),
            )
            return True
        except StopIteration:
            return False
    raise ValueError(
        "Completions returned by matcher need to be an Iterator or a Sizable"
    )


def completion_matcher(
    *,
    priority: Optional[float] = None,
    identifier: Optional[str] = None,
    api_version: int = 1,
):
    """Adds attributes describing the matcher.

    Parameters
    ----------
    priority : Optional[float]
        The priority of the matcher, which determines the order of execution of matchers.
        Higher priority means that the matcher will be executed first. Defaults to 0.
    identifier : Optional[str]
        Identifier of the matcher, allowing users to modify the behaviour via traitlets,
        and also used for debugging (will be passed as ``origin`` with the completions).

        Defaults to the matcher function's ``__qualname__`` (for example,
        ``IPCompleter.file_matcher`` for the built-in matcher defined
        as a ``file_matcher`` method of the ``IPCompleter`` class).
    api_version : Optional[int]
        Version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.
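
    Examples
    --------
    Illustrative sketch only (``frequent_words`` is a made-up helper, and
    doctests in this module are skipped)::

        frequent_words = ["import", "include"]

        @completion_matcher(identifier="my_extension.words", priority=0.5)
        def words_matcher(text):
            # a plain v1 matcher, decorated with a custom identifier and priority
            return [w for w in frequent_words if w.startswith(text)]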
774 """
774 """
775
775
776 def wrapper(func: Matcher):
776 def wrapper(func: Matcher):
777 func.matcher_priority = priority or 0 # type: ignore
777 func.matcher_priority = priority or 0 # type: ignore
778 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
778 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
779 func.matcher_api_version = api_version # type: ignore
779 func.matcher_api_version = api_version # type: ignore
780 if TYPE_CHECKING:
780 if TYPE_CHECKING:
781 if api_version == 1:
781 if api_version == 1:
782 func = cast(MatcherAPIv1, func)
782 func = cast(MatcherAPIv1, func)
783 elif api_version == 2:
783 elif api_version == 2:
784 func = cast(MatcherAPIv2, func)
784 func = cast(MatcherAPIv2, func)
785 return func
785 return func
786
786
787 return wrapper
787 return wrapper
788
788
789
789
790 def _get_matcher_priority(matcher: Matcher):
790 def _get_matcher_priority(matcher: Matcher):
791 return getattr(matcher, "matcher_priority", 0)
791 return getattr(matcher, "matcher_priority", 0)
792
792
793
793
794 def _get_matcher_id(matcher: Matcher):
794 def _get_matcher_id(matcher: Matcher):
795 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
795 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
796
796
797
797
798 def _get_matcher_api_version(matcher):
798 def _get_matcher_api_version(matcher):
799 return getattr(matcher, "matcher_api_version", 1)
799 return getattr(matcher, "matcher_api_version", 1)
800
800
801
801
802 context_matcher = partial(completion_matcher, api_version=2)
802 context_matcher = partial(completion_matcher, api_version=2)
803
803
804
804
805 _IC = Iterable[Completion]
805 _IC = Iterable[Completion]
806
806
807
807
808 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
808 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
809 """
809 """
810 Deduplicate a set of completions.
810 Deduplicate a set of completions.
811
811
812 .. warning::
812 .. warning::
813
813
814 Unstable
814 Unstable
815
815
816 This function is unstable, API may change without warning.
816 This function is unstable, API may change without warning.
817
817
818 Parameters
818 Parameters
819 ----------
819 ----------
820 text : str
820 text : str
821 text that should be completed.
821 text that should be completed.
822 completions : Iterator[Completion]
822 completions : Iterator[Completion]
823 iterator over the completions to deduplicate
823 iterator over the completions to deduplicate
824
824
825 Yields
825 Yields
826 ------
826 ------
827 `Completions` objects
827 `Completions` objects
828 Completions coming from multiple sources, may be different but end up having
828 Completions coming from multiple sources, may be different but end up having
829 the same effect when applied to ``text``. If this is the case, this will
829 the same effect when applied to ``text``. If this is the case, this will
830 consider completions as equal and only emit the first encountered.
830 consider completions as equal and only emit the first encountered.
831 Not folded in `completions()` yet for debugging purpose, and to detect when
831 Not folded in `completions()` yet for debugging purpose, and to detect when
832 the IPython completer does return things that Jedi does not, but should be
832 the IPython completer does return things that Jedi does not, but should be
833 at some point.
833 at some point.
834 """
834 """
835 completions = list(completions)
835 completions = list(completions)
836 if not completions:
836 if not completions:
837 return
837 return
838
838
839 new_start = min(c.start for c in completions)
839 new_start = min(c.start for c in completions)
840 new_end = max(c.end for c in completions)
840 new_end = max(c.end for c in completions)
841
841
842 seen = set()
842 seen = set()
843 for c in completions:
843 for c in completions:
844 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
844 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
845 if new_text not in seen:
845 if new_text not in seen:
846 yield c
846 yield c
847 seen.add(new_text)
847 seen.add(new_text)
848
848
849
849
850 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
850 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
851 """
851 """
852 Rectify a set of completions to all have the same ``start`` and ``end``
852 Rectify a set of completions to all have the same ``start`` and ``end``
853
853
854 .. warning::
854 .. warning::
855
855
856 Unstable
856 Unstable
857
857
858 This function is unstable, API may change without warning.
858 This function is unstable, API may change without warning.
859 It will also raise unless use in proper context manager.
859 It will also raise unless use in proper context manager.
860
860
861 Parameters
861 Parameters
862 ----------
862 ----------
863 text : str
863 text : str
864 text that should be completed.
864 text that should be completed.
865 completions : Iterator[Completion]
865 completions : Iterator[Completion]
866 iterator over the completions to rectify
866 iterator over the completions to rectify
867 _debug : bool
867 _debug : bool
868 Log failed completion
868 Log failed completion
869
869
870 Notes
870 Notes
871 -----
871 -----
872 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
872 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
873 the Jupyter Protocol requires them to behave like so. This will readjust
873 the Jupyter Protocol requires them to behave like so. This will readjust
874 the completion to have the same ``start`` and ``end`` by padding both
874 the completion to have the same ``start`` and ``end`` by padding both
875 extremities with surrounding text.
875 extremities with surrounding text.
876
876
877 During stabilisation this should support a ``_debug`` option to log which
877 During stabilisation this should support a ``_debug`` option to log which
878 completions are returned by the IPython completer but not found in Jedi, in
878 completions are returned by the IPython completer but not found in Jedi, in
879 order to make upstream bug reports.
879 order to make upstream bug reports.
880 """
880 """
881 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
881 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
882 "It may change without warning. "
882 "It may change without warning. "
883 "Use it in the corresponding context manager.",
883 "Use it in the corresponding context manager.",
884 category=ProvisionalCompleterWarning, stacklevel=2)
884 category=ProvisionalCompleterWarning, stacklevel=2)
885
885
886 completions = list(completions)
886 completions = list(completions)
887 if not completions:
887 if not completions:
888 return
888 return
889 starts = (c.start for c in completions)
889 starts = (c.start for c in completions)
890 ends = (c.end for c in completions)
890 ends = (c.end for c in completions)
891
891
892 new_start = min(starts)
892 new_start = min(starts)
893 new_end = max(ends)
893 new_end = max(ends)
894
894
895 seen_jedi = set()
895 seen_jedi = set()
896 seen_python_matches = set()
896 seen_python_matches = set()
897 for c in completions:
897 for c in completions:
898 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
898 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
899 if c._origin == 'jedi':
899 if c._origin == 'jedi':
900 seen_jedi.add(new_text)
900 seen_jedi.add(new_text)
901 elif c._origin == "IPCompleter.python_matcher":
901 elif c._origin == "IPCompleter.python_matcher":
902 seen_python_matches.add(new_text)
902 seen_python_matches.add(new_text)
903 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
903 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
904 diff = seen_python_matches.difference(seen_jedi)
904 diff = seen_python_matches.difference(seen_jedi)
905 if diff and _debug:
905 if diff and _debug:
906 print('IPython.python matches have extras:', diff)
906 print('IPython.python matches have extras:', diff)
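# --- Editor's sketch (not part of the diff): rectification in practice.
# Assuming the provisional ``Completion`` API and the ``provisionalcompleter``
# context manager from this module, two completions with different ranges are
# padded with surrounding text so that both span the same ``start``/``end``:
#
#     text = "fo.b"
#     a = Completion(start=3, end=4, text="bar")     # completes only the 'b'
#     b = Completion(start=0, end=4, text="fo.baz")  # replaces the whole text
#     with provisionalcompleter():
#         [c.text for c in rectify_completions(text, [a, b])]
#     # expected: ['fo.bar', 'fo.baz'], both now covering start=0, end=4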
907
907
908
908
909 if sys.platform == 'win32':
909 if sys.platform == 'win32':
910 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
910 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
911 else:
911 else:
912 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
912 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
913
913
914 GREEDY_DELIMS = ' =\r\n'
914 GREEDY_DELIMS = ' =\r\n'
915
915
916
916
917 class CompletionSplitter(object):
917 class CompletionSplitter(object):
918 """An object to split an input line in a manner similar to readline.
918 """An object to split an input line in a manner similar to readline.
919
919
920 By having our own implementation, we can expose readline-like completion in
920 By having our own implementation, we can expose readline-like completion in
921 a uniform manner to all frontends. This object only needs to be given the
921 a uniform manner to all frontends. This object only needs to be given the
922 line of text to be split and the cursor position on said line, and it
922 line of text to be split and the cursor position on said line, and it
923 returns the 'word' to be completed on at the cursor after splitting the
923 returns the 'word' to be completed on at the cursor after splitting the
924 entire line.
924 entire line.
925
925
926 What characters are used as splitting delimiters can be controlled by
926 What characters are used as splitting delimiters can be controlled by
927 setting the ``delims`` attribute (this is a property that internally
927 setting the ``delims`` attribute (this is a property that internally
928 automatically builds the necessary regular expression)"""
928 automatically builds the necessary regular expression)"""
929
929
930 # Private interface
930 # Private interface
931
931
932 # A string of delimiter characters. The default value makes sense for
932 # A string of delimiter characters. The default value makes sense for
933 # IPython's most typical usage patterns.
933 # IPython's most typical usage patterns.
934 _delims = DELIMS
934 _delims = DELIMS
935
935
936 # The expression (a normal string) to be compiled into a regular expression
936 # The expression (a normal string) to be compiled into a regular expression
937 # for actual splitting. We store it as an attribute mostly for ease of
937 # for actual splitting. We store it as an attribute mostly for ease of
938 # debugging, since this type of code can be so tricky to debug.
938 # debugging, since this type of code can be so tricky to debug.
939 _delim_expr = None
939 _delim_expr = None
940
940
941 # The regular expression that does the actual splitting
941 # The regular expression that does the actual splitting
942 _delim_re = None
942 _delim_re = None
943
943
944 def __init__(self, delims=None):
944 def __init__(self, delims=None):
945 delims = CompletionSplitter._delims if delims is None else delims
945 delims = CompletionSplitter._delims if delims is None else delims
946 self.delims = delims
946 self.delims = delims
947
947
948 @property
948 @property
949 def delims(self):
949 def delims(self):
950 """Return the string of delimiter characters."""
950 """Return the string of delimiter characters."""
951 return self._delims
951 return self._delims
952
952
953 @delims.setter
953 @delims.setter
954 def delims(self, delims):
954 def delims(self, delims):
955 """Set the delimiters for line splitting."""
955 """Set the delimiters for line splitting."""
956 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
956 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
957 self._delim_re = re.compile(expr)
957 self._delim_re = re.compile(expr)
958 self._delims = delims
958 self._delims = delims
959 self._delim_expr = expr
959 self._delim_expr = expr
960
960
961 def split_line(self, line, cursor_pos=None):
961 def split_line(self, line, cursor_pos=None):
962 """Split a line of text with a cursor at the given position.
962 """Split a line of text with a cursor at the given position.
963 """
963 """
964 l = line if cursor_pos is None else line[:cursor_pos]
964 l = line if cursor_pos is None else line[:cursor_pos]
965 return self._delim_re.split(l)[-1]
965 return self._delim_re.split(l)[-1]
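# --- Editor's sketch (not part of the diff): CompletionSplitter in action.
# The 'word' under the cursor is whatever follows the last delimiter; note
# that '.' is not a delimiter, so dotted names survive splitting:
#
#     sp = CompletionSplitter()
#     sp.split_line("print(foo.ba")    # -> 'foo.ba'  ('(' is a delimiter)
#     sp.split_line("a = b + cde", 9)  # -> 'c'  (only text up to the cursor counts)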
966
966
967
967
968
968
969 class Completer(Configurable):
969 class Completer(Configurable):
970
970
971 greedy = Bool(
971 greedy = Bool(
972 False,
972 False,
973 help="""Activate greedy completion.
973 help="""Activate greedy completion.
974
974
975 .. deprecated:: 8.8
975 .. deprecated:: 8.8
976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977
977
978 When enabled in IPython 8.8 or newer, changes configuration as follows:
978 When enabled in IPython 8.8 or newer, changes configuration as follows:
979
979
980 - ``Completer.evaluation = 'unsafe'``
980 - ``Completer.evaluation = 'unsafe'``
981 - ``Completer.auto_close_dict_keys = True``
981 - ``Completer.auto_close_dict_keys = True``
982 """,
982 """,
983 ).tag(config=True)
983 ).tag(config=True)
984
984
985 evaluation = Enum(
985 evaluation = Enum(
986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 default_value="limited",
987 default_value="limited",
988 help="""Policy for code evaluation under completion.
988 help="""Policy for code evaluation under completion.
989
989
990 Successive options allow enabling more eager evaluation for better
990 Successive options allow enabling more eager evaluation for better
991 completion suggestions, including for nested dictionaries, nested lists,
991 completion suggestions, including for nested dictionaries, nested lists,
992 or even results of function calls.
992 or even results of function calls.
993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995
995
996 Allowed values are:
996 Allowed values are:
997
997
998 - ``forbidden``: no evaluation of code is permitted,
998 - ``forbidden``: no evaluation of code is permitted,
999 - ``minimal``: evaluation of literals and access to built-in namespace;
999 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 no item/attribute evaluation, no access to locals/globals,
1000 no item/attribute evaluation, no access to locals/globals,
1001 no evaluation of any operations or comparisons.
1001 no evaluation of any operations or comparisons.
1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 - ``unsafe``: evaluation of all methods and function calls but not of
1006 - ``unsafe``: evaluation of all methods and function calls but not of
1007 syntax with side-effects like `del x`,
1007 syntax with side-effects like `del x`,
1008 - ``dangerous``: completely arbitrary evaluation.
1008 - ``dangerous``: completely arbitrary evaluation.
1009 """,
1009 """,
1010 ).tag(config=True)
1010 ).tag(config=True)
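# --- Editor's note (not part of the diff): a hedged configuration sketch.
# ``evaluation`` is a regular traitlets option, so it can be set from a config
# file or adjusted at runtime, e.g.:
#
#     c.Completer.evaluation = "minimal"               # e.g. in ipython_config.py
#     get_ipython().Completer.evaluation = "unsafe"    # or interactively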
1011
1011
1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 help="Experimental: Use Jedi to generate autocompletions. "
1013 help="Experimental: Use Jedi to generate autocompletions. "
1014 "Defaults to True if jedi is installed.").tag(config=True)
1014 "Defaults to True if jedi is installed.").tag(config=True)
1015
1015
1016 jedi_compute_type_timeout = Int(default_value=400,
1016 jedi_compute_type_timeout = Int(default_value=400,
1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 Set to 0 to stop computing types. Non-zero values lower than 100ms may hurt
1018 Set to 0 to stop computing types. Non-zero values lower than 100ms may hurt
1019 performance by preventing jedi from building its cache.
1019 performance by preventing jedi from building its cache.
1020 """).tag(config=True)
1020 """).tag(config=True)
1021
1021
1022 debug = Bool(default_value=False,
1022 debug = Bool(default_value=False,
1023 help='Enable debug for the Completer. Mostly prints extra '
1023 help='Enable debug for the Completer. Mostly prints extra '
1024 'information for experimental jedi integration.')\
1024 'information for experimental jedi integration.')\
1025 .tag(config=True)
1025 .tag(config=True)
1026
1026
1027 backslash_combining_completions = Bool(True,
1027 backslash_combining_completions = Bool(True,
1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 "Includes completion of latex commands, unicode names, and expanding "
1029 "Includes completion of latex commands, unicode names, and expanding "
1030 "unicode characters back to latex commands.").tag(config=True)
1030 "unicode characters back to latex commands.").tag(config=True)
1031
1031
1032 auto_close_dict_keys = Bool(
1032 auto_close_dict_keys = Bool(
1033 False,
1033 False,
1034 help="""
1034 help="""
1035 Enable auto-closing dictionary keys.
1035 Enable auto-closing dictionary keys.
1036
1036
1037 When enabled, string keys will be suffixed with a final quote
1037 When enabled, string keys will be suffixed with a final quote
1038 (matching the opening quote), tuple keys will also receive a
1038 (matching the opening quote), tuple keys will also receive a
1039 separating comma if needed, and keys which are final will
1039 separating comma if needed, and keys which are final will
1040 receive a closing bracket (``]``).
1040 receive a closing bracket (``]``).
1041 """,
1041 """,
1042 ).tag(config=True)
1042 ).tag(config=True)
1043
1043
1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 """Create a new completer for the command line.
1045 """Create a new completer for the command line.
1046
1046
1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048
1048
1049 If unspecified, the default namespace where completions are performed
1049 If unspecified, the default namespace where completions are performed
1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 given as dictionaries.
1051 given as dictionaries.
1052
1052
1053 An optional second namespace can be given. This allows the completer
1053 An optional second namespace can be given. This allows the completer
1054 to handle cases where both the local and global scopes need to be
1054 to handle cases where both the local and global scopes need to be
1055 distinguished.
1055 distinguished.
1056 """
1056 """
1057
1057
1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 # specific namespace or to use __main__.__dict__. This will allow us
1059 # specific namespace or to use __main__.__dict__. This will allow us
1060 # to bind to __main__.__dict__ at completion time, not now.
1060 # to bind to __main__.__dict__ at completion time, not now.
1061 if namespace is None:
1061 if namespace is None:
1062 self.use_main_ns = True
1062 self.use_main_ns = True
1063 else:
1063 else:
1064 self.use_main_ns = False
1064 self.use_main_ns = False
1065 self.namespace = namespace
1065 self.namespace = namespace
1066
1066
1067 # The global namespace, if given, can be bound directly
1067 # The global namespace, if given, can be bound directly
1068 if global_namespace is None:
1068 if global_namespace is None:
1069 self.global_namespace = {}
1069 self.global_namespace = {}
1070 else:
1070 else:
1071 self.global_namespace = global_namespace
1071 self.global_namespace = global_namespace
1072
1072
1073 self.custom_matchers = []
1073 self.custom_matchers = []
1074
1074
1075 super(Completer, self).__init__(**kwargs)
1075 super(Completer, self).__init__(**kwargs)
1076
1076
1077 def complete(self, text, state):
1077 def complete(self, text, state):
1078 """Return the next possible completion for 'text'.
1078 """Return the next possible completion for 'text'.
1079
1079
1080 This is called successively with state == 0, 1, 2, ... until it
1080 This is called successively with state == 0, 1, 2, ... until it
1081 returns None. The completion should begin with 'text'.
1081 returns None. The completion should begin with 'text'.
1082
1082
1083 """
1083 """
1084 if self.use_main_ns:
1084 if self.use_main_ns:
1085 self.namespace = __main__.__dict__
1085 self.namespace = __main__.__dict__
1086
1086
1087 if state == 0:
1087 if state == 0:
1088 if "." in text:
1088 if "." in text:
1089 self.matches = self.attr_matches(text)
1089 self.matches = self.attr_matches(text)
1090 else:
1090 else:
1091 self.matches = self.global_matches(text)
1091 self.matches = self.global_matches(text)
1092 try:
1092 try:
1093 return self.matches[state]
1093 return self.matches[state]
1094 except IndexError:
1094 except IndexError:
1095 return None
1095 return None
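# --- Editor's sketch (not part of the diff): the readline-style protocol.
# State 0 computes the match list; later states walk it until None is returned.
# The namespace below is hypothetical:
#
#     comp = Completer(namespace={"price": 1})
#     [comp.complete("pri", state) for state in range(4)]
#     # expected to contain 'print' (builtin) and 'price', then None once exhausted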
1096
1096
1097 def global_matches(self, text):
1097 def global_matches(self, text):
1098 """Compute matches when text is a simple name.
1098 """Compute matches when text is a simple name.
1099
1099
1100 Return a list of all keywords, built-in functions and names currently
1100 Return a list of all keywords, built-in functions and names currently
1101 defined in self.namespace or self.global_namespace that match.
1101 defined in self.namespace or self.global_namespace that match.
1102
1102
1103 """
1103 """
1104 matches = []
1104 matches = []
1105 match_append = matches.append
1105 match_append = matches.append
1106 n = len(text)
1106 n = len(text)
1107 for lst in [
1107 for lst in [
1108 keyword.kwlist,
1108 keyword.kwlist,
1109 builtin_mod.__dict__.keys(),
1109 builtin_mod.__dict__.keys(),
1110 list(self.namespace.keys()),
1110 list(self.namespace.keys()),
1111 list(self.global_namespace.keys()),
1111 list(self.global_namespace.keys()),
1112 ]:
1112 ]:
1113 for word in lst:
1113 for word in lst:
1114 if word[:n] == text and word != "__builtins__":
1114 if word[:n] == text and word != "__builtins__":
1115 match_append(word)
1115 match_append(word)
1116
1116
1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 shortened = {
1119 shortened = {
1120 "_".join([sub[0] for sub in word.split("_")]): word
1120 "_".join([sub[0] for sub in word.split("_")]): word
1121 for word in lst
1121 for word in lst
1122 if snake_case_re.match(word)
1122 if snake_case_re.match(word)
1123 }
1123 }
1124 for word in shortened.keys():
1124 for word in shortened.keys():
1125 if word[:n] == text and word != "__builtins__":
1125 if word[:n] == text and word != "__builtins__":
1126 match_append(shortened[word])
1126 match_append(shortened[word])
1127 return matches
1127 return matches
1128
1128
1129 def attr_matches(self, text):
1129 def attr_matches(self, text):
1130 """Compute matches when text contains a dot.
1130 """Compute matches when text contains a dot.
1131
1131
1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 evaluatable in self.namespace or self.global_namespace, it will be
1133 evaluatable in self.namespace or self.global_namespace, it will be
1134 evaluated and its attributes (as revealed by dir()) are used as
1134 evaluated and its attributes (as revealed by dir()) are used as
1135 possible completions. (For class instances, class members are
1135 possible completions. (For class instances, class members are
1136 also considered.)
1136 also considered.)
1137
1137
1138 WARNING: this can still invoke arbitrary C code, if an object
1138 WARNING: this can still invoke arbitrary C code, if an object
1139 with a __getattr__ hook is evaluated.
1139 with a __getattr__ hook is evaluated.
1140
1140
1141 """
1141 """
1142 return self._attr_matches(text)[0]
1142 return self._attr_matches(text)[0]
1143
1143
1144 def _attr_matches(self, text, include_prefix=True):
1144 def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
1145 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1145 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1146 if not m2:
1146 if not m2:
1147 return []
1147 return [], ""
1148 expr, attr = m2.group(1, 2)
1148 expr, attr = m2.group(1, 2)
1149
1149
1150 obj = self._evaluate_expr(expr)
1150 obj = self._evaluate_expr(expr)
1151
1151
1152 if obj is not_found:
1152 if obj is not_found:
1153 return []
1153 return [], ""
1154
1154
1155 if self.limit_to__all__ and hasattr(obj, '__all__'):
1155 if self.limit_to__all__ and hasattr(obj, '__all__'):
1156 words = get__all__entries(obj)
1156 words = get__all__entries(obj)
1157 else:
1157 else:
1158 words = dir2(obj)
1158 words = dir2(obj)
1159
1159
1160 try:
1160 try:
1161 words = generics.complete_object(obj, words)
1161 words = generics.complete_object(obj, words)
1162 except TryNext:
1162 except TryNext:
1163 pass
1163 pass
1164 except AssertionError:
1164 except AssertionError:
1165 raise
1165 raise
1166 except Exception:
1166 except Exception:
1167 # Silence errors from completion function
1167 # Silence errors from completion function
1168 pass
1168 pass
1169 # Build match list to return
1169 # Build match list to return
1170 n = len(attr)
1170 n = len(attr)
1171
1171
1172 # Note: ideally we would just return words here and the prefix
1172 # Note: ideally we would just return words here and the prefix
1173 # reconciliator would know that we intend to append to rather than
1173 # reconciliator would know that we intend to append to rather than
1174 # replace the input text; this requires refactoring to return range
1174 # replace the input text; this requires refactoring to return range
1175 # which ought to be replaced (as does jedi).
1175 # which ought to be replaced (as does jedi).
1176 if include_prefix:
1176 if include_prefix:
1177 tokens = _parse_tokens(expr)
1177 tokens = _parse_tokens(expr)
1178 rev_tokens = reversed(tokens)
1178 rev_tokens = reversed(tokens)
1179 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1179 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1180 name_turn = True
1180 name_turn = True
1181
1181
1182 parts = []
1182 parts = []
1183 for token in rev_tokens:
1183 for token in rev_tokens:
1184 if token.type in skip_over:
1184 if token.type in skip_over:
1185 continue
1185 continue
1186 if token.type == tokenize.NAME and name_turn:
1186 if token.type == tokenize.NAME and name_turn:
1187 parts.append(token.string)
1187 parts.append(token.string)
1188 name_turn = False
1188 name_turn = False
1189 elif (
1189 elif (
1190 token.type == tokenize.OP and token.string == "." and not name_turn
1190 token.type == tokenize.OP and token.string == "." and not name_turn
1191 ):
1191 ):
1192 parts.append(token.string)
1192 parts.append(token.string)
1193 name_turn = True
1193 name_turn = True
1194 else:
1194 else:
1195 # short-circuit if not empty nor name token
1195 # short-circuit if not empty nor name token
1196 break
1196 break
1197
1197
1198 prefix_after_space = "".join(reversed(parts))
1198 prefix_after_space = "".join(reversed(parts))
1199 else:
1199 else:
1200 prefix_after_space = ""
1200 prefix_after_space = ""
1201
1201
1202 return (
1202 return (
1203 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1203 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1204 "." + attr,
1204 "." + attr,
1205 )
1205 )
1206
1206
1207 def _evaluate_expr(self, expr):
1207 def _evaluate_expr(self, expr):
1208 obj = not_found
1208 obj = not_found
1209 done = False
1209 done = False
1210 while not done and expr:
1210 while not done and expr:
1211 try:
1211 try:
1212 obj = guarded_eval(
1212 obj = guarded_eval(
1213 expr,
1213 expr,
1214 EvaluationContext(
1214 EvaluationContext(
1215 globals=self.global_namespace,
1215 globals=self.global_namespace,
1216 locals=self.namespace,
1216 locals=self.namespace,
1217 evaluation=self.evaluation,
1217 evaluation=self.evaluation,
1218 ),
1218 ),
1219 )
1219 )
1220 done = True
1220 done = True
1221 except Exception as e:
1221 except Exception as e:
1222 if self.debug:
1222 if self.debug:
1223 print("Evaluation exception", e)
1223 print("Evaluation exception", e)
1224 # trim the expression to remove any invalid prefix
1224 # trim the expression to remove any invalid prefix
1225 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1225 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1226 # where parenthesis is not closed.
1226 # where parenthesis is not closed.
1227 # TODO: make this faster by reusing parts of the computation?
1227 # TODO: make this faster by reusing parts of the computation?
1228 expr = expr[1:]
1228 expr = expr[1:]
1229 return obj
1229 return obj
1230
1230
1231 def get__all__entries(obj):
1231 def get__all__entries(obj):
1232 """returns the strings in the __all__ attribute"""
1232 """returns the strings in the __all__ attribute"""
1233 try:
1233 try:
1234 words = getattr(obj, '__all__')
1234 words = getattr(obj, '__all__')
1235 except:
1235 except:
1236 return []
1236 return []
1237
1237
1238 return [w for w in words if isinstance(w, str)]
1238 return [w for w in words if isinstance(w, str)]
1239
1239
1240
1240
1241 class _DictKeyState(enum.Flag):
1241 class _DictKeyState(enum.Flag):
1242 """Represent state of the key match in context of other possible matches.
1242 """Represent state of the key match in context of other possible matches.
1243
1243
1244 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1244 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1245 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1245 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1246 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1246 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1247 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | END_OF_TUPLE}`
1247 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | END_OF_TUPLE}`
1248 """
1248 """
1249
1249
1250 BASELINE = 0
1250 BASELINE = 0
1251 END_OF_ITEM = enum.auto()
1251 END_OF_ITEM = enum.auto()
1252 END_OF_TUPLE = enum.auto()
1252 END_OF_TUPLE = enum.auto()
1253 IN_TUPLE = enum.auto()
1253 IN_TUPLE = enum.auto()
1254
1254
1255
1255
1256 def _parse_tokens(c):
1256 def _parse_tokens(c):
1257 """Parse tokens even if there is an error."""
1257 """Parse tokens even if there is an error."""
1258 tokens = []
1258 tokens = []
1259 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1259 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1260 while True:
1260 while True:
1261 try:
1261 try:
1262 tokens.append(next(token_generator))
1262 tokens.append(next(token_generator))
1263 except tokenize.TokenError:
1263 except tokenize.TokenError:
1264 return tokens
1264 return tokens
1265 except StopIteration:
1265 except StopIteration:
1266 return tokens
1266 return tokens
1267
1267
1268
1268
1269 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1269 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1270 """Match any valid Python numeric literal in a prefix of dictionary keys.
1270 """Match any valid Python numeric literal in a prefix of dictionary keys.
1271
1271
1272 References:
1272 References:
1273 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1273 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1274 - https://docs.python.org/3/library/tokenize.html
1274 - https://docs.python.org/3/library/tokenize.html
1275 """
1275 """
1276 if prefix[-1].isspace():
1276 if prefix[-1].isspace():
1277 # if user typed a space we do not have anything to complete
1277 # if user typed a space we do not have anything to complete
1278 # even if there was a valid number token before
1278 # even if there was a valid number token before
1279 return None
1279 return None
1280 tokens = _parse_tokens(prefix)
1280 tokens = _parse_tokens(prefix)
1281 rev_tokens = reversed(tokens)
1281 rev_tokens = reversed(tokens)
1282 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1282 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1283 number = None
1283 number = None
1284 for token in rev_tokens:
1284 for token in rev_tokens:
1285 if token.type in skip_over:
1285 if token.type in skip_over:
1286 continue
1286 continue
1287 if number is None:
1287 if number is None:
1288 if token.type == tokenize.NUMBER:
1288 if token.type == tokenize.NUMBER:
1289 number = token.string
1289 number = token.string
1290 continue
1290 continue
1291 else:
1291 else:
1292 # we did not match a number
1292 # we did not match a number
1293 return None
1293 return None
1294 if token.type == tokenize.OP:
1294 if token.type == tokenize.OP:
1295 if token.string == ",":
1295 if token.string == ",":
1296 break
1296 break
1297 if token.string in {"+", "-"}:
1297 if token.string in {"+", "-"}:
1298 number = token.string + number
1298 number = token.string + number
1299 else:
1299 else:
1300 return None
1300 return None
1301 return number
1301 return number
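# --- Editor's sketch (not part of the diff): what the number matcher accepts.
#
#     _match_number_in_dict_key_prefix("-12")    # -> '-12'
#     _match_number_in_dict_key_prefix("0xff")   # -> '0xff'
#     _match_number_in_dict_key_prefix("1, 2")   # -> '2'   (last element of a tuple key)
#     _match_number_in_dict_key_prefix("abc")    # -> None  (not a numeric literal)
#     _match_number_in_dict_key_prefix("12 ")    # -> None  (trailing space)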
1302
1302
1303
1303
1304 _INT_FORMATS = {
1304 _INT_FORMATS = {
1305 "0b": bin,
1305 "0b": bin,
1306 "0o": oct,
1306 "0o": oct,
1307 "0x": hex,
1307 "0x": hex,
1308 }
1308 }
1309
1309
1310
1310
1311 def match_dict_keys(
1311 def match_dict_keys(
1312 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1312 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1313 prefix: str,
1313 prefix: str,
1314 delims: str,
1314 delims: str,
1315 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1315 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1316 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1316 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1317 """Used by dict_key_matches, matching the prefix to a list of keys
1317 """Used by dict_key_matches, matching the prefix to a list of keys
1318
1318
1319 Parameters
1319 Parameters
1320 ----------
1320 ----------
1321 keys
1321 keys
1322 list of keys in dictionary currently being completed.
1322 list of keys in dictionary currently being completed.
1323 prefix
1323 prefix
1324 Part of the text already typed by the user. E.g. `mydict[b'fo`
1324 Part of the text already typed by the user. E.g. `mydict[b'fo`
1325 delims
1325 delims
1326 String of delimiters to consider when finding the current key.
1326 String of delimiters to consider when finding the current key.
1327 extra_prefix : optional
1327 extra_prefix : optional
1328 Part of the text already typed in multi-key index cases. E.g. for
1328 Part of the text already typed in multi-key index cases. E.g. for
1329 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1329 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1330
1330
1331 Returns
1331 Returns
1332 -------
1332 -------
1333 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1333 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1334 ``quote`` being the quote that needs to be used to close the current string,
1334 ``quote`` being the quote that needs to be used to close the current string,
1335 ``token_start`` the position where the replacement should start occurring, and
1335 ``token_start`` the position where the replacement should start occurring, and
1336 ``matched`` a dictionary with replacement/completion strings as keys and
1336 ``matched`` a dictionary with replacement/completion strings as keys and
1337 values indicating the state of each key match.
1337 values indicating the state of each key match.
1338 """
1338 """
1339 prefix_tuple = extra_prefix if extra_prefix else ()
1339 prefix_tuple = extra_prefix if extra_prefix else ()
1340
1340
1341 prefix_tuple_size = sum(
1341 prefix_tuple_size = sum(
1342 [
1342 [
1343 # for pandas, do not count slices as taking space
1343 # for pandas, do not count slices as taking space
1344 not isinstance(k, slice)
1344 not isinstance(k, slice)
1345 for k in prefix_tuple
1345 for k in prefix_tuple
1346 ]
1346 ]
1347 )
1347 )
1348 text_serializable_types = (str, bytes, int, float, slice)
1348 text_serializable_types = (str, bytes, int, float, slice)
1349
1349
1350 def filter_prefix_tuple(key):
1350 def filter_prefix_tuple(key):
1351 # Reject too short keys
1351 # Reject too short keys
1352 if len(key) <= prefix_tuple_size:
1352 if len(key) <= prefix_tuple_size:
1353 return False
1353 return False
1354 # Reject keys which cannot be serialised to text
1354 # Reject keys which cannot be serialised to text
1355 for k in key:
1355 for k in key:
1356 if not isinstance(k, text_serializable_types):
1356 if not isinstance(k, text_serializable_types):
1357 return False
1357 return False
1358 # Reject keys that do not match the prefix
1358 # Reject keys that do not match the prefix
1359 for k, pt in zip(key, prefix_tuple):
1359 for k, pt in zip(key, prefix_tuple):
1360 if k != pt and not isinstance(pt, slice):
1360 if k != pt and not isinstance(pt, slice):
1361 return False
1361 return False
1362 # All checks passed!
1362 # All checks passed!
1363 return True
1363 return True
1364
1364
1365 filtered_key_is_final: Dict[
1365 filtered_key_is_final: Dict[
1366 Union[str, bytes, int, float], _DictKeyState
1366 Union[str, bytes, int, float], _DictKeyState
1367 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1367 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1368
1368
1369 for k in keys:
1369 for k in keys:
1370 # If at least one of the matches is not final, mark as undetermined.
1370 # If at least one of the matches is not final, mark as undetermined.
1371 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1371 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1372 # `111` appears final on first match but is not final on the second.
1372 # `111` appears final on first match but is not final on the second.
1373
1373
1374 if isinstance(k, tuple):
1374 if isinstance(k, tuple):
1375 if filter_prefix_tuple(k):
1375 if filter_prefix_tuple(k):
1376 key_fragment = k[prefix_tuple_size]
1376 key_fragment = k[prefix_tuple_size]
1377 filtered_key_is_final[key_fragment] |= (
1377 filtered_key_is_final[key_fragment] |= (
1378 _DictKeyState.END_OF_TUPLE
1378 _DictKeyState.END_OF_TUPLE
1379 if len(k) == prefix_tuple_size + 1
1379 if len(k) == prefix_tuple_size + 1
1380 else _DictKeyState.IN_TUPLE
1380 else _DictKeyState.IN_TUPLE
1381 )
1381 )
1382 elif prefix_tuple_size > 0:
1382 elif prefix_tuple_size > 0:
1383 # we are completing a tuple but this key is not a tuple,
1383 # we are completing a tuple but this key is not a tuple,
1384 # so we should ignore it
1384 # so we should ignore it
1385 pass
1385 pass
1386 else:
1386 else:
1387 if isinstance(k, text_serializable_types):
1387 if isinstance(k, text_serializable_types):
1388 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1388 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1389
1389
1390 filtered_keys = filtered_key_is_final.keys()
1390 filtered_keys = filtered_key_is_final.keys()
1391
1391
1392 if not prefix:
1392 if not prefix:
1393 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1393 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1394
1394
1395 quote_match = re.search("(?:\"|')", prefix)
1395 quote_match = re.search("(?:\"|')", prefix)
1396 is_user_prefix_numeric = False
1396 is_user_prefix_numeric = False
1397
1397
1398 if quote_match:
1398 if quote_match:
1399 quote = quote_match.group()
1399 quote = quote_match.group()
1400 valid_prefix = prefix + quote
1400 valid_prefix = prefix + quote
1401 try:
1401 try:
1402 prefix_str = literal_eval(valid_prefix)
1402 prefix_str = literal_eval(valid_prefix)
1403 except Exception:
1403 except Exception:
1404 return "", 0, {}
1404 return "", 0, {}
1405 else:
1405 else:
1406 # If it does not look like a string, let's assume
1406 # If it does not look like a string, let's assume
1407 # we are dealing with a number or variable.
1407 # we are dealing with a number or variable.
1408 number_match = _match_number_in_dict_key_prefix(prefix)
1408 number_match = _match_number_in_dict_key_prefix(prefix)
1409
1409
1410 # We do not want the key matcher to suggest variable names, so return early:
1410 # We do not want the key matcher to suggest variable names, so return early:
1411 if number_match is None:
1411 if number_match is None:
1412 # The alternative would be to assume that user forgot the quote
1412 # The alternative would be to assume that user forgot the quote
1413 # and if the substring matches, suggest adding it at the start.
1413 # and if the substring matches, suggest adding it at the start.
1414 return "", 0, {}
1414 return "", 0, {}
1415
1415
1416 prefix_str = number_match
1416 prefix_str = number_match
1417 is_user_prefix_numeric = True
1417 is_user_prefix_numeric = True
1418 quote = ""
1418 quote = ""
1419
1419
1420 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1420 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1421 token_match = re.search(pattern, prefix, re.UNICODE)
1421 token_match = re.search(pattern, prefix, re.UNICODE)
1422 assert token_match is not None # silence mypy
1422 assert token_match is not None # silence mypy
1423 token_start = token_match.start()
1423 token_start = token_match.start()
1424 token_prefix = token_match.group()
1424 token_prefix = token_match.group()
1425
1425
1426 matched: Dict[str, _DictKeyState] = {}
1426 matched: Dict[str, _DictKeyState] = {}
1427
1427
1428 str_key: Union[str, bytes]
1428 str_key: Union[str, bytes]
1429
1429
1430 for key in filtered_keys:
1430 for key in filtered_keys:
1431 if isinstance(key, (int, float)):
1431 if isinstance(key, (int, float)):
1432 # This key is a number; skip it unless the user also typed a number.
1432 # This key is a number; skip it unless the user also typed a number.
1433 if not is_user_prefix_numeric:
1433 if not is_user_prefix_numeric:
1434 continue
1434 continue
1435 str_key = str(key)
1435 str_key = str(key)
1436 if isinstance(key, int):
1436 if isinstance(key, int):
1437 int_base = prefix_str[:2].lower()
1437 int_base = prefix_str[:2].lower()
1438 # if user typed integer using binary/oct/hex notation:
1438 # if user typed integer using binary/oct/hex notation:
1439 if int_base in _INT_FORMATS:
1439 if int_base in _INT_FORMATS:
1440 int_format = _INT_FORMATS[int_base]
1440 int_format = _INT_FORMATS[int_base]
1441 str_key = int_format(key)
1441 str_key = int_format(key)
1442 else:
1442 else:
1443 # This key is a string/bytes; skip it if the user typed a number instead.
1443 # This key is a string/bytes; skip it if the user typed a number instead.
1444 if is_user_prefix_numeric:
1444 if is_user_prefix_numeric:
1445 continue
1445 continue
1446 str_key = key
1446 str_key = key
1447 try:
1447 try:
1448 if not str_key.startswith(prefix_str):
1448 if not str_key.startswith(prefix_str):
1449 continue
1449 continue
1450 except (AttributeError, TypeError, UnicodeError) as e:
1450 except (AttributeError, TypeError, UnicodeError) as e:
1451 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1451 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1452 continue
1452 continue
1453
1453
1454 # reformat remainder of key to begin with prefix
1454 # reformat remainder of key to begin with prefix
1455 rem = str_key[len(prefix_str) :]
1455 rem = str_key[len(prefix_str) :]
1456 # force repr wrapped in '
1456 # force repr wrapped in '
1457 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1457 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1458 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1458 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1459 if quote == '"':
1459 if quote == '"':
1460 # The entered prefix is quoted with ",
1460 # The entered prefix is quoted with ",
1461 # but the match is quoted with '.
1461 # but the match is quoted with '.
1462 # A contained " hence needs escaping for comparison:
1462 # A contained " hence needs escaping for comparison:
1463 rem_repr = rem_repr.replace('"', '\\"')
1463 rem_repr = rem_repr.replace('"', '\\"')
1464
1464
1465 # then reinsert prefix from start of token
1465 # then reinsert prefix from start of token
1466 match = "%s%s" % (token_prefix, rem_repr)
1466 match = "%s%s" % (token_prefix, rem_repr)
1467
1467
1468 matched[match] = filtered_key_is_final[key]
1468 matched[match] = filtered_key_is_final[key]
1469 return quote, token_start, matched
1469 return quote, token_start, matched
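# --- Editor's sketch (not part of the diff): matching string keys.
# With a single-quoted prefix the matcher returns the quote to close with, the
# offset where the replacement starts, and the surviving candidates:
#
#     match_dict_keys(["abc", "abd", "xyz"], "'a", DELIMS)
#     # expected roughly: ("'", 1, {'abc': END_OF_ITEM, 'abd': END_OF_ITEM})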
1470
1470
1471
1471
1472 def cursor_to_position(text:str, line:int, column:int)->int:
1472 def cursor_to_position(text:str, line:int, column:int)->int:
1473 """
1473 """
1474 Convert the (line,column) position of the cursor in text to an offset in a
1474 Convert the (line,column) position of the cursor in text to an offset in a
1475 string.
1475 string.
1476
1476
1477 Parameters
1477 Parameters
1478 ----------
1478 ----------
1479 text : str
1479 text : str
1480 The text in which to calculate the cursor offset
1480 The text in which to calculate the cursor offset
1481 line : int
1481 line : int
1482 Line of the cursor; 0-indexed
1482 Line of the cursor; 0-indexed
1483 column : int
1483 column : int
1484 Column of the cursor 0-indexed
1484 Column of the cursor 0-indexed
1485
1485
1486 Returns
1486 Returns
1487 -------
1487 -------
1488 Position of the cursor in ``text``, 0-indexed.
1488 Position of the cursor in ``text``, 0-indexed.
1489
1489
1490 See Also
1490 See Also
1491 --------
1491 --------
1492 position_to_cursor : reciprocal of this function
1492 position_to_cursor : reciprocal of this function
1493
1493
1494 """
1494 """
1495 lines = text.split('\n')
1495 lines = text.split('\n')
1496 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1496 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1497
1497
1498 return sum(len(l) + 1 for l in lines[:line]) + column
1498 return sum(len(l) + 1 for l in lines[:line]) + column
1499
1499
1500 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1500 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1501 """
1501 """
1502 Convert the position of the cursor in text (0-indexed) to a line
1502 Convert the position of the cursor in text (0-indexed) to a line
1503 number (0-indexed) and a column number (0-indexed) pair
1503 number (0-indexed) and a column number (0-indexed) pair
1504
1504
1505 Position should be a valid position in ``text``.
1505 Position should be a valid position in ``text``.
1506
1506
1507 Parameters
1507 Parameters
1508 ----------
1508 ----------
1509 text : str
1509 text : str
1510 The text in which to calculate the cursor offset
1510 The text in which to calculate the cursor offset
1511 offset : int
1511 offset : int
1512 Position of the cursor in ``text``, 0-indexed.
1512 Position of the cursor in ``text``, 0-indexed.
1513
1513
1514 Returns
1514 Returns
1515 -------
1515 -------
1516 (line, column) : (int, int)
1516 (line, column) : (int, int)
1517 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1517 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1518
1518
1519 See Also
1519 See Also
1520 --------
1520 --------
1521 cursor_to_position : reciprocal of this function
1521 cursor_to_position : reciprocal of this function
1522
1522
1523 """
1523 """
1524
1524
1525 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1525 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1526
1526
1527 before = text[:offset]
1527 before = text[:offset]
1528 blines = before.split('\n') # ! splitlines trims trailing \n
1528 blines = before.split('\n') # ! splitlines trims trailing \n
1529 line = before.count('\n')
1529 line = before.count('\n')
1530 col = len(blines[-1])
1530 col = len(blines[-1])
1531 return line, col
1531 return line, col
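# --- Editor's sketch (not part of the diff): the two conversions round-trip.
#
#     text = "ab\ncd"
#     cursor_to_position(text, 1, 1)  # -> 4, i.e. between 'c' and 'd'
#     position_to_cursor(text, 4)     # -> (1, 1)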
1532
1532
1533
1533
1534 def _safe_isinstance(obj, module, class_name, *attrs):
1534 def _safe_isinstance(obj, module, class_name, *attrs):
1535 """Checks if obj is an instance of module.class_name if loaded
1535 """Checks if obj is an instance of module.class_name if loaded
1536 """
1536 """
1537 if module in sys.modules:
1537 if module in sys.modules:
1538 m = sys.modules[module]
1538 m = sys.modules[module]
1539 for attr in [class_name, *attrs]:
1539 for attr in [class_name, *attrs]:
1540 m = getattr(m, attr)
1540 m = getattr(m, attr)
1541 return isinstance(obj, m)
1541 return isinstance(obj, m)
1542
1542
1543
1543
1544 @context_matcher()
1544 @context_matcher()
1545 def back_unicode_name_matcher(context: CompletionContext):
1545 def back_unicode_name_matcher(context: CompletionContext):
1546 """Match Unicode characters back to Unicode name
1546 """Match Unicode characters back to Unicode name
1547
1547
1548 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1548 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1549 """
1549 """
1550 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1550 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1551 return _convert_matcher_v1_result_to_v2(
1551 return _convert_matcher_v1_result_to_v2(
1552 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1552 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1553 )
1553 )
1554
1554
1555
1555
1556 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1556 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1557 """Match Unicode characters back to Unicode name
1557 """Match Unicode characters back to Unicode name
1558
1558
1559 This does ``β˜ƒ`` -> ``\\snowman``
1559 This does ``β˜ƒ`` -> ``\\snowman``
1560
1560
1561 Note that snowman is not a valid python3 combining character but will be expanded.
1561 Note that snowman is not a valid python3 combining character but will be expanded.
1562 It will not, however, be recombined back into the snowman character by the completion machinery.
1562 It will not, however, be recombined back into the snowman character by the completion machinery.
1563
1563
1564 Nor will this back-complete standard escape sequences like \\n, \\b ...
1564 Nor will this back-complete standard escape sequences like \\n, \\b ...
1565
1565
1566 .. deprecated:: 8.6
1566 .. deprecated:: 8.6
1567 You can use :meth:`back_unicode_name_matcher` instead.
1567 You can use :meth:`back_unicode_name_matcher` instead.
1568
1568
1569 Returns
1569 Returns
1570 =======
1570 =======
1571
1571
1572 Return a tuple with two elements:
1572 Return a tuple with two elements:
1573
1573
1574 - The Unicode character that was matched (preceded with a backslash), or
1574 - The Unicode character that was matched (preceded with a backslash), or
1575 empty string,
1575 empty string,
1576 - a sequence (of length 1) with the name of the matched Unicode character,
1576 - a sequence (of length 1) with the name of the matched Unicode character,
1577 preceded by a backslash, or empty if no match.
1577 preceded by a backslash, or empty if no match.
1578 """
1578 """
1579 if len(text)<2:
1579 if len(text)<2:
1580 return '', ()
1580 return '', ()
1581 maybe_slash = text[-2]
1581 maybe_slash = text[-2]
1582 if maybe_slash != '\\':
1582 if maybe_slash != '\\':
1583 return '', ()
1583 return '', ()
1584
1584
1585 char = text[-1]
1585 char = text[-1]
1586 # no expand on quote for completion in strings.
1586 # no expand on quote for completion in strings.
1587 # nor backcomplete standard ascii keys
1587 # nor backcomplete standard ascii keys
1588 if char in string.ascii_letters or char in ('"',"'"):
1588 if char in string.ascii_letters or char in ('"',"'"):
1589 return '', ()
1589 return '', ()
1590 try :
1590 try :
1591 unic = unicodedata.name(char)
1591 unic = unicodedata.name(char)
1592 return '\\'+char,('\\'+unic,)
1592 return '\\'+char,('\\'+unic,)
1593 except KeyError:
1593 except KeyError:
1594 pass
1594 pass
1595 return '', ()
1595 return '', ()
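# --- Editor's sketch (not part of the diff): backward unicode completion.
#
#     back_unicode_name_matches("x = \\β˜ƒ")  # -> ('\\β˜ƒ', ('\\SNOWMAN',))
#     back_unicode_name_matches("x = \\n")  # -> ('', ())  ascii letters are not expanded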
1596
1596
1597
1597
1598 @context_matcher()
1598 @context_matcher()
1599 def back_latex_name_matcher(context: CompletionContext):
1599 def back_latex_name_matcher(context: CompletionContext):
1600 """Match latex characters back to unicode name
1600 """Match latex characters back to unicode name
1601
1601
1602 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1602 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1603 """
1603 """
1604 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1604 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1605 return _convert_matcher_v1_result_to_v2(
1605 return _convert_matcher_v1_result_to_v2(
1606 matches, type="latex", fragment=fragment, suppress_if_matches=True
1606 matches, type="latex", fragment=fragment, suppress_if_matches=True
1607 )
1607 )
1608
1608
1609
1609
1610 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1610 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1611 """Match latex characters back to unicode name
1611 """Match latex characters back to unicode name
1612
1612
1613 This does ``\\β„΅`` -> ``\\aleph``
1613 This does ``\\β„΅`` -> ``\\aleph``
1614
1614
1615 .. deprecated:: 8.6
1615 .. deprecated:: 8.6
1616 You can use :meth:`back_latex_name_matcher` instead.
1616 You can use :meth:`back_latex_name_matcher` instead.
1617 """
1617 """
1618 if len(text)<2:
1618 if len(text)<2:
1619 return '', ()
1619 return '', ()
1620 maybe_slash = text[-2]
1620 maybe_slash = text[-2]
1621 if maybe_slash != '\\':
1621 if maybe_slash != '\\':
1622 return '', ()
1622 return '', ()
1623
1623
1624
1624
1625 char = text[-1]
1625 char = text[-1]
1626 # no expand on quote for completion in strings.
1626 # no expand on quote for completion in strings.
1627 # nor backcomplete standard ascii keys
1627 # nor backcomplete standard ascii keys
1628 if char in string.ascii_letters or char in ('"',"'"):
1628 if char in string.ascii_letters or char in ('"',"'"):
1629 return '', ()
1629 return '', ()
1630 try :
1630 try :
1631 latex = reverse_latex_symbol[char]
1631 latex = reverse_latex_symbol[char]
1632 # '\\' replace the \ as well
1632 # '\\' replace the \ as well
1633 return '\\'+char,[latex]
1633 return '\\'+char,[latex]
1634 except KeyError:
1634 except KeyError:
1635 pass
1635 pass
1636 return '', ()
1636 return '', ()
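# --- Editor's sketch (not part of the diff): backward latex completion.
# Assuming 'Ξ±' has an entry in ``reverse_latex_symbol`` mapping it to '\\alpha':
#
#     back_latex_name_matches("x = \\Ξ±")  # -> ('\\Ξ±', ['\\alpha'])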
1637
1637
1638
1638
1639 def _formatparamchildren(parameter) -> str:
1639 def _formatparamchildren(parameter) -> str:
1640 """
1640 """
1641 Get parameter name and value from Jedi Private API
1641 Get parameter name and value from Jedi Private API
1642
1642
1643 Jedi does not expose a simple way to get `param=value` from its API.
1643 Jedi does not expose a simple way to get `param=value` from its API.
1644
1644
1645 Parameters
1645 Parameters
1646 ----------
1646 ----------
1647 parameter
1647 parameter
1648 Jedi's function `Param`
1648 Jedi's function `Param`
1649
1649
1650 Returns
1650 Returns
1651 -------
1651 -------
1652 A string like 'a', 'b=1', '*args', '**kwargs'
1652 A string like 'a', 'b=1', '*args', '**kwargs'
1653
1653
1654 """
1654 """
1655 description = parameter.description
1655 description = parameter.description
1656 if not description.startswith('param '):
1656 if not description.startswith('param '):
1657 raise ValueError('Jedi function parameter description has changed format. '
1657 raise ValueError('Jedi function parameter description has changed format. '
1658 'Expected "param ...", found %r.' % description)
1658 'Expected "param ...", found %r.' % description)
1659 return description[6:]
1659 return description[6:]
1660
1660
1661 def _make_signature(completion)-> str:
1661 def _make_signature(completion)-> str:
1662 """
1662 """
1663 Make the signature from a jedi completion
1663 Make the signature from a jedi completion
1664
1664
1665 Parameters
1665 Parameters
1666 ----------
1666 ----------
1667 completion : jedi.Completion
1667 completion : jedi.Completion
1668 object for which to build the signature; it may not complete to a function type
1668 object for which to build the signature; it may not complete to a function type
1669
1669
1670 Returns
1670 Returns
1671 -------
1671 -------
1672 a string consisting of the function signature, with the parentheses but
1672 a string consisting of the function signature, with the parentheses but
1673 without the function name. Example:
1673 without the function name. Example:
1674 `(a, *args, b=1, **kwargs)`
1674 `(a, *args, b=1, **kwargs)`
1675
1675
1676 """
1676 """
1677
1677
1678 # it looks like this might work on jedi 0.17
1678 # it looks like this might work on jedi 0.17
1679 if hasattr(completion, 'get_signatures'):
1679 if hasattr(completion, 'get_signatures'):
1680 signatures = completion.get_signatures()
1680 signatures = completion.get_signatures()
1681 if not signatures:
1681 if not signatures:
1682 return '(?)'
1682 return '(?)'
1683
1683
1684 c0 = completion.get_signatures()[0]
1684 c0 = completion.get_signatures()[0]
1685 return '('+c0.to_string().split('(', maxsplit=1)[1]
1685 return '('+c0.to_string().split('(', maxsplit=1)[1]
1686
1686
1687 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1687 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1688 for p in signature.defined_names()) if f])
1688 for p in signature.defined_names()) if f])
1689
1689
1690
1690
1691 _CompleteResult = Dict[str, MatcherResult]
1691 _CompleteResult = Dict[str, MatcherResult]
1692
1692
1693
1693
1694 DICT_MATCHER_REGEX = re.compile(
1694 DICT_MATCHER_REGEX = re.compile(
1695 r"""(?x)
1695 r"""(?x)
1696 ( # match dict-referring - or any get item object - expression
1696 ( # match dict-referring - or any get item object - expression
1697 .+
1697 .+
1698 )
1698 )
1699 \[ # open bracket
1699 \[ # open bracket
1700 \s* # and optional whitespace
1700 \s* # and optional whitespace
1701 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1701 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1702 # and slices
1702 # and slices
1703 ((?:(?:
1703 ((?:(?:
1704 (?: # closed string
1704 (?: # closed string
1705 [uUbB]? # string prefix (r not handled)
1705 [uUbB]? # string prefix (r not handled)
1706 (?:
1706 (?:
1707 '(?:[^']|(?<!\\)\\')*'
1707 '(?:[^']|(?<!\\)\\')*'
1708 |
1708 |
1709 "(?:[^"]|(?<!\\)\\")*"
1709 "(?:[^"]|(?<!\\)\\")*"
1710 )
1710 )
1711 )
1711 )
1712 |
1712 |
1713 # capture integers and slices
1713 # capture integers and slices
1714 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1714 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1715 |
1715 |
1716 # integer in bin/hex/oct notation
1716 # integer in bin/hex/oct notation
1717 0[bBxXoO]_?(?:\w|\d)+
1717 0[bBxXoO]_?(?:\w|\d)+
1718 )
1718 )
1719 \s*,\s*
1719 \s*,\s*
1720 )*)
1720 )*)
1721 ((?:
1721 ((?:
1722 (?: # unclosed string
1722 (?: # unclosed string
1723 [uUbB]? # string prefix (r not handled)
1723 [uUbB]? # string prefix (r not handled)
1724 (?:
1724 (?:
1725 '(?:[^']|(?<!\\)\\')*
1725 '(?:[^']|(?<!\\)\\')*
1726 |
1726 |
1727 "(?:[^"]|(?<!\\)\\")*
1727 "(?:[^"]|(?<!\\)\\")*
1728 )
1728 )
1729 )
1729 )
1730 |
1730 |
1731 # unfinished integer
1731 # unfinished integer
1732 (?:[-+]?\d+)
1732 (?:[-+]?\d+)
1733 |
1733 |
1734 # integer in bin/hex/oct notation
1734 # integer in bin/hex/oct notation
1735 0[bBxXoO]_?(?:\w|\d)+
1735 0[bBxXoO]_?(?:\w|\d)+
1736 )
1736 )
1737 )?
1737 )?
1738 $
1738 $
1739 """
1739 """
1740 )
1740 )
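# --- Editor's sketch (not part of the diff): what the three groups capture.
# For a partially typed multi-key lookup, group 1 is the subscripted expression,
# group 2 the already-closed keys, and group 3 the unclosed fragment:
#
#     DICT_MATCHER_REGEX.match("data['a', 'b").groups()
#     # expected roughly: ("data", "'a', ", "'b")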
1741
1741
1742
1742
1743 def _convert_matcher_v1_result_to_v2(
1743 def _convert_matcher_v1_result_to_v2(
1744 matches: Sequence[str],
1744 matches: Sequence[str],
1745 type: str,
1745 type: str,
1746 fragment: Optional[str] = None,
1746 fragment: Optional[str] = None,
1747 suppress_if_matches: bool = False,
1747 suppress_if_matches: bool = False,
1748 ) -> SimpleMatcherResult:
1748 ) -> SimpleMatcherResult:
1749 """Utility to help with transition"""
1749 """Utility to help with transition"""
1750 result = {
1750 result = {
1751 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1751 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1752 "suppress": (True if matches else False) if suppress_if_matches else False,
1752 "suppress": (True if matches else False) if suppress_if_matches else False,
1753 }
1753 }
1754 if fragment is not None:
1754 if fragment is not None:
1755 result["matched_fragment"] = fragment
1755 result["matched_fragment"] = fragment
1756 return cast(SimpleMatcherResult, result)
1756 return cast(SimpleMatcherResult, result)
1757
1757
1758
1758
1759 class IPCompleter(Completer):
1759 class IPCompleter(Completer):
1760 """Extension of the completer class with IPython-specific features"""
1760 """Extension of the completer class with IPython-specific features"""
1761
1761
1762 @observe('greedy')
1762 @observe('greedy')
1763 def _greedy_changed(self, change):
1763 def _greedy_changed(self, change):
1764 """update the splitter and readline delims when greedy is changed"""
1764 """update the splitter and readline delims when greedy is changed"""
1765 if change["new"]:
1765 if change["new"]:
1766 self.evaluation = "unsafe"
1766 self.evaluation = "unsafe"
1767 self.auto_close_dict_keys = True
1767 self.auto_close_dict_keys = True
1768 self.splitter.delims = GREEDY_DELIMS
1768 self.splitter.delims = GREEDY_DELIMS
1769 else:
1769 else:
1770 self.evaluation = "limited"
1770 self.evaluation = "limited"
1771 self.auto_close_dict_keys = False
1771 self.auto_close_dict_keys = False
1772 self.splitter.delims = DELIMS
1772 self.splitter.delims = DELIMS
1773
1773
1774 dict_keys_only = Bool(
1774 dict_keys_only = Bool(
1775 False,
1775 False,
1776 help="""
1776 help="""
1777 Whether to show dict key matches only.
1777 Whether to show dict key matches only.
1778
1778
1779 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1779 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1780 """,
1780 """,
1781 )
1781 )
1782
1782
1783 suppress_competing_matchers = UnionTrait(
1783 suppress_competing_matchers = UnionTrait(
1784 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1784 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1785 default_value=None,
1785 default_value=None,
1786 help="""
1786 help="""
1787 Whether to suppress completions from other *Matchers*.
1787 Whether to suppress completions from other *Matchers*.
1788
1788
1789 When set to ``None`` (default) the matchers will attempt to auto-detect
1789 When set to ``None`` (default) the matchers will attempt to auto-detect
1790 whether suppression of other matchers is desirable. For example, at
1790 whether suppression of other matchers is desirable. For example, at
1791 the beginning of a line followed by `%` we expect a magic completion
1791 the beginning of a line followed by `%` we expect a magic completion
1792 to be the only applicable option, and after ``my_dict['`` we usually
1792 to be the only applicable option, and after ``my_dict['`` we usually
1793 expect a completion with an existing dictionary key.
1793 expect a completion with an existing dictionary key.
1794
1794
1795 If you want to disable this heuristic and see completions from all matchers,
1795 If you want to disable this heuristic and see completions from all matchers,
1796 set ``IPCompleter.suppress_competing_matchers = False``.
1796 set ``IPCompleter.suppress_competing_matchers = False``.
1797 To disable the heuristic for specific matchers provide a dictionary mapping:
1797 To disable the heuristic for specific matchers provide a dictionary mapping:
1798 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1798 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1799
1799
1800 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1800 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1801 completions to the set of matchers with the highest priority;
1801 completions to the set of matchers with the highest priority;
1802 this is equivalent to ``IPCompleter.merge_completions = False`` and
1802 this is equivalent to ``IPCompleter.merge_completions = False`` and
1803 can be beneficial for performance, but will sometimes omit relevant
1803 can be beneficial for performance, but will sometimes omit relevant
1804 candidates from matchers further down the priority list.
1804 candidates from matchers further down the priority list.
1805 """,
1805 """,
1806 ).tag(config=True)
1806 ).tag(config=True)
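# A configuration sketch for the trait above, e.g. in a profile's
# ipython_config.py (the file name and ``get_config`` call are the standard
# traitlets convention, not something introduced by this change):
c = get_config()  # noqa: F821 - injected by the traitlets config loader
# see completions from all matchers:
c.IPCompleter.suppress_competing_matchers = False
# or disable the heuristic only for the dict-key matcher:
# c.IPCompleter.suppress_competing_matchers = {"IPCompleter.dict_key_matcher": False}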
1807
1807
1808 merge_completions = Bool(
1808 merge_completions = Bool(
1809 True,
1809 True,
1810 help="""Whether to merge completion results into a single list
1810 help="""Whether to merge completion results into a single list
1811
1811
1812 If False, only the completion results from the first non-empty
1812 If False, only the completion results from the first non-empty
1813 completer will be returned.
1813 completer will be returned.
1814
1814
1815 As of version 8.6.0, setting the value to ``False`` is an alias for:
1815 As of version 8.6.0, setting the value to ``False`` is an alias for:
1816 ``IPCompleter.suppress_competing_matchers = True``.
1816 ``IPCompleter.suppress_competing_matchers = True``.
1817 """,
1817 """,
1818 ).tag(config=True)
1818 ).tag(config=True)
1819
1819
1820 disable_matchers = ListTrait(
1820 disable_matchers = ListTrait(
1821 Unicode(),
1821 Unicode(),
1822 help="""List of matchers to disable.
1822 help="""List of matchers to disable.
1823
1823
1824 The list should contain matcher identifiers (see :any:`completion_matcher`).
1824 The list should contain matcher identifiers (see :any:`completion_matcher`).
1825 """,
1825 """,
1826 ).tag(config=True)
1826 ).tag(config=True)
1827
1827
1828 omit__names = Enum(
1828 omit__names = Enum(
1829 (0, 1, 2),
1829 (0, 1, 2),
1830 default_value=2,
1830 default_value=2,
1831 help="""Instruct the completer to omit private method names
1831 help="""Instruct the completer to omit private method names
1832
1832
1833 Specifically, when completing on ``object.<tab>``.
1833 Specifically, when completing on ``object.<tab>``.
1834
1834
1835 When 2 [default]: all names that start with '_' will be excluded.
1835 When 2 [default]: all names that start with '_' will be excluded.
1836
1836
1837 When 1: all 'magic' names (``__foo__``) will be excluded.
1837 When 1: all 'magic' names (``__foo__``) will be excluded.
1838
1838
1839 When 0: nothing will be excluded.
1839 When 0: nothing will be excluded.
1840 """
1840 """
1841 ).tag(config=True)
1841 ).tag(config=True)
1842 limit_to__all__ = Bool(False,
1842 limit_to__all__ = Bool(False,
1843 help="""
1843 help="""
1844 DEPRECATED as of version 5.0.
1844 DEPRECATED as of version 5.0.
1845
1845
1846 Instruct the completer to use __all__ for the completion
1846 Instruct the completer to use __all__ for the completion
1847
1847
1848 Specifically, when completing on ``object.<tab>``.
1848 Specifically, when completing on ``object.<tab>``.
1849
1849
1850 When True: only those names in obj.__all__ will be included.
1850 When True: only those names in obj.__all__ will be included.
1851
1851
1852 When False [default]: the __all__ attribute is ignored
1852 When False [default]: the __all__ attribute is ignored
1853 """,
1853 """,
1854 ).tag(config=True)
1854 ).tag(config=True)
1855
1855
1856 profile_completions = Bool(
1856 profile_completions = Bool(
1857 default_value=False,
1857 default_value=False,
1858 help="If True, emit profiling data for completion subsystem using cProfile."
1858 help="If True, emit profiling data for completion subsystem using cProfile."
1859 ).tag(config=True)
1859 ).tag(config=True)
1860
1860
1861 profiler_output_dir = Unicode(
1861 profiler_output_dir = Unicode(
1862 default_value=".completion_profiles",
1862 default_value=".completion_profiles",
1863 help="Template for path at which to output profile data for completions."
1863 help="Template for path at which to output profile data for completions."
1864 ).tag(config=True)
1864 ).tag(config=True)
1865
1865
1866 @observe('limit_to__all__')
1866 @observe('limit_to__all__')
1867 def _limit_to_all_changed(self, change):
1867 def _limit_to_all_changed(self, change):
1868 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1868 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1869 'value has been deprecated since IPython 5.0, will be made to have '
1869 'value has been deprecated since IPython 5.0, will be made to have '
1870 'no effect and then removed in a future version of IPython.',
1870 'no effect and then removed in a future version of IPython.',
1871 UserWarning)
1871 UserWarning)
1872
1872
1873 def __init__(
1873 def __init__(
1874 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1874 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1875 ):
1875 ):
1876 """IPCompleter() -> completer
1876 """IPCompleter() -> completer
1877
1877
1878 Return a completer object.
1878 Return a completer object.
1879
1879
1880 Parameters
1880 Parameters
1881 ----------
1881 ----------
1882 shell
1882 shell
1883 a pointer to the ipython shell itself. This is needed
1883 a pointer to the ipython shell itself. This is needed
1884 because this completer knows about magic functions, and those can
1884 because this completer knows about magic functions, and those can
1885 only be accessed via the ipython instance.
1885 only be accessed via the ipython instance.
1886 namespace : dict, optional
1886 namespace : dict, optional
1887 an optional dict where completions are performed.
1887 an optional dict where completions are performed.
1888 global_namespace : dict, optional
1888 global_namespace : dict, optional
1889 secondary optional dict for completions, to
1889 secondary optional dict for completions, to
1890 handle cases (such as IPython embedded inside functions) where
1890 handle cases (such as IPython embedded inside functions) where
1891 both Python scopes are visible.
1891 both Python scopes are visible.
1892 config : Config
1892 config : Config
1893 traitlets Config object
1893 traitlets Config object
1894 **kwargs
1894 **kwargs
1895 passed to super class unmodified.
1895 passed to super class unmodified.
1896 """
1896 """
1897
1897
1898 self.magic_escape = ESC_MAGIC
1898 self.magic_escape = ESC_MAGIC
1899 self.splitter = CompletionSplitter()
1899 self.splitter = CompletionSplitter()
1900
1900
1901 # _greedy_changed() depends on splitter and readline being defined:
1901 # _greedy_changed() depends on splitter and readline being defined:
1902 super().__init__(
1902 super().__init__(
1903 namespace=namespace,
1903 namespace=namespace,
1904 global_namespace=global_namespace,
1904 global_namespace=global_namespace,
1905 config=config,
1905 config=config,
1906 **kwargs,
1906 **kwargs,
1907 )
1907 )
1908
1908
1909 # List where completion matches will be stored
1909 # List where completion matches will be stored
1910 self.matches = []
1910 self.matches = []
1911 self.shell = shell
1911 self.shell = shell
1912 # Regexp to split filenames with spaces in them
1912 # Regexp to split filenames with spaces in them
1913 self.space_name_re = re.compile(r'([^\\] )')
1913 self.space_name_re = re.compile(r'([^\\] )')
1914 # Hold a local ref. to glob.glob for speed
1914 # Hold a local ref. to glob.glob for speed
1915 self.glob = glob.glob
1915 self.glob = glob.glob
1916
1916
1917 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1917 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1918 # buffers, to avoid completion problems.
1918 # buffers, to avoid completion problems.
1919 term = os.environ.get('TERM','xterm')
1919 term = os.environ.get('TERM','xterm')
1920 self.dumb_terminal = term in ['dumb','emacs']
1920 self.dumb_terminal = term in ['dumb','emacs']
1921
1921
1922 # Special handling of backslashes needed in win32 platforms
1922 # Special handling of backslashes needed in win32 platforms
1923 if sys.platform == "win32":
1923 if sys.platform == "win32":
1924 self.clean_glob = self._clean_glob_win32
1924 self.clean_glob = self._clean_glob_win32
1925 else:
1925 else:
1926 self.clean_glob = self._clean_glob
1926 self.clean_glob = self._clean_glob
1927
1927
1928 #regexp to parse docstring for function signature
1928 #regexp to parse docstring for function signature
1929 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1929 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1930 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1930 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1931 #use this if positional argument name is also needed
1931 #use this if positional argument name is also needed
1932 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1932 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1933
1933
1934 self.magic_arg_matchers = [
1934 self.magic_arg_matchers = [
1935 self.magic_config_matcher,
1935 self.magic_config_matcher,
1936 self.magic_color_matcher,
1936 self.magic_color_matcher,
1937 ]
1937 ]
1938
1938
1939 # This is set externally by InteractiveShell
1939 # This is set externally by InteractiveShell
1940 self.custom_completers = None
1940 self.custom_completers = None
1941
1941
1942 # This is a list of names of unicode characters that can be completed
1942 # This is a list of names of unicode characters that can be completed
1943 # into their corresponding unicode value. The list is large, so we
1943 # into their corresponding unicode value. The list is large, so we
1944 # lazily initialize it on first use. Consuming code should access this
1944 # lazily initialize it on first use. Consuming code should access this
1945 # attribute through the `@unicode_names` property.
1945 # attribute through the `@unicode_names` property.
1946 self._unicode_names = None
1946 self._unicode_names = None
1947
1947
1948 self._backslash_combining_matchers = [
1948 self._backslash_combining_matchers = [
1949 self.latex_name_matcher,
1949 self.latex_name_matcher,
1950 self.unicode_name_matcher,
1950 self.unicode_name_matcher,
1951 back_latex_name_matcher,
1951 back_latex_name_matcher,
1952 back_unicode_name_matcher,
1952 back_unicode_name_matcher,
1953 self.fwd_unicode_matcher,
1953 self.fwd_unicode_matcher,
1954 ]
1954 ]
1955
1955
1956 if not self.backslash_combining_completions:
1956 if not self.backslash_combining_completions:
1957 for matcher in self._backslash_combining_matchers:
1957 for matcher in self._backslash_combining_matchers:
1958 self.disable_matchers.append(_get_matcher_id(matcher))
1958 self.disable_matchers.append(_get_matcher_id(matcher))
1959
1959
1960 if not self.merge_completions:
1960 if not self.merge_completions:
1961 self.suppress_competing_matchers = True
1961 self.suppress_competing_matchers = True
1962
1962
1963 @property
1963 @property
1964 def matchers(self) -> List[Matcher]:
1964 def matchers(self) -> List[Matcher]:
1965 """All active matcher routines for completion"""
1965 """All active matcher routines for completion"""
1966 if self.dict_keys_only:
1966 if self.dict_keys_only:
1967 return [self.dict_key_matcher]
1967 return [self.dict_key_matcher]
1968
1968
1969 if self.use_jedi:
1969 if self.use_jedi:
1970 return [
1970 return [
1971 *self.custom_matchers,
1971 *self.custom_matchers,
1972 *self._backslash_combining_matchers,
1972 *self._backslash_combining_matchers,
1973 *self.magic_arg_matchers,
1973 *self.magic_arg_matchers,
1974 self.custom_completer_matcher,
1974 self.custom_completer_matcher,
1975 self.magic_matcher,
1975 self.magic_matcher,
1976 self._jedi_matcher,
1976 self._jedi_matcher,
1977 self.dict_key_matcher,
1977 self.dict_key_matcher,
1978 self.file_matcher,
1978 self.file_matcher,
1979 ]
1979 ]
1980 else:
1980 else:
1981 return [
1981 return [
1982 *self.custom_matchers,
1982 *self.custom_matchers,
1983 *self._backslash_combining_matchers,
1983 *self._backslash_combining_matchers,
1984 *self.magic_arg_matchers,
1984 *self.magic_arg_matchers,
1985 self.custom_completer_matcher,
1985 self.custom_completer_matcher,
1986 self.dict_key_matcher,
1986 self.dict_key_matcher,
1987 self.magic_matcher,
1987 self.magic_matcher,
1988 self.python_matcher,
1988 self.python_matcher,
1989 self.file_matcher,
1989 self.file_matcher,
1990 self.python_func_kw_matcher,
1990 self.python_func_kw_matcher,
1991 ]
1991 ]
1992
1992
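# A hedged sketch of how a third-party matcher ends up in the
# ``custom_matchers`` list consulted first by the ``matchers`` property
# above. The matcher body is invented; registering it via
# ``get_ipython().Completer.custom_matchers`` follows the documented
# pattern, but verify against the docs for your IPython version.
from IPython.core.completer import SimpleCompletion, context_matcher

@context_matcher()
def shout_matcher(context):
    """Toy matcher offering an upper-cased version of the current token."""
    token = context.token
    completions = [SimpleCompletion(text=token.upper())] if token else []
    return {"completions": completions, "suppress": False}

# get_ipython().Completer.custom_matchers.append(shout_matcher)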
1993 def all_completions(self, text:str) -> List[str]:
1993 def all_completions(self, text:str) -> List[str]:
1994 """
1994 """
1995 Wrapper around the completion methods for the benefit of emacs.
1995 Wrapper around the completion methods for the benefit of emacs.
1996 """
1996 """
1997 prefix = text.rpartition('.')[0]
1997 prefix = text.rpartition('.')[0]
1998 with provisionalcompleter():
1998 with provisionalcompleter():
1999 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1999 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
2000 for c in self.completions(text, len(text))]
2000 for c in self.completions(text, len(text))]
2001
2001
2002 return self.complete(text)[1]
2002 return self.complete(text)[1]
2003
2003
2004 def _clean_glob(self, text:str):
2004 def _clean_glob(self, text:str):
2005 return self.glob("%s*" % text)
2005 return self.glob("%s*" % text)
2006
2006
2007 def _clean_glob_win32(self, text:str):
2007 def _clean_glob_win32(self, text:str):
2008 return [f.replace("\\","/")
2008 return [f.replace("\\","/")
2009 for f in self.glob("%s*" % text)]
2009 for f in self.glob("%s*" % text)]
2010
2010
2011 @context_matcher()
2011 @context_matcher()
2012 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2012 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2013 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2013 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2014 matches = self.file_matches(context.token)
2014 matches = self.file_matches(context.token)
2015 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2015 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2016 # starts with `/home/`, `C:\`, etc)
2016 # starts with `/home/`, `C:\`, etc)
2017 return _convert_matcher_v1_result_to_v2(matches, type="path")
2017 return _convert_matcher_v1_result_to_v2(matches, type="path")
2018
2018
2019 def file_matches(self, text: str) -> List[str]:
2019 def file_matches(self, text: str) -> List[str]:
2020 """Match filenames, expanding ~USER type strings.
2020 """Match filenames, expanding ~USER type strings.
2021
2021
2022 Most of the seemingly convoluted logic in this completer is an
2022 Most of the seemingly convoluted logic in this completer is an
2023 attempt to handle filenames with spaces in them. And yet it's not
2023 attempt to handle filenames with spaces in them. And yet it's not
2024 quite perfect, because Python's readline doesn't expose all of the
2024 quite perfect, because Python's readline doesn't expose all of the
2025 GNU readline details needed for this to be done correctly.
2025 GNU readline details needed for this to be done correctly.
2026
2026
2027 For a filename with a space in it, the printed completions will be
2027 For a filename with a space in it, the printed completions will be
2028 only the parts after what's already been typed (instead of the
2028 only the parts after what's already been typed (instead of the
2029 full completions, as is normally done). I don't think with the
2029 full completions, as is normally done). I don't think with the
2030 current (as of Python 2.3) Python readline it's possible to do
2030 current (as of Python 2.3) Python readline it's possible to do
2031 better.
2031 better.
2032
2032
2033 .. deprecated:: 8.6
2033 .. deprecated:: 8.6
2034 You can use :meth:`file_matcher` instead.
2034 You can use :meth:`file_matcher` instead.
2035 """
2035 """
2036
2036
2037 # chars that require escaping with backslash - i.e. chars
2037 # chars that require escaping with backslash - i.e. chars
2038 # that readline treats incorrectly as delimiters, but we
2038 # that readline treats incorrectly as delimiters, but we
2039 # don't want to treat as delimiters in filename matching
2039 # don't want to treat as delimiters in filename matching
2040 # when escaped with backslash
2040 # when escaped with backslash
2041 if text.startswith('!'):
2041 if text.startswith('!'):
2042 text = text[1:]
2042 text = text[1:]
2043 text_prefix = u'!'
2043 text_prefix = u'!'
2044 else:
2044 else:
2045 text_prefix = u''
2045 text_prefix = u''
2046
2046
2047 text_until_cursor = self.text_until_cursor
2047 text_until_cursor = self.text_until_cursor
2048 # track strings with open quotes
2048 # track strings with open quotes
2049 open_quotes = has_open_quotes(text_until_cursor)
2049 open_quotes = has_open_quotes(text_until_cursor)
2050
2050
2051 if '(' in text_until_cursor or '[' in text_until_cursor:
2051 if '(' in text_until_cursor or '[' in text_until_cursor:
2052 lsplit = text
2052 lsplit = text
2053 else:
2053 else:
2054 try:
2054 try:
2055 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2055 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2056 lsplit = arg_split(text_until_cursor)[-1]
2056 lsplit = arg_split(text_until_cursor)[-1]
2057 except ValueError:
2057 except ValueError:
2058 # typically an unmatched ", or backslash without escaped char.
2058 # typically an unmatched ", or backslash without escaped char.
2059 if open_quotes:
2059 if open_quotes:
2060 lsplit = text_until_cursor.split(open_quotes)[-1]
2060 lsplit = text_until_cursor.split(open_quotes)[-1]
2061 else:
2061 else:
2062 return []
2062 return []
2063 except IndexError:
2063 except IndexError:
2064 # tab pressed on empty line
2064 # tab pressed on empty line
2065 lsplit = ""
2065 lsplit = ""
2066
2066
2067 if not open_quotes and lsplit != protect_filename(lsplit):
2067 if not open_quotes and lsplit != protect_filename(lsplit):
2068 # if protectables are found, do matching on the whole escaped name
2068 # if protectables are found, do matching on the whole escaped name
2069 has_protectables = True
2069 has_protectables = True
2070 text0,text = text,lsplit
2070 text0,text = text,lsplit
2071 else:
2071 else:
2072 has_protectables = False
2072 has_protectables = False
2073 text = os.path.expanduser(text)
2073 text = os.path.expanduser(text)
2074
2074
2075 if text == "":
2075 if text == "":
2076 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2076 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2077
2077
2078 # Compute the matches from the filesystem
2078 # Compute the matches from the filesystem
2079 if sys.platform == 'win32':
2079 if sys.platform == 'win32':
2080 m0 = self.clean_glob(text)
2080 m0 = self.clean_glob(text)
2081 else:
2081 else:
2082 m0 = self.clean_glob(text.replace('\\', ''))
2082 m0 = self.clean_glob(text.replace('\\', ''))
2083
2083
2084 if has_protectables:
2084 if has_protectables:
2085 # If we had protectables, we need to revert our changes to the
2085 # If we had protectables, we need to revert our changes to the
2086 # beginning of filename so that we don't double-write the part
2086 # beginning of filename so that we don't double-write the part
2087 # of the filename we have so far
2087 # of the filename we have so far
2088 len_lsplit = len(lsplit)
2088 len_lsplit = len(lsplit)
2089 matches = [text_prefix + text0 +
2089 matches = [text_prefix + text0 +
2090 protect_filename(f[len_lsplit:]) for f in m0]
2090 protect_filename(f[len_lsplit:]) for f in m0]
2091 else:
2091 else:
2092 if open_quotes:
2092 if open_quotes:
2093 # if we have a string with an open quote, we don't need to
2093 # if we have a string with an open quote, we don't need to
2094 # protect the names beyond the quote (and we _shouldn't_, as
2094 # protect the names beyond the quote (and we _shouldn't_, as
2095 # it would cause bugs when the filesystem call is made).
2095 # it would cause bugs when the filesystem call is made).
2096 matches = m0 if sys.platform == "win32" else\
2096 matches = m0 if sys.platform == "win32" else\
2097 [protect_filename(f, open_quotes) for f in m0]
2097 [protect_filename(f, open_quotes) for f in m0]
2098 else:
2098 else:
2099 matches = [text_prefix +
2099 matches = [text_prefix +
2100 protect_filename(f) for f in m0]
2100 protect_filename(f) for f in m0]
2101
2101
2102 # Mark directories in input list by appending '/' to their names.
2102 # Mark directories in input list by appending '/' to their names.
2103 return [x+'/' if os.path.isdir(x) else x for x in matches]
2103 return [x+'/' if os.path.isdir(x) else x for x in matches]
2104
2104
2105 @context_matcher()
2105 @context_matcher()
2106 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2106 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2107 """Match magics."""
2107 """Match magics."""
2108 text = context.token
2108 text = context.token
2109 matches = self.magic_matches(text)
2109 matches = self.magic_matches(text)
2110 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2110 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2111 is_magic_prefix = len(text) > 0 and text[0] == "%"
2111 is_magic_prefix = len(text) > 0 and text[0] == "%"
2112 result["suppress"] = is_magic_prefix and bool(result["completions"])
2112 result["suppress"] = is_magic_prefix and bool(result["completions"])
2113 return result
2113 return result
2114
2114
2115 def magic_matches(self, text: str):
2115 def magic_matches(self, text: str):
2116 """Match magics.
2116 """Match magics.
2117
2117
2118 .. deprecated:: 8.6
2118 .. deprecated:: 8.6
2119 You can use :meth:`magic_matcher` instead.
2119 You can use :meth:`magic_matcher` instead.
2120 """
2120 """
2121 # Get all shell magics now rather than statically, so magics loaded at
2121 # Get all shell magics now rather than statically, so magics loaded at
2122 # runtime show up too.
2122 # runtime show up too.
2123 lsm = self.shell.magics_manager.lsmagic()
2123 lsm = self.shell.magics_manager.lsmagic()
2124 line_magics = lsm['line']
2124 line_magics = lsm['line']
2125 cell_magics = lsm['cell']
2125 cell_magics = lsm['cell']
2126 pre = self.magic_escape
2126 pre = self.magic_escape
2127 pre2 = pre+pre
2127 pre2 = pre+pre
2128
2128
2129 explicit_magic = text.startswith(pre)
2129 explicit_magic = text.startswith(pre)
2130
2130
2131 # Completion logic:
2131 # Completion logic:
2132 # - user gives %%: only do cell magics
2132 # - user gives %%: only do cell magics
2133 # - user gives %: do both line and cell magics
2133 # - user gives %: do both line and cell magics
2134 # - no prefix: do both
2134 # - no prefix: do both
2135 # In other words, line magics are skipped if the user gives %% explicitly
2135 # In other words, line magics are skipped if the user gives %% explicitly
2136 #
2136 #
2137 # We also exclude magics that match any currently visible names:
2137 # We also exclude magics that match any currently visible names:
2138 # https://github.com/ipython/ipython/issues/4877, unless the user has
2138 # https://github.com/ipython/ipython/issues/4877, unless the user has
2139 # typed a %:
2139 # typed a %:
2140 # https://github.com/ipython/ipython/issues/10754
2140 # https://github.com/ipython/ipython/issues/10754
2141 bare_text = text.lstrip(pre)
2141 bare_text = text.lstrip(pre)
2142 global_matches = self.global_matches(bare_text)
2142 global_matches = self.global_matches(bare_text)
2143 if not explicit_magic:
2143 if not explicit_magic:
2144 def matches(magic):
2144 def matches(magic):
2145 """
2145 """
2146 Filter magics, in particular remove magics that match
2146 Filter magics, in particular remove magics that match
2147 a name present in global namespace.
2147 a name present in global namespace.
2148 """
2148 """
2149 return ( magic.startswith(bare_text) and
2149 return ( magic.startswith(bare_text) and
2150 magic not in global_matches )
2150 magic not in global_matches )
2151 else:
2151 else:
2152 def matches(magic):
2152 def matches(magic):
2153 return magic.startswith(bare_text)
2153 return magic.startswith(bare_text)
2154
2154
2155 comp = [ pre2+m for m in cell_magics if matches(m)]
2155 comp = [ pre2+m for m in cell_magics if matches(m)]
2156 if not text.startswith(pre2):
2156 if not text.startswith(pre2):
2157 comp += [ pre+m for m in line_magics if matches(m)]
2157 comp += [ pre+m for m in line_magics if matches(m)]
2158
2158
2159 return comp
2159 return comp
2160
2160
2161 @context_matcher()
2161 @context_matcher()
2162 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2162 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2163 """Match class names and attributes for %config magic."""
2163 """Match class names and attributes for %config magic."""
2164 # NOTE: uses `line_buffer` equivalent for compatibility
2164 # NOTE: uses `line_buffer` equivalent for compatibility
2165 matches = self.magic_config_matches(context.line_with_cursor)
2165 matches = self.magic_config_matches(context.line_with_cursor)
2166 return _convert_matcher_v1_result_to_v2(matches, type="param")
2166 return _convert_matcher_v1_result_to_v2(matches, type="param")
2167
2167
2168 def magic_config_matches(self, text: str) -> List[str]:
2168 def magic_config_matches(self, text: str) -> List[str]:
2169 """Match class names and attributes for %config magic.
2169 """Match class names and attributes for %config magic.
2170
2170
2171 .. deprecated:: 8.6
2171 .. deprecated:: 8.6
2172 You can use :meth:`magic_config_matcher` instead.
2172 You can use :meth:`magic_config_matcher` instead.
2173 """
2173 """
2174 texts = text.strip().split()
2174 texts = text.strip().split()
2175
2175
2176 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2176 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2177 # get all configuration classes
2177 # get all configuration classes
2178 classes = sorted(set([ c for c in self.shell.configurables
2178 classes = sorted(set([ c for c in self.shell.configurables
2179 if c.__class__.class_traits(config=True)
2179 if c.__class__.class_traits(config=True)
2180 ]), key=lambda x: x.__class__.__name__)
2180 ]), key=lambda x: x.__class__.__name__)
2181 classnames = [ c.__class__.__name__ for c in classes ]
2181 classnames = [ c.__class__.__name__ for c in classes ]
2182
2182
2183 # return all classnames if config or %config is given
2183 # return all classnames if config or %config is given
2184 if len(texts) == 1:
2184 if len(texts) == 1:
2185 return classnames
2185 return classnames
2186
2186
2187 # match classname
2187 # match classname
2188 classname_texts = texts[1].split('.')
2188 classname_texts = texts[1].split('.')
2189 classname = classname_texts[0]
2189 classname = classname_texts[0]
2190 classname_matches = [ c for c in classnames
2190 classname_matches = [ c for c in classnames
2191 if c.startswith(classname) ]
2191 if c.startswith(classname) ]
2192
2192
2193 # return matched classes or the matched class with attributes
2193 # return matched classes or the matched class with attributes
2194 if texts[1].find('.') < 0:
2194 if texts[1].find('.') < 0:
2195 return classname_matches
2195 return classname_matches
2196 elif len(classname_matches) == 1 and \
2196 elif len(classname_matches) == 1 and \
2197 classname_matches[0] == classname:
2197 classname_matches[0] == classname:
2198 cls = classes[classnames.index(classname)].__class__
2198 cls = classes[classnames.index(classname)].__class__
2199 help = cls.class_get_help()
2199 help = cls.class_get_help()
2200 # strip leading '--' from cl-args:
2200 # strip leading '--' from cl-args:
2201 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2201 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2202 return [ attr.split('=')[0]
2202 return [ attr.split('=')[0]
2203 for attr in help.strip().splitlines()
2203 for attr in help.strip().splitlines()
2204 if attr.startswith(texts[1]) ]
2204 if attr.startswith(texts[1]) ]
2205 return []
2205 return []
2206
2206
2207 @context_matcher()
2207 @context_matcher()
2208 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2208 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2209 """Match color schemes for %colors magic."""
2209 """Match color schemes for %colors magic."""
2210 # NOTE: uses `line_buffer` equivalent for compatibility
2210 # NOTE: uses `line_buffer` equivalent for compatibility
2211 matches = self.magic_color_matches(context.line_with_cursor)
2211 matches = self.magic_color_matches(context.line_with_cursor)
2212 return _convert_matcher_v1_result_to_v2(matches, type="param")
2212 return _convert_matcher_v1_result_to_v2(matches, type="param")
2213
2213
2214 def magic_color_matches(self, text: str) -> List[str]:
2214 def magic_color_matches(self, text: str) -> List[str]:
2215 """Match color schemes for %colors magic.
2215 """Match color schemes for %colors magic.
2216
2216
2217 .. deprecated:: 8.6
2217 .. deprecated:: 8.6
2218 You can use :meth:`magic_color_matcher` instead.
2218 You can use :meth:`magic_color_matcher` instead.
2219 """
2219 """
2220 texts = text.split()
2220 texts = text.split()
2221 if text.endswith(' '):
2221 if text.endswith(' '):
2222 # .split() strips off the trailing whitespace. Add '' back
2222 # .split() strips off the trailing whitespace. Add '' back
2223 # so that: '%colors ' -> ['%colors', '']
2223 # so that: '%colors ' -> ['%colors', '']
2224 texts.append('')
2224 texts.append('')
2225
2225
2226 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2226 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2227 prefix = texts[1]
2227 prefix = texts[1]
2228 return [ color for color in InspectColors.keys()
2228 return [ color for color in InspectColors.keys()
2229 if color.startswith(prefix) ]
2229 if color.startswith(prefix) ]
2230 return []
2230 return []
2231
2231
2232 @context_matcher(identifier="IPCompleter.jedi_matcher")
2232 @context_matcher(identifier="IPCompleter.jedi_matcher")
2233 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2233 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2234 matches = self._jedi_matches(
2234 matches = self._jedi_matches(
2235 cursor_column=context.cursor_position,
2235 cursor_column=context.cursor_position,
2236 cursor_line=context.cursor_line,
2236 cursor_line=context.cursor_line,
2237 text=context.full_text,
2237 text=context.full_text,
2238 )
2238 )
2239 return {
2239 return {
2240 "completions": matches,
2240 "completions": matches,
2241 # static analysis should not suppress other matchers
2241 # static analysis should not suppress other matchers
2242 "suppress": False,
2242 "suppress": False,
2243 }
2243 }
2244
2244
2245 def _jedi_matches(
2245 def _jedi_matches(
2246 self, cursor_column: int, cursor_line: int, text: str
2246 self, cursor_column: int, cursor_line: int, text: str
2247 ) -> Iterator[_JediCompletionLike]:
2247 ) -> Iterator[_JediCompletionLike]:
2248 """
2248 """
2249 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
2249 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
2250 cursor position.
2250 cursor position.
2251
2251
2252 Parameters
2252 Parameters
2253 ----------
2253 ----------
2254 cursor_column : int
2254 cursor_column : int
2255 column position of the cursor in ``text``, 0-indexed.
2255 column position of the cursor in ``text``, 0-indexed.
2256 cursor_line : int
2256 cursor_line : int
2257 line position of the cursor in ``text``, 0-indexed
2257 line position of the cursor in ``text``, 0-indexed
2258 text : str
2258 text : str
2259 text to complete
2259 text to complete
2260
2260
2261 Notes
2261 Notes
2262 -----
2262 -----
2263 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2263 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2264 object containing a string with the Jedi debug information attached.
2264 object containing a string with the Jedi debug information attached.
2265
2265
2266 .. deprecated:: 8.6
2266 .. deprecated:: 8.6
2267 You can use :meth:`_jedi_matcher` instead.
2267 You can use :meth:`_jedi_matcher` instead.
2268 """
2268 """
2269 namespaces = [self.namespace]
2269 namespaces = [self.namespace]
2270 if self.global_namespace is not None:
2270 if self.global_namespace is not None:
2271 namespaces.append(self.global_namespace)
2271 namespaces.append(self.global_namespace)
2272
2272
2273 completion_filter = lambda x:x
2273 completion_filter = lambda x:x
2274 offset = cursor_to_position(text, cursor_line, cursor_column)
2274 offset = cursor_to_position(text, cursor_line, cursor_column)
2275 # filter output if we are completing for object members
2275 # filter output if we are completing for object members
2276 if offset:
2276 if offset:
2277 pre = text[offset-1]
2277 pre = text[offset-1]
2278 if pre == '.':
2278 if pre == '.':
2279 if self.omit__names == 2:
2279 if self.omit__names == 2:
2280 completion_filter = lambda c:not c.name.startswith('_')
2280 completion_filter = lambda c:not c.name.startswith('_')
2281 elif self.omit__names == 1:
2281 elif self.omit__names == 1:
2282 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2282 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2283 elif self.omit__names == 0:
2283 elif self.omit__names == 0:
2284 completion_filter = lambda x:x
2284 completion_filter = lambda x:x
2285 else:
2285 else:
2286 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2286 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2287
2287
2288 interpreter = jedi.Interpreter(text[:offset], namespaces)
2288 interpreter = jedi.Interpreter(text[:offset], namespaces)
2289 try_jedi = True
2289 try_jedi = True
2290
2290
2291 try:
2291 try:
2292 # find the first token in the current tree -- if it is a ' or " then we are in a string
2292 # find the first token in the current tree -- if it is a ' or " then we are in a string
2293 completing_string = False
2293 completing_string = False
2294 try:
2294 try:
2295 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2295 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2296 except StopIteration:
2296 except StopIteration:
2297 pass
2297 pass
2298 else:
2298 else:
2299 # note the value may be ', ", or it may also be ''' or """, or
2299 # note the value may be ', ", or it may also be ''' or """, or
2300 # in some cases, """what/you/typed..., but all of these are
2300 # in some cases, """what/you/typed..., but all of these are
2301 # strings.
2301 # strings.
2302 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2302 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2303
2303
2304 # if we are in a string jedi is likely not the right candidate for
2304 # if we are in a string jedi is likely not the right candidate for
2305 # now. Skip it.
2305 # now. Skip it.
2306 try_jedi = not completing_string
2306 try_jedi = not completing_string
2307 except Exception as e:
2307 except Exception as e:
2308 # many things can go wrong; we are using a private API, just don't crash.
2308 # many things can go wrong; we are using a private API, just don't crash.
2309 if self.debug:
2309 if self.debug:
2310 print("Error detecting if completing a non-finished string :", e, '|')
2310 print("Error detecting if completing a non-finished string :", e, '|')
2311
2311
2312 if not try_jedi:
2312 if not try_jedi:
2313 return iter([])
2313 return iter([])
2314 try:
2314 try:
2315 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2315 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2316 except Exception as e:
2316 except Exception as e:
2317 if self.debug:
2317 if self.debug:
2318 return iter(
2318 return iter(
2319 [
2319 [
2320 _FakeJediCompletion(
2320 _FakeJediCompletion(
2321 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\ns"""'
2321 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\ns"""'
2322 % (e)
2322 % (e)
2323 )
2323 )
2324 ]
2324 ]
2325 )
2325 )
2326 else:
2326 else:
2327 return iter([])
2327 return iter([])
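# A standalone sketch of the jedi call made above (requires the ``jedi``
# package; the namespace and source text are invented). Note the same
# line/column convention: ``line`` is 1-based, ``column`` is 0-based.
import jedi

ns = {"counter": {"apples": 3}}
source = "counter.ke"
interp = jedi.Interpreter(source, [ns])
print([c.name for c in interp.complete(line=1, column=len(source))])  # e.g. ['keys']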
2328
2328
2329 @context_matcher()
2329 @context_matcher()
2330 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2330 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2331 """Match attributes or global python names"""
2331 """Match attributes or global python names"""
2332 text = context.line_with_cursor
2332 text = context.line_with_cursor
2333 if "." in text:
2333 if "." in text:
2334 try:
2334 try:
2335 matches, fragment = self._attr_matches(text, include_prefix=False)
2335 matches, fragment = self._attr_matches(text, include_prefix=False)
2336 if text.endswith(".") and self.omit__names:
2336 if text.endswith(".") and self.omit__names:
2337 if self.omit__names == 1:
2337 if self.omit__names == 1:
2338 # true if txt is _not_ a __ name, false otherwise:
2338 # true if txt is _not_ a __ name, false otherwise:
2339 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2339 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2340 else:
2340 else:
2341 # true if txt is _not_ a _ name, false otherwise:
2341 # true if txt is _not_ a _ name, false otherwise:
2342 no__name = (
2342 no__name = (
2343 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2343 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2344 is None
2344 is None
2345 )
2345 )
2346 matches = filter(no__name, matches)
2346 matches = filter(no__name, matches)
2347 return _convert_matcher_v1_result_to_v2(
2347 return _convert_matcher_v1_result_to_v2(
2348 matches, type="attribute", fragment=fragment
2348 matches, type="attribute", fragment=fragment
2349 )
2349 )
2350 except NameError:
2350 except NameError:
2351 # catches <undefined attributes>.<tab>
2351 # catches <undefined attributes>.<tab>
2352 matches = []
2352 matches = []
2353 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2353 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2354 else:
2354 else:
2355 matches = self.global_matches(context.token)
2355 matches = self.global_matches(context.token)
2356 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2356 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2357
2357
2358 @completion_matcher(api_version=1)
2358 @completion_matcher(api_version=1)
2359 def python_matches(self, text: str) -> Iterable[str]:
2359 def python_matches(self, text: str) -> Iterable[str]:
2360 """Match attributes or global python names.
2360 """Match attributes or global python names.
2361
2361
2362 .. deprecated:: 8.27
2362 .. deprecated:: 8.27
2363 You can use :meth:`python_matcher` instead."""
2363 You can use :meth:`python_matcher` instead."""
2364 if "." in text:
2364 if "." in text:
2365 try:
2365 try:
2366 matches = self.attr_matches(text)
2366 matches = self.attr_matches(text)
2367 if text.endswith('.') and self.omit__names:
2367 if text.endswith('.') and self.omit__names:
2368 if self.omit__names == 1:
2368 if self.omit__names == 1:
2369 # true if txt is _not_ a __ name, false otherwise:
2369 # true if txt is _not_ a __ name, false otherwise:
2370 no__name = (lambda txt:
2370 no__name = (lambda txt:
2371 re.match(r'.*\.__.*?__',txt) is None)
2371 re.match(r'.*\.__.*?__',txt) is None)
2372 else:
2372 else:
2373 # true if txt is _not_ a _ name, false otherwise:
2373 # true if txt is _not_ a _ name, false otherwise:
2374 no__name = (lambda txt:
2374 no__name = (lambda txt:
2375 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2375 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2376 matches = filter(no__name, matches)
2376 matches = filter(no__name, matches)
2377 except NameError:
2377 except NameError:
2378 # catches <undefined attributes>.<tab>
2378 # catches <undefined attributes>.<tab>
2379 matches = []
2379 matches = []
2380 else:
2380 else:
2381 matches = self.global_matches(text)
2381 matches = self.global_matches(text)
2382 return matches
2382 return matches
2383
2383
2384 def _default_arguments_from_docstring(self, doc):
2384 def _default_arguments_from_docstring(self, doc):
2385 """Parse the first line of docstring for call signature.
2385 """Parse the first line of docstring for call signature.
2386
2386
2387 Docstring should be of the form 'min(iterable[, key=func])\n'.
2387 Docstring should be of the form 'min(iterable[, key=func])\n'.
2388 It can also parse cython docstring of the form
2388 It can also parse cython docstring of the form
2389 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2389 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2390 """
2390 """
2391 if doc is None:
2391 if doc is None:
2392 return []
2392 return []
2393
2393
2394 # care only about the first line
2394 # care only about the first line
2395 line = doc.lstrip().splitlines()[0]
2395 line = doc.lstrip().splitlines()[0]
2396
2396
2397 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2397 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2398 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2398 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2399 sig = self.docstring_sig_re.search(line)
2399 sig = self.docstring_sig_re.search(line)
2400 if sig is None:
2400 if sig is None:
2401 return []
2401 return []
2402 # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
2402 # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
2403 sig = sig.groups()[0].split(',')
2403 sig = sig.groups()[0].split(',')
2404 ret = []
2404 ret = []
2405 for s in sig:
2405 for s in sig:
2406 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2406 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2407 ret += self.docstring_kwd_re.findall(s)
2407 ret += self.docstring_kwd_re.findall(s)
2408 return ret
2408 return ret
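# A short sketch of the two regexes above (copied from docstring_sig_re and
# docstring_kwd_re) applied to the docstring forms mentioned in the method's
# own docstring; the expected results are noted as comments, not asserted.
import re

sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')

for doc in ('min(iterable[, key=func])\n',
            'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'):
    line = doc.lstrip().splitlines()[0]
    sig = sig_re.search(line)
    if sig is not None:
        args = []
        for part in sig.groups()[0].split(','):
            args += kwd_re.findall(part)
        print(line, '->', args)
# roughly: ['key'] for min, ['ncall', 'resume', 'nsplit'] for Minuit.migrad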
2409
2409
2410 def _default_arguments(self, obj):
2410 def _default_arguments(self, obj):
2411 """Return the list of default arguments of obj if it is callable,
2411 """Return the list of default arguments of obj if it is callable,
2412 or empty list otherwise."""
2412 or empty list otherwise."""
2413 call_obj = obj
2413 call_obj = obj
2414 ret = []
2414 ret = []
2415 if inspect.isbuiltin(obj):
2415 if inspect.isbuiltin(obj):
2416 pass
2416 pass
2417 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2417 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2418 if inspect.isclass(obj):
2418 if inspect.isclass(obj):
2419 # for cython embedsignature=True the constructor docstring
2419 # for cython embedsignature=True the constructor docstring
2420 # belongs to the object itself, not to __init__
2420 # belongs to the object itself, not to __init__
2421 ret += self._default_arguments_from_docstring(
2421 ret += self._default_arguments_from_docstring(
2422 getattr(obj, '__doc__', ''))
2422 getattr(obj, '__doc__', ''))
2423 # for classes, check for __init__,__new__
2423 # for classes, check for __init__,__new__
2424 call_obj = (getattr(obj, '__init__', None) or
2424 call_obj = (getattr(obj, '__init__', None) or
2425 getattr(obj, '__new__', None))
2425 getattr(obj, '__new__', None))
2426 # for all others, check if they are __call__able
2426 # for all others, check if they are __call__able
2427 elif hasattr(obj, '__call__'):
2427 elif hasattr(obj, '__call__'):
2428 call_obj = obj.__call__
2428 call_obj = obj.__call__
2429 ret += self._default_arguments_from_docstring(
2429 ret += self._default_arguments_from_docstring(
2430 getattr(call_obj, '__doc__', ''))
2430 getattr(call_obj, '__doc__', ''))
2431
2431
2432 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2432 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2433 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2433 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2434
2434
2435 try:
2435 try:
2436 sig = inspect.signature(obj)
2436 sig = inspect.signature(obj)
2437 ret.extend(k for k, v in sig.parameters.items() if
2437 ret.extend(k for k, v in sig.parameters.items() if
2438 v.kind in _keeps)
2438 v.kind in _keeps)
2439 except ValueError:
2439 except ValueError:
2440 pass
2440 pass
2441
2441
2442 return list(set(ret))
2442 return list(set(ret))
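# A compact, standard-library-only illustration of the inspect.signature
# path above; the sample function is invented.
import inspect

def sample(a, b=1, *args, c=2, **kwargs):
    pass

keeps = (inspect.Parameter.KEYWORD_ONLY, inspect.Parameter.POSITIONAL_OR_KEYWORD)
print([k for k, v in inspect.signature(sample).parameters.items() if v.kind in keeps])
# -> ['a', 'b', 'c']  (*args and **kwargs are filtered out)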
2443
2443
2444 @context_matcher()
2444 @context_matcher()
2445 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2445 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2446 """Match named parameters (kwargs) of the last open function."""
2446 """Match named parameters (kwargs) of the last open function."""
2447 matches = self.python_func_kw_matches(context.token)
2447 matches = self.python_func_kw_matches(context.token)
2448 return _convert_matcher_v1_result_to_v2(matches, type="param")
2448 return _convert_matcher_v1_result_to_v2(matches, type="param")
2449
2449
2450 def python_func_kw_matches(self, text):
2450 def python_func_kw_matches(self, text):
2451 """Match named parameters (kwargs) of the last open function.
2451 """Match named parameters (kwargs) of the last open function.
2452
2452
2453 .. deprecated:: 8.6
2453 .. deprecated:: 8.6
2454 You can use :meth:`python_func_kw_matcher` instead.
2454 You can use :meth:`python_func_kw_matcher` instead.
2455 """
2455 """
2456
2456
2457 if "." in text: # a parameter cannot be dotted
2457 if "." in text: # a parameter cannot be dotted
2458 return []
2458 return []
2459 try: regexp = self.__funcParamsRegex
2459 try: regexp = self.__funcParamsRegex
2460 except AttributeError:
2460 except AttributeError:
2461 regexp = self.__funcParamsRegex = re.compile(r'''
2461 regexp = self.__funcParamsRegex = re.compile(r'''
2462 '.*?(?<!\\)' | # single quoted strings or
2462 '.*?(?<!\\)' | # single quoted strings or
2463 ".*?(?<!\\)" | # double quoted strings or
2463 ".*?(?<!\\)" | # double quoted strings or
2464 \w+ | # identifier
2464 \w+ | # identifier
2465 \S # other characters
2465 \S # other characters
2466 ''', re.VERBOSE | re.DOTALL)
2466 ''', re.VERBOSE | re.DOTALL)
2467 # 1. find the nearest identifier that comes before an unclosed
2467 # 1. find the nearest identifier that comes before an unclosed
2468 # parenthesis before the cursor
2468 # parenthesis before the cursor
2469 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2469 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2470 tokens = regexp.findall(self.text_until_cursor)
2470 tokens = regexp.findall(self.text_until_cursor)
2471 iterTokens = reversed(tokens); openPar = 0
2471 iterTokens = reversed(tokens); openPar = 0
2472
2472
2473 for token in iterTokens:
2473 for token in iterTokens:
2474 if token == ')':
2474 if token == ')':
2475 openPar -= 1
2475 openPar -= 1
2476 elif token == '(':
2476 elif token == '(':
2477 openPar += 1
2477 openPar += 1
2478 if openPar > 0:
2478 if openPar > 0:
2479 # found the last unclosed parenthesis
2479 # found the last unclosed parenthesis
2480 break
2480 break
2481 else:
2481 else:
2482 return []
2482 return []
2483 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2483 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2484 ids = []
2484 ids = []
2485 isId = re.compile(r'\w+$').match
2485 isId = re.compile(r'\w+$').match
2486
2486
2487 while True:
2487 while True:
2488 try:
2488 try:
2489 ids.append(next(iterTokens))
2489 ids.append(next(iterTokens))
2490 if not isId(ids[-1]):
2490 if not isId(ids[-1]):
2491 ids.pop(); break
2491 ids.pop(); break
2492 if not next(iterTokens) == '.':
2492 if not next(iterTokens) == '.':
2493 break
2493 break
2494 except StopIteration:
2494 except StopIteration:
2495 break
2495 break
2496
2496
2497 # Find all named arguments already assigned to, so as to avoid suggesting
2497 # Find all named arguments already assigned to, so as to avoid suggesting
2498 # them again
2498 # them again
2499 usedNamedArgs = set()
2499 usedNamedArgs = set()
2500 par_level = -1
2500 par_level = -1
2501 for token, next_token in zip(tokens, tokens[1:]):
2501 for token, next_token in zip(tokens, tokens[1:]):
2502 if token == '(':
2502 if token == '(':
2503 par_level += 1
2503 par_level += 1
2504 elif token == ')':
2504 elif token == ')':
2505 par_level -= 1
2505 par_level -= 1
2506
2506
2507 if par_level != 0:
2507 if par_level != 0:
2508 continue
2508 continue
2509
2509
2510 if next_token != '=':
2510 if next_token != '=':
2511 continue
2511 continue
2512
2512
2513 usedNamedArgs.add(token)
2513 usedNamedArgs.add(token)
2514
2514
2515 argMatches = []
2515 argMatches = []
2516 try:
2516 try:
2517 callableObj = '.'.join(ids[::-1])
2517 callableObj = '.'.join(ids[::-1])
2518 namedArgs = self._default_arguments(eval(callableObj,
2518 namedArgs = self._default_arguments(eval(callableObj,
2519 self.namespace))
2519 self.namespace))
2520
2520
2521 # Remove used named arguments from the list, no need to show twice
2521 # Remove used named arguments from the list, no need to show twice
2522 for namedArg in set(namedArgs) - usedNamedArgs:
2522 for namedArg in set(namedArgs) - usedNamedArgs:
2523 if namedArg.startswith(text):
2523 if namedArg.startswith(text):
2524 argMatches.append("%s=" %namedArg)
2524 argMatches.append("%s=" %namedArg)
2525 except:
2525 except:
2526 pass
2526 pass
2527
2527
2528 return argMatches
2528 return argMatches
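# A rough illustration of step 1 above: tokenize the text before the cursor
# with the same pattern and walk it backwards to find the identifier owning
# the last unclosed parenthesis. The sample line comes from the comment in
# the method body; the pattern is copied from __funcParamsRegex.
import re

pattern = re.compile(r'''
    '.*?(?<!\\)' |   # single quoted strings or
    ".*?(?<!\\)" |   # double quoted strings or
    \w+ |            # identifier
    \S               # other characters
    ''', re.VERBOSE | re.DOTALL)

print(pattern.findall("foo (1+bar(x), pa"))
# e.g. ['foo', '(', '1', '+', 'bar', '(', 'x', ')', ',', 'pa']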
2529
2529
2530 @staticmethod
2530 @staticmethod
2531 def _get_keys(obj: Any) -> List[Any]:
2531 def _get_keys(obj: Any) -> List[Any]:
2532 # Objects can define their own completions by defining an
2532 # Objects can define their own completions by defining an
2533 # _ipython_key_completions_() method.
2533 # _ipython_key_completions_() method.
2534 method = get_real_method(obj, '_ipython_key_completions_')
2534 method = get_real_method(obj, '_ipython_key_completions_')
2535 if method is not None:
2535 if method is not None:
2536 return method()
2536 return method()
2537
2537
2538 # Special case some common in-memory dict-like types
2538 # Special case some common in-memory dict-like types
2539 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2539 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2540 try:
2540 try:
2541 return list(obj.keys())
2541 return list(obj.keys())
2542 except Exception:
2542 except Exception:
2543 return []
2543 return []
2544 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2544 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2545 try:
2545 try:
2546 return list(obj.obj.keys())
2546 return list(obj.obj.keys())
2547 except Exception:
2547 except Exception:
2548 return []
2548 return []
2549 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2549 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2550 _safe_isinstance(obj, 'numpy', 'void'):
2550 _safe_isinstance(obj, 'numpy', 'void'):
2551 return obj.dtype.names or []
2551 return obj.dtype.names or []
2552 return []
2552 return []
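# A minimal example of the first branch above: an object can advertise its
# own key completions by implementing _ipython_key_completions_(). The class
# below is invented for illustration.
class Catalogue:
    def __init__(self):
        self._items = {"alpha": 1, "beta": 2}

    def __getitem__(self, key):
        return self._items[key]

    def _ipython_key_completions_(self):
        # offered after e.g. ``cat['<tab>``
        return list(self._items)

print(IPCompleter._get_keys(Catalogue()))  # -> ['alpha', 'beta']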
2553
2553
    @context_matcher()
    def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match string keys in a dictionary, after e.g. ``foo[``."""
        matches = self.dict_key_matches(context.token)
        return _convert_matcher_v1_result_to_v2(
            matches, type="dict key", suppress_if_matches=True
        )

    def dict_key_matches(self, text: str) -> List[str]:
        """Match string keys in a dictionary, after e.g. ``foo[``.

        .. deprecated:: 8.6
            You can use :meth:`dict_key_matcher` instead.
        """

        # Short-circuit on closed dictionary (regular expression would
        # not match anyway, but would take quite a while).
        if self.text_until_cursor.strip().endswith("]"):
            return []

        match = DICT_MATCHER_REGEX.search(self.text_until_cursor)

        if match is None:
            return []

        expr, prior_tuple_keys, key_prefix = match.groups()

        obj = self._evaluate_expr(expr)

        if obj is not_found:
            return []

        keys = self._get_keys(obj)
        if not keys:
            return keys

        tuple_prefix = guarded_eval(
            prior_tuple_keys,
            EvaluationContext(
                globals=self.global_namespace,
                locals=self.namespace,
                evaluation=self.evaluation,  # type: ignore
                in_subscript=True,
            ),
        )

        closing_quote, token_offset, matches = match_dict_keys(
            keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
        )
        if not matches:
            return []

        # get the cursor position of
        # - the text being completed
        # - the start of the key text
        # - the start of the completion
        text_start = len(self.text_until_cursor) - len(text)
        if key_prefix:
            key_start = match.start(3)
            completion_start = key_start + token_offset
        else:
            key_start = completion_start = match.end()

        # grab the leading prefix, to make sure all completions start with `text`
        if text_start > key_start:
            leading = ''
        else:
            leading = text[text_start:completion_start]

        # append closing quote and bracket as appropriate
        # this is *not* appropriate if the opening quote or bracket is outside
        # the text given to this method, e.g. `d["""a\nt
        can_close_quote = False
        can_close_bracket = False

        continuation = self.line_buffer[len(self.text_until_cursor) :].strip()

        if continuation.startswith(closing_quote):
            # do not close if already closed, e.g. `d['a<tab>'`
            continuation = continuation[len(closing_quote) :]
        else:
            can_close_quote = True

        continuation = continuation.strip()

        # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
        # handling it is out of scope, so let's avoid appending suffixes.
        has_known_tuple_handling = isinstance(obj, dict)

        can_close_bracket = (
            not continuation.startswith("]") and self.auto_close_dict_keys
        )
        can_close_tuple_item = (
            not continuation.startswith(",")
            and has_known_tuple_handling
            and self.auto_close_dict_keys
        )
        can_close_quote = can_close_quote and self.auto_close_dict_keys

        # fast path if the closing quote should be appended but no suffix is allowed
        if not can_close_quote and not can_close_bracket and closing_quote:
            return [leading + k for k in matches]

        results = []

        end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM

        for k, state_flag in matches.items():
            result = leading + k
            if can_close_quote and closing_quote:
                result += closing_quote

            if state_flag == end_of_tuple_or_item:
                # We do not know which suffix to add,
                # e.g. both tuple item and string
                # match this item.
                pass

            if state_flag in end_of_tuple_or_item and can_close_bracket:
                result += "]"
            if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
                result += ", "
            results.append(result)
        return results

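    # Example (illustrative sketch, not normative): how dict key completion behaves
    # from the user's point of view, assuming a plain ``dict`` in the user
    # namespace; the exact suffixes appended depend on ``auto_close_dict_keys``.
    #
    #     d = {"alpha": 1, ("beta", "gamma"): 2}
    #     d["al<tab>          ->  d["alpha"]            (quote and bracket closed)
    #     d["beta", "g<tab>   ->  d["beta", "gamma"]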
    @context_matcher()
    def unicode_name_matcher(self, context: CompletionContext):
        """Same as :any:`unicode_name_matches`, but adapted to the new Matcher API."""
        fragment, matches = self.unicode_name_matches(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

    @staticmethod
    def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
        """Match Latex-like syntax for unicode characters based
        on the name of the character.

        This does ``\\GREEK SMALL LETTER ETA`` -> ``η``

        Works only on valid python 3 identifiers, or on combining characters that
        will combine to form a valid identifier.
        """
        slashpos = text.rfind('\\')
        if slashpos > -1:
            s = text[slashpos+1:]
            try:
                unic = unicodedata.lookup(s)
                # allow combining chars
                if ('a'+unic).isidentifier():
                    return '\\'+s, [unic]
            except KeyError:
                pass
        return '', []

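    # Example (illustrative sketch): completing a character from its Unicode name.
    # The name must resolve via ``unicodedata.lookup`` and form a valid identifier.
    #
    #     IPCompleter.unicode_name_matches("\\GREEK SMALL LETTER ALPHA")
    #     # -> ('\\GREEK SMALL LETTER ALPHA', ['α'])
    #
    #     IPCompleter.unicode_name_matches("\\NO SUCH NAME")
    #     # -> ('', [])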
    @context_matcher()
    def latex_name_matcher(self, context: CompletionContext):
        """Match Latex syntax for unicode characters.

        This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
        """
        fragment, matches = self.latex_matches(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="latex", fragment=fragment, suppress_if_matches=True
        )

    def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
        """Match Latex syntax for unicode characters.

        This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``

        .. deprecated:: 8.6
            You can use :meth:`latex_name_matcher` instead.
        """
        slashpos = text.rfind('\\')
        if slashpos > -1:
            s = text[slashpos:]
            if s in latex_symbols:
                # Try to complete a full latex symbol to unicode
                # \\alpha -> α
                return s, [latex_symbols[s]]
            else:
                # If a user has partially typed a latex symbol, give them
                # a full list of options \al -> [\aleph, \alpha]
                matches = [k for k in latex_symbols if k.startswith(s)]
                if matches:
                    return s, matches
        return '', ()

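    # Example (illustrative sketch, assuming ``completer`` is an ``IPCompleter``
    # instance): a full latex symbol maps to its unicode character, a partial one
    # returns the candidate latex names, and text without a backslash matches nothing.
    #
    #     completer.latex_matches("\\alpha")   # -> ('\\alpha', ['α'])
    #     completer.latex_matches("\\al")      # -> ('\\al', ['\\aleph', '\\alpha', ...])
    #     completer.latex_matches("x + 1")     # -> ('', ())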
    @context_matcher()
    def custom_completer_matcher(self, context):
        """Dispatch custom completer.

        If a match is found, suppresses all other matchers except for Jedi.
        """
        matches = self.dispatch_custom_completer(context.token) or []
        result = _convert_matcher_v1_result_to_v2(
            matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
        )
        result["ordered"] = True
        result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
        return result

    def dispatch_custom_completer(self, text):
        """
        .. deprecated:: 8.6
            You can use :meth:`custom_completer_matcher` instead.
        """
        if not self.custom_completers:
            return

        line = self.line_buffer
        if not line.strip():
            return None

        # Create a little structure to pass all the relevant information about
        # the current completion to any custom completer.
        event = SimpleNamespace()
        event.line = line
        event.symbol = text
        cmd = line.split(None, 1)[0]
        event.command = cmd
        event.text_until_cursor = self.text_until_cursor

        # for foo etc, try also to find completer for %foo
        if not cmd.startswith(self.magic_escape):
            try_magic = self.custom_completers.s_matches(
                self.magic_escape + cmd)
        else:
            try_magic = []

        for c in itertools.chain(self.custom_completers.s_matches(cmd),
                                 try_magic,
                                 self.custom_completers.flat_matches(self.text_until_cursor)):
            try:
                res = c(event)
                if res:
                    # first, try case sensitive match
                    withcase = [r for r in res if r.startswith(text)]
                    if withcase:
                        return withcase
                    # if none, then case insensitive ones are ok too
                    text_low = text.lower()
                    return [r for r in res if r.lower().startswith(text_low)]
            except TryNext:
                pass
            except KeyboardInterrupt:
                """
                If a custom completer takes too long,
                let the keyboard interrupt abort and return nothing.
                """
                break

        return None

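    # Example (illustrative sketch): one way to register a custom completer that
    # this dispatcher will pick up is IPython's ``complete_command`` hook; the
    # callback receives the ``event`` namespace built above. The command name
    # ("apt") and the returned strings are purely illustrative.
    #
    #     def apt_completer(self, event):
    #         # event.line, event.symbol and event.command are available here
    #         return ["install", "remove", "update"]
    #
    #     get_ipython().set_hook("complete_command", apt_completer, str_key="apt")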
    def completions(self, text: str, offset: int) -> Iterator[Completion]:
        """
        Returns an iterator over the possible completions.

        .. warning::

            Unstable

            This function is unstable, API may change without warning.
            It will also raise unless used in a proper context manager.

        Parameters
        ----------
        text : str
            Full text of the current input, multi line string.
        offset : int
            Integer representing the position of the cursor in ``text``. Offset
            is 0-based indexed.

        Yields
        ------
        Completion

        Notes
        -----
        The cursor on a text can either be seen as being "in between"
        characters or "on" a character depending on the interface visible to
        the user. For consistency the cursor being "in between" characters X
        and Y is equivalent to the cursor being "on" character Y, that is to say
        the character the cursor is on is considered as being after the cursor.

        Combining characters may span more than one position in the
        text.

        .. note::

            If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
            fake Completion token to distinguish completions returned by Jedi
            from the usual IPython completions.

        .. note::

            Completions are not completely deduplicated yet. If identical
            completions are coming from different sources this function does not
            ensure that each completion object will only be present once.
        """
        warnings.warn("_complete is a provisional API (as of IPython 6.0). "
                      "It may change without warnings. "
                      "Use in corresponding context manager.",
                      category=ProvisionalCompleterWarning, stacklevel=2)

        seen = set()
        profiler: Optional[cProfile.Profile]
        try:
            if self.profile_completions:
                import cProfile
                profiler = cProfile.Profile()
                profiler.enable()
            else:
                profiler = None

            for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
                if c and (c in seen):
                    continue
                yield c
                seen.add(c)
        except KeyboardInterrupt:
            """if completions take too long and the user sends a keyboard interrupt,
            do not crash and return ASAP. """
            pass
        finally:
            if profiler is not None:
                profiler.disable()
                ensure_dir_exists(self.profiler_output_dir)
                output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
                print("Writing profiler output to", output_path)
                profiler.dump_stats(output_path)

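    # Example (illustrative sketch): as the docstring above notes, this provisional
    # API is meant to be used inside the ``provisionalcompleter`` context manager,
    # which silences the provisional-API warning. The sample text and offset are
    # arbitrary.
    #
    #     from IPython.core.completer import provisionalcompleter
    #     ip = get_ipython()
    #     with provisionalcompleter():
    #         completions = list(ip.Completer.completions("print.__doc", 11))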
    def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
        """
        Core completion method. Same signature as :any:`completions`, with the
        extra ``_timeout`` parameter (in seconds).

        Computing jedi's completion ``.type`` can be quite expensive (it is a
        lazy property) and can require some warm-up, more warm-up than just
        computing the ``name`` of a completion. The warm-up can be:

        - Long warm-up the first time a module is encountered after
          install/update: actually build the parse/inference tree.

        - First time the module is encountered in a session: load the tree from
          disk.

        We don't want to block completions for tens of seconds so we give the
        completer a "budget" of ``_timeout`` seconds per invocation to compute
        completion types; the completions that have not yet been computed will
        be marked as "unknown" and will have a chance to be computed in the next
        round as things get cached.

        Keep in mind that Jedi is not the only thing treating the completion, so
        keep the timeout short-ish: if we take more than 0.3 seconds we still
        have lots of processing to do.

        """
        deadline = time.monotonic() + _timeout

        before = full_text[:offset]
        cursor_line, cursor_column = position_to_cursor(full_text, offset)

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        def is_non_jedi_result(
            result: MatcherResult, identifier: str
        ) -> TypeGuard[SimpleMatcherResult]:
            return identifier != jedi_matcher_id

        results = self._complete(
            full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
        )

        non_jedi_results: Dict[str, SimpleMatcherResult] = {
            identifier: result
            for identifier, result in results.items()
            if is_non_jedi_result(result, identifier)
        }

        jedi_matches = (
            cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
            if jedi_matcher_id in results
            else ()
        )

        iter_jm = iter(jedi_matches)
        if _timeout:
            for jm in iter_jm:
                try:
                    type_ = jm.type
                except Exception:
                    if self.debug:
                        print("Error in Jedi getting type of ", jm)
                    type_ = None
                delta = len(jm.name_with_symbols) - len(jm.complete)
                if type_ == 'function':
                    signature = _make_signature(jm)
                else:
                    signature = ''
                yield Completion(start=offset - delta,
                                 end=offset,
                                 text=jm.name_with_symbols,
                                 type=type_,
                                 signature=signature,
                                 _origin='jedi')

                if time.monotonic() > deadline:
                    break

        for jm in iter_jm:
            delta = len(jm.name_with_symbols) - len(jm.complete)
            yield Completion(
                start=offset - delta,
                end=offset,
                text=jm.name_with_symbols,
                type=_UNKNOWN_TYPE,  # don't compute type for speed
                _origin="jedi",
                signature="",
            )

        # TODO:
        # Suppress this, right now just for debug.
        if jedi_matches and non_jedi_results and self.debug:
            some_start_offset = before.rfind(
                next(iter(non_jedi_results.values()))["matched_fragment"]
            )
            yield Completion(
                start=some_start_offset,
                end=offset,
                text="--jedi/ipython--",
                _origin="debug",
                type="none",
                signature="",
            )

        ordered: List[Completion] = []
        sortable: List[Completion] = []

        for origin, result in non_jedi_results.items():
            matched_text = result["matched_fragment"]
            start_offset = before.rfind(matched_text)
            is_ordered = result.get("ordered", False)
            container = ordered if is_ordered else sortable

            # I'm unsure if this is always true, so let's assert and see if it
            # crashes
            assert before.endswith(matched_text)

            for simple_completion in result["completions"]:
                completion = Completion(
                    start=start_offset,
                    end=offset,
                    text=simple_completion.text,
                    _origin=origin,
                    signature="",
                    type=simple_completion.type or _UNKNOWN_TYPE,
                )
                container.append(completion)

        yield from list(self._deduplicate(ordered + self._sort(sortable)))[
            :MATCHES_LIMIT
        ]

    def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
        """Find completions for the given text and line context.

        Note that both the text and the line_buffer are optional, but at least
        one of them must be given.

        Parameters
        ----------
        text : string, optional
            Text to perform the completion on. If not given, the line buffer
            is split using the instance's CompletionSplitter object.
        line_buffer : string, optional
            If not given, the completer attempts to obtain the current line
            buffer via readline. This keyword allows clients which are
            requesting text completions in non-readline contexts to inform
            the completer of the entire text.
        cursor_pos : int, optional
            Index of the cursor in the full line buffer. Should be provided by
            remote frontends where the kernel has no access to frontend state.

        Returns
        -------
        Tuple of two items:
        text : str
            Text that was actually used in the completion.
        matches : list
            A list of completion matches.

        Notes
        -----
        This API is likely to be deprecated and replaced by
        :any:`IPCompleter.completions` in the future.

        """
        warnings.warn('`Completer.complete` is pending deprecation since '
                      'IPython 6.0 and will be replaced by `Completer.completions`.',
                      PendingDeprecationWarning)
        # potential todo: fold the 3rd throw-away argument of _complete
        # into the first two.
        # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
        # TODO: should we deprecate now, or does it stay?

        results = self._complete(
            line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
        )

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        return self._arrange_and_extract(
            results,
            # TODO: can we confirm that excluding Jedi here was a deliberate choice in the previous version?
            skip_matchers={jedi_matcher_id},
            # this API does not support different start/end positions (fragments of token).
            abort_if_offset_changes=True,
        )

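    # Example (illustrative sketch): the legacy stateful API returns the matched
    # fragment and a flat list of strings; Jedi results are skipped here, as noted
    # above. The input line and the suggested matches are only indicative.
    #
    #     text, matches = get_ipython().Completer.complete(line_buffer="import o")
    #     # text == "o"; matches is roughly ["opcode", "operator", "os", ...]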
    def _arrange_and_extract(
        self,
        results: Dict[str, MatcherResult],
        skip_matchers: Set[str],
        abort_if_offset_changes: bool,
    ):
        sortable: List[AnyMatcherCompletion] = []
        ordered: List[AnyMatcherCompletion] = []
        most_recent_fragment = None
        for identifier, result in results.items():
            if identifier in skip_matchers:
                continue
            if not result["completions"]:
                continue
            if not most_recent_fragment:
                most_recent_fragment = result["matched_fragment"]
            if (
                abort_if_offset_changes
                and result["matched_fragment"] != most_recent_fragment
            ):
                break
            if result.get("ordered", False):
                ordered.extend(result["completions"])
            else:
                sortable.extend(result["completions"])

        if not most_recent_fragment:
            most_recent_fragment = ""  # to satisfy typechecker (and just in case)

        return most_recent_fragment, [
            m.text for m in self._deduplicate(ordered + self._sort(sortable))
        ]

    def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
                  full_text=None) -> _CompleteResult:
        """
        Like complete, but can also return raw Jedi completions as well as the
        origin of the completion text. This could (and should) be made much
        cleaner but that will be simpler once we drop the old (and stateful)
        :any:`complete` API.

        With the current provisional API, cursor_pos acts (depending on the
        caller) either as the offset in ``text`` or ``line_buffer``, or as the
        ``column`` when passing multiline strings; this could/should be renamed
        but would add extra noise.

        Parameters
        ----------
        cursor_line
            Index of the line the cursor is on. 0 indexed.
        cursor_pos
            Position of the cursor in the current line/line_buffer/text. 0
            indexed.
        line_buffer : optional, str
            The current line the cursor is in; this is mostly for legacy
            reasons, as readline could only give us the single current line.
            Prefer `full_text`.
        text : str
            The current "token" the cursor is in, mostly also for historical
            reasons, as the completer would trigger only after the current line
            was parsed.
        full_text : str
            Full text of the current cell.

        Returns
        -------
        An ordered dictionary where keys are identifiers of completion
        matchers and values are ``MatcherResult``s.
        """

        # if the cursor position isn't given, the only sane assumption we can
        # make is that it's at the end of the line (the common case)
        if cursor_pos is None:
            cursor_pos = len(line_buffer) if text is None else len(text)

        if self.use_main_ns:
            self.namespace = __main__.__dict__

        # if text is either None or an empty string, rely on the line buffer
        if (not line_buffer) and full_text:
            line_buffer = full_text.split('\n')[cursor_line]
        if not text:  # issue #11508: check line_buffer before calling split_line
            text = (
                self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
            )

        # If no line buffer is given, assume the input text is all there was
        if line_buffer is None:
            line_buffer = text

        # deprecated - do not use `line_buffer` in new code.
        self.line_buffer = line_buffer
        self.text_until_cursor = self.line_buffer[:cursor_pos]

        if not full_text:
            full_text = line_buffer

        context = CompletionContext(
            full_text=full_text,
            cursor_position=cursor_pos,
            cursor_line=cursor_line,
            token=text,
            limit=MATCHES_LIMIT,
        )

        # Start with a clean slate of completions
        results: Dict[str, MatcherResult] = {}

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        suppressed_matchers: Set[str] = set()

        matchers = {
            _get_matcher_id(matcher): matcher
            for matcher in sorted(
                self.matchers, key=_get_matcher_priority, reverse=True
            )
        }

        for matcher_id, matcher in matchers.items():
            matcher_id = _get_matcher_id(matcher)

            if matcher_id in self.disable_matchers:
                continue

            if matcher_id in results:
                warnings.warn(f"Duplicate matcher ID: {matcher_id}.")

            if matcher_id in suppressed_matchers:
                continue

            result: MatcherResult
            try:
                if _is_matcher_v1(matcher):
                    result = _convert_matcher_v1_result_to_v2(
                        matcher(text), type=_UNKNOWN_TYPE
                    )
                elif _is_matcher_v2(matcher):
                    result = matcher(context)
                else:
                    api_version = _get_matcher_api_version(matcher)
                    raise ValueError(f"Unsupported API version {api_version}")
            except:
                # Show the ugly traceback if the matcher causes an
                # exception, but do NOT crash the kernel!
                sys.excepthook(*sys.exc_info())
                continue

            # set default value for matched fragment if suffix was not selected.
            result["matched_fragment"] = result.get("matched_fragment", context.token)

            if not suppressed_matchers:
                suppression_recommended: Union[bool, Set[str]] = result.get(
                    "suppress", False
                )

                suppression_config = (
                    self.suppress_competing_matchers.get(matcher_id, None)
                    if isinstance(self.suppress_competing_matchers, dict)
                    else self.suppress_competing_matchers
                )
                should_suppress = (
                    (suppression_config is True)
                    or (suppression_recommended and (suppression_config is not False))
                ) and has_any_completions(result)

                if should_suppress:
                    suppression_exceptions: Set[str] = result.get(
                        "do_not_suppress", set()
                    )
                    if isinstance(suppression_recommended, Iterable):
                        to_suppress = set(suppression_recommended)
                    else:
                        to_suppress = set(matchers)
                    suppressed_matchers = to_suppress - suppression_exceptions

                    new_results = {}
                    for previous_matcher_id, previous_result in results.items():
                        if previous_matcher_id not in suppressed_matchers:
                            new_results[previous_matcher_id] = previous_result
                    results = new_results

            results[matcher_id] = result

        _, matches = self._arrange_and_extract(
            results,
            # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission?
            # If it was an omission, we can remove the filtering step; otherwise remove this comment.
            skip_matchers={jedi_matcher_id},
            abort_if_offset_changes=False,
        )

        # populate legacy stateful API
        self.matches = matches

        return results

    @staticmethod
    def _deduplicate(
        matches: Sequence[AnyCompletion],
    ) -> Iterable[AnyCompletion]:
        filtered_matches: Dict[str, AnyCompletion] = {}
        for match in matches:
            text = match.text
            if (
                text not in filtered_matches
                or filtered_matches[text].type == _UNKNOWN_TYPE
            ):
                filtered_matches[text] = match

        return filtered_matches.values()

    @staticmethod
    def _sort(matches: Sequence[AnyCompletion]):
        return sorted(matches, key=lambda x: completions_sorting_key(x.text))

    @context_matcher()
    def fwd_unicode_matcher(self, context: CompletionContext):
        """Same as :any:`fwd_unicode_match`, but adapted to the new Matcher API."""
        # TODO: use `context.limit` to terminate early once we matched the maximum
        # number that will be used downstream; can be added as an optional to
        # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
        fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

    def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
        """
        Forward match a string starting with a backslash with a list of
        potential Unicode completions.

        Will compute the list of Unicode character names on first call and cache it.

        .. deprecated:: 8.6
            You can use :meth:`fwd_unicode_matcher` instead.

        Returns
        -------
        A tuple with:
            - matched text (empty if no matches)
            - list of potential completions (empty tuple otherwise)
        """
        # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
        # We could do a faster match using a Trie.

        # Using pygtrie the following seems to work:

        # s = PrefixSet()

        # for c in range(0,0x10FFFF + 1):
        #     try:
        #         s.add(unicodedata.name(chr(c)))
        #     except ValueError:
        #         pass
        # [''.join(k) for k in s.iter(prefix)]

        # But it needs to be timed and adds an extra dependency.

        slashpos = text.rfind('\\')
        # if text starts with slash
        if slashpos > -1:
            # PERF: It's important that we don't access self._unicode_names
            # until we're inside this if-block. _unicode_names is lazily
            # initialized, and it takes a user-noticeable amount of time to
            # initialize it, so we don't want to initialize it unless we're
            # actually going to use it.
            s = text[slashpos + 1 :]
            sup = s.upper()
            candidates = [x for x in self.unicode_names if x.startswith(sup)]
            if candidates:
                return s, candidates
            candidates = [x for x in self.unicode_names if sup in x]
            if candidates:
                return s, candidates
            splitsup = sup.split(" ")
            candidates = [
                x for x in self.unicode_names if all(u in x for u in splitsup)
            ]
            if candidates:
                return s, candidates

            return "", ()

        # if text does not start with slash
        else:
            return '', ()

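    # Example (illustrative sketch, assuming ``completer`` is an ``IPCompleter``
    # instance): forward matching is done on upper-cased Unicode names, first by
    # prefix, then by substring, then by word subset.
    #
    #     completer.fwd_unicode_match("\\GREEK SM")
    #     # -> ('GREEK SM', ['GREEK SMALL LETTER ALPHA', 'GREEK SMALL LETTER BETA', ...])
    #
    #     completer.fwd_unicode_match("no backslash")   # -> ('', ())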
    @property
    def unicode_names(self) -> List[str]:
        """List of names of unicode code points that can be completed.

        The list is lazily initialized on first access.
        """
        if self._unicode_names is None:
            self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)

        return self._unicode_names

def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:
                pass
    return names