reformat
M Bussonnier -
"""Completion for IPython.

This module started as a fork of the rlcompleter module in the Python standard
library. The original enhancements made to rlcompleter have been sent
upstream and were accepted as of Python 2.3.

This module now supports a wide variety of completion mechanisms, both for
normal classic Python code and completers for IPython-specific syntax such
as magics.

Latex and Unicode completion
============================

IPython and compatible frontends not only can complete your code, but can help
you to input a wide range of characters. In particular we allow you to insert
a unicode character using the tab completion mechanism.

Forward latex/unicode completion
--------------------------------

Forward completion allows you to easily type a unicode character using its latex
name, or unicode long description. To do so type a backslash followed by the
relevant name and press tab:

Using latex completion:

.. code::

    \\alpha<tab>
    α

or using unicode completion:

.. code::

    \\GREEK SMALL LETTER ALPHA<tab>
    α


Only valid Python identifiers will complete. Combining characters (like arrows
or dots) are also available; unlike latex, they need to be put after their
counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.

Some browsers are known to display combining characters incorrectly.

Backward latex completion
-------------------------

It is sometimes challenging to know how to type a character. If you are using
IPython, or any compatible frontend, you can prepend a backslash to the
character and press :kbd:`Tab` to expand it to its latex form.

.. code::

    \\α<tab>
    \\alpha


Both forward and backward completions can be deactivated by setting the
:std:configtrait:`Completer.backslash_combining_completions` option to
``False``.
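For example, to disable them you could add the following to the
:file:`ipython_config.py` of your profile (``c`` is the configuration object
that IPython provides inside configuration files):

```python
# ipython_config.py -- turn off both forward and backward
# backslash/latex completions.
c.Completer.backslash_combining_completions = False
```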


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions both using static analysis of the code, and dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and
will raise unless used in an :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if current
IPython completer pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is an integer without
executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
option.

Be sure to update :any:`jedi` to the latest stable version or to try the
current development version to get better completions.

Matchers
========

All completion routines are implemented using a unified *Matchers* API.
The matchers API is provisional and subject to change without notice.

The built-in matchers include:

- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
- :any:`IPCompleter.magic_matcher`: completions for magics,
- :any:`IPCompleter.unicode_name_matcher`,
  :any:`IPCompleter.fwd_unicode_matcher`
  and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
- :any:`IPCompleter.file_matcher`: paths to files and directories,
- :any:`IPCompleter.python_func_kw_matcher`: function keywords,
- :any:`IPCompleter.python_matches`: globals and attributes (v1 API),
- ``IPCompleter.jedi_matcher``: static analysis with Jedi,
- :any:`IPCompleter.custom_completer_matcher`: pluggable completer with a default
  implementation in :any:`InteractiveShell`, which uses the IPython hooks system
  (`complete_command`) with string dispatch (including regular expressions).
  Unlike other matchers, ``custom_completer_matcher`` will not suppress
  Jedi results, to match behaviour in earlier IPython versions.

Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
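As a minimal illustration, a v1-style custom matcher is just a callable taking
the text before the cursor and returning candidate strings. The helper below
(its name, the ``UNITS`` list, and the registration line) is hypothetical,
shown only to sketch the shape of the API:

```python
# A hypothetical v1-style matcher: offer a few fixed names whenever the
# token under the cursor starts with "unit_".
UNITS = ["unit_metre", "unit_second", "unit_kelvin"]

def unit_matcher(text: str) -> list:
    """Return the candidates extending ``text`` (v1 API: str -> list[str])."""
    return [u for u in UNITS if u.startswith(text)]

# Inside a running IPython session one would register it with, e.g.:
#     ip = get_ipython()
#     ip.Completer.custom_matchers.append(unit_matcher)
```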

Matcher API
-----------

Simplifying some details, the ``Matcher`` interface can be described as

.. code-block::

    MatcherAPIv1 = Callable[[str], list[str]]
    MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]

    Matcher = MatcherAPIv1 | MatcherAPIv2

The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
and remains supported as the simplest way of generating completions. This is also
currently the only API supported by the IPython hooks system `complete_command`.

To distinguish between matcher versions, the ``matcher_api_version`` attribute
is used. More precisely, the API allows omitting ``matcher_api_version`` for
v1 Matchers, and requires a literal ``2`` for v2 Matchers.

Once the API stabilises, future versions may relax the requirement for specifying
``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore
please do not rely on the presence of ``matcher_api_version`` for any purposes.
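A rough sketch of how a dispatcher might tell the two versions apart, treating
a missing ``matcher_api_version`` as v1. The function and matcher names here
are illustrative only, not IPython's actual internals:

```python
def matcher_version(matcher) -> int:
    """Return 1 for v1 matchers, 2 for v2 matchers (illustrative only)."""
    version = getattr(matcher, "matcher_api_version", 1)
    if version not in (1, 2):
        raise ValueError(f"Unsupported matcher API version: {version}")
    return version

def legacy_matcher(text):
    # No ``matcher_api_version`` attribute, so this is treated as v1.
    return []

def modern_matcher(context):
    return {"completions": [], "suppress": False}

modern_matcher.matcher_api_version = 2  # a literal 2 marks a v2 matcher
```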

Suppression of competing matchers
---------------------------------

By default results from all matchers are combined, in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results from
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a set of identifiers of matchers which
should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.

The suppression behaviour is user-configurable via
:std:configtrait:`IPCompleter.suppress_competing_matchers`.
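The rules above can be sketched as follows. This is a simplified model — plain
dicts for results and a list already sorted by descending priority — not
IPython's actual implementation:

```python
def combine_results(ordered_results):
    """Merge (matcher_id, result) pairs, already sorted by descending priority.

    Each result is a dict with ``completions`` plus optional ``suppress``
    (bool) and ``do_not_suppress`` (set of matcher identifiers).
    """
    merged = []
    for index, (matcher_id, result) in enumerate(ordered_results):
        merged.extend(result["completions"])
        if result.get("suppress"):
            # Subsequent (lower-priority) matchers are dropped, except the
            # ones this matcher explicitly exempts via ``do_not_suppress``.
            exempt = result.get("do_not_suppress", set())
            merged.extend(
                completion
                for later_id, later in ordered_results[index + 1:]
                if later_id in exempt
                for completion in later["completions"]
            )
            break
    return merged
```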
"""


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org

from __future__ import annotations
import builtins as builtin_mod
import enum
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import tokenize
import time
import unicodedata
import uuid
import warnings
from ast import literal_eval
from collections import defaultdict
from contextlib import contextmanager
from dataclasses import dataclass
from functools import cached_property, partial
from types import SimpleNamespace
from typing import (
    Iterable,
    Iterator,
    List,
    Tuple,
    Union,
    Any,
    Sequence,
    Dict,
    Optional,
    TYPE_CHECKING,
    Set,
    Sized,
    TypeVar,
    Literal,
)

from IPython.core.guarded_eval import guarded_eval, EvaluationContext
from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
from IPython.utils.decorators import sphinx_options
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.docs import GENERATING_DOCUMENTATION
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import (
    Bool,
    Enum,
    Int,
    List as ListTrait,
    Unicode,
    Dict as DictTrait,
    Union as UnionTrait,
    observe,
)
from traitlets.config.configurable import Configurable

import __main__

from typing import cast
from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard


# skip module doctests
__skip_doctest__ = True


try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False


if GENERATING_DOCUMENTATION:
    from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
# -----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be finer
# grained, but is it worth it for performance? While unicode has characters in
# the range 0-0x110000, only about 10% of those seem to have a name (131808 as
# I write this). The ranges below cover them all, with a density of ~67%; the
# biggest next gap we could consider adds only about 1% density, and there are
# 600 gaps that would need hard-coding.
_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

# sentinel value to signal lack of a match
not_found = object()

class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in the :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things() # works

    >>> completer.do_experimental_things() # raises.

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should
        report a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements
        on any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply checks whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False


def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s


def expand_user(path: str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val


def compress_user(path: str, tilde_expand: bool, tilde_val: str) -> str:
    """Does the opposite of expand_user, with its outputs."""
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path


def completions_sorting_key(word):
    """Key for sorting completions.

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in the alphabetical order
      by their name
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if "%" not in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if "%" not in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2


class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0, so should likely be removed for 7.0.
    """

    def __init__(self, name):
        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ""
        self._origin = "fake"
        self.text = "crashed"

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
476
475
477
476
class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in a proper context manager.

    This acts as a middle-ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how
    the code should be run/inspected, Prompt Toolkit (and other frontends)
    mostly need user-facing information.

    - Which range should be replaced by what.
    - Some metadata (like the completion type), or meta information to be
      displayed to the user.

    For debugging purposes we can also store the origin of the completion
    (``jedi``, ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(
        self,
        start: int,
        end: int,
        text: str,
        *,
        type: Optional[str] = None,
        _origin="",
        signature="",
    ) -> None:
        warnings.warn(
            "``Completion`` is a provisional API (as of IPython 6.0). "
            "It may change without warnings. "
            "Use in corresponding context manager.",
            category=ProvisionalCompleterWarning,
            stacklevel=2,
        )

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
            (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other) -> bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer the type), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing, as it depends on the surrounding text, which Completions
        are not aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))


class SimpleCompletion:
    """Completion item to be included in the dictionary returned by new-style Matcher (API v2).

    .. warning::

        Provisional

        This class is used to describe the currently supported attributes of
        simple completion items, and any additional implementation details
        should not be relied on. Additional attributes may be included in
        future versions, and the meaning of text disambiguated from the
        current dual meaning of "text to insert" and "text to be used as a
        label".
    """

    __slots__ = ["text", "type"]

    def __init__(self, text: str, *, type: Optional[str] = None):
        self.text = text
        self.type = type

    def __repr__(self):
        return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"


class _MatcherResultBase(TypedDict):
    """Definition of dictionary to be returned by new-style Matcher (API v2)."""

    #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
    matched_fragment: NotRequired[str]

    #: Whether to suppress results from all other matchers (True), some
    #: matchers (set of identifiers) or none (False); default is False.
    suppress: NotRequired[Union[bool, Set[str]]]

    #: Identifiers of matchers which should NOT be suppressed when this matcher
    #: requests to suppress all other matchers; defaults to an empty set.
    do_not_suppress: NotRequired[Set[str]]

    #: Are completions already ordered and should be left as-is? default is False.
    ordered: NotRequired[bool]


@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
class SimpleMatcherResult(_MatcherResultBase, TypedDict):
    """Result of new-style completion matcher."""

    # note: TypedDict is added again to the inheritance chain
    # in order to get __orig_bases__ for documentation

    #: List of candidate completions
    completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]


class _JediMatcherResult(_MatcherResultBase):
    """Matching result returned by Jedi (will be processed differently)"""

    #: list of candidate completions
    completions: Iterator[_JediCompletionLike]


AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)


@dataclass
class CompletionContext:
    """Completion context provided as an argument to matchers in the Matcher API v2."""

    # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
    # which was not explicitly visible as an argument of the matcher, making any refactor
    # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
    # from the completer, and make substituting them in sub-classes easier.

    #: Relevant fragment of code directly preceding the cursor.
    #: The extraction of the token is implemented via a splitter heuristic
    #: (following readline behaviour for legacy reasons), which is user-configurable
    #: (by switching the greedy mode).
    token: str

    #: The full available content of the editor or buffer
    full_text: str

    #: Cursor position in the line (the same for ``full_text`` and ``text``).
    cursor_position: int

    #: Cursor line in ``full_text``.
    cursor_line: int

    #: The maximum number of completions that will be used downstream.
    #: Matchers can use this information to abort early.
    #: The built-in Jedi matcher is currently exempt from this limit.
    #: If not given, return all possible completions.
    limit: Optional[int]

    @cached_property
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

    @cached_property
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]


#: Matcher results for API v2.
MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]


class _MatcherAPIv1Base(Protocol):
    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
    #: API version
    matcher_api_version: Optional[Literal[1]]

    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...


#: Protocol describing Matcher API v1.
MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]


class MatcherAPIv2(Protocol):
    """Protocol describing Matcher API v2."""

    #: API version
    matcher_api_version: Literal[2] = 2

    def __call__(self, context: CompletionContext) -> MatcherResult:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]


def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 1


def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 2


def _is_sizable(value: Any) -> TypeGuard[Sized]:
    """Determine whether the object is sizable."""
    return hasattr(value, "__len__")


def _is_iterator(value: Any) -> TypeGuard[Iterator]:
    """Determine whether the object is an iterator."""
    return hasattr(value, "__next__")


def has_any_completions(result: MatcherResult) -> bool:
    """Check if the result includes any completions."""
    completions = result["completions"]
    if _is_sizable(completions):
        return len(completions) != 0
    if _is_iterator(completions):
        try:
            old_iterator = completions
            first = next(old_iterator)
            result["completions"] = cast(
                Iterator[SimpleCompletion],
                itertools.chain([first], old_iterator),
            )
            return True
        except StopIteration:
            return False
    raise ValueError(
        "Completions returned by matcher need to be an Iterator or a Sizable"
    )


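# ``has_any_completions`` peeks at a lazy iterator without losing the first
# element by chaining it back in front of the rest. The same pattern in
# isolation (a standalone sketch, not the IPython helper itself):

```python
import itertools

def peek_nonempty(iterator):
    """Return (is_nonempty, iterator) without losing the first element.

    ``next`` consumes one element, so an equivalent iterator is rebuilt by
    chaining the consumed element back in front of the remainder.
    """
    try:
        first = next(iterator)
    except StopIteration:
        return False, iter(())
    return True, itertools.chain([first], iterator)

nonempty, it = peek_nonempty(iter(range(3)))
print(nonempty, list(it))  # True [0, 1, 2]
```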
def completion_matcher(
    *,
    priority: Optional[float] = None,
    identifier: Optional[str] = None,
    api_version: int = 1,
):
    """Adds attributes describing the matcher.

    Parameters
    ----------
    priority : Optional[float]
        The priority of the matcher, determines the order of execution of matchers.
        Higher priority means that the matcher will be executed first. Defaults to 0.
    identifier : Optional[str]
        identifier of the matcher allowing users to modify the behaviour via traitlets,
        and also used for debugging (will be passed as ``origin`` with the completions).

        Defaults to the matcher function's ``__qualname__`` (for example,
        ``IPCompleter.file_matcher`` for the built-in matcher defined
        as a ``file_matcher`` method of the ``IPCompleter`` class).
    api_version : Optional[int]
        version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.
    """

    def wrapper(func: Matcher):
        func.matcher_priority = priority or 0  # type: ignore
        func.matcher_identifier = identifier or func.__qualname__  # type: ignore
        func.matcher_api_version = api_version  # type: ignore
        if TYPE_CHECKING:
            if api_version == 1:
                func = cast(MatcherAPIv1, func)
            elif api_version == 2:
                func = cast(MatcherAPIv2, func)
        return func

    return wrapper


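# A standalone sketch of how such an attribute-attaching decorator is used.
# The matcher body below is a made-up example (``demo.keyword_matcher`` is a
# hypothetical identifier, not one of IPython's built-in matchers):

```python
from functools import partial
from typing import List, Optional

def completion_matcher(*, priority: Optional[float] = None,
                       identifier: Optional[str] = None, api_version: int = 1):
    # Same shape as the decorator above: attach metadata, return the function.
    def wrapper(func):
        func.matcher_priority = priority or 0
        func.matcher_identifier = identifier or func.__qualname__
        func.matcher_api_version = api_version
        return func
    return wrapper

context_matcher = partial(completion_matcher, api_version=2)

@completion_matcher(priority=10, identifier="demo.keyword_matcher")
def keyword_matcher(text: str) -> List[str]:
    # Hypothetical v1 matcher: returns a plain list of string candidates.
    return [kw for kw in ("lambda", "lazy", "list") if kw.startswith(text)]

print(keyword_matcher.matcher_identifier)  # demo.keyword_matcher
print(keyword_matcher("la"))               # ['lambda', 'lazy']
```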
def _get_matcher_priority(matcher: Matcher):
    return getattr(matcher, "matcher_priority", 0)


def _get_matcher_id(matcher: Matcher):
    return getattr(matcher, "matcher_identifier", matcher.__qualname__)


def _get_matcher_api_version(matcher):
    return getattr(matcher, "matcher_api_version", 1)


context_matcher = partial(completion_matcher, api_version=2)


_IC = Iterable[Completion]


def _deduplicate_completions(text: str, completions: _IC) -> _IC:
    """
    Deduplicate a set of completions.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to deduplicate

    Yields
    ------
    `Completions` objects
        Completions coming from multiple sources may be different but end up
        having the same effect when applied to ``text``. If this is the case,
        this will consider the completions as equal and only emit the first
        encountered.
        Not folded into `completions()` yet for debugging purposes, and to
        detect when the IPython completer does return things that Jedi does
        not, but should be at some point.
    """
    completions = list(completions)
    if not completions:
        return

    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)

    seen = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            yield c
            seen.add(new_text)


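# Two completions with different (start, end, text) can still have the same
# effect when applied to the buffer; the helper above detects this by
# normalizing each candidate onto a common [new_start, new_end) window. A
# standalone sketch using plain (start, end, text) tuples:

```python
# Candidates over the buffer "pri": the first two differ in range and text
# but produce the identical result, so only one survives deduplication.
text = "pri"
completions = [
    (0, 3, "print"),     # replace the whole token
    (2, 3, "int"),       # replace only the tail: "pr" + "int" == "print"
    (0, 3, "property"),
]

new_start = min(c[0] for c in completions)
new_end = max(c[1] for c in completions)

seen, unique = set(), []
for start, end, ctext in completions:
    applied = text[new_start:start] + ctext + text[end:new_end]
    if applied not in seen:
        seen.add(applied)
        unique.append(applied)

print(unique)  # ['print', 'property'] -- the first two collapsed
```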
def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
    """
    Rectify a set of completions to all have the same ``start`` and ``end``

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in a proper context manager.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to rectify
    _debug : bool
        Log failed completion

    Notes
    -----
    :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
    the Jupyter Protocol requires them to behave like so. This will readjust
    the completions to have the same ``start`` and ``end`` by padding both
    extremities with the surrounding text.

    During stabilisation this should support a ``_debug`` option to log which
    completions are returned by the IPython completer but not found by Jedi,
    in order to make upstream bug reports.
    """
    warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
                  "It may change without warnings. "
                  "Use in corresponding context manager.",
                  category=ProvisionalCompleterWarning, stacklevel=2)

    completions = list(completions)
    if not completions:
        return
    starts = (c.start for c in completions)
    ends = (c.end for c in completions)

    new_start = min(starts)
    new_end = max(ends)

    seen_jedi = set()
    seen_python_matches = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if c._origin == 'jedi':
            seen_jedi.add(new_text)
        elif c._origin == "IPCompleter.python_matcher":
            seen_python_matches.add(new_text)
        yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
    diff = seen_python_matches.difference(seen_jedi)
    if diff and _debug:
        print('IPython.python matches have extras:', diff)


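# The rectification step pads every candidate so it replaces one common
# range, as the Jupyter protocol expects. A standalone sketch of just the
# padding arithmetic, over (start, end, text) tuples:

```python
# Rewrite each completion so it replaces [new_start, new_end) of the buffer,
# padding its text with the surrounding characters it did not cover.
text = "foo.ba"
completions = [
    (4, 6, "bar"),       # replaces only "ba" at the end
    (0, 6, "foo.baz"),   # replaces the full expression
]

new_start = min(c[0] for c in completions)  # 0
new_end = max(c[1] for c in completions)    # 6

rectified = [
    (new_start, new_end, text[new_start:start] + ctext + text[end:new_end])
    for start, end, ctext in completions
]
print(rectified)  # [(0, 6, 'foo.bar'), (0, 6, 'foo.baz')]
```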
if sys.platform == 'win32':
    DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
else:
    DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'

GREEDY_DELIMS = ' =\r\n'


class CompletionSplitter(object):
    """An object to split an input line in a manner similar to readline.

    By having our own implementation, we can expose readline-like completion in
    a uniform manner to all frontends. This object only needs to be given the
    line of text to be split and the cursor position on said line, and it
    returns the 'word' to be completed on at the cursor after splitting the
    entire line.

    What characters are used as splitting delimiters can be controlled by
    setting the ``delims`` attribute (this is a property that internally
    automatically builds the necessary regular expression)"""

    # Private interface

    # A string of delimiter characters. The default value makes sense for
    # IPython's most typical usage patterns.
    _delims = DELIMS

    # The expression (a normal string) to be compiled into a regular expression
    # for actual splitting. We store it as an attribute mostly for ease of
    # debugging, since this type of code can be so tricky to debug.
    _delim_expr = None

    # The regular expression that does the actual splitting
    _delim_re = None

    def __init__(self, delims=None):
        delims = CompletionSplitter._delims if delims is None else delims
        self.delims = delims

    @property
    def delims(self):
        """Return the string of delimiter characters."""
        return self._delims

    @delims.setter
    def delims(self, delims):
        """Set the delimiters for line splitting."""
        expr = '[' + ''.join('\\' + c for c in delims) + ']'
        self._delim_re = re.compile(expr)
        self._delims = delims
        self._delim_expr = expr

    def split_line(self, line, cursor_pos=None):
        """Split a line of text with a cursor at the given position."""
        l = line if cursor_pos is None else line[:cursor_pos]
        return self._delim_re.split(l)[-1]


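# The readline-style token extraction above can be sketched in a few
# self-contained lines: escape each delimiter into a character class, split
# the text left of the cursor, and keep the last fragment as the word under
# completion.

```python
import re

# Same non-Windows delimiter set as the module-level DELIMS constant.
DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
delim_re = re.compile('[' + ''.join('\\' + c for c in DELIMS) + ']')

def split_line(line, cursor_pos=None):
    # Only the text up to the cursor matters for finding the current word.
    left = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(left)[-1]

print(split_line("a = np.arra"))  # np.arra -- '.' is not a delimiter
print(split_line("x + y", 3))     # '' -- cursor sits right after a delimiter
```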
class Completer(Configurable):

    greedy = Bool(
        False,
        help="""Activate greedy completion.

        .. deprecated:: 8.8
            Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.

        When enabled in IPython 8.8 or newer, changes configuration as follows:

        - ``Completer.evaluation = 'unsafe'``
        - ``Completer.auto_close_dict_keys = True``
        """,
    ).tag(config=True)

    evaluation = Enum(
        ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
        default_value="limited",
        help="""Policy for code evaluation under completion.

        Successive options enable more eager evaluation for better
        completion suggestions, including for nested dictionaries, nested lists,
        or even results of function calls.
        Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
        code on :kbd:`Tab` with potentially unwanted or dangerous side effects.

        Allowed values are:

        - ``forbidden``: no evaluation of code is permitted,
        - ``minimal``: evaluation of literals and access to built-in namespace;
          no item/attribute evaluation, no access to locals/globals,
          no evaluation of any operations or comparisons.
        - ``limited``: access to all namespaces, evaluation of hard-coded methods
          (for example: :any:`dict.keys`, :any:`object.__getattr__`,
          :any:`object.__getitem__`) on allow-listed objects (for example:
          :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
        - ``unsafe``: evaluation of all methods and function calls but not of
          syntax with side-effects like ``del x``,
        - ``dangerous``: completely arbitrary evaluation.
        """,
    ).tag(config=True)

    use_jedi = Bool(
        default_value=JEDI_INSTALLED,
        help="Experimental: Use Jedi to generate autocompletions. "
        "Defaults to True if Jedi is installed.",
    ).tag(config=True)

    jedi_compute_type_timeout = Int(
        default_value=400,
        help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
        Set to 0 to stop computing types. Non-zero values lower than 100ms may hurt
        performance by preventing Jedi from building its cache.
        """,
    ).tag(config=True)

    debug = Bool(
        default_value=False,
        help="Enable debug for the Completer. Mostly print extra "
        "information for experimental jedi integration.",
    ).tag(config=True)

    backslash_combining_completions = Bool(
        True,
        help="Enable unicode completions, e.g. \\alpha<tab> . "
        "Includes completion of latex commands, unicode names, and expanding "
        "unicode characters back to latex commands.",
    ).tag(config=True)

    auto_close_dict_keys = Bool(
        False,
        help="""
        Enable auto-closing dictionary keys.

        When enabled, string keys will be suffixed with a final quote
        (matching the opening quote), tuple keys will also receive a
        separating comma if needed, and keys which are final will
        receive a closing bracket (``]``).
        """,
    ).tag(config=True)

    def __init__(self, namespace=None, global_namespace=None, **kwargs):
        """Create a new completer for the command line.

        Completer(namespace=ns, global_namespace=ns2) -> completer instance.

        If unspecified, the default namespace where completions are performed
        is __main__ (technically, __main__.__dict__). Namespaces should be
        given as dictionaries.

        An optional second namespace can be given. This allows the completer
        to handle cases where both the local and global scopes need to be
        distinguished.
        """

        # Don't bind to namespace quite yet, but flag whether the user wants a
        # specific namespace or to use __main__.__dict__. This will allow us
        # to bind to __main__.__dict__ at completion time, not now.
        if namespace is None:
            self.use_main_ns = True
        else:
            self.use_main_ns = False
            self.namespace = namespace

        # The global namespace, if given, can be bound directly
        if global_namespace is None:
            self.global_namespace = {}
        else:
            self.global_namespace = global_namespace

        self.custom_matchers = []

        super(Completer, self).__init__(**kwargs)

    def complete(self, text, state):
        """Return the next possible completion for 'text'.

        This is called successively with state == 0, 1, 2, ... until it
        returns None. The completion should begin with 'text'.
        """
        if self.use_main_ns:
            self.namespace = __main__.__dict__

        if state == 0:
            if "." in text:
                self.matches = self.attr_matches(text)
            else:
                self.matches = self.global_matches(text)
        try:
            return self.matches[state]
        except IndexError:
            return None

    def global_matches(self, text):
        """Compute matches when text is a simple name.

        Return a list of all keywords, built-in functions and names currently
        defined in self.namespace or self.global_namespace that match.
        """
        matches = []
        match_append = matches.append
        n = len(text)
        for lst in [
            keyword.kwlist,
            builtin_mod.__dict__.keys(),
            list(self.namespace.keys()),
            list(self.global_namespace.keys()),
        ]:
            for word in lst:
                if word[:n] == text and word != "__builtins__":
                    match_append(word)

        snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
        for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
            shortened = {
                "_".join([sub[0] for sub in word.split("_")]): word
                for word in lst
                if snake_case_re.match(word)
            }
            for word in shortened.keys():
                if word[:n] == text and word != "__builtins__":
                    match_append(shortened[word])
        return matches
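The second loop in ``global_matches`` implements snake_case abbreviation matching: each multi-word name is reduced to its underscore-joined initials (``data_frame_merge`` becomes ``d_f_m``), and a prefix of that abbreviation completes to the full name. A minimal standalone sketch of the same idea (the function name and sample names here are illustrative, not part of this module):

```python
import re

# Names with at least one underscore-separated part qualify for abbreviation.
snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

def abbreviation_matches(text, names):
    """Return full names whose underscore-joined initials start with `text`."""
    shortened = {
        "_".join(part[0] for part in name.split("_")): name
        for name in names
        if snake_case_re.match(name)
    }
    n = len(text)
    return [full for abbr, full in shortened.items() if abbr[:n] == text]

# "data_frame_merge" abbreviates to "d_f_m"; "dump" has no underscore.
print(abbreviation_matches("d_f", ["data_frame_merge", "dump", "numpy_array"]))
# -> ['data_frame_merge']
```

Note that when two names share the same initials, the dict comprehension keeps only the last one, so the abbreviation is a convenience rather than an exhaustive index.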

    def attr_matches(self, text):
        """Compute matches when text contains a dot.

        Assuming the text is of the form NAME.NAME....[NAME], and is
        evaluatable in self.namespace or self.global_namespace, it will be
        evaluated and its attributes (as revealed by dir()) are used as
        possible completions. (For class instances, class members are
        also considered.)

        WARNING: this can still invoke arbitrary C code, if an object
        with a __getattr__ hook is evaluated.
        """
        return self._attr_matches(text)[0]

    def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
        m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
        if not m2:
            return [], ""
        expr, attr = m2.group(1, 2)

        obj = self._evaluate_expr(expr)

        if obj is not_found:
            return [], ""

        if self.limit_to__all__ and hasattr(obj, '__all__'):
            words = get__all__entries(obj)
        else:
            words = dir2(obj)

        try:
            words = generics.complete_object(obj, words)
        except TryNext:
            pass
        except AssertionError:
            raise
        except Exception:
            # Silence errors from completion function
            pass
        # Build match list to return
        n = len(attr)

        # Note: ideally we would just return words here and the prefix
        # reconciliator would know that we intend to append to rather than
        # replace the input text; this requires refactoring to return range
        # which ought to be replaced (as does jedi).
        if include_prefix:
            tokens = _parse_tokens(expr)
            rev_tokens = reversed(tokens)
            skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
            name_turn = True

            parts = []
            for token in rev_tokens:
                if token.type in skip_over:
                    continue
                if token.type == tokenize.NAME and name_turn:
                    parts.append(token.string)
                    name_turn = False
                elif (
                    token.type == tokenize.OP and token.string == "." and not name_turn
                ):
                    parts.append(token.string)
                    name_turn = True
                else:
                    # short-circuit if not empty nor name token
                    break

            prefix_after_space = "".join(reversed(parts))
        else:
            prefix_after_space = ""

        return (
            ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
            "." + attr,
        )

    def _evaluate_expr(self, expr):
        obj = not_found
        done = False
        while not done and expr:
            try:
                obj = guarded_eval(
                    expr,
                    EvaluationContext(
                        globals=self.global_namespace,
                        locals=self.namespace,
                        evaluation=self.evaluation,
                    ),
                )
                done = True
            except Exception as e:
                if self.debug:
                    print("Evaluation exception", e)
                # trim the expression to remove any invalid prefix,
                # e.g. user starts `(d[`, so we get `expr = '(d'`,
                # where the parenthesis is not closed.
                # TODO: make this faster by reusing parts of the computation?
                expr = expr[1:]
        return obj


def get__all__entries(obj):
    """returns the strings in the __all__ attribute"""
    try:
        words = getattr(obj, '__all__')
    except Exception:
        return []

    return [w for w in words if isinstance(w, str)]


class _DictKeyState(enum.Flag):
    """Represent state of the key match in context of other possible matches.

    - given `d1 = {'a': 1}`: completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
    - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
    - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
    - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | END_OF_TUPLE}`
    """

    BASELINE = 0
    END_OF_ITEM = enum.auto()
    END_OF_TUPLE = enum.auto()
    IN_TUPLE = enum.auto()


def _parse_tokens(c):
    """Parse tokens even if there is an error."""
    tokens = []
    token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(token_generator))
        except tokenize.TokenError:
            return tokens
        except StopIteration:
            return tokens
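``_parse_tokens`` wraps ``tokenize.generate_tokens`` so that incomplete user input (an unclosed bracket, say) still yields the tokens read so far instead of raising. A standalone demonstration (the function name here is illustrative):

```python
import tokenize

def parse_tokens(code):
    """Collect tokens, swallowing the TokenError raised by incomplete input."""
    tokens = []
    gen = tokenize.generate_tokens(iter(code.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(gen))
        except (tokenize.TokenError, StopIteration):
            return tokens

# 'd[' is incomplete: tokenize eventually raises TokenError, but only after
# emitting the NAME and OP tokens, which we keep.
toks = parse_tokens("d[")
print([(tokenize.tok_name[t.type], t.string) for t in toks])
```

This tolerance matters for a completer, since the text under the cursor is almost always a syntactically unfinished expression.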


def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
    """Match any valid Python numeric literal in a prefix of dictionary keys.

    References:
    - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
    - https://docs.python.org/3/library/tokenize.html
    """
    if prefix[-1].isspace():
        # if user typed a space we do not have anything to complete
        # even if there was a valid number token before
        return None
    tokens = _parse_tokens(prefix)
    rev_tokens = reversed(tokens)
    skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
    number = None
    for token in rev_tokens:
        if token.type in skip_over:
            continue
        if number is None:
            if token.type == tokenize.NUMBER:
                number = token.string
                continue
            else:
                # we did not match a number
                return None
        if token.type == tokenize.OP:
            if token.string == ",":
                break
            if token.string in {"+", "-"}:
                number = token.string + number
            else:
                return None
    return number
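The matcher walks the tokens backwards from the end of the prefix, collecting a trailing NUMBER token plus any unary sign, so that e.g. ``d[-0x1`` is recognised as the numeric prefix ``-0x1``. A compact standalone version of that backward scan (the helper name is illustrative, and the early-exit conditions are simplified relative to the function above):

```python
import tokenize

def trailing_number(prefix):
    """Return the trailing numeric literal in `prefix` (with sign), or None."""
    if prefix[-1].isspace():
        return None
    tokens = []
    gen = tokenize.generate_tokens(iter(prefix.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(gen))
        except (tokenize.TokenError, StopIteration):
            break
    number = None
    for tok in reversed(tokens):
        if tok.type in {tokenize.ENDMARKER, tokenize.NEWLINE}:
            continue
        if number is None:
            if tok.type != tokenize.NUMBER:
                return None  # nothing numeric at the end of the prefix
            number = tok.string
            continue
        if tok.type == tokenize.OP and tok.string in {"+", "-"}:
            number = tok.string + number  # absorb a unary sign
        else:
            break
    return number

print(trailing_number("d[-0x1"))
```

Leaning on ``tokenize`` rather than a hand-rolled regex means every literal form Python itself accepts (hex, octal, binary, floats, underscored digits) is matched for free.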


_INT_FORMATS = {
    "0b": bin,
    "0o": oct,
    "0x": hex,
}


def match_dict_keys(
    keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
    prefix: str,
    delims: str,
    extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
) -> Tuple[str, int, Dict[str, _DictKeyState]]:
    """Used by dict_key_matches, matching the prefix to a list of keys.

    Parameters
    ----------
    keys
        list of keys in dictionary currently being completed.
    prefix
        Part of the text already typed by the user. E.g. `mydict[b'fo`
    delims
        String of delimiters to consider when finding the current key.
    extra_prefix : optional
        Part of the text already typed in multi-key index cases. E.g. for
        `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.

    Returns
    -------
    A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
    ``quote`` being the quote that needs to be used to close the current string,
    ``token_start`` the position where the replacement should start occurring,
    ``matched`` a dictionary mapping replacement/completion keys to values
    indicating the state of each match.
    """
    prefix_tuple = extra_prefix if extra_prefix else ()

    prefix_tuple_size = sum(
        [
            # for pandas, do not count slices as taking space
            not isinstance(k, slice)
            for k in prefix_tuple
        ]
    )
    text_serializable_types = (str, bytes, int, float, slice)

    def filter_prefix_tuple(key):
        # Reject too short keys
        if len(key) <= prefix_tuple_size:
            return False
        # Reject keys which cannot be serialised to text
        for k in key:
            if not isinstance(k, text_serializable_types):
                return False
        # Reject keys that do not match the prefix
        for k, pt in zip(key, prefix_tuple):
            if k != pt and not isinstance(pt, slice):
                return False
        # All checks passed!
        return True

    filtered_key_is_final: Dict[Union[str, bytes, int, float], _DictKeyState] = (
        defaultdict(lambda: _DictKeyState.BASELINE)
    )

    for k in keys:
        # If at least one of the matches is not final, mark as undetermined.
        # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
        # `111` appears final on first match but is not final on the second.

        if isinstance(k, tuple):
            if filter_prefix_tuple(k):
                key_fragment = k[prefix_tuple_size]
                filtered_key_is_final[key_fragment] |= (
                    _DictKeyState.END_OF_TUPLE
                    if len(k) == prefix_tuple_size + 1
                    else _DictKeyState.IN_TUPLE
                )
        elif prefix_tuple_size > 0:
            # we are completing a tuple but this key is not a tuple,
            # so we should ignore it
            pass
        else:
            if isinstance(k, text_serializable_types):
                filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM

    filtered_keys = filtered_key_is_final.keys()

    if not prefix:
        return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}

    quote_match = re.search("(?:\"|')", prefix)
    is_user_prefix_numeric = False

    if quote_match:
        quote = quote_match.group()
        valid_prefix = prefix + quote
        try:
            prefix_str = literal_eval(valid_prefix)
        except Exception:
            return "", 0, {}
    else:
        # If it does not look like a string, let's assume
        # we are dealing with a number or variable.
        number_match = _match_number_in_dict_key_prefix(prefix)

        # We do not want the key matcher to suggest variable names so we yield:
        if number_match is None:
            # The alternative would be to assume that the user forgot the quote
            # and if the substring matches, suggest adding it at the start.
            return "", 0, {}

        prefix_str = number_match
        is_user_prefix_numeric = True
        quote = ""

    pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
    token_match = re.search(pattern, prefix, re.UNICODE)
    assert token_match is not None  # silence mypy
    token_start = token_match.start()
    token_prefix = token_match.group()

    matched: Dict[str, _DictKeyState] = {}

    str_key: Union[str, bytes]

    for key in filtered_keys:
        if isinstance(key, (int, float)):
            # This key is a number but the user typed a string: skip it.
            if not is_user_prefix_numeric:
                continue
            str_key = str(key)
            if isinstance(key, int):
                int_base = prefix_str[:2].lower()
                # if user typed integer using binary/oct/hex notation:
                if int_base in _INT_FORMATS:
                    int_format = _INT_FORMATS[int_base]
                    str_key = int_format(key)
        else:
            # This key is a string but the user typed a number: skip it.
            if is_user_prefix_numeric:
                continue
            str_key = key
        try:
            if not str_key.startswith(prefix_str):
                continue
        except (AttributeError, TypeError, UnicodeError):
            # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
            continue

        # reformat remainder of key to begin with prefix
        rem = str_key[len(prefix_str):]
        # force repr wrapped in '
        rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
        rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
        if quote == '"':
            # The entered prefix is quoted with ",
            # but the match is quoted with '.
            # A contained " hence needs escaping for comparison:
            rem_repr = rem_repr.replace('"', '\\"')

        # then reinsert prefix from start of token
        match = "%s%s" % (token_prefix, rem_repr)

        matched[match] = filtered_key_is_final[key]
    return quote, token_start, matched
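The repr-trimming step above normalises the completion text: appending a ``"`` before taking ``repr`` guarantees the result is single-quoted (Python's ``repr`` only switches to double quotes when the string contains a ``'`` and no ``"``), after which the surrounding quotes and the sentinel are sliced off. The trick in isolation (the helper name is hypothetical):

```python
def key_remainder_repr(rem):
    """Escape `rem` the way match_dict_keys does, without surrounding quotes."""
    # Appending '"' forces repr() to use single quotes, so the slice
    # below can assume a fixed quoting style.
    rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
    # drop everything up to the opening ' (including any b prefix),
    # plus the trailing sentinel '"' and closing quote
    return rem_repr[1 + rem_repr.index("'"):-2]

print(key_remainder_repr("o'clock"))  # embedded quote comes out escaped
print(key_remainder_repr(b"bytes"))   # bytes keys lose their b'' wrapper
```

The payoff is that string and bytes keys both reduce to a plain escaped body that can be concatenated onto whatever quote the user already opened.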


def cursor_to_position(text: str, line: int, column: int) -> int:
    """
    Convert the (line, column) position of the cursor in text to an offset in
    a string.

    Parameters
    ----------
    text : str
        The text in which to calculate the cursor offset
    line : int
        Line of the cursor; 0-indexed
    column : int
        Column of the cursor; 0-indexed

    Returns
    -------
    Position of the cursor in ``text``, 0-indexed.

    See Also
    --------
    position_to_cursor : reciprocal of this function

    """
    lines = text.split('\n')
    assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))

    return sum(len(l) + 1 for l in lines[:line]) + column
1488
1487
def position_to_cursor(text: str, offset: int) -> Tuple[int, int]:
    """
    Convert the position of the cursor in text (0-indexed) to a line
    number (0-indexed) and a column number (0-indexed) pair.

    Position should be a valid position in ``text``.

    Parameters
    ----------
    text : str
        The text in which to calculate the cursor offset
    offset : int
        Position of the cursor in ``text``, 0-indexed.

    Returns
    -------
    (line, column) : (int, int)
        Line of the cursor; 0-indexed, column of the cursor; 0-indexed

    See Also
    --------
    cursor_to_position : reciprocal of this function
    """

    assert 0 <= offset <= len(text), "0 <= %s <= %s" % (offset, len(text))

    before = text[:offset]
    blines = before.split('\n')  # note: splitlines() would trim a trailing \n, hence split('\n')
    line = before.count('\n')
    col = len(blines[-1])
    return line, col


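# The two helpers above are inverses of each other. A minimal standalone
# sketch (reimplementing the same arithmetic outside IPython) shows the
# round trip:

```python
def cursor_to_position(text: str, line: int, column: int) -> int:
    # Offset = lengths of all previous lines (each plus its '\n') + column.
    lines = text.split('\n')
    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text: str, offset: int):
    # Count newlines before the offset; column is the tail of the last line.
    before = text[:offset]
    return before.count('\n'), len(before.split('\n')[-1])

text = "ab\ncd\nef"
offset = cursor_to_position(text, 1, 1)   # points at 'd'
print(offset, text[offset])               # 4 d
print(position_to_cursor(text, offset))   # (1, 1)
```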
def _safe_isinstance(obj, module, class_name, *attrs):
    """Check if obj is an instance of module.class_name, if module is loaded."""
    if module in sys.modules:
        m = sys.modules[module]
        for attr in [class_name, *attrs]:
            m = getattr(m, attr)
        return isinstance(obj, m)


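# ``_safe_isinstance`` deliberately avoids importing a module just to perform
# a type check. A standalone sketch of the same idea:

```python
import sys

def safe_isinstance(obj, module, class_name, *attrs):
    # Only resolve the class if the module is already imported; otherwise
    # return None (falsy) without triggering an import.
    if module in sys.modules:
        m = sys.modules[module]
        for attr in [class_name, *attrs]:
            m = getattr(m, attr)
        return isinstance(obj, m)

print(safe_isinstance(3, "builtins", "int"))              # True
print(safe_isinstance(3, "not_imported_module_xyz", "X")) # None
```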
@context_matcher()
def back_unicode_name_matcher(context: CompletionContext):
    """Match Unicode characters back to their Unicode name.

    Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
    """
    fragment, matches = back_unicode_name_matches(context.text_until_cursor)
    return _convert_matcher_v1_result_to_v2(
        matches, type="unicode", fragment=fragment, suppress_if_matches=True
    )


def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
    """Match Unicode characters back to their Unicode name.

    This does ``β˜ƒ`` -> ``\\snowman``

    Note that snowman is not a valid python3 combining character, but it will
    still be expanded; it will, however, not be recombined back into the
    snowman character by the completion machinery.

    Nor will this back-complete standard escape sequences such as \\n, \\b ...

    .. deprecated:: 8.6
        You can use :meth:`back_unicode_name_matcher` instead.

    Returns
    -------
    A tuple with two elements:

    - The Unicode character that was matched (preceded by a backslash), or an
      empty string,
    - a sequence (of length 1), the name of the matched Unicode character,
      preceded by a backslash, or empty if no match.
    """
    if len(text) < 2:
        return '', ()
    maybe_slash = text[-2]
    if maybe_slash != '\\':
        return '', ()

    char = text[-1]
    # no expansion on quotes, for completion in strings,
    # nor back-completion of standard ascii keys
    if char in string.ascii_letters or char in ('"', "'"):
        return '', ()
    try:
        unic = unicodedata.name(char)
        return '\\' + char, ('\\' + unic,)
    except ValueError:
        # unicodedata.name raises ValueError (not KeyError) for unnamed characters
        pass
    return '', ()


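# The matcher relies on ``unicodedata`` to map a character to its canonical
# name, and the forward completer uses the reverse lookup. For example:

```python
import unicodedata

# Character -> name, the back-completion direction:
print(unicodedata.name("\u2603"))      # SNOWMAN
# Name -> character, the forward-completion direction:
print(unicodedata.lookup("SNOWMAN"))   # β˜ƒ
# Characters without a name raise ValueError, not KeyError:
try:
    unicodedata.name("\x00")
except ValueError:
    print("no name")
```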
@context_matcher()
def back_latex_name_matcher(context: CompletionContext):
    """Match latex characters back to their unicode name.

    Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
    """
    fragment, matches = back_latex_name_matches(context.text_until_cursor)
    return _convert_matcher_v1_result_to_v2(
        matches, type="latex", fragment=fragment, suppress_if_matches=True
    )


def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
    """Match latex characters back to their unicode name.

    This does ``\\β„΅`` -> ``\\aleph``

    .. deprecated:: 8.6
        You can use :meth:`back_latex_name_matcher` instead.
    """
    if len(text) < 2:
        return '', ()
    maybe_slash = text[-2]
    if maybe_slash != '\\':
        return '', ()

    char = text[-1]
    # no expansion on quotes, for completion in strings,
    # nor back-completion of standard ascii keys
    if char in string.ascii_letters or char in ('"', "'"):
        return '', ()
    try:
        latex = reverse_latex_symbol[char]
        # '\\' + char replaces the backslash as well
        return '\\' + char, [latex]
    except KeyError:
        pass
    return '', ()


def _formatparamchildren(parameter) -> str:
    """
    Get parameter name and value from Jedi Private API

    Jedi does not expose a simple way to get `param=value` from its API.

    Parameters
    ----------
    parameter
        Jedi's function `Param`

    Returns
    -------
    A string like 'a', 'b=1', '*args', '**kwargs'
    """
    description = parameter.description
    if not description.startswith('param '):
        raise ValueError(
            'Jedi function parameter description has changed format. '
            'Expected "param ...", found %r.' % description
        )
    return description[6:]

def _make_signature(completion) -> str:
    """
    Make the signature from a jedi completion

    Parameters
    ----------
    completion : jedi.Completion
        object does not complete a function type

    Returns
    -------
    a string consisting of the function signature, with the parentheses but
    without the function name. Example:
    `(a, *args, b=1, **kwargs)`
    """

    # it looks like this might work on jedi 0.17
    if hasattr(completion, 'get_signatures'):
        signatures = completion.get_signatures()
        if not signatures:
            return '(?)'

        c0 = signatures[0]
        return '(' + c0.to_string().split('(', maxsplit=1)[1]

    return '(%s)' % ', '.join(
        f
        for f in (
            _formatparamchildren(p)
            for signature in completion.get_signatures()
            for p in signature.defined_names()
        )
        if f
    )


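# ``_make_signature`` keeps everything from the first ``(`` of Jedi's rendered
# signature. The string manipulation can be checked in isolation (hypothetical
# signature text below, not real Jedi output):

```python
def strip_name(sig_text: str) -> str:
    # Drop the function name, keep the parenthesized parameter list.
    return "(" + sig_text.split("(", maxsplit=1)[1]

print(strip_name("foo(a, *args, b=1, **kwargs)"))  # (a, *args, b=1, **kwargs)
```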
_CompleteResult = Dict[str, MatcherResult]


DICT_MATCHER_REGEX = re.compile(
    r"""(?x)
(  # match dict-referring - or any get item object - expression
    .+
)
\[   # open bracket
\s*  # and optional whitespace
# Capture any number of serializable objects (e.g. "a", "b", 'c')
# and slices
((?:(?:
    (?: # closed string
        [uUbB]?  # string prefix (r not handled)
        (?:
            '(?:[^']|(?<!\\)\\')*'
        |
            "(?:[^"]|(?<!\\)\\")*"
        )
    )
|
    # capture integers and slices
    (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
|
    # integer in bin/hex/oct notation
    0[bBxXoO]_?(?:\w|\d)+
)
\s*,\s*
)*)
((?:
    (?: # unclosed string
        [uUbB]?  # string prefix (r not handled)
        (?:
            '(?:[^']|(?<!\\)\\')*
        |
            "(?:[^"]|(?<!\\)\\")*
        )
    )
|
    # unfinished integer
    (?:[-+]?\d+)
|
    # integer in bin/hex/oct notation
    0[bBxXoO]_?(?:\w|\d)+
)
)?
$
"""
)


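# The grouping logic of ``DICT_MATCHER_REGEX`` (expression, closed keys,
# trailing unclosed fragment) can be illustrated with a heavily simplified
# stand-in that only handles single-quoted string keys (NOT the real pattern):

```python
import re

# Group 1: the subscripted expression; group 2: zero or more closed
# 'key', pairs; group 3: an optional unclosed trailing fragment.
simple = re.compile(r"(.+)\[\s*((?:'[^']*'\s*,\s*)*)('[^']*)?$")

m = simple.match("data['alpha', 'be")
print(m.group(1))  # data
print(m.group(2))  # 'alpha', (closed keys, separator included)
print(m.group(3))  # 'be
```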
def _convert_matcher_v1_result_to_v2(
    matches: Sequence[str],
    type: str,
    fragment: Optional[str] = None,
    suppress_if_matches: bool = False,
) -> SimpleMatcherResult:
    """Utility to help with transition"""
    result = {
        "completions": [SimpleCompletion(text=match, type=type) for match in matches],
        "suppress": bool(matches) if suppress_if_matches else False,
    }
    if fragment is not None:
        result["matched_fragment"] = fragment
    return cast(SimpleMatcherResult, result)


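# The v1 -> v2 conversion just wraps plain string matches in richer records.
# A sketch with plain dicts standing in for ``SimpleCompletion`` (hypothetical;
# the real class carries more metadata):

```python
from typing import Optional, Sequence

def convert_v1_to_v2(matches: Sequence[str], type: str,
                     fragment: Optional[str] = None,
                     suppress_if_matches: bool = False) -> dict:
    result = {
        "completions": [{"text": m, "type": type} for m in matches],
        # Only suppress other matchers when asked to AND something matched.
        "suppress": bool(matches) if suppress_if_matches else False,
    }
    if fragment is not None:
        result["matched_fragment"] = fragment
    return result

out = convert_v1_to_v2(["\\alpha"], type="latex", fragment="\\a",
                       suppress_if_matches=True)
print(out["suppress"])          # True
print(out["matched_fragment"])  # \a
```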
class IPCompleter(Completer):
    """Extension of the completer class with IPython-specific features"""

    @observe('greedy')
    def _greedy_changed(self, change):
        """update the splitter and readline delims when greedy is changed"""
        if change["new"]:
            self.evaluation = "unsafe"
            self.auto_close_dict_keys = True
            self.splitter.delims = GREEDY_DELIMS
        else:
            self.evaluation = "limited"
            self.auto_close_dict_keys = False
            self.splitter.delims = DELIMS

    dict_keys_only = Bool(
        False,
        help="""
        Whether to show dict key matches only.

        (disables all matchers except for `IPCompleter.dict_key_matcher`).
        """,
    )

    suppress_competing_matchers = UnionTrait(
        [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
        default_value=None,
        help="""
        Whether to suppress completions from other *Matchers*.

        When set to ``None`` (default) the matchers will attempt to auto-detect
        whether suppression of other matchers is desirable. For example, at
        the beginning of a line followed by `%` we expect a magic completion
        to be the only applicable option, and after ``my_dict['`` we usually
        expect a completion with an existing dictionary key.

        If you want to disable this heuristic and see completions from all matchers,
        set ``IPCompleter.suppress_competing_matchers = False``.
        To disable the heuristic for specific matchers, provide a dictionary mapping:
        ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.

        Set ``IPCompleter.suppress_competing_matchers = True`` to limit
        completions to the set of matchers with the highest priority;
        this is equivalent to ``IPCompleter.merge_completions`` and
        can be beneficial for performance, but will sometimes omit relevant
        candidates from matchers further down the priority list.
        """,
    ).tag(config=True)

    merge_completions = Bool(
        True,
        help="""Whether to merge completion results into a single list.

        If False, only the completion results from the first non-empty
        completer will be returned.

        As of version 8.6.0, setting the value to ``False`` is an alias for
        ``IPCompleter.suppress_competing_matchers = True``.
        """,
    ).tag(config=True)

    disable_matchers = ListTrait(
        Unicode(),
        help="""List of matchers to disable.

        The list should contain matcher identifiers (see :any:`completion_matcher`).
        """,
    ).tag(config=True)

    omit__names = Enum(
        (0, 1, 2),
        default_value=2,
        help="""Instruct the completer to omit private method names.

        Specifically, when completing on ``object.<tab>``.

        When 2 [default]: all names that start with '_' will be excluded.

        When 1: all 'magic' names (``__foo__``) will be excluded.

        When 0: nothing will be excluded.
        """,
    ).tag(config=True)
    limit_to__all__ = Bool(
        False,
        help="""
        DEPRECATED as of version 5.0.

        Instruct the completer to use __all__ for the completion.

        Specifically, when completing on ``object.<tab>``.

        When True: only those names in obj.__all__ will be included.

        When False [default]: the __all__ attribute is ignored.
        """,
    ).tag(config=True)

    profile_completions = Bool(
        default_value=False,
        help="If True, emit profiling data for completion subsystem using cProfile."
    ).tag(config=True)

    profiler_output_dir = Unicode(
        default_value=".completion_profiles",
        help="Template for path at which to output profile data for completions."
    ).tag(config=True)

    @observe('limit_to__all__')
    def _limit_to_all_changed(self, change):
        warnings.warn(
            '`IPython.core.IPCompleter.limit_to__all__` configuration '
            'value has been deprecated since IPython 5.0, will be made to have '
            'no effect, and will then be removed in a future version of IPython.',
            UserWarning,
        )

    def __init__(
        self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
    ):
        """IPCompleter() -> completer

        Return a completer object.

        Parameters
        ----------
        shell
            a pointer to the ipython shell itself. This is needed
            because this completer knows about magic functions, and those can
            only be accessed via the ipython instance.
        namespace : dict, optional
            an optional dict where completions are performed.
        global_namespace : dict, optional
            secondary optional dict for completions, to
            handle cases (such as IPython embedded inside functions) where
            both Python scopes are visible.
        config : Config
            traitlets config object
        **kwargs
            passed to super class unmodified.
        """

        self.magic_escape = ESC_MAGIC
        self.splitter = CompletionSplitter()

        # _greedy_changed() depends on splitter and readline being defined:
        super().__init__(
            namespace=namespace,
            global_namespace=global_namespace,
            config=config,
            **kwargs,
        )

        # List where completion matches will be stored
        self.matches = []
        self.shell = shell
        # Regexp to split filenames with spaces in them
        self.space_name_re = re.compile(r'([^\\] )')
        # Hold a local ref. to glob.glob for speed
        self.glob = glob.glob

        # Determine if we are running on 'dumb' terminals, like (X)Emacs
        # buffers, to avoid completion problems.
        term = os.environ.get('TERM', 'xterm')
        self.dumb_terminal = term in ['dumb', 'emacs']

        # Special handling of backslashes needed in win32 platforms
        if sys.platform == "win32":
            self.clean_glob = self._clean_glob_win32
        else:
            self.clean_glob = self._clean_glob

        # regexp to parse docstring for function signature
        self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
        self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
        # use this if positional argument name is also needed
        # = re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')

        self.magic_arg_matchers = [
            self.magic_config_matcher,
            self.magic_color_matcher,
        ]

        # This is set externally by InteractiveShell
        self.custom_completers = None

        # This is a list of names of unicode characters that can be completed
        # into their corresponding unicode value. The list is large, so we
        # lazily initialize it on first use. Consuming code should access this
        # attribute through the `@unicode_names` property.
        self._unicode_names = None

        self._backslash_combining_matchers = [
            self.latex_name_matcher,
            self.unicode_name_matcher,
            back_latex_name_matcher,
            back_unicode_name_matcher,
            self.fwd_unicode_matcher,
        ]

        if not self.backslash_combining_completions:
            for matcher in self._backslash_combining_matchers:
                self.disable_matchers.append(_get_matcher_id(matcher))

        if not self.merge_completions:
            self.suppress_competing_matchers = True

    @property
    def matchers(self) -> List[Matcher]:
        """All active matcher routines for completion"""
        if self.dict_keys_only:
            return [self.dict_key_matcher]

        if self.use_jedi:
            return [
                *self.custom_matchers,
                *self._backslash_combining_matchers,
                *self.magic_arg_matchers,
                self.custom_completer_matcher,
                self.magic_matcher,
                self._jedi_matcher,
                self.dict_key_matcher,
                self.file_matcher,
            ]
        else:
            return [
                *self.custom_matchers,
                *self._backslash_combining_matchers,
                *self.magic_arg_matchers,
                self.custom_completer_matcher,
                self.dict_key_matcher,
                self.magic_matcher,
                self.python_matcher,
                self.file_matcher,
                self.python_func_kw_matcher,
            ]

    def all_completions(self, text: str) -> List[str]:
        """
        Wrapper around the completion methods for the benefit of emacs.
        """
        prefix = text.rpartition('.')[0]
        with provisionalcompleter():
            return [
                '.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
                for c in self.completions(text, len(text))
            ]

    def _clean_glob(self, text: str):
        return self.glob("%s*" % text)

    def _clean_glob_win32(self, text: str):
        return [f.replace("\\", "/")
                for f in self.glob("%s*" % text)]

    @context_matcher()
    def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Same as :any:`file_matches`, but adapted to the new Matcher API."""
        matches = self.file_matches(context.token)
        # TODO: add a heuristic for suppressing (e.g. if it has an OS-specific
        # delimiter, starts with `/home/`, `C:\`, etc)
        return _convert_matcher_v1_result_to_v2(matches, type="path")

    def file_matches(self, text: str) -> List[str]:
        """Match filenames, expanding ~USER type strings.

        Most of the seemingly convoluted logic in this completer is an
        attempt to handle filenames with spaces in them. And yet it's not
        quite perfect, because Python's readline doesn't expose all of the
        GNU readline details needed for this to be done correctly.

        For a filename with a space in it, the printed completions will be
        only the parts after what's already been typed (instead of the
        full completions, as is normally done). I don't think it's possible
        to do better with the current (as of Python 2.3) Python readline.

        .. deprecated:: 8.6
            You can use :meth:`file_matcher` instead.
        """

        # chars that require escaping with backslash - i.e. chars
        # that readline treats incorrectly as delimiters, but we
        # don't want to treat as delimiters in filename matching
        # when escaped with backslash
        if text.startswith('!'):
            text = text[1:]
            text_prefix = u'!'
        else:
            text_prefix = u''

        text_until_cursor = self.text_until_cursor
        # track strings with open quotes
        open_quotes = has_open_quotes(text_until_cursor)

        if '(' in text_until_cursor or '[' in text_until_cursor:
            lsplit = text
        else:
            try:
                # arg_split ~ shlex.split, but with unicode bugs fixed by us
                lsplit = arg_split(text_until_cursor)[-1]
            except ValueError:
                # typically an unmatched ", or backslash without escaped char.
                if open_quotes:
                    lsplit = text_until_cursor.split(open_quotes)[-1]
                else:
                    return []
            except IndexError:
                # tab pressed on empty line
                lsplit = ""

        if not open_quotes and lsplit != protect_filename(lsplit):
            # if protectables are found, do matching on the whole escaped name
            has_protectables = True
            text0, text = text, lsplit
        else:
            has_protectables = False
            text = os.path.expanduser(text)

        if text == "":
            return [text_prefix + protect_filename(f) for f in self.glob("*")]

        # Compute the matches from the filesystem
        if sys.platform == 'win32':
            m0 = self.clean_glob(text)
        else:
            m0 = self.clean_glob(text.replace('\\', ''))

        if has_protectables:
            # If we had protectables, we need to revert our changes to the
            # beginning of filename so that we don't double-write the part
            # of the filename we have so far
            len_lsplit = len(lsplit)
            matches = [text_prefix + text0 +
                       protect_filename(f[len_lsplit:]) for f in m0]
        else:
            if open_quotes:
                # if we have a string with an open quote, we don't need to
                # protect the names beyond the quote (and we _shouldn't_, as
                # it would cause bugs when the filesystem call is made).
                matches = m0 if sys.platform == "win32" else \
                    [protect_filename(f, open_quotes) for f in m0]
            else:
                matches = [text_prefix +
                           protect_filename(f) for f in m0]

        # Mark directories in input list by appending '/' to their names.
        return [x + '/' if os.path.isdir(x) else x for x in matches]

    @context_matcher()
    def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match magics."""
        text = context.token
        matches = self.magic_matches(text)
        result = _convert_matcher_v1_result_to_v2(matches, type="magic")
        is_magic_prefix = len(text) > 0 and text[0] == "%"
        result["suppress"] = is_magic_prefix and bool(result["completions"])
        return result

    def magic_matches(self, text: str):
        """Match magics.

        .. deprecated:: 8.6
            You can use :meth:`magic_matcher` instead.
        """
        # Get all shell magics now rather than statically, so magics loaded at
        # runtime show up too.
        lsm = self.shell.magics_manager.lsmagic()
        line_magics = lsm['line']
        cell_magics = lsm['cell']
        pre = self.magic_escape
        pre2 = pre + pre

        explicit_magic = text.startswith(pre)

        # Completion logic:
        # - user gives %%: only do cell magics
        # - user gives %: do both line and cell magics
        # - no prefix: do both
        # In other words, line magics are skipped if the user gives %% explicitly
        #
        # We also exclude magics that match any currently visible names:
        # https://github.com/ipython/ipython/issues/4877, unless the user has
        # typed a %:
        # https://github.com/ipython/ipython/issues/10754
        bare_text = text.lstrip(pre)
        global_matches = self.global_matches(bare_text)
        if not explicit_magic:
            def matches(magic):
                """
                Filter magics, in particular remove magics that match
                a name present in global namespace.
                """
                return (magic.startswith(bare_text) and
                        magic not in global_matches)
        else:
            def matches(magic):
                return magic.startswith(bare_text)

        comp = [pre2 + m for m in cell_magics if matches(m)]
        if not text.startswith(pre2):
            comp += [pre + m for m in line_magics if matches(m)]

        return comp

    @context_matcher()
    def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match class names and attributes for %config magic."""
        # NOTE: uses `line_buffer` equivalent for compatibility
        matches = self.magic_config_matches(context.line_with_cursor)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def magic_config_matches(self, text: str) -> List[str]:
        """Match class names and attributes for %config magic.

        .. deprecated:: 8.6
            You can use :meth:`magic_config_matcher` instead.
        """
        texts = text.strip().split()

        if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
            # get all configuration classes
            classes = sorted(set([c for c in self.shell.configurables
                                  if c.__class__.class_traits(config=True)
                                  ]), key=lambda x: x.__class__.__name__)
            classnames = [c.__class__.__name__ for c in classes]

            # return all classnames if config or %config is given
            if len(texts) == 1:
                return classnames

            # match classname
            classname_texts = texts[1].split('.')
            classname = classname_texts[0]
            classname_matches = [c for c in classnames
                                 if c.startswith(classname)]

            # return matched classes or the matched class with attributes
            if texts[1].find('.') < 0:
                return classname_matches
            elif len(classname_matches) == 1 and \
                    classname_matches[0] == classname:
                cls = classes[classnames.index(classname)].__class__
                help = cls.class_get_help()
                # strip leading '--' from cl-args:
                help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
                return [attr.split('=')[0]
                        for attr in help.strip().splitlines()
                        if attr.startswith(texts[1])]
        return []

    @context_matcher()
    def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match color schemes for %colors magic."""
        # NOTE: uses `line_buffer` equivalent for compatibility
        matches = self.magic_color_matches(context.line_with_cursor)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def magic_color_matches(self, text: str) -> List[str]:
        """Match color schemes for %colors magic.

        .. deprecated:: 8.6
            You can use :meth:`magic_color_matcher` instead.
        """
        texts = text.split()
        if text.endswith(' '):
            # .split() strips off the trailing whitespace. Add '' back
            # so that: '%colors ' -> ['%colors', '']
            texts.append('')

        if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
            prefix = texts[1]
            return [color for color in InspectColors.keys()
                    if color.startswith(prefix)]
        return []

    @context_matcher(identifier="IPCompleter.jedi_matcher")
    def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
        matches = self._jedi_matches(
            cursor_column=context.cursor_position,
            cursor_line=context.cursor_line,
            text=context.full_text,
        )
        return {
            "completions": matches,
            # static analysis should not suppress other matchers
            "suppress": False,
        }

    def _jedi_matches(
        self, cursor_column: int, cursor_line: int, text: str
    ) -> Iterator[_JediCompletionLike]:
        """
        Return a list of :any:`jedi.api.Completion`\\s from a ``text`` and
        cursor position.

        Parameters
        ----------
        cursor_column : int
            column position of the cursor in ``text``, 0-indexed.
        cursor_line : int
            line position of the cursor in ``text``, 0-indexed
        text : str
            text to complete

        Notes
        -----
        If ``IPCompleter.debug`` is ``True``, this may return a
        :any:`_FakeJediCompletion` object containing a string with the Jedi
        debug information attached.

        .. deprecated:: 8.6
            You can use :meth:`_jedi_matcher` instead.
        """
        namespaces = [self.namespace]
        if self.global_namespace is not None:
            namespaces.append(self.global_namespace)

        completion_filter = lambda x: x
        offset = cursor_to_position(text, cursor_line, cursor_column)
        # filter output if we are completing for object members
        if offset:
            pre = text[offset - 1]
            if pre == '.':
                if self.omit__names == 2:
                    completion_filter = lambda c: not c.name.startswith('_')
                elif self.omit__names == 1:
                    completion_filter = lambda c: not (c.name.startswith('__') and c.name.endswith('__'))
                elif self.omit__names == 0:
                    completion_filter = lambda x: x
                else:
                    raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))

        interpreter = jedi.Interpreter(text[:offset], namespaces)
        try_jedi = True

        try:
            # find the first token in the current tree -- if it is a ' or " then we are in a string
            completing_string = False
            try:
                first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
            except StopIteration:
                pass
            else:
                # note the value may be ', ", or it may also be ''' or """, or
                # in some cases, """what/you/typed..., but all of these are
                # strings.
                completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}

            # if we are in a string, jedi is likely not the right candidate
            # for now. Skip it.
            try_jedi = not completing_string
        except Exception as e:
            # many things can go wrong; we are using a private API, just don't crash.
            if self.debug:
                print("Error detecting if completing a non-finished string:", e, '|')

        if not try_jedi:
            return iter([])
        try:
            return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
        except Exception as e:
            if self.debug:
                return iter(
                    [
                        _FakeJediCompletion(
                            'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
                            % (e)
                        )
                    ]
                )
            else:
                return iter([])

    @context_matcher()
    def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match attributes or global python names"""
        text = context.line_with_cursor
        if "." in text:
            try:
                matches, fragment = self._attr_matches(text, include_prefix=False)
                if text.endswith(".") and self.omit__names:
                    if self.omit__names == 1:
                        # true if txt is _not_ a __ name, false otherwise:
                        no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
                    else:
                        # true if txt is _not_ a _ name, false otherwise:
                        no__name = (
                            lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
                            is None
                        )
                    matches = filter(no__name, matches)
                return _convert_matcher_v1_result_to_v2(
                    matches, type="attribute", fragment=fragment
                )
            except NameError:
                # catches <undefined attributes>.<tab>
                matches = []
                return _convert_matcher_v1_result_to_v2(matches, type="attribute")
        else:
            matches = self.global_matches(context.token)
            # TODO: maybe distinguish between functions, modules and just "variables"
            return _convert_matcher_v1_result_to_v2(matches, type="variable")

    @completion_matcher(api_version=1)
    def python_matches(self, text: str) -> Iterable[str]:
        """Match attributes or global python names.

        .. deprecated:: 8.27
            You can use :meth:`python_matcher` instead."""
        if "." in text:
            try:
                matches = self.attr_matches(text)
                if text.endswith('.') and self.omit__names:
                    if self.omit__names == 1:
                        # true if txt is _not_ a __ name, false otherwise:
                        no__name = (lambda txt:
                                    re.match(r'.*\.__.*?__', txt) is None)
                    else:
                        # true if txt is _not_ a _ name, false otherwise:
                        no__name = (lambda txt:
                                    re.match(r'\._.*?', txt[txt.rindex('.'):]) is None)
                    matches = filter(no__name, matches)
            except NameError:
                # catches <undefined attributes>.<tab>
                matches = []
        else:
            matches = self.global_matches(text)
        return matches

    def _default_arguments_from_docstring(self, doc):
        """Parse the first line of docstring for call signature.

        Docstring should be of the form 'min(iterable[, key=func])\n'.
        It can also parse Cython docstrings of the form
        'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
        """
        if doc is None:
            return []

        # care only about the first line
        line = doc.lstrip().splitlines()[0]

        # p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
        # 'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
        sig = self.docstring_sig_re.search(line)
        if sig is None:
            return []
        # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
        sig = sig.groups()[0].split(',')
        ret = []
        for s in sig:
            # re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
            ret += self.docstring_kwd_re.findall(s)
        return ret

    def _default_arguments(self, obj):
        """Return the list of default arguments of obj if it is callable,
        or empty list otherwise."""
        call_obj = obj
        ret = []
        if inspect.isbuiltin(obj):
            pass
        elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
            if inspect.isclass(obj):
                # for cython embedsignature=True the constructor docstring
                # belongs to the object itself, not __init__
                ret += self._default_arguments_from_docstring(
                    getattr(obj, '__doc__', ''))
                # for classes, check for __init__, __new__
                call_obj = (getattr(obj, '__init__', None) or
                            getattr(obj, '__new__', None))
            # for all others, check if they are __call__able
            elif hasattr(obj, '__call__'):
                call_obj = obj.__call__
        ret += self._default_arguments_from_docstring(
            getattr(call_obj, '__doc__', ''))

        _keeps = (inspect.Parameter.KEYWORD_ONLY,
                  inspect.Parameter.POSITIONAL_OR_KEYWORD)

        try:
            sig = inspect.signature(obj)
            ret.extend(k for k, v in sig.parameters.items() if
                       v.kind in _keeps)
        except ValueError:
            pass

        return list(set(ret))

    @context_matcher()
    def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match named parameters (kwargs) of the last open function."""
        matches = self.python_func_kw_matches(context.token)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

2440 def python_func_kw_matches(self, text):
2439 def python_func_kw_matches(self, text):
2441 """Match named parameters (kwargs) of the last open function.
2440 """Match named parameters (kwargs) of the last open function.
2442
2441
2443 .. deprecated:: 8.6
2442 .. deprecated:: 8.6
2444 You can use :meth:`python_func_kw_matcher` instead.
2443 You can use :meth:`python_func_kw_matcher` instead.
2445 """
2444 """
2446
2445
2447 if "." in text: # a parameter cannot be dotted
2446 if "." in text: # a parameter cannot be dotted
2448 return []
2447 return []
2449 try: regexp = self.__funcParamsRegex
2448 try: regexp = self.__funcParamsRegex
2450 except AttributeError:
2449 except AttributeError:
2451 regexp = self.__funcParamsRegex = re.compile(r'''
2450 regexp = self.__funcParamsRegex = re.compile(r'''
2452 '.*?(?<!\\)' | # single quoted strings or
2451 '.*?(?<!\\)' | # single quoted strings or
2453 ".*?(?<!\\)" | # double quoted strings or
2452 ".*?(?<!\\)" | # double quoted strings or
2454 \w+ | # identifier
2453 \w+ | # identifier
2455 \S # other characters
2454 \S # other characters
2456 ''', re.VERBOSE | re.DOTALL)
2455 ''', re.VERBOSE | re.DOTALL)
2457 # 1. find the nearest identifier that comes before an unclosed
2456 # 1. find the nearest identifier that comes before an unclosed
2458 # parenthesis before the cursor
2457 # parenthesis before the cursor
2459 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2458 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2460 tokens = regexp.findall(self.text_until_cursor)
2459 tokens = regexp.findall(self.text_until_cursor)
2461 iterTokens = reversed(tokens); openPar = 0
2460 iterTokens = reversed(tokens); openPar = 0
2462
2461
2463 for token in iterTokens:
2462 for token in iterTokens:
2464 if token == ')':
2463 if token == ')':
2465 openPar -= 1
2464 openPar -= 1
2466 elif token == '(':
2465 elif token == '(':
2467 openPar += 1
2466 openPar += 1
2468 if openPar > 0:
2467 if openPar > 0:
2469 # found the last unclosed parenthesis
2468 # found the last unclosed parenthesis
2470 break
2469 break
2471 else:
2470 else:
2472 return []
2471 return []
2473 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2472 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2474 ids = []
2473 ids = []
2475 isId = re.compile(r'\w+$').match
2474 isId = re.compile(r'\w+$').match
2476
2475
2477 while True:
2476 while True:
2478 try:
2477 try:
2479 ids.append(next(iterTokens))
2478 ids.append(next(iterTokens))
2480 if not isId(ids[-1]):
2479 if not isId(ids[-1]):
2481 ids.pop(); break
2480 ids.pop(); break
2482 if not next(iterTokens) == '.':
2481 if not next(iterTokens) == '.':
2483 break
2482 break
2484 except StopIteration:
2483 except StopIteration:
2485 break
2484 break
2486
2485
2487 # Find all named arguments already assigned to, as to avoid suggesting
2486 # Find all named arguments already assigned to, as to avoid suggesting
2488 # them again
2487 # them again
2489 usedNamedArgs = set()
2488 usedNamedArgs = set()
2490 par_level = -1
2489 par_level = -1
2491 for token, next_token in zip(tokens, tokens[1:]):
2490 for token, next_token in zip(tokens, tokens[1:]):
2492 if token == '(':
2491 if token == '(':
2493 par_level += 1
2492 par_level += 1
2494 elif token == ')':
2493 elif token == ')':
2495 par_level -= 1
2494 par_level -= 1
2496
2495
2497 if par_level != 0:
2496 if par_level != 0:
2498 continue
2497 continue
2499
2498
2500 if next_token != '=':
2499 if next_token != '=':
2501 continue
2500 continue
2502
2501
2503 usedNamedArgs.add(token)
2502 usedNamedArgs.add(token)
2504
2503
2505 argMatches = []
2504 argMatches = []
2506 try:
2505 try:
2507 callableObj = '.'.join(ids[::-1])
2506 callableObj = '.'.join(ids[::-1])
2508 namedArgs = self._default_arguments(eval(callableObj,
2507 namedArgs = self._default_arguments(eval(callableObj,
2509 self.namespace))
2508 self.namespace))
2510
2509
2511 # Remove used named arguments from the list, no need to show twice
2510 # Remove used named arguments from the list, no need to show twice
2512 for namedArg in set(namedArgs) - usedNamedArgs:
2511 for namedArg in set(namedArgs) - usedNamedArgs:
2513 if namedArg.startswith(text):
2512 if namedArg.startswith(text):
2514 argMatches.append("%s=" %namedArg)
2513 argMatches.append("%s=" %namedArg)
2515 except:
2514 except:
2516 pass
2515 pass
2517
2516
2518 return argMatches
2517 return argMatches
2519
2518
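The reverse token scan in step 1 above can be demonstrated standalone. This sketch reuses the same verbose regex idea to find the callee of the last unclosed parenthesis; `function_before_cursor` is a hypothetical name for illustration:

```python
import re

# Tokenizer mirroring the completer's regex: quoted strings, identifiers,
# or any other single non-space character.
TOKEN_RE = re.compile(r'''
    '.*?(?<!\\)' |  # single quoted strings
    ".*?(?<!\\)" |  # double quoted strings
    \w+          |  # identifiers
    \S              # any other character
    ''', re.VERBOSE | re.DOTALL)

def function_before_cursor(text):
    """Return the identifier owning the last unclosed '(' in *text*."""
    tokens = TOKEN_RE.findall(text)
    depth = 0
    for i in range(len(tokens) - 1, -1, -1):
        if tokens[i] == ')':
            depth -= 1
        elif tokens[i] == '(':
            depth += 1
            if depth > 0:
                # the token just before the unclosed paren is the callee
                return tokens[i - 1] if i > 0 else None
    return None

print(function_before_cursor("foo(1+bar(x), pa"))  # foo
```

Note that `bar(x)` is balanced, so the scan correctly skips past it and attributes the open call to `foo`.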
    @staticmethod
    def _get_keys(obj: Any) -> List[Any]:
        # Objects can define their own completions by defining an
        # _ipython_key_completions_() method.
        method = get_real_method(obj, '_ipython_key_completions_')
        if method is not None:
            return method()

        # Special case some common in-memory dict-like types
        if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
            try:
                return list(obj.keys())
            except Exception:
                return []
        elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
            try:
                return list(obj.obj.keys())
            except Exception:
                return []
        elif _safe_isinstance(obj, 'numpy', 'ndarray') or \
             _safe_isinstance(obj, 'numpy', 'void'):
            return obj.dtype.names or []
        return []

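The `_ipython_key_completions_` hook checked first above is the protocol by which arbitrary objects advertise their keys to the completer. A minimal sketch of a container implementing it (`Config` is a hypothetical class for illustration):

```python
class Config:
    """A dict-like container advertising its keys to IPython's completer."""

    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        # Called by the completer after e.g. ``cfg["<tab>``.
        return list(self._data)

cfg = Config({"host": "localhost", "port": 8080})
print(cfg._ipython_key_completions_())  # ['host', 'port']
```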
    @context_matcher()
    def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match string keys in a dictionary, after e.g. ``foo[``."""
        matches = self.dict_key_matches(context.token)
        return _convert_matcher_v1_result_to_v2(
            matches, type="dict key", suppress_if_matches=True
        )

    def dict_key_matches(self, text: str) -> List[str]:
        """Match string keys in a dictionary, after e.g. ``foo[``.

        .. deprecated:: 8.6
            You can use :meth:`dict_key_matcher` instead.
        """

        # Short-circuit on closed dictionary (regular expression would
        # not match anyway, but would take quite a while).
        if self.text_until_cursor.strip().endswith("]"):
            return []

        match = DICT_MATCHER_REGEX.search(self.text_until_cursor)

        if match is None:
            return []

        expr, prior_tuple_keys, key_prefix = match.groups()

        obj = self._evaluate_expr(expr)

        if obj is not_found:
            return []

        keys = self._get_keys(obj)
        if not keys:
            return keys

        tuple_prefix = guarded_eval(
            prior_tuple_keys,
            EvaluationContext(
                globals=self.global_namespace,
                locals=self.namespace,
                evaluation=self.evaluation,  # type: ignore
                in_subscript=True,
            ),
        )

        closing_quote, token_offset, matches = match_dict_keys(
            keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
        )
        if not matches:
            return []

        # get the cursor position of
        # - the text being completed
        # - the start of the key text
        # - the start of the completion
        text_start = len(self.text_until_cursor) - len(text)
        if key_prefix:
            key_start = match.start(3)
            completion_start = key_start + token_offset
        else:
            key_start = completion_start = match.end()

        # grab the leading prefix, to make sure all completions start with `text`
        if text_start > key_start:
            leading = ''
        else:
            leading = text[text_start:completion_start]

        # append closing quote and bracket as appropriate
        # this is *not* appropriate if the opening quote or bracket is outside
        # the text given to this method, e.g. `d["""a\nt
        can_close_quote = False
        can_close_bracket = False

        continuation = self.line_buffer[len(self.text_until_cursor) :].strip()

        if continuation.startswith(closing_quote):
            # do not close if already closed, e.g. `d['a<tab>'`
            continuation = continuation[len(closing_quote) :]
        else:
            can_close_quote = True

        continuation = continuation.strip()

        # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
        # handling it is out of scope, so let's avoid appending suffixes.
        has_known_tuple_handling = isinstance(obj, dict)

        can_close_bracket = (
            not continuation.startswith("]") and self.auto_close_dict_keys
        )
        can_close_tuple_item = (
            not continuation.startswith(",")
            and has_known_tuple_handling
            and self.auto_close_dict_keys
        )
        can_close_quote = can_close_quote and self.auto_close_dict_keys

        # fast path if a closing quote should be appended but no suffix is allowed
        if not can_close_quote and not can_close_bracket and closing_quote:
            return [leading + k for k in matches]

        results = []

        end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM

        for k, state_flag in matches.items():
            result = leading + k
            if can_close_quote and closing_quote:
                result += closing_quote

            if state_flag == end_of_tuple_or_item:
                # We do not know which suffix to add,
                # e.g. both tuple item and string
                # match this item.
                pass

            if state_flag in end_of_tuple_or_item and can_close_bracket:
                result += "]"
            if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
                result += ", "
            results.append(result)
        return results

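The core idea of the method above — matching typed text against string keys while preserving the quote style and appending a closing suffix — can be sketched without the completer's machinery. This is a simplified illustration, not the actual quote/bracket handling; `complete_dict_keys` is a hypothetical helper:

```python
def complete_dict_keys(d, prefix):
    """Complete string keys of *d* against a partially typed ``d[<prefix>``.

    *prefix* may include the opening quote, e.g. ``"'ba"``; candidates are
    returned with the quote closed and the bracket appended.
    """
    if prefix and prefix[0] in "'\"":
        quote, body = prefix[0], prefix[1:]
    else:
        quote, body = "'", prefix
    return [quote + k + quote + "]" for k in d
            if isinstance(k, str) and k.startswith(body)]

d = {"bar": 1, "baz": 2, "qux": 3}
print(complete_dict_keys(d, "'ba"))  # ["'bar']", "'baz']"]
```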
    @context_matcher()
    def unicode_name_matcher(self, context: CompletionContext):
        """Same as :any:`unicode_name_matches`, but adapted to the new Matcher API."""
        fragment, matches = self.unicode_name_matches(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

    @staticmethod
    def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
        """Match Latex-like syntax for unicode characters based
        on the name of the character.

        This does ``\\GREEK SMALL LETTER ETA`` -> ``η``

        Works only on valid python 3 identifiers, or on combining characters
        that will combine to form a valid identifier.
        """
        slashpos = text.rfind('\\')
        if slashpos > -1:
            s = text[slashpos + 1:]
            try:
                unic = unicodedata.lookup(s)
                # allow combining chars
                if ('a' + unic).isidentifier():
                    return '\\' + s, [unic]
            except KeyError:
                pass
        return '', []

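The lookup-by-name mechanism above can be exercised on its own with `unicodedata.lookup`; `unicode_name_complete` is a hypothetical standalone version for illustration:

```python
import unicodedata

def unicode_name_complete(text):
    """Expand a ``\\<UNICODE NAME>`` fragment to its character."""
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", []
    name = text[slashpos + 1:]
    try:
        char = unicodedata.lookup(name)
    except KeyError:
        return "", []
    # only offer characters that can appear in an identifier
    # (prefixing 'a' also admits combining characters)
    if ("a" + char).isidentifier():
        return "\\" + name, [char]
    return "", []

print(unicode_name_complete("x = \\GREEK SMALL LETTER ETA"))
# ('\\GREEK SMALL LETTER ETA', ['η'])
```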
    @context_matcher()
    def latex_name_matcher(self, context: CompletionContext):
        """Match Latex syntax for unicode characters.

        This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
        """
        fragment, matches = self.latex_matches(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="latex", fragment=fragment, suppress_if_matches=True
        )

    def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
        """Match Latex syntax for unicode characters.

        This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``

        .. deprecated:: 8.6
            You can use :meth:`latex_name_matcher` instead.
        """
        slashpos = text.rfind('\\')
        if slashpos > -1:
            s = text[slashpos:]
            if s in latex_symbols:
                # Try to complete a full latex symbol to unicode
                # \\alpha -> α
                return s, [latex_symbols[s]]
            else:
                # If a user has partially typed a latex symbol, give them
                # a full list of options \al -> [\aleph, \alpha]
                matches = [k for k in latex_symbols if k.startswith(s)]
                if matches:
                    return s, matches
        return '', ()

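The exact-match-then-prefix-match behaviour above can be shown with a tiny stand-in table; `LATEX_SYMBOLS` and `latex_complete` are hypothetical names, and the real `latex_symbols` mapping is far larger:

```python
# A tiny stand-in for IPython's latex_symbols table.
LATEX_SYMBOLS = {
    "\\alpha": "α",
    "\\aleph": "ℵ",
    "\\beta": "β",
}

def latex_complete(text):
    """Either expand a full symbol or list its prefix matches."""
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", ()
    s = text[slashpos:]
    if s in LATEX_SYMBOLS:
        # full symbol: offer the unicode character itself
        return s, [LATEX_SYMBOLS[s]]
    # partial symbol: offer all completions of the latex name
    matches = [k for k in LATEX_SYMBOLS if k.startswith(s)]
    return (s, matches) if matches else ("", ())

print(latex_complete("\\alpha"))  # ('\\alpha', ['α'])
print(latex_complete("\\al"))     # ('\\al', ['\\alpha', '\\aleph'])
```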
    @context_matcher()
    def custom_completer_matcher(self, context):
        """Dispatch custom completer.

        If a match is found, suppresses all other matchers except for Jedi.
        """
        matches = self.dispatch_custom_completer(context.token) or []
        result = _convert_matcher_v1_result_to_v2(
            matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
        )
        result["ordered"] = True
        result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
        return result

    def dispatch_custom_completer(self, text):
        """
        .. deprecated:: 8.6
            You can use :meth:`custom_completer_matcher` instead.
        """
        if not self.custom_completers:
            return

        line = self.line_buffer
        if not line.strip():
            return None

        # Create a little structure to pass all the relevant information about
        # the current completion to any custom completer.
        event = SimpleNamespace()
        event.line = line
        event.symbol = text
        cmd = line.split(None, 1)[0]
        event.command = cmd
        event.text_until_cursor = self.text_until_cursor

        # for foo etc, try also to find completer for %foo
        if not cmd.startswith(self.magic_escape):
            try_magic = self.custom_completers.s_matches(
                self.magic_escape + cmd)
        else:
            try_magic = []

        for c in itertools.chain(self.custom_completers.s_matches(cmd),
                                 try_magic,
                                 self.custom_completers.flat_matches(self.text_until_cursor)):
            try:
                res = c(event)
                if res:
                    # first, try case sensitive match
                    withcase = [r for r in res if r.startswith(text)]
                    if withcase:
                        return withcase
                    # if none, then case insensitive ones are ok too
                    text_low = text.lower()
                    return [r for r in res if r.lower().startswith(text_low)]
            except TryNext:
                pass
            except KeyboardInterrupt:
                # If a custom completer takes too long, let the keyboard
                # interrupt abort it and return nothing.
                break

        return None

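The case-sensitive-then-insensitive filtering used when a custom completer returns results can be isolated into a small sketch; `filter_matches` is a hypothetical helper name:

```python
def filter_matches(candidates, text):
    """Prefer case-sensitive prefix matches, fall back to case-insensitive."""
    withcase = [c for c in candidates if c.startswith(text)]
    if withcase:
        return withcase
    # no exact-case match: accept case-insensitive prefix matches instead
    text_low = text.lower()
    return [c for c in candidates if c.lower().startswith(text_low)]

print(filter_matches(["Reset", "reset_index", "READ"], "re"))
# ['reset_index']
```

The two-pass design means a user who typed the exact case is never flooded with near-miss candidates, while a wrong-case prefix still completes to something useful.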
    def completions(self, text: str, offset: int) -> Iterator[Completion]:
        """
        Returns an iterator over the possible completions.

        .. warning::

            Unstable

            This function is unstable, the API may change without warning.
            It will also raise unless used in a proper context manager.

        Parameters
        ----------
        text : str
            Full text of the current input, multi line string.
        offset : int
            Integer representing the position of the cursor in ``text``. Offset
            is 0-based indexed.

        Yields
        ------
        Completion

        Notes
        -----
        The cursor on a text can either be seen as being "in between"
        characters or "on" a character depending on the interface visible to
        the user. For consistency the cursor being "in between" characters X
        and Y is equivalent to the cursor being "on" character Y, that is to
        say the character the cursor is on is considered as being after the
        cursor.

        Combining characters may span more than one position in the
        text.

        .. note::

            If ``IPCompleter.debug`` is :any:`True`, this will yield a
            ``--jedi/ipython--`` fake Completion token to distinguish
            completions returned by Jedi from the usual IPython completions.

        .. note::

            Completions are not completely deduplicated yet. If identical
            completions are coming from different sources this function does not
            ensure that each completion object will only be present once.
        """
        warnings.warn("_complete is a provisional API (as of IPython 6.0). "
                      "It may change without warnings. "
                      "Use in corresponding context manager.",
                      category=ProvisionalCompleterWarning, stacklevel=2)

        seen = set()
        profiler: Optional[cProfile.Profile]
        try:
            if self.profile_completions:
                import cProfile
                profiler = cProfile.Profile()
                profiler.enable()
            else:
                profiler = None

            for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
                if c and (c in seen):
                    continue
                yield c
                seen.add(c)
        except KeyboardInterrupt:
            # if completions take too long and the user sends a keyboard
            # interrupt, do not crash and return ASAP.
            pass
        finally:
            if profiler is not None:
                profiler.disable()
                ensure_dir_exists(self.profiler_output_dir)
                output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
                print("Writing profiler output to", output_path)
                profiler.dump_stats(output_path)

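The time-budget pattern used by `_completions` below — process eagerly until a `time.monotonic()` deadline passes, then fall back to a cheap placeholder — can be sketched generically. `compute_with_budget` and the `"<unknown>"` placeholder are hypothetical stand-ins for this illustration:

```python
import time

def compute_with_budget(items, expensive, budget_s=0.3):
    """Run *expensive* on items until a time budget runs out.

    Items reached after the deadline get a placeholder result, mirroring
    how the completer marks late Jedi types as "unknown".
    """
    deadline = time.monotonic() + budget_s
    results = []
    it = iter(items)
    for item in it:
        results.append(expensive(item))
        if time.monotonic() > deadline:
            break
    # remaining items are returned without the expensive computation
    results.extend("<unknown>" for _ in it)
    return results

print(compute_with_budget([1, 2, 3], lambda x: x * x, budget_s=5.0))
# [1, 4, 9]
```

Sharing one iterator between the budgeted loop and the fallback is what makes this cheap: no bookkeeping of which items remain.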
2877 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2876 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2878 """
2877 """
2879 Core completion module.Same signature as :any:`completions`, with the
2878 Core completion module.Same signature as :any:`completions`, with the
2880 extra `timeout` parameter (in seconds).
2879 extra `timeout` parameter (in seconds).
2881
2880
2882 Computing jedi's completion ``.type`` can be quite expensive (it is a
2881 Computing jedi's completion ``.type`` can be quite expensive (it is a
2883 lazy property) and can require some warm-up, more warm up than just
2882 lazy property) and can require some warm-up, more warm up than just
2884 computing the ``name`` of a completion. The warm-up can be :
2883 computing the ``name`` of a completion. The warm-up can be :
2885
2884
2886 - Long warm-up the first time a module is encountered after
2885 - Long warm-up the first time a module is encountered after
2887 install/update: actually build parse/inference tree.
2886 install/update: actually build parse/inference tree.
2888
2887
2889 - first time the module is encountered in a session: load tree from
2888 - first time the module is encountered in a session: load tree from
2890 disk.
2889 disk.
2891
2890
2892 We don't want to block completions for tens of seconds so we give the
2891 We don't want to block completions for tens of seconds so we give the
2893 completer a "budget" of ``_timeout`` seconds per invocation to compute
2892 completer a "budget" of ``_timeout`` seconds per invocation to compute
2894 completions types, the completions that have not yet been computed will
2893 completions types, the completions that have not yet been computed will
2895 be marked as "unknown" an will have a chance to be computed next round
2894 be marked as "unknown" an will have a chance to be computed next round
2896 are things get cached.
2895 are things get cached.
2897
2896
2898 Keep in mind that Jedi is not the only thing treating the completion so
2897 Keep in mind that Jedi is not the only thing treating the completion so
2899 keep the timeout short-ish as if we take more than 0.3 second we still
2898 keep the timeout short-ish as if we take more than 0.3 second we still
2900 have lots of processing to do.
2899 have lots of processing to do.
2901
2900
2902 """
2901 """
        deadline = time.monotonic() + _timeout

        before = full_text[:offset]
        cursor_line, cursor_column = position_to_cursor(full_text, offset)

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        def is_non_jedi_result(
            result: MatcherResult, identifier: str
        ) -> TypeGuard[SimpleMatcherResult]:
            return identifier != jedi_matcher_id

        results = self._complete(
            full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
        )

        non_jedi_results: Dict[str, SimpleMatcherResult] = {
            identifier: result
            for identifier, result in results.items()
            if is_non_jedi_result(result, identifier)
        }

        jedi_matches = (
            cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
            if jedi_matcher_id in results
            else ()
        )

        iter_jm = iter(jedi_matches)
        if _timeout:
            for jm in iter_jm:
                try:
                    type_ = jm.type
                except Exception:
                    if self.debug:
                        print("Error in Jedi getting type of ", jm)
                    type_ = None
                delta = len(jm.name_with_symbols) - len(jm.complete)
                if type_ == "function":
                    signature = _make_signature(jm)
                else:
                    signature = ""
                yield Completion(
                    start=offset - delta,
                    end=offset,
                    text=jm.name_with_symbols,
                    type=type_,
                    signature=signature,
                    _origin="jedi",
                )

                if time.monotonic() > deadline:
                    break

        for jm in iter_jm:
            delta = len(jm.name_with_symbols) - len(jm.complete)
            yield Completion(
                start=offset - delta,
                end=offset,
                text=jm.name_with_symbols,
                type=_UNKNOWN_TYPE,  # don't compute type for speed
                _origin="jedi",
                signature="",
            )

        # TODO: suppress this; right now it is emitted just for debugging.
        if jedi_matches and non_jedi_results and self.debug:
            some_start_offset = before.rfind(
                next(iter(non_jedi_results.values()))["matched_fragment"]
            )
            yield Completion(
                start=some_start_offset,
                end=offset,
                text="--jedi/ipython--",
                _origin="debug",
                type="none",
                signature="",
            )

        ordered: List[Completion] = []
        sortable: List[Completion] = []

        for origin, result in non_jedi_results.items():
            matched_text = result["matched_fragment"]
            start_offset = before.rfind(matched_text)
            is_ordered = result.get("ordered", False)
            container = ordered if is_ordered else sortable

            # I'm unsure if this is always true, so let's assert and see if it
            # crashes.
            assert before.endswith(matched_text)

            for simple_completion in result["completions"]:
                completion = Completion(
                    start=start_offset,
                    end=offset,
                    text=simple_completion.text,
                    _origin=origin,
                    signature="",
                    type=simple_completion.type or _UNKNOWN_TYPE,
                )
                container.append(completion)

        yield from list(self._deduplicate(ordered + self._sort(sortable)))[
            :MATCHES_LIMIT
        ]

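The generator above works in two phases: until the deadline passes, each Jedi match gets its (potentially slow) type and signature computed; after that, the remaining matches are yielded immediately with an unknown type. A minimal standalone sketch of that pattern, with `Item` and `expensive_type` as hypothetical stand-ins for the real completion objects:

```python
import time
from dataclasses import dataclass
from typing import Iterator, List

_UNKNOWN = "<unknown>"


@dataclass
class Item:
    name: str
    type: str


def expensive_type(name: str) -> str:
    # stand-in for the slow per-match type lookup (jm.type above)
    return "function"


def complete_with_deadline(names: List[str], timeout: float) -> Iterator[Item]:
    deadline = time.monotonic() + timeout
    it = iter(names)
    # phase 1: compute the expensive type while we still have time
    for name in it:
        yield Item(name, expensive_type(name))
        if time.monotonic() > deadline:
            break
    # phase 2: past the deadline, yield the rest without computing types
    for name in it:
        yield Item(name, _UNKNOWN)
```

With a generous timeout every item gets a computed type; with an already-expired deadline, only the first item does.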
    def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
        """Find completions for the given text and line context.

        Note that both the text and the line_buffer are optional, but at least
        one of them must be given.

        Parameters
        ----------
        text : string, optional
            Text to perform the completion on. If not given, the line buffer
            is split using the instance's CompletionSplitter object.
        line_buffer : string, optional
            If not given, the completer attempts to obtain the current line
            buffer via readline. This keyword allows clients which are
            requesting text completions in non-readline contexts to inform
            the completer of the entire text.
        cursor_pos : int, optional
            Index of the cursor in the full line buffer. Should be provided by
            remote frontends where the kernel has no access to frontend state.

        Returns
        -------
        Tuple of two items:
        text : str
            Text that was actually used in the completion.
        matches : list
            A list of completion matches.

        Notes
        -----
        This API is likely to be deprecated and replaced by
        :any:`IPCompleter.completions` in the future.

        """
        warnings.warn(
            "`Completer.complete` is pending deprecation since "
            "IPython 6.0 and will be replaced by `Completer.completions`.",
            PendingDeprecationWarning,
        )
        # potential todo: FOLD the 3rd thrown-away argument of _complete
        # into the first two.
        # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
        # TODO: should we deprecate now, or does it stay?

        results = self._complete(
            line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
        )

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        return self._arrange_and_extract(
            results,
            # TODO: can we confirm that excluding Jedi here was a deliberate
            # choice in the previous version?
            skip_matchers={jedi_matcher_id},
            # this API does not support different start/end positions
            # (fragments of token).
            abort_if_offset_changes=True,
        )

    def _arrange_and_extract(
        self,
        results: Dict[str, MatcherResult],
        skip_matchers: Set[str],
        abort_if_offset_changes: bool,
    ):
        sortable: List[AnyMatcherCompletion] = []
        ordered: List[AnyMatcherCompletion] = []
        most_recent_fragment = None
        for identifier, result in results.items():
            if identifier in skip_matchers:
                continue
            if not result["completions"]:
                continue
            if not most_recent_fragment:
                most_recent_fragment = result["matched_fragment"]
            if (
                abort_if_offset_changes
                and result["matched_fragment"] != most_recent_fragment
            ):
                break
            if result.get("ordered", False):
                ordered.extend(result["completions"])
            else:
                sortable.extend(result["completions"])

        if not most_recent_fragment:
            most_recent_fragment = ""  # to satisfy the typechecker (and just in case)

        return most_recent_fragment, [
            m.text for m in self._deduplicate(ordered + self._sort(sortable))
        ]

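The merge performed by `_arrange_and_extract` can be reduced to a small standalone sketch: results flagged ``ordered`` keep their matcher-provided order and come first, the rest are sorted, and duplicates are then dropped with a simplified first-occurrence-wins rule. `arrange` below is a hypothetical helper, not part of the class:

```python
from typing import Dict, List


def arrange(results: Dict[str, dict]) -> List[str]:
    ordered: List[str] = []
    sortable: List[str] = []
    for result in results.values():
        # matchers that opt into "ordered" control their own ranking
        bucket = ordered if result.get("ordered", False) else sortable
        bucket.extend(result["completions"])
    merged = ordered + sorted(sortable)
    # first occurrence wins during deduplication (dicts preserve order)
    return list(dict.fromkeys(merged))
```

Ordered results stay in front even when the sorted bucket would rank them differently.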
    def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
                  full_text=None) -> _CompleteResult:
        """
        Like complete but can also return raw jedi completions as well as the
        origin of the completion text. This could (and should) be made much
        cleaner, but that will be simpler once we drop the old (and stateful)
        :any:`complete` API.

        With the current provisional API, cursor_pos acts both (depending on
        the caller) as the offset in the ``text`` or ``line_buffer``, or as
        the ``column`` when passing multiline strings. This could/should be
        renamed, but would add extra noise.

        Parameters
        ----------
        cursor_line
            Index of the line the cursor is on. 0 indexed.
        cursor_pos
            Position of the cursor in the current line/line_buffer/text. 0
            indexed.
        line_buffer : optional, str
            The current line the cursor is in; this is mostly due to the
            legacy reason that readline could only give us the single current
            line. Prefer `full_text`.
        text : str
            The current "token" the cursor is in, mostly also for historical
            reasons, as the completer would trigger only after the current
            line was parsed.
        full_text : str
            Full text of the current cell.

        Returns
        -------
        An ordered dictionary where keys are identifiers of completion
        matchers and values are ``MatcherResult``s.
        """

        # if the cursor position isn't given, the only sane assumption we can
        # make is that it's at the end of the line (the common case)
        if cursor_pos is None:
            cursor_pos = len(line_buffer) if text is None else len(text)

        if self.use_main_ns:
            self.namespace = __main__.__dict__

        # if text is either None or an empty string, rely on the line buffer
        if (not line_buffer) and full_text:
            line_buffer = full_text.split('\n')[cursor_line]
        if not text:  # issue #11508: check line_buffer before calling split_line
            text = (
                self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
            )

        # If no line buffer is given, assume the input text is all there was
        if line_buffer is None:
            line_buffer = text

        # deprecated - do not use `line_buffer` in new code.
        self.line_buffer = line_buffer
        self.text_until_cursor = self.line_buffer[:cursor_pos]

        if not full_text:
            full_text = line_buffer

        context = CompletionContext(
            full_text=full_text,
            cursor_position=cursor_pos,
            cursor_line=cursor_line,
            token=text,
            limit=MATCHES_LIMIT,
        )

        # Start with a clean slate of completions
        results: Dict[str, MatcherResult] = {}

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        suppressed_matchers: Set[str] = set()

        matchers = {
            _get_matcher_id(matcher): matcher
            for matcher in sorted(
                self.matchers, key=_get_matcher_priority, reverse=True
            )
        }

        for matcher_id, matcher in matchers.items():
            if matcher_id in self.disable_matchers:
                continue

            if matcher_id in results:
                warnings.warn(f"Duplicate matcher ID: {matcher_id}.")

            if matcher_id in suppressed_matchers:
                continue

            result: MatcherResult
            try:
                if _is_matcher_v1(matcher):
                    result = _convert_matcher_v1_result_to_v2(
                        matcher(text), type=_UNKNOWN_TYPE
                    )
                elif _is_matcher_v2(matcher):
                    result = matcher(context)
                else:
                    api_version = _get_matcher_api_version(matcher)
                    raise ValueError(f"Unsupported API version {api_version}")
            except BaseException:
                # Show the ugly traceback if the matcher causes an
                # exception, but do NOT crash the kernel!
                sys.excepthook(*sys.exc_info())
                continue

            # set default value for matched fragment if suffix was not selected.
            result["matched_fragment"] = result.get("matched_fragment", context.token)

            if not suppressed_matchers:
                suppression_recommended: Union[bool, Set[str]] = result.get(
                    "suppress", False
                )

                suppression_config = (
                    self.suppress_competing_matchers.get(matcher_id, None)
                    if isinstance(self.suppress_competing_matchers, dict)
                    else self.suppress_competing_matchers
                )
                should_suppress = (
                    (suppression_config is True)
                    or (suppression_recommended and (suppression_config is not False))
                ) and has_any_completions(result)

                if should_suppress:
                    suppression_exceptions: Set[str] = result.get(
                        "do_not_suppress", set()
                    )
                    if isinstance(suppression_recommended, Iterable):
                        to_suppress = set(suppression_recommended)
                    else:
                        to_suppress = set(matchers)
                    suppressed_matchers = to_suppress - suppression_exceptions

                    new_results = {}
                    for previous_matcher_id, previous_result in results.items():
                        if previous_matcher_id not in suppressed_matchers:
                            new_results[previous_matcher_id] = previous_result
                    results = new_results

            results[matcher_id] = result

        _, matches = self._arrange_and_extract(
            results,
            # TODO: Jedi completions are not included in the legacy stateful
            # API; was this deliberate or an omission? If it was an omission,
            # we can remove the filtering step, otherwise remove this comment.
            skip_matchers={jedi_matcher_id},
            abort_if_offset_changes=False,
        )

        # populate legacy stateful API
        self.matches = matches

        return results

    @staticmethod
    def _deduplicate(
        matches: Sequence[AnyCompletion],
    ) -> Iterable[AnyCompletion]:
        filtered_matches: Dict[str, AnyCompletion] = {}
        for match in matches:
            text = match.text
            if (
                text not in filtered_matches
                or filtered_matches[text].type == _UNKNOWN_TYPE
            ):
                filtered_matches[text] = match

        return filtered_matches.values()

    @staticmethod
    def _sort(matches: Sequence[AnyCompletion]):
        return sorted(matches, key=lambda x: completions_sorting_key(x.text))

    @context_matcher()
    def fwd_unicode_matcher(self, context: CompletionContext):
        """Same as :any:`fwd_unicode_match`, but adapted to the new Matcher API."""
        # TODO: use `context.limit` to terminate early once we have matched
        # the maximum number that will be used downstream; could be added as
        # an optional keyword to `fwd_unicode_match(text: str, limit: int = None)`
        # or we could re-implement it here.
        fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

    def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
        """
        Forward match a string starting with a backslash with a list of
        potential Unicode completions.

        Will compute the list of Unicode character names on first call and
        cache it.

        .. deprecated:: 8.6
            You can use :meth:`fwd_unicode_matcher` instead.

        Returns
        -------
        A tuple with:
            - the matched text (empty if no matches),
            - a list of potential completions (an empty tuple otherwise).
        """
        # TODO: self.unicode_names is here a list we traverse each time with
        # ~100k elements. We could do a faster match using a Trie.

        # Using pygtrie the following seems to work:

        # s = PrefixSet()
        # for c in range(0, 0x10FFFF + 1):
        #     try:
        #         s.add(unicodedata.name(chr(c)))
        #     except ValueError:
        #         pass
        # [''.join(k) for k in s.iter(prefix)]

        # But it needs to be timed, and it adds an extra dependency.

        slashpos = text.rfind('\\')
        # if text contains a backslash
        if slashpos > -1:
            # PERF: It's important that we don't access self._unicode_names
            # until we're inside this if-block. _unicode_names is lazily
            # initialized, and it takes a user-noticeable amount of time to
            # initialize it, so we don't want to initialize it unless we're
            # actually going to use it.
            s = text[slashpos + 1 :]
            sup = s.upper()
            candidates = [x for x in self.unicode_names if x.startswith(sup)]
            if candidates:
                return s, candidates
            candidates = [x for x in self.unicode_names if sup in x]
            if candidates:
                return s, candidates
            splitsup = sup.split(" ")
            candidates = [
                x for x in self.unicode_names if all(u in x for u in splitsup)
            ]
            if candidates:
                return s, candidates

            return "", ()

        # if text does not contain a backslash
        else:
            return '', ()

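The three-stage lookup above (name prefix, then substring, then all words present) can be exercised against a small slice of the Unicode name table. `fwd_match` below is a standalone stand-in for the method, not IPython API:

```python
import unicodedata
from typing import List, Sequence, Tuple


def fwd_match(text: str, names: Sequence[str]) -> Tuple[str, List[str]]:
    # mirror of the three-stage lookup: prefix, substring, then all-words
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", []
    s = text[slashpos + 1:]
    sup = s.upper()
    for predicate in (
        lambda name: name.startswith(sup),
        lambda name: sup in name,
        lambda name: all(word in name for word in sup.split(" ")),
    ):
        candidates = [name for name in names if predicate(name)]
        if candidates:
            return s, candidates
    return "", []
```

Against the Greek small-letters block, ``\alpha`` falls through the prefix stage and matches at the substring stage.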
    @property
    def unicode_names(self) -> List[str]:
        """List of names of unicode code points that can be completed.

        The list is lazily initialized on first access.
        """
        if self._unicode_names is None:
            self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)

        return self._unicode_names

def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:
                pass
    return names