Switch `CompletionContext` to `dataclass`, use `cached_property`...
krassowski -
@@ -1,2957 +1,2957 b''
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now supports a wide variety of completion mechanisms, both for
7 This module now supports a wide variety of completion mechanisms, both for
8 normal classic Python code and for IPython-specific syntax such as
8 normal classic Python code and for IPython-specific syntax such as
9 magics.
9 magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends can not only complete your code, but also help
14 IPython and compatible frontends can not only complete your code, but also help
15 you input a wide range of characters. In particular we allow you to insert
15 you input a wide range of characters. In particular we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name, or unicode long description. To do so, type a backslash followed by the
22 name, or unicode long description. To do so, type a backslash followed by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 Ξ±
31 Ξ±
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\GREEK SMALL LETTER ALPHA<tab>
38 \\GREEK SMALL LETTER ALPHA<tab>
39 Ξ±
39 Ξ±
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrows or
42 Only valid Python identifiers will complete. Combining characters (like arrows or
43 dots) are also available; unlike latex, they need to be put after their
43 dots) are also available; unlike latex, they need to be put after their
44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometimes challenging to know how to type a character. If you are using
51 It is sometimes challenging to know how to type a character. If you are using
52 IPython, or any compatible frontend, you can prepend a backslash to the character
52 IPython, or any compatible frontend, you can prepend a backslash to the character
53 and press ``<tab>`` to expand it to its latex form.
53 and press ``<tab>`` to expand it to its latex form.
54
54
55 .. code::
55 .. code::
56
56
57 \\Ξ±<tab>
57 \\Ξ±<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 ``Completer.backslash_combining_completions`` option to ``False``.
62 ``Completer.backslash_combining_completions`` option to ``False``.
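
For instance, in an IPython configuration file (typically ``ipython_config.py``;
the exact path depends on your profile, so treat this as a sketch) one could set:

.. code-block::

    c.Completer.backslash_combining_completions = False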
63
63
64
64
65 Experimental
65 Experimental
66 ============
66 ============
67
67
68 Starting with IPython 6.0, this module can make use of the Jedi library to
68 Starting with IPython 6.0, this module can make use of the Jedi library to
69 generate completions, both using static analysis of the code and by dynamically
69 generate completions, both using static analysis of the code and by dynamically
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
71 library for Python. The APIs attached to this new mechanism are unstable and will
71 library for Python. The APIs attached to this new mechanism are unstable and will
72 raise unless used in a :any:`provisionalcompleter` context manager.
72 raise unless used in a :any:`provisionalcompleter` context manager.
73
73
74 You will find that the following are experimental:
74 You will find that the following are experimental:
75
75
76 - :any:`provisionalcompleter`
76 - :any:`provisionalcompleter`
77 - :any:`IPCompleter.completions`
77 - :any:`IPCompleter.completions`
78 - :any:`Completion`
78 - :any:`Completion`
79 - :any:`rectify_completions`
79 - :any:`rectify_completions`
80
80
81 .. note::
81 .. note::
82
82
83 better name for :any:`rectify_completions` ?
83 better name for :any:`rectify_completions` ?
84
84
85 We welcome any feedback on these new APIs, and we also encourage you to try this
85 We welcome any feedback on these new APIs, and we also encourage you to try this
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
87 to have extra logging information if :any:`jedi` is crashing, or if the current
87 to have extra logging information if :any:`jedi` is crashing, or if the current
88 IPython completer's pending deprecations are returning results not yet handled
88 IPython completer's pending deprecations are returning results not yet handled
89 by :any:`jedi`.
89 by :any:`jedi`.
90
90
91 Using Jedi for tab completion allows snippets like the following to work without
91 Using Jedi for tab completion allows snippets like the following to work without
92 having to execute any code:
92 having to execute any code:
93
93
94 >>> myvar = ['hello', 42]
94 >>> myvar = ['hello', 42]
95 ... myvar[1].bi<tab>
95 ... myvar[1].bi<tab>
96
96
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
98 executing any code unlike the previously available ``IPCompleter.greedy``
98 executing any code unlike the previously available ``IPCompleter.greedy``
99 option.
99 option.
100
100
101 Be sure to update :any:`jedi` to the latest stable version or to try the
101 Be sure to update :any:`jedi` to the latest stable version or to try the
102 current development version to get better completions.
102 current development version to get better completions.
103
103
104 Matchers
104 Matchers
105 ========
105 ========
106
106
107 All completion routines are implemented using the unified *Matchers* API.
107 All completion routines are implemented using the unified *Matchers* API.
108 The matchers API is provisional and subject to change without notice.
108 The matchers API is provisional and subject to change without notice.
109
109
110 The built-in matchers include:
110 The built-in matchers include:
111
111
112 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
112 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
113 - :any:`IPCompleter.magic_matcher`: completions for magics,
113 - :any:`IPCompleter.magic_matcher`: completions for magics,
114 - :any:`IPCompleter.unicode_name_matcher`,
114 - :any:`IPCompleter.unicode_name_matcher`,
115 :any:`IPCompleter.fwd_unicode_matcher`
115 :any:`IPCompleter.fwd_unicode_matcher`
116 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
116 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
117 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
117 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
118 - :any:`IPCompleter.file_matcher`: paths to files and directories,
118 - :any:`IPCompleter.file_matcher`: paths to files and directories,
119 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
119 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
120 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
120 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
121 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
121 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
122 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
122 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
123 implementation in :any:`InteractiveShell` which uses IPython hooks system
123 implementation in :any:`InteractiveShell` which uses IPython hooks system
124 (`complete_command`) with string dispatch (including regular expressions).
124 (`complete_command`) with string dispatch (including regular expressions).
125 Unlike other matchers, ``custom_completer_matcher`` will not suppress
125 Unlike other matchers, ``custom_completer_matcher`` will not suppress
126 Jedi results, to match behaviour in earlier IPython versions.
126 Jedi results, to match behaviour in earlier IPython versions.
127
127
128 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
128 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
129
129
130 Matcher API
130 Matcher API
131 -----------
131 -----------
132
132
133 Simplifying some details, the ``Matcher`` interface can be described as
133 Simplifying some details, the ``Matcher`` interface can be described as
134
134
135 .. code-block::
135 .. code-block::
136
136
137 MatcherAPIv1 = Callable[[str], list[str]]
137 MatcherAPIv1 = Callable[[str], list[str]]
138 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
138 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
139
139
140 Matcher = MatcherAPIv1 | MatcherAPIv2
140 Matcher = MatcherAPIv1 | MatcherAPIv2
141
141
142 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
142 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
143 and remains supported as the simplest way of generating completions. This is also
143 and remains supported as the simplest way of generating completions. This is also
144 currently the only API supported by the IPython hooks system `complete_command`.
144 currently the only API supported by the IPython hooks system `complete_command`.
145
145
146 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
146 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
147 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
147 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
148 and requires a literal ``2`` for v2 Matchers.
148 and requires a literal ``2`` for v2 Matchers.
149
149
150 Once the API stabilises, future versions may relax the requirement for specifying
150 Once the API stabilises, future versions may relax the requirement for specifying
151 ``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore,
151 ``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore,
152 please do not rely on the presence of ``matcher_api_version`` for any purpose.
152 please do not rely on the presence of ``matcher_api_version`` for any purpose.
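
For illustration, a minimal v2 matcher could look like the sketch below (the
matcher name, the completed words, and the ``type`` string are made up; see
:any:`completion_matcher` and ``context_matcher`` later in this module):

.. code-block::

    @context_matcher()
    def color_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # complete a hypothetical set of color names against the current token
        colors = ["red", "green", "blue"]
        matches = [c for c in colors if c.startswith(context.token)]
        return {
            "completions": [SimpleCompletion(text=m, type="param") for m in matches]
        }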
153
153
154 Suppression of competing matchers
154 Suppression of competing matchers
155 ---------------------------------
155 ---------------------------------
156
156
157 By default results from all matchers are combined, in the order determined by
157 By default results from all matchers are combined, in the order determined by
158 their priority. Matchers can request to suppress results from subsequent
158 their priority. Matchers can request to suppress results from subsequent
159 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
159 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
160
160
161 When multiple matchers simultaneously request suppression, the results from
161 When multiple matchers simultaneously request suppression, the results from
162 the matcher with the higher priority will be returned.
162 the matcher with the higher priority will be returned.
163
163
164 Sometimes it is desirable to suppress most but not all other matchers;
164 Sometimes it is desirable to suppress most but not all other matchers;
165 this can be achieved by adding a list of identifiers of matchers which
165 this can be achieved by adding a list of identifiers of matchers which
166 should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.
166 should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.
167
167
168 The suppression behaviour is user-configurable via
168 The suppression behaviour is user-configurable via
169 :any:`IPCompleter.suppress_competing_matchers`.
169 :any:`IPCompleter.suppress_competing_matchers`.
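
As an illustrative sketch (the completion text and the matcher identifier below
are only examples), a matcher that wants to hide everything except the Jedi
results could return a result along these lines:

.. code-block::

    {
        "completions": [SimpleCompletion(text="%%timeit", type="magic")],
        "suppress": True,
        "do_not_suppress": {"IPCompleter.jedi_matcher"},
    }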
170 """
170 """
171
171
172
172
173 # Copyright (c) IPython Development Team.
173 # Copyright (c) IPython Development Team.
174 # Distributed under the terms of the Modified BSD License.
174 # Distributed under the terms of the Modified BSD License.
175 #
175 #
176 # Some of this code originated from rlcompleter in the Python standard library
176 # Some of this code originated from rlcompleter in the Python standard library
177 # Copyright (C) 2001 Python Software Foundation, www.python.org
177 # Copyright (C) 2001 Python Software Foundation, www.python.org
178
178
179 from __future__ import annotations
179 from __future__ import annotations
180 import builtins as builtin_mod
180 import builtins as builtin_mod
181 import glob
181 import glob
182 import inspect
182 import inspect
183 import itertools
183 import itertools
184 import keyword
184 import keyword
185 import os
185 import os
186 import re
186 import re
187 import string
187 import string
188 import sys
188 import sys
189 import time
189 import time
190 import unicodedata
190 import unicodedata
191 import uuid
191 import uuid
192 import warnings
192 import warnings
193 from contextlib import contextmanager
193 from contextlib import contextmanager
194 from functools import lru_cache, partial
194 from dataclasses import dataclass
195 from functools import cached_property, partial
195 from importlib import import_module
196 from importlib import import_module
196 from types import SimpleNamespace
197 from types import SimpleNamespace
197 from typing import (
198 from typing import (
198 Iterable,
199 Iterable,
199 Iterator,
200 Iterator,
200 List,
201 List,
201 Tuple,
202 Tuple,
202 Union,
203 Union,
203 Any,
204 Any,
204 Sequence,
205 Sequence,
205 Dict,
206 Dict,
206 NamedTuple,
207 NamedTuple,
207 Pattern,
208 Pattern,
208 Optional,
209 Optional,
209 TYPE_CHECKING,
210 TYPE_CHECKING,
210 Set,
211 Set,
211 Literal,
212 Literal,
212 )
213 )
213
214
214 from IPython.core.error import TryNext
215 from IPython.core.error import TryNext
215 from IPython.core.inputtransformer2 import ESC_MAGIC
216 from IPython.core.inputtransformer2 import ESC_MAGIC
216 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
217 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
217 from IPython.core.oinspect import InspectColors
218 from IPython.core.oinspect import InspectColors
218 from IPython.testing.skipdoctest import skip_doctest
219 from IPython.testing.skipdoctest import skip_doctest
219 from IPython.utils import generics
220 from IPython.utils import generics
220 from IPython.utils.decorators import sphinx_options
221 from IPython.utils.decorators import sphinx_options
221 from IPython.utils.dir2 import dir2, get_real_method
222 from IPython.utils.dir2 import dir2, get_real_method
222 from IPython.utils.docs import GENERATING_DOCUMENTATION
223 from IPython.utils.docs import GENERATING_DOCUMENTATION
223 from IPython.utils.path import ensure_dir_exists
224 from IPython.utils.path import ensure_dir_exists
224 from IPython.utils.process import arg_split
225 from IPython.utils.process import arg_split
225 from traitlets import (
226 from traitlets import (
226 Bool,
227 Bool,
227 Enum,
228 Enum,
228 Int,
229 Int,
229 List as ListTrait,
230 List as ListTrait,
230 Unicode,
231 Unicode,
231 Dict as DictTrait,
232 Dict as DictTrait,
232 Union as UnionTrait,
233 Union as UnionTrait,
233 default,
234 default,
234 observe,
235 observe,
235 )
236 )
236 from traitlets.config.configurable import Configurable
237 from traitlets.config.configurable import Configurable
237
238
238 import __main__
239 import __main__
239
240
240 # skip module doctests
241 # skip module doctests
241 __skip_doctest__ = True
242 __skip_doctest__ = True
242
243
243
244
244 try:
245 try:
245 import jedi
246 import jedi
246 jedi.settings.case_insensitive_completion = False
247 jedi.settings.case_insensitive_completion = False
247 import jedi.api.helpers
248 import jedi.api.helpers
248 import jedi.api.classes
249 import jedi.api.classes
249 JEDI_INSTALLED = True
250 JEDI_INSTALLED = True
250 except ImportError:
251 except ImportError:
251 JEDI_INSTALLED = False
252 JEDI_INSTALLED = False
252
253
253
254
254 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
255 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
255 from typing import cast
256 from typing import cast
256 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias
257 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias
257 else:
258 else:
258
259
259 def cast(obj, type_):
260 def cast(obj, type_):
260 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
261 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
261 return obj
262 return obj
262
263
263 # do not require on runtime
264 # do not require on runtime
264 NotRequired = Tuple # requires Python >=3.11
265 NotRequired = Tuple # requires Python >=3.11
265 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
266 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
266 Protocol = object # requires Python >=3.8
267 Protocol = object # requires Python >=3.8
267 TypeAlias = Any # requires Python >=3.10
268 TypeAlias = Any # requires Python >=3.10
268 if GENERATING_DOCUMENTATION:
269 if GENERATING_DOCUMENTATION:
269 from typing import TypedDict
270 from typing import TypedDict
270
271
271 # -----------------------------------------------------------------------------
272 # -----------------------------------------------------------------------------
272 # Globals
273 # Globals
273 #-----------------------------------------------------------------------------
274 #-----------------------------------------------------------------------------
274
275
275 # Ranges where we have most of the valid unicode names. We could be more finely
276 # Ranges where we have most of the valid unicode names. We could be more finely
276 # grained, but is it worth it for performance? While unicode has characters in the
277 # grained, but is it worth it for performance? While unicode has characters in the
277 # range 0-0x110000, we seem to have names for only about 10% of those (131808 as I
278 # range 0-0x110000, we seem to have names for only about 10% of those (131808 as I
278 # write this). With the ranges below we cover them all, with a density of ~67%; the
279 # write this). With the ranges below we cover them all, with a density of ~67%; the
279 # biggest next gap we considered would only add about 1% density, and there are 600
280 # biggest next gap we considered would only add about 1% density, and there are 600
280 # gaps that would need hard coding.
281 # gaps that would need hard coding.
281 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
282 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
282
283
283 # Public API
284 # Public API
284 __all__ = ["Completer", "IPCompleter"]
285 __all__ = ["Completer", "IPCompleter"]
285
286
286 if sys.platform == 'win32':
287 if sys.platform == 'win32':
287 PROTECTABLES = ' '
288 PROTECTABLES = ' '
288 else:
289 else:
289 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
290 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
290
291
291 # Protect against returning an enormous number of completions which the frontend
292 # Protect against returning an enormous number of completions which the frontend
292 # may have trouble processing.
293 # may have trouble processing.
293 MATCHES_LIMIT = 500
294 MATCHES_LIMIT = 500
294
295
295 # Completion type reported when no type can be inferred.
296 # Completion type reported when no type can be inferred.
296 _UNKNOWN_TYPE = "<unknown>"
297 _UNKNOWN_TYPE = "<unknown>"
297
298
298 class ProvisionalCompleterWarning(FutureWarning):
299 class ProvisionalCompleterWarning(FutureWarning):
299 """
300 """
300 Exception raised by an experimental feature in this module.
301 Exception raised by an experimental feature in this module.
301
302
302 Wrap code in the :any:`provisionalcompleter` context manager if you
303 Wrap code in the :any:`provisionalcompleter` context manager if you
303 are certain you want to use an unstable feature.
304 are certain you want to use an unstable feature.
304 """
305 """
305 pass
306 pass
306
307
307 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
308 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
308
309
309
310
310 @skip_doctest
311 @skip_doctest
311 @contextmanager
312 @contextmanager
312 def provisionalcompleter(action='ignore'):
313 def provisionalcompleter(action='ignore'):
313 """
314 """
314 This context manager has to be used in any place where unstable completer
315 This context manager has to be used in any place where unstable completer
315 behavior and API may be called.
316 behavior and API may be called.
316
317
317 >>> with provisionalcompleter():
318 >>> with provisionalcompleter():
318 ... completer.do_experimental_things() # works
319 ... completer.do_experimental_things() # works
319
320
320 >>> completer.do_experimental_things() # raises.
321 >>> completer.do_experimental_things() # raises.
321
322
322 .. note::
323 .. note::
323
324
324 Unstable
325 Unstable
325
326
326 By using this context manager you agree that the API in use may change
327 By using this context manager you agree that the API in use may change
327 without warning, and that you won't complain if they do so.
328 without warning, and that you won't complain if they do so.
328
329
329 You also understand that, if the API is not to your liking, you should report
330 You also understand that, if the API is not to your liking, you should report
330 a bug to explain your use case upstream.
331 a bug to explain your use case upstream.
331
332
332 We'll be happy to get your feedback, feature requests, and improvements on
333 We'll be happy to get your feedback, feature requests, and improvements on
333 any of the unstable APIs!
334 any of the unstable APIs!
334 """
335 """
335 with warnings.catch_warnings():
336 with warnings.catch_warnings():
336 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
337 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
337 yield
338 yield
338
339
339
340
340 def has_open_quotes(s):
341 def has_open_quotes(s):
341 """Return whether a string has open quotes.
342 """Return whether a string has open quotes.
342
343
343 This simply checks whether the number of quote characters of either type in
344 This simply checks whether the number of quote characters of either type in
344 the string is odd.
345 the string is odd.
345
346
346 Returns
347 Returns
347 -------
348 -------
348 If there is an open quote, the quote character is returned. Else, return
349 If there is an open quote, the quote character is returned. Else, return
349 False.
350 False.
350 """
351 """
351 # We check " first, then ', so complex cases with nested quotes will get
352 # We check " first, then ', so complex cases with nested quotes will get
352 # the " to take precedence.
353 # the " to take precedence.
353 if s.count('"') % 2:
354 if s.count('"') % 2:
354 return '"'
355 return '"'
355 elif s.count("'") % 2:
356 elif s.count("'") % 2:
356 return "'"
357 return "'"
357 else:
358 else:
358 return False
359 return False
359
360
360
361
361 def protect_filename(s, protectables=PROTECTABLES):
362 def protect_filename(s, protectables=PROTECTABLES):
362 """Escape a string to protect certain characters."""
363 """Escape a string to protect certain characters."""
363 if set(s) & set(protectables):
364 if set(s) & set(protectables):
364 if sys.platform == "win32":
365 if sys.platform == "win32":
365 return '"' + s + '"'
366 return '"' + s + '"'
366 else:
367 else:
367 return "".join(("\\" + c if c in protectables else c) for c in s)
368 return "".join(("\\" + c if c in protectables else c) for c in s)
368 else:
369 else:
369 return s
370 return s
370
371
371
372
372 def expand_user(path:str) -> Tuple[str, bool, str]:
373 def expand_user(path:str) -> Tuple[str, bool, str]:
373 """Expand ``~``-style usernames in strings.
374 """Expand ``~``-style usernames in strings.
374
375
375 This is similar to :func:`os.path.expanduser`, but it computes and returns
376 This is similar to :func:`os.path.expanduser`, but it computes and returns
376 extra information that will be useful if the input was being used in
377 extra information that will be useful if the input was being used in
377 computing completions, and you wish to return the completions with the
378 computing completions, and you wish to return the completions with the
378 original '~' instead of its expanded value.
379 original '~' instead of its expanded value.
379
380
380 Parameters
381 Parameters
381 ----------
382 ----------
382 path : str
383 path : str
383 String to be expanded. If no ~ is present, the output is the same as the
384 String to be expanded. If no ~ is present, the output is the same as the
384 input.
385 input.
385
386
386 Returns
387 Returns
387 -------
388 -------
388 newpath : str
389 newpath : str
389 Result of ~ expansion in the input path.
390 Result of ~ expansion in the input path.
390 tilde_expand : bool
391 tilde_expand : bool
391 Whether any expansion was performed or not.
392 Whether any expansion was performed or not.
392 tilde_val : str
393 tilde_val : str
393 The value that ~ was replaced with.
394 The value that ~ was replaced with.
394 """
395 """
395 # Default values
396 # Default values
396 tilde_expand = False
397 tilde_expand = False
397 tilde_val = ''
398 tilde_val = ''
398 newpath = path
399 newpath = path
399
400
400 if path.startswith('~'):
401 if path.startswith('~'):
401 tilde_expand = True
402 tilde_expand = True
402 rest = len(path)-1
403 rest = len(path)-1
403 newpath = os.path.expanduser(path)
404 newpath = os.path.expanduser(path)
404 if rest:
405 if rest:
405 tilde_val = newpath[:-rest]
406 tilde_val = newpath[:-rest]
406 else:
407 else:
407 tilde_val = newpath
408 tilde_val = newpath
408
409
409 return newpath, tilde_expand, tilde_val
410 return newpath, tilde_expand, tilde_val
410
411
411
412
412 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
413 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
413 """Does the opposite of expand_user, with its outputs.
414 """Does the opposite of expand_user, with its outputs.
414 """
415 """
415 if tilde_expand:
416 if tilde_expand:
416 return path.replace(tilde_val, '~')
417 return path.replace(tilde_val, '~')
417 else:
418 else:
418 return path
419 return path
419
420
420
421
421 def completions_sorting_key(word):
422 def completions_sorting_key(word):
422 """key for sorting completions
423 """key for sorting completions
423
424
424 This does several things:
425 This does several things:
425
426
426 - Demote any completions starting with underscores to the end
427 - Demote any completions starting with underscores to the end
427 - Insert any %magic and %%cellmagic completions in the alphabetical order
428 - Insert any %magic and %%cellmagic completions in the alphabetical order
428 by their name
429 by their name
429 """
430 """
430 prio1, prio2 = 0, 0
431 prio1, prio2 = 0, 0
431
432
432 if word.startswith('__'):
433 if word.startswith('__'):
433 prio1 = 2
434 prio1 = 2
434 elif word.startswith('_'):
435 elif word.startswith('_'):
435 prio1 = 1
436 prio1 = 1
436
437
437 if word.endswith('='):
438 if word.endswith('='):
438 prio1 = -1
439 prio1 = -1
439
440
440 if word.startswith('%%'):
441 if word.startswith('%%'):
441 # If there's another % in there, this is something else, so leave it alone
442 # If there's another % in there, this is something else, so leave it alone
442 if not "%" in word[2:]:
443 if not "%" in word[2:]:
443 word = word[2:]
444 word = word[2:]
444 prio2 = 2
445 prio2 = 2
445 elif word.startswith('%'):
446 elif word.startswith('%'):
446 if not "%" in word[1:]:
447 if not "%" in word[1:]:
447 word = word[1:]
448 word = word[1:]
448 prio2 = 1
449 prio2 = 1
449
450
450 return prio1, word, prio2
451 return prio1, word, prio2
451
452
452
453
453 class _FakeJediCompletion:
454 class _FakeJediCompletion:
454 """
455 """
455 This is a workaround to communicate to the UI that Jedi has crashed and to
456 This is a workaround to communicate to the UI that Jedi has crashed and to
456 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
457 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
457
458
458 Added in IPython 6.0, so it should likely be removed for 7.0.
459 Added in IPython 6.0, so it should likely be removed for 7.0.
459
460
460 """
461 """
461
462
462 def __init__(self, name):
463 def __init__(self, name):
463
464
464 self.name = name
465 self.name = name
465 self.complete = name
466 self.complete = name
466 self.type = 'crashed'
467 self.type = 'crashed'
467 self.name_with_symbols = name
468 self.name_with_symbols = name
468 self.signature = ''
469 self.signature = ''
469 self._origin = 'fake'
470 self._origin = 'fake'
470
471
471 def __repr__(self):
472 def __repr__(self):
472 return '<Fake completion object jedi has crashed>'
473 return '<Fake completion object jedi has crashed>'
473
474
474
475
475 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
476 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
476
477
477
478
478 class Completion:
479 class Completion:
479 """
480 """
480 Completion object used and returned by IPython completers.
481 Completion object used and returned by IPython completers.
481
482
482 .. warning::
483 .. warning::
483
484
484 Unstable
485 Unstable
485
486
486 This function is unstable, API may change without warning.
487 This function is unstable, API may change without warning.
487 It will also raise unless used in the proper context manager.
488 It will also raise unless used in the proper context manager.
488
489
489 This acts as a middle-ground :any:`Completion` object between the
490 This acts as a middle-ground :any:`Completion` object between the
490 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
491 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
491 object. While Jedi needs a lot of information about the evaluator and how the
492 object. While Jedi needs a lot of information about the evaluator and how the
492 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
493 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
493 need user-facing information:
494 need user-facing information:
494
495
495 - Which range should be replaced by what.
496 - Which range should be replaced by what.
496 - Some metadata (like the completion type), or meta information to be displayed to
497 - Some metadata (like the completion type), or meta information to be displayed to
497 the user.
498 the user.
498
499
499 For debugging purposes we can also store the origin of the completion (``jedi``,
500 For debugging purposes we can also store the origin of the completion (``jedi``,
500 ``IPython.python_matches``, ``IPython.magics_matches``...).
501 ``IPython.python_matches``, ``IPython.magics_matches``...).
501 """
502 """
502
503
503 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
504 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
504
505
505 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
506 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
506 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
507 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
507 "It may change without warnings. "
508 "It may change without warnings. "
508 "Use in corresponding context manager.",
509 "Use in corresponding context manager.",
509 category=ProvisionalCompleterWarning, stacklevel=2)
510 category=ProvisionalCompleterWarning, stacklevel=2)
510
511
511 self.start = start
512 self.start = start
512 self.end = end
513 self.end = end
513 self.text = text
514 self.text = text
514 self.type = type
515 self.type = type
515 self.signature = signature
516 self.signature = signature
516 self._origin = _origin
517 self._origin = _origin
517
518
518 def __repr__(self):
519 def __repr__(self):
519 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
520 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
520 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
521 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
521
522
522 def __eq__(self, other)->Bool:
523 def __eq__(self, other)->Bool:
523 """
524 """
524 Equality and hash do not hash the type (as some completers may not be
525 Equality and hash do not hash the type (as some completers may not be
525 able to infer the type), but are used to (partially) de-duplicate
526 able to infer the type), but are used to (partially) de-duplicate
526 completions.
527 completions.
527
528
528 Completely de-duplicating completions is a bit trickier than just
529 Completely de-duplicating completions is a bit trickier than just
529 comparing, as it depends on surrounding text, which Completions are not
530 comparing, as it depends on surrounding text, which Completions are not
530 aware of.
531 aware of.
531 """
532 """
532 return self.start == other.start and \
533 return self.start == other.start and \
533 self.end == other.end and \
534 self.end == other.end and \
534 self.text == other.text
535 self.text == other.text
535
536
536 def __hash__(self):
537 def __hash__(self):
537 return hash((self.start, self.end, self.text))
538 return hash((self.start, self.end, self.text))
538
539
539
540
540 class SimpleCompletion:
541 class SimpleCompletion:
541 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
542 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
542
543
543 .. warning::
544 .. warning::
544
545
545 Provisional
546 Provisional
546
547
547 This class is used to describe the currently supported attributes of
548 This class is used to describe the currently supported attributes of
548 simple completion items, and any additional implementation details
549 simple completion items, and any additional implementation details
549 should not be relied on. Additional attributes may be included in
550 should not be relied on. Additional attributes may be included in
550 future versions, and the meaning of text disambiguated from the current
551 future versions, and the meaning of text disambiguated from the current
551 dual meaning of "text to insert" and "text to use as a label".
552 dual meaning of "text to insert" and "text to use as a label".
552 """
553 """
553
554
554 __slots__ = ["text", "type"]
555 __slots__ = ["text", "type"]
555
556
556 def __init__(self, text: str, *, type: str = None):
557 def __init__(self, text: str, *, type: str = None):
557 self.text = text
558 self.text = text
558 self.type = type
559 self.type = type
559
560
560 def __repr__(self):
561 def __repr__(self):
561 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
562 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
562
563
563
564
564 class _MatcherResultBase(TypedDict):
565 class _MatcherResultBase(TypedDict):
565 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
566 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
566
567
567 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
568 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
568 matched_fragment: NotRequired[str]
569 matched_fragment: NotRequired[str]
569
570
570 #: Whether to suppress results from all other matchers (True), some
571 #: Whether to suppress results from all other matchers (True), some
571 #: matchers (set of identifiers) or none (False); default is False.
572 #: matchers (set of identifiers) or none (False); default is False.
572 suppress: NotRequired[Union[bool, Set[str]]]
573 suppress: NotRequired[Union[bool, Set[str]]]
573
574
574 #: Identifiers of matchers which should NOT be suppressed when this matcher
575 #: Identifiers of matchers which should NOT be suppressed when this matcher
575 #: requests to suppress all other matchers; defaults to an empty set.
576 #: requests to suppress all other matchers; defaults to an empty set.
576 do_not_suppress: NotRequired[Set[str]]
577 do_not_suppress: NotRequired[Set[str]]
577
578
578 #: Are completions already ordered and should be left as-is? default is False.
579 #: Are completions already ordered and should be left as-is? default is False.
579 ordered: NotRequired[bool]
580 ordered: NotRequired[bool]
580
581
581
582
582 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
583 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
583 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
584 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
584 """Result of new-style completion matcher."""
585 """Result of new-style completion matcher."""
585
586
586 # note: TypedDict is added again to the inheritance chain
587 # note: TypedDict is added again to the inheritance chain
587 # in order to get __orig_bases__ for documentation
588 # in order to get __orig_bases__ for documentation
588
589
589 #: List of candidate completions
590 #: List of candidate completions
590 completions: Sequence[SimpleCompletion]
591 completions: Sequence[SimpleCompletion]
591
592
592
593
593 class _JediMatcherResult(_MatcherResultBase):
594 class _JediMatcherResult(_MatcherResultBase):
594 """Matching result returned by Jedi (will be processed differently)"""
595 """Matching result returned by Jedi (will be processed differently)"""
595
596
596 #: list of candidate completions
597 #: list of candidate completions
597 completions: Iterable[_JediCompletionLike]
598 completions: Iterable[_JediCompletionLike]
598
599
599
600
600 class CompletionContext(NamedTuple):
601 @dataclass
602 class CompletionContext:
601 """Completion context provided as an argument to matchers in the Matcher API v2."""
603 """Completion context provided as an argument to matchers in the Matcher API v2."""
602
604
603 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
605 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
604 # which was not explicitly visible as an argument of the matcher, making any refactor
606 # which was not explicitly visible as an argument of the matcher, making any refactor
605 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
607 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
606 # from the completer, and make substituting them in sub-classes easier.
608 # from the completer, and make substituting them in sub-classes easier.
607
609
608 #: Relevant fragment of code directly preceding the cursor.
610 #: Relevant fragment of code directly preceding the cursor.
609 #: The extraction of token is implemented via splitter heuristic
611 #: The extraction of token is implemented via splitter heuristic
610 #: (following readline behaviour for legacy reasons), which is user configurable
612 #: (following readline behaviour for legacy reasons), which is user configurable
611 #: (by switching the greedy mode).
613 #: (by switching the greedy mode).
612 token: str
614 token: str
613
615
614 #: The full available content of the editor or buffer
616 #: The full available content of the editor or buffer
615 full_text: str
617 full_text: str
616
618
617 #: Cursor position in the line (the same for ``full_text`` and ``text``).
619 #: Cursor position in the line (the same for ``full_text`` and ``text``).
618 cursor_position: int
620 cursor_position: int
619
621
620 #: Cursor line in ``full_text``.
622 #: Cursor line in ``full_text``.
621 cursor_line: int
623 cursor_line: int
622
624
623 #: The maximum number of completions that will be used downstream.
625 #: The maximum number of completions that will be used downstream.
624 #: Matchers can use this information to abort early.
626 #: Matchers can use this information to abort early.
625 #: The built-in Jedi matcher is currently excepted from this limit.
627 #: The built-in Jedi matcher is currently excepted from this limit.
626 # If not given, return all possible completions.
628 # If not given, return all possible completions.
627 limit: Optional[int]
629 limit: Optional[int]
628
630
629 @property
631 @cached_property
630 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
631 def text_until_cursor(self) -> str:
632 def text_until_cursor(self) -> str:
632 return self.line_with_cursor[: self.cursor_position]
633 return self.line_with_cursor[: self.cursor_position]
633
634
634 @property
635 @cached_property
635 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
636 def line_with_cursor(self) -> str:
636 def line_with_cursor(self) -> str:
637 return self.full_text.split("\n")[self.cursor_line]
637 return self.full_text.split("\n")[self.cursor_line]
638
638
639
639
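# Illustrative usage of the dataclass-based ``CompletionContext`` above (the values
# are made up); the ``cached_property`` attributes are computed on first access and
# then reused on subsequent accesses::
#
#     ctx = CompletionContext(
#         token="foo.ba", full_text="x = 1\nfoo.ba", cursor_position=6,
#         cursor_line=1, limit=None,
#     )
#     ctx.line_with_cursor   # -> "foo.ba"
#     ctx.text_until_cursor  # -> "foo.ba" (cached after the first access)
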
640 #: Matcher results for API v2.
640 #: Matcher results for API v2.
641 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
641 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
642
642
643
643
644 class _MatcherAPIv1Base(Protocol):
644 class _MatcherAPIv1Base(Protocol):
645 def __call__(self, text: str) -> list[str]:
645 def __call__(self, text: str) -> list[str]:
646 """Call signature."""
646 """Call signature."""
647
647
648
648
649 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
649 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
650 #: API version
650 #: API version
651 matcher_api_version: Optional[Literal[1]]
651 matcher_api_version: Optional[Literal[1]]
652
652
653 def __call__(self, text: str) -> list[str]:
653 def __call__(self, text: str) -> list[str]:
654 """Call signature."""
654 """Call signature."""
655
655
656
656
657 #: Protocol describing Matcher API v1.
657 #: Protocol describing Matcher API v1.
658 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
658 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
659
659
660
660
661 class MatcherAPIv2(Protocol):
661 class MatcherAPIv2(Protocol):
662 """Protocol describing Matcher API v2."""
662 """Protocol describing Matcher API v2."""
663
663
664 #: API version
664 #: API version
665 matcher_api_version: Literal[2] = 2
665 matcher_api_version: Literal[2] = 2
666
666
667 def __call__(self, context: CompletionContext) -> MatcherResult:
667 def __call__(self, context: CompletionContext) -> MatcherResult:
668 """Call signature."""
668 """Call signature."""
669
669
670
670
671 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
671 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
672
672
673
673
674 def completion_matcher(
674 def completion_matcher(
675 *, priority: float = None, identifier: str = None, api_version: int = 1
675 *, priority: float = None, identifier: str = None, api_version: int = 1
676 ):
676 ):
677 """Adds attributes describing the matcher.
677 """Adds attributes describing the matcher.
678
678
679 Parameters
679 Parameters
680 ----------
680 ----------
681 priority : Optional[float]
681 priority : Optional[float]
682 The priority of the matcher, determines the order of execution of matchers.
682 The priority of the matcher, determines the order of execution of matchers.
683 Higher priority means that the matcher will be executed first. Defaults to 0.
683 Higher priority means that the matcher will be executed first. Defaults to 0.
684 identifier : Optional[str]
684 identifier : Optional[str]
685 identifier of the matcher allowing users to modify the behaviour via traitlets,
685 identifier of the matcher allowing users to modify the behaviour via traitlets,
686 and also used for debugging (will be passed as ``origin`` with the completions).
686 and also used for debugging (will be passed as ``origin`` with the completions).
687 Defaults to the matcher function's ``__qualname__``.
687 Defaults to the matcher function's ``__qualname__``.
688 api_version: Optional[int]
688 api_version: Optional[int]
689 version of the Matcher API used by this matcher.
689 version of the Matcher API used by this matcher.
690 Currently supported values are 1 and 2.
690 Currently supported values are 1 and 2.
691 Defaults to 1.
691 Defaults to 1.
692 """
692 """
693
693
694 def wrapper(func: Matcher):
694 def wrapper(func: Matcher):
695 func.matcher_priority = priority or 0
695 func.matcher_priority = priority or 0
696 func.matcher_identifier = identifier or func.__qualname__
696 func.matcher_identifier = identifier or func.__qualname__
697 func.matcher_api_version = api_version
697 func.matcher_api_version = api_version
698 if TYPE_CHECKING:
698 if TYPE_CHECKING:
699 if api_version == 1:
699 if api_version == 1:
700 func = cast(func, MatcherAPIv1)
700 func = cast(func, MatcherAPIv1)
701 elif api_version == 2:
701 elif api_version == 2:
702 func = cast(func, MatcherAPIv2)
702 func = cast(func, MatcherAPIv2)
703 return func
703 return func
704
704
705 return wrapper
705 return wrapper
706
706
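# A minimal sketch of registering a custom API v1 matcher with the decorator above
# (the function name and the returned words are made up)::
#
#     @completion_matcher(identifier="color_matcher_v1")
#     def color_matcher_v1(text: str) -> list[str]:
#         return [w for w in ("red", "green", "blue") if w.startswith(text)]
#
# The decorated function can then be appended to ``IPCompleter.custom_matchers``.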
707
707
708 def _get_matcher_priority(matcher: Matcher):
708 def _get_matcher_priority(matcher: Matcher):
709 return getattr(matcher, "matcher_priority", 0)
709 return getattr(matcher, "matcher_priority", 0)
710
710
711
711
712 def _get_matcher_id(matcher: Matcher):
712 def _get_matcher_id(matcher: Matcher):
713 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
713 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
714
714
715
715
716 def _get_matcher_api_version(matcher):
716 def _get_matcher_api_version(matcher):
717 return getattr(matcher, "matcher_api_version", 1)
717 return getattr(matcher, "matcher_api_version", 1)
718
718
719
719
720 context_matcher = partial(completion_matcher, api_version=2)
720 context_matcher = partial(completion_matcher, api_version=2)
721
721
722
722
723 _IC = Iterable[Completion]
723 _IC = Iterable[Completion]
724
724
725
725
726 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
726 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
727 """
727 """
728 Deduplicate a set of completions.
728 Deduplicate a set of completions.
729
729
730 .. warning::
730 .. warning::
731
731
732 Unstable
732 Unstable
733
733
734 This function is unstable, API may change without warning.
734 This function is unstable, API may change without warning.
735
735
736 Parameters
736 Parameters
737 ----------
737 ----------
738 text : str
738 text : str
739 text that should be completed.
739 text that should be completed.
740 completions : Iterator[Completion]
740 completions : Iterator[Completion]
741 iterator over the completions to deduplicate
741 iterator over the completions to deduplicate
742
742
743 Yields
743 Yields
744 ------
744 ------
745 `Completions` objects
745 `Completions` objects
746 Completions coming from multiple sources may be different but end up having
746 Completions coming from multiple sources may be different but end up having
747 the same effect when applied to ``text``. If this is the case, this will
747 the same effect when applied to ``text``. If this is the case, this will
748 consider completions as equal and only emit the first encountered.
748 consider completions as equal and only emit the first encountered.
749 Not folded into `completions()` yet for debugging purposes, and to detect when
749 Not folded into `completions()` yet for debugging purposes, and to detect when
750 the IPython completer does return things that Jedi does not, but it should be
750 the IPython completer does return things that Jedi does not, but it should be
751 at some point.
751 at some point.
752 """
752 """
753 completions = list(completions)
753 completions = list(completions)
754 if not completions:
754 if not completions:
755 return
755 return
756
756
757 new_start = min(c.start for c in completions)
757 new_start = min(c.start for c in completions)
758 new_end = max(c.end for c in completions)
758 new_end = max(c.end for c in completions)
759
759
760 seen = set()
760 seen = set()
761 for c in completions:
761 for c in completions:
762 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
762 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
763 if new_text not in seen:
763 if new_text not in seen:
764 yield c
764 yield c
765 seen.add(new_text)
765 seen.add(new_text)
766
766
767
767
768 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
768 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
769 """
769 """
770 Rectify a set of completions to all have the same ``start`` and ``end``
770 Rectify a set of completions to all have the same ``start`` and ``end``
771
771
772 .. warning::
772 .. warning::
773
773
774 Unstable
774 Unstable
775
775
776 This function is unstable, API may change without warning.
776 This function is unstable, API may change without warning.
777 It will also raise unless used in the proper context manager.
777 It will also raise unless used in the proper context manager.
778
778
779 Parameters
779 Parameters
780 ----------
780 ----------
781 text : str
781 text : str
782 text that should be completed.
782 text that should be completed.
783 completions : Iterator[Completion]
783 completions : Iterator[Completion]
784 iterator over the completions to rectify
784 iterator over the completions to rectify
785 _debug : bool
785 _debug : bool
786 Log failed completion
786 Log failed completion
787
787
788 Notes
788 Notes
789 -----
789 -----
790 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
790 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
791 the Jupyter Protocol requires them to behave like so. This will readjust
791 the Jupyter Protocol requires them to behave like so. This will readjust
792 the completion to have the same ``start`` and ``end`` by padding both
792 the completion to have the same ``start`` and ``end`` by padding both
793 extremities with surrounding text.
793 extremities with surrounding text.
794
794
795 During stabilisation this should support a ``_debug`` option to log which
795 During stabilisation this should support a ``_debug`` option to log which
796 completions are returned by the IPython completer and not found in Jedi, in
796 completions are returned by the IPython completer and not found in Jedi, in
797 order to make upstream bug reports.
797 order to make upstream bug reports.
798 """
798 """
799 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
799 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
800 "It may change without warnings. "
800 "It may change without warnings. "
801 "Use in corresponding context manager.",
801 "Use in corresponding context manager.",
802 category=ProvisionalCompleterWarning, stacklevel=2)
802 category=ProvisionalCompleterWarning, stacklevel=2)
803
803
804 completions = list(completions)
804 completions = list(completions)
805 if not completions:
805 if not completions:
806 return
806 return
807 starts = (c.start for c in completions)
807 starts = (c.start for c in completions)
808 ends = (c.end for c in completions)
808 ends = (c.end for c in completions)
809
809
810 new_start = min(starts)
810 new_start = min(starts)
811 new_end = max(ends)
811 new_end = max(ends)
812
812
813 seen_jedi = set()
813 seen_jedi = set()
814 seen_python_matches = set()
814 seen_python_matches = set()
815 for c in completions:
815 for c in completions:
816 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
816 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
817 if c._origin == 'jedi':
817 if c._origin == 'jedi':
818 seen_jedi.add(new_text)
818 seen_jedi.add(new_text)
819 elif c._origin == 'IPCompleter.python_matches':
819 elif c._origin == 'IPCompleter.python_matches':
820 seen_python_matches.add(new_text)
820 seen_python_matches.add(new_text)
821 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
821 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
822 diff = seen_python_matches.difference(seen_jedi)
822 diff = seen_python_matches.difference(seen_jedi)
823 if diff and _debug:
823 if diff and _debug:
824 print('IPython.python matches have extras:', diff)
824 print('IPython.python matches have extras:', diff)
825
825
826
826
827 if sys.platform == 'win32':
827 if sys.platform == 'win32':
828 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
828 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
829 else:
829 else:
830 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
830 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
831
831
832 GREEDY_DELIMS = ' =\r\n'
832 GREEDY_DELIMS = ' =\r\n'
833
833
834
834
835 class CompletionSplitter(object):
835 class CompletionSplitter(object):
836 """An object to split an input line in a manner similar to readline.
836 """An object to split an input line in a manner similar to readline.
837
837
838 By having our own implementation, we can expose readline-like completion in
838 By having our own implementation, we can expose readline-like completion in
839 a uniform manner to all frontends. This object only needs to be given the
839 a uniform manner to all frontends. This object only needs to be given the
840 line of text to be split and the cursor position on said line, and it
840 line of text to be split and the cursor position on said line, and it
841 returns the 'word' to be completed on at the cursor after splitting the
841 returns the 'word' to be completed on at the cursor after splitting the
842 entire line.
842 entire line.
843
843
844 What characters are used as splitting delimiters can be controlled by
844 What characters are used as splitting delimiters can be controlled by
845 setting the ``delims`` attribute (this is a property that internally
845 setting the ``delims`` attribute (this is a property that internally
846 automatically builds the necessary regular expression)"""
846 automatically builds the necessary regular expression)"""
847
847
848 # Private interface
848 # Private interface
849
849
850 # A string of delimiter characters. The default value makes sense for
850 # A string of delimiter characters. The default value makes sense for
851 # IPython's most typical usage patterns.
851 # IPython's most typical usage patterns.
852 _delims = DELIMS
852 _delims = DELIMS
853
853
854 # The expression (a normal string) to be compiled into a regular expression
854 # The expression (a normal string) to be compiled into a regular expression
855 # for actual splitting. We store it as an attribute mostly for ease of
855 # for actual splitting. We store it as an attribute mostly for ease of
856 # debugging, since this type of code can be so tricky to debug.
856 # debugging, since this type of code can be so tricky to debug.
857 _delim_expr = None
857 _delim_expr = None
858
858
859 # The regular expression that does the actual splitting
859 # The regular expression that does the actual splitting
860 _delim_re = None
860 _delim_re = None
861
861
862 def __init__(self, delims=None):
862 def __init__(self, delims=None):
863 delims = CompletionSplitter._delims if delims is None else delims
863 delims = CompletionSplitter._delims if delims is None else delims
864 self.delims = delims
864 self.delims = delims
865
865
866 @property
866 @property
867 def delims(self):
867 def delims(self):
868 """Return the string of delimiter characters."""
868 """Return the string of delimiter characters."""
869 return self._delims
869 return self._delims
870
870
871 @delims.setter
871 @delims.setter
872 def delims(self, delims):
872 def delims(self, delims):
873 """Set the delimiters for line splitting."""
873 """Set the delimiters for line splitting."""
874 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
874 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
875 self._delim_re = re.compile(expr)
875 self._delim_re = re.compile(expr)
876 self._delims = delims
876 self._delims = delims
877 self._delim_expr = expr
877 self._delim_expr = expr
878
878
879 def split_line(self, line, cursor_pos=None):
879 def split_line(self, line, cursor_pos=None):
880 """Split a line of text with a cursor at the given position.
880 """Split a line of text with a cursor at the given position.
881 """
881 """
882 l = line if cursor_pos is None else line[:cursor_pos]
882 l = line if cursor_pos is None else line[:cursor_pos]
883 return self._delim_re.split(l)[-1]
883 return self._delim_re.split(l)[-1]
884
884
885
885
886
886
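# A minimal usage sketch for CompletionSplitter (the inputs are illustrative):
# split_line() returns the fragment between the last delimiter and the cursor,
# which is the token handed to the completion machinery.
#
#     >>> sp = CompletionSplitter()
#     >>> sp.split_line("print(foo.ba")
#     'foo.ba'
#     >>> sp.split_line("a = b + cde", cursor_pos=5)
#     'b'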
887 class Completer(Configurable):
887 class Completer(Configurable):
888
888
889 greedy = Bool(False,
889 greedy = Bool(False,
890 help="""Activate greedy completion
890 help="""Activate greedy completion
891 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
891 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
892
892
893 This will enable completion on elements of lists, results of function calls, etc.,
893 This will enable completion on elements of lists, results of function calls, etc.,
894 but can be unsafe because the code is actually evaluated on TAB.
894 but can be unsafe because the code is actually evaluated on TAB.
895 """,
895 """,
896 ).tag(config=True)
896 ).tag(config=True)
897
897
898 use_jedi = Bool(default_value=JEDI_INSTALLED,
898 use_jedi = Bool(default_value=JEDI_INSTALLED,
899 help="Experimental: Use Jedi to generate autocompletions. "
899 help="Experimental: Use Jedi to generate autocompletions. "
900 "Default to True if jedi is installed.").tag(config=True)
900 "Default to True if jedi is installed.").tag(config=True)
901
901
902 jedi_compute_type_timeout = Int(default_value=400,
902 jedi_compute_type_timeout = Int(default_value=400,
903 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
903 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
904 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
904 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
905 performance by preventing Jedi from building its cache.
905 performance by preventing Jedi from building its cache.
906 """).tag(config=True)
906 """).tag(config=True)
907
907
908 debug = Bool(default_value=False,
908 debug = Bool(default_value=False,
909 help='Enable debug for the Completer. Mostly print extra '
909 help='Enable debug for the Completer. Mostly print extra '
910 'information for experimental jedi integration.')\
910 'information for experimental jedi integration.')\
911 .tag(config=True)
911 .tag(config=True)
912
912
913 backslash_combining_completions = Bool(True,
913 backslash_combining_completions = Bool(True,
914 help="Enable unicode completions, e.g. \\alpha<tab> . "
914 help="Enable unicode completions, e.g. \\alpha<tab> . "
915 "Includes completion of latex commands, unicode names, and expanding "
915 "Includes completion of latex commands, unicode names, and expanding "
916 "unicode characters back to latex commands.").tag(config=True)
916 "unicode characters back to latex commands.").tag(config=True)
917
917
918 def __init__(self, namespace=None, global_namespace=None, **kwargs):
918 def __init__(self, namespace=None, global_namespace=None, **kwargs):
919 """Create a new completer for the command line.
919 """Create a new completer for the command line.
920
920
921 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
921 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
922
922
923 If unspecified, the default namespace where completions are performed
923 If unspecified, the default namespace where completions are performed
924 is __main__ (technically, __main__.__dict__). Namespaces should be
924 is __main__ (technically, __main__.__dict__). Namespaces should be
925 given as dictionaries.
925 given as dictionaries.
926
926
927 An optional second namespace can be given. This allows the completer
927 An optional second namespace can be given. This allows the completer
928 to handle cases where both the local and global scopes need to be
928 to handle cases where both the local and global scopes need to be
929 distinguished.
929 distinguished.
930 """
930 """
931
931
932 # Don't bind to namespace quite yet, but flag whether the user wants a
932 # Don't bind to namespace quite yet, but flag whether the user wants a
933 # specific namespace or to use __main__.__dict__. This will allow us
933 # specific namespace or to use __main__.__dict__. This will allow us
934 # to bind to __main__.__dict__ at completion time, not now.
934 # to bind to __main__.__dict__ at completion time, not now.
935 if namespace is None:
935 if namespace is None:
936 self.use_main_ns = True
936 self.use_main_ns = True
937 else:
937 else:
938 self.use_main_ns = False
938 self.use_main_ns = False
939 self.namespace = namespace
939 self.namespace = namespace
940
940
941 # The global namespace, if given, can be bound directly
941 # The global namespace, if given, can be bound directly
942 if global_namespace is None:
942 if global_namespace is None:
943 self.global_namespace = {}
943 self.global_namespace = {}
944 else:
944 else:
945 self.global_namespace = global_namespace
945 self.global_namespace = global_namespace
946
946
947 self.custom_matchers = []
947 self.custom_matchers = []
948
948
949 super(Completer, self).__init__(**kwargs)
949 super(Completer, self).__init__(**kwargs)
950
950
951 def complete(self, text, state):
951 def complete(self, text, state):
952 """Return the next possible completion for 'text'.
952 """Return the next possible completion for 'text'.
953
953
954 This is called successively with state == 0, 1, 2, ... until it
954 This is called successively with state == 0, 1, 2, ... until it
955 returns None. The completion should begin with 'text'.
955 returns None. The completion should begin with 'text'.
956
956
957 """
957 """
958 if self.use_main_ns:
958 if self.use_main_ns:
959 self.namespace = __main__.__dict__
959 self.namespace = __main__.__dict__
960
960
961 if state == 0:
961 if state == 0:
962 if "." in text:
962 if "." in text:
963 self.matches = self.attr_matches(text)
963 self.matches = self.attr_matches(text)
964 else:
964 else:
965 self.matches = self.global_matches(text)
965 self.matches = self.global_matches(text)
966 try:
966 try:
967 return self.matches[state]
967 return self.matches[state]
968 except IndexError:
968 except IndexError:
969 return None
969 return None
970
970
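# A minimal sketch of the readline-style protocol implemented by complete():
# it is called with state == 0, 1, 2, ... until it returns None. The namespace
# contents below are illustrative.
#
#     >>> c = Completer(namespace={'alpha': 1, 'alphabet': 2})
#     >>> c.complete('alph', 0)
#     'alpha'
#     >>> c.complete('alph', 1)
#     'alphabet'
#     >>> c.complete('alph', 2)   # returns None: no more matches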
971 def global_matches(self, text):
971 def global_matches(self, text):
972 """Compute matches when text is a simple name.
972 """Compute matches when text is a simple name.
973
973
974 Return a list of all keywords, built-in functions and names currently
974 Return a list of all keywords, built-in functions and names currently
975 defined in self.namespace or self.global_namespace that match.
975 defined in self.namespace or self.global_namespace that match.
976
976
977 """
977 """
978 matches = []
978 matches = []
979 match_append = matches.append
979 match_append = matches.append
980 n = len(text)
980 n = len(text)
981 for lst in [
981 for lst in [
982 keyword.kwlist,
982 keyword.kwlist,
983 builtin_mod.__dict__.keys(),
983 builtin_mod.__dict__.keys(),
984 list(self.namespace.keys()),
984 list(self.namespace.keys()),
985 list(self.global_namespace.keys()),
985 list(self.global_namespace.keys()),
986 ]:
986 ]:
987 for word in lst:
987 for word in lst:
988 if word[:n] == text and word != "__builtins__":
988 if word[:n] == text and word != "__builtins__":
989 match_append(word)
989 match_append(word)
990
990
991 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
991 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
992 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
992 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
993 shortened = {
993 shortened = {
994 "_".join([sub[0] for sub in word.split("_")]): word
994 "_".join([sub[0] for sub in word.split("_")]): word
995 for word in lst
995 for word in lst
996 if snake_case_re.match(word)
996 if snake_case_re.match(word)
997 }
997 }
998 for word in shortened.keys():
998 for word in shortened.keys():
999 if word[:n] == text and word != "__builtins__":
999 if word[:n] == text and word != "__builtins__":
1000 match_append(shortened[word])
1000 match_append(shortened[word])
1001 return matches
1001 return matches
1002
1002
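# Illustrative sketch of global_matches(), including the snake_case
# abbreviation matching above; the namespace is hypothetical.
#
#     >>> c = Completer(namespace={'foo_bar_baz': 1, 'foobar': 2})
#     >>> c.global_matches('foo')
#     ['foo_bar_baz', 'foobar']
#     >>> c.global_matches('f_b_b')   # initials of the underscore-separated parts
#     ['foo_bar_baz']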
1003 def attr_matches(self, text):
1003 def attr_matches(self, text):
1004 """Compute matches when text contains a dot.
1004 """Compute matches when text contains a dot.
1005
1005
1006 Assuming the text is of the form NAME.NAME....[NAME], and is
1006 Assuming the text is of the form NAME.NAME....[NAME], and is
1007 evaluatable in self.namespace or self.global_namespace, it will be
1007 evaluatable in self.namespace or self.global_namespace, it will be
1008 evaluated and its attributes (as revealed by dir()) are used as
1008 evaluated and its attributes (as revealed by dir()) are used as
1009 possible completions. (For class instances, class members are
1009 possible completions. (For class instances, class members are
1010 also considered.)
1010 also considered.)
1011
1011
1012 WARNING: this can still invoke arbitrary C code, if an object
1012 WARNING: this can still invoke arbitrary C code, if an object
1013 with a __getattr__ hook is evaluated.
1013 with a __getattr__ hook is evaluated.
1014
1014
1015 """
1015 """
1016
1016
1017 # Another option, seems to work great. Catches things like ''.<tab>
1017 # Another option, seems to work great. Catches things like ''.<tab>
1018 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
1018 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
1019
1019
1020 if m:
1020 if m:
1021 expr, attr = m.group(1, 3)
1021 expr, attr = m.group(1, 3)
1022 elif self.greedy:
1022 elif self.greedy:
1023 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1023 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1024 if not m2:
1024 if not m2:
1025 return []
1025 return []
1026 expr, attr = m2.group(1,2)
1026 expr, attr = m2.group(1,2)
1027 else:
1027 else:
1028 return []
1028 return []
1029
1029
1030 try:
1030 try:
1031 obj = eval(expr, self.namespace)
1031 obj = eval(expr, self.namespace)
1032 except:
1032 except:
1033 try:
1033 try:
1034 obj = eval(expr, self.global_namespace)
1034 obj = eval(expr, self.global_namespace)
1035 except:
1035 except:
1036 return []
1036 return []
1037
1037
1038 if self.limit_to__all__ and hasattr(obj, '__all__'):
1038 if self.limit_to__all__ and hasattr(obj, '__all__'):
1039 words = get__all__entries(obj)
1039 words = get__all__entries(obj)
1040 else:
1040 else:
1041 words = dir2(obj)
1041 words = dir2(obj)
1042
1042
1043 try:
1043 try:
1044 words = generics.complete_object(obj, words)
1044 words = generics.complete_object(obj, words)
1045 except TryNext:
1045 except TryNext:
1046 pass
1046 pass
1047 except AssertionError:
1047 except AssertionError:
1048 raise
1048 raise
1049 except Exception:
1049 except Exception:
1050 # Silence errors from completion function
1050 # Silence errors from completion function
1051 #raise # dbg
1051 #raise # dbg
1052 pass
1052 pass
1053 # Build match list to return
1053 # Build match list to return
1054 n = len(attr)
1054 n = len(attr)
1055 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
1055 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
1056
1056
1057
1057
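# Conceptually, attr_matches('s.asc') splits the text into expr='s' and
# attr='asc', evaluates expr in the namespaces and filters the object's
# attributes by the attr prefix. A rough, hypothetical equivalent using dir():
#
#     >>> import string
#     >>> expr, attr = 's', 'asc'
#     >>> obj = eval(expr, {'s': string})
#     >>> ['%s.%s' % (expr, w) for w in dir(obj) if w.startswith(attr)]
#     ['s.ascii_letters', 's.ascii_lowercase', 's.ascii_uppercase']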
1058 def get__all__entries(obj):
1058 def get__all__entries(obj):
1059 """returns the strings in the __all__ attribute"""
1059 """returns the strings in the __all__ attribute"""
1060 try:
1060 try:
1061 words = getattr(obj, '__all__')
1061 words = getattr(obj, '__all__')
1062 except:
1062 except:
1063 return []
1063 return []
1064
1064
1065 return [w for w in words if isinstance(w, str)]
1065 return [w for w in words if isinstance(w, str)]
1066
1066
1067
1067
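# Tiny sketch of get__all__entries() with a hypothetical module-like object;
# non-string entries are dropped:
#
#     >>> class FakeModule: __all__ = ['a', 'b', 3]
#     >>> get__all__entries(FakeModule)
#     ['a', 'b']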
1068 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
1068 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
1069 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
1069 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
1070 """Used by dict_key_matches, matching the prefix to a list of keys
1070 """Used by dict_key_matches, matching the prefix to a list of keys
1071
1071
1072 Parameters
1072 Parameters
1073 ----------
1073 ----------
1074 keys
1074 keys
1075 list of keys in dictionary currently being completed.
1075 list of keys in dictionary currently being completed.
1076 prefix
1076 prefix
1077 Part of the text already typed by the user. E.g. `mydict[b'fo`
1077 Part of the text already typed by the user. E.g. `mydict[b'fo`
1078 delims
1078 delims
1079 String of delimiters to consider when finding the current key.
1079 String of delimiters to consider when finding the current key.
1080 extra_prefix : optional
1080 extra_prefix : optional
1081 Part of the text already typed in multi-key index cases. E.g. for
1081 Part of the text already typed in multi-key index cases. E.g. for
1082 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1082 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1083
1083
1084 Returns
1084 Returns
1085 -------
1085 -------
1086 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1086 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1087 ``quote`` being the quote that needs to be used to close the current string,
1087 ``quote`` being the quote that needs to be used to close the current string,
1088 ``token_start`` the position where the replacement should start occurring,
1088 ``token_start`` the position where the replacement should start occurring,
1089 ``matched``, a list of replacement/completion strings.
1089 ``matched``, a list of replacement/completion strings.
1090
1090
1091 """
1091 """
1092 prefix_tuple = extra_prefix if extra_prefix else ()
1092 prefix_tuple = extra_prefix if extra_prefix else ()
1093 Nprefix = len(prefix_tuple)
1093 Nprefix = len(prefix_tuple)
1094 def filter_prefix_tuple(key):
1094 def filter_prefix_tuple(key):
1095 # Reject too short keys
1095 # Reject too short keys
1096 if len(key) <= Nprefix:
1096 if len(key) <= Nprefix:
1097 return False
1097 return False
1098 # Reject keys with non str/bytes in it
1098 # Reject keys with non str/bytes in it
1099 for k in key:
1099 for k in key:
1100 if not isinstance(k, (str, bytes)):
1100 if not isinstance(k, (str, bytes)):
1101 return False
1101 return False
1102 # Reject keys that do not match the prefix
1102 # Reject keys that do not match the prefix
1103 for k, pt in zip(key, prefix_tuple):
1103 for k, pt in zip(key, prefix_tuple):
1104 if k != pt:
1104 if k != pt:
1105 return False
1105 return False
1106 # All checks passed!
1106 # All checks passed!
1107 return True
1107 return True
1108
1108
1109 filtered_keys:List[Union[str,bytes]] = []
1109 filtered_keys:List[Union[str,bytes]] = []
1110 def _add_to_filtered_keys(key):
1110 def _add_to_filtered_keys(key):
1111 if isinstance(key, (str, bytes)):
1111 if isinstance(key, (str, bytes)):
1112 filtered_keys.append(key)
1112 filtered_keys.append(key)
1113
1113
1114 for k in keys:
1114 for k in keys:
1115 if isinstance(k, tuple):
1115 if isinstance(k, tuple):
1116 if filter_prefix_tuple(k):
1116 if filter_prefix_tuple(k):
1117 _add_to_filtered_keys(k[Nprefix])
1117 _add_to_filtered_keys(k[Nprefix])
1118 else:
1118 else:
1119 _add_to_filtered_keys(k)
1119 _add_to_filtered_keys(k)
1120
1120
1121 if not prefix:
1121 if not prefix:
1122 return '', 0, [repr(k) for k in filtered_keys]
1122 return '', 0, [repr(k) for k in filtered_keys]
1123 quote_match = re.search('["\']', prefix)
1123 quote_match = re.search('["\']', prefix)
1124 assert quote_match is not None # silence mypy
1124 assert quote_match is not None # silence mypy
1125 quote = quote_match.group()
1125 quote = quote_match.group()
1126 try:
1126 try:
1127 prefix_str = eval(prefix + quote, {})
1127 prefix_str = eval(prefix + quote, {})
1128 except Exception:
1128 except Exception:
1129 return '', 0, []
1129 return '', 0, []
1130
1130
1131 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1131 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1132 token_match = re.search(pattern, prefix, re.UNICODE)
1132 token_match = re.search(pattern, prefix, re.UNICODE)
1133 assert token_match is not None # silence mypy
1133 assert token_match is not None # silence mypy
1134 token_start = token_match.start()
1134 token_start = token_match.start()
1135 token_prefix = token_match.group()
1135 token_prefix = token_match.group()
1136
1136
1137 matched:List[str] = []
1137 matched:List[str] = []
1138 for key in filtered_keys:
1138 for key in filtered_keys:
1139 try:
1139 try:
1140 if not key.startswith(prefix_str):
1140 if not key.startswith(prefix_str):
1141 continue
1141 continue
1142 except (AttributeError, TypeError, UnicodeError):
1142 except (AttributeError, TypeError, UnicodeError):
1143 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1143 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1144 continue
1144 continue
1145
1145
1146 # reformat remainder of key to begin with prefix
1146 # reformat remainder of key to begin with prefix
1147 rem = key[len(prefix_str):]
1147 rem = key[len(prefix_str):]
1148 # force repr wrapped in '
1148 # force repr wrapped in '
1149 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1149 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1150 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1150 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1151 if quote == '"':
1151 if quote == '"':
1152 # The entered prefix is quoted with ",
1152 # The entered prefix is quoted with ",
1153 # but the match is quoted with '.
1153 # but the match is quoted with '.
1154 # A contained " hence needs escaping for comparison:
1154 # A contained " hence needs escaping for comparison:
1155 rem_repr = rem_repr.replace('"', '\\"')
1155 rem_repr = rem_repr.replace('"', '\\"')
1156
1156
1157 # then reinsert prefix from start of token
1157 # then reinsert prefix from start of token
1158 matched.append('%s%s' % (token_prefix, rem_repr))
1158 matched.append('%s%s' % (token_prefix, rem_repr))
1159 return quote, token_start, matched
1159 return quote, token_start, matched
1160
1160
1161
1161
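# Illustrative sketch of match_dict_keys(); the keys and typed prefixes below
# are made up. Note how bytes keys are skipped for a str prefix, and how
# extra_prefix selects the matching branch of tuple keys.
#
#     >>> match_dict_keys(['foo', 'foobar', b'baz'], "'foo", delims=DELIMS)
#     ("'", 1, ['foo', 'foobar'])
#     >>> match_dict_keys([('foo', 'bar'), ('foo', 'baz')], "'ba", delims=DELIMS,
#     ...                 extra_prefix=('foo',))
#     ("'", 1, ['bar', 'baz'])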
1162 def cursor_to_position(text:str, line:int, column:int)->int:
1162 def cursor_to_position(text:str, line:int, column:int)->int:
1163 """
1163 """
1164 Convert the (line,column) position of the cursor in text to an offset in a
1164 Convert the (line,column) position of the cursor in text to an offset in a
1165 string.
1165 string.
1166
1166
1167 Parameters
1167 Parameters
1168 ----------
1168 ----------
1169 text : str
1169 text : str
1170 The text in which to calculate the cursor offset
1170 The text in which to calculate the cursor offset
1171 line : int
1171 line : int
1172 Line of the cursor; 0-indexed
1172 Line of the cursor; 0-indexed
1173 column : int
1173 column : int
1174 Column of the cursor 0-indexed
1174 Column of the cursor 0-indexed
1175
1175
1176 Returns
1176 Returns
1177 -------
1177 -------
1178 Position of the cursor in ``text``, 0-indexed.
1178 Position of the cursor in ``text``, 0-indexed.
1179
1179
1180 See Also
1180 See Also
1181 --------
1181 --------
1182 position_to_cursor : reciprocal of this function
1182 position_to_cursor : reciprocal of this function
1183
1183
1184 """
1184 """
1185 lines = text.split('\n')
1185 lines = text.split('\n')
1186 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1186 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1187
1187
1188 return sum(len(l) + 1 for l in lines[:line]) + column
1188 return sum(len(l) + 1 for l in lines[:line]) + column
1189
1189
1190 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1190 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1191 """
1191 """
1192 Convert the position of the cursor in text (0 indexed) to a line
1192 Convert the position of the cursor in text (0 indexed) to a line
1193 number(0-indexed) and a column number (0-indexed) pair
1193 number(0-indexed) and a column number (0-indexed) pair
1194
1194
1195 Position should be a valid position in ``text``.
1195 Position should be a valid position in ``text``.
1196
1196
1197 Parameters
1197 Parameters
1198 ----------
1198 ----------
1199 text : str
1199 text : str
1200 The text in which to calculate the cursor offset
1200 The text in which to calculate the cursor offset
1201 offset : int
1201 offset : int
1202 Position of the cursor in ``text``, 0-indexed.
1202 Position of the cursor in ``text``, 0-indexed.
1203
1203
1204 Returns
1204 Returns
1205 -------
1205 -------
1206 (line, column) : (int, int)
1206 (line, column) : (int, int)
1207 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1207 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1208
1208
1209 See Also
1209 See Also
1210 --------
1210 --------
1211 cursor_to_position : reciprocal of this function
1211 cursor_to_position : reciprocal of this function
1212
1212
1213 """
1213 """
1214
1214
1215 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1215 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1216
1216
1217 before = text[:offset]
1217 before = text[:offset]
1218 blines = before.split('\n') # ! splitlines trims trailing \n
1218 blines = before.split('\n') # ! splitlines trims trailing \n
1219 line = before.count('\n')
1219 line = before.count('\n')
1220 col = len(blines[-1])
1220 col = len(blines[-1])
1221 return line, col
1221 return line, col
1222
1222
1223
1223
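# A small sketch showing that cursor_to_position() and position_to_cursor()
# are reciprocal; the text is illustrative.
#
#     >>> text = 'ab\ncd\nef'
#     >>> cursor_to_position(text, 2, 1)
#     7
#     >>> position_to_cursor(text, 7)
#     (2, 1)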
1224 def _safe_isinstance(obj, module, class_name):
1224 def _safe_isinstance(obj, module, class_name):
1225 """Checks if obj is an instance of module.class_name if loaded
1225 """Checks if obj is an instance of module.class_name if loaded
1226 """
1226 """
1227 return (module in sys.modules and
1227 return (module in sys.modules and
1228 isinstance(obj, getattr(import_module(module), class_name)))
1228 isinstance(obj, getattr(import_module(module), class_name)))
1229
1229
1230
1230
1231 @context_matcher()
1231 @context_matcher()
1232 def back_unicode_name_matcher(context: CompletionContext):
1232 def back_unicode_name_matcher(context: CompletionContext):
1233 """Match Unicode characters back to Unicode name
1233 """Match Unicode characters back to Unicode name
1234
1234
1235 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1235 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1236 """
1236 """
1237 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1237 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1238 return _convert_matcher_v1_result_to_v2(
1238 return _convert_matcher_v1_result_to_v2(
1239 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1239 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1240 )
1240 )
1241
1241
1242
1242
1243 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1243 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1244 """Match Unicode characters back to Unicode name
1244 """Match Unicode characters back to Unicode name
1245
1245
1246 This does ``β˜ƒ`` -> ``\\snowman``
1246 This does ``β˜ƒ`` -> ``\\snowman``
1247
1247
1248 Note that snowman is not a valid Python 3 combining character, but it will still be expanded.
1248 Note that snowman is not a valid Python 3 combining character, but it will still be expanded.
1249 It will not, however, be recombined back into the snowman character by the completion machinery.
1249 It will not, however, be recombined back into the snowman character by the completion machinery.
1250
1250
1251 Nor will this back-complete standard escape sequences like \\n, \\b ...
1251 Nor will this back-complete standard escape sequences like \\n, \\b ...
1252
1252
1253 .. deprecated:: 8.6
1253 .. deprecated:: 8.6
1254 You can use :meth:`back_unicode_name_matcher` instead.
1254 You can use :meth:`back_unicode_name_matcher` instead.
1255
1255
1256 Returns
1256 Returns
1257 -------
1257 -------
1258
1258
1259 Return a tuple with two elements:
1259 Return a tuple with two elements:
1260
1260
1261 - The Unicode character that was matched (preceded by a backslash), or an
1261 - The Unicode character that was matched (preceded by a backslash), or an
1262 empty string,
1262 empty string,
1263 - a one-element sequence with the name of the matched Unicode character,
1263 - a one-element sequence with the name of the matched Unicode character,
1264 preceded by a backslash, or an empty sequence if there is no match.
1264 preceded by a backslash, or an empty sequence if there is no match.
1265 """
1265 """
1266 if len(text)<2:
1266 if len(text)<2:
1267 return '', ()
1267 return '', ()
1268 maybe_slash = text[-2]
1268 maybe_slash = text[-2]
1269 if maybe_slash != '\\':
1269 if maybe_slash != '\\':
1270 return '', ()
1270 return '', ()
1271
1271
1272 char = text[-1]
1272 char = text[-1]
1273 # no expand on quote for completion in strings.
1273 # no expand on quote for completion in strings.
1274 # nor backcomplete standard ascii keys
1274 # nor backcomplete standard ascii keys
1275 if char in string.ascii_letters or char in ('"',"'"):
1275 if char in string.ascii_letters or char in ('"',"'"):
1276 return '', ()
1276 return '', ()
1277 try :
1277 try :
1278 unic = unicodedata.name(char)
1278 unic = unicodedata.name(char)
1279 return '\\'+char,('\\'+unic,)
1279 return '\\'+char,('\\'+unic,)
1280 except KeyError:
1280 except KeyError:
1281 pass
1281 pass
1282 return '', ()
1282 return '', ()
1283
1283
1284
1284
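# Illustrative behaviour of back_unicode_name_matches(); the characters are
# examples only:
#
#     >>> back_unicode_name_matches('\\β˜ƒ')       # backslash followed by a snowman
#     ('\\β˜ƒ', ('\\SNOWMAN',))
#     >>> back_unicode_name_matches('abc')       # no preceding backslash
#     ('', ())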
1285 @context_matcher()
1285 @context_matcher()
1286 def back_latex_name_matcher(context: CompletionContext):
1286 def back_latex_name_matcher(context: CompletionContext):
1287 """Match latex characters back to unicode name
1287 """Match latex characters back to unicode name
1288
1288
1289 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1289 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1290 """
1290 """
1291 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1291 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1292 return _convert_matcher_v1_result_to_v2(
1292 return _convert_matcher_v1_result_to_v2(
1293 matches, type="latex", fragment=fragment, suppress_if_matches=True
1293 matches, type="latex", fragment=fragment, suppress_if_matches=True
1294 )
1294 )
1295
1295
1296
1296
1297 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1297 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1298 """Match latex characters back to unicode name
1298 """Match latex characters back to unicode name
1299
1299
1300 This does ``\\β„΅`` -> ``\\aleph``
1300 This does ``\\β„΅`` -> ``\\aleph``
1301
1301
1302 .. deprecated:: 8.6
1302 .. deprecated:: 8.6
1303 You can use :meth:`back_latex_name_matcher` instead.
1303 You can use :meth:`back_latex_name_matcher` instead.
1304 """
1304 """
1305 if len(text)<2:
1305 if len(text)<2:
1306 return '', ()
1306 return '', ()
1307 maybe_slash = text[-2]
1307 maybe_slash = text[-2]
1308 if maybe_slash != '\\':
1308 if maybe_slash != '\\':
1309 return '', ()
1309 return '', ()
1310
1310
1311
1311
1312 char = text[-1]
1312 char = text[-1]
1313 # no expand on quote for completion in strings.
1313 # no expand on quote for completion in strings.
1314 # nor backcomplete standard ascii keys
1314 # nor backcomplete standard ascii keys
1315 if char in string.ascii_letters or char in ('"',"'"):
1315 if char in string.ascii_letters or char in ('"',"'"):
1316 return '', ()
1316 return '', ()
1317 try :
1317 try :
1318 latex = reverse_latex_symbol[char]
1318 latex = reverse_latex_symbol[char]
1319 # '\\' replace the \ as well
1319 # '\\' replace the \ as well
1320 return '\\'+char,[latex]
1320 return '\\'+char,[latex]
1321 except KeyError:
1321 except KeyError:
1322 pass
1322 pass
1323 return '', ()
1323 return '', ()
1324
1324
1325
1325
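# Illustrative behaviour of back_latex_name_matches(), assuming 'β„΅' has an
# entry in reverse_latex_symbol (as the docstring above suggests):
#
#     >>> back_latex_name_matches('x = \\β„΅')
#     ('\\β„΅', ['\\aleph'])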
1326 def _formatparamchildren(parameter) -> str:
1326 def _formatparamchildren(parameter) -> str:
1327 """
1327 """
1328 Get parameter name and value from Jedi Private API
1328 Get parameter name and value from Jedi Private API
1329
1329
1330 Jedi does not expose a simple way to get `param=value` from its API.
1330 Jedi does not expose a simple way to get `param=value` from its API.
1331
1331
1332 Parameters
1332 Parameters
1333 ----------
1333 ----------
1334 parameter
1334 parameter
1335 Jedi's function `Param`
1335 Jedi's function `Param`
1336
1336
1337 Returns
1337 Returns
1338 -------
1338 -------
1339 A string like 'a', 'b=1', '*args', '**kwargs'
1339 A string like 'a', 'b=1', '*args', '**kwargs'
1340
1340
1341 """
1341 """
1342 description = parameter.description
1342 description = parameter.description
1343 if not description.startswith('param '):
1343 if not description.startswith('param '):
1344 raise ValueError('Jedi function parameter description has changed format. '
1344 raise ValueError('Jedi function parameter description has changed format. '
1345 'Expected "param ...", found %r.' % description)
1345 'Expected "param ...", found %r.' % description)
1346 return description[6:]
1346 return description[6:]
1347
1347
1348 def _make_signature(completion)-> str:
1348 def _make_signature(completion)-> str:
1349 """
1349 """
1350 Make the signature from a jedi completion
1350 Make the signature from a jedi completion
1351
1351
1352 Parameters
1352 Parameters
1353 ----------
1353 ----------
1354 completion : jedi.Completion
1354 completion : jedi.Completion
1355 the Jedi completion object; it may not complete to a function type
1355 the Jedi completion object; it may not complete to a function type
1356
1356
1357 Returns
1357 Returns
1358 -------
1358 -------
1359 a string consisting of the function signature, with the parentheses but
1359 a string consisting of the function signature, with the parentheses but
1360 without the function name. For example:
1360 without the function name. For example:
1361 `(a, *args, b=1, **kwargs)`
1361 `(a, *args, b=1, **kwargs)`
1362
1362
1363 """
1363 """
1364
1364
1365 # it looks like this might work on jedi 0.17
1365 # it looks like this might work on jedi 0.17
1366 if hasattr(completion, 'get_signatures'):
1366 if hasattr(completion, 'get_signatures'):
1367 signatures = completion.get_signatures()
1367 signatures = completion.get_signatures()
1368 if not signatures:
1368 if not signatures:
1369 return '(?)'
1369 return '(?)'
1370
1370
1371 c0 = signatures[0]
1371 c0 = signatures[0]
1372 return '('+c0.to_string().split('(', maxsplit=1)[1]
1372 return '('+c0.to_string().split('(', maxsplit=1)[1]
1373
1373
1374 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1374 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1375 for p in signature.defined_names()) if f])
1375 for p in signature.defined_names()) if f])
1376
1376
1377
1377
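# A rough sketch of _make_signature(), assuming jedi is installed and a recent
# Jedi API (jedi.Script(...).complete); the source snippet is made up.
#
#     >>> import jedi
#     >>> src = 'def my_func(a, *args, b=1, **kwargs): pass\nmy_fun'
#     >>> comp = jedi.Script(src).complete(line=2, column=6)[0]
#     >>> _make_signature(comp)
#     '(a, *args, b=1, **kwargs)'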
1378 _CompleteResult = Dict[str, MatcherResult]
1378 _CompleteResult = Dict[str, MatcherResult]
1379
1379
1380
1380
1381 def _convert_matcher_v1_result_to_v2(
1381 def _convert_matcher_v1_result_to_v2(
1382 matches: Sequence[str],
1382 matches: Sequence[str],
1383 type: str,
1383 type: str,
1384 fragment: Optional[str] = None,
1384 fragment: Optional[str] = None,
1385 suppress_if_matches: bool = False,
1385 suppress_if_matches: bool = False,
1386 ) -> SimpleMatcherResult:
1386 ) -> SimpleMatcherResult:
1387 """Utility to help with transition"""
1387 """Utility to help with transition"""
1388 result = {
1388 result = {
1389 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1389 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1390 "suppress": (True if matches else False) if suppress_if_matches else False,
1390 "suppress": (True if matches else False) if suppress_if_matches else False,
1391 }
1391 }
1392 if fragment is not None:
1392 if fragment is not None:
1393 result["matched_fragment"] = fragment
1393 result["matched_fragment"] = fragment
1394 return result
1394 return result
1395
1395
1396
1396
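# A quick sketch of the v1 -> v2 conversion above, with made-up matches. The
# result is a dict with ``completions`` (SimpleCompletion objects), ``suppress``
# (True here because matches exist and suppress_if_matches was requested) and,
# since a fragment was given, ``matched_fragment``.
#
#     result = _convert_matcher_v1_result_to_v2(
#         ['%time', '%timeit'], type='magic', fragment='%ti', suppress_if_matches=True
#     )
#     # [c.text for c in result['completions']] -> ['%time', '%timeit']
#     # result['suppress'] -> True
#     # result['matched_fragment'] -> '%ti'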
1397 class IPCompleter(Completer):
1397 class IPCompleter(Completer):
1398 """Extension of the completer class with IPython-specific features"""
1398 """Extension of the completer class with IPython-specific features"""
1399
1399
1400 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1400 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1401
1401
1402 @observe('greedy')
1402 @observe('greedy')
1403 def _greedy_changed(self, change):
1403 def _greedy_changed(self, change):
1404 """update the splitter and readline delims when greedy is changed"""
1404 """update the splitter and readline delims when greedy is changed"""
1405 if change['new']:
1405 if change['new']:
1406 self.splitter.delims = GREEDY_DELIMS
1406 self.splitter.delims = GREEDY_DELIMS
1407 else:
1407 else:
1408 self.splitter.delims = DELIMS
1408 self.splitter.delims = DELIMS
1409
1409
1410 dict_keys_only = Bool(
1410 dict_keys_only = Bool(
1411 False,
1411 False,
1412 help="""
1412 help="""
1413 Whether to show dict key matches only.
1413 Whether to show dict key matches only.
1414
1414
1415 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1415 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1416 """,
1416 """,
1417 )
1417 )
1418
1418
1419 suppress_competing_matchers = UnionTrait(
1419 suppress_competing_matchers = UnionTrait(
1420 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1420 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1421 default_value=None,
1421 default_value=None,
1422 help="""
1422 help="""
1423 Whether to suppress completions from other *Matchers*.
1423 Whether to suppress completions from other *Matchers*.
1424
1424
1425 When set to ``None`` (default) the matchers will attempt to auto-detect
1425 When set to ``None`` (default) the matchers will attempt to auto-detect
1426 whether suppression of other matchers is desirable. For example, at
1426 whether suppression of other matchers is desirable. For example, at
1427 the beginning of a line followed by `%` we expect a magic completion
1427 the beginning of a line followed by `%` we expect a magic completion
1428 to be the only applicable option, and after ``my_dict['`` we usually
1428 to be the only applicable option, and after ``my_dict['`` we usually
1429 expect a completion with an existing dictionary key.
1429 expect a completion with an existing dictionary key.
1430
1430
1431 If you want to disable this heuristic and see completions from all matchers,
1431 If you want to disable this heuristic and see completions from all matchers,
1432 set ``IPCompleter.suppress_competing_matchers = False``.
1432 set ``IPCompleter.suppress_competing_matchers = False``.
1433 To disable the heuristic for specific matchers provide a dictionary mapping:
1433 To disable the heuristic for specific matchers provide a dictionary mapping:
1434 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1434 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1435
1435
1436 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1436 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1437 completions to the set of matchers with the highest priority;
1437 completions to the set of matchers with the highest priority;
1438 this is equivalent to ``IPCompleter.merge_completions`` and
1438 this is equivalent to ``IPCompleter.merge_completions`` and
1439 can be beneficial for performance, but will sometimes omit relevant
1439 can be beneficial for performance, but will sometimes omit relevant
1440 candidates from matchers further down the priority list.
1440 candidates from matchers further down the priority list.
1441 """,
1441 """,
1442 ).tag(config=True)
1442 ).tag(config=True)
1443
1443
1444 merge_completions = Bool(
1444 merge_completions = Bool(
1445 True,
1445 True,
1446 help="""Whether to merge completion results into a single list
1446 help="""Whether to merge completion results into a single list
1447
1447
1448 If False, only the completion results from the first non-empty
1448 If False, only the completion results from the first non-empty
1449 completer will be returned.
1449 completer will be returned.
1450
1450
1451 As of version 8.6.0, setting the value to ``False`` is an alias for:
1451 As of version 8.6.0, setting the value to ``False`` is an alias for:
1452 ``IPCompleter.suppress_competing_matchers = True``.
1452 ``IPCompleter.suppress_competing_matchers = True``.
1453 """,
1453 """,
1454 ).tag(config=True)
1454 ).tag(config=True)
1455
1455
1456 disable_matchers = ListTrait(
1456 disable_matchers = ListTrait(
1457 Unicode(), help="""List of matchers to disable."""
1457 Unicode(), help="""List of matchers to disable."""
1458 ).tag(config=True)
1458 ).tag(config=True)
1459
1459
1460 omit__names = Enum(
1460 omit__names = Enum(
1461 (0, 1, 2),
1461 (0, 1, 2),
1462 default_value=2,
1462 default_value=2,
1463 help="""Instruct the completer to omit private method names
1463 help="""Instruct the completer to omit private method names
1464
1464
1465 Specifically, when completing on ``object.<tab>``.
1465 Specifically, when completing on ``object.<tab>``.
1466
1466
1467 When 2 [default]: all names that start with '_' will be excluded.
1467 When 2 [default]: all names that start with '_' will be excluded.
1468
1468
1469 When 1: all 'magic' names (``__foo__``) will be excluded.
1469 When 1: all 'magic' names (``__foo__``) will be excluded.
1470
1470
1471 When 0: nothing will be excluded.
1471 When 0: nothing will be excluded.
1472 """
1472 """
1473 ).tag(config=True)
1473 ).tag(config=True)
1474 limit_to__all__ = Bool(False,
1474 limit_to__all__ = Bool(False,
1475 help="""
1475 help="""
1476 DEPRECATED as of version 5.0.
1476 DEPRECATED as of version 5.0.
1477
1477
1478 Instruct the completer to use __all__ for the completion
1478 Instruct the completer to use __all__ for the completion
1479
1479
1480 Specifically, when completing on ``object.<tab>``.
1480 Specifically, when completing on ``object.<tab>``.
1481
1481
1482 When True: only those names in obj.__all__ will be included.
1482 When True: only those names in obj.__all__ will be included.
1483
1483
1484 When False [default]: the __all__ attribute is ignored
1484 When False [default]: the __all__ attribute is ignored
1485 """,
1485 """,
1486 ).tag(config=True)
1486 ).tag(config=True)
1487
1487
1488 profile_completions = Bool(
1488 profile_completions = Bool(
1489 default_value=False,
1489 default_value=False,
1490 help="If True, emit profiling data for completion subsystem using cProfile."
1490 help="If True, emit profiling data for completion subsystem using cProfile."
1491 ).tag(config=True)
1491 ).tag(config=True)
1492
1492
1493 profiler_output_dir = Unicode(
1493 profiler_output_dir = Unicode(
1494 default_value=".completion_profiles",
1494 default_value=".completion_profiles",
1495 help="Template for path at which to output profile data for completions."
1495 help="Template for path at which to output profile data for completions."
1496 ).tag(config=True)
1496 ).tag(config=True)
1497
1497
1498 @observe('limit_to__all__')
1498 @observe('limit_to__all__')
1499 def _limit_to_all_changed(self, change):
1499 def _limit_to_all_changed(self, change):
1500 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1500 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1501 'value has been deprecated since IPython 5.0, will be made to have '
1501 'value has been deprecated since IPython 5.0, will be made to have '
1502 'no effect and then removed in a future version of IPython.',
1502 'no effect and then removed in a future version of IPython.',
1503 UserWarning)
1503 UserWarning)
1504
1504
1505 def __init__(
1505 def __init__(
1506 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1506 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1507 ):
1507 ):
1508 """IPCompleter() -> completer
1508 """IPCompleter() -> completer
1509
1509
1510 Return a completer object.
1510 Return a completer object.
1511
1511
1512 Parameters
1512 Parameters
1513 ----------
1513 ----------
1514 shell
1514 shell
1515 a pointer to the ipython shell itself. This is needed
1515 a pointer to the ipython shell itself. This is needed
1516 because this completer knows about magic functions, and those can
1516 because this completer knows about magic functions, and those can
1517 only be accessed via the ipython instance.
1517 only be accessed via the ipython instance.
1518 namespace : dict, optional
1518 namespace : dict, optional
1519 an optional dict where completions are performed.
1519 an optional dict where completions are performed.
1520 global_namespace : dict, optional
1520 global_namespace : dict, optional
1521 secondary optional dict for completions, to
1521 secondary optional dict for completions, to
1522 handle cases (such as IPython embedded inside functions) where
1522 handle cases (such as IPython embedded inside functions) where
1523 both Python scopes are visible.
1523 both Python scopes are visible.
1524 config : Config
1524 config : Config
1525 traitlets config object
1525 traitlets config object
1526 **kwargs
1526 **kwargs
1527 passed to super class unmodified.
1527 passed to super class unmodified.
1528 """
1528 """
1529
1529
1530 self.magic_escape = ESC_MAGIC
1530 self.magic_escape = ESC_MAGIC
1531 self.splitter = CompletionSplitter()
1531 self.splitter = CompletionSplitter()
1532
1532
1533 # _greedy_changed() depends on splitter and readline being defined:
1533 # _greedy_changed() depends on splitter and readline being defined:
1534 super().__init__(
1534 super().__init__(
1535 namespace=namespace,
1535 namespace=namespace,
1536 global_namespace=global_namespace,
1536 global_namespace=global_namespace,
1537 config=config,
1537 config=config,
1538 **kwargs,
1538 **kwargs,
1539 )
1539 )
1540
1540
1541 # List where completion matches will be stored
1541 # List where completion matches will be stored
1542 self.matches = []
1542 self.matches = []
1543 self.shell = shell
1543 self.shell = shell
1544 # Regexp to split filenames with spaces in them
1544 # Regexp to split filenames with spaces in them
1545 self.space_name_re = re.compile(r'([^\\] )')
1545 self.space_name_re = re.compile(r'([^\\] )')
1546 # Hold a local ref. to glob.glob for speed
1546 # Hold a local ref. to glob.glob for speed
1547 self.glob = glob.glob
1547 self.glob = glob.glob
1548
1548
1549 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1549 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1550 # buffers, to avoid completion problems.
1550 # buffers, to avoid completion problems.
1551 term = os.environ.get('TERM','xterm')
1551 term = os.environ.get('TERM','xterm')
1552 self.dumb_terminal = term in ['dumb','emacs']
1552 self.dumb_terminal = term in ['dumb','emacs']
1553
1553
1554 # Special handling of backslashes needed in win32 platforms
1554 # Special handling of backslashes needed in win32 platforms
1555 if sys.platform == "win32":
1555 if sys.platform == "win32":
1556 self.clean_glob = self._clean_glob_win32
1556 self.clean_glob = self._clean_glob_win32
1557 else:
1557 else:
1558 self.clean_glob = self._clean_glob
1558 self.clean_glob = self._clean_glob
1559
1559
1560 #regexp to parse docstring for function signature
1560 #regexp to parse docstring for function signature
1561 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1561 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1562 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1562 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1563 #use this if positional argument name is also needed
1563 #use this if positional argument name is also needed
1564 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1564 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1565
1565
1566 self.magic_arg_matchers = [
1566 self.magic_arg_matchers = [
1567 self.magic_config_matcher,
1567 self.magic_config_matcher,
1568 self.magic_color_matcher,
1568 self.magic_color_matcher,
1569 ]
1569 ]
1570
1570
1571 # This is set externally by InteractiveShell
1571 # This is set externally by InteractiveShell
1572 self.custom_completers = None
1572 self.custom_completers = None
1573
1573
1574 # This is a list of names of unicode characters that can be completed
1574 # This is a list of names of unicode characters that can be completed
1575 # into their corresponding unicode value. The list is large, so we
1575 # into their corresponding unicode value. The list is large, so we
1576 # lazily initialize it on first use. Consuming code should access this
1576 # lazily initialize it on first use. Consuming code should access this
1577 # attribute through the `@unicode_names` property.
1577 # attribute through the `@unicode_names` property.
1578 self._unicode_names = None
1578 self._unicode_names = None
1579
1579
1580 self._backslash_combining_matchers = [
1580 self._backslash_combining_matchers = [
1581 self.latex_name_matcher,
1581 self.latex_name_matcher,
1582 self.unicode_name_matcher,
1582 self.unicode_name_matcher,
1583 back_latex_name_matcher,
1583 back_latex_name_matcher,
1584 back_unicode_name_matcher,
1584 back_unicode_name_matcher,
1585 self.fwd_unicode_matcher,
1585 self.fwd_unicode_matcher,
1586 ]
1586 ]
1587
1587
1588 if not self.backslash_combining_completions:
1588 if not self.backslash_combining_completions:
1589 for matcher in self._backslash_combining_matchers:
1589 for matcher in self._backslash_combining_matchers:
1590 self.disable_matchers.append(matcher.matcher_identifier)
1590 self.disable_matchers.append(matcher.matcher_identifier)
1591
1591
1592 if not self.merge_completions:
1592 if not self.merge_completions:
1593 self.suppress_competing_matchers = True
1593 self.suppress_competing_matchers = True
1594
1594
1595 @property
1595 @property
1596 def matchers(self) -> List[Matcher]:
1596 def matchers(self) -> List[Matcher]:
1597 """All active matcher routines for completion"""
1597 """All active matcher routines for completion"""
1598 if self.dict_keys_only:
1598 if self.dict_keys_only:
1599 return [self.dict_key_matcher]
1599 return [self.dict_key_matcher]
1600
1600
1601 if self.use_jedi:
1601 if self.use_jedi:
1602 return [
1602 return [
1603 *self.custom_matchers,
1603 *self.custom_matchers,
1604 *self._backslash_combining_matchers,
1604 *self._backslash_combining_matchers,
1605 *self.magic_arg_matchers,
1605 *self.magic_arg_matchers,
1606 self.custom_completer_matcher,
1606 self.custom_completer_matcher,
1607 self.magic_matcher,
1607 self.magic_matcher,
1608 self._jedi_matcher,
1608 self._jedi_matcher,
1609 self.dict_key_matcher,
1609 self.dict_key_matcher,
1610 self.file_matcher,
1610 self.file_matcher,
1611 ]
1611 ]
1612 else:
1612 else:
1613 return [
1613 return [
1614 *self.custom_matchers,
1614 *self.custom_matchers,
1615 *self._backslash_combining_matchers,
1615 *self._backslash_combining_matchers,
1616 *self.magic_arg_matchers,
1616 *self.magic_arg_matchers,
1617 self.custom_completer_matcher,
1617 self.custom_completer_matcher,
1618 self.dict_key_matcher,
1618 self.dict_key_matcher,
1619 # TODO: convert python_matches to v2 API
1619 # TODO: convert python_matches to v2 API
1620 self.magic_matcher,
1620 self.magic_matcher,
1621 self.python_matches,
1621 self.python_matches,
1622 self.file_matcher,
1622 self.file_matcher,
1623 self.python_func_kw_matcher,
1623 self.python_func_kw_matcher,
1624 ]
1624 ]
1625
1625
1626 def all_completions(self, text:str) -> List[str]:
1626 def all_completions(self, text:str) -> List[str]:
1627 """
1627 """
1628 Wrapper around the completion methods for the benefit of emacs.
1628 Wrapper around the completion methods for the benefit of emacs.
1629 """
1629 """
1630 prefix = text.rpartition('.')[0]
1630 prefix = text.rpartition('.')[0]
1631 with provisionalcompleter():
1631 with provisionalcompleter():
1632 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1632 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1633 for c in self.completions(text, len(text))]
1633 for c in self.completions(text, len(text))]
1634
1634
1635 return self.complete(text)[1]
1635 return self.complete(text)[1]
1636
1636
1637 def _clean_glob(self, text:str):
1637 def _clean_glob(self, text:str):
1638 return self.glob("%s*" % text)
1638 return self.glob("%s*" % text)
1639
1639
1640 def _clean_glob_win32(self, text:str):
1640 def _clean_glob_win32(self, text:str):
1641 return [f.replace("\\","/")
1641 return [f.replace("\\","/")
1642 for f in self.glob("%s*" % text)]
1642 for f in self.glob("%s*" % text)]
1643
1643
1644 @context_matcher()
1644 @context_matcher()
1645 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1645 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1646 """Same as :any:`file_matches`, but adopted to new Matcher API."""
1646 """Same as :any:`file_matches`, but adopted to new Matcher API."""
1647 matches = self.file_matches(context.token)
1647 matches = self.file_matches(context.token)
1648 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1648 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1649 # starts with `/home/`, `C:\`, etc)
1649 # starts with `/home/`, `C:\`, etc)
1650 return _convert_matcher_v1_result_to_v2(matches, type="path")
1650 return _convert_matcher_v1_result_to_v2(matches, type="path")
1651
1651
1652 def file_matches(self, text: str) -> List[str]:
1652 def file_matches(self, text: str) -> List[str]:
1653 """Match filenames, expanding ~USER type strings.
1653 """Match filenames, expanding ~USER type strings.
1654
1654
1655 Most of the seemingly convoluted logic in this completer is an
1655 Most of the seemingly convoluted logic in this completer is an
1656 attempt to handle filenames with spaces in them. And yet it's not
1656 attempt to handle filenames with spaces in them. And yet it's not
1657 quite perfect, because Python's readline doesn't expose all of the
1657 quite perfect, because Python's readline doesn't expose all of the
1658 GNU readline details needed for this to be done correctly.
1658 GNU readline details needed for this to be done correctly.
1659
1659
1660 For a filename with a space in it, the printed completions will be
1660 For a filename with a space in it, the printed completions will be
1661 only the parts after what's already been typed (instead of the
1661 only the parts after what's already been typed (instead of the
1662 full completions, as is normally done). I don't think with the
1662 full completions, as is normally done). I don't think with the
1663 current (as of Python 2.3) Python readline it's possible to do
1663 current (as of Python 2.3) Python readline it's possible to do
1664 better.
1664 better.
1665
1665
1666 .. deprecated:: 8.6
1666 .. deprecated:: 8.6
1667 You can use :meth:`file_matcher` instead.
1667 You can use :meth:`file_matcher` instead.
1668 """
1668 """
1669
1669
1670 # chars that require escaping with backslash - i.e. chars
1670 # chars that require escaping with backslash - i.e. chars
1671 # that readline treats incorrectly as delimiters, but we
1671 # that readline treats incorrectly as delimiters, but we
1672 # don't want to treat as delimiters in filename matching
1672 # don't want to treat as delimiters in filename matching
1673 # when escaped with backslash
1673 # when escaped with backslash
1674 if text.startswith('!'):
1674 if text.startswith('!'):
1675 text = text[1:]
1675 text = text[1:]
1676 text_prefix = u'!'
1676 text_prefix = u'!'
1677 else:
1677 else:
1678 text_prefix = u''
1678 text_prefix = u''
1679
1679
1680 text_until_cursor = self.text_until_cursor
1680 text_until_cursor = self.text_until_cursor
1681 # track strings with open quotes
1681 # track strings with open quotes
1682 open_quotes = has_open_quotes(text_until_cursor)
1682 open_quotes = has_open_quotes(text_until_cursor)
1683
1683
1684 if '(' in text_until_cursor or '[' in text_until_cursor:
1684 if '(' in text_until_cursor or '[' in text_until_cursor:
1685 lsplit = text
1685 lsplit = text
1686 else:
1686 else:
1687 try:
1687 try:
1688 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1688 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1689 lsplit = arg_split(text_until_cursor)[-1]
1689 lsplit = arg_split(text_until_cursor)[-1]
1690 except ValueError:
1690 except ValueError:
1691 # typically an unmatched ", or backslash without escaped char.
1691 # typically an unmatched ", or backslash without escaped char.
1692 if open_quotes:
1692 if open_quotes:
1693 lsplit = text_until_cursor.split(open_quotes)[-1]
1693 lsplit = text_until_cursor.split(open_quotes)[-1]
1694 else:
1694 else:
1695 return []
1695 return []
1696 except IndexError:
1696 except IndexError:
1697 # tab pressed on empty line
1697 # tab pressed on empty line
1698 lsplit = ""
1698 lsplit = ""
1699
1699
1700 if not open_quotes and lsplit != protect_filename(lsplit):
1700 if not open_quotes and lsplit != protect_filename(lsplit):
1701 # if protectables are found, do matching on the whole escaped name
1701 # if protectables are found, do matching on the whole escaped name
1702 has_protectables = True
1702 has_protectables = True
1703 text0,text = text,lsplit
1703 text0,text = text,lsplit
1704 else:
1704 else:
1705 has_protectables = False
1705 has_protectables = False
1706 text = os.path.expanduser(text)
1706 text = os.path.expanduser(text)
1707
1707
1708 if text == "":
1708 if text == "":
1709 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1709 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1710
1710
1711 # Compute the matches from the filesystem
1711 # Compute the matches from the filesystem
1712 if sys.platform == 'win32':
1712 if sys.platform == 'win32':
1713 m0 = self.clean_glob(text)
1713 m0 = self.clean_glob(text)
1714 else:
1714 else:
1715 m0 = self.clean_glob(text.replace('\\', ''))
1715 m0 = self.clean_glob(text.replace('\\', ''))
1716
1716
1717 if has_protectables:
1717 if has_protectables:
1718 # If we had protectables, we need to revert our changes to the
1718 # If we had protectables, we need to revert our changes to the
1719 # beginning of filename so that we don't double-write the part
1719 # beginning of filename so that we don't double-write the part
1720 # of the filename we have so far
1720 # of the filename we have so far
1721 len_lsplit = len(lsplit)
1721 len_lsplit = len(lsplit)
1722 matches = [text_prefix + text0 +
1722 matches = [text_prefix + text0 +
1723 protect_filename(f[len_lsplit:]) for f in m0]
1723 protect_filename(f[len_lsplit:]) for f in m0]
1724 else:
1724 else:
1725 if open_quotes:
1725 if open_quotes:
1726 # if we have a string with an open quote, we don't need to
1726 # if we have a string with an open quote, we don't need to
1727 # protect the names beyond the quote (and we _shouldn't_, as
1727 # protect the names beyond the quote (and we _shouldn't_, as
1728 # it would cause bugs when the filesystem call is made).
1728 # it would cause bugs when the filesystem call is made).
1729 matches = m0 if sys.platform == "win32" else\
1729 matches = m0 if sys.platform == "win32" else\
1730 [protect_filename(f, open_quotes) for f in m0]
1730 [protect_filename(f, open_quotes) for f in m0]
1731 else:
1731 else:
1732 matches = [text_prefix +
1732 matches = [text_prefix +
1733 protect_filename(f) for f in m0]
1733 protect_filename(f) for f in m0]
1734
1734
1735 # Mark directories in input list by appending '/' to their names.
1735 # Mark directories in input list by appending '/' to their names.
1736 return [x+'/' if os.path.isdir(x) else x for x in matches]
1736 return [x+'/' if os.path.isdir(x) else x for x in matches]
1737
1737
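# --- Illustrative sketch (not part of completer.py) --------------------------
# `file_matches` above globs the filesystem, escapes "protectable" characters
# (such as spaces) with `protect_filename`, and appends '/' to directories.
# A minimal, hypothetical stand-in for that flow using only the stdlib; the
# helper name `complete_path` and the simple space-escaping rule are
# assumptions, not the exact IPython behaviour.
import glob
import os


def complete_path(fragment: str) -> list:
    """Glob matches for ``fragment``, with spaces escaped and directories marked."""
    matches = glob.glob(os.path.expanduser(fragment) + "*")
    out = []
    for m in matches:
        suffix = "/" if os.path.isdir(m) else ""     # mark directories
        out.append(m.replace(" ", r"\ ") + suffix)   # protect spaces
    return out
# ------------------------------------------------------------------------------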
1738 @context_matcher()
1738 @context_matcher()
1739 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1739 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1740 """Match magics."""
1740 """Match magics."""
1741 text = context.token
1741 text = context.token
1742 matches = self.magic_matches(text)
1742 matches = self.magic_matches(text)
1743 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1743 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1744 is_magic_prefix = len(text) > 0 and text[0] == "%"
1744 is_magic_prefix = len(text) > 0 and text[0] == "%"
1745 result["suppress"] = is_magic_prefix and bool(result["completions"])
1745 result["suppress"] = is_magic_prefix and bool(result["completions"])
1746 return result
1746 return result
1747
1747
1748 def magic_matches(self, text: str):
1748 def magic_matches(self, text: str):
1749 """Match magics.
1749 """Match magics.
1750
1750
1751 .. deprecated:: 8.6
1751 .. deprecated:: 8.6
1752 You can use :meth:`magic_matcher` instead.
1752 You can use :meth:`magic_matcher` instead.
1753 """
1753 """
1754 # Get all shell magics now rather than statically, so magics loaded at
1754 # Get all shell magics now rather than statically, so magics loaded at
1755 # runtime show up too.
1755 # runtime show up too.
1756 lsm = self.shell.magics_manager.lsmagic()
1756 lsm = self.shell.magics_manager.lsmagic()
1757 line_magics = lsm['line']
1757 line_magics = lsm['line']
1758 cell_magics = lsm['cell']
1758 cell_magics = lsm['cell']
1759 pre = self.magic_escape
1759 pre = self.magic_escape
1760 pre2 = pre+pre
1760 pre2 = pre+pre
1761
1761
1762 explicit_magic = text.startswith(pre)
1762 explicit_magic = text.startswith(pre)
1763
1763
1764 # Completion logic:
1764 # Completion logic:
1765 # - user gives %%: only do cell magics
1765 # - user gives %%: only do cell magics
1766 # - user gives %: do both line and cell magics
1766 # - user gives %: do both line and cell magics
1767 # - no prefix: do both
1767 # - no prefix: do both
1768 # In other words, line magics are skipped if the user gives %% explicitly
1768 # In other words, line magics are skipped if the user gives %% explicitly
1769 #
1769 #
1770 # We also exclude magics that match any currently visible names:
1770 # We also exclude magics that match any currently visible names:
1771 # https://github.com/ipython/ipython/issues/4877, unless the user has
1771 # https://github.com/ipython/ipython/issues/4877, unless the user has
1772 # typed a %:
1772 # typed a %:
1773 # https://github.com/ipython/ipython/issues/10754
1773 # https://github.com/ipython/ipython/issues/10754
1774 bare_text = text.lstrip(pre)
1774 bare_text = text.lstrip(pre)
1775 global_matches = self.global_matches(bare_text)
1775 global_matches = self.global_matches(bare_text)
1776 if not explicit_magic:
1776 if not explicit_magic:
1777 def matches(magic):
1777 def matches(magic):
1778 """
1778 """
1779 Filter magics, in particular remove magics that match
1779 Filter magics, in particular remove magics that match
1780 a name present in global namespace.
1780 a name present in global namespace.
1781 """
1781 """
1782 return ( magic.startswith(bare_text) and
1782 return ( magic.startswith(bare_text) and
1783 magic not in global_matches )
1783 magic not in global_matches )
1784 else:
1784 else:
1785 def matches(magic):
1785 def matches(magic):
1786 return magic.startswith(bare_text)
1786 return magic.startswith(bare_text)
1787
1787
1788 comp = [ pre2+m for m in cell_magics if matches(m)]
1788 comp = [ pre2+m for m in cell_magics if matches(m)]
1789 if not text.startswith(pre2):
1789 if not text.startswith(pre2):
1790 comp += [ pre+m for m in line_magics if matches(m)]
1790 comp += [ pre+m for m in line_magics if matches(m)]
1791
1791
1792 return comp
1792 return comp
1793
1793
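# --- Illustrative sketch (not part of completer.py) --------------------------
# The completion rules spelled out in `magic_matches` above, reduced to a
# standalone function.  The magic lists and `visible_names` are made-up inputs;
# IPython obtains them from the magics manager and the user namespace.
def match_magics(text, line_magics, cell_magics, visible_names, escape="%"):
    explicit = text.startswith(escape)
    bare = text.lstrip(escape)

    def ok(name):
        # bare prefix: hide magics shadowed by ordinary names (issue #4877);
        # an explicit '%' prefix always offers them (issue #10754)
        return name.startswith(bare) and (explicit or name not in visible_names)

    out = [escape * 2 + m for m in cell_magics if ok(m)]
    if not text.startswith(escape * 2):           # '%%' means cell magics only
        out += [escape + m for m in line_magics if ok(m)]
    return out


# e.g. match_magics("ti", ["time", "timeit"], ["timeit"], visible_names=set())
# -> ['%%timeit', '%time', '%timeit']  (assumed toy data, not an IPython test)
# ------------------------------------------------------------------------------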
1794 @context_matcher()
1794 @context_matcher()
1795 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1795 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1796 """Match class names and attributes for %config magic."""
1796 """Match class names and attributes for %config magic."""
1797 # NOTE: uses `line_buffer` equivalent for compatibility
1797 # NOTE: uses `line_buffer` equivalent for compatibility
1798 matches = self.magic_config_matches(context.line_with_cursor)
1798 matches = self.magic_config_matches(context.line_with_cursor)
1799 return _convert_matcher_v1_result_to_v2(matches, type="param")
1799 return _convert_matcher_v1_result_to_v2(matches, type="param")
1800
1800
1801 def magic_config_matches(self, text: str) -> List[str]:
1801 def magic_config_matches(self, text: str) -> List[str]:
1802 """Match class names and attributes for %config magic.
1802 """Match class names and attributes for %config magic.
1803
1803
1804 .. deprecated:: 8.6
1804 .. deprecated:: 8.6
1805 You can use :meth:`magic_config_matcher` instead.
1805 You can use :meth:`magic_config_matcher` instead.
1806 """
1806 """
1807 texts = text.strip().split()
1807 texts = text.strip().split()
1808
1808
1809 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1809 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1810 # get all configuration classes
1810 # get all configuration classes
1811 classes = sorted(set([ c for c in self.shell.configurables
1811 classes = sorted(set([ c for c in self.shell.configurables
1812 if c.__class__.class_traits(config=True)
1812 if c.__class__.class_traits(config=True)
1813 ]), key=lambda x: x.__class__.__name__)
1813 ]), key=lambda x: x.__class__.__name__)
1814 classnames = [ c.__class__.__name__ for c in classes ]
1814 classnames = [ c.__class__.__name__ for c in classes ]
1815
1815
1816 # return all classnames if config or %config is given
1816 # return all classnames if config or %config is given
1817 if len(texts) == 1:
1817 if len(texts) == 1:
1818 return classnames
1818 return classnames
1819
1819
1820 # match classname
1820 # match classname
1821 classname_texts = texts[1].split('.')
1821 classname_texts = texts[1].split('.')
1822 classname = classname_texts[0]
1822 classname = classname_texts[0]
1823 classname_matches = [ c for c in classnames
1823 classname_matches = [ c for c in classnames
1824 if c.startswith(classname) ]
1824 if c.startswith(classname) ]
1825
1825
1826 # return matched classes or the matched class with attributes
1826 # return matched classes or the matched class with attributes
1827 if texts[1].find('.') < 0:
1827 if texts[1].find('.') < 0:
1828 return classname_matches
1828 return classname_matches
1829 elif len(classname_matches) == 1 and \
1829 elif len(classname_matches) == 1 and \
1830 classname_matches[0] == classname:
1830 classname_matches[0] == classname:
1831 cls = classes[classnames.index(classname)].__class__
1831 cls = classes[classnames.index(classname)].__class__
1832 help = cls.class_get_help()
1832 help = cls.class_get_help()
1833 # strip leading '--' from cl-args:
1833 # strip leading '--' from cl-args:
1834 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1834 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1835 return [ attr.split('=')[0]
1835 return [ attr.split('=')[0]
1836 for attr in help.strip().splitlines()
1836 for attr in help.strip().splitlines()
1837 if attr.startswith(texts[1]) ]
1837 if attr.startswith(texts[1]) ]
1838 return []
1838 return []
1839
1839
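# --- Illustrative sketch (not part of completer.py) --------------------------
# The two stages of `%config` completion above: first complete a configurable
# class name, then, once "Class." has been typed, complete its trait names.
# The configurables are faked with a plain dict here; IPython derives them from
# `self.shell.configurables` and `class_get_help()`.
def config_matches(arg, configurables):
    classname, dot, attr_prefix = arg.partition(".")
    names = [name for name in configurables if name.startswith(classname)]
    if not dot:                                   # still completing the class name
        return names
    if names == [classname]:                      # exact class name: complete traits
        return [classname + "." + trait
                for trait in configurables[classname]
                if trait.startswith(attr_prefix)]
    return []


demo = {"InteractiveShell": ["autocall", "automagic"], "IPCompleter": ["greedy"]}
# config_matches("IPComp", demo)         -> ['IPCompleter']
# config_matches("IPCompleter.gr", demo) -> ['IPCompleter.greedy']
# ------------------------------------------------------------------------------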
1840 @context_matcher()
1840 @context_matcher()
1841 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1841 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1842 """Match color schemes for %colors magic."""
1842 """Match color schemes for %colors magic."""
1843 # NOTE: uses `line_buffer` equivalent for compatibility
1843 # NOTE: uses `line_buffer` equivalent for compatibility
1844 matches = self.magic_color_matches(context.line_with_cursor)
1844 matches = self.magic_color_matches(context.line_with_cursor)
1845 return _convert_matcher_v1_result_to_v2(matches, type="param")
1845 return _convert_matcher_v1_result_to_v2(matches, type="param")
1846
1846
1847 def magic_color_matches(self, text: str) -> List[str]:
1847 def magic_color_matches(self, text: str) -> List[str]:
1848 """Match color schemes for %colors magic.
1848 """Match color schemes for %colors magic.
1849
1849
1850 .. deprecated:: 8.6
1850 .. deprecated:: 8.6
1851 You can use :meth:`magic_color_matcher` instead.
1851 You can use :meth:`magic_color_matcher` instead.
1852 """
1852 """
1853 texts = text.split()
1853 texts = text.split()
1854 if text.endswith(' '):
1854 if text.endswith(' '):
1855 # .split() strips off the trailing whitespace. Add '' back
1855 # .split() strips off the trailing whitespace. Add '' back
1856 # so that: '%colors ' -> ['%colors', '']
1856 # so that: '%colors ' -> ['%colors', '']
1857 texts.append('')
1857 texts.append('')
1858
1858
1859 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1859 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1860 prefix = texts[1]
1860 prefix = texts[1]
1861 return [ color for color in InspectColors.keys()
1861 return [ color for color in InspectColors.keys()
1862 if color.startswith(prefix) ]
1862 if color.startswith(prefix) ]
1863 return []
1863 return []
1864
1864
1865 @context_matcher(identifier="IPCompleter.jedi_matcher")
1865 @context_matcher(identifier="IPCompleter.jedi_matcher")
1866 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1866 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1867 matches = self._jedi_matches(
1867 matches = self._jedi_matches(
1868 cursor_column=context.cursor_position,
1868 cursor_column=context.cursor_position,
1869 cursor_line=context.cursor_line,
1869 cursor_line=context.cursor_line,
1870 text=context.full_text,
1870 text=context.full_text,
1871 )
1871 )
1872 return {
1872 return {
1873 "completions": matches,
1873 "completions": matches,
1874 # static analysis should not suppress other matchers
1874 # static analysis should not suppress other matchers
1875 "suppress": False,
1875 "suppress": False,
1876 }
1876 }
1877
1877
1878 def _jedi_matches(
1878 def _jedi_matches(
1879 self, cursor_column: int, cursor_line: int, text: str
1879 self, cursor_column: int, cursor_line: int, text: str
1880 ) -> Iterable[_JediCompletionLike]:
1880 ) -> Iterable[_JediCompletionLike]:
1881 """
1881 """
1882 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
1882 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
1883 cursor position.
1883 cursor position.
1884
1884
1885 Parameters
1885 Parameters
1886 ----------
1886 ----------
1887 cursor_column : int
1887 cursor_column : int
1888 column position of the cursor in ``text``, 0-indexed.
1888 column position of the cursor in ``text``, 0-indexed.
1889 cursor_line : int
1889 cursor_line : int
1890 line position of the cursor in ``text``, 0-indexed
1890 line position of the cursor in ``text``, 0-indexed
1891 text : str
1891 text : str
1892 text to complete
1892 text to complete
1893
1893
1894 Notes
1894 Notes
1895 -----
1895 -----
1896 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1896 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1897 object containing a string with the Jedi debug information attached.
1897 object containing a string with the Jedi debug information attached.
1898
1898
1899 .. deprecated:: 8.6
1899 .. deprecated:: 8.6
1900 You can use :meth:`_jedi_matcher` instead.
1900 You can use :meth:`_jedi_matcher` instead.
1901 """
1901 """
1902 namespaces = [self.namespace]
1902 namespaces = [self.namespace]
1903 if self.global_namespace is not None:
1903 if self.global_namespace is not None:
1904 namespaces.append(self.global_namespace)
1904 namespaces.append(self.global_namespace)
1905
1905
1906 completion_filter = lambda x:x
1906 completion_filter = lambda x:x
1907 offset = cursor_to_position(text, cursor_line, cursor_column)
1907 offset = cursor_to_position(text, cursor_line, cursor_column)
1908 # filter output if we are completing for object members
1908 # filter output if we are completing for object members
1909 if offset:
1909 if offset:
1910 pre = text[offset-1]
1910 pre = text[offset-1]
1911 if pre == '.':
1911 if pre == '.':
1912 if self.omit__names == 2:
1912 if self.omit__names == 2:
1913 completion_filter = lambda c:not c.name.startswith('_')
1913 completion_filter = lambda c:not c.name.startswith('_')
1914 elif self.omit__names == 1:
1914 elif self.omit__names == 1:
1915 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1915 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1916 elif self.omit__names == 0:
1916 elif self.omit__names == 0:
1917 completion_filter = lambda x:x
1917 completion_filter = lambda x:x
1918 else:
1918 else:
1919 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1919 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1920
1920
1921 interpreter = jedi.Interpreter(text[:offset], namespaces)
1921 interpreter = jedi.Interpreter(text[:offset], namespaces)
1922 try_jedi = True
1922 try_jedi = True
1923
1923
1924 try:
1924 try:
1925 # find the first token in the current tree -- if it is a ' or " then we are in a string
1925 # find the first token in the current tree -- if it is a ' or " then we are in a string
1926 completing_string = False
1926 completing_string = False
1927 try:
1927 try:
1928 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1928 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1929 except StopIteration:
1929 except StopIteration:
1930 pass
1930 pass
1931 else:
1931 else:
1932 # note the value may be ', ", or it may also be ''' or """, or
1932 # note the value may be ', ", or it may also be ''' or """, or
1933 # in some cases, """what/you/typed..., but all of these are
1933 # in some cases, """what/you/typed..., but all of these are
1934 # strings.
1934 # strings.
1935 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1935 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1936
1936
1937 # if we are in a string jedi is likely not the right candidate for
1937 # if we are in a string jedi is likely not the right candidate for
1938 # now. Skip it.
1938 # now. Skip it.
1939 try_jedi = not completing_string
1939 try_jedi = not completing_string
1940 except Exception as e:
1940 except Exception as e:
1941 # many things can go wrong; we are using a private API, just don't crash.
1941 # many things can go wrong; we are using a private API, just don't crash.
1942 if self.debug:
1942 if self.debug:
1943 print("Error detecting if completing a non-finished string :", e, '|')
1943 print("Error detecting if completing a non-finished string :", e, '|')
1944
1944
1945 if not try_jedi:
1945 if not try_jedi:
1946 return []
1946 return []
1947 try:
1947 try:
1948 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1948 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1949 except Exception as e:
1949 except Exception as e:
1950 if self.debug:
1950 if self.debug:
1951 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1951 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1952 else:
1952 else:
1953 return []
1953 return []
1954
1954
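# --- Illustrative sketch (not part of completer.py) --------------------------
# What `_jedi_matches` above boils down to once the open-string detection and
# `omit__names` filtering are stripped away: hand jedi the source plus the live
# namespaces and ask for completions at a 1-based line / 0-based column.
# Assumes a recent `jedi` release with the keyword-argument `complete()` API.
import jedi

namespace = {"numbers": [1, 2, 3]}
source = "numbers.ap"
interpreter = jedi.Interpreter(source, [namespace])
# jedi lines are 1-based, columns are 0-based (cursor at the end of the text)
completions = interpreter.complete(line=1, column=len(source))
print([c.name for c in completions])   # expected to include 'append'
# ------------------------------------------------------------------------------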
1955 def python_matches(self, text:str)->List[str]:
1955 def python_matches(self, text:str)->List[str]:
1956 """Match attributes or global python names"""
1956 """Match attributes or global python names"""
1957 if "." in text:
1957 if "." in text:
1958 try:
1958 try:
1959 matches = self.attr_matches(text)
1959 matches = self.attr_matches(text)
1960 if text.endswith('.') and self.omit__names:
1960 if text.endswith('.') and self.omit__names:
1961 if self.omit__names == 1:
1961 if self.omit__names == 1:
1962 # true if txt is _not_ a __ name, false otherwise:
1962 # true if txt is _not_ a __ name, false otherwise:
1963 no__name = (lambda txt:
1963 no__name = (lambda txt:
1964 re.match(r'.*\.__.*?__',txt) is None)
1964 re.match(r'.*\.__.*?__',txt) is None)
1965 else:
1965 else:
1966 # true if txt is _not_ a _ name, false otherwise:
1966 # true if txt is _not_ a _ name, false otherwise:
1967 no__name = (lambda txt:
1967 no__name = (lambda txt:
1968 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1968 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1969 matches = filter(no__name, matches)
1969 matches = filter(no__name, matches)
1970 except NameError:
1970 except NameError:
1971 # catches <undefined attributes>.<tab>
1971 # catches <undefined attributes>.<tab>
1972 matches = []
1972 matches = []
1973 else:
1973 else:
1974 matches = self.global_matches(text)
1974 matches = self.global_matches(text)
1975 return matches
1975 return matches
1976
1976
1977 def _default_arguments_from_docstring(self, doc):
1977 def _default_arguments_from_docstring(self, doc):
1978 """Parse the first line of docstring for call signature.
1978 """Parse the first line of docstring for call signature.
1979
1979
1980 Docstring should be of the form 'min(iterable[, key=func])\n'.
1980 Docstring should be of the form 'min(iterable[, key=func])\n'.
1981 It can also parse cython docstring of the form
1981 It can also parse cython docstring of the form
1982 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1982 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1983 """
1983 """
1984 if doc is None:
1984 if doc is None:
1985 return []
1985 return []
1986
1986
1987 # care only about the first line
1987 # care only about the first line
1988 line = doc.lstrip().splitlines()[0]
1988 line = doc.lstrip().splitlines()[0]
1989
1989
1990 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1990 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1991 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1991 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1992 sig = self.docstring_sig_re.search(line)
1992 sig = self.docstring_sig_re.search(line)
1993 if sig is None:
1993 if sig is None:
1994 return []
1994 return []
1995 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1995 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1996 sig = sig.groups()[0].split(',')
1996 sig = sig.groups()[0].split(',')
1997 ret = []
1997 ret = []
1998 for s in sig:
1998 for s in sig:
1999 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1999 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2000 ret += self.docstring_kwd_re.findall(s)
2000 ret += self.docstring_kwd_re.findall(s)
2001 return ret
2001 return ret
2002
2002
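# --- Illustrative sketch (not part of completer.py) --------------------------
# A standalone version of the docstring parsing above.  The two regexes are
# simplified variants of the `docstring_sig_re` / `docstring_kwd_re` class
# attributes defined elsewhere in this file (the keyword group is made optional
# here so that bare parameter names are captured too); illustrative only.
import re

SIG_RE = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')        # grab what's inside (...)
KWD_RE = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)?')    # pull each parameter name

def args_from_docstring(doc):
    line = doc.lstrip().splitlines()[0]
    sig = SIG_RE.search(line)
    if sig is None:
        return []
    names = []
    for part in sig.group(1).split(','):
        names += KWD_RE.findall(part)
    return names


# args_from_docstring('min(iterable[, key=func])\n') -> ['iterable', 'key']
# ------------------------------------------------------------------------------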
2003 def _default_arguments(self, obj):
2003 def _default_arguments(self, obj):
2004 """Return the list of default arguments of obj if it is callable,
2004 """Return the list of default arguments of obj if it is callable,
2005 or empty list otherwise."""
2005 or empty list otherwise."""
2006 call_obj = obj
2006 call_obj = obj
2007 ret = []
2007 ret = []
2008 if inspect.isbuiltin(obj):
2008 if inspect.isbuiltin(obj):
2009 pass
2009 pass
2010 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2010 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2011 if inspect.isclass(obj):
2011 if inspect.isclass(obj):
2012 #for cython embedsignature=True the constructor docstring
2012 #for cython embedsignature=True the constructor docstring
2013 #belongs to the object itself not __init__
2013 #belongs to the object itself not __init__
2014 ret += self._default_arguments_from_docstring(
2014 ret += self._default_arguments_from_docstring(
2015 getattr(obj, '__doc__', ''))
2015 getattr(obj, '__doc__', ''))
2016 # for classes, check for __init__,__new__
2016 # for classes, check for __init__,__new__
2017 call_obj = (getattr(obj, '__init__', None) or
2017 call_obj = (getattr(obj, '__init__', None) or
2018 getattr(obj, '__new__', None))
2018 getattr(obj, '__new__', None))
2019 # for all others, check if they are __call__able
2019 # for all others, check if they are __call__able
2020 elif hasattr(obj, '__call__'):
2020 elif hasattr(obj, '__call__'):
2021 call_obj = obj.__call__
2021 call_obj = obj.__call__
2022 ret += self._default_arguments_from_docstring(
2022 ret += self._default_arguments_from_docstring(
2023 getattr(call_obj, '__doc__', ''))
2023 getattr(call_obj, '__doc__', ''))
2024
2024
2025 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2025 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2026 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2026 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2027
2027
2028 try:
2028 try:
2029 sig = inspect.signature(obj)
2029 sig = inspect.signature(obj)
2030 ret.extend(k for k, v in sig.parameters.items() if
2030 ret.extend(k for k, v in sig.parameters.items() if
2031 v.kind in _keeps)
2031 v.kind in _keeps)
2032 except ValueError:
2032 except ValueError:
2033 pass
2033 pass
2034
2034
2035 return list(set(ret))
2035 return list(set(ret))
2036
2036
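# --- Illustrative sketch (not part of completer.py) --------------------------
# The `inspect.signature` step of `_default_arguments` above in isolation: keep
# positional-or-keyword and keyword-only parameters, i.e. the ones that can be
# offered as `name=` completions.  `keyword_candidates` and `demo` are made up.
import inspect

def keyword_candidates(func):
    keep = (inspect.Parameter.KEYWORD_ONLY,
            inspect.Parameter.POSITIONAL_OR_KEYWORD)
    try:
        sig = inspect.signature(func)
    except ValueError:            # some builtins expose no signature
        return []
    return [name for name, p in sig.parameters.items() if p.kind in keep]


def demo(a, b=1, *args, flag=False, **kwargs):
    pass

# keyword_candidates(demo) -> ['a', 'b', 'flag']  (*args and **kwargs are skipped)
# ------------------------------------------------------------------------------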
2037 @context_matcher()
2037 @context_matcher()
2038 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2038 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2039 """Match named parameters (kwargs) of the last open function."""
2039 """Match named parameters (kwargs) of the last open function."""
2040 matches = self.python_func_kw_matches(context.token)
2040 matches = self.python_func_kw_matches(context.token)
2041 return _convert_matcher_v1_result_to_v2(matches, type="param")
2041 return _convert_matcher_v1_result_to_v2(matches, type="param")
2042
2042
2043 def python_func_kw_matches(self, text):
2043 def python_func_kw_matches(self, text):
2044 """Match named parameters (kwargs) of the last open function.
2044 """Match named parameters (kwargs) of the last open function.
2045
2045
2046 .. deprecated:: 8.6
2046 .. deprecated:: 8.6
2047 You can use :meth:`python_func_kw_matcher` instead.
2047 You can use :meth:`python_func_kw_matcher` instead.
2048 """
2048 """
2049
2049
2050 if "." in text: # a parameter cannot be dotted
2050 if "." in text: # a parameter cannot be dotted
2051 return []
2051 return []
2052 try: regexp = self.__funcParamsRegex
2052 try: regexp = self.__funcParamsRegex
2053 except AttributeError:
2053 except AttributeError:
2054 regexp = self.__funcParamsRegex = re.compile(r'''
2054 regexp = self.__funcParamsRegex = re.compile(r'''
2055 '.*?(?<!\\)' | # single quoted strings or
2055 '.*?(?<!\\)' | # single quoted strings or
2056 ".*?(?<!\\)" | # double quoted strings or
2056 ".*?(?<!\\)" | # double quoted strings or
2057 \w+ | # identifier
2057 \w+ | # identifier
2058 \S # other characters
2058 \S # other characters
2059 ''', re.VERBOSE | re.DOTALL)
2059 ''', re.VERBOSE | re.DOTALL)
2060 # 1. find the nearest identifier that comes before an unclosed
2060 # 1. find the nearest identifier that comes before an unclosed
2061 # parenthesis before the cursor
2061 # parenthesis before the cursor
2062 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2062 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2063 tokens = regexp.findall(self.text_until_cursor)
2063 tokens = regexp.findall(self.text_until_cursor)
2064 iterTokens = reversed(tokens); openPar = 0
2064 iterTokens = reversed(tokens); openPar = 0
2065
2065
2066 for token in iterTokens:
2066 for token in iterTokens:
2067 if token == ')':
2067 if token == ')':
2068 openPar -= 1
2068 openPar -= 1
2069 elif token == '(':
2069 elif token == '(':
2070 openPar += 1
2070 openPar += 1
2071 if openPar > 0:
2071 if openPar > 0:
2072 # found the last unclosed parenthesis
2072 # found the last unclosed parenthesis
2073 break
2073 break
2074 else:
2074 else:
2075 return []
2075 return []
2076 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2076 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2077 ids = []
2077 ids = []
2078 isId = re.compile(r'\w+$').match
2078 isId = re.compile(r'\w+$').match
2079
2079
2080 while True:
2080 while True:
2081 try:
2081 try:
2082 ids.append(next(iterTokens))
2082 ids.append(next(iterTokens))
2083 if not isId(ids[-1]):
2083 if not isId(ids[-1]):
2084 ids.pop(); break
2084 ids.pop(); break
2085 if not next(iterTokens) == '.':
2085 if not next(iterTokens) == '.':
2086 break
2086 break
2087 except StopIteration:
2087 except StopIteration:
2088 break
2088 break
2089
2089
2090 # Find all named arguments already assigned to, so as to avoid suggesting
2090 # Find all named arguments already assigned to, so as to avoid suggesting
2091 # them again
2091 # them again
2092 usedNamedArgs = set()
2092 usedNamedArgs = set()
2093 par_level = -1
2093 par_level = -1
2094 for token, next_token in zip(tokens, tokens[1:]):
2094 for token, next_token in zip(tokens, tokens[1:]):
2095 if token == '(':
2095 if token == '(':
2096 par_level += 1
2096 par_level += 1
2097 elif token == ')':
2097 elif token == ')':
2098 par_level -= 1
2098 par_level -= 1
2099
2099
2100 if par_level != 0:
2100 if par_level != 0:
2101 continue
2101 continue
2102
2102
2103 if next_token != '=':
2103 if next_token != '=':
2104 continue
2104 continue
2105
2105
2106 usedNamedArgs.add(token)
2106 usedNamedArgs.add(token)
2107
2107
2108 argMatches = []
2108 argMatches = []
2109 try:
2109 try:
2110 callableObj = '.'.join(ids[::-1])
2110 callableObj = '.'.join(ids[::-1])
2111 namedArgs = self._default_arguments(eval(callableObj,
2111 namedArgs = self._default_arguments(eval(callableObj,
2112 self.namespace))
2112 self.namespace))
2113
2113
2114 # Remove used named arguments from the list, no need to show twice
2114 # Remove used named arguments from the list, no need to show twice
2115 for namedArg in set(namedArgs) - usedNamedArgs:
2115 for namedArg in set(namedArgs) - usedNamedArgs:
2116 if namedArg.startswith(text):
2116 if namedArg.startswith(text):
2117 argMatches.append("%s=" %namedArg)
2117 argMatches.append("%s=" %namedArg)
2118 except:
2118 except:
2119 pass
2119 pass
2120
2120
2121 return argMatches
2121 return argMatches
2122
2122
2123 @staticmethod
2123 @staticmethod
2124 def _get_keys(obj: Any) -> List[Any]:
2124 def _get_keys(obj: Any) -> List[Any]:
2125 # Objects can define their own completions by defining an
2125 # Objects can define their own completions by defining an
2126 # _ipython_key_completions_() method.
2126 # _ipython_key_completions_() method.
2127 method = get_real_method(obj, '_ipython_key_completions_')
2127 method = get_real_method(obj, '_ipython_key_completions_')
2128 if method is not None:
2128 if method is not None:
2129 return method()
2129 return method()
2130
2130
2131 # Special case some common in-memory dict-like types
2131 # Special case some common in-memory dict-like types
2132 if isinstance(obj, dict) or\
2132 if isinstance(obj, dict) or\
2133 _safe_isinstance(obj, 'pandas', 'DataFrame'):
2133 _safe_isinstance(obj, 'pandas', 'DataFrame'):
2134 try:
2134 try:
2135 return list(obj.keys())
2135 return list(obj.keys())
2136 except Exception:
2136 except Exception:
2137 return []
2137 return []
2138 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2138 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2139 _safe_isinstance(obj, 'numpy', 'void'):
2139 _safe_isinstance(obj, 'numpy', 'void'):
2140 return obj.dtype.names or []
2140 return obj.dtype.names or []
2141 return []
2141 return []
2142
2142
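# --- Illustrative sketch (not part of completer.py) --------------------------
# Any object can advertise its own key completions by implementing the
# `_ipython_key_completions_` method that `_get_keys` above looks up; IPython
# then offers those keys after `obj[`.  `ColumnStore` is a made-up container.
class ColumnStore:
    """Toy mapping whose keys should complete after ``store[``."""

    def __init__(self, columns):
        self._columns = dict(columns)

    def __getitem__(self, key):
        return self._columns[key]

    def _ipython_key_completions_(self):
        # the strings returned here are offered as dictionary-style keys
        return list(self._columns)


store = ColumnStore({"temperature": [21.0], "humidity": [0.4]})
# In an IPython session, typing `store["te` and pressing <TAB> offers 'temperature'.
# ------------------------------------------------------------------------------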
2143 @context_matcher()
2143 @context_matcher()
2144 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2144 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2145 """Match string keys in a dictionary, after e.g. ``foo[``."""
2145 """Match string keys in a dictionary, after e.g. ``foo[``."""
2146 matches = self.dict_key_matches(context.token)
2146 matches = self.dict_key_matches(context.token)
2147 return _convert_matcher_v1_result_to_v2(
2147 return _convert_matcher_v1_result_to_v2(
2148 matches, type="dict key", suppress_if_matches=True
2148 matches, type="dict key", suppress_if_matches=True
2149 )
2149 )
2150
2150
2151 def dict_key_matches(self, text: str) -> List[str]:
2151 def dict_key_matches(self, text: str) -> List[str]:
2152 """Match string keys in a dictionary, after e.g. ``foo[``.
2152 """Match string keys in a dictionary, after e.g. ``foo[``.
2153
2153
2154 .. deprecated:: 8.6
2154 .. deprecated:: 8.6
2155 You can use :meth:`dict_key_matcher` instead.
2155 You can use :meth:`dict_key_matcher` instead.
2156 """
2156 """
2157
2157
2158 if self.__dict_key_regexps is not None:
2158 if self.__dict_key_regexps is not None:
2159 regexps = self.__dict_key_regexps
2159 regexps = self.__dict_key_regexps
2160 else:
2160 else:
2161 dict_key_re_fmt = r'''(?x)
2161 dict_key_re_fmt = r'''(?x)
2162 ( # match dict-referring expression wrt greedy setting
2162 ( # match dict-referring expression wrt greedy setting
2163 %s
2163 %s
2164 )
2164 )
2165 \[ # open bracket
2165 \[ # open bracket
2166 \s* # and optional whitespace
2166 \s* # and optional whitespace
2167 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2167 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2168 ((?:[uUbB]? # string prefix (r not handled)
2168 ((?:[uUbB]? # string prefix (r not handled)
2169 (?:
2169 (?:
2170 '(?:[^']|(?<!\\)\\')*'
2170 '(?:[^']|(?<!\\)\\')*'
2171 |
2171 |
2172 "(?:[^"]|(?<!\\)\\")*"
2172 "(?:[^"]|(?<!\\)\\")*"
2173 )
2173 )
2174 \s*,\s*
2174 \s*,\s*
2175 )*)
2175 )*)
2176 ([uUbB]? # string prefix (r not handled)
2176 ([uUbB]? # string prefix (r not handled)
2177 (?: # unclosed string
2177 (?: # unclosed string
2178 '(?:[^']|(?<!\\)\\')*
2178 '(?:[^']|(?<!\\)\\')*
2179 |
2179 |
2180 "(?:[^"]|(?<!\\)\\")*
2180 "(?:[^"]|(?<!\\)\\")*
2181 )
2181 )
2182 )?
2182 )?
2183 $
2183 $
2184 '''
2184 '''
2185 regexps = self.__dict_key_regexps = {
2185 regexps = self.__dict_key_regexps = {
2186 False: re.compile(dict_key_re_fmt % r'''
2186 False: re.compile(dict_key_re_fmt % r'''
2187 # identifiers separated by .
2187 # identifiers separated by .
2188 (?!\d)\w+
2188 (?!\d)\w+
2189 (?:\.(?!\d)\w+)*
2189 (?:\.(?!\d)\w+)*
2190 '''),
2190 '''),
2191 True: re.compile(dict_key_re_fmt % '''
2191 True: re.compile(dict_key_re_fmt % '''
2192 .+
2192 .+
2193 ''')
2193 ''')
2194 }
2194 }
2195
2195
2196 match = regexps[self.greedy].search(self.text_until_cursor)
2196 match = regexps[self.greedy].search(self.text_until_cursor)
2197
2197
2198 if match is None:
2198 if match is None:
2199 return []
2199 return []
2200
2200
2201 expr, prefix0, prefix = match.groups()
2201 expr, prefix0, prefix = match.groups()
2202 try:
2202 try:
2203 obj = eval(expr, self.namespace)
2203 obj = eval(expr, self.namespace)
2204 except Exception:
2204 except Exception:
2205 try:
2205 try:
2206 obj = eval(expr, self.global_namespace)
2206 obj = eval(expr, self.global_namespace)
2207 except Exception:
2207 except Exception:
2208 return []
2208 return []
2209
2209
2210 keys = self._get_keys(obj)
2210 keys = self._get_keys(obj)
2211 if not keys:
2211 if not keys:
2212 return keys
2212 return keys
2213
2213
2214 extra_prefix = eval(prefix0) if prefix0 != '' else None
2214 extra_prefix = eval(prefix0) if prefix0 != '' else None
2215
2215
2216 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2216 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2217 if not matches:
2217 if not matches:
2218 return matches
2218 return matches
2219
2219
2220 # get the cursor position of
2220 # get the cursor position of
2221 # - the text being completed
2221 # - the text being completed
2222 # - the start of the key text
2222 # - the start of the key text
2223 # - the start of the completion
2223 # - the start of the completion
2224 text_start = len(self.text_until_cursor) - len(text)
2224 text_start = len(self.text_until_cursor) - len(text)
2225 if prefix:
2225 if prefix:
2226 key_start = match.start(3)
2226 key_start = match.start(3)
2227 completion_start = key_start + token_offset
2227 completion_start = key_start + token_offset
2228 else:
2228 else:
2229 key_start = completion_start = match.end()
2229 key_start = completion_start = match.end()
2230
2230
2231 # grab the leading prefix, to make sure all completions start with `text`
2231 # grab the leading prefix, to make sure all completions start with `text`
2232 if text_start > key_start:
2232 if text_start > key_start:
2233 leading = ''
2233 leading = ''
2234 else:
2234 else:
2235 leading = text[text_start:completion_start]
2235 leading = text[text_start:completion_start]
2236
2236
2237 # the index of the `[` character
2237 # the index of the `[` character
2238 bracket_idx = match.end(1)
2238 bracket_idx = match.end(1)
2239
2239
2240 # append closing quote and bracket as appropriate
2240 # append closing quote and bracket as appropriate
2241 # this is *not* appropriate if the opening quote or bracket is outside
2241 # this is *not* appropriate if the opening quote or bracket is outside
2242 # the text given to this method
2242 # the text given to this method
2243 suf = ''
2243 suf = ''
2244 continuation = self.line_buffer[len(self.text_until_cursor):]
2244 continuation = self.line_buffer[len(self.text_until_cursor):]
2245 if key_start > text_start and closing_quote:
2245 if key_start > text_start and closing_quote:
2246 # quotes were opened inside text, maybe close them
2246 # quotes were opened inside text, maybe close them
2247 if continuation.startswith(closing_quote):
2247 if continuation.startswith(closing_quote):
2248 continuation = continuation[len(closing_quote):]
2248 continuation = continuation[len(closing_quote):]
2249 else:
2249 else:
2250 suf += closing_quote
2250 suf += closing_quote
2251 if bracket_idx > text_start:
2251 if bracket_idx > text_start:
2252 # brackets were opened inside text, maybe close them
2252 # brackets were opened inside text, maybe close them
2253 if not continuation.startswith(']'):
2253 if not continuation.startswith(']'):
2254 suf += ']'
2254 suf += ']'
2255
2255
2256 return [leading + k + suf for k in matches]
2256 return [leading + k + suf for k in matches]
2257
2257
2258 @context_matcher()
2258 @context_matcher()
2259 def unicode_name_matcher(self, context: CompletionContext):
2259 def unicode_name_matcher(self, context: CompletionContext):
2260 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2260 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2261 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2261 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2262 return _convert_matcher_v1_result_to_v2(
2262 return _convert_matcher_v1_result_to_v2(
2263 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2263 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2264 )
2264 )
2265
2265
2266 @staticmethod
2266 @staticmethod
2267 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2267 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2268 """Match Latex-like syntax for unicode characters base
2268 """Match Latex-like syntax for unicode characters base
2269 on the name of the character.
2269 on the name of the character.
2270
2270
2271 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2271 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2272
2272
2273 Works only on valid Python 3 identifiers, or on combining characters that
2273 Works only on valid Python 3 identifiers, or on combining characters that
2274 will combine to form a valid identifier.
2274 will combine to form a valid identifier.
2275 """
2275 """
2276 slashpos = text.rfind('\\')
2276 slashpos = text.rfind('\\')
2277 if slashpos > -1:
2277 if slashpos > -1:
2278 s = text[slashpos+1:]
2278 s = text[slashpos+1:]
2279 try :
2279 try :
2280 unic = unicodedata.lookup(s)
2280 unic = unicodedata.lookup(s)
2281 # allow combining chars
2281 # allow combining chars
2282 if ('a'+unic).isidentifier():
2282 if ('a'+unic).isidentifier():
2283 return '\\'+s,[unic]
2283 return '\\'+s,[unic]
2284 except KeyError:
2284 except KeyError:
2285 pass
2285 pass
2286 return '', []
2286 return '', []
2287
2287
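# --- Illustrative sketch (not part of completer.py) --------------------------
# The core of `unicode_name_matches` above: take what follows the last
# backslash, look the name up in the Unicode database, and only offer the
# character if it can appear in an identifier.  `unicode_name_match` is a
# made-up helper, not the static method itself.
import unicodedata

def unicode_name_match(text):
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", []
    name = text[slashpos + 1:]
    try:
        char = unicodedata.lookup(name)
    except KeyError:
        return "", []
    # prepending 'a' allows combining characters, which only form identifiers
    # when attached to a base character
    if ("a" + char).isidentifier():
        return "\\" + name, [char]
    return "", []


# unicode_name_match("x = \\GREEK SMALL LETTER ETA")
#   -> ('\\GREEK SMALL LETTER ETA', ['Ξ·'])
# ------------------------------------------------------------------------------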
2288 @context_matcher()
2288 @context_matcher()
2289 def latex_name_matcher(self, context: CompletionContext):
2289 def latex_name_matcher(self, context: CompletionContext):
2290 """Match Latex syntax for unicode characters.
2290 """Match Latex syntax for unicode characters.
2291
2291
2292 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2292 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2293 """
2293 """
2294 fragment, matches = self.latex_matches(context.text_until_cursor)
2294 fragment, matches = self.latex_matches(context.text_until_cursor)
2295 return _convert_matcher_v1_result_to_v2(
2295 return _convert_matcher_v1_result_to_v2(
2296 matches, type="latex", fragment=fragment, suppress_if_matches=True
2296 matches, type="latex", fragment=fragment, suppress_if_matches=True
2297 )
2297 )
2298
2298
2299 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2299 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2300 """Match Latex syntax for unicode characters.
2300 """Match Latex syntax for unicode characters.
2301
2301
2302 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2302 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2303
2303
2304 .. deprecated:: 8.6
2304 .. deprecated:: 8.6
2305 You can use :meth:`latex_name_matcher` instead.
2305 You can use :meth:`latex_name_matcher` instead.
2306 """
2306 """
2307 slashpos = text.rfind('\\')
2307 slashpos = text.rfind('\\')
2308 if slashpos > -1:
2308 if slashpos > -1:
2309 s = text[slashpos:]
2309 s = text[slashpos:]
2310 if s in latex_symbols:
2310 if s in latex_symbols:
2311 # Try to complete a full latex symbol to unicode
2311 # Try to complete a full latex symbol to unicode
2312 # \\alpha -> Ξ±
2312 # \\alpha -> Ξ±
2313 return s, [latex_symbols[s]]
2313 return s, [latex_symbols[s]]
2314 else:
2314 else:
2315 # If a user has partially typed a latex symbol, give them
2315 # If a user has partially typed a latex symbol, give them
2316 # a full list of options \al -> [\aleph, \alpha]
2316 # a full list of options \al -> [\aleph, \alpha]
2317 matches = [k for k in latex_symbols if k.startswith(s)]
2317 matches = [k for k in latex_symbols if k.startswith(s)]
2318 if matches:
2318 if matches:
2319 return s, matches
2319 return s, matches
2320 return '', ()
2320 return '', ()
2321
2321
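# --- Illustrative sketch (not part of completer.py) --------------------------
# The lookup performed by `latex_matches` above against the `latex_symbols`
# table (see IPython.core.latex_symbols): an exact name expands to its unicode
# character, a partial name lists every symbol sharing the prefix.  The tiny
# table below is a made-up stand-in for the real one.
def latex_match(text, table):
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", ()
    s = text[slashpos:]
    if s in table:
        return s, [table[s]]                          # \alpha -> Ξ±
    matches = [k for k in table if k.startswith(s)]   # \al -> [\alpha, \aleph]
    return (s, matches) if matches else ("", ())


tiny_table = {"\\alpha": "Ξ±", "\\aleph": "β„΅"}
# latex_match("x\\al", tiny_table) -> ('\\al', ['\\alpha', '\\aleph'])
# ------------------------------------------------------------------------------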
2322 @context_matcher()
2322 @context_matcher()
2323 def custom_completer_matcher(self, context):
2323 def custom_completer_matcher(self, context):
2324 """Dispatch custom completer.
2324 """Dispatch custom completer.
2325
2325
2326 If a match is found, suppresses all other matchers except for Jedi.
2326 If a match is found, suppresses all other matchers except for Jedi.
2327 """
2327 """
2328 matches = self.dispatch_custom_completer(context.token) or []
2328 matches = self.dispatch_custom_completer(context.token) or []
2329 result = _convert_matcher_v1_result_to_v2(
2329 result = _convert_matcher_v1_result_to_v2(
2330 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2330 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2331 )
2331 )
2332 result["ordered"] = True
2332 result["ordered"] = True
2333 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2333 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2334 return result
2334 return result
2335
2335
2336 def dispatch_custom_completer(self, text):
2336 def dispatch_custom_completer(self, text):
2337 """
2337 """
2338 .. deprecated:: 8.6
2338 .. deprecated:: 8.6
2339 You can use :meth:`custom_completer_matcher` instead.
2339 You can use :meth:`custom_completer_matcher` instead.
2340 """
2340 """
2341 if not self.custom_completers:
2341 if not self.custom_completers:
2342 return
2342 return
2343
2343
2344 line = self.line_buffer
2344 line = self.line_buffer
2345 if not line.strip():
2345 if not line.strip():
2346 return None
2346 return None
2347
2347
2348 # Create a little structure to pass all the relevant information about
2348 # Create a little structure to pass all the relevant information about
2349 # the current completion to any custom completer.
2349 # the current completion to any custom completer.
2350 event = SimpleNamespace()
2350 event = SimpleNamespace()
2351 event.line = line
2351 event.line = line
2352 event.symbol = text
2352 event.symbol = text
2353 cmd = line.split(None,1)[0]
2353 cmd = line.split(None,1)[0]
2354 event.command = cmd
2354 event.command = cmd
2355 event.text_until_cursor = self.text_until_cursor
2355 event.text_until_cursor = self.text_until_cursor
2356
2356
2357 # for foo etc, try also to find completer for %foo
2357 # for foo etc, try also to find completer for %foo
2358 if not cmd.startswith(self.magic_escape):
2358 if not cmd.startswith(self.magic_escape):
2359 try_magic = self.custom_completers.s_matches(
2359 try_magic = self.custom_completers.s_matches(
2360 self.magic_escape + cmd)
2360 self.magic_escape + cmd)
2361 else:
2361 else:
2362 try_magic = []
2362 try_magic = []
2363
2363
2364 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2364 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2365 try_magic,
2365 try_magic,
2366 self.custom_completers.flat_matches(self.text_until_cursor)):
2366 self.custom_completers.flat_matches(self.text_until_cursor)):
2367 try:
2367 try:
2368 res = c(event)
2368 res = c(event)
2369 if res:
2369 if res:
2370 # first, try case sensitive match
2370 # first, try case sensitive match
2371 withcase = [r for r in res if r.startswith(text)]
2371 withcase = [r for r in res if r.startswith(text)]
2372 if withcase:
2372 if withcase:
2373 return withcase
2373 return withcase
2374 # if none, then case insensitive ones are ok too
2374 # if none, then case insensitive ones are ok too
2375 text_low = text.lower()
2375 text_low = text.lower()
2376 return [r for r in res if r.lower().startswith(text_low)]
2376 return [r for r in res if r.lower().startswith(text_low)]
2377 except TryNext:
2377 except TryNext:
2378 pass
2378 pass
2379 except KeyboardInterrupt:
2379 except KeyboardInterrupt:
2380 """
2380 """
2381 If a custom completer takes too long,
2381 If a custom completer takes too long,
2382 let the keyboard interrupt abort it and return nothing.
2382 let the keyboard interrupt abort it and return nothing.
2383 """
2383 """
2384 break
2384 break
2385
2385
2386 return None
2386 return None
2387
2387
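# --- Illustrative sketch (not part of completer.py) --------------------------
# How the custom completers dispatched above are typically registered from user
# code: hook a callback onto `complete_command` for a given command.  The
# `event` argument is the SimpleNamespace built in `dispatch_custom_completer`
# (fields: line, symbol, command, text_until_cursor).  `%apt` and the package
# names are made up; this assumes an IPython session where `get_ipython()` and
# the `complete_command` hook are available.
def apt_completer(self, event):
    # offer a fixed set of packages after e.g. `%apt install <TAB>`
    packages = ["git", "gimp", "gcc"]
    return [p for p in packages if p.startswith(event.symbol)]


ip = get_ipython()                                        # IPython only
ip.set_hook("complete_command", apt_completer, str_key="%apt")
# ------------------------------------------------------------------------------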
2388 def completions(self, text: str, offset: int)->Iterator[Completion]:
2388 def completions(self, text: str, offset: int)->Iterator[Completion]:
2389 """
2389 """
2390 Returns an iterator over the possible completions
2390 Returns an iterator over the possible completions
2391
2391
2392 .. warning::
2392 .. warning::
2393
2393
2394 Unstable
2394 Unstable
2395
2395
2396 This function is unstable; the API may change without warning.
2396 This function is unstable; the API may change without warning.
2397 It will also raise unless used in the proper context manager.
2397 It will also raise unless used in the proper context manager.
2398
2398
2399 Parameters
2399 Parameters
2400 ----------
2400 ----------
2401 text : str
2401 text : str
2402 Full text of the current input, multi line string.
2402 Full text of the current input, multi line string.
2403 offset : int
2403 offset : int
2404 Integer representing the position of the cursor in ``text``. Offset
2404 Integer representing the position of the cursor in ``text``. Offset
2405 is 0-based.
2405 is 0-based.
2406
2406
2407 Yields
2407 Yields
2408 ------
2408 ------
2409 Completion
2409 Completion
2410
2410
2411 Notes
2411 Notes
2412 -----
2412 -----
2413 The cursor in a text can be seen either as being "in between"
2413 The cursor in a text can be seen either as being "in between"
2414 characters or "on" a character, depending on the interface visible to
2414 characters or "on" a character, depending on the interface visible to
2415 the user. For consistency, a cursor "in between" characters X
2415 the user. For consistency, a cursor "in between" characters X
2416 and Y is equivalent to the cursor being "on" character Y, that is to say
2416 and Y is equivalent to the cursor being "on" character Y, that is to say
2417 the character the cursor is on is considered as being after the cursor.
2417 the character the cursor is on is considered as being after the cursor.
2418
2418
2419 Combining characters may span more than one position in the
2419 Combining characters may span more than one position in the
2420 text.
2420 text.
2421
2421
2422 .. note::
2422 .. note::
2423
2423
2424 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2424 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2425 fake Completion token to distinguish completions returned by Jedi
2425 fake Completion token to distinguish completions returned by Jedi
2426 from usual IPython completions.
2426 from usual IPython completions.
2427
2427
2428 .. note::
2428 .. note::
2429
2429
2430 Completions are not completely deduplicated yet. If identical
2430 Completions are not completely deduplicated yet. If identical
2431 completions are coming from different sources this function does not
2431 completions are coming from different sources this function does not
2432 ensure that each completion object will only be present once.
2432 ensure that each completion object will only be present once.
2433 """
2433 """
2434 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2434 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2435 "It may change without warnings. "
2435 "It may change without warnings. "
2436 "Use in corresponding context manager.",
2436 "Use in corresponding context manager.",
2437 category=ProvisionalCompleterWarning, stacklevel=2)
2437 category=ProvisionalCompleterWarning, stacklevel=2)
2438
2438
2439 seen = set()
2439 seen = set()
2440 profiler:Optional[cProfile.Profile]
2440 profiler:Optional[cProfile.Profile]
2441 try:
2441 try:
2442 if self.profile_completions:
2442 if self.profile_completions:
2443 import cProfile
2443 import cProfile
2444 profiler = cProfile.Profile()
2444 profiler = cProfile.Profile()
2445 profiler.enable()
2445 profiler.enable()
2446 else:
2446 else:
2447 profiler = None
2447 profiler = None
2448
2448
2449 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2449 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2450 if c and (c in seen):
2450 if c and (c in seen):
2451 continue
2451 continue
2452 yield c
2452 yield c
2453 seen.add(c)
2453 seen.add(c)
2454 except KeyboardInterrupt:
2454 except KeyboardInterrupt:
2455 """if completions take too long and users send keyboard interrupt,
2455 """if completions take too long and users send keyboard interrupt,
2456 do not crash and return ASAP. """
2456 do not crash and return ASAP. """
2457 pass
2457 pass
2458 finally:
2458 finally:
2459 if profiler is not None:
2459 if profiler is not None:
2460 profiler.disable()
2460 profiler.disable()
2461 ensure_dir_exists(self.profiler_output_dir)
2461 ensure_dir_exists(self.profiler_output_dir)
2462 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2462 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2463 print("Writing profiler output to", output_path)
2463 print("Writing profiler output to", output_path)
2464 profiler.dump_stats(output_path)
2464 profiler.dump_stats(output_path)
2465
2465
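# --- Illustrative sketch (not part of completer.py) --------------------------
# Driving the provisional `completions()` API above from user code: it must be
# wrapped in the `provisionalcompleter()` context manager defined earlier in
# this module, and the offset is a 0-based index into the full text.  Assumes
# an IPython session (`get_ipython()` is only defined there).
from IPython.core.completer import provisionalcompleter

ip = get_ipython()
code = "import os\nos.pa"
with provisionalcompleter():
    for completion in ip.Completer.completions(code, len(code)):
        print(completion.text, completion.type)   # e.g. 'path', 'pathsep', ...
# ------------------------------------------------------------------------------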
2466 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2466 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2467 """
2467 """
2468 Core completion module. Same signature as :any:`completions`, with the
2468 Core completion module. Same signature as :any:`completions`, with the
2469 extra ``_timeout`` parameter (in seconds).
2469 extra ``_timeout`` parameter (in seconds).
2470
2470
2471 Computing jedi's completion ``.type`` can be quite expensive (it is a
2471 Computing jedi's completion ``.type`` can be quite expensive (it is a
2472 lazy property) and can require some warm-up, more warm-up than just
2472 lazy property) and can require some warm-up, more warm-up than just
2473 computing the ``name`` of a completion. The warm-up can be:
2473 computing the ``name`` of a completion. The warm-up can be:
2474
2474
2475 - Long warm-up the first time a module is encountered after
2475 - Long warm-up the first time a module is encountered after
2476 install/update: actually build parse/inference tree.
2476 install/update: actually build parse/inference tree.
2477
2477
2478 - first time the module is encountered in a session: load tree from
2478 - first time the module is encountered in a session: load tree from
2479 disk.
2479 disk.
2480
2480
2481 We don't want to block completions for tens of seconds, so we give the
2481 We don't want to block completions for tens of seconds, so we give the
2482 completer a "budget" of ``_timeout`` seconds per invocation to compute
2482 completer a "budget" of ``_timeout`` seconds per invocation to compute
2483 completion types; the completions that have not yet been computed will
2483 completion types; the completions that have not yet been computed will
2484 be marked as "unknown" and will have a chance to be computed next round
2484 be marked as "unknown" and will have a chance to be computed next round
2485 as things get cached.
2485 as things get cached.
2486
2486
2487 Keep in mind that Jedi is not the only thing processing the completion, so
2487 Keep in mind that Jedi is not the only thing processing the completion, so
2488 keep the timeout short-ish: if we take more than 0.3 seconds we still
2488 keep the timeout short-ish: if we take more than 0.3 seconds we still
2489 have lots of processing to do.
2489 have lots of processing to do.
2490
2490
2491 """
2491 """
2492 deadline = time.monotonic() + _timeout
2492 deadline = time.monotonic() + _timeout
2493
2493
2494 before = full_text[:offset]
2494 before = full_text[:offset]
2495 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2495 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2496
2496
2497 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2497 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2498
2498
2499 results = self._complete(
2499 results = self._complete(
2500 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2500 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2501 )
2501 )
2502 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2502 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2503 identifier: result
2503 identifier: result
2504 for identifier, result in results.items()
2504 for identifier, result in results.items()
2505 if identifier != jedi_matcher_id
2505 if identifier != jedi_matcher_id
2506 }
2506 }
2507
2507
2508 jedi_matches = (
2508 jedi_matches = (
2509 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2509 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2510 if jedi_matcher_id in results
2510 if jedi_matcher_id in results
2511 else ()
2511 else ()
2512 )
2512 )
2513
2513
2514 iter_jm = iter(jedi_matches)
2514 iter_jm = iter(jedi_matches)
2515 if _timeout:
2515 if _timeout:
2516 for jm in iter_jm:
2516 for jm in iter_jm:
2517 try:
2517 try:
2518 type_ = jm.type
2518 type_ = jm.type
2519 except Exception:
2519 except Exception:
2520 if self.debug:
2520 if self.debug:
2521 print("Error in Jedi getting type of ", jm)
2521 print("Error in Jedi getting type of ", jm)
2522 type_ = None
2522 type_ = None
2523 delta = len(jm.name_with_symbols) - len(jm.complete)
2523 delta = len(jm.name_with_symbols) - len(jm.complete)
2524 if type_ == 'function':
2524 if type_ == 'function':
2525 signature = _make_signature(jm)
2525 signature = _make_signature(jm)
2526 else:
2526 else:
2527 signature = ''
2527 signature = ''
2528 yield Completion(start=offset - delta,
2528 yield Completion(start=offset - delta,
2529 end=offset,
2529 end=offset,
2530 text=jm.name_with_symbols,
2530 text=jm.name_with_symbols,
2531 type=type_,
2531 type=type_,
2532 signature=signature,
2532 signature=signature,
2533 _origin='jedi')
2533 _origin='jedi')
2534
2534
2535 if time.monotonic() > deadline:
2535 if time.monotonic() > deadline:
2536 break
2536 break
2537
2537
2538 for jm in iter_jm:
2538 for jm in iter_jm:
2539 delta = len(jm.name_with_symbols) - len(jm.complete)
2539 delta = len(jm.name_with_symbols) - len(jm.complete)
2540 yield Completion(
2540 yield Completion(
2541 start=offset - delta,
2541 start=offset - delta,
2542 end=offset,
2542 end=offset,
2543 text=jm.name_with_symbols,
2543 text=jm.name_with_symbols,
2544 type=_UNKNOWN_TYPE, # don't compute type for speed
2544 type=_UNKNOWN_TYPE, # don't compute type for speed
2545 _origin="jedi",
2545 _origin="jedi",
2546 signature="",
2546 signature="",
2547 )
2547 )
2548
2548
2549 # TODO:
2549 # TODO:
2550 # Suppress this, right now just for debug.
2550 # Suppress this, right now just for debug.
2551 if jedi_matches and non_jedi_results and self.debug:
2551 if jedi_matches and non_jedi_results and self.debug:
2552 some_start_offset = before.rfind(
2552 some_start_offset = before.rfind(
2553 next(iter(non_jedi_results.values()))["matched_fragment"]
2553 next(iter(non_jedi_results.values()))["matched_fragment"]
2554 )
2554 )
2555 yield Completion(
2555 yield Completion(
2556 start=some_start_offset,
2556 start=some_start_offset,
2557 end=offset,
2557 end=offset,
2558 text="--jedi/ipython--",
2558 text="--jedi/ipython--",
2559 _origin="debug",
2559 _origin="debug",
2560 type="none",
2560 type="none",
2561 signature="",
2561 signature="",
2562 )
2562 )
2563
2563
2564 ordered = []
2564 ordered = []
2565 sortable = []
2565 sortable = []
2566
2566
2567 for origin, result in non_jedi_results.items():
2567 for origin, result in non_jedi_results.items():
2568 matched_text = result["matched_fragment"]
2568 matched_text = result["matched_fragment"]
2569 start_offset = before.rfind(matched_text)
2569 start_offset = before.rfind(matched_text)
2570 is_ordered = result.get("ordered", False)
2570 is_ordered = result.get("ordered", False)
2571 container = ordered if is_ordered else sortable
2571 container = ordered if is_ordered else sortable
2572
2572
2573 # I'm unsure if this is always true, so let's assert and see if it
2573 # I'm unsure if this is always true, so let's assert and see if it
2574 # crashes
2574 # crashes
2575 assert before.endswith(matched_text)
2575 assert before.endswith(matched_text)
2576
2576
2577 for simple_completion in result["completions"]:
2577 for simple_completion in result["completions"]:
2578 completion = Completion(
2578 completion = Completion(
2579 start=start_offset,
2579 start=start_offset,
2580 end=offset,
2580 end=offset,
2581 text=simple_completion.text,
2581 text=simple_completion.text,
2582 _origin=origin,
2582 _origin=origin,
2583 signature="",
2583 signature="",
2584 type=simple_completion.type or _UNKNOWN_TYPE,
2584 type=simple_completion.type or _UNKNOWN_TYPE,
2585 )
2585 )
2586 container.append(completion)
2586 container.append(completion)
2587
2587
2588 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2588 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2589 :MATCHES_LIMIT
2589 :MATCHES_LIMIT
2590 ]
2590 ]
2591
2591
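# --- Illustrative sketch (not part of completer.py) --------------------------
# The "budget" pattern described in the `_completions` docstring above, in
# miniature: spend at most `budget` seconds enriching items, then fall back to
# a cheap placeholder for whatever is left.  All names and numbers are made up.
import time

def enrich_with_budget(items, expensive, budget=0.2, default="<unknown>"):
    deadline = time.monotonic() + budget
    it = iter(items)
    out = []
    for item in it:
        out.append((item, expensive(item)))        # costly per-item work
        if time.monotonic() > deadline:
            break                                  # budget exhausted
    out.extend((item, default) for item in it)     # remaining items stay cheap
    return out
# ------------------------------------------------------------------------------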
2592 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2592 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2593 """Find completions for the given text and line context.
2593 """Find completions for the given text and line context.
2594
2594
2595 Note that both the text and the line_buffer are optional, but at least
2595 Note that both the text and the line_buffer are optional, but at least
2596 one of them must be given.
2596 one of them must be given.
2597
2597
2598 Parameters
2598 Parameters
2599 ----------
2599 ----------
2600 text : string, optional
2600 text : string, optional
2601 Text to perform the completion on. If not given, the line buffer
2601 Text to perform the completion on. If not given, the line buffer
2602 is split using the instance's CompletionSplitter object.
2602 is split using the instance's CompletionSplitter object.
2603 line_buffer : string, optional
2603 line_buffer : string, optional
2604 If not given, the completer attempts to obtain the current line
2604 If not given, the completer attempts to obtain the current line
2605 buffer via readline. This keyword allows clients which are
2605 buffer via readline. This keyword allows clients which are
2606 requesting for text completions in non-readline contexts to inform
2606 requesting for text completions in non-readline contexts to inform
2607 the completer of the entire text.
2607 the completer of the entire text.
2608 cursor_pos : int, optional
2608 cursor_pos : int, optional
2609 Index of the cursor in the full line buffer. Should be provided by
2609 Index of the cursor in the full line buffer. Should be provided by
2610 remote frontends where kernel has no access to frontend state.
2610 remote frontends where kernel has no access to frontend state.
2611
2611
2612 Returns
2612 Returns
2613 -------
2613 -------
2614 Tuple of two items:
2614 Tuple of two items:
2615 text : str
2615 text : str
2616 Text that was actually used in the completion.
2616 Text that was actually used in the completion.
2617 matches : list
2617 matches : list
2618 A list of completion matches.
2618 A list of completion matches.
2619
2619
2620 Notes
2620 Notes
2621 -----
2621 -----
2622 This API is likely to be deprecated and replaced by
2622 This API is likely to be deprecated and replaced by
2623 :any:`IPCompleter.completions` in the future.
2623 :any:`IPCompleter.completions` in the future.
2624
2624
2625 """
2625 """
2626 warnings.warn('`Completer.complete` is pending deprecation since '
2626 warnings.warn('`Completer.complete` is pending deprecation since '
2627 'IPython 6.0 and will be replaced by `Completer.completions`.',
2627 'IPython 6.0 and will be replaced by `Completer.completions`.',
2628 PendingDeprecationWarning)
2628 PendingDeprecationWarning)
2629 # potential todo, FOLD the 3rd throw away argument of _complete
2629 # potential todo, FOLD the 3rd throw away argument of _complete
2630 # into the first 2 one.
2630 # into the first 2 one.
2631 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2631 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2632 # TODO: should we deprecate now, or does it stay?
2632 # TODO: should we deprecate now, or does it stay?
2633
2633
2634 results = self._complete(
2634 results = self._complete(
2635 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2635 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2636 )
2636 )
2637
2637
2638 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2638 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2639
2639
2640 return self._arrange_and_extract(
2640 return self._arrange_and_extract(
2641 results,
2641 results,
2642 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
2642 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
2643 skip_matchers={jedi_matcher_id},
2643 skip_matchers={jedi_matcher_id},
2644 # this API does not support different start/end positions (fragments of token).
2644 # this API does not support different start/end positions (fragments of token).
2645 abort_if_offset_changes=True,
2645 abort_if_offset_changes=True,
2646 )
2646 )
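
    # A minimal usage sketch for the legacy API above (illustrative; the
    # ``completer`` instance and the "import o" input are assumptions, not
    # part of this module)::
    #
    #     text, matches = completer.complete(line_buffer="import o", cursor_pos=8)
    #     # ``text`` is the fragment that was actually completed (here "o"),
    #     # ``matches`` is a list of candidate strings such as "os" or "operator".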

    def _arrange_and_extract(
        self,
        results: Dict[str, MatcherResult],
        skip_matchers: Set[str],
        abort_if_offset_changes: bool,
    ):

        sortable = []
        ordered = []
        most_recent_fragment = None
        for identifier, result in results.items():
            if identifier in skip_matchers:
                continue
            if not result["completions"]:
                continue
            if not most_recent_fragment:
                most_recent_fragment = result["matched_fragment"]
            if (
                abort_if_offset_changes
                and result["matched_fragment"] != most_recent_fragment
            ):
                break
            if result.get("ordered", False):
                ordered.extend(result["completions"])
            else:
                sortable.extend(result["completions"])

        if not most_recent_fragment:
            most_recent_fragment = ""  # to satisfy typechecker (and just in case)

        return most_recent_fragment, [
            m.text for m in self._deduplicate(ordered + self._sort(sortable))
        ]
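
    # A hedged sketch of the merge performed above (the matcher results shown
    # are hypothetical, not produced by this module): given two results that
    # both matched the fragment "fo", one marked ``ordered`` and one not,
    #
    #     results = {
    #         "matcher.a": {"completions": [x, y], "matched_fragment": "fo", "ordered": True},
    #         "matcher.b": {"completions": [z], "matched_fragment": "fo"},
    #     }
    #
    # the return value is ``("fo", [x.text, y.text, z.text])``: ordered results
    # keep their order, the remaining ones are sorted, and duplicate texts are
    # dropped by ``_deduplicate``.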

    def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
                  full_text=None) -> _CompleteResult:
        """
        Like complete but can also return raw Jedi completions as well as the
        origin of the completion text. This could (and should) be made much
        cleaner, but that will be simpler once we drop the old (and stateful)
        :any:`complete` API.

        With the current provisional API, cursor_pos acts, depending on the
        caller, either as the offset in the ``text`` or ``line_buffer``, or as
        the ``column`` when passing multiline strings; this could/should be
        renamed, but that would add extra noise.

        Parameters
        ----------
        cursor_line
            Index of the line the cursor is on. 0 indexed.
        cursor_pos
            Position of the cursor in the current line/line_buffer/text. 0
            indexed.
        line_buffer : optional, str
            The current line the cursor is in; this exists mostly for legacy
            reasons, as readline could only give us the single current line.
            Prefer `full_text`.
        text : str
            The current "token" the cursor is in, also mostly for historical
            reasons, as the completer would trigger only after the current line
            was parsed.
        full_text : str
            Full text of the current cell.

        Returns
        -------
        An ordered dictionary where keys are identifiers of completion
        matchers and values are ``MatcherResult``s.
        """

        # if the cursor position isn't given, the only sane assumption we can
        # make is that it's at the end of the line (the common case)
        if cursor_pos is None:
            cursor_pos = len(line_buffer) if text is None else len(text)

        if self.use_main_ns:
            self.namespace = __main__.__dict__

        # if text is either None or an empty string, rely on the line buffer
        if (not line_buffer) and full_text:
            line_buffer = full_text.split('\n')[cursor_line]
        if not text:  # issue #11508: check line_buffer before calling split_line
            text = (
                self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
            )

        # If no line buffer is given, assume the input text is all there was
        if line_buffer is None:
            line_buffer = text

        # deprecated - do not use `line_buffer` in new code.
        self.line_buffer = line_buffer
        self.text_until_cursor = self.line_buffer[:cursor_pos]

        if not full_text:
            full_text = line_buffer

        context = CompletionContext(
            full_text=full_text,
            cursor_position=cursor_pos,
            cursor_line=cursor_line,
            token=text,
            limit=MATCHES_LIMIT,
        )

        # Start with a clean slate of completions
        results = {}

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        suppressed_matchers = set()

        matchers = {
            _get_matcher_id(matcher): matcher
            for matcher in sorted(
                self.matchers, key=_get_matcher_priority, reverse=True
            )
        }

        for matcher_id, matcher in matchers.items():
            api_version = _get_matcher_api_version(matcher)
            matcher_id = _get_matcher_id(matcher)

            if matcher_id in self.disable_matchers:
                continue

            if matcher_id in results:
                warnings.warn(f"Duplicate matcher ID: {matcher_id}.")

            if matcher_id in suppressed_matchers:
                continue

            try:
                if api_version == 1:
                    result = _convert_matcher_v1_result_to_v2(
                        matcher(text), type=_UNKNOWN_TYPE
                    )
                elif api_version == 2:
                    result = cast(MatcherAPIv2, matcher)(context)
                else:
                    raise ValueError(f"Unsupported API version {api_version}")
            except:
                # Show the ugly traceback if the matcher causes an
                # exception, but do NOT crash the kernel!
                sys.excepthook(*sys.exc_info())
                continue

            # set a default value for the matched fragment if the matcher did not provide one.
            result["matched_fragment"] = result.get("matched_fragment", context.token)

            if not suppressed_matchers:
                suppression_recommended = result.get("suppress", False)

                suppression_config = (
                    self.suppress_competing_matchers.get(matcher_id, None)
                    if isinstance(self.suppress_competing_matchers, dict)
                    else self.suppress_competing_matchers
                )
                should_suppress = (
                    (suppression_config is True)
                    or (suppression_recommended and (suppression_config is not False))
                ) and len(result["completions"])

                if should_suppress:
                    suppression_exceptions = result.get("do_not_suppress", set())
                    try:
                        to_suppress = set(suppression_recommended)
                    except TypeError:
                        to_suppress = set(matchers)
                    suppressed_matchers = to_suppress - suppression_exceptions

                    new_results = {}
                    for previous_matcher_id, previous_result in results.items():
                        if previous_matcher_id not in suppressed_matchers:
                            new_results[previous_matcher_id] = previous_result
                    results = new_results

            results[matcher_id] = result

        _, matches = self._arrange_and_extract(
            results,
            # TODO: Jedi completions are not included in the legacy stateful API;
            # was this deliberate or an omission? If it was an omission, we can
            # remove the filtering step, otherwise remove this comment.
            skip_matchers={jedi_matcher_id},
            abort_if_offset_changes=False,
        )

        # populate legacy stateful API
        self.matches = matches

        return results
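
        # A hedged sketch of the suppression behaviour implemented in the loop
        # above (illustrative, not part of the original file). Assuming the
        # ``suppress_competing_matchers`` trait is exposed through the normal
        # traitlets configuration, it can be a single flag or a per-matcher dict::
        #
        #     # the first matcher that returns completions suppresses all others:
        #     c.IPCompleter.suppress_competing_matchers = True
        #     # or decide per matcher identifier (the identifier below is hypothetical):
        #     c.IPCompleter.suppress_competing_matchers = {"some.matcher.id": False}
        #
        # Independently, a matcher may request suppression itself by returning
        # ``"suppress": True`` (or a collection of matcher identifiers) in its
        # ``MatcherResult``; ``"do_not_suppress"`` lists exceptions.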

    @staticmethod
    def _deduplicate(
        matches: Sequence[SimpleCompletion],
    ) -> Iterable[SimpleCompletion]:
        filtered_matches = {}
        for match in matches:
            text = match.text
            if (
                text not in filtered_matches
                or filtered_matches[text].type == _UNKNOWN_TYPE
            ):
                filtered_matches[text] = match

        return filtered_matches.values()

    @staticmethod
    def _sort(matches: Sequence[SimpleCompletion]):
        return sorted(matches, key=lambda x: completions_sorting_key(x.text))

    @context_matcher()
    def fwd_unicode_matcher(self, context: CompletionContext):
        """Same as :any:`fwd_unicode_match`, but adapted to the new Matcher API."""
        # TODO: use `context.limit` to terminate early once we matched the maximum
        # number that will be used downstream; can be added as an optional to
        # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
        fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

    def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
        """
        Forward match a string starting with a backslash with a list of
        potential Unicode completions.

        Will compute list of Unicode character names on first call and cache it.

        .. deprecated:: 8.6
            You can use :meth:`fwd_unicode_matcher` instead.

        Returns
        -------
        A tuple with:
            - the matched text (empty if there are no matches)
            - a list of potential completions (an empty tuple otherwise)
        """
        # TODO: self.unicode_names is a list with ~100k elements that we traverse on each call.
        # We could do a faster match using a Trie.

        # Using pygtrie, the following seems to work:

        # s = PrefixSet()

        # for c in range(0,0x10FFFF + 1):
        #     try:
        #         s.add(unicodedata.name(chr(c)))
        #     except ValueError:
        #         pass
        # [''.join(k) for k in s.iter(prefix)]

        # But this needs to be timed and adds an extra dependency.

        slashpos = text.rfind('\\')
        # if the text contains a backslash
        if slashpos > -1:
            # PERF: It's important that we don't access self._unicode_names
            # until we're inside this if-block. _unicode_names is lazily
            # initialized, and it takes a user-noticeable amount of time to
            # initialize it, so we don't want to initialize it unless we're
            # actually going to use it.
            s = text[slashpos + 1 :]
            sup = s.upper()
            candidates = [x for x in self.unicode_names if x.startswith(sup)]
            if candidates:
                return s, candidates
            candidates = [x for x in self.unicode_names if sup in x]
            if candidates:
                return s, candidates
            splitsup = sup.split(" ")
            candidates = [
                x for x in self.unicode_names if all(u in x for u in splitsup)
            ]
            if candidates:
                return s, candidates

            return "", ()

        # no backslash in the text
        else:
            return '', ()
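
    # A hedged usage sketch for ``fwd_unicode_match`` (the input below is an
    # assumed example, and ``completer`` stands for an ``IPCompleter`` instance)::
    #
    #     >>> completer.fwd_unicode_match("\\GREEK SMALL LETTER ALP")
    #     ('GREEK SMALL LETTER ALP', ['GREEK SMALL LETTER ALPHA', ...])
    #
    # The returned fragment is the text after the last backslash, and the
    # candidates are matching Unicode character names (restricted to the
    # ranges considered by ``unicode_names``).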

    @property
    def unicode_names(self) -> List[str]:
        """List of names of unicode code points that can be completed.

        The list is lazily initialized on first access.
        """
        if self._unicode_names is None:
            self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)

        return self._unicode_names

def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:
                pass
    return names
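
# A hedged usage sketch for ``_unicode_name_compute`` (the range below is an
# arbitrary example covering the emoticons block, not the module's
# ``_UNICODE_RANGES``)::
#
#     >>> names = _unicode_name_compute([(0x1F600, 0x1F650)])
#     >>> "GRINNING FACE" in names
#     True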