Relax constraint on limit to allow no limit
krassowski -
@@ -1,2956 +1,2957 b''
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now support a wide variety of completion mechanism both available
7 This module now support a wide variety of completion mechanism both available
8 for normal classic Python code, as well as completer for IPython specific
8 for normal classic Python code, as well as completer for IPython specific
9 Syntax like magics.
9 Syntax like magics.

Latex and Unicode completion
============================

IPython and compatible frontends not only can complete your code, but can help
you to input a wide range of characters. In particular we allow you to insert
a unicode character using the tab completion mechanism.

Forward latex/unicode completion
--------------------------------

Forward completion allows you to easily type a unicode character using its latex
name, or unicode long description. To do so type a backslash followed by the
relevant name and press tab:


Using latex completion:

.. code::

    \\alpha<tab>
    α

or using unicode completion:


.. code::

    \\GREEK SMALL LETTER ALPHA<tab>
    α


Only valid Python identifiers will complete. Combining characters (like arrow or
dots) are also available; unlike latex they need to be put after their
counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.

Some browsers are known to display combining characters incorrectly.

Backward latex completion
-------------------------

It is sometimes challenging to know how to type a character; if you are using
IPython, or any compatible frontend, you can prepend a backslash to the character
and press ``<tab>`` to expand it to its latex form.

.. code::

    \\α<tab>
    \\alpha


Both forward and backward completions can be deactivated by setting the
``Completer.backslash_combining_completions`` option to ``False``.
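
For instance, to disable them from a configuration file (a sketch; the standard
``c`` configuration object of ``ipython_config.py`` is assumed):

.. code-block:: python

    c.Completer.backslash_combining_completions = False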


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions both using static analysis of the code, and dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and will
raise unless used in a :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if the current
IPython completer pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is a real number without
executing any code, unlike the previously available ``IPCompleter.greedy``
option.

Be sure to update :any:`jedi` to the latest stable version or to try the
current development version to get better completions.

Matchers
========

All completion routines are implemented using the unified *Matchers* API.
The matchers API is provisional and subject to change without notice.

The built-in matchers include:

- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
- :any:`IPCompleter.magic_matcher`: completions for magics,
- :any:`IPCompleter.unicode_name_matcher`,
  :any:`IPCompleter.fwd_unicode_matcher`
  and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
- :any:`IPCompleter.file_matcher`: paths to files and directories,
- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
  implementation in :any:`InteractiveShell` which uses the IPython hooks system
  (`complete_command`) with string dispatch (including regular expressions).
  Unlike other matchers, ``custom_completer_matcher`` will not suppress
  Jedi results, to match behaviour in earlier IPython versions.

Custom matchers can be added by appending to the ``IPCompleter.custom_matchers``
list.
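
For instance (a minimal sketch, assuming an interactive session where
``get_ipython()`` is available; the matcher itself is a made-up example):

.. code-block:: python

    def my_matcher(text):
        # a trivial API v1 matcher: return a fixed completion
        return ["my_completion"]

    ip = get_ipython()
    ip.Completer.custom_matchers.append(my_matcher)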

Matcher API
-----------

Simplifying some details, the ``Matcher`` interface can be described as

.. code-block::

    MatcherAPIv1 = Callable[[str], list[str]]
    MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]

    Matcher = MatcherAPIv1 | MatcherAPIv2

The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
and remains supported as the simplest way of generating completions. This is also
currently the only API supported by the IPython hooks system `complete_command`.

To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
and requires a literal ``2`` for v2 Matchers.

Once the API stabilises, future versions may relax the requirement for specifying
``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore
please do not rely on the presence of ``matcher_api_version`` for any purposes.
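
As an illustration, a v1 matcher and a v2 matcher could look as follows (a
minimal sketch; the names ``shout_matcher`` and ``unit_matcher`` and the
completion values are made up for demonstration):

.. code-block:: python

    from IPython.core.completer import (
        CompletionContext,
        SimpleCompletion,
        SimpleMatcherResult,
        completion_matcher,
        context_matcher,
    )

    @completion_matcher(identifier="shout_matcher")
    def shout_matcher(text: str):
        # API v1: receives the token before the cursor, returns a list of strings.
        return [text + "!"] if text else []

    @context_matcher(identifier="unit_matcher")
    def unit_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # API v2: receives a CompletionContext, returns a MatcherResult dictionary.
        units = ["meters", "seconds", "grams"]
        completions = [
            SimpleCompletion(text=u, type="unit")
            for u in units
            if u.startswith(context.token)
        ]
        return {"completions": completions}

Either matcher can then be registered by appending it to
``IPCompleter.custom_matchers`` as shown earlier.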

Suppression of competing matchers
---------------------------------

By default results from all matchers are combined, in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results from
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a list of identifiers of matchers which
should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.

The suppression behaviour is user-configurable via
:any:`IPCompleter.suppress_competing_matchers`.
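
For example, a v2 matcher can return a result which hides all other matchers
except the file matcher (a sketch reusing the imports from the example above;
the matcher name, completion value and exempted identifier are illustrative):

.. code-block:: python

    @context_matcher(identifier="exclusive_matcher")
    def exclusive_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # Suppress all other matchers, but let the file matcher through.
        return {
            "completions": [SimpleCompletion("secret_option", type="keyword")],
            "suppress": True,
            "do_not_suppress": {"IPCompleter.file_matcher"},
        }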
170 """
170 """
171
171
172
172
173 # Copyright (c) IPython Development Team.
173 # Copyright (c) IPython Development Team.
174 # Distributed under the terms of the Modified BSD License.
174 # Distributed under the terms of the Modified BSD License.
175 #
175 #
176 # Some of this code originated from rlcompleter in the Python standard library
176 # Some of this code originated from rlcompleter in the Python standard library
177 # Copyright (C) 2001 Python Software Foundation, www.python.org
177 # Copyright (C) 2001 Python Software Foundation, www.python.org
178
178
179 from __future__ import annotations
179 from __future__ import annotations
180 import builtins as builtin_mod
180 import builtins as builtin_mod
181 import glob
181 import glob
182 import inspect
182 import inspect
183 import itertools
183 import itertools
184 import keyword
184 import keyword
185 import os
185 import os
186 import re
186 import re
187 import string
187 import string
188 import sys
188 import sys
189 import time
189 import time
190 import unicodedata
190 import unicodedata
191 import uuid
191 import uuid
192 import warnings
192 import warnings
193 from contextlib import contextmanager
193 from contextlib import contextmanager
194 from functools import lru_cache, partial
194 from functools import lru_cache, partial
195 from importlib import import_module
195 from importlib import import_module
196 from types import SimpleNamespace
196 from types import SimpleNamespace
197 from typing import (
197 from typing import (
198 Iterable,
198 Iterable,
199 Iterator,
199 Iterator,
200 List,
200 List,
201 Tuple,
201 Tuple,
202 Union,
202 Union,
203 Any,
203 Any,
204 Sequence,
204 Sequence,
205 Dict,
205 Dict,
206 NamedTuple,
206 NamedTuple,
207 Pattern,
207 Pattern,
208 Optional,
208 Optional,
209 TYPE_CHECKING,
209 TYPE_CHECKING,
210 Set,
210 Set,
211 Literal,
211 Literal,
212 )
212 )
213
213
214 from IPython.core.error import TryNext
214 from IPython.core.error import TryNext
215 from IPython.core.inputtransformer2 import ESC_MAGIC
215 from IPython.core.inputtransformer2 import ESC_MAGIC
216 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
216 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
217 from IPython.core.oinspect import InspectColors
217 from IPython.core.oinspect import InspectColors
218 from IPython.testing.skipdoctest import skip_doctest
218 from IPython.testing.skipdoctest import skip_doctest
219 from IPython.utils import generics
219 from IPython.utils import generics
220 from IPython.utils.decorators import sphinx_options
220 from IPython.utils.decorators import sphinx_options
221 from IPython.utils.dir2 import dir2, get_real_method
221 from IPython.utils.dir2 import dir2, get_real_method
222 from IPython.utils.docs import GENERATING_DOCUMENTATION
222 from IPython.utils.docs import GENERATING_DOCUMENTATION
223 from IPython.utils.path import ensure_dir_exists
223 from IPython.utils.path import ensure_dir_exists
224 from IPython.utils.process import arg_split
224 from IPython.utils.process import arg_split
225 from traitlets import (
225 from traitlets import (
226 Bool,
226 Bool,
227 Enum,
227 Enum,
228 Int,
228 Int,
229 List as ListTrait,
229 List as ListTrait,
230 Unicode,
230 Unicode,
231 Dict as DictTrait,
231 Dict as DictTrait,
232 Union as UnionTrait,
232 Union as UnionTrait,
233 default,
233 default,
234 observe,
234 observe,
235 )
235 )
236 from traitlets.config.configurable import Configurable
236 from traitlets.config.configurable import Configurable
237
237
238 import __main__
238 import __main__
239
239
240 # skip module docstests
240 # skip module docstests
241 __skip_doctest__ = True
241 __skip_doctest__ = True
242
242
243
243
244 try:
244 try:
245 import jedi
245 import jedi
246 jedi.settings.case_insensitive_completion = False
246 jedi.settings.case_insensitive_completion = False
247 import jedi.api.helpers
247 import jedi.api.helpers
248 import jedi.api.classes
248 import jedi.api.classes
249 JEDI_INSTALLED = True
249 JEDI_INSTALLED = True
250 except ImportError:
250 except ImportError:
251 JEDI_INSTALLED = False
251 JEDI_INSTALLED = False
252
252
253
253


if TYPE_CHECKING or GENERATING_DOCUMENTATION:
    from typing import cast
    from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias
else:

    def cast(obj, type_):
        """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
        return obj

    # not required at runtime
    NotRequired = Tuple  # requires Python >=3.11
    TypedDict = Dict  # by extension of `NotRequired` requires 3.11 too
    Protocol = object  # requires Python >=3.8
    TypeAlias = Any  # requires Python >=3.10
if GENERATING_DOCUMENTATION:
    from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
#-----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be more
# fine-grained but is it worth it for performance? While unicode has characters
# in the range 0, 0x110000, we seem to have names for about 10% of those
# (131808 as I write this). With the below ranges we cover them all, with a
# density of ~67%; the biggest next gap we consider only adds up to about 1%
# density and there are 600 gaps that would need hard coding.
_UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in a :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things()  # works

    >>> completer.do_experimental_things()  # raises.

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should report
        a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements on
        any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
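
    Examples
    --------
    For instance (the open quote character, or ``False``, is returned):

    >>> has_open_quotes('hello "world')
    '"'
    >>> has_open_quotes("it's fine")
    "'"
    >>> has_open_quotes('no "open" quotes')
    False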
350 """
350 """
351 # We check " first, then ', so complex cases with nested quotes will get
351 # We check " first, then ', so complex cases with nested quotes will get
352 # the " to take precedence.
352 # the " to take precedence.
353 if s.count('"') % 2:
353 if s.count('"') % 2:
354 return '"'
354 return '"'
355 elif s.count("'") % 2:
355 elif s.count("'") % 2:
356 return "'"
356 return "'"
357 else:
357 else:
358 return False
358 return False
359
359
360
360
361 def protect_filename(s, protectables=PROTECTABLES):
361 def protect_filename(s, protectables=PROTECTABLES):
362 """Escape a string to protect certain characters."""
362 """Escape a string to protect certain characters."""
363 if set(s) & set(protectables):
363 if set(s) & set(protectables):
364 if sys.platform == "win32":
364 if sys.platform == "win32":
365 return '"' + s + '"'
365 return '"' + s + '"'
366 else:
366 else:
367 return "".join(("\\" + c if c in protectables else c) for c in s)
367 return "".join(("\\" + c if c in protectables else c) for c in s)
368 else:
368 else:
369 return s
369 return s
370
370
371
371
372 def expand_user(path:str) -> Tuple[str, bool, str]:
372 def expand_user(path:str) -> Tuple[str, bool, str]:
373 """Expand ``~``-style usernames in strings.
373 """Expand ``~``-style usernames in strings.
374
374
375 This is similar to :func:`os.path.expanduser`, but it computes and returns
375 This is similar to :func:`os.path.expanduser`, but it computes and returns
376 extra information that will be useful if the input was being used in
376 extra information that will be useful if the input was being used in
377 computing completions, and you wish to return the completions with the
377 computing completions, and you wish to return the completions with the
378 original '~' instead of its expanded value.
378 original '~' instead of its expanded value.
379
379
380 Parameters
380 Parameters
381 ----------
381 ----------
382 path : str
382 path : str
383 String to be expanded. If no ~ is present, the output is the same as the
383 String to be expanded. If no ~ is present, the output is the same as the
384 input.
384 input.
385
385
386 Returns
386 Returns
387 -------
387 -------
388 newpath : str
388 newpath : str
389 Result of ~ expansion in the input path.
389 Result of ~ expansion in the input path.
390 tilde_expand : bool
390 tilde_expand : bool
391 Whether any expansion was performed or not.
391 Whether any expansion was performed or not.
392 tilde_val : str
392 tilde_val : str
393 The value that ~ was replaced with.
393 The value that ~ was replaced with.
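
    Examples
    --------
    On a system where the current user's home directory is ``/home/jo`` (an
    illustrative value; the result depends on the environment)::

        expand_user('~/rc')      # -> ('/home/jo/rc', True, '/home/jo')
        expand_user('no_tilde')  # -> ('no_tilde', False, '')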
394 """
394 """
395 # Default values
395 # Default values
396 tilde_expand = False
396 tilde_expand = False
397 tilde_val = ''
397 tilde_val = ''
398 newpath = path
398 newpath = path
399
399
400 if path.startswith('~'):
400 if path.startswith('~'):
401 tilde_expand = True
401 tilde_expand = True
402 rest = len(path)-1
402 rest = len(path)-1
403 newpath = os.path.expanduser(path)
403 newpath = os.path.expanduser(path)
404 if rest:
404 if rest:
405 tilde_val = newpath[:-rest]
405 tilde_val = newpath[:-rest]
406 else:
406 else:
407 tilde_val = newpath
407 tilde_val = newpath
408
408
409 return newpath, tilde_expand, tilde_val
409 return newpath, tilde_expand, tilde_val
410
410
411
411
412 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
412 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
413 """Does the opposite of expand_user, with its outputs.
413 """Does the opposite of expand_user, with its outputs.
414 """
414 """
415 if tilde_expand:
415 if tilde_expand:
416 return path.replace(tilde_val, '~')
416 return path.replace(tilde_val, '~')
417 else:
417 else:
418 return path
418 return path
419
419
420
420
421 def completions_sorting_key(word):
421 def completions_sorting_key(word):
422 """key for sorting completions
422 """key for sorting completions
423
423
424 This does several things:
424 This does several things:
425
425
426 - Demote any completions starting with underscores to the end
426 - Demote any completions starting with underscores to the end
427 - Insert any %magic and %%cellmagic completions in the alphabetical order
427 - Insert any %magic and %%cellmagic completions in the alphabetical order
428 by their name
428 by their name
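
    Examples
    --------
    For instance, magics sort by their bare name while names starting with an
    underscore are pushed to the end:

    >>> sorted(["_private", "%%timeit", "alpha"], key=completions_sorting_key)
    ['alpha', '%%timeit', '_private']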
429 """
429 """
430 prio1, prio2 = 0, 0
430 prio1, prio2 = 0, 0
431
431
432 if word.startswith('__'):
432 if word.startswith('__'):
433 prio1 = 2
433 prio1 = 2
434 elif word.startswith('_'):
434 elif word.startswith('_'):
435 prio1 = 1
435 prio1 = 1
436
436
437 if word.endswith('='):
437 if word.endswith('='):
438 prio1 = -1
438 prio1 = -1
439
439
440 if word.startswith('%%'):
440 if word.startswith('%%'):
441 # If there's another % in there, this is something else, so leave it alone
441 # If there's another % in there, this is something else, so leave it alone
442 if not "%" in word[2:]:
442 if not "%" in word[2:]:
443 word = word[2:]
443 word = word[2:]
444 prio2 = 2
444 prio2 = 2
445 elif word.startswith('%'):
445 elif word.startswith('%'):
446 if not "%" in word[1:]:
446 if not "%" in word[1:]:
447 word = word[1:]
447 word = word[1:]
448 prio2 = 1
448 prio2 = 1
449
449
450 return prio1, word, prio2
450 return prio1, word, prio2
451
451
452
452
453 class _FakeJediCompletion:
453 class _FakeJediCompletion:
454 """
454 """
455 This is a workaround to communicate to the UI that Jedi has crashed and to
455 This is a workaround to communicate to the UI that Jedi has crashed and to
456 report a bug. Will be used only id :any:`IPCompleter.debug` is set to true.
456 report a bug. Will be used only id :any:`IPCompleter.debug` is set to true.
457
457
458 Added in IPython 6.0 so should likely be removed for 7.0
458 Added in IPython 6.0 so should likely be removed for 7.0
459
459
460 """
460 """
461
461
462 def __init__(self, name):
462 def __init__(self, name):
463
463
464 self.name = name
464 self.name = name
465 self.complete = name
465 self.complete = name
466 self.type = 'crashed'
466 self.type = 'crashed'
467 self.name_with_symbols = name
467 self.name_with_symbols = name
468 self.signature = ''
468 self.signature = ''
469 self._origin = 'fake'
469 self._origin = 'fake'
470
470
471 def __repr__(self):
471 def __repr__(self):
472 return '<Fake completion object jedi has crashed>'
472 return '<Fake completion object jedi has crashed>'
473
473
474
474
475 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
475 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
476
476


class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in the proper context manager.

    This acts as a middle ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how the
    code should be run/inspected, PromptToolkit (and other frontends) mostly
    need user-facing information.

    - Which range should be replaced by what.
    - Some metadata (like completion type), or meta information to be displayed
      to the user.

    For debugging purposes we can also store the origin of the completion (``jedi``,
    ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(self, start: int, end: int, text: str, *, type: str = None, _origin='', signature='') -> None:
        warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
                      "It may change without warnings. "
                      "Use in corresponding context manager.",
                      category=ProvisionalCompleterWarning, stacklevel=2)

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
            (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other) -> Bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer the type), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing, as it depends on surrounding text, which Completions are not
        aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))


class SimpleCompletion:
    """Completion item to be included in the dictionary returned by new-style Matcher (API v2).

    .. warning::

        Provisional

        This class is used to describe the currently supported attributes of
        simple completion items, and any additional implementation details
        should not be relied on. Additional attributes may be included in
        future versions, and the meaning of ``text`` disambiguated from its
        current dual meaning of "text to insert" and "text to be used as a label".
    """

    __slots__ = ["text", "type"]

    def __init__(self, text: str, *, type: str = None):
        self.text = text
        self.type = type

    def __repr__(self):
        return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"


class _MatcherResultBase(TypedDict):
    """Definition of dictionary to be returned by new-style Matcher (API v2)."""

    #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
    matched_fragment: NotRequired[str]

    #: Whether to suppress results from all other matchers (True), some
    #: matchers (set of identifiers) or none (False); default is False.
    suppress: NotRequired[Union[bool, Set[str]]]

    #: Identifiers of matchers which should NOT be suppressed when this matcher
    #: requests to suppress all other matchers; defaults to an empty set.
    do_not_suppress: NotRequired[Set[str]]

    #: Are completions already ordered and should be left as-is? default is False.
    ordered: NotRequired[bool]


@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
class SimpleMatcherResult(_MatcherResultBase, TypedDict):
    """Result of new-style completion matcher."""

    # note: TypedDict is added again to the inheritance chain
    # in order to get __orig_bases__ for documentation

    #: List of candidate completions
    completions: Sequence[SimpleCompletion]


class _JediMatcherResult(_MatcherResultBase):
    """Matching result returned by Jedi (will be processed differently)"""

    #: list of candidate completions
    completions: Iterable[_JediCompletionLike]


class CompletionContext(NamedTuple):
    """Completion context provided as an argument to matchers in the Matcher API v2."""

    # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
    # which was not explicitly visible as an argument of the matcher, making any refactor
    # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
    # from the completer, and make substituting them in sub-classes easier.

    #: Relevant fragment of code directly preceding the cursor.
    #: The extraction of token is implemented via splitter heuristic
    #: (following readline behaviour for legacy reasons), which is user configurable
    #: (by switching the greedy mode).
    token: str

    #: The full available content of the editor or buffer
    full_text: str

    #: Cursor position in the line (the same for ``full_text`` and ``text``).
    cursor_position: int

    #: Cursor line in ``full_text``.
    cursor_line: int

    #: The maximum number of completions that will be used downstream.
    #: Matchers can use this information to abort early.
    #: The built-in Jedi matcher is currently excepted from this limit.
-   limit: int
+   # If not given, return all possible completions.
+   limit: Optional[int]

    @property
    @lru_cache(maxsize=None)  # TODO change to @cache after dropping Python 3.7
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

    @property
    @lru_cache(maxsize=None)  # TODO change to @cache after dropping Python 3.7
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]
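
# For illustration only: a v2 matcher can be exercised directly by building a
# context by hand. With the relaxed annotation above, ``limit=None`` means
# "return all possible completions". The values below and the ``unit_matcher``
# sketch from the module docstring are made up for demonstration:
#
#     context = CompletionContext(
#         token="os.pa",
#         full_text="import os\nos.pa",
#         cursor_position=5,
#         cursor_line=1,
#         limit=None,
#     )
#     result = unit_matcher(context)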


#: Matcher results for API v2.
MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]


class _MatcherAPIv1Base(Protocol):
    def __call__(self, text: str) -> list[str]:
        """Call signature."""


class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
    #: API version
    matcher_api_version: Optional[Literal[1]]

    def __call__(self, text: str) -> list[str]:
        """Call signature."""


#: Protocol describing Matcher API v1.
MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]


class MatcherAPIv2(Protocol):
    """Protocol describing Matcher API v2."""

    #: API version
    matcher_api_version: Literal[2] = 2

    def __call__(self, context: CompletionContext) -> MatcherResult:
        """Call signature."""


Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]


def completion_matcher(
    *, priority: float = None, identifier: str = None, api_version: int = 1
):
    """Adds attributes describing the matcher.

    Parameters
    ----------
    priority : Optional[float]
        The priority of the matcher, determines the order of execution of matchers.
        Higher priority means that the matcher will be executed first. Defaults to 0.
    identifier : Optional[str]
        identifier of the matcher allowing users to modify the behaviour via traitlets,
        and also used for debugging (will be passed as ``origin`` with the completions).
        Defaults to matcher function ``__qualname__``.
    api_version: Optional[int]
        version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.
    """

    def wrapper(func: Matcher):
        func.matcher_priority = priority or 0
        func.matcher_identifier = identifier or func.__qualname__
        func.matcher_api_version = api_version
        if TYPE_CHECKING:
            if api_version == 1:
                func = cast(func, MatcherAPIv1)
            elif api_version == 2:
                func = cast(func, MatcherAPIv2)
        return func

    return wrapper


def _get_matcher_priority(matcher: Matcher):
    return getattr(matcher, "matcher_priority", 0)


def _get_matcher_id(matcher: Matcher):
    return getattr(matcher, "matcher_identifier", matcher.__qualname__)


def _get_matcher_api_version(matcher):
    return getattr(matcher, "matcher_api_version", 1)


context_matcher = partial(completion_matcher, api_version=2)


_IC = Iterable[Completion]


def _deduplicate_completions(text: str, completions: _IC) -> _IC:
    """
    Deduplicate a set of completions.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to deduplicate

    Yields
    ------
    `Completions` objects
        Completions coming from multiple sources may be different but end up having
        the same effect when applied to ``text``. If this is the case, this will
        consider completions as equal and only emit the first encountered.
        Not folded in `completions()` yet for debugging purposes, and to detect when
        the IPython completer does return things that Jedi does not, but should be
        at some point.
    """
    completions = list(completions)
    if not completions:
        return

    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)

    seen = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            yield c
            seen.add(new_text)


def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
    """
    Rectify a set of completions to all have the same ``start`` and ``end``.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in the proper context manager.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to rectify
    _debug : bool
        Log failed completion

    Notes
    -----
    :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
    the Jupyter Protocol requires them to behave like so. This will readjust
    the completions to have the same ``start`` and ``end`` by padding both
    extremities with surrounding text.

    During stabilisation this should support a ``_debug`` option to log which
    completions are returned by the IPython completer and not found in Jedi, in
    order to make upstream bug reports.
    """
    warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
                  "It may change without warnings. "
                  "Use in corresponding context manager.",
                  category=ProvisionalCompleterWarning, stacklevel=2)

    completions = list(completions)
    if not completions:
        return
    starts = (c.start for c in completions)
    ends = (c.end for c in completions)

    new_start = min(starts)
    new_end = max(ends)

    seen_jedi = set()
    seen_python_matches = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if c._origin == 'jedi':
            seen_jedi.add(new_text)
        elif c._origin == 'IPCompleter.python_matches':
            seen_python_matches.add(new_text)
        yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
    diff = seen_python_matches.difference(seen_jedi)
    if diff and _debug:
        print('IPython.python matches have extras:', diff)
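
# Example usage (illustrative): ``IPCompleter.completions`` yields provisional
# ``Completion`` objects, so the provisional context manager is required; ``ip``
# stands for the running InteractiveShell, e.g. ``ip = get_ipython()``.
#
#     with provisionalcompleter():
#         raw = ip.Completer.completions("myvar[1].bi", 11)
#         completions = list(rectify_completions("myvar[1].bi", raw))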


if sys.platform == 'win32':
    DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
else:
    DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'

GREEDY_DELIMS = ' =\r\n'


class CompletionSplitter(object):
    """An object to split an input line in a manner similar to readline.

    By having our own implementation, we can expose readline-like completion in
    a uniform manner to all frontends. This object only needs to be given the
    line of text to be split and the cursor position on said line, and it
    returns the 'word' to be completed on at the cursor after splitting the
    entire line.

    What characters are used as splitting delimiters can be controlled by
    setting the ``delims`` attribute (this is a property that internally
    automatically builds the necessary regular expression)"""

    # Private interface

    # A string of delimiter characters. The default value makes sense for
    # IPython's most typical usage patterns.
    _delims = DELIMS

    # The expression (a normal string) to be compiled into a regular expression
    # for actual splitting. We store it as an attribute mostly for ease of
    # debugging, since this type of code can be so tricky to debug.
    _delim_expr = None

    # The regular expression that does the actual splitting
    _delim_re = None

    def __init__(self, delims=None):
        delims = CompletionSplitter._delims if delims is None else delims
        self.delims = delims

    @property
    def delims(self):
        """Return the string of delimiter characters."""
        return self._delims

    @delims.setter
    def delims(self, delims):
        """Set the delimiters for line splitting."""
        expr = '[' + ''.join('\\' + c for c in delims) + ']'
        self._delim_re = re.compile(expr)
        self._delims = delims
        self._delim_expr = expr

    def split_line(self, line, cursor_pos=None):
        """Split a line of text with a cursor at the given position.
880 """
881 """
881 l = line if cursor_pos is None else line[:cursor_pos]
882 l = line if cursor_pos is None else line[:cursor_pos]
882 return self._delim_re.split(l)[-1]
883 return self._delim_re.split(l)[-1]
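A minimal usage sketch of the splitter (illustrative only; it assumes the default delimiters defined above, where ``(`` and space are delimiters but ``.`` is not):

.. code::

    splitter = CompletionSplitter()
    # only the trailing attribute expression is returned as the word to complete
    splitter.split_line("print(np.arr")          # -> 'np.arr'
    # with an explicit cursor position, only the text before it is considered
    splitter.split_line("a = foo.bar baz", 11)   # -> 'foo.bar'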
883
884
884
885
885
886
886 class Completer(Configurable):
887 class Completer(Configurable):
887
888
888 greedy = Bool(False,
889 greedy = Bool(False,
889 help="""Activate greedy completion
890 help="""Activate greedy completion
890 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
891 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
891
892
892 This will enable completion on elements of lists, results of function calls, etc.,
893 This will enable completion on elements of lists, results of function calls, etc.,
893 but can be unsafe because the code is actually evaluated on TAB.
894 but can be unsafe because the code is actually evaluated on TAB.
894 """,
895 """,
895 ).tag(config=True)
896 ).tag(config=True)
896
897
897 use_jedi = Bool(default_value=JEDI_INSTALLED,
898 use_jedi = Bool(default_value=JEDI_INSTALLED,
898 help="Experimental: Use Jedi to generate autocompletions. "
899 help="Experimental: Use Jedi to generate autocompletions. "
899 "Default to True if jedi is installed.").tag(config=True)
900 "Default to True if jedi is installed.").tag(config=True)
900
901
901 jedi_compute_type_timeout = Int(default_value=400,
902 jedi_compute_type_timeout = Int(default_value=400,
902 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
903 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
903 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
904 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
904 performance by preventing Jedi from building its cache.
905 performance by preventing Jedi from building its cache.
905 """).tag(config=True)
906 """).tag(config=True)
906
907
907 debug = Bool(default_value=False,
908 debug = Bool(default_value=False,
908 help='Enable debug for the Completer. Mostly print extra '
909 help='Enable debug for the Completer. Mostly print extra '
909 'information for experimental jedi integration.')\
910 'information for experimental jedi integration.')\
910 .tag(config=True)
911 .tag(config=True)
911
912
912 backslash_combining_completions = Bool(True,
913 backslash_combining_completions = Bool(True,
913 help="Enable unicode completions, e.g. \\alpha<tab> . "
914 help="Enable unicode completions, e.g. \\alpha<tab> . "
914 "Includes completion of latex commands, unicode names, and expanding "
915 "Includes completion of latex commands, unicode names, and expanding "
915 "unicode characters back to latex commands.").tag(config=True)
916 "unicode characters back to latex commands.").tag(config=True)
916
917
917 def __init__(self, namespace=None, global_namespace=None, **kwargs):
918 def __init__(self, namespace=None, global_namespace=None, **kwargs):
918 """Create a new completer for the command line.
919 """Create a new completer for the command line.
919
920
920 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
921 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
921
922
922 If unspecified, the default namespace where completions are performed
923 If unspecified, the default namespace where completions are performed
923 is __main__ (technically, __main__.__dict__). Namespaces should be
924 is __main__ (technically, __main__.__dict__). Namespaces should be
924 given as dictionaries.
925 given as dictionaries.
925
926
926 An optional second namespace can be given. This allows the completer
927 An optional second namespace can be given. This allows the completer
927 to handle cases where both the local and global scopes need to be
928 to handle cases where both the local and global scopes need to be
928 distinguished.
929 distinguished.
929 """
930 """
930
931
931 # Don't bind to namespace quite yet, but flag whether the user wants a
932 # Don't bind to namespace quite yet, but flag whether the user wants a
932 # specific namespace or to use __main__.__dict__. This will allow us
933 # specific namespace or to use __main__.__dict__. This will allow us
933 # to bind to __main__.__dict__ at completion time, not now.
934 # to bind to __main__.__dict__ at completion time, not now.
934 if namespace is None:
935 if namespace is None:
935 self.use_main_ns = True
936 self.use_main_ns = True
936 else:
937 else:
937 self.use_main_ns = False
938 self.use_main_ns = False
938 self.namespace = namespace
939 self.namespace = namespace
939
940
940 # The global namespace, if given, can be bound directly
941 # The global namespace, if given, can be bound directly
941 if global_namespace is None:
942 if global_namespace is None:
942 self.global_namespace = {}
943 self.global_namespace = {}
943 else:
944 else:
944 self.global_namespace = global_namespace
945 self.global_namespace = global_namespace
945
946
946 self.custom_matchers = []
947 self.custom_matchers = []
947
948
948 super(Completer, self).__init__(**kwargs)
949 super(Completer, self).__init__(**kwargs)
949
950
950 def complete(self, text, state):
951 def complete(self, text, state):
951 """Return the next possible completion for 'text'.
952 """Return the next possible completion for 'text'.
952
953
953 This is called successively with state == 0, 1, 2, ... until it
954 This is called successively with state == 0, 1, 2, ... until it
954 returns None. The completion should begin with 'text'.
955 returns None. The completion should begin with 'text'.
955
956
956 """
957 """
957 if self.use_main_ns:
958 if self.use_main_ns:
958 self.namespace = __main__.__dict__
959 self.namespace = __main__.__dict__
959
960
960 if state == 0:
961 if state == 0:
961 if "." in text:
962 if "." in text:
962 self.matches = self.attr_matches(text)
963 self.matches = self.attr_matches(text)
963 else:
964 else:
964 self.matches = self.global_matches(text)
965 self.matches = self.global_matches(text)
965 try:
966 try:
966 return self.matches[state]
967 return self.matches[state]
967 except IndexError:
968 except IndexError:
968 return None
969 return None
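A hedged sketch of the readline-style protocol described above; the namespace contents here are made up for illustration, and the exact ordering of matches depends on what else is defined:

.. code::

    c = Completer(namespace={'alpha': 1, 'alphabet': 2})
    c.complete('alp', 0)   # -> 'alpha'
    c.complete('alp', 1)   # -> 'alphabet'
    c.complete('alp', 2)   # -> None (no more matches)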
969
970
970 def global_matches(self, text):
971 def global_matches(self, text):
971 """Compute matches when text is a simple name.
972 """Compute matches when text is a simple name.
972
973
973 Return a list of all keywords, built-in functions and names currently
974 Return a list of all keywords, built-in functions and names currently
974 defined in self.namespace or self.global_namespace that match.
975 defined in self.namespace or self.global_namespace that match.
975
976
976 """
977 """
977 matches = []
978 matches = []
978 match_append = matches.append
979 match_append = matches.append
979 n = len(text)
980 n = len(text)
980 for lst in [
981 for lst in [
981 keyword.kwlist,
982 keyword.kwlist,
982 builtin_mod.__dict__.keys(),
983 builtin_mod.__dict__.keys(),
983 list(self.namespace.keys()),
984 list(self.namespace.keys()),
984 list(self.global_namespace.keys()),
985 list(self.global_namespace.keys()),
985 ]:
986 ]:
986 for word in lst:
987 for word in lst:
987 if word[:n] == text and word != "__builtins__":
988 if word[:n] == text and word != "__builtins__":
988 match_append(word)
989 match_append(word)
989
990
990 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
991 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
991 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
992 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
992 shortened = {
993 shortened = {
993 "_".join([sub[0] for sub in word.split("_")]): word
994 "_".join([sub[0] for sub in word.split("_")]): word
994 for word in lst
995 for word in lst
995 if snake_case_re.match(word)
996 if snake_case_re.match(word)
996 }
997 }
997 for word in shortened.keys():
998 for word in shortened.keys():
998 if word[:n] == text and word != "__builtins__":
999 if word[:n] == text and word != "__builtins__":
999 match_append(shortened[word])
1000 match_append(shortened[word])
1000 return matches
1001 return matches
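The second loop above also matches abbreviated snake_case names (the first letter of each underscore-separated segment). A small sketch with a made-up namespace:

.. code::

    c = Completer(namespace={'numpy_array_function': 1, 'number': 2})
    c.global_matches('nu')      # -> ['numpy_array_function', 'number']
    c.global_matches('n_a_f')   # -> ['numpy_array_function']  (abbreviated snake_case match)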
1001
1002
1002 def attr_matches(self, text):
1003 def attr_matches(self, text):
1003 """Compute matches when text contains a dot.
1004 """Compute matches when text contains a dot.
1004
1005
1005 Assuming the text is of the form NAME.NAME....[NAME], and is
1006 Assuming the text is of the form NAME.NAME....[NAME], and is
1006 evaluatable in self.namespace or self.global_namespace, it will be
1007 evaluatable in self.namespace or self.global_namespace, it will be
1007 evaluated and its attributes (as revealed by dir()) are used as
1008 evaluated and its attributes (as revealed by dir()) are used as
1008 possible completions. (For class instances, class members are
1009 possible completions. (For class instances, class members are
1009 also considered.)
1010 also considered.)
1010
1011
1011 WARNING: this can still invoke arbitrary C code, if an object
1012 WARNING: this can still invoke arbitrary C code, if an object
1012 with a __getattr__ hook is evaluated.
1013 with a __getattr__ hook is evaluated.
1013
1014
1014 """
1015 """
1015
1016
1016 # Another option, seems to work great. Catches things like ''.<tab>
1017 # Another option, seems to work great. Catches things like ''.<tab>
1017 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
1018 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
1018
1019
1019 if m:
1020 if m:
1020 expr, attr = m.group(1, 3)
1021 expr, attr = m.group(1, 3)
1021 elif self.greedy:
1022 elif self.greedy:
1022 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1023 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1023 if not m2:
1024 if not m2:
1024 return []
1025 return []
1025 expr, attr = m2.group(1,2)
1026 expr, attr = m2.group(1,2)
1026 else:
1027 else:
1027 return []
1028 return []
1028
1029
1029 try:
1030 try:
1030 obj = eval(expr, self.namespace)
1031 obj = eval(expr, self.namespace)
1031 except:
1032 except:
1032 try:
1033 try:
1033 obj = eval(expr, self.global_namespace)
1034 obj = eval(expr, self.global_namespace)
1034 except:
1035 except:
1035 return []
1036 return []
1036
1037
1037 if self.limit_to__all__ and hasattr(obj, '__all__'):
1038 if self.limit_to__all__ and hasattr(obj, '__all__'):
1038 words = get__all__entries(obj)
1039 words = get__all__entries(obj)
1039 else:
1040 else:
1040 words = dir2(obj)
1041 words = dir2(obj)
1041
1042
1042 try:
1043 try:
1043 words = generics.complete_object(obj, words)
1044 words = generics.complete_object(obj, words)
1044 except TryNext:
1045 except TryNext:
1045 pass
1046 pass
1046 except AssertionError:
1047 except AssertionError:
1047 raise
1048 raise
1048 except Exception:
1049 except Exception:
1049 # Silence errors from completion function
1050 # Silence errors from completion function
1050 #raise # dbg
1051 #raise # dbg
1051 pass
1052 pass
1052 # Build match list to return
1053 # Build match list to return
1053 n = len(attr)
1054 n = len(attr)
1054 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
1055 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
1055
1056
1056
1057
1057 def get__all__entries(obj):
1058 def get__all__entries(obj):
1058 """returns the strings in the __all__ attribute"""
1059 """returns the strings in the __all__ attribute"""
1059 try:
1060 try:
1060 words = getattr(obj, '__all__')
1061 words = getattr(obj, '__all__')
1061 except:
1062 except:
1062 return []
1063 return []
1063
1064
1064 return [w for w in words if isinstance(w, str)]
1065 return [w for w in words if isinstance(w, str)]
1065
1066
1066
1067
1067 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
1068 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
1068 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
1069 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
1069 """Used by dict_key_matches, matching the prefix to a list of keys
1070 """Used by dict_key_matches, matching the prefix to a list of keys
1070
1071
1071 Parameters
1072 Parameters
1072 ----------
1073 ----------
1073 keys
1074 keys
1074 list of keys in dictionary currently being completed.
1075 list of keys in dictionary currently being completed.
1075 prefix
1076 prefix
1076 Part of the text already typed by the user. E.g. `mydict[b'fo`
1077 Part of the text already typed by the user. E.g. `mydict[b'fo`
1077 delims
1078 delims
1078 String of delimiters to consider when finding the current key.
1079 String of delimiters to consider when finding the current key.
1079 extra_prefix : optional
1080 extra_prefix : optional
1080 Part of the text already typed in multi-key index cases. E.g. for
1081 Part of the text already typed in multi-key index cases. E.g. for
1081 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1082 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1082
1083
1083 Returns
1084 Returns
1084 -------
1085 -------
1085 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1086 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1086 ``quote`` being the quote that needs to be used to close the current string,
1087 ``quote`` being the quote that needs to be used to close the current string,
1087 ``token_start`` the position where the replacement should start occurring, and
1088 ``token_start`` the position where the replacement should start occurring, and
1088 ``matched`` a list of replacement/completion candidates.
1089 ``matched`` a list of replacement/completion candidates.
1089
1090
1090 """
1091 """
1091 prefix_tuple = extra_prefix if extra_prefix else ()
1092 prefix_tuple = extra_prefix if extra_prefix else ()
1092 Nprefix = len(prefix_tuple)
1093 Nprefix = len(prefix_tuple)
1093 def filter_prefix_tuple(key):
1094 def filter_prefix_tuple(key):
1094 # Reject too short keys
1095 # Reject too short keys
1095 if len(key) <= Nprefix:
1096 if len(key) <= Nprefix:
1096 return False
1097 return False
1097 # Reject keys with non str/bytes in it
1098 # Reject keys with non str/bytes in it
1098 for k in key:
1099 for k in key:
1099 if not isinstance(k, (str, bytes)):
1100 if not isinstance(k, (str, bytes)):
1100 return False
1101 return False
1101 # Reject keys that do not match the prefix
1102 # Reject keys that do not match the prefix
1102 for k, pt in zip(key, prefix_tuple):
1103 for k, pt in zip(key, prefix_tuple):
1103 if k != pt:
1104 if k != pt:
1104 return False
1105 return False
1105 # All checks passed!
1106 # All checks passed!
1106 return True
1107 return True
1107
1108
1108 filtered_keys:List[Union[str,bytes]] = []
1109 filtered_keys:List[Union[str,bytes]] = []
1109 def _add_to_filtered_keys(key):
1110 def _add_to_filtered_keys(key):
1110 if isinstance(key, (str, bytes)):
1111 if isinstance(key, (str, bytes)):
1111 filtered_keys.append(key)
1112 filtered_keys.append(key)
1112
1113
1113 for k in keys:
1114 for k in keys:
1114 if isinstance(k, tuple):
1115 if isinstance(k, tuple):
1115 if filter_prefix_tuple(k):
1116 if filter_prefix_tuple(k):
1116 _add_to_filtered_keys(k[Nprefix])
1117 _add_to_filtered_keys(k[Nprefix])
1117 else:
1118 else:
1118 _add_to_filtered_keys(k)
1119 _add_to_filtered_keys(k)
1119
1120
1120 if not prefix:
1121 if not prefix:
1121 return '', 0, [repr(k) for k in filtered_keys]
1122 return '', 0, [repr(k) for k in filtered_keys]
1122 quote_match = re.search('["\']', prefix)
1123 quote_match = re.search('["\']', prefix)
1123 assert quote_match is not None # silence mypy
1124 assert quote_match is not None # silence mypy
1124 quote = quote_match.group()
1125 quote = quote_match.group()
1125 try:
1126 try:
1126 prefix_str = eval(prefix + quote, {})
1127 prefix_str = eval(prefix + quote, {})
1127 except Exception:
1128 except Exception:
1128 return '', 0, []
1129 return '', 0, []
1129
1130
1130 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1131 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1131 token_match = re.search(pattern, prefix, re.UNICODE)
1132 token_match = re.search(pattern, prefix, re.UNICODE)
1132 assert token_match is not None # silence mypy
1133 assert token_match is not None # silence mypy
1133 token_start = token_match.start()
1134 token_start = token_match.start()
1134 token_prefix = token_match.group()
1135 token_prefix = token_match.group()
1135
1136
1136 matched:List[str] = []
1137 matched:List[str] = []
1137 for key in filtered_keys:
1138 for key in filtered_keys:
1138 try:
1139 try:
1139 if not key.startswith(prefix_str):
1140 if not key.startswith(prefix_str):
1140 continue
1141 continue
1141 except (AttributeError, TypeError, UnicodeError):
1142 except (AttributeError, TypeError, UnicodeError):
1142 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1143 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1143 continue
1144 continue
1144
1145
1145 # reformat remainder of key to begin with prefix
1146 # reformat remainder of key to begin with prefix
1146 rem = key[len(prefix_str):]
1147 rem = key[len(prefix_str):]
1147 # force repr wrapped in '
1148 # force repr wrapped in '
1148 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1149 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1149 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1150 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1150 if quote == '"':
1151 if quote == '"':
1151 # The entered prefix is quoted with ",
1152 # The entered prefix is quoted with ",
1152 # but the match is quoted with '.
1153 # but the match is quoted with '.
1153 # A contained " hence needs escaping for comparison:
1154 # A contained " hence needs escaping for comparison:
1154 rem_repr = rem_repr.replace('"', '\\"')
1155 rem_repr = rem_repr.replace('"', '\\"')
1155
1156
1156 # then reinsert prefix from start of token
1157 # then reinsert prefix from start of token
1157 matched.append('%s%s' % (token_prefix, rem_repr))
1158 matched.append('%s%s' % (token_prefix, rem_repr))
1158 return quote, token_start, matched
1159 return quote, token_start, matched
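A hedged example of the behaviour described in the docstring, using the module-level ``DELIMS`` defined earlier (the key values are illustrative):

.. code::

    match_dict_keys(['foo', 'few', 'bar'], "'f", delims=DELIMS)
    # -> ("'", 1, ['foo', 'few'])     quote to close, token start, completions
    match_dict_keys(['foo', b'bar'], '', delims=DELIMS)
    # -> ('', 0, ["'foo'", "b'bar'"])  no prefix: repr() of every key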
1159
1160
1160
1161
1161 def cursor_to_position(text:str, line:int, column:int)->int:
1162 def cursor_to_position(text:str, line:int, column:int)->int:
1162 """
1163 """
1163 Convert the (line,column) position of the cursor in text to an offset in a
1164 Convert the (line,column) position of the cursor in text to an offset in a
1164 string.
1165 string.
1165
1166
1166 Parameters
1167 Parameters
1167 ----------
1168 ----------
1168 text : str
1169 text : str
1169 The text in which to calculate the cursor offset
1170 The text in which to calculate the cursor offset
1170 line : int
1171 line : int
1171 Line of the cursor; 0-indexed
1172 Line of the cursor; 0-indexed
1172 column : int
1173 column : int
1173 Column of the cursor; 0-indexed
1174 Column of the cursor; 0-indexed
1174
1175
1175 Returns
1176 Returns
1176 -------
1177 -------
1177 Position of the cursor in ``text``, 0-indexed.
1178 Position of the cursor in ``text``, 0-indexed.
1178
1179
1179 See Also
1180 See Also
1180 --------
1181 --------
1181 position_to_cursor : reciprocal of this function
1182 position_to_cursor : reciprocal of this function
1182
1183
1183 """
1184 """
1184 lines = text.split('\n')
1185 lines = text.split('\n')
1185 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1186 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1186
1187
1187 return sum(len(l) + 1 for l in lines[:line]) + column
1188 return sum(len(l) + 1 for l in lines[:line]) + column
1188
1189
1189 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1190 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1190 """
1191 """
1191 Convert the position of the cursor in text (0-indexed) to a line
1192 Convert the position of the cursor in text (0-indexed) to a line
1192 number (0-indexed) and a column number (0-indexed) pair.
1193 number (0-indexed) and a column number (0-indexed) pair.
1193
1194
1194 Position should be a valid position in ``text``.
1195 Position should be a valid position in ``text``.
1195
1196
1196 Parameters
1197 Parameters
1197 ----------
1198 ----------
1198 text : str
1199 text : str
1199 The text in which to calculate the cursor offset
1200 The text in which to calculate the cursor offset
1200 offset : int
1201 offset : int
1201 Position of the cursor in ``text``, 0-indexed.
1202 Position of the cursor in ``text``, 0-indexed.
1202
1203
1203 Returns
1204 Returns
1204 -------
1205 -------
1205 (line, column) : (int, int)
1206 (line, column) : (int, int)
1206 Line of the cursor (0-indexed) and column of the cursor (0-indexed)
1207 Line of the cursor (0-indexed) and column of the cursor (0-indexed)
1207
1208
1208 See Also
1209 See Also
1209 --------
1210 --------
1210 cursor_to_position : reciprocal of this function
1211 cursor_to_position : reciprocal of this function
1211
1212
1212 """
1213 """
1213
1214
1214 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1215 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1215
1216
1216 before = text[:offset]
1217 before = text[:offset]
1217 blines = before.split('\n')  # ! splitlines would drop a trailing \n
1218 blines = before.split('\n')  # ! splitlines would drop a trailing \n
1218 line = before.count('\n')
1219 line = before.count('\n')
1219 col = len(blines[-1])
1220 col = len(blines[-1])
1220 return line, col
1221 return line, col
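A small round-trip sketch of the two helpers above:

.. code::

    text = "ab\ncd"
    cursor_to_position(text, 1, 1)   # -> 4  (the offset of 'd')
    position_to_cursor(text, 4)      # -> (1, 1)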
1221
1222
1222
1223
1223 def _safe_isinstance(obj, module, class_name):
1224 def _safe_isinstance(obj, module, class_name):
1224 """Checks if obj is an instance of module.class_name if loaded
1225 """Checks if obj is an instance of module.class_name if loaded
1225 """
1226 """
1226 return (module in sys.modules and
1227 return (module in sys.modules and
1227 isinstance(obj, getattr(import_module(module), class_name)))
1228 isinstance(obj, getattr(import_module(module), class_name)))
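A short sketch of the "only if loaded" semantics; the stdlib ``collections`` module is used here purely for illustration:

.. code::

    import collections

    _safe_isinstance(collections.OrderedDict(), 'collections', 'OrderedDict')  # -> True
    _safe_isinstance({}, 'collections', 'OrderedDict')                         # -> False
    # modules that were never imported are treated as "not an instance":
    _safe_isinstance({}, 'pandas', 'DataFrame')   # -> False if pandas is not in sys.modules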
1228
1229
1229
1230
1230 @context_matcher()
1231 @context_matcher()
1231 def back_unicode_name_matcher(context: CompletionContext):
1232 def back_unicode_name_matcher(context: CompletionContext):
1232 """Match Unicode characters back to Unicode name
1233 """Match Unicode characters back to Unicode name
1233
1234
1234 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1235 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1235 """
1236 """
1236 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1237 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1237 return _convert_matcher_v1_result_to_v2(
1238 return _convert_matcher_v1_result_to_v2(
1238 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1239 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1239 )
1240 )
1240
1241
1241
1242
1242 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1243 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1243 """Match Unicode characters back to Unicode name
1244 """Match Unicode characters back to Unicode name
1244
1245
1245 This does ``β˜ƒ`` -> ``\\snowman``
1246 This does ``β˜ƒ`` -> ``\\snowman``
1246
1247
1247 Note that snowman is not a valid Python 3 combining character but will be expanded.
1248 Note that snowman is not a valid Python 3 combining character but will be expanded.
1248 It will not, however, be recombined back into the snowman character by the completion machinery.
1249 It will not, however, be recombined back into the snowman character by the completion machinery.
1249
1250
1250 Standard escape sequences such as \\n and \\b are not back-completed either.
1251 Standard escape sequences such as \\n and \\b are not back-completed either.
1251
1252
1252 .. deprecated:: 8.6
1253 .. deprecated:: 8.6
1253 You can use :meth:`back_unicode_name_matcher` instead.
1254 You can use :meth:`back_unicode_name_matcher` instead.
1254
1255
1255 Returns
1256 Returns
1256 -------
1257 -------
1257
1258
1258 Return a tuple with two elements:
1259 Return a tuple with two elements:
1259
1260
1260 - The Unicode character that was matched (preceded by a backslash), or an
1261 - The Unicode character that was matched (preceded by a backslash), or an
1261 empty string,
1262 empty string,
1262 - a sequence (of length 1) containing the name of the matched Unicode character,
1263 - a sequence (of length 1) containing the name of the matched Unicode character,
1263 preceded by a backslash, or empty if there is no match.
1264 preceded by a backslash, or empty if there is no match.
1264 """
1265 """
1265 if len(text)<2:
1266 if len(text)<2:
1266 return '', ()
1267 return '', ()
1267 maybe_slash = text[-2]
1268 maybe_slash = text[-2]
1268 if maybe_slash != '\\':
1269 if maybe_slash != '\\':
1269 return '', ()
1270 return '', ()
1270
1271
1271 char = text[-1]
1272 char = text[-1]
1272 # no expand on quote for completion in strings.
1273 # no expand on quote for completion in strings.
1273 # nor backcomplete standard ascii keys
1274 # nor backcomplete standard ascii keys
1274 if char in string.ascii_letters or char in ('"',"'"):
1275 if char in string.ascii_letters or char in ('"',"'"):
1275 return '', ()
1276 return '', ()
1276 try :
1277 try :
1277 unic = unicodedata.name(char)
1278 unic = unicodedata.name(char)
1278 return '\\'+char,('\\'+unic,)
1279 return '\\'+char,('\\'+unic,)
1279 except KeyError:
1280 except KeyError:
1280 pass
1281 pass
1281 return '', ()
1282 return '', ()
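For example, mirroring the snowman case from the docstring:

.. code::

    back_unicode_name_matches("\\β˜ƒ")
    # -> ('\\β˜ƒ', ('\\SNOWMAN',))
    back_unicode_name_matches("\\n")
    # -> ('', ())   plain ascii letters are never expanded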
1282
1283
1283
1284
1284 @context_matcher()
1285 @context_matcher()
1285 def back_latex_name_matcher(context: CompletionContext):
1286 def back_latex_name_matcher(context: CompletionContext):
1286 """Match latex characters back to unicode name
1287 """Match latex characters back to unicode name
1287
1288
1288 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1289 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1289 """
1290 """
1290 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1291 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1291 return _convert_matcher_v1_result_to_v2(
1292 return _convert_matcher_v1_result_to_v2(
1292 matches, type="latex", fragment=fragment, suppress_if_matches=True
1293 matches, type="latex", fragment=fragment, suppress_if_matches=True
1293 )
1294 )
1294
1295
1295
1296
1296 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1297 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1297 """Match latex characters back to unicode name
1298 """Match latex characters back to unicode name
1298
1299
1299 This does ``\\β„΅`` -> ``\\aleph``
1300 This does ``\\β„΅`` -> ``\\aleph``
1300
1301
1301 .. deprecated:: 8.6
1302 .. deprecated:: 8.6
1302 You can use :meth:`back_latex_name_matcher` instead.
1303 You can use :meth:`back_latex_name_matcher` instead.
1303 """
1304 """
1304 if len(text)<2:
1305 if len(text)<2:
1305 return '', ()
1306 return '', ()
1306 maybe_slash = text[-2]
1307 maybe_slash = text[-2]
1307 if maybe_slash != '\\':
1308 if maybe_slash != '\\':
1308 return '', ()
1309 return '', ()
1309
1310
1310
1311
1311 char = text[-1]
1312 char = text[-1]
1312 # no expand on quote for completion in strings.
1313 # no expand on quote for completion in strings.
1313 # nor backcomplete standard ascii keys
1314 # nor backcomplete standard ascii keys
1314 if char in string.ascii_letters or char in ('"',"'"):
1315 if char in string.ascii_letters or char in ('"',"'"):
1315 return '', ()
1316 return '', ()
1316 try :
1317 try :
1317 latex = reverse_latex_symbol[char]
1318 latex = reverse_latex_symbol[char]
1318 # the '\\' prefix makes the returned fragment replace the backslash as well
1319 # the '\\' prefix makes the returned fragment replace the backslash as well
1319 return '\\'+char,[latex]
1320 return '\\'+char,[latex]
1320 except KeyError:
1321 except KeyError:
1321 pass
1322 pass
1322 return '', ()
1323 return '', ()
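For example (mirroring the aleph case from the docstring, and assuming β„΅ is present in the ``reverse_latex_symbol`` table):

.. code::

    back_latex_name_matches("x = \\β„΅")
    # -> ('\\β„΅', ['\\aleph'])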
1323
1324
1324
1325
1325 def _formatparamchildren(parameter) -> str:
1326 def _formatparamchildren(parameter) -> str:
1326 """
1327 """
1327 Get parameter name and value from Jedi Private API
1328 Get parameter name and value from Jedi Private API
1328
1329
1329 Jedi does not expose a simple way to get `param=value` from its API.
1330 Jedi does not expose a simple way to get `param=value` from its API.
1330
1331
1331 Parameters
1332 Parameters
1332 ----------
1333 ----------
1333 parameter
1334 parameter
1334 Jedi's function `Param`
1335 Jedi's function `Param`
1335
1336
1336 Returns
1337 Returns
1337 -------
1338 -------
1338 A string like 'a', 'b=1', '*args', '**kwargs'
1339 A string like 'a', 'b=1', '*args', '**kwargs'
1339
1340
1340 """
1341 """
1341 description = parameter.description
1342 description = parameter.description
1342 if not description.startswith('param '):
1343 if not description.startswith('param '):
1343 raise ValueError('Jedi function parameter description has changed format. '
1344 raise ValueError('Jedi function parameter description has changed format. '
1344 'Expected "param ...", found %r.' % description)
1345 'Expected "param ...", found %r.' % description)
1345 return description[6:]
1346 return description[6:]
1346
1347
1347 def _make_signature(completion)-> str:
1348 def _make_signature(completion)-> str:
1348 """
1349 """
1349 Make the signature from a jedi completion
1350 Make the signature from a jedi completion
1350
1351
1351 Parameters
1352 Parameters
1352 ----------
1353 ----------
1353 completion : jedi.Completion
1354 completion : jedi.Completion
1354 the Jedi completion object from which to build the signature
1355 the Jedi completion object from which to build the signature
1355
1356
1356 Returns
1357 Returns
1357 -------
1358 -------
1358 a string consisting of the function signature, with the parentheses but
1359 a string consisting of the function signature, with the parentheses but
1359 without the function name. For example:
1360 without the function name. For example:
1360 `(a, *args, b=1, **kwargs)`
1361 `(a, *args, b=1, **kwargs)`
1361
1362
1362 """
1363 """
1363
1364
1364 # it looks like this might work on jedi 0.17
1365 # it looks like this might work on jedi 0.17
1365 if hasattr(completion, 'get_signatures'):
1366 if hasattr(completion, 'get_signatures'):
1366 signatures = completion.get_signatures()
1367 signatures = completion.get_signatures()
1367 if not signatures:
1368 if not signatures:
1368 return '(?)'
1369 return '(?)'
1369
1370
1370 c0 = completion.get_signatures()[0]
1371 c0 = completion.get_signatures()[0]
1371 return '('+c0.to_string().split('(', maxsplit=1)[1]
1372 return '('+c0.to_string().split('(', maxsplit=1)[1]
1372
1373
1373 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1374 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1374 for p in signature.defined_names()) if f])
1375 for p in signature.defined_names()) if f])
1375
1376
1376
1377
1377 _CompleteResult = Dict[str, MatcherResult]
1378 _CompleteResult = Dict[str, MatcherResult]
1378
1379
1379
1380
1380 def _convert_matcher_v1_result_to_v2(
1381 def _convert_matcher_v1_result_to_v2(
1381 matches: Sequence[str],
1382 matches: Sequence[str],
1382 type: str,
1383 type: str,
1383 fragment: str = None,
1384 fragment: str = None,
1384 suppress_if_matches: bool = False,
1385 suppress_if_matches: bool = False,
1385 ) -> SimpleMatcherResult:
1386 ) -> SimpleMatcherResult:
1386 """Utility to help with transition"""
1387 """Utility to help with transition"""
1387 result = {
1388 result = {
1388 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1389 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1389 "suppress": (True if matches else False) if suppress_if_matches else False,
1390 "suppress": (True if matches else False) if suppress_if_matches else False,
1390 }
1391 }
1391 if fragment is not None:
1392 if fragment is not None:
1392 result["matched_fragment"] = fragment
1393 result["matched_fragment"] = fragment
1393 return result
1394 return result
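A small sketch of the converted result (the match values are illustrative; ``SimpleCompletion`` objects expose the ``text`` passed in above):

.. code::

    result = _convert_matcher_v1_result_to_v2(
        ['\\alpha', '\\aleph'], type='latex', fragment='\\al', suppress_if_matches=True
    )
    result['suppress']                         # -> True  (there were matches)
    result['matched_fragment']                 # -> '\\al'
    [c.text for c in result['completions']]    # -> ['\\alpha', '\\aleph']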
1394
1395
1395
1396
1396 class IPCompleter(Completer):
1397 class IPCompleter(Completer):
1397 """Extension of the completer class with IPython-specific features"""
1398 """Extension of the completer class with IPython-specific features"""
1398
1399
1399 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1400 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1400
1401
1401 @observe('greedy')
1402 @observe('greedy')
1402 def _greedy_changed(self, change):
1403 def _greedy_changed(self, change):
1403 """update the splitter and readline delims when greedy is changed"""
1404 """update the splitter and readline delims when greedy is changed"""
1404 if change['new']:
1405 if change['new']:
1405 self.splitter.delims = GREEDY_DELIMS
1406 self.splitter.delims = GREEDY_DELIMS
1406 else:
1407 else:
1407 self.splitter.delims = DELIMS
1408 self.splitter.delims = DELIMS
1408
1409
1409 dict_keys_only = Bool(
1410 dict_keys_only = Bool(
1410 False,
1411 False,
1411 help="""
1412 help="""
1412 Whether to show dict key matches only.
1413 Whether to show dict key matches only.
1413
1414
1414 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1415 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1415 """,
1416 """,
1416 )
1417 )
1417
1418
1418 suppress_competing_matchers = UnionTrait(
1419 suppress_competing_matchers = UnionTrait(
1419 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1420 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1420 default_value=None,
1421 default_value=None,
1421 help="""
1422 help="""
1422 Whether to suppress completions from other *Matchers*.
1423 Whether to suppress completions from other *Matchers*.
1423
1424
1424 When set to ``None`` (default) the matchers will attempt to auto-detect
1425 When set to ``None`` (default) the matchers will attempt to auto-detect
1425 whether suppression of other matchers is desirable. For example, at
1426 whether suppression of other matchers is desirable. For example, at
1426 the beginning of a line followed by `%` we expect a magic completion
1427 the beginning of a line followed by `%` we expect a magic completion
1427 to be the only applicable option, and after ``my_dict['`` we usually
1428 to be the only applicable option, and after ``my_dict['`` we usually
1428 expect a completion with an existing dictionary key.
1429 expect a completion with an existing dictionary key.
1429
1430
1430 If you want to disable this heuristic and see completions from all matchers,
1431 If you want to disable this heuristic and see completions from all matchers,
1431 set ``IPCompleter.suppress_competing_matchers = False``.
1432 set ``IPCompleter.suppress_competing_matchers = False``.
1432 To disable the heuristic for specific matchers provide a dictionary mapping:
1433 To disable the heuristic for specific matchers provide a dictionary mapping:
1433 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1434 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1434
1435
1435 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1436 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1436 completions to the set of matchers with the highest priority;
1437 completions to the set of matchers with the highest priority;
1437 this is equivalent to ``IPCompleter.merge_completions`` and
1438 this is equivalent to ``IPCompleter.merge_completions`` and
1438 can be beneficial for performance, but will sometimes omit relevant
1439 can be beneficial for performance, but will sometimes omit relevant
1439 candidates from matchers further down the priority list.
1440 candidates from matchers further down the priority list.
1440 """,
1441 """,
1441 ).tag(config=True)
1442 ).tag(config=True)
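For instance, a minimal sketch of how this could be set from an ``ipython_config.py`` profile file, using the matcher identifier quoted in the help text above:

.. code::

    c = get_config()  # provided by the IPython configuration loader

    # keep the dict-key matcher from suppressing other matchers, while
    # leaving the auto-detection heuristic in place for everything else
    c.IPCompleter.suppress_competing_matchers = {
        "IPCompleter.dict_key_matcher": False,
    }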
1442
1443
1443 merge_completions = Bool(
1444 merge_completions = Bool(
1444 True,
1445 True,
1445 help="""Whether to merge completion results into a single list
1446 help="""Whether to merge completion results into a single list
1446
1447
1447 If False, only the completion results from the first non-empty
1448 If False, only the completion results from the first non-empty
1448 completer will be returned.
1449 completer will be returned.
1449
1450
1450 As of version 8.6.0, setting the value to ``False`` is an alias for:
1451 As of version 8.6.0, setting the value to ``False`` is an alias for:
1451 ``IPCompleter.suppress_competing_matchers = True``.
1452 ``IPCompleter.suppress_competing_matchers = True``.
1452 """,
1453 """,
1453 ).tag(config=True)
1454 ).tag(config=True)
1454
1455
1455 disable_matchers = ListTrait(
1456 disable_matchers = ListTrait(
1456 Unicode(), help="""List of matchers to disable."""
1457 Unicode(), help="""List of matchers to disable."""
1457 ).tag(config=True)
1458 ).tag(config=True)
1458
1459
1459 omit__names = Enum(
1460 omit__names = Enum(
1460 (0, 1, 2),
1461 (0, 1, 2),
1461 default_value=2,
1462 default_value=2,
1462 help="""Instruct the completer to omit private method names
1463 help="""Instruct the completer to omit private method names
1463
1464
1464 Specifically, when completing on ``object.<tab>``.
1465 Specifically, when completing on ``object.<tab>``.
1465
1466
1466 When 2 [default]: all names that start with '_' will be excluded.
1467 When 2 [default]: all names that start with '_' will be excluded.
1467
1468
1468 When 1: all 'magic' names (``__foo__``) will be excluded.
1469 When 1: all 'magic' names (``__foo__``) will be excluded.
1469
1470
1470 When 0: nothing will be excluded.
1471 When 0: nothing will be excluded.
1471 """
1472 """
1472 ).tag(config=True)
1473 ).tag(config=True)
1473 limit_to__all__ = Bool(False,
1474 limit_to__all__ = Bool(False,
1474 help="""
1475 help="""
1475 DEPRECATED as of version 5.0.
1476 DEPRECATED as of version 5.0.
1476
1477
1477 Instruct the completer to use __all__ for the completion
1478 Instruct the completer to use __all__ for the completion
1478
1479
1479 Specifically, when completing on ``object.<tab>``.
1480 Specifically, when completing on ``object.<tab>``.
1480
1481
1481 When True: only those names in obj.__all__ will be included.
1482 When True: only those names in obj.__all__ will be included.
1482
1483
1483 When False [default]: the __all__ attribute is ignored
1484 When False [default]: the __all__ attribute is ignored
1484 """,
1485 """,
1485 ).tag(config=True)
1486 ).tag(config=True)
1486
1487
1487 profile_completions = Bool(
1488 profile_completions = Bool(
1488 default_value=False,
1489 default_value=False,
1489 help="If True, emit profiling data for completion subsystem using cProfile."
1490 help="If True, emit profiling data for completion subsystem using cProfile."
1490 ).tag(config=True)
1491 ).tag(config=True)
1491
1492
1492 profiler_output_dir = Unicode(
1493 profiler_output_dir = Unicode(
1493 default_value=".completion_profiles",
1494 default_value=".completion_profiles",
1494 help="Template for path at which to output profile data for completions."
1495 help="Template for path at which to output profile data for completions."
1495 ).tag(config=True)
1496 ).tag(config=True)
1496
1497
1497 @observe('limit_to__all__')
1498 @observe('limit_to__all__')
1498 def _limit_to_all_changed(self, change):
1499 def _limit_to_all_changed(self, change):
1499 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1500 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1500 'value has been deprecated since IPython 5.0, will be made to have '
1501 'value has been deprecated since IPython 5.0, will be made to have '
1501 'no effect and then removed in a future version of IPython.',
1502 'no effect and then removed in a future version of IPython.',
1502 UserWarning)
1503 UserWarning)
1503
1504
1504 def __init__(
1505 def __init__(
1505 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1506 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1506 ):
1507 ):
1507 """IPCompleter() -> completer
1508 """IPCompleter() -> completer
1508
1509
1509 Return a completer object.
1510 Return a completer object.
1510
1511
1511 Parameters
1512 Parameters
1512 ----------
1513 ----------
1513 shell
1514 shell
1514 a pointer to the ipython shell itself. This is needed
1515 a pointer to the ipython shell itself. This is needed
1515 because this completer knows about magic functions, and those can
1516 because this completer knows about magic functions, and those can
1516 only be accessed via the ipython instance.
1517 only be accessed via the ipython instance.
1517 namespace : dict, optional
1518 namespace : dict, optional
1518 an optional dict where completions are performed.
1519 an optional dict where completions are performed.
1519 global_namespace : dict, optional
1520 global_namespace : dict, optional
1520 secondary optional dict for completions, to
1521 secondary optional dict for completions, to
1521 handle cases (such as IPython embedded inside functions) where
1522 handle cases (such as IPython embedded inside functions) where
1522 both Python scopes are visible.
1523 both Python scopes are visible.
1523 config : Config
1524 config : Config
1524 traitlets config object
1525 traitlets config object
1525 **kwargs
1526 **kwargs
1526 passed to super class unmodified.
1527 passed to super class unmodified.
1527 """
1528 """
1528
1529
1529 self.magic_escape = ESC_MAGIC
1530 self.magic_escape = ESC_MAGIC
1530 self.splitter = CompletionSplitter()
1531 self.splitter = CompletionSplitter()
1531
1532
1532 # _greedy_changed() depends on splitter and readline being defined:
1533 # _greedy_changed() depends on splitter and readline being defined:
1533 super().__init__(
1534 super().__init__(
1534 namespace=namespace,
1535 namespace=namespace,
1535 global_namespace=global_namespace,
1536 global_namespace=global_namespace,
1536 config=config,
1537 config=config,
1537 **kwargs,
1538 **kwargs,
1538 )
1539 )
1539
1540
1540 # List where completion matches will be stored
1541 # List where completion matches will be stored
1541 self.matches = []
1542 self.matches = []
1542 self.shell = shell
1543 self.shell = shell
1543 # Regexp to split filenames with spaces in them
1544 # Regexp to split filenames with spaces in them
1544 self.space_name_re = re.compile(r'([^\\] )')
1545 self.space_name_re = re.compile(r'([^\\] )')
1545 # Hold a local ref. to glob.glob for speed
1546 # Hold a local ref. to glob.glob for speed
1546 self.glob = glob.glob
1547 self.glob = glob.glob
1547
1548
1548 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1549 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1549 # buffers, to avoid completion problems.
1550 # buffers, to avoid completion problems.
1550 term = os.environ.get('TERM','xterm')
1551 term = os.environ.get('TERM','xterm')
1551 self.dumb_terminal = term in ['dumb','emacs']
1552 self.dumb_terminal = term in ['dumb','emacs']
1552
1553
1553 # Special handling of backslashes needed in win32 platforms
1554 # Special handling of backslashes needed in win32 platforms
1554 if sys.platform == "win32":
1555 if sys.platform == "win32":
1555 self.clean_glob = self._clean_glob_win32
1556 self.clean_glob = self._clean_glob_win32
1556 else:
1557 else:
1557 self.clean_glob = self._clean_glob
1558 self.clean_glob = self._clean_glob
1558
1559
1559 #regexp to parse docstring for function signature
1560 #regexp to parse docstring for function signature
1560 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1561 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1561 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1562 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1562 #use this if positional argument name is also needed
1563 #use this if positional argument name is also needed
1563 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1564 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1564
1565
1565 self.magic_arg_matchers = [
1566 self.magic_arg_matchers = [
1566 self.magic_config_matcher,
1567 self.magic_config_matcher,
1567 self.magic_color_matcher,
1568 self.magic_color_matcher,
1568 ]
1569 ]
1569
1570
1570 # This is set externally by InteractiveShell
1571 # This is set externally by InteractiveShell
1571 self.custom_completers = None
1572 self.custom_completers = None
1572
1573
1573 # This is a list of names of unicode characters that can be completed
1574 # This is a list of names of unicode characters that can be completed
1574 # into their corresponding unicode value. The list is large, so we
1575 # into their corresponding unicode value. The list is large, so we
1575 # lazily initialize it on first use. Consuming code should access this
1576 # lazily initialize it on first use. Consuming code should access this
1576 # attribute through the `@unicode_names` property.
1577 # attribute through the `@unicode_names` property.
1577 self._unicode_names = None
1578 self._unicode_names = None
1578
1579
1579 self._backslash_combining_matchers = [
1580 self._backslash_combining_matchers = [
1580 self.latex_name_matcher,
1581 self.latex_name_matcher,
1581 self.unicode_name_matcher,
1582 self.unicode_name_matcher,
1582 back_latex_name_matcher,
1583 back_latex_name_matcher,
1583 back_unicode_name_matcher,
1584 back_unicode_name_matcher,
1584 self.fwd_unicode_matcher,
1585 self.fwd_unicode_matcher,
1585 ]
1586 ]
1586
1587
1587 if not self.backslash_combining_completions:
1588 if not self.backslash_combining_completions:
1588 for matcher in self._backslash_combining_matchers:
1589 for matcher in self._backslash_combining_matchers:
1589 self.disable_matchers.append(matcher.matcher_identifier)
1590 self.disable_matchers.append(matcher.matcher_identifier)
1590
1591
1591 if not self.merge_completions:
1592 if not self.merge_completions:
1592 self.suppress_competing_matchers = True
1593 self.suppress_competing_matchers = True
1593
1594
1594 @property
1595 @property
1595 def matchers(self) -> List[Matcher]:
1596 def matchers(self) -> List[Matcher]:
1596 """All active matcher routines for completion"""
1597 """All active matcher routines for completion"""
1597 if self.dict_keys_only:
1598 if self.dict_keys_only:
1598 return [self.dict_key_matcher]
1599 return [self.dict_key_matcher]
1599
1600
1600 if self.use_jedi:
1601 if self.use_jedi:
1601 return [
1602 return [
1602 *self.custom_matchers,
1603 *self.custom_matchers,
1603 *self._backslash_combining_matchers,
1604 *self._backslash_combining_matchers,
1604 *self.magic_arg_matchers,
1605 *self.magic_arg_matchers,
1605 self.custom_completer_matcher,
1606 self.custom_completer_matcher,
1606 self.magic_matcher,
1607 self.magic_matcher,
1607 self._jedi_matcher,
1608 self._jedi_matcher,
1608 self.dict_key_matcher,
1609 self.dict_key_matcher,
1609 self.file_matcher,
1610 self.file_matcher,
1610 ]
1611 ]
1611 else:
1612 else:
1612 return [
1613 return [
1613 *self.custom_matchers,
1614 *self.custom_matchers,
1614 *self._backslash_combining_matchers,
1615 *self._backslash_combining_matchers,
1615 *self.magic_arg_matchers,
1616 *self.magic_arg_matchers,
1616 self.custom_completer_matcher,
1617 self.custom_completer_matcher,
1617 self.dict_key_matcher,
1618 self.dict_key_matcher,
1618 # TODO: convert python_matches to v2 API
1619 # TODO: convert python_matches to v2 API
1619 self.magic_matcher,
1620 self.magic_matcher,
1620 self.python_matches,
1621 self.python_matches,
1621 self.file_matcher,
1622 self.file_matcher,
1622 self.python_func_kw_matcher,
1623 self.python_func_kw_matcher,
1623 ]
1624 ]
1624
1625
1625 def all_completions(self, text:str) -> List[str]:
1626 def all_completions(self, text:str) -> List[str]:
1626 """
1627 """
1627 Wrapper around the completion methods for the benefit of emacs.
1628 Wrapper around the completion methods for the benefit of emacs.
1628 """
1629 """
1629 prefix = text.rpartition('.')[0]
1630 prefix = text.rpartition('.')[0]
1630 with provisionalcompleter():
1631 with provisionalcompleter():
1631 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1632 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1632 for c in self.completions(text, len(text))]
1633 for c in self.completions(text, len(text))]
1633
1634
1634 return self.complete(text)[1]
1635 return self.complete(text)[1]
1635
1636
1636 def _clean_glob(self, text:str):
1637 def _clean_glob(self, text:str):
1637 return self.glob("%s*" % text)
1638 return self.glob("%s*" % text)
1638
1639
1639 def _clean_glob_win32(self, text:str):
1640 def _clean_glob_win32(self, text:str):
1640 return [f.replace("\\","/")
1641 return [f.replace("\\","/")
1641 for f in self.glob("%s*" % text)]
1642 for f in self.glob("%s*" % text)]
1642
1643
1643 @context_matcher()
1644 @context_matcher()
1644 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1645 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1645 """Same as :any:`file_matches`, but adopted to new Matcher API."""
1646 """Same as :any:`file_matches`, but adopted to new Matcher API."""
1646 matches = self.file_matches(context.token)
1647 matches = self.file_matches(context.token)
1647 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1648 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1648 # starts with `/home/`, `C:\`, etc)
1649 # starts with `/home/`, `C:\`, etc)
1649 return _convert_matcher_v1_result_to_v2(matches, type="path")
1650 return _convert_matcher_v1_result_to_v2(matches, type="path")
1650
1651
1651 def file_matches(self, text: str) -> List[str]:
1652 def file_matches(self, text: str) -> List[str]:
1652 """Match filenames, expanding ~USER type strings.
1653 """Match filenames, expanding ~USER type strings.
1653
1654
1654 Most of the seemingly convoluted logic in this completer is an
1655 Most of the seemingly convoluted logic in this completer is an
1655 attempt to handle filenames with spaces in them. And yet it's not
1656 attempt to handle filenames with spaces in them. And yet it's not
1656 quite perfect, because Python's readline doesn't expose all of the
1657 quite perfect, because Python's readline doesn't expose all of the
1657 GNU readline details needed for this to be done correctly.
1658 GNU readline details needed for this to be done correctly.
1658
1659
1659 For a filename with a space in it, the printed completions will be
1660 For a filename with a space in it, the printed completions will be
1660 only the parts after what's already been typed (instead of the
1661 only the parts after what's already been typed (instead of the
1661 full completions, as is normally done). I don't think with the
1662 full completions, as is normally done). I don't think with the
1662 current (as of Python 2.3) Python readline it's possible to do
1663 current (as of Python 2.3) Python readline it's possible to do
1663 better.
1664 better.
1664
1665
1665 .. deprecated:: 8.6
1666 .. deprecated:: 8.6
1666 You can use :meth:`file_matcher` instead.
1667 You can use :meth:`file_matcher` instead.
1667 """
1668 """
1668
1669
1669 # chars that require escaping with backslash - i.e. chars
1670 # chars that require escaping with backslash - i.e. chars
1670 # that readline treats incorrectly as delimiters, but we
1671 # that readline treats incorrectly as delimiters, but we
1671 # don't want to treat as delimiters in filename matching
1672 # don't want to treat as delimiters in filename matching
1672 # when escaped with backslash
1673 # when escaped with backslash
1673 if text.startswith('!'):
1674 if text.startswith('!'):
1674 text = text[1:]
1675 text = text[1:]
1675 text_prefix = u'!'
1676 text_prefix = u'!'
1676 else:
1677 else:
1677 text_prefix = u''
1678 text_prefix = u''
1678
1679
1679 text_until_cursor = self.text_until_cursor
1680 text_until_cursor = self.text_until_cursor
1680 # track strings with open quotes
1681 # track strings with open quotes
1681 open_quotes = has_open_quotes(text_until_cursor)
1682 open_quotes = has_open_quotes(text_until_cursor)
1682
1683
1683 if '(' in text_until_cursor or '[' in text_until_cursor:
1684 if '(' in text_until_cursor or '[' in text_until_cursor:
1684 lsplit = text
1685 lsplit = text
1685 else:
1686 else:
1686 try:
1687 try:
1687 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1688 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1688 lsplit = arg_split(text_until_cursor)[-1]
1689 lsplit = arg_split(text_until_cursor)[-1]
1689 except ValueError:
1690 except ValueError:
1690 # typically an unmatched ", or backslash without escaped char.
1691 # typically an unmatched ", or backslash without escaped char.
1691 if open_quotes:
1692 if open_quotes:
1692 lsplit = text_until_cursor.split(open_quotes)[-1]
1693 lsplit = text_until_cursor.split(open_quotes)[-1]
1693 else:
1694 else:
1694 return []
1695 return []
1695 except IndexError:
1696 except IndexError:
1696 # tab pressed on empty line
1697 # tab pressed on empty line
1697 lsplit = ""
1698 lsplit = ""
1698
1699
1699 if not open_quotes and lsplit != protect_filename(lsplit):
1700 if not open_quotes and lsplit != protect_filename(lsplit):
1700 # if protectables are found, do matching on the whole escaped name
1701 # if protectables are found, do matching on the whole escaped name
1701 has_protectables = True
1702 has_protectables = True
1702 text0,text = text,lsplit
1703 text0,text = text,lsplit
1703 else:
1704 else:
1704 has_protectables = False
1705 has_protectables = False
1705 text = os.path.expanduser(text)
1706 text = os.path.expanduser(text)
1706
1707
1707 if text == "":
1708 if text == "":
1708 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1709 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1709
1710
1710 # Compute the matches from the filesystem
1711 # Compute the matches from the filesystem
1711 if sys.platform == 'win32':
1712 if sys.platform == 'win32':
1712 m0 = self.clean_glob(text)
1713 m0 = self.clean_glob(text)
1713 else:
1714 else:
1714 m0 = self.clean_glob(text.replace('\\', ''))
1715 m0 = self.clean_glob(text.replace('\\', ''))
1715
1716
1716 if has_protectables:
1717 if has_protectables:
1717 # If we had protectables, we need to revert our changes to the
1718 # If we had protectables, we need to revert our changes to the
1718 # beginning of filename so that we don't double-write the part
1719 # beginning of filename so that we don't double-write the part
1719 # of the filename we have so far
1720 # of the filename we have so far
1720 len_lsplit = len(lsplit)
1721 len_lsplit = len(lsplit)
1721 matches = [text_prefix + text0 +
1722 matches = [text_prefix + text0 +
1722 protect_filename(f[len_lsplit:]) for f in m0]
1723 protect_filename(f[len_lsplit:]) for f in m0]
1723 else:
1724 else:
1724 if open_quotes:
1725 if open_quotes:
1725 # if we have a string with an open quote, we don't need to
1726 # if we have a string with an open quote, we don't need to
1726 # protect the names beyond the quote (and we _shouldn't_, as
1727 # protect the names beyond the quote (and we _shouldn't_, as
1727 # it would cause bugs when the filesystem call is made).
1728 # it would cause bugs when the filesystem call is made).
1728 matches = m0 if sys.platform == "win32" else\
1729 matches = m0 if sys.platform == "win32" else\
1729 [protect_filename(f, open_quotes) for f in m0]
1730 [protect_filename(f, open_quotes) for f in m0]
1730 else:
1731 else:
1731 matches = [text_prefix +
1732 matches = [text_prefix +
1732 protect_filename(f) for f in m0]
1733 protect_filename(f) for f in m0]
1733
1734
1734 # Mark directories in input list by appending '/' to their names.
1735 # Mark directories in input list by appending '/' to their names.
1735 return [x+'/' if os.path.isdir(x) else x for x in matches]
1736 return [x+'/' if os.path.isdir(x) else x for x in matches]
1736
1737
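# Illustrative sketch: the matcher above escapes "protectable" characters so
# that a completed path can be pasted back into the command line verbatim.
# The helpers below are toy stand-ins only; IPython's real protect_filename
# and glob handling may differ.
import glob
import os

def _toy_protect(name: str, protectables: str = ' ()[]{}?=\\|;:\'#*"^&') -> str:
    # Backslash-escape any character the line splitter would otherwise mangle.
    return "".join('\\' + ch if ch in protectables else ch for ch in name)

def _toy_file_matches(prefix: str):
    pattern = os.path.expanduser(prefix) + '*'
    matches = [_toy_protect(m) for m in glob.glob(pattern)]
    # Mark directories by appending '/' just like the code above does.
    return [m + '/' if os.path.isdir(m.replace('\\', '')) else m for m in matches]

# e.g. _toy_file_matches('My Doc') might return ['My\\ Documents/']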
1737 @context_matcher()
1738 @context_matcher()
1738 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1739 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1739 """Match magics."""
1740 """Match magics."""
1740 text = context.token
1741 text = context.token
1741 matches = self.magic_matches(text)
1742 matches = self.magic_matches(text)
1742 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1743 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1743 is_magic_prefix = len(text) > 0 and text[0] == "%"
1744 is_magic_prefix = len(text) > 0 and text[0] == "%"
1744 result["suppress"] = is_magic_prefix and bool(result["completions"])
1745 result["suppress"] = is_magic_prefix and bool(result["completions"])
1745 return result
1746 return result
1746
1747
1747 def magic_matches(self, text: str):
1748 def magic_matches(self, text: str):
1748 """Match magics.
1749 """Match magics.
1749
1750
1750 .. deprecated:: 8.6
1751 .. deprecated:: 8.6
1751 You can use :meth:`magic_matcher` instead.
1752 You can use :meth:`magic_matcher` instead.
1752 """
1753 """
1753 # Get all shell magics now rather than statically, so magics loaded at
1754 # Get all shell magics now rather than statically, so magics loaded at
1754 # runtime show up too.
1755 # runtime show up too.
1755 lsm = self.shell.magics_manager.lsmagic()
1756 lsm = self.shell.magics_manager.lsmagic()
1756 line_magics = lsm['line']
1757 line_magics = lsm['line']
1757 cell_magics = lsm['cell']
1758 cell_magics = lsm['cell']
1758 pre = self.magic_escape
1759 pre = self.magic_escape
1759 pre2 = pre+pre
1760 pre2 = pre+pre
1760
1761
1761 explicit_magic = text.startswith(pre)
1762 explicit_magic = text.startswith(pre)
1762
1763
1763 # Completion logic:
1764 # Completion logic:
1764 # - user gives %%: only do cell magics
1765 # - user gives %%: only do cell magics
1765 # - user gives %: do both line and cell magics
1766 # - user gives %: do both line and cell magics
1766 # - no prefix: do both
1767 # - no prefix: do both
1767 # In other words, line magics are skipped if the user gives %% explicitly
1768 # In other words, line magics are skipped if the user gives %% explicitly
1768 #
1769 #
1769 # We also exclude magics that match any currently visible names:
1770 # We also exclude magics that match any currently visible names:
1770 # https://github.com/ipython/ipython/issues/4877, unless the user has
1771 # https://github.com/ipython/ipython/issues/4877, unless the user has
1771 # typed a %:
1772 # typed a %:
1772 # https://github.com/ipython/ipython/issues/10754
1773 # https://github.com/ipython/ipython/issues/10754
1773 bare_text = text.lstrip(pre)
1774 bare_text = text.lstrip(pre)
1774 global_matches = self.global_matches(bare_text)
1775 global_matches = self.global_matches(bare_text)
1775 if not explicit_magic:
1776 if not explicit_magic:
1776 def matches(magic):
1777 def matches(magic):
1777 """
1778 """
1778 Filter magics, in particular remove magics that match
1779 Filter magics, in particular remove magics that match
1779 a name present in global namespace.
1780 a name present in global namespace.
1780 """
1781 """
1781 return ( magic.startswith(bare_text) and
1782 return ( magic.startswith(bare_text) and
1782 magic not in global_matches )
1783 magic not in global_matches )
1783 else:
1784 else:
1784 def matches(magic):
1785 def matches(magic):
1785 return magic.startswith(bare_text)
1786 return magic.startswith(bare_text)
1786
1787
1787 comp = [ pre2+m for m in cell_magics if matches(m)]
1788 comp = [ pre2+m for m in cell_magics if matches(m)]
1788 if not text.startswith(pre2):
1789 if not text.startswith(pre2):
1789 comp += [ pre+m for m in line_magics if matches(m)]
1790 comp += [ pre+m for m in line_magics if matches(m)]
1790
1791
1791 return comp
1792 return comp
1792
1793
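# Illustrative sketch of the %/%% prefix rules documented above, applied to a
# toy set of magics ('%%' restricts matches to cell magics, '%' or no prefix
# offers both; the real matcher also consults the global namespace):
def _toy_magic_matches(text, line_magics=('time', 'timeit'), cell_magics=('time', 'writefile')):
    pre, pre2 = '%', '%%'
    bare = text.lstrip(pre)
    comp = [pre2 + m for m in cell_magics if m.startswith(bare)]
    if not text.startswith(pre2):
        comp += [pre + m for m in line_magics if m.startswith(bare)]
    return comp

assert _toy_magic_matches('%%ti') == ['%%time']
assert _toy_magic_matches('%ti') == ['%%time', '%time', '%timeit']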
1793 @context_matcher()
1794 @context_matcher()
1794 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1795 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1795 """Match class names and attributes for %config magic."""
1796 """Match class names and attributes for %config magic."""
1796 # NOTE: uses `line_buffer` equivalent for compatibility
1797 # NOTE: uses `line_buffer` equivalent for compatibility
1797 matches = self.magic_config_matches(context.line_with_cursor)
1798 matches = self.magic_config_matches(context.line_with_cursor)
1798 return _convert_matcher_v1_result_to_v2(matches, type="param")
1799 return _convert_matcher_v1_result_to_v2(matches, type="param")
1799
1800
1800 def magic_config_matches(self, text: str) -> List[str]:
1801 def magic_config_matches(self, text: str) -> List[str]:
1801 """Match class names and attributes for %config magic.
1802 """Match class names and attributes for %config magic.
1802
1803
1803 .. deprecated:: 8.6
1804 .. deprecated:: 8.6
1804 You can use :meth:`magic_config_matcher` instead.
1805 You can use :meth:`magic_config_matcher` instead.
1805 """
1806 """
1806 texts = text.strip().split()
1807 texts = text.strip().split()
1807
1808
1808 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1809 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1809 # get all configuration classes
1810 # get all configuration classes
1810 classes = sorted(set([ c for c in self.shell.configurables
1811 classes = sorted(set([ c for c in self.shell.configurables
1811 if c.__class__.class_traits(config=True)
1812 if c.__class__.class_traits(config=True)
1812 ]), key=lambda x: x.__class__.__name__)
1813 ]), key=lambda x: x.__class__.__name__)
1813 classnames = [ c.__class__.__name__ for c in classes ]
1814 classnames = [ c.__class__.__name__ for c in classes ]
1814
1815
1815 # return all classnames if config or %config is given
1816 # return all classnames if config or %config is given
1816 if len(texts) == 1:
1817 if len(texts) == 1:
1817 return classnames
1818 return classnames
1818
1819
1819 # match classname
1820 # match classname
1820 classname_texts = texts[1].split('.')
1821 classname_texts = texts[1].split('.')
1821 classname = classname_texts[0]
1822 classname = classname_texts[0]
1822 classname_matches = [ c for c in classnames
1823 classname_matches = [ c for c in classnames
1823 if c.startswith(classname) ]
1824 if c.startswith(classname) ]
1824
1825
1825 # return matched classes or the matched class with attributes
1826 # return matched classes or the matched class with attributes
1826 if texts[1].find('.') < 0:
1827 if texts[1].find('.') < 0:
1827 return classname_matches
1828 return classname_matches
1828 elif len(classname_matches) == 1 and \
1829 elif len(classname_matches) == 1 and \
1829 classname_matches[0] == classname:
1830 classname_matches[0] == classname:
1830 cls = classes[classnames.index(classname)].__class__
1831 cls = classes[classnames.index(classname)].__class__
1831 help = cls.class_get_help()
1832 help = cls.class_get_help()
1832 # strip leading '--' from cl-args:
1833 # strip leading '--' from cl-args:
1833 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1834 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1834 return [ attr.split('=')[0]
1835 return [ attr.split('=')[0]
1835 for attr in help.strip().splitlines()
1836 for attr in help.strip().splitlines()
1836 if attr.startswith(texts[1]) ]
1837 if attr.startswith(texts[1]) ]
1837 return []
1838 return []
1838
1839
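# Illustrative sketch: %config completion happens in two stages, mirroring the
# logic above -- first the configurable class name, then "Class.attribute" once
# the class part is an exact, unambiguous match.  Toy data stands in for
# shell.configurables and class_get_help().
TOY_CONFIGURABLES = {'InteractiveShell': ['autocall', 'automagic']}

def _toy_config_matches(argument):
    classnames = sorted(TOY_CONFIGURABLES)
    if not argument:
        return classnames
    classname = argument.split('.')[0]
    candidates = [c for c in classnames if c.startswith(classname)]
    if '.' not in argument:
        return candidates
    if candidates == [classname]:
        return [f'{classname}.{attr}' for attr in TOY_CONFIGURABLES[classname]
                if f'{classname}.{attr}'.startswith(argument)]
    return []

assert _toy_config_matches('Inter') == ['InteractiveShell']
assert _toy_config_matches('InteractiveShell.auto') == [
    'InteractiveShell.autocall', 'InteractiveShell.automagic']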
1839 @context_matcher()
1840 @context_matcher()
1840 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1841 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1841 """Match color schemes for %colors magic."""
1842 """Match color schemes for %colors magic."""
1842 # NOTE: uses `line_buffer` equivalent for compatibility
1843 # NOTE: uses `line_buffer` equivalent for compatibility
1843 matches = self.magic_color_matches(context.line_with_cursor)
1844 matches = self.magic_color_matches(context.line_with_cursor)
1844 return _convert_matcher_v1_result_to_v2(matches, type="param")
1845 return _convert_matcher_v1_result_to_v2(matches, type="param")
1845
1846
1846 def magic_color_matches(self, text: str) -> List[str]:
1847 def magic_color_matches(self, text: str) -> List[str]:
1847 """Match color schemes for %colors magic.
1848 """Match color schemes for %colors magic.
1848
1849
1849 .. deprecated:: 8.6
1850 .. deprecated:: 8.6
1850 You can use :meth:`magic_color_matcher` instead.
1851 You can use :meth:`magic_color_matcher` instead.
1851 """
1852 """
1852 texts = text.split()
1853 texts = text.split()
1853 if text.endswith(' '):
1854 if text.endswith(' '):
1854 # .split() strips off the trailing whitespace. Add '' back
1855 # .split() strips off the trailing whitespace. Add '' back
1855 # so that: '%colors ' -> ['%colors', '']
1856 # so that: '%colors ' -> ['%colors', '']
1856 texts.append('')
1857 texts.append('')
1857
1858
1858 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1859 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1859 prefix = texts[1]
1860 prefix = texts[1]
1860 return [ color for color in InspectColors.keys()
1861 return [ color for color in InspectColors.keys()
1861 if color.startswith(prefix) ]
1862 if color.startswith(prefix) ]
1862 return []
1863 return []
1863
1864
1864 @context_matcher(identifier="IPCompleter.jedi_matcher")
1865 @context_matcher(identifier="IPCompleter.jedi_matcher")
1865 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1866 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1866 matches = self._jedi_matches(
1867 matches = self._jedi_matches(
1867 cursor_column=context.cursor_position,
1868 cursor_column=context.cursor_position,
1868 cursor_line=context.cursor_line,
1869 cursor_line=context.cursor_line,
1869 text=context.full_text,
1870 text=context.full_text,
1870 )
1871 )
1871 return {
1872 return {
1872 "completions": matches,
1873 "completions": matches,
1873 # static analysis should not suppress other matchers
1874 # static analysis should not suppress other matchers
1874 "suppress": False,
1875 "suppress": False,
1875 }
1876 }
1876
1877
1877 def _jedi_matches(
1878 def _jedi_matches(
1878 self, cursor_column: int, cursor_line: int, text: str
1879 self, cursor_column: int, cursor_line: int, text: str
1879 ) -> Iterable[_JediCompletionLike]:
1880 ) -> Iterable[_JediCompletionLike]:
1880 """
1881 """
1881 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
1882 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
1882 cursor position.
1883 cursor position.
1883
1884
1884 Parameters
1885 Parameters
1885 ----------
1886 ----------
1886 cursor_column : int
1887 cursor_column : int
1887 column position of the cursor in ``text``, 0-indexed.
1888 column position of the cursor in ``text``, 0-indexed.
1888 cursor_line : int
1889 cursor_line : int
1889 line position of the cursor in ``text``, 0-indexed
1890 line position of the cursor in ``text``, 0-indexed
1890 text : str
1891 text : str
1891 text to complete
1892 text to complete
1892
1893
1893 Notes
1894 Notes
1894 -----
1895 -----
1895 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1896 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1896 object containing a string with the Jedi debug information attached.
1897 object containing a string with the Jedi debug information attached.
1897
1898
1898 .. deprecated:: 8.6
1899 .. deprecated:: 8.6
1899 You can use :meth:`_jedi_matcher` instead.
1900 You can use :meth:`_jedi_matcher` instead.
1900 """
1901 """
1901 namespaces = [self.namespace]
1902 namespaces = [self.namespace]
1902 if self.global_namespace is not None:
1903 if self.global_namespace is not None:
1903 namespaces.append(self.global_namespace)
1904 namespaces.append(self.global_namespace)
1904
1905
1905 completion_filter = lambda x:x
1906 completion_filter = lambda x:x
1906 offset = cursor_to_position(text, cursor_line, cursor_column)
1907 offset = cursor_to_position(text, cursor_line, cursor_column)
1907 # filter output if we are completing for object members
1908 # filter output if we are completing for object members
1908 if offset:
1909 if offset:
1909 pre = text[offset-1]
1910 pre = text[offset-1]
1910 if pre == '.':
1911 if pre == '.':
1911 if self.omit__names == 2:
1912 if self.omit__names == 2:
1912 completion_filter = lambda c:not c.name.startswith('_')
1913 completion_filter = lambda c:not c.name.startswith('_')
1913 elif self.omit__names == 1:
1914 elif self.omit__names == 1:
1914 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1915 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1915 elif self.omit__names == 0:
1916 elif self.omit__names == 0:
1916 completion_filter = lambda x:x
1917 completion_filter = lambda x:x
1917 else:
1918 else:
1918 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1919 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1919
1920
1920 interpreter = jedi.Interpreter(text[:offset], namespaces)
1921 interpreter = jedi.Interpreter(text[:offset], namespaces)
1921 try_jedi = True
1922 try_jedi = True
1922
1923
1923 try:
1924 try:
1924 # find the first token in the current tree -- if it is a ' or " then we are in a string
1925 # find the first token in the current tree -- if it is a ' or " then we are in a string
1925 completing_string = False
1926 completing_string = False
1926 try:
1927 try:
1927 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1928 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1928 except StopIteration:
1929 except StopIteration:
1929 pass
1930 pass
1930 else:
1931 else:
1931 # note the value may be ', ", or it may also be ''' or """, or
1932 # note the value may be ', ", or it may also be ''' or """, or
1932 # in some cases, """what/you/typed..., but all of these are
1933 # in some cases, """what/you/typed..., but all of these are
1933 # strings.
1934 # strings.
1934 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1935 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1935
1936
1936 # if we are in a string jedi is likely not the right candidate for
1937 # if we are in a string jedi is likely not the right candidate for
1937 # now. Skip it.
1938 # now. Skip it.
1938 try_jedi = not completing_string
1939 try_jedi = not completing_string
1939 except Exception as e:
1940 except Exception as e:
1940 # many things can go wrong; we are using a private API, just don't crash.
1941 # many things can go wrong; we are using a private API, just don't crash.
1941 if self.debug:
1942 if self.debug:
1942 print("Error detecting if completing a non-finished string :", e, '|')
1943 print("Error detecting if completing a non-finished string :", e, '|')
1943
1944
1944 if not try_jedi:
1945 if not try_jedi:
1945 return []
1946 return []
1946 try:
1947 try:
1947 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1948 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1948 except Exception as e:
1949 except Exception as e:
1949 if self.debug:
1950 if self.debug:
1950 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1951 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1951 else:
1952 else:
1952 return []
1953 return []
1953
1954
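# Rough sketch of what the matcher above delegates to: jedi.Interpreter
# completes source against live namespaces (assumes a recent jedi release is
# installed).  Note the convention visible in the call above: jedi lines are
# 1-based while IPython's cursor_line is 0-based, hence ``cursor_line + 1``.
import jedi

namespace = {'greeting': 'hello world'}
source = 'greeting.up'
interp = jedi.Interpreter(source, [namespace])
completions = interp.complete(line=1, column=len(source))
print([c.name for c in completions])  # expected to include 'upper'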
1954 def python_matches(self, text:str)->List[str]:
1955 def python_matches(self, text:str)->List[str]:
1955 """Match attributes or global python names"""
1956 """Match attributes or global python names"""
1956 if "." in text:
1957 if "." in text:
1957 try:
1958 try:
1958 matches = self.attr_matches(text)
1959 matches = self.attr_matches(text)
1959 if text.endswith('.') and self.omit__names:
1960 if text.endswith('.') and self.omit__names:
1960 if self.omit__names == 1:
1961 if self.omit__names == 1:
1961 # true if txt is _not_ a __ name, false otherwise:
1962 # true if txt is _not_ a __ name, false otherwise:
1962 no__name = (lambda txt:
1963 no__name = (lambda txt:
1963 re.match(r'.*\.__.*?__',txt) is None)
1964 re.match(r'.*\.__.*?__',txt) is None)
1964 else:
1965 else:
1965 # true if txt is _not_ a _ name, false otherwise:
1966 # true if txt is _not_ a _ name, false otherwise:
1966 no__name = (lambda txt:
1967 no__name = (lambda txt:
1967 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1968 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1968 matches = filter(no__name, matches)
1969 matches = filter(no__name, matches)
1969 except NameError:
1970 except NameError:
1970 # catches <undefined attributes>.<tab>
1971 # catches <undefined attributes>.<tab>
1971 matches = []
1972 matches = []
1972 else:
1973 else:
1973 matches = self.global_matches(text)
1974 matches = self.global_matches(text)
1974 return matches
1975 return matches
1975
1976
1976 def _default_arguments_from_docstring(self, doc):
1977 def _default_arguments_from_docstring(self, doc):
1977 """Parse the first line of docstring for call signature.
1978 """Parse the first line of docstring for call signature.
1978
1979
1979 Docstring should be of the form 'min(iterable[, key=func])\n'.
1980 Docstring should be of the form 'min(iterable[, key=func])\n'.
1980 It can also parse cython docstring of the form
1981 It can also parse cython docstring of the form
1981 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1982 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1982 """
1983 """
1983 if doc is None:
1984 if doc is None:
1984 return []
1985 return []
1985
1986
1986 # care only about the first line
1987 # care only about the first line
1987 line = doc.lstrip().splitlines()[0]
1988 line = doc.lstrip().splitlines()[0]
1988
1989
1989 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1990 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1990 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1991 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1991 sig = self.docstring_sig_re.search(line)
1992 sig = self.docstring_sig_re.search(line)
1992 if sig is None:
1993 if sig is None:
1993 return []
1994 return []
1994 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1995 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1995 sig = sig.groups()[0].split(',')
1996 sig = sig.groups()[0].split(',')
1996 ret = []
1997 ret = []
1997 for s in sig:
1998 for s in sig:
1998 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1999 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1999 ret += self.docstring_kwd_re.findall(s)
2000 ret += self.docstring_kwd_re.findall(s)
2000 return ret
2001 return ret
2001
2002
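# Illustrative sketch of the two-step extraction above.  The regexes actually
# used (docstring_sig_re / docstring_kwd_re) are defined elsewhere in this
# class; the stand-ins below follow the commented-out patterns above, with an
# optional '=' group added here as an assumption.
import re

toy_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')       # grab what is inside (...)
toy_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)?')   # grab the argument name

line = 'min(iterable[, key=func])'
sig = toy_sig_re.search(line)
args = []
for chunk in sig.groups()[0].split(','):
    args += toy_kwd_re.findall(chunk)
print(args)  # ['iterable', 'key']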
2002 def _default_arguments(self, obj):
2003 def _default_arguments(self, obj):
2003 """Return the list of default arguments of obj if it is callable,
2004 """Return the list of default arguments of obj if it is callable,
2004 or empty list otherwise."""
2005 or empty list otherwise."""
2005 call_obj = obj
2006 call_obj = obj
2006 ret = []
2007 ret = []
2007 if inspect.isbuiltin(obj):
2008 if inspect.isbuiltin(obj):
2008 pass
2009 pass
2009 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2010 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2010 if inspect.isclass(obj):
2011 if inspect.isclass(obj):
2011 # for cython embedsignature=True the constructor docstring
2012 # for cython embedsignature=True the constructor docstring
2012 # belongs to the object itself, not __init__
2013 # belongs to the object itself, not __init__
2013 ret += self._default_arguments_from_docstring(
2014 ret += self._default_arguments_from_docstring(
2014 getattr(obj, '__doc__', ''))
2015 getattr(obj, '__doc__', ''))
2015 # for classes, check for __init__,__new__
2016 # for classes, check for __init__,__new__
2016 call_obj = (getattr(obj, '__init__', None) or
2017 call_obj = (getattr(obj, '__init__', None) or
2017 getattr(obj, '__new__', None))
2018 getattr(obj, '__new__', None))
2018 # for all others, check if they are __call__able
2019 # for all others, check if they are __call__able
2019 elif hasattr(obj, '__call__'):
2020 elif hasattr(obj, '__call__'):
2020 call_obj = obj.__call__
2021 call_obj = obj.__call__
2021 ret += self._default_arguments_from_docstring(
2022 ret += self._default_arguments_from_docstring(
2022 getattr(call_obj, '__doc__', ''))
2023 getattr(call_obj, '__doc__', ''))
2023
2024
2024 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2025 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2025 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2026 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2026
2027
2027 try:
2028 try:
2028 sig = inspect.signature(obj)
2029 sig = inspect.signature(obj)
2029 ret.extend(k for k, v in sig.parameters.items() if
2030 ret.extend(k for k, v in sig.parameters.items() if
2030 v.kind in _keeps)
2031 v.kind in _keeps)
2031 except ValueError:
2032 except ValueError:
2032 pass
2033 pass
2033
2034
2034 return list(set(ret))
2035 return list(set(ret))
2035
2036
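# Standalone sketch of the introspection half of _default_arguments: collect
# the parameter names that can be passed as keywords.
import inspect

def _toy_keyword_params(func):
    keeps = (inspect.Parameter.KEYWORD_ONLY, inspect.Parameter.POSITIONAL_OR_KEYWORD)
    try:
        sig = inspect.signature(func)
    except ValueError:      # some builtins expose no signature
        return []
    return [name for name, p in sig.parameters.items() if p.kind in keeps]

def example(path, *, encoding='utf-8', errors='strict'):
    pass

assert _toy_keyword_params(example) == ['path', 'encoding', 'errors']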
2036 @context_matcher()
2037 @context_matcher()
2037 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2038 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2038 """Match named parameters (kwargs) of the last open function."""
2039 """Match named parameters (kwargs) of the last open function."""
2039 matches = self.python_func_kw_matches(context.token)
2040 matches = self.python_func_kw_matches(context.token)
2040 return _convert_matcher_v1_result_to_v2(matches, type="param")
2041 return _convert_matcher_v1_result_to_v2(matches, type="param")
2041
2042
2042 def python_func_kw_matches(self, text):
2043 def python_func_kw_matches(self, text):
2043 """Match named parameters (kwargs) of the last open function.
2044 """Match named parameters (kwargs) of the last open function.
2044
2045
2045 .. deprecated:: 8.6
2046 .. deprecated:: 8.6
2046 You can use :meth:`python_func_kw_matcher` instead.
2047 You can use :meth:`python_func_kw_matcher` instead.
2047 """
2048 """
2048
2049
2049 if "." in text: # a parameter cannot be dotted
2050 if "." in text: # a parameter cannot be dotted
2050 return []
2051 return []
2051 try: regexp = self.__funcParamsRegex
2052 try: regexp = self.__funcParamsRegex
2052 except AttributeError:
2053 except AttributeError:
2053 regexp = self.__funcParamsRegex = re.compile(r'''
2054 regexp = self.__funcParamsRegex = re.compile(r'''
2054 '.*?(?<!\\)' | # single quoted strings or
2055 '.*?(?<!\\)' | # single quoted strings or
2055 ".*?(?<!\\)" | # double quoted strings or
2056 ".*?(?<!\\)" | # double quoted strings or
2056 \w+ | # identifier
2057 \w+ | # identifier
2057 \S # other characters
2058 \S # other characters
2058 ''', re.VERBOSE | re.DOTALL)
2059 ''', re.VERBOSE | re.DOTALL)
2059 # 1. find the nearest identifier that comes before an unclosed
2060 # 1. find the nearest identifier that comes before an unclosed
2060 # parenthesis before the cursor
2061 # parenthesis before the cursor
2061 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2062 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2062 tokens = regexp.findall(self.text_until_cursor)
2063 tokens = regexp.findall(self.text_until_cursor)
2063 iterTokens = reversed(tokens); openPar = 0
2064 iterTokens = reversed(tokens); openPar = 0
2064
2065
2065 for token in iterTokens:
2066 for token in iterTokens:
2066 if token == ')':
2067 if token == ')':
2067 openPar -= 1
2068 openPar -= 1
2068 elif token == '(':
2069 elif token == '(':
2069 openPar += 1
2070 openPar += 1
2070 if openPar > 0:
2071 if openPar > 0:
2071 # found the last unclosed parenthesis
2072 # found the last unclosed parenthesis
2072 break
2073 break
2073 else:
2074 else:
2074 return []
2075 return []
2075 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2076 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2076 ids = []
2077 ids = []
2077 isId = re.compile(r'\w+$').match
2078 isId = re.compile(r'\w+$').match
2078
2079
2079 while True:
2080 while True:
2080 try:
2081 try:
2081 ids.append(next(iterTokens))
2082 ids.append(next(iterTokens))
2082 if not isId(ids[-1]):
2083 if not isId(ids[-1]):
2083 ids.pop(); break
2084 ids.pop(); break
2084 if not next(iterTokens) == '.':
2085 if not next(iterTokens) == '.':
2085 break
2086 break
2086 except StopIteration:
2087 except StopIteration:
2087 break
2088 break
2088
2089
2089 # Find all named arguments already assigned to, so as to avoid suggesting
2090 # Find all named arguments already assigned to, so as to avoid suggesting
2090 # them again
2091 # them again
2091 usedNamedArgs = set()
2092 usedNamedArgs = set()
2092 par_level = -1
2093 par_level = -1
2093 for token, next_token in zip(tokens, tokens[1:]):
2094 for token, next_token in zip(tokens, tokens[1:]):
2094 if token == '(':
2095 if token == '(':
2095 par_level += 1
2096 par_level += 1
2096 elif token == ')':
2097 elif token == ')':
2097 par_level -= 1
2098 par_level -= 1
2098
2099
2099 if par_level != 0:
2100 if par_level != 0:
2100 continue
2101 continue
2101
2102
2102 if next_token != '=':
2103 if next_token != '=':
2103 continue
2104 continue
2104
2105
2105 usedNamedArgs.add(token)
2106 usedNamedArgs.add(token)
2106
2107
2107 argMatches = []
2108 argMatches = []
2108 try:
2109 try:
2109 callableObj = '.'.join(ids[::-1])
2110 callableObj = '.'.join(ids[::-1])
2110 namedArgs = self._default_arguments(eval(callableObj,
2111 namedArgs = self._default_arguments(eval(callableObj,
2111 self.namespace))
2112 self.namespace))
2112
2113
2113 # Remove used named arguments from the list, no need to show twice
2114 # Remove used named arguments from the list, no need to show twice
2114 for namedArg in set(namedArgs) - usedNamedArgs:
2115 for namedArg in set(namedArgs) - usedNamedArgs:
2115 if namedArg.startswith(text):
2116 if namedArg.startswith(text):
2116 argMatches.append("%s=" %namedArg)
2117 argMatches.append("%s=" %namedArg)
2117 except:
2118 except:
2118 pass
2119 pass
2119
2120
2120 return argMatches
2121 return argMatches
2121
2122
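# Standalone sketch of the core of the scan above: walk the tokens
# right-to-left and stop at the first parenthesis that is still unclosed at
# the cursor; the identifier just before it names the callable whose keyword
# arguments get completed.
import re

def _toy_open_call(text_until_cursor):
    tokens = re.findall(r'\w+|\S', text_until_cursor)
    depth = 0
    it = reversed(tokens)
    for tok in it:
        if tok == ')':
            depth -= 1
        elif tok == '(':
            depth += 1
            if depth > 0:
                # the token right before the unclosed '(' names the callable
                return next(it, None)
    return None

assert _toy_open_call('foo(1 + bar(x), pa') == 'foo'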
2122 @staticmethod
2123 @staticmethod
2123 def _get_keys(obj: Any) -> List[Any]:
2124 def _get_keys(obj: Any) -> List[Any]:
2124 # Objects can define their own completions by defining an
2125 # Objects can define their own completions by defining an
2125 # _ipython_key_completions_() method.
2126 # _ipython_key_completions_() method.
2126 method = get_real_method(obj, '_ipython_key_completions_')
2127 method = get_real_method(obj, '_ipython_key_completions_')
2127 if method is not None:
2128 if method is not None:
2128 return method()
2129 return method()
2129
2130
2130 # Special case some common in-memory dict-like types
2131 # Special case some common in-memory dict-like types
2131 if isinstance(obj, dict) or\
2132 if isinstance(obj, dict) or\
2132 _safe_isinstance(obj, 'pandas', 'DataFrame'):
2133 _safe_isinstance(obj, 'pandas', 'DataFrame'):
2133 try:
2134 try:
2134 return list(obj.keys())
2135 return list(obj.keys())
2135 except Exception:
2136 except Exception:
2136 return []
2137 return []
2137 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2138 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2138 _safe_isinstance(obj, 'numpy', 'void'):
2139 _safe_isinstance(obj, 'numpy', 'void'):
2139 return obj.dtype.names or []
2140 return obj.dtype.names or []
2140 return []
2141 return []
2141
2142
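# Minimal sketch of the protocol consulted above: any object can advertise its
# own key completions by defining _ipython_key_completions_, so in an IPython
# session ``cfg["<tab>`` would offer these keys.
class ToyConfigStore:
    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        return list(self._data)

cfg = ToyConfigStore({'host': 'localhost', 'port': 8080})
assert cfg._ipython_key_completions_() == ['host', 'port']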
2142 @context_matcher()
2143 @context_matcher()
2143 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2144 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2144 """Match string keys in a dictionary, after e.g. ``foo[``."""
2145 """Match string keys in a dictionary, after e.g. ``foo[``."""
2145 matches = self.dict_key_matches(context.token)
2146 matches = self.dict_key_matches(context.token)
2146 return _convert_matcher_v1_result_to_v2(
2147 return _convert_matcher_v1_result_to_v2(
2147 matches, type="dict key", suppress_if_matches=True
2148 matches, type="dict key", suppress_if_matches=True
2148 )
2149 )
2149
2150
2150 def dict_key_matches(self, text: str) -> List[str]:
2151 def dict_key_matches(self, text: str) -> List[str]:
2151 """Match string keys in a dictionary, after e.g. ``foo[``.
2152 """Match string keys in a dictionary, after e.g. ``foo[``.
2152
2153
2153 .. deprecated:: 8.6
2154 .. deprecated:: 8.6
2154 You can use :meth:`dict_key_matcher` instead.
2155 You can use :meth:`dict_key_matcher` instead.
2155 """
2156 """
2156
2157
2157 if self.__dict_key_regexps is not None:
2158 if self.__dict_key_regexps is not None:
2158 regexps = self.__dict_key_regexps
2159 regexps = self.__dict_key_regexps
2159 else:
2160 else:
2160 dict_key_re_fmt = r'''(?x)
2161 dict_key_re_fmt = r'''(?x)
2161 ( # match dict-referring expression wrt greedy setting
2162 ( # match dict-referring expression wrt greedy setting
2162 %s
2163 %s
2163 )
2164 )
2164 \[ # open bracket
2165 \[ # open bracket
2165 \s* # and optional whitespace
2166 \s* # and optional whitespace
2166 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2167 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2167 ((?:[uUbB]? # string prefix (r not handled)
2168 ((?:[uUbB]? # string prefix (r not handled)
2168 (?:
2169 (?:
2169 '(?:[^']|(?<!\\)\\')*'
2170 '(?:[^']|(?<!\\)\\')*'
2170 |
2171 |
2171 "(?:[^"]|(?<!\\)\\")*"
2172 "(?:[^"]|(?<!\\)\\")*"
2172 )
2173 )
2173 \s*,\s*
2174 \s*,\s*
2174 )*)
2175 )*)
2175 ([uUbB]? # string prefix (r not handled)
2176 ([uUbB]? # string prefix (r not handled)
2176 (?: # unclosed string
2177 (?: # unclosed string
2177 '(?:[^']|(?<!\\)\\')*
2178 '(?:[^']|(?<!\\)\\')*
2178 |
2179 |
2179 "(?:[^"]|(?<!\\)\\")*
2180 "(?:[^"]|(?<!\\)\\")*
2180 )
2181 )
2181 )?
2182 )?
2182 $
2183 $
2183 '''
2184 '''
2184 regexps = self.__dict_key_regexps = {
2185 regexps = self.__dict_key_regexps = {
2185 False: re.compile(dict_key_re_fmt % r'''
2186 False: re.compile(dict_key_re_fmt % r'''
2186 # identifiers separated by .
2187 # identifiers separated by .
2187 (?!\d)\w+
2188 (?!\d)\w+
2188 (?:\.(?!\d)\w+)*
2189 (?:\.(?!\d)\w+)*
2189 '''),
2190 '''),
2190 True: re.compile(dict_key_re_fmt % '''
2191 True: re.compile(dict_key_re_fmt % '''
2191 .+
2192 .+
2192 ''')
2193 ''')
2193 }
2194 }
2194
2195
2195 match = regexps[self.greedy].search(self.text_until_cursor)
2196 match = regexps[self.greedy].search(self.text_until_cursor)
2196
2197
2197 if match is None:
2198 if match is None:
2198 return []
2199 return []
2199
2200
2200 expr, prefix0, prefix = match.groups()
2201 expr, prefix0, prefix = match.groups()
2201 try:
2202 try:
2202 obj = eval(expr, self.namespace)
2203 obj = eval(expr, self.namespace)
2203 except Exception:
2204 except Exception:
2204 try:
2205 try:
2205 obj = eval(expr, self.global_namespace)
2206 obj = eval(expr, self.global_namespace)
2206 except Exception:
2207 except Exception:
2207 return []
2208 return []
2208
2209
2209 keys = self._get_keys(obj)
2210 keys = self._get_keys(obj)
2210 if not keys:
2211 if not keys:
2211 return keys
2212 return keys
2212
2213
2213 extra_prefix = eval(prefix0) if prefix0 != '' else None
2214 extra_prefix = eval(prefix0) if prefix0 != '' else None
2214
2215
2215 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2216 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2216 if not matches:
2217 if not matches:
2217 return matches
2218 return matches
2218
2219
2219 # get the cursor position of
2220 # get the cursor position of
2220 # - the text being completed
2221 # - the text being completed
2221 # - the start of the key text
2222 # - the start of the key text
2222 # - the start of the completion
2223 # - the start of the completion
2223 text_start = len(self.text_until_cursor) - len(text)
2224 text_start = len(self.text_until_cursor) - len(text)
2224 if prefix:
2225 if prefix:
2225 key_start = match.start(3)
2226 key_start = match.start(3)
2226 completion_start = key_start + token_offset
2227 completion_start = key_start + token_offset
2227 else:
2228 else:
2228 key_start = completion_start = match.end()
2229 key_start = completion_start = match.end()
2229
2230
2230 # grab the leading prefix, to make sure all completions start with `text`
2231 # grab the leading prefix, to make sure all completions start with `text`
2231 if text_start > key_start:
2232 if text_start > key_start:
2232 leading = ''
2233 leading = ''
2233 else:
2234 else:
2234 leading = text[text_start:completion_start]
2235 leading = text[text_start:completion_start]
2235
2236
2236 # the index of the `[` character
2237 # the index of the `[` character
2237 bracket_idx = match.end(1)
2238 bracket_idx = match.end(1)
2238
2239
2239 # append closing quote and bracket as appropriate
2240 # append closing quote and bracket as appropriate
2240 # this is *not* appropriate if the opening quote or bracket is outside
2241 # this is *not* appropriate if the opening quote or bracket is outside
2241 # the text given to this method
2242 # the text given to this method
2242 suf = ''
2243 suf = ''
2243 continuation = self.line_buffer[len(self.text_until_cursor):]
2244 continuation = self.line_buffer[len(self.text_until_cursor):]
2244 if key_start > text_start and closing_quote:
2245 if key_start > text_start and closing_quote:
2245 # quotes were opened inside text, maybe close them
2246 # quotes were opened inside text, maybe close them
2246 if continuation.startswith(closing_quote):
2247 if continuation.startswith(closing_quote):
2247 continuation = continuation[len(closing_quote):]
2248 continuation = continuation[len(closing_quote):]
2248 else:
2249 else:
2249 suf += closing_quote
2250 suf += closing_quote
2250 if bracket_idx > text_start:
2251 if bracket_idx > text_start:
2251 # brackets were opened inside text, maybe close them
2252 # brackets were opened inside text, maybe close them
2252 if not continuation.startswith(']'):
2253 if not continuation.startswith(']'):
2253 suf += ']'
2254 suf += ']'
2254
2255
2255 return [leading + k + suf for k in matches]
2256 return [leading + k + suf for k in matches]
2256
2257
2257 @context_matcher()
2258 @context_matcher()
2258 def unicode_name_matcher(self, context: CompletionContext):
2259 def unicode_name_matcher(self, context: CompletionContext):
2259 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2260 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2260 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2261 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2261 return _convert_matcher_v1_result_to_v2(
2262 return _convert_matcher_v1_result_to_v2(
2262 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2263 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2263 )
2264 )
2264
2265
2265 @staticmethod
2266 @staticmethod
2266 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2267 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2267 """Match Latex-like syntax for unicode characters base
2268 """Match Latex-like syntax for unicode characters base
2268 on the name of the character.
2269 on the name of the character.
2269
2270
2270 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2271 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2271
2272
2272 Works only on valid Python 3 identifiers, or on combining characters that
2273 Works only on valid Python 3 identifiers, or on combining characters that
2273 will combine to form a valid identifier.
2274 will combine to form a valid identifier.
2274 """
2275 """
2275 slashpos = text.rfind('\\')
2276 slashpos = text.rfind('\\')
2276 if slashpos > -1:
2277 if slashpos > -1:
2277 s = text[slashpos+1:]
2278 s = text[slashpos+1:]
2278 try :
2279 try :
2279 unic = unicodedata.lookup(s)
2280 unic = unicodedata.lookup(s)
2280 # allow combining chars
2281 # allow combining chars
2281 if ('a'+unic).isidentifier():
2282 if ('a'+unic).isidentifier():
2282 return '\\'+s,[unic]
2283 return '\\'+s,[unic]
2283 except KeyError:
2284 except KeyError:
2284 pass
2285 pass
2285 return '', []
2286 return '', []
2286
2287
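# The name lookup above is plain stdlib unicodedata; the ('a' + char) trick
# accepts combining characters that only form an identifier when attached to
# a preceding letter.
import unicodedata

eta = unicodedata.lookup('GREEK SMALL LETTER ETA')
assert eta == '\N{GREEK SMALL LETTER ETA}'
assert eta.isidentifier()

vec = unicodedata.lookup('COMBINING RIGHT ARROW ABOVE')
assert not vec.isidentifier()          # alone it is not an identifier
assert ('a' + vec).isidentifier()      # but it combines into one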
2287 @context_matcher()
2288 @context_matcher()
2288 def latex_name_matcher(self, context: CompletionContext):
2289 def latex_name_matcher(self, context: CompletionContext):
2289 """Match Latex syntax for unicode characters.
2290 """Match Latex syntax for unicode characters.
2290
2291
2291 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2292 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2292 """
2293 """
2293 fragment, matches = self.latex_matches(context.text_until_cursor)
2294 fragment, matches = self.latex_matches(context.text_until_cursor)
2294 return _convert_matcher_v1_result_to_v2(
2295 return _convert_matcher_v1_result_to_v2(
2295 matches, type="latex", fragment=fragment, suppress_if_matches=True
2296 matches, type="latex", fragment=fragment, suppress_if_matches=True
2296 )
2297 )
2297
2298
2298 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2299 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2299 """Match Latex syntax for unicode characters.
2300 """Match Latex syntax for unicode characters.
2300
2301
2301 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2302 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2302
2303
2303 .. deprecated:: 8.6
2304 .. deprecated:: 8.6
2304 You can use :meth:`latex_name_matcher` instead.
2305 You can use :meth:`latex_name_matcher` instead.
2305 """
2306 """
2306 slashpos = text.rfind('\\')
2307 slashpos = text.rfind('\\')
2307 if slashpos > -1:
2308 if slashpos > -1:
2308 s = text[slashpos:]
2309 s = text[slashpos:]
2309 if s in latex_symbols:
2310 if s in latex_symbols:
2310 # Try to complete a full latex symbol to unicode
2311 # Try to complete a full latex symbol to unicode
2311 # \\alpha -> Ξ±
2312 # \\alpha -> Ξ±
2312 return s, [latex_symbols[s]]
2313 return s, [latex_symbols[s]]
2313 else:
2314 else:
2314 # If a user has partially typed a latex symbol, give them
2315 # If a user has partially typed a latex symbol, give them
2315 # a full list of options \al -> [\aleph, \alpha]
2316 # a full list of options \al -> [\aleph, \alpha]
2316 matches = [k for k in latex_symbols if k.startswith(s)]
2317 matches = [k for k in latex_symbols if k.startswith(s)]
2317 if matches:
2318 if matches:
2318 return s, matches
2319 return s, matches
2319 return '', ()
2320 return '', ()
2320
2321
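# The two behaviours documented above, on a toy symbol table: an exact latex
# name expands to its character, a partial name lists candidates.  (IPython's
# real table lives in latex_symbols; this dict is illustrative only.)
toy_symbols = {'\\alpha': 'α', '\\aleph': 'ℵ', '\\beta': 'β'}

def _toy_latex_matches(fragment):
    if fragment in toy_symbols:
        return fragment, [toy_symbols[fragment]]
    matches = [k for k in toy_symbols if k.startswith(fragment)]
    return (fragment, matches) if matches else ('', ())

assert _toy_latex_matches('\\alpha') == ('\\alpha', ['α'])
assert _toy_latex_matches('\\al') == ('\\al', ['\\alpha', '\\aleph'])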
2321 @context_matcher()
2322 @context_matcher()
2322 def custom_completer_matcher(self, context):
2323 def custom_completer_matcher(self, context):
2323 """Dispatch custom completer.
2324 """Dispatch custom completer.
2324
2325
2325 If a match is found, suppresses all other matchers except for Jedi.
2326 If a match is found, suppresses all other matchers except for Jedi.
2326 """
2327 """
2327 matches = self.dispatch_custom_completer(context.token) or []
2328 matches = self.dispatch_custom_completer(context.token) or []
2328 result = _convert_matcher_v1_result_to_v2(
2329 result = _convert_matcher_v1_result_to_v2(
2329 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2330 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2330 )
2331 )
2331 result["ordered"] = True
2332 result["ordered"] = True
2332 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2333 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2333 return result
2334 return result
2334
2335
2335 def dispatch_custom_completer(self, text):
2336 def dispatch_custom_completer(self, text):
2336 """
2337 """
2337 .. deprecated:: 8.6
2338 .. deprecated:: 8.6
2338 You can use :meth:`custom_completer_matcher` instead.
2339 You can use :meth:`custom_completer_matcher` instead.
2339 """
2340 """
2340 if not self.custom_completers:
2341 if not self.custom_completers:
2341 return
2342 return
2342
2343
2343 line = self.line_buffer
2344 line = self.line_buffer
2344 if not line.strip():
2345 if not line.strip():
2345 return None
2346 return None
2346
2347
2347 # Create a little structure to pass all the relevant information about
2348 # Create a little structure to pass all the relevant information about
2348 # the current completion to any custom completer.
2349 # the current completion to any custom completer.
2349 event = SimpleNamespace()
2350 event = SimpleNamespace()
2350 event.line = line
2351 event.line = line
2351 event.symbol = text
2352 event.symbol = text
2352 cmd = line.split(None,1)[0]
2353 cmd = line.split(None,1)[0]
2353 event.command = cmd
2354 event.command = cmd
2354 event.text_until_cursor = self.text_until_cursor
2355 event.text_until_cursor = self.text_until_cursor
2355
2356
2356 # for foo etc, try also to find completer for %foo
2357 # for foo etc, try also to find completer for %foo
2357 if not cmd.startswith(self.magic_escape):
2358 if not cmd.startswith(self.magic_escape):
2358 try_magic = self.custom_completers.s_matches(
2359 try_magic = self.custom_completers.s_matches(
2359 self.magic_escape + cmd)
2360 self.magic_escape + cmd)
2360 else:
2361 else:
2361 try_magic = []
2362 try_magic = []
2362
2363
2363 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2364 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2364 try_magic,
2365 try_magic,
2365 self.custom_completers.flat_matches(self.text_until_cursor)):
2366 self.custom_completers.flat_matches(self.text_until_cursor)):
2366 try:
2367 try:
2367 res = c(event)
2368 res = c(event)
2368 if res:
2369 if res:
2369 # first, try case sensitive match
2370 # first, try case sensitive match
2370 withcase = [r for r in res if r.startswith(text)]
2371 withcase = [r for r in res if r.startswith(text)]
2371 if withcase:
2372 if withcase:
2372 return withcase
2373 return withcase
2373 # if none, then case insensitive ones are ok too
2374 # if none, then case insensitive ones are ok too
2374 text_low = text.lower()
2375 text_low = text.lower()
2375 return [r for r in res if r.lower().startswith(text_low)]
2376 return [r for r in res if r.lower().startswith(text_low)]
2376 except TryNext:
2377 except TryNext:
2377 pass
2378 pass
2378 except KeyboardInterrupt:
2379 except KeyboardInterrupt:
2379 """
2380 """
2380 If a custom completer takes too long,
2381 If a custom completer takes too long,
2381 let the keyboard interrupt abort and return nothing.
2382 let the keyboard interrupt abort and return nothing.
2382 """
2383 """
2383 break
2384 break
2384
2385
2385 return None
2386 return None
2386
2387
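# Custom completers reach this dispatcher through IPython's
# ``complete_command`` hook.  A minimal sketch of registering one (the hook
# API is assumed from IPython's hooks mechanism and may vary by version; this
# only works inside an interactive IPython session):
def apt_completer(self, event):
    # ``event`` mirrors the SimpleNamespace built above:
    # event.line, event.symbol, event.command, event.text_until_cursor
    return ['install', 'remove', 'update', 'upgrade']

ip = get_ipython()                      # only available inside IPython
ip.set_hook('complete_command', apt_completer, str_key='apt')
# afterwards:  apt in<tab>  ->  install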
2387 def completions(self, text: str, offset: int)->Iterator[Completion]:
2388 def completions(self, text: str, offset: int)->Iterator[Completion]:
2388 """
2389 """
2389 Returns an iterator over the possible completions
2390 Returns an iterator over the possible completions
2390
2391
2391 .. warning::
2392 .. warning::
2392
2393
2393 Unstable
2394 Unstable
2394
2395
2395 This function is unstable; the API may change without warning.
2396 This function is unstable; the API may change without warning.
2396 It will also raise unless used in the proper context manager.
2397 It will also raise unless used in the proper context manager.
2397
2398
2398 Parameters
2399 Parameters
2399 ----------
2400 ----------
2400 text : str
2401 text : str
2401 Full text of the current input, multi line string.
2402 Full text of the current input, multi line string.
2402 offset : int
2403 offset : int
2403 Integer representing the position of the cursor in ``text``. Offset
2404 Integer representing the position of the cursor in ``text``. Offset
2404 is 0-based indexed.
2405 is 0-based indexed.
2405
2406
2406 Yields
2407 Yields
2407 ------
2408 ------
2408 Completion
2409 Completion
2409
2410
2410 Notes
2411 Notes
2411 -----
2412 -----
2412 The cursor in a text can either be seen as being "in between"
2413 The cursor in a text can either be seen as being "in between"
2413 characters or "on" a character, depending on the interface visible to
2414 characters or "on" a character, depending on the interface visible to
2414 the user. For consistency, the cursor being "in between" characters X
2415 the user. For consistency, the cursor being "in between" characters X
2415 and Y is equivalent to the cursor being "on" character Y; that is to say,
2416 and Y is equivalent to the cursor being "on" character Y; that is to say,
2416 the character the cursor is on is considered as being after the cursor.
2417 the character the cursor is on is considered as being after the cursor.
2417
2418
2418 Combining characters may span more than one position in the
2419 Combining characters may span more than one position in the
2419 text.
2420 text.
2420
2421
2421 .. note::
2422 .. note::
2422
2423
2423 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2424 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2424 fake Completion token to distinguish completions returned by Jedi
2425 fake Completion token to distinguish completions returned by Jedi
2425 from the usual IPython completions.
2426 from the usual IPython completions.
2426
2427
2427 .. note::
2428 .. note::
2428
2429
2429 Completions are not completely deduplicated yet. If identical
2430 Completions are not completely deduplicated yet. If identical
2430 completions are coming from different sources this function does not
2431 completions are coming from different sources this function does not
2431 ensure that each completion object will only be present once.
2432 ensure that each completion object will only be present once.
2432 """
2433 """
2433 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2434 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2434 "It may change without warnings. "
2435 "It may change without warnings. "
2435 "Use in corresponding context manager.",
2436 "Use in corresponding context manager.",
2436 category=ProvisionalCompleterWarning, stacklevel=2)
2437 category=ProvisionalCompleterWarning, stacklevel=2)
2437
2438
2438 seen = set()
2439 seen = set()
2439 profiler:Optional[cProfile.Profile]
2440 profiler:Optional[cProfile.Profile]
2440 try:
2441 try:
2441 if self.profile_completions:
2442 if self.profile_completions:
2442 import cProfile
2443 import cProfile
2443 profiler = cProfile.Profile()
2444 profiler = cProfile.Profile()
2444 profiler.enable()
2445 profiler.enable()
2445 else:
2446 else:
2446 profiler = None
2447 profiler = None
2447
2448
2448 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2449 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2449 if c and (c in seen):
2450 if c and (c in seen):
2450 continue
2451 continue
2451 yield c
2452 yield c
2452 seen.add(c)
2453 seen.add(c)
2453 except KeyboardInterrupt:
2454 except KeyboardInterrupt:
2454 """if completions take too long and users send keyboard interrupt,
2455 """if completions take too long and users send keyboard interrupt,
2455 do not crash and return ASAP. """
2456 do not crash and return ASAP. """
2456 pass
2457 pass
2457 finally:
2458 finally:
2458 if profiler is not None:
2459 if profiler is not None:
2459 profiler.disable()
2460 profiler.disable()
2460 ensure_dir_exists(self.profiler_output_dir)
2461 ensure_dir_exists(self.profiler_output_dir)
2461 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2462 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2462 print("Writing profiler output to", output_path)
2463 print("Writing profiler output to", output_path)
2463 profiler.dump_stats(output_path)
2464 profiler.dump_stats(output_path)
2464
2465
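# Minimal sketch of using the provisional API above: it must run inside the
# provisionalcompleter() context manager, and each Completion carries
# start/end offsets for splicing its text into the source.  (Assumes an
# interactive IPython session.)
from IPython.core.completer import provisionalcompleter

ip = get_ipython()
code = 'import pathlib; pathlib.Pa'
with provisionalcompleter():
    completions = list(ip.Completer.completions(code, len(code)))

first = completions[0]
print(code[:first.start] + first.text + code[first.end:])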
2465 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2466 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2466 """
2467 """
2467 Core completion method. Same signature as :any:`completions`, with the
2468 Core completion method. Same signature as :any:`completions`, with the
2468 extra ``_timeout`` parameter (in seconds).
2469 extra ``_timeout`` parameter (in seconds).
2469
2470
2470 Computing jedi's completion ``.type`` can be quite expensive (it is a
2471 Computing jedi's completion ``.type`` can be quite expensive (it is a
2471 lazy property) and can require some warm-up, more warm-up than just
2472 lazy property) and can require some warm-up, more warm-up than just
2472 computing the ``name`` of a completion. The warm-up can be:
2473 computing the ``name`` of a completion. The warm-up can be:
2473
2474
2474 - Long warm-up the first time a module is encountered after
2475 - Long warm-up the first time a module is encountered after
2475 install/update: actually build parse/inference tree.
2476 install/update: actually build parse/inference tree.
2476
2477
2477 - first time the module is encountered in a session: load tree from
2478 - first time the module is encountered in a session: load tree from
2478 disk.
2479 disk.
2479
2480
2480 We don't want to block completions for tens of seconds so we give the
2481 We don't want to block completions for tens of seconds so we give the
2481 completer a "budget" of ``_timeout`` seconds per invocation to compute
2482 completer a "budget" of ``_timeout`` seconds per invocation to compute
2482 completion types; the completions that have not yet been computed will
2483 completion types; the completions that have not yet been computed will
2483 be marked as "unknown" and will have a chance to be computed next round
2484 be marked as "unknown" and will have a chance to be computed next round
2484 as things get cached.
2485 as things get cached.
2485
2486
2486 Keep in mind that Jedi is not the only thing processing the completions,
2487 Keep in mind that Jedi is not the only thing processing the completions,
2487 so keep the timeout short-ish: if we take more than 0.3 seconds we still
2488 so keep the timeout short-ish: if we take more than 0.3 seconds we still
2488 have lots of processing to do.
2489 have lots of processing to do.
2489
2490
2490 """
2491 """
2491 deadline = time.monotonic() + _timeout
2492 deadline = time.monotonic() + _timeout
2492
2493
2493 before = full_text[:offset]
2494 before = full_text[:offset]
2494 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2495 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2495
2496
2496 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2497 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2497
2498
2498 results = self._complete(
2499 results = self._complete(
2499 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2500 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2500 )
2501 )
2501 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2502 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2502 identifier: result
2503 identifier: result
2503 for identifier, result in results.items()
2504 for identifier, result in results.items()
2504 if identifier != jedi_matcher_id
2505 if identifier != jedi_matcher_id
2505 }
2506 }
2506
2507
2507 jedi_matches = (
2508 jedi_matches = (
2508 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2509 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2509 if jedi_matcher_id in results
2510 if jedi_matcher_id in results
2510 else ()
2511 else ()
2511 )
2512 )
2512
2513
2513 iter_jm = iter(jedi_matches)
2514 iter_jm = iter(jedi_matches)
2514 if _timeout:
2515 if _timeout:
2515 for jm in iter_jm:
2516 for jm in iter_jm:
2516 try:
2517 try:
2517 type_ = jm.type
2518 type_ = jm.type
2518 except Exception:
2519 except Exception:
2519 if self.debug:
2520 if self.debug:
2520 print("Error in Jedi getting type of ", jm)
2521 print("Error in Jedi getting type of ", jm)
2521 type_ = None
2522 type_ = None
2522 delta = len(jm.name_with_symbols) - len(jm.complete)
2523 delta = len(jm.name_with_symbols) - len(jm.complete)
2523 if type_ == 'function':
2524 if type_ == 'function':
2524 signature = _make_signature(jm)
2525 signature = _make_signature(jm)
2525 else:
2526 else:
2526 signature = ''
2527 signature = ''
2527 yield Completion(start=offset - delta,
2528 yield Completion(start=offset - delta,
2528 end=offset,
2529 end=offset,
2529 text=jm.name_with_symbols,
2530 text=jm.name_with_symbols,
2530 type=type_,
2531 type=type_,
2531 signature=signature,
2532 signature=signature,
2532 _origin='jedi')
2533 _origin='jedi')
2533
2534
2534 if time.monotonic() > deadline:
2535 if time.monotonic() > deadline:
2535 break
2536 break
2536
2537
2537 for jm in iter_jm:
2538 for jm in iter_jm:
2538 delta = len(jm.name_with_symbols) - len(jm.complete)
2539 delta = len(jm.name_with_symbols) - len(jm.complete)
2539 yield Completion(
2540 yield Completion(
2540 start=offset - delta,
2541 start=offset - delta,
2541 end=offset,
2542 end=offset,
2542 text=jm.name_with_symbols,
2543 text=jm.name_with_symbols,
2543 type=_UNKNOWN_TYPE, # don't compute type for speed
2544 type=_UNKNOWN_TYPE, # don't compute type for speed
2544 _origin="jedi",
2545 _origin="jedi",
2545 signature="",
2546 signature="",
2546 )
2547 )
2547
2548
2548 # TODO:
2549 # TODO:
2549 # Suppress this, right now just for debug.
2550 # Suppress this, right now just for debug.
2550 if jedi_matches and non_jedi_results and self.debug:
2551 if jedi_matches and non_jedi_results and self.debug:
2551 some_start_offset = before.rfind(
2552 some_start_offset = before.rfind(
2552 next(iter(non_jedi_results.values()))["matched_fragment"]
2553 next(iter(non_jedi_results.values()))["matched_fragment"]
2553 )
2554 )
2554 yield Completion(
2555 yield Completion(
2555 start=some_start_offset,
2556 start=some_start_offset,
2556 end=offset,
2557 end=offset,
2557 text="--jedi/ipython--",
2558 text="--jedi/ipython--",
2558 _origin="debug",
2559 _origin="debug",
2559 type="none",
2560 type="none",
2560 signature="",
2561 signature="",
2561 )
2562 )
2562
2563
2563 ordered = []
2564 ordered = []
2564 sortable = []
2565 sortable = []
2565
2566
2566 for origin, result in non_jedi_results.items():
2567 for origin, result in non_jedi_results.items():
2567 matched_text = result["matched_fragment"]
2568 matched_text = result["matched_fragment"]
2568 start_offset = before.rfind(matched_text)
2569 start_offset = before.rfind(matched_text)
2569 is_ordered = result.get("ordered", False)
2570 is_ordered = result.get("ordered", False)
2570 container = ordered if is_ordered else sortable
2571 container = ordered if is_ordered else sortable
2571
2572
2572 # I'm unsure if this is always true, so let's assert and see if it
2573 # I'm unsure if this is always true, so let's assert and see if it
2573 # crashes
2574 # crashes
2574 assert before.endswith(matched_text)
2575 assert before.endswith(matched_text)
2575
2576
2576 for simple_completion in result["completions"]:
2577 for simple_completion in result["completions"]:
2577 completion = Completion(
2578 completion = Completion(
2578 start=start_offset,
2579 start=start_offset,
2579 end=offset,
2580 end=offset,
2580 text=simple_completion.text,
2581 text=simple_completion.text,
2581 _origin=origin,
2582 _origin=origin,
2582 signature="",
2583 signature="",
2583 type=simple_completion.type or _UNKNOWN_TYPE,
2584 type=simple_completion.type or _UNKNOWN_TYPE,
2584 )
2585 )
2585 container.append(completion)
2586 container.append(completion)
2586
2587
2587 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2588 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2588 :MATCHES_LIMIT
2589 :MATCHES_LIMIT
2589 ]
2590 ]
2590
2591
    def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
        """Find completions for the given text and line context.

        Note that both the text and the line_buffer are optional, but at least
        one of them must be given.

        Parameters
        ----------
        text : string, optional
            Text to perform the completion on. If not given, the line buffer
            is split using the instance's CompletionSplitter object.
        line_buffer : string, optional
            If not given, the completer attempts to obtain the current line
            buffer via readline. This keyword allows clients which are
            requesting text completions in non-readline contexts to inform
            the completer of the entire text.
        cursor_pos : int, optional
            Index of the cursor in the full line buffer. Should be provided by
            remote frontends where the kernel has no access to frontend state.

        Returns
        -------
        Tuple of two items:
        text : str
            Text that was actually used in the completion.
        matches : list
            A list of completion matches.

        Notes
        -----
        This API is likely to be deprecated and replaced by
        :any:`IPCompleter.completions` in the future.

        """
        warnings.warn('`Completer.complete` is pending deprecation since '
                      'IPython 6.0 and will be replaced by `Completer.completions`.',
                      PendingDeprecationWarning)
        # Potential todo: fold the third, throw-away argument of _complete
        # into the first two.
        # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed)?
        # TODO: should we deprecate now, or does it stay?

        results = self._complete(
            line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
        )

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        return self._arrange_and_extract(
            results,
            # TODO: can we confirm that excluding Jedi here was a deliberate choice in a previous version?
            skip_matchers={jedi_matcher_id},
            # this API does not support different start/end positions (fragments of token).
            abort_if_offset_changes=True,
        )

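    # Editorial usage sketch for the legacy API above (not part of the original
    # source). In an interactive IPython session, something along these lines is
    # expected to work; the exact matches depend on the namespace:
    #
    #     ip = get_ipython()
    #     text, matches = ip.Completer.complete(line_buffer="import o", cursor_pos=8)
    #     # text == "o"; matches should include e.g. "os" and "operator"
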
    def _arrange_and_extract(
        self,
        results: Dict[str, MatcherResult],
        skip_matchers: Set[str],
        abort_if_offset_changes: bool,
    ):

        sortable = []
        ordered = []
        most_recent_fragment = None
        for identifier, result in results.items():
            if identifier in skip_matchers:
                continue
            if not result["completions"]:
                continue
            if not most_recent_fragment:
                most_recent_fragment = result["matched_fragment"]
            if (
                abort_if_offset_changes
                and result["matched_fragment"] != most_recent_fragment
            ):
                break
            if result.get("ordered", False):
                ordered.extend(result["completions"])
            else:
                sortable.extend(result["completions"])

        if not most_recent_fragment:
            most_recent_fragment = ""  # to satisfy typechecker (and just in case)

        return most_recent_fragment, [
            m.text for m in self._deduplicate(ordered + self._sort(sortable))
        ]

    def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
                  full_text=None) -> _CompleteResult:
        """
        Like `complete` but can also return raw Jedi completions as well as the
        origin of the completion text. This could (and should) be made much
        cleaner but that will be simpler once we drop the old (and stateful)
        :any:`complete` API.

        With the current provisional API, ``cursor_pos`` acts (depending on the
        caller) either as the offset in ``text`` or ``line_buffer``, or as the
        ``column`` when passing multiline strings; it could/should be renamed,
        but that would add extra noise.

        Parameters
        ----------
        cursor_line
            Index of the line the cursor is on. 0-indexed.
        cursor_pos
            Position of the cursor in the current line/line_buffer/text.
            0-indexed.
        line_buffer : str, optional
            The current line the cursor is in; this exists mostly for legacy
            reasons, as readline could only give us the single current line.
            Prefer `full_text`.
        text : str
            The current "token" the cursor is in, also mostly for historical
            reasons, as the completer would trigger only after the current line
            was parsed.
        full_text : str
            Full text of the current cell.

        Returns
        -------
        An ordered dictionary where keys are identifiers of completion
        matchers and values are ``MatcherResult``s.
        """

        # if the cursor position isn't given, the only sane assumption we can
        # make is that it's at the end of the line (the common case)
        if cursor_pos is None:
            cursor_pos = len(line_buffer) if text is None else len(text)

        if self.use_main_ns:
            self.namespace = __main__.__dict__

        # if text is either None or an empty string, rely on the line buffer
        if (not line_buffer) and full_text:
            line_buffer = full_text.split('\n')[cursor_line]
        if not text:  # issue #11508: check line_buffer before calling split_line
            text = (
                self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
            )

        # If no line buffer is given, assume the input text is all there was
        if line_buffer is None:
            line_buffer = text

        # deprecated - do not use `line_buffer` in new code.
        self.line_buffer = line_buffer
        self.text_until_cursor = self.line_buffer[:cursor_pos]

        if not full_text:
            full_text = line_buffer

        context = CompletionContext(
            full_text=full_text,
            cursor_position=cursor_pos,
            cursor_line=cursor_line,
            token=text,
            limit=MATCHES_LIMIT,
        )

        # Start with a clean slate of completions
        results = {}

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        suppressed_matchers = set()

        matchers = {
            _get_matcher_id(matcher): matcher
            for matcher in sorted(
                self.matchers, key=_get_matcher_priority, reverse=True
            )
        }

        for matcher_id, matcher in matchers.items():
            api_version = _get_matcher_api_version(matcher)
            matcher_id = _get_matcher_id(matcher)

            if matcher_id in self.disable_matchers:
                continue

            if matcher_id in results:
                warnings.warn(f"Duplicate matcher ID: {matcher_id}.")

            if matcher_id in suppressed_matchers:
                continue

            try:
                if api_version == 1:
                    result = _convert_matcher_v1_result_to_v2(
                        matcher(text), type=_UNKNOWN_TYPE
                    )
                elif api_version == 2:
                    # Note: typing.cast takes the target type first, then the value.
                    result = cast(MatcherAPIv2, matcher)(context)
                else:
                    raise ValueError(f"Unsupported API version {api_version}")
            except:
                # Show the ugly traceback if the matcher causes an
                # exception, but do NOT crash the kernel!
                sys.excepthook(*sys.exc_info())
                continue

            # set default value for matched fragment if suffix was not selected.
            result["matched_fragment"] = result.get("matched_fragment", context.token)

            if not suppressed_matchers:
                suppression_recommended = result.get("suppress", False)

                suppression_config = (
                    self.suppress_competing_matchers.get(matcher_id, None)
                    if isinstance(self.suppress_competing_matchers, dict)
                    else self.suppress_competing_matchers
                )
                should_suppress = (
                    (suppression_config is True)
                    or (suppression_recommended and (suppression_config is not False))
                ) and len(result["completions"])

                if should_suppress:
                    suppression_exceptions = result.get("do_not_suppress", set())
                    try:
                        to_suppress = set(suppression_recommended)
                    except TypeError:
                        to_suppress = set(matchers)
                    suppressed_matchers = to_suppress - suppression_exceptions

                    new_results = {}
                    for previous_matcher_id, previous_result in results.items():
                        if previous_matcher_id not in suppressed_matchers:
                            new_results[previous_matcher_id] = previous_result
                    results = new_results

            results[matcher_id] = result

        _, matches = self._arrange_and_extract(
            results,
            # TODO: Jedi completions are not included in the legacy stateful API; was
            # this deliberate or an omission? If it was an omission, we can remove the
            # filtering step; otherwise remove this comment.
            skip_matchers={jedi_matcher_id},
            abort_if_offset_changes=False,
        )

        # populate legacy stateful API
        self.matches = matches

        return results

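    # Editorial sketch (not part of the original source): the mapping returned by
    # ``_complete`` above contains one ``MatcherResult``-style dict per matcher
    # that ran and was not suppressed, roughly of the form shown below. The
    # matcher identifier and the concrete completion values are illustrative only:
    #
    #     {
    #         "IPCompleter.python_matches": {
    #             "completions": [SimpleCompletion(text="print", type=_UNKNOWN_TYPE), ...],
    #             "matched_fragment": "pri",
    #             "suppress": False,
    #         },
    #         ...
    #     }
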
    @staticmethod
    def _deduplicate(
        matches: Sequence[SimpleCompletion],
    ) -> Iterable[SimpleCompletion]:
        filtered_matches = {}
        for match in matches:
            text = match.text
            if (
                text not in filtered_matches
                or filtered_matches[text].type == _UNKNOWN_TYPE
            ):
                filtered_matches[text] = match

        return filtered_matches.values()

    @staticmethod
    def _sort(matches: Sequence[SimpleCompletion]):
        return sorted(matches, key=lambda x: completions_sorting_key(x.text))

    @context_matcher()
    def fwd_unicode_matcher(self, context: CompletionContext):
        """Same as :any:`fwd_unicode_match`, but adapted to the new Matcher API."""
        # TODO: use `context.limit` to terminate early once we have matched the
        # maximum number of completions that will be used downstream; this could be
        # added as an optional parameter to `fwd_unicode_match(text: str, limit: int = None)`,
        # or we could re-implement the matching here.
        fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

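    # One possible shape for the limit-aware variant suggested in the TODO above
    # (editorial sketch, not part of the original source): ``limit`` would simply
    # cap the candidate list before returning, e.g.
    #
    #     def fwd_unicode_match(self, text: str, limit: Optional[int] = None):
    #         fragment, candidates = ...  # same matching logic as in the method below
    #         if limit is not None:
    #             candidates = candidates[:limit]
    #         return fragment, candidates
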
    def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
        """
        Forward match a string starting with a backslash against a list of
        potential Unicode completions.

        Will compute the list of Unicode character names on first call and cache it.

        .. deprecated:: 8.6
            You can use :meth:`fwd_unicode_matcher` instead.

        Returns
        -------
        A tuple with:
            - the matched text (empty if there are no matches)
            - a list of potential completions (an empty tuple if there are none)
        """
        # TODO: self.unicode_names is a list of ~100k elements that we traverse on
        # each call. We could do a faster match using a Trie.

        # Using pygtrie, the following seems to work:

        # s = PrefixSet()

        # for c in range(0, 0x10FFFF + 1):
        #     try:
        #         s.add(unicodedata.name(chr(c)))
        #     except ValueError:
        #         pass
        # [''.join(k) for k in s.iter(prefix)]

        # But this needs to be timed and adds an extra dependency.

        slashpos = text.rfind('\\')
        # if the text contains a backslash
        if slashpos > -1:
            # PERF: It's important that we don't access self._unicode_names
            # until we're inside this if-block. _unicode_names is lazily
            # initialized, and it takes a user-noticeable amount of time to
            # initialize it, so we don't want to initialize it unless we're
            # actually going to use it.
            s = text[slashpos + 1 :]
            sup = s.upper()
            candidates = [x for x in self.unicode_names if x.startswith(sup)]
            if candidates:
                return s, candidates
            candidates = [x for x in self.unicode_names if sup in x]
            if candidates:
                return s, candidates
            splitsup = sup.split(" ")
            candidates = [
                x for x in self.unicode_names if all(u in x for u in splitsup)
            ]
            if candidates:
                return s, candidates

            return "", ()

        # if the text contains no backslash
        else:
            return '', ()

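    # Editorial usage sketch for the method above (not part of the original
    # source). Given an ``IPCompleter`` instance, forward unicode matching
    # behaves roughly like this:
    #
    #     fragment, names = completer.fwd_unicode_match("\\GREEK SMALL LETTER ALP")
    #     # fragment == "GREEK SMALL LETTER ALP"
    #     # "GREEK SMALL LETTER ALPHA" in names
    #
    #     completer.fwd_unicode_match("no backslash here")  # -> ('', ())
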
    @property
    def unicode_names(self) -> List[str]:
        """List of names of Unicode code points that can be completed.

        The list is lazily initialized on first access.
        """
        if self._unicode_names is None:
            # Compute names only for the pre-computed _UNICODE_RANGES, which is
            # much faster than scanning the whole code space.
            self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)

        return self._unicode_names


def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:
                pass
    return names

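# Editorial example (not part of the original source): ``_unicode_name_compute``
# maps code-point ranges to Unicode character names, skipping unassigned code
# points. For instance:
#
#     _unicode_name_compute([(0x3B1, 0x3B3)])
#     # -> ['GREEK SMALL LETTER ALPHA', 'GREEK SMALL LETTER BETA']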