quote jedi in typings
Nicholas Bollweg
@@ -1,2977 +1,2977 @@
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now support a wide variety of completion mechanism both available
7 This module now support a wide variety of completion mechanism both available
8 for normal classic Python code, as well as completer for IPython specific
8 for normal classic Python code, as well as completer for IPython specific
9 Syntax like magics.
9 Syntax like magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends not only can complete your code, but can help
14 IPython and compatible frontends not only can complete your code, but can help
15 you to input a wide range of characters. In particular we allow you to insert
15 you to input a wide range of characters. In particular we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name, or unicode long description. To do so type a backslash follow by the
22 name, or unicode long description. To do so type a backslash follow by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 Ξ±
31 Ξ±
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\GREEK SMALL LETTER ALPHA<tab>
38 \\GREEK SMALL LETTER ALPHA<tab>
39 Ξ±
39 Ξ±
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrow or
42 Only valid Python identifiers will complete. Combining characters (like arrow or
43 dots) are also available, unlike latex they need to be put after the their
43 dots) are also available, unlike latex they need to be put after the their
44 counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
44 counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometime challenging to know how to type a character, if you are using
51 It is sometime challenging to know how to type a character, if you are using
52 IPython, or any compatible frontend you can prepend backslash to the character
52 IPython, or any compatible frontend you can prepend backslash to the character
53 and press ``<tab>`` to expand it to its latex form.
53 and press ``<tab>`` to expand it to its latex form.
54
54
55 .. code::
55 .. code::
56
56
57 \\Ξ±<tab>
57 \\Ξ±<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 ``Completer.backslash_combining_completions`` option to ``False``.
62 ``Completer.backslash_combining_completions`` option to ``False``.
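As a minimal sketch (assuming the standard ``ipython_config.py`` configuration file, where
``get_config()`` is provided by IPython), this could be done with:

.. code:: python

    c = get_config()
    c.Completer.backslash_combining_completions = False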
63
64
65 Experimental
66 ============
67
68 Starting with IPython 6.0, this module can make use of the Jedi library to
69 generate completions both using static analysis of the code and by dynamically
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
71 library for Python. The APIs attached to this new mechanism are unstable and will
72 raise unless used in a :any:`provisionalcompleter` context manager.
73
74 You will find that the following are experimental:
75
76 - :any:`provisionalcompleter`
77 - :any:`IPCompleter.completions`
78 - :any:`Completion`
79 - :any:`rectify_completions`
80
81 .. note::
82
83     better name for :any:`rectify_completions` ?
84
85 We welcome any feedback on these new APIs, and we also encourage you to try this
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
87 to have extra logging information if :any:`jedi` is crashing, or if the current
88 IPython completer's pending deprecations are returning results not yet handled
89 by :any:`jedi`.
90
91 Using Jedi for tab completion allows snippets like the following to work without
92 having to execute any code:
93
94 >>> myvar = ['hello', 42]
95 ... myvar[1].bi<tab>
96
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
98 executing any code, unlike the previously available ``IPCompleter.greedy``
99 option.
100
101 Be sure to update :any:`jedi` to the latest stable version, or try the
102 current development version, to get better completions.
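As a rough sketch of how these provisional APIs fit together in an interactive session
(``get_ipython()`` returns the running shell; the offset ``11`` is simply the cursor position
at the end of the illustrative text):

.. code-block:: python

    from IPython.core.completer import provisionalcompleter

    ip = get_ipython()
    with provisionalcompleter():
        # ``completions()`` is the experimental API listed above; it yields
        # ``Completion`` objects without executing the user's code.
        completions = list(ip.Completer.completions("myvar[1].bi", 11))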
103
104 Matchers
105 ========
106
107 All completion routines are implemented using a unified *Matchers* API.
108 The matchers API is provisional and subject to change without notice.
109
110 The built-in matchers include:
111
112 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
113 - :any:`IPCompleter.magic_matcher`: completions for magics,
114 - :any:`IPCompleter.unicode_name_matcher`,
115   :any:`IPCompleter.fwd_unicode_matcher`
116   and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
117 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
118 - :any:`IPCompleter.file_matcher`: paths to files and directories,
119 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
120 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
121 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
122 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
123   implementation in :any:`InteractiveShell` which uses the IPython hooks system
124   (`complete_command`) with string dispatch (including regular expressions).
125   Unlike other matchers, ``custom_completer_matcher`` will not suppress
126   Jedi results, to match behaviour in earlier IPython versions.
127
128 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
129
130 Matcher API
131 -----------
132
133 Simplifying some details, the ``Matcher`` interface can be described as
134
135 .. code-block::
136
137     MatcherAPIv1 = Callable[[str], list[str]]
138     MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
139
140     Matcher = MatcherAPIv1 | MatcherAPIv2
141
142 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
143 and remains supported as the simplest way of generating completions. This is also
144 currently the only API supported by the IPython hooks system `complete_command`.
145
146 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
147 More precisely, the API allows ``matcher_api_version`` to be omitted for v1 Matchers,
148 and requires a literal ``2`` for v2 Matchers.
149
150 Once the API stabilises, future versions may relax the requirement for specifying
151 ``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore,
152 please do not rely on the presence of ``matcher_api_version`` for any purpose.
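A minimal sketch of a custom Matcher API v2 implementation follows; the matcher name and
the completed words are purely illustrative, and registration uses the ``custom_matchers``
list described above:

.. code-block:: python

    from IPython.core.completer import (
        CompletionContext,
        SimpleCompletion,
        SimpleMatcherResult,
        context_matcher,
    )

    @context_matcher()
    def color_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # Hypothetical matcher completing a fixed set of words.
        token = context.token
        matches = [c for c in ("red", "green", "blue") if c.startswith(token)]
        return {
            "completions": [SimpleCompletion(text=m, type="color") for m in matches],
            # only the matched token is replaced; other matchers still contribute
            "matched_fragment": token,
        }

    get_ipython().Completer.custom_matchers.append(color_matcher)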
153
154 Suppression of competing matchers
155 ---------------------------------
156
157 By default, results from all matchers are combined in the order determined by
158 their priority. Matchers can request to suppress results from subsequent
159 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
160
161 When multiple matchers simultaneously request suppression, the results from
162 the matcher with the higher priority will be returned.
163
164 Sometimes it is desirable to suppress most but not all other matchers;
165 this can be achieved by adding a list of identifiers of matchers which
166 should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.
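Reusing the names from the sketch above, a matcher that wants to hide everything except the
built-in file matcher could return something along these lines (identifiers default to the
``IPCompleter.<method>`` qualified names):

.. code-block:: python

    result: SimpleMatcherResult = {
        "completions": [SimpleCompletion(text="example")],
        "suppress": True,  # hide results from all other matchers...
        "do_not_suppress": {"IPCompleter.file_matcher"},  # ...except this one
    }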
167
168 The suppression behaviour is user-configurable via
169 :any:`IPCompleter.suppress_competing_matchers`.
170 """
171
171
172
172
173 # Copyright (c) IPython Development Team.
173 # Copyright (c) IPython Development Team.
174 # Distributed under the terms of the Modified BSD License.
174 # Distributed under the terms of the Modified BSD License.
175 #
175 #
176 # Some of this code originated from rlcompleter in the Python standard library
176 # Some of this code originated from rlcompleter in the Python standard library
177 # Copyright (C) 2001 Python Software Foundation, www.python.org
177 # Copyright (C) 2001 Python Software Foundation, www.python.org
178
178
179 from __future__ import annotations
179 from __future__ import annotations
180 import builtins as builtin_mod
180 import builtins as builtin_mod
181 import glob
181 import glob
182 import inspect
182 import inspect
183 import itertools
183 import itertools
184 import keyword
184 import keyword
185 import os
185 import os
186 import re
186 import re
187 import string
187 import string
188 import sys
188 import sys
189 import time
189 import time
190 import unicodedata
190 import unicodedata
191 import uuid
191 import uuid
192 import warnings
192 import warnings
193 from contextlib import contextmanager
193 from contextlib import contextmanager
194 from dataclasses import dataclass
194 from dataclasses import dataclass
195 from functools import cached_property, partial
195 from functools import cached_property, partial
196 from importlib import import_module
196 from importlib import import_module
197 from types import SimpleNamespace
197 from types import SimpleNamespace
198 from typing import (
198 from typing import (
199 Iterable,
199 Iterable,
200 Iterator,
200 Iterator,
201 List,
201 List,
202 Tuple,
202 Tuple,
203 Union,
203 Union,
204 Any,
204 Any,
205 Sequence,
205 Sequence,
206 Dict,
206 Dict,
207 NamedTuple,
207 NamedTuple,
208 Pattern,
208 Pattern,
209 Optional,
209 Optional,
210 TYPE_CHECKING,
210 TYPE_CHECKING,
211 Set,
211 Set,
212 Literal,
212 Literal,
213 )
213 )
214
214
215 from IPython.core.error import TryNext
215 from IPython.core.error import TryNext
216 from IPython.core.inputtransformer2 import ESC_MAGIC
216 from IPython.core.inputtransformer2 import ESC_MAGIC
217 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
217 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
218 from IPython.core.oinspect import InspectColors
218 from IPython.core.oinspect import InspectColors
219 from IPython.testing.skipdoctest import skip_doctest
219 from IPython.testing.skipdoctest import skip_doctest
220 from IPython.utils import generics
220 from IPython.utils import generics
221 from IPython.utils.decorators import sphinx_options
221 from IPython.utils.decorators import sphinx_options
222 from IPython.utils.dir2 import dir2, get_real_method
222 from IPython.utils.dir2 import dir2, get_real_method
223 from IPython.utils.docs import GENERATING_DOCUMENTATION
223 from IPython.utils.docs import GENERATING_DOCUMENTATION
224 from IPython.utils.path import ensure_dir_exists
224 from IPython.utils.path import ensure_dir_exists
225 from IPython.utils.process import arg_split
225 from IPython.utils.process import arg_split
226 from traitlets import (
226 from traitlets import (
227 Bool,
227 Bool,
228 Enum,
228 Enum,
229 Int,
229 Int,
230 List as ListTrait,
230 List as ListTrait,
231 Unicode,
231 Unicode,
232 Dict as DictTrait,
232 Dict as DictTrait,
233 Union as UnionTrait,
233 Union as UnionTrait,
234 default,
234 default,
235 observe,
235 observe,
236 )
236 )
237 from traitlets.config.configurable import Configurable
237 from traitlets.config.configurable import Configurable
238
238
239 import __main__
239 import __main__
240
240
241 # skip module doctests
242 __skip_doctest__ = True
242 __skip_doctest__ = True
243
243
244
244
245 try:
245 try:
246 import jedi
246 import jedi
247 jedi.settings.case_insensitive_completion = False
247 jedi.settings.case_insensitive_completion = False
248 import jedi.api.helpers
248 import jedi.api.helpers
249 import jedi.api.classes
249 import jedi.api.classes
250 JEDI_INSTALLED = True
250 JEDI_INSTALLED = True
251 except ImportError:
251 except ImportError:
252 JEDI_INSTALLED = False
252 JEDI_INSTALLED = False
253
253
254
254
255 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
255 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
256 from typing import cast
256 from typing import cast
257 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias
257 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias
258 else:
258 else:
259
259
260 def cast(obj, type_):
260 def cast(obj, type_):
261 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
261 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
262 return obj
262 return obj
263
263
264 # do not require on runtime
264 # do not require on runtime
265 NotRequired = Tuple # requires Python >=3.11
265 NotRequired = Tuple # requires Python >=3.11
266 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
266 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
267 Protocol = object # requires Python >=3.8
267 Protocol = object # requires Python >=3.8
268 TypeAlias = Any # requires Python >=3.10
268 TypeAlias = Any # requires Python >=3.10
269 if GENERATING_DOCUMENTATION:
269 if GENERATING_DOCUMENTATION:
270 from typing import TypedDict
270 from typing import TypedDict
271
271
272 # -----------------------------------------------------------------------------
272 # -----------------------------------------------------------------------------
273 # Globals
273 # Globals
274 #-----------------------------------------------------------------------------
274 #-----------------------------------------------------------------------------
275
275
276 # Ranges where we have most of the valid unicode names. We could be more fine-
277 # grained, but is it worth it for performance? While unicode has characters in the
278 # range 0, 0x110000, we seem to have names for only about 10% of those (131808 as I
279 # write this). With the ranges below we cover them all, with a density of ~67%;
280 # the next biggest gap we could consider only adds about 1% density and there are 600
281 # gaps that would need hard coding.
282 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
282 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
283
283
284 # Public API
284 # Public API
285 __all__ = ["Completer", "IPCompleter"]
285 __all__ = ["Completer", "IPCompleter"]
286
286
287 if sys.platform == 'win32':
287 if sys.platform == 'win32':
288 PROTECTABLES = ' '
288 PROTECTABLES = ' '
289 else:
289 else:
290 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
290 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
291
291
292 # Protect against returning an enormous number of completions which the frontend
292 # Protect against returning an enormous number of completions which the frontend
293 # may have trouble processing.
293 # may have trouble processing.
294 MATCHES_LIMIT = 500
294 MATCHES_LIMIT = 500
295
295
296 # Completion type reported when no type can be inferred.
296 # Completion type reported when no type can be inferred.
297 _UNKNOWN_TYPE = "<unknown>"
297 _UNKNOWN_TYPE = "<unknown>"
298
298
299 class ProvisionalCompleterWarning(FutureWarning):
299 class ProvisionalCompleterWarning(FutureWarning):
300 """
300 """
301 Exception raised by an experimental feature in this module.
302
302
303 Wrap code in :any:`provisionalcompleter` context manager if you
303 Wrap code in :any:`provisionalcompleter` context manager if you
304 are certain you want to use an unstable feature.
304 are certain you want to use an unstable feature.
305 """
305 """
306 pass
306 pass
307
307
308 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
308 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
309
309
310
310
311 @skip_doctest
311 @skip_doctest
312 @contextmanager
312 @contextmanager
313 def provisionalcompleter(action='ignore'):
313 def provisionalcompleter(action='ignore'):
314 """
314 """
315 This context manager has to be used in any place where unstable completer
315 This context manager has to be used in any place where unstable completer
316 behavior and API may be called.
316 behavior and API may be called.
317
317
318 >>> with provisionalcompleter():
318 >>> with provisionalcompleter():
319 ... completer.do_experimental_things() # works
319 ... completer.do_experimental_things() # works
320
320
321 >>> completer.do_experimental_things() # raises.
321 >>> completer.do_experimental_things() # raises.
322
322
323 .. note::
323 .. note::
324
324
325 Unstable
325 Unstable
326
326
327 By using this context manager you agree that the API in use may change
327 By using this context manager you agree that the API in use may change
328 without warning, and that you won't complain if they do so.
328 without warning, and that you won't complain if they do so.
329
329
330 You also understand that, if the API is not to your liking, you should report
330 You also understand that, if the API is not to your liking, you should report
331 a bug to explain your use case upstream.
331 a bug to explain your use case upstream.
332
332
333 We'll be happy to get your feedback, feature requests, and improvements on
333 We'll be happy to get your feedback, feature requests, and improvements on
334 any of the unstable APIs!
334 any of the unstable APIs!
335 """
335 """
336 with warnings.catch_warnings():
336 with warnings.catch_warnings():
337 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
337 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
338 yield
338 yield
339
339
340
340
341 def has_open_quotes(s):
341 def has_open_quotes(s):
342 """Return whether a string has open quotes.
342 """Return whether a string has open quotes.
343
343
344 This simply counts whether the number of quote characters of either type in
344 This simply counts whether the number of quote characters of either type in
345 the string is odd.
345 the string is odd.
346
346
347 Returns
347 Returns
348 -------
348 -------
349 If there is an open quote, the quote character is returned. Else, return
349 If there is an open quote, the quote character is returned. Else, return
350 False.
350 False.
351 """
351 """
352 # We check " first, then ', so complex cases with nested quotes will get
352 # We check " first, then ', so complex cases with nested quotes will get
353 # the " to take precedence.
353 # the " to take precedence.
354 if s.count('"') % 2:
354 if s.count('"') % 2:
355 return '"'
355 return '"'
356 elif s.count("'") % 2:
356 elif s.count("'") % 2:
357 return "'"
357 return "'"
358 else:
358 else:
359 return False
359 return False
360
360
361
361
362 def protect_filename(s, protectables=PROTECTABLES):
362 def protect_filename(s, protectables=PROTECTABLES):
363 """Escape a string to protect certain characters."""
363 """Escape a string to protect certain characters."""
364 if set(s) & set(protectables):
364 if set(s) & set(protectables):
365 if sys.platform == "win32":
365 if sys.platform == "win32":
366 return '"' + s + '"'
366 return '"' + s + '"'
367 else:
367 else:
368 return "".join(("\\" + c if c in protectables else c) for c in s)
368 return "".join(("\\" + c if c in protectables else c) for c in s)
369 else:
369 else:
370 return s
370 return s
371
371
372
372
373 def expand_user(path:str) -> Tuple[str, bool, str]:
373 def expand_user(path:str) -> Tuple[str, bool, str]:
374 """Expand ``~``-style usernames in strings.
374 """Expand ``~``-style usernames in strings.
375
375
376 This is similar to :func:`os.path.expanduser`, but it computes and returns
376 This is similar to :func:`os.path.expanduser`, but it computes and returns
377 extra information that will be useful if the input was being used in
377 extra information that will be useful if the input was being used in
378 computing completions, and you wish to return the completions with the
378 computing completions, and you wish to return the completions with the
379 original '~' instead of its expanded value.
379 original '~' instead of its expanded value.
380
380
381 Parameters
381 Parameters
382 ----------
382 ----------
383 path : str
383 path : str
384 String to be expanded. If no ~ is present, the output is the same as the
384 String to be expanded. If no ~ is present, the output is the same as the
385 input.
385 input.
386
386
387 Returns
387 Returns
388 -------
388 -------
389 newpath : str
389 newpath : str
390 Result of ~ expansion in the input path.
390 Result of ~ expansion in the input path.
391 tilde_expand : bool
391 tilde_expand : bool
392 Whether any expansion was performed or not.
392 Whether any expansion was performed or not.
393 tilde_val : str
393 tilde_val : str
394 The value that ~ was replaced with.
394 The value that ~ was replaced with.
395 """
395 """
396 # Default values
396 # Default values
397 tilde_expand = False
397 tilde_expand = False
398 tilde_val = ''
398 tilde_val = ''
399 newpath = path
399 newpath = path
400
400
401 if path.startswith('~'):
401 if path.startswith('~'):
402 tilde_expand = True
402 tilde_expand = True
403 rest = len(path)-1
403 rest = len(path)-1
404 newpath = os.path.expanduser(path)
404 newpath = os.path.expanduser(path)
405 if rest:
405 if rest:
406 tilde_val = newpath[:-rest]
406 tilde_val = newpath[:-rest]
407 else:
407 else:
408 tilde_val = newpath
408 tilde_val = newpath
409
409
410 return newpath, tilde_expand, tilde_val
410 return newpath, tilde_expand, tilde_val
411
411
412
412
413 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
413 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
414 """Does the opposite of expand_user, with its outputs.
414 """Does the opposite of expand_user, with its outputs.
415 """
415 """
416 if tilde_expand:
416 if tilde_expand:
417 return path.replace(tilde_val, '~')
417 return path.replace(tilde_val, '~')
418 else:
418 else:
419 return path
419 return path
420
420
421
421
422 def completions_sorting_key(word):
422 def completions_sorting_key(word):
423 """key for sorting completions
423 """key for sorting completions
424
424
425 This does several things:
425 This does several things:
426
426
427 - Demote any completions starting with underscores to the end
427 - Demote any completions starting with underscores to the end
428 - Insert any %magic and %%cellmagic completions in the alphabetical order
428 - Insert any %magic and %%cellmagic completions in the alphabetical order
429 by their name
429 by their name
430 """
430 """
431 prio1, prio2 = 0, 0
431 prio1, prio2 = 0, 0
432
432
433 if word.startswith('__'):
433 if word.startswith('__'):
434 prio1 = 2
434 prio1 = 2
435 elif word.startswith('_'):
435 elif word.startswith('_'):
436 prio1 = 1
436 prio1 = 1
437
437
438 if word.endswith('='):
438 if word.endswith('='):
439 prio1 = -1
439 prio1 = -1
440
440
441 if word.startswith('%%'):
441 if word.startswith('%%'):
442 # If there's another % in there, this is something else, so leave it alone
442 # If there's another % in there, this is something else, so leave it alone
443 if not "%" in word[2:]:
443 if not "%" in word[2:]:
444 word = word[2:]
444 word = word[2:]
445 prio2 = 2
445 prio2 = 2
446 elif word.startswith('%'):
446 elif word.startswith('%'):
447 if not "%" in word[1:]:
447 if not "%" in word[1:]:
448 word = word[1:]
448 word = word[1:]
449 prio2 = 1
449 prio2 = 1
450
450
451 return prio1, word, prio2
451 return prio1, word, prio2
452
452
453
453
454 class _FakeJediCompletion:
454 class _FakeJediCompletion:
455 """
455 """
456 This is a workaround to communicate to the UI that Jedi has crashed and to
456 This is a workaround to communicate to the UI that Jedi has crashed and to
457 report a bug. Will be used only if :any:`IPCompleter.debug` is set to True.
458
458
459 Added in IPython 6.0 so should likely be removed for 7.0
459 Added in IPython 6.0 so should likely be removed for 7.0
460
460
461 """
461 """
462
462
463 def __init__(self, name):
463 def __init__(self, name):
464
464
465 self.name = name
465 self.name = name
466 self.complete = name
466 self.complete = name
467 self.type = 'crashed'
467 self.type = 'crashed'
468 self.name_with_symbols = name
468 self.name_with_symbols = name
469 self.signature = ''
469 self.signature = ''
470 self._origin = 'fake'
470 self._origin = 'fake'
471
471
472 def __repr__(self):
472 def __repr__(self):
473 return '<Fake completion object jedi has crashed>'
473 return '<Fake completion object jedi has crashed>'
474
474
475
475
476 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
476 _JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
477
477
478
478
479 class Completion:
479 class Completion:
480 """
480 """
481 Completion object used and returned by IPython completers.
481 Completion object used and returned by IPython completers.
482
482
483 .. warning::
483 .. warning::
484
484
485 Unstable
485 Unstable
486
486
487 This function is unstable; the API may change without warning.
488 It will also raise unless used in the proper context manager.
489
489
490 This acts as a middle-ground :any:`Completion` object between the
491 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
492 object. While Jedi needs a lot of information about the evaluator and how the
493 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
494 need user-facing information:
495
496 - Which range should be replaced by what.
497 - Some metadata (like the completion type), or meta information to display to
498   the user.
499
499
500 For debugging purpose we can also store the origin of the completion (``jedi``,
500 For debugging purpose we can also store the origin of the completion (``jedi``,
501 ``IPython.python_matches``, ``IPython.magics_matches``...).
501 ``IPython.python_matches``, ``IPython.magics_matches``...).
502 """
502 """
503
503
504 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
504 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
505
505
506 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
506 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
507 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
507 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
508 "It may change without warnings. "
508 "It may change without warnings. "
509 "Use in corresponding context manager.",
509 "Use in corresponding context manager.",
510 category=ProvisionalCompleterWarning, stacklevel=2)
510 category=ProvisionalCompleterWarning, stacklevel=2)
511
511
512 self.start = start
512 self.start = start
513 self.end = end
513 self.end = end
514 self.text = text
514 self.text = text
515 self.type = type
515 self.type = type
516 self.signature = signature
516 self.signature = signature
517 self._origin = _origin
517 self._origin = _origin
518
518
519 def __repr__(self):
519 def __repr__(self):
520 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
520 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
521 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
521 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
522
522
523 def __eq__(self, other)->Bool:
523 def __eq__(self, other)->Bool:
524 """
524 """
525 Equality and hash do not take the type into account (as some completers may not be
526 able to infer the type), but are used to (partially) de-duplicate
527 completions.
528
529 Completely de-duplicating completions is a bit trickier than just
530 comparing them, as it depends on the surrounding text, which Completions are not
531 aware of.
532 """
532 """
533 return self.start == other.start and \
533 return self.start == other.start and \
534 self.end == other.end and \
534 self.end == other.end and \
535 self.text == other.text
535 self.text == other.text
536
536
537 def __hash__(self):
537 def __hash__(self):
538 return hash((self.start, self.end, self.text))
538 return hash((self.start, self.end, self.text))
539
539
540
540
541 class SimpleCompletion:
541 class SimpleCompletion:
542 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
542 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
543
543
544 .. warning::
544 .. warning::
545
545
546 Provisional
546 Provisional
547
547
548 This class is used to describe the currently supported attributes of
548 This class is used to describe the currently supported attributes of
549 simple completion items, and any additional implementation details
549 simple completion items, and any additional implementation details
550 should not be relied on. Additional attributes may be included in
550 should not be relied on. Additional attributes may be included in
551 future versions, and the meaning of text disambiguated from the current
552 dual meaning of "text to insert" and "text to be used as a label".
553 """
553 """
554
554
555 __slots__ = ["text", "type"]
555 __slots__ = ["text", "type"]
556
556
557 def __init__(self, text: str, *, type: str = None):
557 def __init__(self, text: str, *, type: str = None):
558 self.text = text
558 self.text = text
559 self.type = type
559 self.type = type
560
560
561 def __repr__(self):
561 def __repr__(self):
562 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
562 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
563
563
564
564
565 class _MatcherResultBase(TypedDict):
565 class _MatcherResultBase(TypedDict):
566 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
566 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
567
567
568 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
568 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
569 matched_fragment: NotRequired[str]
569 matched_fragment: NotRequired[str]
570
570
571 #: Whether to suppress results from all other matchers (True), some
571 #: Whether to suppress results from all other matchers (True), some
572 #: matchers (set of identifiers) or none (False); default is False.
572 #: matchers (set of identifiers) or none (False); default is False.
573 suppress: NotRequired[Union[bool, Set[str]]]
573 suppress: NotRequired[Union[bool, Set[str]]]
574
574
575 #: Identifiers of matchers which should NOT be suppressed when this matcher
575 #: Identifiers of matchers which should NOT be suppressed when this matcher
576 #: requests to suppress all other matchers; defaults to an empty set.
576 #: requests to suppress all other matchers; defaults to an empty set.
577 do_not_suppress: NotRequired[Set[str]]
577 do_not_suppress: NotRequired[Set[str]]
578
578
579 #: Are completions already ordered and should be left as-is? default is False.
579 #: Are completions already ordered and should be left as-is? default is False.
580 ordered: NotRequired[bool]
580 ordered: NotRequired[bool]
581
581
582
582
583 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
583 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
584 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
584 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
585 """Result of new-style completion matcher."""
585 """Result of new-style completion matcher."""
586
586
587 # note: TypedDict is added again to the inheritance chain
587 # note: TypedDict is added again to the inheritance chain
588 # in order to get __orig_bases__ for documentation
588 # in order to get __orig_bases__ for documentation
589
589
590 #: List of candidate completions
590 #: List of candidate completions
591 completions: Sequence[SimpleCompletion]
591 completions: Sequence[SimpleCompletion]
592
592
593
593
594 class _JediMatcherResult(_MatcherResultBase):
594 class _JediMatcherResult(_MatcherResultBase):
595 """Matching result returned by Jedi (will be processed differently)"""
595 """Matching result returned by Jedi (will be processed differently)"""
596
596
597 #: list of candidate completions
597 #: list of candidate completions
598 completions: Iterable[_JediCompletionLike]
598 completions: Iterable[_JediCompletionLike]
599
599
600
600
601 @dataclass
601 @dataclass
602 class CompletionContext:
602 class CompletionContext:
603 """Completion context provided as an argument to matchers in the Matcher API v2."""
603 """Completion context provided as an argument to matchers in the Matcher API v2."""
604
604
605 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
605 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
606 # which was not explicitly visible as an argument of the matcher, making any refactor
606 # which was not explicitly visible as an argument of the matcher, making any refactor
607 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
607 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
608 # from the completer, and make substituting them in sub-classes easier.
608 # from the completer, and make substituting them in sub-classes easier.
609
609
610 #: Relevant fragment of code directly preceding the cursor.
610 #: Relevant fragment of code directly preceding the cursor.
611 #: The extraction of token is implemented via splitter heuristic
611 #: The extraction of token is implemented via splitter heuristic
612 #: (following readline behaviour for legacy reasons), which is user configurable
612 #: (following readline behaviour for legacy reasons), which is user configurable
613 #: (by switching the greedy mode).
613 #: (by switching the greedy mode).
614 token: str
614 token: str
615
615
616 #: The full available content of the editor or buffer
616 #: The full available content of the editor or buffer
617 full_text: str
617 full_text: str
618
618
619 #: Cursor position in the line (the same for ``full_text`` and ``text``).
619 #: Cursor position in the line (the same for ``full_text`` and ``text``).
620 cursor_position: int
620 cursor_position: int
621
621
622 #: Cursor line in ``full_text``.
622 #: Cursor line in ``full_text``.
623 cursor_line: int
623 cursor_line: int
624
624
625 #: The maximum number of completions that will be used downstream.
625 #: The maximum number of completions that will be used downstream.
626 #: Matchers can use this information to abort early.
626 #: Matchers can use this information to abort early.
627 #: The built-in Jedi matcher is currently excepted from this limit.
627 #: The built-in Jedi matcher is currently excepted from this limit.
628 # If not given, return all possible completions.
628 # If not given, return all possible completions.
629 limit: Optional[int]
629 limit: Optional[int]
630
630
631 @cached_property
631 @cached_property
632 def text_until_cursor(self) -> str:
632 def text_until_cursor(self) -> str:
633 return self.line_with_cursor[: self.cursor_position]
633 return self.line_with_cursor[: self.cursor_position]
634
634
635 @cached_property
635 @cached_property
636 def line_with_cursor(self) -> str:
636 def line_with_cursor(self) -> str:
637 return self.full_text.split("\n")[self.cursor_line]
637 return self.full_text.split("\n")[self.cursor_line]
638
638
639
639
640 #: Matcher results for API v2.
640 #: Matcher results for API v2.
641 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
641 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
642
642
643
643
644 class _MatcherAPIv1Base(Protocol):
644 class _MatcherAPIv1Base(Protocol):
645 def __call__(self, text: str) -> list[str]:
645 def __call__(self, text: str) -> list[str]:
646 """Call signature."""
646 """Call signature."""
647
647
648
648
649 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
649 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
650 #: API version
650 #: API version
651 matcher_api_version: Optional[Literal[1]]
651 matcher_api_version: Optional[Literal[1]]
652
652
653 def __call__(self, text: str) -> list[str]:
653 def __call__(self, text: str) -> list[str]:
654 """Call signature."""
654 """Call signature."""
655
655
656
656
657 #: Protocol describing Matcher API v1.
657 #: Protocol describing Matcher API v1.
658 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
658 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
659
659
660
660
661 class MatcherAPIv2(Protocol):
661 class MatcherAPIv2(Protocol):
662 """Protocol describing Matcher API v2."""
662 """Protocol describing Matcher API v2."""
663
663
664 #: API version
664 #: API version
665 matcher_api_version: Literal[2] = 2
665 matcher_api_version: Literal[2] = 2
666
666
667 def __call__(self, context: CompletionContext) -> MatcherResult:
667 def __call__(self, context: CompletionContext) -> MatcherResult:
668 """Call signature."""
668 """Call signature."""
669
669
670
670
671 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
671 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
672
672
673
673
674 def has_any_completions(result: MatcherResult) -> bool:
674 def has_any_completions(result: MatcherResult) -> bool:
675 """Check if any result includes any completions."""
675 """Check if any result includes any completions."""
676 if hasattr(result["completions"], "__len__"):
676 if hasattr(result["completions"], "__len__"):
677 return len(result["completions"]) != 0
677 return len(result["completions"]) != 0
678 try:
678 try:
679 old_iterator = result["completions"]
679 old_iterator = result["completions"]
680 first = next(old_iterator)
680 first = next(old_iterator)
681 result["completions"] = itertools.chain([first], old_iterator)
681 result["completions"] = itertools.chain([first], old_iterator)
682 return True
682 return True
683 except StopIteration:
683 except StopIteration:
684 return False
684 return False
685
685
686
686
687 def completion_matcher(
687 def completion_matcher(
688 *, priority: float = None, identifier: str = None, api_version: int = 1
688 *, priority: float = None, identifier: str = None, api_version: int = 1
689 ):
689 ):
690 """Adds attributes describing the matcher.
690 """Adds attributes describing the matcher.
691
691
692 Parameters
692 Parameters
693 ----------
693 ----------
694 priority : Optional[float]
694 priority : Optional[float]
695 The priority of the matcher, determines the order of execution of matchers.
695 The priority of the matcher, determines the order of execution of matchers.
696 Higher priority means that the matcher will be executed first. Defaults to 0.
696 Higher priority means that the matcher will be executed first. Defaults to 0.
697 identifier : Optional[str]
697 identifier : Optional[str]
698 identifier of the matcher allowing users to modify the behaviour via traitlets,
698 identifier of the matcher allowing users to modify the behaviour via traitlets,
699 and also used for debugging (will be passed as ``origin`` with the completions).
700
700
701 Defaults to matcher function's ``__qualname__`` (for example,
701 Defaults to matcher function's ``__qualname__`` (for example,
702 ``IPCompleter.file_matcher`` for the built-in matcher defined
703 as a ``file_matcher`` method of the ``IPCompleter`` class).
703 as a ``file_matcher`` method of the ``IPCompleter`` class).
704 api_version: Optional[int]
704 api_version: Optional[int]
705 version of the Matcher API used by this matcher.
705 version of the Matcher API used by this matcher.
706 Currently supported values are 1 and 2.
706 Currently supported values are 1 and 2.
707 Defaults to 1.
707 Defaults to 1.
708 """
708 """
709
709
710 def wrapper(func: Matcher):
710 def wrapper(func: Matcher):
711 func.matcher_priority = priority or 0
711 func.matcher_priority = priority or 0
712 func.matcher_identifier = identifier or func.__qualname__
712 func.matcher_identifier = identifier or func.__qualname__
713 func.matcher_api_version = api_version
713 func.matcher_api_version = api_version
714 if TYPE_CHECKING:
714 if TYPE_CHECKING:
715 if api_version == 1:
715 if api_version == 1:
716 func = cast(func, MatcherAPIv1)
716 func = cast(func, MatcherAPIv1)
717 elif api_version == 2:
717 elif api_version == 2:
718 func = cast(func, MatcherAPIv2)
718 func = cast(func, MatcherAPIv2)
719 return func
719 return func
720
720
721 return wrapper
721 return wrapper
722
722
723
723
724 def _get_matcher_priority(matcher: Matcher):
724 def _get_matcher_priority(matcher: Matcher):
725 return getattr(matcher, "matcher_priority", 0)
725 return getattr(matcher, "matcher_priority", 0)
726
726
727
727
728 def _get_matcher_id(matcher: Matcher):
728 def _get_matcher_id(matcher: Matcher):
729 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
729 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
730
730
731
731
732 def _get_matcher_api_version(matcher):
732 def _get_matcher_api_version(matcher):
733 return getattr(matcher, "matcher_api_version", 1)
733 return getattr(matcher, "matcher_api_version", 1)
734
734
735
735
736 context_matcher = partial(completion_matcher, api_version=2)
736 context_matcher = partial(completion_matcher, api_version=2)
737
737
738
738
739 _IC = Iterable[Completion]
739 _IC = Iterable[Completion]
740
740
741
741
742 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
742 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
743 """
743 """
744 Deduplicate a set of completions.
744 Deduplicate a set of completions.
745
745
746 .. warning::
746 .. warning::
747
747
748 Unstable
748 Unstable
749
749
750 This function is unstable, API may change without warning.
750 This function is unstable, API may change without warning.
751
751
752 Parameters
752 Parameters
753 ----------
753 ----------
754 text : str
754 text : str
755 text that should be completed.
755 text that should be completed.
756 completions : Iterator[Completion]
756 completions : Iterator[Completion]
757 iterator over the completions to deduplicate
757 iterator over the completions to deduplicate
758
758
759 Yields
759 Yields
760 ------
760 ------
761 `Completions` objects
761 `Completions` objects
762 Completions coming from multiple sources may be different but end up having
763 the same effect when applied to ``text``. If this is the case, this will
764 consider completions as equal and only emit the first encountered.
765 Not folded into `completions()` yet for debugging purposes, and to detect when
766 the IPython completer does return things that Jedi does not, but it should be
767 at some point.
768 """
768 """
769 completions = list(completions)
769 completions = list(completions)
770 if not completions:
770 if not completions:
771 return
771 return
772
772
773 new_start = min(c.start for c in completions)
773 new_start = min(c.start for c in completions)
774 new_end = max(c.end for c in completions)
774 new_end = max(c.end for c in completions)
775
775
776 seen = set()
776 seen = set()
777 for c in completions:
777 for c in completions:
778 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
778 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
779 if new_text not in seen:
779 if new_text not in seen:
780 yield c
780 yield c
781 seen.add(new_text)
781 seen.add(new_text)
782
782
783
783
784 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
784 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
785 """
785 """
786 Rectify a set of completions to all have the same ``start`` and ``end``
786 Rectify a set of completions to all have the same ``start`` and ``end``
787
787
788 .. warning::
788 .. warning::
789
789
790 Unstable
790 Unstable
791
791
792 This function is unstable; the API may change without warning.
793 It will also raise unless used in the proper context manager.
794
794
795 Parameters
795 Parameters
796 ----------
796 ----------
797 text : str
797 text : str
798 text that should be completed.
798 text that should be completed.
799 completions : Iterator[Completion]
799 completions : Iterator[Completion]
800 iterator over the completions to rectify
800 iterator over the completions to rectify
801 _debug : bool
801 _debug : bool
802 Log failed completion
802 Log failed completion
803
803
804 Notes
804 Notes
805 -----
805 -----
806 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
807 the Jupyter Protocol requires them to do so. This will readjust
808 the completions to have the same ``start`` and ``end`` by padding both
809 extremities with surrounding text.
810
811 During stabilisation this should support a ``_debug`` option to log which
812 completions are returned by the IPython completer but not found in Jedi, in
813 order to make upstream bug reports.
814 """
814 """
815 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
815 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
816 "It may change without warnings. "
816 "It may change without warnings. "
817 "Use in corresponding context manager.",
817 "Use in corresponding context manager.",
818 category=ProvisionalCompleterWarning, stacklevel=2)
818 category=ProvisionalCompleterWarning, stacklevel=2)
819
819
820 completions = list(completions)
820 completions = list(completions)
821 if not completions:
821 if not completions:
822 return
822 return
823 starts = (c.start for c in completions)
823 starts = (c.start for c in completions)
824 ends = (c.end for c in completions)
824 ends = (c.end for c in completions)
825
825
826 new_start = min(starts)
826 new_start = min(starts)
827 new_end = max(ends)
827 new_end = max(ends)
828
828
829 seen_jedi = set()
829 seen_jedi = set()
830 seen_python_matches = set()
830 seen_python_matches = set()
831 for c in completions:
831 for c in completions:
832 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
832 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
833 if c._origin == 'jedi':
833 if c._origin == 'jedi':
834 seen_jedi.add(new_text)
834 seen_jedi.add(new_text)
835 elif c._origin == 'IPCompleter.python_matches':
835 elif c._origin == 'IPCompleter.python_matches':
836 seen_python_matches.add(new_text)
836 seen_python_matches.add(new_text)
837 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
837 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
838 diff = seen_python_matches.difference(seen_jedi)
838 diff = seen_python_matches.difference(seen_jedi)
839 if diff and _debug:
839 if diff and _debug:
840 print('IPython.python matches have extras:', diff)
840 print('IPython.python matches have extras:', diff)
841
841
842
842
843 if sys.platform == 'win32':
843 if sys.platform == 'win32':
844 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
844 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
845 else:
845 else:
846 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
846 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
847
847
848 GREEDY_DELIMS = ' =\r\n'
848 GREEDY_DELIMS = ' =\r\n'
849
849
850
850
851 class CompletionSplitter(object):
851 class CompletionSplitter(object):
852 """An object to split an input line in a manner similar to readline.
852 """An object to split an input line in a manner similar to readline.
853
853
854 By having our own implementation, we can expose readline-like completion in
854 By having our own implementation, we can expose readline-like completion in
855 a uniform manner to all frontends. This object only needs to be given the
855 a uniform manner to all frontends. This object only needs to be given the
856 line of text to be split and the cursor position on said line, and it
856 line of text to be split and the cursor position on said line, and it
857 returns the 'word' to be completed on at the cursor after splitting the
857 returns the 'word' to be completed on at the cursor after splitting the
858 entire line.
858 entire line.
859
859
860 What characters are used as splitting delimiters can be controlled by
860 What characters are used as splitting delimiters can be controlled by
861 setting the ``delims`` attribute (this is a property that internally
861 setting the ``delims`` attribute (this is a property that internally
862 automatically builds the necessary regular expression)"""
862 automatically builds the necessary regular expression)"""
863
863
864 # Private interface
864 # Private interface
865
865
866 # A string of delimiter characters. The default value makes sense for
866 # A string of delimiter characters. The default value makes sense for
867 # IPython's most typical usage patterns.
867 # IPython's most typical usage patterns.
868 _delims = DELIMS
868 _delims = DELIMS
869
869
870 # The expression (a normal string) to be compiled into a regular expression
870 # The expression (a normal string) to be compiled into a regular expression
871 # for actual splitting. We store it as an attribute mostly for ease of
871 # for actual splitting. We store it as an attribute mostly for ease of
872 # debugging, since this type of code can be so tricky to debug.
872 # debugging, since this type of code can be so tricky to debug.
873 _delim_expr = None
873 _delim_expr = None
874
874
875 # The regular expression that does the actual splitting
875 # The regular expression that does the actual splitting
876 _delim_re = None
876 _delim_re = None
877
877
878 def __init__(self, delims=None):
878 def __init__(self, delims=None):
879 delims = CompletionSplitter._delims if delims is None else delims
879 delims = CompletionSplitter._delims if delims is None else delims
880 self.delims = delims
880 self.delims = delims
881
881
882 @property
882 @property
883 def delims(self):
883 def delims(self):
884 """Return the string of delimiter characters."""
884 """Return the string of delimiter characters."""
885 return self._delims
885 return self._delims
886
886
887 @delims.setter
887 @delims.setter
888 def delims(self, delims):
888 def delims(self, delims):
889 """Set the delimiters for line splitting."""
889 """Set the delimiters for line splitting."""
890 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
890 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
891 self._delim_re = re.compile(expr)
891 self._delim_re = re.compile(expr)
892 self._delims = delims
892 self._delims = delims
893 self._delim_expr = expr
893 self._delim_expr = expr
894
894
895 def split_line(self, line, cursor_pos=None):
895 def split_line(self, line, cursor_pos=None):
896 """Split a line of text with a cursor at the given position.
896 """Split a line of text with a cursor at the given position.
897 """
897 """
898 l = line if cursor_pos is None else line[:cursor_pos]
898 l = line if cursor_pos is None else line[:cursor_pos]
899 return self._delim_re.split(l)[-1]
899 return self._delim_re.split(l)[-1]
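# Behaviour sketch for CompletionSplitter (doctest-style, using the default
# DELIMS defined above):
#
#   >>> sp = CompletionSplitter()
#   >>> sp.split_line('run foo.bar(baz, qu')
#   'qu'
#   >>> sp.split_line('x = np.aran', 11)
#   'np.aran'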
900
900
901
901
902
902
903 class Completer(Configurable):
903 class Completer(Configurable):
904
904
905 greedy = Bool(False,
905 greedy = Bool(False,
906 help="""Activate greedy completion
906 help="""Activate greedy completion
907 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
907 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
908
908
909 This will enable completion on elements of lists, results of function calls, etc.,
909 This will enable completion on elements of lists, results of function calls, etc.,
910 but can be unsafe because the code is actually evaluated on TAB.
910 but can be unsafe because the code is actually evaluated on TAB.
911 """,
911 """,
912 ).tag(config=True)
912 ).tag(config=True)
913
913
914 use_jedi = Bool(default_value=JEDI_INSTALLED,
914 use_jedi = Bool(default_value=JEDI_INSTALLED,
915 help="Experimental: Use Jedi to generate autocompletions. "
915 help="Experimental: Use Jedi to generate autocompletions. "
916 "Default to True if jedi is installed.").tag(config=True)
916 "Default to True if jedi is installed.").tag(config=True)
917
917
918 jedi_compute_type_timeout = Int(default_value=400,
918 jedi_compute_type_timeout = Int(default_value=400,
919 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
919 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
920 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
920 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
921 performance by preventing jedi from building its cache.
921 performance by preventing jedi from building its cache.
922 """).tag(config=True)
922 """).tag(config=True)
923
923
924 debug = Bool(default_value=False,
924 debug = Bool(default_value=False,
925 help='Enable debug for the Completer. Mostly prints extra '
925 help='Enable debug for the Completer. Mostly prints extra '
926 'information for experimental jedi integration.')\
926 'information for experimental jedi integration.')\
927 .tag(config=True)
927 .tag(config=True)
928
928
929 backslash_combining_completions = Bool(True,
929 backslash_combining_completions = Bool(True,
930 help="Enable unicode completions, e.g. \\alpha<tab> . "
930 help="Enable unicode completions, e.g. \\alpha<tab> . "
931 "Includes completion of latex commands, unicode names, and expanding "
931 "Includes completion of latex commands, unicode names, and expanding "
932 "unicode characters back to latex commands.").tag(config=True)
932 "unicode characters back to latex commands.").tag(config=True)
933
933
934 def __init__(self, namespace=None, global_namespace=None, **kwargs):
934 def __init__(self, namespace=None, global_namespace=None, **kwargs):
935 """Create a new completer for the command line.
935 """Create a new completer for the command line.
936
936
937 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
937 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
938
938
939 If unspecified, the default namespace where completions are performed
939 If unspecified, the default namespace where completions are performed
940 is __main__ (technically, __main__.__dict__). Namespaces should be
940 is __main__ (technically, __main__.__dict__). Namespaces should be
941 given as dictionaries.
941 given as dictionaries.
942
942
943 An optional second namespace can be given. This allows the completer
943 An optional second namespace can be given. This allows the completer
944 to handle cases where both the local and global scopes need to be
944 to handle cases where both the local and global scopes need to be
945 distinguished.
945 distinguished.
946 """
946 """
947
947
948 # Don't bind to namespace quite yet, but flag whether the user wants a
948 # Don't bind to namespace quite yet, but flag whether the user wants a
949 # specific namespace or to use __main__.__dict__. This will allow us
949 # specific namespace or to use __main__.__dict__. This will allow us
950 # to bind to __main__.__dict__ at completion time, not now.
950 # to bind to __main__.__dict__ at completion time, not now.
951 if namespace is None:
951 if namespace is None:
952 self.use_main_ns = True
952 self.use_main_ns = True
953 else:
953 else:
954 self.use_main_ns = False
954 self.use_main_ns = False
955 self.namespace = namespace
955 self.namespace = namespace
956
956
957 # The global namespace, if given, can be bound directly
957 # The global namespace, if given, can be bound directly
958 if global_namespace is None:
958 if global_namespace is None:
959 self.global_namespace = {}
959 self.global_namespace = {}
960 else:
960 else:
961 self.global_namespace = global_namespace
961 self.global_namespace = global_namespace
962
962
963 self.custom_matchers = []
963 self.custom_matchers = []
964
964
965 super(Completer, self).__init__(**kwargs)
965 super(Completer, self).__init__(**kwargs)
966
966
967 def complete(self, text, state):
967 def complete(self, text, state):
968 """Return the next possible completion for 'text'.
968 """Return the next possible completion for 'text'.
969
969
970 This is called successively with state == 0, 1, 2, ... until it
970 This is called successively with state == 0, 1, 2, ... until it
971 returns None. The completion should begin with 'text'.
971 returns None. The completion should begin with 'text'.
972
972
973 """
973 """
974 if self.use_main_ns:
974 if self.use_main_ns:
975 self.namespace = __main__.__dict__
975 self.namespace = __main__.__dict__
976
976
977 if state == 0:
977 if state == 0:
978 if "." in text:
978 if "." in text:
979 self.matches = self.attr_matches(text)
979 self.matches = self.attr_matches(text)
980 else:
980 else:
981 self.matches = self.global_matches(text)
981 self.matches = self.global_matches(text)
982 try:
982 try:
983 return self.matches[state]
983 return self.matches[state]
984 except IndexError:
984 except IndexError:
985 return None
985 return None
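# Readline-style usage sketch for Completer.complete (doctest-style; the
# namespace below is hypothetical):
#
#   >>> c = Completer(namespace={'alpha': 1, 'alphabet': 2})
#   >>> c.complete('alph', 0)
#   'alpha'
#   >>> c.complete('alph', 1)
#   'alphabet'
#   >>> c.complete('alph', 2) is None
#   True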
986
986
987 def global_matches(self, text):
987 def global_matches(self, text):
988 """Compute matches when text is a simple name.
988 """Compute matches when text is a simple name.
989
989
990 Return a list of all keywords, built-in functions and names currently
990 Return a list of all keywords, built-in functions and names currently
991 defined in self.namespace or self.global_namespace that match.
991 defined in self.namespace or self.global_namespace that match.
992
992
993 """
993 """
994 matches = []
994 matches = []
995 match_append = matches.append
995 match_append = matches.append
996 n = len(text)
996 n = len(text)
997 for lst in [
997 for lst in [
998 keyword.kwlist,
998 keyword.kwlist,
999 builtin_mod.__dict__.keys(),
999 builtin_mod.__dict__.keys(),
1000 list(self.namespace.keys()),
1000 list(self.namespace.keys()),
1001 list(self.global_namespace.keys()),
1001 list(self.global_namespace.keys()),
1002 ]:
1002 ]:
1003 for word in lst:
1003 for word in lst:
1004 if word[:n] == text and word != "__builtins__":
1004 if word[:n] == text and word != "__builtins__":
1005 match_append(word)
1005 match_append(word)
1006
1006
1007 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1007 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1008 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1008 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1009 shortened = {
1009 shortened = {
1010 "_".join([sub[0] for sub in word.split("_")]): word
1010 "_".join([sub[0] for sub in word.split("_")]): word
1011 for word in lst
1011 for word in lst
1012 if snake_case_re.match(word)
1012 if snake_case_re.match(word)
1013 }
1013 }
1014 for word in shortened.keys():
1014 for word in shortened.keys():
1015 if word[:n] == text and word != "__builtins__":
1015 if word[:n] == text and word != "__builtins__":
1016 match_append(shortened[word])
1016 match_append(shortened[word])
1017 return matches
1017 return matches
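# Behaviour sketch for global_matches (hypothetical namespace; note the
# snake_case abbreviation matching implemented above):
#
#   >>> c = Completer(namespace={'my_long_variable': 1, 'mapping': {}})
#   >>> c.global_matches('map')
#   ['map', 'mapping']
#   >>> c.global_matches('m_l_v')
#   ['my_long_variable']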
1018
1018
1019 def attr_matches(self, text):
1019 def attr_matches(self, text):
1020 """Compute matches when text contains a dot.
1020 """Compute matches when text contains a dot.
1021
1021
1022 Assuming the text is of the form NAME.NAME....[NAME], and is
1022 Assuming the text is of the form NAME.NAME....[NAME], and is
1023 evaluatable in self.namespace or self.global_namespace, it will be
1023 evaluatable in self.namespace or self.global_namespace, it will be
1024 evaluated and its attributes (as revealed by dir()) are used as
1024 evaluated and its attributes (as revealed by dir()) are used as
1025 possible completions. (For class instances, class members are
1025 possible completions. (For class instances, class members are
1026 also considered.)
1026 also considered.)
1027
1027
1028 WARNING: this can still invoke arbitrary C code, if an object
1028 WARNING: this can still invoke arbitrary C code, if an object
1029 with a __getattr__ hook is evaluated.
1029 with a __getattr__ hook is evaluated.
1030
1030
1031 """
1031 """
1032
1032
1033 # Another option, seems to work great. Catches things like ''.<tab>
1033 # Another option, seems to work great. Catches things like ''.<tab>
1034 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
1034 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
1035
1035
1036 if m:
1036 if m:
1037 expr, attr = m.group(1, 3)
1037 expr, attr = m.group(1, 3)
1038 elif self.greedy:
1038 elif self.greedy:
1039 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1039 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1040 if not m2:
1040 if not m2:
1041 return []
1041 return []
1042 expr, attr = m2.group(1,2)
1042 expr, attr = m2.group(1,2)
1043 else:
1043 else:
1044 return []
1044 return []
1045
1045
1046 try:
1046 try:
1047 obj = eval(expr, self.namespace)
1047 obj = eval(expr, self.namespace)
1048 except:
1048 except:
1049 try:
1049 try:
1050 obj = eval(expr, self.global_namespace)
1050 obj = eval(expr, self.global_namespace)
1051 except:
1051 except:
1052 return []
1052 return []
1053
1053
1054 if self.limit_to__all__ and hasattr(obj, '__all__'):
1054 if self.limit_to__all__ and hasattr(obj, '__all__'):
1055 words = get__all__entries(obj)
1055 words = get__all__entries(obj)
1056 else:
1056 else:
1057 words = dir2(obj)
1057 words = dir2(obj)
1058
1058
1059 try:
1059 try:
1060 words = generics.complete_object(obj, words)
1060 words = generics.complete_object(obj, words)
1061 except TryNext:
1061 except TryNext:
1062 pass
1062 pass
1063 except AssertionError:
1063 except AssertionError:
1064 raise
1064 raise
1065 except Exception:
1065 except Exception:
1066 # Silence errors from completion function
1066 # Silence errors from completion function
1067 #raise # dbg
1067 #raise # dbg
1068 pass
1068 pass
1069 # Build match list to return
1069 # Build match list to return
1070 n = len(attr)
1070 n = len(attr)
1071 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
1071 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
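# Sketch of how attr_matches splits the token before evaluating it (the regex
# behaviour alone, which needs no namespace):
#
#   >>> m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", "np.linalg.no")
#   >>> m.group(1, 3)
#   ('np.linalg', 'no')
#
# The expression part ('np.linalg') is then evaluated in the user namespace and
# dir() of the result is filtered by the attribute prefix ('no').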
1072
1072
1073
1073
1074 def get__all__entries(obj):
1074 def get__all__entries(obj):
1075 """returns the strings in the __all__ attribute"""
1075 """returns the strings in the __all__ attribute"""
1076 try:
1076 try:
1077 words = getattr(obj, '__all__')
1077 words = getattr(obj, '__all__')
1078 except:
1078 except:
1079 return []
1079 return []
1080
1080
1081 return [w for w in words if isinstance(w, str)]
1081 return [w for w in words if isinstance(w, str)]
1082
1082
1083
1083
1084 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
1084 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
1085 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
1085 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
1086 """Used by dict_key_matches, matching the prefix to a list of keys
1086 """Used by dict_key_matches, matching the prefix to a list of keys
1087
1087
1088 Parameters
1088 Parameters
1089 ----------
1089 ----------
1090 keys
1090 keys
1091 list of keys in dictionary currently being completed.
1091 list of keys in dictionary currently being completed.
1092 prefix
1092 prefix
1093 Part of the text already typed by the user. E.g. `mydict[b'fo`
1093 Part of the text already typed by the user. E.g. `mydict[b'fo`
1094 delims
1094 delims
1095 String of delimiters to consider when finding the current key.
1095 String of delimiters to consider when finding the current key.
1096 extra_prefix : optional
1096 extra_prefix : optional
1097 Part of the text already typed in multi-key index cases. E.g. for
1097 Part of the text already typed in multi-key index cases. E.g. for
1098 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1098 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1099
1099
1100 Returns
1100 Returns
1101 -------
1101 -------
1102 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1102 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1103 ``quote`` being the quote that needs to be used to close the current string,
1103 ``quote`` being the quote that needs to be used to close the current string,
1104 ``token_start`` the position where the replacement should start occurring,
1104 ``token_start`` the position where the replacement should start occurring,
1105 ``matched`` a list of replacement/completion strings.
1105 ``matched`` a list of replacement/completion strings.
1106
1106
1107 """
1107 """
1108 prefix_tuple = extra_prefix if extra_prefix else ()
1108 prefix_tuple = extra_prefix if extra_prefix else ()
1109 Nprefix = len(prefix_tuple)
1109 Nprefix = len(prefix_tuple)
1110 def filter_prefix_tuple(key):
1110 def filter_prefix_tuple(key):
1111 # Reject too short keys
1111 # Reject too short keys
1112 if len(key) <= Nprefix:
1112 if len(key) <= Nprefix:
1113 return False
1113 return False
1114 # Reject keys with non str/bytes in it
1114 # Reject keys with non str/bytes in it
1115 for k in key:
1115 for k in key:
1116 if not isinstance(k, (str, bytes)):
1116 if not isinstance(k, (str, bytes)):
1117 return False
1117 return False
1118 # Reject keys that do not match the prefix
1118 # Reject keys that do not match the prefix
1119 for k, pt in zip(key, prefix_tuple):
1119 for k, pt in zip(key, prefix_tuple):
1120 if k != pt:
1120 if k != pt:
1121 return False
1121 return False
1122 # All checks passed!
1122 # All checks passed!
1123 return True
1123 return True
1124
1124
1125 filtered_keys:List[Union[str,bytes]] = []
1125 filtered_keys:List[Union[str,bytes]] = []
1126 def _add_to_filtered_keys(key):
1126 def _add_to_filtered_keys(key):
1127 if isinstance(key, (str, bytes)):
1127 if isinstance(key, (str, bytes)):
1128 filtered_keys.append(key)
1128 filtered_keys.append(key)
1129
1129
1130 for k in keys:
1130 for k in keys:
1131 if isinstance(k, tuple):
1131 if isinstance(k, tuple):
1132 if filter_prefix_tuple(k):
1132 if filter_prefix_tuple(k):
1133 _add_to_filtered_keys(k[Nprefix])
1133 _add_to_filtered_keys(k[Nprefix])
1134 else:
1134 else:
1135 _add_to_filtered_keys(k)
1135 _add_to_filtered_keys(k)
1136
1136
1137 if not prefix:
1137 if not prefix:
1138 return '', 0, [repr(k) for k in filtered_keys]
1138 return '', 0, [repr(k) for k in filtered_keys]
1139 quote_match = re.search('["\']', prefix)
1139 quote_match = re.search('["\']', prefix)
1140 assert quote_match is not None # silence mypy
1140 assert quote_match is not None # silence mypy
1141 quote = quote_match.group()
1141 quote = quote_match.group()
1142 try:
1142 try:
1143 prefix_str = eval(prefix + quote, {})
1143 prefix_str = eval(prefix + quote, {})
1144 except Exception:
1144 except Exception:
1145 return '', 0, []
1145 return '', 0, []
1146
1146
1147 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1147 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1148 token_match = re.search(pattern, prefix, re.UNICODE)
1148 token_match = re.search(pattern, prefix, re.UNICODE)
1149 assert token_match is not None # silence mypy
1149 assert token_match is not None # silence mypy
1150 token_start = token_match.start()
1150 token_start = token_match.start()
1151 token_prefix = token_match.group()
1151 token_prefix = token_match.group()
1152
1152
1153 matched:List[str] = []
1153 matched:List[str] = []
1154 for key in filtered_keys:
1154 for key in filtered_keys:
1155 try:
1155 try:
1156 if not key.startswith(prefix_str):
1156 if not key.startswith(prefix_str):
1157 continue
1157 continue
1158 except (AttributeError, TypeError, UnicodeError):
1158 except (AttributeError, TypeError, UnicodeError):
1159 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1159 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1160 continue
1160 continue
1161
1161
1162 # reformat remainder of key to begin with prefix
1162 # reformat remainder of key to begin with prefix
1163 rem = key[len(prefix_str):]
1163 rem = key[len(prefix_str):]
1164 # force repr wrapped in '
1164 # force repr wrapped in '
1165 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1165 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1166 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1166 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1167 if quote == '"':
1167 if quote == '"':
1168 # The entered prefix is quoted with ",
1168 # The entered prefix is quoted with ",
1169 # but the match is quoted with '.
1169 # but the match is quoted with '.
1170 # A contained " hence needs escaping for comparison:
1170 # A contained " hence needs escaping for comparison:
1171 rem_repr = rem_repr.replace('"', '\\"')
1171 rem_repr = rem_repr.replace('"', '\\"')
1172
1172
1173 # then reinsert prefix from start of token
1173 # then reinsert prefix from start of token
1174 matched.append('%s%s' % (token_prefix, rem_repr))
1174 matched.append('%s%s' % (token_prefix, rem_repr))
1175 return quote, token_start, matched
1175 return quote, token_start, matched
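# Behaviour sketch for match_dict_keys (doctest-style, using the module-level
# DELIMS; results reflect the implementation above):
#
#   >>> match_dict_keys(['foo', b'bar'], "'f", DELIMS)
#   ("'", 1, ['foo'])
#   >>> match_dict_keys([('foo', 'bar'), ('foo', 'baz')], "'ba", DELIMS, extra_prefix=('foo',))
#   ("'", 1, ['bar', 'baz'])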
1176
1176
1177
1177
1178 def cursor_to_position(text:str, line:int, column:int)->int:
1178 def cursor_to_position(text:str, line:int, column:int)->int:
1179 """
1179 """
1180 Convert the (line,column) position of the cursor in text to an offset in a
1180 Convert the (line,column) position of the cursor in text to an offset in a
1181 string.
1181 string.
1182
1182
1183 Parameters
1183 Parameters
1184 ----------
1184 ----------
1185 text : str
1185 text : str
1186 The text in which to calculate the cursor offset
1186 The text in which to calculate the cursor offset
1187 line : int
1187 line : int
1188 Line of the cursor; 0-indexed
1188 Line of the cursor; 0-indexed
1189 column : int
1189 column : int
1190 Column of the cursor 0-indexed
1190 Column of the cursor 0-indexed
1191
1191
1192 Returns
1192 Returns
1193 -------
1193 -------
1194 Position of the cursor in ``text``, 0-indexed.
1194 Position of the cursor in ``text``, 0-indexed.
1195
1195
1196 See Also
1196 See Also
1197 --------
1197 --------
1198 position_to_cursor : reciprocal of this function
1198 position_to_cursor : reciprocal of this function
1199
1199
1200 """
1200 """
1201 lines = text.split('\n')
1201 lines = text.split('\n')
1202 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1202 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1203
1203
1204 return sum(len(l) + 1 for l in lines[:line]) + column
1204 return sum(len(l) + 1 for l in lines[:line]) + column
1205
1205
1206 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1206 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1207 """
1207 """
1208 Convert the position of the cursor in text (0-indexed) to a line
1208 Convert the position of the cursor in text (0-indexed) to a line
1209 number (0-indexed) and a column number (0-indexed) pair.
1209 number (0-indexed) and a column number (0-indexed) pair.
1210
1210
1211 Position should be a valid position in ``text``.
1211 Position should be a valid position in ``text``.
1212
1212
1213 Parameters
1213 Parameters
1214 ----------
1214 ----------
1215 text : str
1215 text : str
1216 The text in which to calculate the cursor offset
1216 The text in which to calculate the cursor offset
1217 offset : int
1217 offset : int
1218 Position of the cursor in ``text``, 0-indexed.
1218 Position of the cursor in ``text``, 0-indexed.
1219
1219
1220 Returns
1220 Returns
1221 -------
1221 -------
1222 (line, column) : (int, int)
1222 (line, column) : (int, int)
1223 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1223 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1224
1224
1225 See Also
1225 See Also
1226 --------
1226 --------
1227 cursor_to_position : reciprocal of this function
1227 cursor_to_position : reciprocal of this function
1228
1228
1229 """
1229 """
1230
1230
1231 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1231 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1232
1232
1233 before = text[:offset]
1233 before = text[:offset]
1234 blines = before.split('\n') # ! splitlines trims the trailing \n
1234 blines = before.split('\n') # ! splitlines trims the trailing \n
1235 line = before.count('\n')
1235 line = before.count('\n')
1236 col = len(blines[-1])
1236 col = len(blines[-1])
1237 return line, col
1237 return line, col
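# Round-trip sketch for the two helpers above (doctest-style):
#
#   >>> text = "ab\ncd"
#   >>> cursor_to_position(text, 1, 1)
#   4
#   >>> position_to_cursor(text, 4)
#   (1, 1)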
1238
1238
1239
1239
1240 def _safe_isinstance(obj, module, class_name):
1240 def _safe_isinstance(obj, module, class_name):
1241 """Checks if obj is an instance of module.class_name if loaded
1241 """Checks if obj is an instance of module.class_name if loaded
1242 """
1242 """
1243 return (module in sys.modules and
1243 return (module in sys.modules and
1244 isinstance(obj, getattr(import_module(module), class_name)))
1244 isinstance(obj, getattr(import_module(module), class_name)))
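# Quick sketch: the check never imports the module itself, it only succeeds if
# the module is already loaded:
#
#   >>> _safe_isinstance({}, 'collections', 'OrderedDict')
#   False
#   >>> from collections import OrderedDict
#   >>> _safe_isinstance(OrderedDict(), 'collections', 'OrderedDict')
#   True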
1245
1245
1246
1246
1247 @context_matcher()
1247 @context_matcher()
1248 def back_unicode_name_matcher(context: CompletionContext):
1248 def back_unicode_name_matcher(context: CompletionContext):
1249 """Match Unicode characters back to Unicode name
1249 """Match Unicode characters back to Unicode name
1250
1250
1251 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1251 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1252 """
1252 """
1253 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1253 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1254 return _convert_matcher_v1_result_to_v2(
1254 return _convert_matcher_v1_result_to_v2(
1255 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1255 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1256 )
1256 )
1257
1257
1258
1258
1259 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1259 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1260 """Match Unicode characters back to Unicode name
1260 """Match Unicode characters back to Unicode name
1261
1261
1262 This does ``☃`` -> ``\\snowman``
1262 This does ``☃`` -> ``\\snowman``
1263
1263
1264 Note that snowman is not a valid python3 combining character but will be expanded,
1264 Note that snowman is not a valid python3 combining character but will be expanded,
1265 though it will not be recombined back to the snowman character by the completion machinery.
1265 though it will not be recombined back to the snowman character by the completion machinery.
1266
1266
1267 This will also not back-complete standard sequences like \\n, \\b ...
1267 This will also not back-complete standard sequences like \\n, \\b ...
1268
1268
1269 .. deprecated:: 8.6
1269 .. deprecated:: 8.6
1270 You can use :meth:`back_unicode_name_matcher` instead.
1270 You can use :meth:`back_unicode_name_matcher` instead.
1271
1271
1272 Returns
1272 Returns
1273 =======
1273 =======
1274
1274
1275 Return a tuple with two elements:
1275 Return a tuple with two elements:
1276
1276
1277 - The Unicode character that was matched (preceded with a backslash), or
1277 - The Unicode character that was matched (preceded with a backslash), or
1278 empty string,
1278 empty string,
1279 - a sequence (of 1), name for the match Unicode character, preceded by
1279 - a sequence (of 1), name for the match Unicode character, preceded by
1280 backslash, or empty if no match.
1280 backslash, or empty if no match.
1281 """
1281 """
1282 if len(text)<2:
1282 if len(text)<2:
1283 return '', ()
1283 return '', ()
1284 maybe_slash = text[-2]
1284 maybe_slash = text[-2]
1285 if maybe_slash != '\\':
1285 if maybe_slash != '\\':
1286 return '', ()
1286 return '', ()
1287
1287
1288 char = text[-1]
1288 char = text[-1]
1289 # no expand on quote for completion in strings.
1289 # no expand on quote for completion in strings.
1290 # nor backcomplete standard ascii keys
1290 # nor backcomplete standard ascii keys
1291 if char in string.ascii_letters or char in ('"',"'"):
1291 if char in string.ascii_letters or char in ('"',"'"):
1292 return '', ()
1292 return '', ()
1293 try :
1293 try :
1294 unic = unicodedata.name(char)
1294 unic = unicodedata.name(char)
1295 return '\\'+char,('\\'+unic,)
1295 return '\\'+char,('\\'+unic,)
1296 except KeyError:
1296 except KeyError:
1297 pass
1297 pass
1298 return '', ()
1298 return '', ()
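# Behaviour sketch (doctest-style):
#
#   >>> back_unicode_name_matches('\\☃')
#   ('\\☃', ('\\SNOWMAN',))
#   >>> back_unicode_name_matches('\\n')   # ascii sequences are not reversed
#   ('', ())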
1299
1299
1300
1300
1301 @context_matcher()
1301 @context_matcher()
1302 def back_latex_name_matcher(context: CompletionContext):
1302 def back_latex_name_matcher(context: CompletionContext):
1303 """Match latex characters back to unicode name
1303 """Match latex characters back to unicode name
1304
1304
1305 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1305 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1306 """
1306 """
1307 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1307 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1308 return _convert_matcher_v1_result_to_v2(
1308 return _convert_matcher_v1_result_to_v2(
1309 matches, type="latex", fragment=fragment, suppress_if_matches=True
1309 matches, type="latex", fragment=fragment, suppress_if_matches=True
1310 )
1310 )
1311
1311
1312
1312
1313 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1313 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1314 """Match latex characters back to unicode name
1314 """Match latex characters back to unicode name
1315
1315
1316 This does ``\\ℵ`` -> ``\\aleph``
1316 This does ``\\ℵ`` -> ``\\aleph``
1317
1317
1318 .. deprecated:: 8.6
1318 .. deprecated:: 8.6
1319 You can use :meth:`back_latex_name_matcher` instead.
1319 You can use :meth:`back_latex_name_matcher` instead.
1320 """
1320 """
1321 if len(text)<2:
1321 if len(text)<2:
1322 return '', ()
1322 return '', ()
1323 maybe_slash = text[-2]
1323 maybe_slash = text[-2]
1324 if maybe_slash != '\\':
1324 if maybe_slash != '\\':
1325 return '', ()
1325 return '', ()
1326
1326
1327
1327
1328 char = text[-1]
1328 char = text[-1]
1329 # no expand on quote for completion in strings.
1329 # no expand on quote for completion in strings.
1330 # nor backcomplete standard ascii keys
1330 # nor backcomplete standard ascii keys
1331 if char in string.ascii_letters or char in ('"',"'"):
1331 if char in string.ascii_letters or char in ('"',"'"):
1332 return '', ()
1332 return '', ()
1333 try :
1333 try :
1334 latex = reverse_latex_symbol[char]
1334 latex = reverse_latex_symbol[char]
1335 # '\\' replace the \ as well
1335 # '\\' replace the \ as well
1336 return '\\'+char,[latex]
1336 return '\\'+char,[latex]
1337 except KeyError:
1337 except KeyError:
1338 pass
1338 pass
1339 return '', ()
1339 return '', ()
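# Behaviour sketch (doctest-style; assumes ``\\aleph`` is present in the
# reverse latex symbol table):
#
#   >>> back_latex_name_matches('\\ℵ')
#   ('\\ℵ', ['\\aleph'])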
1340
1340
1341
1341
1342 def _formatparamchildren(parameter) -> str:
1342 def _formatparamchildren(parameter) -> str:
1343 """
1343 """
1344 Get parameter name and value from Jedi Private API
1344 Get parameter name and value from Jedi Private API
1345
1345
1346 Jedi does not expose a simple way to get `param=value` from its API.
1346 Jedi does not expose a simple way to get `param=value` from its API.
1347
1347
1348 Parameters
1348 Parameters
1349 ----------
1349 ----------
1350 parameter
1350 parameter
1351 Jedi's function `Param`
1351 Jedi's function `Param`
1352
1352
1353 Returns
1353 Returns
1354 -------
1354 -------
1355 A string like 'a', 'b=1', '*args', '**kwargs'
1355 A string like 'a', 'b=1', '*args', '**kwargs'
1356
1356
1357 """
1357 """
1358 description = parameter.description
1358 description = parameter.description
1359 if not description.startswith('param '):
1359 if not description.startswith('param '):
1360 raise ValueError('Jedi function parameter description has changed format. '
1360 raise ValueError('Jedi function parameter description has changed format. '
1361 'Expected "param ...", found %r.' % description)
1361 'Expected "param ...", found %r.' % description)
1362 return description[6:]
1362 return description[6:]
1363
1363
1364 def _make_signature(completion)-> str:
1364 def _make_signature(completion)-> str:
1365 """
1365 """
1366 Make the signature from a jedi completion
1366 Make the signature from a jedi completion
1367
1367
1368 Parameters
1368 Parameters
1369 ----------
1369 ----------
1370 completion : jedi.Completion
1370 completion : jedi.Completion
1371 object does not complete a function type
1371 object does not complete a function type
1372
1372
1373 Returns
1373 Returns
1374 -------
1374 -------
1375 a string consisting of the function signature, with the parentheses but
1375 a string consisting of the function signature, with the parentheses but
1376 without the function name, for example:
1376 without the function name, for example:
1377 `(a, *args, b=1, **kwargs)`
1377 `(a, *args, b=1, **kwargs)`
1378
1378
1379 """
1379 """
1380
1380
1381 # it looks like this might work on jedi 0.17
1381 # it looks like this might work on jedi 0.17
1382 if hasattr(completion, 'get_signatures'):
1382 if hasattr(completion, 'get_signatures'):
1383 signatures = completion.get_signatures()
1383 signatures = completion.get_signatures()
1384 if not signatures:
1384 if not signatures:
1385 return '(?)'
1385 return '(?)'
1386
1386
1387 c0 = completion.get_signatures()[0]
1387 c0 = completion.get_signatures()[0]
1388 return '('+c0.to_string().split('(', maxsplit=1)[1]
1388 return '('+c0.to_string().split('(', maxsplit=1)[1]
1389
1389
1390 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1390 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1391 for p in signature.defined_names()) if f])
1391 for p in signature.defined_names()) if f])
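# Rough usage sketch (assumes jedi is installed; the exact signature text
# depends on the jedi and Python versions):
#
#   >>> import jedi
#   >>> c = jedi.Interpreter('len', [{}]).complete()[0]
#   >>> _make_signature(c)      # -> something like '(obj, /)'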
1392
1392
1393
1393
1394 _CompleteResult = Dict[str, MatcherResult]
1394 _CompleteResult = Dict[str, MatcherResult]
1395
1395
1396
1396
1397 def _convert_matcher_v1_result_to_v2(
1397 def _convert_matcher_v1_result_to_v2(
1398 matches: Sequence[str],
1398 matches: Sequence[str],
1399 type: str,
1399 type: str,
1400 fragment: Optional[str] = None,
1400 fragment: Optional[str] = None,
1401 suppress_if_matches: bool = False,
1401 suppress_if_matches: bool = False,
1402 ) -> SimpleMatcherResult:
1402 ) -> SimpleMatcherResult:
1403 """Utility to help with transition"""
1403 """Utility to help with transition"""
1404 result = {
1404 result = {
1405 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1405 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1406 "suppress": (True if matches else False) if suppress_if_matches else False,
1406 "suppress": (True if matches else False) if suppress_if_matches else False,
1407 }
1407 }
1408 if fragment is not None:
1408 if fragment is not None:
1409 result["matched_fragment"] = fragment
1409 result["matched_fragment"] = fragment
1410 return result
1410 return result
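# Conversion sketch (the magic names below are purely illustrative):
#
#   >>> res = _convert_matcher_v1_result_to_v2(['%time', '%timeit'], type='magic', fragment='%ti')
#   >>> [c.text for c in res['completions']], res['suppress'], res['matched_fragment']
#   (['%time', '%timeit'], False, '%ti')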
1411
1411
1412
1412
1413 class IPCompleter(Completer):
1413 class IPCompleter(Completer):
1414 """Extension of the completer class with IPython-specific features"""
1414 """Extension of the completer class with IPython-specific features"""
1415
1415
1416 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1416 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1417
1417
1418 @observe('greedy')
1418 @observe('greedy')
1419 def _greedy_changed(self, change):
1419 def _greedy_changed(self, change):
1420 """update the splitter and readline delims when greedy is changed"""
1420 """update the splitter and readline delims when greedy is changed"""
1421 if change['new']:
1421 if change['new']:
1422 self.splitter.delims = GREEDY_DELIMS
1422 self.splitter.delims = GREEDY_DELIMS
1423 else:
1423 else:
1424 self.splitter.delims = DELIMS
1424 self.splitter.delims = DELIMS
1425
1425
1426 dict_keys_only = Bool(
1426 dict_keys_only = Bool(
1427 False,
1427 False,
1428 help="""
1428 help="""
1429 Whether to show dict key matches only.
1429 Whether to show dict key matches only.
1430
1430
1431 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1431 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1432 """,
1432 """,
1433 )
1433 )
1434
1434
1435 suppress_competing_matchers = UnionTrait(
1435 suppress_competing_matchers = UnionTrait(
1436 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1436 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1437 default_value=None,
1437 default_value=None,
1438 help="""
1438 help="""
1439 Whether to suppress completions from other *Matchers*.
1439 Whether to suppress completions from other *Matchers*.
1440
1440
1441 When set to ``None`` (default) the matchers will attempt to auto-detect
1441 When set to ``None`` (default) the matchers will attempt to auto-detect
1442 whether suppression of other matchers is desirable. For example, at
1442 whether suppression of other matchers is desirable. For example, at
1443 the beginning of a line followed by `%` we expect a magic completion
1443 the beginning of a line followed by `%` we expect a magic completion
1444 to be the only applicable option, and after ``my_dict['`` we usually
1444 to be the only applicable option, and after ``my_dict['`` we usually
1445 expect a completion with an existing dictionary key.
1445 expect a completion with an existing dictionary key.
1446
1446
1447 If you want to disable this heuristic and see completions from all matchers,
1447 If you want to disable this heuristic and see completions from all matchers,
1448 set ``IPCompleter.suppress_competing_matchers = False``.
1448 set ``IPCompleter.suppress_competing_matchers = False``.
1449 To disable the heuristic for specific matchers provide a dictionary mapping:
1449 To disable the heuristic for specific matchers provide a dictionary mapping:
1450 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1450 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1451
1451
1452 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1452 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1453 completions to the set of matchers with the highest priority;
1453 completions to the set of matchers with the highest priority;
1454 this is equivalent to ``IPCompleter.merge_completions`` and
1454 this is equivalent to ``IPCompleter.merge_completions`` and
1455 can be beneficial for performance, but will sometimes omit relevant
1455 can be beneficial for performance, but will sometimes omit relevant
1456 candidates from matchers further down the priority list.
1456 candidates from matchers further down the priority list.
1457 """,
1457 """,
1458 ).tag(config=True)
1458 ).tag(config=True)
1459
1459
1460 merge_completions = Bool(
1460 merge_completions = Bool(
1461 True,
1461 True,
1462 help="""Whether to merge completion results into a single list
1462 help="""Whether to merge completion results into a single list
1463
1463
1464 If False, only the completion results from the first non-empty
1464 If False, only the completion results from the first non-empty
1465 completer will be returned.
1465 completer will be returned.
1466
1466
1467 As of version 8.6.0, setting the value to ``False`` is an alias for:
1467 As of version 8.6.0, setting the value to ``False`` is an alias for:
1468 ``IPCompleter.suppress_competing_matchers = True``.
1468 ``IPCompleter.suppress_competing_matchers = True``.
1469 """,
1469 """,
1470 ).tag(config=True)
1470 ).tag(config=True)
1471
1471
1472 disable_matchers = ListTrait(
1472 disable_matchers = ListTrait(
1473 Unicode(),
1473 Unicode(),
1474 help="""List of matchers to disable.
1474 help="""List of matchers to disable.
1475
1475
1476 The list should contain matcher identifiers (see :any:`completion_matcher`).
1476 The list should contain matcher identifiers (see :any:`completion_matcher`).
1477 """,
1477 """,
1478 ).tag(config=True)
1478 ).tag(config=True)
1479
1479
1480 omit__names = Enum(
1480 omit__names = Enum(
1481 (0, 1, 2),
1481 (0, 1, 2),
1482 default_value=2,
1482 default_value=2,
1483 help="""Instruct the completer to omit private method names
1483 help="""Instruct the completer to omit private method names
1484
1484
1485 Specifically, when completing on ``object.<tab>``.
1485 Specifically, when completing on ``object.<tab>``.
1486
1486
1487 When 2 [default]: all names that start with '_' will be excluded.
1487 When 2 [default]: all names that start with '_' will be excluded.
1488
1488
1489 When 1: all 'magic' names (``__foo__``) will be excluded.
1489 When 1: all 'magic' names (``__foo__``) will be excluded.
1490
1490
1491 When 0: nothing will be excluded.
1491 When 0: nothing will be excluded.
1492 """
1492 """
1493 ).tag(config=True)
1493 ).tag(config=True)
1494 limit_to__all__ = Bool(False,
1494 limit_to__all__ = Bool(False,
1495 help="""
1495 help="""
1496 DEPRECATED as of version 5.0.
1496 DEPRECATED as of version 5.0.
1497
1497
1498 Instruct the completer to use __all__ for the completion
1498 Instruct the completer to use __all__ for the completion
1499
1499
1500 Specifically, when completing on ``object.<tab>``.
1500 Specifically, when completing on ``object.<tab>``.
1501
1501
1502 When True: only those names in obj.__all__ will be included.
1502 When True: only those names in obj.__all__ will be included.
1503
1503
1504 When False [default]: the __all__ attribute is ignored
1504 When False [default]: the __all__ attribute is ignored
1505 """,
1505 """,
1506 ).tag(config=True)
1506 ).tag(config=True)
1507
1507
1508 profile_completions = Bool(
1508 profile_completions = Bool(
1509 default_value=False,
1509 default_value=False,
1510 help="If True, emit profiling data for completion subsystem using cProfile."
1510 help="If True, emit profiling data for completion subsystem using cProfile."
1511 ).tag(config=True)
1511 ).tag(config=True)
1512
1512
1513 profiler_output_dir = Unicode(
1513 profiler_output_dir = Unicode(
1514 default_value=".completion_profiles",
1514 default_value=".completion_profiles",
1515 help="Template for path at which to output profile data for completions."
1515 help="Template for path at which to output profile data for completions."
1516 ).tag(config=True)
1516 ).tag(config=True)
1517
1517
1518 @observe('limit_to__all__')
1518 @observe('limit_to__all__')
1519 def _limit_to_all_changed(self, change):
1519 def _limit_to_all_changed(self, change):
1520 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1520 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1521 'value has been deprecated since IPython 5.0, will be made to have '
1521 'value has been deprecated since IPython 5.0, will be made to have '
1522 'no effects and then removed in future version of IPython.',
1522 'no effects and then removed in future version of IPython.',
1523 UserWarning)
1523 UserWarning)
1524
1524
1525 def __init__(
1525 def __init__(
1526 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1526 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1527 ):
1527 ):
1528 """IPCompleter() -> completer
1528 """IPCompleter() -> completer
1529
1529
1530 Return a completer object.
1530 Return a completer object.
1531
1531
1532 Parameters
1532 Parameters
1533 ----------
1533 ----------
1534 shell
1534 shell
1535 a pointer to the ipython shell itself. This is needed
1535 a pointer to the ipython shell itself. This is needed
1536 because this completer knows about magic functions, and those can
1536 because this completer knows about magic functions, and those can
1537 only be accessed via the ipython instance.
1537 only be accessed via the ipython instance.
1538 namespace : dict, optional
1538 namespace : dict, optional
1539 an optional dict where completions are performed.
1539 an optional dict where completions are performed.
1540 global_namespace : dict, optional
1540 global_namespace : dict, optional
1541 secondary optional dict for completions, to
1541 secondary optional dict for completions, to
1542 handle cases (such as IPython embedded inside functions) where
1542 handle cases (such as IPython embedded inside functions) where
1543 both Python scopes are visible.
1543 both Python scopes are visible.
1544 config : Config
1544 config : Config
1545 traitlets Config object
1545 traitlets Config object
1546 **kwargs
1546 **kwargs
1547 passed to super class unmodified.
1547 passed to super class unmodified.
1548 """
1548 """
1549
1549
1550 self.magic_escape = ESC_MAGIC
1550 self.magic_escape = ESC_MAGIC
1551 self.splitter = CompletionSplitter()
1551 self.splitter = CompletionSplitter()
1552
1552
1553 # _greedy_changed() depends on splitter and readline being defined:
1553 # _greedy_changed() depends on splitter and readline being defined:
1554 super().__init__(
1554 super().__init__(
1555 namespace=namespace,
1555 namespace=namespace,
1556 global_namespace=global_namespace,
1556 global_namespace=global_namespace,
1557 config=config,
1557 config=config,
1558 **kwargs,
1558 **kwargs,
1559 )
1559 )
1560
1560
1561 # List where completion matches will be stored
1561 # List where completion matches will be stored
1562 self.matches = []
1562 self.matches = []
1563 self.shell = shell
1563 self.shell = shell
1564 # Regexp to split filenames with spaces in them
1564 # Regexp to split filenames with spaces in them
1565 self.space_name_re = re.compile(r'([^\\] )')
1565 self.space_name_re = re.compile(r'([^\\] )')
1566 # Hold a local ref. to glob.glob for speed
1566 # Hold a local ref. to glob.glob for speed
1567 self.glob = glob.glob
1567 self.glob = glob.glob
1568
1568
1569 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1569 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1570 # buffers, to avoid completion problems.
1570 # buffers, to avoid completion problems.
1571 term = os.environ.get('TERM','xterm')
1571 term = os.environ.get('TERM','xterm')
1572 self.dumb_terminal = term in ['dumb','emacs']
1572 self.dumb_terminal = term in ['dumb','emacs']
1573
1573
1574 # Special handling of backslashes needed in win32 platforms
1574 # Special handling of backslashes needed in win32 platforms
1575 if sys.platform == "win32":
1575 if sys.platform == "win32":
1576 self.clean_glob = self._clean_glob_win32
1576 self.clean_glob = self._clean_glob_win32
1577 else:
1577 else:
1578 self.clean_glob = self._clean_glob
1578 self.clean_glob = self._clean_glob
1579
1579
1580 #regexp to parse docstring for function signature
1580 #regexp to parse docstring for function signature
1581 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1581 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1582 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1582 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1583 #use this if positional argument name is also needed
1583 #use this if positional argument name is also needed
1584 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1584 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1585
1585
1586 self.magic_arg_matchers = [
1586 self.magic_arg_matchers = [
1587 self.magic_config_matcher,
1587 self.magic_config_matcher,
1588 self.magic_color_matcher,
1588 self.magic_color_matcher,
1589 ]
1589 ]
1590
1590
1591 # This is set externally by InteractiveShell
1591 # This is set externally by InteractiveShell
1592 self.custom_completers = None
1592 self.custom_completers = None
1593
1593
1594 # This is a list of names of unicode characters that can be completed
1594 # This is a list of names of unicode characters that can be completed
1595 # into their corresponding unicode value. The list is large, so we
1595 # into their corresponding unicode value. The list is large, so we
1596 # lazily initialize it on first use. Consuming code should access this
1596 # lazily initialize it on first use. Consuming code should access this
1597 # attribute through the `@unicode_names` property.
1597 # attribute through the `@unicode_names` property.
1598 self._unicode_names = None
1598 self._unicode_names = None
1599
1599
1600 self._backslash_combining_matchers = [
1600 self._backslash_combining_matchers = [
1601 self.latex_name_matcher,
1601 self.latex_name_matcher,
1602 self.unicode_name_matcher,
1602 self.unicode_name_matcher,
1603 back_latex_name_matcher,
1603 back_latex_name_matcher,
1604 back_unicode_name_matcher,
1604 back_unicode_name_matcher,
1605 self.fwd_unicode_matcher,
1605 self.fwd_unicode_matcher,
1606 ]
1606 ]
1607
1607
1608 if not self.backslash_combining_completions:
1608 if not self.backslash_combining_completions:
1609 for matcher in self._backslash_combining_matchers:
1609 for matcher in self._backslash_combining_matchers:
1610 self.disable_matchers.append(matcher.matcher_identifier)
1610 self.disable_matchers.append(matcher.matcher_identifier)
1611
1611
1612 if not self.merge_completions:
1612 if not self.merge_completions:
1613 self.suppress_competing_matchers = True
1613 self.suppress_competing_matchers = True
1614
1614
1615 @property
1615 @property
1616 def matchers(self) -> List[Matcher]:
1616 def matchers(self) -> List[Matcher]:
1617 """All active matcher routines for completion"""
1617 """All active matcher routines for completion"""
1618 if self.dict_keys_only:
1618 if self.dict_keys_only:
1619 return [self.dict_key_matcher]
1619 return [self.dict_key_matcher]
1620
1620
1621 if self.use_jedi:
1621 if self.use_jedi:
1622 return [
1622 return [
1623 *self.custom_matchers,
1623 *self.custom_matchers,
1624 *self._backslash_combining_matchers,
1624 *self._backslash_combining_matchers,
1625 *self.magic_arg_matchers,
1625 *self.magic_arg_matchers,
1626 self.custom_completer_matcher,
1626 self.custom_completer_matcher,
1627 self.magic_matcher,
1627 self.magic_matcher,
1628 self._jedi_matcher,
1628 self._jedi_matcher,
1629 self.dict_key_matcher,
1629 self.dict_key_matcher,
1630 self.file_matcher,
1630 self.file_matcher,
1631 ]
1631 ]
1632 else:
1632 else:
1633 return [
1633 return [
1634 *self.custom_matchers,
1634 *self.custom_matchers,
1635 *self._backslash_combining_matchers,
1635 *self._backslash_combining_matchers,
1636 *self.magic_arg_matchers,
1636 *self.magic_arg_matchers,
1637 self.custom_completer_matcher,
1637 self.custom_completer_matcher,
1638 self.dict_key_matcher,
1638 self.dict_key_matcher,
1639 # TODO: convert python_matches to v2 API
1639 # TODO: convert python_matches to v2 API
1640 self.magic_matcher,
1640 self.magic_matcher,
1641 self.python_matches,
1641 self.python_matches,
1642 self.file_matcher,
1642 self.file_matcher,
1643 self.python_func_kw_matcher,
1643 self.python_func_kw_matcher,
1644 ]
1644 ]
1645
1645
1646 def all_completions(self, text:str) -> List[str]:
1646 def all_completions(self, text:str) -> List[str]:
1647 """
1647 """
1648 Wrapper around the completion methods for the benefit of emacs.
1648 Wrapper around the completion methods for the benefit of emacs.
1649 """
1649 """
1650 prefix = text.rpartition('.')[0]
1650 prefix = text.rpartition('.')[0]
1651 with provisionalcompleter():
1651 with provisionalcompleter():
1652 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1652 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1653 for c in self.completions(text, len(text))]
1653 for c in self.completions(text, len(text))]
1654
1654
1655 return self.complete(text)[1]
1655 return self.complete(text)[1]
1656
1656
1657 def _clean_glob(self, text:str):
1657 def _clean_glob(self, text:str):
1658 return self.glob("%s*" % text)
1658 return self.glob("%s*" % text)
1659
1659
1660 def _clean_glob_win32(self, text:str):
1660 def _clean_glob_win32(self, text:str):
1661 return [f.replace("\\","/")
1661 return [f.replace("\\","/")
1662 for f in self.glob("%s*" % text)]
1662 for f in self.glob("%s*" % text)]
1663
1663
1664 @context_matcher()
1664 @context_matcher()
1665 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1665 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1666 """Same as :any:`file_matches`, but adopted to new Matcher API."""
1666 """Same as :any:`file_matches`, but adopted to new Matcher API."""
1667 matches = self.file_matches(context.token)
1667 matches = self.file_matches(context.token)
1668 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1668 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1669 # starts with `/home/`, `C:\`, etc)
1669 # starts with `/home/`, `C:\`, etc)
1670 return _convert_matcher_v1_result_to_v2(matches, type="path")
1670 return _convert_matcher_v1_result_to_v2(matches, type="path")
1671
1671
1672 def file_matches(self, text: str) -> List[str]:
1672 def file_matches(self, text: str) -> List[str]:
1673 """Match filenames, expanding ~USER type strings.
1673 """Match filenames, expanding ~USER type strings.
1674
1674
1675 Most of the seemingly convoluted logic in this completer is an
1675 Most of the seemingly convoluted logic in this completer is an
1676 attempt to handle filenames with spaces in them. And yet it's not
1676 attempt to handle filenames with spaces in them. And yet it's not
1677 quite perfect, because Python's readline doesn't expose all of the
1677 quite perfect, because Python's readline doesn't expose all of the
1678 GNU readline details needed for this to be done correctly.
1678 GNU readline details needed for this to be done correctly.
1679
1679
1680 For a filename with a space in it, the printed completions will be
1680 For a filename with a space in it, the printed completions will be
1681 only the parts after what's already been typed (instead of the
1681 only the parts after what's already been typed (instead of the
1682 full completions, as is normally done). I don't think with the
1682 full completions, as is normally done). I don't think with the
1683 current (as of Python 2.3) Python readline it's possible to do
1683 current (as of Python 2.3) Python readline it's possible to do
1684 better.
1684 better.
1685
1685
1686 .. deprecated:: 8.6
1686 .. deprecated:: 8.6
1687 You can use :meth:`file_matcher` instead.
1687 You can use :meth:`file_matcher` instead.
1688 """
1688 """
1689
1689
1690 # chars that require escaping with backslash - i.e. chars
1690 # chars that require escaping with backslash - i.e. chars
1691 # that readline treats incorrectly as delimiters, but we
1691 # that readline treats incorrectly as delimiters, but we
1692 # don't want to treat as delimiters in filename matching
1692 # don't want to treat as delimiters in filename matching
1693 # when escaped with backslash
1693 # when escaped with backslash
1694 if text.startswith('!'):
1694 if text.startswith('!'):
1695 text = text[1:]
1695 text = text[1:]
1696 text_prefix = u'!'
1696 text_prefix = u'!'
1697 else:
1697 else:
1698 text_prefix = u''
1698 text_prefix = u''
1699
1699
1700 text_until_cursor = self.text_until_cursor
1700 text_until_cursor = self.text_until_cursor
1701 # track strings with open quotes
1701 # track strings with open quotes
1702 open_quotes = has_open_quotes(text_until_cursor)
1702 open_quotes = has_open_quotes(text_until_cursor)
1703
1703
1704 if '(' in text_until_cursor or '[' in text_until_cursor:
1704 if '(' in text_until_cursor or '[' in text_until_cursor:
1705 lsplit = text
1705 lsplit = text
1706 else:
1706 else:
1707 try:
1707 try:
1708 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1708 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1709 lsplit = arg_split(text_until_cursor)[-1]
1709 lsplit = arg_split(text_until_cursor)[-1]
1710 except ValueError:
1710 except ValueError:
1711 # typically an unmatched ", or backslash without escaped char.
1711 # typically an unmatched ", or backslash without escaped char.
1712 if open_quotes:
1712 if open_quotes:
1713 lsplit = text_until_cursor.split(open_quotes)[-1]
1713 lsplit = text_until_cursor.split(open_quotes)[-1]
1714 else:
1714 else:
1715 return []
1715 return []
1716 except IndexError:
1716 except IndexError:
1717 # tab pressed on empty line
1717 # tab pressed on empty line
1718 lsplit = ""
1718 lsplit = ""
1719
1719
1720 if not open_quotes and lsplit != protect_filename(lsplit):
1720 if not open_quotes and lsplit != protect_filename(lsplit):
1721 # if protectables are found, do matching on the whole escaped name
1721 # if protectables are found, do matching on the whole escaped name
1722 has_protectables = True
1722 has_protectables = True
1723 text0,text = text,lsplit
1723 text0,text = text,lsplit
1724 else:
1724 else:
1725 has_protectables = False
1725 has_protectables = False
1726 text = os.path.expanduser(text)
1726 text = os.path.expanduser(text)
1727
1727
1728 if text == "":
1728 if text == "":
1729 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1729 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1730
1730
1731 # Compute the matches from the filesystem
1731 # Compute the matches from the filesystem
1732 if sys.platform == 'win32':
1732 if sys.platform == 'win32':
1733 m0 = self.clean_glob(text)
1733 m0 = self.clean_glob(text)
1734 else:
1734 else:
1735 m0 = self.clean_glob(text.replace('\\', ''))
1735 m0 = self.clean_glob(text.replace('\\', ''))
1736
1736
1737 if has_protectables:
1737 if has_protectables:
1738 # If we had protectables, we need to revert our changes to the
1738 # If we had protectables, we need to revert our changes to the
1739 # beginning of filename so that we don't double-write the part
1739 # beginning of filename so that we don't double-write the part
1740 # of the filename we have so far
1740 # of the filename we have so far
1741 len_lsplit = len(lsplit)
1741 len_lsplit = len(lsplit)
1742 matches = [text_prefix + text0 +
1742 matches = [text_prefix + text0 +
1743 protect_filename(f[len_lsplit:]) for f in m0]
1743 protect_filename(f[len_lsplit:]) for f in m0]
1744 else:
1744 else:
1745 if open_quotes:
1745 if open_quotes:
1746 # if we have a string with an open quote, we don't need to
1746 # if we have a string with an open quote, we don't need to
1747 # protect the names beyond the quote (and we _shouldn't_, as
1747 # protect the names beyond the quote (and we _shouldn't_, as
1748 # it would cause bugs when the filesystem call is made).
1748 # it would cause bugs when the filesystem call is made).
1749 matches = m0 if sys.platform == "win32" else\
1749 matches = m0 if sys.platform == "win32" else\
1750 [protect_filename(f, open_quotes) for f in m0]
1750 [protect_filename(f, open_quotes) for f in m0]
1751 else:
1751 else:
1752 matches = [text_prefix +
1752 matches = [text_prefix +
1753 protect_filename(f) for f in m0]
1753 protect_filename(f) for f in m0]
1754
1754
1755 # Mark directories in input list by appending '/' to their names.
1755 # Mark directories in input list by appending '/' to their names.
1756 return [x+'/' if os.path.isdir(x) else x for x in matches]
1756 return [x+'/' if os.path.isdir(x) else x for x in matches]
1757
1757
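# Illustrative sketch (not part of the original file): how the escaping logic
# above behaves. `protect_filename` is the module-level helper that
# `file_matches` calls; the filenames are hypothetical, and the exact escaping
# is platform dependent (Windows wraps names in quotes rather than
# backslash-escaping them).
#
#     protect_filename("my file.txt")
#     # on POSIX-like platforms -> something like 'my\\ file.txt'
#     protect_filename("my file.txt", "'")
#     # inside an open single quote only that quote character is treated as
#     # protectable, so the space is left alone.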
1758 @context_matcher()
1758 @context_matcher()
1759 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1759 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1760 """Match magics."""
1760 """Match magics."""
1761 text = context.token
1761 text = context.token
1762 matches = self.magic_matches(text)
1762 matches = self.magic_matches(text)
1763 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1763 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1764 is_magic_prefix = len(text) > 0 and text[0] == "%"
1764 is_magic_prefix = len(text) > 0 and text[0] == "%"
1765 result["suppress"] = is_magic_prefix and bool(result["completions"])
1765 result["suppress"] = is_magic_prefix and bool(result["completions"])
1766 return result
1766 return result
1767
1767
1768 def magic_matches(self, text: str):
1768 def magic_matches(self, text: str):
1769 """Match magics.
1769 """Match magics.
1770
1770
1771 .. deprecated:: 8.6
1771 .. deprecated:: 8.6
1772 You can use :meth:`magic_matcher` instead.
1772 You can use :meth:`magic_matcher` instead.
1773 """
1773 """
1774 # Get all shell magics now rather than statically, so magics loaded at
1774 # Get all shell magics now rather than statically, so magics loaded at
1775 # runtime show up too.
1775 # runtime show up too.
1776 lsm = self.shell.magics_manager.lsmagic()
1776 lsm = self.shell.magics_manager.lsmagic()
1777 line_magics = lsm['line']
1777 line_magics = lsm['line']
1778 cell_magics = lsm['cell']
1778 cell_magics = lsm['cell']
1779 pre = self.magic_escape
1779 pre = self.magic_escape
1780 pre2 = pre+pre
1780 pre2 = pre+pre
1781
1781
1782 explicit_magic = text.startswith(pre)
1782 explicit_magic = text.startswith(pre)
1783
1783
1784 # Completion logic:
1784 # Completion logic:
1785 # - user gives %%: only do cell magics
1785 # - user gives %%: only do cell magics
1786 # - user gives %: do both line and cell magics
1786 # - user gives %: do both line and cell magics
1787 # - no prefix: do both
1787 # - no prefix: do both
1788 # In other words, line magics are skipped if the user gives %% explicitly
1788 # In other words, line magics are skipped if the user gives %% explicitly
1789 #
1789 #
1790 # We also exclude magics that match any currently visible names:
1790 # We also exclude magics that match any currently visible names:
1791 # https://github.com/ipython/ipython/issues/4877, unless the user has
1791 # https://github.com/ipython/ipython/issues/4877, unless the user has
1792 # typed a %:
1792 # typed a %:
1793 # https://github.com/ipython/ipython/issues/10754
1793 # https://github.com/ipython/ipython/issues/10754
1794 bare_text = text.lstrip(pre)
1794 bare_text = text.lstrip(pre)
1795 global_matches = self.global_matches(bare_text)
1795 global_matches = self.global_matches(bare_text)
1796 if not explicit_magic:
1796 if not explicit_magic:
1797 def matches(magic):
1797 def matches(magic):
1798 """
1798 """
1799 Filter magics, in particular remove magics that match
1799 Filter magics, in particular remove magics that match
1800 a name present in global namespace.
1800 a name present in global namespace.
1801 """
1801 """
1802 return ( magic.startswith(bare_text) and
1802 return ( magic.startswith(bare_text) and
1803 magic not in global_matches )
1803 magic not in global_matches )
1804 else:
1804 else:
1805 def matches(magic):
1805 def matches(magic):
1806 return magic.startswith(bare_text)
1806 return magic.startswith(bare_text)
1807
1807
1808 comp = [ pre2+m for m in cell_magics if matches(m)]
1808 comp = [ pre2+m for m in cell_magics if matches(m)]
1809 if not text.startswith(pre2):
1809 if not text.startswith(pre2):
1810 comp += [ pre+m for m in line_magics if matches(m)]
1810 comp += [ pre+m for m in line_magics if matches(m)]
1811
1811
1812 return comp
1812 return comp
1813
1813
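# Illustrative sketch (not part of the original file): the prefix rules from
# the "Completion logic" comment above. `ipc` stands for a hypothetical
# IPCompleter instance; the magic names are standard IPython magics.
#
#     ipc.magic_matches('%ti')   # both kinds, e.g. ['%%time', '%time', '%timeit']
#     ipc.magic_matches('%%ti')  # cell magics only, e.g. ['%%time', '%%timeit']
#     ipc.magic_matches('ti')    # no prefix: both kinds, but names that shadow
#                                # a variable in the user namespace are dropped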
1814 @context_matcher()
1814 @context_matcher()
1815 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1815 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1816 """Match class names and attributes for %config magic."""
1816 """Match class names and attributes for %config magic."""
1817 # NOTE: uses `line_buffer` equivalent for compatibility
1817 # NOTE: uses `line_buffer` equivalent for compatibility
1818 matches = self.magic_config_matches(context.line_with_cursor)
1818 matches = self.magic_config_matches(context.line_with_cursor)
1819 return _convert_matcher_v1_result_to_v2(matches, type="param")
1819 return _convert_matcher_v1_result_to_v2(matches, type="param")
1820
1820
1821 def magic_config_matches(self, text: str) -> List[str]:
1821 def magic_config_matches(self, text: str) -> List[str]:
1822 """Match class names and attributes for %config magic.
1822 """Match class names and attributes for %config magic.
1823
1823
1824 .. deprecated:: 8.6
1824 .. deprecated:: 8.6
1825 You can use :meth:`magic_config_matcher` instead.
1825 You can use :meth:`magic_config_matcher` instead.
1826 """
1826 """
1827 texts = text.strip().split()
1827 texts = text.strip().split()
1828
1828
1829 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1829 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1830 # get all configuration classes
1830 # get all configuration classes
1831 classes = sorted(set([ c for c in self.shell.configurables
1831 classes = sorted(set([ c for c in self.shell.configurables
1832 if c.__class__.class_traits(config=True)
1832 if c.__class__.class_traits(config=True)
1833 ]), key=lambda x: x.__class__.__name__)
1833 ]), key=lambda x: x.__class__.__name__)
1834 classnames = [ c.__class__.__name__ for c in classes ]
1834 classnames = [ c.__class__.__name__ for c in classes ]
1835
1835
1836 # return all classnames if config or %config is given
1836 # return all classnames if config or %config is given
1837 if len(texts) == 1:
1837 if len(texts) == 1:
1838 return classnames
1838 return classnames
1839
1839
1840 # match classname
1840 # match classname
1841 classname_texts = texts[1].split('.')
1841 classname_texts = texts[1].split('.')
1842 classname = classname_texts[0]
1842 classname = classname_texts[0]
1843 classname_matches = [ c for c in classnames
1843 classname_matches = [ c for c in classnames
1844 if c.startswith(classname) ]
1844 if c.startswith(classname) ]
1845
1845
1846 # return matched classes or the matched class with attributes
1846 # return matched classes or the matched class with attributes
1847 if texts[1].find('.') < 0:
1847 if texts[1].find('.') < 0:
1848 return classname_matches
1848 return classname_matches
1849 elif len(classname_matches) == 1 and \
1849 elif len(classname_matches) == 1 and \
1850 classname_matches[0] == classname:
1850 classname_matches[0] == classname:
1851 cls = classes[classnames.index(classname)].__class__
1851 cls = classes[classnames.index(classname)].__class__
1852 help = cls.class_get_help()
1852 help = cls.class_get_help()
1853 # strip leading '--' from cl-args:
1853 # strip leading '--' from cl-args:
1854 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1854 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1855 return [ attr.split('=')[0]
1855 return [ attr.split('=')[0]
1856 for attr in help.strip().splitlines()
1856 for attr in help.strip().splitlines()
1857 if attr.startswith(texts[1]) ]
1857 if attr.startswith(texts[1]) ]
1858 return []
1858 return []
1859
1859
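# Illustrative sketch (not part of the original file): the three stages of
# %config completion implemented above. `ipc` is a hypothetical IPCompleter
# instance; IPCompleter and its `greedy` trait are used only as examples of
# the expected output shape.
#
#     ipc.magic_config_matches('%config ')               # all configurable class names
#     ipc.magic_config_matches('%config IPComp')         # e.g. ['IPCompleter']
#     ipc.magic_config_matches('%config IPCompleter.g')  # e.g. ['IPCompleter.greedy']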
1860 @context_matcher()
1860 @context_matcher()
1861 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1861 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1862 """Match color schemes for %colors magic."""
1862 """Match color schemes for %colors magic."""
1863 # NOTE: uses `line_buffer` equivalent for compatibility
1863 # NOTE: uses `line_buffer` equivalent for compatibility
1864 matches = self.magic_color_matches(context.line_with_cursor)
1864 matches = self.magic_color_matches(context.line_with_cursor)
1865 return _convert_matcher_v1_result_to_v2(matches, type="param")
1865 return _convert_matcher_v1_result_to_v2(matches, type="param")
1866
1866
1867 def magic_color_matches(self, text: str) -> List[str]:
1867 def magic_color_matches(self, text: str) -> List[str]:
1868 """Match color schemes for %colors magic.
1868 """Match color schemes for %colors magic.
1869
1869
1870 .. deprecated:: 8.6
1870 .. deprecated:: 8.6
1871 You can use :meth:`magic_color_matcher` instead.
1871 You can use :meth:`magic_color_matcher` instead.
1872 """
1872 """
1873 texts = text.split()
1873 texts = text.split()
1874 if text.endswith(' '):
1874 if text.endswith(' '):
1875 # .split() strips off the trailing whitespace. Add '' back
1875 # .split() strips off the trailing whitespace. Add '' back
1876 # so that: '%colors ' -> ['%colors', '']
1876 # so that: '%colors ' -> ['%colors', '']
1877 texts.append('')
1877 texts.append('')
1878
1878
1879 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1879 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1880 prefix = texts[1]
1880 prefix = texts[1]
1881 return [ color for color in InspectColors.keys()
1881 return [ color for color in InspectColors.keys()
1882 if color.startswith(prefix) ]
1882 if color.startswith(prefix) ]
1883 return []
1883 return []
1884
1884
1885 @context_matcher(identifier="IPCompleter.jedi_matcher")
1885 @context_matcher(identifier="IPCompleter.jedi_matcher")
1886 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1886 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1887 matches = self._jedi_matches(
1887 matches = self._jedi_matches(
1888 cursor_column=context.cursor_position,
1888 cursor_column=context.cursor_position,
1889 cursor_line=context.cursor_line,
1889 cursor_line=context.cursor_line,
1890 text=context.full_text,
1890 text=context.full_text,
1891 )
1891 )
1892 return {
1892 return {
1893 "completions": matches,
1893 "completions": matches,
1894 # static analysis should not suppress other matchers
1894 # static analysis should not suppress other matchers
1895 "suppress": False,
1895 "suppress": False,
1896 }
1896 }
1897
1897
1898 def _jedi_matches(
1898 def _jedi_matches(
1899 self, cursor_column: int, cursor_line: int, text: str
1899 self, cursor_column: int, cursor_line: int, text: str
1900 ) -> Iterable[_JediCompletionLike]:
1900 ) -> Iterable[_JediCompletionLike]:
1901 """
1901 """
1902 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
1902 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
1903 cursor position.
1903 cursor position.
1904
1904
1905 Parameters
1905 Parameters
1906 ----------
1906 ----------
1907 cursor_column : int
1907 cursor_column : int
1908 column position of the cursor in ``text``, 0-indexed.
1908 column position of the cursor in ``text``, 0-indexed.
1909 cursor_line : int
1909 cursor_line : int
1910 line position of the cursor in ``text``, 0-indexed
1910 line position of the cursor in ``text``, 0-indexed
1911 text : str
1911 text : str
1912 text to complete
1912 text to complete
1913
1913
1914 Notes
1914 Notes
1915 -----
1915 -----
1916 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1916 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1917 object containing a string with the Jedi debug information attached.
1917 object containing a string with the Jedi debug information attached.
1918
1918
1919 .. deprecated:: 8.6
1919 .. deprecated:: 8.6
1920 You can use :meth:`_jedi_matcher` instead.
1920 You can use :meth:`_jedi_matcher` instead.
1921 """
1921 """
1922 namespaces = [self.namespace]
1922 namespaces = [self.namespace]
1923 if self.global_namespace is not None:
1923 if self.global_namespace is not None:
1924 namespaces.append(self.global_namespace)
1924 namespaces.append(self.global_namespace)
1925
1925
1926 completion_filter = lambda x:x
1926 completion_filter = lambda x:x
1927 offset = cursor_to_position(text, cursor_line, cursor_column)
1927 offset = cursor_to_position(text, cursor_line, cursor_column)
1928 # filter output if we are completing for object members
1928 # filter output if we are completing for object members
1929 if offset:
1929 if offset:
1930 pre = text[offset-1]
1930 pre = text[offset-1]
1931 if pre == '.':
1931 if pre == '.':
1932 if self.omit__names == 2:
1932 if self.omit__names == 2:
1933 completion_filter = lambda c:not c.name.startswith('_')
1933 completion_filter = lambda c:not c.name.startswith('_')
1934 elif self.omit__names == 1:
1934 elif self.omit__names == 1:
1935 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1935 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1936 elif self.omit__names == 0:
1936 elif self.omit__names == 0:
1937 completion_filter = lambda x:x
1937 completion_filter = lambda x:x
1938 else:
1938 else:
1939 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1939 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1940
1940
1941 interpreter = jedi.Interpreter(text[:offset], namespaces)
1941 interpreter = jedi.Interpreter(text[:offset], namespaces)
1942 try_jedi = True
1942 try_jedi = True
1943
1943
1944 try:
1944 try:
1945 # find the first token in the current tree -- if it is a ' or " then we are in a string
1945 # find the first token in the current tree -- if it is a ' or " then we are in a string
1946 completing_string = False
1946 completing_string = False
1947 try:
1947 try:
1948 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1948 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1949 except StopIteration:
1949 except StopIteration:
1950 pass
1950 pass
1951 else:
1951 else:
1952 # note the value may be ', ", or it may also be ''' or """, or
1952 # note the value may be ', ", or it may also be ''' or """, or
1953 # in some cases, """what/you/typed..., but all of these are
1953 # in some cases, """what/you/typed..., but all of these are
1954 # strings.
1954 # strings.
1955 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1955 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1956
1956
1957 # if we are in a string jedi is likely not the right candidate for
1957 # if we are in a string jedi is likely not the right candidate for
1958 # now. Skip it.
1958 # now. Skip it.
1959 try_jedi = not completing_string
1959 try_jedi = not completing_string
1960 except Exception as e:
1960 except Exception as e:
1961 # many things can go wrong; we are using a private API, so just don't crash.
1961 # many things can go wrong; we are using a private API, so just don't crash.
1962 if self.debug:
1962 if self.debug:
1963 print("Error detecting if completing a non-finished string :", e, '|')
1963 print("Error detecting if completing a non-finished string :", e, '|')
1964
1964
1965 if not try_jedi:
1965 if not try_jedi:
1966 return []
1966 return []
1967 try:
1967 try:
1968 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1968 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1969 except Exception as e:
1969 except Exception as e:
1970 if self.debug:
1970 if self.debug:
1971 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1971 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1972 else:
1972 else:
1973 return []
1973 return []
1974
1974
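# Illustrative sketch (not part of the original file): the coordinate
# convention used above. IPython passes 0-indexed lines and columns, while
# jedi.Interpreter.complete() expects 1-indexed lines, hence the
# `line=cursor_line + 1` in the call above. `ipc` is a hypothetical completer.
#
#     text = "import collections\ncollections.name"
#     # cursor at the end of the second line: line 1, column 16 (0-indexed)
#     list(ipc._jedi_matches(cursor_column=16, cursor_line=1, text=text))
#     # -> jedi Completions such as `namedtuple`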
1975 def python_matches(self, text: str) -> Iterable[str]:
1975 def python_matches(self, text: str) -> Iterable[str]:
1976 """Match attributes or global python names"""
1976 """Match attributes or global python names"""
1977 if "." in text:
1977 if "." in text:
1978 try:
1978 try:
1979 matches = self.attr_matches(text)
1979 matches = self.attr_matches(text)
1980 if text.endswith('.') and self.omit__names:
1980 if text.endswith('.') and self.omit__names:
1981 if self.omit__names == 1:
1981 if self.omit__names == 1:
1982 # true if txt is _not_ a __ name, false otherwise:
1982 # true if txt is _not_ a __ name, false otherwise:
1983 no__name = (lambda txt:
1983 no__name = (lambda txt:
1984 re.match(r'.*\.__.*?__',txt) is None)
1984 re.match(r'.*\.__.*?__',txt) is None)
1985 else:
1985 else:
1986 # true if txt is _not_ a _ name, false otherwise:
1986 # true if txt is _not_ a _ name, false otherwise:
1987 no__name = (lambda txt:
1987 no__name = (lambda txt:
1988 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1988 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1989 matches = filter(no__name, matches)
1989 matches = filter(no__name, matches)
1990 except NameError:
1990 except NameError:
1991 # catches <undefined attributes>.<tab>
1991 # catches <undefined attributes>.<tab>
1992 matches = []
1992 matches = []
1993 else:
1993 else:
1994 matches = self.global_matches(text)
1994 matches = self.global_matches(text)
1995 return matches
1995 return matches
1996
1996
1997 def _default_arguments_from_docstring(self, doc):
1997 def _default_arguments_from_docstring(self, doc):
1998 """Parse the first line of docstring for call signature.
1998 """Parse the first line of docstring for call signature.
1999
1999
2000 Docstring should be of the form 'min(iterable[, key=func])\n'.
2000 Docstring should be of the form 'min(iterable[, key=func])\n'.
2001 It can also parse cython docstring of the form
2001 It can also parse cython docstring of the form
2002 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2002 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2003 """
2003 """
2004 if doc is None:
2004 if doc is None:
2005 return []
2005 return []
2006
2006
2007 # care only about the first line
2007 # care only about the first line
2008 line = doc.lstrip().splitlines()[0]
2008 line = doc.lstrip().splitlines()[0]
2009
2009
2010 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2010 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2011 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2011 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2012 sig = self.docstring_sig_re.search(line)
2012 sig = self.docstring_sig_re.search(line)
2013 if sig is None:
2013 if sig is None:
2014 return []
2014 return []
2015 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2015 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2016 sig = sig.groups()[0].split(',')
2016 sig = sig.groups()[0].split(',')
2017 ret = []
2017 ret = []
2018 for s in sig:
2018 for s in sig:
2019 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2019 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2020 ret += self.docstring_kwd_re.findall(s)
2020 ret += self.docstring_kwd_re.findall(s)
2021 return ret
2021 return ret
2022
2022
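# Illustrative sketch (not part of the original file): what the docstring
# parsing above recovers. `ipc` is a hypothetical IPCompleter instance; the
# docstrings are the ones quoted in the method's own docstring.
#
#     ipc._default_arguments_from_docstring('min(iterable[, key=func])\n')
#     # -> a list containing the keyword-style names, e.g. ['key']
#     ipc._default_arguments_from_docstring(
#         'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)')
#     # -> e.g. ['ncall', 'resume', 'nsplit']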
2023 def _default_arguments(self, obj):
2023 def _default_arguments(self, obj):
2024 """Return the list of default arguments of obj if it is callable,
2024 """Return the list of default arguments of obj if it is callable,
2025 or empty list otherwise."""
2025 or empty list otherwise."""
2026 call_obj = obj
2026 call_obj = obj
2027 ret = []
2027 ret = []
2028 if inspect.isbuiltin(obj):
2028 if inspect.isbuiltin(obj):
2029 pass
2029 pass
2030 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2030 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2031 if inspect.isclass(obj):
2031 if inspect.isclass(obj):
2032 #for cython embedsignature=True the constructor docstring
2032 #for cython embedsignature=True the constructor docstring
2033 #belongs to the object itself not __init__
2033 #belongs to the object itself not __init__
2034 ret += self._default_arguments_from_docstring(
2034 ret += self._default_arguments_from_docstring(
2035 getattr(obj, '__doc__', ''))
2035 getattr(obj, '__doc__', ''))
2036 # for classes, check for __init__,__new__
2036 # for classes, check for __init__,__new__
2037 call_obj = (getattr(obj, '__init__', None) or
2037 call_obj = (getattr(obj, '__init__', None) or
2038 getattr(obj, '__new__', None))
2038 getattr(obj, '__new__', None))
2039 # for all others, check if they are __call__able
2039 # for all others, check if they are __call__able
2040 elif hasattr(obj, '__call__'):
2040 elif hasattr(obj, '__call__'):
2041 call_obj = obj.__call__
2041 call_obj = obj.__call__
2042 ret += self._default_arguments_from_docstring(
2042 ret += self._default_arguments_from_docstring(
2043 getattr(call_obj, '__doc__', ''))
2043 getattr(call_obj, '__doc__', ''))
2044
2044
2045 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2045 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2046 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2046 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2047
2047
2048 try:
2048 try:
2049 sig = inspect.signature(obj)
2049 sig = inspect.signature(obj)
2050 ret.extend(k for k, v in sig.parameters.items() if
2050 ret.extend(k for k, v in sig.parameters.items() if
2051 v.kind in _keeps)
2051 v.kind in _keeps)
2052 except ValueError:
2052 except ValueError:
2053 pass
2053 pass
2054
2054
2055 return list(set(ret))
2055 return list(set(ret))
2056
2056
2057 @context_matcher()
2057 @context_matcher()
2058 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2058 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2059 """Match named parameters (kwargs) of the last open function."""
2059 """Match named parameters (kwargs) of the last open function."""
2060 matches = self.python_func_kw_matches(context.token)
2060 matches = self.python_func_kw_matches(context.token)
2061 return _convert_matcher_v1_result_to_v2(matches, type="param")
2061 return _convert_matcher_v1_result_to_v2(matches, type="param")
2062
2062
2063 def python_func_kw_matches(self, text):
2063 def python_func_kw_matches(self, text):
2064 """Match named parameters (kwargs) of the last open function.
2064 """Match named parameters (kwargs) of the last open function.
2065
2065
2066 .. deprecated:: 8.6
2066 .. deprecated:: 8.6
2067 You can use :meth:`python_func_kw_matcher` instead.
2067 You can use :meth:`python_func_kw_matcher` instead.
2068 """
2068 """
2069
2069
2070 if "." in text: # a parameter cannot be dotted
2070 if "." in text: # a parameter cannot be dotted
2071 return []
2071 return []
2072 try: regexp = self.__funcParamsRegex
2072 try: regexp = self.__funcParamsRegex
2073 except AttributeError:
2073 except AttributeError:
2074 regexp = self.__funcParamsRegex = re.compile(r'''
2074 regexp = self.__funcParamsRegex = re.compile(r'''
2075 '.*?(?<!\\)' | # single quoted strings or
2075 '.*?(?<!\\)' | # single quoted strings or
2076 ".*?(?<!\\)" | # double quoted strings or
2076 ".*?(?<!\\)" | # double quoted strings or
2077 \w+ | # identifier
2077 \w+ | # identifier
2078 \S # other characters
2078 \S # other characters
2079 ''', re.VERBOSE | re.DOTALL)
2079 ''', re.VERBOSE | re.DOTALL)
2080 # 1. find the nearest identifier that comes before an unclosed
2080 # 1. find the nearest identifier that comes before an unclosed
2081 # parenthesis before the cursor
2081 # parenthesis before the cursor
2082 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2082 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2083 tokens = regexp.findall(self.text_until_cursor)
2083 tokens = regexp.findall(self.text_until_cursor)
2084 iterTokens = reversed(tokens); openPar = 0
2084 iterTokens = reversed(tokens); openPar = 0
2085
2085
2086 for token in iterTokens:
2086 for token in iterTokens:
2087 if token == ')':
2087 if token == ')':
2088 openPar -= 1
2088 openPar -= 1
2089 elif token == '(':
2089 elif token == '(':
2090 openPar += 1
2090 openPar += 1
2091 if openPar > 0:
2091 if openPar > 0:
2092 # found the last unclosed parenthesis
2092 # found the last unclosed parenthesis
2093 break
2093 break
2094 else:
2094 else:
2095 return []
2095 return []
2096 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2096 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2097 ids = []
2097 ids = []
2098 isId = re.compile(r'\w+$').match
2098 isId = re.compile(r'\w+$').match
2099
2099
2100 while True:
2100 while True:
2101 try:
2101 try:
2102 ids.append(next(iterTokens))
2102 ids.append(next(iterTokens))
2103 if not isId(ids[-1]):
2103 if not isId(ids[-1]):
2104 ids.pop(); break
2104 ids.pop(); break
2105 if not next(iterTokens) == '.':
2105 if not next(iterTokens) == '.':
2106 break
2106 break
2107 except StopIteration:
2107 except StopIteration:
2108 break
2108 break
2109
2109
2110 # Find all named arguments already assigned to, so as to avoid suggesting
2110 # Find all named arguments already assigned to, so as to avoid suggesting
2111 # them again
2111 # them again
2112 usedNamedArgs = set()
2112 usedNamedArgs = set()
2113 par_level = -1
2113 par_level = -1
2114 for token, next_token in zip(tokens, tokens[1:]):
2114 for token, next_token in zip(tokens, tokens[1:]):
2115 if token == '(':
2115 if token == '(':
2116 par_level += 1
2116 par_level += 1
2117 elif token == ')':
2117 elif token == ')':
2118 par_level -= 1
2118 par_level -= 1
2119
2119
2120 if par_level != 0:
2120 if par_level != 0:
2121 continue
2121 continue
2122
2122
2123 if next_token != '=':
2123 if next_token != '=':
2124 continue
2124 continue
2125
2125
2126 usedNamedArgs.add(token)
2126 usedNamedArgs.add(token)
2127
2127
2128 argMatches = []
2128 argMatches = []
2129 try:
2129 try:
2130 callableObj = '.'.join(ids[::-1])
2130 callableObj = '.'.join(ids[::-1])
2131 namedArgs = self._default_arguments(eval(callableObj,
2131 namedArgs = self._default_arguments(eval(callableObj,
2132 self.namespace))
2132 self.namespace))
2133
2133
2134 # Remove used named arguments from the list, no need to show twice
2134 # Remove used named arguments from the list, no need to show twice
2135 for namedArg in set(namedArgs) - usedNamedArgs:
2135 for namedArg in set(namedArgs) - usedNamedArgs:
2136 if namedArg.startswith(text):
2136 if namedArg.startswith(text):
2137 argMatches.append("%s=" %namedArg)
2137 argMatches.append("%s=" %namedArg)
2138 except:
2138 except:
2139 pass
2139 pass
2140
2140
2141 return argMatches
2141 return argMatches
2142
2142
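# Illustrative sketch (not part of the original file): the keyword-argument
# completion described above. The function and buffer are hypothetical.
#
#     def plot(x, y, color=None, label=None): ...
#
#     # With the buffer "plot(data, la" and token "la", the matcher walks the
#     # tokens backwards to the unclosed "plot(", evaluates "plot" in the user
#     # namespace, asks _default_arguments() for its parameters, and returns
#     # matches such as ["label="]; names already used as keywords in the call
#     # are filtered out.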
2143 @staticmethod
2143 @staticmethod
2144 def _get_keys(obj: Any) -> List[Any]:
2144 def _get_keys(obj: Any) -> List[Any]:
2145 # Objects can define their own completions by defining an
2145 # Objects can define their own completions by defining an
2146 # _ipython_key_completions_() method.
2146 # _ipython_key_completions_() method.
2147 method = get_real_method(obj, '_ipython_key_completions_')
2147 method = get_real_method(obj, '_ipython_key_completions_')
2148 if method is not None:
2148 if method is not None:
2149 return method()
2149 return method()
2150
2150
2151 # Special case some common in-memory dict-like types
2151 # Special case some common in-memory dict-like types
2152 if isinstance(obj, dict) or\
2152 if isinstance(obj, dict) or\
2153 _safe_isinstance(obj, 'pandas', 'DataFrame'):
2153 _safe_isinstance(obj, 'pandas', 'DataFrame'):
2154 try:
2154 try:
2155 return list(obj.keys())
2155 return list(obj.keys())
2156 except Exception:
2156 except Exception:
2157 return []
2157 return []
2158 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2158 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2159 _safe_isinstance(obj, 'numpy', 'void'):
2159 _safe_isinstance(obj, 'numpy', 'void'):
2160 return obj.dtype.names or []
2160 return obj.dtype.names or []
2161 return []
2161 return []
2162
2162
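# Illustrative sketch (not part of the original file): any object can publish
# its own key completions through the hook consulted above. The class below is
# hypothetical; `_ipython_key_completions_` is the real protocol name.
#
#     class Config:
#         def __init__(self):
#             self._data = {"host": "localhost", "port": 8080}
#         def __getitem__(self, key):
#             return self._data[key]
#         def _ipython_key_completions_(self):
#             # returned keys are offered when completing `cfg[<tab>`
#             return list(self._data)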
2163 @context_matcher()
2163 @context_matcher()
2164 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2164 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2165 """Match string keys in a dictionary, after e.g. ``foo[``."""
2165 """Match string keys in a dictionary, after e.g. ``foo[``."""
2166 matches = self.dict_key_matches(context.token)
2166 matches = self.dict_key_matches(context.token)
2167 return _convert_matcher_v1_result_to_v2(
2167 return _convert_matcher_v1_result_to_v2(
2168 matches, type="dict key", suppress_if_matches=True
2168 matches, type="dict key", suppress_if_matches=True
2169 )
2169 )
2170
2170
2171 def dict_key_matches(self, text: str) -> List[str]:
2171 def dict_key_matches(self, text: str) -> List[str]:
2172 """Match string keys in a dictionary, after e.g. ``foo[``.
2172 """Match string keys in a dictionary, after e.g. ``foo[``.
2173
2173
2174 .. deprecated:: 8.6
2174 .. deprecated:: 8.6
2175 You can use :meth:`dict_key_matcher` instead.
2175 You can use :meth:`dict_key_matcher` instead.
2176 """
2176 """
2177
2177
2178 if self.__dict_key_regexps is not None:
2178 if self.__dict_key_regexps is not None:
2179 regexps = self.__dict_key_regexps
2179 regexps = self.__dict_key_regexps
2180 else:
2180 else:
2181 dict_key_re_fmt = r'''(?x)
2181 dict_key_re_fmt = r'''(?x)
2182 ( # match dict-referring expression wrt greedy setting
2182 ( # match dict-referring expression wrt greedy setting
2183 %s
2183 %s
2184 )
2184 )
2185 \[ # open bracket
2185 \[ # open bracket
2186 \s* # and optional whitespace
2186 \s* # and optional whitespace
2187 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2187 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2188 ((?:[uUbB]? # string prefix (r not handled)
2188 ((?:[uUbB]? # string prefix (r not handled)
2189 (?:
2189 (?:
2190 '(?:[^']|(?<!\\)\\')*'
2190 '(?:[^']|(?<!\\)\\')*'
2191 |
2191 |
2192 "(?:[^"]|(?<!\\)\\")*"
2192 "(?:[^"]|(?<!\\)\\")*"
2193 )
2193 )
2194 \s*,\s*
2194 \s*,\s*
2195 )*)
2195 )*)
2196 ([uUbB]? # string prefix (r not handled)
2196 ([uUbB]? # string prefix (r not handled)
2197 (?: # unclosed string
2197 (?: # unclosed string
2198 '(?:[^']|(?<!\\)\\')*
2198 '(?:[^']|(?<!\\)\\')*
2199 |
2199 |
2200 "(?:[^"]|(?<!\\)\\")*
2200 "(?:[^"]|(?<!\\)\\")*
2201 )
2201 )
2202 )?
2202 )?
2203 $
2203 $
2204 '''
2204 '''
2205 regexps = self.__dict_key_regexps = {
2205 regexps = self.__dict_key_regexps = {
2206 False: re.compile(dict_key_re_fmt % r'''
2206 False: re.compile(dict_key_re_fmt % r'''
2207 # identifiers separated by .
2207 # identifiers separated by .
2208 (?!\d)\w+
2208 (?!\d)\w+
2209 (?:\.(?!\d)\w+)*
2209 (?:\.(?!\d)\w+)*
2210 '''),
2210 '''),
2211 True: re.compile(dict_key_re_fmt % '''
2211 True: re.compile(dict_key_re_fmt % '''
2212 .+
2212 .+
2213 ''')
2213 ''')
2214 }
2214 }
2215
2215
2216 match = regexps[self.greedy].search(self.text_until_cursor)
2216 match = regexps[self.greedy].search(self.text_until_cursor)
2217
2217
2218 if match is None:
2218 if match is None:
2219 return []
2219 return []
2220
2220
2221 expr, prefix0, prefix = match.groups()
2221 expr, prefix0, prefix = match.groups()
2222 try:
2222 try:
2223 obj = eval(expr, self.namespace)
2223 obj = eval(expr, self.namespace)
2224 except Exception:
2224 except Exception:
2225 try:
2225 try:
2226 obj = eval(expr, self.global_namespace)
2226 obj = eval(expr, self.global_namespace)
2227 except Exception:
2227 except Exception:
2228 return []
2228 return []
2229
2229
2230 keys = self._get_keys(obj)
2230 keys = self._get_keys(obj)
2231 if not keys:
2231 if not keys:
2232 return keys
2232 return keys
2233
2233
2234 extra_prefix = eval(prefix0) if prefix0 != '' else None
2234 extra_prefix = eval(prefix0) if prefix0 != '' else None
2235
2235
2236 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2236 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2237 if not matches:
2237 if not matches:
2238 return matches
2238 return matches
2239
2239
2240 # get the cursor position of
2240 # get the cursor position of
2241 # - the text being completed
2241 # - the text being completed
2242 # - the start of the key text
2242 # - the start of the key text
2243 # - the start of the completion
2243 # - the start of the completion
2244 text_start = len(self.text_until_cursor) - len(text)
2244 text_start = len(self.text_until_cursor) - len(text)
2245 if prefix:
2245 if prefix:
2246 key_start = match.start(3)
2246 key_start = match.start(3)
2247 completion_start = key_start + token_offset
2247 completion_start = key_start + token_offset
2248 else:
2248 else:
2249 key_start = completion_start = match.end()
2249 key_start = completion_start = match.end()
2250
2250
2251 # grab the leading prefix, to make sure all completions start with `text`
2251 # grab the leading prefix, to make sure all completions start with `text`
2252 if text_start > key_start:
2252 if text_start > key_start:
2253 leading = ''
2253 leading = ''
2254 else:
2254 else:
2255 leading = text[text_start:completion_start]
2255 leading = text[text_start:completion_start]
2256
2256
2257 # the index of the `[` character
2257 # the index of the `[` character
2258 bracket_idx = match.end(1)
2258 bracket_idx = match.end(1)
2259
2259
2260 # append closing quote and bracket as appropriate
2260 # append closing quote and bracket as appropriate
2261 # this is *not* appropriate if the opening quote or bracket is outside
2261 # this is *not* appropriate if the opening quote or bracket is outside
2262 # the text given to this method
2262 # the text given to this method
2263 suf = ''
2263 suf = ''
2264 continuation = self.line_buffer[len(self.text_until_cursor):]
2264 continuation = self.line_buffer[len(self.text_until_cursor):]
2265 if key_start > text_start and closing_quote:
2265 if key_start > text_start and closing_quote:
2266 # quotes were opened inside text, maybe close them
2266 # quotes were opened inside text, maybe close them
2267 if continuation.startswith(closing_quote):
2267 if continuation.startswith(closing_quote):
2268 continuation = continuation[len(closing_quote):]
2268 continuation = continuation[len(closing_quote):]
2269 else:
2269 else:
2270 suf += closing_quote
2270 suf += closing_quote
2271 if bracket_idx > text_start:
2271 if bracket_idx > text_start:
2272 # brackets were opened inside text, maybe close them
2272 # brackets were opened inside text, maybe close them
2273 if not continuation.startswith(']'):
2273 if not continuation.startswith(']'):
2274 suf += ']'
2274 suf += ']'
2275
2275
2276 return [leading + k + suf for k in matches]
2276 return [leading + k + suf for k in matches]
2277
2277
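# Illustrative sketch (not part of the original file): the quote/bracket
# bookkeeping above. `ipc` is a hypothetical completer whose user namespace
# contains d = {"foo": 1, "foobar": 2}.
#
#     # buffer "d['fo" -> matches like "foo'" and "foobar'": the closing quote
#     # is appended because it was opened inside the completed text, and a
#     # closing "]" is appended only when the opening bracket is also inside
#     # the completed text and "]" does not already follow the cursor.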
2278 @context_matcher()
2278 @context_matcher()
2279 def unicode_name_matcher(self, context: CompletionContext):
2279 def unicode_name_matcher(self, context: CompletionContext):
2280 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2280 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2281 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2281 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2282 return _convert_matcher_v1_result_to_v2(
2282 return _convert_matcher_v1_result_to_v2(
2283 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2283 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2284 )
2284 )
2285
2285
2286 @staticmethod
2286 @staticmethod
2287 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2287 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2288 """Match Latex-like syntax for unicode characters base
2288 """Match Latex-like syntax for unicode characters base
2289 on the name of the character.
2289 on the name of the character.
2290
2290
2291 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2291 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2292
2292
2293 Works only on valid Python 3 identifiers, or on combining characters that
2293 Works only on valid Python 3 identifiers, or on combining characters that
2294 will combine to form a valid identifier.
2294 will combine to form a valid identifier.
2295 """
2295 """
2296 slashpos = text.rfind('\\')
2296 slashpos = text.rfind('\\')
2297 if slashpos > -1:
2297 if slashpos > -1:
2298 s = text[slashpos+1:]
2298 s = text[slashpos+1:]
2299 try :
2299 try :
2300 unic = unicodedata.lookup(s)
2300 unic = unicodedata.lookup(s)
2301 # allow combining chars
2301 # allow combining chars
2302 if ('a'+unic).isidentifier():
2302 if ('a'+unic).isidentifier():
2303 return '\\'+s,[unic]
2303 return '\\'+s,[unic]
2304 except KeyError:
2304 except KeyError:
2305 pass
2305 pass
2306 return '', []
2306 return '', []
2307
2307
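# Illustrative sketch (not part of the original file): unicode_name_matches is
# a staticmethod, so it can be exercised directly on the class.
#
#     IPCompleter.unicode_name_matches('\\GREEK SMALL LETTER ETA')
#     # -> ('\\GREEK SMALL LETTER ETA', ['η'])
#     IPCompleter.unicode_name_matches('no backslash here')
#     # -> ('', [])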
2308 @context_matcher()
2308 @context_matcher()
2309 def latex_name_matcher(self, context: CompletionContext):
2309 def latex_name_matcher(self, context: CompletionContext):
2310 """Match Latex syntax for unicode characters.
2310 """Match Latex syntax for unicode characters.
2311
2311
2312 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2312 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2313 """
2313 """
2314 fragment, matches = self.latex_matches(context.text_until_cursor)
2314 fragment, matches = self.latex_matches(context.text_until_cursor)
2315 return _convert_matcher_v1_result_to_v2(
2315 return _convert_matcher_v1_result_to_v2(
2316 matches, type="latex", fragment=fragment, suppress_if_matches=True
2316 matches, type="latex", fragment=fragment, suppress_if_matches=True
2317 )
2317 )
2318
2318
2319 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2319 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2320 """Match Latex syntax for unicode characters.
2320 """Match Latex syntax for unicode characters.
2321
2321
2322 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2322 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2323
2323
2324 .. deprecated:: 8.6
2324 .. deprecated:: 8.6
2325 You can use :meth:`latex_name_matcher` instead.
2325 You can use :meth:`latex_name_matcher` instead.
2326 """
2326 """
2327 slashpos = text.rfind('\\')
2327 slashpos = text.rfind('\\')
2328 if slashpos > -1:
2328 if slashpos > -1:
2329 s = text[slashpos:]
2329 s = text[slashpos:]
2330 if s in latex_symbols:
2330 if s in latex_symbols:
2331 # Try to complete a full latex symbol to unicode
2331 # Try to complete a full latex symbol to unicode
2332 # \\alpha -> Ξ±
2332 # \\alpha -> Ξ±
2333 return s, [latex_symbols[s]]
2333 return s, [latex_symbols[s]]
2334 else:
2334 else:
2335 # If a user has partially typed a latex symbol, give them
2335 # If a user has partially typed a latex symbol, give them
2336 # a full list of options \al -> [\aleph, \alpha]
2336 # a full list of options \al -> [\aleph, \alpha]
2337 matches = [k for k in latex_symbols if k.startswith(s)]
2337 matches = [k for k in latex_symbols if k.startswith(s)]
2338 if matches:
2338 if matches:
2339 return s, matches
2339 return s, matches
2340 return '', ()
2340 return '', ()
2341
2341
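# Illustrative sketch (not part of the original file): the two behaviours in
# the docstring above. `ipc` is a hypothetical IPCompleter instance.
#
#     ipc.latex_matches('\\alpha')  # full symbol  -> ('\\alpha', ['α'])
#     ipc.latex_matches('\\al')     # partial      -> ('\\al', [...]), e.g.
#                                   #                 '\\aleph' and '\\alpha'
#     ipc.latex_matches('alpha')    # no backslash -> ('', ())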
2342 @context_matcher()
2342 @context_matcher()
2343 def custom_completer_matcher(self, context):
2343 def custom_completer_matcher(self, context):
2344 """Dispatch custom completer.
2344 """Dispatch custom completer.
2345
2345
2346 If a match is found, suppresses all other matchers except for Jedi.
2346 If a match is found, suppresses all other matchers except for Jedi.
2347 """
2347 """
2348 matches = self.dispatch_custom_completer(context.token) or []
2348 matches = self.dispatch_custom_completer(context.token) or []
2349 result = _convert_matcher_v1_result_to_v2(
2349 result = _convert_matcher_v1_result_to_v2(
2350 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2350 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2351 )
2351 )
2352 result["ordered"] = True
2352 result["ordered"] = True
2353 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2353 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2354 return result
2354 return result
2355
2355
2356 def dispatch_custom_completer(self, text):
2356 def dispatch_custom_completer(self, text):
2357 """
2357 """
2358 .. deprecated:: 8.6
2358 .. deprecated:: 8.6
2359 You can use :meth:`custom_completer_matcher` instead.
2359 You can use :meth:`custom_completer_matcher` instead.
2360 """
2360 """
2361 if not self.custom_completers:
2361 if not self.custom_completers:
2362 return
2362 return
2363
2363
2364 line = self.line_buffer
2364 line = self.line_buffer
2365 if not line.strip():
2365 if not line.strip():
2366 return None
2366 return None
2367
2367
2368 # Create a little structure to pass all the relevant information about
2368 # Create a little structure to pass all the relevant information about
2369 # the current completion to any custom completer.
2369 # the current completion to any custom completer.
2370 event = SimpleNamespace()
2370 event = SimpleNamespace()
2371 event.line = line
2371 event.line = line
2372 event.symbol = text
2372 event.symbol = text
2373 cmd = line.split(None,1)[0]
2373 cmd = line.split(None,1)[0]
2374 event.command = cmd
2374 event.command = cmd
2375 event.text_until_cursor = self.text_until_cursor
2375 event.text_until_cursor = self.text_until_cursor
2376
2376
2377 # for foo etc, try also to find completer for %foo
2377 # for foo etc, try also to find completer for %foo
2378 if not cmd.startswith(self.magic_escape):
2378 if not cmd.startswith(self.magic_escape):
2379 try_magic = self.custom_completers.s_matches(
2379 try_magic = self.custom_completers.s_matches(
2380 self.magic_escape + cmd)
2380 self.magic_escape + cmd)
2381 else:
2381 else:
2382 try_magic = []
2382 try_magic = []
2383
2383
2384 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2384 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2385 try_magic,
2385 try_magic,
2386 self.custom_completers.flat_matches(self.text_until_cursor)):
2386 self.custom_completers.flat_matches(self.text_until_cursor)):
2387 try:
2387 try:
2388 res = c(event)
2388 res = c(event)
2389 if res:
2389 if res:
2390 # first, try case sensitive match
2390 # first, try case sensitive match
2391 withcase = [r for r in res if r.startswith(text)]
2391 withcase = [r for r in res if r.startswith(text)]
2392 if withcase:
2392 if withcase:
2393 return withcase
2393 return withcase
2394 # if none, then case insensitive ones are ok too
2394 # if none, then case insensitive ones are ok too
2395 text_low = text.lower()
2395 text_low = text.lower()
2396 return [r for r in res if r.lower().startswith(text_low)]
2396 return [r for r in res if r.lower().startswith(text_low)]
2397 except TryNext:
2397 except TryNext:
2398 pass
2398 pass
2399 except KeyboardInterrupt:
2399 except KeyboardInterrupt:
2400 """
2400 """
2401 If a custom completer takes too long,
2401 If a custom completer takes too long,
2402 let the keyboard interrupt abort it and return nothing.
2402 let the keyboard interrupt abort it and return nothing.
2403 """
2403 """
2404 break
2404 break
2405
2405
2406 return None
2406 return None
2407
2407
2408 def completions(self, text: str, offset: int)->Iterator[Completion]:
2408 def completions(self, text: str, offset: int)->Iterator[Completion]:
2409 """
2409 """
2410 Returns an iterator over the possible completions
2410 Returns an iterator over the possible completions
2411
2411
2412 .. warning::
2412 .. warning::
2413
2413
2414 Unstable
2414 Unstable
2415
2415
2416 This function is unstable, API may change without warning.
2416 This function is unstable, API may change without warning.
2417 It will also raise unless use in proper context manager.
2417 It will also raise unless use in proper context manager.
2418
2418
2419 Parameters
2419 Parameters
2420 ----------
2420 ----------
2421 text : str
2421 text : str
2422 Full text of the current input, multi line string.
2422 Full text of the current input, multi line string.
2423 offset : int
2423 offset : int
2424 Integer representing the position of the cursor in ``text``. Offset
2424 Integer representing the position of the cursor in ``text``. Offset
2425 is 0-based indexed.
2425 is 0-based indexed.
2426
2426
2427 Yields
2427 Yields
2428 ------
2428 ------
2429 Completion
2429 Completion
2430
2430
2431 Notes
2431 Notes
2432 -----
2432 -----
2433 The cursor on a text can either be seen as being "in between"
2433 The cursor on a text can either be seen as being "in between"
2434 characters or "on" a character, depending on the interface visible to
2434 characters or "on" a character, depending on the interface visible to
2435 the user. For consistency, the cursor being "in between" characters X
2435 the user. For consistency, the cursor being "in between" characters X
2436 and Y is equivalent to the cursor being "on" character Y, that is to say
2436 and Y is equivalent to the cursor being "on" character Y, that is to say
2437 the character the cursor is on is considered as being after the cursor.
2437 the character the cursor is on is considered as being after the cursor.
2438
2438
2439 Combining characters may span more than one position in the
2439 Combining characters may span more than one position in the
2440 text.
2440 text.
2441
2441
2442 .. note::
2442 .. note::
2443
2443
2444 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2444 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2445 fake Completion token to distinguish completions returned by Jedi
2445 fake Completion token to distinguish completions returned by Jedi
2446 from usual IPython completions.
2446 from usual IPython completions.
2447
2447
2448 .. note::
2448 .. note::
2449
2449
2450 Completions are not completely deduplicated yet. If identical
2450 Completions are not completely deduplicated yet. If identical
2451 completions are coming from different sources this function does not
2451 completions are coming from different sources this function does not
2452 ensure that each completion object will only be present once.
2452 ensure that each completion object will only be present once.
2453 """
2453 """
2454 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2454 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2455 "It may change without warnings. "
2455 "It may change without warnings. "
2456 "Use in corresponding context manager.",
2456 "Use in corresponding context manager.",
2457 category=ProvisionalCompleterWarning, stacklevel=2)
2457 category=ProvisionalCompleterWarning, stacklevel=2)
2458
2458
2459 seen = set()
2459 seen = set()
2460 profiler:Optional[cProfile.Profile]
2460 profiler:Optional[cProfile.Profile]
2461 try:
2461 try:
2462 if self.profile_completions:
2462 if self.profile_completions:
2463 import cProfile
2463 import cProfile
2464 profiler = cProfile.Profile()
2464 profiler = cProfile.Profile()
2465 profiler.enable()
2465 profiler.enable()
2466 else:
2466 else:
2467 profiler = None
2467 profiler = None
2468
2468
2469 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2469 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2470 if c and (c in seen):
2470 if c and (c in seen):
2471 continue
2471 continue
2472 yield c
2472 yield c
2473 seen.add(c)
2473 seen.add(c)
2474 except KeyboardInterrupt:
2474 except KeyboardInterrupt:
2475 """if completions take too long and users send keyboard interrupt,
2475 """if completions take too long and users send keyboard interrupt,
2476 do not crash and return ASAP. """
2476 do not crash and return ASAP. """
2477 pass
2477 pass
2478 finally:
2478 finally:
2479 if profiler is not None:
2479 if profiler is not None:
2480 profiler.disable()
2480 profiler.disable()
2481 ensure_dir_exists(self.profiler_output_dir)
2481 ensure_dir_exists(self.profiler_output_dir)
2482 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2482 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2483 print("Writing profiler output to", output_path)
2483 print("Writing profiler output to", output_path)
2484 profiler.dump_stats(output_path)
2484 profiler.dump_stats(output_path)
2485
2485
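# Illustrative sketch (not part of the original file): using the provisional
# API above. `provisionalcompleter` is the context manager exported by this
# module; `ip` stands for a hypothetical running InteractiveShell.
#
#     from IPython.core.completer import provisionalcompleter
#
#     with provisionalcompleter():
#         comps = list(ip.Completer.completions('d.ap', 4))
#     # each Completion carries .start, .end, .text, .type and .signature,
#     # with offsets interpreted as described in the Notes section above.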
2486 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2486 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2487 """
2487 """
2488 Core completion method. Same signature as :any:`completions`, with the
2488 Core completion method. Same signature as :any:`completions`, with the
2489 extra ``_timeout`` parameter (in seconds).
2489 extra ``_timeout`` parameter (in seconds).
2490
2490
2491 Computing jedi's completion ``.type`` can be quite expensive (it is a
2491 Computing jedi's completion ``.type`` can be quite expensive (it is a
2492 lazy property) and can require some warm-up, more warm up than just
2492 lazy property) and can require some warm-up, more warm up than just
2493 computing the ``name`` of a completion. The warm-up can be :
2493 computing the ``name`` of a completion. The warm-up can be :
2494
2494
2495 - Long warm-up the first time a module is encountered after
2495 - Long warm-up the first time a module is encountered after
2496 install/update: actually build parse/inference tree.
2496 install/update: actually build parse/inference tree.
2497
2497
2498 - first time the module is encountered in a session: load tree from
2498 - first time the module is encountered in a session: load tree from
2499 disk.
2499 disk.
2500
2500
2501 We don't want to block completions for tens of seconds so we give the
2501 We don't want to block completions for tens of seconds so we give the
2502 completer a "budget" of ``_timeout`` seconds per invocation to compute
2502 completer a "budget" of ``_timeout`` seconds per invocation to compute
2503 completion types; the completions whose type has not yet been computed will
2503 completion types; the completions whose type has not yet been computed will
2504 be marked as "unknown" and will have a chance to be computed next round,
2504 be marked as "unknown" and will have a chance to be computed next round,
2505 as things get cached.
2505 as things get cached.
2506
2506
2507 Keep in mind that Jedi is not the only thing handling the completions, so
2507 Keep in mind that Jedi is not the only thing handling the completions, so
2508 keep the timeout short-ish: if we take more than 0.3 seconds we still
2508 keep the timeout short-ish: if we take more than 0.3 seconds we still
2509 have lots of processing to do.
2509 have lots of processing to do.
2510
2510
2511 """
2511 """
2512 deadline = time.monotonic() + _timeout
2512 deadline = time.monotonic() + _timeout
2513
2513
2514 before = full_text[:offset]
2514 before = full_text[:offset]
2515 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2515 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2516
2516
2517 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2517 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2518
2518
2519 results = self._complete(
2519 results = self._complete(
2520 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2520 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2521 )
2521 )
2522 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2522 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2523 identifier: result
2523 identifier: result
2524 for identifier, result in results.items()
2524 for identifier, result in results.items()
2525 if identifier != jedi_matcher_id
2525 if identifier != jedi_matcher_id
2526 }
2526 }
2527
2527
2528 jedi_matches = (
2528 jedi_matches = (
2529 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2529 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2530 if jedi_matcher_id in results
2530 if jedi_matcher_id in results
2531 else ()
2531 else ()
2532 )
2532 )
2533
2533
2534 iter_jm = iter(jedi_matches)
2534 iter_jm = iter(jedi_matches)
2535 if _timeout:
2535 if _timeout:
2536 for jm in iter_jm:
2536 for jm in iter_jm:
2537 try:
2537 try:
2538 type_ = jm.type
2538 type_ = jm.type
2539 except Exception:
2539 except Exception:
2540 if self.debug:
2540 if self.debug:
2541 print("Error in Jedi getting type of ", jm)
2541 print("Error in Jedi getting type of ", jm)
2542 type_ = None
2542 type_ = None
2543 delta = len(jm.name_with_symbols) - len(jm.complete)
2543 delta = len(jm.name_with_symbols) - len(jm.complete)
2544 if type_ == 'function':
2544 if type_ == 'function':
2545 signature = _make_signature(jm)
2545 signature = _make_signature(jm)
2546 else:
2546 else:
2547 signature = ''
2547 signature = ''
2548 yield Completion(start=offset - delta,
2548 yield Completion(start=offset - delta,
2549 end=offset,
2549 end=offset,
2550 text=jm.name_with_symbols,
2550 text=jm.name_with_symbols,
2551 type=type_,
2551 type=type_,
2552 signature=signature,
2552 signature=signature,
2553 _origin='jedi')
2553 _origin='jedi')
2554
2554
2555 if time.monotonic() > deadline:
2555 if time.monotonic() > deadline:
2556 break
2556 break
2557
2557
2558 for jm in iter_jm:
2558 for jm in iter_jm:
2559 delta = len(jm.name_with_symbols) - len(jm.complete)
2559 delta = len(jm.name_with_symbols) - len(jm.complete)
2560 yield Completion(
2560 yield Completion(
2561 start=offset - delta,
2561 start=offset - delta,
2562 end=offset,
2562 end=offset,
2563 text=jm.name_with_symbols,
2563 text=jm.name_with_symbols,
2564 type=_UNKNOWN_TYPE, # don't compute type for speed
2564 type=_UNKNOWN_TYPE, # don't compute type for speed
2565 _origin="jedi",
2565 _origin="jedi",
2566 signature="",
2566 signature="",
2567 )
2567 )
2568
2568
2569 # TODO:
2569 # TODO:
2570 # Suppress this, right now just for debug.
2570 # Suppress this, right now just for debug.
2571 if jedi_matches and non_jedi_results and self.debug:
2571 if jedi_matches and non_jedi_results and self.debug:
2572 some_start_offset = before.rfind(
2572 some_start_offset = before.rfind(
2573 next(iter(non_jedi_results.values()))["matched_fragment"]
2573 next(iter(non_jedi_results.values()))["matched_fragment"]
2574 )
2574 )
2575 yield Completion(
2575 yield Completion(
2576 start=some_start_offset,
2576 start=some_start_offset,
2577 end=offset,
2577 end=offset,
2578 text="--jedi/ipython--",
2578 text="--jedi/ipython--",
2579 _origin="debug",
2579 _origin="debug",
2580 type="none",
2580 type="none",
2581 signature="",
2581 signature="",
2582 )
2582 )

        ordered = []
        sortable = []

        for origin, result in non_jedi_results.items():
            matched_text = result["matched_fragment"]
            start_offset = before.rfind(matched_text)
            is_ordered = result.get("ordered", False)
            container = ordered if is_ordered else sortable

            # I'm unsure if this is always true, so let's assert and see if it
            # crashes
            assert before.endswith(matched_text)

            for simple_completion in result["completions"]:
                completion = Completion(
                    start=start_offset,
                    end=offset,
                    text=simple_completion.text,
                    _origin=origin,
                    signature="",
                    type=simple_completion.type or _UNKNOWN_TYPE,
                )
                container.append(completion)

        yield from list(self._deduplicate(ordered + self._sort(sortable)))[
            :MATCHES_LIMIT
        ]

    def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
        """Find completions for the given text and line context.

        Note that both the text and the line_buffer are optional, but at least
        one of them must be given.

        Parameters
        ----------
        text : string, optional
            Text to perform the completion on. If not given, the line buffer
            is split using the instance's CompletionSplitter object.
        line_buffer : string, optional
            If not given, the completer attempts to obtain the current line
            buffer via readline. This keyword allows clients which are
            requesting text completions in non-readline contexts to inform
            the completer of the entire text.
        cursor_pos : int, optional
            Index of the cursor in the full line buffer. Should be provided by
            remote frontends where the kernel has no access to frontend state.

        Returns
        -------
        Tuple of two items:
        text : str
            Text that was actually used in the completion.
        matches : list
            A list of completion matches.

        Notes
        -----
        This API is likely to be deprecated and replaced by
        :any:`IPCompleter.completions` in the future.
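
        Examples
        --------
        A minimal sketch, assuming ``completer`` is an ``IPCompleter``
        instance and ``os`` has been imported in the user namespace (the
        actual matches depend on what is defined there)::

            text, matches = completer.complete(line_buffer="os.pa", cursor_pos=5)
            # text == "os.pa"; matches may include "os.path"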

        """
        warnings.warn('`Completer.complete` is pending deprecation since '
                      'IPython 6.0 and will be replaced by `Completer.completions`.',
                      PendingDeprecationWarning)
        # potential todo: fold the third, throw-away argument of _complete
        # into the first two.
        # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
        # TODO: should we deprecate now, or does it stay?

        results = self._complete(
            line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
        )

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        return self._arrange_and_extract(
            results,
            # TODO: can we confirm that excluding Jedi here was a deliberate choice in the previous version?
            skip_matchers={jedi_matcher_id},
            # this API does not support different start/end positions (fragments of token).
            abort_if_offset_changes=True,
        )

    def _arrange_and_extract(
        self,
        results: Dict[str, MatcherResult],
        skip_matchers: Set[str],
        abort_if_offset_changes: bool,
    ):
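        """Arrange matcher results and extract completion texts.

        Skips matchers listed in ``skip_matchers`` and matchers that returned
        no completions, splits the remaining completions into "ordered" and
        "sortable" groups, and returns the first non-empty matched fragment
        together with the deduplicated (and, for the sortable group, sorted)
        list of completion texts. If ``abort_if_offset_changes`` is true, it
        stops at the first matcher whose matched fragment differs from the
        first one seen.
        """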

        sortable = []
        ordered = []
        most_recent_fragment = None
        for identifier, result in results.items():
            if identifier in skip_matchers:
                continue
            if not result["completions"]:
                continue
            if not most_recent_fragment:
                most_recent_fragment = result["matched_fragment"]
            if (
                abort_if_offset_changes
                and result["matched_fragment"] != most_recent_fragment
            ):
                break
            if result.get("ordered", False):
                ordered.extend(result["completions"])
            else:
                sortable.extend(result["completions"])

        if not most_recent_fragment:
            most_recent_fragment = ""  # to satisfy typechecker (and just in case)

        return most_recent_fragment, [
            m.text for m in self._deduplicate(ordered + self._sort(sortable))
        ]

    def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
                  full_text=None) -> _CompleteResult:
        """
        Like complete but can also return raw Jedi completions as well as the
        origin of the completion text. This could (and should) be made much
        cleaner but that will be simpler once we drop the old (and stateful)
        :any:`complete` API.

        With the current provisional API, cursor_pos acts (depending on the
        caller) either as the offset in ``text`` or ``line_buffer``, or as the
        ``column`` when passing multiline strings; this could/should be renamed
        but would add extra noise.

        Parameters
        ----------
        cursor_line
            Index of the line the cursor is on. 0 indexed.
        cursor_pos
            Position of the cursor in the current line/line_buffer/text. 0
            indexed.
        line_buffer : optional, str
            The current line the cursor is in. This exists mostly for legacy
            reasons: readline could only give us the single current line.
            Prefer `full_text`.
        text : str
            The current "token" the cursor is in, mostly also for historical
            reasons, as the completer would trigger only after the current line
            was parsed.
        full_text : str
            Full text of the current cell.

        Returns
        -------
        An ordered dictionary where keys are identifiers of completion
        matchers and values are ``MatcherResult``s.
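
        Examples
        --------
        A rough illustration only; the exact set of matcher identifiers and
        the completions they return depend on configuration and on the
        contents of the user namespace (``completer`` below is assumed to be
        an ``IPCompleter`` instance)::

            results = completer._complete(
                cursor_line=0, cursor_pos=5, full_text="impor"
            )
            # ``results`` maps matcher identifiers to ``MatcherResult``
            # dictionaries, each carrying at least a "completions" list.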
        """

        # if the cursor position isn't given, the only sane assumption we can
        # make is that it's at the end of the line (the common case)
        if cursor_pos is None:
            cursor_pos = len(line_buffer) if text is None else len(text)

        if self.use_main_ns:
            self.namespace = __main__.__dict__

        # if text is either None or an empty string, rely on the line buffer
        if (not line_buffer) and full_text:
            line_buffer = full_text.split('\n')[cursor_line]
        if not text:  # issue #11508: check line_buffer before calling split_line
            text = (
                self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
            )

        # If no line buffer is given, assume the input text is all there was
        if line_buffer is None:
            line_buffer = text

        # deprecated - do not use `line_buffer` in new code.
        self.line_buffer = line_buffer
        self.text_until_cursor = self.line_buffer[:cursor_pos]

        if not full_text:
            full_text = line_buffer

        context = CompletionContext(
            full_text=full_text,
            cursor_position=cursor_pos,
            cursor_line=cursor_line,
            token=text,
            limit=MATCHES_LIMIT,
        )

        # Start with a clean slate of completions
        results = {}

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        suppressed_matchers = set()

        matchers = {
            _get_matcher_id(matcher): matcher
            for matcher in sorted(
                self.matchers, key=_get_matcher_priority, reverse=True
            )
        }

        for matcher_id, matcher in matchers.items():
            api_version = _get_matcher_api_version(matcher)
            matcher_id = _get_matcher_id(matcher)

            if matcher_id in self.disable_matchers:
                continue

            if matcher_id in results:
                warnings.warn(f"Duplicate matcher ID: {matcher_id}.")

            if matcher_id in suppressed_matchers:
                continue

            try:
                if api_version == 1:
                    result = _convert_matcher_v1_result_to_v2(
                        matcher(text), type=_UNKNOWN_TYPE
                    )
                elif api_version == 2:
                    # typing.cast expects the target type first and the value second
                    result = cast(MatcherAPIv2, matcher)(context)
                else:
                    raise ValueError(f"Unsupported API version {api_version}")
            except:
                # Show the ugly traceback if the matcher causes an
                # exception, but do NOT crash the kernel!
                sys.excepthook(*sys.exc_info())
                continue

            # set default value for matched fragment if suffix was not selected.
            result["matched_fragment"] = result.get("matched_fragment", context.token)

            if not suppressed_matchers:
                suppression_recommended = result.get("suppress", False)

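                # Note: `suppress_competing_matchers` may be a single value or
                # a per-matcher mapping; resolve it to the entry relevant for
                # this matcher before combining it with the matcher's own
                # suppression recommendation below.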
                suppression_config = (
                    self.suppress_competing_matchers.get(matcher_id, None)
                    if isinstance(self.suppress_competing_matchers, dict)
                    else self.suppress_competing_matchers
                )
                should_suppress = (
                    (suppression_config is True)
                    or (suppression_recommended and (suppression_config is not False))
                ) and has_any_completions(result)

                if should_suppress:
                    suppression_exceptions = result.get("do_not_suppress", set())
                    try:
                        to_suppress = set(suppression_recommended)
                    except TypeError:
                        to_suppress = set(matchers)
                    suppressed_matchers = to_suppress - suppression_exceptions

                    new_results = {}
                    for previous_matcher_id, previous_result in results.items():
                        if previous_matcher_id not in suppressed_matchers:
                            new_results[previous_matcher_id] = previous_result
                    results = new_results

            results[matcher_id] = result

        _, matches = self._arrange_and_extract(
            results,
            # TODO: Jedi completions are not included in the legacy stateful API;
            # was this deliberate or an omission? If it was an omission, we can
            # remove the filtering step, otherwise remove this comment.
            skip_matchers={jedi_matcher_id},
            abort_if_offset_changes=False,
        )

        # populate legacy stateful API
        self.matches = matches

        return results

    @staticmethod
    def _deduplicate(
        matches: Sequence[SimpleCompletion],
    ) -> Iterable[SimpleCompletion]:
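        """Deduplicate completions by text.

        When several completions share the same text, the first one seen wins,
        unless its type is unknown, in which case a later completion for the
        same text replaces it.
        """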
        filtered_matches = {}
        for match in matches:
            text = match.text
            if (
                text not in filtered_matches
                or filtered_matches[text].type == _UNKNOWN_TYPE
            ):
                filtered_matches[text] = match

        return filtered_matches.values()

    @staticmethod
    def _sort(matches: Sequence[SimpleCompletion]):
        return sorted(matches, key=lambda x: completions_sorting_key(x.text))

    @context_matcher()
    def fwd_unicode_matcher(self, context: CompletionContext):
        """Same as :any:`fwd_unicode_match`, but adapted to the new Matcher API."""
        # TODO: use `context.limit` to terminate early once we have matched the
        # maximum number that will be used downstream; this could be added as an
        # optional parameter to `fwd_unicode_match(text: str, limit: int = None)`,
        # or we could re-implement it here.
        fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

    def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
        """
        Forward match a string starting with a backslash against a list of
        potential Unicode completions.

        Will compute the list of Unicode character names on first call and
        cache it.

        .. deprecated:: 8.6
            You can use :meth:`fwd_unicode_matcher` instead.

        Returns
        -------
        A tuple with:
        - the matched text (empty if there are no matches)
        - a list of potential completions (an empty tuple if there are none)
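
        Examples
        --------
        A rough sketch, assuming ``completer`` is an ``IPCompleter`` instance;
        the candidate list depends on the Unicode data shipped with the running
        Python::

            fragment, names = completer.fwd_unicode_match(
                "\\\\" + "GREEK SMALL LETTER ALP"
            )
            # fragment == "GREEK SMALL LETTER ALP"
            # names includes "GREEK SMALL LETTER ALPHA" among other candidates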
        """
        # TODO: self.unicode_names is a plain list with ~100k elements that we
        # traverse on each call. We could do a faster match using a Trie.

        # Using pygtrie, the following seems to work:

        # s = PrefixSet()

        # for c in range(0,0x10FFFF + 1):
        #     try:
        #         s.add(unicodedata.name(chr(c)))
        #     except ValueError:
        #         pass
        # [''.join(k) for k in s.iter(prefix)]

        # But this needs to be timed and adds an extra dependency.

        slashpos = text.rfind('\\')
        # if the text contains a backslash
        if slashpos > -1:
            # PERF: It's important that we don't access self._unicode_names
            # until we're inside this if-block. _unicode_names is lazily
            # initialized, and it takes a user-noticeable amount of time to
            # initialize it, so we don't want to initialize it unless we're
            # actually going to use it.
            s = text[slashpos + 1 :]
            sup = s.upper()
            candidates = [x for x in self.unicode_names if x.startswith(sup)]
            if candidates:
                return s, candidates
            candidates = [x for x in self.unicode_names if sup in x]
            if candidates:
                return s, candidates
            splitsup = sup.split(" ")
            candidates = [
                x for x in self.unicode_names if all(u in x for u in splitsup)
            ]
            if candidates:
                return s, candidates

            return "", ()

        # the text contains no backslash
        else:
            return '', ()

    @property
    def unicode_names(self) -> List[str]:
        """List of names of unicode code points that can be completed.

        The list is lazily initialized on first access.
        """
        if self._unicode_names is None:
            self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)

        return self._unicode_names

def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
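    """Return the Unicode character names for the given code point ranges.

    Each range is a ``(start, stop)`` pair interpreted half-open, as in
    ``range(start, stop)``; code points without a name are skipped.
    """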
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:
                pass
    return names