Shim TypedDict and NotRequired at runtime until...
krassowski -
@@ -1,2763 +1,2765 b''
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now supports a wide variety of completion mechanisms, both
7 This module now supports a wide variety of completion mechanisms, both
8 for normal classic Python code and for completers for IPython-specific
8 for normal classic Python code and for completers for IPython-specific
9 syntax such as magics.
9 syntax such as magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends can not only complete your code, but also help
14 IPython and compatible frontends can not only complete your code, but also help
15 you input a wide range of characters. In particular, you can insert
15 you input a wide range of characters. In particular, you can insert
16 a Unicode character using the tab completion mechanism.
16 a Unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a Unicode character using its LaTeX
21 Forward completion allows you to easily type a Unicode character using its LaTeX
22 name or Unicode long description. To do so, type a backslash followed by the
22 name or Unicode long description. To do so, type a backslash followed by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 Ξ±
31 Ξ±
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\GREEK SMALL LETTER ALPHA<tab>
38 \\GREEK SMALL LETTER ALPHA<tab>
39 Ξ±
39 Ξ±
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrows or
42 Only valid Python identifiers will complete. Combining characters (like arrows or
43 dots) are also available; unlike LaTeX, they need to be put after their
43 dots) are also available; unlike LaTeX, they need to be put after their
44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometimes challenging to know how to type a character. If you are using
51 It is sometimes challenging to know how to type a character. If you are using
52 IPython or any compatible frontend, you can prepend a backslash to the character
52 IPython or any compatible frontend, you can prepend a backslash to the character
53 and press ``<tab>`` to expand it to its LaTeX form.
53 and press ``<tab>`` to expand it to its LaTeX form.
54
54
55 .. code::
55 .. code::
56
56
57 \\Ξ±<tab>
57 \\Ξ±<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 ``Completer.backslash_combining_completions`` option to ``False``.
62 ``Completer.backslash_combining_completions`` option to ``False``.
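
For example, to disable both from a configuration file (a sketch using the
standard traitlets configuration syntax, e.g. in ``ipython_config.py``):

.. code::

    c.Completer.backslash_combining_completions = False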
63
63
64
64
65 Experimental
65 Experimental
66 ============
66 ============
67
67
68 Starting with IPython 6.0, this module can make use of the Jedi library to
68 Starting with IPython 6.0, this module can make use of the Jedi library to
69 generate completions both using static analysis of the code, and dynamically
69 generate completions both using static analysis of the code, and dynamically
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
71 library for Python. The APIs attached to this new mechanism are unstable and will
71 library for Python. The APIs attached to this new mechanism are unstable and will
72 raise unless used in a :any:`provisionalcompleter` context manager.
72 raise unless used in a :any:`provisionalcompleter` context manager.
73
73
74 You will find that the following are experimental:
74 You will find that the following are experimental:
75
75
76 - :any:`provisionalcompleter`
76 - :any:`provisionalcompleter`
77 - :any:`IPCompleter.completions`
77 - :any:`IPCompleter.completions`
78 - :any:`Completion`
78 - :any:`Completion`
79 - :any:`rectify_completions`
79 - :any:`rectify_completions`
80
80
81 .. note::
81 .. note::
82
82
83 better name for :any:`rectify_completions` ?
83 better name for :any:`rectify_completions` ?
84
84
85 We welcome any feedback on these new APIs, and we also encourage you to try this
85 We welcome any feedback on these new APIs, and we also encourage you to try this
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
87 to have extra logging information if :any:`jedi` is crashing, or if the current
87 to have extra logging information if :any:`jedi` is crashing, or if the current
88 IPython completer's pending deprecations are returning results not yet handled
88 IPython completer's pending deprecations are returning results not yet handled
89 by :any:`jedi`.
89 by :any:`jedi`.
90
90
91 Using Jedi for tab completion allows snippets like the following to work without
91 Using Jedi for tab completion allows snippets like the following to work without
92 having to execute any code:
92 having to execute any code:
93
93
94 >>> myvar = ['hello', 42]
94 >>> myvar = ['hello', 42]
95 ... myvar[1].bi<tab>
95 ... myvar[1].bi<tab>
96
96
97 Tab completion will be able to infer that ``myvar[1]`` is an integer without
97 Tab completion will be able to infer that ``myvar[1]`` is an integer without
98 executing any code, unlike the previously available ``IPCompleter.greedy``
98 executing any code, unlike the previously available ``IPCompleter.greedy``
99 option.
99 option.
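
For instance (a sketch, assuming an interactive IPython session where
``get_ipython()`` is available), the provisional completion API can be
exercised as follows:

.. code::

    from IPython.core.completer import provisionalcompleter

    ip = get_ipython()
    with provisionalcompleter():
        completions = list(ip.Completer.completions('myvar[1].bi', 11))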
100
100
101 Be sure to update :any:`jedi` to the latest stable version or to try the
101 Be sure to update :any:`jedi` to the latest stable version or to try the
102 current development version to get better completions.
102 current development version to get better completions.
103
103
104 Matchers
104 Matchers
105 ========
105 ========
106
106
107 All completion routines are implemented using the unified ``matchers`` API.
107 All completion routines are implemented using the unified ``matchers`` API.
108 The matchers API is provisional and subject to change without notice.
108 The matchers API is provisional and subject to change without notice.
109
109
110 The built-in matchers include:
110 The built-in matchers include:
111
111
112 - ``IPCompleter.dict_key_matcher``: dictionary key completions,
112 - ``IPCompleter.dict_key_matcher``: dictionary key completions,
113 - ``IPCompleter.magic_matcher``: completions for magics,
113 - ``IPCompleter.magic_matcher``: completions for magics,
114 - ``IPCompleter.unicode_name_matcher``, ``IPCompleter.fwd_unicode_matcher`` and ``IPCompleter.latex_matcher``: see `Forward latex/unicode completion`_,
114 - ``IPCompleter.unicode_name_matcher``, ``IPCompleter.fwd_unicode_matcher`` and ``IPCompleter.latex_matcher``: see `Forward latex/unicode completion`_,
115 - ``back_unicode_name_matcher`` and ``back_latex_name_matcher``: see `Backward latex completion`_,
115 - ``back_unicode_name_matcher`` and ``back_latex_name_matcher``: see `Backward latex completion`_,
116 - ``IPCompleter.file_matcher``: paths to files and directories,
116 - ``IPCompleter.file_matcher``: paths to files and directories,
117 - ``IPCompleter.python_func_kw_matcher`` - function keywords,
117 - ``IPCompleter.python_func_kw_matcher`` - function keywords,
118 - ``IPCompleter.python_matches`` - globals and attributes (v1 API),
118 - ``IPCompleter.python_matches`` - globals and attributes (v1 API),
119 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
119 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
120 - ``IPCompleter.custom_completer_matcher`` - pluggable completer with a default implementation in ``core.InteractiveShell``
120 - ``IPCompleter.custom_completer_matcher`` - pluggable completer with a default implementation in ``core.InteractiveShell``
121 which uses the IPython hooks system (`complete_command`) with string dispatch (including regular expressions).
121 which uses the IPython hooks system (`complete_command`) with string dispatch (including regular expressions).
122 Unlike other matchers, ``custom_completer_matcher`` will not suppress Jedi results, to match
122 Unlike other matchers, ``custom_completer_matcher`` will not suppress Jedi results, to match
123 behaviour in earlier IPython versions.
123 behaviour in earlier IPython versions.
124
124
125 Adding custom matchers is possible by appending to the `IPCompleter.custom_matchers` list,
125 Adding custom matchers is possible by appending to the `IPCompleter.custom_matchers` list,
126 but please be aware that this API is subject to change.
126 but please be aware that this API is subject to change.
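
For example, a minimal custom matcher using the v1 API (a plain function that
receives the token being completed and returns a list of strings) could be
registered from within an IPython session as in the following sketch; the
``fruit_matcher`` name and its candidates are purely illustrative:

.. code::

    def fruit_matcher(text):
        candidates = ['apple', 'apricot', 'banana']
        return [c for c in candidates if c.startswith(text)]

    get_ipython().Completer.custom_matchers.append(fruit_matcher)
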
127 """
127 """
128
128
129
129
130 # Copyright (c) IPython Development Team.
130 # Copyright (c) IPython Development Team.
131 # Distributed under the terms of the Modified BSD License.
131 # Distributed under the terms of the Modified BSD License.
132 #
132 #
133 # Some of this code originated from rlcompleter in the Python standard library
133 # Some of this code originated from rlcompleter in the Python standard library
134 # Copyright (C) 2001 Python Software Foundation, www.python.org
134 # Copyright (C) 2001 Python Software Foundation, www.python.org
135
135
136
136
137 import builtins as builtin_mod
137 import builtins as builtin_mod
138 import glob
138 import glob
139 import inspect
139 import inspect
140 import itertools
140 import itertools
141 import keyword
141 import keyword
142 import os
142 import os
143 import re
143 import re
144 import string
144 import string
145 import sys
145 import sys
146 import time
146 import time
147 import unicodedata
147 import unicodedata
148 import uuid
148 import uuid
149 import warnings
149 import warnings
150 from contextlib import contextmanager
150 from contextlib import contextmanager
151 from functools import lru_cache, partial
151 from functools import lru_cache, partial
152 from importlib import import_module
152 from importlib import import_module
153 from types import SimpleNamespace
153 from types import SimpleNamespace
154 from typing import (
154 from typing import (
155 Iterable,
155 Iterable,
156 Iterator,
156 Iterator,
157 List,
157 List,
158 Tuple,
158 Tuple,
159 Union,
159 Union,
160 Any,
160 Any,
161 Sequence,
161 Sequence,
162 Dict,
162 Dict,
163 NamedTuple,
163 NamedTuple,
164 Pattern,
164 Pattern,
165 Optional,
165 Optional,
166 Callable,
166 Callable,
167 TYPE_CHECKING,
167 TYPE_CHECKING,
168 Set,
168 Set,
169 )
169 )
170 from typing_extensions import TypedDict, NotRequired
171
170
172 from IPython.core.error import TryNext
171 from IPython.core.error import TryNext
173 from IPython.core.inputtransformer2 import ESC_MAGIC
172 from IPython.core.inputtransformer2 import ESC_MAGIC
174 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
173 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
175 from IPython.core.oinspect import InspectColors
174 from IPython.core.oinspect import InspectColors
176 from IPython.testing.skipdoctest import skip_doctest
175 from IPython.testing.skipdoctest import skip_doctest
177 from IPython.utils import generics
176 from IPython.utils import generics
178 from IPython.utils.dir2 import dir2, get_real_method
177 from IPython.utils.dir2 import dir2, get_real_method
179 from IPython.utils.path import ensure_dir_exists
178 from IPython.utils.path import ensure_dir_exists
180 from IPython.utils.process import arg_split
179 from IPython.utils.process import arg_split
181 from traitlets import (
180 from traitlets import (
182 Bool,
181 Bool,
183 Enum,
182 Enum,
184 Int,
183 Int,
185 List as ListTrait,
184 List as ListTrait,
186 Unicode,
185 Unicode,
187 Dict as DictTrait,
186 Dict as DictTrait,
188 Union as UnionTrait,
187 Union as UnionTrait,
189 default,
188 default,
190 observe,
189 observe,
191 )
190 )
192 from traitlets.config.configurable import Configurable
191 from traitlets.config.configurable import Configurable
193
192
194 import __main__
193 import __main__
195
194
196 # skip module doctests
195 # skip module doctests
197 __skip_doctest__ = True
196 __skip_doctest__ = True
198
197
199
198
200 try:
199 try:
201 import jedi
200 import jedi
202 jedi.settings.case_insensitive_completion = False
201 jedi.settings.case_insensitive_completion = False
203 import jedi.api.helpers
202 import jedi.api.helpers
204 import jedi.api.classes
203 import jedi.api.classes
205 JEDI_INSTALLED = True
204 JEDI_INSTALLED = True
206 except ImportError:
205 except ImportError:
207 JEDI_INSTALLED = False
206 JEDI_INSTALLED = False
208
207
209 if TYPE_CHECKING:
208 if TYPE_CHECKING:
210 from typing import cast
209 from typing import cast
210 from typing_extensions import TypedDict, NotRequired
211 else:
211 else:
212
212
213 def cast(obj, _type):
213 def cast(obj, _type):
214 return obj
214 return obj
215
215
216 TypedDict = Dict
217 NotRequired = Tuple
216
218
217 # -----------------------------------------------------------------------------
219 # -----------------------------------------------------------------------------
218 # Globals
220 # Globals
219 #-----------------------------------------------------------------------------
221 #-----------------------------------------------------------------------------
220
222
221 # Ranges where we have most of the valid Unicode names. We could be more finely
223 # Ranges where we have most of the valid Unicode names. We could be more finely
222 # grained, but is it worth it for performance? While Unicode has characters in the
224 # grained, but is it worth it for performance? While Unicode has characters in the
223 # range 0..0x110000, we seem to have names for only about 10% of those (131808 as I
225 # range 0..0x110000, we seem to have names for only about 10% of those (131808 as I
224 # write this). With the ranges below we cover them all, with a density of ~67%; the
226 # write this). With the ranges below we cover them all, with a density of ~67%; the
225 # biggest next gap we could consider only adds about 1% density, and there are 600
227 # biggest next gap we could consider only adds about 1% density, and there are 600
226 # gaps that would need hard-coding.
228 # gaps that would need hard-coding.
227 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
229 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
228
230
229 # Public API
231 # Public API
230 __all__ = ['Completer','IPCompleter']
232 __all__ = ['Completer','IPCompleter']
231
233
232 if sys.platform == 'win32':
234 if sys.platform == 'win32':
233 PROTECTABLES = ' '
235 PROTECTABLES = ' '
234 else:
236 else:
235 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
237 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
236
238
237 # Protect against returning an enormous number of completions which the frontend
239 # Protect against returning an enormous number of completions which the frontend
238 # may have trouble processing.
240 # may have trouble processing.
239 MATCHES_LIMIT = 500
241 MATCHES_LIMIT = 500
240
242
241 # Completion type reported when no type can be inferred.
243 # Completion type reported when no type can be inferred.
242 _UNKNOWN_TYPE = "<unknown>"
244 _UNKNOWN_TYPE = "<unknown>"
243
245
244 class ProvisionalCompleterWarning(FutureWarning):
246 class ProvisionalCompleterWarning(FutureWarning):
245 """
247 """
246 Exception raised by an experimental feature in this module.
248 Exception raised by an experimental feature in this module.
247
249
248 Wrap code in a :any:`provisionalcompleter` context manager if you
250 Wrap code in a :any:`provisionalcompleter` context manager if you
249 are certain you want to use an unstable feature.
251 are certain you want to use an unstable feature.
250 """
252 """
251 pass
253 pass
252
254
253 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
255 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
254
256
255
257
256 @skip_doctest
258 @skip_doctest
257 @contextmanager
259 @contextmanager
258 def provisionalcompleter(action='ignore'):
260 def provisionalcompleter(action='ignore'):
259 """
261 """
260 This context manager has to be used in any place where unstable completer
262 This context manager has to be used in any place where unstable completer
261 behavior and API may be called.
263 behavior and API may be called.
262
264
263 >>> with provisionalcompleter():
265 >>> with provisionalcompleter():
264 ... completer.do_experimental_things() # works
266 ... completer.do_experimental_things() # works
265
267
266 >>> completer.do_experimental_things() # raises.
268 >>> completer.do_experimental_things() # raises.
267
269
268 .. note::
270 .. note::
269
271
270 Unstable
272 Unstable
271
273
272 By using this context manager you agree that the API in use may change
274 By using this context manager you agree that the API in use may change
273 without warning, and that you won't complain if they do so.
275 without warning, and that you won't complain if they do so.
274
276
275 You also understand that, if the API is not to your liking, you should report
277 You also understand that, if the API is not to your liking, you should report
276 a bug to explain your use case upstream.
278 a bug to explain your use case upstream.
277
279
278 We'll be happy to get your feedback, feature requests, and improvements on
280 We'll be happy to get your feedback, feature requests, and improvements on
279 any of the unstable APIs!
281 any of the unstable APIs!
280 """
282 """
281 with warnings.catch_warnings():
283 with warnings.catch_warnings():
282 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
284 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
283 yield
285 yield
284
286
285
287
286 def has_open_quotes(s):
288 def has_open_quotes(s):
287 """Return whether a string has open quotes.
289 """Return whether a string has open quotes.
288
290
289 This simply counts whether the number of quote characters of either type in
291 This simply counts whether the number of quote characters of either type in
290 the string is odd.
292 the string is odd.
291
293
292 Returns
294 Returns
293 -------
295 -------
294 If there is an open quote, the quote character is returned. Else, return
296 If there is an open quote, the quote character is returned. Else, return
295 False.
297 False.
296 """
298 """
297 # We check " first, then ', so complex cases with nested quotes will get
299 # We check " first, then ', so complex cases with nested quotes will get
298 # the " to take precedence.
300 # the " to take precedence.
299 if s.count('"') % 2:
301 if s.count('"') % 2:
300 return '"'
302 return '"'
301 elif s.count("'") % 2:
303 elif s.count("'") % 2:
302 return "'"
304 return "'"
303 else:
305 else:
304 return False
306 return False
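
# Example (illustrative): has_open_quotes('print("hello') returns '"', while
# has_open_quotes("a = 'done'") returns False, since both quote counts are even.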
305
307
306
308
307 def protect_filename(s, protectables=PROTECTABLES):
309 def protect_filename(s, protectables=PROTECTABLES):
308 """Escape a string to protect certain characters."""
310 """Escape a string to protect certain characters."""
309 if set(s) & set(protectables):
311 if set(s) & set(protectables):
310 if sys.platform == "win32":
312 if sys.platform == "win32":
311 return '"' + s + '"'
313 return '"' + s + '"'
312 else:
314 else:
313 return "".join(("\\" + c if c in protectables else c) for c in s)
315 return "".join(("\\" + c if c in protectables else c) for c in s)
314 else:
316 else:
315 return s
317 return s
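
# Example (illustrative): on POSIX systems protect_filename('my file.txt')
# returns 'my\ file.txt', while on Windows the whole name is quoted instead:
# '"my file.txt"'.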
316
318
317
319
318 def expand_user(path:str) -> Tuple[str, bool, str]:
320 def expand_user(path:str) -> Tuple[str, bool, str]:
319 """Expand ``~``-style usernames in strings.
321 """Expand ``~``-style usernames in strings.
320
322
321 This is similar to :func:`os.path.expanduser`, but it computes and returns
323 This is similar to :func:`os.path.expanduser`, but it computes and returns
322 extra information that will be useful if the input was being used in
324 extra information that will be useful if the input was being used in
323 computing completions, and you wish to return the completions with the
325 computing completions, and you wish to return the completions with the
324 original '~' instead of its expanded value.
326 original '~' instead of its expanded value.
325
327
326 Parameters
328 Parameters
327 ----------
329 ----------
328 path : str
330 path : str
329 String to be expanded. If no ~ is present, the output is the same as the
331 String to be expanded. If no ~ is present, the output is the same as the
330 input.
332 input.
331
333
332 Returns
334 Returns
333 -------
335 -------
334 newpath : str
336 newpath : str
335 Result of ~ expansion in the input path.
337 Result of ~ expansion in the input path.
336 tilde_expand : bool
338 tilde_expand : bool
337 Whether any expansion was performed or not.
339 Whether any expansion was performed or not.
338 tilde_val : str
340 tilde_val : str
339 The value that ~ was replaced with.
341 The value that ~ was replaced with.
340 """
342 """
341 # Default values
343 # Default values
342 tilde_expand = False
344 tilde_expand = False
343 tilde_val = ''
345 tilde_val = ''
344 newpath = path
346 newpath = path
345
347
346 if path.startswith('~'):
348 if path.startswith('~'):
347 tilde_expand = True
349 tilde_expand = True
348 rest = len(path)-1
350 rest = len(path)-1
349 newpath = os.path.expanduser(path)
351 newpath = os.path.expanduser(path)
350 if rest:
352 if rest:
351 tilde_val = newpath[:-rest]
353 tilde_val = newpath[:-rest]
352 else:
354 else:
353 tilde_val = newpath
355 tilde_val = newpath
354
356
355 return newpath, tilde_expand, tilde_val
357 return newpath, tilde_expand, tilde_val
356
358
357
359
358 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
360 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
359 """Does the opposite of expand_user, with its outputs.
361 """Does the opposite of expand_user, with its outputs.
360 """
362 """
361 if tilde_expand:
363 if tilde_expand:
362 return path.replace(tilde_val, '~')
364 return path.replace(tilde_val, '~')
363 else:
365 else:
364 return path
366 return path
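
# Example (illustrative, assuming the home directory is /home/user):
#   expand_user('~/rep') -> ('/home/user/rep', True, '/home/user')
#   compress_user('/home/user/rep', True, '/home/user') -> '~/rep'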
365
367
366
368
367 def completions_sorting_key(word):
369 def completions_sorting_key(word):
368 """key for sorting completions
370 """key for sorting completions
369
371
370 This does several things:
372 This does several things:
371
373
372 - Demote any completions starting with underscores to the end
374 - Demote any completions starting with underscores to the end
373 - Insert any %magic and %%cellmagic completions in the alphabetical order
375 - Insert any %magic and %%cellmagic completions in the alphabetical order
374 by their name
376 by their name
375 """
377 """
376 prio1, prio2 = 0, 0
378 prio1, prio2 = 0, 0
377
379
378 if word.startswith('__'):
380 if word.startswith('__'):
379 prio1 = 2
381 prio1 = 2
380 elif word.startswith('_'):
382 elif word.startswith('_'):
381 prio1 = 1
383 prio1 = 1
382
384
383 if word.endswith('='):
385 if word.endswith('='):
384 prio1 = -1
386 prio1 = -1
385
387
386 if word.startswith('%%'):
388 if word.startswith('%%'):
387 # If there's another % in there, this is something else, so leave it alone
389 # If there's another % in there, this is something else, so leave it alone
388 if not "%" in word[2:]:
390 if not "%" in word[2:]:
389 word = word[2:]
391 word = word[2:]
390 prio2 = 2
392 prio2 = 2
391 elif word.startswith('%'):
393 elif word.startswith('%'):
392 if not "%" in word[1:]:
394 if not "%" in word[1:]:
393 word = word[1:]
395 word = word[1:]
394 prio2 = 1
396 prio2 = 1
395
397
396 return prio1, word, prio2
398 return prio1, word, prio2
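
# Example (illustrative):
#   sorted(['_private', '%%timeit', 'alpha'], key=completions_sorting_key)
# gives ['alpha', '%%timeit', '_private']: magics are slotted in alphabetically
# by their bare name, while names starting with underscores go to the end.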
397
399
398
400
399 class _FakeJediCompletion:
401 class _FakeJediCompletion:
400 """
402 """
401 This is a workaround to communicate to the UI that Jedi has crashed and to
403 This is a workaround to communicate to the UI that Jedi has crashed and to
402 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
404 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
403
405
404 Added in IPython 6.0, so it should likely be removed for 7.0.
406 Added in IPython 6.0, so it should likely be removed for 7.0.
405
407
406 """
408 """
407
409
408 def __init__(self, name):
410 def __init__(self, name):
409
411
410 self.name = name
412 self.name = name
411 self.complete = name
413 self.complete = name
412 self.type = 'crashed'
414 self.type = 'crashed'
413 self.name_with_symbols = name
415 self.name_with_symbols = name
414 self.signature = ''
416 self.signature = ''
415 self._origin = 'fake'
417 self._origin = 'fake'
416
418
417 def __repr__(self):
419 def __repr__(self):
418 return '<Fake completion object jedi has crashed>'
420 return '<Fake completion object jedi has crashed>'
419
421
420
422
421 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
423 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
422
424
423
425
424 class Completion:
426 class Completion:
425 """
427 """
426 Completion object used and returned by IPython completers.
428 Completion object used and returned by IPython completers.
427
429
428 .. warning::
430 .. warning::
429
431
430 Unstable
432 Unstable
431
433
432 This class is unstable; the API may change without warning.
434 This class is unstable; the API may change without warning.
433 It will also raise unless used in a proper context manager.
435 It will also raise unless used in a proper context manager.
434
436
435 This acts as a middle-ground :any:`Completion` object between the
437 This acts as a middle-ground :any:`Completion` object between the
436 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
438 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
437 object. While Jedi needs a lot of information about the evaluator and how the
439 object. While Jedi needs a lot of information about the evaluator and how the
438 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
440 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
439 need user-facing information.
441 need user-facing information.
440
442
441 - Which range should be replaced by what.
443 - Which range should be replaced by what.
442 - Some metadata (like completion type), or meta information to be displayed to
444 - Some metadata (like completion type), or meta information to be displayed to
443 the user.
445 the user.
444
446
445 For debugging purposes we can also store the origin of the completion (``jedi``,
447 For debugging purposes we can also store the origin of the completion (``jedi``,
446 ``IPython.python_matches``, ``IPython.magics_matches``...).
448 ``IPython.python_matches``, ``IPython.magics_matches``...).
447 """
449 """
448
450
449 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
451 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
450
452
451 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
453 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
452 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
454 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
453 "It may change without warnings. "
455 "It may change without warnings. "
454 "Use in corresponding context manager.",
456 "Use in corresponding context manager.",
455 category=ProvisionalCompleterWarning, stacklevel=2)
457 category=ProvisionalCompleterWarning, stacklevel=2)
456
458
457 self.start = start
459 self.start = start
458 self.end = end
460 self.end = end
459 self.text = text
461 self.text = text
460 self.type = type
462 self.type = type
461 self.signature = signature
463 self.signature = signature
462 self._origin = _origin
464 self._origin = _origin
463
465
464 def __repr__(self):
466 def __repr__(self):
465 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
467 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
466 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
468 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
467
469
468 def __eq__(self, other) -> bool:
470 def __eq__(self, other) -> bool:
469 """
471 """
470 Equality and hash do not hash the type (as some completers may not be
472 Equality and hash do not hash the type (as some completers may not be
471 able to infer the type), but are used to (partially) de-duplicate
473 able to infer the type), but are used to (partially) de-duplicate
472 completions.
474 completions.
473
475
474 Completely de-duplicating completions is a bit trickier than just
476 Completely de-duplicating completions is a bit trickier than just
475 comparing, as it depends on surrounding text, which Completions are not
477 comparing, as it depends on surrounding text, which Completions are not
476 aware of.
478 aware of.
477 """
479 """
478 return self.start == other.start and \
480 return self.start == other.start and \
479 self.end == other.end and \
481 self.end == other.end and \
480 self.text == other.text
482 self.text == other.text
481
483
482 def __hash__(self):
484 def __hash__(self):
483 return hash((self.start, self.end, self.text))
485 return hash((self.start, self.end, self.text))
484
486
485
487
486 class SimpleCompletion:
488 class SimpleCompletion:
487 # TODO: decide whether we should keep the ``SimpleCompletion`` separate from ``Completion``
489 # TODO: decide whether we should keep the ``SimpleCompletion`` separate from ``Completion``
488 # there are two advantages of keeping them separate:
490 # there are two advantages of keeping them separate:
489 # - compatibility with old readline `Completer.complete` interface (less important)
491 # - compatibility with old readline `Completer.complete` interface (less important)
490 # - ease of use for third parties (just return matched text and don't worry about coordinates)
492 # - ease of use for third parties (just return matched text and don't worry about coordinates)
491 # the disadvantage is that we need to loop over the completions again to transform them into
493 # the disadvantage is that we need to loop over the completions again to transform them into
492 # `Completion` objects (but it was done like that before the refactor into `SimpleCompletion` too).
494 # `Completion` objects (but it was done like that before the refactor into `SimpleCompletion` too).
493 __slots__ = ["text", "type"]
495 __slots__ = ["text", "type"]
494
496
495 def __init__(self, text: str, *, type: str = None):
497 def __init__(self, text: str, *, type: str = None):
496 self.text = text
498 self.text = text
497 self.type = type
499 self.type = type
498
500
499 def __repr__(self):
501 def __repr__(self):
500 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
502 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
501
503
502
504
503 class _MatcherResultBase(TypedDict):
505 class _MatcherResultBase(TypedDict):
504
506
505 #: suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
507 #: suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
506 matched_fragment: NotRequired[str]
508 matched_fragment: NotRequired[str]
507
509
508 #: whether to suppress results from other matchers; default is False.
510 #: whether to suppress results from other matchers; default is False.
509 suppress_others: NotRequired[bool]
511 suppress_others: NotRequired[bool]
510
512
511 #: are completions already ordered and should be left as-is? default is False.
513 #: are completions already ordered and should be left as-is? default is False.
512 ordered: NotRequired[bool]
514 ordered: NotRequired[bool]
513
515
514 # TODO: should we use a relevance score for ordering?
516 # TODO: should we use a relevance score for ordering?
515 #: value between 0 (likely not relevant) and 100 (likely relevant); default is 50.
517 #: value between 0 (likely not relevant) and 100 (likely relevant); default is 50.
516 # relevance: NotRequired[float]
518 # relevance: NotRequired[float]
517
519
518
520
519 class SimpleMatcherResult(_MatcherResultBase):
521 class SimpleMatcherResult(_MatcherResultBase):
520 """Result of new-style completion matcher."""
522 """Result of new-style completion matcher."""
521
523
522 #: list of candidate completions
524 #: list of candidate completions
523 completions: Sequence[SimpleCompletion]
525 completions: Sequence[SimpleCompletion]
524
526
525
527
526 class _JediMatcherResult(_MatcherResultBase):
528 class _JediMatcherResult(_MatcherResultBase):
527 """Matching result returned by Jedi (will be processed differently)"""
529 """Matching result returned by Jedi (will be processed differently)"""
528
530
529 #: list of candidate completions
531 #: list of candidate completions
530 completions: Iterable[_JediCompletionLike]
532 completions: Iterable[_JediCompletionLike]
531
533
532
534
533 class CompletionContext(NamedTuple):
535 class CompletionContext(NamedTuple):
534 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
536 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
535 # which was not explicitly visible as an argument of the matcher, making any refactor
537 # which was not explicitly visible as an argument of the matcher, making any refactor
536 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
538 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
537 # from the completer, and make substituting them in sub-classes easier.
539 # from the completer, and make substituting them in sub-classes easier.
538
540
539 #: Relevant fragment of code directly preceding the cursor.
541 #: Relevant fragment of code directly preceding the cursor.
540 #: The extraction of token is implemented via splitter heuristic
542 #: The extraction of token is implemented via splitter heuristic
541 #: (following readline behaviour for legacy reasons), which is user configurable
543 #: (following readline behaviour for legacy reasons), which is user configurable
542 #: (by switching the greedy mode).
544 #: (by switching the greedy mode).
543 token: str
545 token: str
544
546
545 full_text: str
547 full_text: str
546
548
547 #: Cursor position in the line (the same for ``full_text`` and ``text``).
549 #: Cursor position in the line (the same for ``full_text`` and ``text``).
548 cursor_position: int
550 cursor_position: int
549
551
550 #: Cursor line in ``full_text``.
552 #: Cursor line in ``full_text``.
551 cursor_line: int
553 cursor_line: int
552
554
553 @property
555 @property
554 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
556 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
555 def text_until_cursor(self) -> str:
557 def text_until_cursor(self) -> str:
556 return self.line_with_cursor[: self.cursor_position]
558 return self.line_with_cursor[: self.cursor_position]
557
559
558 @property
560 @property
559 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
561 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
560 def line_with_cursor(self) -> str:
562 def line_with_cursor(self) -> str:
561 return self.full_text.split("\n")[self.cursor_line]
563 return self.full_text.split("\n")[self.cursor_line]
562
564
563
565
564 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
566 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
565
567
566 MatcherAPIv1 = Callable[[str], List[str]]
568 MatcherAPIv1 = Callable[[str], List[str]]
567 MatcherAPIv2 = Callable[[CompletionContext], MatcherResult]
569 MatcherAPIv2 = Callable[[CompletionContext], MatcherResult]
568 Matcher = Union[MatcherAPIv1, MatcherAPIv2]
570 Matcher = Union[MatcherAPIv1, MatcherAPIv2]
569
571
570
572
571 def completion_matcher(
573 def completion_matcher(
572 *, priority: float = None, identifier: str = None, api_version=1
574 *, priority: float = None, identifier: str = None, api_version=1
573 ):
575 ):
574 """Adds attributes describing the matcher.
576 """Adds attributes describing the matcher.
575
577
576 Parameters
578 Parameters
577 ----------
579 ----------
578 priority : Optional[float]
580 priority : Optional[float]
579 The priority of the matcher; it determines the order of execution of matchers.
581 The priority of the matcher; it determines the order of execution of matchers.
580 Higher priority means that the matcher will be executed first. Defaults to 50.
582 Higher priority means that the matcher will be executed first. Defaults to 50.
581 identifier : Optional[str]
583 identifier : Optional[str]
582 identifier of the matcher allowing users to modify the behaviour via traitlets,
584 identifier of the matcher allowing users to modify the behaviour via traitlets,
583 and also used for debugging (will be passed as ``origin`` with the completions).
585 and also used for debugging (will be passed as ``origin`` with the completions).
584 Defaults to matcher function ``__qualname__``.
586 Defaults to matcher function ``__qualname__``.
585 api_version: Optional[int]
587 api_version: Optional[int]
586 version of the Matcher API used by this matcher.
588 version of the Matcher API used by this matcher.
587 Currently supported values are 1 and 2.
589 Currently supported values are 1 and 2.
588 Defaults to 1.
590 Defaults to 1.
589 """
591 """
590
592
591 def wrapper(func: Matcher):
593 def wrapper(func: Matcher):
592 func.matcher_priority = priority
594 func.matcher_priority = priority
593 func.matcher_identifier = identifier or func.__qualname__
595 func.matcher_identifier = identifier or func.__qualname__
594 func.matcher_api_version = api_version
596 func.matcher_api_version = api_version
595 return func
597 return func
596
598
597 return wrapper
599 return wrapper
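
# Example (illustrative sketch): an API v2 matcher would be decorated and shaped
# roughly like this, using the CompletionContext, SimpleCompletion and
# SimpleMatcherResult types defined in this module:
#
#     @completion_matcher(identifier="example_matcher", api_version=2)
#     def example_matcher(context: CompletionContext) -> SimpleMatcherResult:
#         return {"completions": [SimpleCompletion(text=context.token.upper())]}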
598
600
599
601
600 def _get_matcher_id(matcher: Matcher):
602 def _get_matcher_id(matcher: Matcher):
601 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
603 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
602
604
603
605
604 def _get_matcher_api_version(matcher):
606 def _get_matcher_api_version(matcher):
605 return getattr(matcher, "matcher_api_version", 1)
607 return getattr(matcher, "matcher_api_version", 1)
606
608
607
609
608 context_matcher = partial(completion_matcher, api_version=2)
610 context_matcher = partial(completion_matcher, api_version=2)
609
611
610
612
611 _IC = Iterable[Completion]
613 _IC = Iterable[Completion]
612
614
613
615
614 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
616 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
615 """
617 """
616 Deduplicate a set of completions.
618 Deduplicate a set of completions.
617
619
618 .. warning::
620 .. warning::
619
621
620 Unstable
622 Unstable
621
623
622 This function is unstable, API may change without warning.
624 This function is unstable, API may change without warning.
623
625
624 Parameters
626 Parameters
625 ----------
627 ----------
626 text : str
628 text : str
627 text that should be completed.
629 text that should be completed.
628 completions : Iterator[Completion]
630 completions : Iterator[Completion]
629 iterator over the completions to deduplicate
631 iterator over the completions to deduplicate
630
632
631 Yields
633 Yields
632 ------
634 ------
633 `Completions` objects
635 `Completions` objects
634 Completions coming from multiple sources, may be different but end up having
636 Completions coming from multiple sources, may be different but end up having
635 the same effect when applied to ``text``. If this is the case, this will
637 the same effect when applied to ``text``. If this is the case, this will
636 consider completions as equal and only emit the first encountered.
638 consider completions as equal and only emit the first encountered.
637 Not folded into `completions()` yet for debugging purposes, and to detect when
639 Not folded into `completions()` yet for debugging purposes, and to detect when
638 the IPython completer does return things that Jedi does not, but should be
640 the IPython completer does return things that Jedi does not, but should be
639 at some point.
641 at some point.
640 """
642 """
641 completions = list(completions)
643 completions = list(completions)
642 if not completions:
644 if not completions:
643 return
645 return
644
646
645 new_start = min(c.start for c in completions)
647 new_start = min(c.start for c in completions)
646 new_end = max(c.end for c in completions)
648 new_end = max(c.end for c in completions)
647
649
648 seen = set()
650 seen = set()
649 for c in completions:
651 for c in completions:
650 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
652 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
651 if new_text not in seen:
653 if new_text not in seen:
652 yield c
654 yield c
653 seen.add(new_text)
655 seen.add(new_text)
654
656
655
657
656 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
658 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
657 """
659 """
658 Rectify a set of completions to all have the same ``start`` and ``end``
660 Rectify a set of completions to all have the same ``start`` and ``end``
659
661
660 .. warning::
662 .. warning::
661
663
662 Unstable
664 Unstable
663
665
664 This function is unstable, API may change without warning.
666 This function is unstable, API may change without warning.
665 It will also raise unless used in a proper context manager.
667 It will also raise unless used in a proper context manager.
666
668
667 Parameters
669 Parameters
668 ----------
670 ----------
669 text : str
671 text : str
670 text that should be completed.
672 text that should be completed.
671 completions : Iterator[Completion]
673 completions : Iterator[Completion]
672 iterator over the completions to rectify
674 iterator over the completions to rectify
673 _debug : bool
675 _debug : bool
674 Log failed completion
676 Log failed completion
675
677
676 Notes
678 Notes
677 -----
679 -----
678 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
680 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
679 the Jupyter Protocol requires them to do so. This will readjust
681 the Jupyter Protocol requires them to do so. This will readjust
680 the completions to have the same ``start`` and ``end`` by padding both
682 the completions to have the same ``start`` and ``end`` by padding both
681 extremities with surrounding text.
683 extremities with surrounding text.
682
684
683 During stabilisation this should support a ``_debug`` option to log which
685 During stabilisation this should support a ``_debug`` option to log which
684 completions are returned by the IPython completer and not found in Jedi, in
686 completions are returned by the IPython completer and not found in Jedi, in
685 order to make upstream bug reports.
687 order to make upstream bug reports.
686 """
688 """
687 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
689 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
688 "It may change without warnings. "
690 "It may change without warnings. "
689 "Use in corresponding context manager.",
691 "Use in corresponding context manager.",
690 category=ProvisionalCompleterWarning, stacklevel=2)
692 category=ProvisionalCompleterWarning, stacklevel=2)
691
693
692 completions = list(completions)
694 completions = list(completions)
693 if not completions:
695 if not completions:
694 return
696 return
695 starts = (c.start for c in completions)
697 starts = (c.start for c in completions)
696 ends = (c.end for c in completions)
698 ends = (c.end for c in completions)
697
699
698 new_start = min(starts)
700 new_start = min(starts)
699 new_end = max(ends)
701 new_end = max(ends)
700
702
701 seen_jedi = set()
703 seen_jedi = set()
702 seen_python_matches = set()
704 seen_python_matches = set()
703 for c in completions:
705 for c in completions:
704 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
706 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
705 if c._origin == 'jedi':
707 if c._origin == 'jedi':
706 seen_jedi.add(new_text)
708 seen_jedi.add(new_text)
707 elif c._origin == 'IPCompleter.python_matches':
709 elif c._origin == 'IPCompleter.python_matches':
708 seen_python_matches.add(new_text)
710 seen_python_matches.add(new_text)
709 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
711 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
710 diff = seen_python_matches.difference(seen_jedi)
712 diff = seen_python_matches.difference(seen_jedi)
711 if diff and _debug:
713 if diff and _debug:
712 print('IPython.python matches have extras:', diff)
714 print('IPython.python matches have extras:', diff)
713
715
714
716
715 if sys.platform == 'win32':
717 if sys.platform == 'win32':
716 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
718 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
717 else:
719 else:
718 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
720 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
719
721
720 GREEDY_DELIMS = ' =\r\n'
722 GREEDY_DELIMS = ' =\r\n'
721
723
722
724
723 class CompletionSplitter(object):
725 class CompletionSplitter(object):
724 """An object to split an input line in a manner similar to readline.
726 """An object to split an input line in a manner similar to readline.
725
727
726 By having our own implementation, we can expose readline-like completion in
728 By having our own implementation, we can expose readline-like completion in
727 a uniform manner to all frontends. This object only needs to be given the
729 a uniform manner to all frontends. This object only needs to be given the
728 line of text to be split and the cursor position on said line, and it
730 line of text to be split and the cursor position on said line, and it
729 returns the 'word' to be completed on at the cursor after splitting the
731 returns the 'word' to be completed on at the cursor after splitting the
730 entire line.
732 entire line.
731
733
732 What characters are used as splitting delimiters can be controlled by
734 What characters are used as splitting delimiters can be controlled by
733 setting the ``delims`` attribute (this is a property that internally
735 setting the ``delims`` attribute (this is a property that internally
734 automatically builds the necessary regular expression)"""
736 automatically builds the necessary regular expression)"""
735
737
736 # Private interface
738 # Private interface
737
739
738 # A string of delimiter characters. The default value makes sense for
740 # A string of delimiter characters. The default value makes sense for
739 # IPython's most typical usage patterns.
741 # IPython's most typical usage patterns.
740 _delims = DELIMS
742 _delims = DELIMS
741
743
742 # The expression (a normal string) to be compiled into a regular expression
744 # The expression (a normal string) to be compiled into a regular expression
743 # for actual splitting. We store it as an attribute mostly for ease of
745 # for actual splitting. We store it as an attribute mostly for ease of
744 # debugging, since this type of code can be so tricky to debug.
746 # debugging, since this type of code can be so tricky to debug.
745 _delim_expr = None
747 _delim_expr = None
746
748
747 # The regular expression that does the actual splitting
749 # The regular expression that does the actual splitting
748 _delim_re = None
750 _delim_re = None
749
751
750 def __init__(self, delims=None):
752 def __init__(self, delims=None):
751 delims = CompletionSplitter._delims if delims is None else delims
753 delims = CompletionSplitter._delims if delims is None else delims
752 self.delims = delims
754 self.delims = delims
753
755
754 @property
756 @property
755 def delims(self):
757 def delims(self):
756 """Return the string of delimiter characters."""
758 """Return the string of delimiter characters."""
757 return self._delims
759 return self._delims
758
760
759 @delims.setter
761 @delims.setter
760 def delims(self, delims):
762 def delims(self, delims):
761 """Set the delimiters for line splitting."""
763 """Set the delimiters for line splitting."""
762 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
764 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
763 self._delim_re = re.compile(expr)
765 self._delim_re = re.compile(expr)
764 self._delims = delims
766 self._delims = delims
765 self._delim_expr = expr
767 self._delim_expr = expr
766
768
767 def split_line(self, line, cursor_pos=None):
769 def split_line(self, line, cursor_pos=None):
768 """Split a line of text with a cursor at the given position.
770 """Split a line of text with a cursor at the given position.
769 """
771 """
770 l = line if cursor_pos is None else line[:cursor_pos]
772 l = line if cursor_pos is None else line[:cursor_pos]
771 return self._delim_re.split(l)[-1]
773 return self._delim_re.split(l)[-1]
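
# Example (illustrative): CompletionSplitter().split_line('a + b.foo') returns
# 'b.foo', i.e. the token to be completed after splitting on the default delimiters.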
772
774
773
775
774
776
775 class Completer(Configurable):
777 class Completer(Configurable):
776
778
777 greedy = Bool(False,
779 greedy = Bool(False,
778 help="""Activate greedy completion
780 help="""Activate greedy completion
779 PENDING DEPRECATION: this is now mostly taken care of by Jedi.
781 PENDING DEPRECATION: this is now mostly taken care of by Jedi.
780
782
781 This will enable completion on elements of lists, results of function calls, etc.,
783 This will enable completion on elements of lists, results of function calls, etc.,
782 but can be unsafe because the code is actually evaluated on TAB.
784 but can be unsafe because the code is actually evaluated on TAB.
783 """,
785 """,
784 ).tag(config=True)
786 ).tag(config=True)
785
787
786 use_jedi = Bool(default_value=JEDI_INSTALLED,
788 use_jedi = Bool(default_value=JEDI_INSTALLED,
787 help="Experimental: Use Jedi to generate autocompletions. "
789 help="Experimental: Use Jedi to generate autocompletions. "
788 "Default to True if jedi is installed.").tag(config=True)
790 "Default to True if jedi is installed.").tag(config=True)
789
791
790 jedi_compute_type_timeout = Int(default_value=400,
792 jedi_compute_type_timeout = Int(default_value=400,
791 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
793 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
792 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
794 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
793 performance by preventing jedi from building its cache.
795 performance by preventing jedi from building its cache.
794 """).tag(config=True)
796 """).tag(config=True)
795
797
796 debug = Bool(default_value=False,
798 debug = Bool(default_value=False,
797 help='Enable debug for the Completer. Mostly print extra '
799 help='Enable debug for the Completer. Mostly print extra '
798 'information for experimental jedi integration.')\
800 'information for experimental jedi integration.')\
799 .tag(config=True)
801 .tag(config=True)
800
802
801 backslash_combining_completions = Bool(True,
803 backslash_combining_completions = Bool(True,
802 help="Enable unicode completions, e.g. \\alpha<tab> . "
804 help="Enable unicode completions, e.g. \\alpha<tab> . "
803 "Includes completion of latex commands, unicode names, and expanding "
805 "Includes completion of latex commands, unicode names, and expanding "
804 "unicode characters back to latex commands.").tag(config=True)
806 "unicode characters back to latex commands.").tag(config=True)
805
807
806 def __init__(self, namespace=None, global_namespace=None, **kwargs):
808 def __init__(self, namespace=None, global_namespace=None, **kwargs):
807 """Create a new completer for the command line.
809 """Create a new completer for the command line.
808
810
809 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
811 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
810
812
811 If unspecified, the default namespace where completions are performed
813 If unspecified, the default namespace where completions are performed
812 is __main__ (technically, __main__.__dict__). Namespaces should be
814 is __main__ (technically, __main__.__dict__). Namespaces should be
813 given as dictionaries.
815 given as dictionaries.
814
816
815 An optional second namespace can be given. This allows the completer
817 An optional second namespace can be given. This allows the completer
816 to handle cases where both the local and global scopes need to be
818 to handle cases where both the local and global scopes need to be
817 distinguished.
819 distinguished.
818 """
820 """
819
821
820 # Don't bind to namespace quite yet, but flag whether the user wants a
822 # Don't bind to namespace quite yet, but flag whether the user wants a
821 # specific namespace or to use __main__.__dict__. This will allow us
823 # specific namespace or to use __main__.__dict__. This will allow us
822 # to bind to __main__.__dict__ at completion time, not now.
824 # to bind to __main__.__dict__ at completion time, not now.
823 if namespace is None:
825 if namespace is None:
824 self.use_main_ns = True
826 self.use_main_ns = True
825 else:
827 else:
826 self.use_main_ns = False
828 self.use_main_ns = False
827 self.namespace = namespace
829 self.namespace = namespace
828
830
829 # The global namespace, if given, can be bound directly
831 # The global namespace, if given, can be bound directly
830 if global_namespace is None:
832 if global_namespace is None:
831 self.global_namespace = {}
833 self.global_namespace = {}
832 else:
834 else:
833 self.global_namespace = global_namespace
835 self.global_namespace = global_namespace
834
836
835 self.custom_matchers = []
837 self.custom_matchers = []
836
838
837 super(Completer, self).__init__(**kwargs)
839 super(Completer, self).__init__(**kwargs)
838
840
839 def complete(self, text, state):
841 def complete(self, text, state):
840 """Return the next possible completion for 'text'.
842 """Return the next possible completion for 'text'.
841
843
842 This is called successively with state == 0, 1, 2, ... until it
844 This is called successively with state == 0, 1, 2, ... until it
843 returns None. The completion should begin with 'text'.
845 returns None. The completion should begin with 'text'.
844
846
845 """
847 """
846 if self.use_main_ns:
848 if self.use_main_ns:
847 self.namespace = __main__.__dict__
849 self.namespace = __main__.__dict__
848
850
849 if state == 0:
851 if state == 0:
850 if "." in text:
852 if "." in text:
851 self.matches = self.attr_matches(text)
853 self.matches = self.attr_matches(text)
852 else:
854 else:
853 self.matches = self.global_matches(text)
855 self.matches = self.global_matches(text)
854 try:
856 try:
855 return self.matches[state]
857 return self.matches[state]
856 except IndexError:
858 except IndexError:
857 return None
859 return None
858
860
859 def global_matches(self, text):
861 def global_matches(self, text):
860 """Compute matches when text is a simple name.
862 """Compute matches when text is a simple name.
861
863
862 Return a list of all keywords, built-in functions and names currently
864 Return a list of all keywords, built-in functions and names currently
863 defined in self.namespace or self.global_namespace that match.
865 defined in self.namespace or self.global_namespace that match.
864
866
865 """
867 """
866 matches = []
868 matches = []
867 match_append = matches.append
869 match_append = matches.append
868 n = len(text)
870 n = len(text)
869 for lst in [keyword.kwlist,
871 for lst in [keyword.kwlist,
870 builtin_mod.__dict__.keys(),
872 builtin_mod.__dict__.keys(),
871 self.namespace.keys(),
873 self.namespace.keys(),
872 self.global_namespace.keys()]:
874 self.global_namespace.keys()]:
873 for word in lst:
875 for word in lst:
874 if word[:n] == text and word != "__builtins__":
876 if word[:n] == text and word != "__builtins__":
875 match_append(word)
877 match_append(word)
876
878
877 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
879 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
878 for lst in [self.namespace.keys(),
880 for lst in [self.namespace.keys(),
879 self.global_namespace.keys()]:
881 self.global_namespace.keys()]:
880 shortened = {"_".join([sub[0] for sub in word.split('_')]) : word
882 shortened = {"_".join([sub[0] for sub in word.split('_')]) : word
881 for word in lst if snake_case_re.match(word)}
883 for word in lst if snake_case_re.match(word)}
882 for word in shortened.keys():
884 for word in shortened.keys():
883 if word[:n] == text and word != "__builtins__":
885 if word[:n] == text and word != "__builtins__":
884 match_append(shortened[word])
886 match_append(shortened[word])
885 return matches
887 return matches
886
888
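# Editor's illustration (a sketch, not part of the original diff): the snake_case
# branch just above lets an abbreviation built from the first letter of each
# underscore-separated chunk match a full name. The namespace contents are hypothetical.
#
#     >>> c = Completer(namespace={'data_frame': 1, 'to_csv': 2})
#     >>> c.global_matches('d_f')
#     ['data_frame']
#
# 'd_f' matches because joining the first letters of the chunks of 'data_frame'
# gives 'd_f', which starts with the typed text.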
887 def attr_matches(self, text):
889 def attr_matches(self, text):
888 """Compute matches when text contains a dot.
890 """Compute matches when text contains a dot.
889
891
890 Assuming the text is of the form NAME.NAME....[NAME], and is
892 Assuming the text is of the form NAME.NAME....[NAME], and is
891 evaluatable in self.namespace or self.global_namespace, it will be
893 evaluatable in self.namespace or self.global_namespace, it will be
892 evaluated and its attributes (as revealed by dir()) are used as
894 evaluated and its attributes (as revealed by dir()) are used as
893 possible completions. (For class instances, class members are
895 possible completions. (For class instances, class members are
894 also considered.)
896 also considered.)
895
897
896 WARNING: this can still invoke arbitrary C code, if an object
898 WARNING: this can still invoke arbitrary C code, if an object
897 with a __getattr__ hook is evaluated.
899 with a __getattr__ hook is evaluated.
898
900
899 """
901 """
900
902
901 # Another option, seems to work great. Catches things like ''.<tab>
903 # Another option, seems to work great. Catches things like ''.<tab>
902 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
904 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
903
905
904 if m:
906 if m:
905 expr, attr = m.group(1, 3)
907 expr, attr = m.group(1, 3)
906 elif self.greedy:
908 elif self.greedy:
907 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
909 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
908 if not m2:
910 if not m2:
909 return []
911 return []
910 expr, attr = m2.group(1,2)
912 expr, attr = m2.group(1,2)
911 else:
913 else:
912 return []
914 return []
913
915
914 try:
916 try:
915 obj = eval(expr, self.namespace)
917 obj = eval(expr, self.namespace)
916 except:
918 except:
917 try:
919 try:
918 obj = eval(expr, self.global_namespace)
920 obj = eval(expr, self.global_namespace)
919 except:
921 except:
920 return []
922 return []
921
923
922 if self.limit_to__all__ and hasattr(obj, '__all__'):
924 if self.limit_to__all__ and hasattr(obj, '__all__'):
923 words = get__all__entries(obj)
925 words = get__all__entries(obj)
924 else:
926 else:
925 words = dir2(obj)
927 words = dir2(obj)
926
928
927 try:
929 try:
928 words = generics.complete_object(obj, words)
930 words = generics.complete_object(obj, words)
929 except TryNext:
931 except TryNext:
930 pass
932 pass
931 except AssertionError:
933 except AssertionError:
932 raise
934 raise
933 except Exception:
935 except Exception:
934 # Silence errors from completion function
936 # Silence errors from completion function
935 #raise # dbg
937 #raise # dbg
936 pass
938 pass
937 # Build match list to return
939 # Build match list to return
938 n = len(attr)
940 n = len(attr)
939 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
941 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
940
942
941
943
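# Editor's illustration (a sketch, not part of the original diff): attr_matches
# evaluates the expression left of the final dot and filters its attributes by
# the typed fragment.
#
#     >>> c = Completer(namespace={})
#     >>> c.attr_matches('str.spl')
#     ['str.split', 'str.splitlines']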
942 def get__all__entries(obj):
944 def get__all__entries(obj):
943 """returns the strings in the __all__ attribute"""
945 """returns the strings in the __all__ attribute"""
944 try:
946 try:
945 words = getattr(obj, '__all__')
947 words = getattr(obj, '__all__')
946 except:
948 except:
947 return []
949 return []
948
950
949 return [w for w in words if isinstance(w, str)]
951 return [w for w in words if isinstance(w, str)]
950
952
951
953
952 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
954 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
953 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
955 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
954 """Used by dict_key_matches, matching the prefix to a list of keys
956 """Used by dict_key_matches, matching the prefix to a list of keys
955
957
956 Parameters
958 Parameters
957 ----------
959 ----------
958 keys
960 keys
959 list of keys in dictionary currently being completed.
961 list of keys in dictionary currently being completed.
960 prefix
962 prefix
961 Part of the text already typed by the user. E.g. `mydict[b'fo`
963 Part of the text already typed by the user. E.g. `mydict[b'fo`
962 delims
964 delims
963 String of delimiters to consider when finding the current key.
965 String of delimiters to consider when finding the current key.
964 extra_prefix : optional
966 extra_prefix : optional
965 Part of the text already typed in multi-key index cases. E.g. for
967 Part of the text already typed in multi-key index cases. E.g. for
966 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
968 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
967
969
968 Returns
970 Returns
969 -------
971 -------
970 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
972 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
971 ``quote`` being the quote that needs to be used to close the current string,
973 ``quote`` being the quote that needs to be used to close the current string,
972 ``token_start`` the position where the replacement should start occurring, and
974 ``token_start`` the position where the replacement should start occurring, and
973 ``matched``, a list of replacement/completion candidates.
975 ``matched``, a list of replacement/completion candidates.
974
976
975 """
977 """
976 prefix_tuple = extra_prefix if extra_prefix else ()
978 prefix_tuple = extra_prefix if extra_prefix else ()
977 Nprefix = len(prefix_tuple)
979 Nprefix = len(prefix_tuple)
978 def filter_prefix_tuple(key):
980 def filter_prefix_tuple(key):
979 # Reject too short keys
981 # Reject too short keys
980 if len(key) <= Nprefix:
982 if len(key) <= Nprefix:
981 return False
983 return False
982 # Reject keys with non str/bytes in it
984 # Reject keys with non str/bytes in it
983 for k in key:
985 for k in key:
984 if not isinstance(k, (str, bytes)):
986 if not isinstance(k, (str, bytes)):
985 return False
987 return False
986 # Reject keys that do not match the prefix
988 # Reject keys that do not match the prefix
987 for k, pt in zip(key, prefix_tuple):
989 for k, pt in zip(key, prefix_tuple):
988 if k != pt:
990 if k != pt:
989 return False
991 return False
990 # All checks passed!
992 # All checks passed!
991 return True
993 return True
992
994
993 filtered_keys:List[Union[str,bytes]] = []
995 filtered_keys:List[Union[str,bytes]] = []
994 def _add_to_filtered_keys(key):
996 def _add_to_filtered_keys(key):
995 if isinstance(key, (str, bytes)):
997 if isinstance(key, (str, bytes)):
996 filtered_keys.append(key)
998 filtered_keys.append(key)
997
999
998 for k in keys:
1000 for k in keys:
999 if isinstance(k, tuple):
1001 if isinstance(k, tuple):
1000 if filter_prefix_tuple(k):
1002 if filter_prefix_tuple(k):
1001 _add_to_filtered_keys(k[Nprefix])
1003 _add_to_filtered_keys(k[Nprefix])
1002 else:
1004 else:
1003 _add_to_filtered_keys(k)
1005 _add_to_filtered_keys(k)
1004
1006
1005 if not prefix:
1007 if not prefix:
1006 return '', 0, [repr(k) for k in filtered_keys]
1008 return '', 0, [repr(k) for k in filtered_keys]
1007 quote_match = re.search('["\']', prefix)
1009 quote_match = re.search('["\']', prefix)
1008 assert quote_match is not None # silence mypy
1010 assert quote_match is not None # silence mypy
1009 quote = quote_match.group()
1011 quote = quote_match.group()
1010 try:
1012 try:
1011 prefix_str = eval(prefix + quote, {})
1013 prefix_str = eval(prefix + quote, {})
1012 except Exception:
1014 except Exception:
1013 return '', 0, []
1015 return '', 0, []
1014
1016
1015 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1017 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1016 token_match = re.search(pattern, prefix, re.UNICODE)
1018 token_match = re.search(pattern, prefix, re.UNICODE)
1017 assert token_match is not None # silence mypy
1019 assert token_match is not None # silence mypy
1018 token_start = token_match.start()
1020 token_start = token_match.start()
1019 token_prefix = token_match.group()
1021 token_prefix = token_match.group()
1020
1022
1021 matched:List[str] = []
1023 matched:List[str] = []
1022 for key in filtered_keys:
1024 for key in filtered_keys:
1023 try:
1025 try:
1024 if not key.startswith(prefix_str):
1026 if not key.startswith(prefix_str):
1025 continue
1027 continue
1026 except (AttributeError, TypeError, UnicodeError):
1028 except (AttributeError, TypeError, UnicodeError):
1027 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1029 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1028 continue
1030 continue
1029
1031
1030 # reformat remainder of key to begin with prefix
1032 # reformat remainder of key to begin with prefix
1031 rem = key[len(prefix_str):]
1033 rem = key[len(prefix_str):]
1032 # force repr wrapped in '
1034 # force repr wrapped in '
1033 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1035 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1034 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1036 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1035 if quote == '"':
1037 if quote == '"':
1036 # The entered prefix is quoted with ",
1038 # The entered prefix is quoted with ",
1037 # but the match is quoted with '.
1039 # but the match is quoted with '.
1038 # A contained " hence needs escaping for comparison:
1040 # A contained " hence needs escaping for comparison:
1039 rem_repr = rem_repr.replace('"', '\\"')
1041 rem_repr = rem_repr.replace('"', '\\"')
1040
1042
1041 # then reinsert prefix from start of token
1043 # then reinsert prefix from start of token
1042 matched.append('%s%s' % (token_prefix, rem_repr))
1044 matched.append('%s%s' % (token_prefix, rem_repr))
1043 return quote, token_start, matched
1045 return quote, token_start, matched
1044
1046
1045
1047
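# Editor's illustration (a sketch, not part of the original diff), using a
# single-space ``delims`` for brevity; IPython normally passes its completion
# splitter's delimiter set here.
#
#     >>> match_dict_keys(['abc', 'abd', b'xyz'], "'ab", delims=' ')
#     ("'", 0, ["'abc", "'abd"])
#
# The returned quote tells the caller how the string literal should be closed,
# 0 is the offset at which the replacement starts, and the bytes key is skipped
# because it cannot match a str prefix.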
1046 def cursor_to_position(text:str, line:int, column:int)->int:
1048 def cursor_to_position(text:str, line:int, column:int)->int:
1047 """
1049 """
1048 Convert the (line,column) position of the cursor in text to an offset in a
1050 Convert the (line,column) position of the cursor in text to an offset in a
1049 string.
1051 string.
1050
1052
1051 Parameters
1053 Parameters
1052 ----------
1054 ----------
1053 text : str
1055 text : str
1054 The text in which to calculate the cursor offset
1056 The text in which to calculate the cursor offset
1055 line : int
1057 line : int
1056 Line of the cursor; 0-indexed
1058 Line of the cursor; 0-indexed
1057 column : int
1059 column : int
1058 Column of the cursor; 0-indexed
1060 Column of the cursor; 0-indexed
1059
1061
1060 Returns
1062 Returns
1061 -------
1063 -------
1062 Position of the cursor in ``text``, 0-indexed.
1064 Position of the cursor in ``text``, 0-indexed.
1063
1065
1064 See Also
1066 See Also
1065 --------
1067 --------
1066 position_to_cursor : reciprocal of this function
1068 position_to_cursor : reciprocal of this function
1067
1069
1068 """
1070 """
1069 lines = text.split('\n')
1071 lines = text.split('\n')
1070 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1072 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1071
1073
1072 return sum(len(l) + 1 for l in lines[:line]) + column
1074 return sum(len(l) + 1 for l in lines[:line]) + column
1073
1075
1074 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1076 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1075 """
1077 """
1076 Convert the position of the cursor in text (0-indexed) to a line
1078 Convert the position of the cursor in text (0-indexed) to a line
1077 number (0-indexed) and a column number (0-indexed) pair.
1079 number (0-indexed) and a column number (0-indexed) pair.
1078
1080
1079 Position should be a valid position in ``text``.
1081 Position should be a valid position in ``text``.
1080
1082
1081 Parameters
1083 Parameters
1082 ----------
1084 ----------
1083 text : str
1085 text : str
1084 The text in which to calculate the cursor offset
1086 The text in which to calculate the cursor offset
1085 offset : int
1087 offset : int
1086 Position of the cursor in ``text``, 0-indexed.
1088 Position of the cursor in ``text``, 0-indexed.
1087
1089
1088 Returns
1090 Returns
1089 -------
1091 -------
1090 (line, column) : (int, int)
1092 (line, column) : (int, int)
1091 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1093 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1092
1094
1093 See Also
1095 See Also
1094 --------
1096 --------
1095 cursor_to_position : reciprocal of this function
1097 cursor_to_position : reciprocal of this function
1096
1098
1097 """
1099 """
1098
1100
1099 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1101 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1100
1102
1101 before = text[:offset]
1103 before = text[:offset]
1102 blines = before.split('\n') # ! splitlines trims trailing \n
1104 blines = before.split('\n') # ! splitlines trims trailing \n
1103 line = before.count('\n')
1105 line = before.count('\n')
1104 col = len(blines[-1])
1106 col = len(blines[-1])
1105 return line, col
1107 return line, col
1106
1108
1107
1109
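# Editor's illustration (not part of the original diff): the two helpers above
# are inverses of each other.
#
#     >>> text = "ab\ncd"
#     >>> cursor_to_position(text, 1, 1)    # line 1, column 1 is the offset of 'd'
#     4
#     >>> position_to_cursor(text, 4)
#     (1, 1)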
1108 def _safe_isinstance(obj, module, class_name):
1110 def _safe_isinstance(obj, module, class_name):
1109 """Checks if obj is an instance of module.class_name if loaded
1111 """Checks if obj is an instance of module.class_name if loaded
1110 """
1112 """
1111 return (module in sys.modules and
1113 return (module in sys.modules and
1112 isinstance(obj, getattr(import_module(module), class_name)))
1114 isinstance(obj, getattr(import_module(module), class_name)))
1113
1115
1114
1116
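# Editor's illustration (a sketch, not part of the original diff): the check
# never imports a module that is not already loaded, which keeps completion cheap.
#
#     >>> _safe_isinstance(1, 'decimal', 'Decimal')   # False if decimal was never imported
#     False
#     >>> import decimal
#     >>> _safe_isinstance(decimal.Decimal(1), 'decimal', 'Decimal')
#     True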
1115 @context_matcher()
1117 @context_matcher()
1116 def back_unicode_name_matcher(context):
1118 def back_unicode_name_matcher(context):
1117 fragment, matches = back_unicode_name_matches(context.token)
1119 fragment, matches = back_unicode_name_matches(context.token)
1118 return _convert_matcher_v1_result_to_v2(
1120 return _convert_matcher_v1_result_to_v2(
1119 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1121 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1120 )
1122 )
1121
1123
1122
1124
1123 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1125 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1124 """Match Unicode characters back to Unicode name
1126 """Match Unicode characters back to Unicode name
1125
1127
1126 This does ``β˜ƒ`` -> ``\\snowman``
1128 This does ``β˜ƒ`` -> ``\\snowman``
1127
1129
1128 Note that snowman is not a valid Python 3 combining character but will still be expanded;
1130 Note that snowman is not a valid Python 3 combining character but will still be expanded;
1129 the completion machinery will not, however, recombine it back into the snowman character.
1131 the completion machinery will not, however, recombine it back into the snowman character.
1130
1132
1131 Nor will this back-complete standard escape sequences like \\n, \\b ...
1133 Nor will this back-complete standard escape sequences like \\n, \\b ...
1132
1134
1133 Returns
1135 Returns
1134 -------
1136 -------
1135
1137
1136 Return a tuple with two elements:
1138 Return a tuple with two elements:
1137
1139
1138 - The Unicode character that was matched (preceded with a backslash), or
1140 - The Unicode character that was matched (preceded with a backslash), or
1139 empty string,
1141 empty string,
1140 - a sequence of one name for the matched Unicode character, preceded by a
1142 - a sequence of one name for the matched Unicode character, preceded by a
1141 backslash, or empty if no match.
1143 backslash, or empty if no match.
1142
1144
1143 """
1145 """
1144 if len(text)<2:
1146 if len(text)<2:
1145 return '', ()
1147 return '', ()
1146 maybe_slash = text[-2]
1148 maybe_slash = text[-2]
1147 if maybe_slash != '\\':
1149 if maybe_slash != '\\':
1148 return '', ()
1150 return '', ()
1149
1151
1150 char = text[-1]
1152 char = text[-1]
1151 # no expand on quote for completion in strings.
1153 # no expand on quote for completion in strings.
1152 # nor backcomplete standard ascii keys
1154 # nor backcomplete standard ascii keys
1153 if char in string.ascii_letters or char in ('"',"'"):
1155 if char in string.ascii_letters or char in ('"',"'"):
1154 return '', ()
1156 return '', ()
1155 try :
1157 try :
1156 unic = unicodedata.name(char)
1158 unic = unicodedata.name(char)
1157 return '\\'+char,('\\'+unic,)
1159 return '\\'+char,('\\'+unic,)
1158 except (KeyError, ValueError): # unicodedata.name raises ValueError for unnamed characters
1160 except (KeyError, ValueError): # unicodedata.name raises ValueError for unnamed characters
1159 pass
1161 pass
1160 return '', ()
1162 return '', ()
1161
1163
1162
1164
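# Editor's illustration (not part of the original diff):
#
#     >>> back_unicode_name_matches('\\β˜ƒ')
#     ('\\β˜ƒ', ('\\SNOWMAN',))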
1163 @context_matcher()
1165 @context_matcher()
1164 def back_latex_name_matcher(context):
1166 def back_latex_name_matcher(context):
1165 fragment, matches = back_latex_name_matches(context.token)
1167 fragment, matches = back_latex_name_matches(context.token)
1166 return _convert_matcher_v1_result_to_v2(
1168 return _convert_matcher_v1_result_to_v2(
1167 matches, type="latex", fragment=fragment, suppress_if_matches=True
1169 matches, type="latex", fragment=fragment, suppress_if_matches=True
1168 )
1170 )
1169
1171
1170
1172
1171 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1173 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1172 """Match latex characters back to unicode name
1174 """Match latex characters back to unicode name
1173
1175
1174 This does ``\\β„΅`` -> ``\\aleph``
1176 This does ``\\β„΅`` -> ``\\aleph``
1175
1177
1176 """
1178 """
1177 if len(text)<2:
1179 if len(text)<2:
1178 return '', ()
1180 return '', ()
1179 maybe_slash = text[-2]
1181 maybe_slash = text[-2]
1180 if maybe_slash != '\\':
1182 if maybe_slash != '\\':
1181 return '', ()
1183 return '', ()
1182
1184
1183
1185
1184 char = text[-1]
1186 char = text[-1]
1185 # no expand on quote for completion in strings.
1187 # no expand on quote for completion in strings.
1186 # nor backcomplete standard ascii keys
1188 # nor backcomplete standard ascii keys
1187 if char in string.ascii_letters or char in ('"',"'"):
1189 if char in string.ascii_letters or char in ('"',"'"):
1188 return '', ()
1190 return '', ()
1189 try :
1191 try :
1190 latex = reverse_latex_symbol[char]
1192 latex = reverse_latex_symbol[char]
1191 # '\\' replaces the \ as well
1193 # '\\' replaces the \ as well
1192 return '\\'+char,[latex]
1194 return '\\'+char,[latex]
1193 except KeyError:
1195 except KeyError:
1194 pass
1196 pass
1195 return '', ()
1197 return '', ()
1196
1198
1197
1199
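# Editor's illustration (not part of the original diff), assuming the
# reverse_latex_symbol table maps β„΅ to \aleph as the docstring above describes:
#
#     >>> back_latex_name_matches('\\β„΅')
#     ('\\β„΅', ['\\aleph'])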
1198 def _formatparamchildren(parameter) -> str:
1200 def _formatparamchildren(parameter) -> str:
1199 """
1201 """
1200 Get parameter name and value from Jedi Private API
1202 Get parameter name and value from Jedi Private API
1201
1203
1202 Jedi does not expose a simple way to get `param=value` from its API.
1204 Jedi does not expose a simple way to get `param=value` from its API.
1203
1205
1204 Parameters
1206 Parameters
1205 ----------
1207 ----------
1206 parameter
1208 parameter
1207 Jedi's function `Param`
1209 Jedi's function `Param`
1208
1210
1209 Returns
1211 Returns
1210 -------
1212 -------
1211 A string like 'a', 'b=1', '*args', '**kwargs'
1213 A string like 'a', 'b=1', '*args', '**kwargs'
1212
1214
1213 """
1215 """
1214 description = parameter.description
1216 description = parameter.description
1215 if not description.startswith('param '):
1217 if not description.startswith('param '):
1216 raise ValueError('Jedi function parameter description has changed format. '
1218 raise ValueError('Jedi function parameter description has changed format. '
1217 'Expected "param ...", found %r.' % description)
1219 'Expected "param ...", found %r.' % description)
1218 return description[6:]
1220 return description[6:]
1219
1221
1220 def _make_signature(completion)-> str:
1222 def _make_signature(completion)-> str:
1221 """
1223 """
1222 Make the signature from a jedi completion
1224 Make the signature from a jedi completion
1223
1225
1224 Parameters
1226 Parameters
1225 ----------
1227 ----------
1226 completion : jedi.Completion
1228 completion : jedi.Completion
1227 a Jedi completion object (it need not complete a function type)
1229 a Jedi completion object (it need not complete a function type)
1228
1230
1229 Returns
1231 Returns
1230 -------
1232 -------
1231 a string consisting of the function signature, with the parenthesis but
1233 a string consisting of the function signature, with the parenthesis but
1232 without the function name. example:
1234 without the function name. example:
1233 `(a, *args, b=1, **kwargs)`
1235 `(a, *args, b=1, **kwargs)`
1234
1236
1235 """
1237 """
1236
1238
1237 # it looks like this might work on jedi 0.17
1239 # it looks like this might work on jedi 0.17
1238 if hasattr(completion, 'get_signatures'):
1240 if hasattr(completion, 'get_signatures'):
1239 signatures = completion.get_signatures()
1241 signatures = completion.get_signatures()
1240 if not signatures:
1242 if not signatures:
1241 return '(?)'
1243 return '(?)'
1242
1244
1243 c0 = completion.get_signatures()[0]
1245 c0 = completion.get_signatures()[0]
1244 return '('+c0.to_string().split('(', maxsplit=1)[1]
1246 return '('+c0.to_string().split('(', maxsplit=1)[1]
1245
1247
1246 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1248 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1247 for p in signature.defined_names()) if f])
1249 for p in signature.defined_names()) if f])
1248
1250
1249
1251
1250 _CompleteResult = Dict[str, MatcherResult]
1252 _CompleteResult = Dict[str, MatcherResult]
1251
1253
1252
1254
1253 def _convert_matcher_v1_result_to_v2(
1255 def _convert_matcher_v1_result_to_v2(
1254 matches: Sequence[str],
1256 matches: Sequence[str],
1255 type: str,
1257 type: str,
1256 fragment: str = None,
1258 fragment: str = None,
1257 suppress_if_matches: bool = False,
1259 suppress_if_matches: bool = False,
1258 ) -> SimpleMatcherResult:
1260 ) -> SimpleMatcherResult:
1259 """Utility to help with transition"""
1261 """Utility to help with transition"""
1260 result = {
1262 result = {
1261 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1263 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1262 "suppress_others": (True if matches else False)
1264 "suppress_others": (True if matches else False)
1263 if suppress_if_matches
1265 if suppress_if_matches
1264 else False,
1266 else False,
1265 }
1267 }
1266 if fragment is not None:
1268 if fragment is not None:
1267 result["matched_fragment"] = fragment
1269 result["matched_fragment"] = fragment
1268 return result
1270 return result
1269
1271
1270
1272
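# Editor's illustration (a sketch, not part of the original diff) of the v1 -> v2
# adapter above; the match values are made up.
#
#     >>> res = _convert_matcher_v1_result_to_v2(
#     ...     ['%time', '%timeit'], type='magic', fragment='%ti',
#     ...     suppress_if_matches=True)
#     >>> [c.text for c in res['completions']], res['suppress_others'], res['matched_fragment']
#     (['%time', '%timeit'], True, '%ti')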
1271 class IPCompleter(Completer):
1273 class IPCompleter(Completer):
1272 """Extension of the completer class with IPython-specific features"""
1274 """Extension of the completer class with IPython-specific features"""
1273
1275
1274 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1276 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1275
1277
1276 @observe('greedy')
1278 @observe('greedy')
1277 def _greedy_changed(self, change):
1279 def _greedy_changed(self, change):
1278 """update the splitter and readline delims when greedy is changed"""
1280 """update the splitter and readline delims when greedy is changed"""
1279 if change['new']:
1281 if change['new']:
1280 self.splitter.delims = GREEDY_DELIMS
1282 self.splitter.delims = GREEDY_DELIMS
1281 else:
1283 else:
1282 self.splitter.delims = DELIMS
1284 self.splitter.delims = DELIMS
1283
1285
1284 dict_keys_only = Bool(
1286 dict_keys_only = Bool(
1285 False,
1287 False,
1286 help="""
1288 help="""
1287 Whether to show dict key matches only.
1289 Whether to show dict key matches only.
1288
1290
1289 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1291 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1290 """,
1292 """,
1291 )
1293 )
1292
1294
1293 suppress_competing_matchers = UnionTrait(
1295 suppress_competing_matchers = UnionTrait(
1294 [Bool(), DictTrait(Bool(None, allow_none=True))],
1296 [Bool(), DictTrait(Bool(None, allow_none=True))],
1295 help="""
1297 help="""
1296 Whether to suppress completions from other `Matchers`_.
1298 Whether to suppress completions from other `Matchers`_.
1297
1299
1298 When set to ``None`` (default) the matchers will attempt to auto-detect
1300 When set to ``None`` (default) the matchers will attempt to auto-detect
1299 whether suppression of other matchers is desirable. For example, at
1301 whether suppression of other matchers is desirable. For example, at
1300 the beginning of a line followed by `%` we expect a magic completion
1302 the beginning of a line followed by `%` we expect a magic completion
1301 to be the only applicable option, and after ``my_dict['`` we usually
1303 to be the only applicable option, and after ``my_dict['`` we usually
1302 expect a completion with an existing dictionary key.
1304 expect a completion with an existing dictionary key.
1303
1305
1304 If you want to disable this heuristic and see completions from all matchers,
1306 If you want to disable this heuristic and see completions from all matchers,
1305 set ``IPCompleter.suppress_competing_matchers = False``.
1307 set ``IPCompleter.suppress_competing_matchers = False``.
1306 To disable the heuristic for specific matchers provide a dictionary mapping:
1308 To disable the heuristic for specific matchers provide a dictionary mapping:
1307 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1309 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1308
1310
1309 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1311 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1310 completions to the set of matchers with the highest priority;
1312 completions to the set of matchers with the highest priority;
1311 this is equivalent to setting ``IPCompleter.merge_completions = False`` and
1313 this is equivalent to setting ``IPCompleter.merge_completions = False`` and
1312 can be beneficial for performance, but will sometimes omit relevant
1314 can be beneficial for performance, but will sometimes omit relevant
1313 candidates from matchers further down the priority list.
1315 candidates from matchers further down the priority list.
1314 """,
1316 """,
1315 ).tag(config=True)
1317 ).tag(config=True)
1316
1318
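# Editor's illustration (hypothetical ipython_config.py snippet, not part of the
# original diff) showing the three forms accepted by the trait above:
#
#     c.IPCompleter.suppress_competing_matchers = False   # heuristic off, merge everything
#     c.IPCompleter.suppress_competing_matchers = True    # keep only the highest-priority matchers
#     c.IPCompleter.suppress_competing_matchers = {
#         "IPCompleter.dict_key_matcher": False,          # opt a single matcher out of the heuristic
#     }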
1317 merge_completions = Bool(
1319 merge_completions = Bool(
1318 True,
1320 True,
1319 help="""Whether to merge completion results into a single list
1321 help="""Whether to merge completion results into a single list
1320
1322
1321 If False, only the completion results from the first non-empty
1323 If False, only the completion results from the first non-empty
1322 completer will be returned.
1324 completer will be returned.
1323
1325
1324 As of version 8.5.0, setting the value to ``False`` is an alias for:
1326 As of version 8.5.0, setting the value to ``False`` is an alias for:
1325 ``IPCompleter.suppress_competing_matchers = True``.
1327 ``IPCompleter.suppress_competing_matchers = True``.
1326 """,
1328 """,
1327 ).tag(config=True)
1329 ).tag(config=True)
1328
1330
1329 disable_matchers = ListTrait(
1331 disable_matchers = ListTrait(
1330 Unicode(), help="""List of matchers to disable."""
1332 Unicode(), help="""List of matchers to disable."""
1331 ).tag(config=True)
1333 ).tag(config=True)
1332
1334
1333 omit__names = Enum(
1335 omit__names = Enum(
1334 (0, 1, 2),
1336 (0, 1, 2),
1335 default_value=2,
1337 default_value=2,
1336 help="""Instruct the completer to omit private method names
1338 help="""Instruct the completer to omit private method names
1337
1339
1338 Specifically, when completing on ``object.<tab>``.
1340 Specifically, when completing on ``object.<tab>``.
1339
1341
1340 When 2 [default]: all names that start with '_' will be excluded.
1342 When 2 [default]: all names that start with '_' will be excluded.
1341
1343
1342 When 1: all 'magic' names (``__foo__``) will be excluded.
1344 When 1: all 'magic' names (``__foo__``) will be excluded.
1343
1345
1344 When 0: nothing will be excluded.
1346 When 0: nothing will be excluded.
1345 """
1347 """
1346 ).tag(config=True)
1348 ).tag(config=True)
1347 limit_to__all__ = Bool(False,
1349 limit_to__all__ = Bool(False,
1348 help="""
1350 help="""
1349 DEPRECATED as of version 5.0.
1351 DEPRECATED as of version 5.0.
1350
1352
1351 Instruct the completer to use __all__ for the completion
1353 Instruct the completer to use __all__ for the completion
1352
1354
1353 Specifically, when completing on ``object.<tab>``.
1355 Specifically, when completing on ``object.<tab>``.
1354
1356
1355 When True: only those names in obj.__all__ will be included.
1357 When True: only those names in obj.__all__ will be included.
1356
1358
1357 When False [default]: the __all__ attribute is ignored
1359 When False [default]: the __all__ attribute is ignored
1358 """,
1360 """,
1359 ).tag(config=True)
1361 ).tag(config=True)
1360
1362
1361 profile_completions = Bool(
1363 profile_completions = Bool(
1362 default_value=False,
1364 default_value=False,
1363 help="If True, emit profiling data for completion subsystem using cProfile."
1365 help="If True, emit profiling data for completion subsystem using cProfile."
1364 ).tag(config=True)
1366 ).tag(config=True)
1365
1367
1366 profiler_output_dir = Unicode(
1368 profiler_output_dir = Unicode(
1367 default_value=".completion_profiles",
1369 default_value=".completion_profiles",
1368 help="Template for path at which to output profile data for completions."
1370 help="Template for path at which to output profile data for completions."
1369 ).tag(config=True)
1371 ).tag(config=True)
1370
1372
1371 @observe('limit_to__all__')
1373 @observe('limit_to__all__')
1372 def _limit_to_all_changed(self, change):
1374 def _limit_to_all_changed(self, change):
1373 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1375 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1374 'value has been deprecated since IPython 5.0, will be made to have '
1376 'value has been deprecated since IPython 5.0, will be made to have '
1375 'no effect and then removed in a future version of IPython.',
1377 'no effect and then removed in a future version of IPython.',
1376 UserWarning)
1378 UserWarning)
1377
1379
1378 def __init__(
1380 def __init__(
1379 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1381 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1380 ):
1382 ):
1381 """IPCompleter() -> completer
1383 """IPCompleter() -> completer
1382
1384
1383 Return a completer object.
1385 Return a completer object.
1384
1386
1385 Parameters
1387 Parameters
1386 ----------
1388 ----------
1387 shell
1389 shell
1388 a pointer to the ipython shell itself. This is needed
1390 a pointer to the ipython shell itself. This is needed
1389 because this completer knows about magic functions, and those can
1391 because this completer knows about magic functions, and those can
1390 only be accessed via the ipython instance.
1392 only be accessed via the ipython instance.
1391 namespace : dict, optional
1393 namespace : dict, optional
1392 an optional dict where completions are performed.
1394 an optional dict where completions are performed.
1393 global_namespace : dict, optional
1395 global_namespace : dict, optional
1394 secondary optional dict for completions, to
1396 secondary optional dict for completions, to
1395 handle cases (such as IPython embedded inside functions) where
1397 handle cases (such as IPython embedded inside functions) where
1396 both Python scopes are visible.
1398 both Python scopes are visible.
1397 config : Config
1399 config : Config
1398 traitlet's config object
1400 traitlet's config object
1399 **kwargs
1401 **kwargs
1400 passed to super class unmodified.
1402 passed to super class unmodified.
1401 """
1403 """
1402
1404
1403 self.magic_escape = ESC_MAGIC
1405 self.magic_escape = ESC_MAGIC
1404 self.splitter = CompletionSplitter()
1406 self.splitter = CompletionSplitter()
1405
1407
1406 # _greedy_changed() depends on splitter and readline being defined:
1408 # _greedy_changed() depends on splitter and readline being defined:
1407 super().__init__(
1409 super().__init__(
1408 namespace=namespace,
1410 namespace=namespace,
1409 global_namespace=global_namespace,
1411 global_namespace=global_namespace,
1410 config=config,
1412 config=config,
1411 **kwargs,
1413 **kwargs,
1412 )
1414 )
1413
1415
1414 # List where completion matches will be stored
1416 # List where completion matches will be stored
1415 self.matches = []
1417 self.matches = []
1416 self.shell = shell
1418 self.shell = shell
1417 # Regexp to split filenames with spaces in them
1419 # Regexp to split filenames with spaces in them
1418 self.space_name_re = re.compile(r'([^\\] )')
1420 self.space_name_re = re.compile(r'([^\\] )')
1419 # Hold a local ref. to glob.glob for speed
1421 # Hold a local ref. to glob.glob for speed
1420 self.glob = glob.glob
1422 self.glob = glob.glob
1421
1423
1422 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1424 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1423 # buffers, to avoid completion problems.
1425 # buffers, to avoid completion problems.
1424 term = os.environ.get('TERM','xterm')
1426 term = os.environ.get('TERM','xterm')
1425 self.dumb_terminal = term in ['dumb','emacs']
1427 self.dumb_terminal = term in ['dumb','emacs']
1426
1428
1427 # Special handling of backslashes needed in win32 platforms
1429 # Special handling of backslashes needed in win32 platforms
1428 if sys.platform == "win32":
1430 if sys.platform == "win32":
1429 self.clean_glob = self._clean_glob_win32
1431 self.clean_glob = self._clean_glob_win32
1430 else:
1432 else:
1431 self.clean_glob = self._clean_glob
1433 self.clean_glob = self._clean_glob
1432
1434
1433 #regexp to parse docstring for function signature
1435 #regexp to parse docstring for function signature
1434 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1436 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1435 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1437 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1436 #use this if positional argument name is also needed
1438 #use this if positional argument name is also needed
1437 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1439 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1438
1440
1439 self.magic_arg_matchers = [
1441 self.magic_arg_matchers = [
1440 self.magic_config_matcher,
1442 self.magic_config_matcher,
1441 self.magic_color_matcher,
1443 self.magic_color_matcher,
1442 ]
1444 ]
1443
1445
1444 # This is set externally by InteractiveShell
1446 # This is set externally by InteractiveShell
1445 self.custom_completers = None
1447 self.custom_completers = None
1446
1448
1447 # This is a list of names of unicode characters that can be completed
1449 # This is a list of names of unicode characters that can be completed
1448 # into their corresponding unicode value. The list is large, so we
1450 # into their corresponding unicode value. The list is large, so we
1449 # lazily initialize it on first use. Consuming code should access this
1451 # lazily initialize it on first use. Consuming code should access this
1450 # attribute through the `@unicode_names` property.
1452 # attribute through the `@unicode_names` property.
1451 self._unicode_names = None
1453 self._unicode_names = None
1452
1454
1453 self._backslash_combining_matchers = [
1455 self._backslash_combining_matchers = [
1454 self.latex_name_matcher,
1456 self.latex_name_matcher,
1455 self.unicode_name_matcher,
1457 self.unicode_name_matcher,
1456 back_latex_name_matcher,
1458 back_latex_name_matcher,
1457 back_unicode_name_matcher,
1459 back_unicode_name_matcher,
1458 self.fwd_unicode_matcher,
1460 self.fwd_unicode_matcher,
1459 ]
1461 ]
1460
1462
1461 if not self.backslash_combining_completions:
1463 if not self.backslash_combining_completions:
1462 for matcher in self._backslash_combining_matchers:
1464 for matcher in self._backslash_combining_matchers:
1463 self.disable_matchers.append(matcher.matcher_identifier)
1465 self.disable_matchers.append(matcher.matcher_identifier)
1464
1466
1465 if not self.merge_completions:
1467 if not self.merge_completions:
1466 self.suppress_competing_matchers = True
1468 self.suppress_competing_matchers = True
1467
1469
1468 if self.dict_keys_only:
1470 if self.dict_keys_only:
1469 self.disable_matchers.append(self.dict_key_matcher.matcher_identifier)
1471 self.disable_matchers.append(self.dict_key_matcher.matcher_identifier)
1470
1472
1471 @property
1473 @property
1472 def matchers(self) -> List[Matcher]:
1474 def matchers(self) -> List[Matcher]:
1473 """All active matcher routines for completion"""
1475 """All active matcher routines for completion"""
1474 if self.dict_keys_only:
1476 if self.dict_keys_only:
1475 return [self.dict_key_matcher]
1477 return [self.dict_key_matcher]
1476
1478
1477 if self.use_jedi:
1479 if self.use_jedi:
1478 return [
1480 return [
1479 *self.custom_matchers,
1481 *self.custom_matchers,
1480 *self._backslash_combining_matchers,
1482 *self._backslash_combining_matchers,
1481 *self.magic_arg_matchers,
1483 *self.magic_arg_matchers,
1482 self.custom_completer_matcher,
1484 self.custom_completer_matcher,
1483 self.magic_matcher,
1485 self.magic_matcher,
1484 self._jedi_matcher,
1486 self._jedi_matcher,
1485 self.dict_key_matcher,
1487 self.dict_key_matcher,
1486 self.file_matcher,
1488 self.file_matcher,
1487 ]
1489 ]
1488 else:
1490 else:
1489 return [
1491 return [
1490 *self.custom_matchers,
1492 *self.custom_matchers,
1491 *self._backslash_combining_matchers,
1493 *self._backslash_combining_matchers,
1492 *self.magic_arg_matchers,
1494 *self.magic_arg_matchers,
1493 self.custom_completer_matcher,
1495 self.custom_completer_matcher,
1494 self.dict_key_matcher,
1496 self.dict_key_matcher,
1495 # TODO: convert python_matches to v2 API
1497 # TODO: convert python_matches to v2 API
1496 self.magic_matcher,
1498 self.magic_matcher,
1497 self.python_matches,
1499 self.python_matches,
1498 self.file_matcher,
1500 self.file_matcher,
1499 self.python_func_kw_matcher,
1501 self.python_func_kw_matcher,
1500 ]
1502 ]
1501
1503
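# Editor's illustration (a sketch, not part of the original diff): user code can
# prepend its own matcher through ``custom_matchers``, which the property above
# consults first. The matcher body below is hypothetical.
#
#     @context_matcher()
#     def emoji_matcher(context: CompletionContext) -> SimpleMatcherResult:
#         return _convert_matcher_v1_result_to_v2(['πŸ™‚'], type='unicode')
#
#     get_ipython().Completer.custom_matchers.append(emoji_matcher)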
1502 def all_completions(self, text:str) -> List[str]:
1504 def all_completions(self, text:str) -> List[str]:
1503 """
1505 """
1504 Wrapper around the completion methods for the benefit of emacs.
1506 Wrapper around the completion methods for the benefit of emacs.
1505 """
1507 """
1506 prefix = text.rpartition('.')[0]
1508 prefix = text.rpartition('.')[0]
1507 with provisionalcompleter():
1509 with provisionalcompleter():
1508 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1510 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1509 for c in self.completions(text, len(text))]
1511 for c in self.completions(text, len(text))]
1510
1512
1511 return self.complete(text)[1]
1513 return self.complete(text)[1]
1512
1514
1513 def _clean_glob(self, text:str):
1515 def _clean_glob(self, text:str):
1514 return self.glob("%s*" % text)
1516 return self.glob("%s*" % text)
1515
1517
1516 def _clean_glob_win32(self, text:str):
1518 def _clean_glob_win32(self, text:str):
1517 return [f.replace("\\","/")
1519 return [f.replace("\\","/")
1518 for f in self.glob("%s*" % text)]
1520 for f in self.glob("%s*" % text)]
1519
1521
1520 @context_matcher()
1522 @context_matcher()
1521 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1523 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1522 matches = self.file_matches(context.token)
1524 matches = self.file_matches(context.token)
1523 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1525 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1524 # starts with `/home/`, `C:\`, etc)
1526 # starts with `/home/`, `C:\`, etc)
1525 return _convert_matcher_v1_result_to_v2(matches, type="path")
1527 return _convert_matcher_v1_result_to_v2(matches, type="path")
1526
1528
1527 def file_matches(self, text: str) -> List[str]:
1529 def file_matches(self, text: str) -> List[str]:
1528 """Match filenames, expanding ~USER type strings.
1530 """Match filenames, expanding ~USER type strings.
1529
1531
1530 Most of the seemingly convoluted logic in this completer is an
1532 Most of the seemingly convoluted logic in this completer is an
1531 attempt to handle filenames with spaces in them. And yet it's not
1533 attempt to handle filenames with spaces in them. And yet it's not
1532 quite perfect, because Python's readline doesn't expose all of the
1534 quite perfect, because Python's readline doesn't expose all of the
1533 GNU readline details needed for this to be done correctly.
1535 GNU readline details needed for this to be done correctly.
1534
1536
1535 For a filename with a space in it, the printed completions will be
1537 For a filename with a space in it, the printed completions will be
1536 only the parts after what's already been typed (instead of the
1538 only the parts after what's already been typed (instead of the
1537 full completions, as is normally done). I don't think with the
1539 full completions, as is normally done). I don't think with the
1538 current (as of Python 2.3) Python readline it's possible to do
1540 current (as of Python 2.3) Python readline it's possible to do
1539 better."""
1541 better."""
1540
1542
1541 # chars that require escaping with backslash - i.e. chars
1543 # chars that require escaping with backslash - i.e. chars
1542 # that readline treats incorrectly as delimiters, but we
1544 # that readline treats incorrectly as delimiters, but we
1543 # don't want to treat as delimiters in filename matching
1545 # don't want to treat as delimiters in filename matching
1544 # when escaped with backslash
1546 # when escaped with backslash
1545 if text.startswith('!'):
1547 if text.startswith('!'):
1546 text = text[1:]
1548 text = text[1:]
1547 text_prefix = u'!'
1549 text_prefix = u'!'
1548 else:
1550 else:
1549 text_prefix = u''
1551 text_prefix = u''
1550
1552
1551 text_until_cursor = self.text_until_cursor
1553 text_until_cursor = self.text_until_cursor
1552 # track strings with open quotes
1554 # track strings with open quotes
1553 open_quotes = has_open_quotes(text_until_cursor)
1555 open_quotes = has_open_quotes(text_until_cursor)
1554
1556
1555 if '(' in text_until_cursor or '[' in text_until_cursor:
1557 if '(' in text_until_cursor or '[' in text_until_cursor:
1556 lsplit = text
1558 lsplit = text
1557 else:
1559 else:
1558 try:
1560 try:
1559 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1561 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1560 lsplit = arg_split(text_until_cursor)[-1]
1562 lsplit = arg_split(text_until_cursor)[-1]
1561 except ValueError:
1563 except ValueError:
1562 # typically an unmatched ", or backslash without escaped char.
1564 # typically an unmatched ", or backslash without escaped char.
1563 if open_quotes:
1565 if open_quotes:
1564 lsplit = text_until_cursor.split(open_quotes)[-1]
1566 lsplit = text_until_cursor.split(open_quotes)[-1]
1565 else:
1567 else:
1566 return []
1568 return []
1567 except IndexError:
1569 except IndexError:
1568 # tab pressed on empty line
1570 # tab pressed on empty line
1569 lsplit = ""
1571 lsplit = ""
1570
1572
1571 if not open_quotes and lsplit != protect_filename(lsplit):
1573 if not open_quotes and lsplit != protect_filename(lsplit):
1572 # if protectables are found, do matching on the whole escaped name
1574 # if protectables are found, do matching on the whole escaped name
1573 has_protectables = True
1575 has_protectables = True
1574 text0,text = text,lsplit
1576 text0,text = text,lsplit
1575 else:
1577 else:
1576 has_protectables = False
1578 has_protectables = False
1577 text = os.path.expanduser(text)
1579 text = os.path.expanduser(text)
1578
1580
1579 if text == "":
1581 if text == "":
1580 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1582 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1581
1583
1582 # Compute the matches from the filesystem
1584 # Compute the matches from the filesystem
1583 if sys.platform == 'win32':
1585 if sys.platform == 'win32':
1584 m0 = self.clean_glob(text)
1586 m0 = self.clean_glob(text)
1585 else:
1587 else:
1586 m0 = self.clean_glob(text.replace('\\', ''))
1588 m0 = self.clean_glob(text.replace('\\', ''))
1587
1589
1588 if has_protectables:
1590 if has_protectables:
1589 # If we had protectables, we need to revert our changes to the
1591 # If we had protectables, we need to revert our changes to the
1590 # beginning of filename so that we don't double-write the part
1592 # beginning of filename so that we don't double-write the part
1591 # of the filename we have so far
1593 # of the filename we have so far
1592 len_lsplit = len(lsplit)
1594 len_lsplit = len(lsplit)
1593 matches = [text_prefix + text0 +
1595 matches = [text_prefix + text0 +
1594 protect_filename(f[len_lsplit:]) for f in m0]
1596 protect_filename(f[len_lsplit:]) for f in m0]
1595 else:
1597 else:
1596 if open_quotes:
1598 if open_quotes:
1597 # if we have a string with an open quote, we don't need to
1599 # if we have a string with an open quote, we don't need to
1598 # protect the names beyond the quote (and we _shouldn't_, as
1600 # protect the names beyond the quote (and we _shouldn't_, as
1599 # it would cause bugs when the filesystem call is made).
1601 # it would cause bugs when the filesystem call is made).
1600 matches = m0 if sys.platform == "win32" else\
1602 matches = m0 if sys.platform == "win32" else\
1601 [protect_filename(f, open_quotes) for f in m0]
1603 [protect_filename(f, open_quotes) for f in m0]
1602 else:
1604 else:
1603 matches = [text_prefix +
1605 matches = [text_prefix +
1604 protect_filename(f) for f in m0]
1606 protect_filename(f) for f in m0]
1605
1607
1606 # Mark directories in input list by appending '/' to their names.
1608 # Mark directories in input list by appending '/' to their names.
1607 return [x+'/' if os.path.isdir(x) else x for x in matches]
1609 return [x+'/' if os.path.isdir(x) else x for x in matches]
1608
1610
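# Editor's illustration (behaviour sketch, not part of the original diff),
# assuming a working directory containing ``my file.txt`` and a sub-directory
# ``notes``:
#
#     In [1]: %ls my<TAB>     # completes to my\ file.txt  (the space is escaped)
#     In [2]: %ls not<TAB>    # completes to notes/        (directories gain a trailing '/')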
1609 @context_matcher()
1611 @context_matcher()
1610 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1612 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1611 text = context.token
1613 text = context.token
1612 matches = self.magic_matches(text)
1614 matches = self.magic_matches(text)
1613 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1615 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1614 is_magic_prefix = len(text) > 0 and text[0] == "%"
1616 is_magic_prefix = len(text) > 0 and text[0] == "%"
1615 result["suppress_others"] = is_magic_prefix and bool(result["completions"])
1617 result["suppress_others"] = is_magic_prefix and bool(result["completions"])
1616 return result
1618 return result
1617
1619
1618 def magic_matches(self, text: str):
1620 def magic_matches(self, text: str):
1619 """Match magics"""
1621 """Match magics"""
1620 # Get all shell magics now rather than statically, so magics loaded at
1622 # Get all shell magics now rather than statically, so magics loaded at
1621 # runtime show up too.
1623 # runtime show up too.
1622 lsm = self.shell.magics_manager.lsmagic()
1624 lsm = self.shell.magics_manager.lsmagic()
1623 line_magics = lsm['line']
1625 line_magics = lsm['line']
1624 cell_magics = lsm['cell']
1626 cell_magics = lsm['cell']
1625 pre = self.magic_escape
1627 pre = self.magic_escape
1626 pre2 = pre+pre
1628 pre2 = pre+pre
1627
1629
1628 explicit_magic = text.startswith(pre)
1630 explicit_magic = text.startswith(pre)
1629
1631
1630 # Completion logic:
1632 # Completion logic:
1631 # - user gives %%: only do cell magics
1633 # - user gives %%: only do cell magics
1632 # - user gives %: do both line and cell magics
1634 # - user gives %: do both line and cell magics
1633 # - no prefix: do both
1635 # - no prefix: do both
1634 # In other words, line magics are skipped if the user gives %% explicitly
1636 # In other words, line magics are skipped if the user gives %% explicitly
1635 #
1637 #
1636 # We also exclude magics that match any currently visible names:
1638 # We also exclude magics that match any currently visible names:
1637 # https://github.com/ipython/ipython/issues/4877, unless the user has
1639 # https://github.com/ipython/ipython/issues/4877, unless the user has
1638 # typed a %:
1640 # typed a %:
1639 # https://github.com/ipython/ipython/issues/10754
1641 # https://github.com/ipython/ipython/issues/10754
1640 bare_text = text.lstrip(pre)
1642 bare_text = text.lstrip(pre)
1641 global_matches = self.global_matches(bare_text)
1643 global_matches = self.global_matches(bare_text)
1642 if not explicit_magic:
1644 if not explicit_magic:
1643 def matches(magic):
1645 def matches(magic):
1644 """
1646 """
1645 Filter magics, in particular remove magics that match
1647 Filter magics, in particular remove magics that match
1646 a name present in global namespace.
1648 a name present in global namespace.
1647 """
1649 """
1648 return ( magic.startswith(bare_text) and
1650 return ( magic.startswith(bare_text) and
1649 magic not in global_matches )
1651 magic not in global_matches )
1650 else:
1652 else:
1651 def matches(magic):
1653 def matches(magic):
1652 return magic.startswith(bare_text)
1654 return magic.startswith(bare_text)
1653
1655
1654 comp = [ pre2+m for m in cell_magics if matches(m)]
1656 comp = [ pre2+m for m in cell_magics if matches(m)]
1655 if not text.startswith(pre2):
1657 if not text.startswith(pre2):
1656 comp += [ pre+m for m in line_magics if matches(m)]
1658 comp += [ pre+m for m in line_magics if matches(m)]
1657
1659
1658 return comp
1660 return comp
1659
1661
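# Editor's illustration (a sketch, not part of the original diff) of the prefix
# logic described in the comments above, with ``ip = get_ipython()``; the exact
# result depends on which magics are loaded.
#
#     ip.Completer.magic_matches('%%ti')  # cell magics only, e.g. ['%%time', '%%timeit']
#     ip.Completer.magic_matches('%ti')   # cell and line magics, e.g. ['%%time', '%%timeit', '%time', '%timeit']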
1660 @context_matcher()
1662 @context_matcher()
1661 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1663 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1662 # NOTE: uses `line_buffer` equivalent for compatibility
1664 # NOTE: uses `line_buffer` equivalent for compatibility
1663 matches = self.magic_config_matches(context.line_with_cursor)
1665 matches = self.magic_config_matches(context.line_with_cursor)
1664 return _convert_matcher_v1_result_to_v2(matches, type="param")
1666 return _convert_matcher_v1_result_to_v2(matches, type="param")
1665
1667
1666 def magic_config_matches(self, text: str) -> List[str]:
1668 def magic_config_matches(self, text: str) -> List[str]:
1667 """Match class names and attributes for %config magic"""
1669 """Match class names and attributes for %config magic"""
1668 texts = text.strip().split()
1670 texts = text.strip().split()
1669
1671
1670 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1672 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1671 # get all configuration classes
1673 # get all configuration classes
1672 classes = sorted(set([ c for c in self.shell.configurables
1674 classes = sorted(set([ c for c in self.shell.configurables
1673 if c.__class__.class_traits(config=True)
1675 if c.__class__.class_traits(config=True)
1674 ]), key=lambda x: x.__class__.__name__)
1676 ]), key=lambda x: x.__class__.__name__)
1675 classnames = [ c.__class__.__name__ for c in classes ]
1677 classnames = [ c.__class__.__name__ for c in classes ]
1676
1678
1677 # return all classnames if config or %config is given
1679 # return all classnames if config or %config is given
1678 if len(texts) == 1:
1680 if len(texts) == 1:
1679 return classnames
1681 return classnames
1680
1682
1681 # match classname
1683 # match classname
1682 classname_texts = texts[1].split('.')
1684 classname_texts = texts[1].split('.')
1683 classname = classname_texts[0]
1685 classname = classname_texts[0]
1684 classname_matches = [ c for c in classnames
1686 classname_matches = [ c for c in classnames
1685 if c.startswith(classname) ]
1687 if c.startswith(classname) ]
1686
1688
1687 # return matched classes or the matched class with attributes
1689 # return matched classes or the matched class with attributes
1688 if texts[1].find('.') < 0:
1690 if texts[1].find('.') < 0:
1689 return classname_matches
1691 return classname_matches
1690 elif len(classname_matches) == 1 and \
1692 elif len(classname_matches) == 1 and \
1691 classname_matches[0] == classname:
1693 classname_matches[0] == classname:
1692 cls = classes[classnames.index(classname)].__class__
1694 cls = classes[classnames.index(classname)].__class__
1693 help = cls.class_get_help()
1695 help = cls.class_get_help()
1694 # strip leading '--' from cl-args:
1696 # strip leading '--' from cl-args:
1695 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1697 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1696 return [ attr.split('=')[0]
1698 return [ attr.split('=')[0]
1697 for attr in help.strip().splitlines()
1699 for attr in help.strip().splitlines()
1698 if attr.startswith(texts[1]) ]
1700 if attr.startswith(texts[1]) ]
1699 return []
1701 return []
1700
1702
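# Editor's illustration (a sketch, not part of the original diff), with
# ``ip = get_ipython()``; the attribute names returned depend on the
# configurables actually registered with the shell.
#
#     ip.Completer.magic_config_matches('%config IPComp')          # -> ['IPCompleter']
#     ip.Completer.magic_config_matches('%config IPCompleter.gr')  # -> ['IPCompleter.greedy']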
1701 @context_matcher()
1703 @context_matcher()
1702 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1704 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1703 # NOTE: uses `line_buffer` equivalent for compatibility
1705 # NOTE: uses `line_buffer` equivalent for compatibility
1704 matches = self.magic_color_matches(context.line_with_cursor)
1706 matches = self.magic_color_matches(context.line_with_cursor)
1705 return _convert_matcher_v1_result_to_v2(matches, type="param")
1707 return _convert_matcher_v1_result_to_v2(matches, type="param")
1706
1708
1707 def magic_color_matches(self, text: str) -> List[str]:
1709 def magic_color_matches(self, text: str) -> List[str]:
1708 """Match color schemes for %colors magic"""
1710 """Match color schemes for %colors magic"""
1709 texts = text.split()
1711 texts = text.split()
1710 if text.endswith(' '):
1712 if text.endswith(' '):
1711 # .split() strips off the trailing whitespace. Add '' back
1713 # .split() strips off the trailing whitespace. Add '' back
1712 # so that: '%colors ' -> ['%colors', '']
1714 # so that: '%colors ' -> ['%colors', '']
1713 texts.append('')
1715 texts.append('')
1714
1716
1715 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1717 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1716 prefix = texts[1]
1718 prefix = texts[1]
1717 return [ color for color in InspectColors.keys()
1719 return [ color for color in InspectColors.keys()
1718 if color.startswith(prefix) ]
1720 if color.startswith(prefix) ]
1719 return []
1721 return []
1720
1722
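# Editor's illustration (a sketch, not part of the original diff), with
# ``ip = get_ipython()``; the scheme names come from InspectColors.
#
#     ip.Completer.magic_color_matches('%colors N')   # -> e.g. ['NoColor', 'Neutral']
#     ip.Completer.magic_color_matches('%colors ')    # -> every available scheme name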
1721 @context_matcher(identifier="IPCompleter.jedi_matcher")
1723 @context_matcher(identifier="IPCompleter.jedi_matcher")
1722 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1724 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1723 matches = self._jedi_matches(
1725 matches = self._jedi_matches(
1724 cursor_column=context.cursor_position,
1726 cursor_column=context.cursor_position,
1725 cursor_line=context.cursor_line,
1727 cursor_line=context.cursor_line,
1726 text=context.full_text,
1728 text=context.full_text,
1727 )
1729 )
1728 return {
1730 return {
1729 "completions": matches,
1731 "completions": matches,
1730 # static analysis should not suppress other matchers
1732 # static analysis should not suppress other matchers
1731 "suppress_others": False,
1733 "suppress_others": False,
1732 }
1734 }
1733
1735
1734 def _jedi_matches(
1736 def _jedi_matches(
1735 self, cursor_column: int, cursor_line: int, text: str
1737 self, cursor_column: int, cursor_line: int, text: str
1736 ) -> Iterable[_JediCompletionLike]:
1738 ) -> Iterable[_JediCompletionLike]:
1737 """
1739 """
1738 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
1740 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
1739 cursor position.
1741 cursor position.
1740
1742
1741 Parameters
1743 Parameters
1742 ----------
1744 ----------
1743 cursor_column : int
1745 cursor_column : int
1744 column position of the cursor in ``text``, 0-indexed.
1746 column position of the cursor in ``text``, 0-indexed.
1745 cursor_line : int
1747 cursor_line : int
1746 line position of the cursor in ``text``, 0-indexed
1748 line position of the cursor in ``text``, 0-indexed
1747 text : str
1749 text : str
1748 text to complete
1750 text to complete
1749
1751
1750 Notes
1752 Notes
1751 -----
1753 -----
1752 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1754 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1753 object containing a string with the Jedi debug information attached.
1755 object containing a string with the Jedi debug information attached.
1754 """
1756 """
1755 namespaces = [self.namespace]
1757 namespaces = [self.namespace]
1756 if self.global_namespace is not None:
1758 if self.global_namespace is not None:
1757 namespaces.append(self.global_namespace)
1759 namespaces.append(self.global_namespace)
1758
1760
1759 completion_filter = lambda x:x
1761 completion_filter = lambda x:x
1760 offset = cursor_to_position(text, cursor_line, cursor_column)
1762 offset = cursor_to_position(text, cursor_line, cursor_column)
1761 # filter output if we are completing for object members
1763 # filter output if we are completing for object members
1762 if offset:
1764 if offset:
1763 pre = text[offset-1]
1765 pre = text[offset-1]
1764 if pre == '.':
1766 if pre == '.':
1765 if self.omit__names == 2:
1767 if self.omit__names == 2:
1766 completion_filter = lambda c:not c.name.startswith('_')
1768 completion_filter = lambda c:not c.name.startswith('_')
1767 elif self.omit__names == 1:
1769 elif self.omit__names == 1:
1768 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1770 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1769 elif self.omit__names == 0:
1771 elif self.omit__names == 0:
1770 completion_filter = lambda x:x
1772 completion_filter = lambda x:x
1771 else:
1773 else:
1772 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1774 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1773
1775
1774 interpreter = jedi.Interpreter(text[:offset], namespaces)
1776 interpreter = jedi.Interpreter(text[:offset], namespaces)
1775 try_jedi = True
1777 try_jedi = True
1776
1778
1777 try:
1779 try:
1778 # find the first token in the current tree -- if it is a ' or " then we are in a string
1780 # find the first token in the current tree -- if it is a ' or " then we are in a string
1779 completing_string = False
1781 completing_string = False
1780 try:
1782 try:
1781 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1783 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1782 except StopIteration:
1784 except StopIteration:
1783 pass
1785 pass
1784 else:
1786 else:
1785 # note the value may be ', ", or it may also be ''' or """, or
1787 # note the value may be ', ", or it may also be ''' or """, or
1786 # in some cases, """what/you/typed..., but all of these are
1788 # in some cases, """what/you/typed..., but all of these are
1787 # strings.
1789 # strings.
1788 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1790 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1789
1791
1790 # if we are in a string jedi is likely not the right candidate for
1792 # if we are in a string jedi is likely not the right candidate for
1791 # now. Skip it.
1793 # now. Skip it.
1792 try_jedi = not completing_string
1794 try_jedi = not completing_string
1793 except Exception as e:
1795 except Exception as e:
1794 # many things can go wrong; we are using a private API, just don't crash.
1796 # many things can go wrong; we are using a private API, just don't crash.
1795 if self.debug:
1797 if self.debug:
1796 print("Error detecting if completing a non-finished string :", e, '|')
1798 print("Error detecting if completing a non-finished string :", e, '|')
1797
1799
1798 if not try_jedi:
1800 if not try_jedi:
1799 return []
1801 return []
1800 try:
1802 try:
1801 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1803 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1802 except Exception as e:
1804 except Exception as e:
1803 if self.debug:
1805 if self.debug:
1804 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1806 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1805 else:
1807 else:
1806 return []
1808 return []
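
For reference, the underlying Jedi call can be exercised directly; note the off-by-one between IPython's 0-indexed ``cursor_line`` and Jedi's 1-indexed ``line`` argument. A sketch, assuming ``jedi`` is installed:

.. code::

    import jedi

    namespace = {"numbers": [1, 2, 3]}
    source = "numbers.ap"
    # Jedi lines are 1-indexed, columns are 0-indexed
    interpreter = jedi.Interpreter(source, [namespace])
    completions = interpreter.complete(line=1, column=len(source))
    print([c.name for c in completions])  # expected to include 'append'
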
1807
1809
1808 def python_matches(self, text:str)->List[str]:
1810 def python_matches(self, text:str)->List[str]:
1809 """Match attributes or global python names"""
1811 """Match attributes or global python names"""
1810 if "." in text:
1812 if "." in text:
1811 try:
1813 try:
1812 matches = self.attr_matches(text)
1814 matches = self.attr_matches(text)
1813 if text.endswith('.') and self.omit__names:
1815 if text.endswith('.') and self.omit__names:
1814 if self.omit__names == 1:
1816 if self.omit__names == 1:
1815 # true if txt is _not_ a __ name, false otherwise:
1817 # true if txt is _not_ a __ name, false otherwise:
1816 no__name = (lambda txt:
1818 no__name = (lambda txt:
1817 re.match(r'.*\.__.*?__',txt) is None)
1819 re.match(r'.*\.__.*?__',txt) is None)
1818 else:
1820 else:
1819 # true if txt is _not_ a _ name, false otherwise:
1821 # true if txt is _not_ a _ name, false otherwise:
1820 no__name = (lambda txt:
1822 no__name = (lambda txt:
1821 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1823 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1822 matches = filter(no__name, matches)
1824 matches = filter(no__name, matches)
1823 except NameError:
1825 except NameError:
1824 # catches <undefined attributes>.<tab>
1826 # catches <undefined attributes>.<tab>
1825 matches = []
1827 matches = []
1826 else:
1828 else:
1827 matches = self.global_matches(text)
1829 matches = self.global_matches(text)
1828 return matches
1830 return matches
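
The two ``omit__names`` filters differ only in what they hide after the trailing dot; a standalone sketch of the same regular expressions applied to candidate attribute names:

.. code::

    import re

    candidates = ["obj.append", "obj._private", "obj.__len__"]

    # omit__names == 1: hide only double-underscore (dunder) names
    no_dunder = [c for c in candidates
                 if re.match(r'.*\.__.*?__', c) is None]

    # omit__names == 2: hide every name starting with a single underscore
    no_underscore = [c for c in candidates
                     if re.match(r'\._.*?', c[c.rindex('.'):]) is None]

    print(no_dunder)      # ['obj.append', 'obj._private']
    print(no_underscore)  # ['obj.append']
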
1829
1831
1830 def _default_arguments_from_docstring(self, doc):
1832 def _default_arguments_from_docstring(self, doc):
1831 """Parse the first line of docstring for call signature.
1833 """Parse the first line of docstring for call signature.
1832
1834
1833 Docstring should be of the form 'min(iterable[, key=func])\n'.
1835 Docstring should be of the form 'min(iterable[, key=func])\n'.
1834 It can also parse cython docstring of the form
1836 It can also parse cython docstring of the form
1835 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1837 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1836 """
1838 """
1837 if doc is None:
1839 if doc is None:
1838 return []
1840 return []
1839
1841
1840 # care only about the first line
1842 # care only about the first line
1841 line = doc.lstrip().splitlines()[0]
1843 line = doc.lstrip().splitlines()[0]
1842
1844
1843 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1845 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1844 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1846 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1845 sig = self.docstring_sig_re.search(line)
1847 sig = self.docstring_sig_re.search(line)
1846 if sig is None:
1848 if sig is None:
1847 return []
1849 return []
1848 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1850 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1849 sig = sig.groups()[0].split(',')
1851 sig = sig.groups()[0].split(',')
1850 ret = []
1852 ret = []
1851 for s in sig:
1853 for s in sig:
1852 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1854 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1853 ret += self.docstring_kwd_re.findall(s)
1855 ret += self.docstring_kwd_re.findall(s)
1854 return ret
1856 return ret
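
The two regular expressions live as class attributes; assuming they match the patterns shown in the inline comments above, the extraction for a Cython-style signature looks like this:

.. code::

    import re

    # assumed to mirror IPCompleter.docstring_sig_re / docstring_kwd_re
    docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
    docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')

    line = 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'
    sig = docstring_sig_re.search(line).groups()[0]
    names = []
    for part in sig.split(','):
        # only parameters that carry a default value are captured
        names += docstring_kwd_re.findall(part)
    print(names)  # ['ncall', 'resume', 'nsplit']
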
1855
1857
1856 def _default_arguments(self, obj):
1858 def _default_arguments(self, obj):
1857 """Return the list of default arguments of obj if it is callable,
1859 """Return the list of default arguments of obj if it is callable,
1858 or empty list otherwise."""
1860 or empty list otherwise."""
1859 call_obj = obj
1861 call_obj = obj
1860 ret = []
1862 ret = []
1861 if inspect.isbuiltin(obj):
1863 if inspect.isbuiltin(obj):
1862 pass
1864 pass
1863 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1865 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1864 if inspect.isclass(obj):
1866 if inspect.isclass(obj):
1865 #for cython embedsignature=True the constructor docstring
1867 #for cython embedsignature=True the constructor docstring
1866 #belongs to the object itself not __init__
1868 #belongs to the object itself not __init__
1867 ret += self._default_arguments_from_docstring(
1869 ret += self._default_arguments_from_docstring(
1868 getattr(obj, '__doc__', ''))
1870 getattr(obj, '__doc__', ''))
1869 # for classes, check for __init__,__new__
1871 # for classes, check for __init__,__new__
1870 call_obj = (getattr(obj, '__init__', None) or
1872 call_obj = (getattr(obj, '__init__', None) or
1871 getattr(obj, '__new__', None))
1873 getattr(obj, '__new__', None))
1872 # for all others, check if they are __call__able
1874 # for all others, check if they are __call__able
1873 elif hasattr(obj, '__call__'):
1875 elif hasattr(obj, '__call__'):
1874 call_obj = obj.__call__
1876 call_obj = obj.__call__
1875 ret += self._default_arguments_from_docstring(
1877 ret += self._default_arguments_from_docstring(
1876 getattr(call_obj, '__doc__', ''))
1878 getattr(call_obj, '__doc__', ''))
1877
1879
1878 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1880 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1879 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1881 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1880
1882
1881 try:
1883 try:
1882 sig = inspect.signature(obj)
1884 sig = inspect.signature(obj)
1883 ret.extend(k for k, v in sig.parameters.items() if
1885 ret.extend(k for k, v in sig.parameters.items() if
1884 v.kind in _keeps)
1886 v.kind in _keeps)
1885 except ValueError:
1887 except ValueError:
1886 pass
1888 pass
1887
1889
1888 return list(set(ret))
1890 return list(set(ret))
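
The ``inspect.signature`` branch keeps only parameters that can actually be passed by keyword; a quick illustration:

.. code::

    import inspect

    def f(a, b=1, *args, c=2, **kwargs):
        pass

    keeps = (inspect.Parameter.KEYWORD_ONLY,
             inspect.Parameter.POSITIONAL_OR_KEYWORD)
    names = [k for k, v in inspect.signature(f).parameters.items()
             if v.kind in keeps]
    print(names)  # ['a', 'b', 'c'] -- *args and **kwargs are dropped
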
1889
1891
1890 @context_matcher()
1892 @context_matcher()
1891 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1893 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1892 matches = self.python_func_kw_matches(context.token)
1894 matches = self.python_func_kw_matches(context.token)
1893 return _convert_matcher_v1_result_to_v2(matches, type="param")
1895 return _convert_matcher_v1_result_to_v2(matches, type="param")
1894
1896
1895 def python_func_kw_matches(self, text):
1897 def python_func_kw_matches(self, text):
1896 """Match named parameters (kwargs) of the last open function"""
1898 """Match named parameters (kwargs) of the last open function"""
1897
1899
1898 if "." in text: # a parameter cannot be dotted
1900 if "." in text: # a parameter cannot be dotted
1899 return []
1901 return []
1900 try: regexp = self.__funcParamsRegex
1902 try: regexp = self.__funcParamsRegex
1901 except AttributeError:
1903 except AttributeError:
1902 regexp = self.__funcParamsRegex = re.compile(r'''
1904 regexp = self.__funcParamsRegex = re.compile(r'''
1903 '.*?(?<!\\)' | # single quoted strings or
1905 '.*?(?<!\\)' | # single quoted strings or
1904 ".*?(?<!\\)" | # double quoted strings or
1906 ".*?(?<!\\)" | # double quoted strings or
1905 \w+ | # identifier
1907 \w+ | # identifier
1906 \S # other characters
1908 \S # other characters
1907 ''', re.VERBOSE | re.DOTALL)
1909 ''', re.VERBOSE | re.DOTALL)
1908 # 1. find the nearest identifier that comes before an unclosed
1910 # 1. find the nearest identifier that comes before an unclosed
1909 # parenthesis before the cursor
1911 # parenthesis before the cursor
1910 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1912 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1911 tokens = regexp.findall(self.text_until_cursor)
1913 tokens = regexp.findall(self.text_until_cursor)
1912 iterTokens = reversed(tokens); openPar = 0
1914 iterTokens = reversed(tokens); openPar = 0
1913
1915
1914 for token in iterTokens:
1916 for token in iterTokens:
1915 if token == ')':
1917 if token == ')':
1916 openPar -= 1
1918 openPar -= 1
1917 elif token == '(':
1919 elif token == '(':
1918 openPar += 1
1920 openPar += 1
1919 if openPar > 0:
1921 if openPar > 0:
1920 # found the last unclosed parenthesis
1922 # found the last unclosed parenthesis
1921 break
1923 break
1922 else:
1924 else:
1923 return []
1925 return []
1924 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1926 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1925 ids = []
1927 ids = []
1926 isId = re.compile(r'\w+$').match
1928 isId = re.compile(r'\w+$').match
1927
1929
1928 while True:
1930 while True:
1929 try:
1931 try:
1930 ids.append(next(iterTokens))
1932 ids.append(next(iterTokens))
1931 if not isId(ids[-1]):
1933 if not isId(ids[-1]):
1932 ids.pop(); break
1934 ids.pop(); break
1933 if not next(iterTokens) == '.':
1935 if not next(iterTokens) == '.':
1934 break
1936 break
1935 except StopIteration:
1937 except StopIteration:
1936 break
1938 break
1937
1939
1938 # Find all named arguments already assigned to, so as to avoid suggesting
1940 # Find all named arguments already assigned to, so as to avoid suggesting
1939 # them again
1941 # them again
1940 usedNamedArgs = set()
1942 usedNamedArgs = set()
1941 par_level = -1
1943 par_level = -1
1942 for token, next_token in zip(tokens, tokens[1:]):
1944 for token, next_token in zip(tokens, tokens[1:]):
1943 if token == '(':
1945 if token == '(':
1944 par_level += 1
1946 par_level += 1
1945 elif token == ')':
1947 elif token == ')':
1946 par_level -= 1
1948 par_level -= 1
1947
1949
1948 if par_level != 0:
1950 if par_level != 0:
1949 continue
1951 continue
1950
1952
1951 if next_token != '=':
1953 if next_token != '=':
1952 continue
1954 continue
1953
1955
1954 usedNamedArgs.add(token)
1956 usedNamedArgs.add(token)
1955
1957
1956 argMatches = []
1958 argMatches = []
1957 try:
1959 try:
1958 callableObj = '.'.join(ids[::-1])
1960 callableObj = '.'.join(ids[::-1])
1959 namedArgs = self._default_arguments(eval(callableObj,
1961 namedArgs = self._default_arguments(eval(callableObj,
1960 self.namespace))
1962 self.namespace))
1961
1963
1962 # Remove used named arguments from the list, no need to show twice
1964 # Remove used named arguments from the list, no need to show twice
1963 for namedArg in set(namedArgs) - usedNamedArgs:
1965 for namedArg in set(namedArgs) - usedNamedArgs:
1964 if namedArg.startswith(text):
1966 if namedArg.startswith(text):
1965 argMatches.append("%s=" %namedArg)
1967 argMatches.append("%s=" %namedArg)
1966 except:
1968 except:
1967 pass
1969 pass
1968
1970
1969 return argMatches
1971 return argMatches
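
The backward token scan is the core trick: walking from the cursor towards the start of the line, every ``)`` decrements a counter and every ``(`` increments it, so the first time the counter becomes positive we have found the still-open call. A standalone sketch (regex assumed equivalent to the one above):

.. code::

    import re

    regexp = re.compile(r'''
        '.*?(?<!\\)' |   # single quoted strings or
        ".*?(?<!\\)" |   # double quoted strings or
        \w+          |   # identifier
        \S               # other characters
        ''', re.VERBOSE | re.DOTALL)

    text_until_cursor = "foo (1+bar(x), pa"
    tokens = regexp.findall(text_until_cursor)
    open_par = 0
    for token in reversed(tokens):
        if token == ')':
            open_par -= 1
        elif token == '(':
            open_par += 1
            if open_par > 0:
                break        # unclosed call found: it belongs to 'foo'
    print(open_par)  # 1
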
1970
1972
1971 @staticmethod
1973 @staticmethod
1972 def _get_keys(obj: Any) -> List[Any]:
1974 def _get_keys(obj: Any) -> List[Any]:
1973 # Objects can define their own completions by defining an
1975 # Objects can define their own completions by defining an
1974 # _ipython_key_completions_() method.
1976 # _ipython_key_completions_() method.
1975 method = get_real_method(obj, '_ipython_key_completions_')
1977 method = get_real_method(obj, '_ipython_key_completions_')
1976 if method is not None:
1978 if method is not None:
1977 return method()
1979 return method()
1978
1980
1979 # Special case some common in-memory dict-like types
1981 # Special case some common in-memory dict-like types
1980 if isinstance(obj, dict) or\
1982 if isinstance(obj, dict) or\
1981 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1983 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1982 try:
1984 try:
1983 return list(obj.keys())
1985 return list(obj.keys())
1984 except Exception:
1986 except Exception:
1985 return []
1987 return []
1986 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1988 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1987 _safe_isinstance(obj, 'numpy', 'void'):
1989 _safe_isinstance(obj, 'numpy', 'void'):
1988 return obj.dtype.names or []
1990 return obj.dtype.names or []
1989 return []
1991 return []
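
Any object can therefore advertise its own key completions by implementing the hook; a minimal sketch with a hypothetical ``Config`` container opting into the protocol:

.. code::

    class Config:
        """Dict-like container exposing custom key completions."""

        def __init__(self, data):
            self._data = dict(data)

        def __getitem__(self, key):
            return self._data[key]

        def _ipython_key_completions_(self):
            # keys returned here are offered after `cfg[` at the prompt
            return list(self._data)

    cfg = Config({"host": "localhost", "port": 8080})
    print(cfg._ipython_key_completions_())  # ['host', 'port']
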
1990
1992
1991 @context_matcher()
1993 @context_matcher()
1992 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1994 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1993 matches = self.dict_key_matches(context.token)
1995 matches = self.dict_key_matches(context.token)
1994 return _convert_matcher_v1_result_to_v2(
1996 return _convert_matcher_v1_result_to_v2(
1995 matches, type="dict key", suppress_if_matches=True
1997 matches, type="dict key", suppress_if_matches=True
1996 )
1998 )
1997
1999
1998 def dict_key_matches(self, text: str) -> List[str]:
2000 def dict_key_matches(self, text: str) -> List[str]:
1999 """Match string keys in a dictionary, after e.g. ``foo[``.
2001 """Match string keys in a dictionary, after e.g. ``foo[``.
2000
2002
2001 DEPRECATED: Deprecated since 8.5. Use ``dict_key_matcher`` instead.
2003 DEPRECATED: Deprecated since 8.5. Use ``dict_key_matcher`` instead.
2002 """
2004 """
2003
2005
2004 if self.__dict_key_regexps is not None:
2006 if self.__dict_key_regexps is not None:
2005 regexps = self.__dict_key_regexps
2007 regexps = self.__dict_key_regexps
2006 else:
2008 else:
2007 dict_key_re_fmt = r'''(?x)
2009 dict_key_re_fmt = r'''(?x)
2008 ( # match dict-referring expression wrt greedy setting
2010 ( # match dict-referring expression wrt greedy setting
2009 %s
2011 %s
2010 )
2012 )
2011 \[ # open bracket
2013 \[ # open bracket
2012 \s* # and optional whitespace
2014 \s* # and optional whitespace
2013 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2015 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2014 ((?:[uUbB]? # string prefix (r not handled)
2016 ((?:[uUbB]? # string prefix (r not handled)
2015 (?:
2017 (?:
2016 '(?:[^']|(?<!\\)\\')*'
2018 '(?:[^']|(?<!\\)\\')*'
2017 |
2019 |
2018 "(?:[^"]|(?<!\\)\\")*"
2020 "(?:[^"]|(?<!\\)\\")*"
2019 )
2021 )
2020 \s*,\s*
2022 \s*,\s*
2021 )*)
2023 )*)
2022 ([uUbB]? # string prefix (r not handled)
2024 ([uUbB]? # string prefix (r not handled)
2023 (?: # unclosed string
2025 (?: # unclosed string
2024 '(?:[^']|(?<!\\)\\')*
2026 '(?:[^']|(?<!\\)\\')*
2025 |
2027 |
2026 "(?:[^"]|(?<!\\)\\")*
2028 "(?:[^"]|(?<!\\)\\")*
2027 )
2029 )
2028 )?
2030 )?
2029 $
2031 $
2030 '''
2032 '''
2031 regexps = self.__dict_key_regexps = {
2033 regexps = self.__dict_key_regexps = {
2032 False: re.compile(dict_key_re_fmt % r'''
2034 False: re.compile(dict_key_re_fmt % r'''
2033 # identifiers separated by .
2035 # identifiers separated by .
2034 (?!\d)\w+
2036 (?!\d)\w+
2035 (?:\.(?!\d)\w+)*
2037 (?:\.(?!\d)\w+)*
2036 '''),
2038 '''),
2037 True: re.compile(dict_key_re_fmt % '''
2039 True: re.compile(dict_key_re_fmt % '''
2038 .+
2040 .+
2039 ''')
2041 ''')
2040 }
2042 }
2041
2043
2042 match = regexps[self.greedy].search(self.text_until_cursor)
2044 match = regexps[self.greedy].search(self.text_until_cursor)
2043
2045
2044 if match is None:
2046 if match is None:
2045 return []
2047 return []
2046
2048
2047 expr, prefix0, prefix = match.groups()
2049 expr, prefix0, prefix = match.groups()
2048 try:
2050 try:
2049 obj = eval(expr, self.namespace)
2051 obj = eval(expr, self.namespace)
2050 except Exception:
2052 except Exception:
2051 try:
2053 try:
2052 obj = eval(expr, self.global_namespace)
2054 obj = eval(expr, self.global_namespace)
2053 except Exception:
2055 except Exception:
2054 return []
2056 return []
2055
2057
2056 keys = self._get_keys(obj)
2058 keys = self._get_keys(obj)
2057 if not keys:
2059 if not keys:
2058 return keys
2060 return keys
2059
2061
2060 extra_prefix = eval(prefix0) if prefix0 != '' else None
2062 extra_prefix = eval(prefix0) if prefix0 != '' else None
2061
2063
2062 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2064 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2063 if not matches:
2065 if not matches:
2064 return matches
2066 return matches
2065
2067
2066 # get the cursor position of
2068 # get the cursor position of
2067 # - the text being completed
2069 # - the text being completed
2068 # - the start of the key text
2070 # - the start of the key text
2069 # - the start of the completion
2071 # - the start of the completion
2070 text_start = len(self.text_until_cursor) - len(text)
2072 text_start = len(self.text_until_cursor) - len(text)
2071 if prefix:
2073 if prefix:
2072 key_start = match.start(3)
2074 key_start = match.start(3)
2073 completion_start = key_start + token_offset
2075 completion_start = key_start + token_offset
2074 else:
2076 else:
2075 key_start = completion_start = match.end()
2077 key_start = completion_start = match.end()
2076
2078
2077 # grab the leading prefix, to make sure all completions start with `text`
2079 # grab the leading prefix, to make sure all completions start with `text`
2078 if text_start > key_start:
2080 if text_start > key_start:
2079 leading = ''
2081 leading = ''
2080 else:
2082 else:
2081 leading = text[text_start:completion_start]
2083 leading = text[text_start:completion_start]
2082
2084
2083 # the index of the `[` character
2085 # the index of the `[` character
2084 bracket_idx = match.end(1)
2086 bracket_idx = match.end(1)
2085
2087
2086 # append closing quote and bracket as appropriate
2088 # append closing quote and bracket as appropriate
2087 # this is *not* appropriate if the opening quote or bracket is outside
2089 # this is *not* appropriate if the opening quote or bracket is outside
2088 # the text given to this method
2090 # the text given to this method
2089 suf = ''
2091 suf = ''
2090 continuation = self.line_buffer[len(self.text_until_cursor):]
2092 continuation = self.line_buffer[len(self.text_until_cursor):]
2091 if key_start > text_start and closing_quote:
2093 if key_start > text_start and closing_quote:
2092 # quotes were opened inside text, maybe close them
2094 # quotes were opened inside text, maybe close them
2093 if continuation.startswith(closing_quote):
2095 if continuation.startswith(closing_quote):
2094 continuation = continuation[len(closing_quote):]
2096 continuation = continuation[len(closing_quote):]
2095 else:
2097 else:
2096 suf += closing_quote
2098 suf += closing_quote
2097 if bracket_idx > text_start:
2099 if bracket_idx > text_start:
2098 # brackets were opened inside text, maybe close them
2100 # brackets were opened inside text, maybe close them
2099 if not continuation.startswith(']'):
2101 if not continuation.startswith(']'):
2100 suf += ']'
2102 suf += ']'
2101
2103
2102 return [leading + k + suf for k in matches]
2104 return [leading + k + suf for k in matches]
2103
2105
2104 @context_matcher()
2106 @context_matcher()
2105 def unicode_name_matcher(self, context):
2107 def unicode_name_matcher(self, context):
2106 fragment, matches = self.unicode_name_matches(context.token)
2108 fragment, matches = self.unicode_name_matches(context.token)
2107 return _convert_matcher_v1_result_to_v2(
2109 return _convert_matcher_v1_result_to_v2(
2108 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2110 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2109 )
2111 )
2110
2112
2111 @staticmethod
2113 @staticmethod
2112 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2114 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2113 """Match Latex-like syntax for unicode characters base
2115 """Match Latex-like syntax for unicode characters base
2114 on the name of the character.
2116 on the name of the character.
2115
2117
2116 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2118 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2117
2119
2118 Works only on valid Python 3 identifiers, or on combining characters that
2120 Works only on valid Python 3 identifiers, or on combining characters that
2119 will combine to form a valid identifier.
2121 will combine to form a valid identifier.
2120 """
2122 """
2121 slashpos = text.rfind('\\')
2123 slashpos = text.rfind('\\')
2122 if slashpos > -1:
2124 if slashpos > -1:
2123 s = text[slashpos+1:]
2125 s = text[slashpos+1:]
2124 try:
2126 try:
2125 unic = unicodedata.lookup(s)
2127 unic = unicodedata.lookup(s)
2126 # allow combining chars
2128 # allow combining chars
2127 if ('a'+unic).isidentifier():
2129 if ('a'+unic).isidentifier():
2128 return '\\'+s,[unic]
2130 return '\\'+s,[unic]
2129 except KeyError:
2131 except KeyError:
2130 pass
2132 pass
2131 return '', []
2133 return '', []
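
The same lookup can be checked directly with the standard library:

.. code::

    import unicodedata

    name = "GREEK SMALL LETTER ETA"
    char = unicodedata.lookup(name)
    print(char)                          # η
    print(("a" + char).isidentifier())   # True, so it is offered as a completion
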
2132
2134
2133 @context_matcher()
2135 @context_matcher()
2134 def latex_name_matcher(self, context):
2136 def latex_name_matcher(self, context):
2135 fragment, matches = self.latex_matches(context.token)
2137 fragment, matches = self.latex_matches(context.token)
2136 return _convert_matcher_v1_result_to_v2(
2138 return _convert_matcher_v1_result_to_v2(
2137 matches, type="latex", fragment=fragment, suppress_if_matches=True
2139 matches, type="latex", fragment=fragment, suppress_if_matches=True
2138 )
2140 )
2139
2141
2140 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2142 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2141 """Match Latex syntax for unicode characters.
2143 """Match Latex syntax for unicode characters.
2142
2144
2143 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2145 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2144 """
2146 """
2145 slashpos = text.rfind('\\')
2147 slashpos = text.rfind('\\')
2146 if slashpos > -1:
2148 if slashpos > -1:
2147 s = text[slashpos:]
2149 s = text[slashpos:]
2148 if s in latex_symbols:
2150 if s in latex_symbols:
2149 # Try to complete a full latex symbol to unicode
2151 # Try to complete a full latex symbol to unicode
2150 # \\alpha -> Ξ±
2152 # \\alpha -> Ξ±
2151 return s, [latex_symbols[s]]
2153 return s, [latex_symbols[s]]
2152 else:
2154 else:
2153 # If a user has partially typed a latex symbol, give them
2155 # If a user has partially typed a latex symbol, give them
2154 # a full list of options \al -> [\aleph, \alpha]
2156 # a full list of options \al -> [\aleph, \alpha]
2155 matches = [k for k in latex_symbols if k.startswith(s)]
2157 matches = [k for k in latex_symbols if k.startswith(s)]
2156 if matches:
2158 if matches:
2157 return s, matches
2159 return s, matches
2158 return '', ()
2160 return '', ()
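
The same two-way behaviour can be reproduced with a toy symbol table (``latex_symbols`` itself is a large generated mapping; the two entries below are assumed for illustration only):

.. code::

    latex_symbols = {"\\alpha": "α", "\\aleph": "ℵ"}  # tiny assumed subset

    def toy_latex_matches(text):
        slashpos = text.rfind("\\")
        if slashpos > -1:
            s = text[slashpos:]
            if s in latex_symbols:
                return s, [latex_symbols[s]]               # \alpha -> α
            matches = [k for k in latex_symbols if k.startswith(s)]
            if matches:
                return s, matches                          # \al -> [\alpha, \aleph]
        return "", ()

    print(toy_latex_matches("\\alpha"))  # ('\\alpha', ['α'])
    print(toy_latex_matches("\\al"))     # ('\\al', ['\\alpha', '\\aleph'])
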
2159
2161
2160 @context_matcher()
2162 @context_matcher()
2161 def custom_completer_matcher(self, context):
2163 def custom_completer_matcher(self, context):
2162 matches = self.dispatch_custom_completer(context.token) or []
2164 matches = self.dispatch_custom_completer(context.token) or []
2163 result = _convert_matcher_v1_result_to_v2(
2165 result = _convert_matcher_v1_result_to_v2(
2164 matches, type="<unknown>", suppress_if_matches=True
2166 matches, type="<unknown>", suppress_if_matches=True
2165 )
2167 )
2166 result["ordered"] = True
2168 result["ordered"] = True
2167 return result
2169 return result
2168
2170
2169 def dispatch_custom_completer(self, text):
2171 def dispatch_custom_completer(self, text):
2170 if not self.custom_completers:
2172 if not self.custom_completers:
2171 return
2173 return
2172
2174
2173 line = self.line_buffer
2175 line = self.line_buffer
2174 if not line.strip():
2176 if not line.strip():
2175 return None
2177 return None
2176
2178
2177 # Create a little structure to pass all the relevant information about
2179 # Create a little structure to pass all the relevant information about
2178 # the current completion to any custom completer.
2180 # the current completion to any custom completer.
2179 event = SimpleNamespace()
2181 event = SimpleNamespace()
2180 event.line = line
2182 event.line = line
2181 event.symbol = text
2183 event.symbol = text
2182 cmd = line.split(None,1)[0]
2184 cmd = line.split(None,1)[0]
2183 event.command = cmd
2185 event.command = cmd
2184 event.text_until_cursor = self.text_until_cursor
2186 event.text_until_cursor = self.text_until_cursor
2185
2187
2186 # for foo etc, try also to find completer for %foo
2188 # for foo etc, try also to find completer for %foo
2187 if not cmd.startswith(self.magic_escape):
2189 if not cmd.startswith(self.magic_escape):
2188 try_magic = self.custom_completers.s_matches(
2190 try_magic = self.custom_completers.s_matches(
2189 self.magic_escape + cmd)
2191 self.magic_escape + cmd)
2190 else:
2192 else:
2191 try_magic = []
2193 try_magic = []
2192
2194
2193 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2195 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2194 try_magic,
2196 try_magic,
2195 self.custom_completers.flat_matches(self.text_until_cursor)):
2197 self.custom_completers.flat_matches(self.text_until_cursor)):
2196 try:
2198 try:
2197 res = c(event)
2199 res = c(event)
2198 if res:
2200 if res:
2199 # first, try case sensitive match
2201 # first, try case sensitive match
2200 withcase = [r for r in res if r.startswith(text)]
2202 withcase = [r for r in res if r.startswith(text)]
2201 if withcase:
2203 if withcase:
2202 return withcase
2204 return withcase
2203 # if none, then case insensitive ones are ok too
2205 # if none, then case insensitive ones are ok too
2204 text_low = text.lower()
2206 text_low = text.lower()
2205 return [r for r in res if r.lower().startswith(text_low)]
2207 return [r for r in res if r.lower().startswith(text_low)]
2206 except TryNext:
2208 except TryNext:
2207 pass
2209 pass
2208 except KeyboardInterrupt:
2210 except KeyboardInterrupt:
2209 """
2211 """
2210 If a custom completer takes too long,
2212 If a custom completer takes too long,
2211 let the keyboard interrupt abort and return nothing.
2213 let the keyboard interrupt abort and return nothing.
2212 """
2214 """
2213 break
2215 break
2214
2216
2215 return None
2217 return None
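
Custom completers are typically registered through the shell's ``complete_command`` hook and receive the event object built above. A sketch for a hypothetical ``%deploy`` magic, to be run inside an IPython session:

.. code::

    def deploy_completer(self, event):
        """Offer sub-commands for a hypothetical %deploy magic."""
        # event.line, event.symbol, event.command and
        # event.text_until_cursor are all available here
        return ["staging", "production", "rollback"]

    ip = get_ipython()
    ip.set_hook("complete_command", deploy_completer, str_key="%deploy")
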
2216
2218
2217 def completions(self, text: str, offset: int)->Iterator[Completion]:
2219 def completions(self, text: str, offset: int)->Iterator[Completion]:
2218 """
2220 """
2219 Returns an iterator over the possible completions
2221 Returns an iterator over the possible completions
2220
2222
2221 .. warning::
2223 .. warning::
2222
2224
2223 Unstable
2225 Unstable
2224
2226
2226 This function is unstable; the API may change without warning.
2228 This function is unstable; the API may change without warning.
2227 It will also raise unless used in the proper context manager.
2229 It will also raise unless used in the proper context manager.
2227
2229
2228 Parameters
2230 Parameters
2229 ----------
2231 ----------
2230 text : str
2232 text : str
2231 Full text of the current input, multi line string.
2233 Full text of the current input, multi line string.
2232 offset : int
2234 offset : int
2233 Integer representing the position of the cursor in ``text``. Offset
2235 Integer representing the position of the cursor in ``text``. Offset
2234 is 0-based.
2236 is 0-based.
2235
2237
2236 Yields
2238 Yields
2237 ------
2239 ------
2238 Completion
2240 Completion
2239
2241
2240 Notes
2242 Notes
2241 -----
2243 -----
2242 The cursor in a text can either be seen as being "in between"
2244 The cursor in a text can either be seen as being "in between"
2243 characters or "on" a character, depending on the interface visible to
2245 characters or "on" a character, depending on the interface visible to
2244 the user. For consistency, the cursor being "in between" characters X
2246 the user. For consistency, the cursor being "in between" characters X
2245 and Y is equivalent to the cursor being "on" character Y, that is to say
2247 and Y is equivalent to the cursor being "on" character Y, that is to say
2246 the character the cursor is on is considered as being after the cursor.
2248 the character the cursor is on is considered as being after the cursor.
2247
2249
2248 Combining characters may span more than one position in the
2250 Combining characters may span more than one position in the
2249 text.
2251 text.
2250
2252
2251 .. note::
2253 .. note::
2252
2254
2253 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2255 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2254 fake Completion token to distinguish completion returned by Jedi
2256 fake Completion token to distinguish completion returned by Jedi
2255 and usual IPython completion.
2257 and usual IPython completion.
2256
2258
2257 .. note::
2259 .. note::
2258
2260
2259 Completions are not completely deduplicated yet. If identical
2261 Completions are not completely deduplicated yet. If identical
2260 completions are coming from different sources this function does not
2262 completions are coming from different sources this function does not
2261 ensure that each completion object will only be present once.
2263 ensure that each completion object will only be present once.
2262 """
2264 """
2263 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2265 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2264 "It may change without warnings. "
2266 "It may change without warnings. "
2265 "Use in corresponding context manager.",
2267 "Use in corresponding context manager.",
2266 category=ProvisionalCompleterWarning, stacklevel=2)
2268 category=ProvisionalCompleterWarning, stacklevel=2)
2267
2269
2268 seen = set()
2270 seen = set()
2269 profiler:Optional[cProfile.Profile]
2271 profiler:Optional[cProfile.Profile]
2270 try:
2272 try:
2271 if self.profile_completions:
2273 if self.profile_completions:
2272 import cProfile
2274 import cProfile
2273 profiler = cProfile.Profile()
2275 profiler = cProfile.Profile()
2274 profiler.enable()
2276 profiler.enable()
2275 else:
2277 else:
2276 profiler = None
2278 profiler = None
2277
2279
2278 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2280 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2279 if c and (c in seen):
2281 if c and (c in seen):
2280 continue
2282 continue
2281 yield c
2283 yield c
2282 seen.add(c)
2284 seen.add(c)
2283 except KeyboardInterrupt:
2285 except KeyboardInterrupt:
2284 """if completions take too long and users send keyboard interrupt,
2286 """if completions take too long and users send keyboard interrupt,
2285 do not crash and return ASAP. """
2287 do not crash and return ASAP. """
2286 pass
2288 pass
2287 finally:
2289 finally:
2288 if profiler is not None:
2290 if profiler is not None:
2289 profiler.disable()
2291 profiler.disable()
2290 ensure_dir_exists(self.profiler_output_dir)
2292 ensure_dir_exists(self.profiler_output_dir)
2291 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2293 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2292 print("Writing profiler output to", output_path)
2294 print("Writing profiler output to", output_path)
2293 profiler.dump_stats(output_path)
2295 profiler.dump_stats(output_path)
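
Because the API is provisional it must be used inside the corresponding context manager; a sketch of driving it from a running IPython session:

.. code::

    from IPython.core.completer import provisionalcompleter

    ip = get_ipython()
    code = "import os\nos.pa"
    with provisionalcompleter():
        completions = list(ip.Completer.completions(code, len(code)))
    print([c.text for c in completions][:5])  # likely includes 'path'
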
2294
2296
2295 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2297 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2296 """
2298 """
2297 Core completion module. Same signature as :any:`completions`, with the
2299 Core completion module. Same signature as :any:`completions`, with the
2298 extra `timeout` parameter (in seconds).
2300 extra `timeout` parameter (in seconds).
2299
2301
2300 Computing jedi's completion ``.type`` can be quite expensive (it is a
2302 Computing jedi's completion ``.type`` can be quite expensive (it is a
2301 lazy property) and can require some warm-up, more warm up than just
2303 lazy property) and can require some warm-up, more warm up than just
2302 computing the ``name`` of a completion. The warm-up can be :
2304 computing the ``name`` of a completion. The warm-up can be :
2303
2305
2304 - Long warm-up the first time a module is encountered after
2306 - Long warm-up the first time a module is encountered after
2305 install/update: actually build parse/inference tree.
2307 install/update: actually build parse/inference tree.
2306
2308
2307 - first time the module is encountered in a session: load tree from
2309 - first time the module is encountered in a session: load tree from
2308 disk.
2310 disk.
2309
2311
2310 We don't want to block completions for tens of seconds so we give the
2312 We don't want to block completions for tens of seconds so we give the
2311 completer a "budget" of ``_timeout`` seconds per invocation to compute
2313 completer a "budget" of ``_timeout`` seconds per invocation to compute
2312 completion types; the completions that have not yet been computed will
2314 completion types; the completions that have not yet been computed will
2313 be marked as "unknown" and will have a chance to be computed next round
2315 be marked as "unknown" and will have a chance to be computed next round
2314 as things get cached.
2316 as things get cached.
2315
2317
2316 Keep in mind that Jedi is not the only thing processing the completion, so
2318 Keep in mind that Jedi is not the only thing processing the completion, so
2317 keep the timeout short-ish: if we take more than 0.3 seconds we still
2319 keep the timeout short-ish: if we take more than 0.3 seconds we still
2318 have lots of processing to do.
2320 have lots of processing to do.
2319
2321
2320 """
2322 """
2321 deadline = time.monotonic() + _timeout
2323 deadline = time.monotonic() + _timeout
2322
2324
2323 before = full_text[:offset]
2325 before = full_text[:offset]
2324 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2326 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2325
2327
2326 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2328 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2327
2329
2328 results = self._complete(
2330 results = self._complete(
2329 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2331 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2330 )
2332 )
2331 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2333 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2332 identifier: result
2334 identifier: result
2333 for identifier, result in results.items()
2335 for identifier, result in results.items()
2334 if identifier != jedi_matcher_id
2336 if identifier != jedi_matcher_id
2335 }
2337 }
2336
2338
2337 jedi_matches = (
2339 jedi_matches = (
2338 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2340 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2339 if jedi_matcher_id in results
2341 if jedi_matcher_id in results
2340 else ()
2342 else ()
2341 )
2343 )
2342
2344
2343 iter_jm = iter(jedi_matches)
2345 iter_jm = iter(jedi_matches)
2344 if _timeout:
2346 if _timeout:
2345 for jm in iter_jm:
2347 for jm in iter_jm:
2346 try:
2348 try:
2347 type_ = jm.type
2349 type_ = jm.type
2348 except Exception:
2350 except Exception:
2349 if self.debug:
2351 if self.debug:
2350 print("Error in Jedi getting type of ", jm)
2352 print("Error in Jedi getting type of ", jm)
2351 type_ = None
2353 type_ = None
2352 delta = len(jm.name_with_symbols) - len(jm.complete)
2354 delta = len(jm.name_with_symbols) - len(jm.complete)
2353 if type_ == 'function':
2355 if type_ == 'function':
2354 signature = _make_signature(jm)
2356 signature = _make_signature(jm)
2355 else:
2357 else:
2356 signature = ''
2358 signature = ''
2357 yield Completion(start=offset - delta,
2359 yield Completion(start=offset - delta,
2358 end=offset,
2360 end=offset,
2359 text=jm.name_with_symbols,
2361 text=jm.name_with_symbols,
2360 type=type_,
2362 type=type_,
2361 signature=signature,
2363 signature=signature,
2362 _origin='jedi')
2364 _origin='jedi')
2363
2365
2364 if time.monotonic() > deadline:
2366 if time.monotonic() > deadline:
2365 break
2367 break
2366
2368
2367 for jm in iter_jm:
2369 for jm in iter_jm:
2368 delta = len(jm.name_with_symbols) - len(jm.complete)
2370 delta = len(jm.name_with_symbols) - len(jm.complete)
2369 yield Completion(
2371 yield Completion(
2370 start=offset - delta,
2372 start=offset - delta,
2371 end=offset,
2373 end=offset,
2372 text=jm.name_with_symbols,
2374 text=jm.name_with_symbols,
2373 type=_UNKNOWN_TYPE, # don't compute type for speed
2375 type=_UNKNOWN_TYPE, # don't compute type for speed
2374 _origin="jedi",
2376 _origin="jedi",
2375 signature="",
2377 signature="",
2376 )
2378 )
2377
2379
2378 # TODO:
2380 # TODO:
2379 # Suppress this, right now just for debug.
2381 # Suppress this, right now just for debug.
2380 if jedi_matches and non_jedi_results and self.debug:
2382 if jedi_matches and non_jedi_results and self.debug:
2381 some_start_offset = before.rfind(
2383 some_start_offset = before.rfind(
2382 next(iter(non_jedi_results.values()))["matched_fragment"]
2384 next(iter(non_jedi_results.values()))["matched_fragment"]
2383 )
2385 )
2384 yield Completion(
2386 yield Completion(
2385 start=some_start_offset,
2387 start=some_start_offset,
2386 end=offset,
2388 end=offset,
2387 text="--jedi/ipython--",
2389 text="--jedi/ipython--",
2388 _origin="debug",
2390 _origin="debug",
2389 type="none",
2391 type="none",
2390 signature="",
2392 signature="",
2391 )
2393 )
2392
2394
2393 ordered = []
2395 ordered = []
2394 sortable = []
2396 sortable = []
2395
2397
2396 for origin, result in non_jedi_results.items():
2398 for origin, result in non_jedi_results.items():
2397 matched_text = result["matched_fragment"]
2399 matched_text = result["matched_fragment"]
2398 start_offset = before.rfind(matched_text)
2400 start_offset = before.rfind(matched_text)
2399 is_ordered = result.get("ordered", False)
2401 is_ordered = result.get("ordered", False)
2400 container = ordered if is_ordered else sortable
2402 container = ordered if is_ordered else sortable
2401
2403
2402 # I'm unsure if this is always true, so let's assert and see if it
2404 # I'm unsure if this is always true, so let's assert and see if it
2403 # crashes
2405 # crashes
2404 assert before.endswith(matched_text)
2406 assert before.endswith(matched_text)
2405
2407
2406 for simple_completion in result["completions"]:
2408 for simple_completion in result["completions"]:
2407 completion = Completion(
2409 completion = Completion(
2408 start=start_offset,
2410 start=start_offset,
2409 end=offset,
2411 end=offset,
2410 text=simple_completion.text,
2412 text=simple_completion.text,
2411 _origin=origin,
2413 _origin=origin,
2412 signature="",
2414 signature="",
2413 type=simple_completion.type or _UNKNOWN_TYPE,
2415 type=simple_completion.type or _UNKNOWN_TYPE,
2414 )
2416 )
2415 container.append(completion)
2417 container.append(completion)
2416
2418
2417 yield from self._deduplicate(ordered + self._sort(sortable))
2419 yield from self._deduplicate(ordered + self._sort(sortable))
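
The time-budget idea described in the docstring can be isolated: compute expensive metadata only while the deadline has not passed, and fall back to a cheap placeholder afterwards. A sketch, independent of Jedi:

.. code::

    import time

    def annotate_with_budget(items, budget_s, expensive, fallback="<unknown>"):
        """Apply `expensive` to each item until the time budget runs out."""
        deadline = time.monotonic() + budget_s
        annotated = []
        for item in items:
            if time.monotonic() > deadline:
                annotated.append((item, fallback))       # left for a later round
            else:
                annotated.append((item, expensive(item)))
        return annotated

    # e.g. annotate_with_budget(range(10), 0.25, some_slow_inference)  # hypothetical callable
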
2418
2420
2419
2421
2420 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2422 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2421 """Find completions for the given text and line context.
2423 """Find completions for the given text and line context.
2422
2424
2423 Note that both the text and the line_buffer are optional, but at least
2425 Note that both the text and the line_buffer are optional, but at least
2424 one of them must be given.
2426 one of them must be given.
2425
2427
2426 Parameters
2428 Parameters
2427 ----------
2429 ----------
2428 text : string, optional
2430 text : string, optional
2429 Text to perform the completion on. If not given, the line buffer
2431 Text to perform the completion on. If not given, the line buffer
2430 is split using the instance's CompletionSplitter object.
2432 is split using the instance's CompletionSplitter object.
2431 line_buffer : string, optional
2433 line_buffer : string, optional
2432 If not given, the completer attempts to obtain the current line
2434 If not given, the completer attempts to obtain the current line
2433 buffer via readline. This keyword allows clients which are
2435 buffer via readline. This keyword allows clients which are
2434 requesting text completions in non-readline contexts to inform
2436 requesting text completions in non-readline contexts to inform
2435 the completer of the entire text.
2437 the completer of the entire text.
2436 cursor_pos : int, optional
2438 cursor_pos : int, optional
2437 Index of the cursor in the full line buffer. Should be provided by
2439 Index of the cursor in the full line buffer. Should be provided by
2438 remote frontends where the kernel has no access to frontend state.
2440 remote frontends where the kernel has no access to frontend state.
2439
2441
2440 Returns
2442 Returns
2441 -------
2443 -------
2442 Tuple of two items:
2444 Tuple of two items:
2443 text : str
2445 text : str
2444 Text that was actually used in the completion.
2446 Text that was actually used in the completion.
2445 matches : list
2447 matches : list
2446 A list of completion matches.
2448 A list of completion matches.
2447
2449
2448 Notes
2450 Notes
2449 -----
2451 -----
2450 This API is likely to be deprecated and replaced by
2452 This API is likely to be deprecated and replaced by
2451 :any:`IPCompleter.completions` in the future.
2453 :any:`IPCompleter.completions` in the future.
2452
2454
2453 """
2455 """
2454 warnings.warn('`Completer.complete` is pending deprecation since '
2456 warnings.warn('`Completer.complete` is pending deprecation since '
2455 'IPython 6.0 and will be replaced by `Completer.completions`.',
2457 'IPython 6.0 and will be replaced by `Completer.completions`.',
2456 PendingDeprecationWarning)
2458 PendingDeprecationWarning)
2457 # potential todo, FOLD the 3rd throw away argument of _complete
2459 # potential todo, FOLD the 3rd throw away argument of _complete
2458 # into the first 2 one.
2460 # into the first 2 one.
2459 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2461 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2460 # TODO: should we deprecate now, or does it stay?
2462 # TODO: should we deprecate now, or does it stay?
2461
2463
2462 results = self._complete(
2464 results = self._complete(
2463 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2465 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2464 )
2466 )
2465
2467
2466 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2468 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2467
2469
2468 return self._arrange_and_extract(
2470 return self._arrange_and_extract(
2469 results,
2471 results,
2470 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
2472 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
2471 skip_matchers={jedi_matcher_id},
2473 skip_matchers={jedi_matcher_id},
2472 # this API does not support different start/end positions (fragments of token).
2474 # this API does not support different start/end positions (fragments of token).
2473 abort_if_offset_changes=True,
2475 abort_if_offset_changes=True,
2474 )
2476 )
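
The legacy API returns the matched text plus a flat list of strings; a quick sketch from a live session (expect the ``PendingDeprecationWarning`` mentioned above):

.. code::

    ip = get_ipython()
    text, matches = ip.Completer.complete(line_buffer="import o")
    print(text)         # 'o' -- the token actually completed
    print(matches[:3])  # e.g. module names starting with 'o'
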
2475
2477
2476 def _arrange_and_extract(
2478 def _arrange_and_extract(
2477 self,
2479 self,
2478 results: Dict[str, MatcherResult],
2480 results: Dict[str, MatcherResult],
2479 skip_matchers: Set[str],
2481 skip_matchers: Set[str],
2480 abort_if_offset_changes: bool,
2482 abort_if_offset_changes: bool,
2481 ):
2483 ):
2482
2484
2483 sortable = []
2485 sortable = []
2484 ordered = []
2486 ordered = []
2485 most_recent_fragment = None
2487 most_recent_fragment = None
2486 for identifier, result in results.items():
2488 for identifier, result in results.items():
2487 if identifier in skip_matchers:
2489 if identifier in skip_matchers:
2488 continue
2490 continue
2489 if not most_recent_fragment:
2491 if not most_recent_fragment:
2490 most_recent_fragment = result["matched_fragment"]
2492 most_recent_fragment = result["matched_fragment"]
2491 if (
2493 if (
2492 abort_if_offset_changes
2494 abort_if_offset_changes
2493 and result["matched_fragment"] != most_recent_fragment
2495 and result["matched_fragment"] != most_recent_fragment
2494 ):
2496 ):
2495 break
2497 break
2496 if result.get("ordered", False):
2498 if result.get("ordered", False):
2497 ordered.extend(result["completions"])
2499 ordered.extend(result["completions"])
2498 else:
2500 else:
2499 sortable.extend(result["completions"])
2501 sortable.extend(result["completions"])
2500
2502
2501 if not most_recent_fragment:
2503 if not most_recent_fragment:
2502 most_recent_fragment = "" # to satisfy typechecker (and just in case)
2504 most_recent_fragment = "" # to satisfy typechecker (and just in case)
2503
2505
2504 return most_recent_fragment, [
2506 return most_recent_fragment, [
2505 m.text for m in self._deduplicate(ordered + self._sort(sortable))
2507 m.text for m in self._deduplicate(ordered + self._sort(sortable))
2506 ]
2508 ]
2507
2509
2508 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2510 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2509 full_text=None) -> _CompleteResult:
2511 full_text=None) -> _CompleteResult:
2510 """
2512 """
2511 Like complete but can also return raw jedi completions as well as the
2513 Like complete but can also return raw jedi completions as well as the
2512 origin of the completion text. This could (and should) be made much
2514 origin of the completion text. This could (and should) be made much
2513 cleaner but that will be simpler once we drop the old (and stateful)
2515 cleaner but that will be simpler once we drop the old (and stateful)
2514 :any:`complete` API.
2516 :any:`complete` API.
2515
2517
2516 With current provisional API, cursor_pos act both (depending on the
2518 With current provisional API, cursor_pos act both (depending on the
2517 caller) as the offset in the ``text`` or ``line_buffer``, or as the
2519 caller) as the offset in the ``text`` or ``line_buffer``, or as the
2518 ``column`` when passing multiline strings this could/should be renamed
2520 ``column`` when passing multiline strings this could/should be renamed
2519 but would add extra noise.
2521 but would add extra noise.
2520
2522
2521 Parameters
2523 Parameters
2522 ----------
2524 ----------
2523 cursor_line
2525 cursor_line
2524 Index of the line the cursor is on. 0 indexed.
2526 Index of the line the cursor is on. 0 indexed.
2525 cursor_pos
2527 cursor_pos
2526 Position of the cursor in the current line/line_buffer/text. 0
2528 Position of the cursor in the current line/line_buffer/text. 0
2527 indexed.
2529 indexed.
2528 line_buffer : optional, str
2530 line_buffer : optional, str
2529 The current line the cursor is in; this is mostly for legacy
2531 The current line the cursor is in; this is mostly for legacy
2530 reasons, as readline could only give us the single current line.
2532 reasons, as readline could only give us the single current line.
2531 Prefer `full_text`.
2533 Prefer `full_text`.
2532 text : str
2534 text : str
2533 The current "token" the cursor is in, mostly also for historical
2535 The current "token" the cursor is in, mostly also for historical
2534 reasons, as the completer would trigger only after the current line
2536 reasons, as the completer would trigger only after the current line
2535 was parsed.
2537 was parsed.
2536 full_text : str
2538 full_text : str
2537 Full text of the current cell.
2539 Full text of the current cell.
2538
2540
2539 Returns
2541 Returns
2540 -------
2542 -------
2541 An ordered dictionary where keys are identifiers of completion
2543 An ordered dictionary where keys are identifiers of completion
2542 matchers and values are ``MatcherResult``s.
2544 matchers and values are ``MatcherResult``s.
2543 """
2545 """
2544
2546
2545 # if the cursor position isn't given, the only sane assumption we can
2547 # if the cursor position isn't given, the only sane assumption we can
2546 # make is that it's at the end of the line (the common case)
2548 # make is that it's at the end of the line (the common case)
2547 if cursor_pos is None:
2549 if cursor_pos is None:
2548 cursor_pos = len(line_buffer) if text is None else len(text)
2550 cursor_pos = len(line_buffer) if text is None else len(text)
2549
2551
2550 if self.use_main_ns:
2552 if self.use_main_ns:
2551 self.namespace = __main__.__dict__
2553 self.namespace = __main__.__dict__
2552
2554
2553 # if text is either None or an empty string, rely on the line buffer
2555 # if text is either None or an empty string, rely on the line buffer
2554 if (not line_buffer) and full_text:
2556 if (not line_buffer) and full_text:
2555 line_buffer = full_text.split('\n')[cursor_line]
2557 line_buffer = full_text.split('\n')[cursor_line]
2556 if not text: # issue #11508: check line_buffer before calling split_line
2558 if not text: # issue #11508: check line_buffer before calling split_line
2557 text = (
2559 text = (
2558 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
2560 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
2559 )
2561 )
2560
2562
2561 # If no line buffer is given, assume the input text is all there was
2563 # If no line buffer is given, assume the input text is all there was
2562 if line_buffer is None:
2564 if line_buffer is None:
2563 line_buffer = text
2565 line_buffer = text
2564
2566
2565 # deprecated - do not use `line_buffer` in new code.
2567 # deprecated - do not use `line_buffer` in new code.
2566 self.line_buffer = line_buffer
2568 self.line_buffer = line_buffer
2567 self.text_until_cursor = self.line_buffer[:cursor_pos]
2569 self.text_until_cursor = self.line_buffer[:cursor_pos]
2568
2570
2569 if not full_text:
2571 if not full_text:
2570 full_text = line_buffer
2572 full_text = line_buffer
2571
2573
2572 context = CompletionContext(
2574 context = CompletionContext(
2573 full_text=full_text,
2575 full_text=full_text,
2574 cursor_position=cursor_pos,
2576 cursor_position=cursor_pos,
2575 cursor_line=cursor_line,
2577 cursor_line=cursor_line,
2576 token=text,
2578 token=text,
2577 )
2579 )
2578
2580
2579 # Start with a clean slate of completions
2581 # Start with a clean slate of completions
2580 results = {}
2582 results = {}
2581
2583
2582 custom_completer_matcher_id = _get_matcher_id(self.custom_completer_matcher)
2584 custom_completer_matcher_id = _get_matcher_id(self.custom_completer_matcher)
2583 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2585 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2584
2586
2585 for matcher in self.matchers:
2587 for matcher in self.matchers:
2586 api_version = _get_matcher_api_version(matcher)
2588 api_version = _get_matcher_api_version(matcher)
2587 matcher_id = _get_matcher_id(matcher)
2589 matcher_id = _get_matcher_id(matcher)
2588
2590
2589 if matcher_id in results:
2591 if matcher_id in results:
2590 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
2592 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
2591
2593
2592 try:
2594 try:
2593 if api_version == 1:
2595 if api_version == 1:
2594 result = _convert_matcher_v1_result_to_v2(
2596 result = _convert_matcher_v1_result_to_v2(
2595 matcher(text), type=_UNKNOWN_TYPE
2597 matcher(text), type=_UNKNOWN_TYPE
2596 )
2598 )
2597 elif api_version == 2:
2599 elif api_version == 2:
2598 # TODO: MATCHES_LIMIT was used inconsistently in the previous version
2600 # TODO: MATCHES_LIMIT was used inconsistently in the previous version
2599 # (applied individually to latex/unicode and magic arguments matcher,
2601 # (applied individually to latex/unicode and magic arguments matcher,
2600 # but not Jedi, paths, magics, etc). Jedi did not have a limit here at
2602 # but not Jedi, paths, magics, etc). Jedi did not have a limit here at
2601 # all, but others had a total limit (retained in `_deduplicate_and_sort`).
2603 # all, but others had a total limit (retained in `_deduplicate_and_sort`).
2602 # 1) Was that deliberate or an omission?
2604 # 1) Was that deliberate or an omission?
2603 # 2) Should we include the limit in the API v2 signature to allow
2605 # 2) Should we include the limit in the API v2 signature to allow
2604 # more expensive matchers to return early?
2606 # more expensive matchers to return early?
2605 result = cast(MatcherAPIv2, matcher)(context)
2607 result = cast(MatcherAPIv2, matcher)(context)
2606 else:
2608 else:
2607 raise ValueError(f"Unsupported API version {api_version}")
2609 raise ValueError(f"Unsupported API version {api_version}")
2608 except:
2610 except:
2609 # Show the ugly traceback if the matcher causes an
2611 # Show the ugly traceback if the matcher causes an
2610 # exception, but do NOT crash the kernel!
2612 # exception, but do NOT crash the kernel!
2611 sys.excepthook(*sys.exc_info())
2613 sys.excepthook(*sys.exc_info())
2612 continue
2614 continue
2613
2615
2614 # default the matched fragment to the current token if the matcher did not set one.
2616 # default the matched fragment to the current token if the matcher did not set one.
2615 result["matched_fragment"] = result.get("matched_fragment", context.token)
2617 result["matched_fragment"] = result.get("matched_fragment", context.token)
2616
2618
2617 suppression_recommended = result.get("suppress_others", False)
2619 suppression_recommended = result.get("suppress_others", False)
2618
2620
2619 should_suppress = (
2621 should_suppress = (
2620 self.suppress_competing_matchers is True
2622 self.suppress_competing_matchers is True
2621 or suppression_recommended
2623 or suppression_recommended
2622 or (
2624 or (
2623 isinstance(self.suppress_competing_matchers, dict)
2625 isinstance(self.suppress_competing_matchers, dict)
2624 and self.suppress_competing_matchers.get(matcher_id, False)
2626 and self.suppress_competing_matchers.get(matcher_id, False)
2625 )
2627 )
2626 ) and len(result["completions"])
2628 ) and len(result["completions"])
2627
2629
2628 if should_suppress:
2630 if should_suppress:
2629 new_results = {matcher_id: result}
2631 new_results = {matcher_id: result}
2630 if (
2632 if (
2631 matcher_id == custom_completer_matcher_id
2633 matcher_id == custom_completer_matcher_id
2632 and jedi_matcher_id in results
2634 and jedi_matcher_id in results
2633 ):
2635 ):
2634 # custom completer does not suppress Jedi (this may change in future versions).
2636 # custom completer does not suppress Jedi (this may change in future versions).
2635 new_results[jedi_matcher_id] = results[jedi_matcher_id]
2637 new_results[jedi_matcher_id] = results[jedi_matcher_id]
2636 results = new_results
2638 results = new_results
2637 break
2639 break
2638
2640
2639 results[matcher_id] = result
2641 results[matcher_id] = result
2640
2642
2641 _, matches = self._arrange_and_extract(
2643 _, matches = self._arrange_and_extract(
2642 results,
2644 results,
2643 # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission?
2645 # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission?
2644 # if it was omission, we can remove the filtering step, otherwise remove this comment.
2646 # if it was omission, we can remove the filtering step, otherwise remove this comment.
2645 skip_matchers={jedi_matcher_id},
2647 skip_matchers={jedi_matcher_id},
2646 abort_if_offset_changes=False,
2648 abort_if_offset_changes=False,
2647 )
2649 )
2648
2650
2649 # populate legacy stateful API
2651 # populate legacy stateful API
2650 self.matches = matches
2652 self.matches = matches
2651
2653
2652 return results
2654 return results
2653
2655
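For orientation, here is a rough sketch (not part of the patch) of what an API-v2 matcher consumed by the dispatch loop above could look like. ``context_matcher`` and ``SimpleCompletion`` come from this module; the matcher itself and its vocabulary are invented, and the registration step that would put it into ``self.matchers`` is assumed rather than shown.

from IPython.core.completer import SimpleCompletion, context_matcher


@context_matcher()
def color_matcher(context):
    """Hypothetical matcher: completes a small fixed vocabulary against the token."""
    words = ["red", "green", "blue"]
    token = context.token
    return {
        # MatcherResult-style dict; omitted keys fall back to the defaults applied above,
        # e.g. "matched_fragment" defaults to context.token.
        "completions": [
            SimpleCompletion(text=w, type="color") for w in words if w.startswith(token)
        ],
        # Setting "suppress_others": True here would hide competing matchers,
        # exactly as handled by the suppression logic above.
    }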
2654 @staticmethod
2656 @staticmethod
2655 def _deduplicate(
2657 def _deduplicate(
2656 matches: Sequence[SimpleCompletion],
2658 matches: Sequence[SimpleCompletion],
2657 ) -> Iterable[SimpleCompletion]:
2659 ) -> Iterable[SimpleCompletion]:
2658 filtered_matches = {}
2660 filtered_matches = {}
2659 for match in matches:
2661 for match in matches:
2660 text = match.text
2662 text = match.text
2661 if (
2663 if (
2662 text not in filtered_matches
2664 text not in filtered_matches
2663 or filtered_matches[text].type == _UNKNOWN_TYPE
2665 or filtered_matches[text].type == _UNKNOWN_TYPE
2664 ):
2666 ):
2665 filtered_matches[text] = match
2667 filtered_matches[text] = match
2666
2668
2667 return filtered_matches.values()
2669 return filtered_matches.values()
2668
2670
2669 @staticmethod
2671 @staticmethod
2670 def _sort(matches: Sequence[SimpleCompletion]):
2672 def _sort(matches: Sequence[SimpleCompletion]):
2671 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
2673 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
2672
2674
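A small illustration of the two helpers above (a sketch: it assumes the enclosing class is ``IPCompleter`` and that ``_UNKNOWN_TYPE`` is the module-level sentinel checked in ``_deduplicate``).

from IPython.core.completer import IPCompleter, SimpleCompletion, _UNKNOWN_TYPE

# Duplicates are collapsed by completion text; an entry with a known type replaces
# one still tagged with the unknown-type sentinel, and survivors are sorted for display.
matches = [
    SimpleCompletion(text="values", type=_UNKNOWN_TYPE),
    SimpleCompletion(text="values", type="function"),  # wins over the unknown-typed duplicate
    SimpleCompletion(text="keys", type="function"),
]
ordered = IPCompleter._sort(list(IPCompleter._deduplicate(matches)))
print([(m.text, m.type) for m in ordered])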
2673 @context_matcher()
2675 @context_matcher()
2674 def fwd_unicode_matcher(self, context):
2676 def fwd_unicode_matcher(self, context):
2675 fragment, matches = self.latex_matches(context.token)
2677 fragment, matches = self.latex_matches(context.token)
2676 return _convert_matcher_v1_result_to_v2(
2678 return _convert_matcher_v1_result_to_v2(
2677 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2679 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2678 )
2680 )
2679
2681
2680 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
2682 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
2681 """
2683 """
2682 Forward-match the text after a backslash against the list of
2684 Forward-match the text after a backslash against the list of
2683 Unicode character names and return potential completions.
2685 Unicode character names and return potential completions.
2684
2686
2685 Will compute the list of Unicode character names on first call and cache it.
2687 Will compute the list of Unicode character names on first call and cache it.
2686
2688
2687 Returns
2689 Returns
2688 -------
2690 -------
2689 A tuple with:
2691 A tuple with:
2690 - matched text (empty if no matches)
2692 - matched text (empty if no matches)
2691 - list of potential completions (an empty tuple if there are none)
2693 - list of potential completions (an empty tuple if there are none)
2692 """
2694 """
2693 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
2695 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
2694 # We could do a faster match using a Trie.
2696 # We could do a faster match using a Trie.
2695
2697
2698 # Using pygtrie the following seems to work:
2700 # Using pygtrie the following seems to work:
2697
2699
2698 # s = PrefixSet()
2700 # s = PrefixSet()
2699
2701
2700 # for c in range(0,0x10FFFF + 1):
2702 # for c in range(0,0x10FFFF + 1):
2701 # try:
2703 # try:
2702 # s.add(unicodedata.name(chr(c)))
2704 # s.add(unicodedata.name(chr(c)))
2703 # except ValueError:
2705 # except ValueError:
2704 # pass
2706 # pass
2705 # [''.join(k) for k in s.iter(prefix)]
2707 # [''.join(k) for k in s.iter(prefix)]
2706
2708
2707 # But need to be timed and adds an extra dependency.
2709 # But need to be timed and adds an extra dependency.
2708
2710
2709 slashpos = text.rfind('\\')
2711 slashpos = text.rfind('\\')
2710 # if the text contains a backslash
2712 # if the text contains a backslash
2711 if slashpos > -1:
2713 if slashpos > -1:
2712 # PERF: It's important that we don't access self._unicode_names
2714 # PERF: It's important that we don't access self._unicode_names
2713 # until we're inside this if-block. _unicode_names is lazily
2715 # until we're inside this if-block. _unicode_names is lazily
2714 # initialized, and it takes a user-noticeable amount of time to
2716 # initialized, and it takes a user-noticeable amount of time to
2715 # initialize it, so we don't want to initialize it unless we're
2717 # initialize it, so we don't want to initialize it unless we're
2716 # actually going to use it.
2718 # actually going to use it.
2717 s = text[slashpos + 1 :]
2719 s = text[slashpos + 1 :]
2718 sup = s.upper()
2720 sup = s.upper()
2719 candidates = [x for x in self.unicode_names if x.startswith(sup)]
2721 candidates = [x for x in self.unicode_names if x.startswith(sup)]
2720 if candidates:
2722 if candidates:
2721 return s, candidates
2723 return s, candidates
2722 candidates = [x for x in self.unicode_names if sup in x]
2724 candidates = [x for x in self.unicode_names if sup in x]
2723 if candidates:
2725 if candidates:
2724 return s, candidates
2726 return s, candidates
2725 splitsup = sup.split(" ")
2727 splitsup = sup.split(" ")
2726 candidates = [
2728 candidates = [
2727 x for x in self.unicode_names if all(u in x for u in splitsup)
2729 x for x in self.unicode_names if all(u in x for u in splitsup)
2728 ]
2730 ]
2729 if candidates:
2731 if candidates:
2730 return s, candidates
2732 return s, candidates
2731
2733
2732 return "", ()
2734 return "", ()
2733
2735
2734 # if the text does not contain a backslash
2736 # if the text does not contain a backslash
2735 else:
2737 else:
2736 return '', ()
2738 return '', ()
2737
2739
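A usage sketch for ``fwd_unicode_match``; the session objects are assumptions (inside a running IPython session the active completer is reachable as ``get_ipython().Completer``).

from IPython import get_ipython

# The fragment after the last backslash is upper-cased and matched against the cached
# Unicode names: first by prefix, then by substring, then word by word.
ip = get_ipython()  # returns None outside an IPython session
fragment, names = ip.Completer.fwd_unicode_match("print('\\SNOW")
print(fragment)    # 'SNOW'
print(names[:3])   # e.g. ['SNOWMAN', 'SNOWMAN WITHOUT SNOW', ...] depending on Unicode data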
2738 @property
2740 @property
2739 def unicode_names(self) -> List[str]:
2741 def unicode_names(self) -> List[str]:
2740 """List of names of unicode code points that can be completed.
2742 """List of names of unicode code points that can be completed.
2741
2743
2742 The list is lazily initialized on first access.
2744 The list is lazily initialized on first access.
2743 """
2745 """
2744 if self._unicode_names is None:
2746 if self._unicode_names is None:
2751 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2753 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2752
2754
2753 return self._unicode_names
2755 return self._unicode_names
2754
2756
2755 def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
2757 def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
2756 names = []
2758 names = []
2757 for start, stop in ranges:
2759 for start, stop in ranges:
2758 for c in range(start, stop):
2760 for c in range(start, stop):
2759 try:
2761 try:
2760 names.append(unicodedata.name(chr(c)))
2762 names.append(unicodedata.name(chr(c)))
2761 except ValueError:
2763 except ValueError:
2762 pass
2764 pass
2763 return names
2765 return names
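The helper above can also be exercised on its own; a rough sketch, assuming ``_UNICODE_RANGES`` is the module-level list of ``(start, stop)`` ranges referenced by the ``unicode_names`` property.

import time

from IPython.core.completer import _UNICODE_RANGES, _unicode_name_compute

# The ranges keep the scan away from large unassigned blocks; the property above then
# caches the result, so this cost is paid at most once per session.
t0 = time.perf_counter()
names = _unicode_name_compute(_UNICODE_RANGES)
print(f"{len(names)} names computed in {time.perf_counter() - t0:.2f}s")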
@@ -1,115 +1,116 b''
1 [metadata]
1 [metadata]
2 name = ipython
2 name = ipython
3 version = attr: IPython.core.release.__version__
3 version = attr: IPython.core.release.__version__
4 url = https://ipython.org
4 url = https://ipython.org
5 description = IPython: Productive Interactive Computing
5 description = IPython: Productive Interactive Computing
6 long_description_content_type = text/x-rst
6 long_description_content_type = text/x-rst
7 long_description = file: long_description.rst
7 long_description = file: long_description.rst
8 license_file = LICENSE
8 license_file = LICENSE
9 project_urls =
9 project_urls =
10 Documentation = https://ipython.readthedocs.io/
10 Documentation = https://ipython.readthedocs.io/
11 Funding = https://numfocus.org/
11 Funding = https://numfocus.org/
12 Source = https://github.com/ipython/ipython
12 Source = https://github.com/ipython/ipython
13 Tracker = https://github.com/ipython/ipython/issues
13 Tracker = https://github.com/ipython/ipython/issues
14 keywords = Interactive, Interpreter, Shell, Embedding
14 keywords = Interactive, Interpreter, Shell, Embedding
15 platforms = Linux, Mac OSX, Windows
15 platforms = Linux, Mac OSX, Windows
16 classifiers =
16 classifiers =
17 Framework :: IPython
17 Framework :: IPython
18 Framework :: Jupyter
18 Framework :: Jupyter
19 Intended Audience :: Developers
19 Intended Audience :: Developers
20 Intended Audience :: Science/Research
20 Intended Audience :: Science/Research
21 License :: OSI Approved :: BSD License
21 License :: OSI Approved :: BSD License
22 Programming Language :: Python
22 Programming Language :: Python
23 Programming Language :: Python :: 3
23 Programming Language :: Python :: 3
24 Programming Language :: Python :: 3 :: Only
24 Programming Language :: Python :: 3 :: Only
25 Topic :: System :: Shells
25 Topic :: System :: Shells
26
26
27 [options]
27 [options]
28 packages = find:
28 packages = find:
29 python_requires = >=3.8
29 python_requires = >=3.8
30 zip_safe = False
30 zip_safe = False
31 install_requires =
31 install_requires =
32 appnope; sys_platform == "darwin"
32 appnope; sys_platform == "darwin"
33 backcall
33 backcall
34 colorama; sys_platform == "win32"
34 colorama; sys_platform == "win32"
35 decorator
35 decorator
36 jedi>=0.16
36 jedi>=0.16
37 matplotlib-inline
37 matplotlib-inline
38 pexpect>4.3; sys_platform != "win32"
38 pexpect>4.3; sys_platform != "win32"
39 pickleshare
39 pickleshare
40 prompt_toolkit>3.0.1,<3.1.0
40 prompt_toolkit>3.0.1,<3.1.0
41 pygments>=2.4.0
41 pygments>=2.4.0
42 setuptools>=18.5
42 setuptools>=18.5
43 stack_data
43 stack_data
44 traitlets>=5
44 traitlets>=5
45
45
46 [options.extras_require]
46 [options.extras_require]
47 black =
47 black =
48 black
48 black
49 doc =
49 doc =
50 Sphinx>=1.3
50 Sphinx>=1.3
51 kernel =
51 kernel =
52 ipykernel
52 ipykernel
53 nbconvert =
53 nbconvert =
54 nbconvert
54 nbconvert
55 nbformat =
55 nbformat =
56 nbformat
56 nbformat
57 notebook =
57 notebook =
58 ipywidgets
58 ipywidgets
59 notebook
59 notebook
60 parallel =
60 parallel =
61 ipyparallel
61 ipyparallel
62 qtconsole =
62 qtconsole =
63 qtconsole
63 qtconsole
64 terminal =
64 terminal =
65 test =
65 test =
66 pytest<7.1
66 pytest<7.1
67 pytest-asyncio
67 pytest-asyncio
68 testpath
68 testpath
69 test_extra =
69 test_extra =
70 %(test)s
70 %(test)s
71 curio
71 curio
72 matplotlib!=3.2.0
72 matplotlib!=3.2.0
73 nbformat
73 nbformat
74 numpy>=1.19
74 numpy>=1.19
75 pandas
75 pandas
76 trio
76 trio
77 typing_extensions
77 all =
78 all =
78 %(black)s
79 %(black)s
79 %(doc)s
80 %(doc)s
80 %(kernel)s
81 %(kernel)s
81 %(nbconvert)s
82 %(nbconvert)s
82 %(nbformat)s
83 %(nbformat)s
83 %(notebook)s
84 %(notebook)s
84 %(parallel)s
85 %(parallel)s
85 %(qtconsole)s
86 %(qtconsole)s
86 %(terminal)s
87 %(terminal)s
87 %(test_extra)s
88 %(test_extra)s
88 %(test)s
89 %(test)s
89
90
90 [options.packages.find]
91 [options.packages.find]
91 exclude =
92 exclude =
92 setupext
93 setupext
93
94
94 [options.package_data]
95 [options.package_data]
95 IPython.core = profile/README*
96 IPython.core = profile/README*
96 IPython.core.tests = *.png, *.jpg, daft_extension/*.py
97 IPython.core.tests = *.png, *.jpg, daft_extension/*.py
97 IPython.lib.tests = *.wav
98 IPython.lib.tests = *.wav
98 IPython.testing.plugin = *.txt
99 IPython.testing.plugin = *.txt
99
100
100 [options.entry_points]
101 [options.entry_points]
101 console_scripts =
102 console_scripts =
102 ipython = IPython:start_ipython
103 ipython = IPython:start_ipython
103 ipython3 = IPython:start_ipython
104 ipython3 = IPython:start_ipython
104 pygments.lexers =
105 pygments.lexers =
105 ipythonconsole = IPython.lib.lexers:IPythonConsoleLexer
106 ipythonconsole = IPython.lib.lexers:IPythonConsoleLexer
106 ipython = IPython.lib.lexers:IPythonLexer
107 ipython = IPython.lib.lexers:IPythonLexer
107 ipython3 = IPython.lib.lexers:IPython3Lexer
108 ipython3 = IPython.lib.lexers:IPython3Lexer
108
109
109 [velin]
110 [velin]
110 ignore_patterns =
111 ignore_patterns =
111 IPython/core/tests
112 IPython/core/tests
112 IPython/testing
113 IPython/testing
113
114
114 [tool.black]
115 [tool.black]
115 exclude = 'timing\.py'
116 exclude = 'timing\.py'
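A side note on the ``[options.extras_require]`` block above: the ``%(name)s`` entries are configparser-style self-references, so ``test_extra`` pulls in everything from ``test`` plus its own additions, including the ``typing_extensions`` line added in this change. The sketch below is transcribed by hand from the file, not generated from it.

# Roughly how the extras compose after interpolation; installing ipython[test_extra]
# should resolve to approximately this union of requirements.
test = ["pytest<7.1", "pytest-asyncio", "testpath"]
test_extra = test + [
    "curio",
    "matplotlib!=3.2.0",
    "nbformat",
    "numpy>=1.19",
    "pandas",
    "trio",
    "typing_extensions",
]
print(test_extra)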