fix IPCompleter inside tuples/arrays when jedi is disabled...
M Bussonnier
"""Completion for IPython.

This module started as a fork of the rlcompleter module in the Python standard
library. The original enhancements made to rlcompleter have been sent
upstream and were accepted as of Python 2.3.

This module now supports a wide variety of completion mechanisms, both for
normal classic Python code and for IPython-specific syntax like magics.

Latex and Unicode completion
============================

IPython and compatible frontends not only can complete your code, but can help
you to input a wide range of characters. In particular we allow you to insert
a unicode character using the tab completion mechanism.

Forward latex/unicode completion
--------------------------------

Forward completion allows you to easily type a unicode character using its latex
name, or unicode long description. To do so type a backslash followed by the
relevant name and press tab:


Using latex completion:

.. code::

    \\alpha<tab>
    α

or using unicode completion:


.. code::

    \\GREEK SMALL LETTER ALPHA<tab>
    α


Only valid Python identifiers will complete. Combining characters (like arrow or
dots) are also available; unlike latex, they need to be put after their
counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.

Some browsers are known to display combining characters incorrectly.

Backward latex completion
-------------------------

It is sometimes challenging to know how to type a character. If you are using
IPython or any compatible frontend, you can prepend a backslash to the character
and press :kbd:`Tab` to expand it to its latex form.

.. code::

    \\α<tab>
    \\alpha


Both forward and backward completions can be deactivated by setting the
:std:configtrait:`Completer.backslash_combining_completions` option to
``False``.
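Both directions are, at heart, table lookups. Below is a self-contained sketch of the unicode-name direction using only the standard library; IPython's real implementation uses its own `latex_symbols` tables, so treat these helper names as illustrative:

```python
import unicodedata

# Forward: \GREEK SMALL LETTER ALPHA<tab> -> α  (unicode-name completion)
def forward_unicode(name):
    try:
        return unicodedata.lookup(name)
    except KeyError:
        return None

# Backward: \α<tab> -> the character's unicode name, which tells you how
# to type it forward again.
def backward_unicode(char):
    return unicodedata.name(char, None)

alpha = forward_unicode("GREEK SMALL LETTER ALPHA")
print(alpha)                    # α
print(backward_unicode(alpha))  # GREEK SMALL LETTER ALPHA
```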


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions both using static analysis of the code, and dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and
will raise unless used in a :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if the current
IPython completer's pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is an integer without
executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
option.

Be sure to update :any:`jedi` to the latest stable version, or to try the
current development version, to get better completions.

Matchers
========

All completion routines are implemented using a unified *Matchers* API.
The matchers API is provisional and subject to change without notice.

The built-in matchers include:

- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
- :any:`IPCompleter.magic_matcher`: completions for magics,
- :any:`IPCompleter.unicode_name_matcher`,
  :any:`IPCompleter.fwd_unicode_matcher`
  and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
- :any:`IPCompleter.file_matcher`: paths to files and directories,
- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
  implementation in :any:`InteractiveShell`, which uses the IPython hooks system
  (`complete_command`) with string dispatch (including regular expressions).
  Unlike other matchers, ``custom_completer_matcher`` will not suppress
  Jedi results; this matches the behaviour of earlier IPython versions.

Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.

Matcher API
-----------

Simplifying some details, the ``Matcher`` interface can be described as

.. code-block::

    MatcherAPIv1 = Callable[[str], list[str]]
    MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]

    Matcher = MatcherAPIv1 | MatcherAPIv2

The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
and remains supported as the simplest way of generating completions. This is also
currently the only API supported by the IPython hooks system `complete_command`.

To distinguish between matcher versions, the ``matcher_api_version`` attribute is
used. More precisely, the API allows omitting ``matcher_api_version`` for v1
matchers, and requires a literal ``2`` for v2 matchers.

Once the API stabilises, future versions may relax the requirement to specify
``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore
please do not rely on the presence of ``matcher_api_version`` for any purposes.
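Under this contract, a minimal pair of matchers can be sketched as plain callables. The names below and the dict-shaped result are illustrative stand-ins for IPython's actual `CompletionContext` and `SimpleMatcherResult` types, not their real definitions:

```python
from types import SimpleNamespace

# Hypothetical v2 matcher: a callable tagged with matcher_api_version = 2.
def emoji_matcher(context):
    # `context` stands in for IPython's CompletionContext; only the token
    # under the cursor is consulted here.
    token = context.token
    completions = [w for w in ("smile", "smirk", "snake") if w.startswith(token)]
    return {"completions": completions, "suppress": False}

emoji_matcher.matcher_api_version = 2

# A v1 matcher is just Callable[[str], list[str]] and may omit the
# attribute entirely:
def emoji_matcher_v1(text):
    return [w for w in ("smile", "smirk", "snake") if w.startswith(text)]

demo = emoji_matcher(SimpleNamespace(token="sm"))
print(demo["completions"])  # ['smile', 'smirk']
```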

Suppression of competing matchers
---------------------------------

By default results from all matchers are combined, in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results from
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a set of identifiers of matchers which
should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.

The suppression behaviour is user-configurable via
:std:configtrait:`IPCompleter.suppress_competing_matchers`.
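These rules amount to a small fold over matcher results in priority order. A self-contained sketch follows; the dict keys mirror the ``MatcherResult`` fields named above, but everything else (names, data shapes) is hypothetical and differs from the actual IPython implementation:

```python
# Sketch of the suppression semantics: matchers are visited highest
# priority first; the first result with suppress=True drops results of
# subsequent matchers, except those whitelisted in do_not_suppress.
def combine(results):
    """results: list of (matcher_id, result_dict), highest priority first."""
    combined = []
    allowed = None  # None means every matcher is still allowed
    for matcher_id, res in results:
        if allowed is not None and matcher_id not in allowed:
            continue  # suppressed by a higher-priority matcher
        combined.extend(res["completions"])
        if res.get("suppress") and allowed is None:
            allowed = res.get("do_not_suppress", set())
    return combined

out = combine([
    ("magics", {"completions": ["%%time"], "suppress": True,
                "do_not_suppress": {"files"}}),
    ("jedi", {"completions": ["time"]}),
    ("files", {"completions": ["time.txt"]}),
])
print(out)  # ['%%time', 'time.txt'] -- jedi suppressed, files exempted
```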
"""


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org

from __future__ import annotations
import builtins as builtin_mod
import enum
import glob
import inspect
import itertools
import keyword
import ast
import os
import re
import string
import sys
import tokenize
import time
import unicodedata
import uuid
import warnings
from ast import literal_eval
from collections import defaultdict
from contextlib import contextmanager
from dataclasses import dataclass
from functools import cached_property, partial
from types import SimpleNamespace
from typing import (
    Iterable,
    Iterator,
    List,
    Tuple,
    Union,
    Any,
    Sequence,
    Dict,
    Optional,
    TYPE_CHECKING,
    Set,
    Sized,
    TypeVar,
    Literal,
)

from IPython.core.guarded_eval import guarded_eval, EvaluationContext
from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
from IPython.utils.decorators import sphinx_options
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.docs import GENERATING_DOCUMENTATION
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import (
    Bool,
    Enum,
    Int,
    List as ListTrait,
    Unicode,
    Dict as DictTrait,
    Union as UnionTrait,
    observe,
)
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
__skip_doctest__ = True


try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False


if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
    from typing import cast
    from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
else:
    from typing import Generic

    def cast(type_, obj):
        """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
        return obj

    # not required at runtime
    NotRequired = Tuple  # requires Python >=3.11
    TypedDict = Dict  # by extension of `NotRequired` requires 3.11 too
    Protocol = object  # requires Python >=3.8
    TypeAlias = Any  # requires Python >=3.10
    TypeGuard = Generic  # requires Python >=3.10
    if GENERATING_DOCUMENTATION:
        from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
# -----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be more
# fine-grained, but is it worth it for performance? While unicode has characters
# in the range 0-0x110000, we seem to have names for only about 10% of those
# (131808 as I write this). With the ranges below we cover them all, with a
# density of ~67%; the biggest next gap we considered only adds about 1%
# density, and there are 600 gaps that would need hard coding.
_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

# Sentinel value to signal lack of a match.
not_found = object()


class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in the :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass


warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things() # works

    >>> completer.do_experimental_things() # raises.

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should
        report a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements on
        any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield
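The warning dance above can be seen in isolation; the following standalone sketch re-declares the names from this module so it runs on its own:

```python
import warnings
from contextlib import contextmanager

# Stand-ins mirrored from this module so the snippet is self-contained.
class ProvisionalCompleterWarning(FutureWarning):
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)

@contextmanager
def provisionalcompleter(action='ignore'):
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield

# Outside the context, the warning escalates to an exception:
try:
    warnings.warn("unstable API", ProvisionalCompleterWarning)
    raised = False
except ProvisionalCompleterWarning:
    raised = True

# Inside, the same warning is silently ignored:
with provisionalcompleter():
    warnings.warn("unstable API", ProvisionalCompleterWarning)

print(raised)  # True
```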


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False
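For example, the helper behaves as follows (function mirrored from above so the snippet is self-contained):

```python
# Mirror of has_open_quotes: returns the open quote character, or False.
def has_open_quotes(s):
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    return False

print(has_open_quotes('print("hello'))  # '"'  -> the double quote is open
print(has_open_quotes("don't"))         # "'"  -> odd number of single quotes
print(has_open_quotes("it's fine'"))    # False -> both single quotes closed
```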


def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s
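A quick sketch of the POSIX branch of this function (on Windows the whole name is wrapped in double quotes instead; the function is re-declared here so the snippet runs standalone):

```python
# POSIX-style escaping: backslash-escape each protectable character.
PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

def protect_filename(s, protectables=PROTECTABLES):
    if set(s) & set(protectables):
        return "".join(("\\" + c if c in protectables else c) for c in s)
    return s

print(protect_filename("my file (1).txt"))  # my\ file\ \(1\).txt
print(protect_filename("plain.txt"))        # plain.txt (nothing to escape)
```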


def expand_user(path: str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val
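Concretely, the three return values relate as follows (a self-contained mirror of the function, assuming `os.path.expanduser` can resolve a home directory):

```python
import os

# Mirror of expand_user above so the snippet runs standalone.
def expand_user(path):
    tilde_expand, tilde_val, newpath = False, '', path
    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        tilde_val = newpath[:-rest] if rest else newpath
    return newpath, tilde_expand, tilde_val

newpath, expanded, tilde_val = expand_user('~/notebooks')
print(expanded)  # True; newpath is tilde_val (the home dir) + '/notebooks'
untouched = expand_user('/tmp/data')
print(untouched)  # ('/tmp/data', False, '')
```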


def compress_user(path: str, tilde_expand: bool, tilde_val: str) -> str:
    """Does the opposite of expand_user, with its outputs."""
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path
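The round trip with `expand_user` looks like this (function mirrored from above so the snippet is self-contained):

```python
import os

# Mirror of compress_user: undo tilde expansion using expand_user's outputs.
def compress_user(path, tilde_expand, tilde_val):
    if tilde_expand:
        return path.replace(tilde_val, '~')
    return path

home = os.path.expanduser('~')
shown = compress_user(home + '/data.csv', True, home)
print(shown)  # ~/data.csv
print(compress_user('/etc/hosts', False, ''))  # /etc/hosts (unchanged)
```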


def completions_sorting_key(word):
    """Key for sorting completions.

    This does several things:

    - Demote any completions starting with underscores to the end.
    - Insert any %magic and %%cellmagic completions in alphabetical order by
      their name.
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if "%" not in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if "%" not in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2
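Used as a `sorted` key, this interleaves magics alphabetically with plain names and pushes underscored names to the end (function mirrored from above so the snippet is self-contained):

```python
# Mirror of completions_sorting_key above.
def completions_sorting_key(word):
    prio1, prio2 = 0, 0
    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1
    if word.endswith('='):
        prio1 = -1
    if word.startswith('%%'):
        if "%" not in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if "%" not in word[1:]:
            word = word[1:]
            prio2 = 1
    return prio1, word, prio2

words = ['_private', '%time', 'print', '__dunder', 'timeit']
ordered = sorted(words, key=completions_sorting_key)
print(ordered)  # ['print', '%time', 'timeit', '_private', '__dunder']
```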
461
462
462
463
463 class _FakeJediCompletion:
464 class _FakeJediCompletion:
464 """
465 """
465 This is a workaround to communicate to the UI that Jedi has crashed and to
466 This is a workaround to communicate to the UI that Jedi has crashed and to
466 report a bug. Will be used only id :any:`IPCompleter.debug` is set to true.
467 report a bug. Will be used only id :any:`IPCompleter.debug` is set to true.
467
468
468 Added in IPython 6.0 so should likely be removed for 7.0
469 Added in IPython 6.0 so should likely be removed for 7.0
469
470
470 """
471 """
471
472
472 def __init__(self, name):
473 def __init__(self, name):
473
474
474 self.name = name
475 self.name = name
475 self.complete = name
476 self.complete = name
476 self.type = 'crashed'
477 self.type = 'crashed'
477 self.name_with_symbols = name
478 self.name_with_symbols = name
478 self.signature = ""
479 self.signature = ""
479 self._origin = "fake"
480 self._origin = "fake"
480 self.text = "crashed"
481 self.text = "crashed"
481
482
482 def __repr__(self):
483 def __repr__(self):
483 return '<Fake completion object jedi has crashed>'
484 return '<Fake completion object jedi has crashed>'
484
485
485
486
486 _JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
487 _JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
487
488
488
489
class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This class is unstable, API may change without warning.
        It will also raise unless used in a proper context manager.

    This acts as a middle-ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how the
    code should be run/inspected, Prompt Toolkit (and other frontends) mostly
    need user-facing information:

    - Which range should be replaced by what.
    - Some metadata (like completion type), or meta information to be displayed
      to the user.

    For debugging purposes we can also store the origin of the completion (``jedi``,
    ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(
        self,
        start: int,
        end: int,
        text: str,
        *,
        type: Optional[str] = None,
        _origin="",
        signature="",
    ) -> None:
        warnings.warn(
            "``Completion`` is a provisional API (as of IPython 6.0). "
            "It may change without warnings. "
            "Use in corresponding context manager.",
            category=ProvisionalCompleterWarning,
            stacklevel=2,
        )

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
            (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other) -> bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer the type), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing, as it depends on surrounding text, which Completions are not
        aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))


class SimpleCompletion:
    """Completion item to be included in the dictionary returned by new-style Matcher (API v2).

    .. warning::

        Provisional

        This class is used to describe the currently supported attributes of
        simple completion items, and any additional implementation details
        should not be relied on. Additional attributes may be included in
        future versions, and the meaning of text disambiguated from the current
        dual meaning of "text to insert" and "text to use as a label".
    """

    __slots__ = ["text", "type"]

    def __init__(self, text: str, *, type: Optional[str] = None):
        self.text = text
        self.type = type

    def __repr__(self):
        return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"


class _MatcherResultBase(TypedDict):
    """Definition of dictionary to be returned by new-style Matcher (API v2)."""

    #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
    matched_fragment: NotRequired[str]

    #: Whether to suppress results from all other matchers (True), some
    #: matchers (set of identifiers) or none (False); default is False.
    suppress: NotRequired[Union[bool, Set[str]]]

    #: Identifiers of matchers which should NOT be suppressed when this matcher
    #: requests to suppress all other matchers; defaults to an empty set.
    do_not_suppress: NotRequired[Set[str]]

    #: Are completions already ordered and should be left as-is? default is False.
    ordered: NotRequired[bool]


@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
class SimpleMatcherResult(_MatcherResultBase, TypedDict):
    """Result of new-style completion matcher."""

    # note: TypedDict is added again to the inheritance chain
    # in order to get __orig_bases__ for documentation

    #: List of candidate completions
    completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]


class _JediMatcherResult(_MatcherResultBase):
    """Matching result returned by Jedi (will be processed differently)"""

    #: list of candidate completions
    completions: Iterator[_JediCompletionLike]


AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)


@dataclass
class CompletionContext:
    """Completion context provided as an argument to matchers in the Matcher API v2."""

    # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
    # which was not explicitly visible as an argument of the matcher, making any refactor
    # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
    # from the completer, and make substituting them in sub-classes easier.

    #: Relevant fragment of code directly preceding the cursor.
    #: The extraction of the token is implemented via a splitter heuristic
    #: (following readline behaviour for legacy reasons), which is user configurable
    #: (by switching the greedy mode).
    token: str

    #: The full available content of the editor or buffer
    full_text: str

    #: Cursor position in the line (the same for ``full_text`` and ``text``).
    cursor_position: int

    #: Cursor line in ``full_text``.
    cursor_line: int

    #: The maximum number of completions that will be used downstream.
    #: Matchers can use this information to abort early.
    #: The built-in Jedi matcher is currently excepted from this limit.
    # If not given, return all possible completions.
    limit: Optional[int]

    @cached_property
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

    @cached_property
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]


#: Matcher results for API v2.
MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]


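# Illustrative, standalone sketch of the two cached properties above:
# ``line_with_cursor`` selects the cursor's line from the full buffer, and
# ``text_until_cursor`` truncates it at the cursor column. The buffer and
# cursor values below are made up for the demonstration.
_full_text = "import os\nos.path.jo"
_cursor_line, _cursor_position = 1, 8

_line_with_cursor = _full_text.split("\n")[_cursor_line]
_text_until_cursor = _line_with_cursor[:_cursor_position]

assert _line_with_cursor == "os.path.jo"
assert _text_until_cursor == "os.path."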
class _MatcherAPIv1Base(Protocol):
    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
    #: API version
    matcher_api_version: Optional[Literal[1]]

    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...


#: Protocol describing Matcher API v1.
MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]


class MatcherAPIv2(Protocol):
    """Protocol describing Matcher API v2."""

    #: API version
    matcher_api_version: Literal[2] = 2

    def __call__(self, context: CompletionContext) -> MatcherResult:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]


def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 1


def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 2


def _is_sizable(value: Any) -> TypeGuard[Sized]:
    """Determines whether the object is sizable"""
    return hasattr(value, "__len__")


def _is_iterator(value: Any) -> TypeGuard[Iterator]:
    """Determines whether the object is an iterator"""
    return hasattr(value, "__next__")


def has_any_completions(result: MatcherResult) -> bool:
    """Check if any result includes any completions."""
    completions = result["completions"]
    if _is_sizable(completions):
        return len(completions) != 0
    if _is_iterator(completions):
        try:
            old_iterator = completions
            first = next(old_iterator)
            result["completions"] = cast(
                Iterator[SimpleCompletion],
                itertools.chain([first], old_iterator),
            )
            return True
        except StopIteration:
            return False
    raise ValueError(
        "Completions returned by matcher need to be an Iterator or a Sizable"
    )


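# Standalone sketch (not importing the module) of the iterator "peek" used by
# ``has_any_completions``: checking a generator for emptiness consumes its
# first element, so that element is chained back in front before the iterator
# is stored on the result again.
import itertools


def _demo_has_any(result):
    completions = result["completions"]
    if hasattr(completions, "__len__"):
        return len(completions) != 0
    try:
        first = next(completions)
    except StopIteration:
        return False
    result["completions"] = itertools.chain([first], completions)
    return True


_r = {"completions": (c for c in ["ab", "ac"])}
assert _demo_has_any(_r) is True
assert list(_r["completions"]) == ["ab", "ac"]  # nothing lost by peeking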
def completion_matcher(
    *,
    priority: Optional[float] = None,
    identifier: Optional[str] = None,
    api_version: int = 1,
):
    """Adds attributes describing the matcher.

    Parameters
    ----------
    priority : Optional[float]
        The priority of the matcher, which determines the order of execution of
        matchers. Higher priority means that the matcher will be executed first.
        Defaults to 0.
    identifier : Optional[str]
        identifier of the matcher allowing users to modify the behaviour via traitlets,
        and also used for debugging (will be passed as ``origin`` with the completions).

        Defaults to the matcher function's ``__qualname__`` (for example,
        ``IPCompleter.file_matcher`` for the built-in matcher defined
        as a ``file_matcher`` method of the ``IPCompleter`` class).
    api_version : Optional[int]
        version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.
    """

    def wrapper(func: Matcher):
        func.matcher_priority = priority or 0  # type: ignore
        func.matcher_identifier = identifier or func.__qualname__  # type: ignore
        func.matcher_api_version = api_version  # type: ignore
        if TYPE_CHECKING:
            if api_version == 1:
                func = cast(MatcherAPIv1, func)
            elif api_version == 2:
                func = cast(MatcherAPIv2, func)
        return func

    return wrapper


def _get_matcher_priority(matcher: Matcher):
    return getattr(matcher, "matcher_priority", 0)


def _get_matcher_id(matcher: Matcher):
    return getattr(matcher, "matcher_identifier", matcher.__qualname__)


def _get_matcher_api_version(matcher):
    return getattr(matcher, "matcher_api_version", 1)


context_matcher = partial(completion_matcher, api_version=2)


_IC = Iterable[Completion]


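# Usage sketch: the decorator only attaches metadata attributes, which the
# ``_get_matcher_*`` helpers later read back with safe defaults. This is a
# self-contained replica (names prefixed ``_demo_`` are hypothetical and not
# part of the module) rather than a call into the real decorator above.
from functools import partial as _demo_partial


def _demo_completion_matcher(*, priority=None, identifier=None, api_version=1):
    def wrapper(func):
        func.matcher_priority = priority or 0
        func.matcher_identifier = identifier or func.__qualname__
        func.matcher_api_version = api_version
        return func
    return wrapper


_demo_context_matcher = _demo_partial(_demo_completion_matcher, api_version=2)


@_demo_context_matcher(identifier="demo_matcher", priority=10)
def _demo_matcher(context):
    return {"completions": []}


assert getattr(_demo_matcher, "matcher_identifier", None) == "demo_matcher"
assert getattr(_demo_matcher, "matcher_priority", 0) == 10
assert getattr(_demo_matcher, "matcher_api_version", 1) == 2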
def _deduplicate_completions(text: str, completions: _IC) -> _IC:
    """
    Deduplicate a set of completions.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to deduplicate

    Yields
    ------
    `Completions` objects
        Completions coming from multiple sources may be different but end up having
        the same effect when applied to ``text``. If this is the case, this will
        consider completions as equal and only emit the first encountered.
        Not folded into `completions()` yet for debugging purposes, and to detect
        when the IPython completer returns things that Jedi does not; it should be
        folded in at some point.
    """
    completions = list(completions)
    if not completions:
        return

    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)

    seen = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            yield c
            seen.add(new_text)


def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
    """
    Rectify a set of completions to all have the same ``start`` and ``end``.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in a proper context manager.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to rectify
    _debug : bool
        Log failed completion

    Notes
    -----
    :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
    the Jupyter Protocol requires them to behave like so. This will readjust
    the completions to have the same ``start`` and ``end`` by padding both
    extremities with surrounding text.

    During stabilisation this should support a ``_debug`` option to log which
    completions are returned by the IPython completer but not found in Jedi, in
    order to make upstream bug reports.
    """
    warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
                  "It may change without warnings. "
                  "Use in corresponding context manager.",
                  category=ProvisionalCompleterWarning, stacklevel=2)

    completions = list(completions)
    if not completions:
        return
    starts = (c.start for c in completions)
    ends = (c.end for c in completions)

    new_start = min(starts)
    new_end = max(ends)

    seen_jedi = set()
    seen_python_matches = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if c._origin == 'jedi':
            seen_jedi.add(new_text)
        elif c._origin == "IPCompleter.python_matcher":
            seen_python_matches.add(new_text)
        yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
    diff = seen_python_matches.difference(seen_jedi)
    if diff and _debug:
        print('IPython.python matches have extras:', diff)


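# Illustrative sketch (plain tuples instead of ``Completion`` objects, not
# part of the module) of the padding logic above: every completion is
# rewritten to share the widest [start, end) range by pulling in the
# surrounding text on both sides.
def _demo_rectify(text, completions):
    # completions: list of (start, end, replacement_text) triples
    new_start = min(c[0] for c in completions)
    new_end = max(c[1] for c in completions)
    return [
        (new_start, new_end, text[new_start:start] + t + text[end:new_end])
        for start, end, t in completions
    ]


assert _demo_rectify("d.foo", [(2, 5, "food"), (0, 5, "d.foot")]) == [
    (0, 5, "d.food"),
    (0, 5, "d.foot"),
]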
if sys.platform == 'win32':
    DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
else:
    DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'

GREEDY_DELIMS = ' =\r\n'


class CompletionSplitter(object):
    """An object to split an input line in a manner similar to readline.

    By having our own implementation, we can expose readline-like completion in
    a uniform manner to all frontends. This object only needs to be given the
    line of text to be split and the cursor position on said line, and it
    returns the 'word' to be completed on at the cursor after splitting the
    entire line.

    What characters are used as splitting delimiters can be controlled by
    setting the ``delims`` attribute (this is a property that internally
    automatically builds the necessary regular expression)"""

    # Private interface

    # A string of delimiter characters. The default value makes sense for
    # IPython's most typical usage patterns.
    _delims = DELIMS

    # The expression (a normal string) to be compiled into a regular expression
    # for actual splitting. We store it as an attribute mostly for ease of
    # debugging, since this type of code can be so tricky to debug.
    _delim_expr = None

    # The regular expression that does the actual splitting
    _delim_re = None

    def __init__(self, delims=None):
        delims = CompletionSplitter._delims if delims is None else delims
        self.delims = delims

    @property
    def delims(self):
        """Return the string of delimiter characters."""
        return self._delims

    @delims.setter
    def delims(self, delims):
        """Set the delimiters for line splitting."""
        expr = '[' + ''.join('\\' + c for c in delims) + ']'
        self._delim_re = re.compile(expr)
        self._delims = delims
        self._delim_expr = expr

    def split_line(self, line, cursor_pos=None):
        """Split a line of text with a cursor at the given position."""
        cut_line = line if cursor_pos is None else line[:cursor_pos]
        return self._delim_re.split(cut_line)[-1]


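# Standalone sketch (not part of the module) of ``split_line``: the word to
# complete is the last delimiter-separated fragment of the text before the
# cursor. Note that '.' is not a delimiter, so attribute chains stay whole.
import re

_demo_delim_re = re.compile(
    '[' + ''.join('\\' + c for c in ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?') + ']'
)


def _demo_split_line(line, cursor_pos=None):
    cut_line = line if cursor_pos is None else line[:cursor_pos]
    return _demo_delim_re.split(cut_line)[-1]


assert _demo_split_line("print(foo.ba") == "foo.ba"  # '(' splits, '.' does not
assert _demo_split_line("a = b + cde", 9) == "c"     # only text before the cursor counts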
class Completer(Configurable):

    greedy = Bool(
        False,
        help="""Activate greedy completion.

        .. deprecated:: 8.8
            Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.

        When enabled in IPython 8.8 or newer, changes configuration as follows:

        - ``Completer.evaluation = 'unsafe'``
        - ``Completer.auto_close_dict_keys = True``
        """,
    ).tag(config=True)

    evaluation = Enum(
        ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
        default_value="limited",
        help="""Policy for code evaluation under completion.

        Successive options enable progressively more eager evaluation for
        better completion suggestions, including for nested dictionaries,
        nested lists, or even results of function calls.
        Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
        code on :kbd:`Tab` with potentially unwanted or dangerous side effects.

        Allowed values are:

        - ``forbidden``: no evaluation of code is permitted,
        - ``minimal``: evaluation of literals and access to built-in namespace;
          no item/attribute evaluation, no access to locals/globals,
          no evaluation of any operations or comparisons.
        - ``limited``: access to all namespaces, evaluation of hard-coded methods
          (for example: :any:`dict.keys`, :any:`object.__getattr__`,
          :any:`object.__getitem__`) on allow-listed objects (for example:
          :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
        - ``unsafe``: evaluation of all methods and function calls but not of
          syntax with side-effects like ``del x``,
        - ``dangerous``: completely arbitrary evaluation.
        """,
    ).tag(config=True)

    use_jedi = Bool(default_value=JEDI_INSTALLED,
        help="Experimental: Use Jedi to generate autocompletions. "
             "Defaults to True if jedi is installed.").tag(config=True)

    jedi_compute_type_timeout = Int(default_value=400,
        help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
        Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
        performance by preventing jedi from building its cache.
        """).tag(config=True)

    debug = Bool(default_value=False,
        help='Enable debug for the Completer. Mostly print extra '
             'information for experimental jedi integration.')\
        .tag(config=True)

    backslash_combining_completions = Bool(True,
        help="Enable unicode completions, e.g. \\alpha<tab> . "
             "Includes completion of latex commands, unicode names, and expanding "
             "unicode characters back to latex commands.").tag(config=True)

    auto_close_dict_keys = Bool(
        False,
        help="""
        Enable auto-closing dictionary keys.

        When enabled, string keys will be suffixed with a final quote
        (matching the opening quote), tuple keys will also receive a
        separating comma if needed, and keys which are final will
        receive a closing bracket (``]``).
        """,
    ).tag(config=True)

    def __init__(self, namespace=None, global_namespace=None, **kwargs):
        """Create a new completer for the command line.

        Completer(namespace=ns, global_namespace=ns2) -> completer instance.

        If unspecified, the default namespace where completions are performed
        is __main__ (technically, __main__.__dict__). Namespaces should be
        given as dictionaries.

        An optional second namespace can be given. This allows the completer
        to handle cases where both the local and global scopes need to be
        distinguished.
        """

        # Don't bind to namespace quite yet, but flag whether the user wants a
        # specific namespace or to use __main__.__dict__. This will allow us
        # to bind to __main__.__dict__ at completion time, not now.
        if namespace is None:
            self.use_main_ns = True
        else:
            self.use_main_ns = False
            self.namespace = namespace

        # The global namespace, if given, can be bound directly
        if global_namespace is None:
            self.global_namespace = {}
        else:
            self.global_namespace = global_namespace

        self.custom_matchers = []

        super(Completer, self).__init__(**kwargs)

    def complete(self, text, state):
        """Return the next possible completion for 'text'.

        This is called successively with state == 0, 1, 2, ... until it
        returns None. The completion should begin with 'text'.
        """
        if self.use_main_ns:
            self.namespace = __main__.__dict__

        if state == 0:
            if "." in text:
                self.matches = self.attr_matches(text)
            else:
                self.matches = self.global_matches(text)
        try:
            return self.matches[state]
        except IndexError:
            return None

    def global_matches(self, text):
        """Compute matches when text is a simple name.

        Return a list of all keywords, built-in functions and names currently
        defined in self.namespace or self.global_namespace that match.
        """
        matches = []
        match_append = matches.append
        n = len(text)
        for lst in [
            keyword.kwlist,
            builtin_mod.__dict__.keys(),
            list(self.namespace.keys()),
            list(self.global_namespace.keys()),
        ]:
            for word in lst:
                if word[:n] == text and word != "__builtins__":
                    match_append(word)

        snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
        for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
            shortened = {
                "_".join([sub[0] for sub in word.split("_")]): word
                for word in lst
                if snake_case_re.match(word)
            }
            for word in shortened.keys():
                if word[:n] == text and word != "__builtins__":
                    match_append(shortened[word])
        return matches

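The second loop above implements an abbreviation matcher: typing the initials of a snake_case name (e.g. ``d_f``) can complete to the full name. A standalone sketch of that trick (function and sample names here are hypothetical):

```python
import re

# Only names with at least one internal underscore qualify.
snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

def abbrev_matches(text, names):
    # Map "d_f" -> "data_frame" by joining the initial letter of each
    # underscore-separated part, then prefix-match against the typed text.
    shortened = {
        "_".join(sub[0] for sub in word.split("_")): word
        for word in names
        if snake_case_re.match(word)
    }
    n = len(text)
    return [full for short, full in shortened.items() if short[:n] == text]

print(abbrev_matches("d_f", ["data_frame", "loose_var", "other"]))
```

Note that two names with identical initials collide in the ``shortened`` dict, so only one of them survives, which mirrors the behaviour of the code above.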
    def attr_matches(self, text):
        """Compute matches when text contains a dot.

        Assuming the text is of the form NAME.NAME....[NAME], and is
        evaluatable in self.namespace or self.global_namespace, it will be
        evaluated and its attributes (as revealed by dir()) are used as
        possible completions. (For class instances, class members are
        also considered.)

        WARNING: this can still invoke arbitrary C code, if an object
        with a __getattr__ hook is evaluated.
        """
        return self._attr_matches(text)[0]

    # simple attribute matching with normal identifiers
    _ATTR_MATCH_RE = re.compile(r"(.+)\.(\w*)$")

    def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
        m2 = self._ATTR_MATCH_RE.match(self.line_buffer)
        if not m2:
            return [], ""
        expr, attr = m2.group(1, 2)

        obj = self._evaluate_expr(expr)

        if obj is not_found:
            return [], ""

        if self.limit_to__all__ and hasattr(obj, '__all__'):
            words = get__all__entries(obj)
        else:
            words = dir2(obj)

        try:
            words = generics.complete_object(obj, words)
        except TryNext:
            pass
        except AssertionError:
            raise
        except Exception:
            # Silence errors from completion function
            pass
        # Build match list to return
        n = len(attr)

        # Note: ideally we would just return words here and the prefix
        # reconciliator would know that we intend to append to rather than
        # replace the input text; this requires refactoring to return range
        # which ought to be replaced (as does jedi).
        if include_prefix:
            tokens = _parse_tokens(expr)
            rev_tokens = reversed(tokens)
            skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
            name_turn = True

            parts = []
            for token in rev_tokens:
                if token.type in skip_over:
                    continue
                if token.type == tokenize.NAME and name_turn:
                    parts.append(token.string)
                    name_turn = False
                elif (
                    token.type == tokenize.OP and token.string == "." and not name_turn
                ):
                    parts.append(token.string)
                    name_turn = True
                else:
                    # short-circuit if neither an empty nor a name token
                    break

            prefix_after_space = "".join(reversed(parts))
        else:
            prefix_after_space = ""

        return (
            ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
            "." + attr,
        )

    def _trim_expr(self, code: str) -> str:
        """
        Trim the code until it is a valid expression that is not an implicit
        tuple; return the trimmed expression for guarded_eval.
        """
        while code:
            code = code[1:]
            try:
                res = ast.parse(code)
            except SyntaxError:
                continue

            assert res is not None
            if len(res.body) != 1:
                continue
            expr = res.body[0].value
            if isinstance(expr, ast.Tuple) and not code[-1] == ")":
                # skip implicit tuples, as when trimming `fun(a,b`<completion>:
                # `a,b` would parse as a tuple, while we actually want only `b`
                continue
            return code
        return ""

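The progressive-trimming idea above can be shown with a standalone sketch (a simplified copy of the method, not the class-bound original): leading characters are dropped one at a time until the remainder parses as a single non-tuple expression, which is what makes completion work inside open calls and brackets.

```python
import ast

def trim_expr(code: str) -> str:
    # Drop leading characters until the remainder parses as a single
    # expression that is not an implicit tuple (as in `fun(a, b`).
    while code:
        code = code[1:]
        try:
            res = ast.parse(code)
        except SyntaxError:
            continue
        if len(res.body) != 1:
            continue
        expr = res.body[0].value
        if isinstance(expr, ast.Tuple) and code[-1] != ")":
            # `a, b` parses as a tuple; keep trimming to reach `b`.
            continue
        return code
    return ""

print(trim_expr("fun(a, b"))  # -> "b"
print(trim_expr("(d"))        # -> "d"
```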
    def _evaluate_expr(self, expr):
        obj = not_found
        done = False
        while not done and expr:
            try:
                obj = guarded_eval(
                    expr,
                    EvaluationContext(
                        globals=self.global_namespace,
                        locals=self.namespace,
                        evaluation=self.evaluation,
                    ),
                )
                done = True
            except Exception as e:
                if self.debug:
                    print("Evaluation exception", e)
                # trim the expression to remove any invalid prefix
                # e.g. user starts `(d[`, so we get `expr = '(d'`,
                # where parenthesis is not closed.
                # TODO: make this faster by reusing parts of the computation?
                expr = self._trim_expr(expr)
        return obj

def get__all__entries(obj):
    """Return the strings in the __all__ attribute."""
    try:
        words = getattr(obj, '__all__')
    except Exception:
        return []

    return [w for w in words if isinstance(w, str)]


class _DictKeyState(enum.Flag):
    """Represent state of the key match in context of other possible matches.

    - given `d1 = {'a': 1}`, completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
    - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
    - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
    - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM & END_OF_TUPLE}`
    """

    BASELINE = 0
    END_OF_ITEM = enum.auto()
    END_OF_TUPLE = enum.auto()
    IN_TUPLE = enum.auto()


def _parse_tokens(c):
    """Parse tokens even if there is an error."""
    tokens = []
    token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(token_generator))
        except tokenize.TokenError:
            return tokens
        except StopIteration:
            return tokens

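Tolerant tokenization matters here because completion input is usually incomplete (an unclosed bracket at the prompt). A standalone sketch of the same pattern shows that the tokens produced before the error are still recovered:

```python
import tokenize

def parse_tokens(code):
    # Collect tokens even when the input is incomplete; an unclosed
    # bracket makes tokenize raise TokenError at end of input, but the
    # tokens yielded before that point are kept.
    tokens = []
    gen = tokenize.generate_tokens(iter(code.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(gen))
        except (tokenize.TokenError, StopIteration):
            return tokens

toks = parse_tokens("d[1, ")  # unclosed subscript
print([t.string for t in toks if t.string.strip()])
```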
def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
    """Match any valid Python numeric literal in a prefix of dictionary keys.

    References:
    - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
    - https://docs.python.org/3/library/tokenize.html
    """
    if prefix[-1].isspace():
        # if user typed a space we do not have anything to complete
        # even if there was a valid number token before
        return None
    tokens = _parse_tokens(prefix)
    rev_tokens = reversed(tokens)
    skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
    number = None
    for token in rev_tokens:
        if token.type in skip_over:
            continue
        if number is None:
            if token.type == tokenize.NUMBER:
                number = token.string
                continue
            else:
                # we did not match a number
                return None
        if token.type == tokenize.OP:
            if token.string == ",":
                break
            if token.string in {"+", "-"}:
                number = token.string + number
            else:
                return None
    return number


_INT_FORMATS = {
    "0b": bin,
    "0o": oct,
    "0x": hex,
}

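This table lets integer keys be rendered back in whatever base the user started typing (`0b`/`0o`/`0x`), falling back to decimal. A small sketch of that lookup (the helper name is hypothetical):

```python
# Mirror of the _INT_FORMATS lookup: render an integer key in the
# notation suggested by the typed prefix.
_INT_FORMATS = {"0b": bin, "0o": oct, "0x": hex}

def render_int_key(key: int, typed_prefix: str) -> str:
    base = typed_prefix[:2].lower()
    fmt = _INT_FORMATS.get(base, str)  # default: plain decimal repr
    return fmt(key)

print(render_int_key(255, "0x"))  # -> "0xff"
print(render_int_key(255, "2"))   # -> "255"
```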
1311 def match_dict_keys(
1340 def match_dict_keys(
1312 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1341 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1313 prefix: str,
1342 prefix: str,
1314 delims: str,
1343 delims: str,
1315 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1344 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1316 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1345 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1317 """Used by dict_key_matches, matching the prefix to a list of keys
1346 """Used by dict_key_matches, matching the prefix to a list of keys
1318
1347
1319 Parameters
1348 Parameters
1320 ----------
1349 ----------
1321 keys
1350 keys
1322 list of keys in dictionary currently being completed.
1351 list of keys in dictionary currently being completed.
1323 prefix
1352 prefix
1324 Part of the text already typed by the user. E.g. `mydict[b'fo`
1353 Part of the text already typed by the user. E.g. `mydict[b'fo`
1325 delims
1354 delims
1326 String of delimiters to consider when finding the current key.
1355 String of delimiters to consider when finding the current key.
1327 extra_prefix : optional
1356 extra_prefix : optional
1328 Part of the text already typed in multi-key index cases. E.g. for
1357 Part of the text already typed in multi-key index cases. E.g. for
1329 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1358 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1330
1359
1331 Returns
1360 Returns
1332 -------
1361 -------
1333 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1362 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1334 ``quote`` being the quote that need to be used to close current string.
1363 ``quote`` being the quote that need to be used to close current string.
1335 ``token_start`` the position where the replacement should start occurring,
1364 ``token_start`` the position where the replacement should start occurring,
1336 ``matches`` a dictionary of replacement/completion keys on keys and values
1365 ``matches`` a dictionary of replacement/completion keys on keys and values
1337 indicating whether the state.
1366 indicating whether the state.
1338 """
1367 """
1339 prefix_tuple = extra_prefix if extra_prefix else ()
1368 prefix_tuple = extra_prefix if extra_prefix else ()
1340
1369
1341 prefix_tuple_size = sum(
1370 prefix_tuple_size = sum(
1342 [
1371 [
1343 # for pandas, do not count slices as taking space
1372 # for pandas, do not count slices as taking space
1344 not isinstance(k, slice)
1373 not isinstance(k, slice)
1345 for k in prefix_tuple
1374 for k in prefix_tuple
1346 ]
1375 ]
1347 )
1376 )
1348 text_serializable_types = (str, bytes, int, float, slice)
1377 text_serializable_types = (str, bytes, int, float, slice)
1349
1378
1350 def filter_prefix_tuple(key):
1379 def filter_prefix_tuple(key):
1351 # Reject too short keys
1380 # Reject too short keys
1352 if len(key) <= prefix_tuple_size:
1381 if len(key) <= prefix_tuple_size:
1353 return False
1382 return False
1354 # Reject keys which cannot be serialised to text
1383 # Reject keys which cannot be serialised to text
1355 for k in key:
1384 for k in key:
1356 if not isinstance(k, text_serializable_types):
1385 if not isinstance(k, text_serializable_types):
1357 return False
1386 return False
1358 # Reject keys that do not match the prefix
1387 # Reject keys that do not match the prefix
1359 for k, pt in zip(key, prefix_tuple):
1388 for k, pt in zip(key, prefix_tuple):
1360 if k != pt and not isinstance(pt, slice):
1389 if k != pt and not isinstance(pt, slice):
1361 return False
1390 return False
1362 # All checks passed!
1391 # All checks passed!
1363 return True
1392 return True
1364
1393
1365 filtered_key_is_final: Dict[Union[str, bytes, int, float], _DictKeyState] = (
1394 filtered_key_is_final: Dict[Union[str, bytes, int, float], _DictKeyState] = (
1366 defaultdict(lambda: _DictKeyState.BASELINE)
1395 defaultdict(lambda: _DictKeyState.BASELINE)
1367 )
1396 )
1368
1397
1369 for k in keys:
1398 for k in keys:
1370 # If at least one of the matches is not final, mark as undetermined.
1399 # If at least one of the matches is not final, mark as undetermined.
1371 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1400 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1372 # `111` appears final on first match but is not final on the second.
1401 # `111` appears final on first match but is not final on the second.
1373
1402
1374 if isinstance(k, tuple):
1403 if isinstance(k, tuple):
1375 if filter_prefix_tuple(k):
1404 if filter_prefix_tuple(k):
1376 key_fragment = k[prefix_tuple_size]
1405 key_fragment = k[prefix_tuple_size]
1377 filtered_key_is_final[key_fragment] |= (
1406 filtered_key_is_final[key_fragment] |= (
1378 _DictKeyState.END_OF_TUPLE
1407 _DictKeyState.END_OF_TUPLE
1379 if len(k) == prefix_tuple_size + 1
1408 if len(k) == prefix_tuple_size + 1
1380 else _DictKeyState.IN_TUPLE
1409 else _DictKeyState.IN_TUPLE
1381 )
1410 )
1382 elif prefix_tuple_size > 0:
1411 elif prefix_tuple_size > 0:
1383 # we are completing a tuple but this key is not a tuple,
1412 # we are completing a tuple but this key is not a tuple,
1384 # so we should ignore it
1413 # so we should ignore it
1385 pass
1414 pass
1386 else:
1415 else:
1387 if isinstance(k, text_serializable_types):
1416 if isinstance(k, text_serializable_types):
1388 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1417 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1389
1418
1390 filtered_keys = filtered_key_is_final.keys()
1419 filtered_keys = filtered_key_is_final.keys()
1391
1420
1392 if not prefix:
1421 if not prefix:
1393 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1422 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1394
1423
1395 quote_match = re.search("(?:\"|')", prefix)
1424 quote_match = re.search("(?:\"|')", prefix)
1396 is_user_prefix_numeric = False
1425 is_user_prefix_numeric = False
1397
1426
1398 if quote_match:
1427 if quote_match:
1399 quote = quote_match.group()
1428 quote = quote_match.group()
1400 valid_prefix = prefix + quote
1429 valid_prefix = prefix + quote
1401 try:
1430 try:
1402 prefix_str = literal_eval(valid_prefix)
1431 prefix_str = literal_eval(valid_prefix)
1403 except Exception:
1432 except Exception:
1404 return "", 0, {}
1433 return "", 0, {}
1405 else:
1434 else:
1406 # If it does not look like a string, let's assume
1435 # If it does not look like a string, let's assume
1407 # we are dealing with a number or variable.
1436 # we are dealing with a number or variable.
1408 number_match = _match_number_in_dict_key_prefix(prefix)
1437 number_match = _match_number_in_dict_key_prefix(prefix)
1409
1438
1410 # We do not want the key matcher to suggest variable names so we yield:
1439 # We do not want the key matcher to suggest variable names so we yield:
1411 if number_match is None:
1440 if number_match is None:
1412 # The alternative would be to assume that user forgort the quote
1441 # The alternative would be to assume that user forgort the quote
1413 # and if the substring matches, suggest adding it at the start.
1442 # and if the substring matches, suggest adding it at the start.
1414 return "", 0, {}
1443 return "", 0, {}
1415
1444
1416 prefix_str = number_match
1445 prefix_str = number_match
1417 is_user_prefix_numeric = True
1446 is_user_prefix_numeric = True
1418 quote = ""
1447 quote = ""
1419
1448
1420 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1449 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1421 token_match = re.search(pattern, prefix, re.UNICODE)
1450 token_match = re.search(pattern, prefix, re.UNICODE)
1422 assert token_match is not None # silence mypy
1451 assert token_match is not None # silence mypy
1423 token_start = token_match.start()
1452 token_start = token_match.start()
1424 token_prefix = token_match.group()
1453 token_prefix = token_match.group()
1425
1454
1426 matched: Dict[str, _DictKeyState] = {}
1455 matched: Dict[str, _DictKeyState] = {}
1427
1456
1428 str_key: Union[str, bytes]
1457 str_key: Union[str, bytes]
1429
1458
1430 for key in filtered_keys:
1459 for key in filtered_keys:
1431 if isinstance(key, (int, float)):
1460 if isinstance(key, (int, float)):
1432 # User typed a number but this key is not a number.
1461 # User typed a number but this key is not a number.
1433 if not is_user_prefix_numeric:
1462 if not is_user_prefix_numeric:
1434 continue
1463 continue
1435 str_key = str(key)
1464 str_key = str(key)
1436 if isinstance(key, int):
1465 if isinstance(key, int):
1437 int_base = prefix_str[:2].lower()
1466 int_base = prefix_str[:2].lower()
1438 # if user typed integer using binary/oct/hex notation:
1467 # if user typed integer using binary/oct/hex notation:
1439 if int_base in _INT_FORMATS:
1468 if int_base in _INT_FORMATS:
1440 int_format = _INT_FORMATS[int_base]
1469 int_format = _INT_FORMATS[int_base]
1441 str_key = int_format(key)
1470 str_key = int_format(key)
1442 else:
1471 else:
1443 # User typed a string but this key is a number.
1472 # User typed a string but this key is a number.
1444 if is_user_prefix_numeric:
1473 if is_user_prefix_numeric:
1445 continue
1474 continue
1446 str_key = key
1475 str_key = key
1447 try:
1476 try:
1448 if not str_key.startswith(prefix_str):
1477 if not str_key.startswith(prefix_str):
1449 continue
1478 continue
1450 except (AttributeError, TypeError, UnicodeError) as e:
1479 except (AttributeError, TypeError, UnicodeError):
1451 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1480 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1452 continue
1481 continue
1453
1482
1454 # reformat remainder of key to begin with prefix
1483 # reformat remainder of key to begin with prefix
1455 rem = str_key[len(prefix_str) :]
1484 rem = str_key[len(prefix_str) :]
1456 # force repr wrapped in '
1485 # force repr wrapped in '
1457 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1486 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1458 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1487 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1459 if quote == '"':
1488 if quote == '"':
1460 # The entered prefix is quoted with ",
1489 # The entered prefix is quoted with ",
1461 # but the match is quoted with '.
1490 # but the match is quoted with '.
1462 # A contained " hence needs escaping for comparison:
1491 # A contained " hence needs escaping for comparison:
1463 rem_repr = rem_repr.replace('"', '\\"')
1492 rem_repr = rem_repr.replace('"', '\\"')
1464
1493
1465 # then reinsert prefix from start of token
1494 # then reinsert prefix from start of token
1466 match = "%s%s" % (token_prefix, rem_repr)
1495 match = "%s%s" % (token_prefix, rem_repr)
1467
1496
1468 matched[match] = filtered_key_is_final[key]
1497 matched[match] = filtered_key_is_final[key]
1469 return quote, token_start, matched
1498 return quote, token_start, matched
1470
1499
1471
1500
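The `repr`-based quote normalisation used above is subtle; a standalone sketch (plain Python, not IPython's API) shows why the `"` sentinel is appended before calling `repr`:

```python
# Appending '"' forces repr() to wrap the result in single quotes even
# when the string contains a single quote, so escaping is consistent.
rem = "it's"
rem_repr = repr(rem + '"')
# strip the surrounding quotes and the '"' sentinel, keeping escapes:
rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
assert rem_repr == "it\\'s"  # safe to re-insert inside a '-quoted key
```

Without the sentinel, `repr("it's")` would pick double quotes and produce no escape at all, so the remainder could not be spliced back into a single-quoted prefix.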
1472 def cursor_to_position(text:str, line:int, column:int)->int:
1501 def cursor_to_position(text:str, line:int, column:int)->int:
1473 """
1502 """
1474 Convert the (line,column) position of the cursor in text to an offset in a
1503 Convert the (line,column) position of the cursor in text to an offset in a
1475 string.
1504 string.
1476
1505
1477 Parameters
1506 Parameters
1478 ----------
1507 ----------
1479 text : str
1508 text : str
1480 The text in which to calculate the cursor offset
1509 The text in which to calculate the cursor offset
1481 line : int
1510 line : int
1482 Line of the cursor; 0-indexed
1511 Line of the cursor; 0-indexed
1483 column : int
1512 column : int
1484 Column of the cursor; 0-indexed
1513 Column of the cursor; 0-indexed
1485
1514
1486 Returns
1515 Returns
1487 -------
1516 -------
1488 Position of the cursor in ``text``, 0-indexed.
1517 Position of the cursor in ``text``, 0-indexed.
1489
1518
1490 See Also
1519 See Also
1491 --------
1520 --------
1492 position_to_cursor : reciprocal of this function
1521 position_to_cursor : reciprocal of this function
1493
1522
1494 """
1523 """
1495 lines = text.split('\n')
1524 lines = text.split('\n')
1496 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1525 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1497
1526
1498 return sum(len(l) + 1 for l in lines[:line]) + column
1527 return sum(len(line) + 1 for line in lines[:line]) + column
1499
1528
1500 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1529 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1501 """
1530 """
1502 Convert the position of the cursor in text (0-indexed) to a line
1531 Convert the position of the cursor in text (0-indexed) to a line
1503 number (0-indexed) and a column number (0-indexed) pair
1532 number (0-indexed) and a column number (0-indexed) pair
1504
1533
1505 Position should be a valid position in ``text``.
1534 Position should be a valid position in ``text``.
1506
1535
1507 Parameters
1536 Parameters
1508 ----------
1537 ----------
1509 text : str
1538 text : str
1510 The text in which to calculate the cursor offset
1539 The text in which to calculate the cursor offset
1511 offset : int
1540 offset : int
1512 Position of the cursor in ``text``, 0-indexed.
1541 Position of the cursor in ``text``, 0-indexed.
1513
1542
1514 Returns
1543 Returns
1515 -------
1544 -------
1516 (line, column) : (int, int)
1545 (line, column) : (int, int)
1517 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1546 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1518
1547
1519 See Also
1548 See Also
1520 --------
1549 --------
1521 cursor_to_position : reciprocal of this function
1550 cursor_to_position : reciprocal of this function
1522
1551
1523 """
1552 """
1524
1553
1525 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1554 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1526
1555
1527 before = text[:offset]
1556 before = text[:offset]
1528 blines = before.split('\n') # NB: splitlines would trim a trailing \n
1557 blines = before.split('\n') # NB: splitlines would trim a trailing \n
1529 line = before.count('\n')
1558 line = before.count('\n')
1530 col = len(blines[-1])
1559 col = len(blines[-1])
1531 return line, col
1560 return line, col
1532
1561
1533
1562
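These two helpers are inverses of each other; a minimal self-contained sketch (reimplemented here, not imported from IPython) shows the round trip:

```python
def cursor_to_position(text: str, line: int, column: int) -> int:
    # each earlier line contributes its length plus one for the '\n'
    lines = text.split('\n')
    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text: str, offset: int):
    # count newlines before the offset; the column is the tail length
    before = text[:offset]
    return before.count('\n'), len(before.split('\n')[-1])

text = "def f():\n    pass"
offset = cursor_to_position(text, 1, 4)   # cursor just before "pass"
assert text[offset:offset + 4] == "pass"
assert position_to_cursor(text, offset) == (1, 4)
```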
1534 def _safe_isinstance(obj, module, class_name, *attrs):
1563 def _safe_isinstance(obj, module, class_name, *attrs):
1535 """Checks if obj is an instance of module.class_name if loaded
1564 """Checks if obj is an instance of module.class_name if loaded
1536 """
1565 """
1537 if module in sys.modules:
1566 if module in sys.modules:
1538 m = sys.modules[module]
1567 m = sys.modules[module]
1539 for attr in [class_name, *attrs]:
1568 for attr in [class_name, *attrs]:
1540 m = getattr(m, attr)
1569 m = getattr(m, attr)
1541 return isinstance(obj, m)
1570 return isinstance(obj, m)
1542
1571
1543
1572
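The point of this helper is to type-check against a class without importing its module; note that it implicitly returns `None` (falsy) when the module was never imported. A self-contained sketch:

```python
import sys

def _safe_isinstance(obj, module, class_name, *attrs):
    # Only look the class up if the module is already imported;
    # otherwise fall through and return None (falsy).
    if module in sys.modules:
        m = sys.modules[module]
        for attr in [class_name, *attrs]:
            m = getattr(m, attr)
        return isinstance(obj, m)

assert _safe_isinstance("x", "builtins", "str") is True
# a heavy library need not be importable for this check to stay cheap:
assert _safe_isinstance("x", "some_unimported_module", "Foo") is None
```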
1544 @context_matcher()
1573 @context_matcher()
1545 def back_unicode_name_matcher(context: CompletionContext):
1574 def back_unicode_name_matcher(context: CompletionContext):
1546 """Match Unicode characters back to Unicode name
1575 """Match Unicode characters back to Unicode name
1547
1576
1548 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1577 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1549 """
1578 """
1550 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1579 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1551 return _convert_matcher_v1_result_to_v2(
1580 return _convert_matcher_v1_result_to_v2(
1552 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1581 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1553 )
1582 )
1554
1583
1555
1584
1556 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1585 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1557 """Match Unicode characters back to Unicode name
1586 """Match Unicode characters back to Unicode name
1558
1587
1559 This does ``β˜ƒ`` -> ``\\snowman``
1588 This does ``β˜ƒ`` -> ``\\snowman``
1560
1589
1561 Note that snowman is not a valid Python 3 combining character but will be expanded.
1590 Note that snowman is not a valid Python 3 combining character but will be expanded.
1562 It will not, however, be recombined back into the snowman character by the completion machinery.
1591 It will not, however, be recombined back into the snowman character by the completion machinery.
1563
1592
1564 Nor will this back-complete standard escape sequences like \\n, \\b, ...
1593 Nor will this back-complete standard escape sequences like \\n, \\b, ...
1565
1594
1566 .. deprecated:: 8.6
1595 .. deprecated:: 8.6
1567 You can use :meth:`back_unicode_name_matcher` instead.
1596 You can use :meth:`back_unicode_name_matcher` instead.
1568
1597
1569 Returns
1598 Returns
1570 -------
1599 -------
1571
1600
1572 Return a tuple with two elements:
1601 Return a tuple with two elements:
1573
1602
1574 - The Unicode character that was matched (preceded with a backslash), or
1603 - The Unicode character that was matched (preceded with a backslash), or
1575 empty string,
1604 empty string,
1576 - a sequence of one element: the name of the matched Unicode character,
1605 - a sequence of one element: the name of the matched Unicode character,
1577 preceded by a backslash, or empty if there is no match.
1606 preceded by a backslash, or empty if there is no match.
1578 """
1607 """
1579 if len(text)<2:
1608 if len(text)<2:
1580 return '', ()
1609 return '', ()
1581 maybe_slash = text[-2]
1610 maybe_slash = text[-2]
1582 if maybe_slash != '\\':
1611 if maybe_slash != '\\':
1583 return '', ()
1612 return '', ()
1584
1613
1585 char = text[-1]
1614 char = text[-1]
1586 # no expand on quote for completion in strings.
1615 # no expand on quote for completion in strings.
1587 # nor backcomplete standard ascii keys
1616 # nor backcomplete standard ascii keys
1588 if char in string.ascii_letters or char in ('"',"'"):
1617 if char in string.ascii_letters or char in ('"',"'"):
1589 return '', ()
1618 return '', ()
1590 try :
1619 try :
1591 unic = unicodedata.name(char)
1620 unic = unicodedata.name(char)
1592 return '\\'+char,('\\'+unic,)
1621 return '\\'+char,('\\'+unic,)
1593 except KeyError:
1622 except KeyError:
1594 pass
1623 pass
1595 return '', ()
1624 return '', ()
1596
1625
1597
1626
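The lookup itself is just `unicodedata.name`; a simplified, self-contained sketch of the matching logic above (note that `unicodedata.name` raises `ValueError` for characters without a name):

```python
import string
import unicodedata

def back_unicode_name_matches(text):
    # simplified sketch mirroring the function above
    if len(text) < 2 or text[-2] != '\\':
        return '', ()
    char = text[-1]
    # no expansion for plain ascii letters or quote characters
    if char in string.ascii_letters or char in ('"', "'"):
        return '', ()
    try:
        return '\\' + char, ('\\' + unicodedata.name(char),)
    except ValueError:  # character has no Unicode name
        return '', ()

assert back_unicode_name_matches('x = "\\☃') == ('\\☃', ('\\SNOWMAN',))
assert back_unicode_name_matches('\\n') == ('', ())
```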
1598 @context_matcher()
1627 @context_matcher()
1599 def back_latex_name_matcher(context: CompletionContext):
1628 def back_latex_name_matcher(context: CompletionContext):
1600 """Match latex characters back to unicode name
1629 """Match latex characters back to unicode name
1601
1630
1602 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1631 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1603 """
1632 """
1604 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1633 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1605 return _convert_matcher_v1_result_to_v2(
1634 return _convert_matcher_v1_result_to_v2(
1606 matches, type="latex", fragment=fragment, suppress_if_matches=True
1635 matches, type="latex", fragment=fragment, suppress_if_matches=True
1607 )
1636 )
1608
1637
1609
1638
1610 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1639 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1611 """Match latex characters back to unicode name
1640 """Match latex characters back to unicode name
1612
1641
1613 This does ``\\β„΅`` -> ``\\aleph``
1642 This does ``\\β„΅`` -> ``\\aleph``
1614
1643
1615 .. deprecated:: 8.6
1644 .. deprecated:: 8.6
1616 You can use :meth:`back_latex_name_matcher` instead.
1645 You can use :meth:`back_latex_name_matcher` instead.
1617 """
1646 """
1618 if len(text)<2:
1647 if len(text)<2:
1619 return '', ()
1648 return '', ()
1620 maybe_slash = text[-2]
1649 maybe_slash = text[-2]
1621 if maybe_slash != '\\':
1650 if maybe_slash != '\\':
1622 return '', ()
1651 return '', ()
1623
1652
1624
1653
1625 char = text[-1]
1654 char = text[-1]
1626 # no expand on quote for completion in strings.
1655 # no expand on quote for completion in strings.
1627 # nor backcomplete standard ascii keys
1656 # nor backcomplete standard ascii keys
1628 if char in string.ascii_letters or char in ('"',"'"):
1657 if char in string.ascii_letters or char in ('"',"'"):
1629 return '', ()
1658 return '', ()
1630 try :
1659 try :
1631 latex = reverse_latex_symbol[char]
1660 latex = reverse_latex_symbol[char]
1632 # the leading '\\' ensures the typed backslash is replaced as well
1661 # the leading '\\' ensures the typed backslash is replaced as well
1633 return '\\'+char,[latex]
1662 return '\\'+char,[latex]
1634 except KeyError:
1663 except KeyError:
1635 pass
1664 pass
1636 return '', ()
1665 return '', ()
1637
1666
1638
1667
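`reverse_latex_symbol` is IPython's internal unicode-to-latex table; with a one-entry stand-in (a sketch, not the real table) the mechanics look like this:

```python
# toy stand-in for IPython's reverse_latex_symbol mapping
reverse_latex_symbol = {'\u2135': '\\aleph'}   # ℵ -> \aleph

def back_latex_name_matches(text):
    # simplified sketch mirroring the function above
    if len(text) < 2 or text[-2] != '\\':
        return '', ()
    char = text[-1]
    latex = reverse_latex_symbol.get(char)
    # the returned fragment includes the backslash so it is replaced too
    return ('\\' + char, [latex]) if latex else ('', ())

assert back_latex_name_matches('\\\u2135') == ('\\\u2135', ['\\aleph'])
```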
1639 def _formatparamchildren(parameter) -> str:
1668 def _formatparamchildren(parameter) -> str:
1640 """
1669 """
1641 Get parameter name and value from Jedi Private API
1670 Get parameter name and value from Jedi Private API
1642
1671
1643 Jedi does not expose a simple way to get `param=value` from its API.
1672 Jedi does not expose a simple way to get `param=value` from its API.
1644
1673
1645 Parameters
1674 Parameters
1646 ----------
1675 ----------
1647 parameter
1676 parameter
1648 Jedi's function `Param`
1677 Jedi's function `Param`
1649
1678
1650 Returns
1679 Returns
1651 -------
1680 -------
1652 A string like 'a', 'b=1', '*args', '**kwargs'
1681 A string like 'a', 'b=1', '*args', '**kwargs'
1653
1682
1654 """
1683 """
1655 description = parameter.description
1684 description = parameter.description
1656 if not description.startswith('param '):
1685 if not description.startswith('param '):
1657 raise ValueError('Jedi function parameter description has changed format. '
1686 raise ValueError('Jedi function parameter description has changed format. '
1658 'Expected "param ...", found %r.' % description)
1687 'Expected "param ...", found %r.' % description)
1659 return description[6:]
1688 return description[6:]
1660
1689
1661 def _make_signature(completion)-> str:
1690 def _make_signature(completion)-> str:
1662 """
1691 """
1663 Make the signature from a jedi completion
1692 Make the signature from a jedi completion
1664
1693
1665 Parameters
1694 Parameters
1666 ----------
1695 ----------
1667 completion : jedi.Completion
1696 completion : jedi.Completion
1668 a Jedi completion object; if it does not complete a function type, ``(?)`` is returned
1697 a Jedi completion object; if it does not complete a function type, ``(?)`` is returned
1669
1698
1670 Returns
1699 Returns
1671 -------
1700 -------
1672 a string consisting of the function signature, with the parenthesis but
1701 a string consisting of the function signature, with the parenthesis but
1673 without the function name. example:
1702 without the function name. example:
1674 `(a, *args, b=1, **kwargs)`
1703 `(a, *args, b=1, **kwargs)`
1675
1704
1676 """
1705 """
1677
1706
1678 # it looks like this might work on jedi 0.17
1707 # it looks like this might work on jedi 0.17
1679 if hasattr(completion, 'get_signatures'):
1708 if hasattr(completion, 'get_signatures'):
1680 signatures = completion.get_signatures()
1709 signatures = completion.get_signatures()
1681 if not signatures:
1710 if not signatures:
1682 return '(?)'
1711 return '(?)'
1683
1712
1684 c0 = signatures[0]
1713 c0 = signatures[0]
1685 return '('+c0.to_string().split('(', maxsplit=1)[1]
1714 return '('+c0.to_string().split('(', maxsplit=1)[1]
1686
1715
1687 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1716 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1688 for p in signature.defined_names()) if f])
1717 for p in signature.defined_names()) if f])
1689
1718
1690
1719
1691 _CompleteResult = Dict[str, MatcherResult]
1720 _CompleteResult = Dict[str, MatcherResult]
1692
1721
1693
1722
1694 DICT_MATCHER_REGEX = re.compile(
1723 DICT_MATCHER_REGEX = re.compile(
1695 r"""(?x)
1724 r"""(?x)
1696 ( # match dict-referring - or any get item object - expression
1725 ( # match dict-referring - or any get item object - expression
1697 .+
1726 .+
1698 )
1727 )
1699 \[ # open bracket
1728 \[ # open bracket
1700 \s* # and optional whitespace
1729 \s* # and optional whitespace
1701 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1730 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1702 # and slices
1731 # and slices
1703 ((?:(?:
1732 ((?:(?:
1704 (?: # closed string
1733 (?: # closed string
1705 [uUbB]? # string prefix (r not handled)
1734 [uUbB]? # string prefix (r not handled)
1706 (?:
1735 (?:
1707 '(?:[^']|(?<!\\)\\')*'
1736 '(?:[^']|(?<!\\)\\')*'
1708 |
1737 |
1709 "(?:[^"]|(?<!\\)\\")*"
1738 "(?:[^"]|(?<!\\)\\")*"
1710 )
1739 )
1711 )
1740 )
1712 |
1741 |
1713 # capture integers and slices
1742 # capture integers and slices
1714 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1743 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1715 |
1744 |
1716 # integer in bin/hex/oct notation
1745 # integer in bin/hex/oct notation
1717 0[bBxXoO]_?(?:\w|\d)+
1746 0[bBxXoO]_?(?:\w|\d)+
1718 )
1747 )
1719 \s*,\s*
1748 \s*,\s*
1720 )*)
1749 )*)
1721 ((?:
1750 ((?:
1722 (?: # unclosed string
1751 (?: # unclosed string
1723 [uUbB]? # string prefix (r not handled)
1752 [uUbB]? # string prefix (r not handled)
1724 (?:
1753 (?:
1725 '(?:[^']|(?<!\\)\\')*
1754 '(?:[^']|(?<!\\)\\')*
1726 |
1755 |
1727 "(?:[^"]|(?<!\\)\\")*
1756 "(?:[^"]|(?<!\\)\\")*
1728 )
1757 )
1729 )
1758 )
1730 |
1759 |
1731 # unfinished integer
1760 # unfinished integer
1732 (?:[-+]?\d+)
1761 (?:[-+]?\d+)
1733 |
1762 |
1734 # integer in bin/hex/oct notation
1763 # integer in bin/hex/oct notation
1735 0[bBxXoO]_?(?:\w|\d)+
1764 0[bBxXoO]_?(?:\w|\d)+
1736 )
1765 )
1737 )?
1766 )?
1738 $
1767 $
1739 """
1768 """
1740 )
1769 )
1741
1770
1742
1771
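A much-simplified version of this pattern shows the three capture groups at work: the subscripted expression, any already-closed keys, and the trailing unfinished key (assumption: single-quoted string keys only, unlike the full regex above):

```python
import re

# groups: (expression)(closed keys, comma separated)(unclosed key prefix)
pat = re.compile(r"(.+)\[\s*((?:'[^']*'\s*,\s*)*)('[^']*)?$")

m = pat.match("data['a', 'b")
assert m is not None
assert m.group(1) == "data"
assert m.group(2) == "'a', "
assert m.group(3) == "'b"
```

Anchoring at `$` is what lets the matcher work on an incomplete line: everything after the last comma is treated as the prefix still being typed.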
1743 def _convert_matcher_v1_result_to_v2(
1772 def _convert_matcher_v1_result_to_v2(
1744 matches: Sequence[str],
1773 matches: Sequence[str],
1745 type: str,
1774 type: str,
1746 fragment: Optional[str] = None,
1775 fragment: Optional[str] = None,
1747 suppress_if_matches: bool = False,
1776 suppress_if_matches: bool = False,
1748 ) -> SimpleMatcherResult:
1777 ) -> SimpleMatcherResult:
1749 """Utility to help with transition"""
1778 """Utility to help with transition"""
1750 result = {
1779 result = {
1751 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1780 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1752 "suppress": (True if matches else False) if suppress_if_matches else False,
1781 "suppress": (True if matches else False) if suppress_if_matches else False,
1753 }
1782 }
1754 if fragment is not None:
1783 if fragment is not None:
1755 result["matched_fragment"] = fragment
1784 result["matched_fragment"] = fragment
1756 return cast(SimpleMatcherResult, result)
1785 return cast(SimpleMatcherResult, result)
1757
1786
1758
1787
1759 class IPCompleter(Completer):
1788 class IPCompleter(Completer):
1760 """Extension of the completer class with IPython-specific features"""
1789 """Extension of the completer class with IPython-specific features"""
1761
1790
1762 @observe('greedy')
1791 @observe('greedy')
1763 def _greedy_changed(self, change):
1792 def _greedy_changed(self, change):
1764 """update the splitter and readline delims when greedy is changed"""
1793 """update the splitter and readline delims when greedy is changed"""
1765 if change["new"]:
1794 if change["new"]:
1766 self.evaluation = "unsafe"
1795 self.evaluation = "unsafe"
1767 self.auto_close_dict_keys = True
1796 self.auto_close_dict_keys = True
1768 self.splitter.delims = GREEDY_DELIMS
1797 self.splitter.delims = GREEDY_DELIMS
1769 else:
1798 else:
1770 self.evaluation = "limited"
1799 self.evaluation = "limited"
1771 self.auto_close_dict_keys = False
1800 self.auto_close_dict_keys = False
1772 self.splitter.delims = DELIMS
1801 self.splitter.delims = DELIMS
1773
1802
1774 dict_keys_only = Bool(
1803 dict_keys_only = Bool(
1775 False,
1804 False,
1776 help="""
1805 help="""
1777 Whether to show dict key matches only.
1806 Whether to show dict key matches only.
1778
1807
1779 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1808 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1780 """,
1809 """,
1781 )
1810 )
1782
1811
1783 suppress_competing_matchers = UnionTrait(
1812 suppress_competing_matchers = UnionTrait(
1784 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1813 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1785 default_value=None,
1814 default_value=None,
1786 help="""
1815 help="""
1787 Whether to suppress completions from other *Matchers*.
1816 Whether to suppress completions from other *Matchers*.
1788
1817
1789 When set to ``None`` (default) the matchers will attempt to auto-detect
1818 When set to ``None`` (default) the matchers will attempt to auto-detect
1790 whether suppression of other matchers is desirable. For example, at
1819 whether suppression of other matchers is desirable. For example, at
1791 the beginning of a line followed by `%` we expect a magic completion
1820 the beginning of a line followed by `%` we expect a magic completion
1792 to be the only applicable option, and after ``my_dict['`` we usually
1821 to be the only applicable option, and after ``my_dict['`` we usually
1793 expect a completion with an existing dictionary key.
1822 expect a completion with an existing dictionary key.
1794
1823
1795 If you want to disable this heuristic and see completions from all matchers,
1824 If you want to disable this heuristic and see completions from all matchers,
1796 set ``IPCompleter.suppress_competing_matchers = False``.
1825 set ``IPCompleter.suppress_competing_matchers = False``.
1797 To disable the heuristic for specific matchers provide a dictionary mapping:
1826 To disable the heuristic for specific matchers provide a dictionary mapping:
1798 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1827 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1799
1828
1800 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1829 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1801 completions to the set of matchers with the highest priority;
1830 completions to the set of matchers with the highest priority;
1802 this is equivalent to ``IPCompleter.merge_completions`` and
1831 this is equivalent to ``IPCompleter.merge_completions`` and
1803 can be beneficial for performance, but will sometimes omit relevant
1832 can be beneficial for performance, but will sometimes omit relevant
1804 candidates from matchers further down the priority list.
1833 candidates from matchers further down the priority list.
1805 """,
1834 """,
1806 ).tag(config=True)
1835 ).tag(config=True)
1807
1836
1808 merge_completions = Bool(
1837 merge_completions = Bool(
1809 True,
1838 True,
1810 help="""Whether to merge completion results into a single list
1839 help="""Whether to merge completion results into a single list
1811
1840
1812 If False, only the completion results from the first non-empty
1841 If False, only the completion results from the first non-empty
1813 completer will be returned.
1842 completer will be returned.
1814
1843
1815 As of version 8.6.0, setting the value to ``False`` is an alias for:
1844 As of version 8.6.0, setting the value to ``False`` is an alias for:
1816 ``IPCompleter.suppress_competing_matchers = True``.
1845 ``IPCompleter.suppress_competing_matchers = True``.
1817 """,
1846 """,
1818 ).tag(config=True)
1847 ).tag(config=True)
1819
1848
1820 disable_matchers = ListTrait(
1849 disable_matchers = ListTrait(
1821 Unicode(),
1850 Unicode(),
1822 help="""List of matchers to disable.
1851 help="""List of matchers to disable.
1823
1852
1824 The list should contain matcher identifiers (see :any:`completion_matcher`).
1853 The list should contain matcher identifiers (see :any:`completion_matcher`).
1825 """,
1854 """,
1826 ).tag(config=True)
1855 ).tag(config=True)
1827
1856
1828 omit__names = Enum(
1857 omit__names = Enum(
1829 (0, 1, 2),
1858 (0, 1, 2),
1830 default_value=2,
1859 default_value=2,
1831 help="""Instruct the completer to omit private method names
1860 help="""Instruct the completer to omit private method names
1832
1861
1833 Specifically, when completing on ``object.<tab>``.
1862 Specifically, when completing on ``object.<tab>``.
1834
1863
1835 When 2 [default]: all names that start with '_' will be excluded.
1864 When 2 [default]: all names that start with '_' will be excluded.
1836
1865
1837 When 1: all 'magic' names (``__foo__``) will be excluded.
1866 When 1: all 'magic' names (``__foo__``) will be excluded.
1838
1867
1839 When 0: nothing will be excluded.
1868 When 0: nothing will be excluded.
1840 """
1869 """
1841 ).tag(config=True)
1870 ).tag(config=True)
1842 limit_to__all__ = Bool(False,
1871 limit_to__all__ = Bool(False,
1843 help="""
1872 help="""
1844 DEPRECATED as of version 5.0.
1873 DEPRECATED as of version 5.0.
1845
1874
1846 Instruct the completer to use __all__ for the completion
1875 Instruct the completer to use __all__ for the completion
1847
1876
1848 Specifically, when completing on ``object.<tab>``.
1877 Specifically, when completing on ``object.<tab>``.
1849
1878
1850 When True: only those names in obj.__all__ will be included.
1879 When True: only those names in obj.__all__ will be included.
1851
1880
1852 When False [default]: the __all__ attribute is ignored
1881 When False [default]: the __all__ attribute is ignored
1853 """,
1882 """,
1854 ).tag(config=True)
1883 ).tag(config=True)
1855
1884
1856 profile_completions = Bool(
1885 profile_completions = Bool(
1857 default_value=False,
1886 default_value=False,
1858 help="If True, emit profiling data for completion subsystem using cProfile."
1887 help="If True, emit profiling data for completion subsystem using cProfile."
1859 ).tag(config=True)
1888 ).tag(config=True)
1860
1889
1861 profiler_output_dir = Unicode(
1890 profiler_output_dir = Unicode(
1862 default_value=".completion_profiles",
1891 default_value=".completion_profiles",
1863 help="Template for path at which to output profile data for completions."
1892 help="Template for path at which to output profile data for completions."
1864 ).tag(config=True)
1893 ).tag(config=True)
1865
1894
1866 @observe('limit_to__all__')
1895 @observe('limit_to__all__')
1867 def _limit_to_all_changed(self, change):
1896 def _limit_to_all_changed(self, change):
1868 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1897 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1869 'value has been deprecated since IPython 5.0, will be made to have '
1898 'value has been deprecated since IPython 5.0, will be made to have '
1870 'no effect and then removed in a future version of IPython.',
1899 'no effect and then removed in a future version of IPython.',
1871 UserWarning)
1900 UserWarning)
1872
1901
1873 def __init__(
1902 def __init__(
1874 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1903 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1875 ):
1904 ):
1876 """IPCompleter() -> completer
1905 """IPCompleter() -> completer
1877
1906
1878 Return a completer object.
1907 Return a completer object.
1879
1908
1880 Parameters
1909 Parameters
1881 ----------
1910 ----------
1882 shell
1911 shell
1883 a pointer to the ipython shell itself. This is needed
1912 a pointer to the ipython shell itself. This is needed
1884 because this completer knows about magic functions, and those can
1913 because this completer knows about magic functions, and those can
1885 only be accessed via the ipython instance.
1914 only be accessed via the ipython instance.
1886 namespace : dict, optional
1915 namespace : dict, optional
1887 an optional dict where completions are performed.
1916 an optional dict where completions are performed.
1888 global_namespace : dict, optional
1917 global_namespace : dict, optional
1889 secondary optional dict for completions, to
1918 secondary optional dict for completions, to
1890 handle cases (such as IPython embedded inside functions) where
1919 handle cases (such as IPython embedded inside functions) where
1891 both Python scopes are visible.
1920 both Python scopes are visible.
1892 config : Config
1921 config : Config
1893 traitlets config object
1922 traitlets config object
1894 **kwargs
1923 **kwargs
1895 passed to super class unmodified.
1924 passed to super class unmodified.
1896 """
1925 """
1897
1926
1898 self.magic_escape = ESC_MAGIC
1927 self.magic_escape = ESC_MAGIC
1899 self.splitter = CompletionSplitter()
1928 self.splitter = CompletionSplitter()
1900
1929
1901 # _greedy_changed() depends on splitter and readline being defined:
1930 # _greedy_changed() depends on splitter and readline being defined:
1902 super().__init__(
1931 super().__init__(
1903 namespace=namespace,
1932 namespace=namespace,
1904 global_namespace=global_namespace,
1933 global_namespace=global_namespace,
1905 config=config,
1934 config=config,
1906 **kwargs,
1935 **kwargs,
1907 )
1936 )
1908
1937
1909 # List where completion matches will be stored
1938 # List where completion matches will be stored
1910 self.matches = []
1939 self.matches = []
1911 self.shell = shell
1940 self.shell = shell
1912 # Regexp to split filenames with spaces in them
1941 # Regexp to split filenames with spaces in them
1913 self.space_name_re = re.compile(r'([^\\] )')
1942 self.space_name_re = re.compile(r'([^\\] )')
1914 # Hold a local ref. to glob.glob for speed
1943 # Hold a local ref. to glob.glob for speed
1915 self.glob = glob.glob
1944 self.glob = glob.glob
1916
1945
1917 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1946 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1918 # buffers, to avoid completion problems.
1947 # buffers, to avoid completion problems.
1919 term = os.environ.get('TERM','xterm')
1948 term = os.environ.get('TERM','xterm')
1920 self.dumb_terminal = term in ['dumb','emacs']
1949 self.dumb_terminal = term in ['dumb','emacs']
1921
1950
1922 # Special handling of backslashes needed in win32 platforms
1951 # Special handling of backslashes needed in win32 platforms
1923 if sys.platform == "win32":
1952 if sys.platform == "win32":
1924 self.clean_glob = self._clean_glob_win32
1953 self.clean_glob = self._clean_glob_win32
1925 else:
1954 else:
1926 self.clean_glob = self._clean_glob
1955 self.clean_glob = self._clean_glob
1927
1956
1928 #regexp to parse docstring for function signature
1957 #regexp to parse docstring for function signature
1929 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1958 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1930 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1959 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1931 #use this if positional argument name is also needed
1960 #use this if positional argument name is also needed
1932 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
        #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')

        self.magic_arg_matchers = [
            self.magic_config_matcher,
            self.magic_color_matcher,
        ]

        # This is set externally by InteractiveShell
        self.custom_completers = None

        # This is a list of names of unicode characters that can be completed
        # into their corresponding unicode value. The list is large, so we
        # lazily initialize it on first use. Consuming code should access this
        # attribute through the `@unicode_names` property.
        self._unicode_names = None

        self._backslash_combining_matchers = [
            self.latex_name_matcher,
            self.unicode_name_matcher,
            back_latex_name_matcher,
            back_unicode_name_matcher,
            self.fwd_unicode_matcher,
        ]

        if not self.backslash_combining_completions:
            for matcher in self._backslash_combining_matchers:
                self.disable_matchers.append(_get_matcher_id(matcher))

        if not self.merge_completions:
            self.suppress_competing_matchers = True

    @property
    def matchers(self) -> List[Matcher]:
        """All active matcher routines for completion"""
        if self.dict_keys_only:
            return [self.dict_key_matcher]

        if self.use_jedi:
            return [
                *self.custom_matchers,
                *self._backslash_combining_matchers,
                *self.magic_arg_matchers,
                self.custom_completer_matcher,
                self.magic_matcher,
                self._jedi_matcher,
                self.dict_key_matcher,
                self.file_matcher,
            ]
        else:
            return [
                *self.custom_matchers,
                *self._backslash_combining_matchers,
                *self.magic_arg_matchers,
                self.custom_completer_matcher,
                self.dict_key_matcher,
                self.magic_matcher,
                self.python_matcher,
                self.file_matcher,
                self.python_func_kw_matcher,
            ]

    def all_completions(self, text: str) -> List[str]:
        """
        Wrapper around the completion methods for the benefit of emacs.
        """
        prefix = text.rpartition('.')[0]
        with provisionalcompleter():
            return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
                    for c in self.completions(text, len(text))]

    def _clean_glob(self, text: str):
        return self.glob("%s*" % text)

    def _clean_glob_win32(self, text: str):
        return [f.replace("\\", "/")
                for f in self.glob("%s*" % text)]

    @context_matcher()
    def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Same as :any:`file_matches`, but adapted to the new Matcher API."""
        matches = self.file_matches(context.token)
        # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
        # starts with `/home/`, `C:\`, etc)
        return _convert_matcher_v1_result_to_v2(matches, type="path")

    def file_matches(self, text: str) -> List[str]:
        """Match filenames, expanding ~USER type strings.

        Most of the seemingly convoluted logic in this completer is an
        attempt to handle filenames with spaces in them. And yet it's not
        quite perfect, because Python's readline doesn't expose all of the
        GNU readline details needed for this to be done correctly.

        For a filename with a space in it, the printed completions will be
        only the parts after what's already been typed (instead of the
        full completions, as is normally done). I don't think with the
        current (as of Python 2.3) Python readline it's possible to do
        better.

        .. deprecated:: 8.6
            You can use :meth:`file_matcher` instead.
        """

        # chars that require escaping with backslash - i.e. chars
        # that readline treats incorrectly as delimiters, but we
        # don't want to treat as delimiters in filename matching
        # when escaped with backslash
        if text.startswith('!'):
            text = text[1:]
            text_prefix = u'!'
        else:
            text_prefix = u''

        text_until_cursor = self.text_until_cursor
        # track strings with open quotes
        open_quotes = has_open_quotes(text_until_cursor)

        if '(' in text_until_cursor or '[' in text_until_cursor:
            lsplit = text
        else:
            try:
                # arg_split ~ shlex.split, but with unicode bugs fixed by us
                lsplit = arg_split(text_until_cursor)[-1]
            except ValueError:
                # typically an unmatched ", or backslash without escaped char.
                if open_quotes:
                    lsplit = text_until_cursor.split(open_quotes)[-1]
                else:
                    return []
            except IndexError:
                # tab pressed on empty line
                lsplit = ""

        if not open_quotes and lsplit != protect_filename(lsplit):
            # if protectables are found, do matching on the whole escaped name
            has_protectables = True
            text0, text = text, lsplit
        else:
            has_protectables = False
            text = os.path.expanduser(text)

        if text == "":
            return [text_prefix + protect_filename(f) for f in self.glob("*")]

        # Compute the matches from the filesystem
        if sys.platform == 'win32':
            m0 = self.clean_glob(text)
        else:
            m0 = self.clean_glob(text.replace('\\', ''))

        if has_protectables:
            # If we had protectables, we need to revert our changes to the
            # beginning of filename so that we don't double-write the part
            # of the filename we have so far
            len_lsplit = len(lsplit)
            matches = [text_prefix + text0 +
                       protect_filename(f[len_lsplit:]) for f in m0]
        else:
            if open_quotes:
                # if we have a string with an open quote, we don't need to
                # protect the names beyond the quote (and we _shouldn't_, as
                # it would cause bugs when the filesystem call is made).
                matches = m0 if sys.platform == "win32" else \
                    [protect_filename(f, open_quotes) for f in m0]
            else:
                matches = [text_prefix +
                           protect_filename(f) for f in m0]

        # Mark directories in input list by appending '/' to their names.
        return [x + '/' if os.path.isdir(x) else x for x in matches]

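The backslash-escaping this method relies on can be illustrated with a standalone sketch; `protect_filename_sketch` and its small `protectables` set are hypothetical simplifications of IPython's real `protect_filename` helper, not its actual implementation:

```python
def protect_filename_sketch(s: str, protectables: str = ' ()[]') -> str:
    """Backslash-escape characters that readline would otherwise
    treat as argument delimiters (simplified illustration)."""
    return "".join("\\" + c if c in protectables else c for c in s)

print(protect_filename_sketch("My Documents/notes (draft).txt"))
```

This is why, for a filename with a space, the completion text continues after the escaped prefix rather than restarting the whole name.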
    @context_matcher()
    def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match magics."""
        text = context.token
        matches = self.magic_matches(text)
        result = _convert_matcher_v1_result_to_v2(matches, type="magic")
        is_magic_prefix = len(text) > 0 and text[0] == "%"
        result["suppress"] = is_magic_prefix and bool(result["completions"])
        return result

    def magic_matches(self, text: str):
        """Match magics.

        .. deprecated:: 8.6
            You can use :meth:`magic_matcher` instead.
        """
        # Get all shell magics now rather than statically, so magics loaded at
        # runtime show up too.
        lsm = self.shell.magics_manager.lsmagic()
        line_magics = lsm['line']
        cell_magics = lsm['cell']
        pre = self.magic_escape
        pre2 = pre + pre

        explicit_magic = text.startswith(pre)

        # Completion logic:
        # - user gives %%: only do cell magics
        # - user gives %: do both line and cell magics
        # - no prefix: do both
        # In other words, line magics are skipped if the user gives %% explicitly
        #
        # We also exclude magics that match any currently visible names:
        # https://github.com/ipython/ipython/issues/4877, unless the user has
        # typed a %:
        # https://github.com/ipython/ipython/issues/10754
        bare_text = text.lstrip(pre)
        global_matches = self.global_matches(bare_text)
        if not explicit_magic:
            def matches(magic):
                """
                Filter magics, in particular remove magics that match
                a name present in global namespace.
                """
                return (magic.startswith(bare_text) and
                        magic not in global_matches)
        else:
            def matches(magic):
                return magic.startswith(bare_text)

        comp = [pre2 + m for m in cell_magics if matches(m)]
        if not text.startswith(pre2):
            comp += [pre + m for m in line_magics if matches(m)]

        return comp

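The precedence rules in the comment above can be sketched as a standalone function. `match_magics` and its parameters are hypothetical simplifications: the real method pulls magics from the shell's magics manager and uses `self.magic_escape` rather than a hard-coded `%`:

```python
def match_magics(text, line_magics, cell_magics, global_names=()):
    """Sketch of the documented rules: '%%' completes cell magics only,
    '%' or no prefix completes both, and without an explicit '%',
    names shadowed by globals are dropped."""
    pre = "%"
    explicit = text.startswith(pre)
    bare = text.lstrip(pre)

    def ok(magic):
        return magic.startswith(bare) and (explicit or magic not in global_names)

    comp = [pre * 2 + m for m in cell_magics if ok(m)]
    if not text.startswith(pre * 2):
        comp += [pre + m for m in line_magics if ok(m)]
    return comp
```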
    @context_matcher()
    def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match class names and attributes for %config magic."""
        # NOTE: uses `line_buffer` equivalent for compatibility
        matches = self.magic_config_matches(context.line_with_cursor)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def magic_config_matches(self, text: str) -> List[str]:
        """Match class names and attributes for %config magic.

        .. deprecated:: 8.6
            You can use :meth:`magic_config_matcher` instead.
        """
        texts = text.strip().split()

        if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
            # get all configuration classes
            classes = sorted(set([c for c in self.shell.configurables
                                  if c.__class__.class_traits(config=True)
                                  ]), key=lambda x: x.__class__.__name__)
            classnames = [c.__class__.__name__ for c in classes]

            # return all classnames if config or %config is given
            if len(texts) == 1:
                return classnames

            # match classname
            classname_texts = texts[1].split('.')
            classname = classname_texts[0]
            classname_matches = [c for c in classnames
                                 if c.startswith(classname)]

            # return matched classes or the matched class with attributes
            if texts[1].find('.') < 0:
                return classname_matches
            elif len(classname_matches) == 1 and \
                    classname_matches[0] == classname:
                cls = classes[classnames.index(classname)].__class__
                help = cls.class_get_help()
                # strip leading '--' from cl-args:
                help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
                return [attr.split('=')[0]
                        for attr in help.strip().splitlines()
                        if attr.startswith(texts[1])]
        return []

    @context_matcher()
    def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match color schemes for %colors magic."""
        # NOTE: uses `line_buffer` equivalent for compatibility
        matches = self.magic_color_matches(context.line_with_cursor)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def magic_color_matches(self, text: str) -> List[str]:
        """Match color schemes for %colors magic.

        .. deprecated:: 8.6
            You can use :meth:`magic_color_matcher` instead.
        """
        texts = text.split()
        if text.endswith(' '):
            # .split() strips off the trailing whitespace. Add '' back
            # so that: '%colors ' -> ['%colors', '']
            texts.append('')

        if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
            prefix = texts[1]
            return [color for color in InspectColors.keys()
                    if color.startswith(prefix)]
        return []

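The reason for `texts.append('')` is easy to demonstrate in isolation: `str.split()` with no argument discards trailing whitespace, losing the empty token the user is about to complete.

```python
text = "%colors "
texts = text.split()   # -> ['%colors']: the trailing token is lost
if text.endswith(" "):
    texts.append("")   # restore it, so the empty prefix matches every scheme
print(texts)
```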
    @context_matcher(identifier="IPCompleter.jedi_matcher")
    def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
        matches = self._jedi_matches(
            cursor_column=context.cursor_position,
            cursor_line=context.cursor_line,
            text=context.full_text,
        )
        return {
            "completions": matches,
            # static analysis should not suppress other matchers
            "suppress": False,
        }

    def _jedi_matches(
        self, cursor_column: int, cursor_line: int, text: str
    ) -> Iterator[_JediCompletionLike]:
        """
        Return a list of :any:`jedi.api.Completion`\\s objects from a ``text`` and
        cursor position.

        Parameters
        ----------
        cursor_column : int
            column position of the cursor in ``text``, 0-indexed.
        cursor_line : int
            line position of the cursor in ``text``, 0-indexed
        text : str
            text to complete

        Notes
        -----
        If ``IPCompleter.debug`` is ``True``, may return a :any:`_FakeJediCompletion`
        object containing a string with the Jedi debug information attached.

        .. deprecated:: 8.6
            You can use :meth:`_jedi_matcher` instead.
        """
        namespaces = [self.namespace]
        if self.global_namespace is not None:
            namespaces.append(self.global_namespace)

        completion_filter = lambda x: x
        offset = cursor_to_position(text, cursor_line, cursor_column)
        # filter output if we are completing for object members
        if offset:
            pre = text[offset - 1]
            if pre == '.':
                if self.omit__names == 2:
                    completion_filter = lambda c: not c.name.startswith('_')
                elif self.omit__names == 1:
                    completion_filter = lambda c: not (c.name.startswith('__') and c.name.endswith('__'))
                elif self.omit__names == 0:
                    completion_filter = lambda x: x
                else:
                    raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))

        interpreter = jedi.Interpreter(text[:offset], namespaces)
        try_jedi = True

        try:
            # find the first token in the current tree -- if it is a ' or " then we are in a string
            completing_string = False
            try:
                first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
            except StopIteration:
                pass
            else:
                # note the value may be ', ", or it may also be ''' or """, or
                # in some cases, """what/you/typed..., but all of these are
                # strings.
                completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}

            # if we are in a string, jedi is likely not the right candidate
            # for now. Skip it.
            try_jedi = not completing_string
        except Exception as e:
            # many things can go wrong; we are using a private API, so just don't crash.
            if self.debug:
                print("Error detecting if completing a non-finished string :", e, '|')

        if not try_jedi:
            return iter([])
        try:
            return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
        except Exception as e:
            if self.debug:
                return iter(
                    [
                        _FakeJediCompletion(
                            'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
                            % (e)
                        )
                    ]
                )
            else:
                return iter([])

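The `omit__names` branches above reduce to choosing one of three name filters. A minimal sketch, operating on plain strings rather than jedi completion objects (`make_name_filter` is a hypothetical helper, not part of IPython's API):

```python
def make_name_filter(omit__names: int):
    # 2: hide every name starting with '_'
    # 1: hide only dunder names such as '__init__'
    # 0: hide nothing
    if omit__names == 2:
        return lambda name: not name.startswith("_")
    if omit__names == 1:
        return lambda name: not (name.startswith("__") and name.endswith("__"))
    if omit__names == 0:
        return lambda name: True
    raise ValueError("Don't understand omit__names == {}".format(omit__names))
```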
    @context_matcher()
    def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match attributes or global python names"""
        text = context.line_with_cursor
        if "." in text:
            try:
                matches, fragment = self._attr_matches(text, include_prefix=False)
                if text.endswith(".") and self.omit__names:
                    if self.omit__names == 1:
                        # true if txt is _not_ a __ name, false otherwise:
                        no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
                    else:
                        # true if txt is _not_ a _ name, false otherwise:
                        no__name = (
                            lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
                            is None
                        )
                    matches = filter(no__name, matches)
                return _convert_matcher_v1_result_to_v2(
                    matches, type="attribute", fragment=fragment
                )
            except NameError:
                # catches <undefined attributes>.<tab>
                matches = []
                return _convert_matcher_v1_result_to_v2(matches, type="attribute")
        else:
            matches = self.global_matches(context.token)
            # TODO: maybe distinguish between functions, modules and just "variables"
            return _convert_matcher_v1_result_to_v2(matches, type="variable")

    @completion_matcher(api_version=1)
    def python_matches(self, text: str) -> Iterable[str]:
        """Match attributes or global python names.

        .. deprecated:: 8.27
            You can use :meth:`python_matcher` instead."""
        if "." in text:
            try:
                matches = self.attr_matches(text)
                if text.endswith('.') and self.omit__names:
                    if self.omit__names == 1:
                        # true if txt is _not_ a __ name, false otherwise:
                        no__name = (lambda txt:
                                    re.match(r'.*\.__.*?__', txt) is None)
                    else:
                        # true if txt is _not_ a _ name, false otherwise:
                        no__name = (lambda txt:
                                    re.match(r'\._.*?', txt[txt.rindex('.'):]) is None)
                    matches = filter(no__name, matches)
            except NameError:
                # catches <undefined attributes>.<tab>
                matches = []
        else:
            matches = self.global_matches(text)
        return matches

    def _default_arguments_from_docstring(self, doc):
        """Parse the first line of docstring for call signature.

        Docstring should be of the form 'min(iterable[, key=func])\n'.
        It can also parse cython docstrings of the form
        'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
        """
        if doc is None:
            return []

        # care only about the first line
        line = doc.lstrip().splitlines()[0]

        # p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
        # 'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
        sig = self.docstring_sig_re.search(line)
        if sig is None:
            return []
        # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
        sig = sig.groups()[0].split(',')
        ret = []
        for s in sig:
            # re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
            ret += self.docstring_kwd_re.findall(s)
        return ret

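The two regular expressions quoted in the comments can be exercised on their own. `args_from_docstring` below is a module-level sketch of the method's flow, assuming those same patterns are what `docstring_sig_re` and `docstring_kwd_re` hold:

```python
import re

# The patterns quoted in the comments above.
docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')

def args_from_docstring(doc: str):
    # 'min(iterable[, key=func])\n' -> ['iterable', 'key']
    line = doc.lstrip().splitlines()[0]
    sig = docstring_sig_re.search(line)
    if sig is None:
        return []
    ret = []
    for part in sig.groups()[0].split(','):
        ret += docstring_kwd_re.findall(part)
    return ret
```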
    def _default_arguments(self, obj):
        """Return the list of default arguments of obj if it is callable,
        or an empty list otherwise."""
        call_obj = obj
        ret = []
        if inspect.isbuiltin(obj):
            pass
        elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
            if inspect.isclass(obj):
                # for cython embedsignature=True the constructor docstring
                # belongs to the object itself, not __init__
                ret += self._default_arguments_from_docstring(
                    getattr(obj, '__doc__', ''))
                # for classes, check for __init__, __new__
                call_obj = (getattr(obj, '__init__', None) or
                            getattr(obj, '__new__', None))
            # for all others, check if they are __call__able
            elif hasattr(obj, '__call__'):
                call_obj = obj.__call__
        ret += self._default_arguments_from_docstring(
            getattr(call_obj, '__doc__', ''))

        _keeps = (inspect.Parameter.KEYWORD_ONLY,
                  inspect.Parameter.POSITIONAL_OR_KEYWORD)

        try:
            sig = inspect.signature(obj)
            ret.extend(k for k, v in sig.parameters.items() if
                       v.kind in _keeps)
        except ValueError:
            pass

        return list(set(ret))

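The introspection half of the method above can be illustrated on its own. A small sketch of how `inspect.signature` yields completable keyword names, using the same parameter-kind filter as `_keeps` (the sample function `migrad` is illustrative):

```python
import inspect

# Same parameter kinds the completer keeps (see _keeps above).
KEEPS = (inspect.Parameter.KEYWORD_ONLY,
         inspect.Parameter.POSITIONAL_OR_KEYWORD)

def signature_kwargs(obj):
    """Return parameter names usable as keyword arguments, or [] on failure."""
    try:
        sig = inspect.signature(obj)
    except ValueError:
        # some builtins expose no signature
        return []
    return [k for k, v in sig.parameters.items() if v.kind in KEEPS]

def migrad(self, ncall=10000, resume=True, *args, nsplit=1, **kwargs):
    pass

print(signature_kwargs(migrad))  # *args and **kwargs are filtered out
```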
    @context_matcher()
    def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match named parameters (kwargs) of the last open function."""
        matches = self.python_func_kw_matches(context.token)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def python_func_kw_matches(self, text):
        """Match named parameters (kwargs) of the last open function.

        .. deprecated:: 8.6
            You can use :meth:`python_func_kw_matcher` instead.
        """

        if "." in text:  # a parameter cannot be dotted
            return []
        try:
            regexp = self.__funcParamsRegex
        except AttributeError:
            regexp = self.__funcParamsRegex = re.compile(r'''
                '.*?(?<!\\)' |  # single quoted strings or
                ".*?(?<!\\)" |  # double quoted strings or
                \w+          |  # identifier
                \S               # other characters
                ''', re.VERBOSE | re.DOTALL)
        # 1. find the nearest identifier that comes before an unclosed
        # parenthesis before the cursor
        # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
        tokens = regexp.findall(self.text_until_cursor)
        iterTokens = reversed(tokens)
        openPar = 0

        for token in iterTokens:
            if token == ')':
                openPar -= 1
            elif token == '(':
                openPar += 1
                if openPar > 0:
                    # found the last unclosed parenthesis
                    break
        else:
            return []
        # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
        ids = []
        isId = re.compile(r'\w+$').match

        while True:
            try:
                ids.append(next(iterTokens))
                if not isId(ids[-1]):
                    ids.pop()
                    break
                if not next(iterTokens) == '.':
                    break
            except StopIteration:
                break

        # Find all named arguments already assigned to, so as to avoid
        # suggesting them again
        usedNamedArgs = set()
        par_level = -1
        for token, next_token in zip(tokens, tokens[1:]):
            if token == '(':
                par_level += 1
            elif token == ')':
                par_level -= 1

            if par_level != 0:
                continue

            if next_token != '=':
                continue

            usedNamedArgs.add(token)

        argMatches = []
        try:
            callableObj = '.'.join(ids[::-1])
            namedArgs = self._default_arguments(eval(callableObj,
                                                     self.namespace))

            # Remove used named arguments from the list, no need to show twice
            for namedArg in set(namedArgs) - usedNamedArgs:
                if namedArg.startswith(text):
                    argMatches.append("%s=" % namedArg)
        except:
            pass

        return argMatches

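Steps 1 and 2 of the method above (scan backwards for the last unclosed parenthesis, then reassemble the dotted name that owns it) can be sketched as a standalone function, using the same tokenizer regex:

```python
import re

# Same tokenizer as __funcParamsRegex above.
TOKEN_RE = re.compile(r'''
    '.*?(?<!\\)' |   # single quoted strings or
    ".*?(?<!\\)" |   # double quoted strings or
    \w+          |   # identifier
    \S               # other characters
    ''', re.VERBOSE | re.DOTALL)

def open_call_name(text_until_cursor):
    """Return the dotted name owning the last unclosed '(' (or None)."""
    tokens = TOKEN_RE.findall(text_until_cursor)
    it = reversed(tokens)
    open_par = 0
    for token in it:              # scan right-to-left for an unclosed '('
        if token == ')':
            open_par -= 1
        elif token == '(':
            open_par += 1
            if open_par > 0:
                break
    else:
        return None
    ids = []                      # collect 'name', '.', 'name', ... backwards
    is_id = re.compile(r'\w+$').match
    while True:
        try:
            ids.append(next(it))
            if not is_id(ids[-1]):
                ids.pop()
                break
            if next(it) != '.':
                break
        except StopIteration:
            break
    return '.'.join(ids[::-1]) or None

print(open_call_name("foo (1+bar(x), pa"))  # nested bar(...) is closed, foo( is not
```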
    @staticmethod
    def _get_keys(obj: Any) -> List[Any]:
        # Objects can define their own completions by defining an
        # _ipython_key_completions_() method.
        method = get_real_method(obj, '_ipython_key_completions_')
        if method is not None:
            return method()

        # Special-case some common in-memory dict-like types
        if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
            try:
                return list(obj.keys())
            except Exception:
                return []
        elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
            try:
                return list(obj.obj.keys())
            except Exception:
                return []
        elif _safe_isinstance(obj, 'numpy', 'ndarray') or \
                _safe_isinstance(obj, 'numpy', 'void'):
            return obj.dtype.names or []
        return []

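The `_ipython_key_completions_` protocol checked first above is how arbitrary objects opt in to key completion. A minimal sketch of a participating container (the class name is illustrative):

```python
class Catalog:
    """Mapping-like object advertising completable keys to IPython."""

    def __init__(self, entries):
        self._entries = dict(entries)

    def __getitem__(self, key):
        return self._entries[key]

    def _ipython_key_completions_(self):
        # Called by the completer after e.g. `catalog[<tab>`
        return list(self._entries)

catalog = Catalog({"alpha": 1, "beta": 2})
print(catalog._ipython_key_completions_())
```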
    @context_matcher()
    def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match string keys in a dictionary, after e.g. ``foo[``."""
        matches = self.dict_key_matches(context.token)
        return _convert_matcher_v1_result_to_v2(
            matches, type="dict key", suppress_if_matches=True
        )

    def dict_key_matches(self, text: str) -> List[str]:
        """Match string keys in a dictionary, after e.g. ``foo[``.

        .. deprecated:: 8.6
            You can use :meth:`dict_key_matcher` instead.
        """

        # Short-circuit on a closed dictionary (the regular expression would
        # not match anyway, but would take quite a while).
        if self.text_until_cursor.strip().endswith("]"):
            return []

        match = DICT_MATCHER_REGEX.search(self.text_until_cursor)

        if match is None:
            return []

        expr, prior_tuple_keys, key_prefix = match.groups()

        obj = self._evaluate_expr(expr)

        if obj is not_found:
            return []

        keys = self._get_keys(obj)
        if not keys:
            return keys

        tuple_prefix = guarded_eval(
            prior_tuple_keys,
            EvaluationContext(
                globals=self.global_namespace,
                locals=self.namespace,
                evaluation=self.evaluation,  # type: ignore
                in_subscript=True,
            ),
        )

        closing_quote, token_offset, matches = match_dict_keys(
            keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
        )
        if not matches:
            return []

        # get the cursor position of
        # - the text being completed
        # - the start of the key text
        # - the start of the completion
        text_start = len(self.text_until_cursor) - len(text)
        if key_prefix:
            key_start = match.start(3)
            completion_start = key_start + token_offset
        else:
            key_start = completion_start = match.end()

        # grab the leading prefix, to make sure all completions start with `text`
        if text_start > key_start:
            leading = ''
        else:
            leading = text[text_start:completion_start]

        # append closing quote and bracket as appropriate
        # this is *not* appropriate if the opening quote or bracket is outside
        # the text given to this method, e.g. `d["""a\nt
        can_close_quote = False
        can_close_bracket = False

        continuation = self.line_buffer[len(self.text_until_cursor) :].strip()

        if continuation.startswith(closing_quote):
            # do not close if already closed, e.g. `d['a<tab>'`
            continuation = continuation[len(closing_quote) :]
        else:
            can_close_quote = True

        continuation = continuation.strip()

        # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
        # handling it is out of scope, so let's avoid appending suffixes.
        has_known_tuple_handling = isinstance(obj, dict)

        can_close_bracket = (
            not continuation.startswith("]") and self.auto_close_dict_keys
        )
        can_close_tuple_item = (
            not continuation.startswith(",")
            and has_known_tuple_handling
            and self.auto_close_dict_keys
        )
        can_close_quote = can_close_quote and self.auto_close_dict_keys

        # fast path if the closing quote should be appended but no suffix is allowed
        if not can_close_quote and not can_close_bracket and closing_quote:
            return [leading + k for k in matches]

        results = []

        end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM

        for k, state_flag in matches.items():
            result = leading + k
            if can_close_quote and closing_quote:
                result += closing_quote

            if state_flag == end_of_tuple_or_item:
                # We do not know which suffix to add,
                # e.g. both tuple item and string
                # match this item.
                pass

            if state_flag in end_of_tuple_or_item and can_close_bracket:
                result += "]"
            if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
                result += ", "
            results.append(result)
        return results

    @context_matcher()
    def unicode_name_matcher(self, context: CompletionContext):
        """Same as :any:`unicode_name_matches`, but adapted to the new Matcher API."""
        fragment, matches = self.unicode_name_matches(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

    @staticmethod
    def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
        """Match Latex-like syntax for unicode characters based
        on the name of the character.

        This does ``\\GREEK SMALL LETTER ETA`` -> ``η``

        Works only on valid python 3 identifiers, or on combining characters that
        will combine to form a valid identifier.
        """
        slashpos = text.rfind('\\')
        if slashpos > -1:
            s = text[slashpos + 1:]
            try:
                unic = unicodedata.lookup(s)
                # allow combining chars
                if ('a' + unic).isidentifier():
                    return '\\' + s, [unic]
            except KeyError:
                pass
        return '', []

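The method above leans on two stdlib behaviors: `unicodedata.lookup` resolving official character names, and the `('a' + char).isidentifier()` guard admitting combining characters. A quick standalone check of both:

```python
import unicodedata

# Resolve an official Unicode name, as the matcher does for `\NAME<tab>`.
eta = unicodedata.lookup('GREEK SMALL LETTER ETA')
print(eta)  # η

# The identifier check filters out characters that cannot appear in Python
# names; a combining character passes because 'a' + char is a valid identifier.
acute = unicodedata.lookup('COMBINING ACUTE ACCENT')
print(('a' + eta).isidentifier(), ('a' + acute).isidentifier())
```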
    @context_matcher()
    def latex_name_matcher(self, context: CompletionContext):
        """Match Latex syntax for unicode characters.

        This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
        """
        fragment, matches = self.latex_matches(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="latex", fragment=fragment, suppress_if_matches=True
        )

    def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
        """Match Latex syntax for unicode characters.

        This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``

        .. deprecated:: 8.6
            You can use :meth:`latex_name_matcher` instead.
        """
        slashpos = text.rfind('\\')
        if slashpos > -1:
            s = text[slashpos:]
            if s in latex_symbols:
                # Try to complete a full latex symbol to unicode
                # \\alpha -> α
                return s, [latex_symbols[s]]
            else:
                # If a user has partially typed a latex symbol, give them
                # a full list of options: \al -> [\aleph, \alpha]
                matches = [k for k in latex_symbols if k.startswith(s)]
                if matches:
                    return s, matches
        return '', ()

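The lookup-then-prefix-list logic above is easy to exercise against a stand-in table. A sketch assuming a tiny subset of IPython's `latex_symbols` mapping (the real table lives in `IPython.core.latex_symbols`):

```python
# Stand-in for IPython's latex_symbols table (illustrative subset).
latex_symbols = {'\\alpha': 'α', '\\aleph': 'ℵ', '\\beta': 'β'}

def latex_matches(text):
    """Full name -> unicode char; partial name -> list of candidate names."""
    slashpos = text.rfind('\\')
    if slashpos > -1:
        s = text[slashpos:]
        if s in latex_symbols:
            # exact symbol: replace with the unicode character
            return s, [latex_symbols[s]]
        # partial symbol: list everything sharing the prefix
        matches = [k for k in latex_symbols if k.startswith(s)]
        if matches:
            return s, matches
    return '', ()

print(latex_matches('x = \\alpha'))  # completes to the character
print(latex_matches('x = \\al'))     # lists candidate symbol names
```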
    @context_matcher()
    def custom_completer_matcher(self, context):
        """Dispatch custom completers.

        If a match is found, suppresses all other matchers except for Jedi.
        """
        matches = self.dispatch_custom_completer(context.token) or []
        result = _convert_matcher_v1_result_to_v2(
            matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
        )
        result["ordered"] = True
        result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
        return result

    def dispatch_custom_completer(self, text):
        """
        .. deprecated:: 8.6
            You can use :meth:`custom_completer_matcher` instead.
        """
        if not self.custom_completers:
            return

        line = self.line_buffer
        if not line.strip():
            return None

        # Create a little structure to pass all the relevant information about
        # the current completion to any custom completer.
        event = SimpleNamespace()
        event.line = line
        event.symbol = text
        cmd = line.split(None, 1)[0]
        event.command = cmd
        event.text_until_cursor = self.text_until_cursor

        # for foo etc, try also to find a completer for %foo
        if not cmd.startswith(self.magic_escape):
            try_magic = self.custom_completers.s_matches(
                self.magic_escape + cmd)
        else:
            try_magic = []

        for c in itertools.chain(self.custom_completers.s_matches(cmd),
                                 try_magic,
                                 self.custom_completers.flat_matches(self.text_until_cursor)):
            try:
                res = c(event)
                if res:
                    # first, try case-sensitive matches
                    withcase = [r for r in res if r.startswith(text)]
                    if withcase:
                        return withcase
                    # if none, then case-insensitive ones are ok too
                    text_low = text.lower()
                    return [r for r in res if r.lower().startswith(text_low)]
            except TryNext:
                pass
            except KeyboardInterrupt:
                # If a custom completer takes too long, let the keyboard
                # interrupt abort and return nothing.
                break

        return None

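The `event` namespace built above is the whole contract a custom completer sees. A hedged sketch of a completer function consuming such an event (the `git_completer` function and its subcommand list are purely illustrative, not part of IPython):

```python
from types import SimpleNamespace

def git_completer(event):
    """Illustrative custom completer: completes subcommands after 'git '."""
    subcommands = ['status', 'stash', 'show', 'pull', 'push']
    # event.symbol is the token being completed; event.command is the
    # first word on the line; event.line / event.text_until_cursor give
    # the full context.
    return [s for s in subcommands if s.startswith(event.symbol)]

# The dispatcher builds an event shaped like this before calling each
# registered completer in turn:
event = SimpleNamespace(line='git st', symbol='st', command='git',
                        text_until_cursor='git st')
print(git_completer(event))
```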
    def completions(self, text: str, offset: int) -> Iterator[Completion]:
        """
        Returns an iterator over the possible completions.

        .. warning::

            Unstable

            This function is unstable; the API may change without warning.
            It will also raise unless used in a proper context manager.

        Parameters
        ----------
        text : str
            Full text of the current input, multi line string.
        offset : int
            Integer representing the position of the cursor in ``text``. Offset
            is 0-based indexed.

        Yields
        ------
        Completion

        Notes
        -----
        The cursor on a text can either be seen as being "in between"
        characters or "on" a character, depending on the interface visible to
        the user. For consistency, the cursor being "in between" characters X
        and Y is equivalent to the cursor being "on" character Y, that is to say
        the character the cursor is on is considered as being after the cursor.

        Combining characters may span more than one position in the
        text.

        .. note::

            If ``IPCompleter.debug`` is :any:`True`, this will yield a
            ``--jedi/ipython--`` fake Completion token to distinguish
            completions returned by Jedi from usual IPython completions.

        .. note::

            Completions are not completely deduplicated yet. If identical
            completions are coming from different sources this function does not
            ensure that each completion object will only be present once.
        """
        warnings.warn("_complete is a provisional API (as of IPython 6.0). "
                      "It may change without warnings. "
                      "Use in corresponding context manager.",
                      category=ProvisionalCompleterWarning, stacklevel=2)

        seen = set()
        profiler: Optional[cProfile.Profile]
        try:
            if self.profile_completions:
                import cProfile
                profiler = cProfile.Profile()
                profiler.enable()
            else:
                profiler = None

            for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout / 1000):
                if c and (c in seen):
                    continue
                yield c
                seen.add(c)
        except KeyboardInterrupt:
            # if completions take too long and the user sends a keyboard
            # interrupt, do not crash and return ASAP
            pass
        finally:
            if profiler is not None:
                profiler.disable()
                ensure_dir_exists(self.profiler_output_dir)
                output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
                print("Writing profiler output to", output_path)
                profiler.dump_stats(output_path)

2888 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2919 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2889 """
2920 """
2890 Core completion module.Same signature as :any:`completions`, with the
2921 Core completion module.Same signature as :any:`completions`, with the
2891 extra `timeout` parameter (in seconds).
2922 extra `timeout` parameter (in seconds).
2892
2923
2893 Computing jedi's completion ``.type`` can be quite expensive (it is a
2924 Computing jedi's completion ``.type`` can be quite expensive (it is a
2894 lazy property) and can require some warm-up, more warm up than just
2925 lazy property) and can require some warm-up, more warm up than just
2895 computing the ``name`` of a completion. The warm-up can be :
2926 computing the ``name`` of a completion. The warm-up can be :
2896
2927
2897 - Long warm-up the first time a module is encountered after
2928 - Long warm-up the first time a module is encountered after
2898 install/update: actually build parse/inference tree.
2929 install/update: actually build parse/inference tree.
2899
2930
2900 - first time the module is encountered in a session: load tree from
2931 - first time the module is encountered in a session: load tree from
2901 disk.
2932 disk.
2902
2933
2903 We don't want to block completions for tens of seconds so we give the
2934 We don't want to block completions for tens of seconds so we give the
2904 completer a "budget" of ``_timeout`` seconds per invocation to compute
2935 completer a "budget" of ``_timeout`` seconds per invocation to compute
2905 completions types, the completions that have not yet been computed will
2936 completions types, the completions that have not yet been computed will
2906 be marked as "unknown" an will have a chance to be computed next round
2937 be marked as "unknown" an will have a chance to be computed next round
2907 are things get cached.
2938 are things get cached.
2908
2939
2909 Keep in mind that Jedi is not the only thing treating the completion so
2940 Keep in mind that Jedi is not the only thing treating the completion so
2910 keep the timeout short-ish as if we take more than 0.3 second we still
2941 keep the timeout short-ish as if we take more than 0.3 second we still
2911 have lots of processing to do.
2942 have lots of processing to do.
2912
2943
2913 """
2944 """
        deadline = time.monotonic() + _timeout

        before = full_text[:offset]
        cursor_line, cursor_column = position_to_cursor(full_text, offset)

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        def is_non_jedi_result(
            result: MatcherResult, identifier: str
        ) -> TypeGuard[SimpleMatcherResult]:
            return identifier != jedi_matcher_id

        results = self._complete(
            full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
        )

        non_jedi_results: Dict[str, SimpleMatcherResult] = {
            identifier: result
            for identifier, result in results.items()
            if is_non_jedi_result(result, identifier)
        }

        jedi_matches = (
            cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
            if jedi_matcher_id in results
            else ()
        )

        iter_jm = iter(jedi_matches)
        if _timeout:
            for jm in iter_jm:
                try:
                    type_ = jm.type
                except Exception:
                    if self.debug:
                        print("Error in Jedi getting type of ", jm)
                    type_ = None
                delta = len(jm.name_with_symbols) - len(jm.complete)
                if type_ == 'function':
                    signature = _make_signature(jm)
                else:
                    signature = ''
                yield Completion(start=offset - delta,
                                 end=offset,
                                 text=jm.name_with_symbols,
                                 type=type_,
                                 signature=signature,
                                 _origin='jedi')

                if time.monotonic() > deadline:
                    break

        for jm in iter_jm:
            delta = len(jm.name_with_symbols) - len(jm.complete)
            yield Completion(
                start=offset - delta,
                end=offset,
                text=jm.name_with_symbols,
                type=_UNKNOWN_TYPE,  # don't compute type for speed
                _origin="jedi",
                signature="",
            )

        # TODO:
        # Suppress this, right now just for debug.
        if jedi_matches and non_jedi_results and self.debug:
            some_start_offset = before.rfind(
                next(iter(non_jedi_results.values()))["matched_fragment"]
            )
            yield Completion(
                start=some_start_offset,
                end=offset,
                text="--jedi/ipython--",
                _origin="debug",
                type="none",
                signature="",
            )

        ordered: List[Completion] = []
        sortable: List[Completion] = []

        for origin, result in non_jedi_results.items():
            matched_text = result["matched_fragment"]
            start_offset = before.rfind(matched_text)
            is_ordered = result.get("ordered", False)
            container = ordered if is_ordered else sortable

            # I'm unsure if this is always true, so let's assert and see if it
            # crashes.
            assert before.endswith(matched_text)

            for simple_completion in result["completions"]:
                completion = Completion(
                    start=start_offset,
                    end=offset,
                    text=simple_completion.text,
                    _origin=origin,
                    signature="",
                    type=simple_completion.type or _UNKNOWN_TYPE,
                )
                container.append(completion)

        yield from list(self._deduplicate(ordered + self._sort(sortable)))[
            :MATCHES_LIMIT
        ]

    def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
        """Find completions for the given text and line context.

        Note that both the text and the line_buffer are optional, but at least
        one of them must be given.

        Parameters
        ----------
        text : string, optional
            Text to perform the completion on. If not given, the line buffer
            is split using the instance's CompletionSplitter object.
        line_buffer : string, optional
            If not given, the completer attempts to obtain the current line
            buffer via readline. This keyword allows clients which are
            requesting text completions in non-readline contexts to inform
            the completer of the entire text.
        cursor_pos : int, optional
            Index of the cursor in the full line buffer. Should be provided by
            remote frontends where the kernel has no access to frontend state.

        Returns
        -------
        Tuple of two items:
        text : str
            Text that was actually used in the completion.
        matches : list
            A list of completion matches.

        Notes
        -----
        This API is likely to be deprecated and replaced by
        :any:`IPCompleter.completions` in the future.

        """
        warnings.warn('`Completer.complete` is pending deprecation since '
                      'IPython 6.0 and will be replaced by `Completer.completions`.',
                      PendingDeprecationWarning)
        # Potential todo: fold the third throw-away argument of _complete
        # into the first two.
        # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
        # TODO: should we deprecate now, or does it stay?

        results = self._complete(
            line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
        )

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        return self._arrange_and_extract(
            results,
            # TODO: can we confirm that excluding Jedi here was a deliberate choice in the previous version?
            skip_matchers={jedi_matcher_id},
            # this API does not support different start/end positions (fragments of token).
            abort_if_offset_changes=True,
        )

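When `text` is not given, `complete` derives the token from the line buffer by splitting on a delimiter set. A rough sketch of that splitting step; the `DELIMS` string here is illustrative, not IPython's exact `CompletionSplitter` configuration (note that `.` is deliberately left out of the delimiters so attribute chains survive as one token):

```python
# Illustrative delimiter set; the real CompletionSplitter is configurable.
DELIMS = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"


def split_line(line: str, cursor_pos: int) -> str:
    """Return the token immediately to the left of *cursor_pos*."""
    before = line[:cursor_pos]
    # the token starts just after the right-most delimiter, if any
    start = max(before.rfind(ch) for ch in DELIMS) + 1 if before else 0
    return before[start:]


print(split_line("result = np.arra", 16))  # the token handed to the matchers
```

Here the dot is part of the token, which is what lets attribute completion see `np.arra` rather than just `arra`.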
    def _arrange_and_extract(
        self,
        results: Dict[str, MatcherResult],
        skip_matchers: Set[str],
        abort_if_offset_changes: bool,
    ):
        sortable: List[AnyMatcherCompletion] = []
        ordered: List[AnyMatcherCompletion] = []
        most_recent_fragment = None
        for identifier, result in results.items():
            if identifier in skip_matchers:
                continue
            if not result["completions"]:
                continue
            if not most_recent_fragment:
                most_recent_fragment = result["matched_fragment"]
            if (
                abort_if_offset_changes
                and result["matched_fragment"] != most_recent_fragment
            ):
                break
            if result.get("ordered", False):
                ordered.extend(result["completions"])
            else:
                sortable.extend(result["completions"])

        if not most_recent_fragment:
            most_recent_fragment = ""  # to satisfy the typechecker (and just in case)

        return most_recent_fragment, [
            m.text for m in self._deduplicate(ordered + self._sort(sortable))
        ]

    def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
                  full_text=None) -> _CompleteResult:
        """
        Like complete but can also return raw jedi completions as well as the
        origin of the completion text. This could (and should) be made much
        cleaner, but that will be simpler once we drop the old (and stateful)
        :any:`complete` API.

        With the current provisional API, cursor_pos acts (depending on the
        caller) both as the offset in ``text`` or ``line_buffer``, and as the
        ``column`` when passing multiline strings; this could/should be
        renamed, but that would add extra noise.

        Parameters
        ----------
        cursor_line
            Index of the line the cursor is on. 0 indexed.
        cursor_pos
            Position of the cursor in the current line/line_buffer/text. 0
            indexed.
        line_buffer : optional, str
            The current line the cursor is in; this is mostly here for legacy
            reasons, as readline could only give us the single current line.
            Prefer `full_text`.
        text : str
            The current "token" the cursor is in, mostly also for historical
            reasons, as the completer would trigger only after the current
            line was parsed.
        full_text : str
            Full text of the current cell.

        Returns
        -------
        An ordered dictionary where keys are identifiers of completion
        matchers and values are ``MatcherResult``s.
        """

        # if the cursor position isn't given, the only sane assumption we can
        # make is that it's at the end of the line (the common case)
        if cursor_pos is None:
            cursor_pos = len(line_buffer) if text is None else len(text)

        if self.use_main_ns:
            self.namespace = __main__.__dict__

        # if text is either None or an empty string, rely on the line buffer
        if (not line_buffer) and full_text:
            line_buffer = full_text.split('\n')[cursor_line]
        if not text:  # issue #11508: check line_buffer before calling split_line
            text = (
                self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
            )

        # If no line buffer is given, assume the input text is all there was
        if line_buffer is None:
            line_buffer = text

        # deprecated - do not use `line_buffer` in new code.
        self.line_buffer = line_buffer
        self.text_until_cursor = self.line_buffer[:cursor_pos]

        if not full_text:
            full_text = line_buffer

        context = CompletionContext(
            full_text=full_text,
            cursor_position=cursor_pos,
            cursor_line=cursor_line,
            token=text,
            limit=MATCHES_LIMIT,
        )

        # Start with a clean slate of completions
        results: Dict[str, MatcherResult] = {}

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        suppressed_matchers: Set[str] = set()

        matchers = {
            _get_matcher_id(matcher): matcher
            for matcher in sorted(
                self.matchers, key=_get_matcher_priority, reverse=True
            )
        }

        for matcher_id, matcher in matchers.items():
            matcher_id = _get_matcher_id(matcher)

            if matcher_id in self.disable_matchers:
                continue

            if matcher_id in results:
                warnings.warn(f"Duplicate matcher ID: {matcher_id}.")

            if matcher_id in suppressed_matchers:
                continue

            result: MatcherResult
            try:
                if _is_matcher_v1(matcher):
                    result = _convert_matcher_v1_result_to_v2(
                        matcher(text), type=_UNKNOWN_TYPE
                    )
                elif _is_matcher_v2(matcher):
                    result = matcher(context)
                else:
                    api_version = _get_matcher_api_version(matcher)
                    raise ValueError(f"Unsupported API version {api_version}")
            except BaseException:
                # Show the ugly traceback if the matcher causes an
                # exception, but do NOT crash the kernel!
                sys.excepthook(*sys.exc_info())
                continue

            # set default value for matched fragment if suffix was not selected.
            result["matched_fragment"] = result.get("matched_fragment", context.token)

            if not suppressed_matchers:
                suppression_recommended: Union[bool, Set[str]] = result.get(
                    "suppress", False
                )

                suppression_config = (
                    self.suppress_competing_matchers.get(matcher_id, None)
                    if isinstance(self.suppress_competing_matchers, dict)
                    else self.suppress_competing_matchers
                )
                should_suppress = (
                    (suppression_config is True)
                    or (suppression_recommended and (suppression_config is not False))
                ) and has_any_completions(result)

                if should_suppress:
                    suppression_exceptions: Set[str] = result.get(
                        "do_not_suppress", set()
                    )
                    if isinstance(suppression_recommended, Iterable):
                        to_suppress = set(suppression_recommended)
                    else:
                        to_suppress = set(matchers)
                    suppressed_matchers = to_suppress - suppression_exceptions

                    new_results = {}
                    for previous_matcher_id, previous_result in results.items():
                        if previous_matcher_id not in suppressed_matchers:
                            new_results[previous_matcher_id] = previous_result
                    results = new_results

            results[matcher_id] = result

        _, matches = self._arrange_and_extract(
            results,
            # TODO: Jedi completions are not included in the legacy stateful API;
            # was this deliberate or an omission? If it was an omission, we can
            # remove the filtering step, otherwise remove this comment.
            skip_matchers={jedi_matcher_id},
            abort_if_offset_changes=False,
        )

        # populate legacy stateful API
        self.matches = matches

        return results

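The suppression rules in `_complete` above resolve three inputs: the matcher's own recommendation, the user's `suppress_competing_matchers` configuration (which wins when set), and the matcher's `do_not_suppress` exceptions; nothing is suppressed unless the winning matcher actually produced completions. A standalone sketch of that precedence, using plain sets in place of matcher objects (`resolve_suppression` is a hypothetical helper, not IPython API):

```python
from typing import Optional, Set, Union


def resolve_suppression(
    recommended: Union[bool, Set[str]],
    config: Optional[bool],
    all_matchers: Set[str],
    exceptions: Set[str],
    has_completions: bool,
) -> Set[str]:
    """Return the set of matcher ids to suppress, mirroring the precedence above."""
    # an explicit per-matcher config value overrides the matcher's recommendation
    should = (
        (config is True) or (bool(recommended) and config is not False)
    ) and has_completions
    if not should:
        return set()
    # a set recommendation names specific victims; True means "everyone"
    to_suppress = set(recommended) if isinstance(recommended, set) else set(all_matchers)
    return to_suppress - exceptions


matchers = {"a", "b", "c"}
# matcher recommends suppressing everything, config silent -> all but exceptions
print(resolve_suppression(True, None, matchers, {"b"}, True))
```

Setting the config to `False` for a matcher vetoes its recommendation entirely, which is the escape hatch when a greedy matcher hides useful results.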
    @staticmethod
    def _deduplicate(
        matches: Sequence[AnyCompletion],
    ) -> Iterable[AnyCompletion]:
        filtered_matches: Dict[str, AnyCompletion] = {}
        for match in matches:
            text = match.text
            if (
                text not in filtered_matches
                or filtered_matches[text].type == _UNKNOWN_TYPE
            ):
                filtered_matches[text] = match

        return filtered_matches.values()

    @staticmethod
    def _sort(matches: Sequence[AnyCompletion]):
        return sorted(matches, key=lambda x: completions_sorting_key(x.text))

    @context_matcher()
    def fwd_unicode_matcher(self, context: CompletionContext):
        """Same as :any:`fwd_unicode_match`, but adapted to the new Matcher API."""
        # TODO: use `context.limit` to terminate early once we matched the maximum
        # number that will be used downstream; can be added as an optional to
        # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
        fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

    def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
        """
        Forward match a string starting with a backslash with a list of
        potential Unicode completions.

        Will compute the list of Unicode character names on first call and
        cache it.

        .. deprecated:: 8.6
            You can use :meth:`fwd_unicode_matcher` instead.

        Returns
        -------
        A tuple with:
            - the matched text (empty if no matches),
            - a list of potential completions (an empty tuple otherwise).
        """
        # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
        # We could do a faster match using a Trie.

        # Using pygtrie the following seems to work:

        # s = PrefixSet()

        # for c in range(0,0x10FFFF + 1):
        #     try:
        #         s.add(unicodedata.name(chr(c)))
        #     except ValueError:
        #         pass
        # [''.join(k) for k in s.iter(prefix)]

        # But this needs to be timed and adds an extra dependency.

        slashpos = text.rfind('\\')
        # if text contains a backslash
        if slashpos > -1:
            # PERF: It's important that we don't access self._unicode_names
            # until we're inside this if-block. _unicode_names is lazily
            # initialized, and it takes a user-noticeable amount of time to
            # initialize it, so we don't want to initialize it unless we're
            # actually going to use it.
            s = text[slashpos + 1 :]
            sup = s.upper()
            candidates = [x for x in self.unicode_names if x.startswith(sup)]
            if candidates:
                return s, candidates
            candidates = [x for x in self.unicode_names if sup in x]
            if candidates:
                return s, candidates
            splitsup = sup.split(" ")
            candidates = [
                x for x in self.unicode_names if all(u in x for u in splitsup)
            ]
            if candidates:
                return s, candidates

            return "", ()

        # if text does not contain a backslash
        else:
            return '', ()

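The TODO above suggests a trie for the prefix pass. A dependency-free middle ground is binary search over a pre-sorted name list via the stdlib `bisect` module; this only accelerates the `startswith` pass, not the substring fallbacks, and is a sketch rather than what IPython ships:

```python
import bisect
from typing import List, Sequence


def prefix_matches(sorted_names: Sequence[str], prefix: str) -> List[str]:
    """All names starting with *prefix*, via two bisects on a pre-sorted list."""
    lo = bisect.bisect_left(sorted_names, prefix)
    # '\uffff' sorts after any realistic continuation of the prefix,
    # so [lo, hi) brackets exactly the names sharing the prefix
    hi = bisect.bisect_right(sorted_names, prefix + "\uffff")
    return list(sorted_names[lo:hi])


names = sorted(["ALPHA", "ALMOST EQUAL TO", "BETA", "GAMMA"])
print(prefix_matches(names, "AL"))
```

Sorting is a one-time cost that fits naturally into the lazy `unicode_names` initialization; each lookup then does O(log n) comparisons instead of a full scan.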
    @property
    def unicode_names(self) -> List[str]:
        """List of names of unicode code points that can be completed.

        The list is lazily initialized on first access.
        """
        if self._unicode_names is None:
            self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)

        return self._unicode_names

def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:
                pass
    return names
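`_unicode_name_compute` leans on the fact that `unicodedata.name` raises `ValueError` for code points without a name (unassigned or control characters), which is why the bare `pass` is correct there. A quick illustration over a toy range; `names_in_ranges` is just a local mirror of the function above:

```python
import unicodedata
from typing import List, Tuple


def names_in_ranges(ranges: List[Tuple[int, int]]) -> List[str]:
    # same shape as _unicode_name_compute above, over small illustrative ranges
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:
                pass  # unassigned / control code points have no name
    return names


print(names_in_ranges([(0x41, 0x43)]))  # 'A' and 'B'
```

Restricting the scan to curated ranges (`_UNICODE_RANGES`) rather than the whole 0..0x10FFFF space is what keeps the lazy initialization tolerable.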
@@ -1,1769 +1,1819
# encoding: utf-8
"""Tests for the IPython tab-completion machinery."""

# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.

import os
import pytest
import sys
import textwrap
import unittest

from importlib.metadata import version


from contextlib import contextmanager

from traitlets.config.loader import Config
from IPython import get_ipython
from IPython.core import completer
from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
from IPython.utils.generics import complete_object
from IPython.testing import decorators as dec

from IPython.core.completer import (
    Completion,
    provisionalcompleter,
    match_dict_keys,
    _deduplicate_completions,
    _match_number_in_dict_key_prefix,
    completion_matcher,
    SimpleCompletion,
    CompletionContext,
    _unicode_name_compute,
    _UNICODE_RANGES,
)

from packaging.version import parse

42 @contextmanager
43 def jedi_status(status: bool):
44 completer = get_ipython().Completer
45 try:
46 old = completer.use_jedi
47 completer.use_jedi = status
48 yield
49 finally:
50 completer.use_jedi = old
51
52
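The `jedi_status` context manager added above follows the usual save/toggle/restore pattern: stash the current value, set the new one, and restore in `finally` even if the body raises. A self-contained sketch (the `Completer` class here is a stand-in, not IPython's):

```python
from contextlib import contextmanager

class Completer:
    # Stand-in for IPython's Completer with one toggleable flag.
    use_jedi = True

completer = Completer()

@contextmanager
def jedi_status(status: bool):
    # Temporarily set the flag, restoring the old value even on error.
    old = completer.use_jedi
    try:
        completer.use_jedi = status
        yield
    finally:
        completer.use_jedi = old

with jedi_status(False):
    assert completer.use_jedi is False
assert completer.use_jedi is True  # restored after the with-block
```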
39 # -----------------------------------------------------------------------------
53 # -----------------------------------------------------------------------------
40 # Test functions
54 # Test functions
41 # -----------------------------------------------------------------------------
55 # -----------------------------------------------------------------------------
42
56
43
57
44 def recompute_unicode_ranges():
58 def recompute_unicode_ranges():
45 """
59 """
46 Utility to recompute the largest gap of unicode code points that have no name.
60 Utility to recompute the largest gap of unicode code points that have no name.
47
61
48 Use it to recompute the gap in the global _UNICODE_RANGES of completer.py.
62 Use it to recompute the gap in the global _UNICODE_RANGES of completer.py.
49 """
63 """
50 import itertools
64 import itertools
51 import unicodedata
65 import unicodedata
52
66
53 valid = []
67 valid = []
54 for c in range(0, 0x10FFFF + 1):
68 for c in range(0, 0x10FFFF + 1):
55 try:
69 try:
56 unicodedata.name(chr(c))
70 unicodedata.name(chr(c))
57 except ValueError:
71 except ValueError:
58 continue
72 continue
59 valid.append(c)
73 valid.append(c)
60
74
61 def ranges(i):
75 def ranges(i):
62 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
76 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
63 b = list(b)
77 b = list(b)
64 yield b[0][1], b[-1][1]
78 yield b[0][1], b[-1][1]
65
79
66 rg = list(ranges(valid))
80 rg = list(ranges(valid))
67 lens = []
81 lens = []
68 gap_lens = []
82 gap_lens = []
69 pstart, pstop = 0, 0
83 _pstart, pstop = 0, 0
70 for start, stop in rg:
84 for start, stop in rg:
71 lens.append(stop - start)
85 lens.append(stop - start)
72 gap_lens.append(
86 gap_lens.append(
73 (
87 (
74 start - pstop,
88 start - pstop,
75 hex(pstop + 1),
89 hex(pstop + 1),
76 hex(start),
90 hex(start),
77 f"{round((start - pstop)/0xe01f0*100)}%",
91 f"{round((start - pstop)/0xe01f0*100)}%",
78 )
92 )
79 )
93 )
80 pstart, pstop = start, stop
94 _pstart, pstop = start, stop
81
95
82 return sorted(gap_lens)[-1]
96 return sorted(gap_lens)[-1]
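The `ranges` helper inside `recompute_unicode_ranges` relies on the classic `itertools.groupby` idiom for grouping consecutive integers: pairs whose value-minus-index difference is constant belong to the same run. In isolation:

```python
import itertools

def ranges(i):
    # Group consecutive integers into (start, stop) runs. For a sorted
    # sequence, value - index is constant exactly within one run, so
    # groupby on that key splits the sequence at every gap.
    for _, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
        b = list(b)
        yield b[0][1], b[-1][1]

print(list(ranges([1, 2, 3, 7, 8, 20])))  # [(1, 3), (7, 8), (20, 20)]
```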
83
97
84
98
85 def test_unicode_range():
99 def test_unicode_range():
86 """
100 """
87 Test that the ranges we test for unicode names give the same number of
101 Test that the ranges we test for unicode names give the same number of
88 results than testing the full length.
88 results as testing the full range.
102 results as testing the full range.
103 """
90 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
91
104
92 expected_list = _unicode_name_compute([(0, 0x110000)])
105 expected_list = _unicode_name_compute([(0, 0x110000)])
93 test = _unicode_name_compute(_UNICODE_RANGES)
106 test = _unicode_name_compute(_UNICODE_RANGES)
94 len_exp = len(expected_list)
107 len_exp = len(expected_list)
95 len_test = len(test)
108 len_test = len(test)
96
109
97 # do not inline the len() or, on error, pytest will try to print the
110 # do not inline the len() or, on error, pytest will try to print the
98 # 130,000+ elements.
111 # 130,000+ elements.
99 message = None
112 message = None
100 if len_exp != len_test or len_exp > 131808:
113 if len_exp != len_test or len_exp > 131808:
101 size, start, stop, prct = recompute_unicode_ranges()
114 size, start, stop, prct = recompute_unicode_ranges()
102 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
115 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
103 likely due to a new release of Python. We've found that the biggest gap
116 likely due to a new release of Python. We've found that the biggest gap
104 in unicode characters has shrunk to {size} characters
117 in unicode characters has shrunk to {size} characters
105 ({prct}), from {start} to {stop}. In completer.py, likely update to
118 ({prct}), from {start} to {stop}. In completer.py, likely update to
106
119
107 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
120 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
108
121
109 And update the assertion below to use
122 And update the assertion below to use
110
123
111 len_exp <= {len_exp}
124 len_exp <= {len_exp}
112 """
125 """
113 assert len_exp == len_test, message
126 assert len_exp == len_test, message
114
127
115 # fail if new unicode symbols have been added.
128 # fail if new unicode symbols have been added.
116 assert len_exp <= 143668, message
129 assert len_exp <= 143668, message
117
130
118
131
119 @contextmanager
132 @contextmanager
120 def greedy_completion():
133 def greedy_completion():
121 ip = get_ipython()
134 ip = get_ipython()
122 greedy_original = ip.Completer.greedy
135 greedy_original = ip.Completer.greedy
123 try:
136 try:
124 ip.Completer.greedy = True
137 ip.Completer.greedy = True
125 yield
138 yield
126 finally:
139 finally:
127 ip.Completer.greedy = greedy_original
140 ip.Completer.greedy = greedy_original
128
141
129
142
130 @contextmanager
143 @contextmanager
131 def evaluation_policy(evaluation: str):
144 def evaluation_policy(evaluation: str):
132 ip = get_ipython()
145 ip = get_ipython()
133 evaluation_original = ip.Completer.evaluation
146 evaluation_original = ip.Completer.evaluation
134 try:
147 try:
135 ip.Completer.evaluation = evaluation
148 ip.Completer.evaluation = evaluation
136 yield
149 yield
137 finally:
150 finally:
138 ip.Completer.evaluation = evaluation_original
151 ip.Completer.evaluation = evaluation_original
139
152
140
153
141 @contextmanager
154 @contextmanager
142 def custom_matchers(matchers):
155 def custom_matchers(matchers):
143 ip = get_ipython()
156 ip = get_ipython()
144 try:
157 try:
145 ip.Completer.custom_matchers.extend(matchers)
158 ip.Completer.custom_matchers.extend(matchers)
146 yield
159 yield
147 finally:
160 finally:
148 ip.Completer.custom_matchers.clear()
161 ip.Completer.custom_matchers.clear()
149
162
150
163
151 def test_protect_filename():
164 if sys.platform == "win32":
152 if sys.platform == "win32":
165 pairs = [
153 pairs = [
166 ("abc", "abc"),
154 ("abc", "abc"),
167 (" abc", '" abc"'),
155 (" abc", '" abc"'),
168 ("a bc", '"a bc"'),
156 ("a bc", '"a bc"'),
169 ("a bc", '"a bc"'),
157 ("a bc", '"a bc"'),
170 (" bc", '" bc"'),
158 (" bc", '" bc"'),
171 ]
159 ]
172 else:
160 else:
173 pairs = [
161 pairs = [
174 ("abc", "abc"),
162 ("abc", "abc"),
175 (" abc", r"\ abc"),
163 (" abc", r"\ abc"),
176 ("a bc", r"a\ bc"),
164 ("a bc", r"a\ bc"),
177 ("a bc", r"a\ \ bc"),
165 ("a bc", r"a\ \ bc"),
178 (" bc", r"\ \ bc"),
166 (" bc", r"\ \ bc"),
179 # On posix, we also protect parens and other special characters.
167 # On posix, we also protect parens and other special characters.
180 ("a(bc", r"a\(bc"),
168 ("a(bc", r"a\(bc"),
181 ("a)bc", r"a\)bc"),
169 ("a)bc", r"a\)bc"),
182 ("a( )bc", r"a\(\ \)bc"),
170 ("a( )bc", r"a\(\ \)bc"),
183 ("a[1]bc", r"a\[1\]bc"),
171 ("a[1]bc", r"a\[1\]bc"),
184 ("a{1}bc", r"a\{1\}bc"),
172 ("a{1}bc", r"a\{1\}bc"),
185 ("a#bc", r"a\#bc"),
173 ("a#bc", r"a\#bc"),
186 ("a?bc", r"a\?bc"),
174 ("a?bc", r"a\?bc"),
187 ("a=bc", r"a\=bc"),
175 ("a=bc", r"a\=bc"),
188 ("a\\bc", r"a\\bc"),
176 ("a\\bc", r"a\\bc"),
189 ("a|bc", r"a\|bc"),
177 ("a|bc", r"a\|bc"),
190 ("a;bc", r"a\;bc"),
178 ("a;bc", r"a\;bc"),
191 ("a:bc", r"a\:bc"),
179 ("a:bc", r"a\:bc"),
192 ("a'bc", r"a\'bc"),
180 ("a'bc", r"a\'bc"),
193 ("a*bc", r"a\*bc"),
181 ("a*bc", r"a\*bc"),
194 ('a"bc', r"a\"bc"),
182 ('a"bc', r"a\"bc"),
195 ("a^bc", r"a\^bc"),
183 ("a^bc", r"a\^bc"),
196 ("a&bc", r"a\&bc"),
184 ("a&bc", r"a\&bc"),
197 ]
185 ]
198
186 # run the actual tests
199
187 for s1, s2 in pairs:
200 @pytest.mark.parametrize("s1,expected", pairs)
188 s1p = completer.protect_filename(s1)
201 def test_protect_filename(s1, expected):
189 assert s1p == s2
202 assert completer.protect_filename(s1) == expected
190
203
191
204
192 def check_line_split(splitter, test_specs):
205 def check_line_split(splitter, test_specs):
193 for part1, part2, split in test_specs:
206 for part1, part2, split in test_specs:
194 cursor_pos = len(part1)
207 cursor_pos = len(part1)
195 line = part1 + part2
208 line = part1 + part2
196 out = splitter.split_line(line, cursor_pos)
209 out = splitter.split_line(line, cursor_pos)
197 assert out == split
210 assert out == split
198
211
199 def test_line_split():
212 def test_line_split():
200 """Basic line splitter test with default specs."""
213 """Basic line splitter test with default specs."""
201 sp = completer.CompletionSplitter()
214 sp = completer.CompletionSplitter()
202 # The format of the test specs is: part1, part2, expected answer. Parts 1
215 # The format of the test specs is: part1, part2, expected answer. Parts 1
203 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
216 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
204 # was at the end of part1. So an empty part2 represents someone hitting
217 # was at the end of part1. So an empty part2 represents someone hitting
205 # tab at the end of the line, the most common case.
218 # tab at the end of the line, the most common case.
206 t = [
219 t = [
207 ("run some/script", "", "some/script"),
220 ("run some/script", "", "some/script"),
208 ("run scripts/er", "ror.py foo", "scripts/er"),
221 ("run scripts/er", "ror.py foo", "scripts/er"),
209 ("echo $HOM", "", "HOM"),
222 ("echo $HOM", "", "HOM"),
210 ("print sys.pa", "", "sys.pa"),
223 ("print sys.pa", "", "sys.pa"),
211 ("print(sys.pa", "", "sys.pa"),
224 ("print(sys.pa", "", "sys.pa"),
212 ("execfile('scripts/er", "", "scripts/er"),
225 ("execfile('scripts/er", "", "scripts/er"),
213 ("a[x.", "", "x."),
226 ("a[x.", "", "x."),
214 ("a[x.", "y", "x."),
227 ("a[x.", "y", "x."),
215 ('cd "some_file/', "", "some_file/"),
228 ('cd "some_file/', "", "some_file/"),
216 ]
229 ]
217 check_line_split(sp, t)
230 check_line_split(sp, t)
218 # Ensure splitting works OK with unicode by re-running the tests with
231 # Ensure splitting works OK with unicode by re-running the tests with
219 # all inputs turned into unicode
232 # all inputs turned into unicode
220 check_line_split(sp, [map(str, p) for p in t])
233 check_line_split(sp, [map(str, p) for p in t])
221
234
222
235
223 class NamedInstanceClass:
236 class NamedInstanceClass:
224 instances = {}
237 instances = {}
225
238
226 def __init__(self, name):
239 def __init__(self, name):
227 self.instances[name] = self
240 self.instances[name] = self
228
241
229 @classmethod
242 @classmethod
230 def _ipython_key_completions_(cls):
243 def _ipython_key_completions_(cls):
231 return cls.instances.keys()
244 return cls.instances.keys()
232
245
233
246
234 class KeyCompletable:
247 class KeyCompletable:
235 def __init__(self, things=()):
248 def __init__(self, things=()):
236 self.things = things
249 self.things = things
237
250
238 def _ipython_key_completions_(self):
251 def _ipython_key_completions_(self):
239 return list(self.things)
252 return list(self.things)
240
253
241
254
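Both classes above implement the `_ipython_key_completions_` protocol, which lets arbitrary objects offer dict-style key completions for `obj["<tab>`. A minimal self-contained example (the `Config` class is illustrative):

```python
class Config:
    # Any object can advertise completable keys to IPython by defining
    # _ipython_key_completions_; the completer calls it when the user
    # hits tab inside obj[...].
    def __init__(self, data: dict):
        self._data = data

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        return list(self._data)

c = Config({"host": "localhost", "port": 8888})
print(c._ipython_key_completions_())  # ['host', 'port']
```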
242 class TestCompleter(unittest.TestCase):
255 class TestCompleter(unittest.TestCase):
243 def setUp(self):
256 def setUp(self):
244 """
257 """
245 We want to silence all PendingDeprecationWarning when testing the completer
258 We want to silence all PendingDeprecationWarning when testing the completer
246 """
259 """
247 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
260 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
248 self._assertwarns.__enter__()
261 self._assertwarns.__enter__()
249
262
250 def tearDown(self):
263 def tearDown(self):
251 try:
264 try:
252 self._assertwarns.__exit__(None, None, None)
265 self._assertwarns.__exit__(None, None, None)
253 except AssertionError:
266 except AssertionError:
254 pass
267 pass
255
268
256 def test_custom_completion_error(self):
269 def test_custom_completion_error(self):
257 """Test that errors from custom attribute completers are silenced."""
270 """Test that errors from custom attribute completers are silenced."""
258 ip = get_ipython()
271 ip = get_ipython()
259
272
260 class A:
273 class A:
261 pass
274 pass
262
275
263 ip.user_ns["x"] = A()
276 ip.user_ns["x"] = A()
264
277
265 @complete_object.register(A)
278 @complete_object.register(A)
266 def complete_A(a, existing_completions):
279 def complete_A(a, existing_completions):
267 raise TypeError("this should be silenced")
280 raise TypeError("this should be silenced")
268
281
269 ip.complete("x.")
282 ip.complete("x.")
270
283
271 def test_custom_completion_ordering(self):
284 def test_custom_completion_ordering(self):
272 """Test that custom completion matches are returned in the order given."""
285 """Test that custom completion matches are returned in the order given."""
273 ip = get_ipython()
286 ip = get_ipython()
274
287
275 _, matches = ip.complete('in')
288 _, matches = ip.complete('in')
276 assert matches.index('input') < matches.index('int')
289 assert matches.index('input') < matches.index('int')
277
290
278 def complete_example(a):
291 def complete_example(a):
279 return ['example2', 'example1']
292 return ['example2', 'example1']
280
293
281 ip.Completer.custom_completers.add_re('ex*', complete_example)
294 ip.Completer.custom_completers.add_re('ex*', complete_example)
282 _, matches = ip.complete('ex')
295 _, matches = ip.complete('ex')
283 assert matches.index('example2') < matches.index('example1')
296 assert matches.index('example2') < matches.index('example1')
284
297
285 def test_unicode_completions(self):
298 def test_unicode_completions(self):
286 ip = get_ipython()
299 ip = get_ipython()
287 # Some strings that trigger different types of completion. Check them both
300 # Some strings that trigger different types of completion. Check them both
288 # in str and unicode forms
301 # in str and unicode forms
289 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
302 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
290 for t in s + list(map(str, s)):
303 for t in s + list(map(str, s)):
291 # We don't need to check exact completion values (they may change
304 # We don't need to check exact completion values (they may change
292 # depending on the state of the namespace, but at least no exceptions
305 # depending on the state of the namespace, but at least no exceptions
293 # should be thrown and the return value should be a pair of text, list
306 # should be thrown and the return value should be a pair of text, list
294 # values.
307 # values.
295 text, matches = ip.complete(t)
308 text, matches = ip.complete(t)
296 self.assertIsInstance(text, str)
309 self.assertIsInstance(text, str)
297 self.assertIsInstance(matches, list)
310 self.assertIsInstance(matches, list)
298
311
299 def test_latex_completions(self):
312 def test_latex_completions(self):
300 from IPython.core.latex_symbols import latex_symbols
301 import random
302
313
303 ip = get_ipython()
314 ip = get_ipython()
304 # Test some random unicode symbols
315 # Test some random unicode symbols
305 keys = random.sample(sorted(latex_symbols), 10)
316 keys = random.sample(sorted(latex_symbols), 10)
306 for k in keys:
317 for k in keys:
307 text, matches = ip.complete(k)
318 text, matches = ip.complete(k)
308 self.assertEqual(text, k)
319 self.assertEqual(text, k)
309 self.assertEqual(matches, [latex_symbols[k]])
320 self.assertEqual(matches, [latex_symbols[k]])
310 # Test a more complex line
321 # Test a more complex line
311 text, matches = ip.complete("print(\\alpha")
322 text, matches = ip.complete("print(\\alpha")
312 self.assertEqual(text, "\\alpha")
323 self.assertEqual(text, "\\alpha")
313 self.assertEqual(matches[0], latex_symbols["\\alpha"])
324 self.assertEqual(matches[0], latex_symbols["\\alpha"])
314 # Test multiple matching latex symbols
325 # Test multiple matching latex symbols
315 text, matches = ip.complete("\\al")
326 text, matches = ip.complete("\\al")
316 self.assertIn("\\alpha", matches)
327 self.assertIn("\\alpha", matches)
317 self.assertIn("\\aleph", matches)
328 self.assertIn("\\aleph", matches)
318
329
319 def test_latex_no_results(self):
330 def test_latex_no_results(self):
320 """
331 """
321 Forward latex completion should return nothing in either field if nothing is found.
332 Forward latex completion should return nothing in either field if nothing is found.
322 """
333 """
323 ip = get_ipython()
334 ip = get_ipython()
324 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
335 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
325 self.assertEqual(text, "")
336 self.assertEqual(text, "")
326 self.assertEqual(matches, ())
337 self.assertEqual(matches, ())
327
338
328 def test_back_latex_completion(self):
339 def test_back_latex_completion(self):
329 ip = get_ipython()
340 ip = get_ipython()
330
341
331 # do not return more than 1 matches for \beta, only the latex one.
342 # do not return more than 1 matches for \beta, only the latex one.
332 name, matches = ip.complete("\\Ξ²")
343 name, matches = ip.complete("\\Ξ²")
333 self.assertEqual(matches, ["\\beta"])
344 self.assertEqual(matches, ["\\beta"])
334
345
335 def test_back_unicode_completion(self):
346 def test_back_unicode_completion(self):
336 ip = get_ipython()
347 ip = get_ipython()
337
348
338 name, matches = ip.complete("\\β…€")
349 name, matches = ip.complete("\\β…€")
339 self.assertEqual(matches, ["\\ROMAN NUMERAL FIVE"])
350 self.assertEqual(matches, ["\\ROMAN NUMERAL FIVE"])
340
351
341 def test_forward_unicode_completion(self):
352 def test_forward_unicode_completion(self):
342 ip = get_ipython()
353 ip = get_ipython()
343
354
344 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
355 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
345 self.assertEqual(matches, ["β…€"]) # This is not a V
356 self.assertEqual(matches, ["β…€"]) # This is not a V
346 self.assertEqual(matches, ["\u2164"]) # same as above but explicit.
357 self.assertEqual(matches, ["\u2164"]) # same as above but explicit.
347
358
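The back/forward unicode completions tested here map between characters and their unicode names, a translation the stdlib exposes directly via `unicodedata`:

```python
import unicodedata

ch = "\u2164"  # the single character ROMAN NUMERAL FIVE (not the letter V)

# Backward direction: from a character to the name one could complete on.
print(unicodedata.name(ch))  # ROMAN NUMERAL FIVE

# Forward direction: from the name back to the character.
print(unicodedata.lookup("ROMAN NUMERAL FIVE") == ch)  # True
```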
348 def test_delim_setting(self):
359 def test_delim_setting(self):
349 sp = completer.CompletionSplitter()
360 sp = completer.CompletionSplitter()
350 sp.delims = " "
361 sp.delims = " "
351 self.assertEqual(sp.delims, " ")
362 self.assertEqual(sp.delims, " ")
352 self.assertEqual(sp._delim_expr, r"[\ ]")
363 self.assertEqual(sp._delim_expr, r"[\ ]")
353
364
354 def test_spaces(self):
365 def test_spaces(self):
355 """Test with only spaces as split chars."""
366 """Test with only spaces as split chars."""
356 sp = completer.CompletionSplitter()
367 sp = completer.CompletionSplitter()
357 sp.delims = " "
368 sp.delims = " "
358 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
369 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
359 check_line_split(sp, t)
370 check_line_split(sp, t)
360
371
361 def test_has_open_quotes1(self):
372 def test_has_open_quotes1(self):
362 for s in ["'", "'''", "'hi' '"]:
373 for s in ["'", "'''", "'hi' '"]:
363 self.assertEqual(completer.has_open_quotes(s), "'")
374 self.assertEqual(completer.has_open_quotes(s), "'")
364
375
365 def test_has_open_quotes2(self):
376 def test_has_open_quotes2(self):
366 for s in ['"', '"""', '"hi" "']:
377 for s in ['"', '"""', '"hi" "']:
367 self.assertEqual(completer.has_open_quotes(s), '"')
378 self.assertEqual(completer.has_open_quotes(s), '"')
368
379
369 def test_has_open_quotes3(self):
380 def test_has_open_quotes3(self):
370 for s in ["''", "''' '''", "'hi' 'ipython'"]:
381 for s in ["''", "''' '''", "'hi' 'ipython'"]:
371 self.assertFalse(completer.has_open_quotes(s))
382 self.assertFalse(completer.has_open_quotes(s))
372
383
373 def test_has_open_quotes4(self):
384 def test_has_open_quotes4(self):
374 for s in ['""', '""" """', '"hi" "ipython"']:
385 for s in ['""', '""" """', '"hi" "ipython"']:
375 self.assertFalse(completer.has_open_quotes(s))
386 self.assertFalse(completer.has_open_quotes(s))
376
387
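These four tests pin down `has_open_quotes`; a minimal implementation consistent with them is plain parity-counting (a sketch that, like the tests, ignores escaped quotes):

```python
def has_open_quotes(s: str):
    # If a quote character appears an odd number of times, that quote is
    # still open; return the open quote character, else False.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    return False

print(has_open_quotes("'hi' '"))          # '
print(has_open_quotes('"hi" "ipython"'))  # False
```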
377 @pytest.mark.xfail(
388 @pytest.mark.xfail(
378 sys.platform == "win32", reason="abspath completions fail on Windows"
389 sys.platform == "win32", reason="abspath completions fail on Windows"
379 )
390 )
380 def test_abspath_file_completions(self):
391 def test_abspath_file_completions(self):
381 ip = get_ipython()
392 ip = get_ipython()
382 with TemporaryDirectory() as tmpdir:
393 with TemporaryDirectory() as tmpdir:
383 prefix = os.path.join(tmpdir, "foo")
394 prefix = os.path.join(tmpdir, "foo")
384 suffixes = ["1", "2"]
395 suffixes = ["1", "2"]
385 names = [prefix + s for s in suffixes]
396 names = [prefix + s for s in suffixes]
386 for n in names:
397 for n in names:
387 open(n, "w", encoding="utf-8").close()
398 open(n, "w", encoding="utf-8").close()
388
399
389 # Check simple completion
400 # Check simple completion
390 c = ip.complete(prefix)[1]
401 c = ip.complete(prefix)[1]
391 self.assertEqual(c, names)
402 self.assertEqual(c, names)
392
403
393 # Now check with a function call
404 # Now check with a function call
394 cmd = 'a = f("%s' % prefix
405 cmd = 'a = f("%s' % prefix
395 c = ip.complete(prefix, cmd)[1]
406 c = ip.complete(prefix, cmd)[1]
396 comp = [prefix + s for s in suffixes]
407 comp = [prefix + s for s in suffixes]
397 self.assertEqual(c, comp)
408 self.assertEqual(c, comp)
398
409
399 def test_local_file_completions(self):
410 def test_local_file_completions(self):
400 ip = get_ipython()
411 ip = get_ipython()
401 with TemporaryWorkingDirectory():
412 with TemporaryWorkingDirectory():
402 prefix = "./foo"
413 prefix = "./foo"
403 suffixes = ["1", "2"]
414 suffixes = ["1", "2"]
404 names = [prefix + s for s in suffixes]
415 names = [prefix + s for s in suffixes]
405 for n in names:
416 for n in names:
406 open(n, "w", encoding="utf-8").close()
417 open(n, "w", encoding="utf-8").close()
407
418
408 # Check simple completion
419 # Check simple completion
409 c = ip.complete(prefix)[1]
420 c = ip.complete(prefix)[1]
410 self.assertEqual(c, names)
421 self.assertEqual(c, names)
411
422
412 # Now check with a function call
423 # Now check with a function call
413 cmd = 'a = f("%s' % prefix
424 cmd = 'a = f("%s' % prefix
414 c = ip.complete(prefix, cmd)[1]
425 c = ip.complete(prefix, cmd)[1]
415 comp = {prefix + s for s in suffixes}
426 comp = {prefix + s for s in suffixes}
416 self.assertTrue(comp.issubset(set(c)))
427 self.assertTrue(comp.issubset(set(c)))
417
428
418 def test_quoted_file_completions(self):
429 def test_quoted_file_completions(self):
419 ip = get_ipython()
430 ip = get_ipython()
420
431
421 def _(text):
432 def _(text):
422 return ip.Completer._complete(
433 return ip.Completer._complete(
423 cursor_line=0, cursor_pos=len(text), full_text=text
434 cursor_line=0, cursor_pos=len(text), full_text=text
424 )["IPCompleter.file_matcher"]["completions"]
435 )["IPCompleter.file_matcher"]["completions"]
425
436
426 with TemporaryWorkingDirectory():
437 with TemporaryWorkingDirectory():
427 name = "foo'bar"
438 name = "foo'bar"
428 open(name, "w", encoding="utf-8").close()
439 open(name, "w", encoding="utf-8").close()
429
440
430 # Don't escape Windows
441 # Don't escape Windows
431 escaped = name if sys.platform == "win32" else "foo\\'bar"
442 escaped = name if sys.platform == "win32" else "foo\\'bar"
432
443
433 # Single quote matches embedded single quote
444 # Single quote matches embedded single quote
434 c = _("open('foo")[0]
445 c = _("open('foo")[0]
435 self.assertEqual(c.text, escaped)
446 self.assertEqual(c.text, escaped)
436
447
437 # Double quote requires no escape
448 # Double quote requires no escape
438 c = _('open("foo')[0]
449 c = _('open("foo')[0]
439 self.assertEqual(c.text, name)
450 self.assertEqual(c.text, name)
440
451
441 # No quote requires an escape
452 # No quote requires an escape
442 c = _("%ls foo")[0]
453 c = _("%ls foo")[0]
443 self.assertEqual(c.text, escaped)
454 self.assertEqual(c.text, escaped)
444
455
445 @pytest.mark.xfail(
456 @pytest.mark.xfail(
446 sys.version_info.releaselevel in ("alpha",),
457 sys.version_info.releaselevel in ("alpha",),
447 reason="Parso does not yet parse 3.13",
458 reason="Parso does not yet parse 3.13",
448 )
459 )
449 def test_all_completions_dups(self):
460 def test_all_completions_dups(self):
450 """
461 """
451 Make sure the output of `IPCompleter.all_completions` does not have
462 Make sure the output of `IPCompleter.all_completions` does not have
452 duplicated prefixes.
463 duplicated prefixes.
453 """
464 """
454 ip = get_ipython()
465 ip = get_ipython()
455 c = ip.Completer
466 c = ip.Completer
456 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
467 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
457 for jedi_status in [True, False]:
468 for jedi_status in [True, False]:
458 with provisionalcompleter():
469 with provisionalcompleter():
459 ip.Completer.use_jedi = jedi_status
470 ip.Completer.use_jedi = jedi_status
460 matches = c.all_completions("TestCl")
471 matches = c.all_completions("TestCl")
461 assert matches == ["TestClass"], (jedi_status, matches)
472 assert matches == ["TestClass"], (jedi_status, matches)
462 matches = c.all_completions("TestClass.")
473 matches = c.all_completions("TestClass.")
463 assert len(matches) > 2, (jedi_status, matches)
474 assert len(matches) > 2, (jedi_status, matches)
464 matches = c.all_completions("TestClass.a")
475 matches = c.all_completions("TestClass.a")
465 if jedi_status:
476 if jedi_status:
466 assert matches == ["TestClass.a", "TestClass.a1"], jedi_status
477 assert matches == ["TestClass.a", "TestClass.a1"], jedi_status
467 else:
478 else:
468 assert matches == [".a", ".a1"], jedi_status
479 assert matches == [".a", ".a1"], jedi_status
469
480
470 @pytest.mark.xfail(
481 @pytest.mark.xfail(
471 sys.version_info.releaselevel in ("alpha",),
482 sys.version_info.releaselevel in ("alpha",),
472 reason="Parso does not yet parse 3.13",
483 reason="Parso does not yet parse 3.13",
473 )
484 )
474 def test_jedi(self):
485 def test_jedi(self):
475 """
486 """
476 A couple of issues we had with Jedi.
487 A couple of issues we had with Jedi.
477 """
488 """
478 ip = get_ipython()
489 ip = get_ipython()
479
490
480 def _test_complete(reason, s, comp, start=None, end=None):
491 def _test_complete(reason, s, comp, start=None, end=None):
481 l = len(s)
492 l = len(s)
482 start = start if start is not None else l
493 start = start if start is not None else l
483 end = end if end is not None else l
494 end = end if end is not None else l
484 with provisionalcompleter():
495 with provisionalcompleter():
485 ip.Completer.use_jedi = True
496 ip.Completer.use_jedi = True
486 completions = set(ip.Completer.completions(s, l))
497 completions = set(ip.Completer.completions(s, l))
487 ip.Completer.use_jedi = False
498 ip.Completer.use_jedi = False
488 assert Completion(start, end, comp) in completions, reason
499 assert Completion(start, end, comp) in completions, reason
489
500
490 def _test_not_complete(reason, s, comp):
501 def _test_not_complete(reason, s, comp):
491 l = len(s)
502 l = len(s)
492 with provisionalcompleter():
503 with provisionalcompleter():
493 ip.Completer.use_jedi = True
504 ip.Completer.use_jedi = True
494 completions = set(ip.Completer.completions(s, l))
505 completions = set(ip.Completer.completions(s, l))
495 ip.Completer.use_jedi = False
506 ip.Completer.use_jedi = False
496 assert Completion(l, l, comp) not in completions, reason
507 assert Completion(l, l, comp) not in completions, reason
497
508
498 import jedi
509 import jedi
499
510
500 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
511 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
501 if jedi_version > (0, 10):
512 if jedi_version > (0, 10):
502 _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
513 _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
503 _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
514 _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
504 _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
515 _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
505 _test_complete("cover duplicate completions", "im", "import", 0, 2)
516 _test_complete("cover duplicate completions", "im", "import", 0, 2)
506
517
507 _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")
518 _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")
508
519
509 @pytest.mark.xfail(
520 @pytest.mark.xfail(
510 sys.version_info.releaselevel in ("alpha",),
521 sys.version_info.releaselevel in ("alpha",),
511 reason="Parso does not yet parse 3.13",
522 reason="Parso does not yet parse 3.13",
512 )
523 )
513 def test_completion_have_signature(self):
524 def test_completion_have_signature(self):
514 """
525 """
515 Let's make sure jedi is capable of pulling out the signature of the function we are completing.
526 Let's make sure jedi is capable of pulling out the signature of the function we are completing.
516 """
527 """
517 ip = get_ipython()
528 ip = get_ipython()
518 with provisionalcompleter():
529 with provisionalcompleter():
519 ip.Completer.use_jedi = True
530 ip.Completer.use_jedi = True
520 completions = ip.Completer.completions("ope", 3)
531 completions = ip.Completer.completions("ope", 3)
521 c = next(completions) # should be `open`
532 c = next(completions) # should be `open`
522 ip.Completer.use_jedi = False
533 ip.Completer.use_jedi = False
523 assert "file" in c.signature, "Signature of function was not found by completer"
534 assert "file" in c.signature, "Signature of function was not found by completer"
524 assert (
535 assert (
525 "encoding" in c.signature
536 "encoding" in c.signature
526 ), "Signature of function was not found by completer"
537 ), "Signature of function was not found by completer"
527
538
    @pytest.mark.xfail(
        sys.version_info.releaselevel in ("alpha",),
        reason="Parso does not yet parse 3.13",
    )
    def test_completions_have_type(self):
        """
        Let's make sure matchers provide the completion type.
        """
        ip = get_ipython()
        with provisionalcompleter():
            ip.Completer.use_jedi = False
            completions = ip.Completer.completions("%tim", 3)
            c = next(completions)  # should be `%time` or similar
            assert c.type == "magic", "Type of magic was not assigned by completer"

    @pytest.mark.xfail(
        parse(version("jedi")) <= parse("0.18.0"),
        reason="Known failure on jedi<=0.18.0",
        strict=True,
    )
    def test_deduplicate_completions(self):
        """
        Test that completions are correctly deduplicated (even if ranges are not the same).
        """
        ip = get_ipython()
        ip.ex(
            textwrap.dedent(
                """
                class Z:
                    zoo = 1
                """
            )
        )
        with provisionalcompleter():
            ip.Completer.use_jedi = True
            l = list(
                _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
            )
            ip.Completer.use_jedi = False

        assert len(l) == 1, "Completions (Z.z<tab>) should deduplicate: %s" % l
        assert l[0].text == "zoo"  # and not `it.accumulate`

    @pytest.mark.xfail(
        sys.version_info.releaselevel in ("alpha",),
        reason="Parso does not yet parse 3.13",
    )
    def test_greedy_completions(self):
        """
        Test the capability of the Greedy completer.

        Most of the tests here do not really show off the greedy completer; as
        proof, each of the cases below now passes with Jedi. The greedy
        completer is capable of more.

        See the :any:`test_dict_key_completion_contexts`

        """
        ip = get_ipython()
        ip.ex("a=list(range(5))")
        ip.ex("d = {'a b': str}")
        _, c = ip.complete(".", line="a[0].")
        self.assertFalse(".real" in c, "Shouldn't have completed on a[0]: %s" % c)

        def _(line, cursor_pos, expect, message, completion):
            with greedy_completion(), provisionalcompleter():
                ip.Completer.use_jedi = False
                _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
                self.assertIn(expect, c, message % c)

                ip.Completer.use_jedi = True
                with provisionalcompleter():
                    completions = ip.Completer.completions(line, cursor_pos)
                self.assertIn(completion, list(completions))

        with provisionalcompleter():
            _(
                "a[0].",
                5,
                ".real",
                "Should have completed on a[0].: %s",
                Completion(5, 5, "real"),
            )
            _(
                "a[0].r",
                6,
                ".real",
                "Should have completed on a[0].r: %s",
                Completion(5, 6, "real"),
            )

            _(
                "a[0].from_",
                10,
                ".from_bytes",
                "Should have completed on a[0].from_: %s",
                Completion(5, 10, "from_bytes"),
            )
            _(
                "assert str.star",
                14,
                ".startswith",
                "Should have completed on `assert str.star`: %s",
                Completion(11, 14, "startswith"),
            )
            _(
                "d['a b'].str",
                12,
                ".strip",
                "Should have completed on `d['a b'].str`: %s",
                Completion(9, 12, "strip"),
            )
            _(
                "a.app",
                4,
                ".append",
                "Should have completed on `a.app`: %s",
                Completion(2, 4, "append"),
            )

    def test_omit__names(self):
        # also happens to test IPCompleter as a configurable
        ip = get_ipython()
        ip._hidden_attr = 1
        ip._x = {}
        c = ip.Completer
        ip.ex("ip=get_ipython()")
        cfg = Config()
        cfg.IPCompleter.omit__names = 0
        c.update_config(cfg)
        with provisionalcompleter():
            c.use_jedi = False
            s, matches = c.complete("ip.")
            self.assertIn(".__str__", matches)
            self.assertIn("._hidden_attr", matches)

            # c.use_jedi = True
            # completions = set(c.completions('ip.', 3))
            # self.assertIn(Completion(3, 3, '__str__'), completions)
            # self.assertIn(Completion(3,3, "_hidden_attr"), completions)

        cfg = Config()
        cfg.IPCompleter.omit__names = 1
        c.update_config(cfg)
        with provisionalcompleter():
            c.use_jedi = False
            s, matches = c.complete("ip.")
            self.assertNotIn(".__str__", matches)
            # self.assertIn('ip._hidden_attr', matches)

            # c.use_jedi = True
            # completions = set(c.completions('ip.', 3))
            # self.assertNotIn(Completion(3,3,'__str__'), completions)
            # self.assertIn(Completion(3,3, "_hidden_attr"), completions)

        cfg = Config()
        cfg.IPCompleter.omit__names = 2
        c.update_config(cfg)
        with provisionalcompleter():
            c.use_jedi = False
            s, matches = c.complete("ip.")
            self.assertNotIn(".__str__", matches)
            self.assertNotIn("._hidden_attr", matches)

            # c.use_jedi = True
            # completions = set(c.completions('ip.', 3))
            # self.assertNotIn(Completion(3,3,'__str__'), completions)
            # self.assertNotIn(Completion(3,3, "_hidden_attr"), completions)

        with provisionalcompleter():
            c.use_jedi = False
            s, matches = c.complete("ip._x.")
            self.assertIn(".keys", matches)

            # c.use_jedi = True
            # completions = set(c.completions('ip._x.', 6))
            # self.assertIn(Completion(6,6, "keys"), completions)

        del ip._hidden_attr
        del ip._x

    def test_limit_to__all__False_ok(self):
        """
        ``limit_to__all__`` is deprecated; once we remove it this test can go away.
        """
        ip = get_ipython()
        c = ip.Completer
        c.use_jedi = False
        ip.ex("class D: x=24")
        ip.ex("d=D()")
        cfg = Config()
        cfg.IPCompleter.limit_to__all__ = False
        c.update_config(cfg)
        s, matches = c.complete("d.")
        self.assertIn(".x", matches)

    def test_get__all__entries_ok(self):
        class A:
            __all__ = ["x", 1]

        words = completer.get__all__entries(A())
        self.assertEqual(words, ["x"])

    def test_get__all__entries_no__all__ok(self):
        class A:
            pass

        words = completer.get__all__entries(A())
        self.assertEqual(words, [])

    def test_func_kw_completions(self):
        ip = get_ipython()
        c = ip.Completer
        c.use_jedi = False
        ip.ex("def myfunc(a=1,b=2): return a+b")
        s, matches = c.complete(None, "myfunc(1,b")
        self.assertIn("b=", matches)
        # Simulate completing with cursor right after b (pos==10):
        s, matches = c.complete(None, "myfunc(1,b)", 10)
        self.assertIn("b=", matches)
        s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
        self.assertIn("b=", matches)
        # builtin function
        s, matches = c.complete(None, "min(k, k")
        self.assertIn("key=", matches)

    def test_default_arguments_from_docstring(self):
        ip = get_ipython()
        c = ip.Completer
        kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
        self.assertEqual(kwd, ["key"])
        # with cython type etc
        kwd = c._default_arguments_from_docstring(
            "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
        )
        self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
        # white spaces
        kwd = c._default_arguments_from_docstring(
            "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
        )
        self.assertEqual(kwd, ["ncall", "resume", "nsplit"])

    def test_line_magics(self):
        ip = get_ipython()
        c = ip.Completer
        s, matches = c.complete(None, "lsmag")
        self.assertIn("%lsmagic", matches)
        s, matches = c.complete(None, "%lsmag")
        self.assertIn("%lsmagic", matches)

    def test_cell_magics(self):
        from IPython.core.magic import register_cell_magic

        @register_cell_magic
        def _foo_cellm(line, cell):
            pass

        ip = get_ipython()
        c = ip.Completer

        s, matches = c.complete(None, "_foo_ce")
        self.assertIn("%%_foo_cellm", matches)
        s, matches = c.complete(None, "%%_foo_ce")
        self.assertIn("%%_foo_cellm", matches)

    def test_line_cell_magics(self):
        from IPython.core.magic import register_line_cell_magic

        @register_line_cell_magic
        def _bar_cellm(line, cell):
            pass

        ip = get_ipython()
        c = ip.Completer

        # The policy here is trickier, see comments in completion code. The
        # returned values depend on whether the user passes %% or not explicitly,
        # and this will show a difference if the same name is both a line and cell
        # magic.
        s, matches = c.complete(None, "_bar_ce")
        self.assertIn("%_bar_cellm", matches)
        self.assertIn("%%_bar_cellm", matches)
        s, matches = c.complete(None, "%_bar_ce")
        self.assertIn("%_bar_cellm", matches)
        self.assertIn("%%_bar_cellm", matches)
        s, matches = c.complete(None, "%%_bar_ce")
        self.assertNotIn("%_bar_cellm", matches)
        self.assertIn("%%_bar_cellm", matches)

    def test_magic_completion_order(self):
        ip = get_ipython()
        c = ip.Completer

        # Test ordering of line and cell magics.
        text, matches = c.complete("timeit")
        self.assertEqual(matches, ["%timeit", "%%timeit"])

    def test_magic_completion_shadowing(self):
        ip = get_ipython()
        c = ip.Completer
        c.use_jedi = False

        # Before importing matplotlib, %matplotlib magic should be the only option.
        text, matches = c.complete("mat")
        self.assertEqual(matches, ["%matplotlib"])

        # The newly introduced name should shadow the magic.
        ip.run_cell("matplotlib = 1")
        text, matches = c.complete("mat")
        self.assertEqual(matches, ["matplotlib"])

        # After removing matplotlib from namespace, the magic should again be
        # the only option.
        del ip.user_ns["matplotlib"]
        text, matches = c.complete("mat")
        self.assertEqual(matches, ["%matplotlib"])

    def test_magic_completion_shadowing_explicit(self):
        """
        If the user tries to complete a shadowed magic, an explicit % prefix
        should still return the completions.
        """
        ip = get_ipython()
        c = ip.Completer

        # Before importing matplotlib, %matplotlib magic should be the only option.
        text, matches = c.complete("%mat")
        self.assertEqual(matches, ["%matplotlib"])

        ip.run_cell("matplotlib = 1")

        # Even with matplotlib now in the namespace, the explicit %-prefixed
        # magic should still be the only option.
        text, matches = c.complete("%mat")
        self.assertEqual(matches, ["%matplotlib"])

    def test_magic_config(self):
        ip = get_ipython()
        c = ip.Completer

        s, matches = c.complete(None, "conf")
        self.assertIn("%config", matches)
        s, matches = c.complete(None, "conf")
        self.assertNotIn("AliasManager", matches)
        s, matches = c.complete(None, "config ")
        self.assertIn("AliasManager", matches)
        s, matches = c.complete(None, "%config ")
        self.assertIn("AliasManager", matches)
        s, matches = c.complete(None, "config Ali")
        self.assertListEqual(["AliasManager"], matches)
        s, matches = c.complete(None, "%config Ali")
        self.assertListEqual(["AliasManager"], matches)
        s, matches = c.complete(None, "config AliasManager")
        self.assertListEqual(["AliasManager"], matches)
        s, matches = c.complete(None, "%config AliasManager")
        self.assertListEqual(["AliasManager"], matches)
        s, matches = c.complete(None, "config AliasManager.")
        self.assertIn("AliasManager.default_aliases", matches)
        s, matches = c.complete(None, "%config AliasManager.")
        self.assertIn("AliasManager.default_aliases", matches)
        s, matches = c.complete(None, "config AliasManager.de")
        self.assertListEqual(["AliasManager.default_aliases"], matches)
        s, matches = c.complete(None, "%config AliasManager.de")
        self.assertListEqual(["AliasManager.default_aliases"], matches)

    def test_magic_color(self):
        ip = get_ipython()
        c = ip.Completer

        s, matches = c.complete(None, "colo")
        self.assertIn("%colors", matches)
        s, matches = c.complete(None, "colo")
        self.assertNotIn("NoColor", matches)
        s, matches = c.complete(None, "%colors")  # No trailing space
        self.assertNotIn("NoColor", matches)
        s, matches = c.complete(None, "colors ")
        self.assertIn("NoColor", matches)
        s, matches = c.complete(None, "%colors ")
        self.assertIn("NoColor", matches)
        s, matches = c.complete(None, "colors NoCo")
        self.assertListEqual(["NoColor"], matches)
        s, matches = c.complete(None, "%colors NoCo")
        self.assertListEqual(["NoColor"], matches)

    def test_match_dict_keys(self):
        """
        Test that match_dict_keys works on a couple of use cases, returns what
        is expected, and does not crash.
        """
        delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"

        def match(*args, **kwargs):
            quote, offset, matches = match_dict_keys(*args, delims=delims, **kwargs)
            return quote, offset, list(matches)

        keys = ["foo", b"far"]
        assert match(keys, "b'") == ("'", 2, ["far"])
        assert match(keys, "b'f") == ("'", 2, ["far"])
        assert match(keys, 'b"') == ('"', 2, ["far"])
        assert match(keys, 'b"f') == ('"', 2, ["far"])

        assert match(keys, "'") == ("'", 1, ["foo"])
        assert match(keys, "'f") == ("'", 1, ["foo"])
        assert match(keys, '"') == ('"', 1, ["foo"])
        assert match(keys, '"f') == ('"', 1, ["foo"])

        # Completion on first item of tuple
        keys = [("foo", 1111), ("foo", 2222), (3333, "bar"), (3333, "test")]
        assert match(keys, "'f") == ("'", 1, ["foo"])
        assert match(keys, "33") == ("", 0, ["3333"])

        # Completion on numbers
        keys = [
            0xDEADBEEF,
            1111,
            1234,
            "1999",
            0b10101,
            22,
        ]  # 0xDEADBEEF = 3735928559; 0b10101 = 21
        assert match(keys, "0xdead") == ("", 0, ["0xdeadbeef"])
        assert match(keys, "1") == ("", 0, ["1111", "1234"])
        assert match(keys, "2") == ("", 0, ["21", "22"])
        assert match(keys, "0b101") == ("", 0, ["0b10101", "0b10110"])

        # Should yield nothing on variables
        assert match(keys, "a_variable") == ("", 0, [])

        # Should pass over invalid literals
        assert match(keys, "'' ''") == ("", 0, [])

    def test_match_dict_keys_tuple(self):
        """
        Test that match_dict_keys called with an extra prefix works on a couple
        of use cases, returns what is expected, and does not crash.
        """
        delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"

        keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ('other', 'test')]

        def match(*args, extra=None, **kwargs):
            quote, offset, matches = match_dict_keys(
                *args, delims=delims, extra_prefix=extra, **kwargs
            )
            return quote, offset, list(matches)

        # Completion on first key == "foo"
        assert match(keys, "'", extra=("foo",)) == ("'", 1, ["bar", "oof"])
        assert match(keys, '"', extra=("foo",)) == ('"', 1, ["bar", "oof"])
        assert match(keys, "'o", extra=("foo",)) == ("'", 1, ["oof"])
        assert match(keys, '"o', extra=("foo",)) == ('"', 1, ["oof"])
        assert match(keys, "b'", extra=("foo",)) == ("'", 2, ["bar"])
        assert match(keys, 'b"', extra=("foo",)) == ('"', 2, ["bar"])
        assert match(keys, "b'b", extra=("foo",)) == ("'", 2, ["bar"])
        assert match(keys, 'b"b', extra=("foo",)) == ('"', 2, ["bar"])

        # No Completion
        assert match(keys, "'", extra=("no_foo",)) == ("'", 1, [])
        assert match(keys, "'", extra=("fo",)) == ("'", 1, [])

        keys = [("foo1", "foo2", "foo3", "foo4"), ("foo1", "foo2", "bar", "foo4")]
        assert match(keys, "'foo", extra=("foo1",)) == ("'", 1, ["foo2"])
        assert match(keys, "'foo", extra=("foo1", "foo2")) == ("'", 1, ["foo3"])
        assert match(keys, "'foo", extra=("foo1", "foo2", "foo3")) == ("'", 1, ["foo4"])
        assert match(keys, "'foo", extra=("foo1", "foo2", "foo3", "foo4")) == (
            "'",
            1,
            [],
        )

        keys = [("foo", 1111), ("foo", "2222"), (3333, "bar"), (3333, 4444)]
        assert match(keys, "'", extra=("foo",)) == ("'", 1, ["2222"])
        assert match(keys, "", extra=("foo",)) == ("", 0, ["1111", "'2222'"])
        assert match(keys, "'", extra=(3333,)) == ("'", 1, ["bar"])
        assert match(keys, "", extra=(3333,)) == ("", 0, ["'bar'", "4444"])
        assert match(keys, "'", extra=("3333",)) == ("'", 1, [])
        assert match(keys, "33") == ("", 0, ["3333"])

    def test_dict_key_completion_closures(self):
        ip = get_ipython()
        complete = ip.Completer.complete
        ip.Completer.auto_close_dict_keys = True

        ip.user_ns["d"] = {
            # tuple only
            ("aa", 11): None,
            # tuple and non-tuple
            ("bb", 22): None,
            "bb": None,
            # non-tuple only
            "cc": None,
            # numeric tuple only
            (77, "x"): None,
            # numeric tuple and non-tuple
            (88, "y"): None,
            88: None,
            # numeric non-tuple only
            99: None,
        }

        _, matches = complete(line_buffer="d[")
        # should append `, ` if it matches a tuple only
        self.assertIn("'aa', ", matches)
        # should not append anything if it matches both a tuple and an item
        self.assertIn("'bb'", matches)
        # should append `]` if it matches an item only
        self.assertIn("'cc']", matches)

        # should append `, ` if it matches a tuple only
        self.assertIn("77, ", matches)
        # should not append anything if it matches both a tuple and an item
        self.assertIn("88", matches)
        # should append `]` if it matches an item only
        self.assertIn("99]", matches)

        _, matches = complete(line_buffer="d['aa', ")
        # should restrict matches to those matching the tuple prefix
        self.assertIn("11]", matches)
        self.assertNotIn("'bb'", matches)
        self.assertNotIn("'bb', ", matches)
        self.assertNotIn("'bb']", matches)
        self.assertNotIn("'cc'", matches)
        self.assertNotIn("'cc', ", matches)
        self.assertNotIn("'cc']", matches)
        ip.Completer.auto_close_dict_keys = False

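The closing-character rules exercised by `test_dict_key_completion_closures` can be sketched in plain Python. Everything below (`suffix_for`, its argument names, and the rule statement) is a hypothetical illustration of the behavior the test asserts, not IPython's actual implementation:

```python
def suffix_for(key, keys, prefix=()):
    """Pick the text appended after a completed dict key (hypothetical sketch).

    Mirrors what the test asserts when auto_close_dict_keys is on:
    - ", " when the key can only continue a longer tuple key,
    - "]"  when the key can only close the subscript,
    - ""   when both a tuple key and a plain key match (ambiguous).
    """
    continues_tuple = any(
        isinstance(k, tuple)
        and k[: len(prefix)] == tuple(prefix)
        and len(k) > len(prefix) + 1
        and k[len(prefix)] == key
        for k in keys
    )
    closes_item = any(
        (not prefix and not isinstance(k, tuple) and k == key)
        or (isinstance(k, tuple) and k == tuple(prefix) + (key,))
        for k in keys
    )
    if continues_tuple and closes_item:
        return ""
    return ", " if continues_tuple else "]"


keys = [("aa", 11), ("bb", 22), "bb", "cc", (77, "x"), (88, "y"), 88, 99]
print(suffix_for("aa", keys))  # ", "  -> tuple-only key keeps the subscript open
print(suffix_for("bb", keys))  # ""    -> ambiguous, append nothing
print(suffix_for(99, keys))    # "]"   -> item-only key closes the subscript
```

With a tuple prefix already typed, the same rule restricts matches to keys continuing that prefix, e.g. `suffix_for(11, keys, prefix=("aa",))` closes with `"]"`.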
    def test_dict_key_completion_string(self):
        """Test dictionary key completion for string keys"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {"abc": None}

        # check completion at different stages
        _, matches = complete(line_buffer="d[")
        self.assertIn("'abc'", matches)
        self.assertNotIn("'abc']", matches)

        _, matches = complete(line_buffer="d['")
        self.assertIn("abc", matches)
        self.assertNotIn("abc']", matches)

        _, matches = complete(line_buffer="d['a")
        self.assertIn("abc", matches)
        self.assertNotIn("abc']", matches)

        # check use of different quoting
        _, matches = complete(line_buffer='d["')
        self.assertIn("abc", matches)
        self.assertNotIn('abc"]', matches)

        _, matches = complete(line_buffer='d["a')
        self.assertIn("abc", matches)
        self.assertNotIn('abc"]', matches)

        # check sensitivity to following context
        _, matches = complete(line_buffer="d[]", cursor_pos=2)
        self.assertIn("'abc'", matches)

        _, matches = complete(line_buffer="d['']", cursor_pos=3)
        self.assertIn("abc", matches)
        self.assertNotIn("abc'", matches)
        self.assertNotIn("abc']", matches)

        # check that multiple solutions are correctly returned and that noise is not
        ip.user_ns["d"] = {
            "abc": None,
            "abd": None,
            "bad": None,
            object(): None,
            5: None,
            ("abe", None): None,
            (None, "abf"): None,
        }

        _, matches = complete(line_buffer="d['a")
        self.assertIn("abc", matches)
        self.assertIn("abd", matches)
        self.assertNotIn("bad", matches)
        self.assertNotIn("abe", matches)
        self.assertNotIn("abf", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        # check escaping and whitespace
        ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
        _, matches = complete(line_buffer="d['a")
        self.assertIn("a\\nb", matches)
        self.assertIn("a\\'b", matches)
        self.assertIn('a"b', matches)
        self.assertIn("a word", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        # - can complete on a non-initial word of the string
        _, matches = complete(line_buffer="d['a w")
        self.assertIn("word", matches)

        # - understands quote escaping
        _, matches = complete(line_buffer="d['a\\'")
        self.assertIn("b", matches)

        # - default quoting should work like repr
        _, matches = complete(line_buffer="d[")
        self.assertIn('"a\'b"', matches)

        # - when opening the quote with ", it is possible to match an unescaped apostrophe
        _, matches = complete(line_buffer="d[\"a'")
        self.assertIn("b", matches)

        # need to not split at delims that readline won't split at
        if "-" not in ip.Completer.splitter.delims:
            ip.user_ns["d"] = {"before-after": None}
            _, matches = complete(line_buffer="d['before-af")
            self.assertIn("before-after", matches)

        # check completion on tuple-of-string keys at different stages - on the first key
        ip.user_ns["d"] = {("foo", "bar"): None}
        _, matches = complete(line_buffer="d[")
        self.assertIn("'foo'", matches)
        self.assertNotIn("'foo']", matches)
        self.assertNotIn("'bar'", matches)
        self.assertNotIn("foo", matches)
        self.assertNotIn("bar", matches)

        # - match the prefix
        _, matches = complete(line_buffer="d['f")
        self.assertIn("foo", matches)
        self.assertNotIn("foo']", matches)
        self.assertNotIn('foo"]', matches)
        _, matches = complete(line_buffer="d['foo")
        self.assertIn("foo", matches)

        # - can complete on the second key
        _, matches = complete(line_buffer="d['foo', ")
        self.assertIn("'bar'", matches)
        _, matches = complete(line_buffer="d['foo', 'b")
        self.assertIn("bar", matches)
        self.assertNotIn("foo", matches)

        # - does not propose missing keys
        _, matches = complete(line_buffer="d['foo', 'f")
        self.assertNotIn("bar", matches)
        self.assertNotIn("foo", matches)

        # check sensitivity to following context
        _, matches = complete(line_buffer="d['foo',]", cursor_pos=8)
        self.assertIn("'bar'", matches)
        self.assertNotIn("bar", matches)
        self.assertNotIn("'foo'", matches)
        self.assertNotIn("foo", matches)

        _, matches = complete(line_buffer="d['']", cursor_pos=3)
        self.assertIn("foo", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        _, matches = complete(line_buffer='d[""]', cursor_pos=3)
        self.assertIn("foo", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        _, matches = complete(line_buffer='d["foo","]', cursor_pos=9)
        self.assertIn("bar", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        _, matches = complete(line_buffer='d["foo",]', cursor_pos=8)
        self.assertIn("'bar'", matches)
        self.assertNotIn("bar", matches)

        # can complete with longer tuple keys
        ip.user_ns["d"] = {("foo", "bar", "foobar"): None}

        # - can complete the second key
        _, matches = complete(line_buffer="d['foo', 'b")
        self.assertIn("bar", matches)
        self.assertNotIn("foo", matches)
        self.assertNotIn("foobar", matches)

        # - can complete the third key
        _, matches = complete(line_buffer="d['foo', 'bar', 'fo")
        self.assertIn("foobar", matches)
        self.assertNotIn("foo", matches)
        self.assertNotIn("bar", matches)

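The quoting and escaping behavior asserted above follows one rule: with no opening quote a key is rendered like `repr()`, while inside an opening quote only the remainder is emitted, with the active quote character, backslashes, and newlines escaped. A hypothetical sketch (`render_key` is illustrative, not IPython's API):

```python
def render_key(key, opening_quote=""):
    """Render a string dict key for completion (hypothetical sketch)."""
    if not opening_quote:
        # no quote typed yet: quote like repr(), which switches to double
        # quotes for keys containing an apostrophe, e.g. "a'b"
        return repr(key)
    # inside an open quote: escape backslashes, the active quote, and newlines
    body = key.replace("\\", "\\\\")
    body = body.replace(opening_quote, "\\" + opening_quote)
    return body.replace("\n", "\\n")


print(render_key("a'b"))        # "a'b"  (double-quoted, like repr)
print(render_key("a'b", "'"))   # a\'b
print(render_key('a"b', "'"))   # a"b    (the other quote stays unescaped)
print(render_key("a\nb", "'"))  # a\nb
```

This matches why the test accepts `a"b` unescaped after an opening `'`, but requires `a\'b` for the apostrophe.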
    def test_dict_key_completion_numbers(self):
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {
            0xDEADBEEF: None,  # 3735928559
            1111: None,
            1234: None,
            "1999": None,
            0b10101: None,  # 21
            22: None,
        }
        _, matches = complete(line_buffer="d[1")
        self.assertIn("1111", matches)
        self.assertIn("1234", matches)
        self.assertNotIn("1999", matches)
        self.assertNotIn("'1999'", matches)

        _, matches = complete(line_buffer="d[0xdead")
        self.assertIn("0xdeadbeef", matches)

        _, matches = complete(line_buffer="d[2")
        self.assertIn("21", matches)
        self.assertIn("22", matches)

        _, matches = complete(line_buffer="d[0b101")
        self.assertIn("0b10101", matches)
        self.assertIn("0b10110", matches)

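The numeric cases above hinge on radix-preserving prefix matching: integer keys are rendered in the base the user started typing, and string keys like `"1999"` never match a bare numeric prefix. A minimal sketch under that assumption (`numeric_matches` is a hypothetical helper, not IPython's implementation):

```python
def numeric_matches(prefix, keys):
    """Match integer dict keys against a typed numeric prefix (sketch).

    The rendering radix follows the prefix: "0x..." compares hex forms,
    "0b..." binary forms, anything else decimal.
    """
    # only real ints qualify; bool is excluded since it subclasses int
    ints = [k for k in keys if isinstance(k, int) and not isinstance(k, bool)]
    if prefix.lower().startswith("0x"):
        render = hex
    elif prefix.lower().startswith("0b"):
        render = bin
    else:
        render = str
    return [render(k) for k in ints if render(k).startswith(prefix.lower())]


keys = [0xDEADBEEF, 1111, 1234, "1999", 0b10101, 22]
print(numeric_matches("1", keys))       # ['1111', '1234']
print(numeric_matches("0xdead", keys))  # ['0xdeadbeef']
print(numeric_matches("0b101", keys))   # ['0b10101', '0b10110']
```

Note how `22` surfaces as `0b10110` when the typed prefix is binary, mirroring the test's last assertion.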
    def test_dict_key_completion_contexts(self):
        """Test expression contexts in which dict key completion occurs"""
        ip = get_ipython()
        complete = ip.Completer.complete
        d = {"abc": None}
        ip.user_ns["d"] = d

        class C:
            data = d

        ip.user_ns["C"] = C
        ip.user_ns["get"] = lambda: d
        ip.user_ns["nested"] = {"x": d}

        def assert_no_completion(**kwargs):
            _, matches = complete(**kwargs)
            self.assertNotIn("abc", matches)
            self.assertNotIn("abc'", matches)
            self.assertNotIn("abc']", matches)
            self.assertNotIn("'abc'", matches)
            self.assertNotIn("'abc']", matches)

        def assert_completion(**kwargs):
            _, matches = complete(**kwargs)
            self.assertIn("'abc'", matches)
            self.assertNotIn("'abc']", matches)

        # no completion after the string is closed, even if reopened
        assert_no_completion(line_buffer="d['a'")
        assert_no_completion(line_buffer='d["a"')
        assert_no_completion(line_buffer="d['a' + ")
        assert_no_completion(line_buffer="d['a' + '")

        # completion in non-trivial expressions
        assert_completion(line_buffer="+ d[")
        assert_completion(line_buffer="(d[")
        assert_completion(line_buffer="C.data[")

        # nested dict completion
        assert_completion(line_buffer="nested['x'][")

        with evaluation_policy("minimal"):
            with pytest.raises(AssertionError):
                assert_completion(line_buffer="nested['x'][")

        # greedy flag
        def assert_completion(**kwargs):
            _, matches = complete(**kwargs)
            self.assertIn("get()['abc']", matches)

        assert_no_completion(line_buffer="get()[")
        with greedy_completion():
            assert_completion(line_buffer="get()[")
            assert_completion(line_buffer="get()['")
            assert_completion(line_buffer="get()['a")
            assert_completion(line_buffer="get()['ab")
            assert_completion(line_buffer="get()['abc")

    def test_dict_key_completion_bytes(self):
        """Test handling of bytes in dict key completion"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {"abc": None, b"abd": None}

        _, matches = complete(line_buffer="d[")
        self.assertIn("'abc'", matches)
        self.assertIn("b'abd'", matches)

        if False:  # not currently implemented
            _, matches = complete(line_buffer="d[b")
            self.assertIn("b'abd'", matches)
            self.assertNotIn("b'abc'", matches)

            _, matches = complete(line_buffer="d[b'")
            self.assertIn("abd", matches)
            self.assertNotIn("abc", matches)

            _, matches = complete(line_buffer="d[B'")
            self.assertIn("abd", matches)
            self.assertNotIn("abc", matches)

            _, matches = complete(line_buffer="d['")
            self.assertIn("abc", matches)
            self.assertNotIn("abd", matches)

    def test_dict_key_completion_unicode_py3(self):
        """Test handling of unicode in dict key completion"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {"a\u05d0": None}

        # query using the escape sequence
        if sys.platform != "win32":
            # Known failure on Windows
            _, matches = complete(line_buffer="d['a\\u05d0")
            self.assertIn("u05d0", matches)  # tokenized after \\

        # query using the character
        _, matches = complete(line_buffer="d['a\u05d0")
        self.assertIn("a\u05d0", matches)

        with greedy_completion():
            # query using the escape sequence
            _, matches = complete(line_buffer="d['a\\u05d0")
            self.assertIn("d['a\\u05d0']", matches)  # tokenized after \\

            # query using the character
            _, matches = complete(line_buffer="d['a\u05d0")
            self.assertIn("d['a\u05d0']", matches)

    @dec.skip_without("numpy")
    def test_struct_array_key_completion(self):
        """Test dict key completion applies to numpy struct arrays"""
        import numpy

        ip = get_ipython()
        complete = ip.Completer.complete
        ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
        _, matches = complete(line_buffer="d['")
        self.assertIn("hello", matches)
        self.assertIn("world", matches)
        # complete on the numpy struct itself
        dt = numpy.dtype(
            [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
        )
        x = numpy.zeros(2, dtype=dt)
        ip.user_ns["d"] = x[1]
        _, matches = complete(line_buffer="d['")
        self.assertIn("my_head", matches)
        self.assertIn("my_data", matches)

        def completes_on_nested():
            ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
            _, matches = complete(line_buffer="d[1]['my_head']['")
            self.assertTrue(any(["my_dt" in m for m in matches]))
            self.assertTrue(any(["my_df" in m for m in matches]))

        # complete on a nested level
        with greedy_completion():
            completes_on_nested()

        with evaluation_policy("limited"):
            completes_on_nested()

        with evaluation_policy("minimal"):
            with pytest.raises(AssertionError):
                completes_on_nested()

    @dec.skip_without("pandas")
    def test_dataframe_key_completion(self):
        """Test dict key completion applies to pandas DataFrames"""
        import pandas

        ip = get_ipython()
        complete = ip.Completer.complete
        ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
        _, matches = complete(line_buffer="d['")
        self.assertIn("hello", matches)
        self.assertIn("world", matches)
        _, matches = complete(line_buffer="d.loc[:, '")
        self.assertIn("hello", matches)
        self.assertIn("world", matches)
        _, matches = complete(line_buffer="d.loc[1:, '")
        self.assertIn("hello", matches)
        _, matches = complete(line_buffer="d.loc[1:1, '")
        self.assertIn("hello", matches)
        _, matches = complete(line_buffer="d.loc[1:1:-1, '")
        self.assertIn("hello", matches)
        _, matches = complete(line_buffer="d.loc[::, '")
        self.assertIn("hello", matches)

    def test_dict_key_completion_invalids(self):
        """Smoke-test cases that dict key completion can't handle"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["no_getitem"] = None
        ip.user_ns["no_keys"] = []
        ip.user_ns["cant_call_keys"] = dict
        ip.user_ns["empty"] = {}
        ip.user_ns["d"] = {"abc": 5}

        _, matches = complete(line_buffer="no_getitem['")
        _, matches = complete(line_buffer="no_keys['")
        _, matches = complete(line_buffer="cant_call_keys['")
        _, matches = complete(line_buffer="empty['")
        _, matches = complete(line_buffer="name_error['")
        _, matches = complete(line_buffer="d['\\")  # incomplete escape

    def test_object_key_completion(self):
        ip = get_ipython()
        ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])

        _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
        self.assertIn("qwerty", matches)
        self.assertIn("qwick", matches)

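`test_object_key_completion` exercises IPython's key-completion protocol: an object can offer subscript candidates by defining `_ipython_key_completions_`. A minimal stand-in for the `KeyCompletable` helper the test assumes might look like this (illustrative; the real helper lives elsewhere in the test module):

```python
class KeyCompletable:
    """Object opting into IPython key completion (illustrative stand-in)."""

    def __init__(self, keys):
        self._keys = list(keys)

    def __getitem__(self, key):
        # minimal lookup so the object is actually subscriptable
        return self._keys.index(key)

    def _ipython_key_completions_(self):
        # IPython calls this method to collect candidate subscript keys
        return list(self._keys)


obj = KeyCompletable(["qwerty", "qwick"])
print(obj._ipython_key_completions_())  # ['qwerty', 'qwick']
```

Any object defining this method gets the same `obj['qw<TAB>` completion behavior as dicts.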
    def test_class_key_completion(self):
        ip = get_ipython()
        NamedInstanceClass("qwerty")
        NamedInstanceClass("qwick")
        ip.user_ns["named_instance_class"] = NamedInstanceClass

        _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
        self.assertIn("qwerty", matches)
        self.assertIn("qwick", matches)

    def test_tryimport(self):
        """
        Test that try_import does not crash on a trailing dot, and imports the
        module in front of it.
        """
        from IPython.core.completerlib import try_import

        assert try_import("IPython.")

    def test_aimport_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("i", "%aimport i")
        self.assertIn("io", matches)
        self.assertNotIn("int", matches)

    def test_nested_import_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete(None, "import IPython.co", 17)
        self.assertIn("IPython.core", matches)
        self.assertNotIn("import IPython.core", matches)
        self.assertNotIn("IPython.display", matches)

    def test_import_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("i", "import i")
        self.assertIn("io", matches)
        self.assertNotIn("int", matches)

    def test_from_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("B", "from io import B", 16)
        self.assertIn("BytesIO", matches)
        self.assertNotIn("BaseException", matches)

    def test_snake_case_completion(self):
        ip = get_ipython()
        ip.Completer.use_jedi = False
        ip.user_ns["some_three"] = 3
        ip.user_ns["some_four"] = 4
        _, matches = ip.complete("s_", "print(s_f")
        self.assertIn("some_three", matches)
        self.assertIn("some_four", matches)

    def test_mix_terms(self):
        ip = get_ipython()
        from textwrap import dedent

        ip.Completer.use_jedi = False
        ip.ex(
            dedent(
                """
                class Test:
                    def meth(self, meth_arg1):
                        print("meth")

                    def meth_1(self, meth1_arg1, meth1_arg2):
                        print("meth1")

                    def meth_2(self, meth2_arg1, meth2_arg2):
                        print("meth2")
                test = Test()
                """
            )
        )
        _, matches = ip.complete(None, "test.meth(")
        self.assertIn("meth_arg1=", matches)
        self.assertNotIn("meth2_arg1=", matches)

    def test_percent_symbol_restrict_to_magic_completions(self):
        ip = get_ipython()
        completer = ip.Completer
        text = "%a"

        with provisionalcompleter():
            completer.use_jedi = True
            completions = completer.completions(text, len(text))
            for c in completions:
                self.assertEqual(c.text[0], "%")

    def test_fwd_unicode_restricts(self):
        ip = get_ipython()
        completer = ip.Completer
        text = "\\ROMAN NUMERAL FIVE"

        with provisionalcompleter():
            completer.use_jedi = True
            completions = [
                completion.text for completion in completer.completions(text, len(text))
            ]
            self.assertEqual(completions, ["\u2164"])

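# Reviewer note: the expected value above hinges on the unicode name table. A
# minimal stdlib-only sketch of the name-to-character resolution being
# exercised (`fwd_unicode` is an illustrative helper name, not IPython's
# cached implementation):

```python
import unicodedata


def fwd_unicode(name: str):
    # Resolve a unicode character by its official name, as typed after "\".
    # Returns None when the name table has no such entry.
    try:
        return unicodedata.lookup(name)
    except KeyError:
        return None
```

# Under this sketch, fwd_unicode("ROMAN NUMERAL FIVE") yields "\u2164", the
# single completion the test asserts on.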
    def test_dict_key_restrict_to_dicts(self):
        """Test that a dict key context suppresses non-dict completion items."""
        ip = get_ipython()
        c = ip.Completer
        d = {"abc": None}
        ip.user_ns["d"] = d

        text = 'd["a'

        def _():
            with provisionalcompleter():
                c.use_jedi = True
                return [
                    completion.text for completion in c.completions(text, len(text))
                ]

        completions = _()
        self.assertEqual(completions, ["abc"])

        # check that it can be disabled in a granular manner:
        cfg = Config()
        cfg.IPCompleter.suppress_competing_matchers = {
            "IPCompleter.dict_key_matcher": False
        }
        c.update_config(cfg)

        completions = _()
        self.assertIn("abc", completions)
        self.assertGreater(len(completions), 1)

    def test_matcher_suppression(self):
        @completion_matcher(identifier="a_matcher")
        def a_matcher(text):
            return ["completion_a"]

        @completion_matcher(identifier="b_matcher", api_version=2)
        def b_matcher(context: CompletionContext):
            text = context.token
            result = {"completions": [SimpleCompletion("completion_b")]}

            if text == "suppress c":
                result["suppress"] = {"c_matcher"}

            if text.startswith("suppress all"):
                result["suppress"] = True
                if text == "suppress all but c":
                    result["do_not_suppress"] = {"c_matcher"}
                if text == "suppress all but a":
                    result["do_not_suppress"] = {"a_matcher"}

            return result

        @completion_matcher(identifier="c_matcher")
        def c_matcher(text):
            return ["completion_c"]

        with custom_matchers([a_matcher, b_matcher, c_matcher]):
            ip = get_ipython()
            c = ip.Completer

            def _(text, expected):
                c.use_jedi = False
                s, matches = c.complete(text)
                self.assertEqual(expected, matches)

            _("do not suppress", ["completion_a", "completion_b", "completion_c"])
            _("suppress all", ["completion_b"])
            _("suppress all but a", ["completion_a", "completion_b"])
            _("suppress all but c", ["completion_b", "completion_c"])

            def configure(suppression_config):
                cfg = Config()
                cfg.IPCompleter.suppress_competing_matchers = suppression_config
                c.update_config(cfg)

            # test that configuration takes priority over the run-time decisions

            configure(False)
            _("suppress all", ["completion_a", "completion_b", "completion_c"])

            configure({"b_matcher": False})
            _("suppress all", ["completion_a", "completion_b", "completion_c"])

            configure({"a_matcher": False})
            _("suppress all", ["completion_b"])

            configure({"b_matcher": True})
            _("do not suppress", ["completion_b"])

            configure(True)
            _("do not suppress", ["completion_a"])

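# Reviewer note: the expected lists above can be sanity-checked against a
# simplified model of how "suppress"/"do_not_suppress" results might combine.
# `resolve` below is a hypothetical sketch, not IPython's actual resolution
# code:

```python
def resolve(results):
    # results: mapping of matcher identifier -> v2-style result dict with
    # "completions" and optional "suppress"/"do_not_suppress" keys.
    suppressed = set()
    for ident, res in results.items():
        sup = res.get("suppress", False)
        keep = res.get("do_not_suppress", set())
        if sup is True:
            # suppress every other matcher, minus explicit exemptions
            suppressed |= {i for i in results if i != ident and i not in keep}
        elif isinstance(sup, set):
            suppressed |= sup - keep
    return [
        c
        for ident, res in results.items()
        if ident not in suppressed
        for c in res["completions"]
    ]
```

# Under this model, "suppress all but c" from b_matcher suppresses only
# a_matcher, leaving ["completion_b", "completion_c"].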
    def test_matcher_suppression_with_iterator(self):
        @completion_matcher(identifier="matcher_returning_iterator")
        def matcher_returning_iterator(text):
            return iter(["completion_iter"])

        @completion_matcher(identifier="matcher_returning_list")
        def matcher_returning_list(text):
            return ["completion_list"]

        with custom_matchers([matcher_returning_iterator, matcher_returning_list]):
            ip = get_ipython()
            c = ip.Completer

            def _(text, expected):
                c.use_jedi = False
                s, matches = c.complete(text)
                self.assertEqual(expected, matches)

            def configure(suppression_config):
                cfg = Config()
                cfg.IPCompleter.suppress_competing_matchers = suppression_config
                c.update_config(cfg)

            configure(False)
            _("---", ["completion_iter", "completion_list"])

            configure(True)
            _("---", ["completion_iter"])

            configure(None)
            _("--", ["completion_iter", "completion_list"])

    @pytest.mark.xfail(
        sys.version_info.releaselevel in ("alpha",),
        reason="Parso does not yet parse 3.13",
    )
    def test_matcher_suppression_with_jedi(self):
        ip = get_ipython()
        c = ip.Completer
        c.use_jedi = True

        def configure(suppression_config):
            cfg = Config()
            cfg.IPCompleter.suppress_competing_matchers = suppression_config
            c.update_config(cfg)

        def _():
            with provisionalcompleter():
                matches = [completion.text for completion in c.completions("dict.", 5)]
                self.assertIn("keys", matches)

        configure(False)
        _()

        configure(True)
        _()

        configure(None)
        _()

    def test_matcher_disabling(self):
        @completion_matcher(identifier="a_matcher")
        def a_matcher(text):
            return ["completion_a"]

        @completion_matcher(identifier="b_matcher")
        def b_matcher(text):
            return ["completion_b"]

        def _(expected):
            s, matches = c.complete("completion_")
            self.assertEqual(expected, matches)

        with custom_matchers([a_matcher, b_matcher]):
            ip = get_ipython()
            c = ip.Completer

            _(["completion_a", "completion_b"])

            cfg = Config()
            cfg.IPCompleter.disable_matchers = ["b_matcher"]
            c.update_config(cfg)

            _(["completion_a"])

            cfg.IPCompleter.disable_matchers = []
            c.update_config(cfg)

    def test_matcher_priority(self):
        @completion_matcher(identifier="a_matcher", priority=0, api_version=2)
        def a_matcher(text):
            return {"completions": [SimpleCompletion("completion_a")], "suppress": True}

        @completion_matcher(identifier="b_matcher", priority=2, api_version=2)
        def b_matcher(text):
            return {"completions": [SimpleCompletion("completion_b")], "suppress": True}

        def _(expected):
            s, matches = c.complete("completion_")
            self.assertEqual(expected, matches)

        with custom_matchers([a_matcher, b_matcher]):
            ip = get_ipython()
            c = ip.Completer

            _(["completion_b"])
            a_matcher.matcher_priority = 3
            _(["completion_a"])


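# Reviewer note: the priority test expects the higher-priority matcher to win
# when both set "suppress". A toy model of that dispatch order (a sketch under
# assumed semantics, not IPython's implementation):

```python
def run_matchers(matchers):
    # matchers: list of dicts with "priority", "completions", and optional
    # "suppress". Run from highest priority to lowest; a matcher that sets
    # "suppress" stops lower-priority matchers from contributing.
    out = []
    for m in sorted(matchers, key=lambda m: -m["priority"]):
        out.extend(m["completions"])
        if m.get("suppress"):
            break
    return out
```

# Bumping a matcher's priority above its competitor flips which completion
# survives, mirroring the `a_matcher.matcher_priority = 3` step above.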
@pytest.mark.parametrize(
    "setup,code,expected,not_expected",
    [
        ('a="str"; b=1', "(a, b.", [".bit_count", ".conjugate"], [".count"]),
        ('a="str"; b=1', "(a, b).", [".count"], [".bit_count", ".capitalize"]),
        ('x="str"; y=1', "x = {1, y.", [".bit_count"], [".count"]),
        ('x="str"; y=1', "x = [1, y.", [".bit_count"], [".count"]),
        ('x="str"; y=1; fun=lambda x:x', "x = fun(1, y.", [".bit_count"], [".count"]),
    ],
)
def test_misc_no_jedi_completions(setup, code, expected, not_expected):
    ip = get_ipython()
    c = ip.Completer
    ip.ex(setup)
    with provisionalcompleter(), jedi_status(False):
        matches = c.all_completions(code)
    assert set(expected) - set(matches) == set(), set(matches)
    assert set(matches).intersection(set(not_expected)) == set()


@pytest.mark.parametrize(
    "code,expected",
    [
        (" (a, b", "b"),
        ("(a, b", "b"),
        ("(a, b)", ""),  # trim always starts by trimming
        (" (a, b)", "(a, b)"),
        (" [a, b]", "[a, b]"),
        (" a, b", "b"),
        ("x = {1, y", "y"),
        ("x = [1, y", "y"),
        ("x = fun(1, y", "y"),
    ],
)
def test_trim_expr(code, expected):
    c = get_ipython().Completer
    assert c._trim_expr(code) == expected


@pytest.mark.parametrize(
    "input, expected",
    [
        ["1.234", "1.234"],
        # should match signed numbers
        ["+1", "+1"],
        ["-1", "-1"],
        ["-1.0", "-1.0"],
        ["-1.", "-1."],
        ["+1.", "+1."],
        [".1", ".1"],
        # should not match non-numbers
        ["1..", None],
        ["..", None],
        [".1.", None],
        # should match after a comma
        [",1", "1"],
        [", 1", "1"],
        [", .1", ".1"],
        [", +.1", "+.1"],
        # should not match after trailing spaces
        [".1 ", None],
        # some complex cases
        ["0b_0011_1111_0100_1110", "0b_0011_1111_0100_1110"],
        ["0xdeadbeef", "0xdeadbeef"],
        ["0b_1110_0101", "0b_1110_0101"],
        # should not match if in an operation
        ["1 + 1", None],
        [", 1 + 1", None],
    ],
)
def test_match_numeric_literal_for_dict_key(input, expected):
    assert _match_number_in_dict_key_prefix(input) == expected
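# Reviewer note: the table above suggests the shape of the matcher. A
# hypothetical regex with comparable behavior on these inputs; `match_number`
# is an illustrative stand-in for `_match_number_in_dict_key_prefix`, not the
# real pattern:

```python
import re

_NUMBER_AT_END = re.compile(
    r"""(?:^|,)\s*                  # only at the start of the prefix or after a comma
    (?P<number>
        [+-]?                       # optional sign
        (?:
            0[bxo][0-9a-fA-F_]+     # binary/octal/hex literal, underscores allowed
          | \d+\.\d*                # 1.234, 1., -1.0
          | \.\d+                   # .1
          | \d+                     # plain integer
        )
    )$""",
    re.VERBOSE,
)


def match_number(prefix):
    # Return the trailing numeric literal of a dict-key prefix, or None.
    m = _NUMBER_AT_END.search(prefix)
    return m.group("number") if m else None
```

# Because the literal must be anchored to the start or a comma and run to the
# end of the prefix, expressions like "1 + 1" and trailing-space inputs fail,
# matching the None rows in the table.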