Fix typos
krassowski
"""Completion for IPython.

This module started as a fork of the rlcompleter module in the Python standard
library. The original enhancements made to rlcompleter have been sent
upstream and were accepted as of Python 2.3.

This module now supports a wide variety of completion mechanisms, both for
normal classic Python code and completers for IPython-specific syntax such
as magics.

Latex and Unicode completion
============================

IPython and compatible frontends not only can complete your code, but can help
you to input a wide range of characters. In particular we allow you to insert
a unicode character using the tab completion mechanism.

Forward latex/unicode completion
--------------------------------

Forward completion allows you to easily type a unicode character using its latex
name, or unicode long description. To do so type a backslash followed by the
relevant name and press tab:


Using latex completion:

.. code::

    \\alpha<tab>
    α

or using unicode completion:


.. code::

    \\GREEK SMALL LETTER ALPHA<tab>
    α


Only valid Python identifiers will complete. Combining characters (like arrows
or dots) are also available; unlike in latex, they need to be put after their
counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.

Some browsers are known to display combining characters incorrectly.

Backward latex completion
-------------------------

It is sometimes challenging to know how to type a character; if you are using
IPython or any compatible frontend, you can prepend a backslash to the character
and press ``<tab>`` to expand it to its latex form.

.. code::

    \\α<tab>
    \\alpha


Both forward and backward completions can be deactivated by setting the
``Completer.backslash_combining_completions`` option to ``False``.

Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions both using static analysis of the code, and by dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and
will raise unless used in an :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if the current
IPython completer pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is an integer without
executing any code, unlike the previously available ``IPCompleter.greedy``
option.

Be sure to update :any:`jedi` to the latest stable version or to try the
current development version to get better completions.

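The gating mechanism described above can be sketched with a small
self-contained stand-in; the names mirror this module, but the bodies are
simplified and ``do_experimental_things`` is a hypothetical example API:

```python
import warnings
from contextlib import contextmanager

# Stand-in mirroring how provisionalcompleter gates experimental APIs: the
# module escalates ProvisionalCompleterWarning to an error globally, and the
# context manager locally downgrades it so experimental calls can proceed.
class ProvisionalCompleterWarning(FutureWarning):
    pass

warnings.filterwarnings("error", category=ProvisionalCompleterWarning)

@contextmanager
def provisionalcompleter(action="ignore"):
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield

def do_experimental_things():
    # Experimental APIs warn on use; outside the context manager this raises.
    warnings.warn("provisional API", ProvisionalCompleterWarning, stacklevel=2)
    return "ok"

with provisionalcompleter():
    result = do_experimental_things()  # works: the warning is ignored

try:
    do_experimental_things()  # raises: the warning is escalated to an error
    raised = False
except ProvisionalCompleterWarning:
    raised = True
```
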
Matchers
========

All completion routines are implemented using the unified *Matchers* API.
The matchers API is provisional and subject to change without notice.

The built-in matchers include:

- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
- :any:`IPCompleter.magic_matcher`: completions for magics,
- :any:`IPCompleter.unicode_name_matcher`,
  :any:`IPCompleter.fwd_unicode_matcher`
  and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
- :any:`IPCompleter.file_matcher`: paths to files and directories,
- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
  implementation in :any:`InteractiveShell` which uses the IPython hooks system
  (`complete_command`) with string dispatch (including regular expressions).
  Unlike other matchers, ``custom_completer_matcher`` will not suppress
  Jedi results, to match behaviour in earlier IPython versions.

Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.

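As a sketch, a minimal v1-style custom matcher is just a callable from the
text being completed to a list of matches. The command names and the stand-in
list below are hypothetical; in a real session you would append to
``get_ipython().Completer.custom_matchers`` instead:

```python
# Hypothetical v1-style custom matcher: takes the text being completed and
# returns the matching candidates.
def shout_matcher(text: str) -> list:
    """Complete a hypothetical set of 'sho...' commands."""
    commands = ["shout", "shoutdown", "show"]
    return [c for c in commands if c.startswith(text)]

custom_matchers = []              # stands in for IPCompleter.custom_matchers
custom_matchers.append(shout_matcher)

# Exercise the matcher the way a completer would:
matches = [m for matcher in custom_matchers for m in matcher("sho")]
```
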
Matcher API
-----------

Simplifying some details, the ``Matcher`` interface can be described as

.. code-block::

    MatcherAPIv1 = Callable[[str], list[str]]
    MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]

    Matcher = MatcherAPIv1 | MatcherAPIv2

The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
and remains supported as the simplest way of generating completions. This is also
currently the only API supported by the IPython hooks system `complete_command`.

To distinguish between matcher versions, the ``matcher_api_version`` attribute
is used. More precisely, the API allows omitting ``matcher_api_version`` for
v1 matchers, and requires a literal ``2`` for v2 matchers.

Once the API stabilises, future versions may relax the requirement for specifying
``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore
please do not rely on the presence of ``matcher_api_version`` for any purposes.

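A minimal sketch of this version negotiation, with simplified stand-ins for
``CompletionContext`` and ``SimpleMatcherResult`` (the dict shapes below are
illustrative, not the exact IPython types):

```python
def v1_matcher(text):
    # v1: Callable[[str], list[str]]; may omit matcher_api_version entirely.
    return [text + "_completed"]

def v2_matcher(context):
    # v2: takes a context object and returns a richer result structure
    # (simplified here to plain dicts for illustration).
    return {"completions": [{"text": context["token"] + "_completed"}]}

v2_matcher.matcher_api_version = 2  # v2 matchers must declare a literal 2

def api_version(matcher) -> int:
    # v1 matchers may omit the attribute, so default to 1.
    return getattr(matcher, "matcher_api_version", 1)
```
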
Suppression of competing matchers
---------------------------------

By default results from all matchers are combined, in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results from
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a list of identifiers of matchers which
should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.

The suppression behaviour is user-configurable via
:any:`IPCompleter.suppress_competing_matchers`.
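An illustrative sketch of the suppression logic described above (this is not
IPython's actual implementation; the matcher identifiers and result dicts are
simplified):

```python
def combine(results):
    """``results``: list of (matcher_id, MatcherResult-like dict), highest
    priority first. Returns the combined list of completion texts."""
    combined = []
    allowed = None  # do_not_suppress set, once some matcher suppressed
    for matcher_id, result in results:
        if allowed is not None and matcher_id not in allowed:
            continue  # suppressed by a higher-priority matcher
        combined.extend(result["completions"])
        if result.get("suppress") and allowed is None:
            allowed = set(result.get("do_not_suppress", ()))
    return combined

# The magic matcher suppresses everything except the file matcher:
out = combine([
    ("magic", {"completions": ["%magic"], "suppress": True,
               "do_not_suppress": ["files"]}),
    ("jedi", {"completions": ["jedi_match"]}),
    ("files", {"completions": ["file.txt"]}),
])
```
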
"""


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org

from __future__ import annotations
import builtins as builtin_mod
import enum
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import tokenize
import time
import unicodedata
import uuid
import warnings
from ast import literal_eval
from collections import defaultdict
from contextlib import contextmanager
from dataclasses import dataclass
from functools import cached_property, partial
from types import SimpleNamespace
from typing import (
    Iterable,
    Iterator,
    List,
    Tuple,
    Union,
    Any,
    Sequence,
    Dict,
    Optional,
    TYPE_CHECKING,
    Set,
    Literal,
)

from IPython.core.guarded_eval import guarded_eval, EvaluationContext
from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
from IPython.utils.decorators import sphinx_options
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.docs import GENERATING_DOCUMENTATION
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import (
    Bool,
    Enum,
    Int,
    List as ListTrait,
    Unicode,
    Dict as DictTrait,
    Union as UnionTrait,
    observe,
)
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
__skip_doctest__ = True


try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False


if TYPE_CHECKING or GENERATING_DOCUMENTATION:
    from typing import cast
    from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias
else:

    def cast(obj, type_):
        """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
        return obj

    # not required at runtime
    NotRequired = Tuple  # requires Python >=3.11
    TypedDict = Dict  # by extension of `NotRequired` requires 3.11 too
    Protocol = object  # requires Python >=3.8
    TypeAlias = Any  # requires Python >=3.10
    if GENERATING_DOCUMENTATION:
        from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
# -----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be more
# fine-grained, but is it worth it for performance? While unicode has
# characters in the range 0-0x110000, we seem to have names for only about
# 10% of those (131808 as I write this). With the ranges below we cover them
# all, with a density of ~67%; the biggest next gap we considered only adds
# about 1% density and there are 600 gaps that would need hard coding.
_UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

# sentinel value to signal lack of a match
not_found = object()

class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in the :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things()  # works

    >>> completer.do_experimental_things()  # raises.

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should
        report a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements
        on any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False
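A quick usage check (the function is reproduced in condensed form so the
snippet runs standalone):

```python
def has_open_quotes(s):
    # Returns the open quote character, or False; '"' takes precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    return False

assert has_open_quotes('print("hello') == '"'   # unbalanced double quote
assert has_open_quotes("it's") == "'"           # unbalanced single quote
assert has_open_quotes('"done"') is False       # balanced quotes
```
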


def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s


def expand_user(path:str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path)-1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val
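A round-trip sketch of ``expand_user`` and ``compress_user`` (both reproduced
in simplified form so the snippet runs standalone; the round trip only holds
when an expansion actually occurred):

```python
import os

def expand_user(path):
    # Simplified copy of the helper above.
    tilde_expand, tilde_val, newpath = False, '', path
    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        tilde_val = newpath[:-rest] if rest else newpath
    return newpath, tilde_expand, tilde_val

def compress_user(path, tilde_expand, tilde_val):
    # Restore the original '~' prefix in a completion result.
    return path.replace(tilde_val, '~') if tilde_expand else path

# No tilde: everything passes through untouched.
assert expand_user("data/file.txt") == ("data/file.txt", False, "")

# With a tilde the expansion is reversible, provided expansion happened.
newpath, tilde_expand, tilde_val = expand_user("~/file.txt")
if tilde_expand and tilde_val not in ("", "~"):
    assert compress_user(newpath, tilde_expand, tilde_val) == "~/file.txt"
```
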


def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
    """Reverse the effects of expand_user, given its outputs."""
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path

def completions_sorting_key(word):
    """key for sorting completions

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in the alphabetical order
      by their name
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if "%" not in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if "%" not in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2


class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0, so should likely be removed for 7.0.

    """

    def __init__(self, name):
        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ''
        self._origin = 'fake'

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


_JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]


class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in the proper context manager.

    This acts as a middle-ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how
    the code should be run/inspected, Prompt Toolkit (and other frontends)
    mostly need user-facing information.

    - Which range should be replaced by what.
    - Some metadata (like the completion type), or meta-information to be
      displayed to the user.

    For debugging purposes we can also store the origin of the completion
    (``jedi``, ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(self, start: int, end: int, text: str, *, type: Optional[str] = None, _origin='', signature='') -> None:
        warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
                      "It may change without warnings. "
                      "Use in corresponding context manager.",
                      category=ProvisionalCompleterWarning, stacklevel=2)

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
            (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other) -> bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer the type), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing, as it depends on the surrounding text, which Completions
        are not aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))


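Because equality and hashing deliberately ignore `type`, two completions that replace the same range with the same text collapse into one entry in a set. A minimal standalone sketch (a hypothetical `MiniCompletion` mirroring only the `__eq__`/`__hash__` logic above, without the provisional-API warning):

```python
# Minimal stand-in mirroring Completion's equality/hash semantics:
# start/end/text participate, type does not.
class MiniCompletion:
    def __init__(self, start, end, text, type=None):
        self.start, self.end, self.text, self.type = start, end, text, type

    def __eq__(self, other):
        return (self.start, self.end, self.text) == (other.start, other.end, other.text)

    def __hash__(self):
        return hash((self.start, self.end, self.text))

# Same range and text, but different inferred types: de-duplicated in a set.
a = MiniCompletion(0, 3, "print", type="function")
b = MiniCompletion(0, 3, "print", type=None)
print(len({a, b}))  # 1
```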
class SimpleCompletion:
    """Completion item to be included in the dictionary returned by new-style Matcher (API v2).

    .. warning::

        Provisional

        This class is used to describe the currently supported attributes of
        simple completion items, and any additional implementation details
        should not be relied on. Additional attributes may be included in
        future versions, and the meaning of text disambiguated from the
        current dual meaning of "text to insert" and "text to use as a label".
    """

    __slots__ = ["text", "type"]

    def __init__(self, text: str, *, type: Optional[str] = None):
        self.text = text
        self.type = type

    def __repr__(self):
        return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"


class _MatcherResultBase(TypedDict):
    """Definition of dictionary to be returned by new-style Matcher (API v2)."""

    #: Suffix of the provided ``CompletionContext.token``; if not given, defaults to the full token.
    matched_fragment: NotRequired[str]

    #: Whether to suppress results from all other matchers (True), some
    #: matchers (set of identifiers) or none (False); default is False.
    suppress: NotRequired[Union[bool, Set[str]]]

    #: Identifiers of matchers which should NOT be suppressed when this matcher
    #: requests to suppress all other matchers; defaults to an empty set.
    do_not_suppress: NotRequired[Set[str]]

    #: Are completions already ordered and should be left as-is? Default is False.
    ordered: NotRequired[bool]


@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
class SimpleMatcherResult(_MatcherResultBase, TypedDict):
    """Result of new-style completion matcher."""

    # note: TypedDict is added again to the inheritance chain
    # in order to get __orig_bases__ for documentation

    #: List of candidate completions
    completions: Sequence[SimpleCompletion]

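Since a `TypedDict` is a plain `dict` at runtime, a v2 matcher result is just a dictionary with the keys defined above. A standalone sketch (using a hypothetical `MiniSimpleCompletion` stand-in so it runs without IPython installed):

```python
# At runtime a TypedDict is an ordinary dict, so a SimpleMatcherResult-shaped
# value can be built directly. MiniSimpleCompletion is a hypothetical
# stand-in for SimpleCompletion.
class MiniSimpleCompletion:
    def __init__(self, text, *, type=None):
        self.text = text
        self.type = type

result = {
    # required key from SimpleMatcherResult:
    "completions": [
        MiniSimpleCompletion("%timeit", type="magic"),
        MiniSimpleCompletion("%time", type="magic"),
    ],
    # optional keys inherited from _MatcherResultBase:
    "matched_fragment": "%ti",  # the suffix of the token that was matched
    "suppress": False,          # do not suppress other matchers
    "ordered": True,            # keep this ordering downstream
}
print([c.text for c in result["completions"]])
```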

class _JediMatcherResult(_MatcherResultBase):
    """Matching result returned by Jedi (will be processed differently)"""

    #: list of candidate completions
    completions: Iterable[_JediCompletionLike]


@dataclass
class CompletionContext:
    """Completion context provided as an argument to matchers in the Matcher API v2."""

    # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
    # which was not explicitly visible as an argument of the matcher, making any refactor
    # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
    # from the completer, and make substituting them in sub-classes easier.

    #: Relevant fragment of code directly preceding the cursor.
    #: The extraction of the token is implemented via a splitter heuristic
    #: (following readline behaviour for legacy reasons), which is user-configurable
    #: (by switching the greedy mode).
    token: str

    #: The full available content of the editor or buffer
    full_text: str

    #: Cursor position in the line (the same for ``full_text`` and ``text``).
    cursor_position: int

    #: Cursor line in ``full_text``.
    cursor_line: int

    #: The maximum number of completions that will be used downstream.
    #: Matchers can use this information to abort early.
    #: The built-in Jedi matcher is currently exempt from this limit.
    #: If not given, return all possible completions.
    limit: Optional[int]

    @cached_property
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

    @cached_property
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]

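The two derived properties are simple slices of `full_text`, which is easy to check with a standalone sketch (a hypothetical `MiniContext` reproducing only those two `cached_property` definitions):

```python
from dataclasses import dataclass
from functools import cached_property

# Standalone sketch of the two derived properties above.
@dataclass
class MiniContext:
    full_text: str
    cursor_line: int      # 0-based line index into full_text
    cursor_position: int  # column offset within that line

    @cached_property
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]

    @cached_property
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

# Cursor at the end of the second line of a two-line buffer.
ctx = MiniContext(full_text="import os\nos.pa", cursor_line=1, cursor_position=5)
print(ctx.line_with_cursor)   # 'os.pa'
print(ctx.text_until_cursor)  # 'os.pa'
```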

#: Matcher results for API v2.
MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]


class _MatcherAPIv1Base(Protocol):
    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...


class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
    #: API version
    matcher_api_version: Optional[Literal[1]]

    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...


#: Protocol describing Matcher API v1.
MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]


class MatcherAPIv2(Protocol):
    """Protocol describing Matcher API v2."""

    #: API version
    matcher_api_version: Literal[2] = 2

    def __call__(self, context: CompletionContext) -> MatcherResult:
        """Call signature."""
        ...


Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]


def has_any_completions(result: MatcherResult) -> bool:
    """Check if the result includes any completions."""
    if hasattr(result["completions"], "__len__"):
        return len(result["completions"]) != 0
    try:
        old_iterator = result["completions"]
        first = next(old_iterator)
        result["completions"] = itertools.chain([first], old_iterator)
        return True
    except StopIteration:
        return False


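The interesting part of `has_any_completions` is the non-destructive peek: it pulls one element off a lazy iterator, then stitches it back on with `itertools.chain` so downstream consumers still see the full stream. A standalone sketch of that pattern (with `peek_is_nonempty` as a hypothetical stand-in name):

```python
import itertools

# Peek at a possibly-lazy completion stream without losing any items.
def peek_is_nonempty(result: dict) -> bool:
    completions = result["completions"]
    if hasattr(completions, "__len__"):  # lists/tuples: cheap len() check
        return len(completions) != 0
    try:
        first = next(completions)  # consumes one item from the iterator...
    except StopIteration:
        return False
    # ...so reattach it in front of the remaining items.
    result["completions"] = itertools.chain([first], completions)
    return True

result = {"completions": iter(["a.real", "a.imag"])}
print(peek_is_nonempty(result))     # True
print(list(result["completions"]))  # ['a.real', 'a.imag']  (nothing lost)
```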
def completion_matcher(
    *, priority: Optional[float] = None, identifier: Optional[str] = None, api_version: int = 1
):
    """Adds attributes describing the matcher.

    Parameters
    ----------
    priority : Optional[float]
        The priority of the matcher, determines the order of execution of matchers.
        Higher priority means that the matcher will be executed first. Defaults to 0.
    identifier : Optional[str]
        identifier of the matcher allowing users to modify the behaviour via traitlets,
        and also used for debugging (will be passed as ``origin`` with the completions).

        Defaults to matcher function's ``__qualname__`` (for example,
        ``IPCompleter.file_matcher`` for the built-in matcher defined
        as a ``file_matcher`` method of the ``IPCompleter`` class).
    api_version : Optional[int]
        version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.
    """

    def wrapper(func: Matcher):
        func.matcher_priority = priority or 0
        func.matcher_identifier = identifier or func.__qualname__
        func.matcher_api_version = api_version
        if TYPE_CHECKING:
            if api_version == 1:
                func = cast(MatcherAPIv1, func)
            elif api_version == 2:
                func = cast(MatcherAPIv2, func)
        return func

    return wrapper


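The decorator does not wrap the function; it only attaches metadata attributes that the completer later reads with `getattr`. A standalone sketch of the same pattern, applied to a toy API-v1 matcher (the names `mini_completion_matcher` and `fruit_matcher` are illustrative, not part of IPython):

```python
# Sketch of the decorator pattern above: attach metadata attributes
# to the matcher function instead of wrapping it.
def mini_completion_matcher(*, priority=None, identifier=None, api_version=1):
    def wrapper(func):
        func.matcher_priority = priority or 0
        func.matcher_identifier = identifier or func.__qualname__
        func.matcher_api_version = api_version
        return func
    return wrapper

@mini_completion_matcher(priority=10, identifier="demo_fruit_matcher")
def fruit_matcher(text):
    """Toy API-v1 matcher: returns a plain list of string completions."""
    return [w for w in ("apple", "apricot", "banana") if w.startswith(text)]

print(fruit_matcher("ap"))                # ['apple', 'apricot']
print(fruit_matcher.matcher_priority)     # 10
print(fruit_matcher.matcher_api_version)  # 1
```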
def _get_matcher_priority(matcher: Matcher):
    return getattr(matcher, "matcher_priority", 0)


def _get_matcher_id(matcher: Matcher):
    return getattr(matcher, "matcher_identifier", matcher.__qualname__)


def _get_matcher_api_version(matcher):
    return getattr(matcher, "matcher_api_version", 1)


context_matcher = partial(completion_matcher, api_version=2)


_IC = Iterable[Completion]


def _deduplicate_completions(text: str, completions: _IC) -> _IC:
    """
    Deduplicate a set of completions.

    .. warning::

        Unstable

        This function is unstable, the API may change without warning.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to deduplicate

    Yields
    ------
    `Completions` objects
        Completions coming from multiple sources may be different but end up
        having the same effect when applied to ``text``. If this is the case,
        this will consider the completions as equal and only emit the first
        encountered.
        Not folded in `completions()` yet for debugging purposes, and to detect
        when the IPython completer does return things that Jedi does not, but
        should be at some point.
    """
    completions = list(completions)
    if not completions:
        return

    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)

    seen = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            yield c
            seen.add(new_text)


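The deduplication key is the *effective* resulting text, not the completion itself: two candidates with different ranges can produce the same replacement. A standalone sketch of that keying, with `(start, end, text)` tuples standing in for `Completion` objects:

```python
# Both candidates below turn 'foo.ba' into 'foo.bar', so only the
# first one survives deduplication.
text = "foo.ba"
candidates = [
    (4, 6, "bar"),       # replace just 'ba' with 'bar'
    (0, 6, "foo.bar"),   # replace the whole token
]

new_start = min(start for start, _, _ in candidates)
new_end = max(end for _, end, _ in candidates)

seen, kept = set(), []
for start, end, ctext in candidates:
    # The effect of a completion, normalized over the widest replaced span.
    effect = text[new_start:start] + ctext + text[end:new_end]
    if effect not in seen:
        seen.add(effect)
        kept.append((start, end, ctext))

print(kept)  # [(4, 6, 'bar')]
```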
def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
    """
    Rectify a set of completions to all have the same ``start`` and ``end``.

    .. warning::

        Unstable

        This function is unstable, the API may change without warning.
        It will also raise unless used in a proper context manager.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to rectify
    _debug : bool
        Log failed completion

    Notes
    -----
    :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
    the Jupyter Protocol requires them to behave like so. This will readjust
    the completions to have the same ``start`` and ``end`` by padding both
    extremities with surrounding text.

    During stabilisation this should support a ``_debug`` option to log which
    completions are returned by the IPython completer and not found in Jedi,
    in order to make upstream bug reports.
    """
    warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
                  "It may change without warnings. "
                  "Use in corresponding context manager.",
                  category=ProvisionalCompleterWarning, stacklevel=2)

    completions = list(completions)
    if not completions:
        return
    starts = (c.start for c in completions)
    ends = (c.end for c in completions)

    new_start = min(starts)
    new_end = max(ends)

    seen_jedi = set()
    seen_python_matches = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if c._origin == 'jedi':
            seen_jedi.add(new_text)
        elif c._origin == 'IPCompleter.python_matches':
            seen_python_matches.add(new_text)
        yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
    diff = seen_python_matches.difference(seen_jedi)
    if diff and _debug:
        print('IPython.python matches have extras:', diff)


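The padding arithmetic above can be seen in isolation: every completion is widened to a common `(new_start, new_end)` span by borrowing the surrounding text, so all rectified completions describe the same replacement range. A sketch on plain tuples (the sample completions are illustrative):

```python
# Widen each (start, end, text) completion to a common span by padding
# with the surrounding buffer text, as rectify_completions does.
text = "numpy.arang"
completions = [
    (6, 11, "arange"),        # a matcher that replaces only 'arang'
    (0, 11, "numpy.arange"),  # a matcher that replaces the full path
]

new_start = min(start for start, _, _ in completions)
new_end = max(end for _, end, _ in completions)

rectified = [
    (new_start, new_end, text[new_start:start] + ctext + text[end:new_end])
    for start, end, ctext in completions
]
# Every entry now spans (0, 11), as the Jupyter protocol expects.
print(rectified)
```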
if sys.platform == 'win32':
    DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
else:
    DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'

GREEDY_DELIMS = ' =\r\n'


class CompletionSplitter(object):
    """An object to split an input line in a manner similar to readline.

    By having our own implementation, we can expose readline-like completion
    in a uniform manner to all frontends. This object only needs to be given
    the line of text to be split and the cursor position on said line, and it
    returns the 'word' to be completed at the cursor after splitting the
    entire line.

    What characters are used as splitting delimiters can be controlled by
    setting the ``delims`` attribute (this is a property that internally
    automatically builds the necessary regular expression)"""

    # Private interface

    # A string of delimiter characters. The default value makes sense for
    # IPython's most typical usage patterns.
    _delims = DELIMS

    # The expression (a normal string) to be compiled into a regular expression
    # for actual splitting. We store it as an attribute mostly for ease of
    # debugging, since this type of code can be so tricky to debug.
    _delim_expr = None

    # The regular expression that does the actual splitting
    _delim_re = None

    def __init__(self, delims=None):
        delims = CompletionSplitter._delims if delims is None else delims
        self.delims = delims

    @property
    def delims(self):
        """Return the string of delimiter characters."""
        return self._delims

    @delims.setter
    def delims(self, delims):
        """Set the delimiters for line splitting."""
        expr = '[' + ''.join('\\' + c for c in delims) + ']'
        self._delim_re = re.compile(expr)
        self._delims = delims
        self._delim_expr = expr

    def split_line(self, line, cursor_pos=None):
        """Split a line of text with a cursor at the given position.
        """
        l = line if cursor_pos is None else line[:cursor_pos]
        return self._delim_re.split(l)[-1]



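The splitting logic above fits in a few lines: escape every delimiter into a character class, split everything left of the cursor, and keep the last fragment. A standalone sketch using the non-Windows `DELIMS` value from above (note that `.` is deliberately not a delimiter, so attribute chains survive as one token):

```python
import re

# Minimal sketch of CompletionSplitter: split the text left of the cursor
# on the delimiter set and keep the trailing fragment.
DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'  # non-Windows default above
delim_re = re.compile('[' + ''.join('\\' + c for c in DELIMS) + ']')

def split_line(line, cursor_pos=None):
    l = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(l)[-1]

print(split_line("print(foo.ba"))        # 'foo.ba'  ('.' is not a delimiter)
print(split_line("a = os.path.jo", 14))  # 'os.path.jo'
```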
class Completer(Configurable):

    greedy = Bool(
        False,
        help="""Activate greedy completion.

        .. deprecated:: 8.8
            Use :any:`evaluation` and :any:`auto_close_dict_keys` instead.

        When enabled in IPython 8.8+, activates the following settings for compatibility:
        - ``evaluation = 'unsafe'``
        - ``auto_close_dict_keys = True``
        """,
    ).tag(config=True)

    evaluation = Enum(
        ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
        default_value="limited",
        help="""Code evaluation under completion.

        Successive options enable more eager evaluation for more accurate completion suggestions,
        including for nested dictionaries, nested lists, or even results of function calls.
        Setting `unsafe` or higher can lead to evaluation of arbitrary user code on TAB
        with potentially dangerous side effects.

        Allowed values are:
        - `forbidden`: no evaluation at all
        - `minimal`: evaluation of literals and access to built-in namespaces; no item/attribute evaluation nor access to locals/globals
        - `limited` (default): access to all namespaces, evaluation of hard-coded methods (``keys()``, ``__getattr__``, ``__getitem__``, etc.) on allow-listed objects (e.g. ``dict``, ``list``, ``tuple``, ``pandas.Series``)
        - `unsafe`: evaluation of all methods and function calls, but not of syntax with side effects like `del x`
        - `dangerous`: completely arbitrary evaluation
        """,
    ).tag(config=True)

943 use_jedi = Bool(default_value=JEDI_INSTALLED,
943 use_jedi = Bool(default_value=JEDI_INSTALLED,
944 help="Experimental: Use Jedi to generate autocompletions. "
944 help="Experimental: Use Jedi to generate autocompletions. "
945 "Default to True if jedi is installed.").tag(config=True)
945 "Default to True if jedi is installed.").tag(config=True)
946
946
947 jedi_compute_type_timeout = Int(default_value=400,
947 jedi_compute_type_timeout = Int(default_value=400,
948 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
948 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
949 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
949 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
950 performance by preventing jedi to build its cache.
950 performance by preventing jedi to build its cache.
951 """).tag(config=True)
951 """).tag(config=True)

    debug = Bool(default_value=False,
        help='Enable debug for the Completer. Mostly prints extra '
        'information for experimental jedi integration.')\
        .tag(config=True)

    backslash_combining_completions = Bool(True,
        help="Enable unicode completions, e.g. \\alpha<tab> . "
        "Includes completion of latex commands, unicode names, and expanding "
        "unicode characters back to latex commands.").tag(config=True)

    auto_close_dict_keys = Bool(
        False, help="""Enable auto-closing dictionary keys."""
    ).tag(config=True)

    def __init__(self, namespace=None, global_namespace=None, **kwargs):
        """Create a new completer for the command line.

        Completer(namespace=ns, global_namespace=ns2) -> completer instance.

        If unspecified, the default namespace where completions are performed
        is __main__ (technically, __main__.__dict__). Namespaces should be
        given as dictionaries.

        An optional second namespace can be given. This allows the completer
        to handle cases where both the local and global scopes need to be
        distinguished.
        """

        # Don't bind to namespace quite yet, but flag whether the user wants a
        # specific namespace or to use __main__.__dict__. This will allow us
        # to bind to __main__.__dict__ at completion time, not now.
        if namespace is None:
            self.use_main_ns = True
        else:
            self.use_main_ns = False
            self.namespace = namespace

        # The global namespace, if given, can be bound directly
        if global_namespace is None:
            self.global_namespace = {}
        else:
            self.global_namespace = global_namespace

        self.custom_matchers = []

        super(Completer, self).__init__(**kwargs)

    def complete(self, text, state):
        """Return the next possible completion for 'text'.

        This is called successively with state == 0, 1, 2, ... until it
        returns None. The completion should begin with 'text'.

        """
        if self.use_main_ns:
            self.namespace = __main__.__dict__

        if state == 0:
            if "." in text:
                self.matches = self.attr_matches(text)
            else:
                self.matches = self.global_matches(text)
        try:
            return self.matches[state]
        except IndexError:
            return None

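The readline-style protocol above is driven by calling `complete` with `state == 0, 1, 2, ...` until `None` is returned. A minimal stand-in (the `TinyCompleter` name and word list are illustrative only) shows the same state machine:

```python
class TinyCompleter:
    """Minimal stand-in mimicking the readline-style complete(text, state) protocol."""

    def __init__(self, words):
        self.words = words
        self.matches = []

    def complete(self, text, state):
        # state == 0 recomputes the match list; higher states index into it
        if state == 0:
            self.matches = [w for w in self.words if w.startswith(text)]
        try:
            return self.matches[state]
        except IndexError:
            return None

tc = TinyCompleter(["print", "property", "pass"])
results = []
state = 0
while True:
    m = tc.complete("pr", state)
    if m is None:
        break
    results.append(m)
    state += 1
print(results)
```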
    def global_matches(self, text):
        """Compute matches when text is a simple name.

        Return a list of all keywords, built-in functions and names currently
        defined in self.namespace or self.global_namespace that match.

        """
        matches = []
        match_append = matches.append
        n = len(text)
        for lst in [
            keyword.kwlist,
            builtin_mod.__dict__.keys(),
            list(self.namespace.keys()),
            list(self.global_namespace.keys()),
        ]:
            for word in lst:
                if word[:n] == text and word != "__builtins__":
                    match_append(word)

        snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
        for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
            shortened = {
                "_".join([sub[0] for sub in word.split("_")]): word
                for word in lst
                if snake_case_re.match(word)
            }
            for word in shortened.keys():
                if word[:n] == text and word != "__builtins__":
                    match_append(shortened[word])
        return matches
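The second loop above offers snake_case names by their initials joined with underscores (e.g. `foo_bar_baz` is offered for the abbreviation `f_b_b`). A standalone sketch of that logic (`abbreviation_matches` is a hypothetical helper extracted for illustration):

```python
import re

snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

def abbreviation_matches(text, names):
    # map abbreviation (first letter of each segment, joined by "_") -> full name
    n = len(text)
    shortened = {
        "_".join(sub[0] for sub in name.split("_")): name
        for name in names
        if snake_case_re.match(name)
    }
    return [full for abbr, full in shortened.items() if abbr[:n] == text]

# typing "f_b" offers the full snake_case name
print(abbreviation_matches("f_b", ["foo_bar_baz", "plain", "x_y"]))
```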

    def attr_matches(self, text):
        """Compute matches when text contains a dot.

        Assuming the text is of the form NAME.NAME....[NAME], and is
        evaluatable in self.namespace or self.global_namespace, it will be
        evaluated and its attributes (as revealed by dir()) are used as
        possible completions. (For class instances, class members are
        also considered.)

        WARNING: this can still invoke arbitrary C code, if an object
        with a __getattr__ hook is evaluated.

        """
        m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
        if not m2:
            return []
        expr, attr = m2.group(1, 2)

        obj = self._evaluate_expr(expr)

        if obj is not_found:
            return []

        if self.limit_to__all__ and hasattr(obj, '__all__'):
            words = get__all__entries(obj)
        else:
            words = dir2(obj)

        try:
            words = generics.complete_object(obj, words)
        except TryNext:
            pass
        except AssertionError:
            raise
        except Exception:
            # Silence errors from completion function
            #raise # dbg
            pass
        # Build match list to return
        n = len(attr)
        return ["%s.%s" % (expr, w) for w in words if w[:n] == attr]
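The two steps of `attr_matches` can be demonstrated in isolation: first the regex splits the line into an evaluatable expression and an attribute prefix, then `dir()` of the evaluated object is filtered by that prefix (here `str` stands in for a user object):

```python
import re

# Step 1: split "foo.bar.ba" at the last dot; the greedy (.+) keeps the
# whole evaluatable expression, (\w*) is the attribute prefix typed so far.
m = re.match(r"(.+)\.(\w*)$", "foo.bar.ba")
expr, attr = m.group(1, 2)
print(expr, attr)  # expr is "foo.bar", attr is "ba"

# Step 2: filter the attributes of the evaluated object by the prefix.
words = dir(str)
n = len("sta")
print([w for w in words if w[:n] == "sta"])
```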

    def _evaluate_expr(self, expr):
        obj = not_found
        done = False
        while not done and expr:
            try:
                obj = guarded_eval(
                    expr,
                    EvaluationContext(
                        globals_=self.global_namespace,
                        locals_=self.namespace,
                        evaluation=self.evaluation,
                    ),
                )
                done = True
            except Exception as e:
                if self.debug:
                    print("Evaluation exception", e)
                # trim the expression to remove any invalid prefix,
                # e.g. user starts `(d[`, so we get `expr = '(d'`,
                # where the parenthesis is not closed.
                # TODO: make this faster by reusing parts of the computation?
                expr = expr[1:]
        return obj
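The retry loop above can be sketched with the built-in `eval` standing in for IPython's `guarded_eval` (an assumption made here purely for a self-contained illustration; the real code never uses bare `eval`):

```python
NOT_FOUND = object()

def evaluate_with_trimming(expr, namespace):
    # keep dropping one leading character until the remainder evaluates,
    # e.g. '(d' (unclosed parenthesis) -> 'd'
    obj = NOT_FOUND
    while expr:
        try:
            obj = eval(expr, {}, namespace)  # illustration only; not guarded
            break
        except Exception:
            expr = expr[1:]
    return obj

ns = {"d": {1: "one"}}
# '(d' is not valid on its own; trimming the '(' recovers the dict
print(evaluate_with_trimming("(d", ns))
```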

def get__all__entries(obj):
    """Returns the strings in the __all__ attribute."""
    try:
        words = getattr(obj, '__all__')
    except Exception:
        return []

    return [w for w in words if isinstance(w, str)]
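Only string entries of `__all__` survive the filter; a module-like object is enough to demonstrate this (the `FakeModule` name and its contents are illustrative, and `get_all_entries` below restates the function's logic so the example is self-contained):

```python
class FakeModule:
    __all__ = ["public_fn", "PublicClass", 42]  # 42 is filtered out

def get_all_entries(obj):
    # same logic as get__all__entries above
    try:
        words = getattr(obj, "__all__")
    except Exception:
        return []
    return [w for w in words if isinstance(w, str)]

print(get_all_entries(FakeModule))  # non-strings dropped
print(get_all_entries(object()))    # no __all__ at all
```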


class DictKeyState(enum.Flag):
    """Represent state of the key match in context of other possible matches.

    - given `d1 = {'a': 1}`, completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
    - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
    - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
    - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | IN_TUPLE}`
    """

    BASELINE = 0
    END_OF_ITEM = enum.auto()
    END_OF_TUPLE = enum.auto()
    IN_TUPLE = enum.auto()
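`enum.Flag` is what lets several key states combine via `|`, mirroring the `|=` updates performed in `match_dict_keys` below; a standalone demonstration:

```python
import enum

class DictKeyState(enum.Flag):
    BASELINE = 0
    END_OF_ITEM = enum.auto()
    END_OF_TUPLE = enum.auto()
    IN_TUPLE = enum.auto()

# the same key can be both a complete item and the start of a longer tuple
state = DictKeyState.BASELINE
state |= DictKeyState.END_OF_ITEM
state |= DictKeyState.IN_TUPLE

print(DictKeyState.END_OF_ITEM in state)    # membership test on combined flags
print(DictKeyState.END_OF_TUPLE in state)
```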


def _parse_tokens(c):
    tokens = []
    token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(token_generator))
        except tokenize.TokenError:
            return tokens
        except StopIteration:
            return tokens
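Swallowing `TokenError` is the point here: an unterminated input such as `d[` (open bracket, the typical state mid-completion) still yields the tokens seen so far rather than raising:

```python
import tokenize

def parse_tokens(c):
    # same logic as _parse_tokens above: collect tokens until an error or EOF
    tokens = []
    token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(token_generator))
        except (tokenize.TokenError, StopIteration):
            return tokens

strings = [t.string for t in parse_tokens("d[")]
print(strings)  # the NAME and OP tokens survive the unterminated bracket
```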


def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
    """Match any valid Python numeric literal in a prefix of dictionary keys.

    References:
    - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
    - https://docs.python.org/3/library/tokenize.html
    """
    if prefix[-1].isspace():
        # if user typed a space we do not have anything to complete
        # even if there was a valid number token before
        return None
    tokens = _parse_tokens(prefix)
    rev_tokens = reversed(tokens)
    skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
    number = None
    for token in rev_tokens:
        if token.type in skip_over:
            continue
        if number is None:
            if token.type == tokenize.NUMBER:
                number = token.string
                continue
            else:
                # we did not match a number
                return None
        if token.type == tokenize.OP:
            if token.string == ",":
                break
            if token.string in {"+", "-"}:
                number = token.string + number
            else:
                return None
    return number
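The reverse token scan accepts only a trailing numeric literal, optionally re-attaching a sign; a self-contained restatement (with `_parse_tokens` inlined so the sketch runs on its own):

```python
import tokenize

def parse_tokens(c):
    tokens = []
    gen = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(gen))
        except (tokenize.TokenError, StopIteration):
            return tokens

def match_number_prefix(prefix):
    # same logic as _match_number_in_dict_key_prefix above
    if prefix[-1].isspace():
        return None
    number = None
    skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
    for token in reversed(parse_tokens(prefix)):
        if token.type in skip_over:
            continue
        if number is None:
            if token.type != tokenize.NUMBER:
                return None
            number = token.string
            continue
        if token.type == tokenize.OP:
            if token.string == ",":
                break
            if token.string in {"+", "-"}:
                number = token.string + number
            else:
                return None
    return number

print(match_number_prefix("0x1f"))  # hex literal kept verbatim
print(match_number_prefix("-123"))  # sign is re-attached
print(match_number_prefix("abc"))   # not a number
```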


_INT_FORMATS = {
    "0b": bin,
    "0o": oct,
    "0x": hex,
}


def match_dict_keys(
    keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
    prefix: str,
    delims: str,
    extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
) -> Tuple[str, int, Dict[str, DictKeyState]]:
    """Used by dict_key_matches, matching the prefix to a list of keys.

    Parameters
    ----------
    keys
        list of keys in dictionary currently being completed.
    prefix
        Part of the text already typed by the user. E.g. `mydict[b'fo`
    delims
        String of delimiters to consider when finding the current key.
    extra_prefix : optional
        Part of the text already typed in multi-key index cases. E.g. for
        `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.

    Returns
    -------
    A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
    ``quote`` being the quote that needs to be used to close the current string,
    ``token_start`` the position where the replacement should start occurring,
    ``matched`` a dictionary with completion matches as keys and the state of
    each key match as values.
    """
    prefix_tuple = extra_prefix if extra_prefix else ()

    prefix_tuple_size = sum(
        [
            # for pandas, do not count slices as taking space
            not isinstance(k, slice)
            for k in prefix_tuple
        ]
    )
    text_serializable_types = (str, bytes, int, float, slice)

    def filter_prefix_tuple(key):
        # Reject too short keys
        if len(key) <= prefix_tuple_size:
            return False
        # Reject keys which cannot be serialised to text
        for k in key:
            if not isinstance(k, text_serializable_types):
                return False
        # Reject keys that do not match the prefix
        for k, pt in zip(key, prefix_tuple):
            if k != pt and not isinstance(pt, slice):
                return False
        # All checks passed!
        return True

    filtered_key_is_final: Dict[
        Union[str, bytes, int, float], DictKeyState
    ] = defaultdict(lambda: DictKeyState.BASELINE)

    for k in keys:
        # If at least one of the matches is not final, mark as undetermined.
        # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
        # `111` appears final on first match but is not final on the second.

        if isinstance(k, tuple):
            if filter_prefix_tuple(k):
                key_fragment = k[prefix_tuple_size]
                filtered_key_is_final[key_fragment] |= (
                    DictKeyState.END_OF_TUPLE
                    if len(k) == prefix_tuple_size + 1
                    else DictKeyState.IN_TUPLE
                )
        elif prefix_tuple_size > 0:
            # we are completing a tuple but this key is not a tuple,
            # so we should ignore it
            pass
        else:
            if isinstance(k, text_serializable_types):
                filtered_key_is_final[k] |= DictKeyState.END_OF_ITEM

    filtered_keys = filtered_key_is_final.keys()

    if not prefix:
        return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}

    quote_match = re.search("(?:\"|')", prefix)
    is_user_prefix_numeric = False

    if quote_match:
        quote = quote_match.group()
        valid_prefix = prefix + quote
        try:
            prefix_str = literal_eval(valid_prefix)
        except Exception:
            return "", 0, {}
    else:
        # If it does not look like a string, let's assume
        # we are dealing with a number or variable.
        number_match = _match_number_in_dict_key_prefix(prefix)

        # We do not want the key matcher to suggest variable names so we yield:
        if number_match is None:
            # The alternative would be to assume that the user forgot the quote
            # and if the substring matches, suggest adding it at the start.
            return "", 0, {}

        prefix_str = number_match
        is_user_prefix_numeric = True
        quote = ""

    pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
    token_match = re.search(pattern, prefix, re.UNICODE)
    assert token_match is not None  # silence mypy
    token_start = token_match.start()
    token_prefix = token_match.group()

    matched: Dict[str, DictKeyState] = {}

    for key in filtered_keys:
        if isinstance(key, (int, float)):
            # This key is a number but the user did not type a number.
            if not is_user_prefix_numeric:
                continue
            str_key = str(key)
            if isinstance(key, int):
                int_base = prefix_str[:2].lower()
                # if user typed integer using binary/oct/hex notation:
                if int_base in _INT_FORMATS:
                    int_format = _INT_FORMATS[int_base]
                    str_key = int_format(key)
        else:
            # This key is a string but the user typed a number.
            if is_user_prefix_numeric:
                continue
            str_key = key
        try:
            if not str_key.startswith(prefix_str):
                continue
        except (AttributeError, TypeError, UnicodeError):
            # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
            continue

        # reformat remainder of key to begin with prefix
        rem = str_key[len(prefix_str):]
        # force repr wrapped in '
        rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
        rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
        if quote == '"':
            # The entered prefix is quoted with ",
            # but the match is quoted with '.
            # A contained " hence needs escaping for comparison:
            rem_repr = rem_repr.replace('"', '\\"')

        # then reinsert prefix from start of token
        match = "%s%s" % (token_prefix, rem_repr)

        matched[match] = filtered_key_is_final[key]
    return quote, token_start, matched
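A heavily reduced sketch of the string-key path through `match_dict_keys` (no bytes, number, or tuple handling, and no repr re-escaping; `match_str_dict_keys` and the tiny `delims` default are illustrative only): close the user's open quote, `literal_eval` the prefix, then filter and complete the remainder from the token start.

```python
import re
from ast import literal_eval

def match_str_dict_keys(keys, prefix, delims=" ,'\""):
    # find the quote the user opened, e.g. ' in "'fo"
    quote_match = re.search("(?:\"|')", prefix)
    if not quote_match:
        return "", 0, []
    quote = quote_match.group()
    try:
        prefix_str = literal_eval(prefix + quote)  # "'fo" + "'" -> "fo"
    except Exception:
        return "", 0, []
    # position after the last delimiter: where the replacement starts
    pattern = "[^" + "".join("\\" + c for c in delims) + "]*$"
    token_match = re.search(pattern, prefix)
    token_start = token_match.start()
    token_prefix = token_match.group()
    matches = [
        token_prefix + k[len(prefix_str):]
        for k in keys
        if isinstance(k, str) and k.startswith(prefix_str)
    ]
    return quote, token_start, matches

print(match_str_dict_keys(["foo", "food", "bar"], "'fo"))
```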


def cursor_to_position(text: str, line: int, column: int) -> int:
    """
    Convert the (line, column) position of the cursor in text to an offset in
    a string.

    Parameters
    ----------
    text : str
        The text in which to calculate the cursor offset
    line : int
        Line of the cursor; 0-indexed
    column : int
        Column of the cursor; 0-indexed

    Returns
    -------
    Position of the cursor in ``text``, 0-indexed.

    See Also
    --------
    position_to_cursor : reciprocal of this function

    """
    lines = text.split('\n')
    assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))

    return sum(len(l) + 1 for l in lines[:line]) + column
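The offset computation sums every full line before the cursor (each plus one for its newline) and then adds the column:

```python
def cursor_to_position(text, line, column):
    # same logic as cursor_to_position above
    lines = text.split('\n')
    assert line <= len(lines)
    return sum(len(l) + 1 for l in lines[:line]) + column

text = "ab\ncd"
offset = cursor_to_position(text, 1, 1)  # line 1, column 1
print(offset, text[offset])  # the cursor sits on 'd'
```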

def position_to_cursor(text: str, offset: int) -> Tuple[int, int]:
    """
    Convert the position of the cursor in text (0-indexed) to a line
    number (0-indexed) and a column number (0-indexed) pair.

    Position should be a valid position in ``text``.

    Parameters
    ----------
    text : str
        The text in which to calculate the cursor offset
    offset : int
        Position of the cursor in ``text``, 0-indexed.

    Returns
    -------
    (line, column) : (int, int)
        Line of the cursor; 0-indexed, column of the cursor; 0-indexed

    See Also
    --------
    cursor_to_position : reciprocal of this function

    """

    assert 0 <= offset <= len(text), "0 <= %s <= %s" % (offset, len(text))

    before = text[:offset]
    blines = before.split('\n')  # ! splitlines trims a trailing \n
    line = before.count('\n')
    col = len(blines[-1])
    return line, col
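As the docstrings state, the two conversions are reciprocal; a quick roundtrip check over every valid offset of a small buffer:

```python
def cursor_to_position(text, line, column):
    lines = text.split('\n')
    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text, offset):
    before = text[:offset]
    blines = before.split('\n')
    return before.count('\n'), len(blines[-1])

text = "ab\ncd\n"
# every offset must survive the roundtrip offset -> (line, col) -> offset
ok = all(
    cursor_to_position(text, *position_to_cursor(text, i)) == i
    for i in range(len(text) + 1)
)
print(ok)
```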


def _safe_isinstance(obj, module, class_name, *attrs):
    """Checks if obj is an instance of module.class_name if loaded."""
    if module in sys.modules:
        m = sys.modules[module]
        for attr in [class_name, *attrs]:
            m = getattr(m, attr)
        return isinstance(obj, m)
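The point of this helper is to type-check against a class without importing its module: the check is only attempted when the module is already in `sys.modules`, and the function implicitly returns `None` otherwise. A standalone restatement:

```python
import collections.abc  # imported so the positive case below can resolve
import sys

def safe_isinstance(obj, module, class_name, *attrs):
    # same logic as _safe_isinstance above
    if module in sys.modules:
        m = sys.modules[module]
        for attr in [class_name, *attrs]:
            m = getattr(m, attr)
        return isinstance(obj, m)
    # implicitly returns None when the module was never imported

print(safe_isinstance({}, "collections.abc", "Mapping"))
print(safe_isinstance({}, "module_that_was_never_imported", "Thing"))
```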


@context_matcher()
def back_unicode_name_matcher(context: CompletionContext):
    """Match Unicode characters back to their Unicode names.

    Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
    """
    fragment, matches = back_unicode_name_matches(context.text_until_cursor)
    return _convert_matcher_v1_result_to_v2(
        matches, type="unicode", fragment=fragment, suppress_if_matches=True
    )
1438
1438
1439
1439
1440 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1440 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1441 """Match Unicode characters back to Unicode name
1441 """Match Unicode characters back to Unicode name
1442
1442
1443 This does ``β˜ƒ`` -> ``\\snowman``
1443 This does ``β˜ƒ`` -> ``\\snowman``
1444
1444
1445 Note that snowman is not a valid python3 combining character but will be expanded.
1445 Note that snowman is not a valid python3 combining character but will be expanded.
1446 Though it will not recombine back to the snowman character by the completion machinery.
1446 Though it will not recombine back to the snowman character by the completion machinery.
1447
1447
1448 This will not either back-complete standard sequences like \\n, \\b ...
1448 This will not either back-complete standard sequences like \\n, \\b ...
1449
1449
1450 .. deprecated:: 8.6
1450 .. deprecated:: 8.6
1451 You can use :meth:`back_unicode_name_matcher` instead.
1451 You can use :meth:`back_unicode_name_matcher` instead.
1452
1452
1453 Returns
1453 Returns
1454 =======
1454 =======
1455
1455
1456 Return a tuple with two elements:
1456 Return a tuple with two elements:
1457
1457
1458 - The Unicode character that was matched (preceded by a backslash), or an
1458 - The Unicode character that was matched (preceded by a backslash), or an
1459 empty string,
1459 empty string,
1460 - a sequence of length 1 containing the name of the matched Unicode
1460 - a sequence of length 1 containing the name of the matched Unicode
1461 character, preceded by a backslash, or empty if there is no match.
1461 character, preceded by a backslash, or empty if there is no match.
1462 """
1462 """
1463 if len(text)<2:
1463 if len(text)<2:
1464 return '', ()
1464 return '', ()
1465 maybe_slash = text[-2]
1465 maybe_slash = text[-2]
1466 if maybe_slash != '\\':
1466 if maybe_slash != '\\':
1467 return '', ()
1467 return '', ()
1468
1468
1469 char = text[-1]
1469 char = text[-1]
1470 # no expand on quote for completion in strings.
1470 # no expand on quote for completion in strings.
1471 # nor backcomplete standard ascii keys
1471 # nor backcomplete standard ascii keys
1472 if char in string.ascii_letters or char in ('"',"'"):
1472 if char in string.ascii_letters or char in ('"',"'"):
1473 return '', ()
1473 return '', ()
1474 try:
1474 try:
1475 unic = unicodedata.name(char)
1475 unic = unicodedata.name(char)
1476 return '\\'+char,('\\'+unic,)
1476 return '\\'+char,('\\'+unic,)
1477 except (KeyError, ValueError):  # unicodedata.name raises ValueError for unnamed characters
1477 except (KeyError, ValueError):  # unicodedata.name raises ValueError for unnamed characters
1478 pass
1478 pass
1479 return '', ()
1479 return '', ()
1480
1480
1481
1481
1482 @context_matcher()
1482 @context_matcher()
1483 def back_latex_name_matcher(context: CompletionContext):
1483 def back_latex_name_matcher(context: CompletionContext):
1484 """Match latex characters back to unicode name
1484 """Match latex characters back to unicode name
1485
1485
1486 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1486 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1487 """
1487 """
1488 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1488 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1489 return _convert_matcher_v1_result_to_v2(
1489 return _convert_matcher_v1_result_to_v2(
1490 matches, type="latex", fragment=fragment, suppress_if_matches=True
1490 matches, type="latex", fragment=fragment, suppress_if_matches=True
1491 )
1491 )
1492
1492
1493
1493
1494 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1494 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1495 """Match latex characters back to unicode name
1495 """Match latex characters back to unicode name
1496
1496
1497 This does ``\\β„΅`` -> ``\\aleph``
1497 This does ``\\β„΅`` -> ``\\aleph``
1498
1498
1499 .. deprecated:: 8.6
1499 .. deprecated:: 8.6
1500 You can use :meth:`back_latex_name_matcher` instead.
1500 You can use :meth:`back_latex_name_matcher` instead.
1501 """
1501 """
1502 if len(text)<2:
1502 if len(text)<2:
1503 return '', ()
1503 return '', ()
1504 maybe_slash = text[-2]
1504 maybe_slash = text[-2]
1505 if maybe_slash != '\\':
1505 if maybe_slash != '\\':
1506 return '', ()
1506 return '', ()
1507
1507
1508
1508
1509 char = text[-1]
1509 char = text[-1]
1510 # no expand on quote for completion in strings.
1510 # no expand on quote for completion in strings.
1511 # nor backcomplete standard ascii keys
1511 # nor backcomplete standard ascii keys
1512 if char in string.ascii_letters or char in ('"',"'"):
1512 if char in string.ascii_letters or char in ('"',"'"):
1513 return '', ()
1513 return '', ()
1514 try:
1514 try:
1515 latex = reverse_latex_symbol[char]
1515 latex = reverse_latex_symbol[char]
1516 # '\\' replaces the \ as well
1516 # '\\' replaces the \ as well
1517 return '\\'+char,[latex]
1517 return '\\'+char,[latex]
1518 except KeyError:
1518 except KeyError:
1519 pass
1519 pass
1520 return '', ()
1520 return '', ()
1521
1521
1522
1522
1523 def _formatparamchildren(parameter) -> str:
1523 def _formatparamchildren(parameter) -> str:
1524 """
1524 """
1525 Get parameter name and value from Jedi Private API
1525 Get parameter name and value from Jedi Private API
1526
1526
1527 Jedi does not expose a simple way to get `param=value` from its API.
1527 Jedi does not expose a simple way to get `param=value` from its API.
1528
1528
1529 Parameters
1529 Parameters
1530 ----------
1530 ----------
1531 parameter
1531 parameter
1532 Jedi's function `Param`
1532 Jedi's function `Param`
1533
1533
1534 Returns
1534 Returns
1535 -------
1535 -------
1536 A string like 'a', 'b=1', '*args', '**kwargs'
1536 A string like 'a', 'b=1', '*args', '**kwargs'
1537
1537
1538 """
1538 """
1539 description = parameter.description
1539 description = parameter.description
1540 if not description.startswith('param '):
1540 if not description.startswith('param '):
1541 raise ValueError('Jedi function parameter description has changed format. '
1541 raise ValueError('Jedi function parameter description has changed format. '
1542 'Expected "param ...", found %r.' % description)
1542 'Expected "param ...", found %r.' % description)
1543 return description[6:]
1543 return description[6:]
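The prefix-stripping above amounts to dropping Jedi's `"param "` marker; a standalone sketch (`format_param` is a hypothetical stand-in):

```python
def format_param(description):
    # Jedi describes parameters as "param <name>[=<default>]"; keep the tail
    if not description.startswith("param "):
        raise ValueError('Expected "param ...", found %r.' % description)
    return description[6:]

print(format_param("param b=1"))       # b=1
print(format_param("param **kwargs"))  # **kwargs
```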
1544
1544
1545 def _make_signature(completion)-> str:
1545 def _make_signature(completion)-> str:
1546 """
1546 """
1547 Make the signature from a jedi completion
1547 Make the signature from a jedi completion
1548
1548
1549 Parameters
1549 Parameters
1550 ----------
1550 ----------
1551 completion : jedi.Completion
1551 completion : jedi.Completion
1552 the Jedi completion object (expected to complete a function type)
1552 the Jedi completion object (expected to complete a function type)
1553
1553
1554 Returns
1554 Returns
1555 -------
1555 -------
1556 a string consisting of the function signature, with the parentheses but
1556 a string consisting of the function signature, with the parentheses but
1557 without the function name. Example:
1557 without the function name. Example:
1558 `(a, *args, b=1, **kwargs)`
1558 `(a, *args, b=1, **kwargs)`
1559
1559
1560 """
1560 """
1561
1561
1562 # it looks like this might work on jedi 0.17
1562 # it looks like this might work on jedi 0.17
1563 if hasattr(completion, 'get_signatures'):
1563 if hasattr(completion, 'get_signatures'):
1564 signatures = completion.get_signatures()
1564 signatures = completion.get_signatures()
1565 if not signatures:
1565 if not signatures:
1566 return '(?)'
1566 return '(?)'
1567
1567
1568 c0 = completion.get_signatures()[0]
1568 c0 = completion.get_signatures()[0]
1569 return '('+c0.to_string().split('(', maxsplit=1)[1]
1569 return '('+c0.to_string().split('(', maxsplit=1)[1]
1570
1570
1571 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1571 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1572 for p in signature.defined_names()) if f])
1572 for p in signature.defined_names()) if f])
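The signature-trimming step can be shown with stand-in objects that mimic only the slice of Jedi's API used above (`FakeSignature`/`FakeCompletion` are assumed shapes, not real Jedi classes):

```python
class FakeSignature:
    # mimics jedi's Signature.to_string() for illustration only
    def to_string(self):
        return "f(a, *args, b=1, **kwargs)"

class FakeCompletion:
    def get_signatures(self):
        return [FakeSignature()]

def make_signature(completion):
    # keep everything from the first '(' onward, dropping the function name
    signatures = completion.get_signatures()
    if not signatures:
        return "(?)"
    return "(" + signatures[0].to_string().split("(", maxsplit=1)[1]

print(make_signature(FakeCompletion()))  # (a, *args, b=1, **kwargs)
```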
1573
1573
1574
1574
1575 _CompleteResult = Dict[str, MatcherResult]
1575 _CompleteResult = Dict[str, MatcherResult]
1576
1576
1577
1577
1578 DICT_MATCHER_REGEX = re.compile(
1578 DICT_MATCHER_REGEX = re.compile(
1579 r"""(?x)
1579 r"""(?x)
1580 ( # match dict-referring - or any get item object - expression
1580 ( # match dict-referring - or any get item object - expression
1581 .+
1581 .+
1582 )
1582 )
1583 \[ # open bracket
1583 \[ # open bracket
1584 \s* # and optional whitespace
1584 \s* # and optional whitespace
1585 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1585 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1586 # and slices
1586 # and slices
1587 ((?:(?:
1587 ((?:(?:
1588 (?: # closed string
1588 (?: # closed string
1589 [uUbB]? # string prefix (r not handled)
1589 [uUbB]? # string prefix (r not handled)
1590 (?:
1590 (?:
1591 '(?:[^']|(?<!\\)\\')*'
1591 '(?:[^']|(?<!\\)\\')*'
1592 |
1592 |
1593 "(?:[^"]|(?<!\\)\\")*"
1593 "(?:[^"]|(?<!\\)\\")*"
1594 )
1594 )
1595 )
1595 )
1596 |
1596 |
1597 # capture integers and slices
1597 # capture integers and slices
1598 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1598 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1599 |
1599 |
1600 # integer in bin/hex/oct notation
1600 # integer in bin/hex/oct notation
1601 0[bBxXoO]_?(?:\w|\d)+
1601 0[bBxXoO]_?(?:\w|\d)+
1602 )
1602 )
1603 \s*,\s*
1603 \s*,\s*
1604 )*)
1604 )*)
1605 ((?:
1605 ((?:
1606 (?: # unclosed string
1606 (?: # unclosed string
1607 [uUbB]? # string prefix (r not handled)
1607 [uUbB]? # string prefix (r not handled)
1608 (?:
1608 (?:
1609 '(?:[^']|(?<!\\)\\')*
1609 '(?:[^']|(?<!\\)\\')*
1610 |
1610 |
1611 "(?:[^"]|(?<!\\)\\")*
1611 "(?:[^"]|(?<!\\)\\")*
1612 )
1612 )
1613 )
1613 )
1614 |
1614 |
1615 # unfinished integer
1615 # unfinished integer
1616 (?:[-+]?\d+)
1616 (?:[-+]?\d+)
1617 |
1617 |
1618 # integer in bin/hex/oct notation
1618 # integer in bin/hex/oct notation
1619 0[bBxXoO]_?(?:\w|\d)+
1619 0[bBxXoO]_?(?:\w|\d)+
1620 )
1620 )
1621 )?
1621 )?
1622 $
1622 $
1623 """
1623 """
1624 )
1624 )
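The pattern splits a subscript expression like `d['a', 'b` into three groups: the expression being subscripted, the already-closed keys, and the trailing (possibly unclosed) key. A condensed copy of the same pattern, for experimentation:

```python
import re

# Condensed copy of DICT_MATCHER_REGEX above (same pattern, prose comments trimmed):
DICT_KEY = re.compile(
    r"""(?x)
    (.+)              # dict-referring (or any getitem-able) expression
    \[\s*             # open bracket and optional whitespace
    ((?:(?:           # any number of closed keys, each followed by a comma
        (?:[uUbB]?(?:'(?:[^']|(?<!\\)\\')*'|"(?:[^"]|(?<!\\)\\")*"))
        |(?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
        |0[bBxXoO]_?(?:\w|\d)+
    )\s*,\s*)*)
    ((?:              # optional trailing, possibly unclosed, key
        (?:[uUbB]?(?:'(?:[^']|(?<!\\)\\')*|"(?:[^"]|(?<!\\)\\")*))
        |(?:[-+]?\d+)
        |0[bBxXoO]_?(?:\w|\d)+
    )?)$
    """
)

m = DICT_KEY.match("d['a', 'b")
print(m.group(1), m.group(3))  # d 'b
```

Group 1 is the expression (`d`), group 2 the closed keys (`'a', `), and group 3 the unclosed fragment (`'b`) that completion should extend.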
1625
1625
1626
1626
1627 def _convert_matcher_v1_result_to_v2(
1627 def _convert_matcher_v1_result_to_v2(
1628 matches: Sequence[str],
1628 matches: Sequence[str],
1629 type: str,
1629 type: str,
1630 fragment: Optional[str] = None,
1630 fragment: Optional[str] = None,
1631 suppress_if_matches: bool = False,
1631 suppress_if_matches: bool = False,
1632 ) -> SimpleMatcherResult:
1632 ) -> SimpleMatcherResult:
1633 """Utility to help with transition"""
1633 """Utility to help with transition"""
1634 result = {
1634 result = {
1635 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1635 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1636 "suppress": (True if matches else False) if suppress_if_matches else False,
1636 "suppress": (True if matches else False) if suppress_if_matches else False,
1637 }
1637 }
1638 if fragment is not None:
1638 if fragment is not None:
1639 result["matched_fragment"] = fragment
1639 result["matched_fragment"] = fragment
1640 return result
1640 return result
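The conversion can be exercised with a minimal stand-in for IPython's `SimpleCompletion` (the real class lives in `IPython.core.completer`; the dataclass below is an assumed shape carrying just `text` and `type`):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SimpleCompletion:
    # minimal stand-in for IPython's SimpleCompletion
    text: str
    type: Optional[str] = None

def convert_v1_to_v2(matches, type, fragment=None, suppress_if_matches=False):
    # same logic as _convert_matcher_v1_result_to_v2 above
    result = {
        "completions": [SimpleCompletion(text=m, type=type) for m in matches],
        "suppress": bool(matches) if suppress_if_matches else False,
    }
    if fragment is not None:
        result["matched_fragment"] = fragment
    return result

r = convert_v1_to_v2(["\\SNOWMAN"], type="unicode", fragment="\\β˜ƒ",
                     suppress_if_matches=True)
print(r["suppress"], r["matched_fragment"])  # True \β˜ƒ
```

When a v1 matcher produced matches, `suppress_if_matches=True` turns into `suppress=True`, silencing competing matchers.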
1641
1641
1642
1642
1643 class IPCompleter(Completer):
1643 class IPCompleter(Completer):
1644 """Extension of the completer class with IPython-specific features"""
1644 """Extension of the completer class with IPython-specific features"""
1645
1645
1646 @observe('greedy')
1646 @observe('greedy')
1647 def _greedy_changed(self, change):
1647 def _greedy_changed(self, change):
1648 """update the splitter and readline delims when greedy is changed"""
1648 """update the splitter and readline delims when greedy is changed"""
1649 if change["new"]:
1649 if change["new"]:
1650 self.evaluation = "unsafe"
1650 self.evaluation = "unsafe"
1651 self.auto_close_dict_keys = True
1651 self.auto_close_dict_keys = True
1652 self.splitter.delims = GREEDY_DELIMS
1652 self.splitter.delims = GREEDY_DELIMS
1653 else:
1653 else:
1654 self.evaluation = "limitted"
1654 self.evaluation = "limited"
1655 self.auto_close_dict_keys = False
1655 self.auto_close_dict_keys = False
1656 self.splitter.delims = DELIMS
1656 self.splitter.delims = DELIMS
1657
1657
1658 dict_keys_only = Bool(
1658 dict_keys_only = Bool(
1659 False,
1659 False,
1660 help="""
1660 help="""
1661 Whether to show dict key matches only.
1661 Whether to show dict key matches only.
1662
1662
1663 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1663 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1664 """,
1664 """,
1665 )
1665 )
1666
1666
1667 suppress_competing_matchers = UnionTrait(
1667 suppress_competing_matchers = UnionTrait(
1668 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1668 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1669 default_value=None,
1669 default_value=None,
1670 help="""
1670 help="""
1671 Whether to suppress completions from other *Matchers*.
1671 Whether to suppress completions from other *Matchers*.
1672
1672
1673 When set to ``None`` (default) the matchers will attempt to auto-detect
1673 When set to ``None`` (default) the matchers will attempt to auto-detect
1674 whether suppression of other matchers is desirable. For example, at
1674 whether suppression of other matchers is desirable. For example, at
1675 the beginning of a line followed by `%` we expect a magic completion
1675 the beginning of a line followed by `%` we expect a magic completion
1676 to be the only applicable option, and after ``my_dict['`` we usually
1676 to be the only applicable option, and after ``my_dict['`` we usually
1677 expect a completion with an existing dictionary key.
1677 expect a completion with an existing dictionary key.
1678
1678
1679 If you want to disable this heuristic and see completions from all matchers,
1679 If you want to disable this heuristic and see completions from all matchers,
1680 set ``IPCompleter.suppress_competing_matchers = False``.
1680 set ``IPCompleter.suppress_competing_matchers = False``.
1681 To disable the heuristic for specific matchers provide a dictionary mapping:
1681 To disable the heuristic for specific matchers provide a dictionary mapping:
1682 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1682 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1683
1683
1684 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1684 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1685 completions to the set of matchers with the highest priority;
1685 completions to the set of matchers with the highest priority;
1686 this is equivalent to ``IPCompleter.merge_completions`` and
1686 this is equivalent to ``IPCompleter.merge_completions`` and
1687 can be beneficial for performance, but will sometimes omit relevant
1687 can be beneficial for performance, but will sometimes omit relevant
1688 candidates from matchers further down the priority list.
1688 candidates from matchers further down the priority list.
1689 """,
1689 """,
1690 ).tag(config=True)
1690 ).tag(config=True)
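Because these options are traitlets tagged with `config=True`, they can be set from a profile's configuration file. A hypothetical `ipython_config.py` snippet (IPython injects the `c` configuration object when loading the file):

```python
# ipython_config.py -- `c` is the Config object IPython provides at load time
c.IPCompleter.suppress_competing_matchers = {
    # keep other matchers active even when dict-key completions are found
    "IPCompleter.dict_key_matcher": False,
}
c.IPCompleter.merge_completions = True
```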
1691
1691
1692 merge_completions = Bool(
1692 merge_completions = Bool(
1693 True,
1693 True,
1694 help="""Whether to merge completion results into a single list
1694 help="""Whether to merge completion results into a single list
1695
1695
1696 If False, only the completion results from the first non-empty
1696 If False, only the completion results from the first non-empty
1697 completer will be returned.
1697 completer will be returned.
1698
1698
1699 As of version 8.6.0, setting the value to ``False`` is an alias for:
1699 As of version 8.6.0, setting the value to ``False`` is an alias for:
1700 ``IPCompleter.suppress_competing_matchers = True``.
1700 ``IPCompleter.suppress_competing_matchers = True``.
1701 """,
1701 """,
1702 ).tag(config=True)
1702 ).tag(config=True)
1703
1703
1704 disable_matchers = ListTrait(
1704 disable_matchers = ListTrait(
1705 Unicode(),
1705 Unicode(),
1706 help="""List of matchers to disable.
1706 help="""List of matchers to disable.
1707
1707
1708 The list should contain matcher identifiers (see :any:`completion_matcher`).
1708 The list should contain matcher identifiers (see :any:`completion_matcher`).
1709 """,
1709 """,
1710 ).tag(config=True)
1710 ).tag(config=True)
1711
1711
1712 omit__names = Enum(
1712 omit__names = Enum(
1713 (0, 1, 2),
1713 (0, 1, 2),
1714 default_value=2,
1714 default_value=2,
1715 help="""Instruct the completer to omit private method names
1715 help="""Instruct the completer to omit private method names
1716
1716
1717 Specifically, when completing on ``object.<tab>``.
1717 Specifically, when completing on ``object.<tab>``.
1718
1718
1719 When 2 [default]: all names that start with '_' will be excluded.
1719 When 2 [default]: all names that start with '_' will be excluded.
1720
1720
1721 When 1: all 'magic' names (``__foo__``) will be excluded.
1721 When 1: all 'magic' names (``__foo__``) will be excluded.
1722
1722
1723 When 0: nothing will be excluded.
1723 When 0: nothing will be excluded.
1724 """
1724 """
1725 ).tag(config=True)
1725 ).tag(config=True)
1726 limit_to__all__ = Bool(False,
1726 limit_to__all__ = Bool(False,
1727 help="""
1727 help="""
1728 DEPRECATED as of version 5.0.
1728 DEPRECATED as of version 5.0.
1729
1729
1730 Instruct the completer to use __all__ for the completion
1730 Instruct the completer to use __all__ for the completion
1731
1731
1732 Specifically, when completing on ``object.<tab>``.
1732 Specifically, when completing on ``object.<tab>``.
1733
1733
1734 When True: only those names in obj.__all__ will be included.
1734 When True: only those names in obj.__all__ will be included.
1735
1735
1736 When False [default]: the __all__ attribute is ignored
1736 When False [default]: the __all__ attribute is ignored
1737 """,
1737 """,
1738 ).tag(config=True)
1738 ).tag(config=True)
1739
1739
1740 profile_completions = Bool(
1740 profile_completions = Bool(
1741 default_value=False,
1741 default_value=False,
1742 help="If True, emit profiling data for completion subsystem using cProfile."
1742 help="If True, emit profiling data for completion subsystem using cProfile."
1743 ).tag(config=True)
1743 ).tag(config=True)
1744
1744
1745 profiler_output_dir = Unicode(
1745 profiler_output_dir = Unicode(
1746 default_value=".completion_profiles",
1746 default_value=".completion_profiles",
1747 help="Template for path at which to output profile data for completions."
1747 help="Template for path at which to output profile data for completions."
1748 ).tag(config=True)
1748 ).tag(config=True)
1749
1749
1750 @observe('limit_to__all__')
1750 @observe('limit_to__all__')
1751 def _limit_to_all_changed(self, change):
1751 def _limit_to_all_changed(self, change):
1752 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1752 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1753 'value has been deprecated since IPython 5.0; it will be made to have '
1753 'value has been deprecated since IPython 5.0; it will be made to have '
1754 'no effect and then removed in a future version of IPython.',
1754 'no effect and then removed in a future version of IPython.',
1755 UserWarning)
1755 UserWarning)
1756
1756
1757 def __init__(
1757 def __init__(
1758 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1758 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1759 ):
1759 ):
1760 """IPCompleter() -> completer
1760 """IPCompleter() -> completer
1761
1761
1762 Return a completer object.
1762 Return a completer object.
1763
1763
1764 Parameters
1764 Parameters
1765 ----------
1765 ----------
1766 shell
1766 shell
1767 a pointer to the ipython shell itself. This is needed
1767 a pointer to the ipython shell itself. This is needed
1768 because this completer knows about magic functions, and those can
1768 because this completer knows about magic functions, and those can
1769 only be accessed via the ipython instance.
1769 only be accessed via the ipython instance.
1770 namespace : dict, optional
1770 namespace : dict, optional
1771 an optional dict where completions are performed.
1771 an optional dict where completions are performed.
1772 global_namespace : dict, optional
1772 global_namespace : dict, optional
1773 secondary optional dict for completions, to
1773 secondary optional dict for completions, to
1774 handle cases (such as IPython embedded inside functions) where
1774 handle cases (such as IPython embedded inside functions) where
1775 both Python scopes are visible.
1775 both Python scopes are visible.
1776 config : Config
1776 config : Config
1777 traitlets Config object
1777 traitlets Config object
1778 **kwargs
1778 **kwargs
1779 passed to super class unmodified.
1779 passed to super class unmodified.
1780 """
1780 """
1781
1781
1782 self.magic_escape = ESC_MAGIC
1782 self.magic_escape = ESC_MAGIC
1783 self.splitter = CompletionSplitter()
1783 self.splitter = CompletionSplitter()
1784
1784
1785 # _greedy_changed() depends on splitter and readline being defined:
1785 # _greedy_changed() depends on splitter and readline being defined:
1786 super().__init__(
1786 super().__init__(
1787 namespace=namespace,
1787 namespace=namespace,
1788 global_namespace=global_namespace,
1788 global_namespace=global_namespace,
1789 config=config,
1789 config=config,
1790 **kwargs,
1790 **kwargs,
1791 )
1791 )
1792
1792
1793 # List where completion matches will be stored
1793 # List where completion matches will be stored
1794 self.matches = []
1794 self.matches = []
1795 self.shell = shell
1795 self.shell = shell
1796 # Regexp to split filenames with spaces in them
1796 # Regexp to split filenames with spaces in them
1797 self.space_name_re = re.compile(r'([^\\] )')
1797 self.space_name_re = re.compile(r'([^\\] )')
1798 # Hold a local ref. to glob.glob for speed
1798 # Hold a local ref. to glob.glob for speed
1799 self.glob = glob.glob
1799 self.glob = glob.glob
1800
1800
1801 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1801 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1802 # buffers, to avoid completion problems.
1802 # buffers, to avoid completion problems.
1803 term = os.environ.get('TERM','xterm')
1803 term = os.environ.get('TERM','xterm')
1804 self.dumb_terminal = term in ['dumb','emacs']
1804 self.dumb_terminal = term in ['dumb','emacs']
1805
1805
1806 # Special handling of backslashes needed in win32 platforms
1806 # Special handling of backslashes needed in win32 platforms
1807 if sys.platform == "win32":
1807 if sys.platform == "win32":
1808 self.clean_glob = self._clean_glob_win32
1808 self.clean_glob = self._clean_glob_win32
1809 else:
1809 else:
1810 self.clean_glob = self._clean_glob
1810 self.clean_glob = self._clean_glob
1811
1811
1812 #regexp to parse docstring for function signature
1812 #regexp to parse docstring for function signature
1813 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1813 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1814 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1814 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1815 #use this if positional argument name is also needed
1815 #use this if positional argument name is also needed
1816 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1816 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1817
1817
1818 self.magic_arg_matchers = [
1818 self.magic_arg_matchers = [
1819 self.magic_config_matcher,
1819 self.magic_config_matcher,
1820 self.magic_color_matcher,
1820 self.magic_color_matcher,
1821 ]
1821 ]
1822
1822
1823 # This is set externally by InteractiveShell
1823 # This is set externally by InteractiveShell
1824 self.custom_completers = None
1824 self.custom_completers = None
1825
1825
1826 # This is a list of names of unicode characters that can be completed
1826 # This is a list of names of unicode characters that can be completed
1827 # into their corresponding unicode value. The list is large, so we
1827 # into their corresponding unicode value. The list is large, so we
1828 # lazily initialize it on first use. Consuming code should access this
1828 # lazily initialize it on first use. Consuming code should access this
1829 # attribute through the `@unicode_names` property.
1829 # attribute through the `@unicode_names` property.
1830 self._unicode_names = None
1830 self._unicode_names = None
1831
1831
1832 self._backslash_combining_matchers = [
1832 self._backslash_combining_matchers = [
1833 self.latex_name_matcher,
1833 self.latex_name_matcher,
1834 self.unicode_name_matcher,
1834 self.unicode_name_matcher,
1835 back_latex_name_matcher,
1835 back_latex_name_matcher,
1836 back_unicode_name_matcher,
1836 back_unicode_name_matcher,
1837 self.fwd_unicode_matcher,
1837 self.fwd_unicode_matcher,
1838 ]
1838 ]
1839
1839
1840 if not self.backslash_combining_completions:
1840 if not self.backslash_combining_completions:
1841 for matcher in self._backslash_combining_matchers:
1841 for matcher in self._backslash_combining_matchers:
1842 self.disable_matchers.append(matcher.matcher_identifier)
1842 self.disable_matchers.append(matcher.matcher_identifier)
1843
1843
1844 if not self.merge_completions:
1844 if not self.merge_completions:
1845 self.suppress_competing_matchers = True
1845 self.suppress_competing_matchers = True
1846
1846
1847 @property
1847 @property
1848 def matchers(self) -> List[Matcher]:
1848 def matchers(self) -> List[Matcher]:
1849 """All active matcher routines for completion"""
1849 """All active matcher routines for completion"""
1850 if self.dict_keys_only:
1850 if self.dict_keys_only:
1851 return [self.dict_key_matcher]
1851 return [self.dict_key_matcher]
1852
1852
1853 if self.use_jedi:
1853 if self.use_jedi:
1854 return [
1854 return [
1855 *self.custom_matchers,
1855 *self.custom_matchers,
1856 *self._backslash_combining_matchers,
1856 *self._backslash_combining_matchers,
1857 *self.magic_arg_matchers,
1857 *self.magic_arg_matchers,
1858 self.custom_completer_matcher,
1858 self.custom_completer_matcher,
1859 self.magic_matcher,
1859 self.magic_matcher,
1860 self._jedi_matcher,
1860 self._jedi_matcher,
1861 self.dict_key_matcher,
1861 self.dict_key_matcher,
1862 self.file_matcher,
1862 self.file_matcher,
1863 ]
1863 ]
1864 else:
1864 else:
1865 return [
1865 return [
1866 *self.custom_matchers,
1866 *self.custom_matchers,
1867 *self._backslash_combining_matchers,
1867 *self._backslash_combining_matchers,
1868 *self.magic_arg_matchers,
1868 *self.magic_arg_matchers,
1869 self.custom_completer_matcher,
1869 self.custom_completer_matcher,
1870 self.dict_key_matcher,
1870 self.dict_key_matcher,
1871 # TODO: convert python_matches to v2 API
1871 # TODO: convert python_matches to v2 API
1872 self.magic_matcher,
1872 self.magic_matcher,
1873 self.python_matches,
1873 self.python_matches,
1874 self.file_matcher,
1874 self.file_matcher,
1875 self.python_func_kw_matcher,
1875 self.python_func_kw_matcher,
1876 ]
1876 ]
1877
1877
1878 def all_completions(self, text:str) -> List[str]:
1878 def all_completions(self, text:str) -> List[str]:
1879 """
1879 """
1880 Wrapper around the completion methods for the benefit of emacs.
1880 Wrapper around the completion methods for the benefit of emacs.
1881 """
1881 """
1882 prefix = text.rpartition('.')[0]
1882 prefix = text.rpartition('.')[0]
1883 with provisionalcompleter():
1883 with provisionalcompleter():
1884 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1884 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1885 for c in self.completions(text, len(text))]
1885 for c in self.completions(text, len(text))]
1886
1886
1887 return self.complete(text)[1]
1887 return self.complete(text)[1]
1888
1888
1889 def _clean_glob(self, text:str):
1889 def _clean_glob(self, text:str):
1890 return self.glob("%s*" % text)
1890 return self.glob("%s*" % text)
1891
1891
1892 def _clean_glob_win32(self, text:str):
1892 def _clean_glob_win32(self, text:str):
1893 return [f.replace("\\","/")
1893 return [f.replace("\\","/")
1894 for f in self.glob("%s*" % text)]
1894 for f in self.glob("%s*" % text)]
1895
1895
1896 @context_matcher()
1896 @context_matcher()
1897 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1897 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1898 """Same as :any:`file_matches`, but adopted to new Matcher API."""
1898 """Same as :any:`file_matches`, but adopted to new Matcher API."""
1899 matches = self.file_matches(context.token)
1899 matches = self.file_matches(context.token)
1900 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1900 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1901 # starts with `/home/`, `C:\`, etc)
1901 # starts with `/home/`, `C:\`, etc)
1902 return _convert_matcher_v1_result_to_v2(matches, type="path")
1902 return _convert_matcher_v1_result_to_v2(matches, type="path")
1903
1903
1904 def file_matches(self, text: str) -> List[str]:
1904 def file_matches(self, text: str) -> List[str]:
1905 """Match filenames, expanding ~USER type strings.
1905 """Match filenames, expanding ~USER type strings.
1906
1906
1907 Most of the seemingly convoluted logic in this completer is an
1907 Most of the seemingly convoluted logic in this completer is an
1908 attempt to handle filenames with spaces in them. And yet it's not
1908 attempt to handle filenames with spaces in them. And yet it's not
1909 quite perfect, because Python's readline doesn't expose all of the
1909 quite perfect, because Python's readline doesn't expose all of the
1910 GNU readline details needed for this to be done correctly.
1910 GNU readline details needed for this to be done correctly.
1911
1911
1912 For a filename with a space in it, the printed completions will be
1912 For a filename with a space in it, the printed completions will be
1913 only the parts after what's already been typed (instead of the
1913 only the parts after what's already been typed (instead of the
1914 full completions, as is normally done). I don't think with the
1914 full completions, as is normally done). I don't think with the
1915 current (as of Python 2.3) Python readline it's possible to do
1915 current (as of Python 2.3) Python readline it's possible to do
1916 better.
1916 better.
1917
1917
1918 .. deprecated:: 8.6
1918 .. deprecated:: 8.6
1919 You can use :meth:`file_matcher` instead.
1919 You can use :meth:`file_matcher` instead.
1920 """
1920 """
1921
1921
1922 # chars that require escaping with backslash - i.e. chars
1922 # chars that require escaping with backslash - i.e. chars
1923 # that readline treats incorrectly as delimiters, but we
1923 # that readline treats incorrectly as delimiters, but we
1924 # don't want to treat as delimiters in filename matching
        # don't want to treat as delimiters in filename matching
        # when escaped with backslash
        if text.startswith('!'):
            text = text[1:]
            text_prefix = u'!'
        else:
            text_prefix = u''

        text_until_cursor = self.text_until_cursor
        # track strings with open quotes
        open_quotes = has_open_quotes(text_until_cursor)

        if '(' in text_until_cursor or '[' in text_until_cursor:
            lsplit = text
        else:
            try:
                # arg_split ~ shlex.split, but with unicode bugs fixed by us
                lsplit = arg_split(text_until_cursor)[-1]
            except ValueError:
                # typically an unmatched ", or backslash without escaped char.
                if open_quotes:
                    lsplit = text_until_cursor.split(open_quotes)[-1]
                else:
                    return []
            except IndexError:
                # tab pressed on empty line
                lsplit = ""

        if not open_quotes and lsplit != protect_filename(lsplit):
            # if protectables are found, do matching on the whole escaped name
            has_protectables = True
            text0, text = text, lsplit
        else:
            has_protectables = False
            text = os.path.expanduser(text)

        if text == "":
            return [text_prefix + protect_filename(f) for f in self.glob("*")]

        # Compute the matches from the filesystem
        if sys.platform == 'win32':
            m0 = self.clean_glob(text)
        else:
            m0 = self.clean_glob(text.replace('\\', ''))

        if has_protectables:
            # If we had protectables, we need to revert our changes to the
            # beginning of filename so that we don't double-write the part
            # of the filename we have so far
            len_lsplit = len(lsplit)
            matches = [text_prefix + text0 +
                       protect_filename(f[len_lsplit:]) for f in m0]
        else:
            if open_quotes:
                # if we have a string with an open quote, we don't need to
                # protect the names beyond the quote (and we _shouldn't_, as
                # it would cause bugs when the filesystem call is made).
                matches = m0 if sys.platform == "win32" else \
                    [protect_filename(f, open_quotes) for f in m0]
            else:
                matches = [text_prefix +
                           protect_filename(f) for f in m0]

        # Mark directories in input list by appending '/' to their names.
        return [x + '/' if os.path.isdir(x) else x for x in matches]
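As a rough illustration of the escaping step above, the following is a minimal sketch of a `protect_filename`-style helper. The exact character set and the Windows quoting variant differ in the real completer; `PROTECTABLES` here is an assumed stand-in.

```python
# Hypothetical stand-in for the set of characters the completer treats as
# "protectable" in filenames; the real set lives in the completer module.
PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

def protect_filename(s: str, protectables: str = PROTECTABLES) -> str:
    """Escape every protectable character with a backslash, as the
    completer does before handing a filename back to the input line."""
    return "".join(("\\" + c) if c in protectables else c for c in s)
```

For example, `protect_filename("My Notebooks")` yields `My\ Notebooks`, which round-trips through `arg_split` as a single token.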

    @context_matcher()
    def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match magics."""
        text = context.token
        matches = self.magic_matches(text)
        result = _convert_matcher_v1_result_to_v2(matches, type="magic")
        is_magic_prefix = len(text) > 0 and text[0] == "%"
        result["suppress"] = is_magic_prefix and bool(result["completions"])
        return result

    def magic_matches(self, text: str):
        """Match magics.

        .. deprecated:: 8.6
            You can use :meth:`magic_matcher` instead.
        """
        # Get all shell magics now rather than statically, so magics loaded at
        # runtime show up too.
        lsm = self.shell.magics_manager.lsmagic()
        line_magics = lsm['line']
        cell_magics = lsm['cell']
        pre = self.magic_escape
        pre2 = pre + pre

        explicit_magic = text.startswith(pre)

        # Completion logic:
        # - user gives %%: only do cell magics
        # - user gives %: do both line and cell magics
        # - no prefix: do both
        # In other words, line magics are skipped if the user gives %% explicitly
        #
        # We also exclude magics that match any currently visible names:
        # https://github.com/ipython/ipython/issues/4877, unless the user has
        # typed a %:
        # https://github.com/ipython/ipython/issues/10754
        bare_text = text.lstrip(pre)
        global_matches = self.global_matches(bare_text)
        if not explicit_magic:
            def matches(magic):
                """
                Filter magics, in particular remove magics that match
                a name present in the global namespace.
                """
                return (magic.startswith(bare_text) and
                        magic not in global_matches)
        else:
            def matches(magic):
                return magic.startswith(bare_text)

        comp = [pre2 + m for m in cell_magics if matches(m)]
        if not text.startswith(pre2):
            comp += [pre + m for m in line_magics if matches(m)]

        return comp
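The prefix rules in the comment above can be distilled into a small standalone function. This is a simplified sketch, not the method itself: it hard-codes `%` as the escape and folds the shadowed-name check into one predicate.

```python
def complete_magics(text, line_magics, cell_magics, visible_names=()):
    # Distilled version of the prefix rules:
    # '%%'      -> cell magics only
    # '%'       -> both line and cell magics
    # no prefix -> both, but skip magics shadowed by visible names
    explicit = text.startswith('%')
    bare = text.lstrip('%')

    def ok(magic):
        return magic.startswith(bare) and (explicit or magic not in visible_names)

    comp = ['%%' + m for m in cell_magics if ok(m)]
    if not text.startswith('%%'):
        comp += ['%' + m for m in line_magics if ok(m)]
    return comp
```

So `complete_magics('%ti', ['time', 'timeit'], ['time'])` offers both `%%time` and the line magics, while a `%%ti` prefix restricts the result to cell magics.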

    @context_matcher()
    def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match class names and attributes for %config magic."""
        # NOTE: uses `line_buffer` equivalent for compatibility
        matches = self.magic_config_matches(context.line_with_cursor)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def magic_config_matches(self, text: str) -> List[str]:
        """Match class names and attributes for %config magic.

        .. deprecated:: 8.6
            You can use :meth:`magic_config_matcher` instead.
        """
        texts = text.strip().split()

        if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
            # get all configuration classes
            classes = sorted(set([c for c in self.shell.configurables
                                  if c.__class__.class_traits(config=True)
                                  ]), key=lambda x: x.__class__.__name__)
            classnames = [c.__class__.__name__ for c in classes]

            # return all classnames if config or %config is given
            if len(texts) == 1:
                return classnames

            # match classname
            classname_texts = texts[1].split('.')
            classname = classname_texts[0]
            classname_matches = [c for c in classnames
                                 if c.startswith(classname)]

            # return matched classes or the matched class with attributes
            if texts[1].find('.') < 0:
                return classname_matches
            elif len(classname_matches) == 1 and \
                    classname_matches[0] == classname:
                cls = classes[classnames.index(classname)].__class__
                help = cls.class_get_help()
                # strip leading '--' from cl-args:
                help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
                return [attr.split('=')[0]
                        for attr in help.strip().splitlines()
                        if attr.startswith(texts[1])]
        return []

    @context_matcher()
    def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match color schemes for %colors magic."""
        # NOTE: uses `line_buffer` equivalent for compatibility
        matches = self.magic_color_matches(context.line_with_cursor)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def magic_color_matches(self, text: str) -> List[str]:
        """Match color schemes for %colors magic.

        .. deprecated:: 8.6
            You can use :meth:`magic_color_matcher` instead.
        """
        texts = text.split()
        if text.endswith(' '):
            # .split() strips off the trailing whitespace. Add '' back
            # so that: '%colors ' -> ['%colors', '']
            texts.append('')

        if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
            prefix = texts[1]
            return [color for color in InspectColors.keys()
                    if color.startswith(prefix)]
        return []

    @context_matcher(identifier="IPCompleter.jedi_matcher")
    def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
        matches = self._jedi_matches(
            cursor_column=context.cursor_position,
            cursor_line=context.cursor_line,
            text=context.full_text,
        )
        return {
            "completions": matches,
            # static analysis should not suppress other matchers
            "suppress": False,
        }

    def _jedi_matches(
        self, cursor_column: int, cursor_line: int, text: str
    ) -> Iterable[_JediCompletionLike]:
        """
        Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
        cursor position.

        Parameters
        ----------
        cursor_column : int
            column position of the cursor in ``text``, 0-indexed.
        cursor_line : int
            line position of the cursor in ``text``, 0-indexed
        text : str
            text to complete

        Notes
        -----
        If ``IPCompleter.debug`` is ``True``, this may return a
        :any:`_FakeJediCompletion` object containing a string with the Jedi
        debug information attached.

        .. deprecated:: 8.6
            You can use :meth:`_jedi_matcher` instead.
        """
        namespaces = [self.namespace]
        if self.global_namespace is not None:
            namespaces.append(self.global_namespace)

        completion_filter = lambda x: x
        offset = cursor_to_position(text, cursor_line, cursor_column)
        # filter output if we are completing for object members
        if offset:
            pre = text[offset - 1]
            if pre == '.':
                if self.omit__names == 2:
                    completion_filter = lambda c: not c.name.startswith('_')
                elif self.omit__names == 1:
                    completion_filter = lambda c: not (c.name.startswith('__') and c.name.endswith('__'))
                elif self.omit__names == 0:
                    completion_filter = lambda x: x
                else:
                    raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))

        interpreter = jedi.Interpreter(text[:offset], namespaces)
        try_jedi = True

        try:
            # find the first token in the current tree -- if it is a ' or " then we are in a string
            completing_string = False
            try:
                first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
            except StopIteration:
                pass
            else:
                # note the value may be ', ", or it may also be ''' or """, or
                # in some cases, """what/you/typed..., but all of these are
                # strings.
                completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}

            # if we are in a string, jedi is likely not the right candidate
            # for now. Skip it.
            try_jedi = not completing_string
        except Exception as e:
            # many things can go wrong; we are using a private API, just don't crash.
            if self.debug:
                print("Error detecting if completing a non-finished string :", e, '|')

        if not try_jedi:
            return []
        try:
            return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
        except Exception as e:
            if self.debug:
                return [_FakeJediCompletion('Oops, Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
            else:
                return []
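The `cursor_to_position` helper used above maps the 0-indexed (line, column) pair onto a flat string offset. A minimal sketch of that conversion (the real helper lives elsewhere in IPython's utilities, so treat this as illustrative):

```python
def cursor_to_position(text: str, line: int, column: int) -> int:
    # Both `line` and `column` are 0-indexed, matching the docstring above.
    lines = text.split('\n')
    # length of every preceding line, plus one character for each '\n'
    return sum(len(l) + 1 for l in lines[:line]) + column
```

With `text = "ab\ncd"`, the cursor at line 1, column 1 maps to offset 4, i.e. just before the final `d`.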

    def python_matches(self, text: str) -> Iterable[str]:
        """Match attributes or global python names"""
        if "." in text:
            try:
                matches = self.attr_matches(text)
                if text.endswith('.') and self.omit__names:
                    if self.omit__names == 1:
                        # true if txt is _not_ a __ name, false otherwise:
                        no__name = (lambda txt:
                                    re.match(r'.*\.__.*?__', txt) is None)
                    else:
                        # true if txt is _not_ a _ name, false otherwise:
                        no__name = (lambda txt:
                                    re.match(r'\._.*?', txt[txt.rindex('.'):]) is None)
                    matches = filter(no__name, matches)
            except NameError:
                # catches <undefined attributes>.<tab>
                matches = []
        else:
            matches = self.global_matches(text)
        return matches
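To see what the two `omit__names` filters above actually hide, here is the same pair of regex predicates applied to a few dotted names (a self-contained illustration using the patterns from the method):

```python
import re

# omit__names == 1: hide only dunder attributes (names like __init__)
hide_dunders = lambda txt: re.match(r'.*\.__.*?__', txt) is None
# omit__names == 2: hide any attribute starting with a single underscore
hide_private = lambda txt: re.match(r'\._.*?', txt[txt.rindex('.'):]) is None

names = ['obj.__init__', 'obj._cache', 'obj.value']
```

`filter(hide_dunders, names)` keeps `obj._cache` and `obj.value`, while `filter(hide_private, names)` keeps only `obj.value`.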

    def _default_arguments_from_docstring(self, doc):
        """Parse the first line of docstring for call signature.

        Docstring should be of the form 'min(iterable[, key=func])\n'.
        It can also parse cython docstring of the form
        'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
        """
        if doc is None:
            return []

        # care only about the first line
        line = doc.lstrip().splitlines()[0]

        # p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
        # 'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
        sig = self.docstring_sig_re.search(line)
        if sig is None:
            return []
        # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
        sig = sig.groups()[0].split(',')
        ret = []
        for s in sig:
            # re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
            ret += self.docstring_kwd_re.findall(s)
        return ret
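Wiring the two commented patterns above into a standalone function makes the extraction easy to check in isolation. This sketch assumes the commented regexes are the ones bound to `docstring_sig_re` and `docstring_kwd_re` on the instance:

```python
import re

# The two patterns quoted in the comments above.
docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')

def default_arguments_from_docstring(doc):
    # first line only, as in the method above
    line = doc.lstrip().splitlines()[0]
    sig = docstring_sig_re.search(line)
    if sig is None:
        return []
    # only names followed by '=' survive the keyword regex
    return [name for part in sig.groups()[0].split(',')
            for name in docstring_kwd_re.findall(part)]
```

Note how positional-only parameters such as `iterable` or `self` drop out: the keyword regex requires a trailing `=`, so only defaulted arguments are suggested.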

    def _default_arguments(self, obj):
        """Return the list of default arguments of obj if it is callable,
        or empty list otherwise."""
        call_obj = obj
        ret = []
        if inspect.isbuiltin(obj):
            pass
        elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
            if inspect.isclass(obj):
                # for cython embedsignature=True the constructor docstring
                # belongs to the object itself, not __init__
                ret += self._default_arguments_from_docstring(
                    getattr(obj, '__doc__', ''))
                # for classes, check for __init__, __new__
                call_obj = (getattr(obj, '__init__', None) or
                            getattr(obj, '__new__', None))
            # for all others, check if they are __call__able
            elif hasattr(obj, '__call__'):
                call_obj = obj.__call__
            ret += self._default_arguments_from_docstring(
                getattr(call_obj, '__doc__', ''))

        _keeps = (inspect.Parameter.KEYWORD_ONLY,
                  inspect.Parameter.POSITIONAL_OR_KEYWORD)

        try:
            sig = inspect.signature(obj)
            ret.extend(k for k, v in sig.parameters.items() if
                       v.kind in _keeps)
        except ValueError:
            pass

        return list(set(ret))
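The signature-based branch above is the common path for plain Python callables. A minimal sketch of just that branch, separated from the docstring fallback:

```python
import inspect

def keyword_parameters(obj):
    # Keep only parameters that can be passed by keyword, mirroring the
    # `_keeps` filter above; var-positional and var-keyword are excluded.
    keeps = (inspect.Parameter.KEYWORD_ONLY,
             inspect.Parameter.POSITIONAL_OR_KEYWORD)
    try:
        sig = inspect.signature(obj)
    except ValueError:
        # some builtins expose no introspectable signature
        return []
    return [name for name, p in sig.parameters.items() if p.kind in keeps]

def f(a, b=1, *args, c=2, **kw):
    pass
```

Here `keyword_parameters(f)` returns `['a', 'b', 'c']`: `*args` and `**kw` are filtered out because they cannot be completed as `name=`.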

    @context_matcher()
    def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match named parameters (kwargs) of the last open function."""
        matches = self.python_func_kw_matches(context.token)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def python_func_kw_matches(self, text):
        """Match named parameters (kwargs) of the last open function.

        .. deprecated:: 8.6
            You can use :meth:`python_func_kw_matcher` instead.
        """

        if "." in text:  # a parameter cannot be dotted
            return []
        try:
            regexp = self.__funcParamsRegex
        except AttributeError:
            regexp = self.__funcParamsRegex = re.compile(r'''
                '.*?(?<!\\)' |    # single quoted strings or
                ".*?(?<!\\)" |    # double quoted strings or
                \w+          |    # identifier
                \S                # other characters
                ''', re.VERBOSE | re.DOTALL)
        # 1. find the nearest identifier that comes before an unclosed
        # parenthesis before the cursor
        # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
        tokens = regexp.findall(self.text_until_cursor)
        iterTokens = reversed(tokens)
        openPar = 0

        for token in iterTokens:
            if token == ')':
                openPar -= 1
            elif token == '(':
                openPar += 1
                if openPar > 0:
                    # found the last unclosed parenthesis
                    break
        else:
            return []
        # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
        ids = []
        isId = re.compile(r'\w+$').match

        while True:
            try:
                ids.append(next(iterTokens))
                if not isId(ids[-1]):
                    ids.pop()
                    break
                if not next(iterTokens) == '.':
                    break
            except StopIteration:
                break

        # Find all named arguments already assigned to, so as to avoid
        # suggesting them again
        usedNamedArgs = set()
        par_level = -1
        for token, next_token in zip(tokens, tokens[1:]):
            if token == '(':
                par_level += 1
            elif token == ')':
                par_level -= 1

            if par_level != 0:
                continue

            if next_token != '=':
                continue

            usedNamedArgs.add(token)

        argMatches = []
        try:
            callableObj = '.'.join(ids[::-1])
            namedArgs = self._default_arguments(eval(callableObj,
                                                     self.namespace))

            # Remove used named arguments from the list, no need to show twice
            for namedArg in set(namedArgs) - usedNamedArgs:
                if namedArg.startswith(text):
                    argMatches.append("%s=" % namedArg)
        except:
            pass

        return argMatches
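Step 1 above (scanning backwards for the last unclosed parenthesis) can be isolated into a small helper for clarity. This sketch uses a simplified tokenizer without the string-literal alternatives of the real regex:

```python
import re

# Simplified tokenizer: identifiers or single non-space characters.
TOKEN_RE = re.compile(r"\w+|\S")

def last_unclosed_paren(tokens):
    # Walk tokens right-to-left, counting parens; the first '(' that is not
    # balanced by a ')' to its right is the open call we are completing in.
    open_par = 0
    for i in range(len(tokens) - 1, -1, -1):
        if tokens[i] == ')':
            open_par -= 1
        elif tokens[i] == '(':
            open_par += 1
            if open_par > 0:
                return i  # index of the last unclosed '('
    return None
```

For `"foo (1+bar(x), pa"`, the inner `bar(x)` pair cancels out and the scan stops at `foo`'s parenthesis, so the token just before it, `foo`, is the candidate callable.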

    @staticmethod
    def _get_keys(obj: Any) -> List[Any]:
        # Objects can define their own completions by defining an
        # _ipython_key_completions_() method.
        method = get_real_method(obj, '_ipython_key_completions_')
        if method is not None:
            return method()

        # Special case some common in-memory dict-like types
        if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
            try:
                return list(obj.keys())
            except Exception:
                return []
        elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
            try:
                return list(obj.obj.keys())
            except Exception:
                return []
        elif _safe_isinstance(obj, 'numpy', 'ndarray') or \
                _safe_isinstance(obj, 'numpy', 'void'):
            return obj.dtype.names or []
        return []
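The `_ipython_key_completions_` hook checked first above is a public protocol: any container can opt in. A minimal sketch of a user class taking advantage of it (`TagStore` is a made-up example, not part of IPython):

```python
class TagStore:
    """Toy mapping that advertises its keys to IPython's completer."""

    def __init__(self):
        self._data = {'alpha': 1, 'beta': 2}

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        # whatever is returned here is offered after typing `store[`
        return list(self._data)
```

With such an object in the namespace, typing `store[<tab>` offers `'alpha'` and `'beta'` even though `TagStore` is not a `dict` subclass.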

    @context_matcher()
    def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match string keys in a dictionary, after e.g. ``foo[``."""
        matches = self.dict_key_matches(context.token)
        return _convert_matcher_v1_result_to_v2(
            matches, type="dict key", suppress_if_matches=True
        )

    def dict_key_matches(self, text: str) -> List[str]:
        """Match string keys in a dictionary, after e.g. ``foo[``.

        .. deprecated:: 8.6
            You can use :meth:`dict_key_matcher` instead.
        """

        # Short-circuit on a closed dictionary (the regular expression would
        # not match anyway, but would take quite a while).
        if self.text_until_cursor.strip().endswith("]"):
            return []

        match = DICT_MATCHER_REGEX.search(self.text_until_cursor)

        if match is None:
            return []

        expr, prior_tuple_keys, key_prefix = match.groups()

        obj = self._evaluate_expr(expr)

        if obj is not_found:
            return []

        keys = self._get_keys(obj)
        if not keys:
            return keys

        tuple_prefix = guarded_eval(
            prior_tuple_keys,
            EvaluationContext(
                globals_=self.global_namespace,
                locals_=self.namespace,
                evaluation=self.evaluation,
                in_subscript=True,
            ),
        )

        closing_quote, token_offset, matches = match_dict_keys(
            keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
        )
        if not matches:
            return []

        # get the cursor position of
        # - the text being completed
        # - the start of the key text
        # - the start of the completion
        text_start = len(self.text_until_cursor) - len(text)
        if key_prefix:
            key_start = match.start(3)
2458 completion_start = key_start + token_offset
2458 completion_start = key_start + token_offset
2459 else:
2459 else:
2460 key_start = completion_start = match.end()
2460 key_start = completion_start = match.end()
2461
2461
2462 # grab the leading prefix, to make sure all completions start with `text`
2462 # grab the leading prefix, to make sure all completions start with `text`
2463 if text_start > key_start:
2463 if text_start > key_start:
2464 leading = ''
2464 leading = ''
2465 else:
2465 else:
2466 leading = text[text_start:completion_start]
2466 leading = text[text_start:completion_start]
2467
2467
2468 # append closing quote and bracket as appropriate
2468 # append closing quote and bracket as appropriate
2469 # this is *not* appropriate if the opening quote or bracket is outside
2469 # this is *not* appropriate if the opening quote or bracket is outside
2470 # the text given to this method, e.g. `d["""a\nt
2470 # the text given to this method, e.g. `d["""a\nt
2471 can_close_quote = False
2471 can_close_quote = False
2472 can_close_bracket = False
2472 can_close_bracket = False
2473
2473
2474 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2474 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2475
2475
2476 if continuation.startswith(closing_quote):
2476 if continuation.startswith(closing_quote):
2477 # do not close if already closed, e.g. `d['a<tab>'`
2477 # do not close if already closed, e.g. `d['a<tab>'`
2478 continuation = continuation[len(closing_quote) :]
2478 continuation = continuation[len(closing_quote) :]
2479 else:
2479 else:
2480 can_close_quote = True
2480 can_close_quote = True
2481
2481
2482 continuation = continuation.strip()
2482 continuation = continuation.strip()
2483
2483
2484 # e.g. `pandas.DataFrame` has different tuple indexer behaviour;
2484 # e.g. `pandas.DataFrame` has different tuple indexer behaviour;
2485 # handling it is out of scope, so let's avoid appending suffixes.
2485 # handling it is out of scope, so let's avoid appending suffixes.
2486 has_known_tuple_handling = isinstance(obj, dict)
2486 has_known_tuple_handling = isinstance(obj, dict)
2487
2487
2488 can_close_bracket = (
2488 can_close_bracket = (
2489 not continuation.startswith("]") and self.auto_close_dict_keys
2489 not continuation.startswith("]") and self.auto_close_dict_keys
2490 )
2490 )
2491 can_close_tuple_item = (
2491 can_close_tuple_item = (
2492 not continuation.startswith(",")
2492 not continuation.startswith(",")
2493 and has_known_tuple_handling
2493 and has_known_tuple_handling
2494 and self.auto_close_dict_keys
2494 and self.auto_close_dict_keys
2495 )
2495 )
2496 can_close_quote = can_close_quote and self.auto_close_dict_keys
2496 can_close_quote = can_close_quote and self.auto_close_dict_keys
2497
2497
2498 # fast path if closing quote should be appended but no suffix is allowed
2498 # fast path if closing quote should be appended but no suffix is allowed
2499 if not can_close_quote and not can_close_bracket and closing_quote:
2499 if not can_close_quote and not can_close_bracket and closing_quote:
2500 return [leading + k for k in matches]
2500 return [leading + k for k in matches]
2501
2501
2502 results = []
2502 results = []
2503
2503
2504 end_of_tuple_or_item = DictKeyState.END_OF_TUPLE | DictKeyState.END_OF_ITEM
2504 end_of_tuple_or_item = DictKeyState.END_OF_TUPLE | DictKeyState.END_OF_ITEM
2505
2505
2506 for k, state_flag in matches.items():
2506 for k, state_flag in matches.items():
2507 result = leading + k
2507 result = leading + k
2508 if can_close_quote and closing_quote:
2508 if can_close_quote and closing_quote:
2509 result += closing_quote
2509 result += closing_quote
2510
2510
2511 if state_flag == end_of_tuple_or_item:
2511 if state_flag == end_of_tuple_or_item:
2512 # We do not know which suffix to add,
2512 # We do not know which suffix to add,
2513 # e.g. both tuple item and string
2513 # e.g. both tuple item and string
2514 # match this item.
2514 # match this item.
2515 pass
2515 pass
2516
2516
2517 if state_flag in end_of_tuple_or_item and can_close_bracket:
2517 if state_flag in end_of_tuple_or_item and can_close_bracket:
2518 result += "]"
2518 result += "]"
2519 if state_flag == DictKeyState.IN_TUPLE and can_close_tuple_item:
2519 if state_flag == DictKeyState.IN_TUPLE and can_close_tuple_item:
2520 result += ", "
2520 result += ", "
2521 results.append(result)
2521 results.append(result)
2522 return results
2522 return results
2523
2523
2524 @context_matcher()
2524 @context_matcher()
2525 def unicode_name_matcher(self, context: CompletionContext):
2525 def unicode_name_matcher(self, context: CompletionContext):
2526 """Same as :any:`unicode_name_matches`, but adapted to the new Matcher API."""
2526 """Same as :any:`unicode_name_matches`, but adapted to the new Matcher API."""
2527 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2527 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2528 return _convert_matcher_v1_result_to_v2(
2528 return _convert_matcher_v1_result_to_v2(
2529 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2529 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2530 )
2530 )
2531
2531
2532 @staticmethod
2532 @staticmethod
2533 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2533 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2534 """Match Latex-like syntax for unicode characters based
2534 """Match Latex-like syntax for unicode characters based
2535 on the name of the character.
2535 on the name of the character.
2536
2536
2537 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2537 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2538
2538
2539 Works only on valid Python 3 identifiers, or on combining characters that
2539 Works only on valid Python 3 identifiers, or on combining characters that
2540 will combine to form a valid identifier.
2540 will combine to form a valid identifier.
2541 """
2541 """
2542 slashpos = text.rfind('\\')
2542 slashpos = text.rfind('\\')
2543 if slashpos > -1:
2543 if slashpos > -1:
2544 s = text[slashpos+1:]
2544 s = text[slashpos+1:]
2545 try:
2545 try:
2546 unic = unicodedata.lookup(s)
2546 unic = unicodedata.lookup(s)
2547 # allow combining chars
2547 # allow combining chars
2548 if ('a'+unic).isidentifier():
2548 if ('a'+unic).isidentifier():
2549 return '\\'+s,[unic]
2549 return '\\'+s,[unic]
2550 except KeyError:
2550 except KeyError:
2551 pass
2551 pass
2552 return '', []
2552 return '', []
2553
2553
2554 @context_matcher()
2554 @context_matcher()
2555 def latex_name_matcher(self, context: CompletionContext):
2555 def latex_name_matcher(self, context: CompletionContext):
2556 """Match Latex syntax for unicode characters.
2556 """Match Latex syntax for unicode characters.
2557
2557
2558 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2558 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2559 """
2559 """
2560 fragment, matches = self.latex_matches(context.text_until_cursor)
2560 fragment, matches = self.latex_matches(context.text_until_cursor)
2561 return _convert_matcher_v1_result_to_v2(
2561 return _convert_matcher_v1_result_to_v2(
2562 matches, type="latex", fragment=fragment, suppress_if_matches=True
2562 matches, type="latex", fragment=fragment, suppress_if_matches=True
2563 )
2563 )
2564
2564
2565 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2565 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2566 """Match Latex syntax for unicode characters.
2566 """Match Latex syntax for unicode characters.
2567
2567
2568 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2568 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2569
2569
2570 .. deprecated:: 8.6
2570 .. deprecated:: 8.6
2571 You can use :meth:`latex_name_matcher` instead.
2571 You can use :meth:`latex_name_matcher` instead.
2572 """
2572 """
2573 slashpos = text.rfind('\\')
2573 slashpos = text.rfind('\\')
2574 if slashpos > -1:
2574 if slashpos > -1:
2575 s = text[slashpos:]
2575 s = text[slashpos:]
2576 if s in latex_symbols:
2576 if s in latex_symbols:
2577 # Try to complete a full latex symbol to unicode
2577 # Try to complete a full latex symbol to unicode
2578 # \\alpha -> α
2578 # \\alpha -> α
2579 return s, [latex_symbols[s]]
2579 return s, [latex_symbols[s]]
2580 else:
2580 else:
2581 # If a user has partially typed a latex symbol, give them
2581 # If a user has partially typed a latex symbol, give them
2582 # a full list of options \al -> [\aleph, \alpha]
2582 # a full list of options \al -> [\aleph, \alpha]
2583 matches = [k for k in latex_symbols if k.startswith(s)]
2583 matches = [k for k in latex_symbols if k.startswith(s)]
2584 if matches:
2584 if matches:
2585 return s, matches
2585 return s, matches
2586 return '', ()
2586 return '', ()
2587
2587
2588 @context_matcher()
2588 @context_matcher()
2589 def custom_completer_matcher(self, context):
2589 def custom_completer_matcher(self, context):
2590 """Dispatch custom completer.
2590 """Dispatch custom completer.
2591
2591
2592 If a match is found, suppresses all other matchers except for Jedi.
2592 If a match is found, suppresses all other matchers except for Jedi.
2593 """
2593 """
2594 matches = self.dispatch_custom_completer(context.token) or []
2594 matches = self.dispatch_custom_completer(context.token) or []
2595 result = _convert_matcher_v1_result_to_v2(
2595 result = _convert_matcher_v1_result_to_v2(
2596 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2596 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2597 )
2597 )
2598 result["ordered"] = True
2598 result["ordered"] = True
2599 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2599 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2600 return result
2600 return result
2601
2601
2602 def dispatch_custom_completer(self, text):
2602 def dispatch_custom_completer(self, text):
2603 """
2603 """
2604 .. deprecated:: 8.6
2604 .. deprecated:: 8.6
2605 You can use :meth:`custom_completer_matcher` instead.
2605 You can use :meth:`custom_completer_matcher` instead.
2606 """
2606 """
2607 if not self.custom_completers:
2607 if not self.custom_completers:
2608 return
2608 return
2609
2609
2610 line = self.line_buffer
2610 line = self.line_buffer
2611 if not line.strip():
2611 if not line.strip():
2612 return None
2612 return None
2613
2613
2614 # Create a little structure to pass all the relevant information about
2614 # Create a little structure to pass all the relevant information about
2615 # the current completion to any custom completer.
2615 # the current completion to any custom completer.
2616 event = SimpleNamespace()
2616 event = SimpleNamespace()
2617 event.line = line
2617 event.line = line
2618 event.symbol = text
2618 event.symbol = text
2619 cmd = line.split(None, 1)[0]
2619 cmd = line.split(None, 1)[0]
2620 event.command = cmd
2620 event.command = cmd
2621 event.text_until_cursor = self.text_until_cursor
2621 event.text_until_cursor = self.text_until_cursor
2622
2622
2623 # for foo etc, try also to find completer for %foo
2623 # for foo etc, try also to find completer for %foo
2624 if not cmd.startswith(self.magic_escape):
2624 if not cmd.startswith(self.magic_escape):
2625 try_magic = self.custom_completers.s_matches(
2625 try_magic = self.custom_completers.s_matches(
2626 self.magic_escape + cmd)
2626 self.magic_escape + cmd)
2627 else:
2627 else:
2628 try_magic = []
2628 try_magic = []
2629
2629
2630 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2630 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2631 try_magic,
2631 try_magic,
2632 self.custom_completers.flat_matches(self.text_until_cursor)):
2632 self.custom_completers.flat_matches(self.text_until_cursor)):
2633 try:
2633 try:
2634 res = c(event)
2634 res = c(event)
2635 if res:
2635 if res:
2636 # first, try case sensitive match
2636 # first, try case sensitive match
2637 withcase = [r for r in res if r.startswith(text)]
2637 withcase = [r for r in res if r.startswith(text)]
2638 if withcase:
2638 if withcase:
2639 return withcase
2639 return withcase
2640 # if none, then case insensitive ones are ok too
2640 # if none, then case insensitive ones are ok too
2641 text_low = text.lower()
2641 text_low = text.lower()
2642 return [r for r in res if r.lower().startswith(text_low)]
2642 return [r for r in res if r.lower().startswith(text_low)]
2643 except TryNext:
2643 except TryNext:
2644 pass
2644 pass
2645 except KeyboardInterrupt:
2645 except KeyboardInterrupt:
2646 """
2646 """
2647 If a custom completer takes too long,
2647 If a custom completer takes too long,
2648 let the keyboard interrupt abort and return nothing.
2648 let the keyboard interrupt abort and return nothing.
2649 """
2649 """
2650 break
2650 break
2651
2651
2652 return None
2652 return None
2653
2653
2654 def completions(self, text: str, offset: int) -> Iterator[Completion]:
2654 def completions(self, text: str, offset: int) -> Iterator[Completion]:
2655 """
2655 """
2656 Returns an iterator over the possible completions.
2656 Returns an iterator over the possible completions.
2657
2657
2658 .. warning::
2658 .. warning::
2659
2659
2660 Unstable
2660 Unstable
2661
2661
2662 This function is unstable, API may change without warning.
2662 This function is unstable, API may change without warning.
2663 It will also raise unless used in a proper context manager.
2663 It will also raise unless used in a proper context manager.
2664
2664
2665 Parameters
2665 Parameters
2666 ----------
2666 ----------
2667 text : str
2667 text : str
2668 Full text of the current input, multi line string.
2668 Full text of the current input, multi line string.
2669 offset : int
2669 offset : int
2670 Integer representing the position of the cursor in ``text``. Offset
2670 Integer representing the position of the cursor in ``text``. Offset
2671 is 0-based.
2671 is 0-based.
2672
2672
2673 Yields
2673 Yields
2674 ------
2674 ------
2675 Completion
2675 Completion
2676
2676
2677 Notes
2677 Notes
2678 -----
2678 -----
2679 The cursor on a text can either be seen as being "in between"
2679 The cursor on a text can either be seen as being "in between"
2680 characters or "on" a character depending on the interface visible to
2680 characters or "on" a character depending on the interface visible to
2681 the user. For consistency, the cursor being "in between" characters X
2681 the user. For consistency, the cursor being "in between" characters X
2682 and Y is equivalent to the cursor being "on" character Y, that is to say
2682 and Y is equivalent to the cursor being "on" character Y, that is to say
2683 the character the cursor is on is considered as being after the cursor.
2683 the character the cursor is on is considered as being after the cursor.
2684
2684
2685 Combining characters may span more than one position in the
2685 Combining characters may span more than one position in the
2686 text.
2686 text.
2687
2687
2688 .. note::
2688 .. note::
2689
2689
2690 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2690 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2691 fake Completion token to distinguish completions returned by Jedi
2691 fake Completion token to distinguish completions returned by Jedi
2692 from the usual IPython completions.
2692 from the usual IPython completions.
2693
2693
2694 .. note::
2694 .. note::
2695
2695
2696 Completions are not completely deduplicated yet. If identical
2696 Completions are not completely deduplicated yet. If identical
2697 completions are coming from different sources, this function does not
2697 completions are coming from different sources, this function does not
2698 ensure that each completion object will only be present once.
2698 ensure that each completion object will only be present once.
2699 """
2699 """
2700 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2700 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2701 "It may change without warnings. "
2701 "It may change without warnings. "
2702 "Use in corresponding context manager.",
2702 "Use in corresponding context manager.",
2703 category=ProvisionalCompleterWarning, stacklevel=2)
2703 category=ProvisionalCompleterWarning, stacklevel=2)
2704
2704
2705 seen = set()
2705 seen = set()
2706 profiler: Optional[cProfile.Profile]
2706 profiler: Optional[cProfile.Profile]
2707 try:
2707 try:
2708 if self.profile_completions:
2708 if self.profile_completions:
2709 import cProfile
2709 import cProfile
2710 profiler = cProfile.Profile()
2710 profiler = cProfile.Profile()
2711 profiler.enable()
2711 profiler.enable()
2712 else:
2712 else:
2713 profiler = None
2713 profiler = None
2714
2714
2715 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2715 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2716 if c and (c in seen):
2716 if c and (c in seen):
2717 continue
2717 continue
2718 yield c
2718 yield c
2719 seen.add(c)
2719 seen.add(c)
2720 except KeyboardInterrupt:
2720 except KeyboardInterrupt:
2721 """If completions take too long and the user sends a keyboard interrupt,
2721 """If completions take too long and the user sends a keyboard interrupt,
2722 do not crash and return ASAP."""
2722 do not crash and return ASAP."""
2723 pass
2723 pass
2724 finally:
2724 finally:
2725 if profiler is not None:
2725 if profiler is not None:
2726 profiler.disable()
2726 profiler.disable()
2727 ensure_dir_exists(self.profiler_output_dir)
2727 ensure_dir_exists(self.profiler_output_dir)
2728 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2728 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2729 print("Writing profiler output to", output_path)
2729 print("Writing profiler output to", output_path)
2730 profiler.dump_stats(output_path)
2730 profiler.dump_stats(output_path)
2731
2731
2732 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2732 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2733 """
2733 """
2734 Core completion module. Same signature as :any:`completions`, with the
2734 Core completion module. Same signature as :any:`completions`, with the
2735 extra `timeout` parameter (in seconds).
2735 extra `timeout` parameter (in seconds).
2736
2736
2737 Computing jedi's completion ``.type`` can be quite expensive (it is a
2737 Computing jedi's completion ``.type`` can be quite expensive (it is a
2738 lazy property) and can require some warm-up, more warm-up than just
2738 lazy property) and can require some warm-up, more warm-up than just
2739 computing the ``name`` of a completion. The warm-up can be:
2739 computing the ``name`` of a completion. The warm-up can be:
2740
2740
2741 - Long warm-up the first time a module is encountered after
2741 - Long warm-up the first time a module is encountered after
2742 install/update: actually building the parse/inference tree.
2742 install/update: actually building the parse/inference tree.
2743
2743
2744 - First time the module is encountered in a session: load the tree from
2744 - First time the module is encountered in a session: load the tree from
2745 disk.
2745 disk.
2746
2746
2747 We don't want to block completions for tens of seconds so we give the
2747 We don't want to block completions for tens of seconds so we give the
2748 completer a "budget" of ``_timeout`` seconds per invocation to compute
2748 completer a "budget" of ``_timeout`` seconds per invocation to compute
2749 completion types; the completions that have not yet been computed will
2749 completion types; the completions that have not yet been computed will
2750 be marked as "unknown" and will have a chance to be computed next round
2750 be marked as "unknown" and will have a chance to be computed next round
2751 as things get cached.
2751 as things get cached.
2752
2752
2753 Keep in mind that Jedi is not the only thing processing the completions, so
2753 Keep in mind that Jedi is not the only thing processing the completions, so
2754 keep the timeout short-ish: if we take more than 0.3 seconds we still
2754 keep the timeout short-ish: if we take more than 0.3 seconds we still
2755 have lots of processing to do.
2755 have lots of processing to do.
2756
2756
2757 """
2757 """
2758 deadline = time.monotonic() + _timeout
2758 deadline = time.monotonic() + _timeout
2759
2759
2760 before = full_text[:offset]
2760 before = full_text[:offset]
2761 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2761 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2762
2762
2763 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2763 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2764
2764
2765 results = self._complete(
2765 results = self._complete(
2766 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2766 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2767 )
2767 )
2768 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2768 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2769 identifier: result
2769 identifier: result
2770 for identifier, result in results.items()
2770 for identifier, result in results.items()
2771 if identifier != jedi_matcher_id
2771 if identifier != jedi_matcher_id
2772 }
2772 }
2773
2773
2774 jedi_matches = (
2774 jedi_matches = (
2775 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2775 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2776 if jedi_matcher_id in results
2776 if jedi_matcher_id in results
2777 else ()
2777 else ()
2778 )
2778 )
2779
2779
2780 iter_jm = iter(jedi_matches)
2780 iter_jm = iter(jedi_matches)
2781 if _timeout:
2781 if _timeout:
2782 for jm in iter_jm:
2782 for jm in iter_jm:
2783 try:
2783 try:
2784 type_ = jm.type
2784 type_ = jm.type
2785 except Exception:
2785 except Exception:
2786 if self.debug:
2786 if self.debug:
2787 print("Error in Jedi getting type of ", jm)
2787 print("Error in Jedi getting type of ", jm)
2788 type_ = None
2788 type_ = None
2789 delta = len(jm.name_with_symbols) - len(jm.complete)
2789 delta = len(jm.name_with_symbols) - len(jm.complete)
2790 if type_ == 'function':
2790 if type_ == 'function':
2791 signature = _make_signature(jm)
2791 signature = _make_signature(jm)
2792 else:
2792 else:
2793 signature = ''
2793 signature = ''
2794 yield Completion(start=offset - delta,
2794 yield Completion(start=offset - delta,
2795 end=offset,
2795 end=offset,
2796 text=jm.name_with_symbols,
2796 text=jm.name_with_symbols,
2797 type=type_,
2797 type=type_,
2798 signature=signature,
2798 signature=signature,
2799 _origin='jedi')
2799 _origin='jedi')
2800
2800
2801 if time.monotonic() > deadline:
2801 if time.monotonic() > deadline:
2802 break
2802 break
2803
2803
2804 for jm in iter_jm:
2804 for jm in iter_jm:
2805 delta = len(jm.name_with_symbols) - len(jm.complete)
2805 delta = len(jm.name_with_symbols) - len(jm.complete)
2806 yield Completion(
2806 yield Completion(
2807 start=offset - delta,
2807 start=offset - delta,
2808 end=offset,
2808 end=offset,
2809 text=jm.name_with_symbols,
2809 text=jm.name_with_symbols,
2810 type=_UNKNOWN_TYPE, # don't compute type for speed
2810 type=_UNKNOWN_TYPE, # don't compute type for speed
2811 _origin="jedi",
2811 _origin="jedi",
2812 signature="",
2812 signature="",
2813 )
2813 )
2814
2814
2815 # TODO:
2815 # TODO:
2816 # Suppress this, right now just for debug.
2816 # Suppress this, right now just for debug.
2817 if jedi_matches and non_jedi_results and self.debug:
2817 if jedi_matches and non_jedi_results and self.debug:
2818 some_start_offset = before.rfind(
2818 some_start_offset = before.rfind(
2819 next(iter(non_jedi_results.values()))["matched_fragment"]
2819 next(iter(non_jedi_results.values()))["matched_fragment"]
2820 )
2820 )
2821 yield Completion(
2821 yield Completion(
2822 start=some_start_offset,
2822 start=some_start_offset,
2823 end=offset,
2823 end=offset,
2824 text="--jedi/ipython--",
2824 text="--jedi/ipython--",
2825 _origin="debug",
2825 _origin="debug",
2826 type="none",
2826 type="none",
2827 signature="",
2827 signature="",
2828 )
2828 )
2829
2829
2830 ordered = []
2830 ordered = []
2831 sortable = []
2831 sortable = []
2832
2832
2833 for origin, result in non_jedi_results.items():
2833 for origin, result in non_jedi_results.items():
2834 matched_text = result["matched_fragment"]
2834 matched_text = result["matched_fragment"]
2835 start_offset = before.rfind(matched_text)
2835 start_offset = before.rfind(matched_text)
2836 is_ordered = result.get("ordered", False)
2836 is_ordered = result.get("ordered", False)
2837 container = ordered if is_ordered else sortable
2837 container = ordered if is_ordered else sortable
2838
2838
2839 # I'm unsure if this is always true, so let's assert and see if it
2839 # I'm unsure if this is always true, so let's assert and see if it
2840 # crashes
2840 # crashes
2841 assert before.endswith(matched_text)
2841 assert before.endswith(matched_text)
2842
2842
2843 for simple_completion in result["completions"]:
2843 for simple_completion in result["completions"]:
2844 completion = Completion(
2844 completion = Completion(
2845 start=start_offset,
2845 start=start_offset,
2846 end=offset,
2846 end=offset,
2847 text=simple_completion.text,
2847 text=simple_completion.text,
2848 _origin=origin,
2848 _origin=origin,
2849 signature="",
2849 signature="",
2850 type=simple_completion.type or _UNKNOWN_TYPE,
2850 type=simple_completion.type or _UNKNOWN_TYPE,
2851 )
2851 )
2852 container.append(completion)
2852 container.append(completion)
2853
2853
2854 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2854 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2855 :MATCHES_LIMIT
2855 :MATCHES_LIMIT
2856 ]
2856 ]
2857
2857
2858 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2858 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2859 """Find completions for the given text and line context.
2859 """Find completions for the given text and line context.
2860
2860
2861 Note that both the text and the line_buffer are optional, but at least
2861 Note that both the text and the line_buffer are optional, but at least
2862 one of them must be given.
2862 one of them must be given.
2863
2863
2864 Parameters
2864 Parameters
2865 ----------
2865 ----------
2866 text : string, optional
2866 text : string, optional
2867 Text to perform the completion on. If not given, the line buffer
2867 Text to perform the completion on. If not given, the line buffer
2868 is split using the instance's CompletionSplitter object.
2868 is split using the instance's CompletionSplitter object.
2869 line_buffer : string, optional
2869 line_buffer : string, optional
2870 If not given, the completer attempts to obtain the current line
2870 If not given, the completer attempts to obtain the current line
2871 buffer via readline. This keyword allows clients which are
2871 buffer via readline. This keyword allows clients which are
2872 requesting text completions in non-readline contexts to inform
2872 requesting text completions in non-readline contexts to inform
2873 the completer of the entire text.
2873 the completer of the entire text.
2874 cursor_pos : int, optional
2874 cursor_pos : int, optional
2875 Index of the cursor in the full line buffer. Should be provided by
2875 Index of the cursor in the full line buffer. Should be provided by
2876 remote frontends where the kernel has no access to frontend state.
2876 remote frontends where the kernel has no access to frontend state.
2877
2877
2878 Returns
2878 Returns
2879 -------
2879 -------
2880 Tuple of two items:
2880 Tuple of two items:
2881 text : str
2881 text : str
2882 Text that was actually used in the completion.
2882 Text that was actually used in the completion.
2883 matches : list
2883 matches : list
2884 A list of completion matches.
2884 A list of completion matches.
2885
2885
2886 Notes
2886 Notes
2887 -----
2887 -----
2888 This API is likely to be deprecated and replaced by
2888 This API is likely to be deprecated and replaced by
2889 :any:`IPCompleter.completions` in the future.
2889 :any:`IPCompleter.completions` in the future.
2890
2890
2891 """
2891 """
2892 warnings.warn('`Completer.complete` is pending deprecation since '
2892 warnings.warn('`Completer.complete` is pending deprecation since '
2893 'IPython 6.0 and will be replaced by `Completer.completions`.',
2893 'IPython 6.0 and will be replaced by `Completer.completions`.',
2894 PendingDeprecationWarning)
2894 PendingDeprecationWarning)
2895 # potential todo: FOLD the 3rd throwaway argument of _complete
2895 # potential todo: FOLD the 3rd throwaway argument of _complete
2896 # into the first two.
2896 # into the first two.
2897 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2897 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2898 # TODO: should we deprecate now, or does it stay?
2898 # TODO: should we deprecate now, or does it stay?
2899
2899
2900 results = self._complete(
2900 results = self._complete(
2901 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2901 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2902 )
2902 )
2903
2903
2904 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2904 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2905
2905
2906 return self._arrange_and_extract(
2906 return self._arrange_and_extract(
2907 results,
2907 results,
2908 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
2908 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
2909 skip_matchers={jedi_matcher_id},
2909 skip_matchers={jedi_matcher_id},
2910 # this API does not support different start/end positions (fragments of token).
2910 # this API does not support different start/end positions (fragments of token).
2911 abort_if_offset_changes=True,
2911 abort_if_offset_changes=True,
2912 )
2912 )
2913
2913
2914 def _arrange_and_extract(
2914 def _arrange_and_extract(
2915 self,
2915 self,
2916 results: Dict[str, MatcherResult],
2916 results: Dict[str, MatcherResult],
2917 skip_matchers: Set[str],
2917 skip_matchers: Set[str],
2918 abort_if_offset_changes: bool,
2918 abort_if_offset_changes: bool,
2919 ):
2919 ):
2920
2920
2921 sortable = []
2921 sortable = []
2922 ordered = []
2922 ordered = []
2923 most_recent_fragment = None
2923 most_recent_fragment = None
2924 for identifier, result in results.items():
2924 for identifier, result in results.items():
2925 if identifier in skip_matchers:
2925 if identifier in skip_matchers:
2926 continue
2926 continue
2927 if not result["completions"]:
2927 if not result["completions"]:
2928 continue
2928 continue
2929 if not most_recent_fragment:
2929 if not most_recent_fragment:
2930 most_recent_fragment = result["matched_fragment"]
2930 most_recent_fragment = result["matched_fragment"]
2931 if (
2931 if (
2932 abort_if_offset_changes
2932 abort_if_offset_changes
2933 and result["matched_fragment"] != most_recent_fragment
2933 and result["matched_fragment"] != most_recent_fragment
2934 ):
2934 ):
2935 break
2935 break
2936 if result.get("ordered", False):
2936 if result.get("ordered", False):
2937 ordered.extend(result["completions"])
2937 ordered.extend(result["completions"])
2938 else:
2938 else:
2939 sortable.extend(result["completions"])
2939 sortable.extend(result["completions"])
2940
2940
2941 if not most_recent_fragment:
2941 if not most_recent_fragment:
2942 most_recent_fragment = "" # to satisfy typechecker (and just in case)
2942 most_recent_fragment = "" # to satisfy typechecker (and just in case)
2943
2943
2944 return most_recent_fragment, [
2944 return most_recent_fragment, [
2945 m.text for m in self._deduplicate(ordered + self._sort(sortable))
2945 m.text for m in self._deduplicate(ordered + self._sort(sortable))
2946 ]
2946 ]
2947
2947
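The split between pre-ordered and sortable matches above can be illustrated with a small standalone sketch (`arrange` and its input shape are hypothetical simplifications, not the real `_arrange_and_extract` signature):

```python
# Sketch: matchers that declare their results as already ordered keep their
# order and come first; everything else is pooled and sorted.
def arrange(results):
    """results: list of (completions, is_ordered) pairs (simplified)."""
    ordered, sortable = [], []
    for completions, is_ordered in results:
        (ordered if is_ordered else sortable).extend(completions)
    # pre-ordered matches first, then the sorted remainder
    return ordered + sorted(sortable)

print(arrange([(["zeta", "alpha"], True), (["mu", "beta"], False)]))
```

Note that "zeta" stays ahead of "alpha" because its matcher claimed ordering, while the unordered pool is sorted.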
2948 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2948 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2949 full_text=None) -> _CompleteResult:
2949 full_text=None) -> _CompleteResult:
2950 """
2950 """
2951 Like complete but can also return raw jedi completions as well as the
2951 Like complete but can also return raw jedi completions as well as the
2952 origin of the completion text. This could (and should) be made much
2952 origin of the completion text. This could (and should) be made much
2953 cleaner but that will be simpler once we drop the old (and stateful)
2953 cleaner but that will be simpler once we drop the old (and stateful)
2954 :any:`complete` API.
2954 :any:`complete` API.
2955
2955
2956 With the current provisional API, ``cursor_pos`` acts (depending on the
2956 With the current provisional API, ``cursor_pos`` acts (depending on the
2957 caller) as the offset in ``text`` or ``line_buffer``, or as the
2957 caller) as the offset in ``text`` or ``line_buffer``, or as the
2958 ``column`` when passing multiline strings; this could/should be renamed,
2958 ``column`` when passing multiline strings; this could/should be renamed,
2959 but that would add extra noise.
2959 but that would add extra noise.
2960
2960
2961 Parameters
2961 Parameters
2962 ----------
2962 ----------
2963 cursor_line
2963 cursor_line
2964 Index of the line the cursor is on. 0 indexed.
2964 Index of the line the cursor is on. 0 indexed.
2965 cursor_pos
2965 cursor_pos
2966 Position of the cursor in the current line/line_buffer/text. 0
2966 Position of the cursor in the current line/line_buffer/text. 0
2967 indexed.
2967 indexed.
2968 line_buffer : optional, str
2968 line_buffer : optional, str
2969 The current line the cursor is in; this is mostly for legacy
2969 The current line the cursor is in; this is mostly for legacy
2970 reasons, as readline could only give us the single current line.
2970 reasons, as readline could only give us the single current line.
2971 Prefer `full_text`.
2971 Prefer `full_text`.
2972 text : str
2972 text : str
2973 The current "token" the cursor is in, mostly also for historical
2973 The current "token" the cursor is in, mostly also for historical
2974 reasons, as the completer would trigger only after the current line
2974 reasons, as the completer would trigger only after the current line
2975 was parsed.
2975 was parsed.
2976 full_text : str
2976 full_text : str
2977 Full text of the current cell.
2977 Full text of the current cell.
2978
2978
2979 Returns
2979 Returns
2980 -------
2980 -------
2981 An ordered dictionary where keys are identifiers of completion
2981 An ordered dictionary where keys are identifiers of completion
2982 matchers and values are ``MatcherResult``s.
2982 matchers and values are ``MatcherResult``s.
2983 """
2983 """
2984
2984
2985 # if the cursor position isn't given, the only sane assumption we can
2985 # if the cursor position isn't given, the only sane assumption we can
2986 # make is that it's at the end of the line (the common case)
2986 # make is that it's at the end of the line (the common case)
2987 if cursor_pos is None:
2987 if cursor_pos is None:
2988 cursor_pos = len(line_buffer) if text is None else len(text)
2988 cursor_pos = len(line_buffer) if text is None else len(text)
2989
2989
2990 if self.use_main_ns:
2990 if self.use_main_ns:
2991 self.namespace = __main__.__dict__
2991 self.namespace = __main__.__dict__
2992
2992
2993 # if text is either None or an empty string, rely on the line buffer
2993 # if text is either None or an empty string, rely on the line buffer
2994 if (not line_buffer) and full_text:
2994 if (not line_buffer) and full_text:
2995 line_buffer = full_text.split('\n')[cursor_line]
2995 line_buffer = full_text.split('\n')[cursor_line]
2996 if not text: # issue #11508: check line_buffer before calling split_line
2996 if not text: # issue #11508: check line_buffer before calling split_line
2997 text = (
2997 text = (
2998 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
2998 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
2999 )
2999 )
3000
3000
3001 # If no line buffer is given, assume the input text is all there was
3001 # If no line buffer is given, assume the input text is all there was
3002 if line_buffer is None:
3002 if line_buffer is None:
3003 line_buffer = text
3003 line_buffer = text
3004
3004
3005 # deprecated - do not use `line_buffer` in new code.
3005 # deprecated - do not use `line_buffer` in new code.
3006 self.line_buffer = line_buffer
3006 self.line_buffer = line_buffer
3007 self.text_until_cursor = self.line_buffer[:cursor_pos]
3007 self.text_until_cursor = self.line_buffer[:cursor_pos]
3008
3008
3009 if not full_text:
3009 if not full_text:
3010 full_text = line_buffer
3010 full_text = line_buffer
3011
3011
3012 context = CompletionContext(
3012 context = CompletionContext(
3013 full_text=full_text,
3013 full_text=full_text,
3014 cursor_position=cursor_pos,
3014 cursor_position=cursor_pos,
3015 cursor_line=cursor_line,
3015 cursor_line=cursor_line,
3016 token=text,
3016 token=text,
3017 limit=MATCHES_LIMIT,
3017 limit=MATCHES_LIMIT,
3018 )
3018 )
3019
3019
3020 # Start with a clean slate of completions
3020 # Start with a clean slate of completions
3021 results = {}
3021 results = {}
3022
3022
3023 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3023 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3024
3024
3025 suppressed_matchers = set()
3025 suppressed_matchers = set()
3026
3026
3027 matchers = {
3027 matchers = {
3028 _get_matcher_id(matcher): matcher
3028 _get_matcher_id(matcher): matcher
3029 for matcher in sorted(
3029 for matcher in sorted(
3030 self.matchers, key=_get_matcher_priority, reverse=True
3030 self.matchers, key=_get_matcher_priority, reverse=True
3031 )
3031 )
3032 }
3032 }
3033
3033
3034 for matcher_id, matcher in matchers.items():
3034 for matcher_id, matcher in matchers.items():
3035 api_version = _get_matcher_api_version(matcher)
3035 api_version = _get_matcher_api_version(matcher)
3036 matcher_id = _get_matcher_id(matcher)
3036 matcher_id = _get_matcher_id(matcher)
3037
3037
3038 if matcher_id in self.disable_matchers:
3038 if matcher_id in self.disable_matchers:
3039 continue
3039 continue
3040
3040
3041 if matcher_id in results:
3041 if matcher_id in results:
3042 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3042 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3043
3043
3044 if matcher_id in suppressed_matchers:
3044 if matcher_id in suppressed_matchers:
3045 continue
3045 continue
3046
3046
3047 try:
3047 try:
3048 if api_version == 1:
3048 if api_version == 1:
3049 result = _convert_matcher_v1_result_to_v2(
3049 result = _convert_matcher_v1_result_to_v2(
3050 matcher(text), type=_UNKNOWN_TYPE
3050 matcher(text), type=_UNKNOWN_TYPE
3051 )
3051 )
3052 elif api_version == 2:
3052 elif api_version == 2:
3053 result = cast(MatcherAPIv2, matcher)(context)
3053 result = cast(MatcherAPIv2, matcher)(context)
3054 else:
3054 else:
3055 raise ValueError(f"Unsupported API version {api_version}")
3055 raise ValueError(f"Unsupported API version {api_version}")
3056 except:
3056 except:
3057 # Show the ugly traceback if the matcher causes an
3057 # Show the ugly traceback if the matcher causes an
3058 # exception, but do NOT crash the kernel!
3058 # exception, but do NOT crash the kernel!
3059 sys.excepthook(*sys.exc_info())
3059 sys.excepthook(*sys.exc_info())
3060 continue
3060 continue
3061
3061
3062 # set default value for matched fragment if suffix was not selected.
3062 # set default value for matched fragment if suffix was not selected.
3063 result["matched_fragment"] = result.get("matched_fragment", context.token)
3063 result["matched_fragment"] = result.get("matched_fragment", context.token)
3064
3064
3065 if not suppressed_matchers:
3065 if not suppressed_matchers:
3066 suppression_recommended = result.get("suppress", False)
3066 suppression_recommended = result.get("suppress", False)
3067
3067
3068 suppression_config = (
3068 suppression_config = (
3069 self.suppress_competing_matchers.get(matcher_id, None)
3069 self.suppress_competing_matchers.get(matcher_id, None)
3070 if isinstance(self.suppress_competing_matchers, dict)
3070 if isinstance(self.suppress_competing_matchers, dict)
3071 else self.suppress_competing_matchers
3071 else self.suppress_competing_matchers
3072 )
3072 )
3073 should_suppress = (
3073 should_suppress = (
3074 (suppression_config is True)
3074 (suppression_config is True)
3075 or (suppression_recommended and (suppression_config is not False))
3075 or (suppression_recommended and (suppression_config is not False))
3076 ) and has_any_completions(result)
3076 ) and has_any_completions(result)
3077
3077
3078 if should_suppress:
3078 if should_suppress:
3079 suppression_exceptions = result.get("do_not_suppress", set())
3079 suppression_exceptions = result.get("do_not_suppress", set())
3080 try:
3080 try:
3081 to_suppress = set(suppression_recommended)
3081 to_suppress = set(suppression_recommended)
3082 except TypeError:
3082 except TypeError:
3083 to_suppress = set(matchers)
3083 to_suppress = set(matchers)
3084 suppressed_matchers = to_suppress - suppression_exceptions
3084 suppressed_matchers = to_suppress - suppression_exceptions
3085
3085
3086 new_results = {}
3086 new_results = {}
3087 for previous_matcher_id, previous_result in results.items():
3087 for previous_matcher_id, previous_result in results.items():
3088 if previous_matcher_id not in suppressed_matchers:
3088 if previous_matcher_id not in suppressed_matchers:
3089 new_results[previous_matcher_id] = previous_result
3089 new_results[previous_matcher_id] = previous_result
3090 results = new_results
3090 results = new_results
3091
3091
3092 results[matcher_id] = result
3092 results[matcher_id] = result
3093
3093
3094 _, matches = self._arrange_and_extract(
3094 _, matches = self._arrange_and_extract(
3095 results,
3095 results,
3096 # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission?
3096 # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission?
3097 # if it was an omission, we can remove the filtering step; otherwise remove this comment.
3097 # if it was an omission, we can remove the filtering step; otherwise remove this comment.
3098 skip_matchers={jedi_matcher_id},
3098 skip_matchers={jedi_matcher_id},
3099 abort_if_offset_changes=False,
3099 abort_if_offset_changes=False,
3100 )
3100 )
3101
3101
3102 # populate legacy stateful API
3102 # populate legacy stateful API
3103 self.matches = matches
3103 self.matches = matches
3104
3104
3105 return results
3105 return results
3106
3106
3107 @staticmethod
3107 @staticmethod
3108 def _deduplicate(
3108 def _deduplicate(
3109 matches: Sequence[SimpleCompletion],
3109 matches: Sequence[SimpleCompletion],
3110 ) -> Iterable[SimpleCompletion]:
3110 ) -> Iterable[SimpleCompletion]:
3111 filtered_matches = {}
3111 filtered_matches = {}
3112 for match in matches:
3112 for match in matches:
3113 text = match.text
3113 text = match.text
3114 if (
3114 if (
3115 text not in filtered_matches
3115 text not in filtered_matches
3116 or filtered_matches[text].type == _UNKNOWN_TYPE
3116 or filtered_matches[text].type == _UNKNOWN_TYPE
3117 ):
3117 ):
3118 filtered_matches[text] = match
3118 filtered_matches[text] = match
3119
3119
3120 return filtered_matches.values()
3120 return filtered_matches.values()
3121
3121
3122 @staticmethod
3122 @staticmethod
3123 def _sort(matches: Sequence[SimpleCompletion]):
3123 def _sort(matches: Sequence[SimpleCompletion]):
3124 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3124 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3125
3125
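The deduplication rule in `_deduplicate` (first match for a given text wins, unless the stored one has an unknown type) can be sketched standalone; `SimpleCompletion` and the `UNKNOWN` marker are re-declared locally for the demo:

```python
from collections import namedtuple

SimpleCompletion = namedtuple("SimpleCompletion", ["text", "type"])
UNKNOWN = "<unknown>"

def deduplicate(matches):
    # keep the first match for each text, but let a later match with a
    # known type replace an earlier one whose type is unknown
    filtered = {}
    for m in matches:
        if m.text not in filtered or filtered[m.text].type == UNKNOWN:
            filtered[m.text] = m
    return list(filtered.values())

ms = [
    SimpleCompletion("abs", UNKNOWN),
    SimpleCompletion("abs", "function"),
    SimpleCompletion("any", "function"),
]
print([(m.text, m.type) for m in deduplicate(ms)])
```

The typed duplicate of `abs` replaces the untyped one, while distinct texts are kept.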
3126 @context_matcher()
3126 @context_matcher()
3127 def fwd_unicode_matcher(self, context: CompletionContext):
3127 def fwd_unicode_matcher(self, context: CompletionContext):
3128 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3128 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3129 # TODO: use `context.limit` to terminate early once we have matched the maximum
3129 # TODO: use `context.limit` to terminate early once we have matched the maximum
3130 # number that will be used downstream; this can be added as an optional argument to
3130 # number that will be used downstream; this can be added as an optional argument to
3131 # `fwd_unicode_match(text: str, limit: int = None)`, or we could re-implement it here.
3131 # `fwd_unicode_match(text: str, limit: int = None)`, or we could re-implement it here.
3132 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3132 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3133 return _convert_matcher_v1_result_to_v2(
3133 return _convert_matcher_v1_result_to_v2(
3134 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3134 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3135 )
3135 )
3136
3136
3137 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3137 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3138 """
3138 """
3139 Forward match a string starting with a backslash with a list of
3139 Forward match a string starting with a backslash with a list of
3140 potential Unicode completions.
3140 potential Unicode completions.
3141
3141
3142 Will compute list of Unicode character names on first call and cache it.
3142 Will compute list of Unicode character names on first call and cache it.
3143
3143
3144 .. deprecated:: 8.6
3144 .. deprecated:: 8.6
3145 You can use :meth:`fwd_unicode_matcher` instead.
3145 You can use :meth:`fwd_unicode_matcher` instead.
3146
3146
3147 Returns
3147 Returns
3148 -------
3148 -------
3149 A tuple with:
3149 A tuple with:
3150 - matched text (empty if no matches)
3150 - matched text (empty if no matches)
3151 - list of potential completions (empty tuple if there are none)
3151 - list of potential completions (empty tuple if there are none)
3152 """
3152 """
3153 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call.
3153 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call.
3154 # We could do a faster match using a Trie.
3154 # We could do a faster match using a Trie.
3155
3155
3156 # Using pygtrie, the following seems to work:
3156 # Using pygtrie, the following seems to work:
3157
3157
3158 # s = PrefixSet()
3158 # s = PrefixSet()
3159
3159
3160 # for c in range(0,0x10FFFF + 1):
3160 # for c in range(0,0x10FFFF + 1):
3161 # try:
3161 # try:
3162 # s.add(unicodedata.name(chr(c)))
3162 # s.add(unicodedata.name(chr(c)))
3163 # except ValueError:
3163 # except ValueError:
3164 # pass
3164 # pass
3165 # [''.join(k) for k in s.iter(prefix)]
3165 # [''.join(k) for k in s.iter(prefix)]
3166
3166
3167 # But this needs to be timed and adds an extra dependency.
3167 # But this needs to be timed and adds an extra dependency.
3168
3168
3169 slashpos = text.rfind('\\')
3169 slashpos = text.rfind('\\')
3170 # if text contains a backslash
3170 # if text contains a backslash
3171 if slashpos > -1:
3171 if slashpos > -1:
3172 # PERF: It's important that we don't access self._unicode_names
3172 # PERF: It's important that we don't access self._unicode_names
3173 # until we're inside this if-block. _unicode_names is lazily
3173 # until we're inside this if-block. _unicode_names is lazily
3174 # initialized, and it takes a user-noticeable amount of time to
3174 # initialized, and it takes a user-noticeable amount of time to
3175 # initialize it, so we don't want to initialize it unless we're
3175 # initialize it, so we don't want to initialize it unless we're
3176 # actually going to use it.
3176 # actually going to use it.
3177 s = text[slashpos + 1 :]
3177 s = text[slashpos + 1 :]
3178 sup = s.upper()
3178 sup = s.upper()
3179 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3179 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3180 if candidates:
3180 if candidates:
3181 return s, candidates
3181 return s, candidates
3182 candidates = [x for x in self.unicode_names if sup in x]
3182 candidates = [x for x in self.unicode_names if sup in x]
3183 if candidates:
3183 if candidates:
3184 return s, candidates
3184 return s, candidates
3185 splitsup = sup.split(" ")
3185 splitsup = sup.split(" ")
3186 candidates = [
3186 candidates = [
3187 x for x in self.unicode_names if all(u in x for u in splitsup)
3187 x for x in self.unicode_names if all(u in x for u in splitsup)
3188 ]
3188 ]
3189 if candidates:
3189 if candidates:
3190 return s, candidates
3190 return s, candidates
3191
3191
3192 return "", ()
3192 return "", ()
3193
3193
3194 # if text contains no backslash
3194 # if text contains no backslash
3195 else:
3195 else:
3196 return '', ()
3196 return '', ()
3197
3197
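The three matching stages of `fwd_unicode_match` (name prefix, then substring, then all-words) can be sketched standalone; `fwd_unicode_candidates` below is a hypothetical simplification that collapses the early returns into a loop, and the demo uses a tiny name list instead of the ~100k real names:

```python
import unicodedata

def fwd_unicode_candidates(text, names):
    """Mimic the staged matching: prefix, then substring, then all-words."""
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", ()
    s = text[slashpos + 1:]
    sup = s.upper()
    for predicate in (
        lambda n: n.startswith(sup),                    # stage 1: prefix
        lambda n: sup in n,                             # stage 2: substring
        lambda n: all(w in n for w in sup.split(" ")),  # stage 3: all words
    ):
        candidates = [n for n in names if predicate(n)]
        if candidates:
            return s, candidates
    return "", ()

# small name list for the demo (the real completer scans all code points)
names = [unicodedata.name(c) for c in "αβγ∞"]
print(fwd_unicode_candidates("x = \\GREEK SMALL", names))
```

Each stage only runs if the previous one produced no candidates, so cheap prefix matches short-circuit the broader searches.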
3198 @property
3198 @property
3199 def unicode_names(self) -> List[str]:
3199 def unicode_names(self) -> List[str]:
3200 """List of names of unicode code points that can be completed.
3200 """List of names of unicode code points that can be completed.
3201
3201
3202 The list is lazily initialized on first access.
3202 The list is lazily initialized on first access.
3203 """
3203 """
3204 if self._unicode_names is None:
3204 if self._unicode_names is None:
3211 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3211 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3212
3212
3213 return self._unicode_names
3213 return self._unicode_names
3214
3214
3215 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3215 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3216 names = []
3216 names = []
3217 for start,stop in ranges:
3217 for start,stop in ranges:
3218 for c in range(start, stop) :
3218 for c in range(start, stop) :
3219 try:
3219 try:
3220 names.append(unicodedata.name(chr(c)))
3220 names.append(unicodedata.name(chr(c)))
3221 except ValueError:
3221 except ValueError:
3222 pass
3222 pass
3223 return names
3223 return names
@@ -1,525 +1,524 b''
1 from typing import Callable, Set, Tuple, NamedTuple, Literal, Union, TYPE_CHECKING
1 from typing import Callable, Set, Tuple, NamedTuple, Literal, Union, TYPE_CHECKING
2 import collections
2 import collections
3 import sys
3 import sys
4 import ast
4 import ast
5 from functools import cached_property
5 from functools import cached_property
6 from dataclasses import dataclass, field
6 from dataclasses import dataclass, field
7
7
8 from IPython.utils.docs import GENERATING_DOCUMENTATION
8 from IPython.utils.docs import GENERATING_DOCUMENTATION
9
9
10
10
11 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
11 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
12 from typing_extensions import Protocol
12 from typing_extensions import Protocol
13 else:
13 else:
14 # do not require on runtime
14 # do not require on runtime
15 Protocol = object # requires Python >=3.8
15 Protocol = object # requires Python >=3.8
16
16
17
17
18 class HasGetItem(Protocol):
18 class HasGetItem(Protocol):
19 def __getitem__(self, key) -> None:
19 def __getitem__(self, key) -> None:
20 ...
20 ...
21
21
22
22
23 class InstancesHaveGetItem(Protocol):
23 class InstancesHaveGetItem(Protocol):
24 def __call__(self) -> HasGetItem:
24 def __call__(self) -> HasGetItem:
25 ...
25 ...
26
26
27
27
28 class HasGetAttr(Protocol):
28 class HasGetAttr(Protocol):
29 def __getattr__(self, key) -> None:
29 def __getattr__(self, key) -> None:
30 ...
30 ...
31
31
32
32
33 class DoesNotHaveGetAttr(Protocol):
33 class DoesNotHaveGetAttr(Protocol):
34 pass
34 pass
35
35
36
36
37 # By default `__getattr__` is not explicitly implemented on most objects
37 # By default `__getattr__` is not explicitly implemented on most objects
38 MayHaveGetattr = Union[HasGetAttr, DoesNotHaveGetAttr]
38 MayHaveGetattr = Union[HasGetAttr, DoesNotHaveGetAttr]
39
39
40
40
41 def unbind_method(func: Callable) -> Union[Callable, None]:
41 def unbind_method(func: Callable) -> Union[Callable, None]:
42 """Get unbound method for given bound method.
42 """Get unbound method for given bound method.
43
43
44 Returns None if cannot get unbound method."""
44 Returns None if cannot get unbound method."""
45 owner = getattr(func, "__self__", None)
45 owner = getattr(func, "__self__", None)
46 owner_class = type(owner)
46 owner_class = type(owner)
47 name = getattr(func, "__name__", None)
47 name = getattr(func, "__name__", None)
48 instance_dict_overrides = getattr(owner, "__dict__", None)
48 instance_dict_overrides = getattr(owner, "__dict__", None)
49 if (
49 if (
50 owner is not None
50 owner is not None
51 and name
51 and name
52 and (
52 and (
53 not instance_dict_overrides
53 not instance_dict_overrides
54 or (instance_dict_overrides and name not in instance_dict_overrides)
54 or (instance_dict_overrides and name not in instance_dict_overrides)
55 )
55 )
56 ):
56 ):
57 return getattr(owner_class, name)
57 return getattr(owner_class, name)
58
58
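The idea behind `unbind_method` can be shown in miniature: a bound method carries `__self__` and `__name__`, from which the function on the owning class can be recovered (`unbind` below is a simplified stand-in that skips the instance-`__dict__` override check):

```python
def unbind(func):
    # recover the class-level function behind a bound method, if any
    owner = getattr(func, "__self__", None)
    name = getattr(func, "__name__", None)
    if owner is not None and name:
        return getattr(type(owner), name, None)
    return None

bound = [].append
print(unbind(bound) is list.append)
```

This is what lets an allow-list of class-level callables (like `list.append`) match any bound method of any instance of that class.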
59
59
60 @dataclass
60 @dataclass
61 class EvaluationPolicy:
61 class EvaluationPolicy:
62 allow_locals_access: bool = False
62 allow_locals_access: bool = False
63 allow_globals_access: bool = False
63 allow_globals_access: bool = False
64 allow_item_access: bool = False
64 allow_item_access: bool = False
65 allow_attr_access: bool = False
65 allow_attr_access: bool = False
66 allow_builtins_access: bool = False
66 allow_builtins_access: bool = False
67 allow_any_calls: bool = False
67 allow_any_calls: bool = False
68 allowed_calls: Set[Callable] = field(default_factory=set)
68 allowed_calls: Set[Callable] = field(default_factory=set)
69
69
70 def can_get_item(self, value, item):
70 def can_get_item(self, value, item):
71 return self.allow_item_access
71 return self.allow_item_access
72
72
73 def can_get_attr(self, value, attr):
73 def can_get_attr(self, value, attr):
74 return self.allow_attr_access
74 return self.allow_attr_access
75
75
76 def can_call(self, func):
76 def can_call(self, func):
77 if self.allow_any_calls:
77 if self.allow_any_calls:
78 return True
78 return True
79
79
80 if func in self.allowed_calls:
80 if func in self.allowed_calls:
81 return True
81 return True
82
82
83 owner_method = unbind_method(func)
83 owner_method = unbind_method(func)
84 if owner_method and owner_method in self.allowed_calls:
84 if owner_method and owner_method in self.allowed_calls:
85 return True
85 return True
86
86
87
87
88 def has_original_dunder_external(
88 def has_original_dunder_external(
89 value,
89 value,
90 module_name,
90 module_name,
91 access_path,
91 access_path,
92 method_name,
92 method_name,
93 ):
93 ):
94 try:
94 try:
95 if module_name not in sys.modules:
95 if module_name not in sys.modules:
96 return False
96 return False
97 member_type = sys.modules[module_name]
97 member_type = sys.modules[module_name]
98 for attr in access_path:
98 for attr in access_path:
99 member_type = getattr(member_type, attr)
99 member_type = getattr(member_type, attr)
100 value_type = type(value)
100 value_type = type(value)
101 if type(value) == member_type:
101 if type(value) == member_type:
102 return True
102 return True
103 if isinstance(value, member_type):
103 if isinstance(value, member_type):
104 method = getattr(value_type, method_name, None)
104 method = getattr(value_type, method_name, None)
105 member_method = getattr(member_type, method_name, None)
105 member_method = getattr(member_type, method_name, None)
106 if member_method == method:
106 if member_method == method:
107 return True
107 return True
108 except (AttributeError, KeyError):
108 except (AttributeError, KeyError):
109 return False
109 return False
110
110
111
111
112 def has_original_dunder(
112 def has_original_dunder(
113 value, allowed_types, allowed_methods, allowed_external, method_name
113 value, allowed_types, allowed_methods, allowed_external, method_name
114 ):
114 ):
115 # note: Python ignores `__getattr__`/`__getitem__` on instances,
115 # note: Python ignores `__getattr__`/`__getitem__` on instances,
116 # we only need to check at class level
116 # we only need to check at class level
117 value_type = type(value)
117 value_type = type(value)
118
118
119 # strict type check passes β†’ no need to check method
119 # strict type check passes β†’ no need to check method
120 if value_type in allowed_types:
120 if value_type in allowed_types:
121 return True
121 return True
122
122
123 method = getattr(value_type, method_name, None)
123 method = getattr(value_type, method_name, None)
124
124
125 if not method:
125 if not method:
126 return None
126 return None
127
127
128 if method in allowed_methods:
128 if method in allowed_methods:
129 return True
129 return True
130
130
131 for module_name, *access_path in allowed_external:
131 for module_name, *access_path in allowed_external:
132 if has_original_dunder_external(value, module_name, access_path, method_name):
132 if has_original_dunder_external(value, module_name, access_path, method_name):
133 return True
133 return True
134
134
135 return False
135 return False
136
136
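The core check can be demonstrated standalone: the dunder is looked up on the *type* (Python ignores instance-level `__getitem__` when subscripting) and compared against an allow-list of original methods. `has_original_getitem` is a hypothetical reduction of `has_original_dunder` to the method-comparison step:

```python
class EvilDict(dict):
    # a subclass overriding __getitem__ must not be treated as safe
    def __getitem__(self, key):
        return "surprise"

def has_original_getitem(value, allowed_methods):
    # look up the dunder on the type, not the instance
    method = getattr(type(value), "__getitem__", None)
    return method in allowed_methods

allowed = {dict.__getitem__}
print(has_original_getitem({}, allowed))          # plain dict keeps the original
print(has_original_getitem(EvilDict(), allowed))  # override is rejected
```

A plain `dict` passes because its type still holds the original `dict.__getitem__`; the subclass fails because the lookup finds the override instead.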
137
137
138 @dataclass
138 @dataclass
139 class SelectivePolicy(EvaluationPolicy):
139 class SelectivePolicy(EvaluationPolicy):
140 allowed_getitem: Set[HasGetItem] = field(default_factory=set)
140 allowed_getitem: Set[HasGetItem] = field(default_factory=set)
141 allowed_getitem_external: Set[Tuple[str, ...]] = field(default_factory=set)
141 allowed_getitem_external: Set[Tuple[str, ...]] = field(default_factory=set)
142 allowed_getattr: Set[MayHaveGetattr] = field(default_factory=set)
142 allowed_getattr: Set[MayHaveGetattr] = field(default_factory=set)
143 allowed_getattr_external: Set[Tuple[str, ...]] = field(default_factory=set)
143 allowed_getattr_external: Set[Tuple[str, ...]] = field(default_factory=set)
144
144
145 def can_get_attr(self, value, attr):
145 def can_get_attr(self, value, attr):
146 has_original_attribute = has_original_dunder(
146 has_original_attribute = has_original_dunder(
147 value,
147 value,
148 allowed_types=self.allowed_getattr,
148 allowed_types=self.allowed_getattr,
149 allowed_methods=self._getattribute_methods,
149 allowed_methods=self._getattribute_methods,
150 allowed_external=self.allowed_getattr_external,
150 allowed_external=self.allowed_getattr_external,
151 method_name="__getattribute__",
151 method_name="__getattribute__",
152 )
152 )
153 has_original_attr = has_original_dunder(
153 has_original_attr = has_original_dunder(
154 value,
154 value,
155 allowed_types=self.allowed_getattr,
155 allowed_types=self.allowed_getattr,
156 allowed_methods=self._getattr_methods,
156 allowed_methods=self._getattr_methods,
157 allowed_external=self.allowed_getattr_external,
157 allowed_external=self.allowed_getattr_external,
158 method_name="__getattr__",
158 method_name="__getattr__",
159 )
159 )
160 # Many objects do not have `__getattr__`, this is fine
160 # Many objects do not have `__getattr__`, this is fine
161 if has_original_attr is None and has_original_attribute:
161 if has_original_attr is None and has_original_attribute:
162 return True
162 return True
163
163
164 # Accept objects without modifications to `__getattr__` and `__getattribute__`
164 # Accept objects without modifications to `__getattr__` and `__getattribute__`
165 return has_original_attr and has_original_attribute
165 return has_original_attr and has_original_attribute
166
166
167 def get_attr(self, value, attr):
167 def get_attr(self, value, attr):
168 if self.can_get_attr(value, attr):
168 if self.can_get_attr(value, attr):
169 return getattr(value, attr)
169 return getattr(value, attr)
170
170
171 def can_get_item(self, value, item):
171 def can_get_item(self, value, item):
172 """Allow accessing `__getiitem__` of allow-listed instances unless it was not modified."""
172 """Allow accessing `__getiitem__` of allow-listed instances unless it was not modified."""
173 return has_original_dunder(
173 return has_original_dunder(
174 value,
174 value,
175 allowed_types=self.allowed_getitem,
175 allowed_types=self.allowed_getitem,
176 allowed_methods=self._getitem_methods,
176 allowed_methods=self._getitem_methods,
177 allowed_external=self.allowed_getitem_external,
177 allowed_external=self.allowed_getitem_external,
178 method_name="__getitem__",
178 method_name="__getitem__",
179 )
179 )
180
180
181 @cached_property
181 @cached_property
182 def _getitem_methods(self) -> Set[Callable]:
182 def _getitem_methods(self) -> Set[Callable]:
183 return self._safe_get_methods(self.allowed_getitem, "__getitem__")
183 return self._safe_get_methods(self.allowed_getitem, "__getitem__")
184
184
185 @cached_property
185 @cached_property
186 def _getattr_methods(self) -> Set[Callable]:
186 def _getattr_methods(self) -> Set[Callable]:
187 return self._safe_get_methods(self.allowed_getattr, "__getattr__")
187 return self._safe_get_methods(self.allowed_getattr, "__getattr__")
188
188
189 @cached_property
189 @cached_property
190 def _getattribute_methods(self) -> Set[Callable]:
190 def _getattribute_methods(self) -> Set[Callable]:
191 return self._safe_get_methods(self.allowed_getattr, "__getattribute__")
191 return self._safe_get_methods(self.allowed_getattr, "__getattribute__")
192
192
193 def _safe_get_methods(self, classes, name) -> Set[Callable]:
193 def _safe_get_methods(self, classes, name) -> Set[Callable]:
194 return {
194 return {
195 method
195 method
196 for class_ in classes
196 for class_ in classes
197 for method in [getattr(class_, name, None)]
197 for method in [getattr(class_, name, None)]
198 if method
198 if method
199 }
199 }
200
200
201
201
202 class DummyNamedTuple(NamedTuple):
202 class DummyNamedTuple(NamedTuple):
203 pass
203 pass
204
204
205
205
206 class EvaluationContext(NamedTuple):
206 class EvaluationContext(NamedTuple):
207 locals_: dict
207 locals_: dict
208 globals_: dict
208 globals_: dict
209 evaluation: Literal[
209 evaluation: Literal[
210 "forbidden", "minimal", "limitted", "unsafe", "dangerous"
210 "forbidden", "minimal", "limited", "unsafe", "dangerous"
211 ] = "forbidden"
211 ] = "forbidden"
212 in_subscript: bool = False
212 in_subscript: bool = False
213
213
214
214
215 class IdentitySubscript:
215 class IdentitySubscript:
216 def __getitem__(self, key):
216 def __getitem__(self, key):
217 return key
217 return key
218
218
219
219
220 IDENTITY_SUBSCRIPT = IdentitySubscript()
220 IDENTITY_SUBSCRIPT = IdentitySubscript()
221 SUBSCRIPT_MARKER = "__SUBSCRIPT_SENTINEL__"
221 SUBSCRIPT_MARKER = "__SUBSCRIPT_SENTINEL__"
222
222
223
223
224 class GuardRejection(ValueError):
224 class GuardRejection(ValueError):
225 pass
225 pass
226
226
227
227
228 def guarded_eval(code: str, context: EvaluationContext):
228 def guarded_eval(code: str, context: EvaluationContext):
229 locals_ = context.locals_
229 locals_ = context.locals_
230
230
231 if context.evaluation == "forbidden":
231 if context.evaluation == "forbidden":
232 raise GuardRejection("Forbidden mode")
232 raise GuardRejection("Forbidden mode")
233
233
234 # note: not using `ast.literal_eval` as it does not implement
234 # note: not using `ast.literal_eval` as it does not implement
235 # getitem at all, for example it fails on simple `[0][1]`
235 # getitem at all, for example it fails on simple `[0][1]`
236
236
237 if context.in_subscript:
237 if context.in_subscript:
238 # syntactic sugar for slices (:) is only available in subscripts
238 # syntactic sugar for slices (:) is only available in subscripts
239 # so we need to trick the ast parser into thinking that we have
239 # so we need to trick the ast parser into thinking that we have
240 # a subscript, but we need to be able to later recognise that we did
240 # a subscript, but we need to be able to later recognise that we did
241 # it so we can ignore the actual __getitem__ operation
241 # it so we can ignore the actual __getitem__ operation
242 if not code:
242 if not code:
243 return tuple()
243 return tuple()
244 locals_ = locals_.copy()
244 locals_ = locals_.copy()
245 locals_[SUBSCRIPT_MARKER] = IDENTITY_SUBSCRIPT
245 locals_[SUBSCRIPT_MARKER] = IDENTITY_SUBSCRIPT
246 code = SUBSCRIPT_MARKER + "[" + code + "]"
246 code = SUBSCRIPT_MARKER + "[" + code + "]"
247 context = EvaluationContext(**{**context._asdict(), **{"locals_": locals_}})
247 context = EvaluationContext(**{**context._asdict(), **{"locals_": locals_}})
248
248
249 if context.evaluation == "dangerous":
249 if context.evaluation == "dangerous":
250 return eval(code, context.globals_, context.locals_)
250 return eval(code, context.globals_, context.locals_)
251
251
252 expression = ast.parse(code, mode="eval")
252 expression = ast.parse(code, mode="eval")
253
253
254 return eval_node(expression, context)
254 return eval_node(expression, context)
255
255
256
256
257 def eval_node(node: Union[ast.AST, None], context: EvaluationContext):
257 def eval_node(node: Union[ast.AST, None], context: EvaluationContext):
258 """
258 """
259 Evaluate AST node in provided context.
259 Evaluate AST node in provided context.
260
260
261 Applies evaluation restrictions defined in the context.
261 Applies evaluation restrictions defined in the context.
262
262
263 Currently does not support evaluation of functions with arguments.
263 Currently does not support evaluation of functions with keyword arguments.
264
264
265 Does not evaluate actions which always have side effects:
265 Does not evaluate actions which always have side effects:
266 - class definitions (``class sth: ...``)
266 - class definitions (``class sth: ...``)
267 - function definitions (``def sth: ...``)
267 - function definitions (``def sth: ...``)
268 - variable assignments (``x = 1``)
268 - variable assignments (``x = 1``)
269 - augumented assignments (``x += 1``)
269 - augmented assignments (``x += 1``)
270 - deletions (``del x``)
270 - deletions (``del x``)
271
271
272 Does not evaluate operations which do not return values:
272 Does not evaluate operations which do not return values:
273 - assertions (``assert x``)
273 - assertions (``assert x``)
274 - pass (``pass``)
274 - pass (``pass``)
275 - imports (``import x``)
275 - imports (``import x``)
276 - control flow
276 - control flow
277 - conditionals (``if x:``) except for terenary IfExp (``a if x else b``)
277 - conditionals (``if x:``) except for ternary IfExp (``a if x else b``)
278 - loops (``for`` and ``while``)
278 - loops (``for`` and ``while``)
279 - exception handling
279 - exception handling
280
280
281 The purpose of this function is to guard against unwanted side-effects;
281 The purpose of this function is to guard against unwanted side-effects;
282 it does not give guarantees on protection from malicious code execution.
282 it does not give guarantees on protection from malicious code execution.
283 """
283 """
284 policy = EVALUATION_POLICIES[context.evaluation]
284 policy = EVALUATION_POLICIES[context.evaluation]
285 if node is None:
285 if node is None:
286 return None
286 return None
287 if isinstance(node, ast.Expression):
287 if isinstance(node, ast.Expression):
288 return eval_node(node.body, context)
288 return eval_node(node.body, context)
289 if isinstance(node, ast.BinOp):
289 if isinstance(node, ast.BinOp):
290 # TODO: add guards
290 # TODO: add guards
291 left = eval_node(node.left, context)
291 left = eval_node(node.left, context)
292 right = eval_node(node.right, context)
292 right = eval_node(node.right, context)
293 if isinstance(node.op, ast.Add):
293 if isinstance(node.op, ast.Add):
294 return left + right
294 return left + right
295 if isinstance(node.op, ast.Sub):
295 if isinstance(node.op, ast.Sub):
296 return left - right
296 return left - right
297 if isinstance(node.op, ast.Mult):
297 if isinstance(node.op, ast.Mult):
298 return left * right
298 return left * right
299 if isinstance(node.op, ast.Div):
299 if isinstance(node.op, ast.Div):
300 return left / right
300 return left / right
301 if isinstance(node.op, ast.FloorDiv):
301 if isinstance(node.op, ast.FloorDiv):
302 return left // right
302 return left // right
303 if isinstance(node.op, ast.Mod):
303 if isinstance(node.op, ast.Mod):
304 return left % right
304 return left % right
305 if isinstance(node.op, ast.Pow):
305 if isinstance(node.op, ast.Pow):
306 return left**right
306 return left**right
307 if isinstance(node.op, ast.LShift):
307 if isinstance(node.op, ast.LShift):
308 return left << right
308 return left << right
309 if isinstance(node.op, ast.RShift):
309 if isinstance(node.op, ast.RShift):
310 return left >> right
310 return left >> right
311 if isinstance(node.op, ast.BitOr):
311 if isinstance(node.op, ast.BitOr):
312 return left | right
312 return left | right
313 if isinstance(node.op, ast.BitXor):
313 if isinstance(node.op, ast.BitXor):
314 return left ^ right
314 return left ^ right
315 if isinstance(node.op, ast.BitAnd):
315 if isinstance(node.op, ast.BitAnd):
316 return left & right
316 return left & right
317 if isinstance(node.op, ast.MatMult):
317 if isinstance(node.op, ast.MatMult):
318 return left @ right
318 return left @ right
319 if isinstance(node, ast.Constant):
319 if isinstance(node, ast.Constant):
320 return node.value
320 return node.value
321 if isinstance(node, ast.Index):
321 if isinstance(node, ast.Index):
322 return eval_node(node.value, context)
322 return eval_node(node.value, context)
323 if isinstance(node, ast.Tuple):
323 if isinstance(node, ast.Tuple):
324 return tuple(eval_node(e, context) for e in node.elts)
324 return tuple(eval_node(e, context) for e in node.elts)
325 if isinstance(node, ast.List):
325 if isinstance(node, ast.List):
326 return [eval_node(e, context) for e in node.elts]
326 return [eval_node(e, context) for e in node.elts]
327 if isinstance(node, ast.Set):
327 if isinstance(node, ast.Set):
328 return {eval_node(e, context) for e in node.elts}
328 return {eval_node(e, context) for e in node.elts}
329 if isinstance(node, ast.Dict):
329 if isinstance(node, ast.Dict):
330 return dict(
330 return dict(
331 zip(
331 zip(
332 [eval_node(k, context) for k in node.keys],
332 [eval_node(k, context) for k in node.keys],
333 [eval_node(v, context) for v in node.values],
333 [eval_node(v, context) for v in node.values],
334 )
334 )
335 )
335 )
336 if isinstance(node, ast.Slice):
336 if isinstance(node, ast.Slice):
337 return slice(
337 return slice(
338 eval_node(node.lower, context),
338 eval_node(node.lower, context),
339 eval_node(node.upper, context),
339 eval_node(node.upper, context),
340 eval_node(node.step, context),
340 eval_node(node.step, context),
341 )
341 )
342 if isinstance(node, ast.ExtSlice):
342 if isinstance(node, ast.ExtSlice):
343 return tuple([eval_node(dim, context) for dim in node.dims])
343 return tuple([eval_node(dim, context) for dim in node.dims])
344 if isinstance(node, ast.UnaryOp):
344 if isinstance(node, ast.UnaryOp):
345 # TODO: add guards
345 # TODO: add guards
346 value = eval_node(node.operand, context)
346 value = eval_node(node.operand, context)
347 if isinstance(node.op, ast.USub):
347 if isinstance(node.op, ast.USub):
348 return -value
348 return -value
349 if isinstance(node.op, ast.UAdd):
349 if isinstance(node.op, ast.UAdd):
350 return +value
350 return +value
351 if isinstance(node.op, ast.Invert):
351 if isinstance(node.op, ast.Invert):
352 return ~value
352 return ~value
353 if isinstance(node.op, ast.Not):
353 if isinstance(node.op, ast.Not):
354 return not value
354 return not value
355 raise ValueError("Unhandled unary operation:", node.op)
355 raise ValueError("Unhandled unary operation:", node.op)
356 if isinstance(node, ast.Subscript):
356 if isinstance(node, ast.Subscript):
357 value = eval_node(node.value, context)
357 value = eval_node(node.value, context)
358 slice_ = eval_node(node.slice, context)
358 slice_ = eval_node(node.slice, context)
359 if policy.can_get_item(value, slice_):
359 if policy.can_get_item(value, slice_):
360 return value[slice_]
360 return value[slice_]
361 raise GuardRejection(
361 raise GuardRejection(
362 "Subscript access (`__getitem__`) for",
362 "Subscript access (`__getitem__`) for",
363 type(value), # not joined to avoid calling `repr`
363 type(value), # not joined to avoid calling `repr`
364 f" not allowed in {context.evaluation} mode",
364 f" not allowed in {context.evaluation} mode",
365 )
365 )
366 if isinstance(node, ast.Name):
366 if isinstance(node, ast.Name):
367 if policy.allow_locals_access and node.id in context.locals_:
367 if policy.allow_locals_access and node.id in context.locals_:
368 return context.locals_[node.id]
368 return context.locals_[node.id]
369 if policy.allow_globals_access and node.id in context.globals_:
369 if policy.allow_globals_access and node.id in context.globals_:
370 return context.globals_[node.id]
370 return context.globals_[node.id]
371 if policy.allow_builtins_access and node.id in __builtins__:
371 if policy.allow_builtins_access and node.id in __builtins__:
372 return __builtins__[node.id]
372 return __builtins__[node.id]
373 if not policy.allow_globals_access and not policy.allow_locals_access:
373 if not policy.allow_globals_access and not policy.allow_locals_access:
374 raise GuardRejection(
374 raise GuardRejection(
375 f"Namespace access not allowed in {context.evaluation} mode"
375 f"Namespace access not allowed in {context.evaluation} mode"
376 )
376 )
377 else:
377 else:
378 raise NameError(f"{node.id} not found in locals nor globals")
378 raise NameError(f"{node.id} not found in locals nor globals")
379 if isinstance(node, ast.Attribute):
379 if isinstance(node, ast.Attribute):
380 value = eval_node(node.value, context)
380 value = eval_node(node.value, context)
381 if policy.can_get_attr(value, node.attr):
381 if policy.can_get_attr(value, node.attr):
382 return getattr(value, node.attr)
382 return getattr(value, node.attr)
383 raise GuardRejection(
383 raise GuardRejection(
384 "Attribute access (`__getattr__`) for",
384 "Attribute access (`__getattr__`) for",
385 type(value), # not joined to avoid calling `repr`
385 type(value), # not joined to avoid calling `repr`
386 f"not allowed in {context.evaluation} mode",
386 f"not allowed in {context.evaluation} mode",
387 )
387 )
388 if isinstance(node, ast.IfExp):
388 if isinstance(node, ast.IfExp):
389 test = eval_node(node.test, context)
389 test = eval_node(node.test, context)
390 if test:
390 if test:
391 return eval_node(node.body, context)
391 return eval_node(node.body, context)
392 else:
392 else:
393 return eval_node(node.orelse, context)
393 return eval_node(node.orelse, context)
394 if isinstance(node, ast.Call):
394 if isinstance(node, ast.Call):
395 func = eval_node(node.func, context)
395 func = eval_node(node.func, context)
396 print(node.keywords)
397 if policy.can_call(func) and not node.keywords:
396 if policy.can_call(func) and not node.keywords:
398 args = [eval_node(arg, context) for arg in node.args]
397 args = [eval_node(arg, context) for arg in node.args]
399 return func(*args)
398 return func(*args)
400 raise GuardRejection(
399 raise GuardRejection(
401 "Call for",
400 "Call for",
402 func, # not joined to avoid calling `repr`
401 func, # not joined to avoid calling `repr`
403 f"not allowed in {context.evaluation} mode",
402 f"not allowed in {context.evaluation} mode",
404 )
403 )
405 raise ValueError("Unhandled node", node)
404 raise ValueError("Unhandled node", node)
406
405
407
406
408 SUPPORTED_EXTERNAL_GETITEM = {
407 SUPPORTED_EXTERNAL_GETITEM = {
409 ("pandas", "core", "indexing", "_iLocIndexer"),
408 ("pandas", "core", "indexing", "_iLocIndexer"),
410 ("pandas", "core", "indexing", "_LocIndexer"),
409 ("pandas", "core", "indexing", "_LocIndexer"),
411 ("pandas", "DataFrame"),
410 ("pandas", "DataFrame"),
412 ("pandas", "Series"),
411 ("pandas", "Series"),
413 ("numpy", "ndarray"),
412 ("numpy", "ndarray"),
414 ("numpy", "void"),
413 ("numpy", "void"),
415 }
414 }
416
415
417 BUILTIN_GETITEM = {
416 BUILTIN_GETITEM = {
418 dict,
417 dict,
419 str,
418 str,
420 bytes,
419 bytes,
421 list,
420 list,
422 tuple,
421 tuple,
423 collections.defaultdict,
422 collections.defaultdict,
424 collections.deque,
423 collections.deque,
425 collections.OrderedDict,
424 collections.OrderedDict,
426 collections.ChainMap,
425 collections.ChainMap,
427 collections.UserDict,
426 collections.UserDict,
428 collections.UserList,
427 collections.UserList,
429 collections.UserString,
428 collections.UserString,
430 DummyNamedTuple,
429 DummyNamedTuple,
431 IdentitySubscript,
430 IdentitySubscript,
432 }
431 }
433
432
434
433
435 def _list_methods(cls, source=None):
434 def _list_methods(cls, source=None):
436 """For use on immutable objects or with methods returning a copy"""
435 """For use on immutable objects or with methods returning a copy"""
437 return [getattr(cls, k) for k in (source if source else dir(cls))]
436 return [getattr(cls, k) for k in (source if source else dir(cls))]
438
437
439
438
440 dict_non_mutating_methods = ("copy", "keys", "values", "items")
439 dict_non_mutating_methods = ("copy", "keys", "values", "items")
441 list_non_mutating_methods = ("copy", "index", "count")
440 list_non_mutating_methods = ("copy", "index", "count")
442 set_non_mutating_methods = set(dir(set)) & set(dir(frozenset))
441 set_non_mutating_methods = set(dir(set)) & set(dir(frozenset))
443
442
444
443
445 dict_keys = type({}.keys())
444 dict_keys = type({}.keys())
446 method_descriptor = type(list.copy)
445 method_descriptor = type(list.copy)
447
446
448 ALLOWED_CALLS = {
447 ALLOWED_CALLS = {
449 bytes,
448 bytes,
450 *_list_methods(bytes),
449 *_list_methods(bytes),
451 dict,
450 dict,
452 *_list_methods(dict, dict_non_mutating_methods),
451 *_list_methods(dict, dict_non_mutating_methods),
453 dict_keys.isdisjoint,
452 dict_keys.isdisjoint,
454 list,
453 list,
455 *_list_methods(list, list_non_mutating_methods),
454 *_list_methods(list, list_non_mutating_methods),
456 set,
455 set,
457 *_list_methods(set, set_non_mutating_methods),
456 *_list_methods(set, set_non_mutating_methods),
458 frozenset,
457 frozenset,
459 *_list_methods(frozenset),
458 *_list_methods(frozenset),
460 range,
459 range,
461 str,
460 str,
462 *_list_methods(str),
461 *_list_methods(str),
463 tuple,
462 tuple,
464 *_list_methods(tuple),
463 *_list_methods(tuple),
465 collections.deque,
464 collections.deque,
466 *_list_methods(collections.deque, list_non_mutating_methods),
465 *_list_methods(collections.deque, list_non_mutating_methods),
467 collections.defaultdict,
466 collections.defaultdict,
468 *_list_methods(collections.defaultdict, dict_non_mutating_methods),
467 *_list_methods(collections.defaultdict, dict_non_mutating_methods),
469 collections.OrderedDict,
468 collections.OrderedDict,
470 *_list_methods(collections.OrderedDict, dict_non_mutating_methods),
469 *_list_methods(collections.OrderedDict, dict_non_mutating_methods),
471 collections.UserDict,
470 collections.UserDict,
472 *_list_methods(collections.UserDict, dict_non_mutating_methods),
471 *_list_methods(collections.UserDict, dict_non_mutating_methods),
473 collections.UserList,
472 collections.UserList,
474 *_list_methods(collections.UserList, list_non_mutating_methods),
473 *_list_methods(collections.UserList, list_non_mutating_methods),
475 collections.UserString,
474 collections.UserString,
476 *_list_methods(collections.UserString, dir(str)),
475 *_list_methods(collections.UserString, dir(str)),
477 collections.Counter,
476 collections.Counter,
478 *_list_methods(collections.Counter, dict_non_mutating_methods),
477 *_list_methods(collections.Counter, dict_non_mutating_methods),
479 collections.Counter.elements,
478 collections.Counter.elements,
480 collections.Counter.most_common,
479 collections.Counter.most_common,
481 }
480 }
482
481
483 EVALUATION_POLICIES = {
482 EVALUATION_POLICIES = {
484 "minimal": EvaluationPolicy(
483 "minimal": EvaluationPolicy(
485 allow_builtins_access=True,
484 allow_builtins_access=True,
486 allow_locals_access=False,
485 allow_locals_access=False,
487 allow_globals_access=False,
486 allow_globals_access=False,
488 allow_item_access=False,
487 allow_item_access=False,
489 allow_attr_access=False,
488 allow_attr_access=False,
490 allowed_calls=set(),
489 allowed_calls=set(),
491 allow_any_calls=False,
490 allow_any_calls=False,
492 ),
491 ),
493 "limitted": SelectivePolicy(
492 "limited": SelectivePolicy(
494 # TODO:
493 # TODO:
495 # - should reject binary and unary operations if custom methods would be dispatched
494 # - should reject binary and unary operations if custom methods would be dispatched
496 allowed_getitem=BUILTIN_GETITEM,
495 allowed_getitem=BUILTIN_GETITEM,
497 allowed_getitem_external=SUPPORTED_EXTERNAL_GETITEM,
496 allowed_getitem_external=SUPPORTED_EXTERNAL_GETITEM,
498 allowed_getattr={
497 allowed_getattr={
499 *BUILTIN_GETITEM,
498 *BUILTIN_GETITEM,
500 set,
499 set,
501 frozenset,
500 frozenset,
502 object,
501 object,
503 type, # `type` handles a lot of generic cases, e.g. numbers as in `int.real`.
502 type, # `type` handles a lot of generic cases, e.g. numbers as in `int.real`.
504 dict_keys,
503 dict_keys,
505 method_descriptor,
504 method_descriptor,
506 },
505 },
507 allowed_getattr_external={
506 allowed_getattr_external={
508 # pandas Series/DataFrame implement a custom `__getattr__`
507 # pandas Series/DataFrame implement a custom `__getattr__`
509 ("pandas", "DataFrame"),
508 ("pandas", "DataFrame"),
510 ("pandas", "Series"),
509 ("pandas", "Series"),
511 },
510 },
512 allow_builtins_access=True,
511 allow_builtins_access=True,
513 allow_locals_access=True,
512 allow_locals_access=True,
514 allow_globals_access=True,
513 allow_globals_access=True,
515 allowed_calls=ALLOWED_CALLS,
514 allowed_calls=ALLOWED_CALLS,
516 ),
515 ),
517 "unsafe": EvaluationPolicy(
516 "unsafe": EvaluationPolicy(
518 allow_builtins_access=True,
517 allow_builtins_access=True,
519 allow_locals_access=True,
518 allow_locals_access=True,
520 allow_globals_access=True,
519 allow_globals_access=True,
521 allow_attr_access=True,
520 allow_attr_access=True,
522 allow_item_access=True,
521 allow_item_access=True,
523 allow_any_calls=True,
522 allow_any_calls=True,
524 ),
523 ),
525 }
524 }
@@ -1,1740 +1,1740 b''
1 # encoding: utf-8
1 # encoding: utf-8
2 """Tests for the IPython tab-completion machinery."""
2 """Tests for the IPython tab-completion machinery."""
3
3
4 # Copyright (c) IPython Development Team.
4 # Copyright (c) IPython Development Team.
5 # Distributed under the terms of the Modified BSD License.
5 # Distributed under the terms of the Modified BSD License.
6
6
7 import os
7 import os
8 import pytest
8 import pytest
9 import sys
9 import sys
10 import textwrap
10 import textwrap
11 import unittest
11 import unittest
12
12
13 from contextlib import contextmanager
13 from contextlib import contextmanager
14
14
15 from traitlets.config.loader import Config
15 from traitlets.config.loader import Config
16 from IPython import get_ipython
16 from IPython import get_ipython
17 from IPython.core import completer
17 from IPython.core import completer
18 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
18 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
19 from IPython.utils.generics import complete_object
19 from IPython.utils.generics import complete_object
20 from IPython.testing import decorators as dec
20 from IPython.testing import decorators as dec
21
21
22 from IPython.core.completer import (
22 from IPython.core.completer import (
23 Completion,
23 Completion,
24 provisionalcompleter,
24 provisionalcompleter,
25 match_dict_keys,
25 match_dict_keys,
26 _deduplicate_completions,
26 _deduplicate_completions,
27 _match_number_in_dict_key_prefix,
27 _match_number_in_dict_key_prefix,
28 completion_matcher,
28 completion_matcher,
29 SimpleCompletion,
29 SimpleCompletion,
30 CompletionContext,
30 CompletionContext,
31 )
31 )
32
32
33 # -----------------------------------------------------------------------------
33 # -----------------------------------------------------------------------------
34 # Test functions
34 # Test functions
35 # -----------------------------------------------------------------------------
35 # -----------------------------------------------------------------------------
36
36
37 def recompute_unicode_ranges():
37 def recompute_unicode_ranges():
38 """
38 """
39 utility to recompute the largest unicode range without any characters
39 utility to recompute the largest unicode range without any characters
40
40
41 use to recompute the gap in the global _UNICODE_RANGES of completer.py
41 use to recompute the gap in the global _UNICODE_RANGES of completer.py
42 """
42 """
43 import itertools
43 import itertools
44 import unicodedata
44 import unicodedata
45 valid = []
45 valid = []
46 for c in range(0,0x10FFFF + 1):
46 for c in range(0,0x10FFFF + 1):
47 try:
47 try:
48 unicodedata.name(chr(c))
48 unicodedata.name(chr(c))
49 except ValueError:
49 except ValueError:
50 continue
50 continue
51 valid.append(c)
51 valid.append(c)
52
52
53 def ranges(i):
53 def ranges(i):
54 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
54 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
55 b = list(b)
55 b = list(b)
56 yield b[0][1], b[-1][1]
56 yield b[0][1], b[-1][1]
57
57
58 rg = list(ranges(valid))
58 rg = list(ranges(valid))
59 lens = []
59 lens = []
60 gap_lens = []
60 gap_lens = []
61 pstart, pstop = 0,0
61 pstart, pstop = 0,0
62 for start, stop in rg:
62 for start, stop in rg:
63 lens.append(stop-start)
63 lens.append(stop-start)
64 gap_lens.append((start - pstop, hex(pstop), hex(start), f'{round((start - pstop)/0xe01f0*100)}%'))
64 gap_lens.append((start - pstop, hex(pstop), hex(start), f'{round((start - pstop)/0xe01f0*100)}%'))
65 pstart, pstop = start, stop
65 pstart, pstop = start, stop
66
66
67 return sorted(gap_lens)[-1]
67 return sorted(gap_lens)[-1]
68
68
69
69
70
70
71 def test_unicode_range():
71 def test_unicode_range():
72 """
72 """
73 Test that the ranges we test for unicode names give the same number of
73 Test that the ranges we test for unicode names give the same number of
74 results as testing the full length.
74 results as testing the full length.
75 """
75 """
76 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
76 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
77
77
78 expected_list = _unicode_name_compute([(0, 0x110000)])
78 expected_list = _unicode_name_compute([(0, 0x110000)])
79 test = _unicode_name_compute(_UNICODE_RANGES)
79 test = _unicode_name_compute(_UNICODE_RANGES)
80 len_exp = len(expected_list)
80 len_exp = len(expected_list)
81 len_test = len(test)
81 len_test = len(test)
82
82
83 # do not inline the len() or on error pytest will try to print the 130 000 +
83 # do not inline the len() or on error pytest will try to print the 130 000 +
84 # elements.
84 # elements.
85 message = None
85 message = None
86 if len_exp != len_test or len_exp > 131808:
86 if len_exp != len_test or len_exp > 131808:
87 size, start, stop, prct = recompute_unicode_ranges()
87 size, start, stop, prct = recompute_unicode_ranges()
88 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
88 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
89 likely due to a new release of Python. We've found that the biggest gap
89 likely due to a new release of Python. We've found that the biggest gap
90 in unicode characters has been reduced in size to {size} characters
90 in unicode characters has been reduced in size to {size} characters
91 ({prct}), from {start} to {stop}. In completer.py you likely need to update to
91 ({prct}), from {start} to {stop}. In completer.py you likely need to update to
92
92
93 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
93 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
94
94
95 And update the assertion below to use
95 And update the assertion below to use
96
96
97 len_exp <= {len_exp}
97 len_exp <= {len_exp}
98 """
98 """
99 assert len_exp == len_test, message
99 assert len_exp == len_test, message
100
100
101 # fail if new unicode symbols have been added.
101 # fail if new unicode symbols have been added.
102 assert len_exp <= 138552, message
102 assert len_exp <= 138552, message
103
103
104
104
105 @contextmanager
105 @contextmanager
106 def greedy_completion():
106 def greedy_completion():
107 ip = get_ipython()
107 ip = get_ipython()
108 greedy_original = ip.Completer.greedy
108 greedy_original = ip.Completer.greedy
109 try:
109 try:
110 ip.Completer.greedy = True
110 ip.Completer.greedy = True
111 yield
111 yield
112 finally:
112 finally:
113 ip.Completer.greedy = greedy_original
113 ip.Completer.greedy = greedy_original
114
114
115
115
116 @contextmanager
116 @contextmanager
117 def evaluation_level(evaluation: str):
117 def evaluation_level(evaluation: str):
118 ip = get_ipython()
118 ip = get_ipython()
119 evaluation_original = ip.Completer.evaluation
119 evaluation_original = ip.Completer.evaluation
120 try:
120 try:
121 ip.Completer.evaluation = evaluation
121 ip.Completer.evaluation = evaluation
122 yield
122 yield
123 finally:
123 finally:
124 ip.Completer.evaluation = evaluation_original
124 ip.Completer.evaluation = evaluation_original
125
125
126
126
127 @contextmanager
127 @contextmanager
128 def custom_matchers(matchers):
128 def custom_matchers(matchers):
129 ip = get_ipython()
129 ip = get_ipython()
130 try:
130 try:
131 ip.Completer.custom_matchers.extend(matchers)
131 ip.Completer.custom_matchers.extend(matchers)
132 yield
132 yield
133 finally:
133 finally:
134 ip.Completer.custom_matchers.clear()
134 ip.Completer.custom_matchers.clear()
135
135
136
136
137 def test_protect_filename():
137 def test_protect_filename():
138 if sys.platform == "win32":
138 if sys.platform == "win32":
139 pairs = [
139 pairs = [
140 ("abc", "abc"),
140 ("abc", "abc"),
141 (" abc", '" abc"'),
141 (" abc", '" abc"'),
142 ("a bc", '"a bc"'),
142 ("a bc", '"a bc"'),
143 ("a bc", '"a bc"'),
143 ("a bc", '"a bc"'),
144 (" bc", '" bc"'),
144 (" bc", '" bc"'),
145 ]
145 ]
146 else:
146 else:
147 pairs = [
147 pairs = [
148 ("abc", "abc"),
148 ("abc", "abc"),
149 (" abc", r"\ abc"),
149 (" abc", r"\ abc"),
150 ("a bc", r"a\ bc"),
150 ("a bc", r"a\ bc"),
151 ("a bc", r"a\ \ bc"),
151 ("a bc", r"a\ \ bc"),
152 (" bc", r"\ \ bc"),
152 (" bc", r"\ \ bc"),
153 # On posix, we also protect parens and other special characters.
153 # On posix, we also protect parens and other special characters.
154 ("a(bc", r"a\(bc"),
154 ("a(bc", r"a\(bc"),
155 ("a)bc", r"a\)bc"),
155 ("a)bc", r"a\)bc"),
156 ("a( )bc", r"a\(\ \)bc"),
156 ("a( )bc", r"a\(\ \)bc"),
157 ("a[1]bc", r"a\[1\]bc"),
157 ("a[1]bc", r"a\[1\]bc"),
158 ("a{1}bc", r"a\{1\}bc"),
158 ("a{1}bc", r"a\{1\}bc"),
159 ("a#bc", r"a\#bc"),
159 ("a#bc", r"a\#bc"),
160 ("a?bc", r"a\?bc"),
160 ("a?bc", r"a\?bc"),
161 ("a=bc", r"a\=bc"),
161 ("a=bc", r"a\=bc"),
162 ("a\\bc", r"a\\bc"),
162 ("a\\bc", r"a\\bc"),
163 ("a|bc", r"a\|bc"),
163 ("a|bc", r"a\|bc"),
164 ("a;bc", r"a\;bc"),
164 ("a;bc", r"a\;bc"),
165 ("a:bc", r"a\:bc"),
165 ("a:bc", r"a\:bc"),
166 ("a'bc", r"a\'bc"),
166 ("a'bc", r"a\'bc"),
167 ("a*bc", r"a\*bc"),
167 ("a*bc", r"a\*bc"),
168 ('a"bc', r"a\"bc"),
168 ('a"bc', r"a\"bc"),
169 ("a^bc", r"a\^bc"),
169 ("a^bc", r"a\^bc"),
170 ("a&bc", r"a\&bc"),
170 ("a&bc", r"a\&bc"),
171 ]
171 ]
    # run the actual tests
    for s1, s2 in pairs:
        s1p = completer.protect_filename(s1)
        assert s1p == s2


def check_line_split(splitter, test_specs):
    for part1, part2, split in test_specs:
        cursor_pos = len(part1)
        line = part1 + part2
        out = splitter.split_line(line, cursor_pos)
        assert out == split


def test_line_split():
    """Basic line splitter test with default specs."""
    sp = completer.CompletionSplitter()
    # The format of the test specs is: part1, part2, expected answer. Parts 1
    # and 2 are joined into the 'line' sent to the splitter, as if the cursor
    # was at the end of part1. So an empty part2 represents someone hitting
    # tab at the end of the line, the most common case.
    t = [
        ("run some/scrip", "", "some/scrip"),
        ("run scripts/er", "ror.py foo", "scripts/er"),
        ("echo $HOM", "", "HOM"),
        ("print sys.pa", "", "sys.pa"),
        ("print(sys.pa", "", "sys.pa"),
        ("execfile('scripts/er", "", "scripts/er"),
        ("a[x.", "", "x."),
        ("a[x.", "y", "x."),
        ('cd "some_file/', "", "some_file/"),
    ]
    check_line_split(sp, t)
    # Ensure splitting works OK with unicode by re-running the tests with
    # all inputs turned into unicode
    check_line_split(sp, [map(str, p) for p in t])


class NamedInstanceClass:
    instances = {}

    def __init__(self, name):
        self.instances[name] = self

    @classmethod
    def _ipython_key_completions_(cls):
        return cls.instances.keys()


class KeyCompletable:
    def __init__(self, things=()):
        self.things = things

    def _ipython_key_completions_(self):
        return list(self.things)


class TestCompleter(unittest.TestCase):
    def setUp(self):
        """
        We want to silence all PendingDeprecationWarning when testing the completer
        """
        self._assertwarns = self.assertWarns(PendingDeprecationWarning)
        self._assertwarns.__enter__()

    def tearDown(self):
        try:
            self._assertwarns.__exit__(None, None, None)
        except AssertionError:
            pass

    def test_custom_completion_error(self):
        """Test that errors from custom attribute completers are silenced."""
        ip = get_ipython()

        class A:
            pass

        ip.user_ns["x"] = A()

        @complete_object.register(A)
        def complete_A(a, existing_completions):
            raise TypeError("this should be silenced")

        ip.complete("x.")

    def test_custom_completion_ordering(self):
        """Test the ordering of completion results."""
        ip = get_ipython()

        _, matches = ip.complete('in')
        assert matches.index('input') < matches.index('int')

        def complete_example(a):
            return ['example2', 'example1']

        ip.Completer.custom_completers.add_re('ex*', complete_example)
        _, matches = ip.complete('ex')
        assert matches.index('example2') < matches.index('example1')

    def test_unicode_completions(self):
        ip = get_ipython()
        # Some strings that trigger different types of completion. Check them both
        # in str and unicode forms
        s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
        for t in s + list(map(str, s)):
            # We don't need to check exact completion values (they may change
            # depending on the state of the namespace), but at least no exceptions
            # should be thrown and the return value should be a pair of text, list
            # values.
            text, matches = ip.complete(t)
            self.assertIsInstance(text, str)
            self.assertIsInstance(matches, list)

    def test_latex_completions(self):
        from IPython.core.latex_symbols import latex_symbols
        import random

        ip = get_ipython()
        # Test some random unicode symbols
        keys = random.sample(sorted(latex_symbols), 10)
        for k in keys:
            text, matches = ip.complete(k)
            self.assertEqual(text, k)
            self.assertEqual(matches, [latex_symbols[k]])
        # Test a more complex line
        text, matches = ip.complete("print(\\alpha")
        self.assertEqual(text, "\\alpha")
        self.assertEqual(matches[0], latex_symbols["\\alpha"])
        # Test multiple matching latex symbols
        text, matches = ip.complete("\\al")
        self.assertIn("\\alpha", matches)
        self.assertIn("\\aleph", matches)

    def test_latex_no_results(self):
        """
        Forward latex should really return nothing in either field if nothing is found.
        """
        ip = get_ipython()
        text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
        self.assertEqual(text, "")
        self.assertEqual(matches, ())

    def test_back_latex_completion(self):
        ip = get_ipython()

        # do not return more than one match for \beta, only the latex one.
        name, matches = ip.complete("\\β")
        self.assertEqual(matches, ["\\beta"])

    def test_back_unicode_completion(self):
        ip = get_ipython()

        name, matches = ip.complete("\\Ⅴ")
        self.assertEqual(matches, ["\\ROMAN NUMERAL FIVE"])

    def test_forward_unicode_completion(self):
        ip = get_ipython()

        name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
        self.assertEqual(matches, ["Ⅴ"])  # This is not a V
        self.assertEqual(matches, ["\u2164"])  # same as above but explicit.

    def test_delim_setting(self):
        sp = completer.CompletionSplitter()
        sp.delims = " "
        self.assertEqual(sp.delims, " ")
        self.assertEqual(sp._delim_expr, r"[\ ]")

    def test_spaces(self):
        """Test with only spaces as split chars."""
        sp = completer.CompletionSplitter()
        sp.delims = " "
        t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
        check_line_split(sp, t)

    def test_has_open_quotes1(self):
        for s in ["'", "'''", "'hi' '"]:
            self.assertEqual(completer.has_open_quotes(s), "'")

    def test_has_open_quotes2(self):
        for s in ['"', '"""', '"hi" "']:
            self.assertEqual(completer.has_open_quotes(s), '"')

    def test_has_open_quotes3(self):
        for s in ["''", "''' '''", "'hi' 'ipython'"]:
            self.assertFalse(completer.has_open_quotes(s))

    def test_has_open_quotes4(self):
        for s in ['""', '""" """', '"hi" "ipython"']:
            self.assertFalse(completer.has_open_quotes(s))

    @pytest.mark.xfail(
        sys.platform == "win32", reason="abspath completions fail on Windows"
    )
    def test_abspath_file_completions(self):
        ip = get_ipython()
        with TemporaryDirectory() as tmpdir:
            prefix = os.path.join(tmpdir, "foo")
            suffixes = ["1", "2"]
            names = [prefix + s for s in suffixes]
            for n in names:
                open(n, "w", encoding="utf-8").close()

            # Check simple completion
            c = ip.complete(prefix)[1]
            self.assertEqual(c, names)

            # Now check with a function call
            cmd = 'a = f("%s' % prefix
            c = ip.complete(prefix, cmd)[1]
            comp = [prefix + s for s in suffixes]
            self.assertEqual(c, comp)

    def test_local_file_completions(self):
        ip = get_ipython()
        with TemporaryWorkingDirectory():
            prefix = "./foo"
            suffixes = ["1", "2"]
            names = [prefix + s for s in suffixes]
            for n in names:
                open(n, "w", encoding="utf-8").close()

            # Check simple completion
            c = ip.complete(prefix)[1]
            self.assertEqual(c, names)

            # Now check with a function call
            cmd = 'a = f("%s' % prefix
            c = ip.complete(prefix, cmd)[1]
            comp = {prefix + s for s in suffixes}
            self.assertTrue(comp.issubset(set(c)))

    def test_quoted_file_completions(self):
        ip = get_ipython()

        def _(text):
            return ip.Completer._complete(
                cursor_line=0, cursor_pos=len(text), full_text=text
            )["IPCompleter.file_matcher"]["completions"]

        with TemporaryWorkingDirectory():
            name = "foo'bar"
            open(name, "w", encoding="utf-8").close()

            # Don't escape Windows
            escaped = name if sys.platform == "win32" else "foo\\'bar"

            # Single quote matches embedded single quote
            c = _("open('foo")[0]
            self.assertEqual(c.text, escaped)

            # Double quote requires no escape
            c = _('open("foo')[0]
            self.assertEqual(c.text, name)

            # No quote requires an escape
            c = _("%ls foo")[0]
            self.assertEqual(c.text, escaped)

    def test_all_completions_dups(self):
        """
        Make sure the output of `IPCompleter.all_completions` does not have
        duplicated prefixes.
        """
        ip = get_ipython()
        c = ip.Completer
        ip.ex("class TestClass():\n\ta=1\n\ta1=2")
        for jedi_status in [True, False]:
            with provisionalcompleter():
                ip.Completer.use_jedi = jedi_status
                matches = c.all_completions("TestCl")
                assert matches == ["TestClass"], (jedi_status, matches)
                matches = c.all_completions("TestClass.")
                assert len(matches) > 2, (jedi_status, matches)
                matches = c.all_completions("TestClass.a")
                assert matches == ['TestClass.a', 'TestClass.a1'], jedi_status

    def test_jedi(self):
        """
        A couple of issues we had with Jedi.
        """
        ip = get_ipython()

        def _test_complete(reason, s, comp, start=None, end=None):
            l = len(s)
            start = start if start is not None else l
            end = end if end is not None else l
            with provisionalcompleter():
                ip.Completer.use_jedi = True
                completions = set(ip.Completer.completions(s, l))
                ip.Completer.use_jedi = False
                assert Completion(start, end, comp) in completions, reason

        def _test_not_complete(reason, s, comp):
            l = len(s)
            with provisionalcompleter():
                ip.Completer.use_jedi = True
                completions = set(ip.Completer.completions(s, l))
                ip.Completer.use_jedi = False
                assert Completion(l, l, comp) not in completions, reason

        import jedi

        jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
        if jedi_version > (0, 10):
            _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
        _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
        _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
        _test_complete("cover duplicate completions", "im", "import", 0, 2)

        _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")

    def test_completion_have_signature(self):
        """
        Let's make sure Jedi is capable of pulling out the signature of the function we are completing.
        """
        ip = get_ipython()
        with provisionalcompleter():
            ip.Completer.use_jedi = True
            completions = ip.Completer.completions("ope", 3)
            c = next(completions)  # should be `open`
            ip.Completer.use_jedi = False
        assert "file" in c.signature, "Signature of function was not found by completer"
        assert (
            "encoding" in c.signature
        ), "Signature of function was not found by completer"

    def test_completions_have_type(self):
        """
        Let's make sure matchers provide completion type.
        """
        ip = get_ipython()
        with provisionalcompleter():
            ip.Completer.use_jedi = False
            completions = ip.Completer.completions("%tim", 3)
            c = next(completions)  # should be `%time` or similar
        assert c.type == "magic", "Type of magic was not assigned by completer"

    @pytest.mark.xfail(reason="Known failure on jedi<=0.18.0")
    def test_deduplicate_completions(self):
        """
        Test that completions are correctly deduplicated (even if ranges are not the same)
        """
        ip = get_ipython()
        ip.ex(
            textwrap.dedent(
                """
                class Z:
                    zoo = 1
                """
            )
        )
        with provisionalcompleter():
            ip.Completer.use_jedi = True
            l = list(
                _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
            )
            ip.Completer.use_jedi = False

        assert len(l) == 1, "Completions (Z.z<tab>) should be deduplicated: %s" % l
        assert l[0].text == "zoo"  # and not `it.accumulate`

    def test_greedy_completions(self):
        """
        Test the capability of the Greedy completer.

        Most of the tests here do not really show off the greedy completer; as proof,
        each of the cases below now passes with Jedi. The greedy completer is capable of more.

        See the :any:`test_dict_key_completion_contexts`

        """
        ip = get_ipython()
        ip.ex("a=list(range(5))")
        _, c = ip.complete(".", line="a[0].")
        self.assertFalse(".real" in c, "Shouldn't have completed on a[0]: %s" % c)

        def _(line, cursor_pos, expect, message, completion):
            with greedy_completion(), provisionalcompleter():
                ip.Completer.use_jedi = False
                _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
                self.assertIn(expect, c, message % c)

                ip.Completer.use_jedi = True
                with provisionalcompleter():
                    completions = ip.Completer.completions(line, cursor_pos)
                self.assertIn(completion, completions)

        with provisionalcompleter():
            _(
                "a[0].",
                5,
                "a[0].real",
                "Should have completed on a[0].: %s",
                Completion(5, 5, "real"),
            )
            _(
                "a[0].r",
                6,
                "a[0].real",
                "Should have completed on a[0].r: %s",
                Completion(5, 6, "real"),
            )

            _(
                "a[0].from_",
                10,
                "a[0].from_bytes",
                "Should have completed on a[0].from_: %s",
                Completion(5, 10, "from_bytes"),
            )

    def test_omit__names(self):
        # also happens to test IPCompleter as a configurable
        ip = get_ipython()
        ip._hidden_attr = 1
        ip._x = {}
        c = ip.Completer
        ip.ex("ip=get_ipython()")
        cfg = Config()
        cfg.IPCompleter.omit__names = 0
        c.update_config(cfg)
        with provisionalcompleter():
            c.use_jedi = False
            s, matches = c.complete("ip.")
            self.assertIn("ip.__str__", matches)
            self.assertIn("ip._hidden_attr", matches)

            # c.use_jedi = True
            # completions = set(c.completions('ip.', 3))
            # self.assertIn(Completion(3, 3, '__str__'), completions)
            # self.assertIn(Completion(3,3, "_hidden_attr"), completions)

        cfg = Config()
        cfg.IPCompleter.omit__names = 1
        c.update_config(cfg)
        with provisionalcompleter():
            c.use_jedi = False
            s, matches = c.complete("ip.")
            self.assertNotIn("ip.__str__", matches)
            # self.assertIn('ip._hidden_attr', matches)

            # c.use_jedi = True
            # completions = set(c.completions('ip.', 3))
            # self.assertNotIn(Completion(3,3,'__str__'), completions)
            # self.assertIn(Completion(3,3, "_hidden_attr"), completions)

        cfg = Config()
        cfg.IPCompleter.omit__names = 2
        c.update_config(cfg)
        with provisionalcompleter():
            c.use_jedi = False
            s, matches = c.complete("ip.")
            self.assertNotIn("ip.__str__", matches)
            self.assertNotIn("ip._hidden_attr", matches)

            # c.use_jedi = True
            # completions = set(c.completions('ip.', 3))
            # self.assertNotIn(Completion(3,3,'__str__'), completions)
            # self.assertNotIn(Completion(3,3, "_hidden_attr"), completions)

        with provisionalcompleter():
            c.use_jedi = False
            s, matches = c.complete("ip._x.")
            self.assertIn("ip._x.keys", matches)

            # c.use_jedi = True
            # completions = set(c.completions('ip._x.', 6))
            # self.assertIn(Completion(6,6, "keys"), completions)

        del ip._hidden_attr
        del ip._x

    def test_limit_to__all__False_ok(self):
        """
        `limit_to__all__` is deprecated; once we remove it this test can go away.
        """
        ip = get_ipython()
        c = ip.Completer
        c.use_jedi = False
        ip.ex("class D: x=24")
        ip.ex("d=D()")
        cfg = Config()
        cfg.IPCompleter.limit_to__all__ = False
        c.update_config(cfg)
        s, matches = c.complete("d.")
        self.assertIn("d.x", matches)

    def test_get__all__entries_ok(self):
        class A:
            __all__ = ["x", 1]

        words = completer.get__all__entries(A())
        self.assertEqual(words, ["x"])

    def test_get__all__entries_no__all__ok(self):
        class A:
            pass

        words = completer.get__all__entries(A())
        self.assertEqual(words, [])

    def test_func_kw_completions(self):
        ip = get_ipython()
        c = ip.Completer
        c.use_jedi = False
        ip.ex("def myfunc(a=1,b=2): return a+b")
        s, matches = c.complete(None, "myfunc(1,b")
        self.assertIn("b=", matches)
        # Simulate completing with cursor right after b (pos==10):
        s, matches = c.complete(None, "myfunc(1,b)", 10)
        self.assertIn("b=", matches)
        s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
        self.assertIn("b=", matches)
        # builtin function
        s, matches = c.complete(None, "min(k, k")
        self.assertIn("key=", matches)

    def test_default_arguments_from_docstring(self):
        ip = get_ipython()
        c = ip.Completer
        kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
        self.assertEqual(kwd, ["key"])
        # with cython type etc
        kwd = c._default_arguments_from_docstring(
            "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
        )
        self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
        # white spaces
        kwd = c._default_arguments_from_docstring(
            "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
        )
        self.assertEqual(kwd, ["ncall", "resume", "nsplit"])

706 def test_line_magics(self):
706 def test_line_magics(self):
707 ip = get_ipython()
707 ip = get_ipython()
708 c = ip.Completer
708 c = ip.Completer
709 s, matches = c.complete(None, "lsmag")
709 s, matches = c.complete(None, "lsmag")
710 self.assertIn("%lsmagic", matches)
710 self.assertIn("%lsmagic", matches)
711 s, matches = c.complete(None, "%lsmag")
711 s, matches = c.complete(None, "%lsmag")
712 self.assertIn("%lsmagic", matches)
712 self.assertIn("%lsmagic", matches)
713
713
714 def test_cell_magics(self):
714 def test_cell_magics(self):
715 from IPython.core.magic import register_cell_magic
715 from IPython.core.magic import register_cell_magic
716
716
717 @register_cell_magic
717 @register_cell_magic
718 def _foo_cellm(line, cell):
718 def _foo_cellm(line, cell):
719 pass
719 pass
720
720
721 ip = get_ipython()
721 ip = get_ipython()
722 c = ip.Completer
722 c = ip.Completer
723
723
724 s, matches = c.complete(None, "_foo_ce")
724 s, matches = c.complete(None, "_foo_ce")
725 self.assertIn("%%_foo_cellm", matches)
725 self.assertIn("%%_foo_cellm", matches)
726 s, matches = c.complete(None, "%%_foo_ce")
726 s, matches = c.complete(None, "%%_foo_ce")
727 self.assertIn("%%_foo_cellm", matches)
727 self.assertIn("%%_foo_cellm", matches)
728
728
729 def test_line_cell_magics(self):
729 def test_line_cell_magics(self):
730 from IPython.core.magic import register_line_cell_magic
730 from IPython.core.magic import register_line_cell_magic
731
731
732 @register_line_cell_magic
732 @register_line_cell_magic
733 def _bar_cellm(line, cell):
733 def _bar_cellm(line, cell):
734 pass
734 pass
735
735
736 ip = get_ipython()
736 ip = get_ipython()
737 c = ip.Completer
737 c = ip.Completer
738
738
739 # The policy here is trickier, see comments in completion code. The
739 # The policy here is trickier, see comments in completion code. The
740 # returned values depend on whether the user passes %% or not explicitly,
740 # returned values depend on whether the user passes %% or not explicitly,
741 # and this will show a difference if the same name is both a line and cell
741 # and this will show a difference if the same name is both a line and cell
742 # magic.
742 # magic.
743 s, matches = c.complete(None, "_bar_ce")
743 s, matches = c.complete(None, "_bar_ce")
744 self.assertIn("%_bar_cellm", matches)
744 self.assertIn("%_bar_cellm", matches)
745 self.assertIn("%%_bar_cellm", matches)
745 self.assertIn("%%_bar_cellm", matches)
746 s, matches = c.complete(None, "%_bar_ce")
746 s, matches = c.complete(None, "%_bar_ce")
747 self.assertIn("%_bar_cellm", matches)
747 self.assertIn("%_bar_cellm", matches)
748 self.assertIn("%%_bar_cellm", matches)
748 self.assertIn("%%_bar_cellm", matches)
749 s, matches = c.complete(None, "%%_bar_ce")
749 s, matches = c.complete(None, "%%_bar_ce")
750 self.assertNotIn("%_bar_cellm", matches)
750 self.assertNotIn("%_bar_cellm", matches)
751 self.assertIn("%%_bar_cellm", matches)
751 self.assertIn("%%_bar_cellm", matches)
752
752
    def test_magic_completion_order(self):
        ip = get_ipython()
        c = ip.Completer

        # Test ordering of line and cell magics.
        text, matches = c.complete("timeit")
        self.assertEqual(matches, ["%timeit", "%%timeit"])

    def test_magic_completion_shadowing(self):
        ip = get_ipython()
        c = ip.Completer
        c.use_jedi = False

        # Before importing matplotlib, %matplotlib magic should be the only option.
        text, matches = c.complete("mat")
        self.assertEqual(matches, ["%matplotlib"])

        # The newly introduced name should shadow the magic.
        ip.run_cell("matplotlib = 1")
        text, matches = c.complete("mat")
        self.assertEqual(matches, ["matplotlib"])

        # After removing matplotlib from namespace, the magic should again be
        # the only option.
        del ip.user_ns["matplotlib"]
        text, matches = c.complete("mat")
        self.assertEqual(matches, ["%matplotlib"])

    def test_magic_completion_shadowing_explicit(self):
        """
        If the user tries to complete a shadowed magic, an explicit % start should
        still return the completions.
        """
        ip = get_ipython()
        c = ip.Completer

        # Before importing matplotlib, %matplotlib magic should be the only option.
        text, matches = c.complete("%mat")
        self.assertEqual(matches, ["%matplotlib"])

        ip.run_cell("matplotlib = 1")

        # Even with matplotlib now shadowing the magic in the namespace, the
        # magic should still be the only option for the explicit % prefix.
        text, matches = c.complete("%mat")
        self.assertEqual(matches, ["%matplotlib"])

    def test_magic_config(self):
        ip = get_ipython()
        c = ip.Completer

        s, matches = c.complete(None, "conf")
        self.assertIn("%config", matches)
        s, matches = c.complete(None, "conf")
        self.assertNotIn("AliasManager", matches)
        s, matches = c.complete(None, "config ")
        self.assertIn("AliasManager", matches)
        s, matches = c.complete(None, "%config ")
        self.assertIn("AliasManager", matches)
        s, matches = c.complete(None, "config Ali")
        self.assertListEqual(["AliasManager"], matches)
        s, matches = c.complete(None, "%config Ali")
        self.assertListEqual(["AliasManager"], matches)
        s, matches = c.complete(None, "config AliasManager")
        self.assertListEqual(["AliasManager"], matches)
        s, matches = c.complete(None, "%config AliasManager")
        self.assertListEqual(["AliasManager"], matches)
        s, matches = c.complete(None, "config AliasManager.")
        self.assertIn("AliasManager.default_aliases", matches)
        s, matches = c.complete(None, "%config AliasManager.")
        self.assertIn("AliasManager.default_aliases", matches)
        s, matches = c.complete(None, "config AliasManager.de")
        self.assertListEqual(["AliasManager.default_aliases"], matches)
        s, matches = c.complete(None, "config AliasManager.de")
        self.assertListEqual(["AliasManager.default_aliases"], matches)

    def test_magic_color(self):
        ip = get_ipython()
        c = ip.Completer

        s, matches = c.complete(None, "colo")
        self.assertIn("%colors", matches)
        s, matches = c.complete(None, "colo")
        self.assertNotIn("NoColor", matches)
        s, matches = c.complete(None, "%colors")  # No trailing space
        self.assertNotIn("NoColor", matches)
        s, matches = c.complete(None, "colors ")
        self.assertIn("NoColor", matches)
        s, matches = c.complete(None, "%colors ")
        self.assertIn("NoColor", matches)
        s, matches = c.complete(None, "colors NoCo")
        self.assertListEqual(["NoColor"], matches)
        s, matches = c.complete(None, "%colors NoCo")
        self.assertListEqual(["NoColor"], matches)

    def test_match_dict_keys(self):
        """
        Test that match_dict_keys works on a couple of use cases, returns what
        is expected, and does not crash.
        """
        delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"

        def match(*args, **kwargs):
            quote, offset, matches = match_dict_keys(*args, **kwargs)
            return quote, offset, list(matches)

        keys = ["foo", b"far"]
        assert match(keys, "b'", delims=delims) == ("'", 2, ["far"])
        assert match(keys, "b'f", delims=delims) == ("'", 2, ["far"])
        assert match(keys, 'b"', delims=delims) == ('"', 2, ["far"])
        assert match(keys, 'b"f', delims=delims) == ('"', 2, ["far"])

        assert match(keys, "'", delims=delims) == ("'", 1, ["foo"])
        assert match(keys, "'f", delims=delims) == ("'", 1, ["foo"])
        assert match(keys, '"', delims=delims) == ('"', 1, ["foo"])
        assert match(keys, '"f', delims=delims) == ('"', 1, ["foo"])

        # Completion on first item of tuple
        keys = [("foo", 1111), ("foo", 2222), (3333, "bar"), (3333, "test")]
        assert match(keys, "'f", delims=delims) == ("'", 1, ["foo"])
        assert match(keys, "33", delims=delims) == ("", 0, ["3333"])

        # Completion on numbers
        keys = [0xDEADBEEF, 1111, 1234, "1999", 0b10101, 22]  # 3735928559 # 21
        assert match(keys, "0xdead", delims=delims) == ("", 0, ["0xdeadbeef"])
        assert match(keys, "1", delims=delims) == ("", 0, ["1111", "1234"])
        assert match(keys, "2", delims=delims) == ("", 0, ["21", "22"])
        assert match(keys, "0b101", delims=delims) == ("", 0, ["0b10101", "0b10110"])

    def test_match_dict_keys_tuple(self):
        """
        Test that match_dict_keys called with an extra prefix works on a couple
        of use cases, returns what is expected, and does not crash.
        """
        delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"

        keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ("other", "test")]

        def match(*args, **kwargs):
            quote, offset, matches = match_dict_keys(*args, **kwargs)
            return quote, offset, list(matches)

        # Completion on first key == "foo"
        assert match(keys, "'", delims=delims, extra_prefix=("foo",)) == (
            "'",
            1,
            ["bar", "oof"],
        )
        assert match(keys, '"', delims=delims, extra_prefix=("foo",)) == (
            '"',
            1,
            ["bar", "oof"],
        )
        assert match(keys, "'o", delims=delims, extra_prefix=("foo",)) == (
            "'",
            1,
            ["oof"],
        )
        assert match(keys, '"o', delims=delims, extra_prefix=("foo",)) == (
            '"',
            1,
            ["oof"],
        )
        assert match(keys, "b'", delims=delims, extra_prefix=("foo",)) == (
            "'",
            2,
            ["bar"],
        )
        assert match(keys, 'b"', delims=delims, extra_prefix=("foo",)) == (
            '"',
            2,
            ["bar"],
        )
        assert match(keys, "b'b", delims=delims, extra_prefix=("foo",)) == (
            "'",
            2,
            ["bar"],
        )
        assert match(keys, 'b"b', delims=delims, extra_prefix=("foo",)) == (
            '"',
            2,
            ["bar"],
        )

        # No completion
        assert match(keys, "'", delims=delims, extra_prefix=("no_foo",)) == ("'", 1, [])
        assert match(keys, "'", delims=delims, extra_prefix=("fo",)) == ("'", 1, [])

        keys = [("foo1", "foo2", "foo3", "foo4"), ("foo1", "foo2", "bar", "foo4")]
        assert match(keys, "'foo", delims=delims, extra_prefix=("foo1",)) == (
            "'",
            1,
            ["foo2"],
        )
        assert match(keys, "'foo", delims=delims, extra_prefix=("foo1", "foo2")) == (
            "'",
            1,
            ["foo3"],
        )
        assert match(
            keys, "'foo", delims=delims, extra_prefix=("foo1", "foo2", "foo3")
        ) == ("'", 1, ["foo4"])
        assert match(
            keys, "'foo", delims=delims, extra_prefix=("foo1", "foo2", "foo3", "foo4")
        ) == ("'", 1, [])

        keys = [("foo", 1111), ("foo", "2222"), (3333, "bar"), (3333, 4444)]
        assert match(keys, "'", delims=delims, extra_prefix=("foo",)) == (
            "'",
            1,
            ["2222"],
        )
        assert match(keys, "", delims=delims, extra_prefix=("foo",)) == (
            "",
            0,
            ["1111", "'2222'"],
        )
        assert match(keys, "'", delims=delims, extra_prefix=(3333,)) == (
            "'",
            1,
            ["bar"],
        )
        assert match(keys, "", delims=delims, extra_prefix=(3333,)) == (
            "",
            0,
            ["'bar'", "4444"],
        )
        assert match(keys, "'", delims=delims, extra_prefix=("3333",)) == ("'", 1, [])
        assert match(keys, "33", delims=delims) == ("", 0, ["3333"])

    def test_dict_key_completion_closures(self):
        ip = get_ipython()
        complete = ip.Completer.complete
        ip.Completer.auto_close_dict_keys = True

        ip.user_ns["d"] = {
            # tuple only
            ("aa", 11): None,
            # tuple and non-tuple
            ("bb", 22): None,
            "bb": None,
            # non-tuple only
            "cc": None,
            # numeric tuple only
            (77, "x"): None,
            # numeric tuple and non-tuple
            (88, "y"): None,
            88: None,
            # numeric non-tuple only
            99: None,
        }

        _, matches = complete(line_buffer="d[")
        # should append `, ` if it matches a tuple only
        self.assertIn("'aa', ", matches)
        # should not append anything if it matches both a tuple and an item
        self.assertIn("'bb'", matches)
        # should append `]` if it matches an item only
        self.assertIn("'cc']", matches)

        # should append `, ` if it matches a tuple only
        self.assertIn("77, ", matches)
        # should not append anything if it matches both a tuple and an item
        self.assertIn("88", matches)
        # should append `]` if it matches an item only
        self.assertIn("99]", matches)

        _, matches = complete(line_buffer="d['aa', ")
        # should restrict matches to those matching the tuple prefix
        self.assertIn("11]", matches)
        self.assertNotIn("'bb'", matches)
        self.assertNotIn("'bb', ", matches)
        self.assertNotIn("'bb']", matches)
        self.assertNotIn("'cc'", matches)
        self.assertNotIn("'cc', ", matches)
        self.assertNotIn("'cc']", matches)
        ip.Completer.auto_close_dict_keys = False

1031 def test_dict_key_completion_string(self):
1031 def test_dict_key_completion_string(self):
1032 """Test dictionary key completion for string keys"""
1032 """Test dictionary key completion for string keys"""
1033 ip = get_ipython()
1033 ip = get_ipython()
1034 complete = ip.Completer.complete
1034 complete = ip.Completer.complete
1035
1035
1036 ip.user_ns["d"] = {"abc": None}
1036 ip.user_ns["d"] = {"abc": None}
1037
1037
1038 # check completion at different stages
1038 # check completion at different stages
1039 _, matches = complete(line_buffer="d[")
1039 _, matches = complete(line_buffer="d[")
1040 self.assertIn("'abc'", matches)
1040 self.assertIn("'abc'", matches)
1041 self.assertNotIn("'abc']", matches)
1041 self.assertNotIn("'abc']", matches)
1042
1042
1043 _, matches = complete(line_buffer="d['")
1043 _, matches = complete(line_buffer="d['")
1044 self.assertIn("abc", matches)
1044 self.assertIn("abc", matches)
1045 self.assertNotIn("abc']", matches)
1045 self.assertNotIn("abc']", matches)
1046
1046
1047 _, matches = complete(line_buffer="d['a")
1047 _, matches = complete(line_buffer="d['a")
1048 self.assertIn("abc", matches)
1048 self.assertIn("abc", matches)
1049 self.assertNotIn("abc']", matches)
1049 self.assertNotIn("abc']", matches)
1050
1050
1051 # check use of different quoting
1051 # check use of different quoting
1052 _, matches = complete(line_buffer='d["')
1052 _, matches = complete(line_buffer='d["')
1053 self.assertIn("abc", matches)
1053 self.assertIn("abc", matches)
1054 self.assertNotIn('abc"]', matches)
1054 self.assertNotIn('abc"]', matches)
1055
1055
1056 _, matches = complete(line_buffer='d["a')
1056 _, matches = complete(line_buffer='d["a')
1057 self.assertIn("abc", matches)
1057 self.assertIn("abc", matches)
1058 self.assertNotIn('abc"]', matches)
1058 self.assertNotIn('abc"]', matches)
1059
1059
1060 # check sensitivity to following context
1060 # check sensitivity to following context
1061 _, matches = complete(line_buffer="d[]", cursor_pos=2)
1061 _, matches = complete(line_buffer="d[]", cursor_pos=2)
1062 self.assertIn("'abc'", matches)
1062 self.assertIn("'abc'", matches)
1063
1063
1064 _, matches = complete(line_buffer="d['']", cursor_pos=3)
1064 _, matches = complete(line_buffer="d['']", cursor_pos=3)
1065 self.assertIn("abc", matches)
1065 self.assertIn("abc", matches)
1066 self.assertNotIn("abc'", matches)
1066 self.assertNotIn("abc'", matches)
1067 self.assertNotIn("abc']", matches)
1067 self.assertNotIn("abc']", matches)
1068
1068
1069 # check multiple solutions are correctly returned and that noise is not
1069 # check multiple solutions are correctly returned and that noise is not
1070 ip.user_ns["d"] = {
1070 ip.user_ns["d"] = {
1071 "abc": None,
1071 "abc": None,
1072 "abd": None,
1072 "abd": None,
1073 "bad": None,
1073 "bad": None,
1074 object(): None,
1074 object(): None,
1075 5: None,
1075 5: None,
1076 ("abe", None): None,
1076 ("abe", None): None,
1077 (None, "abf"): None
1077 (None, "abf"): None
1078 }
1078 }
1079
1079
1080 _, matches = complete(line_buffer="d['a")
1080 _, matches = complete(line_buffer="d['a")
1081 self.assertIn("abc", matches)
1081 self.assertIn("abc", matches)
1082 self.assertIn("abd", matches)
1082 self.assertIn("abd", matches)
1083 self.assertNotIn("bad", matches)
1083 self.assertNotIn("bad", matches)
1084 self.assertNotIn("abe", matches)
1084 self.assertNotIn("abe", matches)
1085 self.assertNotIn("abf", matches)
1085 self.assertNotIn("abf", matches)
1086 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1086 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1087
1087
1088 # check escaping and whitespace
1088 # check escaping and whitespace
1089 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
1089 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
1090 _, matches = complete(line_buffer="d['a")
1090 _, matches = complete(line_buffer="d['a")
1091 self.assertIn("a\\nb", matches)
1091 self.assertIn("a\\nb", matches)
1092 self.assertIn("a\\'b", matches)
1092 self.assertIn("a\\'b", matches)
1093 self.assertIn('a"b', matches)
1093 self.assertIn('a"b', matches)
1094 self.assertIn("a word", matches)
1094 self.assertIn("a word", matches)
1095 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1095 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1096
1096
1097 # - can complete on non-initial word of the string
1097 # - can complete on non-initial word of the string
1098 _, matches = complete(line_buffer="d['a w")
1098 _, matches = complete(line_buffer="d['a w")
1099 self.assertIn("word", matches)
1099 self.assertIn("word", matches)
1100
1100
1101 # - understands quote escaping
1101 # - understands quote escaping
1102 _, matches = complete(line_buffer="d['a\\'")
1102 _, matches = complete(line_buffer="d['a\\'")
1103 self.assertIn("b", matches)
1103 self.assertIn("b", matches)
1104
1104
1105 # - default quoting should work like repr
1105 # - default quoting should work like repr
1106 _, matches = complete(line_buffer="d[")
1106 _, matches = complete(line_buffer="d[")
1107 self.assertIn('"a\'b"', matches)
1107 self.assertIn('"a\'b"', matches)
1108
1108
1109 # - when opening quote with ", possible to match with unescaped apostrophe
1109 # - when opening quote with ", possible to match with unescaped apostrophe
1110 _, matches = complete(line_buffer="d[\"a'")
1110 _, matches = complete(line_buffer="d[\"a'")
1111 self.assertIn("b", matches)
1111 self.assertIn("b", matches)
1112
1112
1113 # need to not split at delims that readline won't split at
1113 # need to not split at delims that readline won't split at
1114 if "-" not in ip.Completer.splitter.delims:
1114 if "-" not in ip.Completer.splitter.delims:
1115 ip.user_ns["d"] = {"before-after": None}
1115 ip.user_ns["d"] = {"before-after": None}
1116 _, matches = complete(line_buffer="d['before-af")
1116 _, matches = complete(line_buffer="d['before-af")
1117 self.assertIn("before-after", matches)
1117 self.assertIn("before-after", matches)
1118
1118
1119 # check completion on tuple-of-string keys at different stage - on first key
1119 # check completion on tuple-of-string keys at different stage - on first key
1120 ip.user_ns["d"] = {('foo', 'bar'): None}
1120 ip.user_ns["d"] = {('foo', 'bar'): None}
1121 _, matches = complete(line_buffer="d[")
1121 _, matches = complete(line_buffer="d[")
1122 self.assertIn("'foo'", matches)
1122 self.assertIn("'foo'", matches)
1123 self.assertNotIn("'foo']", matches)
1123 self.assertNotIn("'foo']", matches)
1124 self.assertNotIn("'bar'", matches)
1124 self.assertNotIn("'bar'", matches)
1125 self.assertNotIn("foo", matches)
1125 self.assertNotIn("foo", matches)
1126 self.assertNotIn("bar", matches)
1126 self.assertNotIn("bar", matches)
1127
1127
1128 # - match the prefix
1128 # - match the prefix
1129 _, matches = complete(line_buffer="d['f")
1129 _, matches = complete(line_buffer="d['f")
1130 self.assertIn("foo", matches)
1130 self.assertIn("foo", matches)
1131 self.assertNotIn("foo']", matches)
1131 self.assertNotIn("foo']", matches)
1132 self.assertNotIn('foo"]', matches)
1132 self.assertNotIn('foo"]', matches)
1133 _, matches = complete(line_buffer="d['foo")
1133 _, matches = complete(line_buffer="d['foo")
1134 self.assertIn("foo", matches)
1134 self.assertIn("foo", matches)
1135
1135
1136 # - can complete on second key
1136 # - can complete on second key
1137 _, matches = complete(line_buffer="d['foo', ")
1137 _, matches = complete(line_buffer="d['foo', ")
1138 self.assertIn("'bar'", matches)
1138 self.assertIn("'bar'", matches)
1139 _, matches = complete(line_buffer="d['foo', 'b")
1139 _, matches = complete(line_buffer="d['foo', 'b")
1140 self.assertIn("bar", matches)
1140 self.assertIn("bar", matches)
1141 self.assertNotIn("foo", matches)
1141 self.assertNotIn("foo", matches)
1142
1142
1143 # - does not propose missing keys
1143 # - does not propose missing keys
        _, matches = complete(line_buffer="d['foo', 'f")
        self.assertNotIn("bar", matches)
        self.assertNotIn("foo", matches)

        # check sensitivity to following context
        _, matches = complete(line_buffer="d['foo',]", cursor_pos=8)
        self.assertIn("'bar'", matches)
        self.assertNotIn("bar", matches)
        self.assertNotIn("'foo'", matches)
        self.assertNotIn("foo", matches)

        _, matches = complete(line_buffer="d['']", cursor_pos=3)
        self.assertIn("foo", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        _, matches = complete(line_buffer='d[""]', cursor_pos=3)
        self.assertIn("foo", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        _, matches = complete(line_buffer='d["foo","]', cursor_pos=9)
        self.assertIn("bar", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        _, matches = complete(line_buffer='d["foo",]', cursor_pos=8)
        self.assertIn("'bar'", matches)
        self.assertNotIn("bar", matches)

        # Can complete with longer tuple keys
        ip.user_ns["d"] = {('foo', 'bar', 'foobar'): None}

        # - can complete second key
        _, matches = complete(line_buffer="d['foo', 'b")
        self.assertIn("bar", matches)
        self.assertNotIn("foo", matches)
        self.assertNotIn("foobar", matches)

        # - can complete third key
        _, matches = complete(line_buffer="d['foo', 'bar', 'fo")
        self.assertIn("foobar", matches)
        self.assertNotIn("foo", matches)
        self.assertNotIn("bar", matches)

    def test_dict_key_completion_numbers(self):
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {
            0xDEADBEEF: None,  # 3735928559
            1111: None,
            1234: None,
            "1999": None,
            0b10101: None,  # 21
            22: None,
        }
        _, matches = complete(line_buffer="d[1")
        self.assertIn("1111", matches)
        self.assertIn("1234", matches)
        self.assertNotIn("1999", matches)
        self.assertNotIn("'1999'", matches)

        _, matches = complete(line_buffer="d[0xdead")
        self.assertIn("0xdeadbeef", matches)

        _, matches = complete(line_buffer="d[2")
        self.assertIn("21", matches)
        self.assertIn("22", matches)

        _, matches = complete(line_buffer="d[0b101")
        self.assertIn("0b10101", matches)
        self.assertIn("0b10110", matches)

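The numeric-key cases above rely on rendering integer keys in the base implied by the typed prefix (decimal, hex, or binary), which is why `d[0b101` offers `22` as `0b10110` while the string key `"1999"` never matches `d[1`. A minimal stand-alone sketch of that idea (hypothetical helper, not IPython's actual matcher):

```python
def complete_numeric_keys(keys, prefix):
    """Hypothetical sketch: render int keys in the base implied by the
    typed prefix and keep those whose rendering starts with the prefix.
    String keys like "1999" are skipped for a numeric prefix."""
    if prefix.startswith("0x"):
        render = hex
    elif prefix.startswith("0b"):
        render = bin
    else:
        render = str
    matches = []
    for key in keys:
        if isinstance(key, int):
            text = render(key)
            if text.startswith(prefix):
                matches.append(text)
    return matches


d = {0xDEADBEEF: None, 1111: None, 1234: None, "1999": None, 0b10101: None, 22: None}
print(complete_numeric_keys(d, "1"))      # -> ['1111', '1234']
print(complete_numeric_keys(d, "0b101"))  # -> ['0b10101', '0b10110']
```

Note that `22` only surfaces in binary form when the prefix itself is binary, mirroring the assertions in the test.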
    def test_dict_key_completion_contexts(self):
        """Test expression contexts in which dict key completion occurs"""
        ip = get_ipython()
        complete = ip.Completer.complete
        d = {"abc": None}
        ip.user_ns["d"] = d

        class C:
            data = d

        ip.user_ns["C"] = C
        ip.user_ns["get"] = lambda: d
        ip.user_ns["nested"] = {"x": d}

        def assert_no_completion(**kwargs):
            _, matches = complete(**kwargs)
            self.assertNotIn("abc", matches)
            self.assertNotIn("abc'", matches)
            self.assertNotIn("abc']", matches)
            self.assertNotIn("'abc'", matches)
            self.assertNotIn("'abc']", matches)

        def assert_completion(**kwargs):
            _, matches = complete(**kwargs)
            self.assertIn("'abc'", matches)
            self.assertNotIn("'abc']", matches)

        # no completion after string closed, even if reopened
        assert_no_completion(line_buffer="d['a'")
        assert_no_completion(line_buffer='d["a"')
        assert_no_completion(line_buffer="d['a' + ")
        assert_no_completion(line_buffer="d['a' + '")

        # completion in non-trivial expressions
        assert_completion(line_buffer="+ d[")
        assert_completion(line_buffer="(d[")
        assert_completion(line_buffer="C.data[")

        # nested dict completion
        assert_completion(line_buffer="nested['x'][")

        with evaluation_level("minimal"):
            with pytest.raises(AssertionError):
                assert_completion(line_buffer="nested['x'][")

        # greedy flag
        def assert_completion(**kwargs):
            _, matches = complete(**kwargs)
            self.assertIn("get()['abc']", matches)

        assert_no_completion(line_buffer="get()[")
        with greedy_completion():
            assert_completion(line_buffer="get()[")
            assert_completion(line_buffer="get()['")
            assert_completion(line_buffer="get()['a")
            assert_completion(line_buffer="get()['ab")
            assert_completion(line_buffer="get()['abc")

    def test_dict_key_completion_bytes(self):
        """Test handling of bytes in dict key completion"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {"abc": None, b"abd": None}

        _, matches = complete(line_buffer="d[")
        self.assertIn("'abc'", matches)
        self.assertIn("b'abd'", matches)

        if False:  # not currently implemented
            _, matches = complete(line_buffer="d[b")
            self.assertIn("b'abd'", matches)
            self.assertNotIn("b'abc'", matches)

            _, matches = complete(line_buffer="d[b'")
            self.assertIn("abd", matches)
            self.assertNotIn("abc", matches)

            _, matches = complete(line_buffer="d[B'")
            self.assertIn("abd", matches)
            self.assertNotIn("abc", matches)

            _, matches = complete(line_buffer="d['")
            self.assertIn("abc", matches)
            self.assertNotIn("abd", matches)

    def test_dict_key_completion_unicode_py3(self):
        """Test handling of unicode in dict key completion"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {"a\u05d0": None}

        # query using escape
        if sys.platform != "win32":
            # Known failure on Windows
            _, matches = complete(line_buffer="d['a\\u05d0")
            self.assertIn("u05d0", matches)  # tokenized after \\

        # query using character
        _, matches = complete(line_buffer="d['a\u05d0")
        self.assertIn("a\u05d0", matches)

        with greedy_completion():
            # query using escape
            _, matches = complete(line_buffer="d['a\\u05d0")
            self.assertIn("d['a\\u05d0']", matches)  # tokenized after \\

            # query using character
            _, matches = complete(line_buffer="d['a\u05d0")
            self.assertIn("d['a\u05d0']", matches)

    @dec.skip_without("numpy")
    def test_struct_array_key_completion(self):
        """Test dict key completion applies to numpy struct arrays"""
        import numpy

        ip = get_ipython()
        complete = ip.Completer.complete
        ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
        _, matches = complete(line_buffer="d['")
        self.assertIn("hello", matches)
        self.assertIn("world", matches)
        # complete on the numpy struct itself
        dt = numpy.dtype(
            [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
        )
        x = numpy.zeros(2, dtype=dt)
        ip.user_ns["d"] = x[1]
        _, matches = complete(line_buffer="d['")
        self.assertIn("my_head", matches)
        self.assertIn("my_data", matches)

        def completes_on_nested():
            ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
            _, matches = complete(line_buffer="d[1]['my_head']['")
            self.assertTrue(any("my_dt" in m for m in matches))
            self.assertTrue(any("my_df" in m for m in matches))

        # complete on a nested level
        with greedy_completion():
            completes_on_nested()

        with evaluation_level("limited"):
            completes_on_nested()

        with evaluation_level("minimal"):
            with pytest.raises(AssertionError):
                completes_on_nested()

    @dec.skip_without("pandas")
    def test_dataframe_key_completion(self):
        """Test dict key completion applies to pandas DataFrames"""
        import pandas

        ip = get_ipython()
        complete = ip.Completer.complete
        ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
        _, matches = complete(line_buffer="d['")
        self.assertIn("hello", matches)
        self.assertIn("world", matches)
        _, matches = complete(line_buffer="d.loc[:, '")
        self.assertIn("hello", matches)
        self.assertIn("world", matches)
        _, matches = complete(line_buffer="d.loc[1:, '")
        self.assertIn("hello", matches)
        _, matches = complete(line_buffer="d.loc[1:1, '")
        self.assertIn("hello", matches)
        _, matches = complete(line_buffer="d.loc[1:1:-1, '")
        self.assertIn("hello", matches)
        _, matches = complete(line_buffer="d.loc[::, '")
        self.assertIn("hello", matches)

    def test_dict_key_completion_invalids(self):
        """Smoke test for cases dict key completion can't handle"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["no_getitem"] = None
        ip.user_ns["no_keys"] = []
        ip.user_ns["cant_call_keys"] = dict
        ip.user_ns["empty"] = {}
        ip.user_ns["d"] = {"abc": 5}

        _, matches = complete(line_buffer="no_getitem['")
        _, matches = complete(line_buffer="no_keys['")
        _, matches = complete(line_buffer="cant_call_keys['")
        _, matches = complete(line_buffer="empty['")
        _, matches = complete(line_buffer="name_error['")
        _, matches = complete(line_buffer="d['\\")  # incomplete escape

    def test_object_key_completion(self):
        ip = get_ipython()
        ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])

        _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
        self.assertIn("qwerty", matches)
        self.assertIn("qwick", matches)

    def test_class_key_completion(self):
        ip = get_ipython()
        NamedInstanceClass("qwerty")
        NamedInstanceClass("qwick")
        ip.user_ns["named_instance_class"] = NamedInstanceClass

        _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
        self.assertIn("qwerty", matches)
        self.assertIn("qwick", matches)

    def test_tryimport(self):
        """
        Test that try_import doesn't crash on a trailing dot, and that it
        imports parent modules first
        """
        from IPython.core.completerlib import try_import

        assert try_import("IPython.")

    def test_aimport_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("i", "%aimport i")
        self.assertIn("io", matches)
        self.assertNotIn("int", matches)

    def test_nested_import_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete(None, "import IPython.co", 17)
        self.assertIn("IPython.core", matches)
        self.assertNotIn("import IPython.core", matches)
        self.assertNotIn("IPython.display", matches)

    def test_import_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("i", "import i")
        self.assertIn("io", matches)
        self.assertNotIn("int", matches)

    def test_from_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("B", "from io import B", 16)
        self.assertIn("BytesIO", matches)
        self.assertNotIn("BaseException", matches)

    def test_snake_case_completion(self):
        ip = get_ipython()
        ip.Completer.use_jedi = False
        ip.user_ns["some_three"] = 3
        ip.user_ns["some_four"] = 4
        _, matches = ip.complete("s_", "print(s_f")
        self.assertIn("some_three", matches)
        self.assertIn("some_four", matches)

    def test_mix_terms(self):
        ip = get_ipython()
        from textwrap import dedent

        ip.Completer.use_jedi = False
        ip.ex(
            dedent(
                """
                class Test:
                    def meth(self, meth_arg1):
                        print("meth")

                    def meth_1(self, meth1_arg1, meth1_arg2):
                        print("meth1")

                    def meth_2(self, meth2_arg1, meth2_arg2):
                        print("meth2")
                test = Test()
                """
            )
        )
        _, matches = ip.complete(None, "test.meth(")
        self.assertIn("meth_arg1=", matches)
        self.assertNotIn("meth2_arg1=", matches)

    def test_percent_symbol_restrict_to_magic_completions(self):
        ip = get_ipython()
        completer = ip.Completer
        text = "%a"

        with provisionalcompleter():
            completer.use_jedi = True
            completions = completer.completions(text, len(text))
            for c in completions:
                self.assertEqual(c.text[0], "%")

    def test_fwd_unicode_restricts(self):
        ip = get_ipython()
        completer = ip.Completer
        text = "\\ROMAN NUMERAL FIVE"

        with provisionalcompleter():
            completer.use_jedi = True
            completions = [
                completion.text for completion in completer.completions(text, len(text))
            ]
            self.assertEqual(completions, ["\u2164"])

    def test_dict_key_restrict_to_dicts(self):
        """Test that dict key completion suppresses non-dict completion items"""
        ip = get_ipython()
        c = ip.Completer
        d = {"abc": None}
        ip.user_ns["d"] = d

        text = 'd["a'

        def _():
            with provisionalcompleter():
                c.use_jedi = True
                return [
                    completion.text for completion in c.completions(text, len(text))
                ]

        completions = _()
        self.assertEqual(completions, ["abc"])

        # check that it can be disabled in a granular manner:
        cfg = Config()
        cfg.IPCompleter.suppress_competing_matchers = {
            "IPCompleter.dict_key_matcher": False
        }
        c.update_config(cfg)

        completions = _()
        self.assertIn("abc", completions)
        self.assertGreater(len(completions), 1)

    def test_matcher_suppression(self):
        @completion_matcher(identifier="a_matcher")
        def a_matcher(text):
            return ["completion_a"]

        @completion_matcher(identifier="b_matcher", api_version=2)
        def b_matcher(context: CompletionContext):
            text = context.token
            result = {"completions": [SimpleCompletion("completion_b")]}

            if text == "suppress c":
                result["suppress"] = {"c_matcher"}

            if text.startswith("suppress all"):
                result["suppress"] = True
                if text == "suppress all but c":
                    result["do_not_suppress"] = {"c_matcher"}
                if text == "suppress all but a":
                    result["do_not_suppress"] = {"a_matcher"}

            return result

        @completion_matcher(identifier="c_matcher")
        def c_matcher(text):
            return ["completion_c"]

        with custom_matchers([a_matcher, b_matcher, c_matcher]):
            ip = get_ipython()
            c = ip.Completer

            def _(text, expected):
                c.use_jedi = False
                s, matches = c.complete(text)
                self.assertEqual(expected, matches)

            _("do not suppress", ["completion_a", "completion_b", "completion_c"])
            _("suppress all", ["completion_b"])
            _("suppress all but a", ["completion_a", "completion_b"])
            _("suppress all but c", ["completion_b", "completion_c"])

            def configure(suppression_config):
                cfg = Config()
                cfg.IPCompleter.suppress_competing_matchers = suppression_config
                c.update_config(cfg)

            # test that configuration takes priority over the run-time decisions

            configure(False)
            _("suppress all", ["completion_a", "completion_b", "completion_c"])

            configure({"b_matcher": False})
            _("suppress all", ["completion_a", "completion_b", "completion_c"])

            configure({"a_matcher": False})
            _("suppress all", ["completion_b"])

            configure({"b_matcher": True})
            _("do not suppress", ["completion_b"])

            configure(True)
            _("do not suppress", ["completion_a"])

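The run-time suppression semantics exercised above can be sketched without IPython: a matcher's result may carry `suppress` and `do_not_suppress` hints, and a matcher's completions are dropped when any *other* matcher suppresses it without exempting it. A minimal sketch (hypothetical `resolve` helper; it covers only `suppress=True` hints and ignores configuration overrides and priorities):

```python
def resolve(results):
    """Merge matcher results, honouring run-time suppression hints.

    `results` maps matcher identifier -> dict with keys:
      - "completions": list of completion strings
      - "suppress" (optional): True to hide all other matchers' results
      - "do_not_suppress" (optional): set of identifiers exempt from that
    """
    merged = []
    for name, res in results.items():
        hidden = any(
            other.get("suppress") is True
            and name not in other.get("do_not_suppress", set())
            for other_name, other in results.items()
            if other_name != name
        )
        if not hidden:
            merged.extend(res.get("completions", []))
    return merged


# "suppress all but c": b_matcher suppresses everything except c_matcher
results = {
    "a_matcher": {"completions": ["completion_a"]},
    "b_matcher": {
        "completions": ["completion_b"],
        "suppress": True,
        "do_not_suppress": {"c_matcher"},
    },
    "c_matcher": {"completions": ["completion_c"]},
}
print(resolve(results))  # -> ['completion_b', 'completion_c']
```

This reproduces the `_("suppress all", ...)` and `_("suppress all but c", ...)` expectations; the config-driven cases in the second half of the test would need an extra override layer on top.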
    def test_matcher_suppression_with_iterator(self):
        @completion_matcher(identifier="matcher_returning_iterator")
        def matcher_returning_iterator(text):
            return iter(["completion_iter"])

        @completion_matcher(identifier="matcher_returning_list")
        def matcher_returning_list(text):
            return ["completion_list"]

        with custom_matchers([matcher_returning_iterator, matcher_returning_list]):
            ip = get_ipython()
            c = ip.Completer

            def _(text, expected):
                c.use_jedi = False
                s, matches = c.complete(text)
                self.assertEqual(expected, matches)

            def configure(suppression_config):
                cfg = Config()
                cfg.IPCompleter.suppress_competing_matchers = suppression_config
                c.update_config(cfg)

            configure(False)
            _("---", ["completion_iter", "completion_list"])

            configure(True)
            _("---", ["completion_iter"])

            configure(None)
            _("--", ["completion_iter", "completion_list"])

    def test_matcher_suppression_with_jedi(self):
        ip = get_ipython()
        c = ip.Completer
        c.use_jedi = True

        def configure(suppression_config):
            cfg = Config()
            cfg.IPCompleter.suppress_competing_matchers = suppression_config
            c.update_config(cfg)

        def _():
            with provisionalcompleter():
                matches = [completion.text for completion in c.completions("dict.", 5)]
                self.assertIn("keys", matches)

        configure(False)
        _()

        configure(True)
        _()

        configure(None)
        _()

    def test_matcher_disabling(self):
        @completion_matcher(identifier="a_matcher")
        def a_matcher(text):
            return ["completion_a"]

        @completion_matcher(identifier="b_matcher")
        def b_matcher(text):
            return ["completion_b"]

        def _(expected):
            s, matches = c.complete("completion_")
            self.assertEqual(expected, matches)

        with custom_matchers([a_matcher, b_matcher]):
            ip = get_ipython()
            c = ip.Completer

            _(["completion_a", "completion_b"])

            cfg = Config()
            cfg.IPCompleter.disable_matchers = ["b_matcher"]
            c.update_config(cfg)

            _(["completion_a"])

            cfg.IPCompleter.disable_matchers = []
            c.update_config(cfg)

1689 def test_matcher_priority(self):
1689 def test_matcher_priority(self):
1690 @completion_matcher(identifier="a_matcher", priority=0, api_version=2)
1690 @completion_matcher(identifier="a_matcher", priority=0, api_version=2)
1691 def a_matcher(text):
1691 def a_matcher(text):
1692 return {"completions": [SimpleCompletion("completion_a")], "suppress": True}
1692 return {"completions": [SimpleCompletion("completion_a")], "suppress": True}
1693
1693
1694 @completion_matcher(identifier="b_matcher", priority=2, api_version=2)
1694 @completion_matcher(identifier="b_matcher", priority=2, api_version=2)
1695 def b_matcher(text):
1695 def b_matcher(text):
1696 return {"completions": [SimpleCompletion("completion_b")], "suppress": True}
1696 return {"completions": [SimpleCompletion("completion_b")], "suppress": True}
1697
1697
1698 def _(expected):
1698 def _(expected):
1699 s, matches = c.complete("completion_")
1699 s, matches = c.complete("completion_")
1700 self.assertEqual(expected, matches)
1700 self.assertEqual(expected, matches)
1701
1701
1702 with custom_matchers([a_matcher, b_matcher]):
1702 with custom_matchers([a_matcher, b_matcher]):
1703 ip = get_ipython()
1703 ip = get_ipython()
1704 c = ip.Completer
1704 c = ip.Completer
1705
1705
1706 _(["completion_b"])
1706 _(["completion_b"])
1707 a_matcher.matcher_priority = 3
1707 a_matcher.matcher_priority = 3
1708 _(["completion_a"])
1708 _(["completion_a"])
1709
1709
1710
1710
1711 @pytest.mark.parametrize(
1711 @pytest.mark.parametrize(
1712 "input, expected",
1712 "input, expected",
1713 [
1713 [
1714 ["1.234", "1.234"],
1714 ["1.234", "1.234"],
1715 # should match signed numbers
1715 # should match signed numbers
1716 ["+1", "+1"],
1716 ["+1", "+1"],
1717 ["-1", "-1"],
1717 ["-1", "-1"],
1718 ["-1.0", "-1.0"],
1718 ["-1.0", "-1.0"],
1719 ["-1.", "-1."],
1719 ["-1.", "-1."],
1720 ["+1.", "+1."],
1720 ["+1.", "+1."],
1721 [".1", ".1"],
1721 [".1", ".1"],
1722 # should not match non-numbers
1722 # should not match non-numbers
1723 ["1..", None],
1723 ["1..", None],
1724 ["..", None],
1724 ["..", None],
1725 [".1.", None],
1725 [".1.", None],
1726 # should match after comma
1726 # should match after comma
1727 [",1", "1"],
1727 [",1", "1"],
1728 [", 1", "1"],
1728 [", 1", "1"],
1729 [", .1", ".1"],
1729 [", .1", ".1"],
1730 [", +.1", "+.1"],
1730 [", +.1", "+.1"],
1731 # should not match after trailing spaces
1731 # should not match after trailing spaces
1732 [".1 ", None],
1732 [".1 ", None],
1733 # some complex cases
1733 # some complex cases
1734 ["0b_0011_1111_0100_1110", "0b_0011_1111_0100_1110"],
1734 ["0b_0011_1111_0100_1110", "0b_0011_1111_0100_1110"],
1735 ["0xdeadbeef", "0xdeadbeef"],
1735 ["0xdeadbeef", "0xdeadbeef"],
1736 ["0b_1110_0101", "0b_1110_0101"],
1736 ["0b_1110_0101", "0b_1110_0101"],
1737 ],
1737 ],
1738 )
1738 )
1739 def test_match_numeric_literal_for_dict_key(input, expected):
1739 def test_match_numeric_literal_for_dict_key(input, expected):
1740 assert _match_number_in_dict_key_prefix(input) == expected
1740 assert _match_number_in_dict_key_prefix(input) == expected
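The parametrized cases above pin down the behaviour of `_match_number_in_dict_key_prefix`: extract a trailing numeric literal (signed decimals, floats, binary/hex with underscores), but refuse trailing spaces, double dots, and digits glued to an identifier or another number. A hypothetical regex sketch that satisfies these cases — not IPython's actual implementation — could look like:

```python
import re

# Hypothetical sketch (names and regex are assumptions, not IPython's code):
# return the trailing numeric-literal prefix of `text`, or None if absent.
_NUM_PREFIX = re.compile(
    r"(?<![\w.])"                  # literal must not be glued to a word or dot
    r"([+-]?(?:"
    r"0[bx][0-9a-f_]+"             # binary/hex, underscores allowed
    r"|\d+\.?\d*"                  # 1, 1., 1.234
    r"|\.\d+"                      # .1
    r"))$",                        # anchored at the end: no trailing space
    re.IGNORECASE,
)

def match_number_prefix(text):
    m = _NUM_PREFIX.search(text)
    return m.group(1) if m else None
```

For example, `match_number_prefix(", +.1")` yields `"+.1"` while `match_number_prefix(".1 ")` yields `None`, matching the table of expectations in the test.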
@@ -1,256 +1,256 b''
1 from typing import NamedTuple
1 from typing import NamedTuple
2 from IPython.core.guarded_eval import (
2 from IPython.core.guarded_eval import (
3 EvaluationContext,
3 EvaluationContext,
4 GuardRejection,
4 GuardRejection,
5 guarded_eval,
5 guarded_eval,
6 unbind_method,
6 unbind_method,
7 )
7 )
8 from IPython.testing import decorators as dec
8 from IPython.testing import decorators as dec
9 import pytest
9 import pytest
10
10
11
11
12 def limitted(**kwargs):
12 def limited(**kwargs):
13 return EvaluationContext(locals_=kwargs, globals_={}, evaluation="limitted")
13 return EvaluationContext(locals_=kwargs, globals_={}, evaluation="limited")
14
14
15
15
16 def unsafe(**kwargs):
16 def unsafe(**kwargs):
17 return EvaluationContext(locals_=kwargs, globals_={}, evaluation="unsafe")
17 return EvaluationContext(locals_=kwargs, globals_={}, evaluation="unsafe")
18
18
19
19
20 @dec.skip_without("pandas")
20 @dec.skip_without("pandas")
21 def test_pandas_series_iloc():
21 def test_pandas_series_iloc():
22 import pandas as pd
22 import pandas as pd
23
23
24 series = pd.Series([1], index=["a"])
24 series = pd.Series([1], index=["a"])
25 context = limitted(data=series)
25 context = limited(data=series)
26 assert guarded_eval("data.iloc[0]", context) == 1
26 assert guarded_eval("data.iloc[0]", context) == 1
27
27
28
28
29 @dec.skip_without("pandas")
29 @dec.skip_without("pandas")
30 def test_pandas_series():
30 def test_pandas_series():
31 import pandas as pd
31 import pandas as pd
32
32
33 context = limitted(data=pd.Series([1], index=["a"]))
33 context = limited(data=pd.Series([1], index=["a"]))
34 assert guarded_eval('data["a"]', context) == 1
34 assert guarded_eval('data["a"]', context) == 1
35 with pytest.raises(KeyError):
35 with pytest.raises(KeyError):
36 guarded_eval('data["c"]', context)
36 guarded_eval('data["c"]', context)
37
37
38
38
39 @dec.skip_without("pandas")
39 @dec.skip_without("pandas")
40 def test_pandas_bad_series():
40 def test_pandas_bad_series():
41 import pandas as pd
41 import pandas as pd
42
42
43 class BadItemSeries(pd.Series):
43 class BadItemSeries(pd.Series):
44 def __getitem__(self, key):
44 def __getitem__(self, key):
45 return "CUSTOM_ITEM"
45 return "CUSTOM_ITEM"
46
46
47 class BadAttrSeries(pd.Series):
47 class BadAttrSeries(pd.Series):
48 def __getattr__(self, key):
48 def __getattr__(self, key):
49 return "CUSTOM_ATTR"
49 return "CUSTOM_ATTR"
50
50
51 bad_series = BadItemSeries([1], index=["a"])
51 bad_series = BadItemSeries([1], index=["a"])
52 context = limitted(data=bad_series)
52 context = limited(data=bad_series)
53
53
54 with pytest.raises(GuardRejection):
54 with pytest.raises(GuardRejection):
55 guarded_eval('data["a"]', context)
55 guarded_eval('data["a"]', context)
56 with pytest.raises(GuardRejection):
56 with pytest.raises(GuardRejection):
57 guarded_eval('data["c"]', context)
57 guarded_eval('data["c"]', context)
58
58
59 # note: here result is a bit unexpected because
59 # note: here result is a bit unexpected because
60 # pandas `__getattr__` calls `__getitem__`;
60 # pandas `__getattr__` calls `__getitem__`;
61 # FIXME - special case to handle it?
61 # FIXME - special case to handle it?
62 assert guarded_eval("data.a", context) == "CUSTOM_ITEM"
62 assert guarded_eval("data.a", context) == "CUSTOM_ITEM"
63
63
64 context = unsafe(data=bad_series)
64 context = unsafe(data=bad_series)
65 assert guarded_eval('data["a"]', context) == "CUSTOM_ITEM"
65 assert guarded_eval('data["a"]', context) == "CUSTOM_ITEM"
66
66
67 bad_attr_series = BadAttrSeries([1], index=["a"])
67 bad_attr_series = BadAttrSeries([1], index=["a"])
68 context = limitted(data=bad_attr_series)
68 context = limited(data=bad_attr_series)
69 assert guarded_eval('data["a"]', context) == 1
69 assert guarded_eval('data["a"]', context) == 1
70 with pytest.raises(GuardRejection):
70 with pytest.raises(GuardRejection):
71 guarded_eval("data.a", context)
71 guarded_eval("data.a", context)
72
72
73
73
74 @dec.skip_without("pandas")
74 @dec.skip_without("pandas")
75 def test_pandas_dataframe_loc():
75 def test_pandas_dataframe_loc():
76 import pandas as pd
76 import pandas as pd
77 from pandas.testing import assert_series_equal
77 from pandas.testing import assert_series_equal
78
78
79 data = pd.DataFrame([{"a": 1}])
79 data = pd.DataFrame([{"a": 1}])
80 context = limitted(data=data)
80 context = limited(data=data)
81 assert_series_equal(guarded_eval('data.loc[:, "a"]', context), data["a"])
81 assert_series_equal(guarded_eval('data.loc[:, "a"]', context), data["a"])
82
82
83
83
84 def test_named_tuple():
84 def test_named_tuple():
85 class GoodNamedTuple(NamedTuple):
85 class GoodNamedTuple(NamedTuple):
86 a: str
86 a: str
87 pass
87 pass
88
88
89 class BadNamedTuple(NamedTuple):
89 class BadNamedTuple(NamedTuple):
90 a: str
90 a: str
91
91
92 def __getitem__(self, key):
92 def __getitem__(self, key):
93 return None
93 return None
94
94
95 good = GoodNamedTuple(a="x")
95 good = GoodNamedTuple(a="x")
96 bad = BadNamedTuple(a="x")
96 bad = BadNamedTuple(a="x")
97
97
98 context = limitted(data=good)
98 context = limited(data=good)
99 assert guarded_eval("data[0]", context) == "x"
99 assert guarded_eval("data[0]", context) == "x"
100
100
101 context = limitted(data=bad)
101 context = limited(data=bad)
102 with pytest.raises(GuardRejection):
102 with pytest.raises(GuardRejection):
103 guarded_eval("data[0]", context)
103 guarded_eval("data[0]", context)
104
104
105
105
106 def test_dict():
106 def test_dict():
107 context = limitted(data={"a": 1, "b": {"x": 2}, ("x", "y"): 3})
107 context = limited(data={"a": 1, "b": {"x": 2}, ("x", "y"): 3})
108 assert guarded_eval('data["a"]', context) == 1
108 assert guarded_eval('data["a"]', context) == 1
109 assert guarded_eval('data["b"]', context) == {"x": 2}
109 assert guarded_eval('data["b"]', context) == {"x": 2}
110 assert guarded_eval('data["b"]["x"]', context) == 2
110 assert guarded_eval('data["b"]["x"]', context) == 2
111 assert guarded_eval('data["x", "y"]', context) == 3
111 assert guarded_eval('data["x", "y"]', context) == 3
112
112
113 assert guarded_eval("data.keys", context)
113 assert guarded_eval("data.keys", context)
114
114
115
115
116 def test_set():
116 def test_set():
117 context = limitted(data={"a", "b"})
117 context = limited(data={"a", "b"})
118 assert guarded_eval("data.difference", context)
118 assert guarded_eval("data.difference", context)
119
119
120
120
121 def test_list():
121 def test_list():
122 context = limitted(data=[1, 2, 3])
122 context = limited(data=[1, 2, 3])
123 assert guarded_eval("data[1]", context) == 2
123 assert guarded_eval("data[1]", context) == 2
124 assert guarded_eval("data.copy", context)
124 assert guarded_eval("data.copy", context)
125
125
126
126
127 def test_dict_literal():
127 def test_dict_literal():
128 context = limitted()
128 context = limited()
129 assert guarded_eval("{}", context) == {}
129 assert guarded_eval("{}", context) == {}
130 assert guarded_eval('{"a": 1}', context) == {"a": 1}
130 assert guarded_eval('{"a": 1}', context) == {"a": 1}
131
131
132
132
133 def test_list_literal():
133 def test_list_literal():
134 context = limitted()
134 context = limited()
135 assert guarded_eval("[]", context) == []
135 assert guarded_eval("[]", context) == []
136 assert guarded_eval('[1, "a"]', context) == [1, "a"]
136 assert guarded_eval('[1, "a"]', context) == [1, "a"]
137
137
138
138
139 def test_set_literal():
139 def test_set_literal():
140 context = limitted()
140 context = limited()
141 assert guarded_eval("set()", context) == set()
141 assert guarded_eval("set()", context) == set()
142 assert guarded_eval('{"a"}', context) == {"a"}
142 assert guarded_eval('{"a"}', context) == {"a"}
143
143
144
144
145 def test_if_expression():
145 def test_if_expression():
146 context = limitted()
146 context = limited()
147 assert guarded_eval("2 if True else 3", context) == 2
147 assert guarded_eval("2 if True else 3", context) == 2
148 assert guarded_eval("4 if False else 5", context) == 5
148 assert guarded_eval("4 if False else 5", context) == 5
149
149
150
150
151 def test_object():
151 def test_object():
152 obj = object()
152 obj = object()
153 context = limitted(obj=obj)
153 context = limited(obj=obj)
154 assert guarded_eval("obj.__dir__", context) == obj.__dir__
154 assert guarded_eval("obj.__dir__", context) == obj.__dir__
155
155
156
156
157 @pytest.mark.parametrize(
157 @pytest.mark.parametrize(
158 "code,expected",
158 "code,expected",
159 [
159 [
160 ["int.numerator", int.numerator],
160 ["int.numerator", int.numerator],
161 ["float.is_integer", float.is_integer],
161 ["float.is_integer", float.is_integer],
162 ["complex.real", complex.real],
162 ["complex.real", complex.real],
163 ],
163 ],
164 )
164 )
165 def test_number_attributes(code, expected):
165 def test_number_attributes(code, expected):
166 assert guarded_eval(code, limitted()) == expected
166 assert guarded_eval(code, limited()) == expected
167
167
168
168
169 def test_method_descriptor():
169 def test_method_descriptor():
170 context = limitted()
170 context = limited()
171 assert guarded_eval("list.copy.__name__", context) == "copy"
171 assert guarded_eval("list.copy.__name__", context) == "copy"
172
172
173
173
174 @pytest.mark.parametrize(
174 @pytest.mark.parametrize(
175 "data,good,bad,expected",
175 "data,good,bad,expected",
176 [
176 [
177 [[1, 2, 3], "data.index(2)", "data.append(4)", 1],
177 [[1, 2, 3], "data.index(2)", "data.append(4)", 1],
178 [{"a": 1}, "data.keys().isdisjoint({})", "data.update()", True],
178 [{"a": 1}, "data.keys().isdisjoint({})", "data.update()", True],
179 ],
179 ],
180 )
180 )
181 def test_calls(data, good, bad, expected):
181 def test_calls(data, good, bad, expected):
182 context = limitted(data=data)
182 context = limited(data=data)
183 assert guarded_eval(good, context) == expected
183 assert guarded_eval(good, context) == expected
184
184
185 with pytest.raises(GuardRejection):
185 with pytest.raises(GuardRejection):
186 guarded_eval(bad, context)
186 guarded_eval(bad, context)
187
187
188
188
189 @pytest.mark.parametrize(
189 @pytest.mark.parametrize(
190 "code,expected",
190 "code,expected",
191 [
191 [
192 ["(1\n+\n1)", 2],
192 ["(1\n+\n1)", 2],
193 ["list(range(10))[-1:]", [9]],
193 ["list(range(10))[-1:]", [9]],
194 ["list(range(20))[3:-2:3]", [3, 6, 9, 12, 15]],
194 ["list(range(20))[3:-2:3]", [3, 6, 9, 12, 15]],
195 ],
195 ],
196 )
196 )
197 def test_literals(code, expected):
197 def test_literals(code, expected):
198 context = limitted()
198 context = limited()
199 assert guarded_eval(code, context) == expected
199 assert guarded_eval(code, context) == expected
200
200
201
201
202 def test_subscript():
202 def test_subscript():
203 context = EvaluationContext(
203 context = EvaluationContext(
204 locals_={}, globals_={}, evaluation="limitted", in_subscript=True
204 locals_={}, globals_={}, evaluation="limited", in_subscript=True
205 )
205 )
206 empty_slice = slice(None, None, None)
206 empty_slice = slice(None, None, None)
207 assert guarded_eval("", context) == tuple()
207 assert guarded_eval("", context) == tuple()
208 assert guarded_eval(":", context) == empty_slice
208 assert guarded_eval(":", context) == empty_slice
209 assert guarded_eval("1:2:3", context) == slice(1, 2, 3)
209 assert guarded_eval("1:2:3", context) == slice(1, 2, 3)
210 assert guarded_eval(':, "a"', context) == (empty_slice, "a")
210 assert guarded_eval(':, "a"', context) == (empty_slice, "a")
211
211
212
212
213 def test_unbind_method():
213 def test_unbind_method():
214 class X(list):
214 class X(list):
215 def index(self, k):
215 def index(self, k):
216 return "CUSTOM"
216 return "CUSTOM"
217
217
218 x = X()
218 x = X()
219 assert unbind_method(x.index) is X.index
219 assert unbind_method(x.index) is X.index
220 assert unbind_method([].index) is list.index
220 assert unbind_method([].index) is list.index
221
221
222
222
223 def test_assumption_instance_attr_do_not_matter():
223 def test_assumption_instance_attr_do_not_matter():
224 """This is semi-specified in Python documentation.
224 """This is semi-specified in Python documentation.
225
225
226 However, since the specification says 'not guaranteed
226 However, since the specification says 'not guaranteed
227 to work' rather than 'is forbidden to work', future
227 to work' rather than 'is forbidden to work', future
228 versions could invalidate these assumptions. This test
228 versions could invalidate these assumptions. This test
229 is meant to catch such a change if it ever comes true.
229 is meant to catch such a change if it ever comes true.
230 """
230 """
231
231
232 class T:
232 class T:
233 def __getitem__(self, k):
233 def __getitem__(self, k):
234 return "a"
234 return "a"
235
235
236 def __getattr__(self, k):
236 def __getattr__(self, k):
237 return "a"
237 return "a"
238
238
239 t = T()
239 t = T()
240 t.__getitem__ = lambda f: "b"
240 t.__getitem__ = lambda f: "b"
241 t.__getattr__ = lambda f: "b"
241 t.__getattr__ = lambda f: "b"
242 assert t[1] == "a"
242 assert t[1] == "a"
243 assert t.a == "a"
243 assert t.a == "a"
244
244
245
245
246 def test_assumption_named_tuples_share_getitem():
246 def test_assumption_named_tuples_share_getitem():
247 """Check assumption on named tuples sharing __getitem__"""
247 """Check assumption on named tuples sharing __getitem__"""
248 from typing import NamedTuple
248 from typing import NamedTuple
249
249
250 class A(NamedTuple):
250 class A(NamedTuple):
251 pass
251 pass
252
252
253 class B(NamedTuple):
253 class B(NamedTuple):
254 pass
254 pass
255
255
256 assert A.__getitem__ == B.__getitem__
256 assert A.__getitem__ == B.__getitem__
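The two assumption tests above rest on a documented CPython behaviour: implicit invocations of special methods (`t[1]`, `t.x`) look the method up on the *type*, not the instance. A minimal standalone illustration of that behaviour:

```python
# Implicit special-method lookup happens on the type, so an instance-level
# __getitem__ assignment is ignored by subscription syntax.
class T:
    def __getitem__(self, key):
        return "a"

t = T()
t.__getitem__ = lambda key: "b"  # shadows only explicit attribute access

assert t[1] == "a"               # implicit lookup still uses T.__getitem__
assert t.__getitem__(1) == "b"   # explicit access sees the instance attribute
```

This is why `guarded_eval` can safely reason about `__getitem__`/`__getattr__` at the class level (as `unbind_method` does), without worrying about per-instance overrides.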