Increase coverage of `guard_eval`
krassowski
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now support a wide variety of completion mechanism both available
7 This module now support a wide variety of completion mechanism both available
8 for normal classic Python code, as well as completer for IPython specific
8 for normal classic Python code, as well as completer for IPython specific
9 Syntax like magics.
9 Syntax like magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends not only can complete your code, but can help
14 IPython and compatible frontends not only can complete your code, but can help
15 you to input a wide range of characters. In particular we allow you to insert
15 you to input a wide range of characters. In particular we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name, or unicode long description. To do so type a backslash follow by the
22 name, or unicode long description. To do so type a backslash follow by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 Ξ±
31 Ξ±
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\GREEK SMALL LETTER ALPHA<tab>
38 \\GREEK SMALL LETTER ALPHA<tab>
39 Ξ±
39 Ξ±
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrow or
42 Only valid Python identifiers will complete. Combining characters (like arrow or
43 dots) are also available, unlike latex they need to be put after the their
43 dots) are also available, unlike latex they need to be put after the their
44 counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
44 counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
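At its core, forward completion is a name-to-character lookup. Outside of the
completer, the same mapping can be explored with the standard library's
:any:`unicodedata` module; this is a minimal illustrative sketch, not part of
the completer itself:

```python
import unicodedata

# Map a unicode long description to the character it names, as forward
# completion does for ``\\GREEK SMALL LETTER ALPHA<tab>``.
char = unicodedata.lookup("GREEK SMALL LETTER ALPHA")
print(char)  # α

# The reverse direction recovers the long description from the character.
name = unicodedata.name(char)
print(name)  # GREEK SMALL LETTER ALPHA
```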

Backward latex completion
-------------------------

It is sometimes challenging to know how to type a character. If you are using
IPython or any compatible frontend, you can prepend a backslash to the character
and press :kbd:`Tab` to expand it to its latex form.

.. code::

    \\α<tab>
    \\alpha


Both forward and backward completions can be deactivated by setting the
:any:`Completer.backslash_combining_completions` option to ``False``.


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions both using static analysis of the code and by dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and will
raise unless used in a :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if the current
IPython completer's pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is an integer without
executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
option.

Be sure to update :any:`jedi` to the latest stable version, or to try the
current development version, to get better completions.

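The guard behind these experimental APIs is an ordinary warnings filter that is
escalated to an error at import time and relaxed inside the context manager.
The pattern can be sketched in a self-contained form (``ExperimentalWarning``
and ``do_experimental_things`` are illustrative stand-ins, not IPython names):

```python
import warnings
from contextlib import contextmanager

class ExperimentalWarning(FutureWarning):
    pass  # stand-in for ProvisionalCompleterWarning

# Escalate the warning to an error, as this module does at import time.
warnings.filterwarnings("error", category=ExperimentalWarning)

def do_experimental_things():
    warnings.warn("unstable API", ExperimentalWarning)
    return 42

@contextmanager
def provisional(action="ignore"):
    # Temporarily relax the filter, mirroring ``provisionalcompleter``.
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ExperimentalWarning)
        yield

with provisional():
    ok = do_experimental_things()  # works: the warning is ignored

try:
    do_experimental_things()  # raises outside the context manager
    outcome = "no warning"
except ExperimentalWarning:
    outcome = "raised"
```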
Matchers
========

All completion routines are implemented using the unified *Matchers* API.
The matchers API is provisional and subject to change without notice.

The built-in matchers include:

- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
- :any:`IPCompleter.magic_matcher`: completions for magics,
- :any:`IPCompleter.unicode_name_matcher`,
  :any:`IPCompleter.fwd_unicode_matcher`
  and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
- :any:`IPCompleter.file_matcher`: paths to files and directories,
- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
  implementation in :any:`InteractiveShell`, which uses the IPython hooks system
  (`complete_command`) with string dispatch (including regular expressions).
  Unlike other matchers, ``custom_completer_matcher`` will not suppress
  Jedi results, to match behaviour in earlier IPython versions.

Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.

Matcher API
-----------

Simplifying some details, the ``Matcher`` interface can be described as

.. code-block::

    MatcherAPIv1 = Callable[[str], list[str]]
    MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]

    Matcher = MatcherAPIv1 | MatcherAPIv2

``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
and remains supported as the simplest way of generating completions. This is also
currently the only API supported by the IPython hooks system (`complete_command`).

To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
More precisely, the API allows omitting ``matcher_api_version`` for v1 matchers,
and requires a literal ``2`` for v2 matchers.

Once the API stabilises, future versions may relax the requirement for specifying
``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore
please do not rely on the presence of ``matcher_api_version`` for any purpose.
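In concrete terms, a v1 matcher is a plain callable mapping text to completion
strings, while a v2 matcher carries a ``matcher_api_version`` attribute set to
the literal ``2`` and returns a result record. The following standalone sketch
illustrates the two shapes; the ``token`` field and the keys of the result
dictionary are illustrative stand-ins, not a stable contract:

```python
from types import SimpleNamespace

# A v1-style matcher: text in, list of completion strings out.
def color_matcher_v1(text):
    colors = ["red", "green", "blue"]
    return [c for c in colors if c.startswith(text)]

# A v2-style matcher: receives a context object and returns a result record.
def color_matcher_v2(context):
    return {"completions": color_matcher_v1(context.token), "suppress": False}

# v2 matchers are recognised by a literal ``2`` in ``matcher_api_version``;
# v1 matchers may simply omit the attribute.
color_matcher_v2.matcher_api_version = 2

# Stand-in for a CompletionContext, for illustration only.
context = SimpleNamespace(token="gr")
result = color_matcher_v2(context)
print(result["completions"])  # ['green']
```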

Suppression of competing matchers
---------------------------------

By default, results from all matchers are combined in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results of
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a list of identifiers of matchers which
should not be suppressed to the ``MatcherResult`` under the ``do_not_suppress`` key.

The suppression behaviour is user-configurable via
:any:`IPCompleter.suppress_competing_matchers`.
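The combination rules above can be sketched as a standalone function; this is a
hypothetical illustration of the behaviour described, not the actual
implementation, and the matcher names and dictionary keys are examples only:

```python
def combine_results(ordered_results):
    # ``ordered_results`` is a list of (matcher_name, result) pairs, ordered
    # by priority (highest first). Each result dict has ``completions`` and
    # optionally ``suppress`` (bool) and ``do_not_suppress`` (identifiers of
    # matchers exempt from suppression).
    combined = []
    exempt = None  # set when a matcher has requested suppression
    for name, result in ordered_results:
        if exempt is not None and name not in exempt:
            continue  # suppressed by a higher-priority matcher
        combined.extend(result.get("completions", []))
        if exempt is None and result.get("suppress"):
            exempt = set(result.get("do_not_suppress", []))
    return combined

results = [
    ("magics", {"completions": ["%time"], "suppress": True,
                "do_not_suppress": ["files"]}),
    ("python", {"completions": ["type"]}),        # suppressed
    ("files", {"completions": ["./notes.txt"]}),  # exempt from suppression
]
print(combine_results(results))  # ['%time', './notes.txt']
```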
170 """
170 """


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org

from __future__ import annotations
import builtins as builtin_mod
import enum
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import tokenize
import time
import unicodedata
import uuid
import warnings
from ast import literal_eval
from collections import defaultdict
from contextlib import contextmanager
from dataclasses import dataclass
from functools import cached_property, partial
from types import SimpleNamespace
from typing import (
    Iterable,
    Iterator,
    List,
    Tuple,
    Union,
    Any,
    Sequence,
    Dict,
    Optional,
    TYPE_CHECKING,
    Set,
    Sized,
    TypeVar,
    Literal,
)

from IPython.core.guarded_eval import guarded_eval, EvaluationContext
from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
from IPython.utils.decorators import sphinx_options
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.docs import GENERATING_DOCUMENTATION
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import (
    Bool,
    Enum,
    Int,
    List as ListTrait,
    Unicode,
    Dict as DictTrait,
    Union as UnionTrait,
    observe,
)
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
__skip_doctest__ = True

try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False


if TYPE_CHECKING or GENERATING_DOCUMENTATION:
    from typing import cast
    from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
else:
    from typing import Generic

    def cast(type_, obj):
        """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
        return obj

    # do not require at runtime
    NotRequired = Tuple  # requires Python >=3.11
    TypedDict = Dict  # by extension of `NotRequired` requires 3.11 too
    Protocol = object  # requires Python >=3.8
    TypeAlias = Any  # requires Python >=3.10
    TypeGuard = Generic  # requires Python >=3.10
    if GENERATING_DOCUMENTATION:
        from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
# -----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be more
# fine-grained, but is it worth it for performance? While unicode has characters
# in the range 0 to 0x110000, we seem to have names for only about 10% of those
# (131808 as I write this). With the ranges below we cover them all, with a
# density of ~67%; the biggest next gap we considered only adds about 1% density
# and there are 600 gaps that would need hard coding.
_UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

# Sentinel value to signal lack of a match.
not_found = object()


class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in the :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass


warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things()  # works

    >>> completer.do_experimental_things()  # raises.

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should report
        a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements on
        any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False


def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s


def expand_user(path: str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val


def compress_user(path: str, tilde_expand: bool, tilde_val: str) -> str:
    """Does the opposite of expand_user, with its outputs."""
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path


def completions_sorting_key(word):
    """Key for sorting completions.

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in the alphabetical order
      by their name
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if "%" not in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if "%" not in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2


class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0, so should likely be removed for 7.0.
    """

    def __init__(self, name):
        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ""
        self._origin = "fake"
        self.text = "crashed"

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


_JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]


class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This function is unstable; the API may change without warning.
        It will also raise unless used in the proper context manager.

    This acts as a middle-ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how the
    code should be run/inspected, Prompt Toolkit (and other frontends) mostly
    need user-facing information:

    - which range should be replaced by what, and
    - some metadata (like the completion type), or meta-information to be
      displayed to the user.

    For debugging purposes we can also store the origin of the completion (``jedi``,
    ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(
        self,
        start: int,
        end: int,
        text: str,
        *,
        type: Optional[str] = None,
        _origin="",
        signature="",
    ) -> None:
        warnings.warn(
            "``Completion`` is a provisional API (as of IPython 6.0). "
527 "It may change without warnings. "
527 "It may change without warnings. "
528 "Use in corresponding context manager.",
528 "Use in corresponding context manager.",
529 category=ProvisionalCompleterWarning,
529 category=ProvisionalCompleterWarning,
530 stacklevel=2,
530 stacklevel=2,
531 )
531 )
532
532
533 self.start = start
533 self.start = start
534 self.end = end
534 self.end = end
535 self.text = text
535 self.text = text
536 self.type = type
536 self.type = type
537 self.signature = signature
537 self.signature = signature
538 self._origin = _origin
538 self._origin = _origin
539
539
540 def __repr__(self):
540 def __repr__(self):
541 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
541 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
542 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
542 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
543
543
544 def __eq__(self, other) -> bool:
544 def __eq__(self, other) -> bool:
545 """
545 """
546 Equality and hash do not hash the type (as some completer may not be
546 Equality and hash do not hash the type (as some completer may not be
547 able to infer the type), but are use to (partially) de-duplicate
547 able to infer the type), but are use to (partially) de-duplicate
548 completion.
548 completion.
549
549
550 Completely de-duplicating completion is a bit tricker that just
550 Completely de-duplicating completion is a bit tricker that just
551 comparing as it depends on surrounding text, which Completions are not
551 comparing as it depends on surrounding text, which Completions are not
552 aware of.
552 aware of.
553 """
553 """
554 return self.start == other.start and \
554 return self.start == other.start and \
555 self.end == other.end and \
555 self.end == other.end and \
556 self.text == other.text
556 self.text == other.text
557
557
558 def __hash__(self):
558 def __hash__(self):
559 return hash((self.start, self.end, self.text))
559 return hash((self.start, self.end, self.text))
560
560
561
561
562 class SimpleCompletion:
562 class SimpleCompletion:
563 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
563 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
564
564
565 .. warning::
565 .. warning::
566
566
567 Provisional
567 Provisional
568
568
569 This class is used to describe the currently supported attributes of
569 This class is used to describe the currently supported attributes of
570 simple completion items, and any additional implementation details
570 simple completion items, and any additional implementation details
571 should not be relied on. Additional attributes may be included in
571 should not be relied on. Additional attributes may be included in
572 future versions, and meaning of text disambiguated from the current
572 future versions, and meaning of text disambiguated from the current
573 dual meaning of "text to insert" and "text to used as a label".
573 dual meaning of "text to insert" and "text to used as a label".
574 """
574 """
575
575
576 __slots__ = ["text", "type"]
576 __slots__ = ["text", "type"]
577
577
578 def __init__(self, text: str, *, type: Optional[str] = None):
578 def __init__(self, text: str, *, type: Optional[str] = None):
579 self.text = text
579 self.text = text
580 self.type = type
580 self.type = type
581
581
582 def __repr__(self):
582 def __repr__(self):
583 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
583 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
584
584
585
585
586 class _MatcherResultBase(TypedDict):
586 class _MatcherResultBase(TypedDict):
587 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
587 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
588
588
589 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
589 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
590 matched_fragment: NotRequired[str]
590 matched_fragment: NotRequired[str]
591
591
592 #: Whether to suppress results from all other matchers (True), some
592 #: Whether to suppress results from all other matchers (True), some
593 #: matchers (set of identifiers) or none (False); default is False.
593 #: matchers (set of identifiers) or none (False); default is False.
594 suppress: NotRequired[Union[bool, Set[str]]]
594 suppress: NotRequired[Union[bool, Set[str]]]
595
595
596 #: Identifiers of matchers which should NOT be suppressed when this matcher
596 #: Identifiers of matchers which should NOT be suppressed when this matcher
597 #: requests to suppress all other matchers; defaults to an empty set.
597 #: requests to suppress all other matchers; defaults to an empty set.
598 do_not_suppress: NotRequired[Set[str]]
598 do_not_suppress: NotRequired[Set[str]]
599
599
600 #: Are completions already ordered and should be left as-is? default is False.
600 #: Are completions already ordered and should be left as-is? default is False.
601 ordered: NotRequired[bool]
601 ordered: NotRequired[bool]
602
602
603
603
604 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
604 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
605 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
605 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
606 """Result of new-style completion matcher."""
606 """Result of new-style completion matcher."""
607
607
608 # note: TypedDict is added again to the inheritance chain
608 # note: TypedDict is added again to the inheritance chain
609 # in order to get __orig_bases__ for documentation
609 # in order to get __orig_bases__ for documentation
610
610
611 #: List of candidate completions
611 #: List of candidate completions
612 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
612 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
613
613
614
614
615 class _JediMatcherResult(_MatcherResultBase):
615 class _JediMatcherResult(_MatcherResultBase):
616 """Matching result returned by Jedi (will be processed differently)"""
616 """Matching result returned by Jedi (will be processed differently)"""
617
617
618 #: list of candidate completions
618 #: list of candidate completions
619 completions: Iterator[_JediCompletionLike]
619 completions: Iterator[_JediCompletionLike]
620
620
621
621
622 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
622 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
623 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
623 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
624
624
625
625
626 @dataclass
626 @dataclass
627 class CompletionContext:
627 class CompletionContext:
628 """Completion context provided as an argument to matchers in the Matcher API v2."""
628 """Completion context provided as an argument to matchers in the Matcher API v2."""
629
629
630 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
630 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
631 # which was not explicitly visible as an argument of the matcher, making any refactor
631 # which was not explicitly visible as an argument of the matcher, making any refactor
632 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
632 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
633 # from the completer, and make substituting them in sub-classes easier.
633 # from the completer, and make substituting them in sub-classes easier.
634
634
635 #: Relevant fragment of code directly preceding the cursor.
635 #: Relevant fragment of code directly preceding the cursor.
636 #: The extraction of token is implemented via splitter heuristic
636 #: The extraction of token is implemented via splitter heuristic
637 #: (following readline behaviour for legacy reasons), which is user configurable
637 #: (following readline behaviour for legacy reasons), which is user configurable
638 #: (by switching the greedy mode).
638 #: (by switching the greedy mode).
639 token: str
639 token: str
640
640
641 #: The full available content of the editor or buffer
641 #: The full available content of the editor or buffer
642 full_text: str
642 full_text: str
643
643
644 #: Cursor position in the line (the same for ``full_text`` and ``text``).
644 #: Cursor position in the line (the same for ``full_text`` and ``text``).
645 cursor_position: int
645 cursor_position: int
646
646
647 #: Cursor line in ``full_text``.
647 #: Cursor line in ``full_text``.
648 cursor_line: int
648 cursor_line: int
649
649
650 #: The maximum number of completions that will be used downstream.
650 #: The maximum number of completions that will be used downstream.
651 #: Matchers can use this information to abort early.
651 #: Matchers can use this information to abort early.
652 #: The built-in Jedi matcher is currently excepted from this limit.
652 #: The built-in Jedi matcher is currently excepted from this limit.
653 # If not given, return all possible completions.
653 # If not given, return all possible completions.
654 limit: Optional[int]
654 limit: Optional[int]
655
655
656 @cached_property
656 @cached_property
657 def text_until_cursor(self) -> str:
657 def text_until_cursor(self) -> str:
658 return self.line_with_cursor[: self.cursor_position]
658 return self.line_with_cursor[: self.cursor_position]
659
659
660 @cached_property
660 @cached_property
661 def line_with_cursor(self) -> str:
661 def line_with_cursor(self) -> str:
662 return self.full_text.split("\n")[self.cursor_line]
662 return self.full_text.split("\n")[self.cursor_line]
663
663
664
664
665 #: Matcher results for API v2.
665 #: Matcher results for API v2.
666 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
666 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
667
667
668
668
669 class _MatcherAPIv1Base(Protocol):
669 class _MatcherAPIv1Base(Protocol):
670 def __call__(self, text: str) -> List[str]:
670 def __call__(self, text: str) -> List[str]:
671 """Call signature."""
671 """Call signature."""
672 ...
672 ...
673
673
674 #: Used to construct the default matcher identifier
674 #: Used to construct the default matcher identifier
675 __qualname__: str
675 __qualname__: str
676
676
677
677
678 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
678 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
679 #: API version
679 #: API version
680 matcher_api_version: Optional[Literal[1]]
680 matcher_api_version: Optional[Literal[1]]
681
681
682 def __call__(self, text: str) -> List[str]:
682 def __call__(self, text: str) -> List[str]:
683 """Call signature."""
683 """Call signature."""
684 ...
684 ...
685
685
686
686
687 #: Protocol describing Matcher API v1.
687 #: Protocol describing Matcher API v1.
688 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
688 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
689
689
690
690
691 class MatcherAPIv2(Protocol):
691 class MatcherAPIv2(Protocol):
692 """Protocol describing Matcher API v2."""
692 """Protocol describing Matcher API v2."""
693
693
694 #: API version
694 #: API version
695 matcher_api_version: Literal[2] = 2
695 matcher_api_version: Literal[2] = 2
696
696
697 def __call__(self, context: CompletionContext) -> MatcherResult:
697 def __call__(self, context: CompletionContext) -> MatcherResult:
698 """Call signature."""
698 """Call signature."""
699 ...
699 ...
700
700
701 #: Used to construct the default matcher identifier
701 #: Used to construct the default matcher identifier
702 __qualname__: str
702 __qualname__: str
703
703
704
704
705 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
705 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
706
706
707
707
708 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
708 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
709 api_version = _get_matcher_api_version(matcher)
709 api_version = _get_matcher_api_version(matcher)
710 return api_version == 1
710 return api_version == 1
711
711
712
712
713 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
713 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
714 api_version = _get_matcher_api_version(matcher)
714 api_version = _get_matcher_api_version(matcher)
715 return api_version == 2
715 return api_version == 2
716
716
717
717
718 def _is_sizable(value: Any) -> TypeGuard[Sized]:
718 def _is_sizable(value: Any) -> TypeGuard[Sized]:
719 """Determines whether objects is sizable"""
719 """Determines whether objects is sizable"""
720 return hasattr(value, "__len__")
720 return hasattr(value, "__len__")
721
721
722
722
723 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
723 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
724 """Determines whether objects is sizable"""
724 """Determines whether objects is sizable"""
725 return hasattr(value, "__next__")
725 return hasattr(value, "__next__")
726
726
727
727
728 def has_any_completions(result: MatcherResult) -> bool:
728 def has_any_completions(result: MatcherResult) -> bool:
729 """Check if any result includes any completions."""
729 """Check if any result includes any completions."""
730 completions = result["completions"]
730 completions = result["completions"]
731 if _is_sizable(completions):
731 if _is_sizable(completions):
732 return len(completions) != 0
732 return len(completions) != 0
733 if _is_iterator(completions):
733 if _is_iterator(completions):
734 try:
734 try:
735 old_iterator = completions
735 old_iterator = completions
736 first = next(old_iterator)
736 first = next(old_iterator)
737 result["completions"] = cast(
737 result["completions"] = cast(
738 Iterator[SimpleCompletion],
738 Iterator[SimpleCompletion],
739 itertools.chain([first], old_iterator),
739 itertools.chain([first], old_iterator),
740 )
740 )
741 return True
741 return True
742 except StopIteration:
742 except StopIteration:
743 return False
743 return False
744 raise ValueError(
744 raise ValueError(
745 "Completions returned by matcher need to be an Iterator or a Sizable"
745 "Completions returned by matcher need to be an Iterator or a Sizable"
746 )
746 )
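The iterator-peeking trick used by `has_any_completions` can be illustrated standalone. This is a hedged sketch, not the module's API: it consumes one element of a possibly-lazy iterator to test for emptiness, then chains that element back in front so the caller still sees the full stream.

```python
import itertools

def result_has_completions(result: dict) -> bool:
    # Sketch of the emptiness check: sizable containers get a cheap len(),
    # iterators get peeked and then repaired.
    completions = result["completions"]
    if hasattr(completions, "__len__"):
        return len(completions) != 0
    try:
        first = next(completions)  # peek one item from the iterator
    except StopIteration:
        return False
    # Re-attach the consumed item so no completion is lost downstream.
    result["completions"] = itertools.chain([first], completions)
    return True

r = {"completions": iter(["a", "b"])}
print(result_has_completions(r), list(r["completions"]))
```

The repair step matters: without re-chaining, the first completion would silently disappear for whichever consumer reads `result["completions"]` next.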


def completion_matcher(
    *,
    priority: Optional[float] = None,
    identifier: Optional[str] = None,
    api_version: int = 1,
):
    """Adds attributes describing the matcher.

    Parameters
    ----------
    priority : Optional[float]
        The priority of the matcher, which determines the order of execution
        of matchers. Higher priority means that the matcher will be executed
        first. Defaults to 0.
    identifier : Optional[str]
        Identifier of the matcher allowing users to modify the behaviour via
        traitlets, and also used for debugging (will be passed as ``origin``
        with the completions).

        Defaults to the matcher function's ``__qualname__`` (for example,
        ``IPCompleter.file_matcher`` for the built-in matcher defined
        as a ``file_matcher`` method of the ``IPCompleter`` class).
    api_version : Optional[int]
        Version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.
    """

    def wrapper(func: Matcher):
        func.matcher_priority = priority or 0  # type: ignore
        func.matcher_identifier = identifier or func.__qualname__  # type: ignore
        func.matcher_api_version = api_version  # type: ignore
        if TYPE_CHECKING:
            if api_version == 1:
                func = cast(MatcherAPIv1, func)
            elif api_version == 2:
                func = cast(MatcherAPIv2, func)
        return func

    return wrapper
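A usage sketch of the decorator pattern above. To stay self-contained it inlines a minimal equivalent of `completion_matcher` (attribute attachment only, no `TYPE_CHECKING` casts); the matcher name and word list are made up for illustration.

```python
def matcher_decorator(*, priority=None, identifier=None, api_version=1):
    # Minimal stand-in for ``completion_matcher``: attach metadata attributes
    # that the completer later reads via getattr with defaults.
    def wrapper(func):
        func.matcher_priority = priority or 0
        func.matcher_identifier = identifier or func.__qualname__
        func.matcher_api_version = api_version
        return func
    return wrapper

@matcher_decorator(priority=50, identifier="fruit_matcher")
def fruit_matcher(text):
    # API v1 matcher: takes the current token, returns a list of strings.
    return [w for w in ("apple", "apricot", "banana") if w.startswith(text)]

print(fruit_matcher.matcher_priority, fruit_matcher("ap"))
```

Because the metadata lives in plain attributes, undecorated legacy matchers keep working: the `_get_matcher_*` helpers below fall back to defaults (`0`, `__qualname__`, API version 1) when an attribute is absent.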


def _get_matcher_priority(matcher: Matcher):
    return getattr(matcher, "matcher_priority", 0)


def _get_matcher_id(matcher: Matcher):
    return getattr(matcher, "matcher_identifier", matcher.__qualname__)


def _get_matcher_api_version(matcher):
    return getattr(matcher, "matcher_api_version", 1)


context_matcher = partial(completion_matcher, api_version=2)


_IC = Iterable[Completion]


def _deduplicate_completions(text: str, completions: _IC) -> _IC:
    """
    Deduplicate a set of completions.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to deduplicate

    Yields
    ------
    `Completions` objects
        Completions coming from multiple sources may be different but end up
        having the same effect when applied to ``text``. If this is the case,
        this will consider the completions equal and only emit the first one
        encountered.
        Not folded in `completions()` yet for debugging purposes, and to
        detect when the IPython completer returns things that Jedi does not,
        but should be at some point.
    """
    completions = list(completions)
    if not completions:
        return

    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)

    seen = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            yield c
            seen.add(new_text)
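The key idea in `_deduplicate_completions` — two completions are duplicates if they produce the same final text once applied — can be shown without the `Completion` class. This sketch uses plain `(start, end, replacement)` tuples as hypothetical stand-ins:

```python
def dedupe_by_effect(text, completions):
    # completions: iterable of (start, end, replacement) tuples.
    completions = list(completions)
    if not completions:
        return []
    new_start = min(c[0] for c in completions)
    new_end = max(c[1] for c in completions)
    seen, out = set(), []
    for start, end, repl in completions:
        # Normalize each candidate to the widest (start, end) range before
        # comparing: pad with the surrounding text it would leave untouched.
        effective = text[new_start:start] + repl + text[end:new_end]
        if effective not in seen:
            seen.add(effective)
            out.append((start, end, repl))
    return out

# "pri" -> "print": one matcher replaces the whole token, another only
# appends "nt" at the cursor; both have the same effect, so one survives.
print(dedupe_by_effect("pri", [(0, 3, "print"), (3, 3, "nt")]))
```

This is why comparing `text` fields alone is not enough: `"print"` and `"nt"` differ as strings but are the same edit once their ranges are taken into account.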


def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
    """
    Rectify a set of completions to all have the same ``start`` and ``end``.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in a proper context manager.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to rectify
    _debug : bool
        Log failed completion

    Notes
    -----
    :any:`jedi.api.classes.Completion` s returned by Jedi may not have the
    same start and end, though the Jupyter Protocol requires them to behave
    like so. This will readjust the completions to have the same ``start``
    and ``end`` by padding both extremities with surrounding text.

    During stabilisation this should support a ``_debug`` option to log which
    completions are returned by the IPython completer but not found by Jedi,
    in order to make upstream bug reports.
    """
    warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
                  "It may change without warnings. "
                  "Use in corresponding context manager.",
                  category=ProvisionalCompleterWarning, stacklevel=2)

    completions = list(completions)
    if not completions:
        return
    starts = (c.start for c in completions)
    ends = (c.end for c in completions)

    new_start = min(starts)
    new_end = max(ends)

    seen_jedi = set()
    seen_python_matches = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if c._origin == 'jedi':
            seen_jedi.add(new_text)
        elif c._origin == 'IPCompleter.python_matches':
            seen_python_matches.add(new_text)
        yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
    diff = seen_python_matches.difference(seen_jedi)
    if diff and _debug:
        print('IPython.python matches have extras:', diff)


if sys.platform == 'win32':
    DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
else:
    DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'

GREEDY_DELIMS = ' =\r\n'


class CompletionSplitter(object):
    """An object to split an input line in a manner similar to readline.

    By having our own implementation, we can expose readline-like completion
    in a uniform manner to all frontends. This object only needs to be given
    the line of text to be split and the cursor position on said line, and it
    returns the 'word' to be completed on at the cursor after splitting the
    entire line.

    What characters are used as splitting delimiters can be controlled by
    setting the ``delims`` attribute (this is a property that internally
    automatically builds the necessary regular expression)"""

    # Private interface

    # A string of delimiter characters. The default value makes sense for
    # IPython's most typical usage patterns.
    _delims = DELIMS

    # The expression (a normal string) to be compiled into a regular expression
    # for actual splitting. We store it as an attribute mostly for ease of
    # debugging, since this type of code can be so tricky to debug.
    _delim_expr = None

    # The regular expression that does the actual splitting
    _delim_re = None

    def __init__(self, delims=None):
        delims = CompletionSplitter._delims if delims is None else delims
        self.delims = delims

    @property
    def delims(self):
        """Return the string of delimiter characters."""
        return self._delims

    @delims.setter
    def delims(self, delims):
        """Set the delimiters for line splitting."""
        expr = '[' + ''.join('\\' + c for c in delims) + ']'
        self._delim_re = re.compile(expr)
        self._delims = delims
        self._delim_expr = expr

    def split_line(self, line, cursor_pos=None):
        """Split a line of text with a cursor at the given position."""
        l = line if cursor_pos is None else line[:cursor_pos]
        return self._delim_re.split(l)[-1]
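The splitting behaviour above can be reproduced standalone. This sketch rebuilds the same regex from the Unix branch of `DELIMS` to show how the token under the cursor is extracted: split the text up to the cursor on any delimiter, keep the last piece.

```python
import re

# Same delimiter set as the non-Windows branch of DELIMS above.
DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
delim_re = re.compile('[' + ''.join('\\' + c for c in DELIMS) + ']')

def split_line(line, cursor_pos=None):
    # Keep only text up to the cursor, split on delimiters, take the last piece.
    text = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(text)[-1]

print(split_line("result = obj.attr"))        # '.' is not a delimiter
print(split_line("print(va", cursor_pos=8))   # '(' is a delimiter
```

Note that `.` is deliberately absent from `DELIMS`, so attribute chains like `obj.attr` survive as a single token for the completer to work on, while `(`, `=`, quotes and whitespace all cut the token.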


class Completer(Configurable):

    greedy = Bool(
        False,
        help="""Activate greedy completion.

        .. deprecated:: 8.8
            Use :any:`Completer.evaluation` and :any:`Completer.auto_close_dict_keys` instead.

        When enabled in IPython 8.8 or newer, changes configuration as follows:

        - ``Completer.evaluation = 'unsafe'``
        - ``Completer.auto_close_dict_keys = True``
        """,
    ).tag(config=True)

    evaluation = Enum(
        ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
        default_value="limited",
        help="""Policy for code evaluation under completion.

        Each successive option enables more eager evaluation for better
        completion suggestions, including for nested dictionaries, nested lists,
        or even results of function calls.
        Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
        code on :kbd:`Tab` with potentially unwanted or dangerous side effects.

        Allowed values are:

        - ``forbidden``: no evaluation of code is permitted,
        - ``minimal``: evaluation of literals and access to built-in namespace;
          no item/attribute evaluation, no access to locals/globals,
          no evaluation of any operations or comparisons,
        - ``limited``: access to all namespaces, evaluation of hard-coded methods
          (for example: :any:`dict.keys`, :any:`object.__getattr__`,
          :any:`object.__getitem__`) on allow-listed objects (for example:
          :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
        - ``unsafe``: evaluation of all methods and function calls but not of
          syntax with side-effects like ``del x``,
        - ``dangerous``: completely arbitrary evaluation.
        """,
    ).tag(config=True)
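Since these are traitlets, the policy can be selected from a configuration file. A sketch of an `ipython_config.py` fragment (whether your frontend configures this via the `Completer` base class or a subclass such as `IPCompleter` is an assumption here; adjust the section name accordingly):

```python
# ipython_config.py -- sketch; `get_config` is injected by IPython
# when this file is loaded, it is not defined here.
c = get_config()  # noqa
c.Completer.evaluation = "minimal"  # most conservative policy short of "forbidden"
c.Completer.auto_close_dict_keys = True
```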

    use_jedi = Bool(
        default_value=JEDI_INSTALLED,
        help="Experimental: Use Jedi to generate autocompletions. "
        "Defaults to True if Jedi is installed.",
    ).tag(config=True)

    jedi_compute_type_timeout = Int(
        default_value=400,
        help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
        Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
        performance by preventing Jedi from building its cache.
        """,
    ).tag(config=True)

    debug = Bool(
        default_value=False,
        help="Enable debug for the Completer. Mostly print extra "
        "information for experimental jedi integration.",
    ).tag(config=True)

    backslash_combining_completions = Bool(
        True,
        help="Enable unicode completions, e.g. \\alpha<tab> . "
        "Includes completion of latex commands, unicode names, and expanding "
        "unicode characters back to latex commands.",
    ).tag(config=True)

    auto_close_dict_keys = Bool(
        False,
        help="""
        Enable auto-closing dictionary keys.

        When enabled, string keys will be suffixed with a final quote
        (matching the opening quote), tuple keys will also receive a
        separating comma if needed, and keys which are final will
        receive a closing bracket (``]``).
        """,
    ).tag(config=True)

    def __init__(self, namespace=None, global_namespace=None, **kwargs):
        """Create a new completer for the command line.

        Completer(namespace=ns, global_namespace=ns2) -> completer instance.

        If unspecified, the default namespace where completions are performed
        is __main__ (technically, __main__.__dict__). Namespaces should be
        given as dictionaries.

        An optional second namespace can be given. This allows the completer
        to handle cases where both the local and global scopes need to be
        distinguished.
        """

        # Don't bind to namespace quite yet, but flag whether the user wants a
        # specific namespace or to use __main__.__dict__. This will allow us
        # to bind to __main__.__dict__ at completion time, not now.
        if namespace is None:
            self.use_main_ns = True
        else:
            self.use_main_ns = False
            self.namespace = namespace

        # The global namespace, if given, can be bound directly
        if global_namespace is None:
            self.global_namespace = {}
        else:
            self.global_namespace = global_namespace

        self.custom_matchers = []

        super(Completer, self).__init__(**kwargs)

    def complete(self, text, state):
        """Return the next possible completion for 'text'.

        This is called successively with state == 0, 1, 2, ... until it
        returns None. The completion should begin with 'text'.
        """
        if self.use_main_ns:
            self.namespace = __main__.__dict__

        if state == 0:
            if "." in text:
                self.matches = self.attr_matches(text)
            else:
                self.matches = self.global_matches(text)
        try:
            return self.matches[state]
        except IndexError:
            return None
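The state-based calling convention above is the readline completer protocol; a minimal sketch of how a frontend would drive it (the in-memory `matches` list is a stand-in for what `attr_matches`/`global_matches` might produce):

```python
# Hypothetical match list, standing in for self.matches.
matches = ["print", "property"]

def complete(text, state):
    # readline calls this with state = 0, 1, 2, ... until None is returned.
    try:
        return matches[state]
    except IndexError:
        return None

results = []
state = 0
while True:
    m = complete("pr", state)
    if m is None:
        break
    results.append(m)
    state += 1
print(results)  # ['print', 'property']
```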

    def global_matches(self, text):
        """Compute matches when text is a simple name.

        Return a list of all keywords, built-in functions and names currently
        defined in self.namespace or self.global_namespace that match.
        """
        matches = []
        match_append = matches.append
        n = len(text)
        for lst in [
            keyword.kwlist,
            builtin_mod.__dict__.keys(),
            list(self.namespace.keys()),
            list(self.global_namespace.keys()),
        ]:
            for word in lst:
                if word[:n] == text and word != "__builtins__":
                    match_append(word)

        snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
        for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
            shortened = {
                "_".join([sub[0] for sub in word.split("_")]): word
                for word in lst
                if snake_case_re.match(word)
            }
            for word in shortened.keys():
                if word[:n] == text and word != "__builtins__":
                    match_append(shortened[word])
        return matches
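The second loop above implements abbreviation matching for snake_case names; a standalone sketch with made-up names (note the caveat that colliding abbreviations keep only the last name):

```python
import re

# Names that might live in a user namespace (hypothetical).
names = ["data_frame", "dump_file", "dataframe"]

# Same regex as above: at least two non-empty, underscore-separated segments.
snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

# Map the abbreviation built from each segment's first character back to
# the full name; "dataframe" has no underscore, so it is skipped.
shortened = {
    "_".join(sub[0] for sub in name.split("_")): name
    for name in names
    if snake_case_re.match(name)
}
print(shortened)  # {'d_f': 'dump_file'} -- later names overwrite earlier ones
```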

    def attr_matches(self, text):
        """Compute matches when text contains a dot.

        Assuming the text is of the form NAME.NAME....[NAME], and is
        evaluatable in self.namespace or self.global_namespace, it will be
        evaluated and its attributes (as revealed by dir()) are used as
        possible completions. (For class instances, class members are
        also considered.)

        WARNING: this can still invoke arbitrary C code, if an object
        with a __getattr__ hook is evaluated.
        """
        m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
        if not m2:
            return []
        expr, attr = m2.group(1, 2)

        obj = self._evaluate_expr(expr)

        if obj is not_found:
            return []

        if self.limit_to__all__ and hasattr(obj, '__all__'):
            words = get__all__entries(obj)
        else:
            words = dir2(obj)

        try:
            words = generics.complete_object(obj, words)
        except TryNext:
            pass
        except AssertionError:
            raise
        except Exception:
            # Silence errors from completion function
            #raise # dbg
            pass
        # Build match list to return
        n = len(attr)
        return ["%s.%s" % (expr, w) for w in words if w[:n] == attr]

    def _evaluate_expr(self, expr):
        obj = not_found
        done = False
        while not done and expr:
            try:
                obj = guarded_eval(
                    expr,
                    EvaluationContext(
                        globals=self.global_namespace,
                        locals=self.namespace,
                        evaluation=self.evaluation,
                    ),
                )
                done = True
            except Exception as e:
                if self.debug:
                    print("Evaluation exception", e)
                # trim the expression to remove any invalid prefix,
                # e.g. the user starts `(d[`, so we get `expr = '(d'`,
                # where the parenthesis is not closed.
                # TODO: make this faster by reusing parts of the computation?
                expr = expr[1:]
        return obj
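The retry loop above can be illustrated standalone; here `ast.literal_eval` stands in for `guarded_eval` (an assumption made purely so the sketch has no IPython dependency):

```python
import ast

def evaluate_with_trimming(expr):
    """Trim invalid leading characters until the expression evaluates."""
    while expr:
        try:
            return ast.literal_eval(expr)
        except Exception:
            # e.g. the user typed `([1, 2]` -- drop the unmatched leading
            # character and retry, mirroring `expr = expr[1:]` above.
            expr = expr[1:]
    return None

print(evaluate_with_trimming("([1, 2]"))  # [1, 2]
```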


def get__all__entries(obj):
    """Return the strings in the ``__all__`` attribute."""
    try:
        words = getattr(obj, '__all__')
    except Exception:
        return []

    return [w for w in words if isinstance(w, str)]
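A quick illustration of the filtering, using a throwaway module object:

```python
import types

# Hypothetical module whose __all__ contains a non-string entry.
mod = types.ModuleType("demo")
mod.__all__ = ["foo", 42, "bar"]

words = getattr(mod, "__all__", [])
# Non-strings are dropped, as in get__all__entries:
print([w for w in words if isinstance(w, str)])  # ['foo', 'bar']
```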


class _DictKeyState(enum.Flag):
    """Represent state of the key match in context of other possible matches.

    - given ``d1 = {'a': 1}``, completion on ``d1['<tab>`` will yield ``{'a': END_OF_ITEM}`` as there is no tuple.
    - given ``d2 = {('a', 'b'): 1}``: ``d2['a', '<tab>`` will yield ``{'b': END_OF_TUPLE}`` as there are no tuple members to add beyond ``'b'``.
    - given ``d3 = {('a', 'b'): 1}``: ``d3['<tab>`` will yield ``{'a': IN_TUPLE}`` as ``'a'`` can be added.
    - given ``d4 = {'a': 1, ('a', 'b'): 2}``: ``d4['<tab>`` will yield ``{'a': END_OF_ITEM | END_OF_TUPLE}``
    """

    BASELINE = 0
    END_OF_ITEM = enum.auto()
    END_OF_TUPLE = enum.auto()
    IN_TUPLE = enum.auto()


def _parse_tokens(c):
    """Parse tokens even if there is an error."""
    tokens = []
    token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(token_generator))
        except tokenize.TokenError:
            return tokens
        except StopIteration:
            return tokens
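This matters because completion prefixes are usually syntactically incomplete; a sketch showing that the leading tokens survive a tokenizer error (the helper is duplicated so the block is self-contained):

```python
import tokenize

def parse_tokens(c):
    # Collect tokens until the generator is exhausted or errors out.
    tokens = []
    token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(token_generator))
        except (tokenize.TokenError, StopIteration):
            return tokens

# `d['ab` is not valid Python, but we still recover the leading tokens:
toks = parse_tokens("d['ab")
print([t.string for t in toks[:2]])  # ['d', '[']
```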


def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
    """Match any valid Python numeric literal in a prefix of dictionary keys.

    References:
    - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
    - https://docs.python.org/3/library/tokenize.html
    """
    if prefix[-1].isspace():
        # if user typed a space we do not have anything to complete
        # even if there was a valid number token before
        return None
    tokens = _parse_tokens(prefix)
    rev_tokens = reversed(tokens)
    skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
    number = None
    for token in rev_tokens:
        if token.type in skip_over:
            continue
        if number is None:
            if token.type == tokenize.NUMBER:
                number = token.string
                continue
            else:
                # we did not match a number
                return None
        if token.type == tokenize.OP:
            if token.string == ",":
                break
            if token.string in {"+", "-"}:
                number = token.string + number
            else:
                return None
    return number
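A simplified standalone sketch of the reversed-token scan (it handles a single optional sign only, unlike the full helper above):

```python
import io
import tokenize

def trailing_number(prefix):
    # Tokenize as far as possible; incomplete input raises TokenError.
    tokens = []
    gen = tokenize.generate_tokens(io.StringIO(prefix).readline)
    try:
        for tok in gen:
            tokens.append(tok)
    except tokenize.TokenError:
        pass
    number = None
    for tok in reversed(tokens):
        if tok.type in {tokenize.ENDMARKER, tokenize.NEWLINE}:
            continue
        if number is None:
            if tok.type != tokenize.NUMBER:
                return None
            number = tok.string
            continue
        # one optional sign directly before the number
        if tok.type == tokenize.OP and tok.string in {"+", "-"}:
            number = tok.string + number
        break
    return number

print(trailing_number("d[-12"))  # -12
```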


_INT_FORMATS = {
    "0b": bin,
    "0o": oct,
    "0x": hex,
}


def match_dict_keys(
    keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
    prefix: str,
    delims: str,
    extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
) -> Tuple[str, int, Dict[str, _DictKeyState]]:
    """Used by dict_key_matches, matching the prefix to a list of keys.

    Parameters
    ----------
    keys
        List of keys in the dictionary currently being completed.
    prefix
        Part of the text already typed by the user. E.g. `mydict[b'fo`
    delims
        String of delimiters to consider when finding the current key.
    extra_prefix : optional
        Part of the text already typed in multi-key index cases. E.g. for
        `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.

    Returns
    -------
    A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
    ``quote`` being the quote that needs to be used to close the current string,
    ``token_start`` the position where the replacement should start occurring,
    and ``matched`` a dictionary of replacement/completion keys on keys and
    values indicating the state of each match.
    """
    prefix_tuple = extra_prefix if extra_prefix else ()

    prefix_tuple_size = sum(
        [
            # for pandas, do not count slices as taking space
            not isinstance(k, slice)
            for k in prefix_tuple
        ]
    )
    text_serializable_types = (str, bytes, int, float, slice)

    def filter_prefix_tuple(key):
        # Reject too short keys
        if len(key) <= prefix_tuple_size:
            return False
        # Reject keys which cannot be serialised to text
        for k in key:
            if not isinstance(k, text_serializable_types):
                return False
        # Reject keys that do not match the prefix
        for k, pt in zip(key, prefix_tuple):
            if k != pt and not isinstance(pt, slice):
                return False
        # All checks passed!
        return True

    filtered_key_is_final: Dict[
        Union[str, bytes, int, float], _DictKeyState
    ] = defaultdict(lambda: _DictKeyState.BASELINE)

    for k in keys:
        # If at least one of the matches is not final, mark as undetermined.
        # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
        # `111` appears final on first match but is not final on the second.

        if isinstance(k, tuple):
            if filter_prefix_tuple(k):
                key_fragment = k[prefix_tuple_size]
                filtered_key_is_final[key_fragment] |= (
                    _DictKeyState.END_OF_TUPLE
                    if len(k) == prefix_tuple_size + 1
                    else _DictKeyState.IN_TUPLE
                )
        elif prefix_tuple_size > 0:
            # we are completing a tuple but this key is not a tuple,
            # so we should ignore it
            pass
        else:
            if isinstance(k, text_serializable_types):
                filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM

    filtered_keys = filtered_key_is_final.keys()

    if not prefix:
        return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}

    quote_match = re.search("(?:\"|')", prefix)
    is_user_prefix_numeric = False

    if quote_match:
        quote = quote_match.group()
        valid_prefix = prefix + quote
        try:
            prefix_str = literal_eval(valid_prefix)
        except Exception:
            return "", 0, {}
    else:
        # If it does not look like a string, let's assume
        # we are dealing with a number or variable.
        number_match = _match_number_in_dict_key_prefix(prefix)

        # We do not want the key matcher to suggest variable names so we yield:
        if number_match is None:
            # The alternative would be to assume that the user forgot the quote
            # and if the substring matches, suggest adding it at the start.
            return "", 0, {}

        prefix_str = number_match
        is_user_prefix_numeric = True
        quote = ""

    pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
    token_match = re.search(pattern, prefix, re.UNICODE)
    assert token_match is not None  # silence mypy
    token_start = token_match.start()
    token_prefix = token_match.group()

    matched: Dict[str, _DictKeyState] = {}

    str_key: Union[str, bytes]

    for key in filtered_keys:
        if isinstance(key, (int, float)):
            # This key is a number but the user typed a string prefix; skip it.
            if not is_user_prefix_numeric:
                continue
            str_key = str(key)
            if isinstance(key, int):
                int_base = prefix_str[:2].lower()
                # if the user typed an integer using binary/oct/hex notation:
                if int_base in _INT_FORMATS:
                    int_format = _INT_FORMATS[int_base]
                    str_key = int_format(key)
        else:
            # This key is a string but the user typed a numeric prefix; skip it.
            if is_user_prefix_numeric:
                continue
            str_key = key
        try:
            if not str_key.startswith(prefix_str):
                continue
        except (AttributeError, TypeError, UnicodeError):
            # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
            continue

        # reformat remainder of key to begin with prefix
        rem = str_key[len(prefix_str) :]
        # force repr wrapped in '
        rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
        rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
        if quote == '"':
            # The entered prefix is quoted with ",
            # but the match is quoted with '.
            # A contained " hence needs escaping for comparison:
            rem_repr = rem_repr.replace('"', '\\"')

        # then reinsert prefix from start of token
        match = "%s%s" % (token_prefix, rem_repr)

        matched[match] = filtered_key_is_final[key]
    return quote, token_start, matched
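The `repr` manipulation above is subtle; a standalone sketch of how a key remainder is escaped as if wrapped in single quotes (the sample key is made up):

```python
# Remainder of a hypothetical key containing a single quote.
rem = "it's"

# Append a `"` sentinel so repr() is forced to use single quotes,
# escaping any embedded single quote for us.
rem_repr = repr(rem + '"')
# Strip repr's opening quote and the trailing sentinel-plus-quote:
rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
print(rem_repr)  # it\'s
```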


def cursor_to_position(text: str, line: int, column: int) -> int:
    """
    Convert the (line, column) position of the cursor in text to an offset in a
    string.

    Parameters
    ----------
    text : str
        The text in which to calculate the cursor offset
    line : int
        Line of the cursor; 0-indexed
    column : int
        Column of the cursor; 0-indexed

    Returns
    -------
    Position of the cursor in ``text``, 0-indexed.

    See Also
    --------
    position_to_cursor : reciprocal of this function
    """
    lines = text.split('\n')
    assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))

    return sum(len(l) + 1 for l in lines[:line]) + column
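The two cursor helpers are inverses of each other; a small round-trip check (both formulas are inlined so the block stands alone):

```python
text = "ab\ncd"

# cursor_to_position: (line=1, column=1) -> offset
line, column = 1, 1
lines = text.split('\n')
offset = sum(len(l) + 1 for l in lines[:line]) + column
print(offset)  # 4

# position_to_cursor: offset -> (line, column)
before = text[:offset]
print((before.count('\n'), len(before.split('\n')[-1])))  # (1, 1)
```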


def position_to_cursor(text: str, offset: int) -> Tuple[int, int]:
    """
    Convert the position of the cursor in text (0-indexed) to a line
    number (0-indexed) and a column number (0-indexed) pair.

    Position should be a valid position in ``text``.

    Parameters
    ----------
    text : str
        The text in which to calculate the cursor offset
    offset : int
        Position of the cursor in ``text``, 0-indexed.

    Returns
    -------
    (line, column) : (int, int)
        Line of the cursor; 0-indexed, column of the cursor 0-indexed

    See Also
    --------
    cursor_to_position : reciprocal of this function
    """

    assert 0 <= offset <= len(text), "0 <= %s <= %s" % (offset, len(text))

    before = text[:offset]
    blines = before.split('\n')  # ! splitlines trims a trailing \n
    line = before.count('\n')
    col = len(blines[-1])
    return line, col
1494
1495
1495
1496
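# Illustrative sketch (not part of the module API) of how the two helpers
# above round-trip: for ``text = "ab\ncd"``, line 1 column 1 is offset 4
# (two characters of the first line, plus its newline, plus one column),
# and the reverse mapping recovers the same (line, column) pair.
#
#     text = "ab\ncd"
#     offset = cursor_to_position(text, 1, 1)   # 2 chars + 1 newline + 1 col = 4
#     assert offset == 4
#     assert position_to_cursor(text, offset) == (1, 1)
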
def _safe_isinstance(obj, module, class_name, *attrs):
    """Check if `obj` is an instance of `module.class_name`, if the module is loaded.

    Returns a falsy value (None) without importing the module when it has not
    been imported yet.
    """
    if module in sys.modules:
        m = sys.modules[module]
        for attr in [class_name, *attrs]:
            m = getattr(m, attr)
        return isinstance(obj, m)


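# Illustrative, hypothetical usage of the lazy check above: only
# ``sys.modules`` is consulted, so the target module is never imported as a
# side effect of completion. If it has not been imported yet the call simply
# returns None; extra ``attrs`` walk nested attributes of the module.
#
#     _safe_isinstance([], "numpy", "ndarray")             # None unless numpy is imported
#     _safe_isinstance(obj, "numpy", "ma", "MaskedArray")  # checks numpy.ma.MaskedArray
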
@context_matcher()
def back_unicode_name_matcher(context: CompletionContext):
    """Match Unicode characters back to Unicode name

    Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
    """
    fragment, matches = back_unicode_name_matches(context.text_until_cursor)
    return _convert_matcher_v1_result_to_v2(
        matches, type="unicode", fragment=fragment, suppress_if_matches=True
    )


def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
    """Match Unicode characters back to Unicode name

    This does ``☃`` -> ``\\snowman``

    Note that snowman is not a valid python3 combining character but will be expanded.
    Though it will not recombine back to the snowman character by the completion machinery.

    Neither will this back-complete standard sequences like \\n, \\b, ...

    .. deprecated:: 8.6
        You can use :meth:`back_unicode_name_matcher` instead.

    Returns
    -------
    Return a tuple with two elements:

    - The Unicode character that was matched (preceded with a backslash), or
      an empty string,
    - a sequence (of 1) with the name of the matched Unicode character,
      preceded by a backslash, or empty if no match.
    """
    if len(text) < 2:
        return '', ()
    maybe_slash = text[-2]
    if maybe_slash != '\\':
        return '', ()

    char = text[-1]
    # no expand on quote for completion in strings.
    # nor backcomplete standard ascii keys
    if char in string.ascii_letters or char in ('"', "'"):
        return '', ()
    try:
        unic = unicodedata.name(char)
        return '\\' + char, ('\\' + unic,)
    except (KeyError, ValueError):
        # unicodedata.name raises ValueError for characters without a name
        pass
    return '', ()


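# Illustrative example (hypothetical inputs, not executed at import time):
#
#     back_unicode_name_matches("complete \\☃")
#     # -> ('\\☃', ('\\SNOWMAN',))   since unicodedata.name('☃') == 'SNOWMAN'
#     back_unicode_name_matches("no backslash ☃")
#     # -> ('', ())                  text[-2] is not a backslash
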
@context_matcher()
def back_latex_name_matcher(context: CompletionContext):
    """Match latex characters back to unicode name

    Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
    """
    fragment, matches = back_latex_name_matches(context.text_until_cursor)
    return _convert_matcher_v1_result_to_v2(
        matches, type="latex", fragment=fragment, suppress_if_matches=True
    )


def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
    """Match latex characters back to unicode name

    This does ``\\ℵ`` -> ``\\aleph``

    .. deprecated:: 8.6
        You can use :meth:`back_latex_name_matcher` instead.
    """
    if len(text) < 2:
        return '', ()
    maybe_slash = text[-2]
    if maybe_slash != '\\':
        return '', ()

    char = text[-1]
    # no expand on quote for completion in strings.
    # nor backcomplete standard ascii keys
    if char in string.ascii_letters or char in ('"', "'"):
        return '', ()
    try:
        latex = reverse_latex_symbol[char]
        # '\\' replaces the \ as well
        return '\\' + char, [latex]
    except KeyError:
        pass
    return '', ()


def _formatparamchildren(parameter) -> str:
    """
    Get parameter name and value from Jedi Private API

    Jedi does not expose a simple way to get `param=value` from its API.

    Parameters
    ----------
    parameter
        Jedi's function `Param`

    Returns
    -------
    A string like 'a', 'b=1', '*args', '**kwargs'

    """
    description = parameter.description
    if not description.startswith('param '):
        raise ValueError('Jedi function parameter description has changed format. '
                         'Expected "param ...", found %r.' % description)
    return description[6:]


def _make_signature(completion) -> str:
    """
    Make the signature from a jedi completion

    Parameters
    ----------
    completion : jedi.Completion
        object does not complete a function type

    Returns
    -------
    a string consisting of the function signature, with the parentheses but
    without the function name. For example:
    `(a, *args, b=1, **kwargs)`

    """
    # it looks like this might work on jedi 0.17
    if hasattr(completion, 'get_signatures'):
        signatures = completion.get_signatures()
        if not signatures:
            return '(?)'

        c0 = signatures[0]
        return '(' + c0.to_string().split('(', maxsplit=1)[1]

    return '(%s)' % ', '.join([f for f in (_formatparamchildren(p)
                               for signature in completion.get_signatures()
                               for p in signature.defined_names()) if f])


_CompleteResult = Dict[str, MatcherResult]


DICT_MATCHER_REGEX = re.compile(
    r"""(?x)
    (  # match dict-referring - or any get item object - expression
        .+
    )
    \[   # open bracket
    \s*  # and optional whitespace
    # Capture any number of serializable objects (e.g. "a", "b", 'c')
    # and slices
    ((?:(?:
        (?:  # closed string
            [uUbB]?  # string prefix (r not handled)
            (?:
                '(?:[^']|(?<!\\)\\')*'
            |
                "(?:[^"]|(?<!\\)\\")*"
            )
        )
    |
        # capture integers and slices
        (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
    |
        # integer in bin/hex/oct notation
        0[bBxXoO]_?(?:\w|\d)+
    )
    \s*,\s*
    )*)
    ((?:
        (?:  # unclosed string
            [uUbB]?  # string prefix (r not handled)
            (?:
                '(?:[^']|(?<!\\)\\')*
            |
                "(?:[^"]|(?<!\\)\\")*
            )
        )
    |
        # unfinished integer
        (?:[-+]?\d+)
    |
        # integer in bin/hex/oct notation
        0[bBxXoO]_?(?:\w|\d)+
    )
    )?
    $
    """
)


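# Illustrative sketch of the three capture groups (hypothetical input): for
# ``d["a", 'b`` the regex captures the subscripted expression, the already
# closed keys, and the trailing unclosed token, in that order.
#
#     m = DICT_MATCHER_REGEX.match("d[\"a\", 'b")
#     # m.groups() -> ('d', '"a", ', "'b")
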
def _convert_matcher_v1_result_to_v2(
    matches: Sequence[str],
    type: str,
    fragment: Optional[str] = None,
    suppress_if_matches: bool = False,
) -> SimpleMatcherResult:
    """Utility to help with transition"""
    result = {
        "completions": [SimpleCompletion(text=match, type=type) for match in matches],
        "suppress": bool(matches) if suppress_if_matches else False,
    }
    if fragment is not None:
        result["matched_fragment"] = fragment
    return cast(SimpleMatcherResult, result)


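# Minimal sketch of the conversion above (hypothetical call):
#
#     _convert_matcher_v1_result_to_v2(["foo", "bar"], type="magic", fragment="f")
#     # -> {'completions': [SimpleCompletion(text='foo', type='magic'),
#     #                     SimpleCompletion(text='bar', type='magic')],
#     #     'suppress': False,
#     #     'matched_fragment': 'f'}
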
class IPCompleter(Completer):
    """Extension of the completer class with IPython-specific features"""

    @observe('greedy')
    def _greedy_changed(self, change):
        """update the splitter and readline delims when greedy is changed"""
        if change["new"]:
            self.evaluation = "unsafe"
            self.auto_close_dict_keys = True
            self.splitter.delims = GREEDY_DELIMS
        else:
            self.evaluation = "limited"
            self.auto_close_dict_keys = False
            self.splitter.delims = DELIMS

    dict_keys_only = Bool(
        False,
        help="""
        Whether to show dict key matches only.

        (disables all matchers except for `IPCompleter.dict_key_matcher`).
        """,
    )

    suppress_competing_matchers = UnionTrait(
        [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
        default_value=None,
        help="""
        Whether to suppress completions from other *Matchers*.

        When set to ``None`` (default) the matchers will attempt to auto-detect
        whether suppression of other matchers is desirable. For example, at
        the beginning of a line followed by `%` we expect a magic completion
        to be the only applicable option, and after ``my_dict['`` we usually
        expect a completion with an existing dictionary key.

        If you want to disable this heuristic and see completions from all matchers,
        set ``IPCompleter.suppress_competing_matchers = False``.
        To disable the heuristic for specific matchers provide a dictionary mapping:
        ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.

        Set ``IPCompleter.suppress_competing_matchers = True`` to limit
        completions to the set of matchers with the highest priority;
        this is equivalent to ``IPCompleter.merge_completions = False`` and
        can be beneficial for performance, but will sometimes omit relevant
        candidates from matchers further down the priority list.
        """,
    ).tag(config=True)

    merge_completions = Bool(
        True,
        help="""Whether to merge completion results into a single list

        If False, only the completion results from the first non-empty
        completer will be returned.

        As of version 8.6.0, setting the value to ``False`` is an alias for
        ``IPCompleter.suppress_competing_matchers = True``.
        """,
    ).tag(config=True)

    disable_matchers = ListTrait(
        Unicode(),
        help="""List of matchers to disable.

        The list should contain matcher identifiers (see :any:`completion_matcher`).
        """,
    ).tag(config=True)

    omit__names = Enum(
        (0, 1, 2),
        default_value=2,
        help="""Instruct the completer to omit private method names

        Specifically, when completing on ``object.<tab>``.

        When 2 [default]: all names that start with '_' will be excluded.

        When 1: all 'magic' names (``__foo__``) will be excluded.

        When 0: nothing will be excluded.
        """
    ).tag(config=True)
    limit_to__all__ = Bool(
        False,
        help="""
        DEPRECATED as of version 5.0.

        Instruct the completer to use __all__ for the completion

        Specifically, when completing on ``object.<tab>``.

        When True: only those names in obj.__all__ will be included.

        When False [default]: the __all__ attribute is ignored
        """,
    ).tag(config=True)

    profile_completions = Bool(
        default_value=False,
        help="If True, emit profiling data for completion subsystem using cProfile."
    ).tag(config=True)

    profiler_output_dir = Unicode(
        default_value=".completion_profiles",
        help="Template for path at which to output profile data for completions."
    ).tag(config=True)

    @observe('limit_to__all__')
    def _limit_to_all_changed(self, change):
        warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
                      'value has been deprecated since IPython 5.0; it will be made '
                      'to have no effect and then removed in a future version of IPython.',
                      UserWarning)

    def __init__(
        self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
    ):
        """IPCompleter() -> completer

        Return a completer object.

        Parameters
        ----------
        shell
            a pointer to the ipython shell itself.  This is needed
            because this completer knows about magic functions, and those can
            only be accessed via the ipython instance.
        namespace : dict, optional
            an optional dict where completions are performed.
        global_namespace : dict, optional
            secondary optional dict for completions, to
            handle cases (such as IPython embedded inside functions) where
            both Python scopes are visible.
        config : Config
            traitlet's config object
        **kwargs
            passed to super class unmodified.
        """

        self.magic_escape = ESC_MAGIC
        self.splitter = CompletionSplitter()

        # _greedy_changed() depends on splitter and readline being defined:
        super().__init__(
            namespace=namespace,
            global_namespace=global_namespace,
            config=config,
            **kwargs,
        )

        # List where completion matches will be stored
        self.matches = []
        self.shell = shell
        # Regexp to split filenames with spaces in them
        self.space_name_re = re.compile(r'([^\\] )')
        # Hold a local ref. to glob.glob for speed
        self.glob = glob.glob

        # Determine if we are running on 'dumb' terminals, like (X)Emacs
        # buffers, to avoid completion problems.
        term = os.environ.get('TERM', 'xterm')
        self.dumb_terminal = term in ['dumb', 'emacs']

        # Special handling of backslashes needed in win32 platforms
        if sys.platform == "win32":
            self.clean_glob = self._clean_glob_win32
        else:
            self.clean_glob = self._clean_glob

        # regexp to parse docstring for function signature
        self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
        self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
        # use this if positional argument name is also needed
        # = re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')

        self.magic_arg_matchers = [
            self.magic_config_matcher,
            self.magic_color_matcher,
        ]

        # This is set externally by InteractiveShell
        self.custom_completers = None

        # This is a list of names of unicode characters that can be completed
        # into their corresponding unicode value. The list is large, so we
        # lazily initialize it on first use. Consuming code should access this
        # attribute through the `@unicode_names` property.
        self._unicode_names = None

        self._backslash_combining_matchers = [
            self.latex_name_matcher,
            self.unicode_name_matcher,
            back_latex_name_matcher,
            back_unicode_name_matcher,
            self.fwd_unicode_matcher,
        ]

        if not self.backslash_combining_completions:
            for matcher in self._backslash_combining_matchers:
                self.disable_matchers.append(_get_matcher_id(matcher))

        if not self.merge_completions:
            self.suppress_competing_matchers = True

    @property
    def matchers(self) -> List[Matcher]:
        """All active matcher routines for completion"""
        if self.dict_keys_only:
            return [self.dict_key_matcher]

        if self.use_jedi:
            return [
                *self.custom_matchers,
                *self._backslash_combining_matchers,
                *self.magic_arg_matchers,
                self.custom_completer_matcher,
                self.magic_matcher,
                self._jedi_matcher,
                self.dict_key_matcher,
                self.file_matcher,
            ]
        else:
            return [
                *self.custom_matchers,
                *self._backslash_combining_matchers,
                *self.magic_arg_matchers,
                self.custom_completer_matcher,
                self.dict_key_matcher,
                # TODO: convert python_matches to v2 API
                self.magic_matcher,
                self.python_matches,
                self.file_matcher,
                self.python_func_kw_matcher,
            ]

    def all_completions(self, text: str) -> List[str]:
        """
        Wrapper around the completion methods for the benefit of emacs.
        """
        prefix = text.rpartition('.')[0]
        with provisionalcompleter():
            return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
                    for c in self.completions(text, len(text))]

        return self.complete(text)[1]

    def _clean_glob(self, text: str):
        return self.glob("%s*" % text)

    def _clean_glob_win32(self, text: str):
        return [f.replace("\\", "/")
                for f in self.glob("%s*" % text)]

    @context_matcher()
    def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Same as :any:`file_matches`, but adapted to the new Matcher API."""
        matches = self.file_matches(context.token)
        # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
        # starts with `/home/`, `C:\`, etc)
        return _convert_matcher_v1_result_to_v2(matches, type="path")

    def file_matches(self, text: str) -> List[str]:
        """Match filenames, expanding ~USER type strings.

        Most of the seemingly convoluted logic in this completer is an
        attempt to handle filenames with spaces in them.  And yet it's not
        quite perfect, because Python's readline doesn't expose all of the
        GNU readline details needed for this to be done correctly.

        For a filename with a space in it, the printed completions will be
        only the parts after what's already been typed (instead of the
        full completions, as is normally done).  I don't think with the
        current (as of Python 2.3) Python readline it's possible to do
        better.

        .. deprecated:: 8.6
            You can use :meth:`file_matcher` instead.
        """

        # chars that require escaping with backslash - i.e. chars
        # that readline treats incorrectly as delimiters, but we
2002 # don't want to treat as delimiters in filename matching
2003 # don't want to treat as delimiters in filename matching
2003 # when escaped with backslash
2004 # when escaped with backslash
2004 if text.startswith('!'):
2005 if text.startswith('!'):
2005 text = text[1:]
2006 text = text[1:]
2006 text_prefix = u'!'
2007 text_prefix = u'!'
2007 else:
2008 else:
2008 text_prefix = u''
2009 text_prefix = u''
2009
2010
2010 text_until_cursor = self.text_until_cursor
2011 text_until_cursor = self.text_until_cursor
2011 # track strings with open quotes
2012 # track strings with open quotes
2012 open_quotes = has_open_quotes(text_until_cursor)
2013 open_quotes = has_open_quotes(text_until_cursor)
2013
2014
2014 if '(' in text_until_cursor or '[' in text_until_cursor:
2015 if '(' in text_until_cursor or '[' in text_until_cursor:
2015 lsplit = text
2016 lsplit = text
2016 else:
2017 else:
2017 try:
2018 try:
2018 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2019 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2019 lsplit = arg_split(text_until_cursor)[-1]
2020 lsplit = arg_split(text_until_cursor)[-1]
2020 except ValueError:
2021 except ValueError:
2021 # typically an unmatched ", or backslash without escaped char.
2022 # typically an unmatched ", or backslash without escaped char.
2022 if open_quotes:
2023 if open_quotes:
2023 lsplit = text_until_cursor.split(open_quotes)[-1]
2024 lsplit = text_until_cursor.split(open_quotes)[-1]
2024 else:
2025 else:
2025 return []
2026 return []
2026 except IndexError:
2027 except IndexError:
2027 # tab pressed on empty line
2028 # tab pressed on empty line
2028 lsplit = ""
2029 lsplit = ""
2029
2030
2030 if not open_quotes and lsplit != protect_filename(lsplit):
2031 if not open_quotes and lsplit != protect_filename(lsplit):
2031 # if protectables are found, do matching on the whole escaped name
2032 # if protectables are found, do matching on the whole escaped name
2032 has_protectables = True
2033 has_protectables = True
2033 text0,text = text,lsplit
2034 text0,text = text,lsplit
2034 else:
2035 else:
2035 has_protectables = False
2036 has_protectables = False
2036 text = os.path.expanduser(text)
2037 text = os.path.expanduser(text)
2037
2038
2038 if text == "":
2039 if text == "":
2039 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2040 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2040
2041
2041 # Compute the matches from the filesystem
2042 # Compute the matches from the filesystem
2042 if sys.platform == 'win32':
2043 if sys.platform == 'win32':
2043 m0 = self.clean_glob(text)
2044 m0 = self.clean_glob(text)
2044 else:
2045 else:
2045 m0 = self.clean_glob(text.replace('\\', ''))
2046 m0 = self.clean_glob(text.replace('\\', ''))
2046
2047
2047 if has_protectables:
2048 if has_protectables:
2048 # If we had protectables, we need to revert our changes to the
2049 # If we had protectables, we need to revert our changes to the
2049 # beginning of filename so that we don't double-write the part
2050 # beginning of filename so that we don't double-write the part
2050 # of the filename we have so far
2051 # of the filename we have so far
2051 len_lsplit = len(lsplit)
2052 len_lsplit = len(lsplit)
2052 matches = [text_prefix + text0 +
2053 matches = [text_prefix + text0 +
2053 protect_filename(f[len_lsplit:]) for f in m0]
2054 protect_filename(f[len_lsplit:]) for f in m0]
2054 else:
2055 else:
2055 if open_quotes:
2056 if open_quotes:
2056 # if we have a string with an open quote, we don't need to
2057 # if we have a string with an open quote, we don't need to
2057 # protect the names beyond the quote (and we _shouldn't_, as
2058 # protect the names beyond the quote (and we _shouldn't_, as
2058 # it would cause bugs when the filesystem call is made).
2059 # it would cause bugs when the filesystem call is made).
2059 matches = m0 if sys.platform == "win32" else\
2060 matches = m0 if sys.platform == "win32" else\
2060 [protect_filename(f, open_quotes) for f in m0]
2061 [protect_filename(f, open_quotes) for f in m0]
2061 else:
2062 else:
2062 matches = [text_prefix +
2063 matches = [text_prefix +
2063 protect_filename(f) for f in m0]
2064 protect_filename(f) for f in m0]
2064
2065
2065 # Mark directories in input list by appending '/' to their names.
2066 # Mark directories in input list by appending '/' to their names.
2066 return [x+'/' if os.path.isdir(x) else x for x in matches]
2067 return [x+'/' if os.path.isdir(x) else x for x in matches]
2067
2068
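Stripped of the quoting and escaping concerns, the core of `file_matches` is: expand `~USER`, glob with a trailing `*`, and mark directories with a trailing slash so completion can descend into them. A minimal sketch under those simplifying assumptions (`simple_file_matches` is a hypothetical helper):

```python
import glob
import os
import tempfile

def simple_file_matches(text):
    # Expand ~USER, glob everything starting with the typed prefix,
    # and append '/' to directories so further completion can descend.
    pattern = os.path.expanduser(text) + "*"
    return [p + "/" if os.path.isdir(p) else p for p in glob.glob(pattern)]

# tiny demo in a throwaway sandbox
sandbox = tempfile.mkdtemp()
os.mkdir(os.path.join(sandbox, "data"))
open(os.path.join(sandbox, "data.txt"), "w").close()
demo = sorted(simple_file_matches(os.path.join(sandbox, "da")))
```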
2068 @context_matcher()
2069 @context_matcher()
2069 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2070 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2070 """Match magics."""
2071 """Match magics."""
2071 text = context.token
2072 text = context.token
2072 matches = self.magic_matches(text)
2073 matches = self.magic_matches(text)
2073 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2074 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2074 is_magic_prefix = len(text) > 0 and text[0] == "%"
2075 is_magic_prefix = len(text) > 0 and text[0] == "%"
2075 result["suppress"] = is_magic_prefix and bool(result["completions"])
2076 result["suppress"] = is_magic_prefix and bool(result["completions"])
2076 return result
2077 return result
2077
2078
2078 def magic_matches(self, text: str):
2079 def magic_matches(self, text: str):
2079 """Match magics.
2080 """Match magics.
2080
2081
2081 .. deprecated:: 8.6
2082 .. deprecated:: 8.6
2082 You can use :meth:`magic_matcher` instead.
2083 You can use :meth:`magic_matcher` instead.
2083 """
2084 """
2084 # Get all shell magics now rather than statically, so magics loaded at
2085 # Get all shell magics now rather than statically, so magics loaded at
2085 # runtime show up too.
2086 # runtime show up too.
2086 lsm = self.shell.magics_manager.lsmagic()
2087 lsm = self.shell.magics_manager.lsmagic()
2087 line_magics = lsm['line']
2088 line_magics = lsm['line']
2088 cell_magics = lsm['cell']
2089 cell_magics = lsm['cell']
2089 pre = self.magic_escape
2090 pre = self.magic_escape
2090 pre2 = pre+pre
2091 pre2 = pre+pre
2091
2092
2092 explicit_magic = text.startswith(pre)
2093 explicit_magic = text.startswith(pre)
2093
2094
2094 # Completion logic:
2095 # Completion logic:
2095 # - user gives %%: only do cell magics
2096 # - user gives %%: only do cell magics
2096 # - user gives %: do both line and cell magics
2097 # - user gives %: do both line and cell magics
2097 # - no prefix: do both
2098 # - no prefix: do both
2098 # In other words, line magics are skipped if the user gives %% explicitly
2099 # In other words, line magics are skipped if the user gives %% explicitly
2099 #
2100 #
2100 # We also exclude magics that match any currently visible names:
2101 # We also exclude magics that match any currently visible names:
2101 # https://github.com/ipython/ipython/issues/4877, unless the user has
2102 # https://github.com/ipython/ipython/issues/4877, unless the user has
2102 # typed a %:
2103 # typed a %:
2103 # https://github.com/ipython/ipython/issues/10754
2104 # https://github.com/ipython/ipython/issues/10754
2104 bare_text = text.lstrip(pre)
2105 bare_text = text.lstrip(pre)
2105 global_matches = self.global_matches(bare_text)
2106 global_matches = self.global_matches(bare_text)
2106 if not explicit_magic:
2107 if not explicit_magic:
2107 def matches(magic):
2108 def matches(magic):
2108 """
2109 """
2109 Filter magics, in particular remove magics that match
2110 Filter magics, in particular remove magics that match
2110 a name present in global namespace.
2111 a name present in global namespace.
2111 """
2112 """
2112 return ( magic.startswith(bare_text) and
2113 return ( magic.startswith(bare_text) and
2113 magic not in global_matches )
2114 magic not in global_matches )
2114 else:
2115 else:
2115 def matches(magic):
2116 def matches(magic):
2116 return magic.startswith(bare_text)
2117 return magic.startswith(bare_text)
2117
2118
2118 comp = [ pre2+m for m in cell_magics if matches(m)]
2119 comp = [ pre2+m for m in cell_magics if matches(m)]
2119 if not text.startswith(pre2):
2120 if not text.startswith(pre2):
2120 comp += [ pre+m for m in line_magics if matches(m)]
2121 comp += [ pre+m for m in line_magics if matches(m)]
2121
2122
2122 return comp
2123 return comp
2123
2124
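The prefix rules above condense to: `%%` offers only cell magics, `%` or no prefix offers both, and with no explicit `%` any magic shadowed by a visible name is dropped. A simplified standalone sketch with toy magic sets (not the real registry):

```python
def toy_magic_matches(text, line_magics, cell_magics, visible_names):
    pre, pre2 = "%", "%%"
    explicit = text.startswith(pre)
    bare = text.lstrip(pre)
    if explicit:
        matches = lambda m: m.startswith(bare)
    else:
        # without an explicit %, hide magics shadowed by visible names
        matches = lambda m: m.startswith(bare) and m not in visible_names
    comp = [pre2 + m for m in cell_magics if matches(m)]
    if not text.startswith(pre2):  # %% means: cell magics only
        comp += [pre + m for m in line_magics if matches(m)]
    return comp

demo_all = toy_magic_matches("%ti", ["time", "timeit"], ["time", "timeit"], set())
demo_cell = toy_magic_matches("%%ti", ["time", "timeit"], ["time", "timeit"], set())
demo_shadow = toy_magic_matches("ti", ["time"], [], {"time"})
```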
2124 @context_matcher()
2125 @context_matcher()
2125 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2126 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2126 """Match class names and attributes for %config magic."""
2127 """Match class names and attributes for %config magic."""
2127 # NOTE: uses `line_buffer` equivalent for compatibility
2128 # NOTE: uses `line_buffer` equivalent for compatibility
2128 matches = self.magic_config_matches(context.line_with_cursor)
2129 matches = self.magic_config_matches(context.line_with_cursor)
2129 return _convert_matcher_v1_result_to_v2(matches, type="param")
2130 return _convert_matcher_v1_result_to_v2(matches, type="param")
2130
2131
2131 def magic_config_matches(self, text: str) -> List[str]:
2132 def magic_config_matches(self, text: str) -> List[str]:
2132 """Match class names and attributes for %config magic.
2133 """Match class names and attributes for %config magic.
2133
2134
2134 .. deprecated:: 8.6
2135 .. deprecated:: 8.6
2135 You can use :meth:`magic_config_matcher` instead.
2136 You can use :meth:`magic_config_matcher` instead.
2136 """
2137 """
2137 texts = text.strip().split()
2138 texts = text.strip().split()
2138
2139
2139 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2140 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2140 # get all configuration classes
2141 # get all configuration classes
2141 classes = sorted(set([ c for c in self.shell.configurables
2142 classes = sorted(set([ c for c in self.shell.configurables
2142 if c.__class__.class_traits(config=True)
2143 if c.__class__.class_traits(config=True)
2143 ]), key=lambda x: x.__class__.__name__)
2144 ]), key=lambda x: x.__class__.__name__)
2144 classnames = [ c.__class__.__name__ for c in classes ]
2145 classnames = [ c.__class__.__name__ for c in classes ]
2145
2146
2146 # return all classnames if config or %config is given
2147 # return all classnames if config or %config is given
2147 if len(texts) == 1:
2148 if len(texts) == 1:
2148 return classnames
2149 return classnames
2149
2150
2150 # match classname
2151 # match classname
2151 classname_texts = texts[1].split('.')
2152 classname_texts = texts[1].split('.')
2152 classname = classname_texts[0]
2153 classname = classname_texts[0]
2153 classname_matches = [ c for c in classnames
2154 classname_matches = [ c for c in classnames
2154 if c.startswith(classname) ]
2155 if c.startswith(classname) ]
2155
2156
2156 # return matched classes or the matched class with attributes
2157 # return matched classes or the matched class with attributes
2157 if texts[1].find('.') < 0:
2158 if texts[1].find('.') < 0:
2158 return classname_matches
2159 return classname_matches
2159 elif len(classname_matches) == 1 and \
2160 elif len(classname_matches) == 1 and \
2160 classname_matches[0] == classname:
2161 classname_matches[0] == classname:
2161 cls = classes[classnames.index(classname)].__class__
2162 cls = classes[classnames.index(classname)].__class__
2162 help = cls.class_get_help()
2163 help = cls.class_get_help()
2163 # strip leading '--' from cl-args:
2164 # strip leading '--' from cl-args:
2164 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2165 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2165 return [ attr.split('=')[0]
2166 return [ attr.split('=')[0]
2166 for attr in help.strip().splitlines()
2167 for attr in help.strip().splitlines()
2167 if attr.startswith(texts[1]) ]
2168 if attr.startswith(texts[1]) ]
2168 return []
2169 return []
2169
2170
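Once a unique class name is matched, the attribute candidates come from the `class_get_help()` text: leading `--` is stripped per line, lines are filtered by the typed `Class.attr` prefix, and only the part before `=` is kept. A standalone sketch of that last step, using a fabricated help string (the real help comes from traitlets):

```python
import re

def attrs_from_help(help_text, typed):
    # strip leading '--' from command-line style entries, then keep the
    # part before '=' for every line matching the typed prefix
    help_text = re.sub(re.compile(r"^--", re.MULTILINE), "", help_text)
    return [line.split("=")[0]
            for line in help_text.strip().splitlines()
            if line.startswith(typed)]

fake_help = "--IPCompleter.greedy=<Bool>\n--IPCompleter.use_jedi=<Bool>"
demo = attrs_from_help(fake_help, "IPCompleter.g")
```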
2170 @context_matcher()
2171 @context_matcher()
2171 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2172 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2172 """Match color schemes for %colors magic."""
2173 """Match color schemes for %colors magic."""
2173 # NOTE: uses `line_buffer` equivalent for compatibility
2174 # NOTE: uses `line_buffer` equivalent for compatibility
2174 matches = self.magic_color_matches(context.line_with_cursor)
2175 matches = self.magic_color_matches(context.line_with_cursor)
2175 return _convert_matcher_v1_result_to_v2(matches, type="param")
2176 return _convert_matcher_v1_result_to_v2(matches, type="param")
2176
2177
2177 def magic_color_matches(self, text: str) -> List[str]:
2178 def magic_color_matches(self, text: str) -> List[str]:
2178 """Match color schemes for %colors magic.
2179 """Match color schemes for %colors magic.
2179
2180
2180 .. deprecated:: 8.6
2181 .. deprecated:: 8.6
2181 You can use :meth:`magic_color_matcher` instead.
2182 You can use :meth:`magic_color_matcher` instead.
2182 """
2183 """
2183 texts = text.split()
2184 texts = text.split()
2184 if text.endswith(' '):
2185 if text.endswith(' '):
2185 # .split() strips off the trailing whitespace. Add '' back
2186 # .split() strips off the trailing whitespace. Add '' back
2186 # so that: '%colors ' -> ['%colors', '']
2187 # so that: '%colors ' -> ['%colors', '']
2187 texts.append('')
2188 texts.append('')
2188
2189
2189 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2190 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2190 prefix = texts[1]
2191 prefix = texts[1]
2191 return [ color for color in InspectColors.keys()
2192 return [ color for color in InspectColors.keys()
2192 if color.startswith(prefix) ]
2193 if color.startswith(prefix) ]
2193 return []
2194 return []
2194
2195
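The split-then-append-`''` dance above exists because `'%colors '.split()` silently drops the trailing whitespace, losing the fact that the user has started a new, empty argument. A minimal sketch of the behavior, with illustrative scheme names standing in for `InspectColors`:

```python
def color_args(text):
    # '%colors l' -> ['%colors', 'l']
    # '%colors '  -> ['%colors', ''] (empty prefix: offer every scheme)
    texts = text.split()
    if text.endswith(" "):
        texts.append("")
    return texts

SCHEMES = ["Linux", "LightBG", "NoColor", "Neutral"]  # illustrative only

def color_matches(text):
    texts = color_args(text)
    if len(texts) == 2 and texts[0] in ("colors", "%colors"):
        return [c for c in SCHEMES if c.startswith(texts[1])]
    return []
```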
2195 @context_matcher(identifier="IPCompleter.jedi_matcher")
2196 @context_matcher(identifier="IPCompleter.jedi_matcher")
2196 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2197 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2197 matches = self._jedi_matches(
2198 matches = self._jedi_matches(
2198 cursor_column=context.cursor_position,
2199 cursor_column=context.cursor_position,
2199 cursor_line=context.cursor_line,
2200 cursor_line=context.cursor_line,
2200 text=context.full_text,
2201 text=context.full_text,
2201 )
2202 )
2202 return {
2203 return {
2203 "completions": matches,
2204 "completions": matches,
2204 # static analysis should not suppress other matchers
2205 # static analysis should not suppress other matchers
2205 "suppress": False,
2206 "suppress": False,
2206 }
2207 }
2207
2208
2208 def _jedi_matches(
2209 def _jedi_matches(
2209 self, cursor_column: int, cursor_line: int, text: str
2210 self, cursor_column: int, cursor_line: int, text: str
2210 ) -> Iterator[_JediCompletionLike]:
2211 ) -> Iterator[_JediCompletionLike]:
2211 """
2212 """
2212 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
2213 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
2213 cursor position.
2214 cursor position.
2214
2215
2215 Parameters
2216 Parameters
2216 ----------
2217 ----------
2217 cursor_column : int
2218 cursor_column : int
2218 column position of the cursor in ``text``, 0-indexed.
2219 column position of the cursor in ``text``, 0-indexed.
2219 cursor_line : int
2220 cursor_line : int
2220 line position of the cursor in ``text``, 0-indexed
2221 line position of the cursor in ``text``, 0-indexed
2221 text : str
2222 text : str
2222 text to complete
2223 text to complete
2223
2224
2224 Notes
2225 Notes
2225 -----
2226 -----
2226 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
2227 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
2227 object containing a string with the Jedi debug information attached.
2228 object containing a string with the Jedi debug information attached.
2228
2229
2229 .. deprecated:: 8.6
2230 .. deprecated:: 8.6
2230 You can use :meth:`_jedi_matcher` instead.
2231 You can use :meth:`_jedi_matcher` instead.
2231 """
2232 """
2232 namespaces = [self.namespace]
2233 namespaces = [self.namespace]
2233 if self.global_namespace is not None:
2234 if self.global_namespace is not None:
2234 namespaces.append(self.global_namespace)
2235 namespaces.append(self.global_namespace)
2235
2236
2236 completion_filter = lambda x:x
2237 completion_filter = lambda x:x
2237 offset = cursor_to_position(text, cursor_line, cursor_column)
2238 offset = cursor_to_position(text, cursor_line, cursor_column)
2238 # filter output if we are completing for object members
2239 # filter output if we are completing for object members
2239 if offset:
2240 if offset:
2240 pre = text[offset-1]
2241 pre = text[offset-1]
2241 if pre == '.':
2242 if pre == '.':
2242 if self.omit__names == 2:
2243 if self.omit__names == 2:
2243 completion_filter = lambda c:not c.name.startswith('_')
2244 completion_filter = lambda c:not c.name.startswith('_')
2244 elif self.omit__names == 1:
2245 elif self.omit__names == 1:
2245 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2246 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2246 elif self.omit__names == 0:
2247 elif self.omit__names == 0:
2247 completion_filter = lambda x:x
2248 completion_filter = lambda x:x
2248 else:
2249 else:
2249 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2250 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2250
2251
2251 interpreter = jedi.Interpreter(text[:offset], namespaces)
2252 interpreter = jedi.Interpreter(text[:offset], namespaces)
2252 try_jedi = True
2253 try_jedi = True
2253
2254
2254 try:
2255 try:
2255 # find the first token in the current tree -- if it is a ' or " then we are in a string
2256 # find the first token in the current tree -- if it is a ' or " then we are in a string
2256 completing_string = False
2257 completing_string = False
2257 try:
2258 try:
2258 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2259 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2259 except StopIteration:
2260 except StopIteration:
2260 pass
2261 pass
2261 else:
2262 else:
2262 # note the value may be ', ", or it may also be ''' or """, or
2263 # note the value may be ', ", or it may also be ''' or """, or
2263 # in some cases, """what/you/typed..., but all of these are
2264 # in some cases, """what/you/typed..., but all of these are
2264 # strings.
2265 # strings.
2265 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2266 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2266
2267
2267 # if we are in a string jedi is likely not the right candidate for
2268 # if we are in a string jedi is likely not the right candidate for
2268 # now. Skip it.
2269 # now. Skip it.
2269 try_jedi = not completing_string
2270 try_jedi = not completing_string
2270 except Exception as e:
2271 except Exception as e:
2271 # many things can go wrong; we are using a private API, just don't crash.
2272 # many things can go wrong; we are using a private API, just don't crash.
2272 if self.debug:
2273 if self.debug:
2273 print("Error detecting if completing a non-finished string :", e, '|')
2274 print("Error detecting if completing a non-finished string :", e, '|')
2274
2275
2275 if not try_jedi:
2276 if not try_jedi:
2276 return iter([])
2277 return iter([])
2277 try:
2278 try:
2278 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2279 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2279 except Exception as e:
2280 except Exception as e:
2280 if self.debug:
2281 if self.debug:
2281 return iter(
2282 return iter(
2282 [
2283 [
2283 _FakeJediCompletion(
2284 _FakeJediCompletion(
2284 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2285 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2285 % (e)
2286 % (e)
2286 )
2287 )
2287 ]
2288 ]
2288 )
2289 )
2289 else:
2290 else:
2290 return iter([])
2291 return iter([])
2291
2292
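`_jedi_matches` receives a 0-indexed `(cursor_line, cursor_column)` pair and flattens it to an offset with `cursor_to_position` before slicing `text[:offset]`. A simplified stand-in for that conversion (not the module's actual helper):

```python
def to_offset(text, line, column):
    # sum the lengths of all full lines before the cursor line
    # (+1 each for the '\n' that split() removed), then add the column
    lines = text.split("\n")
    return sum(len(l) + 1 for l in lines[:line]) + column

code = "import os\nos.pa"
offset = to_offset(code, 1, 5)  # cursor at the end of "os.pa"
```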
2292 @completion_matcher(api_version=1)
2293 @completion_matcher(api_version=1)
2293 def python_matches(self, text: str) -> Iterable[str]:
2294 def python_matches(self, text: str) -> Iterable[str]:
2294 """Match attributes or global python names"""
2295 """Match attributes or global python names"""
2295 if "." in text:
2296 if "." in text:
2296 try:
2297 try:
2297 matches = self.attr_matches(text)
2298 matches = self.attr_matches(text)
2298 if text.endswith('.') and self.omit__names:
2299 if text.endswith('.') and self.omit__names:
2299 if self.omit__names == 1:
2300 if self.omit__names == 1:
2300 # true if txt is _not_ a __ name, false otherwise:
2301 # true if txt is _not_ a __ name, false otherwise:
2301 no__name = (lambda txt:
2302 no__name = (lambda txt:
2302 re.match(r'.*\.__.*?__',txt) is None)
2303 re.match(r'.*\.__.*?__',txt) is None)
2303 else:
2304 else:
2304 # true if txt is _not_ a _ name, false otherwise:
2305 # true if txt is _not_ a _ name, false otherwise:
2305 no__name = (lambda txt:
2306 no__name = (lambda txt:
2306 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2307 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2307 matches = filter(no__name, matches)
2308 matches = filter(no__name, matches)
2308 except NameError:
2309 except NameError:
2309 # catches <undefined attributes>.<tab>
2310 # catches <undefined attributes>.<tab>
2310 matches = []
2311 matches = []
2311 else:
2312 else:
2312 matches = self.global_matches(text)
2313 matches = self.global_matches(text)
2313 return matches
2314 return matches
2314
2315
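The two `no__name` filters above differ in scope: with `omit__names == 1` only dunder attributes (`.__x__`) are hidden, while the other setting hides every attribute whose final segment starts with a single underscore. The regexes behave like this (standalone demo; `filter` keeps names for which the predicate is True, i.e. the ones to show):

```python
import re

def keeps_non_dunder(txt):
    # True when txt is NOT a __dunder__ attribute
    return re.match(r".*\.__.*?__", txt) is None

def keeps_non_underscore(txt):
    # True when the final attribute does NOT start with '_'
    # (assumes txt contains a dot, as in the attribute-completion path)
    return re.match(r"\._.*?", txt[txt.rindex("."):]) is None

names = ["a.__init__", "a._private", "a.public"]
demo_dunder = [t for t in names if keeps_non_dunder(t)]
demo_under = [t for t in names if keeps_non_underscore(t)]
```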
2315 def _default_arguments_from_docstring(self, doc):
2316 def _default_arguments_from_docstring(self, doc):
2316 """Parse the first line of docstring for call signature.
2317 """Parse the first line of docstring for call signature.
2317
2318
2318 Docstring should be of the form 'min(iterable[, key=func])\n'.
2319 Docstring should be of the form 'min(iterable[, key=func])\n'.
2319 It can also parse a Cython docstring of the form
2320 It can also parse a Cython docstring of the form
2320 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2321 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2321 """
2322 """
2322 if doc is None:
2323 if doc is None:
2323 return []
2324 return []
2324
2325
2325 # care only about the first line
2326 # care only about the first line
2326 line = doc.lstrip().splitlines()[0]
2327 line = doc.lstrip().splitlines()[0]
2327
2328
2328 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2329 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2329 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2330 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2330 sig = self.docstring_sig_re.search(line)
2331 sig = self.docstring_sig_re.search(line)
2331 if sig is None:
2332 if sig is None:
2332 return []
2333 return []
2333 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2334 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2334 sig = sig.groups()[0].split(',')
2335 sig = sig.groups()[0].split(',')
2335 ret = []
2336 ret = []
2336 for s in sig:
2337 for s in sig:
2337 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2338 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2338 ret += self.docstring_kwd_re.findall(s)
2339 ret += self.docstring_kwd_re.findall(s)
2339 return ret
2340 return ret
2340
2341
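Concretely, for `'min(iterable[, key=func])\n'` the first pattern (quoted in the comments above) captures the argument list, and the second keeps only names followed by `=`, so optional names without a default such as `iterable` are dropped. A standalone reconstruction using those same two patterns:

```python
import re

# the two patterns referenced in the comments above
docstring_sig_re = re.compile(r"^[\w|\s.]+\(([^)]*)\).*")
docstring_kwd_re = re.compile(r"[\s|\[]*(\w+)(?:\s*=\s*.*)")

def args_from_docstring(doc):
    # parse only the first line, mirroring the method above
    line = doc.lstrip().splitlines()[0]
    sig = docstring_sig_re.search(line)
    if sig is None:
        return []
    ret = []
    for part in sig.groups()[0].split(","):
        ret += docstring_kwd_re.findall(part)
    return ret

demo = args_from_docstring("min(iterable[, key=func])\n")
```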
2341 def _default_arguments(self, obj):
2342 def _default_arguments(self, obj):
2342 """Return the list of default arguments of obj if it is callable,
2343 """Return the list of default arguments of obj if it is callable,
2343 or empty list otherwise."""
2344 or empty list otherwise."""
2344 call_obj = obj
2345 call_obj = obj
2345 ret = []
2346 ret = []
2346 if inspect.isbuiltin(obj):
2347 if inspect.isbuiltin(obj):
2347 pass
2348 pass
2348 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2349 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2349 if inspect.isclass(obj):
2350 if inspect.isclass(obj):
2350 #for cython embedsignature=True the constructor docstring
2351 #for cython embedsignature=True the constructor docstring
2351 #belongs to the object itself not __init__
2352 #belongs to the object itself not __init__
2352 ret += self._default_arguments_from_docstring(
2353 ret += self._default_arguments_from_docstring(
2353 getattr(obj, '__doc__', ''))
2354 getattr(obj, '__doc__', ''))
2354 # for classes, check for __init__,__new__
2355 # for classes, check for __init__,__new__
2355 call_obj = (getattr(obj, '__init__', None) or
2356 call_obj = (getattr(obj, '__init__', None) or
2356 getattr(obj, '__new__', None))
2357 getattr(obj, '__new__', None))
2357 # for all others, check if they are __call__able
2358 # for all others, check if they are __call__able
2358 elif hasattr(obj, '__call__'):
2359 elif hasattr(obj, '__call__'):
2359 call_obj = obj.__call__
2360 call_obj = obj.__call__
2360 ret += self._default_arguments_from_docstring(
2361 ret += self._default_arguments_from_docstring(
2361 getattr(call_obj, '__doc__', ''))
2362 getattr(call_obj, '__doc__', ''))
2362
2363
2363 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2364 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2364 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2365 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2365
2366
2366 try:
2367 try:
2367 sig = inspect.signature(obj)
2368 sig = inspect.signature(obj)
2368 ret.extend(k for k, v in sig.parameters.items() if
2369 ret.extend(k for k, v in sig.parameters.items() if
2369 v.kind in _keeps)
2370 v.kind in _keeps)
2370 except ValueError:
2371 except ValueError:
2371 pass
2372 pass
2372
2373
2373 return list(set(ret))
2374 return list(set(ret))
2374
2375
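The `inspect.signature` branch keeps exactly the parameters that can legally be passed by name — KEYWORD_ONLY and POSITIONAL_OR_KEYWORD — skipping `*args`, `**kwargs`, and positional-only parameters. A standalone illustration of that filter:

```python
import inspect

def named_params(obj):
    keeps = (inspect.Parameter.KEYWORD_ONLY,
             inspect.Parameter.POSITIONAL_OR_KEYWORD)
    try:
        sig = inspect.signature(obj)
    except ValueError:  # some builtins expose no signature
        return []
    return [k for k, v in sig.parameters.items() if v.kind in keeps]

def example(a, b=1, *args, c, **kwargs):
    pass

demo = named_params(example)  # *args and **kwargs are excluded
```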
2375 @context_matcher()
2376 @context_matcher()
2376 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2377 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2377 """Match named parameters (kwargs) of the last open function."""
2378 """Match named parameters (kwargs) of the last open function."""
2378 matches = self.python_func_kw_matches(context.token)
2379 matches = self.python_func_kw_matches(context.token)
2379 return _convert_matcher_v1_result_to_v2(matches, type="param")
2380 return _convert_matcher_v1_result_to_v2(matches, type="param")
2380
2381
2381 def python_func_kw_matches(self, text):
2382 def python_func_kw_matches(self, text):
2382 """Match named parameters (kwargs) of the last open function.
2383 """Match named parameters (kwargs) of the last open function.
2383
2384
2384 .. deprecated:: 8.6
2385 .. deprecated:: 8.6
2385 You can use :meth:`python_func_kw_matcher` instead.
2386 You can use :meth:`python_func_kw_matcher` instead.
2386 """
2387 """
2387
2388
2388 if "." in text: # a parameter cannot be dotted
2389 if "." in text: # a parameter cannot be dotted
2389 return []
2390 return []
2390 try: regexp = self.__funcParamsRegex
2391 try: regexp = self.__funcParamsRegex
2391 except AttributeError:
2392 except AttributeError:
2392 regexp = self.__funcParamsRegex = re.compile(r'''
2393 regexp = self.__funcParamsRegex = re.compile(r'''
2393 '.*?(?<!\\)' | # single quoted strings or
2394 '.*?(?<!\\)' | # single quoted strings or
2394 ".*?(?<!\\)" | # double quoted strings or
2395 ".*?(?<!\\)" | # double quoted strings or
2395 \w+ | # identifier
2396 \w+ | # identifier
2396 \S # other characters
2397 \S # other characters
2397 ''', re.VERBOSE | re.DOTALL)
2398 ''', re.VERBOSE | re.DOTALL)
2398 # 1. find the nearest identifier that comes before an unclosed
2399 # 1. find the nearest identifier that comes before an unclosed
2399 # parenthesis before the cursor
2400 # parenthesis before the cursor
2400 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2401 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2401 tokens = regexp.findall(self.text_until_cursor)
2402 tokens = regexp.findall(self.text_until_cursor)
2402 iterTokens = reversed(tokens); openPar = 0
2403 iterTokens = reversed(tokens); openPar = 0
2403
2404
2404 for token in iterTokens:
2405 for token in iterTokens:
2405 if token == ')':
2406 if token == ')':
2406 openPar -= 1
2407 openPar -= 1
2407 elif token == '(':
2408 elif token == '(':
2408 openPar += 1
2409 openPar += 1
2409 if openPar > 0:
2410 if openPar > 0:
2410 # found the last unclosed parenthesis
2411 # found the last unclosed parenthesis
2411 break
2412 break
2412 else:
2413 else:
2413 return []
2414 return []
2414 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2415 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2415 ids = []
2416 ids = []
2416 isId = re.compile(r'\w+$').match
2417 isId = re.compile(r'\w+$').match
2417
2418
2418 while True:
2419 while True:
2419 try:
2420 try:
2420 ids.append(next(iterTokens))
2421 ids.append(next(iterTokens))
2421 if not isId(ids[-1]):
2422 if not isId(ids[-1]):
2422 ids.pop(); break
2423 ids.pop(); break
2423 if not next(iterTokens) == '.':
2424 if not next(iterTokens) == '.':
2424 break
2425 break
2425 except StopIteration:
2426 except StopIteration:
2426 break
2427 break
2427
2428
2428 # Find all named arguments already assigned to, so as to avoid suggesting
2429 # Find all named arguments already assigned to, so as to avoid suggesting
2429 # them again
2430 # them again
2430 usedNamedArgs = set()
2431 usedNamedArgs = set()
2431 par_level = -1
2432 par_level = -1
2432 for token, next_token in zip(tokens, tokens[1:]):
2433 for token, next_token in zip(tokens, tokens[1:]):
2433 if token == '(':
2434 if token == '(':
2434 par_level += 1
2435 par_level += 1
2435 elif token == ')':
2436 elif token == ')':
2436 par_level -= 1
2437 par_level -= 1
2437
2438
2438 if par_level != 0:
2439 if par_level != 0:
2439 continue
2440 continue
2440
2441
2441 if next_token != '=':
2442 if next_token != '=':
2442 continue
2443 continue
2443
2444
2444 usedNamedArgs.add(token)
2445 usedNamedArgs.add(token)
2445
2446
2446 argMatches = []
2447 argMatches = []
2447 try:
2448 try:
2448 callableObj = '.'.join(ids[::-1])
2449 callableObj = '.'.join(ids[::-1])
2449 namedArgs = self._default_arguments(eval(callableObj,
2450 namedArgs = self._default_arguments(eval(callableObj,
2450 self.namespace))
2451 self.namespace))
2451
2452
2452 # Remove used named arguments from the list, no need to show twice
2453 # Remove used named arguments from the list, no need to show twice
2453 for namedArg in set(namedArgs) - usedNamedArgs:
2454 for namedArg in set(namedArgs) - usedNamedArgs:
2454 if namedArg.startswith(text):
2455 if namedArg.startswith(text):
2455 argMatches.append("%s=" %namedArg)
2456 argMatches.append("%s=" %namedArg)
2456 except:
2457 except:
2457 pass
2458 pass
2458
2459
2459 return argMatches
2460 return argMatches
2460
2461
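The keyword-argument matching above ultimately reduces to inspecting a callable's signature for parameters that accept keyword arguments with defaults. A minimal sketch of that idea using only the standard library (`default_argument_names` is an illustrative helper, not IPython's actual `_default_arguments` implementation, which also handles builtins without introspectable signatures):

```python
import inspect

def default_argument_names(func):
    """Return names of parameters that have defaults and accept keywords."""
    sig = inspect.signature(func)
    return [
        name
        for name, param in sig.parameters.items()
        if param.kind in (param.POSITIONAL_OR_KEYWORD, param.KEYWORD_ONLY)
        and param.default is not param.empty
    ]

def example(x, y=1, *, flag=False):
    return x

print(default_argument_names(example))  # ['y', 'flag']
```

A completer can then filter this list against the typed prefix and append `=` to each match, exactly as the code above does.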
    @staticmethod
    def _get_keys(obj: Any) -> List[Any]:
        # Objects can define their own completions by defining an
        # _ipython_key_completions_() method.
        method = get_real_method(obj, '_ipython_key_completions_')
        if method is not None:
            return method()

        # Special case some common in-memory dict-like types
        if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
            try:
                return list(obj.keys())
            except Exception:
                return []
        elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
            try:
                return list(obj.obj.keys())
            except Exception:
                return []
        elif _safe_isinstance(obj, 'numpy', 'ndarray') or \
                _safe_isinstance(obj, 'numpy', 'void'):
            return obj.dtype.names or []
        return []

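The `_ipython_key_completions_` protocol checked first in `_get_keys` lets any object control which keys are offered after `obj[`. A self-contained sketch of a class opting in (the `Config` class is a made-up example):

```python
class Config:
    """A dict-like object advertising its completable keys to IPython."""

    def __init__(self):
        self._data = {"host": "localhost", "port": 8080}

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        # IPython calls this to learn which keys to offer after `cfg[`
        return list(self._data)

cfg = Config()
print(cfg._ipython_key_completions_())  # ['host', 'port']
```

Because the method is consulted before any of the special-cased types, it also lets dict subclasses override the default `keys()`-based behaviour.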
    @context_matcher()
    def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match string keys in a dictionary, after e.g. ``foo[``."""
        matches = self.dict_key_matches(context.token)
        return _convert_matcher_v1_result_to_v2(
            matches, type="dict key", suppress_if_matches=True
        )

    def dict_key_matches(self, text: str) -> List[str]:
        """Match string keys in a dictionary, after e.g. ``foo[``.

        .. deprecated:: 8.6
            You can use :meth:`dict_key_matcher` instead.
        """

        # Short-circuit on closed dictionary (regular expression would
        # not match anyway, but would take quite a while).
        if self.text_until_cursor.strip().endswith("]"):
            return []

        match = DICT_MATCHER_REGEX.search(self.text_until_cursor)

        if match is None:
            return []

        expr, prior_tuple_keys, key_prefix = match.groups()

        obj = self._evaluate_expr(expr)

        if obj is not_found:
            return []

        keys = self._get_keys(obj)
        if not keys:
            return keys

        tuple_prefix = guarded_eval(
            prior_tuple_keys,
            EvaluationContext(
                globals=self.global_namespace,
                locals=self.namespace,
                evaluation=self.evaluation,
                in_subscript=True,
            ),
        )

        closing_quote, token_offset, matches = match_dict_keys(
            keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
        )
        if not matches:
            return []

        # get the cursor position of
        # - the text being completed
        # - the start of the key text
        # - the start of the completion
        text_start = len(self.text_until_cursor) - len(text)
        if key_prefix:
            key_start = match.start(3)
            completion_start = key_start + token_offset
        else:
            key_start = completion_start = match.end()

        # grab the leading prefix, to make sure all completions start with `text`
        if text_start > key_start:
            leading = ''
        else:
            leading = text[text_start:completion_start]

        # append closing quote and bracket as appropriate
        # this is *not* appropriate if the opening quote or bracket is outside
        # the text given to this method, e.g. `d["""a\nt
        can_close_quote = False
        can_close_bracket = False

        continuation = self.line_buffer[len(self.text_until_cursor) :].strip()

        if continuation.startswith(closing_quote):
            # do not close if already closed, e.g. `d['a<tab>'`
            continuation = continuation[len(closing_quote) :]
        else:
            can_close_quote = True

        continuation = continuation.strip()

        # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
        # handling it is out of scope, so let's avoid appending suffixes.
        has_known_tuple_handling = isinstance(obj, dict)

        can_close_bracket = (
            not continuation.startswith("]") and self.auto_close_dict_keys
        )
        can_close_tuple_item = (
            not continuation.startswith(",")
            and has_known_tuple_handling
            and self.auto_close_dict_keys
        )
        can_close_quote = can_close_quote and self.auto_close_dict_keys

        # fast path if a closing quote should be appended but no suffix is allowed
        if not can_close_quote and not can_close_bracket and closing_quote:
            return [leading + k for k in matches]

        results = []

        end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM

        for k, state_flag in matches.items():
            result = leading + k
            if can_close_quote and closing_quote:
                result += closing_quote

            if state_flag == end_of_tuple_or_item:
                # We do not know which suffix to add,
                # e.g. both tuple item and string
                # match this item.
                pass

            if state_flag in end_of_tuple_or_item and can_close_bracket:
                result += "]"
            if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
                result += ", "
            results.append(result)
        return results

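The core of `match_dict_keys` used above is a prefix scan: take the partially typed (possibly quoted) key, and return each string key that extends it, re-quoted so the completion can be spliced back into the subscript. A deliberately simplified stand-in (`match_str_keys` is a hypothetical helper; the real function also handles byte keys, tuple keys, and delimiter-aware token offsets):

```python
def match_str_keys(keys, prefix):
    """Quote dict keys that start with the typed (possibly quoted) prefix."""
    # reuse the user's quote character if one was typed, else default to '
    if prefix[:1] in "'\"":
        quote, typed = prefix[0], prefix[1:]
    else:
        quote, typed = "'", prefix
    return [
        f"{quote}{k}"
        for k in keys
        if isinstance(k, str) and k.startswith(typed)
    ]

print(match_str_keys(["alpha", "beta", 3], "'al"))  # ["'alpha"]
```

The closing quote and bracket are then appended separately, as `dict_key_matches` does, so that already-present characters after the cursor are not duplicated.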
    @context_matcher()
    def unicode_name_matcher(self, context: CompletionContext):
        """Same as :any:`unicode_name_matches`, but adapted to the new Matcher API."""
        fragment, matches = self.unicode_name_matches(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="unicode", fragment=fragment, suppress_if_matches=True
        )

    @staticmethod
    def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
        """Match Latex-like syntax for unicode characters based
        on the name of the character.

        This does ``\\GREEK SMALL LETTER ETA`` -> ``η``

        Works only on valid Python 3 identifiers, or on combining characters
        that will combine to form a valid identifier.
        """
        slashpos = text.rfind('\\')
        if slashpos > -1:
            s = text[slashpos+1:]
            try:
                unic = unicodedata.lookup(s)
                # allow combining chars
                if ('a' + unic).isidentifier():
                    return '\\' + s, [unic]
            except KeyError:
                pass
        return '', []

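`unicode_name_matches` is essentially a thin wrapper around `unicodedata.lookup`, plus the identifier check that keeps combining characters usable. The core behaviour can be reproduced in isolation (`lookup_unicode` is an illustrative name, not part of IPython's API):

```python
import unicodedata

def lookup_unicode(name):
    """Resolve a unicode character by its official name, or return None."""
    try:
        char = unicodedata.lookup(name)
    except KeyError:
        return None
    # mirror the completer's check: only offer characters that can appear
    # in a Python identifier (prefixing 'a' lets combining marks pass)
    return char if ("a" + char).isidentifier() else None

print(lookup_unicode("GREEK SMALL LETTER ETA"))  # η
```

The `'a' + char` trick matters because a bare combining mark is not an identifier on its own, yet is perfectly legal once attached to a base character.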
    @context_matcher()
    def latex_name_matcher(self, context: CompletionContext):
        """Match Latex syntax for unicode characters.

        This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
        """
        fragment, matches = self.latex_matches(context.text_until_cursor)
        return _convert_matcher_v1_result_to_v2(
            matches, type="latex", fragment=fragment, suppress_if_matches=True
        )

    def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
        """Match Latex syntax for unicode characters.

        This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``

        .. deprecated:: 8.6
            You can use :meth:`latex_name_matcher` instead.
        """
        slashpos = text.rfind('\\')
        if slashpos > -1:
            s = text[slashpos:]
            if s in latex_symbols:
                # Try to complete a full latex symbol to unicode
                # \\alpha -> α
                return s, [latex_symbols[s]]
            else:
                # If a user has partially typed a latex symbol, give them
                # a full list of options \al -> [\aleph, \alpha]
                matches = [k for k in latex_symbols if k.startswith(s)]
                if matches:
                    return s, matches
        return '', ()

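The two branches of `latex_matches` (exact symbol name to character, partial name to candidate list) amount to a dictionary lookup plus a prefix scan over the symbol table. A sketch with a tiny stand-in table (the real `latex_symbols` mapping shipped with IPython is far larger):

```python
# tiny stand-in for IPython's latex_symbols table
latex_symbols = {"\\alpha": "α", "\\aleph": "ℵ", "\\beta": "β"}

def latex_matches(fragment):
    """Return (matched_fragment, completions) for a backslash fragment."""
    if fragment in latex_symbols:
        # exact name: offer the unicode character itself
        return fragment, [latex_symbols[fragment]]
    # partial name: offer every symbol name sharing the prefix
    matches = [k for k in latex_symbols if k.startswith(fragment)]
    return (fragment, matches) if matches else ("", [])

print(latex_matches("\\alpha"))  # ('\\alpha', ['α'])
print(latex_matches("\\al"))     # ('\\al', ['\\alpha', '\\aleph'])
```

Returning the matched fragment alongside the completions lets the frontend know how many already-typed characters each completion should replace.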
    @context_matcher()
    def custom_completer_matcher(self, context):
        """Dispatch custom completer.

        If a match is found, suppresses all other matchers except for Jedi.
        """
        matches = self.dispatch_custom_completer(context.token) or []
        result = _convert_matcher_v1_result_to_v2(
            matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
        )
        result["ordered"] = True
        result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
        return result

    def dispatch_custom_completer(self, text):
        """
        .. deprecated:: 8.6
            You can use :meth:`custom_completer_matcher` instead.
        """
        if not self.custom_completers:
            return

        line = self.line_buffer
        if not line.strip():
            return None

        # Create a little structure to pass all the relevant information about
        # the current completion to any custom completer.
        event = SimpleNamespace()
        event.line = line
        event.symbol = text
        cmd = line.split(None, 1)[0]
        event.command = cmd
        event.text_until_cursor = self.text_until_cursor

        # for foo etc, try also to find completer for %foo
        if not cmd.startswith(self.magic_escape):
            try_magic = self.custom_completers.s_matches(
                self.magic_escape + cmd)
        else:
            try_magic = []

        for c in itertools.chain(self.custom_completers.s_matches(cmd),
                                 try_magic,
                                 self.custom_completers.flat_matches(self.text_until_cursor)):
            try:
                res = c(event)
                if res:
                    # first, try case sensitive match
                    withcase = [r for r in res if r.startswith(text)]
                    if withcase:
                        return withcase
                    # if none, then case insensitive ones are ok too
                    text_low = text.lower()
                    return [r for r in res if r.lower().startswith(text_low)]
            except TryNext:
                pass
            except KeyboardInterrupt:
                # If a custom completer takes too long, let the keyboard
                # interrupt abort it and return nothing.
                break

        return None

    def completions(self, text: str, offset: int) -> Iterator[Completion]:
        """
        Returns an iterator over the possible completions.

        .. warning::

            Unstable

            This function is unstable, API may change without warning.
            It will also raise unless used in a proper context manager.

        Parameters
        ----------
        text : str
            Full text of the current input, multi line string.
        offset : int
            Integer representing the position of the cursor in ``text``. Offset
            is 0-based indexed.

        Yields
        ------
        Completion

        Notes
        -----
        The cursor on a text can either be seen as being "in between"
        characters or "on" a character depending on the interface visible to
        the user. For consistency the cursor being "in between" characters X
        and Y is equivalent to the cursor being "on" character Y, that is to
        say the character the cursor is on is considered as being after the
        cursor.

        Combining characters may span more than one position in the
        text.

        .. note::

            If ``IPCompleter.debug`` is :any:`True`, this will yield a
            ``--jedi/ipython--`` fake Completion token to distinguish
            completions returned by Jedi from usual IPython completion.

        .. note::

            Completions are not completely deduplicated yet. If identical
            completions are coming from different sources this function does not
            ensure that each completion object will only be present once.
        """
        warnings.warn("_complete is a provisional API (as of IPython 6.0). "
                      "It may change without warnings. "
                      "Use in corresponding context manager.",
                      category=ProvisionalCompleterWarning, stacklevel=2)

        seen = set()
        profiler: Optional[cProfile.Profile]
        try:
            if self.profile_completions:
                import cProfile
                profiler = cProfile.Profile()
                profiler.enable()
            else:
                profiler = None

            for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
                if c and (c in seen):
                    continue
                yield c
                seen.add(c)
        except KeyboardInterrupt:
            # If completions take too long and the user sends a keyboard
            # interrupt, do not crash and return ASAP.
            pass
        finally:
            if profiler is not None:
                profiler.disable()
                ensure_dir_exists(self.profiler_output_dir)
                output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
                print("Writing profiler output to", output_path)
                profiler.dump_stats(output_path)

    def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
        """
        Core completion routine. Same signature as :any:`completions`, with the
        extra ``_timeout`` parameter (in seconds).

        Computing jedi's completion ``.type`` can be quite expensive (it is a
        lazy property) and can require some warm-up, more warm-up than just
        computing the ``name`` of a completion. The warm-up can be:

        - Long warm-up the first time a module is encountered after
          install/update: actually build the parse/inference tree.

        - First time the module is encountered in a session: load the tree
          from disk.

        We don't want to block completions for tens of seconds so we give the
        completer a "budget" of ``_timeout`` seconds per invocation to compute
        completion types; the completions whose types have not yet been
        computed will be marked as "unknown" and will have a chance to be
        computed in the next round, as things get cached.

        Keep in mind that Jedi is not the only thing processing the
        completions, so keep the timeout short-ish: if we take more than 0.3
        seconds we still have lots of processing to do.
        """
        deadline = time.monotonic() + _timeout

        before = full_text[:offset]
        cursor_line, cursor_column = position_to_cursor(full_text, offset)

        jedi_matcher_id = _get_matcher_id(self._jedi_matcher)

        def is_non_jedi_result(
            result: MatcherResult, identifier: str
        ) -> TypeGuard[SimpleMatcherResult]:
            return identifier != jedi_matcher_id

        results = self._complete(
            full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
        )

        non_jedi_results: Dict[str, SimpleMatcherResult] = {
            identifier: result
            for identifier, result in results.items()
            if is_non_jedi_result(result, identifier)
        }

        jedi_matches = (
            cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
            if jedi_matcher_id in results
            else ()
        )

        iter_jm = iter(jedi_matches)
        if _timeout:
            for jm in iter_jm:
                try:
                    type_ = jm.type
                except Exception:
                    if self.debug:
                        print("Error in Jedi getting type of ", jm)
                    type_ = None
                delta = len(jm.name_with_symbols) - len(jm.complete)
                if type_ == 'function':
                    signature = _make_signature(jm)
                else:
                    signature = ''
                yield Completion(start=offset - delta,
                                 end=offset,
                                 text=jm.name_with_symbols,
                                 type=type_,
                                 signature=signature,
                                 _origin='jedi')

                if time.monotonic() > deadline:
                    break

        for jm in iter_jm:
            delta = len(jm.name_with_symbols) - len(jm.complete)
            yield Completion(
                start=offset - delta,
                end=offset,
                text=jm.name_with_symbols,
                type=_UNKNOWN_TYPE,  # don't compute type for speed
                _origin="jedi",
                signature="",
            )

        # TODO:
        # Suppress this, right now just for debug.
        if jedi_matches and non_jedi_results and self.debug:
            some_start_offset = before.rfind(
                next(iter(non_jedi_results.values()))["matched_fragment"]
            )
            yield Completion(
                start=some_start_offset,
                end=offset,
                text="--jedi/ipython--",
                _origin="debug",
                type="none",
                signature="",
            )

        ordered: List[Completion] = []
        sortable: List[Completion] = []

        for origin, result in non_jedi_results.items():
            matched_text = result["matched_fragment"]
            start_offset = before.rfind(matched_text)
            is_ordered = result.get("ordered", False)
            container = ordered if is_ordered else sortable

            # I'm unsure if this is always true, so let's assert and see if it
            # crashes
            assert before.endswith(matched_text)

            for simple_completion in result["completions"]:
                completion = Completion(
                    start=start_offset,
                    end=offset,
                    text=simple_completion.text,
                    _origin=origin,
                    signature="",
                    type=simple_completion.type or _UNKNOWN_TYPE,
                )
                container.append(completion)

        yield from list(self._deduplicate(ordered + self._sort(sortable)))[
            :MATCHES_LIMIT
        ]

2950 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2951 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2951 """Find completions for the given text and line context.
2952 """Find completions for the given text and line context.
2952
2953
2953 Note that both the text and the line_buffer are optional, but at least
2954 Note that both the text and the line_buffer are optional, but at least
2954 one of them must be given.
2955 one of them must be given.
2955
2956
2956 Parameters
2957 Parameters
2957 ----------
2958 ----------
2958 text : string, optional
2959 text : string, optional
2959 Text to perform the completion on. If not given, the line buffer
2960 Text to perform the completion on. If not given, the line buffer
2960 is split using the instance's CompletionSplitter object.
2961 is split using the instance's CompletionSplitter object.
2961 line_buffer : string, optional
2962 line_buffer : string, optional
2962 If not given, the completer attempts to obtain the current line
2963 If not given, the completer attempts to obtain the current line
2963 buffer via readline. This keyword allows clients which are
2964 buffer via readline. This keyword allows clients which are
2964 requesting text completions in non-readline contexts to inform
2965 requesting text completions in non-readline contexts to inform
2965 the completer of the entire text.
2966 the completer of the entire text.
2966 cursor_pos : int, optional
2967 cursor_pos : int, optional
2967 Index of the cursor in the full line buffer. Should be provided by
2968 Index of the cursor in the full line buffer. Should be provided by
2968 remote frontends where the kernel has no access to frontend state.
2969 remote frontends where the kernel has no access to frontend state.
2969
2970
2970 Returns
2971 Returns
2971 -------
2972 -------
2972 Tuple of two items:
2973 Tuple of two items:
2973 text : str
2974 text : str
2974 Text that was actually used in the completion.
2975 Text that was actually used in the completion.
2975 matches : list
2976 matches : list
2976 A list of completion matches.
2977 A list of completion matches.
2977
2978
2978 Notes
2979 Notes
2979 -----
2980 -----
2980 This API is likely to be deprecated and replaced by
2981 This API is likely to be deprecated and replaced by
2981 :any:`IPCompleter.completions` in the future.
2982 :any:`IPCompleter.completions` in the future.
2982
2983
2983 """
2984 """
2984 warnings.warn('`Completer.complete` is pending deprecation since '
2985 warnings.warn('`Completer.complete` is pending deprecation since '
2985 'IPython 6.0 and will be replaced by `Completer.completions`.',
2986 'IPython 6.0 and will be replaced by `Completer.completions`.',
2986 PendingDeprecationWarning)
2987 PendingDeprecationWarning)
2987 # potential TODO: fold the 3rd throwaway argument of _complete
2988 # potential TODO: fold the 3rd throwaway argument of _complete
2988 # into the first two.
2989 # into the first two.
2989 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2990 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2990 # TODO: should we deprecate now, or does it stay?
2991 # TODO: should we deprecate now, or does it stay?
2991
2992
2992 results = self._complete(
2993 results = self._complete(
2993 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2994 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2994 )
2995 )
2995
2996
2996 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2997 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2997
2998
2998 return self._arrange_and_extract(
2999 return self._arrange_and_extract(
2999 results,
3000 results,
3000 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3001 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3001 skip_matchers={jedi_matcher_id},
3002 skip_matchers={jedi_matcher_id},
3002 # this API does not support different start/end positions (fragments of token).
3003 # this API does not support different start/end positions (fragments of token).
3003 abort_if_offset_changes=True,
3004 abort_if_offset_changes=True,
3004 )
3005 )
3005
3006
3006 def _arrange_and_extract(
3007 def _arrange_and_extract(
3007 self,
3008 self,
3008 results: Dict[str, MatcherResult],
3009 results: Dict[str, MatcherResult],
3009 skip_matchers: Set[str],
3010 skip_matchers: Set[str],
3010 abort_if_offset_changes: bool,
3011 abort_if_offset_changes: bool,
3011 ):
3012 ):
3012
3013
3013 sortable: List[AnyMatcherCompletion] = []
3014 sortable: List[AnyMatcherCompletion] = []
3014 ordered: List[AnyMatcherCompletion] = []
3015 ordered: List[AnyMatcherCompletion] = []
3015 most_recent_fragment = None
3016 most_recent_fragment = None
3016 for identifier, result in results.items():
3017 for identifier, result in results.items():
3017 if identifier in skip_matchers:
3018 if identifier in skip_matchers:
3018 continue
3019 continue
3019 if not result["completions"]:
3020 if not result["completions"]:
3020 continue
3021 continue
3021 if not most_recent_fragment:
3022 if not most_recent_fragment:
3022 most_recent_fragment = result["matched_fragment"]
3023 most_recent_fragment = result["matched_fragment"]
3023 if (
3024 if (
3024 abort_if_offset_changes
3025 abort_if_offset_changes
3025 and result["matched_fragment"] != most_recent_fragment
3026 and result["matched_fragment"] != most_recent_fragment
3026 ):
3027 ):
3027 break
3028 break
3028 if result.get("ordered", False):
3029 if result.get("ordered", False):
3029 ordered.extend(result["completions"])
3030 ordered.extend(result["completions"])
3030 else:
3031 else:
3031 sortable.extend(result["completions"])
3032 sortable.extend(result["completions"])
3032
3033
3033 if not most_recent_fragment:
3034 if not most_recent_fragment:
3034 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3035 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3035
3036
3036 return most_recent_fragment, [
3037 return most_recent_fragment, [
3037 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3038 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3038 ]
3039 ]
3039
3040
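The merging policy of `_arrange_and_extract` can be illustrated with a standalone sketch. Plain dicts stand in for `MatcherResult`, the function name is hypothetical, and deduplication plus the real `completions_sorting_key` are omitted for brevity:

```python
# standalone sketch of the `_arrange_and_extract` merging policy
# (assumption: results are keyed by matcher id, with "completions",
# "matched_fragment" and an optional "ordered" flag, as in MatcherResult)
def arrange_and_extract(results, skip_matchers, abort_if_offset_changes):
    ordered, sortable = [], []
    fragment = None
    for matcher_id, result in results.items():
        if matcher_id in skip_matchers or not result["completions"]:
            continue
        if fragment is None:
            fragment = result["matched_fragment"]
        if abort_if_offset_changes and result["matched_fragment"] != fragment:
            break  # the legacy API cannot express a different start offset
        # "ordered" results keep their order; the rest are sorted together
        (ordered if result.get("ordered") else sortable).extend(result["completions"])
    return fragment or "", ordered + sorted(sortable)

results = {
    "magic": {"completions": ["%time", "%cd"], "matched_fragment": "%", "ordered": True},
    "other": {"completions": ["%timeit"], "matched_fragment": "%"},
}
print(arrange_and_extract(results, set(), True))
# → ('%', ['%time', '%cd', '%timeit'])
```

Note that the ordered results are emitted first, which mirrors how pre-ordered matchers take precedence over the alphabetically sorted pool.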
3040 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3041 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3041 full_text=None) -> _CompleteResult:
3042 full_text=None) -> _CompleteResult:
3042 """
3043 """
3043 Like :any:`complete`, but can also return raw Jedi completions, as well as the
3044 Like :any:`complete`, but can also return raw Jedi completions, as well as the
3044 origin of the completion text. This could (and should) be made much
3045 origin of the completion text. This could (and should) be made much
3045 cleaner but that will be simpler once we drop the old (and stateful)
3046 cleaner but that will be simpler once we drop the old (and stateful)
3046 :any:`complete` API.
3047 :any:`complete` API.
3047
3048
3048 With the current provisional API, ``cursor_pos`` acts (depending on the
3049 With the current provisional API, ``cursor_pos`` acts (depending on the
3049 caller) as either the offset in ``text`` or ``line_buffer``, or as the
3050 caller) as either the offset in ``text`` or ``line_buffer``, or as the
3050 ``column`` when passing multiline strings. This could/should be renamed,
3051 ``column`` when passing multiline strings. This could/should be renamed,
3051 but that would add extra noise.
3052 but that would add extra noise.
3052
3053
3053 Parameters
3054 Parameters
3054 ----------
3055 ----------
3055 cursor_line
3056 cursor_line
3056 Index of the line the cursor is on. 0 indexed.
3057 Index of the line the cursor is on. 0 indexed.
3057 cursor_pos
3058 cursor_pos
3058 Position of the cursor in the current line/line_buffer/text. 0
3059 Position of the cursor in the current line/line_buffer/text. 0
3059 indexed.
3060 indexed.
3060 line_buffer : optional, str
3061 line_buffer : optional, str
3061 The current line the cursor is in; this exists mostly for legacy
3062 The current line the cursor is in; this exists mostly for legacy
3062 reasons, as readline could only give us the single current line.
3063 reasons, as readline could only give us the single current line.
3063 Prefer `full_text`.
3064 Prefer `full_text`.
3064 text : str
3065 text : str
3065 The current "token" the cursor is in, mostly also for historical
3066 The current "token" the cursor is in, mostly also for historical
3066 reasons, as the completer would trigger only after the current line
3067 reasons, as the completer would trigger only after the current line
3067 was parsed.
3068 was parsed.
3068 full_text : str
3069 full_text : str
3069 Full text of the current cell.
3070 Full text of the current cell.
3070
3071
3071 Returns
3072 Returns
3072 -------
3073 -------
3073 An ordered dictionary where keys are identifiers of completion
3074 An ordered dictionary where keys are identifiers of completion
3074 matchers and values are ``MatcherResult``s.
3075 matchers and values are ``MatcherResult``s.
3075 """
3076 """
3076
3077
3077 # if the cursor position isn't given, the only sane assumption we can
3078 # if the cursor position isn't given, the only sane assumption we can
3078 # make is that it's at the end of the line (the common case)
3079 # make is that it's at the end of the line (the common case)
3079 if cursor_pos is None:
3080 if cursor_pos is None:
3080 cursor_pos = len(line_buffer) if text is None else len(text)
3081 cursor_pos = len(line_buffer) if text is None else len(text)
3081
3082
3082 if self.use_main_ns:
3083 if self.use_main_ns:
3083 self.namespace = __main__.__dict__
3084 self.namespace = __main__.__dict__
3084
3085
3085 # if text is either None or an empty string, rely on the line buffer
3086 # if text is either None or an empty string, rely on the line buffer
3086 if (not line_buffer) and full_text:
3087 if (not line_buffer) and full_text:
3087 line_buffer = full_text.split('\n')[cursor_line]
3088 line_buffer = full_text.split('\n')[cursor_line]
3088 if not text: # issue #11508: check line_buffer before calling split_line
3089 if not text: # issue #11508: check line_buffer before calling split_line
3089 text = (
3090 text = (
3090 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3091 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3091 )
3092 )
3092
3093
3093 # If no line buffer is given, assume the input text is all there was
3094 # If no line buffer is given, assume the input text is all there was
3094 if line_buffer is None:
3095 if line_buffer is None:
3095 line_buffer = text
3096 line_buffer = text
3096
3097
3097 # deprecated - do not use `line_buffer` in new code.
3098 # deprecated - do not use `line_buffer` in new code.
3098 self.line_buffer = line_buffer
3099 self.line_buffer = line_buffer
3099 self.text_until_cursor = self.line_buffer[:cursor_pos]
3100 self.text_until_cursor = self.line_buffer[:cursor_pos]
3100
3101
3101 if not full_text:
3102 if not full_text:
3102 full_text = line_buffer
3103 full_text = line_buffer
3103
3104
3104 context = CompletionContext(
3105 context = CompletionContext(
3105 full_text=full_text,
3106 full_text=full_text,
3106 cursor_position=cursor_pos,
3107 cursor_position=cursor_pos,
3107 cursor_line=cursor_line,
3108 cursor_line=cursor_line,
3108 token=text,
3109 token=text,
3109 limit=MATCHES_LIMIT,
3110 limit=MATCHES_LIMIT,
3110 )
3111 )
3111
3112
3112 # Start with a clean slate of completions
3113 # Start with a clean slate of completions
3113 results: Dict[str, MatcherResult] = {}
3114 results: Dict[str, MatcherResult] = {}
3114
3115
3115 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3116 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3116
3117
3117 suppressed_matchers: Set[str] = set()
3118 suppressed_matchers: Set[str] = set()
3118
3119
3119 matchers = {
3120 matchers = {
3120 _get_matcher_id(matcher): matcher
3121 _get_matcher_id(matcher): matcher
3121 for matcher in sorted(
3122 for matcher in sorted(
3122 self.matchers, key=_get_matcher_priority, reverse=True
3123 self.matchers, key=_get_matcher_priority, reverse=True
3123 )
3124 )
3124 }
3125 }
3125
3126
3126 for matcher_id, matcher in matchers.items():
3127 for matcher_id, matcher in matchers.items():
3127 matcher_id = _get_matcher_id(matcher)
3128 matcher_id = _get_matcher_id(matcher)
3128
3129
3129 if matcher_id in self.disable_matchers:
3130 if matcher_id in self.disable_matchers:
3130 continue
3131 continue
3131
3132
3132 if matcher_id in results:
3133 if matcher_id in results:
3133 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3134 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3134
3135
3135 if matcher_id in suppressed_matchers:
3136 if matcher_id in suppressed_matchers:
3136 continue
3137 continue
3137
3138
3138 result: MatcherResult
3139 result: MatcherResult
3139 try:
3140 try:
3140 if _is_matcher_v1(matcher):
3141 if _is_matcher_v1(matcher):
3141 result = _convert_matcher_v1_result_to_v2(
3142 result = _convert_matcher_v1_result_to_v2(
3142 matcher(text), type=_UNKNOWN_TYPE
3143 matcher(text), type=_UNKNOWN_TYPE
3143 )
3144 )
3144 elif _is_matcher_v2(matcher):
3145 elif _is_matcher_v2(matcher):
3145 result = matcher(context)
3146 result = matcher(context)
3146 else:
3147 else:
3147 api_version = _get_matcher_api_version(matcher)
3148 api_version = _get_matcher_api_version(matcher)
3148 raise ValueError(f"Unsupported API version {api_version}")
3149 raise ValueError(f"Unsupported API version {api_version}")
3149 except:
3150 except:
3150 # Show the ugly traceback if the matcher causes an
3151 # Show the ugly traceback if the matcher causes an
3151 # exception, but do NOT crash the kernel!
3152 # exception, but do NOT crash the kernel!
3152 sys.excepthook(*sys.exc_info())
3153 sys.excepthook(*sys.exc_info())
3153 continue
3154 continue
3154
3155
3155 # set default value for matched fragment if suffix was not selected.
3156 # set default value for matched fragment if suffix was not selected.
3156 result["matched_fragment"] = result.get("matched_fragment", context.token)
3157 result["matched_fragment"] = result.get("matched_fragment", context.token)
3157
3158
3158 if not suppressed_matchers:
3159 if not suppressed_matchers:
3159 suppression_recommended: Union[bool, Set[str]] = result.get(
3160 suppression_recommended: Union[bool, Set[str]] = result.get(
3160 "suppress", False
3161 "suppress", False
3161 )
3162 )
3162
3163
3163 suppression_config = (
3164 suppression_config = (
3164 self.suppress_competing_matchers.get(matcher_id, None)
3165 self.suppress_competing_matchers.get(matcher_id, None)
3165 if isinstance(self.suppress_competing_matchers, dict)
3166 if isinstance(self.suppress_competing_matchers, dict)
3166 else self.suppress_competing_matchers
3167 else self.suppress_competing_matchers
3167 )
3168 )
3168 should_suppress = (
3169 should_suppress = (
3169 (suppression_config is True)
3170 (suppression_config is True)
3170 or (suppression_recommended and (suppression_config is not False))
3171 or (suppression_recommended and (suppression_config is not False))
3171 ) and has_any_completions(result)
3172 ) and has_any_completions(result)
3172
3173
3173 if should_suppress:
3174 if should_suppress:
3174 suppression_exceptions: Set[str] = result.get(
3175 suppression_exceptions: Set[str] = result.get(
3175 "do_not_suppress", set()
3176 "do_not_suppress", set()
3176 )
3177 )
3177 if isinstance(suppression_recommended, Iterable):
3178 if isinstance(suppression_recommended, Iterable):
3178 to_suppress = set(suppression_recommended)
3179 to_suppress = set(suppression_recommended)
3179 else:
3180 else:
3180 to_suppress = set(matchers)
3181 to_suppress = set(matchers)
3181 suppressed_matchers = to_suppress - suppression_exceptions
3182 suppressed_matchers = to_suppress - suppression_exceptions
3182
3183
3183 new_results = {}
3184 new_results = {}
3184 for previous_matcher_id, previous_result in results.items():
3185 for previous_matcher_id, previous_result in results.items():
3185 if previous_matcher_id not in suppressed_matchers:
3186 if previous_matcher_id not in suppressed_matchers:
3186 new_results[previous_matcher_id] = previous_result
3187 new_results[previous_matcher_id] = previous_result
3187 results = new_results
3188 results = new_results
3188
3189
3189 results[matcher_id] = result
3190 results[matcher_id] = result
3190
3191
3191 _, matches = self._arrange_and_extract(
3192 _, matches = self._arrange_and_extract(
3192 results,
3193 results,
3193 # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission?
3194 # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission?
3194 # If it was an omission, we can remove the filtering step; otherwise remove this comment.
3195 # If it was an omission, we can remove the filtering step; otherwise remove this comment.
3195 skip_matchers={jedi_matcher_id},
3196 skip_matchers={jedi_matcher_id},
3196 abort_if_offset_changes=False,
3197 abort_if_offset_changes=False,
3197 )
3198 )
3198
3199
3199 # populate legacy stateful API
3200 # populate legacy stateful API
3200 self.matches = matches
3201 self.matches = matches
3201
3202
3202 return results
3203 return results
3203
3204
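The suppression rules applied inside `_complete` (a matcher may recommend suppressing competitors via `suppress`, the user config can force or veto it, and `do_not_suppress` lists exempt matchers) can be sketched in isolation. The helper name `resolve_suppression` is hypothetical and the `has_any_completions` guard is omitted:

```python
# simplified sketch of the matcher-suppression rules in `_complete`
# (assumption: `suppress` is a bool or an iterable of matcher ids,
# and `config` is the per-matcher suppress_competing_matchers value)
def resolve_suppression(result, config, all_matchers):
    recommended = result.get("suppress", False)
    # config True forces suppression; config False vetoes the recommendation
    should_suppress = (config is True) or (recommended and config is not False)
    if not should_suppress:
        return set()
    # a concrete set of ids suppresses only those; a bare True suppresses all
    to_suppress = set(recommended) if isinstance(recommended, (set, list)) else set(all_matchers)
    return to_suppress - result.get("do_not_suppress", set())

all_matchers = {"unicode", "jedi", "dict_keys"}
result = {"suppress": True, "do_not_suppress": {"dict_keys"}}
print(sorted(resolve_suppression(result, None, all_matchers)))
# → ['jedi', 'unicode']
```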
3204 @staticmethod
3205 @staticmethod
3205 def _deduplicate(
3206 def _deduplicate(
3206 matches: Sequence[AnyCompletion],
3207 matches: Sequence[AnyCompletion],
3207 ) -> Iterable[AnyCompletion]:
3208 ) -> Iterable[AnyCompletion]:
3208 filtered_matches: Dict[str, AnyCompletion] = {}
3209 filtered_matches: Dict[str, AnyCompletion] = {}
3209 for match in matches:
3210 for match in matches:
3210 text = match.text
3211 text = match.text
3211 if (
3212 if (
3212 text not in filtered_matches
3213 text not in filtered_matches
3213 or filtered_matches[text].type == _UNKNOWN_TYPE
3214 or filtered_matches[text].type == _UNKNOWN_TYPE
3214 ):
3215 ):
3215 filtered_matches[text] = match
3216 filtered_matches[text] = match
3216
3217
3217 return filtered_matches.values()
3218 return filtered_matches.values()
3218
3219
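The behaviour of `_deduplicate` (first match per text wins, unless the stored one has an unknown type and a later match can refine it) can be shown with a minimal sketch using a stand-in `Match` tuple:

```python
from typing import Dict, NamedTuple, Sequence

UNKNOWN = "<unknown>"  # stand-in for _UNKNOWN_TYPE

class Match(NamedTuple):
    text: str
    type: str

def deduplicate(matches: Sequence[Match]):
    # keep one match per text, preferring entries with a known type
    filtered: Dict[str, Match] = {}
    for m in matches:
        if m.text not in filtered or filtered[m.text].type == UNKNOWN:
            filtered[m.text] = m
    return list(filtered.values())

print(deduplicate([Match("open", UNKNOWN), Match("open", "function")]))
# → [Match(text='open', type='function')]
```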
3219 @staticmethod
3220 @staticmethod
3220 def _sort(matches: Sequence[AnyCompletion]):
3221 def _sort(matches: Sequence[AnyCompletion]):
3221 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3222 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3222
3223
3223 @context_matcher()
3224 @context_matcher()
3224 def fwd_unicode_matcher(self, context: CompletionContext):
3225 def fwd_unicode_matcher(self, context: CompletionContext):
3225 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3226 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3226 # TODO: use `context.limit` to terminate early once we matched the maximum
3227 # TODO: use `context.limit` to terminate early once we matched the maximum
3227 # number that will be used downstream; can be added as an optional to
3228 # number that will be used downstream; can be added as an optional to
3228 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3229 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3229 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3230 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3230 return _convert_matcher_v1_result_to_v2(
3231 return _convert_matcher_v1_result_to_v2(
3231 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3232 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3232 )
3233 )
3233
3234
3234 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3235 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3235 """
3236 """
3236 Forward match a string starting with a backslash with a list of
3237 Forward match a string starting with a backslash with a list of
3237 potential Unicode completions.
3238 potential Unicode completions.
3238
3239
3239 Will compute list of Unicode character names on first call and cache it.
3240 Will compute list of Unicode character names on first call and cache it.
3240
3241
3241 .. deprecated:: 8.6
3242 .. deprecated:: 8.6
3242 You can use :meth:`fwd_unicode_matcher` instead.
3243 You can use :meth:`fwd_unicode_matcher` instead.
3243
3244
3244 Returns
3245 Returns
3245 -------
3246 -------
3246 A tuple with:
3247 A tuple with:
3247 - matched text (empty if no matches)
3248 - matched text (empty if no matches)
3248 - a list of potential completions (an empty tuple if there are none)
3249 - a list of potential completions (an empty tuple if there are none)
3249 """
3250 """
3250 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call.
3251 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call.
3251 # We could do a faster match using a Trie.
3252 # We could do a faster match using a Trie.
3252
3253
3253 # Using pygtrie the following seems to work:
3254 # Using pygtrie the following seems to work:
3254
3255
3255 # s = PrefixSet()
3256 # s = PrefixSet()
3256
3257
3257 # for c in range(0,0x10FFFF + 1):
3258 # for c in range(0,0x10FFFF + 1):
3258 # try:
3259 # try:
3259 # s.add(unicodedata.name(chr(c)))
3260 # s.add(unicodedata.name(chr(c)))
3260 # except ValueError:
3261 # except ValueError:
3261 # pass
3262 # pass
3262 # [''.join(k) for k in s.iter(prefix)]
3263 # [''.join(k) for k in s.iter(prefix)]
3263
3264
3264 # But this needs to be timed, and it adds an extra dependency.
3265 # But this needs to be timed, and it adds an extra dependency.
3265
3266
3266 slashpos = text.rfind('\\')
3267 slashpos = text.rfind('\\')
3267 # if the text contains a backslash
3268 # if the text contains a backslash
3268 if slashpos > -1:
3269 if slashpos > -1:
3269 # PERF: It's important that we don't access self._unicode_names
3270 # PERF: It's important that we don't access self._unicode_names
3270 # until we're inside this if-block. _unicode_names is lazily
3271 # until we're inside this if-block. _unicode_names is lazily
3271 # initialized, and it takes a user-noticeable amount of time to
3272 # initialized, and it takes a user-noticeable amount of time to
3272 # initialize it, so we don't want to initialize it unless we're
3273 # initialize it, so we don't want to initialize it unless we're
3273 # actually going to use it.
3274 # actually going to use it.
3274 s = text[slashpos + 1 :]
3275 s = text[slashpos + 1 :]
3275 sup = s.upper()
3276 sup = s.upper()
3276 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3277 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3277 if candidates:
3278 if candidates:
3278 return s, candidates
3279 return s, candidates
3279 candidates = [x for x in self.unicode_names if sup in x]
3280 candidates = [x for x in self.unicode_names if sup in x]
3280 if candidates:
3281 if candidates:
3281 return s, candidates
3282 return s, candidates
3282 splitsup = sup.split(" ")
3283 splitsup = sup.split(" ")
3283 candidates = [
3284 candidates = [
3284 x for x in self.unicode_names if all(u in x for u in splitsup)
3285 x for x in self.unicode_names if all(u in x for u in splitsup)
3285 ]
3286 ]
3286 if candidates:
3287 if candidates:
3287 return s, candidates
3288 return s, candidates
3288
3289
3289 return "", ()
3290 return "", ()
3290
3291
3291 # if the text contains no backslash
3292 # if the text contains no backslash
3292 else:
3293 else:
3293 return '', ()
3294 return '', ()
3294
3295
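The three-stage matching above (prefix, then substring, then all-words) can be reproduced standalone. A tiny candidate list replaces the ~100k-name cache that the real completer precomputes:

```python
import unicodedata

# a tiny candidate list instead of the full unicode-name cache (assumption:
# the real completer precomputes names for all assigned code points)
NAMES = [unicodedata.name(c) for c in "αβπ∑é"]

def fwd_unicode_match(text: str):
    slashpos = text.rfind("\\")
    if slashpos == -1:  # no backslash: nothing to complete
        return "", ()
    s = text[slashpos + 1:]
    sup = s.upper()
    # stage 1: prefix match; stage 2: substring match; stage 3: all words present
    for pred in (
        lambda x: x.startswith(sup),
        lambda x: sup in x,
        lambda x: all(w in x for w in sup.split(" ")),
    ):
        candidates = [x for x in NAMES if pred(x)]
        if candidates:
            return s, candidates
    return "", ()

print(fwd_unicode_match(r"\alpha"))
# → ('alpha', ['GREEK SMALL LETTER ALPHA'])
```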
3295 @property
3296 @property
3296 def unicode_names(self) -> List[str]:
3297 def unicode_names(self) -> List[str]:
3297 """List of names of unicode code points that can be completed.
3298 """List of names of unicode code points that can be completed.
3298
3299
3299 The list is lazily initialized on first access.
3300 The list is lazily initialized on first access.
3300 """
3301 """
3301 if self._unicode_names is None:
3302 if self._unicode_names is None:
3308 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3309 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3309
3310
3310 return self._unicode_names
3311 return self._unicode_names
3311
3312
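The lazy initialization above delegates to `_unicode_name_compute`, whose behaviour on a small range can be demonstrated directly (the range below is illustrative, not the real `_UNICODE_RANGES`):

```python
import unicodedata

# sketch of `_unicode_name_compute` applied to a tiny range
def unicode_name_compute(ranges):
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:  # unassigned/control code points have no name
                pass
    return names

print(unicode_name_compute([(0x41, 0x43)]))
# → ['LATIN CAPITAL LETTER A', 'LATIN CAPITAL LETTER B']
```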
3312 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3313 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3313 names = []
3314 names = []
3314 for start,stop in ranges:
3315 for start,stop in ranges:
3315 for c in range(start, stop) :
3316 for c in range(start, stop) :
3316 try:
3317 try:
3317 names.append(unicodedata.name(chr(c)))
3318 names.append(unicodedata.name(chr(c)))
3318 except ValueError:
3319 except ValueError:
3319 pass
3320 pass
3320 return names
3321 return names
@@ -1,680 +1,695 b''
1 from typing import (
1 from typing import (
2 Any,
2 Any,
3 Callable,
3 Callable,
4 Dict,
4 Dict,
5 Set,
5 Set,
6 Tuple,
6 Tuple,
7 NamedTuple,
7 NamedTuple,
8 Type,
8 Type,
9 Literal,
9 Literal,
10 Union,
10 Union,
11 TYPE_CHECKING,
11 TYPE_CHECKING,
12 )
12 )
13 import ast
13 import ast
14 import builtins
14 import builtins
15 import collections
15 import collections
16 import operator
16 import operator
17 import sys
17 import sys
18 from functools import cached_property
18 from functools import cached_property
19 from dataclasses import dataclass, field
19 from dataclasses import dataclass, field
20
20
21 from IPython.utils.docs import GENERATING_DOCUMENTATION
21 from IPython.utils.docs import GENERATING_DOCUMENTATION
22 from IPython.utils.decorators import undoc
22 from IPython.utils.decorators import undoc
23
23
24
24
25 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
25 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
26 from typing_extensions import Protocol
26 from typing_extensions import Protocol
27 else:
27 else:
28 # do not require typing_extensions at runtime
28 # do not require typing_extensions at runtime
29 Protocol = object # requires Python >=3.8
29 Protocol = object # requires Python >=3.8
30
30
31
31
32 @undoc
32 @undoc
33 class HasGetItem(Protocol):
33 class HasGetItem(Protocol):
34 def __getitem__(self, key) -> None:
34 def __getitem__(self, key) -> None:
35 ...
35 ...
36
36
37
37
38 @undoc
38 @undoc
39 class InstancesHaveGetItem(Protocol):
39 class InstancesHaveGetItem(Protocol):
40 def __call__(self, *args, **kwargs) -> HasGetItem:
40 def __call__(self, *args, **kwargs) -> HasGetItem:
41 ...
41 ...
42
42
43
43
44 @undoc
44 @undoc
45 class HasGetAttr(Protocol):
45 class HasGetAttr(Protocol):
46 def __getattr__(self, key) -> None:
46 def __getattr__(self, key) -> None:
47 ...
47 ...
48
48
49
49
50 @undoc
50 @undoc
51 class DoesNotHaveGetAttr(Protocol):
51 class DoesNotHaveGetAttr(Protocol):
52 pass
52 pass
53
53
54
54
55 # By default `__getattr__` is not explicitly implemented on most objects
55 # By default `__getattr__` is not explicitly implemented on most objects
56 MayHaveGetattr = Union[HasGetAttr, DoesNotHaveGetAttr]
56 MayHaveGetattr = Union[HasGetAttr, DoesNotHaveGetAttr]
57
57
58
58
59 def _unbind_method(func: Callable) -> Union[Callable, None]:
59 def _unbind_method(func: Callable) -> Union[Callable, None]:
60 """Get unbound method for given bound method.
60 """Get unbound method for given bound method.
61
61
62 Returns None if the unbound method cannot be obtained."""
62 Returns None if the unbound method cannot be obtained."""
63 owner = getattr(func, "__self__", None)
63 owner = getattr(func, "__self__", None)
64 owner_class = type(owner)
64 owner_class = type(owner)
65 name = getattr(func, "__name__", None)
65 name = getattr(func, "__name__", None)
66 instance_dict_overrides = getattr(owner, "__dict__", None)
66 instance_dict_overrides = getattr(owner, "__dict__", None)
67 if (
67 if (
68 owner is not None
68 owner is not None
69 and name
69 and name
70 and (
70 and (
71 not instance_dict_overrides
71 not instance_dict_overrides
72 or (instance_dict_overrides and name not in instance_dict_overrides)
72 or (instance_dict_overrides and name not in instance_dict_overrides)
73 )
73 )
74 ):
74 ):
75 return getattr(owner_class, name)
75 return getattr(owner_class, name)
76 return None
76 return None
77
77
78
78
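A condensed sketch of the `_unbind_method` contract: for a bound method not shadowed in the instance `__dict__`, it recovers the class-level function; otherwise it returns None. The helper name is hypothetical:

```python
# minimal sketch (assumption: mirrors the `_unbind_method` contract above)
def unbind_method(func):
    owner = getattr(func, "__self__", None)
    name = getattr(func, "__name__", None)
    overrides = getattr(owner, "__dict__", None)
    # only unbind when the instance does not shadow the class attribute
    if owner is not None and name and (not overrides or name not in overrides):
        return getattr(type(owner), name)
    return None

print(unbind_method("abc".upper) is str.upper)
# → True
```

Note that `unbind_method(len)` returns None: builtins expose the `builtins` module as `__self__`, and `len` is present in that module's `__dict__`, so the shadowing check rejects it.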
79 @undoc
79 @undoc
80 @dataclass
80 @dataclass
81 class EvaluationPolicy:
81 class EvaluationPolicy:
82 """Definition of evaluation policy."""
82 """Definition of evaluation policy."""
83
83
84 allow_locals_access: bool = False
84 allow_locals_access: bool = False
85 allow_globals_access: bool = False
85 allow_globals_access: bool = False
86 allow_item_access: bool = False
86 allow_item_access: bool = False
87 allow_attr_access: bool = False
87 allow_attr_access: bool = False
88 allow_builtins_access: bool = False
88 allow_builtins_access: bool = False
89 allow_all_operations: bool = False
89 allow_all_operations: bool = False
90 allow_any_calls: bool = False
90 allow_any_calls: bool = False
91 allowed_calls: Set[Callable] = field(default_factory=set)
91 allowed_calls: Set[Callable] = field(default_factory=set)
92
92
93 def can_get_item(self, value, item):
93 def can_get_item(self, value, item):
94 return self.allow_item_access
94 return self.allow_item_access
95
95
96 def can_get_attr(self, value, attr):
96 def can_get_attr(self, value, attr):
97 return self.allow_attr_access
97 return self.allow_attr_access
98
98
99 def can_operate(self, dunders: Tuple[str, ...], a, b=None):
99 def can_operate(self, dunders: Tuple[str, ...], a, b=None):
100 if self.allow_all_operations:
100 if self.allow_all_operations:
101 return True
101 return True
102
102
103 def can_call(self, func):
103 def can_call(self, func):
104 if self.allow_any_calls:
104 if self.allow_any_calls:
105 return True
105 return True
106
106
107 if func in self.allowed_calls:
107 if func in self.allowed_calls:
108 return True
108 return True
109
109
110 owner_method = _unbind_method(func)
110 owner_method = _unbind_method(func)
111
111 if owner_method and owner_method in self.allowed_calls:
112 if owner_method and owner_method in self.allowed_calls:
112 return True
113 return True
113
114
114
115
115 def _has_original_dunder_external(
116 def _has_original_dunder_external(
116 value,
117 value,
117 module_name,
118 module_name,
118 access_path,
119 access_path,
119 method_name,
120 method_name,
120 ):
121 ):
121 try:
122 try:
122 if module_name not in sys.modules:
123 if module_name not in sys.modules:
123 return False
124 return False
124 member_type = sys.modules[module_name]
125 member_type = sys.modules[module_name]
125 for attr in access_path:
126 for attr in access_path:
126 member_type = getattr(member_type, attr)
127 member_type = getattr(member_type, attr)
127 value_type = type(value)
128 value_type = type(value)
128 if type(value) == member_type:
129 if type(value) == member_type:
129 return True
130 return True
131 if method_name == "__getattribute__":
132 # we have to short-circuit here due to an unresolved issue in
133 # `isinstance` implementation: https://bugs.python.org/issue32683
134 return False
130 if isinstance(value, member_type):
135 if isinstance(value, member_type):
131 method = getattr(value_type, method_name, None)
136 method = getattr(value_type, method_name, None)
132 member_method = getattr(member_type, method_name, None)
137 member_method = getattr(member_type, method_name, None)
133 if member_method == method:
138 if member_method == method:
134 return True
139 return True
135 except (AttributeError, KeyError):
140 except (AttributeError, KeyError):
136 return False
141 return False
137
142
138
143
139 def _has_original_dunder(
144 def _has_original_dunder(
140 value, allowed_types, allowed_methods, allowed_external, method_name
145 value, allowed_types, allowed_methods, allowed_external, method_name
141 ):
146 ):
142 # note: Python ignores `__getattr__`/`__getitem__` on instances,
147 # note: Python ignores `__getattr__`/`__getitem__` on instances,
143 # we only need to check at class level
148 # we only need to check at class level
144 value_type = type(value)
149 value_type = type(value)
145
150
146 # strict type check passes → no need to check method
151 # strict type check passes → no need to check method
147 if value_type in allowed_types:
152 if value_type in allowed_types:
148 return True
153 return True
149
154
150 method = getattr(value_type, method_name, None)
155 method = getattr(value_type, method_name, None)
151
156
152 if not method:
157 if method is None:
153 return None
158 return None
154
159
155 if method in allowed_methods:
160 if method in allowed_methods:
156 return True
161 return True
157
162
158 for module_name, *access_path in allowed_external:
163 for module_name, *access_path in allowed_external:
159 if _has_original_dunder_external(value, module_name, access_path, method_name):
164 if _has_original_dunder_external(value, module_name, access_path, method_name):
160 return True
165 return True
161
166
162 return False
167 return False
163
168
164
169
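The `_has_original_dunder` checks above hinge on method identity: a subclass that leaves a dunder untouched still resolves to the base class's method object, while an overriding subclass resolves to its own. A minimal, self-contained sketch of that invariant (the class names here are illustrative, not part of this module):

```python
class QuietList(list):
    # subclass that leaves __getitem__ untouched
    pass


class SneakyList(list):
    # subclass that overrides __getitem__ with arbitrary code
    def __getitem__(self, item):
        return "side effect!"


# an unmodified subclass still resolves to the original list.__getitem__ ...
assert type(QuietList()).__getitem__ is list.__getitem__
# ... while an overriding subclass resolves to its own method object,
# which is how the identity check can reject it
assert type(SneakyList()).__getitem__ is not list.__getitem__
```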
165 @undoc
170 @undoc
166 @dataclass
171 @dataclass
167 class SelectivePolicy(EvaluationPolicy):
172 class SelectivePolicy(EvaluationPolicy):
168 allowed_getitem: Set[InstancesHaveGetItem] = field(default_factory=set)
173 allowed_getitem: Set[InstancesHaveGetItem] = field(default_factory=set)
169 allowed_getitem_external: Set[Tuple[str, ...]] = field(default_factory=set)
174 allowed_getitem_external: Set[Tuple[str, ...]] = field(default_factory=set)
170
175
171 allowed_getattr: Set[MayHaveGetattr] = field(default_factory=set)
176 allowed_getattr: Set[MayHaveGetattr] = field(default_factory=set)
172 allowed_getattr_external: Set[Tuple[str, ...]] = field(default_factory=set)
177 allowed_getattr_external: Set[Tuple[str, ...]] = field(default_factory=set)
173
178
174 allowed_operations: Set = field(default_factory=set)
179 allowed_operations: Set = field(default_factory=set)
175 allowed_operations_external: Set[Tuple[str, ...]] = field(default_factory=set)
180 allowed_operations_external: Set[Tuple[str, ...]] = field(default_factory=set)
176
181
177 _operation_methods_cache: Dict[str, Set[Callable]] = field(
182 _operation_methods_cache: Dict[str, Set[Callable]] = field(
178 default_factory=dict, init=False
183 default_factory=dict, init=False
179 )
184 )
180
185
181 def can_get_attr(self, value, attr):
186 def can_get_attr(self, value, attr):
182 has_original_attribute = _has_original_dunder(
187 has_original_attribute = _has_original_dunder(
183 value,
188 value,
184 allowed_types=self.allowed_getattr,
189 allowed_types=self.allowed_getattr,
185 allowed_methods=self._getattribute_methods,
190 allowed_methods=self._getattribute_methods,
186 allowed_external=self.allowed_getattr_external,
191 allowed_external=self.allowed_getattr_external,
187 method_name="__getattribute__",
192 method_name="__getattribute__",
188 )
193 )
189 has_original_attr = _has_original_dunder(
194 has_original_attr = _has_original_dunder(
190 value,
195 value,
191 allowed_types=self.allowed_getattr,
196 allowed_types=self.allowed_getattr,
192 allowed_methods=self._getattr_methods,
197 allowed_methods=self._getattr_methods,
193 allowed_external=self.allowed_getattr_external,
198 allowed_external=self.allowed_getattr_external,
194 method_name="__getattr__",
199 method_name="__getattr__",
195 )
200 )
201
196 # Many objects do not have `__getattr__`, this is fine
202 # Many objects do not have `__getattr__`, this is fine
197 if has_original_attr is None and has_original_attribute:
203 if has_original_attr is None and has_original_attribute:
198 return True
204 return True
199
205
200 # Accept objects without modifications to `__getattr__` and `__getattribute__`
206 # Accept objects without modifications to `__getattr__` and `__getattribute__`
201 return has_original_attr and has_original_attribute
207 return has_original_attr and has_original_attribute
202
208
203 def get_attr(self, value, attr):
204 if self.can_get_attr(value, attr):
205 return getattr(value, attr)
206
207 def can_get_item(self, value, item):
209 def can_get_item(self, value, item):
208 """Allow accessing `__getiitem__` of allow-listed instances unless it was not modified."""
210 """Allow accessing `__getiitem__` of allow-listed instances unless it was not modified."""
209 return _has_original_dunder(
211 return _has_original_dunder(
210 value,
212 value,
211 allowed_types=self.allowed_getitem,
213 allowed_types=self.allowed_getitem,
212 allowed_methods=self._getitem_methods,
214 allowed_methods=self._getitem_methods,
213 allowed_external=self.allowed_getitem_external,
215 allowed_external=self.allowed_getitem_external,
214 method_name="__getitem__",
216 method_name="__getitem__",
215 )
217 )
216
218
217 def can_operate(self, dunders: Tuple[str, ...], a, b=None):
219 def can_operate(self, dunders: Tuple[str, ...], a, b=None):
220 objects = [a]
221 if b is not None:
222 objects.append(b)
218 return all(
223 return all(
219 [
224 [
220 _has_original_dunder(
225 _has_original_dunder(
221 a,
226 obj,
222 allowed_types=self.allowed_operations,
227 allowed_types=self.allowed_operations,
223 allowed_methods=self._dunder_methods(dunder),
228 allowed_methods=self._operator_dunder_methods(dunder),
224 allowed_external=self.allowed_operations_external,
229 allowed_external=self.allowed_operations_external,
225 method_name=dunder,
230 method_name=dunder,
226 )
231 )
227 for dunder in dunders
232 for dunder in dunders
233 for obj in objects
228 ]
234 ]
229 )
235 )
230
236
231 def _dunder_methods(self, dunder: str) -> Set[Callable]:
237 def _operator_dunder_methods(self, dunder: str) -> Set[Callable]:
232 if dunder not in self._operation_methods_cache:
238 if dunder not in self._operation_methods_cache:
233 self._operation_methods_cache[dunder] = self._safe_get_methods(
239 self._operation_methods_cache[dunder] = self._safe_get_methods(
234 self.allowed_operations, dunder
240 self.allowed_operations, dunder
235 )
241 )
236 return self._operation_methods_cache[dunder]
242 return self._operation_methods_cache[dunder]
237
243
238 @cached_property
244 @cached_property
239 def _getitem_methods(self) -> Set[Callable]:
245 def _getitem_methods(self) -> Set[Callable]:
240 return self._safe_get_methods(self.allowed_getitem, "__getitem__")
246 return self._safe_get_methods(self.allowed_getitem, "__getitem__")
241
247
242 @cached_property
248 @cached_property
243 def _getattr_methods(self) -> Set[Callable]:
249 def _getattr_methods(self) -> Set[Callable]:
244 return self._safe_get_methods(self.allowed_getattr, "__getattr__")
250 return self._safe_get_methods(self.allowed_getattr, "__getattr__")
245
251
246 @cached_property
252 @cached_property
247 def _getattribute_methods(self) -> Set[Callable]:
253 def _getattribute_methods(self) -> Set[Callable]:
248 return self._safe_get_methods(self.allowed_getattr, "__getattribute__")
254 return self._safe_get_methods(self.allowed_getattr, "__getattribute__")
249
255
250 def _safe_get_methods(self, classes, name) -> Set[Callable]:
256 def _safe_get_methods(self, classes, name) -> Set[Callable]:
251 return {
257 return {
252 method
258 method
253 for class_ in classes
259 for class_ in classes
254 for method in [getattr(class_, name, None)]
260 for method in [getattr(class_, name, None)]
255 if method
261 if method
256 }
262 }
257
263
258
264
259 class _DummyNamedTuple(NamedTuple):
265 class _DummyNamedTuple(NamedTuple):
260 pass
266 """Used internally to retrieve methods of named tuple instance."""
261
267
262
268
263 class EvaluationContext(NamedTuple):
269 class EvaluationContext(NamedTuple):
264 #: Local namespace
270 #: Local namespace
265 locals: dict
271 locals: dict
266 #: Global namespace
272 #: Global namespace
267 globals: dict
273 globals: dict
268 #: Evaluation policy identifier
274 #: Evaluation policy identifier
269 evaluation: Literal[
275 evaluation: Literal[
270 "forbidden", "minimal", "limited", "unsafe", "dangerous"
276 "forbidden", "minimal", "limited", "unsafe", "dangerous"
271 ] = "forbidden"
277 ] = "forbidden"
272 #: Whether the evaluation of code takes place inside of a subscript.
278 #: Whether the evaluation of code takes place inside of a subscript.
273 #: Useful for evaluating ``:-1, 'col'`` in ``df[:-1, 'col']``.
279 #: Useful for evaluating ``:-1, 'col'`` in ``df[:-1, 'col']``.
274 in_subscript: bool = False
280 in_subscript: bool = False
275
281
276
282
277 class _IdentitySubscript:
283 class _IdentitySubscript:
278 """Returns the key itself when item is requested via subscript."""
284 """Returns the key itself when item is requested via subscript."""
279
285
280 def __getitem__(self, key):
286 def __getitem__(self, key):
281 return key
287 return key
282
288
283
289
284 IDENTITY_SUBSCRIPT = _IdentitySubscript()
290 IDENTITY_SUBSCRIPT = _IdentitySubscript()
285 SUBSCRIPT_MARKER = "__SUBSCRIPT_SENTINEL__"
291 SUBSCRIPT_MARKER = "__SUBSCRIPT_SENTINEL__"
286
292
287
293
288 class GuardRejection(Exception):
294 class GuardRejection(Exception):
289 """Exception raised when guard rejects evaluation attempt."""
295 """Exception raised when guard rejects evaluation attempt."""
290
296
291 pass
297 pass
292
298
293
299
294 def guarded_eval(code: str, context: EvaluationContext):
300 def guarded_eval(code: str, context: EvaluationContext):
295 """Evaluate provided code in the evaluation context.
301 """Evaluate provided code in the evaluation context.
296
302
297 If evaluation policy given by context is set to ``forbidden``
303 If evaluation policy given by context is set to ``forbidden``
298 no evaluation will be performed; if it is set to ``dangerous``
304 no evaluation will be performed; if it is set to ``dangerous``
299 standard :func:`eval` will be used; finally, for any other
305 standard :func:`eval` will be used; finally, for any other
300 policy, :func:`eval_node` will be called on the parsed AST.
306 policy, :func:`eval_node` will be called on the parsed AST.
301 """
307 """
302 locals_ = context.locals
308 locals_ = context.locals
303
309
304 if context.evaluation == "forbidden":
310 if context.evaluation == "forbidden":
305 raise GuardRejection("Forbidden mode")
311 raise GuardRejection("Forbidden mode")
306
312
307 # note: not using `ast.literal_eval` as it does not implement
313 # note: not using `ast.literal_eval` as it does not implement
308 # getitem at all, for example it fails on simple `[0][1]`
314 # getitem at all, for example it fails on simple `[0][1]`
309
315
310 if context.in_subscript:
316 if context.in_subscript:
311 # syntactic sugar for slices (:) is only available in subscripts
317 # syntactic sugar for slices (:) is only available in subscripts
312 # so we need to trick the ast parser into thinking that we have
318 # so we need to trick the ast parser into thinking that we have
313 # a subscript, but we need to be able to later recognise that we did
319 # a subscript, but we need to be able to later recognise that we did
314 # it so we can ignore the actual __getitem__ operation
320 # it so we can ignore the actual __getitem__ operation
315 if not code:
321 if not code:
316 return tuple()
322 return tuple()
317 locals_ = locals_.copy()
323 locals_ = locals_.copy()
318 locals_[SUBSCRIPT_MARKER] = IDENTITY_SUBSCRIPT
324 locals_[SUBSCRIPT_MARKER] = IDENTITY_SUBSCRIPT
319 code = SUBSCRIPT_MARKER + "[" + code + "]"
325 code = SUBSCRIPT_MARKER + "[" + code + "]"
320 context = EvaluationContext(**{**context._asdict(), **{"locals": locals_}})
326 context = EvaluationContext(**{**context._asdict(), **{"locals": locals_}})
321
327
322 if context.evaluation == "dangerous":
328 if context.evaluation == "dangerous":
323 return eval(code, context.globals, context.locals)
329 return eval(code, context.globals, context.locals)
324
330
325 expression = ast.parse(code, mode="eval")
331 expression = ast.parse(code, mode="eval")
326
332
327 return eval_node(expression, context)
333 return eval_node(expression, context)
328
334
329
335
330 BINARY_OP_DUNDERS: Dict[Type[ast.operator], Tuple[str]] = {
336 BINARY_OP_DUNDERS: Dict[Type[ast.operator], Tuple[str]] = {
331 ast.Add: ("__add__",),
337 ast.Add: ("__add__",),
332 ast.Sub: ("__sub__",),
338 ast.Sub: ("__sub__",),
333 ast.Mult: ("__mul__",),
339 ast.Mult: ("__mul__",),
334 ast.Div: ("__truediv__",),
340 ast.Div: ("__truediv__",),
335 ast.FloorDiv: ("__floordiv__",),
341 ast.FloorDiv: ("__floordiv__",),
336 ast.Mod: ("__mod__",),
342 ast.Mod: ("__mod__",),
337 ast.Pow: ("__pow__",),
343 ast.Pow: ("__pow__",),
338 ast.LShift: ("__lshift__",),
344 ast.LShift: ("__lshift__",),
339 ast.RShift: ("__rshift__",),
345 ast.RShift: ("__rshift__",),
340 ast.BitOr: ("__or__",),
346 ast.BitOr: ("__or__",),
341 ast.BitXor: ("__xor__",),
347 ast.BitXor: ("__xor__",),
342 ast.BitAnd: ("__and__",),
348 ast.BitAnd: ("__and__",),
343 ast.MatMult: ("__matmul__",),
349 ast.MatMult: ("__matmul__",),
344 }
350 }
345
351
346 COMP_OP_DUNDERS: Dict[Type[ast.cmpop], Tuple[str, ...]] = {
352 COMP_OP_DUNDERS: Dict[Type[ast.cmpop], Tuple[str, ...]] = {
347 ast.Eq: ("__eq__",),
353 ast.Eq: ("__eq__",),
348 ast.NotEq: ("__ne__", "__eq__"),
354 ast.NotEq: ("__ne__", "__eq__"),
349 ast.Lt: ("__lt__", "__gt__"),
355 ast.Lt: ("__lt__", "__gt__"),
350 ast.LtE: ("__le__", "__ge__"),
356 ast.LtE: ("__le__", "__ge__"),
351 ast.Gt: ("__gt__", "__lt__"),
357 ast.Gt: ("__gt__", "__lt__"),
352 ast.GtE: ("__ge__", "__le__"),
358 ast.GtE: ("__ge__", "__le__"),
353 ast.In: ("__contains__",),
359 ast.In: ("__contains__",),
354 # Note: ast.Is, ast.IsNot, ast.NotIn are handled specially
360 # Note: ast.Is, ast.IsNot, ast.NotIn are handled specially
355 }
361 }
356
362
357 UNARY_OP_DUNDERS: Dict[Type[ast.unaryop], Tuple[str, ...]] = {
363 UNARY_OP_DUNDERS: Dict[Type[ast.unaryop], Tuple[str, ...]] = {
358 ast.USub: ("__neg__",),
364 ast.USub: ("__neg__",),
359 ast.UAdd: ("__pos__",),
365 ast.UAdd: ("__pos__",),
360 # we have to check both __inv__ and __invert__!
366 # we have to check both __inv__ and __invert__!
361 ast.Invert: ("__invert__", "__inv__"),
367 ast.Invert: ("__invert__", "__inv__"),
362 ast.Not: ("__not__",),
368 ast.Not: ("__not__",),
363 }
369 }
364
370
365
371
366 def _find_dunder(node_op, dunders) -> Union[Tuple[str, ...], None]:
372 def _find_dunder(node_op, dunders) -> Union[Tuple[str, ...], None]:
367 dunder = None
373 dunder = None
368 for op, candidate_dunder in dunders.items():
374 for op, candidate_dunder in dunders.items():
369 if isinstance(node_op, op):
375 if isinstance(node_op, op):
370 dunder = candidate_dunder
376 dunder = candidate_dunder
371 return dunder
377 return dunder
372
378
373
379
374 def eval_node(node: Union[ast.AST, None], context: EvaluationContext):
380 def eval_node(node: Union[ast.AST, None], context: EvaluationContext):
375 """Evaluate AST node in provided context.
381 """Evaluate AST node in provided context.
376
382
377 Applies evaluation restrictions defined in the context. Currently does not support evaluation of functions with keyword arguments.
383 Applies evaluation restrictions defined in the context. Currently does not support evaluation of functions with keyword arguments.
378
384
379 Does not evaluate actions that always have side effects:
385 Does not evaluate actions that always have side effects:
380
386
381 - class definitions (``class sth: ...``)
387 - class definitions (``class sth: ...``)
382 - function definitions (``def sth: ...``)
388 - function definitions (``def sth: ...``)
383 - variable assignments (``x = 1``)
389 - variable assignments (``x = 1``)
384 - augmented assignments (``x += 1``)
390 - augmented assignments (``x += 1``)
385 - deletions (``del x``)
391 - deletions (``del x``)
386
392
387 Does not evaluate operations which do not return values:
393 Does not evaluate operations which do not return values:
388
394
389 - assertions (``assert x``)
395 - assertions (``assert x``)
390 - pass (``pass``)
396 - pass (``pass``)
391 - imports (``import x``)
397 - imports (``import x``)
392 - control flow:
398 - control flow:
393
399
394 - conditionals (``if x:``) except for ternary IfExp (``a if x else b``)
400 - conditionals (``if x:``) except for ternary IfExp (``a if x else b``)
395 - loops (``for`` and ``while``)
401 - loops (``for`` and ``while``)
396 - exception handling
402 - exception handling
397
403
398 The purpose of this function is to guard against unwanted side-effects;
404 The purpose of this function is to guard against unwanted side-effects;
399 it does not give guarantees on protection from malicious code execution.
405 it does not give guarantees on protection from malicious code execution.
400 """
406 """
401 policy = EVALUATION_POLICIES[context.evaluation]
407 policy = EVALUATION_POLICIES[context.evaluation]
402 if node is None:
408 if node is None:
403 return None
409 return None
404 if isinstance(node, ast.Expression):
410 if isinstance(node, ast.Expression):
405 return eval_node(node.body, context)
411 return eval_node(node.body, context)
406 if isinstance(node, ast.BinOp):
412 if isinstance(node, ast.BinOp):
407 left = eval_node(node.left, context)
413 left = eval_node(node.left, context)
408 right = eval_node(node.right, context)
414 right = eval_node(node.right, context)
409 dunders = _find_dunder(node.op, BINARY_OP_DUNDERS)
415 dunders = _find_dunder(node.op, BINARY_OP_DUNDERS)
410 if dunders:
416 if dunders:
411 if policy.can_operate(dunders, left, right):
417 if policy.can_operate(dunders, left, right):
412 return getattr(left, dunders[0])(right)
418 return getattr(left, dunders[0])(right)
413 else:
419 else:
414 raise GuardRejection(
420 raise GuardRejection(
415 f"Operation (`{dunders}`) for",
421 f"Operation (`{dunders}`) for",
416 type(left),
422 type(left),
417 f"not allowed in {context.evaluation} mode",
423 f"not allowed in {context.evaluation} mode",
418 )
424 )
419 if isinstance(node, ast.Compare):
425 if isinstance(node, ast.Compare):
420 left = eval_node(node.left, context)
426 left = eval_node(node.left, context)
421 all_true = True
427 all_true = True
422 negate = False
428 negate = False
423 for op, right in zip(node.ops, node.comparators):
429 for op, right in zip(node.ops, node.comparators):
424 right = eval_node(right, context)
430 right = eval_node(right, context)
425 dunder = None
431 dunder = None
426 dunders = _find_dunder(op, COMP_OP_DUNDERS)
432 dunders = _find_dunder(op, COMP_OP_DUNDERS)
427 if not dunders:
433 if not dunders:
428 if isinstance(op, ast.NotIn):
434 if isinstance(op, ast.NotIn):
429 dunders = COMP_OP_DUNDERS[ast.In]
435 dunders = COMP_OP_DUNDERS[ast.In]
430 negate = True
436 negate = True
431 if isinstance(op, ast.Is):
437 if isinstance(op, ast.Is):
432 dunder = "is_"
438 dunder = "is_"
433 if isinstance(op, ast.IsNot):
439 if isinstance(op, ast.IsNot):
434 dunder = "is_"
440 dunder = "is_"
435 negate = True
441 negate = True
436 if not dunder and dunders:
442 if not dunder and dunders:
437 dunder = dunders[0]
443 dunder = dunders[0]
438 if dunder:
444 if dunder:
439 a, b = (right, left) if dunder == "__contains__" else (left, right)
445 a, b = (right, left) if dunder == "__contains__" else (left, right)
440 if dunder == "is_" or dunders and policy.can_operate(dunders, a, b):
446 if dunder == "is_" or dunders and policy.can_operate(dunders, a, b):
441 result = getattr(operator, dunder)(a, b)
447 result = getattr(operator, dunder)(a, b)
442 if negate:
448 if negate:
443 result = not result
449 result = not result
444 if not result:
450 if not result:
445 all_true = False
451 all_true = False
446 left = right
452 left = right
447 else:
453 else:
448 raise GuardRejection(
454 raise GuardRejection(
449 f"Comparison (`{dunder}`) for",
455 f"Comparison (`{dunder}`) for",
450 type(left),
456 type(left),
451 f"not allowed in {context.evaluation} mode",
457 f"not allowed in {context.evaluation} mode",
452 )
458 )
453 else:
459 else:
454 raise ValueError(f"Comparison `{dunder}` not supported")
460 raise ValueError(
461 f"Comparison `{dunder}` not supported"
462 ) # pragma: no cover
455 return all_true
463 return all_true
456 if isinstance(node, ast.Constant):
464 if isinstance(node, ast.Constant):
457 return node.value
465 return node.value
458 if isinstance(node, ast.Index):
466 if isinstance(node, ast.Index):
459 return eval_node(node.value, context)
467 # deprecated since Python 3.9
468 return eval_node(node.value, context) # pragma: no cover
460 if isinstance(node, ast.Tuple):
469 if isinstance(node, ast.Tuple):
461 return tuple(eval_node(e, context) for e in node.elts)
470 return tuple(eval_node(e, context) for e in node.elts)
462 if isinstance(node, ast.List):
471 if isinstance(node, ast.List):
463 return [eval_node(e, context) for e in node.elts]
472 return [eval_node(e, context) for e in node.elts]
464 if isinstance(node, ast.Set):
473 if isinstance(node, ast.Set):
465 return {eval_node(e, context) for e in node.elts}
474 return {eval_node(e, context) for e in node.elts}
466 if isinstance(node, ast.Dict):
475 if isinstance(node, ast.Dict):
467 return dict(
476 return dict(
468 zip(
477 zip(
469 [eval_node(k, context) for k in node.keys],
478 [eval_node(k, context) for k in node.keys],
470 [eval_node(v, context) for v in node.values],
479 [eval_node(v, context) for v in node.values],
471 )
480 )
472 )
481 )
473 if isinstance(node, ast.Slice):
482 if isinstance(node, ast.Slice):
474 return slice(
483 return slice(
475 eval_node(node.lower, context),
484 eval_node(node.lower, context),
476 eval_node(node.upper, context),
485 eval_node(node.upper, context),
477 eval_node(node.step, context),
486 eval_node(node.step, context),
478 )
487 )
479 if isinstance(node, ast.ExtSlice):
488 if isinstance(node, ast.ExtSlice):
480 return tuple([eval_node(dim, context) for dim in node.dims])
489 # deprecated since Python 3.9
490 return tuple([eval_node(dim, context) for dim in node.dims]) # pragma: no cover
481 if isinstance(node, ast.UnaryOp):
491 if isinstance(node, ast.UnaryOp):
482 value = eval_node(node.operand, context)
492 value = eval_node(node.operand, context)
483 dunders = _find_dunder(node.op, UNARY_OP_DUNDERS)
493 dunders = _find_dunder(node.op, UNARY_OP_DUNDERS)
484 if dunders:
494 if dunders:
485 if policy.can_operate(dunders, value):
495 if policy.can_operate(dunders, value):
486 return getattr(value, dunders[0])()
496 return getattr(value, dunders[0])()
487 else:
497 else:
488 raise GuardRejection(
498 raise GuardRejection(
489 f"Operation (`{dunders}`) for",
499 f"Operation (`{dunders}`) for",
490 type(value),
500 type(value),
491 f"not allowed in {context.evaluation} mode",
501 f"not allowed in {context.evaluation} mode",
492 )
502 )
493 raise ValueError("Unhandled unary operation:", node.op)
494 if isinstance(node, ast.Subscript):
503 if isinstance(node, ast.Subscript):
495 value = eval_node(node.value, context)
504 value = eval_node(node.value, context)
496 slice_ = eval_node(node.slice, context)
505 slice_ = eval_node(node.slice, context)
497 if policy.can_get_item(value, slice_):
506 if policy.can_get_item(value, slice_):
498 return value[slice_]
507 return value[slice_]
499 raise GuardRejection(
508 raise GuardRejection(
500 "Subscript access (`__getitem__`) for",
509 "Subscript access (`__getitem__`) for",
501 type(value), # not joined to avoid calling `repr`
510 type(value), # not joined to avoid calling `repr`
502 f" not allowed in {context.evaluation} mode",
511 f" not allowed in {context.evaluation} mode",
503 )
512 )
504 if isinstance(node, ast.Name):
513 if isinstance(node, ast.Name):
505 if policy.allow_locals_access and node.id in context.locals:
514 if policy.allow_locals_access and node.id in context.locals:
506 return context.locals[node.id]
515 return context.locals[node.id]
507 if policy.allow_globals_access and node.id in context.globals:
516 if policy.allow_globals_access and node.id in context.globals:
508 return context.globals[node.id]
517 return context.globals[node.id]
509 if policy.allow_builtins_access and hasattr(builtins, node.id):
518 if policy.allow_builtins_access and hasattr(builtins, node.id):
510 # note: do not use __builtins__, it is implementation detail of Python
519 # note: do not use __builtins__, it is an implementation detail of CPython
511 return getattr(builtins, node.id)
520 return getattr(builtins, node.id)
512 if not policy.allow_globals_access and not policy.allow_locals_access:
521 if not policy.allow_globals_access and not policy.allow_locals_access:
513 raise GuardRejection(
522 raise GuardRejection(
514 f"Namespace access not allowed in {context.evaluation} mode"
523 f"Namespace access not allowed in {context.evaluation} mode"
515 )
524 )
516 else:
525 else:
517 raise NameError(f"{node.id} not found in locals nor globals")
526 raise NameError(f"{node.id} not found in locals, globals, nor builtins")
518 if isinstance(node, ast.Attribute):
527 if isinstance(node, ast.Attribute):
519 value = eval_node(node.value, context)
528 value = eval_node(node.value, context)
520 if policy.can_get_attr(value, node.attr):
529 if policy.can_get_attr(value, node.attr):
521 return getattr(value, node.attr)
530 return getattr(value, node.attr)
522 raise GuardRejection(
531 raise GuardRejection(
523 "Attribute access (`__getattr__`) for",
532 "Attribute access (`__getattr__`) for",
524 type(value), # not joined to avoid calling `repr`
533 type(value), # not joined to avoid calling `repr`
525 f"not allowed in {context.evaluation} mode",
534 f"not allowed in {context.evaluation} mode",
526 )
535 )
527 if isinstance(node, ast.IfExp):
536 if isinstance(node, ast.IfExp):
528 test = eval_node(node.test, context)
537 test = eval_node(node.test, context)
529 if test:
538 if test:
530 return eval_node(node.body, context)
539 return eval_node(node.body, context)
531 else:
540 else:
532 return eval_node(node.orelse, context)
541 return eval_node(node.orelse, context)
533 if isinstance(node, ast.Call):
542 if isinstance(node, ast.Call):
534 func = eval_node(node.func, context)
543 func = eval_node(node.func, context)
535 if policy.can_call(func) and not node.keywords:
544 if policy.can_call(func) and not node.keywords:
536 args = [eval_node(arg, context) for arg in node.args]
545 args = [eval_node(arg, context) for arg in node.args]
537 return func(*args)
546 return func(*args)
538 raise GuardRejection(
547 raise GuardRejection(
539 "Call for",
548 "Call for",
540 func, # not joined to avoid calling `repr`
549 func, # not joined to avoid calling `repr`
541 f"not allowed in {context.evaluation} mode",
550 f"not allowed in {context.evaluation} mode",
542 )
551 )
543 raise ValueError("Unhandled node", node)
552 raise ValueError("Unhandled node", ast.dump(node))
544
553
545
554
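`eval_node` walks the expression tree and dispatches each operator to its dunder instead of executing compiled bytecode, which is what gives the policies a chance to veto each step. A stripped-down sketch of that dispatch for binary operators (this toy `tiny_eval` is illustrative only and performs no guarding):

```python
import ast

# minimal operator table, mirroring the shape of BINARY_OP_DUNDERS
TINY_DUNDERS = {ast.Add: "__add__", ast.Mult: "__mul__"}


def tiny_eval(node):
    # recursive AST walk, shaped like eval_node but with no policy checks
    if isinstance(node, ast.Expression):
        return tiny_eval(node.body)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        left, right = tiny_eval(node.left), tiny_eval(node.right)
        dunder = TINY_DUNDERS[type(node.op)]
        # a real policy would consult can_operate(...) before dispatching
        return getattr(left, dunder)(right)
    raise ValueError("Unhandled node", ast.dump(node))


assert tiny_eval(ast.parse("2 + 3 * 4", mode="eval")) == 14
```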
546 SUPPORTED_EXTERNAL_GETITEM = {
555 SUPPORTED_EXTERNAL_GETITEM = {
547 ("pandas", "core", "indexing", "_iLocIndexer"),
556 ("pandas", "core", "indexing", "_iLocIndexer"),
548 ("pandas", "core", "indexing", "_LocIndexer"),
557 ("pandas", "core", "indexing", "_LocIndexer"),
549 ("pandas", "DataFrame"),
558 ("pandas", "DataFrame"),
550 ("pandas", "Series"),
559 ("pandas", "Series"),
551 ("numpy", "ndarray"),
560 ("numpy", "ndarray"),
552 ("numpy", "void"),
561 ("numpy", "void"),
553 }
562 }
554
563
564
555 BUILTIN_GETITEM: Set[InstancesHaveGetItem] = {
565 BUILTIN_GETITEM: Set[InstancesHaveGetItem] = {
556 dict,
566 dict,
557 str,
567 str,
558 bytes,
568 bytes,
559 list,
569 list,
560 tuple,
570 tuple,
561 collections.defaultdict,
571 collections.defaultdict,
562 collections.deque,
572 collections.deque,
563 collections.OrderedDict,
573 collections.OrderedDict,
564 collections.ChainMap,
574 collections.ChainMap,
565 collections.UserDict,
575 collections.UserDict,
566 collections.UserList,
576 collections.UserList,
567 collections.UserString,
577 collections.UserString,
568 _DummyNamedTuple,
578 _DummyNamedTuple,
569 _IdentitySubscript,
579 _IdentitySubscript,
570 }
580 }
571
581
572
582
573 def _list_methods(cls, source=None):
583 def _list_methods(cls, source=None):
574 """For use on immutable objects or with methods returning a copy"""
584 """For use on immutable objects or with methods returning a copy"""
575 return [getattr(cls, k) for k in (source if source else dir(cls))]
585 return [getattr(cls, k) for k in (source if source else dir(cls))]
576
586
577
587
578 dict_non_mutating_methods = ("copy", "keys", "values", "items")
588 dict_non_mutating_methods = ("copy", "keys", "values", "items")
579 list_non_mutating_methods = ("copy", "index", "count")
589 list_non_mutating_methods = ("copy", "index", "count")
580 set_non_mutating_methods = set(dir(set)) & set(dir(frozenset))
590 set_non_mutating_methods = set(dir(set)) & set(dir(frozenset))
581
591
582
592
583 dict_keys: Type[collections.abc.KeysView] = type({}.keys())
593 dict_keys: Type[collections.abc.KeysView] = type({}.keys())
584 method_descriptor: Any = type(list.copy)
594 method_descriptor: Any = type(list.copy)
585
595
596 NUMERICS = {int, float, complex}
597
586 ALLOWED_CALLS = {
598 ALLOWED_CALLS = {
587 bytes,
599 bytes,
588 *_list_methods(bytes),
600 *_list_methods(bytes),
589 dict,
601 dict,
     *_list_methods(dict, dict_non_mutating_methods),
     dict_keys.isdisjoint,
     list,
     *_list_methods(list, list_non_mutating_methods),
     set,
     *_list_methods(set, set_non_mutating_methods),
     frozenset,
     *_list_methods(frozenset),
     range,
     str,
     *_list_methods(str),
     tuple,
     *_list_methods(tuple),
+    *NUMERICS,
+    *[method for numeric_cls in NUMERICS for method in _list_methods(numeric_cls)],
     collections.deque,
     *_list_methods(collections.deque, list_non_mutating_methods),
     collections.defaultdict,
     *_list_methods(collections.defaultdict, dict_non_mutating_methods),
     collections.OrderedDict,
     *_list_methods(collections.OrderedDict, dict_non_mutating_methods),
     collections.UserDict,
     *_list_methods(collections.UserDict, dict_non_mutating_methods),
     collections.UserList,
     *_list_methods(collections.UserList, list_non_mutating_methods),
     collections.UserString,
     *_list_methods(collections.UserString, dir(str)),
     collections.Counter,
     *_list_methods(collections.Counter, dict_non_mutating_methods),
     collections.Counter.elements,
     collections.Counter.most_common,
 }
 
 BUILTIN_GETATTR: Set[MayHaveGetattr] = {
     *BUILTIN_GETITEM,
     set,
     frozenset,
     object,
     type,  # `type` handles a lot of generic cases, e.g. numbers as in `int.real`.
+    *NUMERICS,
     dict_keys,
     method_descriptor,
 }
 
 
-BUILTIN_OPERATIONS = {int, float, complex, *BUILTIN_GETATTR}
+BUILTIN_OPERATIONS = {*BUILTIN_GETATTR}
 
 EVALUATION_POLICIES = {
     "minimal": EvaluationPolicy(
         allow_builtins_access=True,
         allow_locals_access=False,
         allow_globals_access=False,
         allow_item_access=False,
         allow_attr_access=False,
         allowed_calls=set(),
         allow_any_calls=False,
         allow_all_operations=False,
     ),
     "limited": SelectivePolicy(
         # TODO:
         # - should reject binary and unary operations if custom methods would be dispatched
         allowed_getitem=BUILTIN_GETITEM,
         allowed_getitem_external=SUPPORTED_EXTERNAL_GETITEM,
         allowed_getattr=BUILTIN_GETATTR,
         allowed_getattr_external={
             # pandas Series/Frame implements custom `__getattr__`
             ("pandas", "DataFrame"),
             ("pandas", "Series"),
         },
         allowed_operations=BUILTIN_OPERATIONS,
         allow_builtins_access=True,
         allow_locals_access=True,
         allow_globals_access=True,
         allowed_calls=ALLOWED_CALLS,
     ),
     "unsafe": EvaluationPolicy(
         allow_builtins_access=True,
         allow_locals_access=True,
         allow_globals_access=True,
         allow_attr_access=True,
         allow_item_access=True,
         allow_any_calls=True,
         allow_all_operations=True,
     ),
 }
 
 
 __all__ = [
     "guarded_eval",
     "eval_node",
     "GuardRejection",
     "EvaluationContext",
     "_unbind_method",
 ]
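The policies above gate what the evaluator will execute. The underlying technique — parse the expression into an AST and reject any node type outside an allow-list before evaluating — can be shown in miniature. This is a simplified sketch with invented names (`tiny_guarded_eval`, `Rejected`), not IPython's actual implementation:

```python
import ast

# Side-effect-free expression nodes a "minimal"-style policy could accept:
# literals, containers, and arithmetic/boolean/comparison operators.
_SAFE_NODES = (
    ast.Expression, ast.Constant, ast.Tuple, ast.List, ast.Set, ast.Dict,
    ast.UnaryOp, ast.BinOp, ast.BoolOp, ast.Compare,
    ast.unaryop, ast.operator, ast.boolop, ast.cmpop, ast.Load,
)


class Rejected(Exception):
    """Raised instead of evaluating a disallowed construct."""


def tiny_guarded_eval(code: str):
    tree = ast.parse(code, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, _SAFE_NODES):
            raise Rejected(f"{type(node).__name__} is not allowed")
    return eval(compile(tree, "<guarded>", "eval"))
```

Calls, attribute access, and subscripts are rejected outright in this sketch; the real `SelectivePolicy` instead consults allow-lists such as `BUILTIN_GETATTR` and `ALLOWED_CALLS` above to admit known-safe operations.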
@@ -1,331 +1,492 @@
 from typing import NamedTuple
+from functools import partial
 from IPython.core.guarded_eval import (
     EvaluationContext,
     GuardRejection,
     guarded_eval,
     _unbind_method,
 )
 from IPython.testing import decorators as dec
 import pytest
 
 
-def limited(**kwargs):
-    return EvaluationContext(locals=kwargs, globals={}, evaluation="limited")
-
-
-def unsafe(**kwargs):
-    return EvaluationContext(locals=kwargs, globals={}, evaluation="unsafe")
+def create_context(evaluation: str, **kwargs):
+    return EvaluationContext(locals=kwargs, globals={}, evaluation=evaluation)
+
+
+forbidden = partial(create_context, "forbidden")
+minimal = partial(create_context, "minimal")
+limited = partial(create_context, "limited")
+unsafe = partial(create_context, "unsafe")
+dangerous = partial(create_context, "dangerous")
+
+LIMITED_OR_HIGHER = [limited, unsafe, dangerous]
+
+MINIMAL_OR_HIGHER = [minimal, *LIMITED_OR_HIGHER]
 
 
 @dec.skip_without("pandas")
 def test_pandas_series_iloc():
     import pandas as pd
 
     series = pd.Series([1], index=["a"])
     context = limited(data=series)
     assert guarded_eval("data.iloc[0]", context) == 1
 
 
 @dec.skip_without("pandas")
 def test_pandas_series():
     import pandas as pd
 
     context = limited(data=pd.Series([1], index=["a"]))
     assert guarded_eval('data["a"]', context) == 1
     with pytest.raises(KeyError):
         guarded_eval('data["c"]', context)
 
 
 @dec.skip_without("pandas")
 def test_pandas_bad_series():
     import pandas as pd
 
     class BadItemSeries(pd.Series):
         def __getitem__(self, key):
             return "CUSTOM_ITEM"
 
     class BadAttrSeries(pd.Series):
         def __getattr__(self, key):
             return "CUSTOM_ATTR"
 
     bad_series = BadItemSeries([1], index=["a"])
     context = limited(data=bad_series)
 
     with pytest.raises(GuardRejection):
         guarded_eval('data["a"]', context)
     with pytest.raises(GuardRejection):
         guarded_eval('data["c"]', context)
 
     # note: here result is a bit unexpected because
     # pandas `__getattr__` calls `__getitem__`;
     # FIXME - special case to handle it?
     assert guarded_eval("data.a", context) == "CUSTOM_ITEM"
 
     context = unsafe(data=bad_series)
     assert guarded_eval('data["a"]', context) == "CUSTOM_ITEM"
 
     bad_attr_series = BadAttrSeries([1], index=["a"])
     context = limited(data=bad_attr_series)
     assert guarded_eval('data["a"]', context) == 1
     with pytest.raises(GuardRejection):
         guarded_eval("data.a", context)
 
 
 @dec.skip_without("pandas")
 def test_pandas_dataframe_loc():
     import pandas as pd
     from pandas.testing import assert_series_equal
 
     data = pd.DataFrame([{"a": 1}])
     context = limited(data=data)
     assert_series_equal(guarded_eval('data.loc[:, "a"]', context), data["a"])
 
 
 def test_named_tuple():
     class GoodNamedTuple(NamedTuple):
         a: str
         pass
 
     class BadNamedTuple(NamedTuple):
         a: str
 
         def __getitem__(self, key):
             return None
 
     good = GoodNamedTuple(a="x")
     bad = BadNamedTuple(a="x")
 
     context = limited(data=good)
     assert guarded_eval("data[0]", context) == "x"
 
     context = limited(data=bad)
     with pytest.raises(GuardRejection):
         guarded_eval("data[0]", context)
 
 
 def test_dict():
     context = limited(data={"a": 1, "b": {"x": 2}, ("x", "y"): 3})
     assert guarded_eval('data["a"]', context) == 1
     assert guarded_eval('data["b"]', context) == {"x": 2}
     assert guarded_eval('data["b"]["x"]', context) == 2
     assert guarded_eval('data["x", "y"]', context) == 3
 
     assert guarded_eval("data.keys", context)
 
 
 def test_set():
     context = limited(data={"a", "b"})
     assert guarded_eval("data.difference", context)
 
 
 def test_list():
     context = limited(data=[1, 2, 3])
     assert guarded_eval("data[1]", context) == 2
     assert guarded_eval("data.copy", context)
 
 
 def test_dict_literal():
     context = limited()
     assert guarded_eval("{}", context) == {}
     assert guarded_eval('{"a": 1}', context) == {"a": 1}
 
 
 def test_list_literal():
     context = limited()
     assert guarded_eval("[]", context) == []
     assert guarded_eval('[1, "a"]', context) == [1, "a"]
 
 
 def test_set_literal():
     context = limited()
     assert guarded_eval("set()", context) == set()
     assert guarded_eval('{"a"}', context) == {"a"}
 
 
-def test_if_expression():
+def test_evaluates_if_expression():
     context = limited()
     assert guarded_eval("2 if True else 3", context) == 2
     assert guarded_eval("4 if False else 5", context) == 5
 
 
 def test_object():
     obj = object()
     context = limited(obj=obj)
     assert guarded_eval("obj.__dir__", context) == obj.__dir__
 
 
 @pytest.mark.parametrize(
     "code,expected",
     [
         ["int.numerator", int.numerator],
         ["float.is_integer", float.is_integer],
         ["complex.real", complex.real],
     ],
 )
 def test_number_attributes(code, expected):
     assert guarded_eval(code, limited()) == expected
 
 
 def test_method_descriptor():
     context = limited()
     assert guarded_eval("list.copy.__name__", context) == "copy"
 
 
 @pytest.mark.parametrize(
     "data,good,bad,expected",
     [
         [[1, 2, 3], "data.index(2)", "data.append(4)", 1],
         [{"a": 1}, "data.keys().isdisjoint({})", "data.update()", True],
     ],
 )
-def test_calls(data, good, bad, expected):
+def test_evaluates_calls(data, good, bad, expected):
     context = limited(data=data)
     assert guarded_eval(good, context) == expected
 
     with pytest.raises(GuardRejection):
         guarded_eval(bad, context)
 
 
 @pytest.mark.parametrize(
     "code,expected",
     [
         ["(1\n+\n1)", 2],
         ["list(range(10))[-1:]", [9]],
         ["list(range(20))[3:-2:3]", [3, 6, 9, 12, 15]],
     ],
 )
-def test_literals(code, expected):
-    context = limited()
-    assert guarded_eval(code, context) == expected
+@pytest.mark.parametrize("context", LIMITED_OR_HIGHER)
+def test_evaluates_complex_cases(code, expected, context):
+    assert guarded_eval(code, context()) == expected
+
+
+@pytest.mark.parametrize(
+    "code,expected",
+    [
+        ["1", 1],
+        ["1.0", 1.0],
+        ["0xdeedbeef", 0xDEEDBEEF],
+        ["True", True],
+        ["None", None],
+        ["{}", {}],
+        ["[]", []],
+    ],
+)
+@pytest.mark.parametrize("context", MINIMAL_OR_HIGHER)
+def test_evaluates_literals(code, expected, context):
+    assert guarded_eval(code, context()) == expected
 
 
 @pytest.mark.parametrize(
     "code,expected",
     [
         ["-5", -5],
         ["+5", +5],
         ["~5", -6],
     ],
 )
-def test_unary_operations(code, expected):
-    context = limited()
-    assert guarded_eval(code, context) == expected
+@pytest.mark.parametrize("context", LIMITED_OR_HIGHER)
+def test_evaluates_unary_operations(code, expected, context):
+    assert guarded_eval(code, context()) == expected
 
 
 @pytest.mark.parametrize(
     "code,expected",
     [
         ["1 + 1", 2],
         ["3 - 1", 2],
         ["2 * 3", 6],
         ["5 // 2", 2],
         ["5 / 2", 2.5],
         ["5**2", 25],
         ["2 >> 1", 1],
         ["2 << 1", 4],
         ["1 | 2", 3],
         ["1 & 1", 1],
         ["1 & 2", 0],
     ],
 )
-def test_binary_operations(code, expected):
-    context = limited()
-    assert guarded_eval(code, context) == expected
+@pytest.mark.parametrize("context", LIMITED_OR_HIGHER)
+def test_evaluates_binary_operations(code, expected, context):
+    assert guarded_eval(code, context()) == expected
 
 
 @pytest.mark.parametrize(
     "code,expected",
     [
         ["2 > 1", True],
         ["2 < 1", False],
         ["2 <= 1", False],
         ["2 <= 2", True],
         ["1 >= 2", False],
         ["2 >= 2", True],
         ["2 == 2", True],
         ["1 == 2", False],
         ["1 != 2", True],
         ["1 != 1", False],
         ["1 < 4 < 3", False],
         ["(1 < 4) < 3", True],
         ["4 > 3 > 2 > 1", True],
         ["4 > 3 > 2 > 9", False],
         ["1 < 2 < 3 < 4", True],
         ["9 < 2 < 3 < 4", False],
         ["1 < 2 > 1 > 0 > -1 < 1", True],
         ["1 in [1] in [[1]]", True],
         ["1 in [1] in [[2]]", False],
         ["1 in [1]", True],
         ["0 in [1]", False],
         ["1 not in [1]", False],
         ["0 not in [1]", True],
         ["True is True", True],
         ["False is False", True],
         ["True is False", False],
+        ["True is not True", False],
+        ["False is not True", True],
     ],
 )
-def test_comparisons(code, expected):
-    context = limited()
-    assert guarded_eval(code, context) == expected
+@pytest.mark.parametrize("context", LIMITED_OR_HIGHER)
+def test_evaluates_comparisons(code, expected, context):
+    assert guarded_eval(code, context()) == expected
+
+
+def test_guards_comparisons():
+    class GoodEq(int):
+        pass
+
+    class BadEq(int):
+        def __eq__(self, other):
+            assert False
+
+    context = limited(bad=BadEq(1), good=GoodEq(1))
+
+    with pytest.raises(GuardRejection):
+        guarded_eval("bad == 1", context)
+
+    with pytest.raises(GuardRejection):
+        guarded_eval("bad != 1", context)
+
+    with pytest.raises(GuardRejection):
+        guarded_eval("1 == bad", context)
+
+    with pytest.raises(GuardRejection):
+        guarded_eval("1 != bad", context)
+
+    assert guarded_eval("good == 1", context) is True
+    assert guarded_eval("good != 1", context) is False
+    assert guarded_eval("1 == good", context) is True
+    assert guarded_eval("1 != good", context) is False
+
+
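The comparison guard exercised by `test_guards_comparisons` exists because even `==` dispatches to user code. A standalone illustration (the `Audited` class is invented for this example):

```python
class Audited(int):
    """An int subclass whose equality operator has a visible side effect."""

    log = []

    def __eq__(self, other):
        Audited.log.append(("__eq__", other))  # arbitrary code runs during `==`
        return int(self) == int(other)

    # defining __eq__ would otherwise make instances unhashable
    __hash__ = int.__hash__


value = Audited(1)
result = value == 1  # a "harmless" comparison that executed Audited.__eq__
```

This is exactly why the limited policy raises `GuardRejection` for `BadEq` above: evaluating the comparison at all would already run the overridden method.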
+def test_guards_unary_operations():
+    class GoodOp(int):
+        pass
+
+    class BadOpInv(int):
+        def __inv__(self, other):
+            assert False
+
+    class BadOpInverse(int):
+        def __invert__(self, other):
+            assert False
+
+    context = limited(good=GoodOp(1), bad1=BadOpInv(1), bad2=BadOpInverse(1))
+
+    with pytest.raises(GuardRejection):
+        guarded_eval("~bad1", context)
+
+    with pytest.raises(GuardRejection):
+        guarded_eval("~bad2", context)
+
+
+def test_guards_binary_operations():
+    class GoodOp(int):
+        pass
+
+    class BadOp(int):
+        def __add__(self, other):
+            assert False
+
+    context = limited(good=GoodOp(1), bad=BadOp(1))
+
+    with pytest.raises(GuardRejection):
+        guarded_eval("1 + bad", context)
+
+    with pytest.raises(GuardRejection):
+        guarded_eval("bad + 1", context)
+
+    assert guarded_eval("good + 1", context) == 2
+    assert guarded_eval("1 + good", context) == 2
+
+
+def test_guards_attributes():
+    class GoodAttr(float):
+        pass
+
+    class BadAttr1(float):
+        def __getattr__(self, key):
+            assert False
+
+    class BadAttr2(float):
+        def __getattribute__(self, key):
+            assert False
+
+    context = limited(good=GoodAttr(0.5), bad1=BadAttr1(0.5), bad2=BadAttr2(0.5))
+
+    with pytest.raises(GuardRejection):
+        guarded_eval("bad1.as_integer_ratio", context)
+
+    with pytest.raises(GuardRejection):
+        guarded_eval("bad2.as_integer_ratio", context)
+
+    assert guarded_eval("good.as_integer_ratio()", context) == (1, 2)
+
+
+@pytest.mark.parametrize("context", MINIMAL_OR_HIGHER)
+def test_access_builtins(context):
+    assert guarded_eval("round", context()) == round
+
+
-def test_access_builtins():
-    context = limited()
-    assert guarded_eval("round", context) == round
+def test_access_builtins_fails():
+    context = limited()
+    with pytest.raises(NameError):
+        guarded_eval("this_is_not_builtin", context)
+
+
+def test_rejects_forbidden():
+    context = forbidden()
+    with pytest.raises(GuardRejection):
+        guarded_eval("1", context)
+
+
+def test_guards_locals_and_globals():
+    context = EvaluationContext(
+        locals={"local_a": "a"}, globals={"global_b": "b"}, evaluation="minimal"
+    )
+
+    with pytest.raises(GuardRejection):
+        guarded_eval("local_a", context)
+
+    with pytest.raises(GuardRejection):
+        guarded_eval("global_b", context)
+
+
+def test_access_locals_and_globals():
+    context = EvaluationContext(
+        locals={"local_a": "a"}, globals={"global_b": "b"}, evaluation="limited"
+    )
+    assert guarded_eval("local_a", context) == "a"
+    assert guarded_eval("global_b", context) == "b"
+
+
+@pytest.mark.parametrize(
+    "code",
+    ["def func(): pass", "class C: pass", "x = 1", "x += 1", "del x", "import ast"],
+)
+@pytest.mark.parametrize("context", [minimal(), limited(), unsafe()])
+def test_rejects_side_effect_syntax(code, context):
+    with pytest.raises(SyntaxError):
+        guarded_eval(code, context)
 
 
 def test_subscript():
     context = EvaluationContext(
         locals={}, globals={}, evaluation="limited", in_subscript=True
     )
     empty_slice = slice(None, None, None)
     assert guarded_eval("", context) == tuple()
     assert guarded_eval(":", context) == empty_slice
     assert guarded_eval("1:2:3", context) == slice(1, 2, 3)
     assert guarded_eval(':, "a"', context) == (empty_slice, "a")
 
 
 def test_unbind_method():
     class X(list):
         def index(self, k):
             return "CUSTOM"
 
     x = X()
     assert _unbind_method(x.index) is X.index
     assert _unbind_method([].index) is list.index
 
 
 def test_assumption_instance_attr_do_not_matter():
     """This is semi-specified in Python documentation.
 
     However, since the specification says 'not guaranteed
     to work' rather than 'is forbidden to work', future
     versions could invalidate this assumption. This test
     is meant to catch such a change if it ever comes true.
     """
 
     class T:
         def __getitem__(self, k):
             return "a"
 
         def __getattr__(self, k):
             return "a"
 
     t = T()
     t.__getitem__ = lambda f: "b"
     t.__getattr__ = lambda f: "b"
     assert t[1] == "a"
     assert t.a == "a"
 
 
 def test_assumption_named_tuples_share_getitem():
     """Check assumption on named tuples sharing __getitem__"""
     from typing import NamedTuple
 
     class A(NamedTuple):
         pass
 
     class B(NamedTuple):
         pass
 
     assert A.__getitem__ == B.__getitem__
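The assumption tests above matter because the guard decides whether a call is safe by resolving a bound method back to the function on its class, as `test_unbind_method` shows. That resolution can be approximated like this (a sketch relying on the method exposing `__self__` and `__name__`; not IPython's exact `_unbind_method`):

```python
def unbind_method(method):
    """Return the class-level function behind a bound method, or None."""
    owner = getattr(method, "__self__", None)
    if owner is None:
        return None  # not a bound method
    return getattr(type(owner), method.__name__, None)


# Dispatch check in the spirit of the guard: is this *really* list.append?
assert unbind_method([].append) is list.append


class Shadow(list):
    def append(self, item):  # overridden: must not be trusted as list.append
        pass


# The override is detected because the lookup goes through type(owner).
assert unbind_method(Shadow().append) is Shadow.append
assert unbind_method(Shadow().append) is not list.append
```

Looking the method up on `type(owner)` rather than on the instance is safe precisely because of the assumption verified above: special-method dispatch ignores instance attributes, so an attacker cannot shadow the check by patching the object itself.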