Polish documentation, hide private functions
krassowski
@@ -1,3302 +1,3320 @@
"""Completion for IPython.

This module started as a fork of the rlcompleter module in the Python standard
library. The original enhancements made to rlcompleter have been sent
upstream and were accepted as of Python 2.3.

This module now supports a wide variety of completion mechanisms, both for
normal classic Python code and for IPython-specific syntax such as magics.

Latex and Unicode completion
============================

IPython and compatible frontends not only can complete your code, but can help
you to input a wide range of characters. In particular we allow you to insert
a unicode character using the tab completion mechanism.

Forward latex/unicode completion
--------------------------------

Forward completion allows you to easily type a unicode character using its latex
name, or unicode long description. To do so type a backslash followed by the
relevant name and press tab:

Using latex completion:

.. code::

    \\alpha<tab>
    α

or using unicode completion:

.. code::

    \\GREEK SMALL LETTER ALPHA<tab>
    α

Only valid Python identifiers will complete. Combining characters (like arrows or
dots) are also available; unlike latex, they need to be put after their
counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.

Some browsers are known to display combining characters incorrectly.

Backward latex completion
-------------------------

It is sometimes challenging to know how to type a character; if you are using
IPython, or any compatible frontend, you can prepend a backslash to the
character and press :kbd:`Tab` to expand it to its latex form.

.. code::

    \\α<tab>
    \\alpha

Both forward and backward completions can be deactivated by setting the
:any:`Completer.backslash_combining_completions` option to ``False``.


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions both using static analysis of the code, and dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and
will raise unless used in a :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try
this module in debug mode (start IPython with ``--Completer.debug=True``) in
order to have extra logging information if :any:`jedi` is crashing, or if the
current IPython completer pending deprecations are returning results not yet
handled by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work
without having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is an integer without
executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
option.

Be sure to update :any:`jedi` to the latest stable version, or try the
current development version, to get better completions.

Matchers
========

All completion routines are implemented using the unified *Matchers* API.
The matchers API is provisional and subject to change without notice.

The built-in matchers include:

- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
- :any:`IPCompleter.magic_matcher`: completions for magics,
- :any:`IPCompleter.unicode_name_matcher`,
  :any:`IPCompleter.fwd_unicode_matcher`
  and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
- :any:`IPCompleter.file_matcher`: paths to files and directories,
- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
  implementation in :any:`InteractiveShell`, which uses the IPython hooks system
  (`complete_command`) with string dispatch (including regular expressions).
  Unlike other matchers, ``custom_completer_matcher`` will not suppress
  Jedi results, to match behaviour in earlier IPython versions.

Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.

Matcher API
-----------

Simplifying some details, the ``Matcher`` interface can be described as:

.. code-block::

    MatcherAPIv1 = Callable[[str], list[str]]
    MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]

    Matcher = MatcherAPIv1 | MatcherAPIv2

``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
and remains supported as the simplest way of generating completions. It is also
currently the only API supported by the IPython hooks system (`complete_command`).

To distinguish between matcher versions, the ``matcher_api_version`` attribute
is used. More precisely, the API allows omitting ``matcher_api_version`` for
v1 matchers, and requires a literal ``2`` for v2 matchers.

Once the API stabilises, future versions may relax the requirement to specify
``matcher_api_version`` by switching to :any:`functools.singledispatch`;
therefore please do not rely on the presence of ``matcher_api_version`` for
any purpose.

Suppression of competing matchers
---------------------------------

By default results from all matchers are combined, in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results of
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a list of identifiers of matchers which
should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.

The suppression behaviour is user-configurable via
:any:`IPCompleter.suppress_competing_matchers`.
"""
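To make the v1 interface described above concrete, here is a minimal sketch of a v1-style matcher: a plain callable from the token being completed to a list of candidate strings. ``color_matcher`` is a made-up illustration, not an IPython built-in.

```python
def color_matcher(text: str) -> list[str]:
    """A MatcherAPIv1-style matcher: take the token being completed,
    return the candidate completions that start with it."""
    colors = ["red", "green", "blue"]
    return [c for c in colors if c.startswith(text)]

print(color_matcher("re"))  # ['red']
```

As the docstring notes, such a callable could then be appended to the ``IPCompleter.custom_matchers`` list; since it omits ``matcher_api_version``, it is treated as a v1 matcher.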


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org

from __future__ import annotations
import builtins as builtin_mod
import enum
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import tokenize
import time
import unicodedata
import uuid
import warnings
from ast import literal_eval
from collections import defaultdict
from contextlib import contextmanager
from dataclasses import dataclass
from functools import cached_property, partial
from types import SimpleNamespace
from typing import (
    Iterable,
    Iterator,
    List,
    Tuple,
    Union,
    Any,
    Sequence,
    Dict,
    Optional,
    TYPE_CHECKING,
    Set,
    Sized,
    TypeVar,
    Literal,
)

from IPython.core.guarded_eval import guarded_eval, EvaluationContext
from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
from IPython.utils.decorators import sphinx_options
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.docs import GENERATING_DOCUMENTATION
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import (
    Bool,
    Enum,
    Int,
    List as ListTrait,
    Unicode,
    Dict as DictTrait,
    Union as UnionTrait,
    observe,
)
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
__skip_doctest__ = True


try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False


if TYPE_CHECKING or GENERATING_DOCUMENTATION:
    from typing import cast
    from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
else:
    from typing import Generic

    def cast(type_, obj):
        """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
        return obj

    # do not require at runtime
    NotRequired = Tuple  # requires Python >=3.11
    TypedDict = Dict  # by extension of `NotRequired` requires 3.11 too
    Protocol = object  # requires Python >=3.8
    TypeAlias = Any  # requires Python >=3.10
    TypeGuard = Generic  # requires Python >=3.10
if GENERATING_DOCUMENTATION:
    from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
# -----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be finer
# grained, but is it worth it for performance? While unicode has characters in
# the range 0-0x110000, we seem to have names for only about 10% of those
# (131808 as I write this). With the ranges below we cover them all, with a
# density of ~67%; the biggest next gap we could consider adds only about 1%
# density, and there are 600 gaps that would need hard coding.
_UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

# sentinel value to signal lack of a match
not_found = object()

class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in a :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things()  # works

    >>> completer.do_experimental_things()  # raises.

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should
        report a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements
        on any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False
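A quick usage sketch of the quote check (the helper is restated so the snippet runs standalone):

```python
def has_open_quotes(s):
    # Mirrors `has_open_quotes` above: the quote character with an odd
    # count is considered open, checking " before '.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    return False

print(has_open_quotes('print("hello'))  # '"'   -- unterminated double quote
print(has_open_quotes("it's fine"))     # "'"   -- lone apostrophe counts as open
print(has_open_quotes('"done"'))        # False -- both quotes are balanced
```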


def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s
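The escaping is platform-dependent: Windows wraps the whole name in double quotes, while POSIX backslash-escapes each protectable character. A standalone sketch (restating the helper and its non-Windows `PROTECTABLES` set):

```python
import sys

PROTECTABLES = ' ' if sys.platform == 'win32' else ' ()[]{}?=\\|;:\'#*"^&'

def protect_filename(s, protectables=PROTECTABLES):
    # Mirrors `protect_filename` above: quote on Windows,
    # backslash-escape elsewhere.
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        return "".join(("\\" + c if c in protectables else c) for c in s)
    return s

# POSIX: 'my\ file.txt'; Windows: '"my file.txt"'
print(protect_filename("my file.txt"))
```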


def expand_user(path: str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val


def compress_user(path: str, tilde_expand: bool, tilde_val: str) -> str:
    """Does the opposite of expand_user, with its outputs."""
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path
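The two helpers form a round trip: the completer expands ``~`` so it can match real paths, then restores the original ``~`` in the completions it shows. A standalone sketch (both functions restated so the snippet runs on its own):

```python
import os
from typing import Tuple

def expand_user(path: str) -> Tuple[str, bool, str]:
    # Mirrors `expand_user` above.
    tilde_expand = False
    tilde_val = ''
    newpath = path
    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        # Strip the suffix after '~' to recover what '~' expanded to.
        tilde_val = newpath[:-rest] if rest else newpath
    return newpath, tilde_expand, tilde_val

def compress_user(path: str, tilde_expand: bool, tilde_val: str) -> str:
    # Mirrors `compress_user` above.
    return path.replace(tilde_val, '~') if tilde_expand else path

# Expand for matching, then restore the '~' for display.
newpath, expanded, val = expand_user('~/notes.txt')
assert compress_user(newpath, expanded, val) == '~/notes.txt'
```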


def completions_sorting_key(word):
    """key for sorting completions

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in alphabetical order
      by their name
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if not "%" in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if not "%" in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2
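The effect of the key is easiest to see by sorting a mixed list: magics interleave alphabetically by their bare name, while underscore-prefixed names sink to the end. A standalone sketch (the key function is restated so the snippet runs on its own):

```python
def completions_sorting_key(word):
    # Mirrors `completions_sorting_key` above: (underscore priority,
    # magic-stripped word, magic priority).
    prio1, prio2 = 0, 0
    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1
    if word.endswith('='):
        prio1 = -1
    if word.startswith('%%'):
        if not "%" in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if not "%" in word[1:]:
            word = word[1:]
            prio2 = 1
    return prio1, word, prio2

words = ['_private', 'apple', '%%timeit', '%magic', '__dunder']
print(sorted(words, key=completions_sorting_key))
# ['apple', '%magic', '%%timeit', '_private', '__dunder']
```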


class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0, so it should likely be removed for 7.0.
    """

    def __init__(self, name):
        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ""
        self._origin = "fake"
        self.text = "crashed"

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


_JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
486
486
487
487
488 class Completion:
488 class Completion:
489 """
489 """
490 Completion object used and returned by IPython completers.
490 Completion object used and returned by IPython completers.
491
491
492 .. warning::
492 .. warning::
493
493
494 Unstable
494 Unstable
495
495
496 This function is unstable, API may change without warning.
496 This function is unstable, API may change without warning.
497 It will also raise unless used in a proper context manager.
497 It will also raise unless used in a proper context manager.
498
498
499 This acts as a middle-ground :any:`Completion` object between the
499 This acts as a middle-ground :any:`Completion` object between the
500 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
500 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
501 object. While Jedi needs a lot of information about the evaluator and how the
501 object. While Jedi needs a lot of information about the evaluator and how the
502 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
502 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
503 need user-facing information.
503 need user-facing information.
504
504
505 - Which range should be replaced by what.
505 - Which range should be replaced by what.
506 - Some metadata (like the completion type), or meta information to be
506 - Some metadata (like the completion type), or meta information to be
507 displayed to the user.
507 displayed to the user.
508
508
509 For debugging purposes we can also store the origin of the completion (``jedi``,
509 For debugging purposes we can also store the origin of the completion (``jedi``,
510 ``IPython.python_matches``, ``IPython.magics_matches``...).
510 ``IPython.python_matches``, ``IPython.magics_matches``...).
511 """
511 """
512
512
513 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
513 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
514
514
515 def __init__(
515 def __init__(
516 self,
516 self,
517 start: int,
517 start: int,
518 end: int,
518 end: int,
519 text: str,
519 text: str,
520 *,
520 *,
521 type: Optional[str] = None,
521 type: Optional[str] = None,
522 _origin="",
522 _origin="",
523 signature="",
523 signature="",
524 ) -> None:
524 ) -> None:
525 warnings.warn(
525 warnings.warn(
526 "``Completion`` is a provisional API (as of IPython 6.0). "
526 "``Completion`` is a provisional API (as of IPython 6.0). "
527 "It may change without warnings. "
527 "It may change without warnings. "
528 "Use in corresponding context manager.",
528 "Use in corresponding context manager.",
529 category=ProvisionalCompleterWarning,
529 category=ProvisionalCompleterWarning,
530 stacklevel=2,
530 stacklevel=2,
531 )
531 )
532
532
533 self.start = start
533 self.start = start
534 self.end = end
534 self.end = end
535 self.text = text
535 self.text = text
536 self.type = type
536 self.type = type
537 self.signature = signature
537 self.signature = signature
538 self._origin = _origin
538 self._origin = _origin
539
539
540 def __repr__(self):
540 def __repr__(self):
541 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
541 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
542 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
542 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
543
543
544 def __eq__(self, other) -> bool:
544 def __eq__(self, other) -> bool:
545 """
545 """
546 Equality and hash do not take the type into account (as some completers
546 Equality and hash do not take the type into account (as some completers
547 may not be able to infer the type), but are used to (partially)
547 may not be able to infer the type), but are used to (partially)
548 de-duplicate completions.
548 de-duplicate completions.
549
549
550 Completely de-duplicating completions is a bit trickier than just
550 Completely de-duplicating completions is a bit trickier than just
551 comparing them, as it depends on the surrounding text, of which
551 comparing them, as it depends on the surrounding text, of which
552 Completions are not aware.
552 Completions are not aware.
553 """
553 """
554 return self.start == other.start and \
554 return self.start == other.start and \
555 self.end == other.end and \
555 self.end == other.end and \
556 self.text == other.text
556 self.text == other.text
557
557
558 def __hash__(self):
558 def __hash__(self):
559 return hash((self.start, self.end, self.text))
559 return hash((self.start, self.end, self.text))
560
560
561
561
562 class SimpleCompletion:
562 class SimpleCompletion:
563 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
563 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
564
564
565 .. warning::
565 .. warning::
566
566
567 Provisional
567 Provisional
568
568
569 This class is used to describe the currently supported attributes of
569 This class is used to describe the currently supported attributes of
570 simple completion items, and any additional implementation details
570 simple completion items, and any additional implementation details
571 should not be relied on. Additional attributes may be included in
571 should not be relied on. Additional attributes may be included in
572 future versions, and the meaning of text disambiguated from the current
572 future versions, and the meaning of text disambiguated from the current
573 dual meaning of "text to insert" and "text to use as a label".
573 dual meaning of "text to insert" and "text to use as a label".
574 """
574 """
575
575
576 __slots__ = ["text", "type"]
576 __slots__ = ["text", "type"]
577
577
578 def __init__(self, text: str, *, type: Optional[str] = None):
578 def __init__(self, text: str, *, type: Optional[str] = None):
579 self.text = text
579 self.text = text
580 self.type = type
580 self.type = type
581
581
582 def __repr__(self):
582 def __repr__(self):
583 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
583 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
584
584
585
585
586 class _MatcherResultBase(TypedDict):
586 class _MatcherResultBase(TypedDict):
587 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
587 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
588
588
589 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
589 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
590 matched_fragment: NotRequired[str]
590 matched_fragment: NotRequired[str]
591
591
592 #: Whether to suppress results from all other matchers (True), some
592 #: Whether to suppress results from all other matchers (True), some
593 #: matchers (set of identifiers) or none (False); default is False.
593 #: matchers (set of identifiers) or none (False); default is False.
594 suppress: NotRequired[Union[bool, Set[str]]]
594 suppress: NotRequired[Union[bool, Set[str]]]
595
595
596 #: Identifiers of matchers which should NOT be suppressed when this matcher
596 #: Identifiers of matchers which should NOT be suppressed when this matcher
597 #: requests to suppress all other matchers; defaults to an empty set.
597 #: requests to suppress all other matchers; defaults to an empty set.
598 do_not_suppress: NotRequired[Set[str]]
598 do_not_suppress: NotRequired[Set[str]]
599
599
600 #: Are completions already ordered and should be left as-is? default is False.
600 #: Are completions already ordered and should be left as-is? default is False.
601 ordered: NotRequired[bool]
601 ordered: NotRequired[bool]
602
602
603
603
604 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
604 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
605 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
605 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
606 """Result of new-style completion matcher."""
606 """Result of new-style completion matcher."""
607
607
608 # note: TypedDict is added again to the inheritance chain
608 # note: TypedDict is added again to the inheritance chain
609 # in order to get __orig_bases__ for documentation
609 # in order to get __orig_bases__ for documentation
610
610
611 #: List of candidate completions
611 #: List of candidate completions
612 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
612 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
613
613
614
614
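For illustration, a v2 matcher result is just a dictionary of this shape. The sketch below is self-contained: ``MiniCompletion`` is a hypothetical minimal stand-in for ``SimpleCompletion``, and the word list is invented.

```python
# Minimal sketch of building a SimpleMatcherResult-shaped dictionary.
# `MiniCompletion` is a hypothetical stand-in for SimpleCompletion.
class MiniCompletion:
    def __init__(self, text, *, type=None):
        self.text = text
        self.type = type

def keyword_matcher_result(token):
    words = ["alpha", "alias", "beta"]
    return {
        # candidate completions for the given token
        "completions": [MiniCompletion(w, type="keyword")
                        for w in words if w.startswith(token)],
        # only this suffix of the token will be replaced
        "matched_fragment": token,
        # do not suppress results from other matchers
        "suppress": False,
    }

result = keyword_matcher_result("al")
print([c.text for c in result["completions"]])  # ['alpha', 'alias']
```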
615 class _JediMatcherResult(_MatcherResultBase):
615 class _JediMatcherResult(_MatcherResultBase):
616 """Matching result returned by Jedi (will be processed differently)"""
616 """Matching result returned by Jedi (will be processed differently)"""
617
617
618 #: list of candidate completions
618 #: list of candidate completions
619 completions: Iterator[_JediCompletionLike]
619 completions: Iterator[_JediCompletionLike]
620
620
621
621
622 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
622 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
623 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
623 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
624
624
625
625
626 @dataclass
626 @dataclass
627 class CompletionContext:
627 class CompletionContext:
628 """Completion context provided as an argument to matchers in the Matcher API v2."""
628 """Completion context provided as an argument to matchers in the Matcher API v2."""
629
629
630 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
630 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
631 # which was not explicitly visible as an argument of the matcher, making any refactor
631 # which was not explicitly visible as an argument of the matcher, making any refactor
632 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
632 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
633 # from the completer, and make substituting them in sub-classes easier.
633 # from the completer, and make substituting them in sub-classes easier.
634
634
635 #: Relevant fragment of code directly preceding the cursor.
635 #: Relevant fragment of code directly preceding the cursor.
636 #: The extraction of token is implemented via splitter heuristic
636 #: The extraction of token is implemented via splitter heuristic
637 #: (following readline behaviour for legacy reasons), which is user configurable
637 #: (following readline behaviour for legacy reasons), which is user configurable
638 #: (by switching the greedy mode).
638 #: (by switching the greedy mode).
639 token: str
639 token: str
640
640
641 #: The full available content of the editor or buffer
641 #: The full available content of the editor or buffer
642 full_text: str
642 full_text: str
643
643
644 #: Cursor position in the line (the same for ``full_text`` and ``text``).
644 #: Cursor position in the line (the same for ``full_text`` and ``text``).
645 cursor_position: int
645 cursor_position: int
646
646
647 #: Cursor line in ``full_text``.
647 #: Cursor line in ``full_text``.
648 cursor_line: int
648 cursor_line: int
649
649
650 #: The maximum number of completions that will be used downstream.
650 #: The maximum number of completions that will be used downstream.
651 #: Matchers can use this information to abort early.
651 #: Matchers can use this information to abort early.
652 #: The built-in Jedi matcher is currently exempt from this limit.
652 #: The built-in Jedi matcher is currently exempt from this limit.
653 # If not given, return all possible completions.
653 # If not given, return all possible completions.
654 limit: Optional[int]
654 limit: Optional[int]
655
655
656 @cached_property
656 @cached_property
657 def text_until_cursor(self) -> str:
657 def text_until_cursor(self) -> str:
658 return self.line_with_cursor[: self.cursor_position]
658 return self.line_with_cursor[: self.cursor_position]
659
659
660 @cached_property
660 @cached_property
661 def line_with_cursor(self) -> str:
661 def line_with_cursor(self) -> str:
662 return self.full_text.split("\n")[self.cursor_line]
662 return self.full_text.split("\n")[self.cursor_line]
663
663
664
664
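As a standalone sketch (re-implementing only the two cached properties above for illustration; ``MiniContext`` and the sample buffer are hypothetical):

```python
from dataclasses import dataclass
from functools import cached_property

# Standalone sketch of CompletionContext's derived properties.
@dataclass
class MiniContext:
    full_text: str        # whole buffer content
    cursor_position: int  # column of the cursor in its line
    cursor_line: int      # 0-based line index of the cursor

    @cached_property
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]

    @cached_property
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

ctx = MiniContext(full_text="x = 1\nprint(x", cursor_position=5, cursor_line=1)
print(ctx.text_until_cursor)  # print
```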
665 #: Matcher results for API v2.
665 #: Matcher results for API v2.
666 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
666 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
667
667
668
668
669 class _MatcherAPIv1Base(Protocol):
669 class _MatcherAPIv1Base(Protocol):
670 def __call__(self, text: str) -> List[str]:
670 def __call__(self, text: str) -> List[str]:
671 """Call signature."""
671 """Call signature."""
672 ...
672 ...
673
673
674 #: Used to construct the default matcher identifier
674 #: Used to construct the default matcher identifier
675 __qualname__: str
675 __qualname__: str
676
676
677
677
678 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
678 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
679 #: API version
679 #: API version
680 matcher_api_version: Optional[Literal[1]]
680 matcher_api_version: Optional[Literal[1]]
681
681
682 def __call__(self, text: str) -> List[str]:
682 def __call__(self, text: str) -> List[str]:
683 """Call signature."""
683 """Call signature."""
684 ...
684 ...
685
685
686
686
687 #: Protocol describing Matcher API v1.
687 #: Protocol describing Matcher API v1.
688 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
688 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
689
689
690
690
691 class MatcherAPIv2(Protocol):
691 class MatcherAPIv2(Protocol):
692 """Protocol describing Matcher API v2."""
692 """Protocol describing Matcher API v2."""
693
693
694 #: API version
694 #: API version
695 matcher_api_version: Literal[2] = 2
695 matcher_api_version: Literal[2] = 2
696
696
697 def __call__(self, context: CompletionContext) -> MatcherResult:
697 def __call__(self, context: CompletionContext) -> MatcherResult:
698 """Call signature."""
698 """Call signature."""
699 ...
699 ...
700
700
701 #: Used to construct the default matcher identifier
701 #: Used to construct the default matcher identifier
702 __qualname__: str
702 __qualname__: str
703
703
704
704
705 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
705 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
706
706
707
707
708 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
708 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
709 api_version = _get_matcher_api_version(matcher)
709 api_version = _get_matcher_api_version(matcher)
710 return api_version == 1
710 return api_version == 1
711
711
712
712
713 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
713 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
714 api_version = _get_matcher_api_version(matcher)
714 api_version = _get_matcher_api_version(matcher)
715 return api_version == 2
715 return api_version == 2
716
716
717
717
718 def _is_sizable(value: Any) -> TypeGuard[Sized]:
718 def _is_sizable(value: Any) -> TypeGuard[Sized]:
719 """Determines whether objects is sizable"""
719 """Determines whether objects is sizable"""
720 return hasattr(value, "__len__")
720 return hasattr(value, "__len__")
721
721
722
722
723 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
723 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
724 """Determines whether objects is sizable"""
724 """Determines whether objects is sizable"""
725 return hasattr(value, "__next__")
725 return hasattr(value, "__next__")
726
726
727
727
728 def has_any_completions(result: MatcherResult) -> bool:
728 def has_any_completions(result: MatcherResult) -> bool:
729 """Check if any result includes any completions."""
729 """Check if any result includes any completions."""
730 completions = result["completions"]
730 completions = result["completions"]
731 if _is_sizable(completions):
731 if _is_sizable(completions):
732 return len(completions) != 0
732 return len(completions) != 0
733 if _is_iterator(completions):
733 if _is_iterator(completions):
734 try:
734 try:
735 old_iterator = completions
735 old_iterator = completions
736 first = next(old_iterator)
736 first = next(old_iterator)
737 result["completions"] = cast(
737 result["completions"] = cast(
738 Iterator[SimpleCompletion],
738 Iterator[SimpleCompletion],
739 itertools.chain([first], old_iterator),
739 itertools.chain([first], old_iterator),
740 )
740 )
741 return True
741 return True
742 except StopIteration:
742 except StopIteration:
743 return False
743 return False
744 raise ValueError(
744 raise ValueError(
745 "Completions returned by matcher need to be an Iterator or a Sizable"
745 "Completions returned by matcher need to be an Iterator or a Sizable"
746 )
746 )
747
747
748
748
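The iterator branch above has to peek one item and then put it back so no completion is lost; that pattern can be sketched standalone (hypothetical ``has_any`` helper, stdlib only):

```python
import itertools

# Standalone sketch of the peek-and-restore pattern used for
# iterator-valued completion results.
def has_any(result):
    completions = result["completions"]
    if hasattr(completions, "__len__"):       # sizable: len() is cheap
        return len(completions) != 0
    if hasattr(completions, "__next__"):      # iterator: peek one item
        try:
            first = next(completions)
        except StopIteration:
            return False
        # chain the peeked item back so no completion is lost
        result["completions"] = itertools.chain([first], completions)
        return True
    raise ValueError("completions must be an Iterator or Sizable")

r = {"completions": iter(["a", "b"])}
print(has_any(r), list(r["completions"]))  # True ['a', 'b']
```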
749 def completion_matcher(
749 def completion_matcher(
750 *,
750 *,
751 priority: Optional[float] = None,
751 priority: Optional[float] = None,
752 identifier: Optional[str] = None,
752 identifier: Optional[str] = None,
753 api_version: int = 1,
753 api_version: int = 1,
754 ):
754 ):
755 """Adds attributes describing the matcher.
755 """Adds attributes describing the matcher.
756
756
757 Parameters
757 Parameters
758 ----------
758 ----------
759 priority : Optional[float]
759 priority : Optional[float]
760 The priority of the matcher, determines the order of execution of matchers.
760 The priority of the matcher, determines the order of execution of matchers.
761 Higher priority means that the matcher will be executed first. Defaults to 0.
761 Higher priority means that the matcher will be executed first. Defaults to 0.
762 identifier : Optional[str]
762 identifier : Optional[str]
763 identifier of the matcher, allowing users to modify the behaviour via traitlets;
763 identifier of the matcher, allowing users to modify the behaviour via traitlets;
764 also used for debugging (it will be passed as ``origin`` with the completions).
764 also used for debugging (it will be passed as ``origin`` with the completions).
765
765
766 Defaults to matcher function's ``__qualname__`` (for example,
766 Defaults to matcher function's ``__qualname__`` (for example,
767 ``IPCompleter.file_matcher`` for the built-in matcher defined
767 ``IPCompleter.file_matcher`` for the built-in matcher defined
768 as a ``file_matcher`` method of the ``IPCompleter`` class).
768 as a ``file_matcher`` method of the ``IPCompleter`` class).
769 api_version: Optional[int]
769 api_version: Optional[int]
770 version of the Matcher API used by this matcher.
770 version of the Matcher API used by this matcher.
771 Currently supported values are 1 and 2.
771 Currently supported values are 1 and 2.
772 Defaults to 1.
772 Defaults to 1.
773 """
773 """
774
774
775 def wrapper(func: Matcher):
775 def wrapper(func: Matcher):
776 func.matcher_priority = priority or 0 # type: ignore
776 func.matcher_priority = priority or 0 # type: ignore
777 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
777 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
778 func.matcher_api_version = api_version # type: ignore
778 func.matcher_api_version = api_version # type: ignore
779 if TYPE_CHECKING:
779 if TYPE_CHECKING:
780 if api_version == 1:
780 if api_version == 1:
781 func = cast(MatcherAPIv1, func)
781 func = cast(MatcherAPIv1, func)
782 elif api_version == 2:
782 elif api_version == 2:
783 func = cast(MatcherAPIv2, func)
783 func = cast(MatcherAPIv2, func)
784 return func
784 return func
785
785
786 return wrapper
786 return wrapper
787
787
788
788
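Usage of the decorator can be sketched as follows. To keep the sketch self-contained, the decorator is re-implemented minimally here, and the matcher body and identifier are hypothetical:

```python
# Minimal re-implementation of completion_matcher for illustration:
# it only attaches metadata attributes to the decorated function.
def completion_matcher(*, priority=None, identifier=None, api_version=1):
    def wrapper(func):
        func.matcher_priority = priority or 0
        func.matcher_identifier = identifier or func.__qualname__
        func.matcher_api_version = api_version
        return func
    return wrapper

@completion_matcher(priority=10, identifier="my_ext.color_matcher")
def color_matcher(text):
    """API v1 matcher: takes the token, returns a list of strings."""
    return [c for c in ("red", "green", "blue") if c.startswith(text)]

print(color_matcher.matcher_identifier, color_matcher("gr"))
# my_ext.color_matcher ['green']
```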
789 def _get_matcher_priority(matcher: Matcher):
789 def _get_matcher_priority(matcher: Matcher):
790 return getattr(matcher, "matcher_priority", 0)
790 return getattr(matcher, "matcher_priority", 0)
791
791
792
792
793 def _get_matcher_id(matcher: Matcher):
793 def _get_matcher_id(matcher: Matcher):
794 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
794 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
795
795
796
796
797 def _get_matcher_api_version(matcher):
797 def _get_matcher_api_version(matcher):
798 return getattr(matcher, "matcher_api_version", 1)
798 return getattr(matcher, "matcher_api_version", 1)
799
799
800
800
801 context_matcher = partial(completion_matcher, api_version=2)
801 context_matcher = partial(completion_matcher, api_version=2)
802
802
803
803
804 _IC = Iterable[Completion]
804 _IC = Iterable[Completion]
805
805
806
806
807 def _deduplicate_completions(text: str, completions: _IC) -> _IC:
807 def _deduplicate_completions(text: str, completions: _IC) -> _IC:
808 """
808 """
809 Deduplicate a set of completions.
809 Deduplicate a set of completions.
810
810
811 .. warning::
811 .. warning::
812
812
813 Unstable
813 Unstable
814
814
815 This function is unstable, API may change without warning.
815 This function is unstable, API may change without warning.
816
816
817 Parameters
817 Parameters
818 ----------
818 ----------
819 text : str
819 text : str
820 text that should be completed.
820 text that should be completed.
821 completions : Iterator[Completion]
821 completions : Iterator[Completion]
822 iterator over the completions to deduplicate
822 iterator over the completions to deduplicate
823
823
824 Yields
824 Yields
825 ------
825 ------
826 `Completion` objects
826 `Completion` objects
827 Completions coming from multiple sources may be different, but end up having
827 Completions coming from multiple sources may be different, but end up having
828 the same effect when applied to ``text``. If this is the case, this will
828 the same effect when applied to ``text``. If this is the case, this will
829 consider the completions equal and only emit the first one encountered.
829 consider the completions equal and only emit the first one encountered.
830 Not folded into `completions()` yet for debugging purposes, and to detect
830 Not folded into `completions()` yet for debugging purposes, and to detect
831 when the IPython completer returns things that Jedi does not; it should be
831 when the IPython completer returns things that Jedi does not; it should be
832 folded in at some point.
832 folded in at some point.
833 """
833 """
834 completions = list(completions)
834 completions = list(completions)
835 if not completions:
835 if not completions:
836 return
836 return
837
837
838 new_start = min(c.start for c in completions)
838 new_start = min(c.start for c in completions)
839 new_end = max(c.end for c in completions)
839 new_end = max(c.end for c in completions)
840
840
841 seen = set()
841 seen = set()
842 for c in completions:
842 for c in completions:
843 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
843 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
844 if new_text not in seen:
844 if new_text not in seen:
845 yield c
845 yield c
846 seen.add(new_text)
846 seen.add(new_text)
847
847
848
848
849 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
849 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
850 """
850 """
851 Rectify a set of completions to all have the same ``start`` and ``end``
851 Rectify a set of completions to all have the same ``start`` and ``end``
852
852
853 .. warning::
853 .. warning::
854
854
855 Unstable
855 Unstable
856
856
857 This function is unstable, API may change without warning.
857 This function is unstable, API may change without warning.
858 It will also raise unless use in proper context manager.
858 It will also raise unless use in proper context manager.
859
859
860 Parameters
860 Parameters
861 ----------
861 ----------
862 text : str
862 text : str
863 text that should be completed.
863 text that should be completed.
864 completions : Iterator[Completion]
864 completions : Iterator[Completion]
865 iterator over the completions to rectify
865 iterator over the completions to rectify
866 _debug : bool
866 _debug : bool
867 Log failed completion
867 Log failed completion
868
868
869 Notes
869 Notes
870 -----
870 -----
871 :any:`jedi.api.classes.Completion` objects returned by Jedi may not have the same start and end, though
871 :any:`jedi.api.classes.Completion` objects returned by Jedi may not have the same start and end, though
872 the Jupyter Protocol requires them to. This will readjust
872 the Jupyter Protocol requires them to. This will readjust
873 the completions to have the same ``start`` and ``end`` by padding both
873 the completions to have the same ``start`` and ``end`` by padding both
874 extremities with the surrounding text.
874 extremities with the surrounding text.
875
875
876 During stabilisation this should support a ``_debug`` option to log which
876 During stabilisation this should support a ``_debug`` option to log which
877 completions are returned by the IPython completer but not found by Jedi, in
877 completions are returned by the IPython completer but not found by Jedi, in
878 order to make upstream bug reports.
878 order to make upstream bug reports.
879 """
879 """
880 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
880 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
881 "It may change without warnings. "
881 "It may change without warnings. "
882 "Use in corresponding context manager.",
882 "Use in corresponding context manager.",
883 category=ProvisionalCompleterWarning, stacklevel=2)
883 category=ProvisionalCompleterWarning, stacklevel=2)
884
884
885 completions = list(completions)
885 completions = list(completions)
886 if not completions:
886 if not completions:
887 return
887 return
888 starts = (c.start for c in completions)
888 starts = (c.start for c in completions)
889 ends = (c.end for c in completions)
889 ends = (c.end for c in completions)
890
890
891 new_start = min(starts)
891 new_start = min(starts)
892 new_end = max(ends)
892 new_end = max(ends)
893
893
894 seen_jedi = set()
894 seen_jedi = set()
895 seen_python_matches = set()
895 seen_python_matches = set()
896 for c in completions:
896 for c in completions:
897 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
897 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
898 if c._origin == 'jedi':
898 if c._origin == 'jedi':
899 seen_jedi.add(new_text)
899 seen_jedi.add(new_text)
900 elif c._origin == 'IPCompleter.python_matches':
900 elif c._origin == 'IPCompleter.python_matches':
901 seen_python_matches.add(new_text)
901 seen_python_matches.add(new_text)
902 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
902 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
903 diff = seen_python_matches.difference(seen_jedi)
903 diff = seen_python_matches.difference(seen_jedi)
904 if diff and _debug:
904 if diff and _debug:
905 print('IPython.python matches have extras:', diff)
905 print('IPython.python matches have extras:', diff)
906
906
907
907
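The padding arithmetic above can be illustrated standalone (the spans and replacement texts below are hypothetical, not real Jedi output):

```python
# Standalone illustration of the rectification padding: two completions
# with different (start, end) spans get a common span by padding each
# replacement text with the surrounding source text.
text = "foo.ba"
# (start, end, replacement): one completes "ba" -> "bar" at columns 4-6,
# the other replaces the whole expression at columns 0-6.
comps = [(4, 6, "bar"), (0, 6, "foo.baz")]

new_start = min(start for start, _, _ in comps)
new_end = max(end for _, end, _ in comps)

rectified = [text[new_start:start] + t + text[end:new_end]
             for start, end, t in comps]
print(rectified)  # ['foo.bar', 'foo.baz']
```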
908 if sys.platform == 'win32':
908 if sys.platform == 'win32':
909 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
909 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
910 else:
910 else:
911 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
911 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
912
912
913 GREEDY_DELIMS = ' =\r\n'
913 GREEDY_DELIMS = ' =\r\n'
914
914
915
915
916 class CompletionSplitter(object):
916 class CompletionSplitter(object):
917 """An object to split an input line in a manner similar to readline.
917 """An object to split an input line in a manner similar to readline.
918
918
919 By having our own implementation, we can expose readline-like completion in
919 By having our own implementation, we can expose readline-like completion in
920 a uniform manner to all frontends. This object only needs to be given the
920 a uniform manner to all frontends. This object only needs to be given the
921 line of text to be split and the cursor position on said line, and it
921 line of text to be split and the cursor position on said line, and it
922 returns the 'word' to be completed on at the cursor after splitting the
922 returns the 'word' to be completed on at the cursor after splitting the
923 entire line.
923 entire line.
924
924
925 What characters are used as splitting delimiters can be controlled by
925 What characters are used as splitting delimiters can be controlled by
926 setting the ``delims`` attribute (this is a property that internally
926 setting the ``delims`` attribute (this is a property that internally
927 automatically builds the necessary regular expression)"""
927 automatically builds the necessary regular expression)"""
928
928
929 # Private interface
929 # Private interface
930
930
931 # A string of delimiter characters. The default value makes sense for
931 # A string of delimiter characters. The default value makes sense for
932 # IPython's most typical usage patterns.
932 # IPython's most typical usage patterns.
933 _delims = DELIMS
933 _delims = DELIMS
934
934
935 # The expression (a normal string) to be compiled into a regular expression
935 # The expression (a normal string) to be compiled into a regular expression
936 # for actual splitting. We store it as an attribute mostly for ease of
936 # for actual splitting. We store it as an attribute mostly for ease of
937 # debugging, since this type of code can be so tricky to debug.
937 # debugging, since this type of code can be so tricky to debug.
938 _delim_expr = None
938 _delim_expr = None
939
939
940 # The regular expression that does the actual splitting
940 # The regular expression that does the actual splitting
941 _delim_re = None
941 _delim_re = None
942
942
943 def __init__(self, delims=None):
943 def __init__(self, delims=None):
944 delims = CompletionSplitter._delims if delims is None else delims
944 delims = CompletionSplitter._delims if delims is None else delims
945 self.delims = delims
945 self.delims = delims
946
946
947 @property
947 @property
948 def delims(self):
948 def delims(self):
949 """Return the string of delimiter characters."""
949 """Return the string of delimiter characters."""
950 return self._delims
950 return self._delims
951
951
952 @delims.setter
952 @delims.setter
953 def delims(self, delims):
953 def delims(self, delims):
954 """Set the delimiters for line splitting."""
954 """Set the delimiters for line splitting."""
955 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
955 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
956 self._delim_re = re.compile(expr)
956 self._delim_re = re.compile(expr)
957 self._delims = delims
957 self._delims = delims
958 self._delim_expr = expr
958 self._delim_expr = expr
959
959
960 def split_line(self, line, cursor_pos=None):
960 def split_line(self, line, cursor_pos=None):
961 """Split a line of text with a cursor at the given position.
961 """Split a line of text with a cursor at the given position.
962 """
962 """
963 l = line if cursor_pos is None else line[:cursor_pos]
963 l = line if cursor_pos is None else line[:cursor_pos]
964 return self._delim_re.split(l)[-1]
964 return self._delim_re.split(l)[-1]
965
965
966
966
967
967
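The splitting behaviour can be sketched with just the regular expression construction shown above (using the non-Windows ``DELIMS``; note that ``.`` is deliberately not a delimiter, so attribute expressions survive as a single token):

```python
import re

# Standalone sketch of CompletionSplitter.split_line using the
# non-Windows delimiter set defined above.
DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
delim_re = re.compile('[' + ''.join('\\' + c for c in DELIMS) + ']')

def split_line(line, cursor_pos=None):
    """Return the token to be completed at the cursor."""
    text = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(text)[-1]

print(split_line("run foo.bar"))   # foo.bar  ('.' is not a delimiter)
print(split_line("a = b.cde", 7))  # b.c      (only text before the cursor)
```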
class Completer(Configurable):

    greedy = Bool(
        False,
        help="""Activate greedy completion.

        .. deprecated:: 8.8
            Use :any:`Completer.evaluation` and :any:`Completer.auto_close_dict_keys` instead.

        When enabled in IPython 8.8 or newer, changes configuration as follows:

        - ``Completer.evaluation = 'unsafe'``
        - ``Completer.auto_close_dict_keys = True``
        """,
    ).tag(config=True)

    evaluation = Enum(
        ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
        default_value="limited",
        help="""Policy for code evaluation under completion.

        Successive options allow to enable more eager evaluation for better
        completion suggestions, including for nested dictionaries, nested lists,
        or even results of function calls.
        Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
        code on :kbd:`Tab` with potentially unwanted or dangerous side effects.

        Allowed values are:

        - ``forbidden``: no evaluation of code is permitted,
        - ``minimal``: evaluation of literals and access to built-in namespace;
          no item/attribute evaluation nor access to locals/globals,
        - ``limited``: access to all namespaces, evaluation of hard-coded methods
          (for example: :any:`dict.keys`, :any:`object.__getattr__`,
          :any:`object.__getitem__`) on allow-listed objects (for example:
          :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
        - ``unsafe``: evaluation of all methods and function calls but not of
          syntax with side-effects like ``del x``,
        - ``dangerous``: completely arbitrary evaluation.
        """,
    ).tag(config=True)

    use_jedi = Bool(default_value=JEDI_INSTALLED,
        help="Experimental: Use Jedi to generate autocompletions. "
             "Default to True if jedi is installed.").tag(config=True)

    jedi_compute_type_timeout = Int(default_value=400,
        help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
        Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
        performance by preventing jedi to build its cache.
        """).tag(config=True)

    debug = Bool(default_value=False,
                 help='Enable debug for the Completer. Mostly print extra '
                      'information for experimental jedi integration.')\
                      .tag(config=True)

    backslash_combining_completions = Bool(True,
        help="Enable unicode completions, e.g. \\alpha<tab> . "
             "Includes completion of latex commands, unicode names, and expanding "
             "unicode characters back to latex commands.").tag(config=True)

    auto_close_dict_keys = Bool(
        False,
        help="""
        Enable auto-closing dictionary keys.

        When enabled string keys will be suffixed with a final quote
        (matching the opening quote), tuple keys will also receive a
        separating comma if needed, and keys which are final will
        receive a closing bracket (``]``).
        """,
    ).tag(config=True)
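Since these are ordinary traitlets configurables, they can be set from a profile's configuration file. A sketch of a hypothetical ``ipython_config.py`` fragment (option names taken from the help strings above):

```python
# Hypothetical ipython_config.py fragment: opt into more eager, but
# potentially side-effectful, completion evaluation.
c = get_config()  # provided by IPython's configuration machinery
c.Completer.evaluation = "unsafe"
c.Completer.auto_close_dict_keys = True
c.Completer.use_jedi = False
```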
1024
1041
1025 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1042 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1026 """Create a new completer for the command line.
1043 """Create a new completer for the command line.
1027
1044
1028 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1045 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1029
1046
1030 If unspecified, the default namespace where completions are performed
1047 If unspecified, the default namespace where completions are performed
1031 is __main__ (technically, __main__.__dict__). Namespaces should be
1048 is __main__ (technically, __main__.__dict__). Namespaces should be
1032 given as dictionaries.
1049 given as dictionaries.
1033
1050
1034 An optional second namespace can be given. This allows the completer
1051 An optional second namespace can be given. This allows the completer
1035 to handle cases where both the local and global scopes need to be
1052 to handle cases where both the local and global scopes need to be
1036 distinguished.
1053 distinguished.
1037 """
1054 """
1038
1055
1039 # Don't bind to namespace quite yet, but flag whether the user wants a
1056 # Don't bind to namespace quite yet, but flag whether the user wants a
1040 # specific namespace or to use __main__.__dict__. This will allow us
1057 # specific namespace or to use __main__.__dict__. This will allow us
1041 # to bind to __main__.__dict__ at completion time, not now.
1058 # to bind to __main__.__dict__ at completion time, not now.
1042 if namespace is None:
1059 if namespace is None:
1043 self.use_main_ns = True
1060 self.use_main_ns = True
1044 else:
1061 else:
1045 self.use_main_ns = False
1062 self.use_main_ns = False
1046 self.namespace = namespace
1063 self.namespace = namespace
1047
1064
1048 # The global namespace, if given, can be bound directly
1065 # The global namespace, if given, can be bound directly
1049 if global_namespace is None:
1066 if global_namespace is None:
1050 self.global_namespace = {}
1067 self.global_namespace = {}
1051 else:
1068 else:
1052 self.global_namespace = global_namespace
1069 self.global_namespace = global_namespace
1053
1070
1054 self.custom_matchers = []
1071 self.custom_matchers = []
1055
1072
1056 super(Completer, self).__init__(**kwargs)
1073 super(Completer, self).__init__(**kwargs)
1057
1074
    def complete(self, text, state):
        """Return the next possible completion for 'text'.

        This is called successively with state == 0, 1, 2, ... until it
        returns None. The completion should begin with 'text'.

        """
        if self.use_main_ns:
            self.namespace = __main__.__dict__

        if state == 0:
            if "." in text:
                self.matches = self.attr_matches(text)
            else:
                self.matches = self.global_matches(text)
        try:
            return self.matches[state]
        except IndexError:
            return None

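The readline-style state protocol described in the docstring above can be sketched with a minimal stand-in (``FakeCompleter`` and ``all_completions`` are illustrative names, not part of IPython):

```python
class FakeCompleter:
    """Stand-in mimicking the complete(text, state) protocol above."""

    def __init__(self, words):
        self.words = words

    def complete(self, text, state):
        if state == 0:
            # matches are computed once, on the first call
            self.matches = [w for w in self.words if w.startswith(text)]
        try:
            return self.matches[state]
        except IndexError:
            return None

def all_completions(completer, text):
    # Drive the protocol: call with state 0, 1, 2, ... until None.
    out, state = [], 0
    while True:
        m = completer.complete(text, state)
        if m is None:
            return out
        out.append(m)
        state += 1

c = FakeCompleter(["print", "property", "pass"])
```

Frontends that speak this protocol repeatedly call ``complete`` to enumerate the match list one entry at a time.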
    def global_matches(self, text):
        """Compute matches when text is a simple name.

        Return a list of all keywords, built-in functions and names currently
        defined in self.namespace or self.global_namespace that match.

        """
        matches = []
        match_append = matches.append
        n = len(text)
        for lst in [
            keyword.kwlist,
            builtin_mod.__dict__.keys(),
            list(self.namespace.keys()),
            list(self.global_namespace.keys()),
        ]:
            for word in lst:
                if word[:n] == text and word != "__builtins__":
                    match_append(word)

        snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
        for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
            shortened = {
                "_".join([sub[0] for sub in word.split("_")]): word
                for word in lst
                if snake_case_re.match(word)
            }
            for word in shortened.keys():
                if word[:n] == text and word != "__builtins__":
                    match_append(shortened[word])
        return matches
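The second loop above implements abbreviated matching of snake_case names: each name is indexed by the first letter of each of its segments. A self-contained sketch of that mapping:

```python
import re

# Same pattern as in global_matches above: at least two underscore-separated segments.
snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

names = ["read_csv_file", "plot", "run_all_tests"]
shortened = {
    # "read_csv_file" -> "r_c_f", so typing "r_c" can complete the full name
    "_".join(sub[0] for sub in name.split("_")): name
    for name in names
    if snake_case_re.match(name)
}
```

Names without an underscore (like ``plot``) are excluded by the regex, so only genuine snake_case identifiers gain abbreviations.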

    def attr_matches(self, text):
        """Compute matches when text contains a dot.

        Assuming the text is of the form NAME.NAME....[NAME], and is
        evaluatable in self.namespace or self.global_namespace, it will be
        evaluated and its attributes (as revealed by dir()) are used as
        possible completions. (For class instances, class members are
        also considered.)

        WARNING: this can still invoke arbitrary C code, if an object
        with a __getattr__ hook is evaluated.

        """
        m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
        if not m2:
            return []
        expr, attr = m2.group(1, 2)

        obj = self._evaluate_expr(expr)

        if obj is not_found:
            return []

        if self.limit_to__all__ and hasattr(obj, '__all__'):
            words = get__all__entries(obj)
        else:
            words = dir2(obj)

        try:
            words = generics.complete_object(obj, words)
        except TryNext:
            pass
        except AssertionError:
            raise
        except Exception:
            # Silence errors from completion function
            #raise # dbg
            pass
        # Build match list to return
        n = len(attr)
        return ["%s.%s" % (expr, w) for w in words if w[:n] == attr]

    def _evaluate_expr(self, expr):
        obj = not_found
        done = False
        while not done and expr:
            try:
                obj = guarded_eval(
                    expr,
                    EvaluationContext(
                        globals=self.global_namespace,
                        locals=self.namespace,
                        evaluation=self.evaluation,
                    ),
                )
                done = True
            except Exception as e:
                if self.debug:
                    print("Evaluation exception", e)
                # trim the expression to remove any invalid prefix,
                # e.g. user starts `(d[`, so we get `expr = '(d'`,
                # where the parenthesis is not closed.
                # TODO: make this faster by reusing parts of the computation?
                expr = expr[1:]
        return obj
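The prefix-trimming loop above can be sketched stand-alone, with plain ``eval`` standing in for IPython's ``guarded_eval`` (do not use bare ``eval`` on untrusted input; this is purely illustrative):

```python
def evaluate_with_trimming(expr, namespace):
    # Retry evaluation, dropping one leading character each time an
    # invalid prefix (e.g. an unclosed parenthesis) makes it fail.
    not_found = object()
    obj = not_found
    while expr:
        try:
            obj = eval(expr, {}, namespace)  # guarded_eval in IPython proper
            break
        except Exception:
            # e.g. '(d' -> 'd'
            expr = expr[1:]
    return obj

ns = {"d": {"a": 1}}
```

So for a buffer like ``(d[``, the completer still recovers the object ``d`` and can offer its keys.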

def get__all__entries(obj):
    """returns the strings in the __all__ attribute"""
    try:
        words = getattr(obj, '__all__')
    except:
        return []

    return [w for w in words if isinstance(w, str)]


class _DictKeyState(enum.Flag):
    """Represent state of the key match in context of other possible matches.

    - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
    - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
    - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
    - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | END_OF_TUPLE}`
    """

    BASELINE = 0
    END_OF_ITEM = enum.auto()
    END_OF_TUPLE = enum.auto()
    IN_TUPLE = enum.auto()


def _parse_tokens(c):
    """Parse tokens even if there is an error."""
    tokens = []
    token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(token_generator))
        except tokenize.TokenError:
            return tokens
        except StopIteration:
            return tokens
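The point of this tolerant loop is that a half-typed expression such as ``d[`` (unbalanced bracket) raises ``tokenize.TokenError`` at end of input, yet the tokens produced before the error are still useful. A self-contained restatement:

```python
import tokenize

def parse_tokens(c):
    # Same tolerant loop as ``_parse_tokens`` above: keep whatever tokens
    # were produced before tokenization failed.
    tokens = []
    token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(token_generator))
        except tokenize.TokenError:
            return tokens
        except StopIteration:
            return tokens

# Unclosed bracket: tokenize raises TokenError at EOF, but we still get
# the NAME and OP tokens that were emitted first.
tokens = parse_tokens("d[")
```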


def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
    """Match any valid Python numeric literal in a prefix of dictionary keys.

    References:
    - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
    - https://docs.python.org/3/library/tokenize.html
    """
    if prefix[-1].isspace():
        # if the user typed a space we do not have anything to complete,
        # even if there was a valid number token before
        return None
    tokens = _parse_tokens(prefix)
    rev_tokens = reversed(tokens)
    skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
    number = None
    for token in rev_tokens:
        if token.type in skip_over:
            continue
        if number is None:
            if token.type == tokenize.NUMBER:
                number = token.string
                continue
            else:
                # we did not match a number
                return None
        if token.type == tokenize.OP:
            if token.string == ",":
                break
            if token.string in {"+", "-"}:
                number = token.string + number
            else:
                return None
    return number


_INT_FORMATS = {
    "0b": bin,
    "0o": oct,
    "0x": hex,
}
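This table lets the key matcher render integer keys in the same notation the user started typing. A sketch with a hypothetical helper (``format_int_key`` is not part of IPython, it just mirrors the lookup done in ``match_dict_keys`` below):

```python
_INT_FORMATS = {
    "0b": bin,
    "0o": oct,
    "0x": hex,
}

def format_int_key(key, typed_prefix):
    # Render an int key in the binary/octal/hex notation matching the
    # first two characters the user typed; fall back to decimal.
    int_base = typed_prefix[:2].lower()
    if int_base in _INT_FORMATS:
        return _INT_FORMATS[int_base](key)
    return str(key)
```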


def match_dict_keys(
    keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
    prefix: str,
    delims: str,
    extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
) -> Tuple[str, int, Dict[str, _DictKeyState]]:
    """Used by dict_key_matches, matching the prefix to a list of keys

    Parameters
    ----------
    keys
        list of keys in dictionary currently being completed.
    prefix
        Part of the text already typed by the user. E.g. `mydict[b'fo`
    delims
        String of delimiters to consider when finding the current key.
    extra_prefix : optional
        Part of the text already typed in multi-key index cases. E.g. for
        `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.

    Returns
    -------
    A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
    ``quote`` being the quote that needs to be used to close the current string,
    ``token_start`` the position where the replacement should start occurring,
    ``matched`` a dictionary with replacement/completion keys as keys and
    their match state (:any:`_DictKeyState`) as values.
    """
    prefix_tuple = extra_prefix if extra_prefix else ()

    prefix_tuple_size = sum(
        [
            # for pandas, do not count slices as taking space
            not isinstance(k, slice)
            for k in prefix_tuple
        ]
    )
    text_serializable_types = (str, bytes, int, float, slice)

    def filter_prefix_tuple(key):
        # Reject too short keys
        if len(key) <= prefix_tuple_size:
            return False
        # Reject keys which cannot be serialised to text
        for k in key:
            if not isinstance(k, text_serializable_types):
                return False
        # Reject keys that do not match the prefix
        for k, pt in zip(key, prefix_tuple):
            if k != pt and not isinstance(pt, slice):
                return False
        # All checks passed!
        return True

    filtered_key_is_final: Dict[
        Union[str, bytes, int, float], _DictKeyState
    ] = defaultdict(lambda: _DictKeyState.BASELINE)

    for k in keys:
        # If at least one of the matches is not final, mark as undetermined.
        # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
        # `111` appears final on first match but is not final on the second.

        if isinstance(k, tuple):
            if filter_prefix_tuple(k):
                key_fragment = k[prefix_tuple_size]
                filtered_key_is_final[key_fragment] |= (
                    _DictKeyState.END_OF_TUPLE
                    if len(k) == prefix_tuple_size + 1
                    else _DictKeyState.IN_TUPLE
                )
        elif prefix_tuple_size > 0:
            # we are completing a tuple but this key is not a tuple,
            # so we should ignore it
            pass
        else:
            if isinstance(k, text_serializable_types):
                filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM

    filtered_keys = filtered_key_is_final.keys()

    if not prefix:
        return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}

    quote_match = re.search("(?:\"|')", prefix)
    is_user_prefix_numeric = False

    if quote_match:
        quote = quote_match.group()
        valid_prefix = prefix + quote
        try:
            prefix_str = literal_eval(valid_prefix)
        except Exception:
            return "", 0, {}
    else:
        # If it does not look like a string, let's assume
        # we are dealing with a number or variable.
        number_match = _match_number_in_dict_key_prefix(prefix)

        # We do not want the key matcher to suggest variable names so we yield:
        if number_match is None:
            # The alternative would be to assume that the user forgot the quote
            # and if the substring matches, suggest adding it at the start.
            return "", 0, {}

        prefix_str = number_match
        is_user_prefix_numeric = True
        quote = ""

    pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
    token_match = re.search(pattern, prefix, re.UNICODE)
    assert token_match is not None  # silence mypy
    token_start = token_match.start()
    token_prefix = token_match.group()

    matched: Dict[str, _DictKeyState] = {}

    str_key: Union[str, bytes]

    for key in filtered_keys:
        if isinstance(key, (int, float)):
            # This key is a number; skip it if the user-typed prefix is not numeric.
            if not is_user_prefix_numeric:
                continue
            str_key = str(key)
            if isinstance(key, int):
                int_base = prefix_str[:2].lower()
                # if user typed integer using binary/oct/hex notation:
                if int_base in _INT_FORMATS:
                    int_format = _INT_FORMATS[int_base]
                    str_key = int_format(key)
        else:
            # This key is a string; skip it if the user typed a numeric prefix.
            if is_user_prefix_numeric:
                continue
            str_key = key
        try:
            if not str_key.startswith(prefix_str):
                continue
        except (AttributeError, TypeError, UnicodeError):
            # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
            continue

        # reformat remainder of key to begin with prefix
        rem = str_key[len(prefix_str):]
        # force repr wrapped in '
        rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
        rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
        if quote == '"':
            # The entered prefix is quoted with ",
            # but the match is quoted with '.
            # A contained " hence needs escaping for comparison:
            rem_repr = rem_repr.replace('"', '\\"')

        # then reinsert prefix from start of token
        match = "%s%s" % (token_prefix, rem_repr)

        matched[match] = filtered_key_is_final[key]
    return quote, token_start, matched
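A key trick in the function above is recovering the typed prefix from an unterminated string literal: close it with its own opening quote and ``literal_eval`` the result. Restated as a small stand-alone helper (``typed_key_prefix`` is an illustrative name, not part of IPython):

```python
import re
from ast import literal_eval

def typed_key_prefix(prefix):
    # Find the opening quote of the half-typed key, e.g. in "b'fo".
    quote_match = re.search("(?:\"|')", prefix)
    if not quote_match:
        return None  # not a string key; number/variable handling applies
    quote = quote_match.group()
    # Close the literal with the same quote and evaluate it safely.
    return literal_eval(prefix + quote)
```

This handles string prefixes including bytes literals, while non-string prefixes fall through to the numeric matching path.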


def cursor_to_position(text: str, line: int, column: int) -> int:
    """
    Convert the (line, column) position of the cursor in text to an offset in
    a string.

    Parameters
    ----------
    text : str
        The text in which to calculate the cursor offset
    line : int
        Line of the cursor; 0-indexed
    column : int
        Column of the cursor; 0-indexed

    Returns
    -------
    Position of the cursor in ``text``, 0-indexed.

    See Also
    --------
    position_to_cursor : reciprocal of this function

    """
    lines = text.split('\n')
    assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))

    return sum(len(l) + 1 for l in lines[:line]) + column
1443
1461
1444 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1462 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1445 """
1463 """
1446 Convert the position of the cursor in text (0 indexed) to a line
1464 Convert the position of the cursor in text (0 indexed) to a line
1447 number(0-indexed) and a column number (0-indexed) pair
1465 number(0-indexed) and a column number (0-indexed) pair
1448
1466
1449 Position should be a valid position in ``text``.
1467 Position should be a valid position in ``text``.
1450
1468
1451 Parameters
1469 Parameters
1452 ----------
1470 ----------
1453 text : str
1471 text : str
1454 The text in which to calculate the cursor offset
1472 The text in which to calculate the cursor offset
1455 offset : int
1473 offset : int
1456 Position of the cursor in ``text``, 0-indexed.
1474 Position of the cursor in ``text``, 0-indexed.
1457
1475
1458 Returns
1476 Returns
1459 -------
1477 -------
1460 (line, column) : (int, int)
1478 (line, column) : (int, int)
1461 Line of the cursor (0-indexed) and column of the cursor (0-indexed)
1479 Line of the cursor (0-indexed) and column of the cursor (0-indexed)
1462
1480
1463 See Also
1481 See Also
1464 --------
1482 --------
1465 cursor_to_position : reciprocal of this function
1483 cursor_to_position : reciprocal of this function
1466
1484
1467 """
1485 """
1468
1486
1469 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1487 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1470
1488
1471 before = text[:offset]
1489 before = text[:offset]
1472 blines = before.split('\n') # ! splitlines trims trailing \n
1490 blines = before.split('\n') # ! splitlines trims trailing \n
1473 line = before.count('\n')
1491 line = before.count('\n')
1474 col = len(blines[-1])
1492 col = len(blines[-1])
1475 return line, col
1493 return line, col
1476
1494
1477
1495
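The two conversions above are reciprocal. A minimal, self-contained sketch (mirroring the functions above rather than importing IPython) shows the round trip:

```python
# Minimal sketch of the two cursor/offset conversions documented above.
# Lines and columns are 0-indexed; each '\n' counts as one character.

def cursor_to_position(text, line, column):
    lines = text.split('\n')
    assert line <= len(lines)
    # every earlier line contributes its length plus one for the newline
    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text, offset):
    assert 0 <= offset <= len(text)
    before = text[:offset]
    return before.count('\n'), len(before.split('\n')[-1])

text = "ab\ncd"
offset = cursor_to_position(text, 1, 1)   # cursor sitting on 'd'
print(offset)                              # 4
print(position_to_cursor(text, offset))    # (1, 1)
```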
1478 def _safe_isinstance(obj, module, class_name, *attrs):
1496 def _safe_isinstance(obj, module, class_name, *attrs):
1479 """Checks if obj is an instance of module.class_name if loaded
1497 """Checks if obj is an instance of module.class_name if loaded
1480 """
1498 """
1481 if module in sys.modules:
1499 if module in sys.modules:
1482 m = sys.modules[module]
1500 m = sys.modules[module]
1483 for attr in [class_name, *attrs]:
1501 for attr in [class_name, *attrs]:
1484 m = getattr(m, attr)
1502 m = getattr(m, attr)
1485 return isinstance(obj, m)
1503 return isinstance(obj, m)
1486
1504
1487
1505
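A hedged sketch of the pattern `_safe_isinstance` implements: run `isinstance` only when the target module is already imported (falling through to `None` otherwise), so the check never triggers an import. The `safe_isinstance` name below is a simplified stand-in, not IPython's API:

```python
import sys

def safe_isinstance(obj, module, class_name):
    """Return isinstance(obj, module.class_name) if module is loaded, else None."""
    if module not in sys.modules:
        return None  # module was never imported; obj cannot be an instance of it
    return isinstance(obj, getattr(sys.modules[module], class_name))

print(safe_isinstance(1, 'builtins', 'int'))            # True
print(safe_isinstance(1, 'no_such_module_xyz', 'int'))  # None
```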
1488 @context_matcher()
1506 @context_matcher()
1489 def back_unicode_name_matcher(context: CompletionContext):
1507 def back_unicode_name_matcher(context: CompletionContext):
1490 """Match Unicode characters back to Unicode name
1508 """Match Unicode characters back to Unicode name
1491
1509
1492 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1510 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1493 """
1511 """
1494 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1512 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1495 return _convert_matcher_v1_result_to_v2(
1513 return _convert_matcher_v1_result_to_v2(
1496 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1514 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1497 )
1515 )
1498
1516
1499
1517
1500 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1518 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1501 """Match Unicode characters back to Unicode name
1519 """Match Unicode characters back to Unicode name
1502
1520
1503 This does ``β˜ƒ`` -> ``\\snowman``
1521 This does ``β˜ƒ`` -> ``\\snowman``
1504
1522
1505 Note that snowman is not a valid Python 3 combining character, but it will still be expanded.
1523 Note that snowman is not a valid Python 3 combining character, but it will still be expanded.
1506 However, the completion machinery will not recombine it back into the snowman character.
1524 However, the completion machinery will not recombine it back into the snowman character.
1507
1525
1508 Neither will this back-complete standard escape sequences like \\n, \\b, etc.
1526 Neither will this back-complete standard escape sequences like \\n, \\b, etc.
1509
1527
1510 .. deprecated:: 8.6
1528 .. deprecated:: 8.6
1511 You can use :meth:`back_unicode_name_matcher` instead.
1529 You can use :meth:`back_unicode_name_matcher` instead.
1512
1530
1513 Returns
1531 Returns
1514 -------
1532 -------
1515
1533
1516 Return a tuple with two elements:
1534 Return a tuple with two elements:
1517
1535
1518 - the Unicode character that was matched (preceded by a backslash), or an
1536 - the Unicode character that was matched (preceded by a backslash), or an
1519 empty string,
1537 empty string,
1520 - a one-element sequence with the name of the matched Unicode character,
1538 - a one-element sequence with the name of the matched Unicode character,
1521 preceded by a backslash, or an empty sequence if there is no match.
1539 preceded by a backslash, or an empty sequence if there is no match.
1522 """
1540 """
1523 if len(text)<2:
1541 if len(text)<2:
1524 return '', ()
1542 return '', ()
1525 maybe_slash = text[-2]
1543 maybe_slash = text[-2]
1526 if maybe_slash != '\\':
1544 if maybe_slash != '\\':
1527 return '', ()
1545 return '', ()
1528
1546
1529 char = text[-1]
1547 char = text[-1]
1530 # no expand on quote for completion in strings.
1548 # no expand on quote for completion in strings.
1531 # nor backcomplete standard ascii keys
1549 # nor backcomplete standard ascii keys
1532 if char in string.ascii_letters or char in ('"',"'"):
1550 if char in string.ascii_letters or char in ('"',"'"):
1533 return '', ()
1551 return '', ()
1534 try:
1552 try:
1535 unic = unicodedata.name(char)
1553 unic = unicodedata.name(char)
1536 return '\\'+char,('\\'+unic,)
1554 return '\\'+char,('\\'+unic,)
1537 except (KeyError, ValueError):
1555 except (KeyError, ValueError):
1538 pass
1556 pass
1539 return '', ()
1557 return '', ()
1540
1558
1541
1559
1542 @context_matcher()
1560 @context_matcher()
1543 def back_latex_name_matcher(context: CompletionContext):
1561 def back_latex_name_matcher(context: CompletionContext):
1544 """Match latex characters back to unicode name
1562 """Match latex characters back to unicode name
1545
1563
1546 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1564 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1547 """
1565 """
1548 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1566 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1549 return _convert_matcher_v1_result_to_v2(
1567 return _convert_matcher_v1_result_to_v2(
1550 matches, type="latex", fragment=fragment, suppress_if_matches=True
1568 matches, type="latex", fragment=fragment, suppress_if_matches=True
1551 )
1569 )
1552
1570
1553
1571
1554 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1572 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1555 """Match latex characters back to unicode name
1573 """Match latex characters back to unicode name
1556
1574
1557 This does ``\\β„΅`` -> ``\\aleph``
1575 This does ``\\β„΅`` -> ``\\aleph``
1558
1576
1559 .. deprecated:: 8.6
1577 .. deprecated:: 8.6
1560 You can use :meth:`back_latex_name_matcher` instead.
1578 You can use :meth:`back_latex_name_matcher` instead.
1561 """
1579 """
1562 if len(text)<2:
1580 if len(text)<2:
1563 return '', ()
1581 return '', ()
1564 maybe_slash = text[-2]
1582 maybe_slash = text[-2]
1565 if maybe_slash != '\\':
1583 if maybe_slash != '\\':
1566 return '', ()
1584 return '', ()
1567
1585
1568
1586
1569 char = text[-1]
1587 char = text[-1]
1570 # no expand on quote for completion in strings.
1588 # no expand on quote for completion in strings.
1571 # nor backcomplete standard ascii keys
1589 # nor backcomplete standard ascii keys
1572 if char in string.ascii_letters or char in ('"',"'"):
1590 if char in string.ascii_letters or char in ('"',"'"):
1573 return '', ()
1591 return '', ()
1574 try:
1592 try:
1575 latex = reverse_latex_symbol[char]
1593 latex = reverse_latex_symbol[char]
1576 # '\\' is included so the backslash is replaced as well
1594 # '\\' is included so the backslash is replaced as well
1577 return '\\'+char,[latex]
1595 return '\\'+char,[latex]
1578 except KeyError:
1596 except KeyError:
1579 pass
1597 pass
1580 return '', ()
1598 return '', ()
1581
1599
1582
1600
1583 def _formatparamchildren(parameter) -> str:
1601 def _formatparamchildren(parameter) -> str:
1584 """
1602 """
1585 Get parameter name and value from Jedi Private API
1603 Get parameter name and value from Jedi Private API
1586
1604
1587 Jedi does not expose a simple way to get `param=value` from its API.
1605 Jedi does not expose a simple way to get `param=value` from its API.
1588
1606
1589 Parameters
1607 Parameters
1590 ----------
1608 ----------
1591 parameter
1609 parameter
1592 Jedi's function `Param`
1610 Jedi's function `Param`
1593
1611
1594 Returns
1612 Returns
1595 -------
1613 -------
1596 A string like 'a', 'b=1', '*args', '**kwargs'
1614 A string like 'a', 'b=1', '*args', '**kwargs'
1597
1615
1598 """
1616 """
1599 description = parameter.description
1617 description = parameter.description
1600 if not description.startswith('param '):
1618 if not description.startswith('param '):
1601 raise ValueError('Jedi function parameter description has changed format. '
1619 raise ValueError('Jedi function parameter description has changed format. '
1602 'Expected "param ...", found %r.' % description)
1620 'Expected "param ...", found %r.' % description)
1603 return description[6:]
1621 return description[6:]
1604
1622
1605 def _make_signature(completion)-> str:
1623 def _make_signature(completion)-> str:
1606 """
1624 """
1607 Make the signature from a jedi completion
1625 Make the signature from a jedi completion
1608
1626
1609 Parameters
1627 Parameters
1610 ----------
1628 ----------
1611 completion : jedi.Completion
1629 completion : jedi.Completion
1612 the Jedi completion object, which may or may not complete a function type
1630 the Jedi completion object, which may or may not complete a function type
1613
1631
1614 Returns
1632 Returns
1615 -------
1633 -------
1616 a string consisting of the function signature, with the parentheses but
1634 a string consisting of the function signature, with the parentheses but
1617 without the function name. Example:
1635 without the function name. Example:
1618 `(a, *args, b=1, **kwargs)`
1636 `(a, *args, b=1, **kwargs)`
1619
1637
1620 """
1638 """
1621
1639
1622 # it looks like this might work on jedi 0.17
1640 # it looks like this might work on jedi 0.17
1623 if hasattr(completion, 'get_signatures'):
1641 if hasattr(completion, 'get_signatures'):
1624 signatures = completion.get_signatures()
1642 signatures = completion.get_signatures()
1625 if not signatures:
1643 if not signatures:
1626 return '(?)'
1644 return '(?)'
1627
1645
1628 c0 = signatures[0]
1646 c0 = signatures[0]
1629 return '('+c0.to_string().split('(', maxsplit=1)[1]
1647 return '('+c0.to_string().split('(', maxsplit=1)[1]
1630
1648
1631 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1649 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1632 for p in signature.defined_names()) if f])
1650 for p in signature.defined_names()) if f])
1633
1651
1634
1652
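The signature-trimming trick used by `_make_signature` above can be shown with a plain string standing in for jedi's `Signature.to_string()` output: split once on the first opening parenthesis and keep everything after it.

```python
# Drop the function name but keep the parentheses. A plain string stands
# in for jedi's Signature.to_string() result.
sig_string = 'f(a, *args, b=1, **kwargs)'
trimmed = '(' + sig_string.split('(', maxsplit=1)[1]
print(trimmed)   # (a, *args, b=1, **kwargs)
```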
1635 _CompleteResult = Dict[str, MatcherResult]
1653 _CompleteResult = Dict[str, MatcherResult]
1636
1654
1637
1655
1638 DICT_MATCHER_REGEX = re.compile(
1656 DICT_MATCHER_REGEX = re.compile(
1639 r"""(?x)
1657 r"""(?x)
1640 ( # match dict-referring - or any get item object - expression
1658 ( # match dict-referring - or any get item object - expression
1641 .+
1659 .+
1642 )
1660 )
1643 \[ # open bracket
1661 \[ # open bracket
1644 \s* # and optional whitespace
1662 \s* # and optional whitespace
1645 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1663 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1646 # and slices
1664 # and slices
1647 ((?:(?:
1665 ((?:(?:
1648 (?: # closed string
1666 (?: # closed string
1649 [uUbB]? # string prefix (r not handled)
1667 [uUbB]? # string prefix (r not handled)
1650 (?:
1668 (?:
1651 '(?:[^']|(?<!\\)\\')*'
1669 '(?:[^']|(?<!\\)\\')*'
1652 |
1670 |
1653 "(?:[^"]|(?<!\\)\\")*"
1671 "(?:[^"]|(?<!\\)\\")*"
1654 )
1672 )
1655 )
1673 )
1656 |
1674 |
1657 # capture integers and slices
1675 # capture integers and slices
1658 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1676 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1659 |
1677 |
1660 # integer in bin/hex/oct notation
1678 # integer in bin/hex/oct notation
1661 0[bBxXoO]_?(?:\w|\d)+
1679 0[bBxXoO]_?(?:\w|\d)+
1662 )
1680 )
1663 \s*,\s*
1681 \s*,\s*
1664 )*)
1682 )*)
1665 ((?:
1683 ((?:
1666 (?: # unclosed string
1684 (?: # unclosed string
1667 [uUbB]? # string prefix (r not handled)
1685 [uUbB]? # string prefix (r not handled)
1668 (?:
1686 (?:
1669 '(?:[^']|(?<!\\)\\')*
1687 '(?:[^']|(?<!\\)\\')*
1670 |
1688 |
1671 "(?:[^"]|(?<!\\)\\")*
1689 "(?:[^"]|(?<!\\)\\")*
1672 )
1690 )
1673 )
1691 )
1674 |
1692 |
1675 # unfinished integer
1693 # unfinished integer
1676 (?:[-+]?\d+)
1694 (?:[-+]?\d+)
1677 |
1695 |
1678 # integer in bin/hex/oct notation
1696 # integer in bin/hex/oct notation
1679 0[bBxXoO]_?(?:\w|\d)+
1697 0[bBxXoO]_?(?:\w|\d)+
1680 )
1698 )
1681 )?
1699 )?
1682 $
1700 $
1683 """
1701 """
1684 )
1702 )
1685
1703
1686
1704
1687 def _convert_matcher_v1_result_to_v2(
1705 def _convert_matcher_v1_result_to_v2(
1688 matches: Sequence[str],
1706 matches: Sequence[str],
1689 type: str,
1707 type: str,
1690 fragment: Optional[str] = None,
1708 fragment: Optional[str] = None,
1691 suppress_if_matches: bool = False,
1709 suppress_if_matches: bool = False,
1692 ) -> SimpleMatcherResult:
1710 ) -> SimpleMatcherResult:
1693 """Utility to help with transition"""
1711 """Utility to help with transition"""
1694 result = {
1712 result = {
1695 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1713 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1696 "suppress": (True if matches else False) if suppress_if_matches else False,
1714 "suppress": (True if matches else False) if suppress_if_matches else False,
1697 }
1715 }
1698 if fragment is not None:
1716 if fragment is not None:
1699 result["matched_fragment"] = fragment
1717 result["matched_fragment"] = fragment
1700 return cast(SimpleMatcherResult, result)
1718 return cast(SimpleMatcherResult, result)
1701
1719
1702
1720
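The v1-to-v2 conversion above can be sketched with plain dicts standing in for `SimpleCompletion` and `SimpleMatcherResult` (a simplification, not IPython's actual types):

```python
# Plain-dict sketch of the v1 -> v2 matcher-result conversion above.
def convert_v1_to_v2(matches, type, fragment=None, suppress_if_matches=False):
    result = {
        'completions': [{'text': m, 'type': type} for m in matches],
        # suppress other matchers only when asked to and something matched
        'suppress': bool(matches) if suppress_if_matches else False,
    }
    if fragment is not None:
        result['matched_fragment'] = fragment
    return result

out = convert_v1_to_v2(['\\SNOWMAN'], type='unicode', fragment='\\β˜ƒ',
                       suppress_if_matches=True)
print(out['suppress'])   # True
```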
1703 class IPCompleter(Completer):
1721 class IPCompleter(Completer):
1704 """Extension of the completer class with IPython-specific features"""
1722 """Extension of the completer class with IPython-specific features"""
1705
1723
1706 @observe('greedy')
1724 @observe('greedy')
1707 def _greedy_changed(self, change):
1725 def _greedy_changed(self, change):
1708 """update the splitter and readline delims when greedy is changed"""
1726 """update the splitter and readline delims when greedy is changed"""
1709 if change["new"]:
1727 if change["new"]:
1710 self.evaluation = "unsafe"
1728 self.evaluation = "unsafe"
1711 self.auto_close_dict_keys = True
1729 self.auto_close_dict_keys = True
1712 self.splitter.delims = GREEDY_DELIMS
1730 self.splitter.delims = GREEDY_DELIMS
1713 else:
1731 else:
1714 self.evaluation = "limited"
1732 self.evaluation = "limited"
1715 self.auto_close_dict_keys = False
1733 self.auto_close_dict_keys = False
1716 self.splitter.delims = DELIMS
1734 self.splitter.delims = DELIMS
1717
1735
1718 dict_keys_only = Bool(
1736 dict_keys_only = Bool(
1719 False,
1737 False,
1720 help="""
1738 help="""
1721 Whether to show dict key matches only.
1739 Whether to show dict key matches only.
1722
1740
1723 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1741 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1724 """,
1742 """,
1725 )
1743 )
1726
1744
1727 suppress_competing_matchers = UnionTrait(
1745 suppress_competing_matchers = UnionTrait(
1728 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1746 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1729 default_value=None,
1747 default_value=None,
1730 help="""
1748 help="""
1731 Whether to suppress completions from other *Matchers*.
1749 Whether to suppress completions from other *Matchers*.
1732
1750
1733 When set to ``None`` (default) the matchers will attempt to auto-detect
1751 When set to ``None`` (default) the matchers will attempt to auto-detect
1734 whether suppression of other matchers is desirable. For example, at
1752 whether suppression of other matchers is desirable. For example, at
1735 the beginning of a line followed by `%` we expect a magic completion
1753 the beginning of a line followed by `%` we expect a magic completion
1736 to be the only applicable option, and after ``my_dict['`` we usually
1754 to be the only applicable option, and after ``my_dict['`` we usually
1737 expect a completion with an existing dictionary key.
1755 expect a completion with an existing dictionary key.
1738
1756
1739 If you want to disable this heuristic and see completions from all matchers,
1757 If you want to disable this heuristic and see completions from all matchers,
1740 set ``IPCompleter.suppress_competing_matchers = False``.
1758 set ``IPCompleter.suppress_competing_matchers = False``.
1741 To disable the heuristic for specific matchers provide a dictionary mapping:
1759 To disable the heuristic for specific matchers provide a dictionary mapping:
1742 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1760 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1743
1761
1744 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1762 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1745 completions to the set of matchers with the highest priority;
1763 completions to the set of matchers with the highest priority;
1746 this is equivalent to ``IPCompleter.merge_completions = False`` and
1764 this is equivalent to ``IPCompleter.merge_completions = False`` and
1747 can be beneficial for performance, but will sometimes omit relevant
1765 can be beneficial for performance, but will sometimes omit relevant
1748 candidates from matchers further down the priority list.
1766 candidates from matchers further down the priority list.
1749 """,
1767 """,
1750 ).tag(config=True)
1768 ).tag(config=True)
1751
1769
1752 merge_completions = Bool(
1770 merge_completions = Bool(
1753 True,
1771 True,
1754 help="""Whether to merge completion results into a single list
1772 help="""Whether to merge completion results into a single list
1755
1773
1756 If False, only the completion results from the first non-empty
1774 If False, only the completion results from the first non-empty
1757 completer will be returned.
1775 completer will be returned.
1758
1776
1759 As of version 8.6.0, setting the value to ``False`` is an alias for:
1777 As of version 8.6.0, setting the value to ``False`` is an alias for:
1760 ``IPCompleter.suppress_competing_matchers = True``.
1778 ``IPCompleter.suppress_competing_matchers = True``.
1761 """,
1779 """,
1762 ).tag(config=True)
1780 ).tag(config=True)
1763
1781
1764 disable_matchers = ListTrait(
1782 disable_matchers = ListTrait(
1765 Unicode(),
1783 Unicode(),
1766 help="""List of matchers to disable.
1784 help="""List of matchers to disable.
1767
1785
1768 The list should contain matcher identifiers (see :any:`completion_matcher`).
1786 The list should contain matcher identifiers (see :any:`completion_matcher`).
1769 """,
1787 """,
1770 ).tag(config=True)
1788 ).tag(config=True)
1771
1789
1772 omit__names = Enum(
1790 omit__names = Enum(
1773 (0, 1, 2),
1791 (0, 1, 2),
1774 default_value=2,
1792 default_value=2,
1775 help="""Instruct the completer to omit private method names
1793 help="""Instruct the completer to omit private method names
1776
1794
1777 Specifically, when completing on ``object.<tab>``.
1795 Specifically, when completing on ``object.<tab>``.
1778
1796
1779 When 2 [default]: all names that start with '_' will be excluded.
1797 When 2 [default]: all names that start with '_' will be excluded.
1780
1798
1781 When 1: all 'magic' names (``__foo__``) will be excluded.
1799 When 1: all 'magic' names (``__foo__``) will be excluded.
1782
1800
1783 When 0: nothing will be excluded.
1801 When 0: nothing will be excluded.
1784 """
1802 """
1785 ).tag(config=True)
1803 ).tag(config=True)
1786 limit_to__all__ = Bool(False,
1804 limit_to__all__ = Bool(False,
1787 help="""
1805 help="""
1788 DEPRECATED as of version 5.0.
1806 DEPRECATED as of version 5.0.
1789
1807
1790 Instruct the completer to use __all__ for the completion
1808 Instruct the completer to use __all__ for the completion
1791
1809
1792 Specifically, when completing on ``object.<tab>``.
1810 Specifically, when completing on ``object.<tab>``.
1793
1811
1794 When True: only those names in obj.__all__ will be included.
1812 When True: only those names in obj.__all__ will be included.
1795
1813
1796 When False [default]: the __all__ attribute is ignored
1814 When False [default]: the __all__ attribute is ignored
1797 """,
1815 """,
1798 ).tag(config=True)
1816 ).tag(config=True)
1799
1817
1800 profile_completions = Bool(
1818 profile_completions = Bool(
1801 default_value=False,
1819 default_value=False,
1802 help="If True, emit profiling data for completion subsystem using cProfile."
1820 help="If True, emit profiling data for completion subsystem using cProfile."
1803 ).tag(config=True)
1821 ).tag(config=True)
1804
1822
1805 profiler_output_dir = Unicode(
1823 profiler_output_dir = Unicode(
1806 default_value=".completion_profiles",
1824 default_value=".completion_profiles",
1807 help="Template for path at which to output profile data for completions."
1825 help="Template for path at which to output profile data for completions."
1808 ).tag(config=True)
1826 ).tag(config=True)
1809
1827
1810 @observe('limit_to__all__')
1828 @observe('limit_to__all__')
1811 def _limit_to_all_changed(self, change):
1829 def _limit_to_all_changed(self, change):
1812 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1830 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1813 'value has been deprecated since IPython 5.0, will be made to have '
1831 'value has been deprecated since IPython 5.0, will be made to have '
1814 'no effect and then removed in a future version of IPython.',
1832 'no effect and then removed in a future version of IPython.',
1815 UserWarning)
1833 UserWarning)
1816
1834
1817 def __init__(
1835 def __init__(
1818 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1836 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1819 ):
1837 ):
1820 """IPCompleter() -> completer
1838 """IPCompleter() -> completer
1821
1839
1822 Return a completer object.
1840 Return a completer object.
1823
1841
1824 Parameters
1842 Parameters
1825 ----------
1843 ----------
1826 shell
1844 shell
1827 a pointer to the ipython shell itself. This is needed
1845 a pointer to the ipython shell itself. This is needed
1828 because this completer knows about magic functions, and those can
1846 because this completer knows about magic functions, and those can
1829 only be accessed via the ipython instance.
1847 only be accessed via the ipython instance.
1830 namespace : dict, optional
1848 namespace : dict, optional
1831 an optional dict where completions are performed.
1849 an optional dict where completions are performed.
1832 global_namespace : dict, optional
1850 global_namespace : dict, optional
1833 secondary optional dict for completions, to
1851 secondary optional dict for completions, to
1834 handle cases (such as IPython embedded inside functions) where
1852 handle cases (such as IPython embedded inside functions) where
1835 both Python scopes are visible.
1853 both Python scopes are visible.
1836 config : Config
1854 config : Config
1837 traitlet's config object
1855 traitlet's config object
1838 **kwargs
1856 **kwargs
1839 passed to super class unmodified.
1857 passed to super class unmodified.
1840 """
1858 """
1841
1859
1842 self.magic_escape = ESC_MAGIC
1860 self.magic_escape = ESC_MAGIC
1843 self.splitter = CompletionSplitter()
1861 self.splitter = CompletionSplitter()
1844
1862
1845 # _greedy_changed() depends on splitter and readline being defined:
1863 # _greedy_changed() depends on splitter and readline being defined:
1846 super().__init__(
1864 super().__init__(
1847 namespace=namespace,
1865 namespace=namespace,
1848 global_namespace=global_namespace,
1866 global_namespace=global_namespace,
1849 config=config,
1867 config=config,
1850 **kwargs,
1868 **kwargs,
1851 )
1869 )
1852
1870
1853 # List where completion matches will be stored
1871 # List where completion matches will be stored
1854 self.matches = []
1872 self.matches = []
1855 self.shell = shell
1873 self.shell = shell
1856 # Regexp to split filenames with spaces in them
1874 # Regexp to split filenames with spaces in them
1857 self.space_name_re = re.compile(r'([^\\] )')
1875 self.space_name_re = re.compile(r'([^\\] )')
1858 # Hold a local ref. to glob.glob for speed
1876 # Hold a local ref. to glob.glob for speed
1859 self.glob = glob.glob
1877 self.glob = glob.glob
1860
1878
1861 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1879 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1862 # buffers, to avoid completion problems.
1880 # buffers, to avoid completion problems.
1863 term = os.environ.get('TERM','xterm')
1881 term = os.environ.get('TERM','xterm')
1864 self.dumb_terminal = term in ['dumb','emacs']
1882 self.dumb_terminal = term in ['dumb','emacs']
1865
1883
1866 # Special handling of backslashes needed in win32 platforms
1884 # Special handling of backslashes needed in win32 platforms
1867 if sys.platform == "win32":
1885 if sys.platform == "win32":
1868 self.clean_glob = self._clean_glob_win32
1886 self.clean_glob = self._clean_glob_win32
1869 else:
1887 else:
1870 self.clean_glob = self._clean_glob
1888 self.clean_glob = self._clean_glob
1871
1889
1872 #regexp to parse docstring for function signature
1890 #regexp to parse docstring for function signature
1873 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1891 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1874 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1892 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1875 #use this if positional argument name is also needed
1893 #use this if positional argument name is also needed
1876 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1894 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1877
1895
1878 self.magic_arg_matchers = [
1896 self.magic_arg_matchers = [
1879 self.magic_config_matcher,
1897 self.magic_config_matcher,
1880 self.magic_color_matcher,
1898 self.magic_color_matcher,
1881 ]
1899 ]
1882
1900
1883 # This is set externally by InteractiveShell
1901 # This is set externally by InteractiveShell
1884 self.custom_completers = None
1902 self.custom_completers = None
1885
1903
1886 # This is a list of names of unicode characters that can be completed
1904 # This is a list of names of unicode characters that can be completed
1887 # into their corresponding unicode value. The list is large, so we
1905 # into their corresponding unicode value. The list is large, so we
1888 # lazily initialize it on first use. Consuming code should access this
1906 # lazily initialize it on first use. Consuming code should access this
1889 # attribute through the `@unicode_names` property.
1907 # attribute through the `@unicode_names` property.
1890 self._unicode_names = None
1908 self._unicode_names = None
1891
1909
1892 self._backslash_combining_matchers = [
1910 self._backslash_combining_matchers = [
1893 self.latex_name_matcher,
1911 self.latex_name_matcher,
1894 self.unicode_name_matcher,
1912 self.unicode_name_matcher,
1895 back_latex_name_matcher,
1913 back_latex_name_matcher,
1896 back_unicode_name_matcher,
1914 back_unicode_name_matcher,
1897 self.fwd_unicode_matcher,
1915 self.fwd_unicode_matcher,
1898 ]
1916 ]
1899
1917
1900 if not self.backslash_combining_completions:
1918 if not self.backslash_combining_completions:
1901 for matcher in self._backslash_combining_matchers:
1919 for matcher in self._backslash_combining_matchers:
1902 self.disable_matchers.append(_get_matcher_id(matcher))
1920 self.disable_matchers.append(_get_matcher_id(matcher))
1903
1921
1904 if not self.merge_completions:
1922 if not self.merge_completions:
1905 self.suppress_competing_matchers = True
1923 self.suppress_competing_matchers = True
1906
1924
1907 @property
1925 @property
1908 def matchers(self) -> List[Matcher]:
1926 def matchers(self) -> List[Matcher]:
1909 """All active matcher routines for completion"""
1927 """All active matcher routines for completion"""
1910 if self.dict_keys_only:
1928 if self.dict_keys_only:
1911 return [self.dict_key_matcher]
1929 return [self.dict_key_matcher]
1912
1930
1913 if self.use_jedi:
1931 if self.use_jedi:
1914 return [
1932 return [
1915 *self.custom_matchers,
1933 *self.custom_matchers,
1916 *self._backslash_combining_matchers,
1934 *self._backslash_combining_matchers,
1917 *self.magic_arg_matchers,
1935 *self.magic_arg_matchers,
1918 self.custom_completer_matcher,
1936 self.custom_completer_matcher,
1919 self.magic_matcher,
1937 self.magic_matcher,
1920 self._jedi_matcher,
1938 self._jedi_matcher,
1921 self.dict_key_matcher,
1939 self.dict_key_matcher,
1922 self.file_matcher,
1940 self.file_matcher,
1923 ]
1941 ]
1924 else:
1942 else:
1925 return [
1943 return [
1926 *self.custom_matchers,
1944 *self.custom_matchers,
1927 *self._backslash_combining_matchers,
1945 *self._backslash_combining_matchers,
1928 *self.magic_arg_matchers,
1946 *self.magic_arg_matchers,
1929 self.custom_completer_matcher,
1947 self.custom_completer_matcher,
1930 self.dict_key_matcher,
1948 self.dict_key_matcher,
1931 # TODO: convert python_matches to v2 API
1949 # TODO: convert python_matches to v2 API
1932 self.magic_matcher,
1950 self.magic_matcher,
1933 self.python_matches,
1951 self.python_matches,
1934 self.file_matcher,
1952 self.file_matcher,
1935 self.python_func_kw_matcher,
1953 self.python_func_kw_matcher,
1936 ]
1954 ]
1937
1955
1938 def all_completions(self, text:str) -> List[str]:
1956 def all_completions(self, text:str) -> List[str]:
1939 """
1957 """
1940 Wrapper around the completion methods for the benefit of emacs.
1958 Wrapper around the completion methods for the benefit of emacs.
1941 """
1959 """
1942 prefix = text.rpartition('.')[0]
1960 prefix = text.rpartition('.')[0]
1943 with provisionalcompleter():
1961 with provisionalcompleter():
1944 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1962 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1945 for c in self.completions(text, len(text))]
1963 for c in self.completions(text, len(text))]
1946
1964
1948
1966
1949 def _clean_glob(self, text:str):
1967 def _clean_glob(self, text:str):
1950 return self.glob("%s*" % text)
1968 return self.glob("%s*" % text)
1951
1969
1952 def _clean_glob_win32(self, text:str):
1970 def _clean_glob_win32(self, text:str):
1953 return [f.replace("\\","/")
1971 return [f.replace("\\","/")
1954 for f in self.glob("%s*" % text)]
1972 for f in self.glob("%s*" % text)]
1955
1973
    @context_matcher()
    def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Same as :any:`file_matches`, but adapted to the new Matcher API."""
        matches = self.file_matches(context.token)
        # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
        # starts with `/home/`, `C:\`, etc)
        return _convert_matcher_v1_result_to_v2(matches, type="path")

    def file_matches(self, text: str) -> List[str]:
        """Match filenames, expanding ~USER type strings.

        Most of the seemingly convoluted logic in this completer is an
        attempt to handle filenames with spaces in them. And yet it's not
        quite perfect, because Python's readline doesn't expose all of the
        GNU readline details needed for this to be done correctly.

        For a filename with a space in it, the printed completions will be
        only the parts after what's already been typed (instead of the
        full completions, as is normally done). I don't think with the
        current (as of Python 2.3) Python readline it's possible to do
        better.

        .. deprecated:: 8.6
            You can use :meth:`file_matcher` instead.
        """
        # chars that require escaping with backslash - i.e. chars
        # that readline treats incorrectly as delimiters, but we
        # don't want to treat as delimiters in filename matching
        # when escaped with backslash
        if text.startswith('!'):
            text = text[1:]
            text_prefix = u'!'
        else:
            text_prefix = u''

        text_until_cursor = self.text_until_cursor
        # track strings with open quotes
        open_quotes = has_open_quotes(text_until_cursor)

        if '(' in text_until_cursor or '[' in text_until_cursor:
            lsplit = text
        else:
            try:
                # arg_split ~ shlex.split, but with unicode bugs fixed by us
                lsplit = arg_split(text_until_cursor)[-1]
            except ValueError:
                # typically an unmatched ", or backslash without escaped char.
                if open_quotes:
                    lsplit = text_until_cursor.split(open_quotes)[-1]
                else:
                    return []
            except IndexError:
                # tab pressed on empty line
                lsplit = ""

        if not open_quotes and lsplit != protect_filename(lsplit):
            # if protectables are found, do matching on the whole escaped name
            has_protectables = True
            text0, text = text, lsplit
        else:
            has_protectables = False
            text = os.path.expanduser(text)

        if text == "":
            return [text_prefix + protect_filename(f) for f in self.glob("*")]

        # Compute the matches from the filesystem
        if sys.platform == 'win32':
            m0 = self.clean_glob(text)
        else:
            m0 = self.clean_glob(text.replace('\\', ''))

        if has_protectables:
            # If we had protectables, we need to revert our changes to the
            # beginning of filename so that we don't double-write the part
            # of the filename we have so far
            len_lsplit = len(lsplit)
            matches = [text_prefix + text0 +
                       protect_filename(f[len_lsplit:]) for f in m0]
        else:
            if open_quotes:
                # if we have a string with an open quote, we don't need to
                # protect the names beyond the quote (and we _shouldn't_, as
                # it would cause bugs when the filesystem call is made).
                matches = m0 if sys.platform == "win32" else \
                    [protect_filename(f, open_quotes) for f in m0]
            else:
                matches = [text_prefix +
                           protect_filename(f) for f in m0]

        # Mark directories in input list by appending '/' to their names.
        return [x + '/' if os.path.isdir(x) else x for x in matches]

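As the docstring above notes, most of the work in `file_matches` is escaping filenames with spaces so readline does not split them. A minimal sketch of the idea follows; `PROTECTABLES`, `protect_filename`, and `complete_path` here are simplified stand-ins for illustration, not the module's actual definitions (the real `PROTECTABLES` set is configurable on the completer):

```python
import glob
import os

# Simplified stand-in: characters that readline would treat as delimiters
# and therefore need a backslash escape in completions.
PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

def protect_filename(s: str) -> str:
    """Escape protectable characters with a backslash."""
    return "".join("\\" + c if c in PROTECTABLES else c for c in s)

def complete_path(prefix: str) -> list:
    """Glob for prefix*, escape the results, and mark directories with '/'."""
    matches = [protect_filename(f) for f in glob.glob(prefix + "*")]
    return [m + "/" if os.path.isdir(m) else m for m in matches]

print(protect_filename("my file.txt"))  # my\ file.txt
```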
    @context_matcher()
    def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match magics."""
        text = context.token
        matches = self.magic_matches(text)
        result = _convert_matcher_v1_result_to_v2(matches, type="magic")
        is_magic_prefix = len(text) > 0 and text[0] == "%"
        result["suppress"] = is_magic_prefix and bool(result["completions"])
        return result

    def magic_matches(self, text: str):
        """Match magics.

        .. deprecated:: 8.6
            You can use :meth:`magic_matcher` instead.
        """
        # Get all shell magics now rather than statically, so magics loaded at
        # runtime show up too.
        lsm = self.shell.magics_manager.lsmagic()
        line_magics = lsm['line']
        cell_magics = lsm['cell']
        pre = self.magic_escape
        pre2 = pre + pre

        explicit_magic = text.startswith(pre)

        # Completion logic:
        # - user gives %%: only do cell magics
        # - user gives %: do both line and cell magics
        # - no prefix: do both
        # In other words, line magics are skipped if the user gives %% explicitly
        #
        # We also exclude magics that match any currently visible names:
        # https://github.com/ipython/ipython/issues/4877, unless the user has
        # typed a %:
        # https://github.com/ipython/ipython/issues/10754
        bare_text = text.lstrip(pre)
        global_matches = self.global_matches(bare_text)
        if not explicit_magic:
            def matches(magic):
                """
                Filter magics, in particular remove magics that match
                a name present in global namespace.
                """
                return (magic.startswith(bare_text) and
                        magic not in global_matches)
        else:
            def matches(magic):
                return magic.startswith(bare_text)

        comp = [pre2 + m for m in cell_magics if matches(m)]
        if not text.startswith(pre2):
            comp += [pre + m for m in line_magics if matches(m)]

        return comp

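The prefix decision table in the comments above can be sketched as a standalone function. `candidate_magics` is a hypothetical reduction for illustration: it hard-codes `%` as the escape and omits the global-namespace exclusion that the real method applies:

```python
def candidate_magics(text, line_magics, cell_magics):
    # '%%' prefix -> cell magics only; '%' or bare text -> both kinds.
    pre, pre2 = "%", "%%"
    bare = text.lstrip(pre)
    comp = [pre2 + m for m in cell_magics if m.startswith(bare)]
    if not text.startswith(pre2):
        comp += [pre + m for m in line_magics if m.startswith(bare)]
    return comp

print(candidate_magics("%%ti", ["time", "timeit"], ["timeit"]))
# ['%%timeit']
```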
    @context_matcher()
    def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match class names and attributes for %config magic."""
        # NOTE: uses `line_buffer` equivalent for compatibility
        matches = self.magic_config_matches(context.line_with_cursor)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def magic_config_matches(self, text: str) -> List[str]:
        """Match class names and attributes for %config magic.

        .. deprecated:: 8.6
            You can use :meth:`magic_config_matcher` instead.
        """
        texts = text.strip().split()

        if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
            # get all configuration classes
            classes = sorted(set([c for c in self.shell.configurables
                                  if c.__class__.class_traits(config=True)
                                  ]), key=lambda x: x.__class__.__name__)
            classnames = [c.__class__.__name__ for c in classes]

            # return all classnames if config or %config is given
            if len(texts) == 1:
                return classnames

            # match classname
            classname_texts = texts[1].split('.')
            classname = classname_texts[0]
            classname_matches = [c for c in classnames
                                 if c.startswith(classname)]

            # return matched classes or the matched class with attributes
            if texts[1].find('.') < 0:
                return classname_matches
            elif len(classname_matches) == 1 and \
                    classname_matches[0] == classname:
                cls = classes[classnames.index(classname)].__class__
                help = cls.class_get_help()
                # strip leading '--' from cl-args:
                help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
                return [attr.split('=')[0]
                        for attr in help.strip().splitlines()
                        if attr.startswith(texts[1])]
        return []

    @context_matcher()
    def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Match color schemes for %colors magic."""
        # NOTE: uses `line_buffer` equivalent for compatibility
        matches = self.magic_color_matches(context.line_with_cursor)
        return _convert_matcher_v1_result_to_v2(matches, type="param")

    def magic_color_matches(self, text: str) -> List[str]:
        """Match color schemes for %colors magic.

        .. deprecated:: 8.6
            You can use :meth:`magic_color_matcher` instead.
        """
        texts = text.split()
        if text.endswith(' '):
            # .split() strips off the trailing whitespace. Add '' back
            # so that: '%colors ' -> ['%colors', '']
            texts.append('')

        if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
            prefix = texts[1]
            return [color for color in InspectColors.keys()
                    if color.startswith(prefix)]
        return []

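The trailing-whitespace handling above is the subtle part: `'%colors '.split()` drops the empty prefix, so it has to be re-added for every scheme to be offered. A self-contained sketch (the `schemes` default is an assumption listing common IPython scheme names, not the module's `InspectColors` mapping):

```python
def color_candidates(text, schemes=("Linux", "LightBG", "NoColor", "Neutral")):
    # Mirror of the method above: '%colors ' must yield an empty prefix
    # so that every scheme is a candidate.
    texts = text.split()
    if text.endswith(' '):
        texts.append('')
    if len(texts) == 2 and texts[0] in ('colors', '%colors'):
        prefix = texts[1]
        return [s for s in schemes if s.startswith(prefix)]
    return []

print(color_candidates("%colors L"))  # ['Linux', 'LightBG']
```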
    @context_matcher(identifier="IPCompleter.jedi_matcher")
    def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
        matches = self._jedi_matches(
            cursor_column=context.cursor_position,
            cursor_line=context.cursor_line,
            text=context.full_text,
        )
        return {
            "completions": matches,
            # static analysis should not suppress other matchers
            "suppress": False,
        }

    def _jedi_matches(
        self, cursor_column: int, cursor_line: int, text: str
    ) -> Iterator[_JediCompletionLike]:
        """
        Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
        cursor position.

        Parameters
        ----------
        cursor_column : int
            column position of the cursor in ``text``, 0-indexed.
        cursor_line : int
            line position of the cursor in ``text``, 0-indexed
        text : str
            text to complete

        Notes
        -----
        If ``IPCompleter.debug`` is ``True``, this may return a
        :any:`_FakeJediCompletion` object containing a string with the Jedi
        debug information attached.

        .. deprecated:: 8.6
            You can use :meth:`_jedi_matcher` instead.
        """
        namespaces = [self.namespace]
        if self.global_namespace is not None:
            namespaces.append(self.global_namespace)

        completion_filter = lambda x: x
        offset = cursor_to_position(text, cursor_line, cursor_column)
        # filter output if we are completing for object members
        if offset:
            pre = text[offset - 1]
            if pre == '.':
                if self.omit__names == 2:
                    completion_filter = lambda c: not c.name.startswith('_')
                elif self.omit__names == 1:
                    completion_filter = lambda c: not (c.name.startswith('__') and c.name.endswith('__'))
                elif self.omit__names == 0:
                    completion_filter = lambda x: x
                else:
                    raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))

        interpreter = jedi.Interpreter(text[:offset], namespaces)
        try_jedi = True

        try:
            # find the first token in the current tree -- if it is a ' or " then we are in a string
            completing_string = False
            try:
                first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
            except StopIteration:
                pass
            else:
                # note the value may be ', ", or it may also be ''' or """, or
                # in some cases, """what/you/typed..., but all of these are
                # strings.
                completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}

            # if we are in a string, jedi is likely not the right candidate
            # for now. Skip it.
            try_jedi = not completing_string
        except Exception as e:
            # many things can go wrong; we are using a private API, just don't crash.
            if self.debug:
                print("Error detecting if completing a non-finished string :", e, '|')

        if not try_jedi:
            return iter([])
        try:
            return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
        except Exception as e:
            if self.debug:
                return iter(
                    [
                        _FakeJediCompletion(
                            'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
                            % (e)
                        )
                    ]
                )
            else:
                return iter([])

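The matcher above converts the 0-indexed (line, column) cursor into a flat offset before slicing the buffer for Jedi. The conversion can be sketched as below; `cursor_to_position` here is a simplified re-implementation for illustration, not the helper imported by this module:

```python
def cursor_to_position(text: str, line: int, column: int) -> int:
    # (line, column) are 0-indexed, matching the _jedi_matches docstring.
    # The offset is the total length of the preceding lines (plus their
    # newline characters) plus the column on the cursor's own line.
    lines = text.split('\n')
    assert line < len(lines), "cursor line beyond end of text"
    return sum(len(l) + 1 for l in lines[:line]) + column

text = "ab\ncd"
offset = cursor_to_position(text, 1, 1)  # cursor between 'c' and 'd'
print(offset, text[:offset])  # 4 ab\nc
```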
    @completion_matcher(api_version=1)
    def python_matches(self, text: str) -> Iterable[str]:
        """Match attributes or global python names"""
        if "." in text:
            try:
                matches = self.attr_matches(text)
                if text.endswith('.') and self.omit__names:
                    if self.omit__names == 1:
                        # true if txt is _not_ a __ name, false otherwise:
                        no__name = (lambda txt:
                                    re.match(r'.*\.__.*?__', txt) is None)
                    else:
                        # true if txt is _not_ a _ name, false otherwise:
                        no__name = (lambda txt:
                                    re.match(r'\._.*?', txt[txt.rindex('.'):]) is None)
                    matches = filter(no__name, matches)
            except NameError:
                # catches <undefined attributes>.<tab>
                matches = []
        else:
            matches = self.global_matches(text)
        return matches

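The two `no__name` filters above implement the `omit__names` levels: level 1 hides only dunder attributes after a trailing dot, while level 2 hides every underscore-prefixed attribute. Isolated for illustration:

```python
import re

# omit__names == 1: hide only __dunder__ attributes.
no_dunder = lambda txt: re.match(r'.*\.__.*?__', txt) is None
# omit__names == 2: hide any attribute starting with a single underscore.
no_underscore = lambda txt: re.match(r'\._.*?', txt[txt.rindex('.'):]) is None

print(no_dunder('obj.__init__'), no_dunder('obj.data'))        # False True
print(no_underscore('obj._cache'), no_underscore('obj.data'))  # False True
```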
    def _default_arguments_from_docstring(self, doc):
        """Parse the first line of docstring for call signature.

        Docstring should be of the form 'min(iterable[, key=func])\n'.
        It can also parse cython docstring of the form
        'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
        """
        if doc is None:
            return []

        # care only about the first line
        line = doc.lstrip().splitlines()[0]

        # p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
        # 'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
        sig = self.docstring_sig_re.search(line)
        if sig is None:
            return []
        # 'iterable[, key=func]' -> ['iterable[' ,' key=func]']
        sig = sig.groups()[0].split(',')
        ret = []
        for s in sig:
            # re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
            ret += self.docstring_kwd_re.findall(s)
        return ret

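The two regexes quoted in the comments above can be exercised standalone. `kwargs_from_docstring` below is a hypothetical free-function version of the method, using the same patterns as `docstring_sig_re` and `docstring_kwd_re` in the class body; note how the keyword regex only matches tokens followed by `=`, so positional arguments like `iterable` are dropped:

```python
import re

# Same patterns as the class attributes referenced above.
sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')

def kwargs_from_docstring(doc: str):
    """Extract keyword-argument names from a docstring's first line."""
    line = doc.lstrip().splitlines()[0]
    sig = sig_re.search(line)
    if sig is None:
        return []
    ret = []
    for part in sig.groups()[0].split(','):
        ret += kwd_re.findall(part)
    return ret

print(kwargs_from_docstring('min(iterable[, key=func])\n'))  # ['key']
```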
    def _default_arguments(self, obj):
        """Return the list of default arguments of obj if it is callable,
        or empty list otherwise."""
        call_obj = obj
        ret = []
        if inspect.isbuiltin(obj):
            pass
        elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
            if inspect.isclass(obj):
                # for cython embedsignature=True the constructor docstring
                # belongs to the object itself not __init__
                ret += self._default_arguments_from_docstring(
                    getattr(obj, '__doc__', ''))
                # for classes, check for __init__, __new__
                call_obj = (getattr(obj, '__init__', None) or
                            getattr(obj, '__new__', None))
            # for all others, check if they are __call__able
            elif hasattr(obj, '__call__'):
                call_obj = obj.__call__
        ret += self._default_arguments_from_docstring(
            getattr(call_obj, '__doc__', ''))

        _keeps = (inspect.Parameter.KEYWORD_ONLY,
                  inspect.Parameter.POSITIONAL_OR_KEYWORD)

        try:
            sig = inspect.signature(obj)
            ret.extend(k for k, v in sig.parameters.items() if
                       v.kind in _keeps)
        except ValueError:
            pass

        return list(set(ret))

2357 @context_matcher()
2375 @context_matcher()
2358 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2376 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2359 """Match named parameters (kwargs) of the last open function."""
2377 """Match named parameters (kwargs) of the last open function."""
2360 matches = self.python_func_kw_matches(context.token)
2378 matches = self.python_func_kw_matches(context.token)
2361 return _convert_matcher_v1_result_to_v2(matches, type="param")
2379 return _convert_matcher_v1_result_to_v2(matches, type="param")
2362
2380
2363 def python_func_kw_matches(self, text):
2381 def python_func_kw_matches(self, text):
2364 """Match named parameters (kwargs) of the last open function.
2382 """Match named parameters (kwargs) of the last open function.
2365
2383
2366 .. deprecated:: 8.6
2384 .. deprecated:: 8.6
2367 You can use :meth:`python_func_kw_matcher` instead.
2385 You can use :meth:`python_func_kw_matcher` instead.
2368 """
2386 """
2369
2387
2370 if "." in text: # a parameter cannot be dotted
2388 if "." in text: # a parameter cannot be dotted
2371 return []
2389 return []
2372 try: regexp = self.__funcParamsRegex
2390 try: regexp = self.__funcParamsRegex
2373 except AttributeError:
2391 except AttributeError:
2374 regexp = self.__funcParamsRegex = re.compile(r'''
2392 regexp = self.__funcParamsRegex = re.compile(r'''
2375 '.*?(?<!\\)' | # single quoted strings or
2393 '.*?(?<!\\)' | # single quoted strings or
2376 ".*?(?<!\\)" | # double quoted strings or
2394 ".*?(?<!\\)" | # double quoted strings or
2377 \w+ | # identifier
2395 \w+ | # identifier
2378 \S # other characters
2396 \S # other characters
2379 ''', re.VERBOSE | re.DOTALL)
2397 ''', re.VERBOSE | re.DOTALL)
2380 # 1. find the nearest identifier that comes before an unclosed
2398 # 1. find the nearest identifier that comes before an unclosed
2381 # parenthesis before the cursor
2399 # parenthesis before the cursor
2382 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2400 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2383 tokens = regexp.findall(self.text_until_cursor)
2401 tokens = regexp.findall(self.text_until_cursor)
2384 iterTokens = reversed(tokens); openPar = 0
2402 iterTokens = reversed(tokens); openPar = 0
2385
2403
2386 for token in iterTokens:
2404 for token in iterTokens:
2387 if token == ')':
2405 if token == ')':
2388 openPar -= 1
2406 openPar -= 1
2389 elif token == '(':
2407 elif token == '(':
2390 openPar += 1
2408 openPar += 1
2391 if openPar > 0:
2409 if openPar > 0:
2392 # found the last unclosed parenthesis
2410 # found the last unclosed parenthesis
2393 break
2411 break
2394 else:
2412 else:
2395 return []
2413 return []
2396 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2414 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2397 ids = []
2415 ids = []
2398 isId = re.compile(r'\w+$').match
2416 isId = re.compile(r'\w+$').match
2399
2417
2400 while True:
2418 while True:
2401 try:
2419 try:
2402 ids.append(next(iterTokens))
2420 ids.append(next(iterTokens))
2403 if not isId(ids[-1]):
2421 if not isId(ids[-1]):
2404 ids.pop(); break
2422 ids.pop(); break
2405 if not next(iterTokens) == '.':
2423 if not next(iterTokens) == '.':
2406 break
2424 break
2407 except StopIteration:
2425 except StopIteration:
2408 break
2426 break
2409
2427
2410 # Find all named arguments already assigned to, as to avoid suggesting
2428 # Find all named arguments already assigned to, as to avoid suggesting
2411 # them again
2429 # them again
2412 usedNamedArgs = set()
2430 usedNamedArgs = set()
2413 par_level = -1
2431 par_level = -1
2414 for token, next_token in zip(tokens, tokens[1:]):
2432 for token, next_token in zip(tokens, tokens[1:]):
2415 if token == '(':
2433 if token == '(':
2416 par_level += 1
2434 par_level += 1
2417 elif token == ')':
2435 elif token == ')':
2418 par_level -= 1
2436 par_level -= 1
2419
2437
2420 if par_level != 0:
2438 if par_level != 0:
2421 continue
2439 continue
2422
2440
2423 if next_token != '=':
2441 if next_token != '=':
2424 continue
2442 continue
2425
2443
2426 usedNamedArgs.add(token)
2444 usedNamedArgs.add(token)
2427
2445
2428 argMatches = []
2446 argMatches = []
2429 try:
2447 try:
2430 callableObj = '.'.join(ids[::-1])
2448 callableObj = '.'.join(ids[::-1])
2431 namedArgs = self._default_arguments(eval(callableObj,
2449 namedArgs = self._default_arguments(eval(callableObj,
2432 self.namespace))
2450 self.namespace))
2433
2451
2434 # Remove used named arguments from the list, no need to show twice
2452 # Remove used named arguments from the list, no need to show twice
2435 for namedArg in set(namedArgs) - usedNamedArgs:
2453 for namedArg in set(namedArgs) - usedNamedArgs:
2436 if namedArg.startswith(text):
2454 if namedArg.startswith(text):
2437 argMatches.append("%s=" %namedArg)
2455 argMatches.append("%s=" %namedArg)
2438 except:
2456 except:
2439 pass
2457 pass
2440
2458
2441 return argMatches
2459 return argMatches
2442
2460
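The keyword-argument completion above evaluates the dotted callable name, asks for its default arguments, and filters out names already used before the cursor. A minimal sketch of that idea, using `inspect.signature` and hypothetical function/parameter names:

```python
import inspect

# Sketch of the keyword-argument completion above: suggest a callable's
# keyword parameters for a typed prefix, minus those already assigned.
# `plot` and its parameters are illustrative, not from IPython.
def kw_matches(func, text, used=()):
    params = inspect.signature(func).parameters
    named = [name for name, p in params.items()
             if p.default is not inspect.Parameter.empty
             or p.kind is inspect.Parameter.KEYWORD_ONLY]
    return ["%s=" % name for name in set(named) - set(used)
            if name.startswith(text)]

def plot(x, y, color="blue", width=1):
    pass

print(sorted(kw_matches(plot, "c")))                  # -> ['color=']
print(sorted(kw_matches(plot, "", used=("color",))))  # -> ['width=']
```

The real implementation additionally walks the token stream backwards to locate the last unclosed parenthesis and reconstruct the dotted name being called.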
2443 @staticmethod
2461 @staticmethod
2444 def _get_keys(obj: Any) -> List[Any]:
2462 def _get_keys(obj: Any) -> List[Any]:
2445 # Objects can define their own completions by defining an
2463 # Objects can define their own completions by defining an
2446 # _ipython_key_completions_() method.
2464 # _ipython_key_completions_() method.
2447 method = get_real_method(obj, '_ipython_key_completions_')
2465 method = get_real_method(obj, '_ipython_key_completions_')
2448 if method is not None:
2466 if method is not None:
2449 return method()
2467 return method()
2450
2468
2451 # Special case some common in-memory dict-like types
2469 # Special case some common in-memory dict-like types
2452 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2470 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2453 try:
2471 try:
2454 return list(obj.keys())
2472 return list(obj.keys())
2455 except Exception:
2473 except Exception:
2456 return []
2474 return []
2457 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2475 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2458 try:
2476 try:
2459 return list(obj.obj.keys())
2477 return list(obj.obj.keys())
2460 except Exception:
2478 except Exception:
2461 return []
2479 return []
2462 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2480 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2463 _safe_isinstance(obj, 'numpy', 'void'):
2481 _safe_isinstance(obj, 'numpy', 'void'):
2464 return obj.dtype.names or []
2482 return obj.dtype.names or []
2465 return []
2483 return []
2466
2484
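As the comment in `_get_keys` notes, any object can opt into dict-key completion by implementing `_ipython_key_completions_`. A minimal sketch of such an object (the class name and keys are hypothetical):

```python
# Any object can define `_ipython_key_completions_` to provide its own
# key completions after `obj[`; `_get_keys` above checks for it first.
class LazyStore:
    def __init__(self):
        self._data = {"alpha": 1, "beta": 2}

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        # Return the candidate keys to offer at the completion prompt.
        return list(self._data)

store = LazyStore()
print(store._ipython_key_completions_())  # -> ['alpha', 'beta']
```

Typing `store["` in IPython would then offer `alpha` and `beta`, even though `LazyStore` is not a `dict` subclass.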
2467 @context_matcher()
2485 @context_matcher()
2468 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2486 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2469 """Match string keys in a dictionary, after e.g. ``foo[``."""
2487 """Match string keys in a dictionary, after e.g. ``foo[``."""
2470 matches = self.dict_key_matches(context.token)
2488 matches = self.dict_key_matches(context.token)
2471 return _convert_matcher_v1_result_to_v2(
2489 return _convert_matcher_v1_result_to_v2(
2472 matches, type="dict key", suppress_if_matches=True
2490 matches, type="dict key", suppress_if_matches=True
2473 )
2491 )
2474
2492
2475 def dict_key_matches(self, text: str) -> List[str]:
2493 def dict_key_matches(self, text: str) -> List[str]:
2476 """Match string keys in a dictionary, after e.g. ``foo[``.
2494 """Match string keys in a dictionary, after e.g. ``foo[``.
2477
2495
2478 .. deprecated:: 8.6
2496 .. deprecated:: 8.6
2479 You can use :meth:`dict_key_matcher` instead.
2497 You can use :meth:`dict_key_matcher` instead.
2480 """
2498 """
2481
2499
2482 # Short-circuit on closed dictionary (regular expression would
2500 # Short-circuit on closed dictionary (regular expression would
2483 # not match anyway, but would take quite a while).
2501 # not match anyway, but would take quite a while).
2484 if self.text_until_cursor.strip().endswith("]"):
2502 if self.text_until_cursor.strip().endswith("]"):
2485 return []
2503 return []
2486
2504
2487 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2505 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2488
2506
2489 if match is None:
2507 if match is None:
2490 return []
2508 return []
2491
2509
2492 expr, prior_tuple_keys, key_prefix = match.groups()
2510 expr, prior_tuple_keys, key_prefix = match.groups()
2493
2511
2494 obj = self._evaluate_expr(expr)
2512 obj = self._evaluate_expr(expr)
2495
2513
2496 if obj is not_found:
2514 if obj is not_found:
2497 return []
2515 return []
2498
2516
2499 keys = self._get_keys(obj)
2517 keys = self._get_keys(obj)
2500 if not keys:
2518 if not keys:
2501 return keys
2519 return keys
2502
2520
2503 tuple_prefix = guarded_eval(
2521 tuple_prefix = guarded_eval(
2504 prior_tuple_keys,
2522 prior_tuple_keys,
2505 EvaluationContext(
2523 EvaluationContext(
2506 globals_=self.global_namespace,
2524 globals=self.global_namespace,
2507 locals_=self.namespace,
2525 locals=self.namespace,
2508 evaluation=self.evaluation,
2526 evaluation=self.evaluation,
2509 in_subscript=True,
2527 in_subscript=True,
2510 ),
2528 ),
2511 )
2529 )
2512
2530
2513 closing_quote, token_offset, matches = match_dict_keys(
2531 closing_quote, token_offset, matches = match_dict_keys(
2514 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2532 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2515 )
2533 )
2516 if not matches:
2534 if not matches:
2517 return []
2535 return []
2518
2536
2519 # get the cursor position of
2537 # get the cursor position of
2520 # - the text being completed
2538 # - the text being completed
2521 # - the start of the key text
2539 # - the start of the key text
2522 # - the start of the completion
2540 # - the start of the completion
2523 text_start = len(self.text_until_cursor) - len(text)
2541 text_start = len(self.text_until_cursor) - len(text)
2524 if key_prefix:
2542 if key_prefix:
2525 key_start = match.start(3)
2543 key_start = match.start(3)
2526 completion_start = key_start + token_offset
2544 completion_start = key_start + token_offset
2527 else:
2545 else:
2528 key_start = completion_start = match.end()
2546 key_start = completion_start = match.end()
2529
2547
2530 # grab the leading prefix, to make sure all completions start with `text`
2548 # grab the leading prefix, to make sure all completions start with `text`
2531 if text_start > key_start:
2549 if text_start > key_start:
2532 leading = ''
2550 leading = ''
2533 else:
2551 else:
2534 leading = text[text_start:completion_start]
2552 leading = text[text_start:completion_start]
2535
2553
2536 # append closing quote and bracket as appropriate
2554 # append closing quote and bracket as appropriate
2537 # this is *not* appropriate if the opening quote or bracket is outside
2555 # this is *not* appropriate if the opening quote or bracket is outside
2538 # the text given to this method, e.g. `d["""a\nt
2556 # the text given to this method, e.g. `d["""a\nt
2539 can_close_quote = False
2557 can_close_quote = False
2540 can_close_bracket = False
2558 can_close_bracket = False
2541
2559
2542 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2560 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2543
2561
2544 if continuation.startswith(closing_quote):
2562 if continuation.startswith(closing_quote):
2545 # do not close if already closed, e.g. `d['a<tab>'`
2563 # do not close if already closed, e.g. `d['a<tab>'`
2546 continuation = continuation[len(closing_quote) :]
2564 continuation = continuation[len(closing_quote) :]
2547 else:
2565 else:
2548 can_close_quote = True
2566 can_close_quote = True
2549
2567
2550 continuation = continuation.strip()
2568 continuation = continuation.strip()
2551
2569
2552 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2570 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2553 # handling it is out of scope, so let's avoid appending suffixes.
2571 # handling it is out of scope, so let's avoid appending suffixes.
2554 has_known_tuple_handling = isinstance(obj, dict)
2572 has_known_tuple_handling = isinstance(obj, dict)
2555
2573
2556 can_close_bracket = (
2574 can_close_bracket = (
2557 not continuation.startswith("]") and self.auto_close_dict_keys
2575 not continuation.startswith("]") and self.auto_close_dict_keys
2558 )
2576 )
2559 can_close_tuple_item = (
2577 can_close_tuple_item = (
2560 not continuation.startswith(",")
2578 not continuation.startswith(",")
2561 and has_known_tuple_handling
2579 and has_known_tuple_handling
2562 and self.auto_close_dict_keys
2580 and self.auto_close_dict_keys
2563 )
2581 )
2564 can_close_quote = can_close_quote and self.auto_close_dict_keys
2582 can_close_quote = can_close_quote and self.auto_close_dict_keys
2565
2583
2566 # fast path if closing quote should be appended but no suffix is allowed
2584 # fast path if closing quote should be appended but no suffix is allowed
2567 if not can_close_quote and not can_close_bracket and closing_quote:
2585 if not can_close_quote and not can_close_bracket and closing_quote:
2568 return [leading + k for k in matches]
2586 return [leading + k for k in matches]
2569
2587
2570 results = []
2588 results = []
2571
2589
2572 end_of_tuple_or_item = DictKeyState.END_OF_TUPLE | DictKeyState.END_OF_ITEM
2590 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2573
2591
2574 for k, state_flag in matches.items():
2592 for k, state_flag in matches.items():
2575 result = leading + k
2593 result = leading + k
2576 if can_close_quote and closing_quote:
2594 if can_close_quote and closing_quote:
2577 result += closing_quote
2595 result += closing_quote
2578
2596
2579 if state_flag == end_of_tuple_or_item:
2597 if state_flag == end_of_tuple_or_item:
2580 # We do not know which suffix to add,
2598 # We do not know which suffix to add,
2581 # e.g. both tuple item and string
2599 # e.g. both tuple item and string
2582 # match this item.
2600 # match this item.
2583 pass
2601 pass
2584
2602
2585 if state_flag in end_of_tuple_or_item and can_close_bracket:
2603 if state_flag in end_of_tuple_or_item and can_close_bracket:
2586 result += "]"
2604 result += "]"
2587 if state_flag == DictKeyState.IN_TUPLE and can_close_tuple_item:
2605 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2588 result += ", "
2606 result += ", "
2589 results.append(result)
2607 results.append(result)
2590 return results
2608 return results
2591
2609
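The suffix logic above only appends a closing quote or bracket when the text after the cursor does not already contain it. A simplified standalone sketch of that decision (function and parameter names are illustrative):

```python
# Sketch of the suffix handling in `dict_key_matches` above: close the
# quote and bracket only if the continuation after the cursor has not
# already closed them.
def complete_key(key, closing_quote, continuation, auto_close=True):
    result = key
    already_quoted = continuation.startswith(closing_quote)
    if auto_close and not already_quoted:
        result += closing_quote
    rest = continuation[len(closing_quote):] if already_quoted else continuation
    if auto_close and not rest.strip().startswith("]"):
        result += "]"
    return result

print(complete_key("a", "'", ""))    # -> "a']"  (closes quote and bracket)
print(complete_key("a", "'", "']"))  # -> "a"    (already closed, add nothing)
```

The real method also tracks tuple-key state to decide between appending `]` and `, `, and skips suffixes entirely for objects like `pandas.DataFrame` whose tuple-indexing semantics differ.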
2592 @context_matcher()
2610 @context_matcher()
2593 def unicode_name_matcher(self, context: CompletionContext):
2611 def unicode_name_matcher(self, context: CompletionContext):
2594 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2612 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2595 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2613 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2596 return _convert_matcher_v1_result_to_v2(
2614 return _convert_matcher_v1_result_to_v2(
2597 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2615 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2598 )
2616 )
2599
2617
2600 @staticmethod
2618 @staticmethod
2601 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2619 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2602 """Match Latex-like syntax for unicode characters base
2620 """Match Latex-like syntax for unicode characters base
2603 on the name of the character.
2621 on the name of the character.
2604
2622
2605 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2623 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2606
2624
2607 Works only on valid python 3 identifiers, or on combining characters that
2625 Works only on valid python 3 identifiers, or on combining characters that
2608 will combine to form a valid identifier.
2626 will combine to form a valid identifier.
2609 """
2627 """
2610 slashpos = text.rfind('\\')
2628 slashpos = text.rfind('\\')
2611 if slashpos > -1:
2629 if slashpos > -1:
2612 s = text[slashpos+1:]
2630 s = text[slashpos+1:]
2613 try :
2631 try :
2614 unic = unicodedata.lookup(s)
2632 unic = unicodedata.lookup(s)
2615 # allow combining chars
2633 # allow combining chars
2616 if ('a'+unic).isidentifier():
2634 if ('a'+unic).isidentifier():
2617 return '\\'+s,[unic]
2635 return '\\'+s,[unic]
2618 except KeyError:
2636 except KeyError:
2619 pass
2637 pass
2620 return '', []
2638 return '', []
2621
2639
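The method above resolves whatever follows the last backslash through `unicodedata.lookup`. A self-contained sketch of that core step, without the identifier check the real implementation performs:

```python
import unicodedata

def lookup_unicode_name(text):
    """Resolve the Unicode name after the last backslash, as
    `unicode_name_matches` above does (simplified sketch)."""
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", []
    name = text[slashpos + 1:]
    try:
        char = unicodedata.lookup(name)
    except KeyError:
        # Not a known Unicode character name.
        return "", []
    return "\\" + name, [char]

print(lookup_unicode_name("x = \\GREEK SMALL LETTER ETA"))
# -> ('\\GREEK SMALL LETTER ETA', ['η'])
```

Returning the matched fragment alongside the replacement lets the frontend know how much typed text the completion should replace.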
2622 @context_matcher()
2640 @context_matcher()
2623 def latex_name_matcher(self, context: CompletionContext):
2641 def latex_name_matcher(self, context: CompletionContext):
2624 """Match Latex syntax for unicode characters.
2642 """Match Latex syntax for unicode characters.
2625
2643
2626 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2644 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2627 """
2645 """
2628 fragment, matches = self.latex_matches(context.text_until_cursor)
2646 fragment, matches = self.latex_matches(context.text_until_cursor)
2629 return _convert_matcher_v1_result_to_v2(
2647 return _convert_matcher_v1_result_to_v2(
2630 matches, type="latex", fragment=fragment, suppress_if_matches=True
2648 matches, type="latex", fragment=fragment, suppress_if_matches=True
2631 )
2649 )
2632
2650
2633 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2651 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2634 """Match Latex syntax for unicode characters.
2652 """Match Latex syntax for unicode characters.
2635
2653
2636 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2654 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2637
2655
2638 .. deprecated:: 8.6
2656 .. deprecated:: 8.6
2639 You can use :meth:`latex_name_matcher` instead.
2657 You can use :meth:`latex_name_matcher` instead.
2640 """
2658 """
2641 slashpos = text.rfind('\\')
2659 slashpos = text.rfind('\\')
2642 if slashpos > -1:
2660 if slashpos > -1:
2643 s = text[slashpos:]
2661 s = text[slashpos:]
2644 if s in latex_symbols:
2662 if s in latex_symbols:
2645 # Try to complete a full latex symbol to unicode
2663 # Try to complete a full latex symbol to unicode
2646 # \\alpha -> α
2664 # \\alpha -> α
2647 return s, [latex_symbols[s]]
2665 return s, [latex_symbols[s]]
2648 else:
2666 else:
2649 # If a user has partially typed a latex symbol, give them
2667 # If a user has partially typed a latex symbol, give them
2650 # a full list of options \al -> [\aleph, \alpha]
2668 # a full list of options \al -> [\aleph, \alpha]
2651 matches = [k for k in latex_symbols if k.startswith(s)]
2669 matches = [k for k in latex_symbols if k.startswith(s)]
2652 if matches:
2670 if matches:
2653 return s, matches
2671 return s, matches
2654 return '', ()
2672 return '', ()
2655
2673
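The latex completion above is a two-mode lookup over a symbol table: an exact symbol maps to its unicode character, while a partial symbol yields all prefix matches. A sketch using a tiny hypothetical table in place of IPython's full `latex_symbols`:

```python
# Simplified sketch of `latex_matches` above; the three-entry table is
# illustrative, not IPython's actual symbol set.
latex_symbols = {"\\alpha": "α", "\\aleph": "ℵ", "\\beta": "β"}

def latex_matches(text):
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", ()
    s = text[slashpos:]
    if s in latex_symbols:
        # Full symbol typed: offer the unicode character itself.
        return s, [latex_symbols[s]]
    # Partial symbol: offer every symbol sharing the prefix.
    matches = [k for k in latex_symbols if k.startswith(s)]
    return (s, matches) if matches else ("", ())

print(latex_matches("\\alpha"))  # -> ('\\alpha', ['α'])
print(sorted(latex_matches("x + \\al")[1]))  # -> ['\\aleph', '\\alpha']
```

Either way the matched fragment starts at the backslash, so the frontend replaces the whole `\name` token.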
2656 @context_matcher()
2674 @context_matcher()
2657 def custom_completer_matcher(self, context):
2675 def custom_completer_matcher(self, context):
2658 """Dispatch custom completer.
2676 """Dispatch custom completer.
2659
2677
2660 If a match is found, suppresses all other matchers except for Jedi.
2678 If a match is found, suppresses all other matchers except for Jedi.
2661 """
2679 """
2662 matches = self.dispatch_custom_completer(context.token) or []
2680 matches = self.dispatch_custom_completer(context.token) or []
2663 result = _convert_matcher_v1_result_to_v2(
2681 result = _convert_matcher_v1_result_to_v2(
2664 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2682 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2665 )
2683 )
2666 result["ordered"] = True
2684 result["ordered"] = True
2667 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2685 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2668 return result
2686 return result
2669
2687
2670 def dispatch_custom_completer(self, text):
2688 def dispatch_custom_completer(self, text):
2671 """
2689 """
2672 .. deprecated:: 8.6
2690 .. deprecated:: 8.6
2673 You can use :meth:`custom_completer_matcher` instead.
2691 You can use :meth:`custom_completer_matcher` instead.
2674 """
2692 """
2675 if not self.custom_completers:
2693 if not self.custom_completers:
2676 return
2694 return
2677
2695
2678 line = self.line_buffer
2696 line = self.line_buffer
2679 if not line.strip():
2697 if not line.strip():
2680 return None
2698 return None
2681
2699
2682 # Create a little structure to pass all the relevant information about
2700 # Create a little structure to pass all the relevant information about
2683 # the current completion to any custom completer.
2701 # the current completion to any custom completer.
2684 event = SimpleNamespace()
2702 event = SimpleNamespace()
2685 event.line = line
2703 event.line = line
2686 event.symbol = text
2704 event.symbol = text
2687 cmd = line.split(None,1)[0]
2705 cmd = line.split(None,1)[0]
2688 event.command = cmd
2706 event.command = cmd
2689 event.text_until_cursor = self.text_until_cursor
2707 event.text_until_cursor = self.text_until_cursor
2690
2708
2691 # for foo etc, try also to find completer for %foo
2709 # for foo etc, try also to find completer for %foo
2692 if not cmd.startswith(self.magic_escape):
2710 if not cmd.startswith(self.magic_escape):
2693 try_magic = self.custom_completers.s_matches(
2711 try_magic = self.custom_completers.s_matches(
2694 self.magic_escape + cmd)
2712 self.magic_escape + cmd)
2695 else:
2713 else:
2696 try_magic = []
2714 try_magic = []
2697
2715
2698 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2716 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2699 try_magic,
2717 try_magic,
2700 self.custom_completers.flat_matches(self.text_until_cursor)):
2718 self.custom_completers.flat_matches(self.text_until_cursor)):
2701 try:
2719 try:
2702 res = c(event)
2720 res = c(event)
2703 if res:
2721 if res:
2704 # first, try case sensitive match
2722 # first, try case sensitive match
2705 withcase = [r for r in res if r.startswith(text)]
2723 withcase = [r for r in res if r.startswith(text)]
2706 if withcase:
2724 if withcase:
2707 return withcase
2725 return withcase
2708 # if none, then case insensitive ones are ok too
2726 # if none, then case insensitive ones are ok too
2709 text_low = text.lower()
2727 text_low = text.lower()
2710 return [r for r in res if r.lower().startswith(text_low)]
2728 return [r for r in res if r.lower().startswith(text_low)]
2711 except TryNext:
2729 except TryNext:
2712 pass
2730 pass
2713 except KeyboardInterrupt:
2731 except KeyboardInterrupt:
2714 """
2732 """
2715 If a custom completer takes too long,
2733 If a custom completer takes too long,
2716 let keyboard interrupt abort and return nothing.
2734 let keyboard interrupt abort and return nothing.
2717 """
2735 """
2718 break
2736 break
2719
2737
2720 return None
2738 return None
2721
2739
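Two details of `dispatch_custom_completer` are worth illustrating: the event namespace handed to each custom completer, and the result filtering that prefers case-sensitive matches before falling back to case-insensitive ones. A sketch with a hypothetical completer:

```python
from types import SimpleNamespace

# Sketch of the result filtering in `dispatch_custom_completer` above:
# case-sensitive matches win; otherwise case-insensitive ones are ok too.
def filter_matches(res, text):
    withcase = [r for r in res if r.startswith(text)]
    if withcase:
        return withcase
    text_low = text.lower()
    return [r for r in res if r.lower().startswith(text_low)]

# A hypothetical custom completer; real ones receive the same event shape.
def git_completer(event):
    return ["checkout", "cherry-pick", "Commit"]

event = SimpleNamespace(line="%git ch", symbol="ch", command="%git",
                        text_until_cursor="%git ch")
print(filter_matches(git_completer(event), event.symbol))
# -> ['checkout', 'cherry-pick']
```

A real custom completer can also raise `TryNext` to pass control to the next registered completer instead of returning results.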
2722 def completions(self, text: str, offset: int)->Iterator[Completion]:
2740 def completions(self, text: str, offset: int)->Iterator[Completion]:
2723 """
2741 """
2724 Returns an iterator over the possible completions
2742 Returns an iterator over the possible completions
2725
2743
2726 .. warning::
2744 .. warning::
2727
2745
2728 Unstable
2746 Unstable
2729
2747
2730 This function is unstable, API may change without warning.
2748 This function is unstable, API may change without warning.
2731 It will also raise unless used in the proper context manager.
2749 It will also raise unless used in the proper context manager.
2732
2750
2733 Parameters
2751 Parameters
2734 ----------
2752 ----------
2735 text : str
2753 text : str
2736 Full text of the current input, multi line string.
2754 Full text of the current input, multi line string.
2737 offset : int
2755 offset : int
2738 Integer representing the position of the cursor in ``text``. Offset
2756 Integer representing the position of the cursor in ``text``. Offset
2739 is 0-based indexed.
2757 is 0-based indexed.
2740
2758
2741 Yields
2759 Yields
2742 ------
2760 ------
2743 Completion
2761 Completion
2744
2762
2745 Notes
2763 Notes
2746 -----
2764 -----
2747 The cursor in a text can be seen either as being "in between"
2765 The cursor in a text can be seen either as being "in between"
2748 characters or "on" a character, depending on the interface visible to
2766 characters or "on" a character, depending on the interface visible to
2749 the user. For consistency, the cursor being "in between" characters X
2767 the user. For consistency, the cursor being "in between" characters X
2750 and Y is equivalent to the cursor being "on" character Y; that is to say,
2768 and Y is equivalent to the cursor being "on" character Y; that is to say,
2751 the character the cursor is on is considered as being after the cursor.
2769 the character the cursor is on is considered as being after the cursor.
2752
2770
2753 Combining characters may span more than one position in the
2771 Combining characters may span more than one position in the
2754 text.
2772 text.
2755
2773
2756 .. note::
2774 .. note::
2757
2775
2758 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2776 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2759 fake Completion token to distinguish completions returned by Jedi
2777 fake Completion token to distinguish completions returned by Jedi
2760 from the usual IPython completion.
2778 from the usual IPython completion.
2761
2779
2762 .. note::
2780 .. note::
2763
2781
2764 Completions are not completely deduplicated yet. If identical
2782 Completions are not completely deduplicated yet. If identical
2765 completions are coming from different sources this function does not
2783 completions are coming from different sources this function does not
2766 ensure that each completion object will only be present once.
2784 ensure that each completion object will only be present once.
2767 """
2785 """
2768 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2786 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2769 "It may change without warnings. "
2787 "It may change without warnings. "
2770 "Use in corresponding context manager.",
2788 "Use in corresponding context manager.",
2771 category=ProvisionalCompleterWarning, stacklevel=2)
2789 category=ProvisionalCompleterWarning, stacklevel=2)
2772
2790
2773 seen = set()
2791 seen = set()
2774 profiler:Optional[cProfile.Profile]
2792 profiler:Optional[cProfile.Profile]
2775 try:
2793 try:
2776 if self.profile_completions:
2794 if self.profile_completions:
2777 import cProfile
2795 import cProfile
2778 profiler = cProfile.Profile()
2796 profiler = cProfile.Profile()
2779 profiler.enable()
2797 profiler.enable()
2780 else:
2798 else:
2781 profiler = None
2799 profiler = None
2782
2800
2783 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2801 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2784 if c and (c in seen):
2802 if c and (c in seen):
2785 continue
2803 continue
2786 yield c
2804 yield c
2787 seen.add(c)
2805 seen.add(c)
2788 except KeyboardInterrupt:
2806 except KeyboardInterrupt:
2789 """if completions take too long and users send keyboard interrupt,
2807 """if completions take too long and users send keyboard interrupt,
2790 do not crash and return ASAP. """
2808 do not crash and return ASAP. """
2791 pass
2809 pass
2792 finally:
2810 finally:
2793 if profiler is not None:
2811 if profiler is not None:
2794 profiler.disable()
2812 profiler.disable()
2795 ensure_dir_exists(self.profiler_output_dir)
2813 ensure_dir_exists(self.profiler_output_dir)
2796 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2814 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2797 print("Writing profiler output to", output_path)
2815 print("Writing profiler output to", output_path)
2798 profiler.dump_stats(output_path)
2816 profiler.dump_stats(output_path)
2799
2817
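The `seen` set in `completions` above implements order-preserving de-duplication while yielding. The pattern in isolation, as a sketch:

```python
# Sketch of the de-duplication pattern in `completions` above: yield each
# item at most once, preserving first-seen order, without materializing
# the whole stream first.
def unique(iterable):
    seen = set()
    for item in iterable:
        if item in seen:
            continue
        yield item
        seen.add(item)

print(list(unique(["np", "numpy", "np", "os"])))  # -> ['np', 'numpy', 'os']
```

Because completions from Jedi and from IPython's own matchers arrive through the same generator, this catches exact duplicates across sources, though as the docstring notes, deduplication is not yet complete for non-identical `Completion` objects.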
2800 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2818 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2801 """
2819 """
2801 Core completion module. Same signature as :any:`completions`, with the
2819 Core completion module. Same signature as :any:`completions`, with the
2802 extra ``_timeout`` parameter (in seconds).
2820 extra ``_timeout`` parameter (in seconds).
2804
2822
2805 Computing jedi's completion ``.type`` can be quite expensive (it is a
2823 Computing jedi's completion ``.type`` can be quite expensive (it is a
2806 lazy property) and can require some warm-up, more warm-up than just
2824 lazy property) and can require some warm-up, more warm-up than just
2807 computing the ``name`` of a completion. The warm-up can be:
2825 computing the ``name`` of a completion. The warm-up can be:
2808
2826
2809 - Long warm-up the first time a module is encountered after
2827 - Long warm-up the first time a module is encountered after
2810 install/update: actually build parse/inference tree.
2828 install/update: actually build parse/inference tree.
2811
2829
2812 - First time the module is encountered in a session: load tree from
2830 - First time the module is encountered in a session: load tree from
2813 disk.
2831 disk.
2814
2832
2815 We don't want to block completions for tens of seconds so we give the
2833 We don't want to block completions for tens of seconds so we give the
2816 completer a "budget" of ``_timeout`` seconds per invocation to compute
2834 completer a "budget" of ``_timeout`` seconds per invocation to compute
2817 completions types, the completions that have not yet been computed will
2835 completions types, the completions that have not yet been computed will
2818 be marked as "unknown" an will have a chance to be computed next round
2836 be marked as "unknown" an will have a chance to be computed next round
2819 are things get cached.
2837 are things get cached.
2820
2838
2821 Keep in mind that Jedi is not the only thing processing the completions, so
2839 Keep in mind that Jedi is not the only thing processing the completions, so
2822 keep the timeout short-ish: if we take more than 0.3 seconds we still
2840 keep the timeout short-ish: if we take more than 0.3 seconds we still
2823 have lots of processing to do.
2841 have lots of processing to do.
2824
2842
2825 """
2843 """
2826 deadline = time.monotonic() + _timeout
2844 deadline = time.monotonic() + _timeout
2827
2845
2828 before = full_text[:offset]
2846 before = full_text[:offset]
2829 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2847 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2830
2848
2831 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2849 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2832
2850
2833 def is_non_jedi_result(
2851 def is_non_jedi_result(
2834 result: MatcherResult, identifier: str
2852 result: MatcherResult, identifier: str
2835 ) -> TypeGuard[SimpleMatcherResult]:
2853 ) -> TypeGuard[SimpleMatcherResult]:
2836 return identifier != jedi_matcher_id
2854 return identifier != jedi_matcher_id
2837
2855
2838 results = self._complete(
2856 results = self._complete(
2839 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2857 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2840 )
2858 )
2841
2859
2842 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2860 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2843 identifier: result
2861 identifier: result
2844 for identifier, result in results.items()
2862 for identifier, result in results.items()
2845 if is_non_jedi_result(result, identifier)
2863 if is_non_jedi_result(result, identifier)
2846 }
2864 }
2847
2865
2848 jedi_matches = (
2866 jedi_matches = (
2849 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2867 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2850 if jedi_matcher_id in results
2868 if jedi_matcher_id in results
2851 else ()
2869 else ()
2852 )
2870 )
2853
2871
2854 iter_jm = iter(jedi_matches)
2872 iter_jm = iter(jedi_matches)
2855 if _timeout:
2873 if _timeout:
2856 for jm in iter_jm:
2874 for jm in iter_jm:
2857 try:
2875 try:
2858 type_ = jm.type
2876 type_ = jm.type
2859 except Exception:
2877 except Exception:
2860 if self.debug:
2878 if self.debug:
2861 print("Error in Jedi getting type of ", jm)
2879 print("Error in Jedi getting type of ", jm)
2862 type_ = None
2880 type_ = None
2863 delta = len(jm.name_with_symbols) - len(jm.complete)
2881 delta = len(jm.name_with_symbols) - len(jm.complete)
2864 if type_ == 'function':
2882 if type_ == 'function':
2865 signature = _make_signature(jm)
2883 signature = _make_signature(jm)
2866 else:
2884 else:
2867 signature = ''
2885 signature = ''
2868 yield Completion(start=offset - delta,
2886 yield Completion(start=offset - delta,
2869 end=offset,
2887 end=offset,
2870 text=jm.name_with_symbols,
2888 text=jm.name_with_symbols,
2871 type=type_,
2889 type=type_,
2872 signature=signature,
2890 signature=signature,
2873 _origin='jedi')
2891 _origin='jedi')
2874
2892
2875 if time.monotonic() > deadline:
2893 if time.monotonic() > deadline:
2876 break
2894 break
2877
2895
2878 for jm in iter_jm:
2896 for jm in iter_jm:
2879 delta = len(jm.name_with_symbols) - len(jm.complete)
2897 delta = len(jm.name_with_symbols) - len(jm.complete)
2880 yield Completion(
2898 yield Completion(
2881 start=offset - delta,
2899 start=offset - delta,
2882 end=offset,
2900 end=offset,
2883 text=jm.name_with_symbols,
2901 text=jm.name_with_symbols,
2884 type=_UNKNOWN_TYPE, # don't compute type for speed
2902 type=_UNKNOWN_TYPE, # don't compute type for speed
2885 _origin="jedi",
2903 _origin="jedi",
2886 signature="",
2904 signature="",
2887 )
2905 )
2888
2906
2889 # TODO:
2907 # TODO:
2890 # Suppress this, right now just for debug.
2908 # Suppress this, right now just for debug.
2891 if jedi_matches and non_jedi_results and self.debug:
2909 if jedi_matches and non_jedi_results and self.debug:
2892 some_start_offset = before.rfind(
2910 some_start_offset = before.rfind(
2893 next(iter(non_jedi_results.values()))["matched_fragment"]
2911 next(iter(non_jedi_results.values()))["matched_fragment"]
2894 )
2912 )
2895 yield Completion(
2913 yield Completion(
2896 start=some_start_offset,
2914 start=some_start_offset,
2897 end=offset,
2915 end=offset,
2898 text="--jedi/ipython--",
2916 text="--jedi/ipython--",
2899 _origin="debug",
2917 _origin="debug",
2900 type="none",
2918 type="none",
2901 signature="",
2919 signature="",
2902 )
2920 )
2903
2921
2904 ordered: List[Completion] = []
2922 ordered: List[Completion] = []
2905 sortable: List[Completion] = []
2923 sortable: List[Completion] = []
2906
2924
2907 for origin, result in non_jedi_results.items():
2925 for origin, result in non_jedi_results.items():
2908 matched_text = result["matched_fragment"]
2926 matched_text = result["matched_fragment"]
2909 start_offset = before.rfind(matched_text)
2927 start_offset = before.rfind(matched_text)
2910 is_ordered = result.get("ordered", False)
2928 is_ordered = result.get("ordered", False)
2911 container = ordered if is_ordered else sortable
2929 container = ordered if is_ordered else sortable
2912
2930
2913 # I'm unsure if this is always true, so let's assert and see if it
2931 # I'm unsure if this is always true, so let's assert and see if it
2914 # crashes
2932 # crashes
2915 assert before.endswith(matched_text)
2933 assert before.endswith(matched_text)
2916
2934
2917 for simple_completion in result["completions"]:
2935 for simple_completion in result["completions"]:
2918 completion = Completion(
2936 completion = Completion(
2919 start=start_offset,
2937 start=start_offset,
2920 end=offset,
2938 end=offset,
2921 text=simple_completion.text,
2939 text=simple_completion.text,
2922 _origin=origin,
2940 _origin=origin,
2923 signature="",
2941 signature="",
2924 type=simple_completion.type or _UNKNOWN_TYPE,
2942 type=simple_completion.type or _UNKNOWN_TYPE,
2925 )
2943 )
2926 container.append(completion)
2944 container.append(completion)
2927
2945
2928 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2946 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2929 :MATCHES_LIMIT
2947 :MATCHES_LIMIT
2930 ]
2948 ]
2931
2949
2932 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2950 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2933 """Find completions for the given text and line context.
2951 """Find completions for the given text and line context.
2934
2952
2935 Note that both the text and the line_buffer are optional, but at least
2953 Note that both the text and the line_buffer are optional, but at least
2936 one of them must be given.
2954 one of them must be given.
2937
2955
2938 Parameters
2956 Parameters
2939 ----------
2957 ----------
2940 text : string, optional
2958 text : string, optional
2941 Text to perform the completion on. If not given, the line buffer
2959 Text to perform the completion on. If not given, the line buffer
2942 is split using the instance's CompletionSplitter object.
2960 is split using the instance's CompletionSplitter object.
2943 line_buffer : string, optional
2961 line_buffer : string, optional
2944 If not given, the completer attempts to obtain the current line
2962 If not given, the completer attempts to obtain the current line
2945 buffer via readline. This keyword allows clients which are
2963 buffer via readline. This keyword allows clients which are
2946 requesting text completions in non-readline contexts to inform
2964 requesting text completions in non-readline contexts to inform
2947 the completer of the entire text.
2965 the completer of the entire text.
2948 cursor_pos : int, optional
2966 cursor_pos : int, optional
2949 Index of the cursor in the full line buffer. Should be provided by
2967 Index of the cursor in the full line buffer. Should be provided by
2950 remote frontends where kernel has no access to frontend state.
2968 remote frontends where kernel has no access to frontend state.
2951
2969
2952 Returns
2970 Returns
2953 -------
2971 -------
2954 Tuple of two items:
2972 Tuple of two items:
2955 text : str
2973 text : str
2956 Text that was actually used in the completion.
2974 Text that was actually used in the completion.
2957 matches : list
2975 matches : list
2958 A list of completion matches.
2976 A list of completion matches.
2959
2977
2960 Notes
2978 Notes
2961 -----
2979 -----
2962 This API is likely to be deprecated and replaced by
2980 This API is likely to be deprecated and replaced by
2963 :any:`IPCompleter.completions` in the future.
2981 :any:`IPCompleter.completions` in the future.
2964
2982
2965 """
2983 """
2966 warnings.warn('`Completer.complete` is pending deprecation since '
2984 warnings.warn('`Completer.complete` is pending deprecation since '
2967 'IPython 6.0 and will be replaced by `Completer.completions`.',
2985 'IPython 6.0 and will be replaced by `Completer.completions`.',
2968 PendingDeprecationWarning)
2986 PendingDeprecationWarning)
2969 # potential todo: fold the 3rd throw-away argument of _complete
2987 # potential todo: fold the 3rd throw-away argument of _complete
2970 # into the first two.
2988 # into the first two.
2971 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2989 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2972 # TODO: should we deprecate now, or does it stay?
2990 # TODO: should we deprecate now, or does it stay?
2973
2991
2974 results = self._complete(
2992 results = self._complete(
2975 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2993 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2976 )
2994 )
2977
2995
2978 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2996 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2979
2997
2980 return self._arrange_and_extract(
2998 return self._arrange_and_extract(
2981 results,
2999 results,
2982 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3000 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
2983 skip_matchers={jedi_matcher_id},
3001 skip_matchers={jedi_matcher_id},
2984 # this API does not support different start/end positions (fragments of token).
3002 # this API does not support different start/end positions (fragments of token).
2985 abort_if_offset_changes=True,
3003 abort_if_offset_changes=True,
2986 )
3004 )
2987
3005
2988 def _arrange_and_extract(
3006 def _arrange_and_extract(
2989 self,
3007 self,
2990 results: Dict[str, MatcherResult],
3008 results: Dict[str, MatcherResult],
2991 skip_matchers: Set[str],
3009 skip_matchers: Set[str],
2992 abort_if_offset_changes: bool,
3010 abort_if_offset_changes: bool,
2993 ):
3011 ):
2994
3012
2995 sortable: List[AnyMatcherCompletion] = []
3013 sortable: List[AnyMatcherCompletion] = []
2996 ordered: List[AnyMatcherCompletion] = []
3014 ordered: List[AnyMatcherCompletion] = []
2997 most_recent_fragment = None
3015 most_recent_fragment = None
2998 for identifier, result in results.items():
3016 for identifier, result in results.items():
2999 if identifier in skip_matchers:
3017 if identifier in skip_matchers:
3000 continue
3018 continue
3001 if not result["completions"]:
3019 if not result["completions"]:
3002 continue
3020 continue
3003 if not most_recent_fragment:
3021 if not most_recent_fragment:
3004 most_recent_fragment = result["matched_fragment"]
3022 most_recent_fragment = result["matched_fragment"]
3005 if (
3023 if (
3006 abort_if_offset_changes
3024 abort_if_offset_changes
3007 and result["matched_fragment"] != most_recent_fragment
3025 and result["matched_fragment"] != most_recent_fragment
3008 ):
3026 ):
3009 break
3027 break
3010 if result.get("ordered", False):
3028 if result.get("ordered", False):
3011 ordered.extend(result["completions"])
3029 ordered.extend(result["completions"])
3012 else:
3030 else:
3013 sortable.extend(result["completions"])
3031 sortable.extend(result["completions"])
3014
3032
3015 if not most_recent_fragment:
3033 if not most_recent_fragment:
3016 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3034 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3017
3035
3018 return most_recent_fragment, [
3036 return most_recent_fragment, [
3019 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3037 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3020 ]
3038 ]
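The ordered-first/sorted-rest merge with first-wins deduplication implemented above can be sketched standalone. This is a simplified illustration with plain dicts and strings standing in for `MatcherResult` and `Completion` objects, not the actual IPython API:

```python
# Simplified sketch: "ordered" matcher results keep their order and come
# first; the remaining completions are sorted; duplicates (by text) are
# then dropped, keeping the first occurrence.
def arrange(results):
    ordered, sortable = [], []
    for result in results:
        target = ordered if result.get("ordered", False) else sortable
        target.extend(result["completions"])
    merged = ordered + sorted(sortable)
    seen, out = set(), []
    for text in merged:
        if text not in seen:
            seen.add(text)
            out.append(text)
    return out

matches = arrange(
    [
        {"ordered": True, "completions": ["%time", "%timeit"]},
        {"completions": ["timer", "%time"]},
    ]
)
# -> ['%time', '%timeit', 'timer']
```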
3021
3039
3022 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3040 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3023 full_text=None) -> _CompleteResult:
3041 full_text=None) -> _CompleteResult:
3024 """
3042 """
3025 Like complete but can also return raw jedi completions as well as the
3043 Like complete but can also return raw jedi completions as well as the
3026 origin of the completion text. This could (and should) be made much
3044 origin of the completion text. This could (and should) be made much
3027 cleaner but that will be simpler once we drop the old (and stateful)
3045 cleaner but that will be simpler once we drop the old (and stateful)
3028 :any:`complete` API.
3046 :any:`complete` API.
3029
3047
3030 With the current provisional API, cursor_pos acts (depending on the
3048 With the current provisional API, cursor_pos acts (depending on the
3031 caller) as either the offset in ``text`` or ``line_buffer``, or as the
3049 caller) as either the offset in ``text`` or ``line_buffer``, or as the
3032 ``column`` when passing multiline strings; this could/should be renamed
3050 ``column`` when passing multiline strings; this could/should be renamed
3033 but that would add extra noise.
3051 but that would add extra noise.
3034
3052
3035 Parameters
3053 Parameters
3036 ----------
3054 ----------
3037 cursor_line
3055 cursor_line
3038 Index of the line the cursor is on. 0 indexed.
3056 Index of the line the cursor is on. 0 indexed.
3039 cursor_pos
3057 cursor_pos
3040 Position of the cursor in the current line/line_buffer/text. 0
3058 Position of the cursor in the current line/line_buffer/text. 0
3041 indexed.
3059 indexed.
3042 line_buffer : optional, str
3060 line_buffer : optional, str
3043 The current line the cursor is in; this is mostly for legacy
3061 The current line the cursor is in; this is mostly for legacy
3044 reasons, as readline could only give us the single current line.
3062 reasons, as readline could only give us the single current line.
3045 Prefer `full_text`.
3063 Prefer `full_text`.
3046 text : str
3064 text : str
3047 The current "token" the cursor is in, mostly also for historical
3065 The current "token" the cursor is in, mostly also for historical
3048 reasons, as the completer would trigger only after the current line
3066 reasons, as the completer would trigger only after the current line
3049 was parsed.
3067 was parsed.
3050 full_text : str
3068 full_text : str
3051 Full text of the current cell.
3069 Full text of the current cell.
3052
3070
3053 Returns
3071 Returns
3054 -------
3072 -------
3055 An ordered dictionary where keys are identifiers of completion
3073 An ordered dictionary where keys are identifiers of completion
3056 matchers and values are ``MatcherResult``s.
3074 matchers and values are ``MatcherResult``s.
3057 """
3075 """
3058
3076
3059 # if the cursor position isn't given, the only sane assumption we can
3077 # if the cursor position isn't given, the only sane assumption we can
3060 # make is that it's at the end of the line (the common case)
3078 # make is that it's at the end of the line (the common case)
3061 if cursor_pos is None:
3079 if cursor_pos is None:
3062 cursor_pos = len(line_buffer) if text is None else len(text)
3080 cursor_pos = len(line_buffer) if text is None else len(text)
3063
3081
3064 if self.use_main_ns:
3082 if self.use_main_ns:
3065 self.namespace = __main__.__dict__
3083 self.namespace = __main__.__dict__
3066
3084
3067 # if text is either None or an empty string, rely on the line buffer
3085 # if text is either None or an empty string, rely on the line buffer
3068 if (not line_buffer) and full_text:
3086 if (not line_buffer) and full_text:
3069 line_buffer = full_text.split('\n')[cursor_line]
3087 line_buffer = full_text.split('\n')[cursor_line]
3070 if not text: # issue #11508: check line_buffer before calling split_line
3088 if not text: # issue #11508: check line_buffer before calling split_line
3071 text = (
3089 text = (
3072 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3090 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3073 )
3091 )
3074
3092
3075 # If no line buffer is given, assume the input text is all there was
3093 # If no line buffer is given, assume the input text is all there was
3076 if line_buffer is None:
3094 if line_buffer is None:
3077 line_buffer = text
3095 line_buffer = text
3078
3096
3079 # deprecated - do not use `line_buffer` in new code.
3097 # deprecated - do not use `line_buffer` in new code.
3080 self.line_buffer = line_buffer
3098 self.line_buffer = line_buffer
3081 self.text_until_cursor = self.line_buffer[:cursor_pos]
3099 self.text_until_cursor = self.line_buffer[:cursor_pos]
3082
3100
3083 if not full_text:
3101 if not full_text:
3084 full_text = line_buffer
3102 full_text = line_buffer
3085
3103
3086 context = CompletionContext(
3104 context = CompletionContext(
3087 full_text=full_text,
3105 full_text=full_text,
3088 cursor_position=cursor_pos,
3106 cursor_position=cursor_pos,
3089 cursor_line=cursor_line,
3107 cursor_line=cursor_line,
3090 token=text,
3108 token=text,
3091 limit=MATCHES_LIMIT,
3109 limit=MATCHES_LIMIT,
3092 )
3110 )
3093
3111
3094 # Start with a clean slate of completions
3112 # Start with a clean slate of completions
3095 results: Dict[str, MatcherResult] = {}
3113 results: Dict[str, MatcherResult] = {}
3096
3114
3097 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3115 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3098
3116
3099 suppressed_matchers: Set[str] = set()
3117 suppressed_matchers: Set[str] = set()
3100
3118
3101 matchers = {
3119 matchers = {
3102 _get_matcher_id(matcher): matcher
3120 _get_matcher_id(matcher): matcher
3103 for matcher in sorted(
3121 for matcher in sorted(
3104 self.matchers, key=_get_matcher_priority, reverse=True
3122 self.matchers, key=_get_matcher_priority, reverse=True
3105 )
3123 )
3106 }
3124 }
3107
3125
3108 for matcher_id, matcher in matchers.items():
3126 for matcher_id, matcher in matchers.items():
3109 matcher_id = _get_matcher_id(matcher)
3127 matcher_id = _get_matcher_id(matcher)
3110
3128
3111 if matcher_id in self.disable_matchers:
3129 if matcher_id in self.disable_matchers:
3112 continue
3130 continue
3113
3131
3114 if matcher_id in results:
3132 if matcher_id in results:
3115 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3133 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3116
3134
3117 if matcher_id in suppressed_matchers:
3135 if matcher_id in suppressed_matchers:
3118 continue
3136 continue
3119
3137
3120 result: MatcherResult
3138 result: MatcherResult
3121 try:
3139 try:
3122 if _is_matcher_v1(matcher):
3140 if _is_matcher_v1(matcher):
3123 result = _convert_matcher_v1_result_to_v2(
3141 result = _convert_matcher_v1_result_to_v2(
3124 matcher(text), type=_UNKNOWN_TYPE
3142 matcher(text), type=_UNKNOWN_TYPE
3125 )
3143 )
3126 elif _is_matcher_v2(matcher):
3144 elif _is_matcher_v2(matcher):
3127 result = matcher(context)
3145 result = matcher(context)
3128 else:
3146 else:
3129 api_version = _get_matcher_api_version(matcher)
3147 api_version = _get_matcher_api_version(matcher)
3130 raise ValueError(f"Unsupported API version {api_version}")
3148 raise ValueError(f"Unsupported API version {api_version}")
3131 except:
3149 except:
3132 # Show the ugly traceback if the matcher causes an
3150 # Show the ugly traceback if the matcher causes an
3133 # exception, but do NOT crash the kernel!
3151 # exception, but do NOT crash the kernel!
3134 sys.excepthook(*sys.exc_info())
3152 sys.excepthook(*sys.exc_info())
3135 continue
3153 continue
3136
3154
3137 # set default value for matched fragment if suffix was not selected.
3155 # set default value for matched fragment if suffix was not selected.
3138 result["matched_fragment"] = result.get("matched_fragment", context.token)
3156 result["matched_fragment"] = result.get("matched_fragment", context.token)
3139
3157
3140 if not suppressed_matchers:
3158 if not suppressed_matchers:
3141 suppression_recommended: Union[bool, Set[str]] = result.get(
3159 suppression_recommended: Union[bool, Set[str]] = result.get(
3142 "suppress", False
3160 "suppress", False
3143 )
3161 )
3144
3162
3145 suppression_config = (
3163 suppression_config = (
3146 self.suppress_competing_matchers.get(matcher_id, None)
3164 self.suppress_competing_matchers.get(matcher_id, None)
3147 if isinstance(self.suppress_competing_matchers, dict)
3165 if isinstance(self.suppress_competing_matchers, dict)
3148 else self.suppress_competing_matchers
3166 else self.suppress_competing_matchers
3149 )
3167 )
3150 should_suppress = (
3168 should_suppress = (
3151 (suppression_config is True)
3169 (suppression_config is True)
3152 or (suppression_recommended and (suppression_config is not False))
3170 or (suppression_recommended and (suppression_config is not False))
3153 ) and has_any_completions(result)
3171 ) and has_any_completions(result)
3154
3172
3155 if should_suppress:
3173 if should_suppress:
3156 suppression_exceptions: Set[str] = result.get(
3174 suppression_exceptions: Set[str] = result.get(
3157 "do_not_suppress", set()
3175 "do_not_suppress", set()
3158 )
3176 )
3159 if isinstance(suppression_recommended, Iterable):
3177 if isinstance(suppression_recommended, Iterable):
3160 to_suppress = set(suppression_recommended)
3178 to_suppress = set(suppression_recommended)
3161 else:
3179 else:
3162 to_suppress = set(matchers)
3180 to_suppress = set(matchers)
3163 suppressed_matchers = to_suppress - suppression_exceptions
3181 suppressed_matchers = to_suppress - suppression_exceptions
3164
3182
3165 new_results = {}
3183 new_results = {}
3166 for previous_matcher_id, previous_result in results.items():
3184 for previous_matcher_id, previous_result in results.items():
3167 if previous_matcher_id not in suppressed_matchers:
3185 if previous_matcher_id not in suppressed_matchers:
3168 new_results[previous_matcher_id] = previous_result
3186 new_results[previous_matcher_id] = previous_result
3169 results = new_results
3187 results = new_results
3170
3188
3171 results[matcher_id] = result
3189 results[matcher_id] = result
3172
3190
3173 _, matches = self._arrange_and_extract(
3191 _, matches = self._arrange_and_extract(
3174 results,
3192 results,
3175 # TODO: Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3193 # TODO: Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3176 # if it was an omission, we can remove the filtering step, otherwise remove this comment.
3194 # if it was an omission, we can remove the filtering step, otherwise remove this comment.
3177 skip_matchers={jedi_matcher_id},
3195 skip_matchers={jedi_matcher_id},
3178 abort_if_offset_changes=False,
3196 abort_if_offset_changes=False,
3179 )
3197 )
3180
3198
3181 # populate legacy stateful API
3199 # populate legacy stateful API
3182 self.matches = matches
3200 self.matches = matches
3183
3201
3184 return results
3202 return results
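The suppression decision inside the matcher loop above reduces to a small predicate. A hedged standalone sketch follows, using booleans only and ignoring the per-matcher dict form of `suppress_competing_matchers`:

```python
# suppression_config: user override (True/False) or None for "no opinion";
# recommended: the matcher's own "suppress" hint. A matcher only suppresses
# others if it actually produced completions.
def should_suppress(recommended, config, has_completions):
    return (
        (config is True) or (recommended and config is not False)
    ) and has_completions

assert should_suppress(True, None, True)        # hint honoured by default
assert not should_suppress(True, False, True)   # user override wins
assert should_suppress(False, True, True)       # user can force suppression
assert not should_suppress(True, None, False)   # no completions, no suppression
```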
3185
3203
3186 @staticmethod
3204 @staticmethod
3187 def _deduplicate(
3205 def _deduplicate(
3188 matches: Sequence[AnyCompletion],
3206 matches: Sequence[AnyCompletion],
3189 ) -> Iterable[AnyCompletion]:
3207 ) -> Iterable[AnyCompletion]:
3190 filtered_matches: Dict[str, AnyCompletion] = {}
3208 filtered_matches: Dict[str, AnyCompletion] = {}
3191 for match in matches:
3209 for match in matches:
3192 text = match.text
3210 text = match.text
3193 if (
3211 if (
3194 text not in filtered_matches
3212 text not in filtered_matches
3195 or filtered_matches[text].type == _UNKNOWN_TYPE
3213 or filtered_matches[text].type == _UNKNOWN_TYPE
3196 ):
3214 ):
3197 filtered_matches[text] = match
3215 filtered_matches[text] = match
3198
3216
3199 return filtered_matches.values()
3217 return filtered_matches.values()
3200
3218
3201 @staticmethod
3219 @staticmethod
3202 def _sort(matches: Sequence[AnyCompletion]):
3220 def _sort(matches: Sequence[AnyCompletion]):
3203 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3221 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3204
3222
3205 @context_matcher()
3223 @context_matcher()
3206 def fwd_unicode_matcher(self, context: CompletionContext):
3224 def fwd_unicode_matcher(self, context: CompletionContext):
3207 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3225 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3208 # TODO: use `context.limit` to terminate early once we matched the maximum
3226 # TODO: use `context.limit` to terminate early once we matched the maximum
3209 # number that will be used downstream; can be added as an optional to
3227 # number that will be used downstream; can be added as an optional to
3210 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3228 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3211 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3229 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3212 return _convert_matcher_v1_result_to_v2(
3230 return _convert_matcher_v1_result_to_v2(
3213 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3231 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3214 )
3232 )
3215
3233
3216 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3234 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3217 """
3235 """
3218 Forward match a string starting with a backslash with a list of
3236 Forward match a string starting with a backslash with a list of
3219 potential Unicode completions.
3237 potential Unicode completions.
3220
3238
3221 Will compute list of Unicode character names on first call and cache it.
3239 Will compute list of Unicode character names on first call and cache it.
3222
3240
3223 .. deprecated:: 8.6
3241 .. deprecated:: 8.6
3224 You can use :meth:`fwd_unicode_matcher` instead.
3242 You can use :meth:`fwd_unicode_matcher` instead.
3225
3243
3226 Returns
3244 Returns
3227 -------
3245 -------
3228 A tuple with:
3246 A tuple with:
3229 - matched text (empty if no matches)
3247 - matched text (empty if no matches)
3230 - list of potential completions (empty tuple if none)
3248 - list of potential completions (empty tuple if none)
3231 """
3249 """
3232 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call.
3250 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call.
3233 # We could do a faster match using a Trie.
3251 # We could do a faster match using a Trie.
3234
3252
3235 # Using pygtrie the following seems to work:
3253 # Using pygtrie the following seems to work:
3236
3254
3237 # s = PrefixSet()
3255 # s = PrefixSet()
3238
3256
3239 # for c in range(0,0x10FFFF + 1):
3257 # for c in range(0,0x10FFFF + 1):
3240 # try:
3258 # try:
3241 # s.add(unicodedata.name(chr(c)))
3259 # s.add(unicodedata.name(chr(c)))
3242 # except ValueError:
3260 # except ValueError:
3243 # pass
3261 # pass
3244 # [''.join(k) for k in s.iter(prefix)]
3262 # [''.join(k) for k in s.iter(prefix)]
3245
3263
3246 # But this would need to be timed, and it adds an extra dependency.
3264 # But this would need to be timed, and it adds an extra dependency.
3247
3265
3248 slashpos = text.rfind('\\')
3266 slashpos = text.rfind('\\')
3249 # if text contains a backslash
3267 # if text contains a backslash
3250 if slashpos > -1:
3268 if slashpos > -1:
3251 # PERF: It's important that we don't access self._unicode_names
3269 # PERF: It's important that we don't access self._unicode_names
3252 # until we're inside this if-block. _unicode_names is lazily
3270 # until we're inside this if-block. _unicode_names is lazily
3253 # initialized, and it takes a user-noticeable amount of time to
3271 # initialized, and it takes a user-noticeable amount of time to
3254 # initialize it, so we don't want to initialize it unless we're
3272 # initialize it, so we don't want to initialize it unless we're
3255 # actually going to use it.
3273 # actually going to use it.
3256 s = text[slashpos + 1 :]
3274 s = text[slashpos + 1 :]
3257 sup = s.upper()
3275 sup = s.upper()
3258 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3276 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3259 if candidates:
3277 if candidates:
3260 return s, candidates
3278 return s, candidates
3261 candidates = [x for x in self.unicode_names if sup in x]
3279 candidates = [x for x in self.unicode_names if sup in x]
3262 if candidates:
3280 if candidates:
3263 return s, candidates
3281 return s, candidates
3264 splitsup = sup.split(" ")
3282 splitsup = sup.split(" ")
3265 candidates = [
3283 candidates = [
3266 x for x in self.unicode_names if all(u in x for u in splitsup)
3284 x for x in self.unicode_names if all(u in x for u in splitsup)
3267 ]
3285 ]
3268 if candidates:
3286 if candidates:
3269 return s, candidates
3287 return s, candidates
3270
3288
3271 return "", ()
3289 return "", ()
3272
3290
3273 # if text does not contain a backslash
3291 # if text does not contain a backslash
3274 else:
3292 else:
3275 return '', ()
3293 return '', ()
3276
3294
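The three-stage match strategy above (prefix, then substring, then all-words) can be sketched standalone on a small name list. This is an illustrative re-implementation, not the cached `unicode_names` property:

```python
import unicodedata

# Names for the ASCII uppercase letters only, to keep the example fast.
names = [unicodedata.name(chr(c)) for c in range(0x41, 0x5B)]

def fwd_match(text, names):
    slashpos = text.rfind("\\")
    if slashpos == -1:  # no backslash: nothing to complete
        return "", ()
    s = text[slashpos + 1 :]
    sup = s.upper()
    # prefix match, then substring match, then all-words match
    for candidates in (
        [n for n in names if n.startswith(sup)],
        [n for n in names if sup in n],
        [n for n in names if all(word in n for word in sup.split(" "))],
    ):
        if candidates:
            return s, candidates
    return "", ()

fragment, matches = fwd_match("\\latin capital letter a", names)
# -> ('latin capital letter a', ['LATIN CAPITAL LETTER A'])
```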
3277 @property
3295 @property
3278 def unicode_names(self) -> List[str]:
3296 def unicode_names(self) -> List[str]:
3279 """List of names of unicode code points that can be completed.
3297 """List of names of unicode code points that can be completed.
3280
3298
3281 The list is lazily initialized on first access.
3299 The list is lazily initialized on first access.
3282 """
3300 """
3283 if self._unicode_names is None:
3301 if self._unicode_names is None:
3290 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3308 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3291
3309
3292 return self._unicode_names
3310 return self._unicode_names
3293
3311
3294 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3312 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3295 names = []
3313 names = []
3296 for start,stop in ranges:
3314 for start,stop in ranges:
3297 for c in range(start, stop) :
3315 for c in range(start, stop) :
3298 try:
3316 try:
3299 names.append(unicodedata.name(chr(c)))
3317 names.append(unicodedata.name(chr(c)))
3300 except ValueError:
3318 except ValueError:
3301 pass
3319 pass
3302 return names
3320 return names
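The helper above can be exercised standalone over a tiny range, for example the Basic Latin uppercase block (an illustrative range, not one of the actual `_UNICODE_RANGES`):

```python
import unicodedata

def unicode_name_compute(ranges):
    # same shape as _unicode_name_compute above: collect names, skipping
    # code points that have none (unicodedata.name raises ValueError)
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:
                pass
    return names

names = unicode_name_compute([(0x41, 0x5B)])  # A-Z
# 26 names, starting with 'LATIN CAPITAL LETTER A'
```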
@@ -1,539 +1,573 @@
1 from typing import (
1 from typing import (
2 Any,
2 Any,
3 Callable,
3 Callable,
4 Set,
4 Set,
5 Tuple,
5 Tuple,
6 NamedTuple,
6 NamedTuple,
7 Type,
7 Type,
8 Literal,
8 Literal,
9 Union,
9 Union,
10 TYPE_CHECKING,
10 TYPE_CHECKING,
11 )
11 )
12 import builtins
12 import builtins
13 import collections
13 import collections
14 import sys
14 import sys
15 import ast
15 import ast
16 from functools import cached_property
16 from functools import cached_property
17 from dataclasses import dataclass, field
17 from dataclasses import dataclass, field
18
18
19 from IPython.utils.docs import GENERATING_DOCUMENTATION
19 from IPython.utils.docs import GENERATING_DOCUMENTATION
20 from IPython.utils.decorators import undoc
20
21
21
22
22 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
23 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
23 from typing_extensions import Protocol
24 from typing_extensions import Protocol
24 else:
25 else:
25 # do not require on runtime
26 # do not require on runtime
26 Protocol = object # requires Python >=3.8
27 Protocol = object # requires Python >=3.8
27
28
28
29
30 @undoc
29 class HasGetItem(Protocol):
31 class HasGetItem(Protocol):
30 def __getitem__(self, key) -> None:
32 def __getitem__(self, key) -> None:
31 ...
33 ...
32
34
33
35
36 @undoc
34 class InstancesHaveGetItem(Protocol):
37 class InstancesHaveGetItem(Protocol):
35 def __call__(self, *args, **kwargs) -> HasGetItem:
38 def __call__(self, *args, **kwargs) -> HasGetItem:
36 ...
39 ...
37
40
38
41
42 @undoc
39 class HasGetAttr(Protocol):
43 class HasGetAttr(Protocol):
40 def __getattr__(self, key) -> None:
44 def __getattr__(self, key) -> None:
41 ...
45 ...
42
46
43
47
48 @undoc
44 class DoesNotHaveGetAttr(Protocol):
49 class DoesNotHaveGetAttr(Protocol):
45 pass
50 pass
46
51
47
52
48 # By default `__getattr__` is not explicitly implemented on most objects
53 # By default `__getattr__` is not explicitly implemented on most objects
49 MayHaveGetattr = Union[HasGetAttr, DoesNotHaveGetAttr]
54 MayHaveGetattr = Union[HasGetAttr, DoesNotHaveGetAttr]
50
55
51
56
52 def unbind_method(func: Callable) -> Union[Callable, None]:
57 def _unbind_method(func: Callable) -> Union[Callable, None]:
53 """Get unbound method for given bound method.
58 """Get unbound method for given bound method.
54
59
55 Returns None if cannot get unbound method."""
60 Returns None if cannot get unbound method."""
56 owner = getattr(func, "__self__", None)
61 owner = getattr(func, "__self__", None)
57 owner_class = type(owner)
62 owner_class = type(owner)
58 name = getattr(func, "__name__", None)
63 name = getattr(func, "__name__", None)
59 instance_dict_overrides = getattr(owner, "__dict__", None)
64 instance_dict_overrides = getattr(owner, "__dict__", None)
60 if (
65 if (
61 owner is not None
66 owner is not None
62 and name
67 and name
63 and (
68 and (
64 not instance_dict_overrides
69 not instance_dict_overrides
65 or (instance_dict_overrides and name not in instance_dict_overrides)
70 or (instance_dict_overrides and name not in instance_dict_overrides)
66 )
71 )
67 ):
72 ):
68 return getattr(owner_class, name)
73 return getattr(owner_class, name)
69 return None
74 return None
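The unbinding rule can be illustrated standalone. A hedged re-sketch follows, with a hypothetical `Vec` class, using the same logic as the helper above:

```python
# Recover the class-level function behind a bound method, unless the
# instance shadows the name in its own __dict__.
def unbind(func):
    owner = getattr(func, "__self__", None)
    name = getattr(func, "__name__", None)
    overrides = getattr(owner, "__dict__", None)
    if owner is not None and name and (not overrides or name not in overrides):
        return getattr(type(owner), name)
    return None

class Vec:
    def norm(self):
        return 0.0

v = Vec()
assert unbind(v.norm) is Vec.norm  # bound method maps back to Vec.norm
v.norm = lambda: 1.0               # instance-level shadow...
assert unbind(v.norm) is None      # ...is no longer a bound method
```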
70
75
71
76
77 @undoc
72 @dataclass
78 @dataclass
73 class EvaluationPolicy:
79 class EvaluationPolicy:
80 """Definition of evaluation policy."""
81
74 allow_locals_access: bool = False
82 allow_locals_access: bool = False
75 allow_globals_access: bool = False
83 allow_globals_access: bool = False
76 allow_item_access: bool = False
84 allow_item_access: bool = False
77 allow_attr_access: bool = False
85 allow_attr_access: bool = False
78 allow_builtins_access: bool = False
86 allow_builtins_access: bool = False
79 allow_any_calls: bool = False
87 allow_any_calls: bool = False
80 allowed_calls: Set[Callable] = field(default_factory=set)
88 allowed_calls: Set[Callable] = field(default_factory=set)
81
89
82 def can_get_item(self, value, item):
90 def can_get_item(self, value, item):
83 return self.allow_item_access
91 return self.allow_item_access
84
92
85 def can_get_attr(self, value, attr):
93 def can_get_attr(self, value, attr):
86 return self.allow_attr_access
94 return self.allow_attr_access
87
95
88 def can_call(self, func):
96 def can_call(self, func):
89 if self.allow_any_calls:
97 if self.allow_any_calls:
90 return True
98 return True
91
99
92 if func in self.allowed_calls:
100 if func in self.allowed_calls:
93 return True
101 return True
94
102
95 owner_method = unbind_method(func)
103 owner_method = _unbind_method(func)
96 if owner_method and owner_method in self.allowed_calls:
104 if owner_method and owner_method in self.allowed_calls:
97 return True
105 return True
98
106
99
107
100 def has_original_dunder_external(
108 def _has_original_dunder_external(
101 value,
109 value,
102 module_name,
110 module_name,
103 access_path,
111 access_path,
104 method_name,
112 method_name,
105 ):
113 ):
106 try:
114 try:
107 if module_name not in sys.modules:
115 if module_name not in sys.modules:
108 return False
116 return False
109 member_type = sys.modules[module_name]
117 member_type = sys.modules[module_name]
110 for attr in access_path:
118 for attr in access_path:
111 member_type = getattr(member_type, attr)
119 member_type = getattr(member_type, attr)
112 value_type = type(value)
120 value_type = type(value)
113 if value_type == member_type:
121 if value_type == member_type:
114 return True
122 return True
115 if isinstance(value, member_type):
123 if isinstance(value, member_type):
116 method = getattr(value_type, method_name, None)
124 method = getattr(value_type, method_name, None)
117 member_method = getattr(member_type, method_name, None)
125 member_method = getattr(member_type, method_name, None)
118 if member_method == method:
126 if member_method == method:
119 return True
127 return True
120 except (AttributeError, KeyError):
128 except (AttributeError, KeyError):
121 return False
129 return False
122
130
123
131
124 def has_original_dunder(
132 def _has_original_dunder(
125 value, allowed_types, allowed_methods, allowed_external, method_name
133 value, allowed_types, allowed_methods, allowed_external, method_name
126 ):
134 ):
127 # note: Python ignores `__getattr__`/`__getitem__` on instances,
135 # note: Python ignores `__getattr__`/`__getitem__` on instances,
128 # we only need to check at class level
136 # we only need to check at class level
129 value_type = type(value)
137 value_type = type(value)
130
138
131 # strict type check passes → no need to check method
139 # strict type check passes → no need to check method
132 if value_type in allowed_types:
140 if value_type in allowed_types:
133 return True
141 return True
134
142
135 method = getattr(value_type, method_name, None)
143 method = getattr(value_type, method_name, None)
136
144
137 if not method:
145 if not method:
138 return None
146 return None
139
147
140 if method in allowed_methods:
148 if method in allowed_methods:
141 return True
149 return True
142
150
143 for module_name, *access_path in allowed_external:
151 for module_name, *access_path in allowed_external:
144 if has_original_dunder_external(value, module_name, access_path, method_name):
152 if _has_original_dunder_external(value, module_name, access_path, method_name):
145 return True
153 return True
146
154
147 return False
155 return False
148
156
149
157
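The originality check in `_has_original_dunder` can be sketched in isolation. The helper below is a hypothetical simplification of the same idea (the name `has_original_getitem` is illustrative, not part of the module): a value passes only if its class either is an allow-listed type or resolves the dunder to a method defined by one.

```python
def has_original_getitem(value, allowed_types):
    """Return True if type(value) uses an allow-listed ``__getitem__``.

    Simplified sketch of the originality check: a strict type match
    passes outright; otherwise the method resolved on the class must be
    identical to one defined by an allow-listed type.
    """
    value_type = type(value)
    if value_type in allowed_types:
        return True
    method = getattr(value_type, "__getitem__", None)
    allowed_methods = {getattr(t, "__getitem__") for t in allowed_types}
    return method in allowed_methods


class Plain(dict):
    pass  # inherits dict.__getitem__ unchanged


class Evil(dict):
    def __getitem__(self, key):
        return "surprise"  # overridden: arbitrary side effects possible


assert has_original_getitem({}, {dict})
assert has_original_getitem(Plain(), {dict})       # inherited method passes
assert not has_original_getitem(Evil(), {dict})    # overridden method rejected
```

Because Python resolves dunders on the class, not the instance, comparing the method resolved on `type(value)` against the allow-listed originals is sufficient; no per-instance check is needed.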
158 @undoc
150 @dataclass
159 @dataclass
151 class SelectivePolicy(EvaluationPolicy):
160 class SelectivePolicy(EvaluationPolicy):
152 allowed_getitem: Set[InstancesHaveGetItem] = field(default_factory=set)
161 allowed_getitem: Set[InstancesHaveGetItem] = field(default_factory=set)
153 allowed_getitem_external: Set[Tuple[str, ...]] = field(default_factory=set)
162 allowed_getitem_external: Set[Tuple[str, ...]] = field(default_factory=set)
154 allowed_getattr: Set[MayHaveGetattr] = field(default_factory=set)
163 allowed_getattr: Set[MayHaveGetattr] = field(default_factory=set)
155 allowed_getattr_external: Set[Tuple[str, ...]] = field(default_factory=set)
164 allowed_getattr_external: Set[Tuple[str, ...]] = field(default_factory=set)
156
165
157 def can_get_attr(self, value, attr):
166 def can_get_attr(self, value, attr):
158 has_original_attribute = has_original_dunder(
167 has_original_attribute = _has_original_dunder(
159 value,
168 value,
160 allowed_types=self.allowed_getattr,
169 allowed_types=self.allowed_getattr,
161 allowed_methods=self._getattribute_methods,
170 allowed_methods=self._getattribute_methods,
162 allowed_external=self.allowed_getattr_external,
171 allowed_external=self.allowed_getattr_external,
163 method_name="__getattribute__",
172 method_name="__getattribute__",
164 )
173 )
165 has_original_attr = has_original_dunder(
174 has_original_attr = _has_original_dunder(
166 value,
175 value,
167 allowed_types=self.allowed_getattr,
176 allowed_types=self.allowed_getattr,
168 allowed_methods=self._getattr_methods,
177 allowed_methods=self._getattr_methods,
169 allowed_external=self.allowed_getattr_external,
178 allowed_external=self.allowed_getattr_external,
170 method_name="__getattr__",
179 method_name="__getattr__",
171 )
180 )
172 # Many objects do not have `__getattr__`, this is fine
181 # Many objects do not have `__getattr__`, this is fine
173 if has_original_attr is None and has_original_attribute:
182 if has_original_attr is None and has_original_attribute:
174 return True
183 return True
175
184
176 # Accept objects without modifications to `__getattr__` and `__getattribute__`
185 # Accept objects without modifications to `__getattr__` and `__getattribute__`
177 return has_original_attr and has_original_attribute
186 return has_original_attr and has_original_attribute
178
187
179 def get_attr(self, value, attr):
188 def get_attr(self, value, attr):
180 if self.can_get_attr(value, attr):
189 if self.can_get_attr(value, attr):
181 return getattr(value, attr)
190 return getattr(value, attr)
182
191
183 def can_get_item(self, value, item):
192 def can_get_item(self, value, item):
184 """Allow accessing `__getitem__` of allow-listed instances, provided it was not modified."""
193 """Allow accessing `__getitem__` of allow-listed instances, provided it was not modified."""
185 return has_original_dunder(
194 return _has_original_dunder(
186 value,
195 value,
187 allowed_types=self.allowed_getitem,
196 allowed_types=self.allowed_getitem,
188 allowed_methods=self._getitem_methods,
197 allowed_methods=self._getitem_methods,
189 allowed_external=self.allowed_getitem_external,
198 allowed_external=self.allowed_getitem_external,
190 method_name="__getitem__",
199 method_name="__getitem__",
191 )
200 )
192
201
193 @cached_property
202 @cached_property
194 def _getitem_methods(self) -> Set[Callable]:
203 def _getitem_methods(self) -> Set[Callable]:
195 return self._safe_get_methods(self.allowed_getitem, "__getitem__")
204 return self._safe_get_methods(self.allowed_getitem, "__getitem__")
196
205
197 @cached_property
206 @cached_property
198 def _getattr_methods(self) -> Set[Callable]:
207 def _getattr_methods(self) -> Set[Callable]:
199 return self._safe_get_methods(self.allowed_getattr, "__getattr__")
208 return self._safe_get_methods(self.allowed_getattr, "__getattr__")
200
209
201 @cached_property
210 @cached_property
202 def _getattribute_methods(self) -> Set[Callable]:
211 def _getattribute_methods(self) -> Set[Callable]:
203 return self._safe_get_methods(self.allowed_getattr, "__getattribute__")
212 return self._safe_get_methods(self.allowed_getattr, "__getattribute__")
204
213
205 def _safe_get_methods(self, classes, name) -> Set[Callable]:
214 def _safe_get_methods(self, classes, name) -> Set[Callable]:
206 return {
215 return {
207 method
216 method
208 for class_ in classes
217 for class_ in classes
209 for method in [getattr(class_, name, None)]
218 for method in [getattr(class_, name, None)]
210 if method
219 if method
211 }
220 }
212
221
213
222
214 class DummyNamedTuple(NamedTuple):
223 class _DummyNamedTuple(NamedTuple):
215 pass
224 pass
216
225
217
226
218 class EvaluationContext(NamedTuple):
227 class EvaluationContext(NamedTuple):
219 locals_: dict
228 #: Local namespace
220 globals_: dict
229 locals: dict
230 #: Global namespace
231 globals: dict
232 #: Evaluation policy identifier
221 evaluation: Literal[
233 evaluation: Literal[
222 "forbidden", "minimal", "limited", "unsafe", "dangerous"
234 "forbidden", "minimal", "limited", "unsafe", "dangerous"
223 ] = "forbidden"
235 ] = "forbidden"
236 #: Whether the evaluation of code takes place inside of a subscript.
237 #: Useful for evaluating ``:-1, 'col'`` in ``df[:-1, 'col']``.
224 in_subscript: bool = False
238 in_subscript: bool = False
225
239
226
240
227 class IdentitySubscript:
241 class _IdentitySubscript:
242 """Returns the key itself when an item is requested via subscript."""
243
228 def __getitem__(self, key):
244 def __getitem__(self, key):
229 return key
245 return key
230
246
231
247
232 IDENTITY_SUBSCRIPT = IdentitySubscript()
248 IDENTITY_SUBSCRIPT = _IdentitySubscript()
233 SUBSCRIPT_MARKER = "__SUBSCRIPT_SENTINEL__"
249 SUBSCRIPT_MARKER = "__SUBSCRIPT_SENTINEL__"
234
250
235
251
236 class GuardRejection(ValueError):
252 class GuardRejection(Exception):
253 """Exception raised when guard rejects evaluation attempt."""
254
237 pass
255 pass
238
256
239
257
240 def guarded_eval(code: str, context: EvaluationContext):
258 def guarded_eval(code: str, context: EvaluationContext):
241 locals_ = context.locals_
259 """Evaluate provided code in the evaluation context.
260
261 If evaluation policy given by context is set to ``forbidden``
262 no evaluation will be performed; if it is set to ``dangerous``
263 standard :func:`eval` will be used; finally, for any other,
264 policy :func:`eval_node` will be called on parsed AST.
265 """
266 locals_ = context.locals
242
267
243 if context.evaluation == "forbidden":
268 if context.evaluation == "forbidden":
244 raise GuardRejection("Forbidden mode")
269 raise GuardRejection("Forbidden mode")
245
270
246 # note: not using `ast.literal_eval` as it does not implement
271 # note: not using `ast.literal_eval` as it does not implement
247 # getitem at all, for example it fails on simple `[0][1]`
272 # getitem at all, for example it fails on simple `[0][1]`
248
273
249 if context.in_subscript:
274 if context.in_subscript:
250 # syntactic sugar for slices (:) is only available in subscripts
275 # syntactic sugar for slices (:) is only available in subscripts
251 # so we need to trick the ast parser into thinking that we have
276 # so we need to trick the ast parser into thinking that we have
252 # a subscript, but we need to be able to later recognise that we did
277 # a subscript, but we need to be able to later recognise that we did
253 # it so we can ignore the actual __getitem__ operation
278 # it so we can ignore the actual __getitem__ operation
254 if not code:
279 if not code:
255 return tuple()
280 return tuple()
256 locals_ = locals_.copy()
281 locals_ = locals_.copy()
257 locals_[SUBSCRIPT_MARKER] = IDENTITY_SUBSCRIPT
282 locals_[SUBSCRIPT_MARKER] = IDENTITY_SUBSCRIPT
258 code = SUBSCRIPT_MARKER + "[" + code + "]"
283 code = SUBSCRIPT_MARKER + "[" + code + "]"
259 context = EvaluationContext(**{**context._asdict(), **{"locals_": locals_}})
284 context = EvaluationContext(**{**context._asdict(), **{"locals": locals_}})
260
285
261 if context.evaluation == "dangerous":
286 if context.evaluation == "dangerous":
262 return eval(code, context.globals_, context.locals_)
287 return eval(code, context.globals, context.locals)
263
288
264 expression = ast.parse(code, mode="eval")
289 expression = ast.parse(code, mode="eval")
265
290
266 return eval_node(expression, context)
291 return eval_node(expression, context)
267
292
268
293
269 def eval_node(node: Union[ast.AST, None], context: EvaluationContext):
294 def eval_node(node: Union[ast.AST, None], context: EvaluationContext):
270 """
295 """Evaluate AST node in provided context.
271 Evaluate AST node in provided context.
272
296
273 Applies evaluation restrictions defined in the context.
297 Applies evaluation restrictions defined in the context. Currently does not support evaluation of functions with keyword arguments.
274
298
275 Currently does not support evaluation of functions with keyword arguments.
299 Does not evaluate actions that always have side effects:
276
300
277 Does not evaluate actions which always have side effects:
278 - class definitions (``class sth: ...``)
301 - class definitions (``class sth: ...``)
279 - function definitions (``def sth: ...``)
302 - function definitions (``def sth: ...``)
280 - variable assignments (``x = 1``)
303 - variable assignments (``x = 1``)
281 - augmented assignments (``x += 1``)
304 - augmented assignments (``x += 1``)
282 - deletions (``del x``)
305 - deletions (``del x``)
283
306
284 Does not evaluate operations which do not return values:
307 Does not evaluate operations which do not return values:
308
285 - assertions (``assert x``)
309 - assertions (``assert x``)
286 - pass (``pass``)
310 - pass (``pass``)
287 - imports (``import x``)
311 - imports (``import x``)
288 - control flow
312 - control flow:
313
289 - conditionals (``if x:``) except for ternary IfExp (``a if x else b``)
314 - conditionals (``if x:``) except for ternary IfExp (``a if x else b``)
290 - loops (``for`` and ``while``)
315 - loops (``for`` and ``while``)
291 - exception handling
316 - exception handling
292
317
293 The purpose of this function is to guard against unwanted side-effects;
318 The purpose of this function is to guard against unwanted side-effects;
294 it does not give guarantees on protection from malicious code execution.
319 it does not give guarantees on protection from malicious code execution.
295 """
320 """
296 policy = EVALUATION_POLICIES[context.evaluation]
321 policy = EVALUATION_POLICIES[context.evaluation]
297 if node is None:
322 if node is None:
298 return None
323 return None
299 if isinstance(node, ast.Expression):
324 if isinstance(node, ast.Expression):
300 return eval_node(node.body, context)
325 return eval_node(node.body, context)
301 if isinstance(node, ast.BinOp):
326 if isinstance(node, ast.BinOp):
302 # TODO: add guards
327 # TODO: add guards
303 left = eval_node(node.left, context)
328 left = eval_node(node.left, context)
304 right = eval_node(node.right, context)
329 right = eval_node(node.right, context)
305 if isinstance(node.op, ast.Add):
330 if isinstance(node.op, ast.Add):
306 return left + right
331 return left + right
307 if isinstance(node.op, ast.Sub):
332 if isinstance(node.op, ast.Sub):
308 return left - right
333 return left - right
309 if isinstance(node.op, ast.Mult):
334 if isinstance(node.op, ast.Mult):
310 return left * right
335 return left * right
311 if isinstance(node.op, ast.Div):
336 if isinstance(node.op, ast.Div):
312 return left / right
337 return left / right
313 if isinstance(node.op, ast.FloorDiv):
338 if isinstance(node.op, ast.FloorDiv):
314 return left // right
339 return left // right
315 if isinstance(node.op, ast.Mod):
340 if isinstance(node.op, ast.Mod):
316 return left % right
341 return left % right
317 if isinstance(node.op, ast.Pow):
342 if isinstance(node.op, ast.Pow):
318 return left**right
343 return left**right
319 if isinstance(node.op, ast.LShift):
344 if isinstance(node.op, ast.LShift):
320 return left << right
345 return left << right
321 if isinstance(node.op, ast.RShift):
346 if isinstance(node.op, ast.RShift):
322 return left >> right
347 return left >> right
323 if isinstance(node.op, ast.BitOr):
348 if isinstance(node.op, ast.BitOr):
324 return left | right
349 return left | right
325 if isinstance(node.op, ast.BitXor):
350 if isinstance(node.op, ast.BitXor):
326 return left ^ right
351 return left ^ right
327 if isinstance(node.op, ast.BitAnd):
352 if isinstance(node.op, ast.BitAnd):
328 return left & right
353 return left & right
329 if isinstance(node.op, ast.MatMult):
354 if isinstance(node.op, ast.MatMult):
330 return left @ right
355 return left @ right
331 if isinstance(node, ast.Constant):
356 if isinstance(node, ast.Constant):
332 return node.value
357 return node.value
333 if isinstance(node, ast.Index):
358 if isinstance(node, ast.Index):
334 return eval_node(node.value, context)
359 return eval_node(node.value, context)
335 if isinstance(node, ast.Tuple):
360 if isinstance(node, ast.Tuple):
336 return tuple(eval_node(e, context) for e in node.elts)
361 return tuple(eval_node(e, context) for e in node.elts)
337 if isinstance(node, ast.List):
362 if isinstance(node, ast.List):
338 return [eval_node(e, context) for e in node.elts]
363 return [eval_node(e, context) for e in node.elts]
339 if isinstance(node, ast.Set):
364 if isinstance(node, ast.Set):
340 return {eval_node(e, context) for e in node.elts}
365 return {eval_node(e, context) for e in node.elts}
341 if isinstance(node, ast.Dict):
366 if isinstance(node, ast.Dict):
342 return dict(
367 return dict(
343 zip(
368 zip(
344 [eval_node(k, context) for k in node.keys],
369 [eval_node(k, context) for k in node.keys],
345 [eval_node(v, context) for v in node.values],
370 [eval_node(v, context) for v in node.values],
346 )
371 )
347 )
372 )
348 if isinstance(node, ast.Slice):
373 if isinstance(node, ast.Slice):
349 return slice(
374 return slice(
350 eval_node(node.lower, context),
375 eval_node(node.lower, context),
351 eval_node(node.upper, context),
376 eval_node(node.upper, context),
352 eval_node(node.step, context),
377 eval_node(node.step, context),
353 )
378 )
354 if isinstance(node, ast.ExtSlice):
379 if isinstance(node, ast.ExtSlice):
355 return tuple([eval_node(dim, context) for dim in node.dims])
380 return tuple([eval_node(dim, context) for dim in node.dims])
356 if isinstance(node, ast.UnaryOp):
381 if isinstance(node, ast.UnaryOp):
357 # TODO: add guards
382 # TODO: add guards
358 value = eval_node(node.operand, context)
383 value = eval_node(node.operand, context)
359 if isinstance(node.op, ast.USub):
384 if isinstance(node.op, ast.USub):
360 return -value
385 return -value
361 if isinstance(node.op, ast.UAdd):
386 if isinstance(node.op, ast.UAdd):
362 return +value
387 return +value
363 if isinstance(node.op, ast.Invert):
388 if isinstance(node.op, ast.Invert):
364 return ~value
389 return ~value
365 if isinstance(node.op, ast.Not):
390 if isinstance(node.op, ast.Not):
366 return not value
391 return not value
367 raise ValueError("Unhandled unary operation:", node.op)
392 raise ValueError("Unhandled unary operation:", node.op)
368 if isinstance(node, ast.Subscript):
393 if isinstance(node, ast.Subscript):
369 value = eval_node(node.value, context)
394 value = eval_node(node.value, context)
370 slice_ = eval_node(node.slice, context)
395 slice_ = eval_node(node.slice, context)
371 if policy.can_get_item(value, slice_):
396 if policy.can_get_item(value, slice_):
372 return value[slice_]
397 return value[slice_]
373 raise GuardRejection(
398 raise GuardRejection(
374 "Subscript access (`__getitem__`) for",
399 "Subscript access (`__getitem__`) for",
375 type(value), # not joined to avoid calling `repr`
400 type(value), # not joined to avoid calling `repr`
376 f" not allowed in {context.evaluation} mode",
401 f" not allowed in {context.evaluation} mode",
377 )
402 )
378 if isinstance(node, ast.Name):
403 if isinstance(node, ast.Name):
379 if policy.allow_locals_access and node.id in context.locals_:
404 if policy.allow_locals_access and node.id in context.locals:
380 return context.locals_[node.id]
405 return context.locals[node.id]
381 if policy.allow_globals_access and node.id in context.globals_:
406 if policy.allow_globals_access and node.id in context.globals:
382 return context.globals_[node.id]
407 return context.globals[node.id]
383 if policy.allow_builtins_access and hasattr(builtins, node.id):
408 if policy.allow_builtins_access and hasattr(builtins, node.id):
384 # note: do not use __builtins__, it is implementation detail of Python
409 # note: do not use __builtins__, it is implementation detail of Python
385 return getattr(builtins, node.id)
410 return getattr(builtins, node.id)
386 if not policy.allow_globals_access and not policy.allow_locals_access:
411 if not policy.allow_globals_access and not policy.allow_locals_access:
387 raise GuardRejection(
412 raise GuardRejection(
388 f"Namespace access not allowed in {context.evaluation} mode"
413 f"Namespace access not allowed in {context.evaluation} mode"
389 )
414 )
390 else:
415 else:
391 raise NameError(f"{node.id} not found in locals or globals")
416 raise NameError(f"{node.id} not found in locals or globals")
392 if isinstance(node, ast.Attribute):
417 if isinstance(node, ast.Attribute):
393 value = eval_node(node.value, context)
418 value = eval_node(node.value, context)
394 if policy.can_get_attr(value, node.attr):
419 if policy.can_get_attr(value, node.attr):
395 return getattr(value, node.attr)
420 return getattr(value, node.attr)
396 raise GuardRejection(
421 raise GuardRejection(
397 "Attribute access (`__getattr__`) for",
422 "Attribute access (`__getattr__`) for",
398 type(value), # not joined to avoid calling `repr`
423 type(value), # not joined to avoid calling `repr`
399 f"not allowed in {context.evaluation} mode",
424 f"not allowed in {context.evaluation} mode",
400 )
425 )
401 if isinstance(node, ast.IfExp):
426 if isinstance(node, ast.IfExp):
402 test = eval_node(node.test, context)
427 test = eval_node(node.test, context)
403 if test:
428 if test:
404 return eval_node(node.body, context)
429 return eval_node(node.body, context)
405 else:
430 else:
406 return eval_node(node.orelse, context)
431 return eval_node(node.orelse, context)
407 if isinstance(node, ast.Call):
432 if isinstance(node, ast.Call):
408 func = eval_node(node.func, context)
433 func = eval_node(node.func, context)
409 if policy.can_call(func) and not node.keywords:
434 if policy.can_call(func) and not node.keywords:
410 args = [eval_node(arg, context) for arg in node.args]
435 args = [eval_node(arg, context) for arg in node.args]
411 return func(*args)
436 return func(*args)
412 raise GuardRejection(
437 raise GuardRejection(
413 "Call for",
438 "Call for",
414 func, # not joined to avoid calling `repr`
439 func, # not joined to avoid calling `repr`
415 f"not allowed in {context.evaluation} mode",
440 f"not allowed in {context.evaluation} mode",
416 )
441 )
417 raise ValueError("Unhandled node", node)
442 raise ValueError("Unhandled node", node)
418
443
419
444
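A heavily trimmed version of the dispatch above, covering only constants and two arithmetic operators, illustrates the recursive shape of ``eval_node`` (the names here are illustrative sketches, not the module's API):

```python
import ast


def eval_arith(node):
    """Minimal recursive AST evaluator: constants, +, and * only."""
    if isinstance(node, ast.Expression):
        return eval_arith(node.body)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        left = eval_arith(node.left)
        right = eval_arith(node.right)
        if isinstance(node.op, ast.Add):
            return left + right
        if isinstance(node.op, ast.Mult):
            return left * right
    # anything not explicitly handled is refused, never silently evaluated
    raise ValueError("Unhandled node", node)


tree = ast.parse("2 + 3 * 4", mode="eval")
assert eval_arith(tree) == 14
```

The safety property comes from the final `raise`: the evaluator is a whitelist, so any node type it does not recognise is an error rather than a fallthrough to `eval`.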
420 SUPPORTED_EXTERNAL_GETITEM = {
445 SUPPORTED_EXTERNAL_GETITEM = {
421 ("pandas", "core", "indexing", "_iLocIndexer"),
446 ("pandas", "core", "indexing", "_iLocIndexer"),
422 ("pandas", "core", "indexing", "_LocIndexer"),
447 ("pandas", "core", "indexing", "_LocIndexer"),
423 ("pandas", "DataFrame"),
448 ("pandas", "DataFrame"),
424 ("pandas", "Series"),
449 ("pandas", "Series"),
425 ("numpy", "ndarray"),
450 ("numpy", "ndarray"),
426 ("numpy", "void"),
451 ("numpy", "void"),
427 }
452 }
428
453
429 BUILTIN_GETITEM: Set[InstancesHaveGetItem] = {
454 BUILTIN_GETITEM: Set[InstancesHaveGetItem] = {
430 dict,
455 dict,
431 str,
456 str,
432 bytes,
457 bytes,
433 list,
458 list,
434 tuple,
459 tuple,
435 collections.defaultdict,
460 collections.defaultdict,
436 collections.deque,
461 collections.deque,
437 collections.OrderedDict,
462 collections.OrderedDict,
438 collections.ChainMap,
463 collections.ChainMap,
439 collections.UserDict,
464 collections.UserDict,
440 collections.UserList,
465 collections.UserList,
441 collections.UserString,
466 collections.UserString,
442 DummyNamedTuple,
467 _DummyNamedTuple,
443 IdentitySubscript,
468 _IdentitySubscript,
444 }
469 }
445
470
446
471
447 def _list_methods(cls, source=None):
472 def _list_methods(cls, source=None):
448 """For use on immutable objects or with methods returning a copy"""
473 """For use on immutable objects or with methods returning a copy"""
449 return [getattr(cls, k) for k in (source if source else dir(cls))]
474 return [getattr(cls, k) for k in (source if source else dir(cls))]
450
475
451
476
452 dict_non_mutating_methods = ("copy", "keys", "values", "items")
477 dict_non_mutating_methods = ("copy", "keys", "values", "items")
453 list_non_mutating_methods = ("copy", "index", "count")
478 list_non_mutating_methods = ("copy", "index", "count")
454 set_non_mutating_methods = set(dir(set)) & set(dir(frozenset))
479 set_non_mutating_methods = set(dir(set)) & set(dir(frozenset))
455
480
456
481
457 dict_keys: Type[collections.abc.KeysView] = type({}.keys())
482 dict_keys: Type[collections.abc.KeysView] = type({}.keys())
458 method_descriptor: Any = type(list.copy)
483 method_descriptor: Any = type(list.copy)
459
484
460 ALLOWED_CALLS = {
485 ALLOWED_CALLS = {
461 bytes,
486 bytes,
462 *_list_methods(bytes),
487 *_list_methods(bytes),
463 dict,
488 dict,
464 *_list_methods(dict, dict_non_mutating_methods),
489 *_list_methods(dict, dict_non_mutating_methods),
465 dict_keys.isdisjoint,
490 dict_keys.isdisjoint,
466 list,
491 list,
467 *_list_methods(list, list_non_mutating_methods),
492 *_list_methods(list, list_non_mutating_methods),
468 set,
493 set,
469 *_list_methods(set, set_non_mutating_methods),
494 *_list_methods(set, set_non_mutating_methods),
470 frozenset,
495 frozenset,
471 *_list_methods(frozenset),
496 *_list_methods(frozenset),
472 range,
497 range,
473 str,
498 str,
474 *_list_methods(str),
499 *_list_methods(str),
475 tuple,
500 tuple,
476 *_list_methods(tuple),
501 *_list_methods(tuple),
477 collections.deque,
502 collections.deque,
478 *_list_methods(collections.deque, list_non_mutating_methods),
503 *_list_methods(collections.deque, list_non_mutating_methods),
479 collections.defaultdict,
504 collections.defaultdict,
480 *_list_methods(collections.defaultdict, dict_non_mutating_methods),
505 *_list_methods(collections.defaultdict, dict_non_mutating_methods),
481 collections.OrderedDict,
506 collections.OrderedDict,
482 *_list_methods(collections.OrderedDict, dict_non_mutating_methods),
507 *_list_methods(collections.OrderedDict, dict_non_mutating_methods),
483 collections.UserDict,
508 collections.UserDict,
484 *_list_methods(collections.UserDict, dict_non_mutating_methods),
509 *_list_methods(collections.UserDict, dict_non_mutating_methods),
485 collections.UserList,
510 collections.UserList,
486 *_list_methods(collections.UserList, list_non_mutating_methods),
511 *_list_methods(collections.UserList, list_non_mutating_methods),
487 collections.UserString,
512 collections.UserString,
488 *_list_methods(collections.UserString, dir(str)),
513 *_list_methods(collections.UserString, dir(str)),
489 collections.Counter,
514 collections.Counter,
490 *_list_methods(collections.Counter, dict_non_mutating_methods),
515 *_list_methods(collections.Counter, dict_non_mutating_methods),
491 collections.Counter.elements,
516 collections.Counter.elements,
492 collections.Counter.most_common,
517 collections.Counter.most_common,
493 }
518 }
494
519
495 BUILTIN_GETATTR: Set[MayHaveGetattr] = {
520 BUILTIN_GETATTR: Set[MayHaveGetattr] = {
496 *BUILTIN_GETITEM,
521 *BUILTIN_GETITEM,
497 set,
522 set,
498 frozenset,
523 frozenset,
499 object,
524 object,
500 type, # `type` handles a lot of generic cases, e.g. numbers as in `int.real`.
525 type, # `type` handles a lot of generic cases, e.g. numbers as in `int.real`.
501 dict_keys,
526 dict_keys,
502 method_descriptor,
527 method_descriptor,
503 }
528 }
504
529
505 EVALUATION_POLICIES = {
530 EVALUATION_POLICIES = {
506 "minimal": EvaluationPolicy(
531 "minimal": EvaluationPolicy(
507 allow_builtins_access=True,
532 allow_builtins_access=True,
508 allow_locals_access=False,
533 allow_locals_access=False,
509 allow_globals_access=False,
534 allow_globals_access=False,
510 allow_item_access=False,
535 allow_item_access=False,
511 allow_attr_access=False,
536 allow_attr_access=False,
512 allowed_calls=set(),
537 allowed_calls=set(),
513 allow_any_calls=False,
538 allow_any_calls=False,
514 ),
539 ),
515 "limited": SelectivePolicy(
540 "limited": SelectivePolicy(
516 # TODO:
541 # TODO:
517 # - should reject binary and unary operations if custom methods would be dispatched
542 # - should reject binary and unary operations if custom methods would be dispatched
518 allowed_getitem=BUILTIN_GETITEM,
543 allowed_getitem=BUILTIN_GETITEM,
519 allowed_getitem_external=SUPPORTED_EXTERNAL_GETITEM,
544 allowed_getitem_external=SUPPORTED_EXTERNAL_GETITEM,
520 allowed_getattr=BUILTIN_GETATTR,
545 allowed_getattr=BUILTIN_GETATTR,
521 allowed_getattr_external={
546 allowed_getattr_external={
522 # pandas Series/Frame implements custom `__getattr__`
547 # pandas Series/Frame implements custom `__getattr__`
523 ("pandas", "DataFrame"),
548 ("pandas", "DataFrame"),
524 ("pandas", "Series"),
549 ("pandas", "Series"),
525 },
550 },
526 allow_builtins_access=True,
551 allow_builtins_access=True,
527 allow_locals_access=True,
552 allow_locals_access=True,
528 allow_globals_access=True,
553 allow_globals_access=True,
529 allowed_calls=ALLOWED_CALLS,
554 allowed_calls=ALLOWED_CALLS,
530 ),
555 ),
531 "unsafe": EvaluationPolicy(
556 "unsafe": EvaluationPolicy(
532 allow_builtins_access=True,
557 allow_builtins_access=True,
533 allow_locals_access=True,
558 allow_locals_access=True,
534 allow_globals_access=True,
559 allow_globals_access=True,
535 allow_attr_access=True,
560 allow_attr_access=True,
536 allow_item_access=True,
561 allow_item_access=True,
537 allow_any_calls=True,
562 allow_any_calls=True,
538 ),
563 ),
539 }
564 }
565
566
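The policy table above forms a ladder: ``minimal`` resolves builtins only, ``limited`` adds namespace access with allow-listed dunders and calls, and ``unsafe``/``dangerous`` progressively open everything. The sketch below is a hypothetical reduction of how such a table gates name resolution (`POLICIES` and `resolve_name` are illustrative names; the real policies also gate attribute/item access and calls):

```python
import builtins

POLICIES = {
    "minimal": {"builtins": True, "locals": False},
    "limited": {"builtins": True, "locals": True},
}


def resolve_name(name, policy_name, local_ns):
    """Look a name up only through the channels the policy permits."""
    policy = POLICIES[policy_name]
    if policy["locals"] and name in local_ns:
        return local_ns[name]
    if policy["builtins"] and hasattr(builtins, name):
        return getattr(builtins, name)
    raise NameError(name)


ns = {"x": 42}
assert resolve_name("x", "limited", ns) == 42   # limited sees user variables
assert resolve_name("len", "minimal", ns) is len  # minimal sees builtins only
try:
    resolve_name("x", "minimal", ns)            # user variables are hidden
except NameError:
    pass
```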
567 __all__ = [
568 "guarded_eval",
569 "eval_node",
570 "GuardRejection",
571 "EvaluationContext",
572 "_unbind_method",
573 ]
@@ -1,212 +1,140 b''
1 """Implementation of configuration-related magic functions.
1 """Implementation of configuration-related magic functions.
2 """
2 """
3 #-----------------------------------------------------------------------------
3 #-----------------------------------------------------------------------------
4 # Copyright (c) 2012 The IPython Development Team.
4 # Copyright (c) 2012 The IPython Development Team.
5 #
5 #
6 # Distributed under the terms of the Modified BSD License.
6 # Distributed under the terms of the Modified BSD License.
7 #
7 #
8 # The full license is in the file COPYING.txt, distributed with this software.
8 # The full license is in the file COPYING.txt, distributed with this software.
9 #-----------------------------------------------------------------------------
9 #-----------------------------------------------------------------------------
10
10
11 #-----------------------------------------------------------------------------
11 #-----------------------------------------------------------------------------
12 # Imports
12 # Imports
13 #-----------------------------------------------------------------------------
13 #-----------------------------------------------------------------------------
14
14
15 # Stdlib
15 # Stdlib
16 import re
16 import re
17
17
18 # Our own packages
18 # Our own packages
19 from IPython.core.error import UsageError
19 from IPython.core.error import UsageError
20 from IPython.core.magic import Magics, magics_class, line_magic
20 from IPython.core.magic import Magics, magics_class, line_magic
21 from logging import error
21 from logging import error
22
22
23 #-----------------------------------------------------------------------------
23 #-----------------------------------------------------------------------------
24 # Magic implementation classes
24 # Magic implementation classes
25 #-----------------------------------------------------------------------------
25 #-----------------------------------------------------------------------------
26
26
27 reg = re.compile(r'^\w+\.\w+$')
27 reg = re.compile(r'^\w+\.\w+$')
28 @magics_class
28 @magics_class
29 class ConfigMagics(Magics):
29 class ConfigMagics(Magics):
30
30
31 def __init__(self, shell):
31 def __init__(self, shell):
32 super(ConfigMagics, self).__init__(shell)
32 super(ConfigMagics, self).__init__(shell)
33 self.configurables = []
33 self.configurables = []
34
34
35 @line_magic
35 @line_magic
36 def config(self, s):
36 def config(self, s):
37 """configure IPython
37 """configure IPython
38
38
39 %config Class[.trait=value]
39 %config Class[.trait=value]
40
40
41 This magic exposes most of the IPython config system. Any
41 This magic exposes most of the IPython config system. Any
42 Configurable class should be able to be configured with the simple
42 Configurable class should be able to be configured with the simple
43 line::
43 line::
44
44
45 %config Class.trait=value
45 %config Class.trait=value
46
46
47 Where `value` will be resolved in the user's namespace, if it is an
47 Where `value` will be resolved in the user's namespace, if it is an
48 expression or variable name.
48 expression or variable name.
49
49
50 Examples
50 Examples
51 --------
51 --------
52
52
53 To see what classes are available for config, pass no arguments::
53 To see what classes are available for config, pass no arguments::
54
54
55 In [1]: %config
55 In [1]: %config
56 Available objects for config:
56 Available objects for config:
57 AliasManager
57 AliasManager
58 DisplayFormatter
58 DisplayFormatter
59 HistoryManager
59 HistoryManager
60 IPCompleter
60 IPCompleter
61 LoggingMagics
61 LoggingMagics
62 MagicsManager
62 MagicsManager
63 OSMagics
63 OSMagics
64 PrefilterManager
64 PrefilterManager
65 ScriptMagics
65 ScriptMagics
66 TerminalInteractiveShell
66 TerminalInteractiveShell
67
67
68 To view what is configurable on a given class, just pass the class
68 To view what is configurable on a given class, just pass the class
69 name::
69 name::
70
70
71 In [2]: %config IPCompleter
71 In [2]: %config LoggingMagics
72 IPCompleter(Completer) options
72 LoggingMagics(Magics) options
73 ----------------------------
73 ---------------------------
74 IPCompleter.backslash_combining_completions=<Bool>
74 LoggingMagics.quiet=<Bool>
75 Enable unicode completions, e.g. \\alpha<tab> . Includes completion of latex
75 Suppress output of log state when logging is enabled
76 commands, unicode names, and expanding unicode characters back to latex
77 commands.
78 Current: True
79 IPCompleter.debug=<Bool>
80 Enable debug for the Completer. Mostly print extra information for
81 experimental jedi integration.
82 Current: False
76 Current: False
83 IPCompleter.disable_matchers=<list-item-1>...
84 List of matchers to disable.
85 The list should contain matcher identifiers (see
86 :any:`completion_matcher`).
87 Current: []
88 IPCompleter.greedy=<Bool>
89 Activate greedy completion
90 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
91 This will enable completion on elements of lists, results of function calls, etc.,
92 but can be unsafe because the code is actually evaluated on TAB.
93 Current: False
94 IPCompleter.jedi_compute_type_timeout=<Int>
95 Experimental: restrict time (in milliseconds) during which Jedi can compute types.
96 Set to 0 to stop computing types. A non-zero value lower than 100 ms may hurt
97 performance by preventing Jedi from building its cache.
98 Current: 400
99 IPCompleter.limit_to__all__=<Bool>
100 DEPRECATED as of version 5.0.
101 Instruct the completer to use __all__ for the completion.
102 Specifically, when completing on ``object.<tab>``.
103 When True: only those names in obj.__all__ will be included.
104 When False [default]: the __all__ attribute is ignored
105 Current: False
106 IPCompleter.merge_completions=<Bool>
107 Whether to merge completion results into a single list
108 If False, only the completion results from the first non-empty
109 completer will be returned.
110 As of version 8.6.0, setting the value to ``False`` is an alias for:
111 ``IPCompleter.suppress_competing_matchers = True``.
112 Current: True
113 IPCompleter.omit__names=<Enum>
114 Instruct the completer to omit private method names.
115 Specifically, when completing on ``object.<tab>``.
116 When 2 [default]: all names that start with '_' will be excluded.
117 When 1: all 'magic' names (``__foo__``) will be excluded.
118 When 0: nothing will be excluded.
119 Choices: any of [0, 1, 2]
120 Current: 2
121 IPCompleter.profile_completions=<Bool>
122 If True, emit profiling data for completion subsystem using cProfile.
123 Current: False
124 IPCompleter.profiler_output_dir=<Unicode>
125 Template for path at which to output profile data for completions.
126 Current: '.completion_profiles'
127 IPCompleter.suppress_competing_matchers=<Union>
128 Whether to suppress completions from other *Matchers*.
129 When set to ``None`` (default) the matchers will attempt to auto-detect
130 whether suppression of other matchers is desirable. For example, at the
131 beginning of a line followed by `%` we expect a magic completion to be the
132 only applicable option, and after ``my_dict['`` we usually expect a
133 completion with an existing dictionary key.
134 If you want to disable this heuristic and see completions from all matchers,
135 set ``IPCompleter.suppress_competing_matchers = False``. To disable the
136 heuristic for specific matchers provide a dictionary mapping:
137 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher':
138 False}``.
139 Set ``IPCompleter.suppress_competing_matchers = True`` to limit completions
140 to the set of matchers with the highest priority; this is equivalent to
141 ``IPCompleter.merge_completions`` and can be beneficial for performance, but
142 will sometimes omit relevant candidates from matchers further down the
143 priority list.
144 Current: None
145 IPCompleter.use_jedi=<Bool>
146 Experimental: Use Jedi to generate autocompletions. Default to True if jedi
147 is installed.
148 Current: True
149
77
150 but the real use is in setting values::
78 but the real use is in setting values::
151
79
152 In [3]: %config IPCompleter.greedy = True
80 In [3]: %config LoggingMagics.quiet = True
153
81
154 and these values are read from the user_ns if they are variables::
82 and these values are read from the user_ns if they are variables::
155
83
156 In [4]: feeling_greedy=False
84 In [4]: feeling_quiet=False
157
85
158 In [5]: %config IPCompleter.greedy = feeling_greedy
86 In [5]: %config LoggingMagics.quiet = feeling_quiet
159
87
160 """
88 """
161 from traitlets.config.loader import Config
89 from traitlets.config.loader import Config
162 # some IPython objects are Configurable, but do not yet have
90 # some IPython objects are Configurable, but do not yet have
163 # any configurable traits. Exclude them from the effects of
91 # any configurable traits. Exclude them from the effects of
164 # this magic, as their presence is just noise:
92 # this magic, as their presence is just noise:
165 configurables = sorted(set([ c for c in self.shell.configurables
93 configurables = sorted(set([ c for c in self.shell.configurables
166 if c.__class__.class_traits(config=True)
94 if c.__class__.class_traits(config=True)
167 ]), key=lambda x: x.__class__.__name__)
95 ]), key=lambda x: x.__class__.__name__)
168 classnames = [ c.__class__.__name__ for c in configurables ]
96 classnames = [ c.__class__.__name__ for c in configurables ]
169
97
170 line = s.strip()
98 line = s.strip()
171 if not line:
99 if not line:
172 # print available configurable names
100 # print available configurable names
173 print("Available objects for config:")
101 print("Available objects for config:")
174 for name in classnames:
102 for name in classnames:
175 print(" ", name)
103 print(" ", name)
176 return
104 return
177 elif line in classnames:
105 elif line in classnames:
178 # `%config TerminalInteractiveShell` will print trait info for
106 # `%config TerminalInteractiveShell` will print trait info for
179 # TerminalInteractiveShell
107 # TerminalInteractiveShell
180 c = configurables[classnames.index(line)]
108 c = configurables[classnames.index(line)]
181 cls = c.__class__
109 cls = c.__class__
182 help = cls.class_get_help(c)
110 help = cls.class_get_help(c)
183 # strip leading '--' from cl-args:
111 # strip leading '--' from cl-args:
184 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
112 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
185 print(help)
113 print(help)
186 return
114 return
187 elif reg.match(line):
115 elif reg.match(line):
188 cls, attr = line.split('.')
116 cls, attr = line.split('.')
189 return getattr(configurables[classnames.index(cls)],attr)
117 return getattr(configurables[classnames.index(cls)],attr)
190 elif '=' not in line:
118 elif '=' not in line:
191 msg = "Invalid config statement: %r, "\
119 msg = "Invalid config statement: %r, "\
192 "should be `Class.trait = value`."
120 "should be `Class.trait = value`."
193
121
194 ll = line.lower()
122 ll = line.lower()
195 for classname in classnames:
123 for classname in classnames:
196 if ll == classname.lower():
124 if ll == classname.lower():
197 msg = msg + '\nDid you mean %s (note the case)?' % classname
125 msg = msg + '\nDid you mean %s (note the case)?' % classname
198 break
126 break
199
127
200 raise UsageError( msg % line)
128 raise UsageError( msg % line)
201
129
202 # otherwise, assume we are setting configurables.
130 # otherwise, assume we are setting configurables.
203 # leave quotes on args when splitting, because we want
131 # leave quotes on args when splitting, because we want
204 # unquoted args to eval in user_ns
132 # unquoted args to eval in user_ns
205 cfg = Config()
133 cfg = Config()
206 exec("cfg."+line, self.shell.user_ns, locals())
134 exec("cfg."+line, self.shell.user_ns, locals())
207
135
208 for configurable in configurables:
136 for configurable in configurables:
209 try:
137 try:
210 configurable.update_config(cfg)
138 configurable.update_config(cfg)
211 except Exception as e:
139 except Exception as e:
212 error(e)
140 error(e)
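The dispatch in the magic above (no argument lists the classes, a bare class name prints its help, `Class.trait` reads a value, `Class.trait = value` sets one) can be sketched outside of IPython roughly as follows. The class `FakeConfigurable`, its `quiet` trait, and `config_dispatch` are hypothetical stand-ins for illustration; the real magic walks `self.shell.configurables` and applies assignments through a traitlets `Config` object and `update_config()`.

```python
import re

class FakeConfigurable:
    # hypothetical stand-in for a configurable such as LoggingMagics
    quiet = False

configurables = {"LoggingMagics": FakeConfigurable()}

# Matches a bare `Class.trait` with no assignment, mirroring `reg.match(line)`
reg = re.compile(r"^\w+\.\w+$")

def config_dispatch(line: str):
    line = line.strip()
    if not line:
        # no argument: list available configurable classes
        return sorted(configurables)
    if line in configurables:
        # bare class name: the real magic prints cls.class_get_help(c)
        return f"help for {line}"
    if reg.match(line):
        # `Class.trait`: read the current value
        cls, attr = line.split(".")
        return getattr(configurables[cls], attr)
    if "=" not in line:
        raise ValueError(f"Invalid config statement: {line!r}, "
                         "should be `Class.trait = value`.")
    # `Class.trait = value`: the real magic exec()s `cfg.<line>` so the
    # right-hand side can resolve names from user_ns; here we just eval it
    target, _, value = line.partition("=")
    cls, attr = target.strip().split(".")
    setattr(configurables[cls], attr, eval(value, {}))

config_dispatch("LoggingMagics.quiet = True")
assert config_dispatch("LoggingMagics.quiet") is True
```

The key design point carried over from the real magic is that setting and reading go through the same `Class.trait` parse, so a typo in the class name can be caught and reported before anything is mutated.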
@@ -1,261 +1,261 b''
1 from typing import NamedTuple
1 from typing import NamedTuple
2 from IPython.core.guarded_eval import (
2 from IPython.core.guarded_eval import (
3 EvaluationContext,
3 EvaluationContext,
4 GuardRejection,
4 GuardRejection,
5 guarded_eval,
5 guarded_eval,
6 unbind_method,
6 _unbind_method,
7 )
7 )
8 from IPython.testing import decorators as dec
8 from IPython.testing import decorators as dec
9 import pytest
9 import pytest
10
10
11
11
12 def limited(**kwargs):
12 def limited(**kwargs):
13 return EvaluationContext(locals_=kwargs, globals_={}, evaluation="limited")
13 return EvaluationContext(locals=kwargs, globals={}, evaluation="limited")
14
14
15
15
16 def unsafe(**kwargs):
16 def unsafe(**kwargs):
17 return EvaluationContext(locals_=kwargs, globals_={}, evaluation="unsafe")
17 return EvaluationContext(locals=kwargs, globals={}, evaluation="unsafe")
18
18
19
19
20 @dec.skip_without("pandas")
20 @dec.skip_without("pandas")
21 def test_pandas_series_iloc():
21 def test_pandas_series_iloc():
22 import pandas as pd
22 import pandas as pd
23
23
24 series = pd.Series([1], index=["a"])
24 series = pd.Series([1], index=["a"])
25 context = limited(data=series)
25 context = limited(data=series)
26 assert guarded_eval("data.iloc[0]", context) == 1
26 assert guarded_eval("data.iloc[0]", context) == 1
27
27
28
28
29 @dec.skip_without("pandas")
29 @dec.skip_without("pandas")
30 def test_pandas_series():
30 def test_pandas_series():
31 import pandas as pd
31 import pandas as pd
32
32
33 context = limited(data=pd.Series([1], index=["a"]))
33 context = limited(data=pd.Series([1], index=["a"]))
34 assert guarded_eval('data["a"]', context) == 1
34 assert guarded_eval('data["a"]', context) == 1
35 with pytest.raises(KeyError):
35 with pytest.raises(KeyError):
36 guarded_eval('data["c"]', context)
36 guarded_eval('data["c"]', context)
37
37
38
38
39 @dec.skip_without("pandas")
39 @dec.skip_without("pandas")
40 def test_pandas_bad_series():
40 def test_pandas_bad_series():
41 import pandas as pd
41 import pandas as pd
42
42
43 class BadItemSeries(pd.Series):
43 class BadItemSeries(pd.Series):
44 def __getitem__(self, key):
44 def __getitem__(self, key):
45 return "CUSTOM_ITEM"
45 return "CUSTOM_ITEM"
46
46
47 class BadAttrSeries(pd.Series):
47 class BadAttrSeries(pd.Series):
48 def __getattr__(self, key):
48 def __getattr__(self, key):
49 return "CUSTOM_ATTR"
49 return "CUSTOM_ATTR"
50
50
51 bad_series = BadItemSeries([1], index=["a"])
51 bad_series = BadItemSeries([1], index=["a"])
52 context = limited(data=bad_series)
52 context = limited(data=bad_series)
53
53
54 with pytest.raises(GuardRejection):
54 with pytest.raises(GuardRejection):
55 guarded_eval('data["a"]', context)
55 guarded_eval('data["a"]', context)
56 with pytest.raises(GuardRejection):
56 with pytest.raises(GuardRejection):
57 guarded_eval('data["c"]', context)
57 guarded_eval('data["c"]', context)
58
58
59 # note: here result is a bit unexpected because
59 # note: here result is a bit unexpected because
60 # pandas `__getattr__` calls `__getitem__`;
60 # pandas `__getattr__` calls `__getitem__`;
61 # FIXME - special case to handle it?
61 # FIXME - special case to handle it?
62 assert guarded_eval("data.a", context) == "CUSTOM_ITEM"
62 assert guarded_eval("data.a", context) == "CUSTOM_ITEM"
63
63
64 context = unsafe(data=bad_series)
64 context = unsafe(data=bad_series)
65 assert guarded_eval('data["a"]', context) == "CUSTOM_ITEM"
65 assert guarded_eval('data["a"]', context) == "CUSTOM_ITEM"
66
66
67 bad_attr_series = BadAttrSeries([1], index=["a"])
67 bad_attr_series = BadAttrSeries([1], index=["a"])
68 context = limited(data=bad_attr_series)
68 context = limited(data=bad_attr_series)
69 assert guarded_eval('data["a"]', context) == 1
69 assert guarded_eval('data["a"]', context) == 1
70 with pytest.raises(GuardRejection):
70 with pytest.raises(GuardRejection):
71 guarded_eval("data.a", context)
71 guarded_eval("data.a", context)
72
72
73
73
74 @dec.skip_without("pandas")
74 @dec.skip_without("pandas")
75 def test_pandas_dataframe_loc():
75 def test_pandas_dataframe_loc():
76 import pandas as pd
76 import pandas as pd
77 from pandas.testing import assert_series_equal
77 from pandas.testing import assert_series_equal
78
78
79 data = pd.DataFrame([{"a": 1}])
79 data = pd.DataFrame([{"a": 1}])
80 context = limited(data=data)
80 context = limited(data=data)
81 assert_series_equal(guarded_eval('data.loc[:, "a"]', context), data["a"])
81 assert_series_equal(guarded_eval('data.loc[:, "a"]', context), data["a"])
82
82
83
83
84 def test_named_tuple():
84 def test_named_tuple():
85 class GoodNamedTuple(NamedTuple):
85 class GoodNamedTuple(NamedTuple):
86 a: str
86 a: str
87 pass
87 pass
88
88
89 class BadNamedTuple(NamedTuple):
89 class BadNamedTuple(NamedTuple):
90 a: str
90 a: str
91
91
92 def __getitem__(self, key):
92 def __getitem__(self, key):
93 return None
93 return None
94
94
95 good = GoodNamedTuple(a="x")
95 good = GoodNamedTuple(a="x")
96 bad = BadNamedTuple(a="x")
96 bad = BadNamedTuple(a="x")
97
97
98 context = limited(data=good)
98 context = limited(data=good)
99 assert guarded_eval("data[0]", context) == "x"
99 assert guarded_eval("data[0]", context) == "x"
100
100
101 context = limited(data=bad)
101 context = limited(data=bad)
102 with pytest.raises(GuardRejection):
102 with pytest.raises(GuardRejection):
103 guarded_eval("data[0]", context)
103 guarded_eval("data[0]", context)
104
104
105
105
106 def test_dict():
106 def test_dict():
107 context = limited(data={"a": 1, "b": {"x": 2}, ("x", "y"): 3})
107 context = limited(data={"a": 1, "b": {"x": 2}, ("x", "y"): 3})
108 assert guarded_eval('data["a"]', context) == 1
108 assert guarded_eval('data["a"]', context) == 1
109 assert guarded_eval('data["b"]', context) == {"x": 2}
109 assert guarded_eval('data["b"]', context) == {"x": 2}
110 assert guarded_eval('data["b"]["x"]', context) == 2
110 assert guarded_eval('data["b"]["x"]', context) == 2
111 assert guarded_eval('data["x", "y"]', context) == 3
111 assert guarded_eval('data["x", "y"]', context) == 3
112
112
113 assert guarded_eval("data.keys", context)
113 assert guarded_eval("data.keys", context)
114
114
115
115
116 def test_set():
116 def test_set():
117 context = limited(data={"a", "b"})
117 context = limited(data={"a", "b"})
118 assert guarded_eval("data.difference", context)
118 assert guarded_eval("data.difference", context)
119
119
120
120
121 def test_list():
121 def test_list():
122 context = limited(data=[1, 2, 3])
122 context = limited(data=[1, 2, 3])
123 assert guarded_eval("data[1]", context) == 2
123 assert guarded_eval("data[1]", context) == 2
124 assert guarded_eval("data.copy", context)
124 assert guarded_eval("data.copy", context)
125
125
126
126
127 def test_dict_literal():
127 def test_dict_literal():
128 context = limited()
128 context = limited()
129 assert guarded_eval("{}", context) == {}
129 assert guarded_eval("{}", context) == {}
130 assert guarded_eval('{"a": 1}', context) == {"a": 1}
130 assert guarded_eval('{"a": 1}', context) == {"a": 1}
131
131
132
132
133 def test_list_literal():
133 def test_list_literal():
134 context = limited()
134 context = limited()
135 assert guarded_eval("[]", context) == []
135 assert guarded_eval("[]", context) == []
136 assert guarded_eval('[1, "a"]', context) == [1, "a"]
136 assert guarded_eval('[1, "a"]', context) == [1, "a"]
137
137
138
138
139 def test_set_literal():
139 def test_set_literal():
140 context = limited()
140 context = limited()
141 assert guarded_eval("set()", context) == set()
141 assert guarded_eval("set()", context) == set()
142 assert guarded_eval('{"a"}', context) == {"a"}
142 assert guarded_eval('{"a"}', context) == {"a"}
143
143
144
144
145 def test_if_expression():
145 def test_if_expression():
146 context = limited()
146 context = limited()
147 assert guarded_eval("2 if True else 3", context) == 2
147 assert guarded_eval("2 if True else 3", context) == 2
148 assert guarded_eval("4 if False else 5", context) == 5
148 assert guarded_eval("4 if False else 5", context) == 5
149
149
150
150
151 def test_object():
151 def test_object():
152 obj = object()
152 obj = object()
153 context = limited(obj=obj)
153 context = limited(obj=obj)
154 assert guarded_eval("obj.__dir__", context) == obj.__dir__
154 assert guarded_eval("obj.__dir__", context) == obj.__dir__
155
155
156
156
157 @pytest.mark.parametrize(
157 @pytest.mark.parametrize(
158 "code,expected",
158 "code,expected",
159 [
159 [
160 ["int.numerator", int.numerator],
160 ["int.numerator", int.numerator],
161 ["float.is_integer", float.is_integer],
161 ["float.is_integer", float.is_integer],
162 ["complex.real", complex.real],
162 ["complex.real", complex.real],
163 ],
163 ],
164 )
164 )
165 def test_number_attributes(code, expected):
165 def test_number_attributes(code, expected):
166 assert guarded_eval(code, limited()) == expected
166 assert guarded_eval(code, limited()) == expected
167
167
168
168
169 def test_method_descriptor():
169 def test_method_descriptor():
170 context = limited()
170 context = limited()
171 assert guarded_eval("list.copy.__name__", context) == "copy"
171 assert guarded_eval("list.copy.__name__", context) == "copy"
172
172
173
173
174 @pytest.mark.parametrize(
174 @pytest.mark.parametrize(
175 "data,good,bad,expected",
175 "data,good,bad,expected",
176 [
176 [
177 [[1, 2, 3], "data.index(2)", "data.append(4)", 1],
177 [[1, 2, 3], "data.index(2)", "data.append(4)", 1],
178 [{"a": 1}, "data.keys().isdisjoint({})", "data.update()", True],
178 [{"a": 1}, "data.keys().isdisjoint({})", "data.update()", True],
179 ],
179 ],
180 )
180 )
181 def test_calls(data, good, bad, expected):
181 def test_calls(data, good, bad, expected):
182 context = limited(data=data)
182 context = limited(data=data)
183 assert guarded_eval(good, context) == expected
183 assert guarded_eval(good, context) == expected
184
184
185 with pytest.raises(GuardRejection):
185 with pytest.raises(GuardRejection):
186 guarded_eval(bad, context)
186 guarded_eval(bad, context)
187
187
188
188
189 @pytest.mark.parametrize(
189 @pytest.mark.parametrize(
190 "code,expected",
190 "code,expected",
191 [
191 [
192 ["(1\n+\n1)", 2],
192 ["(1\n+\n1)", 2],
193 ["list(range(10))[-1:]", [9]],
193 ["list(range(10))[-1:]", [9]],
194 ["list(range(20))[3:-2:3]", [3, 6, 9, 12, 15]],
194 ["list(range(20))[3:-2:3]", [3, 6, 9, 12, 15]],
195 ],
195 ],
196 )
196 )
197 def test_literals(code, expected):
197 def test_literals(code, expected):
198 context = limited()
198 context = limited()
199 assert guarded_eval(code, context) == expected
199 assert guarded_eval(code, context) == expected
200
200
201
201
202 def test_access_builtins():
202 def test_access_builtins():
203 context = limited()
203 context = limited()
204 assert guarded_eval("round", context) == round
204 assert guarded_eval("round", context) == round
205
205
206
206
207 def test_subscript():
207 def test_subscript():
208 context = EvaluationContext(
208 context = EvaluationContext(
209 locals_={}, globals_={}, evaluation="limited", in_subscript=True
209 locals={}, globals={}, evaluation="limited", in_subscript=True
210 )
210 )
211 empty_slice = slice(None, None, None)
211 empty_slice = slice(None, None, None)
212 assert guarded_eval("", context) == tuple()
212 assert guarded_eval("", context) == tuple()
213 assert guarded_eval(":", context) == empty_slice
213 assert guarded_eval(":", context) == empty_slice
214 assert guarded_eval("1:2:3", context) == slice(1, 2, 3)
214 assert guarded_eval("1:2:3", context) == slice(1, 2, 3)
215 assert guarded_eval(':, "a"', context) == (empty_slice, "a")
215 assert guarded_eval(':, "a"', context) == (empty_slice, "a")
216
216
217
217
218 def test_unbind_method():
218 def test_unbind_method():
219 class X(list):
219 class X(list):
220 def index(self, k):
220 def index(self, k):
221 return "CUSTOM"
221 return "CUSTOM"
222
222
223 x = X()
223 x = X()
224 assert unbind_method(x.index) is X.index
224 assert _unbind_method(x.index) is X.index
225 assert unbind_method([].index) is list.index
225 assert _unbind_method([].index) is list.index
226
226
227
227
228 def test_assumption_instance_attr_do_not_matter():
228 def test_assumption_instance_attr_do_not_matter():
229 """This is semi-specified in Python documentation.
229 """This is semi-specified in Python documentation.
230
230
231 However, since the specification says 'not guaranteed
231 However, since the specification says 'not guaranteed
232 to work' rather than 'is forbidden to work', future
232 to work' rather than 'is forbidden to work', future
233 versions could invalidate these assumptions. This test
233 versions could invalidate these assumptions. This test
234 is meant to catch such a change if it ever comes true.
234 is meant to catch such a change if it ever comes true.
235 """
235 """
236
236
237 class T:
237 class T:
238 def __getitem__(self, k):
238 def __getitem__(self, k):
239 return "a"
239 return "a"
240
240
241 def __getattr__(self, k):
241 def __getattr__(self, k):
242 return "a"
242 return "a"
243
243
244 t = T()
244 t = T()
245 t.__getitem__ = lambda f: "b"
245 t.__getitem__ = lambda f: "b"
246 t.__getattr__ = lambda f: "b"
246 t.__getattr__ = lambda f: "b"
247 assert t[1] == "a"
247 assert t[1] == "a"
248 assert t[1] == "a"
248 assert t[1] == "a"
249
249
250
250
251 def test_assumption_named_tuples_share_getitem():
251 def test_assumption_named_tuples_share_getitem():
252 """Check assumption on named tuples sharing __getitem__"""
252 """Check assumption on named tuples sharing __getitem__"""
253 from typing import NamedTuple
253 from typing import NamedTuple
254
254
255 class A(NamedTuple):
255 class A(NamedTuple):
256 pass
256 pass
257
257
258 class B(NamedTuple):
258 class B(NamedTuple):
259 pass
259 pass
260
260
261 assert A.__getitem__ == B.__getitem__
261 assert A.__getitem__ == B.__getitem__
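The `_unbind_method` helper exercised in `test_unbind_method` above can be approximated in plain Python. The point is to look the method up on the *class* rather than the instance, so that a method attached to an instance cannot masquerade as an allow-listed original. The sketch below (the helper name `unbind_method_sketch` and its exact edge-case behavior are assumptions, not the real implementation) shows the idea:

```python
def unbind_method_sketch(bound):
    # Recover the underlying class-level function from a bound method by
    # looking it up on type(instance), never on the instance itself.
    self_obj = getattr(bound, "__self__", None)
    if self_obj is None:
        # not a bound method (e.g. a plain lambda attached to an instance)
        return None
    return getattr(type(self_obj), bound.__name__, None)

class X(list):
    def index(self, k):
        return "CUSTOM"

x = X()
assert unbind_method_sketch(x.index) is X.index
assert unbind_method_sketch([].index) is list.index

# An instance-attached override has no __self__, so it is not treated as
# the class's allow-listed method:
x.index = lambda k: "spoofed"
assert unbind_method_sketch(x.index) is None
```

This is why the guarded evaluator can compare the unbound function against an allow-list of known-safe methods such as `list.index` and `dict.keys`, as the `test_calls` cases above rely on.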