1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now support a wide variety of completion mechanism both available
7 This module now support a wide variety of completion mechanism both available
8 for normal classic Python code, as well as completer for IPython specific
8 for normal classic Python code, as well as completer for IPython specific
9 Syntax like magics.
9 Syntax like magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends not only can complete your code, but can help
14 IPython and compatible frontends not only can complete your code, but can help
15 you to input a wide range of characters. In particular we allow you to insert
15 you to input a wide range of characters. In particular we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name, or unicode long description. To do so type a backslash follow by the
22 name, or unicode long description. To do so type a backslash follow by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 Ξ±
31 Ξ±
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\GREEK SMALL LETTER ALPHA<tab>
38 \\GREEK SMALL LETTER ALPHA<tab>
39 Ξ±
39 Ξ±
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrow or
42 Only valid Python identifiers will complete. Combining characters (like arrow or
43 dots) are also available, unlike latex they need to be put after the their
43 dots) are also available, unlike latex they need to be put after the their
44 counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
44 counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometime challenging to know how to type a character, if you are using
51 It is sometime challenging to know how to type a character, if you are using
52 IPython, or any compatible frontend you can prepend backslash to the character
52 IPython, or any compatible frontend you can prepend backslash to the character
53 and press :kbd:`Tab` to expand it to its latex form.
53 and press :kbd:`Tab` to expand it to its latex form.
54
54
55 .. code::
55 .. code::
56
56
57 \\Ξ±<tab>
57 \\Ξ±<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 :std:configtrait:`Completer.backslash_combining_completions` option to
62 :std:configtrait:`Completer.backslash_combining_completions` option to
63 ``False``.
63 ``False``.


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions both using static analysis of the code, and dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and
will raise unless used in a :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if current
IPython completer pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is a real number without
executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
option.

Be sure to update :any:`jedi` to the latest stable version or to try the
current development version to get better completions.

Matchers
========

All completion routines are implemented using the unified *Matchers* API.
The matchers API is provisional and subject to change without notice.

The built-in matchers include:

- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
- :any:`IPCompleter.magic_matcher`: completions for magics,
- :any:`IPCompleter.unicode_name_matcher`,
  :any:`IPCompleter.fwd_unicode_matcher`
  and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
- :any:`IPCompleter.file_matcher`: paths to files and directories,
- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
  implementation in :any:`InteractiveShell` which uses IPython hooks system
  (`complete_command`) with string dispatch (including regular expressions).
  Unlike other matchers, ``custom_completer_matcher`` will not suppress
  Jedi results, in order to match behaviour in earlier IPython versions.

Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
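
For example, a minimal matcher using the original (API v1) interface is just a
callable taking the current token and returning a list of strings; the matcher
name and its vocabulary below are hypothetical and shown purely as a sketch
(``get_ipython`` is only available inside an IPython session):

.. code-block:: python

    def my_project_matcher(text: str) -> list[str]:
        # hypothetical fixed vocabulary, purely for illustration
        words = ["alpha_metric", "alpha_table", "beta_report"]
        return [w for w in words if w.startswith(text)]

    get_ipython().Completer.custom_matchers.append(my_project_matcher)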

Matcher API
-----------

Simplifying some details, the ``Matcher`` interface can be described as

.. code-block::

    MatcherAPIv1 = Callable[[str], list[str]]
    MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]

    Matcher = MatcherAPIv1 | MatcherAPIv2

The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
and remains supported as the simplest way of generating completions. This is also
currently the only API supported by the IPython hooks system `complete_command`.

To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
and requires a literal ``2`` for v2 Matchers.

Once the API stabilises, future versions may relax the requirement for specifying
``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore
please do not rely on the presence of ``matcher_api_version`` for any purpose.
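
As an illustrative sketch only (``color_matcher`` and its candidate names are
made up for this example), an API v2 matcher receives a ``CompletionContext``
and returns a ``SimpleMatcherResult``:

.. code-block:: python

    from IPython.core.completer import (
        CompletionContext,
        SimpleCompletion,
        SimpleMatcherResult,
        context_matcher,
    )

    @context_matcher()
    def color_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # hypothetical fixed vocabulary, purely for illustration
        names = ["red", "green", "blue"]
        matches = [
            SimpleCompletion(text=name, type="param")  # the type label is free-form
            for name in names
            if name.startswith(context.token)
        ]
        return {"completions": matches}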

Suppression of competing matchers
---------------------------------

By default, results from all matchers are combined, in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results of
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a set of identifiers of matchers which
should not be suppressed to the ``MatcherResult`` under the ``do_not_suppress`` key.

The suppression behaviour is user-configurable via
:std:configtrait:`IPCompleter.suppress_competing_matchers`.
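
For instance, building on the sketch above (and reusing its imports), a matcher
that wants its results to replace those of all other matchers, except for the
magic matcher, could look along these lines; the matcher name and completion
text are hypothetical:

.. code-block:: python

    @context_matcher()
    def overriding_matcher(context: CompletionContext) -> SimpleMatcherResult:
        matches = [SimpleCompletion("override_me")]  # hypothetical completions
        return {
            "completions": matches,
            "suppress": True,
            "do_not_suppress": {"IPCompleter.magic_matcher"},
        }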
171 """
171 """
172
172
173
173
174 # Copyright (c) IPython Development Team.
174 # Copyright (c) IPython Development Team.
175 # Distributed under the terms of the Modified BSD License.
175 # Distributed under the terms of the Modified BSD License.
176 #
176 #
177 # Some of this code originated from rlcompleter in the Python standard library
177 # Some of this code originated from rlcompleter in the Python standard library
178 # Copyright (C) 2001 Python Software Foundation, www.python.org
178 # Copyright (C) 2001 Python Software Foundation, www.python.org
179
179
180 from __future__ import annotations
180 from __future__ import annotations
181 import builtins as builtin_mod
181 import builtins as builtin_mod
182 import enum
182 import enum
183 import glob
183 import glob
184 import inspect
184 import inspect
185 import itertools
185 import itertools
186 import keyword
186 import keyword
187 import os
187 import os
188 import re
188 import re
189 import string
189 import string
190 import sys
190 import sys
191 import tokenize
191 import tokenize
192 import time
192 import time
193 import unicodedata
193 import unicodedata
194 import uuid
194 import uuid
195 import warnings
195 import warnings
196 from ast import literal_eval
196 from ast import literal_eval
197 from collections import defaultdict
197 from collections import defaultdict
198 from contextlib import contextmanager
198 from contextlib import contextmanager
199 from dataclasses import dataclass
199 from dataclasses import dataclass
200 from functools import cached_property, partial
200 from functools import cached_property, partial
201 from types import SimpleNamespace
201 from types import SimpleNamespace
202 from typing import (
202 from typing import (
203 Iterable,
203 Iterable,
204 Iterator,
204 Iterator,
205 List,
205 List,
206 Tuple,
206 Tuple,
207 Union,
207 Union,
208 Any,
208 Any,
209 Sequence,
209 Sequence,
210 Dict,
210 Dict,
211 Optional,
211 Optional,
212 TYPE_CHECKING,
212 TYPE_CHECKING,
213 Set,
213 Set,
214 Sized,
214 Sized,
215 TypeVar,
215 TypeVar,
216 Literal,
216 Literal,
217 )
217 )
218
218
219 from IPython.core.guarded_eval import guarded_eval, EvaluationContext
219 from IPython.core.guarded_eval import guarded_eval, EvaluationContext
220 from IPython.core.error import TryNext
220 from IPython.core.error import TryNext
221 from IPython.core.inputtransformer2 import ESC_MAGIC
221 from IPython.core.inputtransformer2 import ESC_MAGIC
222 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
222 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
223 from IPython.core.oinspect import InspectColors
223 from IPython.core.oinspect import InspectColors
224 from IPython.testing.skipdoctest import skip_doctest
224 from IPython.testing.skipdoctest import skip_doctest
225 from IPython.utils import generics
225 from IPython.utils import generics
226 from IPython.utils.decorators import sphinx_options
226 from IPython.utils.decorators import sphinx_options
227 from IPython.utils.dir2 import dir2, get_real_method
227 from IPython.utils.dir2 import dir2, get_real_method
228 from IPython.utils.docs import GENERATING_DOCUMENTATION
228 from IPython.utils.docs import GENERATING_DOCUMENTATION
229 from IPython.utils.path import ensure_dir_exists
229 from IPython.utils.path import ensure_dir_exists
230 from IPython.utils.process import arg_split
230 from IPython.utils.process import arg_split
231 from traitlets import (
231 from traitlets import (
232 Bool,
232 Bool,
233 Enum,
233 Enum,
234 Int,
234 Int,
235 List as ListTrait,
235 List as ListTrait,
236 Unicode,
236 Unicode,
237 Dict as DictTrait,
237 Dict as DictTrait,
238 Union as UnionTrait,
238 Union as UnionTrait,
239 observe,
239 observe,
240 )
240 )
241 from traitlets.config.configurable import Configurable
241 from traitlets.config.configurable import Configurable
242
242
243 import __main__
243 import __main__
244
244
# skip module doctests
__skip_doctest__ = True


try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False


if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
    from typing import cast
    from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
else:
    from typing import Generic

    def cast(type_, obj):
        """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
        return obj

    # not required at runtime
    NotRequired = Tuple  # requires Python >=3.11
    TypedDict = Dict  # by extension of `NotRequired` requires 3.11 too
    Protocol = object  # requires Python >=3.8
    TypeAlias = Any  # requires Python >=3.10
    TypeGuard = Generic  # requires Python >=3.10
    if GENERATING_DOCUMENTATION:
        from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
# -----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be more
# fine-grained but is it worth it for performance? While unicode has characters
# in the range 0-0x110000, we seem to have names for about 10% of those (131808
# as I write this). With the ranges below we cover them all, with a density of
# ~67%; the biggest next gap we consider would only add about 1% density and
# there are 600 gaps that would need hard coding.
_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

# sentinel value to signal lack of a match
not_found = object()

class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things() # works

    >>> completer.do_experimental_things() # raises.

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if they do so.

        You also understand that, if the API is not to your liking, you should report
        a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements on
        any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False

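# Illustrative behaviour of has_open_quotes (examples only, derived from the
# counting logic above):
#   has_open_quotes("print('hi)")   ->  "'"    (one unmatched single quote)
#   has_open_quotes("print('hi')")  ->  False  (all quotes are balanced)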

def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s


def expand_user(path: str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val


def compress_user(path: str, tilde_expand: bool, tilde_val: str) -> str:
    """Does the opposite of expand_user, with its outputs."""
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path


def completions_sorting_key(word):
    """key for sorting completions

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in the alphabetical order
      by their name
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if not "%" in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if not "%" in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2

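# Illustrative sort keys produced by completions_sorting_key (examples only):
#   "foo"      -> (0, "foo", 0)
#   "_private" -> (1, "_private", 0)   # leading underscore demoted
#   "%%timeit" -> (0, "timeit", 2)     # cell magic ordered alphabetically by name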

class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0 so should likely be removed for 7.0

    """

    def __init__(self, name):

        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ""
        self._origin = "fake"
        self.text = "crashed"

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]


class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in a proper context manager.

    This acts as a middle-ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how
    the code should be run/inspected, Prompt Toolkit (and other frontends)
    mostly need user-facing information.

    - Which range should be replaced by what.
    - Some metadata (like completion type), or meta information to be displayed
      to the user.

    For debugging purposes we can also store the origin of the completion (``jedi``,
    ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(
        self,
        start: int,
        end: int,
        text: str,
        *,
        type: Optional[str] = None,
        _origin="",
        signature="",
    ) -> None:
        warnings.warn(
            "``Completion`` is a provisional API (as of IPython 6.0). "
            "It may change without warnings. "
            "Use in corresponding context manager.",
            category=ProvisionalCompleterWarning,
            stacklevel=2,
        )

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
            (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other) -> bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer the type), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing, as it depends on surrounding text, which Completions are not
        aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))


class SimpleCompletion:
    """Completion item to be included in the dictionary returned by new-style Matcher (API v2).

    .. warning::

        Provisional

        This class is used to describe the currently supported attributes of
        simple completion items, and any additional implementation details
        should not be relied on. Additional attributes may be included in
        future versions, and the meaning of text disambiguated from the current
        dual meaning of "text to insert" and "text to use as a label".
    """

    __slots__ = ["text", "type"]

    def __init__(self, text: str, *, type: Optional[str] = None):
        self.text = text
        self.type = type

    def __repr__(self):
        return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"


class _MatcherResultBase(TypedDict):
    """Definition of dictionary to be returned by new-style Matcher (API v2)."""

    #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
    matched_fragment: NotRequired[str]

    #: Whether to suppress results from all other matchers (True), some
    #: matchers (set of identifiers) or none (False); default is False.
    suppress: NotRequired[Union[bool, Set[str]]]

    #: Identifiers of matchers which should NOT be suppressed when this matcher
    #: requests to suppress all other matchers; defaults to an empty set.
    do_not_suppress: NotRequired[Set[str]]

    #: Are completions already ordered and should be left as-is? default is False.
    ordered: NotRequired[bool]


@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
class SimpleMatcherResult(_MatcherResultBase, TypedDict):
    """Result of new-style completion matcher."""

    # note: TypedDict is added again to the inheritance chain
    # in order to get __orig_bases__ for documentation

    #: List of candidate completions
    completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]


class _JediMatcherResult(_MatcherResultBase):
    """Matching result returned by Jedi (will be processed differently)"""

    #: list of candidate completions
    completions: Iterator[_JediCompletionLike]


AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)


@dataclass
class CompletionContext:
    """Completion context provided as an argument to matchers in the Matcher API v2."""

    # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
    # which was not explicitly visible as an argument of the matcher, making any refactor
    # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
    # from the completer, and make substituting them in sub-classes easier.

    #: Relevant fragment of code directly preceding the cursor.
    #: The extraction of token is implemented via splitter heuristic
    #: (following readline behaviour for legacy reasons), which is user configurable
    #: (by switching the greedy mode).
    token: str

    #: The full available content of the editor or buffer
    full_text: str

    #: Cursor position in the line (the same for ``full_text`` and ``text``).
    cursor_position: int

    #: Cursor line in ``full_text``.
    cursor_line: int

    #: The maximum number of completions that will be used downstream.
    #: Matchers can use this information to abort early.
    #: The built-in Jedi matcher is currently excepted from this limit.
    # If not given, return all possible completions.
    limit: Optional[int]

    @cached_property
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

    @cached_property
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]


#: Matcher results for API v2.
MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]


class _MatcherAPIv1Base(Protocol):
    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
    #: API version
    matcher_api_version: Optional[Literal[1]]

    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...


#: Protocol describing Matcher API v1.
MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]


class MatcherAPIv2(Protocol):
    """Protocol describing Matcher API v2."""

    #: API version
    matcher_api_version: Literal[2] = 2

    def __call__(self, context: CompletionContext) -> MatcherResult:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]


def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 1


def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 2


def _is_sizable(value: Any) -> TypeGuard[Sized]:
    """Determines whether the object is sizable"""
    return hasattr(value, "__len__")


def _is_iterator(value: Any) -> TypeGuard[Iterator]:
    """Determines whether the object is an iterator"""
    return hasattr(value, "__next__")


def has_any_completions(result: MatcherResult) -> bool:
    """Check if the result includes any completions."""
    completions = result["completions"]
    if _is_sizable(completions):
        return len(completions) != 0
    if _is_iterator(completions):
        try:
            old_iterator = completions
            first = next(old_iterator)
            result["completions"] = cast(
                Iterator[SimpleCompletion],
                itertools.chain([first], old_iterator),
            )
            return True
        except StopIteration:
            return False
    raise ValueError(
        "Completions returned by matcher need to be an Iterator or a Sizable"
    )


def completion_matcher(
    *,
    priority: Optional[float] = None,
    identifier: Optional[str] = None,
    api_version: int = 1,
):
    """Adds attributes describing the matcher.

    Parameters
    ----------
    priority : Optional[float]
        The priority of the matcher, determines the order of execution of matchers.
        Higher priority means that the matcher will be executed first. Defaults to 0.
    identifier : Optional[str]
        identifier of the matcher allowing users to modify the behaviour via traitlets,
        and also used for debugging (will be passed as ``origin`` with the completions).

        Defaults to matcher function's ``__qualname__`` (for example,
        ``IPCompleter.file_matcher`` for the built-in matcher defined
        as a ``file_matcher`` method of the ``IPCompleter`` class).
    api_version : Optional[int]
        version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.
    """

    def wrapper(func: Matcher):
        func.matcher_priority = priority or 0  # type: ignore
        func.matcher_identifier = identifier or func.__qualname__  # type: ignore
        func.matcher_api_version = api_version  # type: ignore
        if TYPE_CHECKING:
            if api_version == 1:
                func = cast(MatcherAPIv1, func)
            elif api_version == 2:
                func = cast(MatcherAPIv2, func)
        return func

    return wrapper


def _get_matcher_priority(matcher: Matcher):
    return getattr(matcher, "matcher_priority", 0)


def _get_matcher_id(matcher: Matcher):
    return getattr(matcher, "matcher_identifier", matcher.__qualname__)


def _get_matcher_api_version(matcher):
    return getattr(matcher, "matcher_api_version", 1)


context_matcher = partial(completion_matcher, api_version=2)


_IC = Iterable[Completion]


def _deduplicate_completions(text: str, completions: _IC) -> _IC:
    """
    Deduplicate a set of completions.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to deduplicate

    Yields
    ------
    `Completions` objects
        Completions coming from multiple sources may be different, but end up
        having the same effect when applied to ``text``. If this is the case,
        this will consider completions as equal and only emit the first
        encountered. Not folded into `completions()` yet for debugging purposes,
        and to detect when the IPython completer does return things that Jedi
        does not, but should be at some point.
    """
    completions = list(completions)
    if not completions:
        return

    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)

    seen = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            yield c
            seen.add(new_text)


def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
    """
    Rectify a set of completions to all have the same ``start`` and ``end``

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in a proper context manager.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to rectify
    _debug : bool
        Log failed completion

    Notes
    -----
    :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
    the Jupyter Protocol requires them to behave like so. This will readjust
    the completion to have the same ``start`` and ``end`` by padding both
    extremities with surrounding text.

    During stabilisation this should support a ``_debug`` option to log which
    completions are returned by the IPython completer and not found in Jedi, in
    order to make upstream bug reports.
    """
881 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
881 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
882 "It may change without warnings. "
882 "It may change without warnings. "
883 "Use in corresponding context manager.",
883 "Use in corresponding context manager.",
884 category=ProvisionalCompleterWarning, stacklevel=2)
884 category=ProvisionalCompleterWarning, stacklevel=2)
885
885
886 completions = list(completions)
886 completions = list(completions)
887 if not completions:
887 if not completions:
888 return
888 return
889 starts = (c.start for c in completions)
889 starts = (c.start for c in completions)
890 ends = (c.end for c in completions)
890 ends = (c.end for c in completions)
891
891
892 new_start = min(starts)
892 new_start = min(starts)
893 new_end = max(ends)
893 new_end = max(ends)
894
894
895 seen_jedi = set()
895 seen_jedi = set()
896 seen_python_matches = set()
896 seen_python_matches = set()
897 for c in completions:
897 for c in completions:
898 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
898 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
899 if c._origin == 'jedi':
899 if c._origin == 'jedi':
900 seen_jedi.add(new_text)
900 seen_jedi.add(new_text)
901 elif c._origin == "IPCompleter.python_matcher":
901 elif c._origin == "IPCompleter.python_matcher":
902 seen_python_matches.add(new_text)
902 seen_python_matches.add(new_text)
903 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
903 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
904 diff = seen_python_matches.difference(seen_jedi)
904 diff = seen_python_matches.difference(seen_jedi)
905 if diff and _debug:
905 if diff and _debug:
906 print('IPython.python matches have extras:', diff)
906 print('IPython.python matches have extras:', diff)
907
907
908
908
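# Illustrative sketch (editorial addition, not part of the original module):
# how ``rectify_completions`` pads completions covering different ranges so
# that they all share the same ``start`` and ``end``. The ``Completion``
# objects below are hypothetical, jedi-less completions.
with warnings.catch_warnings():
    # silence the provisional-API warning for this standalone demonstration
    warnings.simplefilter("ignore", ProvisionalCompleterWarning)
    _demo_completions = [
        Completion(0, 3, "foo", type=None, _origin="", signature=""),  # would replace "faa"
        Completion(4, 7, "baz", type=None, _origin="", signature=""),  # would replace "bar"
    ]
    _rectified = list(rectify_completions("faa.bar", _demo_completions))
# both completions now span the whole text and carry the padded replacement
assert [c.text for c in _rectified] == ["foo.bar", "faa.baz"]
assert all(c.start == 0 and c.end == 7 for c in _rectified)
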
909 if sys.platform == 'win32':
909 if sys.platform == 'win32':
910 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
910 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
911 else:
911 else:
912 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
912 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
913
913
914 GREEDY_DELIMS = ' =\r\n'
914 GREEDY_DELIMS = ' =\r\n'
915
915
916
916
917 class CompletionSplitter(object):
917 class CompletionSplitter(object):
918 """An object to split an input line in a manner similar to readline.
918 """An object to split an input line in a manner similar to readline.
919
919
920 By having our own implementation, we can expose readline-like completion in
920 By having our own implementation, we can expose readline-like completion in
921 a uniform manner to all frontends. This object only needs to be given the
921 a uniform manner to all frontends. This object only needs to be given the
922 line of text to be split and the cursor position on said line, and it
922 line of text to be split and the cursor position on said line, and it
923 returns the 'word' to be completed on at the cursor after splitting the
923 returns the 'word' to be completed on at the cursor after splitting the
924 entire line.
924 entire line.
925
925
926 What characters are used as splitting delimiters can be controlled by
926 What characters are used as splitting delimiters can be controlled by
927 setting the ``delims`` attribute (this is a property that internally
927 setting the ``delims`` attribute (this is a property that internally
928 automatically builds the necessary regular expression)"""
928 automatically builds the necessary regular expression)"""
929
929
930 # Private interface
930 # Private interface
931
931
932 # A string of delimiter characters. The default value makes sense for
932 # A string of delimiter characters. The default value makes sense for
933 # IPython's most typical usage patterns.
933 # IPython's most typical usage patterns.
934 _delims = DELIMS
934 _delims = DELIMS
935
935
936 # The expression (a normal string) to be compiled into a regular expression
936 # The expression (a normal string) to be compiled into a regular expression
937 # for actual splitting. We store it as an attribute mostly for ease of
937 # for actual splitting. We store it as an attribute mostly for ease of
938 # debugging, since this type of code can be so tricky to debug.
938 # debugging, since this type of code can be so tricky to debug.
939 _delim_expr = None
939 _delim_expr = None
940
940
941 # The regular expression that does the actual splitting
941 # The regular expression that does the actual splitting
942 _delim_re = None
942 _delim_re = None
943
943
944 def __init__(self, delims=None):
944 def __init__(self, delims=None):
945 delims = CompletionSplitter._delims if delims is None else delims
945 delims = CompletionSplitter._delims if delims is None else delims
946 self.delims = delims
946 self.delims = delims
947
947
948 @property
948 @property
949 def delims(self):
949 def delims(self):
950 """Return the string of delimiter characters."""
950 """Return the string of delimiter characters."""
951 return self._delims
951 return self._delims
952
952
953 @delims.setter
953 @delims.setter
954 def delims(self, delims):
954 def delims(self, delims):
955 """Set the delimiters for line splitting."""
955 """Set the delimiters for line splitting."""
956 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
956 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
957 self._delim_re = re.compile(expr)
957 self._delim_re = re.compile(expr)
958 self._delims = delims
958 self._delims = delims
959 self._delim_expr = expr
959 self._delim_expr = expr
960
960
961 def split_line(self, line, cursor_pos=None):
961 def split_line(self, line, cursor_pos=None):
962 """Split a line of text with a cursor at the given position.
962 """Split a line of text with a cursor at the given position.
963 """
963 """
964 l = line if cursor_pos is None else line[:cursor_pos]
964 l = line if cursor_pos is None else line[:cursor_pos]
965 return self._delim_re.split(l)[-1]
965 return self._delim_re.split(l)[-1]
966
966
967
967
968
968
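# Illustrative sketch (editorial addition): with the default delimiters,
# ``CompletionSplitter.split_line`` returns everything after the last
# delimiter preceding the cursor, i.e. the fragment to complete.
_demo_splitter = CompletionSplitter()
assert _demo_splitter.split_line("run(foo.ba") == "foo.ba"            # '(' is a delimiter, '.' is not
assert _demo_splitter.split_line("run(foo.ba", cursor_pos=4) == ""    # cursor sits right after '('
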
969 class Completer(Configurable):
969 class Completer(Configurable):
970
970
971 greedy = Bool(
971 greedy = Bool(
972 False,
972 False,
973 help="""Activate greedy completion.
973 help="""Activate greedy completion.
974
974
975 .. deprecated:: 8.8
975 .. deprecated:: 8.8
976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977
977
978 When enabled in IPython 8.8 or newer, changes configuration as follows:
978 When enabled in IPython 8.8 or newer, changes configuration as follows:
979
979
980 - ``Completer.evaluation = 'unsafe'``
980 - ``Completer.evaluation = 'unsafe'``
981 - ``Completer.auto_close_dict_keys = True``
981 - ``Completer.auto_close_dict_keys = True``
982 """,
982 """,
983 ).tag(config=True)
983 ).tag(config=True)
984
984
985 evaluation = Enum(
985 evaluation = Enum(
986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 default_value="limited",
987 default_value="limited",
988 help="""Policy for code evaluation under completion.
988 help="""Policy for code evaluation under completion.
989
989
990 Successive options enable increasingly eager evaluation for better
990 Successive options enable increasingly eager evaluation for better
991 completion suggestions, including for nested dictionaries, nested lists,
991 completion suggestions, including for nested dictionaries, nested lists,
992 or even results of function calls.
992 or even results of function calls.
993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995
995
996 Allowed values are:
996 Allowed values are:
997
997
998 - ``forbidden``: no evaluation of code is permitted,
998 - ``forbidden``: no evaluation of code is permitted,
999 - ``minimal``: evaluation of literals and access to built-in namespace;
999 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 no item/attribute evaluation, no access to locals/globals,
1000 no item/attribute evaluation, no access to locals/globals,
1001 no evaluation of any operations or comparisons.
1001 no evaluation of any operations or comparisons.
1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 - ``unsafe``: evaluation of all methods and function calls but not of
1006 - ``unsafe``: evaluation of all methods and function calls but not of
1007 syntax with side-effects like `del x`,
1007 syntax with side-effects like `del x`,
1008 - ``dangerous``: completely arbitrary evaluation.
1008 - ``dangerous``: completely arbitrary evaluation.
1009 """,
1009 """,
1010 ).tag(config=True)
1010 ).tag(config=True)
1011
1011
1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 help="Experimental: Use Jedi to generate autocompletions. "
1013 help="Experimental: Use Jedi to generate autocompletions. "
1014 "Default to True if jedi is installed.").tag(config=True)
1014 "Default to True if jedi is installed.").tag(config=True)
1015
1015
1016 jedi_compute_type_timeout = Int(default_value=400,
1016 jedi_compute_type_timeout = Int(default_value=400,
1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1018 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1019 performance by preventing jedi from building its cache.
1019 performance by preventing jedi from building its cache.
1020 """).tag(config=True)
1020 """).tag(config=True)
1021
1021
1022 debug = Bool(default_value=False,
1022 debug = Bool(default_value=False,
1023 help='Enable debug for the Completer. Mostly prints extra '
1023 help='Enable debug for the Completer. Mostly prints extra '
1024 'information for experimental jedi integration.')\
1024 'information for experimental jedi integration.')\
1025 .tag(config=True)
1025 .tag(config=True)
1026
1026
1027 backslash_combining_completions = Bool(True,
1027 backslash_combining_completions = Bool(True,
1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 "Includes completion of latex commands, unicode names, and expanding "
1029 "Includes completion of latex commands, unicode names, and expanding "
1030 "unicode characters back to latex commands.").tag(config=True)
1030 "unicode characters back to latex commands.").tag(config=True)
1031
1031
1032 auto_close_dict_keys = Bool(
1032 auto_close_dict_keys = Bool(
1033 False,
1033 False,
1034 help="""
1034 help="""
1035 Enable auto-closing dictionary keys.
1035 Enable auto-closing dictionary keys.
1036
1036
1037 When enabled, string keys will be suffixed with a final quote
1037 When enabled, string keys will be suffixed with a final quote
1038 (matching the opening quote), tuple keys will also receive a
1038 (matching the opening quote), tuple keys will also receive a
1039 separating comma if needed, and keys which are final will
1039 separating comma if needed, and keys which are final will
1040 receive a closing bracket (``]``).
1040 receive a closing bracket (``]``).
1041 """,
1041 """,
1042 ).tag(config=True)
1042 ).tag(config=True)
1043
1043
1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 """Create a new completer for the command line.
1045 """Create a new completer for the command line.
1046
1046
1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048
1048
1049 If unspecified, the default namespace where completions are performed
1049 If unspecified, the default namespace where completions are performed
1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 given as dictionaries.
1051 given as dictionaries.
1052
1052
1053 An optional second namespace can be given. This allows the completer
1053 An optional second namespace can be given. This allows the completer
1054 to handle cases where both the local and global scopes need to be
1054 to handle cases where both the local and global scopes need to be
1055 distinguished.
1055 distinguished.
1056 """
1056 """
1057
1057
1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 # specific namespace or to use __main__.__dict__. This will allow us
1059 # specific namespace or to use __main__.__dict__. This will allow us
1060 # to bind to __main__.__dict__ at completion time, not now.
1060 # to bind to __main__.__dict__ at completion time, not now.
1061 if namespace is None:
1061 if namespace is None:
1062 self.use_main_ns = True
1062 self.use_main_ns = True
1063 else:
1063 else:
1064 self.use_main_ns = False
1064 self.use_main_ns = False
1065 self.namespace = namespace
1065 self.namespace = namespace
1066
1066
1067 # The global namespace, if given, can be bound directly
1067 # The global namespace, if given, can be bound directly
1068 if global_namespace is None:
1068 if global_namespace is None:
1069 self.global_namespace = {}
1069 self.global_namespace = {}
1070 else:
1070 else:
1071 self.global_namespace = global_namespace
1071 self.global_namespace = global_namespace
1072
1072
1073 self.custom_matchers = []
1073 self.custom_matchers = []
1074
1074
1075 super(Completer, self).__init__(**kwargs)
1075 super(Completer, self).__init__(**kwargs)
1076
1076
1077 def complete(self, text, state):
1077 def complete(self, text, state):
1078 """Return the next possible completion for 'text'.
1078 """Return the next possible completion for 'text'.
1079
1079
1080 This is called successively with state == 0, 1, 2, ... until it
1080 This is called successively with state == 0, 1, 2, ... until it
1081 returns None. The completion should begin with 'text'.
1081 returns None. The completion should begin with 'text'.
1082
1082
1083 """
1083 """
1084 if self.use_main_ns:
1084 if self.use_main_ns:
1085 self.namespace = __main__.__dict__
1085 self.namespace = __main__.__dict__
1086
1086
1087 if state == 0:
1087 if state == 0:
1088 if "." in text:
1088 if "." in text:
1089 self.matches = self.attr_matches(text)
1089 self.matches = self.attr_matches(text)
1090 else:
1090 else:
1091 self.matches = self.global_matches(text)
1091 self.matches = self.global_matches(text)
1092 try:
1092 try:
1093 return self.matches[state]
1093 return self.matches[state]
1094 except IndexError:
1094 except IndexError:
1095 return None
1095 return None
1096
1096
1097 def global_matches(self, text):
1097 def global_matches(self, text):
1098 """Compute matches when text is a simple name.
1098 """Compute matches when text is a simple name.
1099
1099
1100 Return a list of all keywords, built-in functions and names currently
1100 Return a list of all keywords, built-in functions and names currently
1101 defined in self.namespace or self.global_namespace that match.
1101 defined in self.namespace or self.global_namespace that match.
1102
1102
1103 """
1103 """
1104 matches = []
1104 matches = []
1105 match_append = matches.append
1105 match_append = matches.append
1106 n = len(text)
1106 n = len(text)
1107 for lst in [
1107 for lst in [
1108 keyword.kwlist,
1108 keyword.kwlist,
1109 builtin_mod.__dict__.keys(),
1109 builtin_mod.__dict__.keys(),
1110 list(self.namespace.keys()),
1110 list(self.namespace.keys()),
1111 list(self.global_namespace.keys()),
1111 list(self.global_namespace.keys()),
1112 ]:
1112 ]:
1113 for word in lst:
1113 for word in lst:
1114 if word[:n] == text and word != "__builtins__":
1114 if word[:n] == text and word != "__builtins__":
1115 match_append(word)
1115 match_append(word)
1116
1116
1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 shortened = {
1119 shortened = {
1120 "_".join([sub[0] for sub in word.split("_")]): word
1120 "_".join([sub[0] for sub in word.split("_")]): word
1121 for word in lst
1121 for word in lst
1122 if snake_case_re.match(word)
1122 if snake_case_re.match(word)
1123 }
1123 }
1124 for word in shortened.keys():
1124 for word in shortened.keys():
1125 if word[:n] == text and word != "__builtins__":
1125 if word[:n] == text and word != "__builtins__":
1126 match_append(shortened[word])
1126 match_append(shortened[word])
1127 return matches
1127 return matches
1128
1128
1129 def attr_matches(self, text):
1129 def attr_matches(self, text):
1130 """Compute matches when text contains a dot.
1130 """Compute matches when text contains a dot.
1131
1131
1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 evaluatable in self.namespace or self.global_namespace, it will be
1133 evaluatable in self.namespace or self.global_namespace, it will be
1134 evaluated and its attributes (as revealed by dir()) are used as
1134 evaluated and its attributes (as revealed by dir()) are used as
1135 possible completions. (For class instances, class members are
1135 possible completions. (For class instances, class members are
1136 also considered.)
1136 also considered.)
1137
1137
1138 WARNING: this can still invoke arbitrary C code, if an object
1138 WARNING: this can still invoke arbitrary C code, if an object
1139 with a __getattr__ hook is evaluated.
1139 with a __getattr__ hook is evaluated.
1140
1140
1141 """
1141 """
1142 return self._attr_matches(text)[0]
1142 return self._attr_matches(text)[0]
1143
1143
1144 def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
1144 def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
1145 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1145 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1146 if not m2:
1146 if not m2:
1147 return [], ""
1147 return [], ""
1148 expr, attr = m2.group(1, 2)
1148 expr, attr = m2.group(1, 2)
1149
1149
1150 obj = self._evaluate_expr(expr)
1150 obj = self._evaluate_expr(expr)
1151
1151
1152 if obj is not_found:
1152 if obj is not_found:
1153 return [], ""
1153 return [], ""
1154
1154
1155 if self.limit_to__all__ and hasattr(obj, '__all__'):
1155 if self.limit_to__all__ and hasattr(obj, '__all__'):
1156 words = get__all__entries(obj)
1156 words = get__all__entries(obj)
1157 else:
1157 else:
1158 words = dir2(obj)
1158 words = dir2(obj)
1159
1159
1160 try:
1160 try:
1161 words = generics.complete_object(obj, words)
1161 words = generics.complete_object(obj, words)
1162 except TryNext:
1162 except TryNext:
1163 pass
1163 pass
1164 except AssertionError:
1164 except AssertionError:
1165 raise
1165 raise
1166 except Exception:
1166 except Exception:
1167 # Silence errors from completion function
1167 # Silence errors from completion function
1168 pass
1168 pass
1169 # Build match list to return
1169 # Build match list to return
1170 n = len(attr)
1170 n = len(attr)
1171
1171
1172 # Note: ideally we would just return words here and the prefix
1172 # Note: ideally we would just return words here and the prefix
1173 # reconciliator would know that we intend to append to rather than
1173 # reconciliator would know that we intend to append to rather than
1174 # replace the input text; this requires refactoring to return range
1174 # replace the input text; this requires refactoring to return range
1175 # which ought to be replaced (as does jedi).
1175 # which ought to be replaced (as does jedi).
1176 if include_prefix:
1176 if include_prefix:
1177 tokens = _parse_tokens(expr)
1177 tokens = _parse_tokens(expr)
1178 rev_tokens = reversed(tokens)
1178 rev_tokens = reversed(tokens)
1179 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1179 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1180 name_turn = True
1180 name_turn = True
1181
1181
1182 parts = []
1182 parts = []
1183 for token in rev_tokens:
1183 for token in rev_tokens:
1184 if token.type in skip_over:
1184 if token.type in skip_over:
1185 continue
1185 continue
1186 if token.type == tokenize.NAME and name_turn:
1186 if token.type == tokenize.NAME and name_turn:
1187 parts.append(token.string)
1187 parts.append(token.string)
1188 name_turn = False
1188 name_turn = False
1189 elif (
1189 elif (
1190 token.type == tokenize.OP and token.string == "." and not name_turn
1190 token.type == tokenize.OP and token.string == "." and not name_turn
1191 ):
1191 ):
1192 parts.append(token.string)
1192 parts.append(token.string)
1193 name_turn = True
1193 name_turn = True
1194 else:
1194 else:
1195 # short-circuit if not empty nor name token
1195 # short-circuit if not empty nor name token
1196 break
1196 break
1197
1197
1198 prefix_after_space = "".join(reversed(parts))
1198 prefix_after_space = "".join(reversed(parts))
1199 else:
1199 else:
1200 prefix_after_space = ""
1200 prefix_after_space = ""
1201
1201
1202 return (
1202 return (
1203 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1203 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1204 "." + attr,
1204 "." + attr,
1205 )
1205 )
1206
1206
1207 def _evaluate_expr(self, expr):
1207 def _evaluate_expr(self, expr):
1208 obj = not_found
1208 obj = not_found
1209 done = False
1209 done = False
1210 while not done and expr:
1210 while not done and expr:
1211 try:
1211 try:
1212 obj = guarded_eval(
1212 obj = guarded_eval(
1213 expr,
1213 expr,
1214 EvaluationContext(
1214 EvaluationContext(
1215 globals=self.global_namespace,
1215 globals=self.global_namespace,
1216 locals=self.namespace,
1216 locals=self.namespace,
1217 evaluation=self.evaluation,
1217 evaluation=self.evaluation,
1218 ),
1218 ),
1219 )
1219 )
1220 done = True
1220 done = True
1221 except Exception as e:
1221 except Exception as e:
1222 if self.debug:
1222 if self.debug:
1223 print("Evaluation exception", e)
1223 print("Evaluation exception", e)
1224 # trim the expression to remove any invalid prefix
1224 # trim the expression to remove any invalid prefix
1225 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1225 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1226 # where parenthesis is not closed.
1226 # where parenthesis is not closed.
1227 # TODO: make this faster by reusing parts of the computation?
1227 # TODO: make this faster by reusing parts of the computation?
1228 expr = expr[1:]
1228 expr = expr[1:]
1229 return obj
1229 return obj
1230
1230
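# Illustrative sketches (editorial additions) of the ``Completer`` behaviours
# implemented above; the namespaces below are hypothetical.
#
# 1) The readline-style protocol: state 0 computes the matches, later states
#    walk through them, and ``None`` signals exhaustion. ``global_matches``
#    also resolves snake-case abbreviations such as ``d_f`` -> ``data_frame``.
_demo_completer = Completer(namespace={"data_frame": 1})
assert _demo_completer.complete("data", 0) == "data_frame"
assert _demo_completer.complete("data", 1) is None
assert _demo_completer.global_matches("d_f") == ["data_frame"]

# 2) ``_evaluate_expr`` retries with a trimmed expression when the user is in
#    the middle of an unclosed construct: typing ``(d[`` gives ``expr == '(d'``,
#    which fails to parse, so the leading character is dropped and ``d`` is
#    evaluated under the default ``limited`` evaluation policy.
_demo_completer = Completer(namespace={"d": {"key": 1}})
assert _demo_completer._evaluate_expr("(d") == {"key": 1}
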
1231 def get__all__entries(obj):
1231 def get__all__entries(obj):
1232 """returns the strings in the __all__ attribute"""
1232 """returns the strings in the __all__ attribute"""
1233 try:
1233 try:
1234 words = getattr(obj, '__all__')
1234 words = getattr(obj, '__all__')
1235 except:
1235 except:
1236 return []
1236 return []
1237
1237
1238 return [w for w in words if isinstance(w, str)]
1238 return [w for w in words if isinstance(w, str)]
1239
1239
1240
1240
1241 class _DictKeyState(enum.Flag):
1241 class _DictKeyState(enum.Flag):
1242 """Represent state of the key match in context of other possible matches.
1242 """Represent state of the key match in context of other possible matches.
1243
1243
1244 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1244 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1245 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there is no tuple members to add beyond `'b'`.
1245 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there is no tuple members to add beyond `'b'`.
1246 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1246 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1247 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM & END_OF_TUPLE}`
1247 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM & END_OF_TUPLE}`
1248 """
1248 """
1249
1249
1250 BASELINE = 0
1250 BASELINE = 0
1251 END_OF_ITEM = enum.auto()
1251 END_OF_ITEM = enum.auto()
1252 END_OF_TUPLE = enum.auto()
1252 END_OF_TUPLE = enum.auto()
1253 IN_TUPLE = enum.auto()
1253 IN_TUPLE = enum.auto()
1254
1254
1255
1255
1256 def _parse_tokens(c):
1256 def _parse_tokens(c):
1257 """Parse tokens even if there is an error."""
1257 """Parse tokens even if there is an error."""
1258 tokens = []
1258 tokens = []
1259 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1259 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1260 while True:
1260 while True:
1261 try:
1261 try:
1262 tokens.append(next(token_generator))
1262 tokens.append(next(token_generator))
1263 except tokenize.TokenError:
1263 except tokenize.TokenError:
1264 return tokens
1264 return tokens
1265 except StopIteration:
1265 except StopIteration:
1266 return tokens
1266 return tokens
1267
1267
1268
1268
1269 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1269 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1270 """Match any valid Python numeric literal in a prefix of dictionary keys.
1270 """Match any valid Python numeric literal in a prefix of dictionary keys.
1271
1271
1272 References:
1272 References:
1273 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1273 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1274 - https://docs.python.org/3/library/tokenize.html
1274 - https://docs.python.org/3/library/tokenize.html
1275 """
1275 """
1276 if prefix[-1].isspace():
1276 if prefix[-1].isspace():
1277 # if user typed a space we do not have anything to complete
1277 # if user typed a space we do not have anything to complete
1278 # even if there was a valid number token before
1278 # even if there was a valid number token before
1279 return None
1279 return None
1280 tokens = _parse_tokens(prefix)
1280 tokens = _parse_tokens(prefix)
1281 rev_tokens = reversed(tokens)
1281 rev_tokens = reversed(tokens)
1282 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1282 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1283 number = None
1283 number = None
1284 for token in rev_tokens:
1284 for token in rev_tokens:
1285 if token.type in skip_over:
1285 if token.type in skip_over:
1286 continue
1286 continue
1287 if number is None:
1287 if number is None:
1288 if token.type == tokenize.NUMBER:
1288 if token.type == tokenize.NUMBER:
1289 number = token.string
1289 number = token.string
1290 continue
1290 continue
1291 else:
1291 else:
1292 # we did not match a number
1292 # we did not match a number
1293 return None
1293 return None
1294 if token.type == tokenize.OP:
1294 if token.type == tokenize.OP:
1295 if token.string == ",":
1295 if token.string == ",":
1296 break
1296 break
1297 if token.string in {"+", "-"}:
1297 if token.string in {"+", "-"}:
1298 number = token.string + number
1298 number = token.string + number
1299 else:
1299 else:
1300 return None
1300 return None
1301 return number
1301 return number
1302
1302
1303
1303
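# Illustrative sketch (editorial addition): the helper above recognises the
# numeric literal (with an optional sign) at the end of a dict-key prefix and
# rejects anything that is not a number.
assert _match_number_in_dict_key_prefix("0x1") == "0x1"   # hex literal kept verbatim
assert _match_number_in_dict_key_prefix("-12") == "-12"   # sign is re-attached
assert _match_number_in_dict_key_prefix("foo") is None    # not a number
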
1304 _INT_FORMATS = {
1304 _INT_FORMATS = {
1305 "0b": bin,
1305 "0b": bin,
1306 "0o": oct,
1306 "0o": oct,
1307 "0x": hex,
1307 "0x": hex,
1308 }
1308 }
1309
1309
1310
1310
1311 def match_dict_keys(
1311 def match_dict_keys(
1312 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1312 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1313 prefix: str,
1313 prefix: str,
1314 delims: str,
1314 delims: str,
1315 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1315 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1316 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1316 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1317 """Used by dict_key_matches, matching the prefix to a list of keys
1317 """Used by dict_key_matches, matching the prefix to a list of keys
1318
1318
1319 Parameters
1319 Parameters
1320 ----------
1320 ----------
1321 keys
1321 keys
1322 list of keys in dictionary currently being completed.
1322 list of keys in dictionary currently being completed.
1323 prefix
1323 prefix
1324 Part of the text already typed by the user. E.g. `mydict[b'fo`
1324 Part of the text already typed by the user. E.g. `mydict[b'fo`
1325 delims
1325 delims
1326 String of delimiters to consider when finding the current key.
1326 String of delimiters to consider when finding the current key.
1327 extra_prefix : optional
1327 extra_prefix : optional
1328 Part of the text already typed in multi-key index cases. E.g. for
1328 Part of the text already typed in multi-key index cases. E.g. for
1329 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1329 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1330
1330
1331 Returns
1331 Returns
1332 -------
1332 -------
1333 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1333 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1334 ``quote`` being the quote that needs to be used to close the current string,
1334 ``quote`` being the quote that needs to be used to close the current string,
1335 ``token_start`` the position where the replacement should start occurring,
1335 ``token_start`` the position where the replacement should start occurring,
1336 ``matched`` a dictionary whose keys are the replacement/completion strings and
1336 ``matched`` a dictionary whose keys are the replacement/completion strings and
1337 whose values indicate the match state (a `_DictKeyState`).
1337 whose values indicate the match state (a `_DictKeyState`).
1338 """
1338 """
1339 prefix_tuple = extra_prefix if extra_prefix else ()
1339 prefix_tuple = extra_prefix if extra_prefix else ()
1340
1340
1341 prefix_tuple_size = sum(
1341 prefix_tuple_size = sum(
1342 [
1342 [
1343 # for pandas, do not count slices as taking space
1343 # for pandas, do not count slices as taking space
1344 not isinstance(k, slice)
1344 not isinstance(k, slice)
1345 for k in prefix_tuple
1345 for k in prefix_tuple
1346 ]
1346 ]
1347 )
1347 )
1348 text_serializable_types = (str, bytes, int, float, slice)
1348 text_serializable_types = (str, bytes, int, float, slice)
1349
1349
1350 def filter_prefix_tuple(key):
1350 def filter_prefix_tuple(key):
1351 # Reject too short keys
1351 # Reject too short keys
1352 if len(key) <= prefix_tuple_size:
1352 if len(key) <= prefix_tuple_size:
1353 return False
1353 return False
1354 # Reject keys which cannot be serialised to text
1354 # Reject keys which cannot be serialised to text
1355 for k in key:
1355 for k in key:
1356 if not isinstance(k, text_serializable_types):
1356 if not isinstance(k, text_serializable_types):
1357 return False
1357 return False
1358 # Reject keys that do not match the prefix
1358 # Reject keys that do not match the prefix
1359 for k, pt in zip(key, prefix_tuple):
1359 for k, pt in zip(key, prefix_tuple):
1360 if k != pt and not isinstance(pt, slice):
1360 if k != pt and not isinstance(pt, slice):
1361 return False
1361 return False
1362 # All checks passed!
1362 # All checks passed!
1363 return True
1363 return True
1364
1364
1365 filtered_key_is_final: Dict[
1365 filtered_key_is_final: Dict[Union[str, bytes, int, float], _DictKeyState] = (
1366 Union[str, bytes, int, float], _DictKeyState
1366 defaultdict(lambda: _DictKeyState.BASELINE)
1367 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1367 )
1368
1368
1369 for k in keys:
1369 for k in keys:
1370 # If at least one of the matches is not final, mark as undetermined.
1370 # If at least one of the matches is not final, mark as undetermined.
1371 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1371 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1372 # `111` appears final on first match but is not final on the second.
1372 # `111` appears final on first match but is not final on the second.
1373
1373
1374 if isinstance(k, tuple):
1374 if isinstance(k, tuple):
1375 if filter_prefix_tuple(k):
1375 if filter_prefix_tuple(k):
1376 key_fragment = k[prefix_tuple_size]
1376 key_fragment = k[prefix_tuple_size]
1377 filtered_key_is_final[key_fragment] |= (
1377 filtered_key_is_final[key_fragment] |= (
1378 _DictKeyState.END_OF_TUPLE
1378 _DictKeyState.END_OF_TUPLE
1379 if len(k) == prefix_tuple_size + 1
1379 if len(k) == prefix_tuple_size + 1
1380 else _DictKeyState.IN_TUPLE
1380 else _DictKeyState.IN_TUPLE
1381 )
1381 )
1382 elif prefix_tuple_size > 0:
1382 elif prefix_tuple_size > 0:
1383 # we are completing a tuple but this key is not a tuple,
1383 # we are completing a tuple but this key is not a tuple,
1384 # so we should ignore it
1384 # so we should ignore it
1385 pass
1385 pass
1386 else:
1386 else:
1387 if isinstance(k, text_serializable_types):
1387 if isinstance(k, text_serializable_types):
1388 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1388 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1389
1389
1390 filtered_keys = filtered_key_is_final.keys()
1390 filtered_keys = filtered_key_is_final.keys()
1391
1391
1392 if not prefix:
1392 if not prefix:
1393 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1393 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1394
1394
1395 quote_match = re.search("(?:\"|')", prefix)
1395 quote_match = re.search("(?:\"|')", prefix)
1396 is_user_prefix_numeric = False
1396 is_user_prefix_numeric = False
1397
1397
1398 if quote_match:
1398 if quote_match:
1399 quote = quote_match.group()
1399 quote = quote_match.group()
1400 valid_prefix = prefix + quote
1400 valid_prefix = prefix + quote
1401 try:
1401 try:
1402 prefix_str = literal_eval(valid_prefix)
1402 prefix_str = literal_eval(valid_prefix)
1403 except Exception:
1403 except Exception:
1404 return "", 0, {}
1404 return "", 0, {}
1405 else:
1405 else:
1406 # If it does not look like a string, let's assume
1406 # If it does not look like a string, let's assume
1407 # we are dealing with a number or variable.
1407 # we are dealing with a number or variable.
1408 number_match = _match_number_in_dict_key_prefix(prefix)
1408 number_match = _match_number_in_dict_key_prefix(prefix)
1409
1409
1410 # We do not want the key matcher to suggest variable names so we yield:
1410 # We do not want the key matcher to suggest variable names so we yield:
1411 if number_match is None:
1411 if number_match is None:
1412 # The alternative would be to assume that the user forgot the quote
1412 # The alternative would be to assume that the user forgot the quote
1413 # and if the substring matches, suggest adding it at the start.
1413 # and if the substring matches, suggest adding it at the start.
1414 return "", 0, {}
1414 return "", 0, {}
1415
1415
1416 prefix_str = number_match
1416 prefix_str = number_match
1417 is_user_prefix_numeric = True
1417 is_user_prefix_numeric = True
1418 quote = ""
1418 quote = ""
1419
1419
1420 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1420 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1421 token_match = re.search(pattern, prefix, re.UNICODE)
1421 token_match = re.search(pattern, prefix, re.UNICODE)
1422 assert token_match is not None # silence mypy
1422 assert token_match is not None # silence mypy
1423 token_start = token_match.start()
1423 token_start = token_match.start()
1424 token_prefix = token_match.group()
1424 token_prefix = token_match.group()
1425
1425
1426 matched: Dict[str, _DictKeyState] = {}
1426 matched: Dict[str, _DictKeyState] = {}
1427
1427
1428 str_key: Union[str, bytes]
1428 str_key: Union[str, bytes]
1429
1429
1430 for key in filtered_keys:
1430 for key in filtered_keys:
1431 if isinstance(key, (int, float)):
1431 if isinstance(key, (int, float)):
1432 # User typed a number but this key is not a number.
1432 # User typed a number but this key is not a number.
1433 if not is_user_prefix_numeric:
1433 if not is_user_prefix_numeric:
1434 continue
1434 continue
1435 str_key = str(key)
1435 str_key = str(key)
1436 if isinstance(key, int):
1436 if isinstance(key, int):
1437 int_base = prefix_str[:2].lower()
1437 int_base = prefix_str[:2].lower()
1438 # if user typed integer using binary/oct/hex notation:
1438 # if user typed integer using binary/oct/hex notation:
1439 if int_base in _INT_FORMATS:
1439 if int_base in _INT_FORMATS:
1440 int_format = _INT_FORMATS[int_base]
1440 int_format = _INT_FORMATS[int_base]
1441 str_key = int_format(key)
1441 str_key = int_format(key)
1442 else:
1442 else:
1443 # User typed a string but this key is a number.
1443 # User typed a string but this key is a number.
1444 if is_user_prefix_numeric:
1444 if is_user_prefix_numeric:
1445 continue
1445 continue
1446 str_key = key
1446 str_key = key
1447 try:
1447 try:
1448 if not str_key.startswith(prefix_str):
1448 if not str_key.startswith(prefix_str):
1449 continue
1449 continue
1450 except (AttributeError, TypeError, UnicodeError) as e:
1450 except (AttributeError, TypeError, UnicodeError) as e:
1451 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1451 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1452 continue
1452 continue
1453
1453
1454 # reformat remainder of key to begin with prefix
1454 # reformat remainder of key to begin with prefix
1455 rem = str_key[len(prefix_str) :]
1455 rem = str_key[len(prefix_str) :]
1456 # force repr wrapped in '
1456 # force repr wrapped in '
1457 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1457 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1458 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1458 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1459 if quote == '"':
1459 if quote == '"':
1460 # The entered prefix is quoted with ",
1460 # The entered prefix is quoted with ",
1461 # but the match is quoted with '.
1461 # but the match is quoted with '.
1462 # A contained " hence needs escaping for comparison:
1462 # A contained " hence needs escaping for comparison:
1463 rem_repr = rem_repr.replace('"', '\\"')
1463 rem_repr = rem_repr.replace('"', '\\"')
1464
1464
1465 # then reinsert prefix from start of token
1465 # then reinsert prefix from start of token
1466 match = "%s%s" % (token_prefix, rem_repr)
1466 match = "%s%s" % (token_prefix, rem_repr)
1467
1467
1468 matched[match] = filtered_key_is_final[key]
1468 matched[match] = filtered_key_is_final[key]
1469 return quote, token_start, matched
1469 return quote, token_start, matched
1470
1470
1471
1471
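# Illustrative sketch (editorial addition): matching a quoted prefix against a
# hypothetical set of dictionary keys. Only string keys sharing the typed
# prefix survive; the returned quote and offset tell the caller how to splice
# the completion back into the source.
_demo_quote, _demo_start, _demo_matches = match_dict_keys(["foo", "bar", 42], "'f", DELIMS)
assert _demo_quote == "'"               # close the key with the quote the user opened
assert _demo_start == 1                 # replacement starts right after the opening quote
assert list(_demo_matches) == ["foo"]   # values carry the _DictKeyState of each match
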
1472 def cursor_to_position(text:str, line:int, column:int)->int:
1472 def cursor_to_position(text:str, line:int, column:int)->int:
1473 """
1473 """
1474 Convert the (line,column) position of the cursor in text to an offset in a
1474 Convert the (line,column) position of the cursor in text to an offset in a
1475 string.
1475 string.
1476
1476
1477 Parameters
1477 Parameters
1478 ----------
1478 ----------
1479 text : str
1479 text : str
1480 The text in which to calculate the cursor offset
1480 The text in which to calculate the cursor offset
1481 line : int
1481 line : int
1482 Line of the cursor; 0-indexed
1482 Line of the cursor; 0-indexed
1483 column : int
1483 column : int
1484 Column of the cursor 0-indexed
1484 Column of the cursor 0-indexed
1485
1485
1486 Returns
1486 Returns
1487 -------
1487 -------
1488 Position of the cursor in ``text``, 0-indexed.
1488 Position of the cursor in ``text``, 0-indexed.
1489
1489
1490 See Also
1490 See Also
1491 --------
1491 --------
1492 position_to_cursor : reciprocal of this function
1492 position_to_cursor : reciprocal of this function
1493
1493
1494 """
1494 """
1495 lines = text.split('\n')
1495 lines = text.split('\n')
1496 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1496 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1497
1497
1498 return sum(len(l) + 1 for l in lines[:line]) + column
1498 return sum(len(l) + 1 for l in lines[:line]) + column
1499
1499
1500 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1500 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1501 """
1501 """
1502 Convert the position of the cursor in text (0-indexed) to a line
1502 Convert the position of the cursor in text (0-indexed) to a line
1503 number (0-indexed) and a column number (0-indexed) pair
1503 number (0-indexed) and a column number (0-indexed) pair
1504
1504
1505 Position should be a valid position in ``text``.
1505 Position should be a valid position in ``text``.
1506
1506
1507 Parameters
1507 Parameters
1508 ----------
1508 ----------
1509 text : str
1509 text : str
1510 The text in which to calculate the cursor offset
1510 The text in which to calculate the cursor offset
1511 offset : int
1511 offset : int
1512 Position of the cursor in ``text``, 0-indexed.
1512 Position of the cursor in ``text``, 0-indexed.
1513
1513
1514 Returns
1514 Returns
1515 -------
1515 -------
1516 (line, column) : (int, int)
1516 (line, column) : (int, int)
1517 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1517 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1518
1518
1519 See Also
1519 See Also
1520 --------
1520 --------
1521 cursor_to_position : reciprocal of this function
1521 cursor_to_position : reciprocal of this function
1522
1522
1523 """
1523 """
1524
1524
1525 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1525 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1526
1526
1527 before = text[:offset]
1527 before = text[:offset]
1528 blines = before.split('\n') # ! splitlines trims trailing \n
1528 blines = before.split('\n') # ! splitlines trims trailing \n
1529 line = before.count('\n')
1529 line = before.count('\n')
1530 col = len(blines[-1])
1530 col = len(blines[-1])
1531 return line, col
1531 return line, col
1532
1532
1533
1533
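# Illustrative sketch (editorial addition): ``cursor_to_position`` and
# ``position_to_cursor`` are inverses of each other on valid positions.
_demo_text = "ab\ncd"
assert cursor_to_position(_demo_text, 1, 1) == 4    # offset of the "d"
assert position_to_cursor(_demo_text, 4) == (1, 1)  # and back again
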
1534 def _safe_isinstance(obj, module, class_name, *attrs):
1534 def _safe_isinstance(obj, module, class_name, *attrs):
1535 """Checks if obj is an instance of module.class_name if loaded
1535 """Checks if obj is an instance of module.class_name if loaded
1536 """
1536 """
1537 if module in sys.modules:
1537 if module in sys.modules:
1538 m = sys.modules[module]
1538 m = sys.modules[module]
1539 for attr in [class_name, *attrs]:
1539 for attr in [class_name, *attrs]:
1540 m = getattr(m, attr)
1540 m = getattr(m, attr)
1541 return isinstance(obj, m)
1541 return isinstance(obj, m)
1542
1542
1543
1543
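# Illustrative sketch (editorial addition): ``_safe_isinstance`` only performs
# the check when the module is already imported and returns a falsy ``None``
# otherwise (the module name below is hypothetical and assumed absent).
assert _safe_isinstance({}, "builtins", "dict") is True
assert _safe_isinstance({}, "module_that_is_not_imported", "dict") is None
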
1544 @context_matcher()
1544 @context_matcher()
1545 def back_unicode_name_matcher(context: CompletionContext):
1545 def back_unicode_name_matcher(context: CompletionContext):
1546 """Match Unicode characters back to Unicode name
1546 """Match Unicode characters back to Unicode name
1547
1547
1548 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1548 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1549 """
1549 """
1550 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1550 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1551 return _convert_matcher_v1_result_to_v2(
1551 return _convert_matcher_v1_result_to_v2(
1552 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1552 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1553 )
1553 )
1554
1554
1555
1555
1556 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1556 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1557 """Match Unicode characters back to Unicode name
1557 """Match Unicode characters back to Unicode name
1558
1558
1559 This does ``β˜ƒ`` -> ``\\snowman``
1559 This does ``β˜ƒ`` -> ``\\snowman``
1560
1560
1561 Note that snowman is not a valid python3 combining character but will still be expanded;
1561 Note that snowman is not a valid python3 combining character but will still be expanded;
1562 the completion machinery will not recombine it back into the snowman character.
1562 the completion machinery will not recombine it back into the snowman character.
1563
1563
1564 This will also not back-complete standard escape sequences like \\n, \\b ...
1564 This will also not back-complete standard escape sequences like \\n, \\b ...
1565
1565
1566 .. deprecated:: 8.6
1566 .. deprecated:: 8.6
1567 You can use :meth:`back_unicode_name_matcher` instead.
1567 You can use :meth:`back_unicode_name_matcher` instead.
1568
1568
1569 Returns
1569 Returns
1570 -------
1570 -------
1571
1571
1572 Return a tuple with two elements:
1572 Return a tuple with two elements:
1573
1573
1574 - The Unicode character that was matched (preceded with a backslash), or
1574 - The Unicode character that was matched (preceded with a backslash), or
1575 empty string,
1575 empty string,
1576 - a sequence (of 1), name for the match Unicode character, preceded by
1576 - a sequence (of 1), name for the match Unicode character, preceded by
1577 backslash, or empty if no match.
1577 backslash, or empty if no match.
1578 """
1578 """
1579 if len(text)<2:
1579 if len(text)<2:
1580 return '', ()
1580 return '', ()
1581 maybe_slash = text[-2]
1581 maybe_slash = text[-2]
1582 if maybe_slash != '\\':
1582 if maybe_slash != '\\':
1583 return '', ()
1583 return '', ()
1584
1584
1585 char = text[-1]
1585 char = text[-1]
1586 # no expand on quote for completion in strings.
1586 # no expand on quote for completion in strings.
1587 # nor backcomplete standard ascii keys
1587 # nor backcomplete standard ascii keys
1588 if char in string.ascii_letters or char in ('"',"'"):
1588 if char in string.ascii_letters or char in ('"',"'"):
1589 return '', ()
1589 return '', ()
1590 try :
1590 try :
1591 unic = unicodedata.name(char)
1591 unic = unicodedata.name(char)
1592 return '\\'+char,('\\'+unic,)
1592 return '\\'+char,('\\'+unic,)
1593 except KeyError:
1593 except KeyError:
1594 pass
1594 pass
1595 return '', ()
1595 return '', ()
1596
1596
1597
1597
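# Illustrative sketch (editorial addition): a backslash-prefixed character is
# offered back as its Unicode name, while plain ASCII escapes are left alone.
assert back_unicode_name_matches("x = \\β˜ƒ") == ("\\β˜ƒ", ("\\SNOWMAN",))
assert back_unicode_name_matches("x = \\n") == ("", ())
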
1598 @context_matcher()
1598 @context_matcher()
1599 def back_latex_name_matcher(context: CompletionContext):
1599 def back_latex_name_matcher(context: CompletionContext):
1600 """Match latex characters back to unicode name
1600 """Match latex characters back to unicode name
1601
1601
1602 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1602 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1603 """
1603 """
1604 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1604 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1605 return _convert_matcher_v1_result_to_v2(
1605 return _convert_matcher_v1_result_to_v2(
1606 matches, type="latex", fragment=fragment, suppress_if_matches=True
1606 matches, type="latex", fragment=fragment, suppress_if_matches=True
1607 )
1607 )
1608
1608
1609
1609
1610 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1610 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1611 """Match latex characters back to unicode name
1611 """Match latex characters back to unicode name
1612
1612
1613 This does ``\\β„΅`` -> ``\\aleph``
1613 This does ``\\β„΅`` -> ``\\aleph``
1614
1614
1615 .. deprecated:: 8.6
1615 .. deprecated:: 8.6
1616 You can use :meth:`back_latex_name_matcher` instead.
1616 You can use :meth:`back_latex_name_matcher` instead.
1617 """
1617 """
1618 if len(text)<2:
1618 if len(text)<2:
1619 return '', ()
1619 return '', ()
1620 maybe_slash = text[-2]
1620 maybe_slash = text[-2]
1621 if maybe_slash != '\\':
1621 if maybe_slash != '\\':
1622 return '', ()
1622 return '', ()
1623
1623
1624
1624
1625 char = text[-1]
1625 char = text[-1]
1626 # no expand on quote for completion in strings.
1626 # no expand on quote for completion in strings.
1627 # nor backcomplete standard ascii keys
1627 # nor backcomplete standard ascii keys
1628 if char in string.ascii_letters or char in ('"',"'"):
1628 if char in string.ascii_letters or char in ('"',"'"):
1629 return '', ()
1629 return '', ()
1630 try :
1630 try :
1631 latex = reverse_latex_symbol[char]
1631 latex = reverse_latex_symbol[char]
1632 # '\\' replaces the \ as well
1632 # '\\' replaces the \ as well
1633 return '\\'+char,[latex]
1633 return '\\'+char,[latex]
1634 except KeyError:
1634 except KeyError:
1635 pass
1635 pass
1636 return '', ()
1636 return '', ()
1637
1637
1638
1638
1639 def _formatparamchildren(parameter) -> str:
1639 def _formatparamchildren(parameter) -> str:
1640 """
1640 """
1641 Get parameter name and value from Jedi Private API
1641 Get parameter name and value from Jedi Private API
1642
1642
1643 Jedi does not expose a simple way to get `param=value` from its API.
1643 Jedi does not expose a simple way to get `param=value` from its API.
1644
1644
1645 Parameters
1645 Parameters
1646 ----------
1646 ----------
1647 parameter
1647 parameter
1648 Jedi's function `Param`
1648 Jedi's function `Param`
1649
1649
1650 Returns
1650 Returns
1651 -------
1651 -------
1652 A string like 'a', 'b=1', '*args', '**kwargs'
1652 A string like 'a', 'b=1', '*args', '**kwargs'
1653
1653
1654 """
1654 """
1655 description = parameter.description
1655 description = parameter.description
1656 if not description.startswith('param '):
1656 if not description.startswith('param '):
1657 raise ValueError('Jedi function parameter description has changed format. '
1657 raise ValueError('Jedi function parameter description has changed format. '
1658 'Expected "param ...", found %r.' % description)
1658 'Expected "param ...", found %r.' % description)
1659 return description[6:]
1659 return description[6:]
1660
1660
1661 def _make_signature(completion)-> str:
1661 def _make_signature(completion)-> str:
1662 """
1662 """
1663 Make the signature from a jedi completion
1663 Make the signature from a jedi completion
1664
1664
1665 Parameters
1665 Parameters
1666 ----------
1666 ----------
1667 completion : jedi.Completion
1667 completion : jedi.Completion
1668 the completion object; it need not complete to a function type
1668 the completion object; it need not complete to a function type
1669
1669
1670 Returns
1670 Returns
1671 -------
1671 -------
1672 a string consisting of the function signature, with the parentheses but
1672 a string consisting of the function signature, with the parentheses but
1673 without the function name. For example:
1673 without the function name. For example:
1674 `(a, *args, b=1, **kwargs)`
1674 `(a, *args, b=1, **kwargs)`
1675
1675
1676 """
1676 """
1677
1677
1678 # it looks like this might work on jedi 0.17
1678 # it looks like this might work on jedi 0.17
1679 if hasattr(completion, 'get_signatures'):
1679 if hasattr(completion, 'get_signatures'):
1680 signatures = completion.get_signatures()
1680 signatures = completion.get_signatures()
1681 if not signatures:
1681 if not signatures:
1682 return '(?)'
1682 return '(?)'
1683
1683
1684 c0 = completion.get_signatures()[0]
1684 c0 = completion.get_signatures()[0]
1685 return '('+c0.to_string().split('(', maxsplit=1)[1]
1685 return '('+c0.to_string().split('(', maxsplit=1)[1]
1686
1686
1687 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1687 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1688 for p in signature.defined_names()) if f])
1688 for p in signature.defined_names()) if f])
1689
1689
1690
1690
1691 _CompleteResult = Dict[str, MatcherResult]
1691 _CompleteResult = Dict[str, MatcherResult]
1692
1692
1693
1693
1694 DICT_MATCHER_REGEX = re.compile(
1694 DICT_MATCHER_REGEX = re.compile(
1695 r"""(?x)
1695 r"""(?x)
1696 ( # match dict-referring - or any get item object - expression
1696 ( # match dict-referring - or any get item object - expression
1697 .+
1697 .+
1698 )
1698 )
1699 \[ # open bracket
1699 \[ # open bracket
1700 \s* # and optional whitespace
1700 \s* # and optional whitespace
1701 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1701 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1702 # and slices
1702 # and slices
1703 ((?:(?:
1703 ((?:(?:
1704 (?: # closed string
1704 (?: # closed string
1705 [uUbB]? # string prefix (r not handled)
1705 [uUbB]? # string prefix (r not handled)
1706 (?:
1706 (?:
1707 '(?:[^']|(?<!\\)\\')*'
1707 '(?:[^']|(?<!\\)\\')*'
1708 |
1708 |
1709 "(?:[^"]|(?<!\\)\\")*"
1709 "(?:[^"]|(?<!\\)\\")*"
1710 )
1710 )
1711 )
1711 )
1712 |
1712 |
1713 # capture integers and slices
1713 # capture integers and slices
1714 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1714 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1715 |
1715 |
1716 # integer in bin/hex/oct notation
1716 # integer in bin/hex/oct notation
1717 0[bBxXoO]_?(?:\w|\d)+
1717 0[bBxXoO]_?(?:\w|\d)+
1718 )
1718 )
1719 \s*,\s*
1719 \s*,\s*
1720 )*)
1720 )*)
1721 ((?:
1721 ((?:
1722 (?: # unclosed string
1722 (?: # unclosed string
1723 [uUbB]? # string prefix (r not handled)
1723 [uUbB]? # string prefix (r not handled)
1724 (?:
1724 (?:
1725 '(?:[^']|(?<!\\)\\')*
1725 '(?:[^']|(?<!\\)\\')*
1726 |
1726 |
1727 "(?:[^"]|(?<!\\)\\")*
1727 "(?:[^"]|(?<!\\)\\")*
1728 )
1728 )
1729 )
1729 )
1730 |
1730 |
1731 # unfinished integer
1731 # unfinished integer
1732 (?:[-+]?\d+)
1732 (?:[-+]?\d+)
1733 |
1733 |
1734 # integer in bin/hex/oct notation
1734 # integer in bin/hex/oct notation
1735 0[bBxXoO]_?(?:\w|\d)+
1735 0[bBxXoO]_?(?:\w|\d)+
1736 )
1736 )
1737 )?
1737 )?
1738 $
1738 $
1739 """
1739 """
1740 )
1740 )
1741
1741
1742
1742
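# Illustrative sketch (editorial addition): the three groups captured by
# ``DICT_MATCHER_REGEX`` for a partially typed, multi-key dictionary index:
# the object expression, the already-closed keys, and the unfinished key.
_demo_match = DICT_MATCHER_REGEX.match("data['foo', 'ba")
assert _demo_match is not None
assert _demo_match.groups() == ("data", "'foo', ", "'ba")
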
1743 def _convert_matcher_v1_result_to_v2(
1743 def _convert_matcher_v1_result_to_v2(
1744 matches: Sequence[str],
1744 matches: Sequence[str],
1745 type: str,
1745 type: str,
1746 fragment: Optional[str] = None,
1746 fragment: Optional[str] = None,
1747 suppress_if_matches: bool = False,
1747 suppress_if_matches: bool = False,
1748 ) -> SimpleMatcherResult:
1748 ) -> SimpleMatcherResult:
1749 """Utility to help with transition"""
1749 """Utility to help with transition"""
1750 result = {
1750 result = {
1751 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1751 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1752 "suppress": (True if matches else False) if suppress_if_matches else False,
1752 "suppress": (True if matches else False) if suppress_if_matches else False,
1753 }
1753 }
1754 if fragment is not None:
1754 if fragment is not None:
1755 result["matched_fragment"] = fragment
1755 result["matched_fragment"] = fragment
1756 return cast(SimpleMatcherResult, result)
1756 return cast(SimpleMatcherResult, result)
1757
1757
1758
1758
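# Illustrative sketch (editorial addition): converting a legacy (v1) list of
# string matches into the v2 ``SimpleMatcherResult`` structure used by the
# matcher API; the matches below are hypothetical magic names.
_demo_v2 = _convert_matcher_v1_result_to_v2(["%time", "%timeit"], type="magic", fragment="%ti")
assert [c.text for c in _demo_v2["completions"]] == ["%time", "%timeit"]
assert _demo_v2["matched_fragment"] == "%ti"
assert _demo_v2["suppress"] is False
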
1759 class IPCompleter(Completer):
1759 class IPCompleter(Completer):
1760 """Extension of the completer class with IPython-specific features"""
1760 """Extension of the completer class with IPython-specific features"""
1761
1761
1762 @observe('greedy')
1762 @observe('greedy')
1763 def _greedy_changed(self, change):
1763 def _greedy_changed(self, change):
1764 """update the splitter and readline delims when greedy is changed"""
1764 """update the splitter and readline delims when greedy is changed"""
1765 if change["new"]:
1765 if change["new"]:
1766 self.evaluation = "unsafe"
1766 self.evaluation = "unsafe"
1767 self.auto_close_dict_keys = True
1767 self.auto_close_dict_keys = True
1768 self.splitter.delims = GREEDY_DELIMS
1768 self.splitter.delims = GREEDY_DELIMS
1769 else:
1769 else:
1770 self.evaluation = "limited"
1770 self.evaluation = "limited"
1771 self.auto_close_dict_keys = False
1771 self.auto_close_dict_keys = False
1772 self.splitter.delims = DELIMS
1772 self.splitter.delims = DELIMS
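# A minimal configuration sketch (hypothetical ``ipython_config.py`` snippet)
# illustrating what the observer above does when ``greedy`` is toggled:
#
#     c = get_config()
#     c.IPCompleter.greedy = True
#     # via _greedy_changed() this is equivalent to:
#     #   c.IPCompleter.evaluation = "unsafe"
#     #   c.IPCompleter.auto_close_dict_keys = True
#     #   (and the splitter delimiters switch to GREEDY_DELIMS)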
1773
1773
1774 dict_keys_only = Bool(
1774 dict_keys_only = Bool(
1775 False,
1775 False,
1776 help="""
1776 help="""
1777 Whether to show dict key matches only.
1777 Whether to show dict key matches only.
1778
1778
1779 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1779 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1780 """,
1780 """,
1781 )
1781 )
1782
1782
1783 suppress_competing_matchers = UnionTrait(
1783 suppress_competing_matchers = UnionTrait(
1784 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1784 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1785 default_value=None,
1785 default_value=None,
1786 help="""
1786 help="""
1787 Whether to suppress completions from other *Matchers*.
1787 Whether to suppress completions from other *Matchers*.
1788
1788
1789 When set to ``None`` (default) the matchers will attempt to auto-detect
1789 When set to ``None`` (default) the matchers will attempt to auto-detect
1790 whether suppression of other matchers is desirable. For example, at
1790 whether suppression of other matchers is desirable. For example, at
1791 the beginning of a line followed by `%` we expect a magic completion
1791 the beginning of a line followed by `%` we expect a magic completion
1792 to be the only applicable option, and after ``my_dict['`` we usually
1792 to be the only applicable option, and after ``my_dict['`` we usually
1793 expect a completion with an existing dictionary key.
1793 expect a completion with an existing dictionary key.
1794
1794
1795 If you want to disable this heuristic and see completions from all matchers,
1795 If you want to disable this heuristic and see completions from all matchers,
1796 set ``IPCompleter.suppress_competing_matchers = False``.
1796 set ``IPCompleter.suppress_competing_matchers = False``.
1797 To disable the heuristic for specific matchers provide a dictionary mapping:
1797 To disable the heuristic for specific matchers provide a dictionary mapping:
1798 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1798 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1799
1799
1800 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1800 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1801 completions to the set of matchers with the highest priority;
1801 completions to the set of matchers with the highest priority;
1802 this is equivalent to setting ``IPCompleter.merge_completions = False`` and
1803 can be beneficial for performance, but will sometimes omit relevant
1803 can be beneficial for performance, but will sometimes omit relevant
1804 candidates from matchers further down the priority list.
1804 candidates from matchers further down the priority list.
1805 """,
1805 """,
1806 ).tag(config=True)
1806 ).tag(config=True)
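# A configuration sketch (hypothetical profile snippet) for the trait above:
#
#     c = get_config()
#     # never suppress: show completions from all matchers
#     c.IPCompleter.suppress_competing_matchers = False
#     # or disable the auto-detection heuristic for one matcher only
#     c.IPCompleter.suppress_competing_matchers = {
#         "IPCompleter.dict_key_matcher": False,
#     }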
1807
1807
1808 merge_completions = Bool(
1808 merge_completions = Bool(
1809 True,
1809 True,
1810 help="""Whether to merge completion results into a single list
1810 help="""Whether to merge completion results into a single list
1811
1811
1812 If False, only the completion results from the first non-empty
1812 If False, only the completion results from the first non-empty
1813 completer will be returned.
1813 completer will be returned.
1814
1814
1815 As of version 8.6.0, setting the value to ``False`` is an alias for:
1816 ``IPCompleter.suppress_competing_matchers = True``.
1817 """,
1817 """,
1818 ).tag(config=True)
1818 ).tag(config=True)
1819
1819
1820 disable_matchers = ListTrait(
1820 disable_matchers = ListTrait(
1821 Unicode(),
1821 Unicode(),
1822 help="""List of matchers to disable.
1822 help="""List of matchers to disable.
1823
1823
1824 The list should contain matcher identifiers (see :any:`completion_matcher`).
1824 The list should contain matcher identifiers (see :any:`completion_matcher`).
1825 """,
1825 """,
1826 ).tag(config=True)
1826 ).tag(config=True)
1827
1827
1828 omit__names = Enum(
1828 omit__names = Enum(
1829 (0, 1, 2),
1829 (0, 1, 2),
1830 default_value=2,
1830 default_value=2,
1831 help="""Instruct the completer to omit private method names
1831 help="""Instruct the completer to omit private method names
1832
1832
1833 Specifically, when completing on ``object.<tab>``.
1833 Specifically, when completing on ``object.<tab>``.
1834
1834
1835 When 2 [default]: all names that start with '_' will be excluded.
1835 When 2 [default]: all names that start with '_' will be excluded.
1836
1836
1837 When 1: all 'magic' names (``__foo__``) will be excluded.
1837 When 1: all 'magic' names (``__foo__``) will be excluded.
1838
1838
1839 When 0: nothing will be excluded.
1839 When 0: nothing will be excluded.
1840 """
1840 """
1841 ).tag(config=True)
1841 ).tag(config=True)
1842 limit_to__all__ = Bool(False,
1842 limit_to__all__ = Bool(False,
1843 help="""
1843 help="""
1844 DEPRECATED as of version 5.0.
1844 DEPRECATED as of version 5.0.
1845
1845
1846 Instruct the completer to use __all__ for the completion
1846 Instruct the completer to use __all__ for the completion
1847
1847
1848 Specifically, when completing on ``object.<tab>``.
1848 Specifically, when completing on ``object.<tab>``.
1849
1849
1850 When True: only those names in obj.__all__ will be included.
1850 When True: only those names in obj.__all__ will be included.
1851
1851
1852 When False [default]: the __all__ attribute is ignored
1852 When False [default]: the __all__ attribute is ignored
1853 """,
1853 """,
1854 ).tag(config=True)
1854 ).tag(config=True)
1855
1855
1856 profile_completions = Bool(
1856 profile_completions = Bool(
1857 default_value=False,
1857 default_value=False,
1858 help="If True, emit profiling data for completion subsystem using cProfile."
1858 help="If True, emit profiling data for completion subsystem using cProfile."
1859 ).tag(config=True)
1859 ).tag(config=True)
1860
1860
1861 profiler_output_dir = Unicode(
1861 profiler_output_dir = Unicode(
1862 default_value=".completion_profiles",
1862 default_value=".completion_profiles",
1863 help="Template for path at which to output profile data for completions."
1863 help="Template for path at which to output profile data for completions."
1864 ).tag(config=True)
1864 ).tag(config=True)
1865
1865
1866 @observe('limit_to__all__')
1866 @observe('limit_to__all__')
1867 def _limit_to_all_changed(self, change):
1867 def _limit_to_all_changed(self, change):
1868 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1868 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1869 'value has been deprecated since IPython 5.0, will be made to have '
1869 'value has been deprecated since IPython 5.0, will be made to have '
1870 'no effect and then removed in a future version of IPython.',
1871 UserWarning)
1871 UserWarning)
1872
1872
1873 def __init__(
1873 def __init__(
1874 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1874 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1875 ):
1875 ):
1876 """IPCompleter() -> completer
1876 """IPCompleter() -> completer
1877
1877
1878 Return a completer object.
1878 Return a completer object.
1879
1879
1880 Parameters
1880 Parameters
1881 ----------
1881 ----------
1882 shell
1882 shell
1883 a pointer to the ipython shell itself. This is needed
1883 a pointer to the ipython shell itself. This is needed
1884 because this completer knows about magic functions, and those can
1884 because this completer knows about magic functions, and those can
1885 only be accessed via the ipython instance.
1885 only be accessed via the ipython instance.
1886 namespace : dict, optional
1886 namespace : dict, optional
1887 an optional dict where completions are performed.
1887 an optional dict where completions are performed.
1888 global_namespace : dict, optional
1888 global_namespace : dict, optional
1889 secondary optional dict for completions, to
1889 secondary optional dict for completions, to
1890 handle cases (such as IPython embedded inside functions) where
1890 handle cases (such as IPython embedded inside functions) where
1891 both Python scopes are visible.
1891 both Python scopes are visible.
1892 config : Config
1892 config : Config
1893 traitlets Config object
1894 **kwargs
1894 **kwargs
1895 passed to super class unmodified.
1895 passed to super class unmodified.
1896 """
1896 """
1897
1897
1898 self.magic_escape = ESC_MAGIC
1898 self.magic_escape = ESC_MAGIC
1899 self.splitter = CompletionSplitter()
1899 self.splitter = CompletionSplitter()
1900
1900
1901 # _greedy_changed() depends on splitter and readline being defined:
1901 # _greedy_changed() depends on splitter and readline being defined:
1902 super().__init__(
1902 super().__init__(
1903 namespace=namespace,
1903 namespace=namespace,
1904 global_namespace=global_namespace,
1904 global_namespace=global_namespace,
1905 config=config,
1905 config=config,
1906 **kwargs,
1906 **kwargs,
1907 )
1907 )
1908
1908
1909 # List where completion matches will be stored
1909 # List where completion matches will be stored
1910 self.matches = []
1910 self.matches = []
1911 self.shell = shell
1911 self.shell = shell
1912 # Regexp to split filenames with spaces in them
1912 # Regexp to split filenames with spaces in them
1913 self.space_name_re = re.compile(r'([^\\] )')
1913 self.space_name_re = re.compile(r'([^\\] )')
1914 # Hold a local ref. to glob.glob for speed
1914 # Hold a local ref. to glob.glob for speed
1915 self.glob = glob.glob
1915 self.glob = glob.glob
1916
1916
1917 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1917 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1918 # buffers, to avoid completion problems.
1918 # buffers, to avoid completion problems.
1919 term = os.environ.get('TERM','xterm')
1919 term = os.environ.get('TERM','xterm')
1920 self.dumb_terminal = term in ['dumb','emacs']
1920 self.dumb_terminal = term in ['dumb','emacs']
1921
1921
1922 # Special handling of backslashes needed in win32 platforms
1922 # Special handling of backslashes needed in win32 platforms
1923 if sys.platform == "win32":
1923 if sys.platform == "win32":
1924 self.clean_glob = self._clean_glob_win32
1924 self.clean_glob = self._clean_glob_win32
1925 else:
1925 else:
1926 self.clean_glob = self._clean_glob
1926 self.clean_glob = self._clean_glob
1927
1927
1928 #regexp to parse docstring for function signature
1928 #regexp to parse docstring for function signature
1929 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1929 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1930 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1930 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1931 #use this if positional argument name is also needed
1931 #use this if positional argument name is also needed
1932 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1932 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1933
1933
1934 self.magic_arg_matchers = [
1934 self.magic_arg_matchers = [
1935 self.magic_config_matcher,
1935 self.magic_config_matcher,
1936 self.magic_color_matcher,
1936 self.magic_color_matcher,
1937 ]
1937 ]
1938
1938
1939 # This is set externally by InteractiveShell
1939 # This is set externally by InteractiveShell
1940 self.custom_completers = None
1940 self.custom_completers = None
1941
1941
1942 # This is a list of names of unicode characters that can be completed
1942 # This is a list of names of unicode characters that can be completed
1943 # into their corresponding unicode value. The list is large, so we
1943 # into their corresponding unicode value. The list is large, so we
1944 # lazily initialize it on first use. Consuming code should access this
1944 # lazily initialize it on first use. Consuming code should access this
1945 # attribute through the `unicode_names` property.
1946 self._unicode_names = None
1946 self._unicode_names = None
1947
1947
1948 self._backslash_combining_matchers = [
1948 self._backslash_combining_matchers = [
1949 self.latex_name_matcher,
1949 self.latex_name_matcher,
1950 self.unicode_name_matcher,
1950 self.unicode_name_matcher,
1951 back_latex_name_matcher,
1951 back_latex_name_matcher,
1952 back_unicode_name_matcher,
1952 back_unicode_name_matcher,
1953 self.fwd_unicode_matcher,
1953 self.fwd_unicode_matcher,
1954 ]
1954 ]
1955
1955
1956 if not self.backslash_combining_completions:
1956 if not self.backslash_combining_completions:
1957 for matcher in self._backslash_combining_matchers:
1957 for matcher in self._backslash_combining_matchers:
1958 self.disable_matchers.append(_get_matcher_id(matcher))
1958 self.disable_matchers.append(_get_matcher_id(matcher))
1959
1959
1960 if not self.merge_completions:
1960 if not self.merge_completions:
1961 self.suppress_competing_matchers = True
1961 self.suppress_competing_matchers = True
1962
1962
1963 @property
1963 @property
1964 def matchers(self) -> List[Matcher]:
1964 def matchers(self) -> List[Matcher]:
1965 """All active matcher routines for completion"""
1965 """All active matcher routines for completion"""
1966 if self.dict_keys_only:
1966 if self.dict_keys_only:
1967 return [self.dict_key_matcher]
1967 return [self.dict_key_matcher]
1968
1968
1969 if self.use_jedi:
1969 if self.use_jedi:
1970 return [
1970 return [
1971 *self.custom_matchers,
1971 *self.custom_matchers,
1972 *self._backslash_combining_matchers,
1972 *self._backslash_combining_matchers,
1973 *self.magic_arg_matchers,
1973 *self.magic_arg_matchers,
1974 self.custom_completer_matcher,
1974 self.custom_completer_matcher,
1975 self.magic_matcher,
1975 self.magic_matcher,
1976 self._jedi_matcher,
1976 self._jedi_matcher,
1977 self.dict_key_matcher,
1977 self.dict_key_matcher,
1978 self.file_matcher,
1978 self.file_matcher,
1979 ]
1979 ]
1980 else:
1980 else:
1981 return [
1981 return [
1982 *self.custom_matchers,
1982 *self.custom_matchers,
1983 *self._backslash_combining_matchers,
1983 *self._backslash_combining_matchers,
1984 *self.magic_arg_matchers,
1984 *self.magic_arg_matchers,
1985 self.custom_completer_matcher,
1985 self.custom_completer_matcher,
1986 self.dict_key_matcher,
1986 self.dict_key_matcher,
1987 self.magic_matcher,
1987 self.magic_matcher,
1988 self.python_matcher,
1988 self.python_matcher,
1989 self.file_matcher,
1989 self.file_matcher,
1990 self.python_func_kw_matcher,
1990 self.python_func_kw_matcher,
1991 ]
1991 ]
1992
1992
1993 def all_completions(self, text:str) -> List[str]:
1993 def all_completions(self, text:str) -> List[str]:
1994 """
1994 """
1995 Wrapper around the completion methods for the benefit of emacs.
1995 Wrapper around the completion methods for the benefit of emacs.
1996 """
1996 """
1997 prefix = text.rpartition('.')[0]
1997 prefix = text.rpartition('.')[0]
1998 with provisionalcompleter():
1998 with provisionalcompleter():
1999 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1999 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
2000 for c in self.completions(text, len(text))]
2000 for c in self.completions(text, len(text))]
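# (the ``return self.complete(text)[1]`` below is unreachable: the ``with``
# block above always returns or propagates an exception)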
2001
2001
2002 return self.complete(text)[1]
2002 return self.complete(text)[1]
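# A rough usage sketch (interactive session assumed; the exact matches depend
# on the user namespace, so the outputs are illustrative only):
#
#     ip = get_ipython()
#     ip.Completer.all_completions("ran")      # e.g. ['range', ...]
#     ip.Completer.all_completions("os.pa")    # e.g. ['os.path', ...]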
2003
2003
2004 def _clean_glob(self, text:str):
2004 def _clean_glob(self, text:str):
2005 return self.glob("%s*" % text)
2005 return self.glob("%s*" % text)
2006
2006
2007 def _clean_glob_win32(self, text:str):
2007 def _clean_glob_win32(self, text:str):
2008 return [f.replace("\\","/")
2008 return [f.replace("\\","/")
2009 for f in self.glob("%s*" % text)]
2009 for f in self.glob("%s*" % text)]
2010
2010
2011 @context_matcher()
2011 @context_matcher()
2012 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2012 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2013 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2013 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2014 matches = self.file_matches(context.token)
2014 matches = self.file_matches(context.token)
2015 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2015 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2016 # starts with `/home/`, `C:\`, etc)
2016 # starts with `/home/`, `C:\`, etc)
2017 return _convert_matcher_v1_result_to_v2(matches, type="path")
2017 return _convert_matcher_v1_result_to_v2(matches, type="path")
2018
2018
2019 def file_matches(self, text: str) -> List[str]:
2019 def file_matches(self, text: str) -> List[str]:
2020 """Match filenames, expanding ~USER type strings.
2020 """Match filenames, expanding ~USER type strings.
2021
2021
2022 Most of the seemingly convoluted logic in this completer is an
2022 Most of the seemingly convoluted logic in this completer is an
2023 attempt to handle filenames with spaces in them. And yet it's not
2023 attempt to handle filenames with spaces in them. And yet it's not
2024 quite perfect, because Python's readline doesn't expose all of the
2024 quite perfect, because Python's readline doesn't expose all of the
2025 GNU readline details needed for this to be done correctly.
2025 GNU readline details needed for this to be done correctly.
2026
2026
2027 For a filename with a space in it, the printed completions will be
2027 For a filename with a space in it, the printed completions will be
2028 only the parts after what's already been typed (instead of the
2028 only the parts after what's already been typed (instead of the
2029 full completions, as is normally done). I don't think with the
2029 full completions, as is normally done). I don't think with the
2030 current (as of Python 2.3) Python readline it's possible to do
2030 current (as of Python 2.3) Python readline it's possible to do
2031 better.
2031 better.
2032
2032
2033 .. deprecated:: 8.6
2033 .. deprecated:: 8.6
2034 You can use :meth:`file_matcher` instead.
2034 You can use :meth:`file_matcher` instead.
2035 """
2035 """
2036
2036
2037 # chars that require escaping with backslash - i.e. chars
2037 # chars that require escaping with backslash - i.e. chars
2038 # that readline treats incorrectly as delimiters, but we
2038 # that readline treats incorrectly as delimiters, but we
2039 # don't want to treat as delimiters in filename matching
2039 # don't want to treat as delimiters in filename matching
2040 # when escaped with backslash
2040 # when escaped with backslash
2041 if text.startswith('!'):
2041 if text.startswith('!'):
2042 text = text[1:]
2042 text = text[1:]
2043 text_prefix = u'!'
2043 text_prefix = u'!'
2044 else:
2044 else:
2045 text_prefix = u''
2045 text_prefix = u''
2046
2046
2047 text_until_cursor = self.text_until_cursor
2047 text_until_cursor = self.text_until_cursor
2048 # track strings with open quotes
2048 # track strings with open quotes
2049 open_quotes = has_open_quotes(text_until_cursor)
2049 open_quotes = has_open_quotes(text_until_cursor)
2050
2050
2051 if '(' in text_until_cursor or '[' in text_until_cursor:
2051 if '(' in text_until_cursor or '[' in text_until_cursor:
2052 lsplit = text
2052 lsplit = text
2053 else:
2053 else:
2054 try:
2054 try:
2055 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2055 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2056 lsplit = arg_split(text_until_cursor)[-1]
2056 lsplit = arg_split(text_until_cursor)[-1]
2057 except ValueError:
2057 except ValueError:
2058 # typically an unmatched ", or backslash without escaped char.
2058 # typically an unmatched ", or backslash without escaped char.
2059 if open_quotes:
2059 if open_quotes:
2060 lsplit = text_until_cursor.split(open_quotes)[-1]
2060 lsplit = text_until_cursor.split(open_quotes)[-1]
2061 else:
2061 else:
2062 return []
2062 return []
2063 except IndexError:
2063 except IndexError:
2064 # tab pressed on empty line
2064 # tab pressed on empty line
2065 lsplit = ""
2065 lsplit = ""
2066
2066
2067 if not open_quotes and lsplit != protect_filename(lsplit):
2067 if not open_quotes and lsplit != protect_filename(lsplit):
2068 # if protectables are found, do matching on the whole escaped name
2068 # if protectables are found, do matching on the whole escaped name
2069 has_protectables = True
2069 has_protectables = True
2070 text0,text = text,lsplit
2070 text0,text = text,lsplit
2071 else:
2071 else:
2072 has_protectables = False
2072 has_protectables = False
2073 text = os.path.expanduser(text)
2073 text = os.path.expanduser(text)
2074
2074
2075 if text == "":
2075 if text == "":
2076 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2076 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2077
2077
2078 # Compute the matches from the filesystem
2078 # Compute the matches from the filesystem
2079 if sys.platform == 'win32':
2079 if sys.platform == 'win32':
2080 m0 = self.clean_glob(text)
2080 m0 = self.clean_glob(text)
2081 else:
2081 else:
2082 m0 = self.clean_glob(text.replace('\\', ''))
2082 m0 = self.clean_glob(text.replace('\\', ''))
2083
2083
2084 if has_protectables:
2084 if has_protectables:
2085 # If we had protectables, we need to revert our changes to the
2085 # If we had protectables, we need to revert our changes to the
2086 # beginning of filename so that we don't double-write the part
2086 # beginning of filename so that we don't double-write the part
2087 # of the filename we have so far
2087 # of the filename we have so far
2088 len_lsplit = len(lsplit)
2088 len_lsplit = len(lsplit)
2089 matches = [text_prefix + text0 +
2089 matches = [text_prefix + text0 +
2090 protect_filename(f[len_lsplit:]) for f in m0]
2090 protect_filename(f[len_lsplit:]) for f in m0]
2091 else:
2091 else:
2092 if open_quotes:
2092 if open_quotes:
2093 # if we have a string with an open quote, we don't need to
2093 # if we have a string with an open quote, we don't need to
2094 # protect the names beyond the quote (and we _shouldn't_, as
2094 # protect the names beyond the quote (and we _shouldn't_, as
2095 # it would cause bugs when the filesystem call is made).
2095 # it would cause bugs when the filesystem call is made).
2096 matches = m0 if sys.platform == "win32" else\
2096 matches = m0 if sys.platform == "win32" else\
2097 [protect_filename(f, open_quotes) for f in m0]
2097 [protect_filename(f, open_quotes) for f in m0]
2098 else:
2098 else:
2099 matches = [text_prefix +
2099 matches = [text_prefix +
2100 protect_filename(f) for f in m0]
2100 protect_filename(f) for f in m0]
2101
2101
2102 # Mark directories in input list by appending '/' to their names.
2102 # Mark directories in input list by appending '/' to their names.
2103 return [x+'/' if os.path.isdir(x) else x for x in matches]
2103 return [x+'/' if os.path.isdir(x) else x for x in matches]
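# A behavioural sketch of the matcher above (file names hypothetical; results
# depend on the current working directory and platform):
#
#     # with files "notes.txt" and "my file.txt" and a directory "src" present,
#     # completing after typing  "no"    offers  notes.txt
#     # completing after typing  "my\ "  offers  my\ file.txt  (the space stays escaped)
#     # directories are offered with a trailing '/' appended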
2104
2104
2105 @context_matcher()
2105 @context_matcher()
2106 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2106 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2107 """Match magics."""
2107 """Match magics."""
2108 text = context.token
2108 text = context.token
2109 matches = self.magic_matches(text)
2109 matches = self.magic_matches(text)
2110 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2110 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2111 is_magic_prefix = len(text) > 0 and text[0] == "%"
2111 is_magic_prefix = len(text) > 0 and text[0] == "%"
2112 result["suppress"] = is_magic_prefix and bool(result["completions"])
2112 result["suppress"] = is_magic_prefix and bool(result["completions"])
2113 return result
2113 return result
2114
2114
2115 def magic_matches(self, text: str):
2115 def magic_matches(self, text: str):
2116 """Match magics.
2116 """Match magics.
2117
2117
2118 .. deprecated:: 8.6
2118 .. deprecated:: 8.6
2119 You can use :meth:`magic_matcher` instead.
2119 You can use :meth:`magic_matcher` instead.
2120 """
2120 """
2121 # Get all shell magics now rather than statically, so magics loaded at
2121 # Get all shell magics now rather than statically, so magics loaded at
2122 # runtime show up too.
2122 # runtime show up too.
2123 lsm = self.shell.magics_manager.lsmagic()
2123 lsm = self.shell.magics_manager.lsmagic()
2124 line_magics = lsm['line']
2124 line_magics = lsm['line']
2125 cell_magics = lsm['cell']
2125 cell_magics = lsm['cell']
2126 pre = self.magic_escape
2126 pre = self.magic_escape
2127 pre2 = pre+pre
2127 pre2 = pre+pre
2128
2128
2129 explicit_magic = text.startswith(pre)
2129 explicit_magic = text.startswith(pre)
2130
2130
2131 # Completion logic:
2131 # Completion logic:
2132 # - user gives %%: only do cell magics
2132 # - user gives %%: only do cell magics
2133 # - user gives %: do both line and cell magics
2133 # - user gives %: do both line and cell magics
2134 # - no prefix: do both
2134 # - no prefix: do both
2135 # In other words, line magics are skipped if the user gives %% explicitly
2135 # In other words, line magics are skipped if the user gives %% explicitly
2136 #
2136 #
2137 # We also exclude magics that match any currently visible names:
2137 # We also exclude magics that match any currently visible names:
2138 # https://github.com/ipython/ipython/issues/4877, unless the user has
2138 # https://github.com/ipython/ipython/issues/4877, unless the user has
2139 # typed a %:
2139 # typed a %:
2140 # https://github.com/ipython/ipython/issues/10754
2140 # https://github.com/ipython/ipython/issues/10754
2141 bare_text = text.lstrip(pre)
2141 bare_text = text.lstrip(pre)
2142 global_matches = self.global_matches(bare_text)
2142 global_matches = self.global_matches(bare_text)
2143 if not explicit_magic:
2143 if not explicit_magic:
2144 def matches(magic):
2144 def matches(magic):
2145 """
2145 """
2146 Filter magics, in particular remove magics that match
2146 Filter magics, in particular remove magics that match
2147 a name present in global namespace.
2147 a name present in global namespace.
2148 """
2148 """
2149 return ( magic.startswith(bare_text) and
2149 return ( magic.startswith(bare_text) and
2150 magic not in global_matches )
2150 magic not in global_matches )
2151 else:
2151 else:
2152 def matches(magic):
2152 def matches(magic):
2153 return magic.startswith(bare_text)
2153 return magic.startswith(bare_text)
2154
2154
2155 comp = [ pre2+m for m in cell_magics if matches(m)]
2155 comp = [ pre2+m for m in cell_magics if matches(m)]
2156 if not text.startswith(pre2):
2156 if not text.startswith(pre2):
2157 comp += [ pre+m for m in line_magics if matches(m)]
2157 comp += [ pre+m for m in line_magics if matches(m)]
2158
2158
2159 return comp
2159 return comp
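# An illustrative sketch of the prefix logic above (``completer`` stands for an
# IPCompleter instance; the available magics depend on the running shell, so the
# lists are examples only):
#
#     completer.magic_matches("%%ti")   # cell magics only, e.g. ['%%time', '%%timeit']
#     completer.magic_matches("%ti")    # cell *and* line magics, e.g. ['%%time', '%%timeit', '%time', '%timeit']
#     completer.magic_matches("ti")     # same as above, minus magics shadowed by names
#                                       # already present in the user namespace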
2160
2160
2161 @context_matcher()
2161 @context_matcher()
2162 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2162 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2163 """Match class names and attributes for %config magic."""
2163 """Match class names and attributes for %config magic."""
2164 # NOTE: uses `line_buffer` equivalent for compatibility
2164 # NOTE: uses `line_buffer` equivalent for compatibility
2165 matches = self.magic_config_matches(context.line_with_cursor)
2165 matches = self.magic_config_matches(context.line_with_cursor)
2166 return _convert_matcher_v1_result_to_v2(matches, type="param")
2166 return _convert_matcher_v1_result_to_v2(matches, type="param")
2167
2167
2168 def magic_config_matches(self, text: str) -> List[str]:
2168 def magic_config_matches(self, text: str) -> List[str]:
2169 """Match class names and attributes for %config magic.
2169 """Match class names and attributes for %config magic.
2170
2170
2171 .. deprecated:: 8.6
2171 .. deprecated:: 8.6
2172 You can use :meth:`magic_config_matcher` instead.
2172 You can use :meth:`magic_config_matcher` instead.
2173 """
2173 """
2174 texts = text.strip().split()
2174 texts = text.strip().split()
2175
2175
2176 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2176 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2177 # get all configuration classes
2177 # get all configuration classes
2178 classes = sorted(set([ c for c in self.shell.configurables
2178 classes = sorted(set([ c for c in self.shell.configurables
2179 if c.__class__.class_traits(config=True)
2179 if c.__class__.class_traits(config=True)
2180 ]), key=lambda x: x.__class__.__name__)
2180 ]), key=lambda x: x.__class__.__name__)
2181 classnames = [ c.__class__.__name__ for c in classes ]
2181 classnames = [ c.__class__.__name__ for c in classes ]
2182
2182
2183 # return all classnames if config or %config is given
2183 # return all classnames if config or %config is given
2184 if len(texts) == 1:
2184 if len(texts) == 1:
2185 return classnames
2185 return classnames
2186
2186
2187 # match classname
2187 # match classname
2188 classname_texts = texts[1].split('.')
2188 classname_texts = texts[1].split('.')
2189 classname = classname_texts[0]
2189 classname = classname_texts[0]
2190 classname_matches = [ c for c in classnames
2190 classname_matches = [ c for c in classnames
2191 if c.startswith(classname) ]
2191 if c.startswith(classname) ]
2192
2192
2193 # return matched classes or the matched class with attributes
2193 # return matched classes or the matched class with attributes
2194 if texts[1].find('.') < 0:
2194 if texts[1].find('.') < 0:
2195 return classname_matches
2195 return classname_matches
2196 elif len(classname_matches) == 1 and \
2196 elif len(classname_matches) == 1 and \
2197 classname_matches[0] == classname:
2197 classname_matches[0] == classname:
2198 cls = classes[classnames.index(classname)].__class__
2198 cls = classes[classnames.index(classname)].__class__
2199 help = cls.class_get_help()
2199 help = cls.class_get_help()
2200 # strip leading '--' from cl-args:
2200 # strip leading '--' from cl-args:
2201 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2201 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2202 return [ attr.split('=')[0]
2202 return [ attr.split('=')[0]
2203 for attr in help.strip().splitlines()
2203 for attr in help.strip().splitlines()
2204 if attr.startswith(texts[1]) ]
2204 if attr.startswith(texts[1]) ]
2205 return []
2205 return []
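# An illustrative sketch of the three stages handled above (``completer`` stands
# for an IPCompleter instance; class and attribute names depend on the shell's
# configurables):
#
#     completer.magic_config_matches("%config ")     # all configurable class names
#     completer.magic_config_matches("%config IPC")  # e.g. ['IPCompleter']
#     completer.magic_config_matches("%config IPCompleter.g")
#     # e.g. ['IPCompleter.greedy', ...] -- attributes of the single matched class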
2206
2206
2207 @context_matcher()
2207 @context_matcher()
2208 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2208 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2209 """Match color schemes for %colors magic."""
2209 """Match color schemes for %colors magic."""
2210 # NOTE: uses `line_buffer` equivalent for compatibility
2210 # NOTE: uses `line_buffer` equivalent for compatibility
2211 matches = self.magic_color_matches(context.line_with_cursor)
2211 matches = self.magic_color_matches(context.line_with_cursor)
2212 return _convert_matcher_v1_result_to_v2(matches, type="param")
2212 return _convert_matcher_v1_result_to_v2(matches, type="param")
2213
2213
2214 def magic_color_matches(self, text: str) -> List[str]:
2214 def magic_color_matches(self, text: str) -> List[str]:
2215 """Match color schemes for %colors magic.
2215 """Match color schemes for %colors magic.
2216
2216
2217 .. deprecated:: 8.6
2217 .. deprecated:: 8.6
2218 You can use :meth:`magic_color_matcher` instead.
2218 You can use :meth:`magic_color_matcher` instead.
2219 """
2219 """
2220 texts = text.split()
2220 texts = text.split()
2221 if text.endswith(' '):
2221 if text.endswith(' '):
2222 # .split() strips off the trailing whitespace. Add '' back
2222 # .split() strips off the trailing whitespace. Add '' back
2223 # so that: '%colors ' -> ['%colors', '']
2223 # so that: '%colors ' -> ['%colors', '']
2224 texts.append('')
2224 texts.append('')
2225
2225
2226 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2226 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2227 prefix = texts[1]
2227 prefix = texts[1]
2228 return [ color for color in InspectColors.keys()
2228 return [ color for color in InspectColors.keys()
2229 if color.startswith(prefix) ]
2229 if color.startswith(prefix) ]
2230 return []
2230 return []
2231
2231
2232 @context_matcher(identifier="IPCompleter.jedi_matcher")
2232 @context_matcher(identifier="IPCompleter.jedi_matcher")
2233 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2233 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2234 matches = self._jedi_matches(
2234 matches = self._jedi_matches(
2235 cursor_column=context.cursor_position,
2235 cursor_column=context.cursor_position,
2236 cursor_line=context.cursor_line,
2236 cursor_line=context.cursor_line,
2237 text=context.full_text,
2237 text=context.full_text,
2238 )
2238 )
2239 return {
2239 return {
2240 "completions": matches,
2240 "completions": matches,
2241 # static analysis should not suppress other matchers
2241 # static analysis should not suppress other matchers
2242 "suppress": False,
2242 "suppress": False,
2243 }
2243 }
2244
2244
2245 def _jedi_matches(
2245 def _jedi_matches(
2246 self, cursor_column: int, cursor_line: int, text: str
2246 self, cursor_column: int, cursor_line: int, text: str
2247 ) -> Iterator[_JediCompletionLike]:
2247 ) -> Iterator[_JediCompletionLike]:
2248 """
2248 """
2249 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
2249 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
2250 cursor position.
2250 cursor position.
2251
2251
2252 Parameters
2252 Parameters
2253 ----------
2253 ----------
2254 cursor_column : int
2254 cursor_column : int
2255 column position of the cursor in ``text``, 0-indexed.
2255 column position of the cursor in ``text``, 0-indexed.
2256 cursor_line : int
2256 cursor_line : int
2257 line position of the cursor in ``text``, 0-indexed
2257 line position of the cursor in ``text``, 0-indexed
2258 text : str
2258 text : str
2259 text to complete
2259 text to complete
2260
2260
2261 Notes
2261 Notes
2262 -----
2262 -----
2263 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2263 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2264 object containing a string with the Jedi debug information attached.
2264 object containing a string with the Jedi debug information attached.
2265
2265
2266 .. deprecated:: 8.6
2266 .. deprecated:: 8.6
2267 You can use :meth:`_jedi_matcher` instead.
2267 You can use :meth:`_jedi_matcher` instead.
2268 """
2268 """
2269 namespaces = [self.namespace]
2269 namespaces = [self.namespace]
2270 if self.global_namespace is not None:
2270 if self.global_namespace is not None:
2271 namespaces.append(self.global_namespace)
2271 namespaces.append(self.global_namespace)
2272
2272
2273 completion_filter = lambda x:x
2273 completion_filter = lambda x:x
2274 offset = cursor_to_position(text, cursor_line, cursor_column)
2274 offset = cursor_to_position(text, cursor_line, cursor_column)
2275 # filter output if we are completing for object members
2275 # filter output if we are completing for object members
2276 if offset:
2276 if offset:
2277 pre = text[offset-1]
2277 pre = text[offset-1]
2278 if pre == '.':
2278 if pre == '.':
2279 if self.omit__names == 2:
2279 if self.omit__names == 2:
2280 completion_filter = lambda c:not c.name.startswith('_')
2280 completion_filter = lambda c:not c.name.startswith('_')
2281 elif self.omit__names == 1:
2281 elif self.omit__names == 1:
2282 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2282 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2283 elif self.omit__names == 0:
2283 elif self.omit__names == 0:
2284 completion_filter = lambda x:x
2284 completion_filter = lambda x:x
2285 else:
2285 else:
2286 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2286 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2287
2287
2288 interpreter = jedi.Interpreter(text[:offset], namespaces)
2288 interpreter = jedi.Interpreter(text[:offset], namespaces)
2289 try_jedi = True
2289 try_jedi = True
2290
2290
2291 try:
2291 try:
2292 # find the first token in the current tree -- if it is a ' or " then we are in a string
2292 # find the first token in the current tree -- if it is a ' or " then we are in a string
2293 completing_string = False
2293 completing_string = False
2294 try:
2294 try:
2295 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2295 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2296 except StopIteration:
2296 except StopIteration:
2297 pass
2297 pass
2298 else:
2298 else:
2299 # note the value may be ', ", or it may also be ''' or """, or
2299 # note the value may be ', ", or it may also be ''' or """, or
2300 # in some cases, """what/you/typed..., but all of these are
2300 # in some cases, """what/you/typed..., but all of these are
2301 # strings.
2301 # strings.
2302 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2302 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2303
2303
2304 # if we are in a string jedi is likely not the right candidate for
2304 # if we are in a string jedi is likely not the right candidate for
2305 # now. Skip it.
2305 # now. Skip it.
2306 try_jedi = not completing_string
2306 try_jedi = not completing_string
2307 except Exception as e:
2307 except Exception as e:
2308 # many things can go wrong; we are using a private API, so just don't crash.
2309 if self.debug:
2309 if self.debug:
2310 print("Error detecting if completing a non-finished string :", e, '|')
2310 print("Error detecting if completing a non-finished string :", e, '|')
2311
2311
2312 if not try_jedi:
2312 if not try_jedi:
2313 return iter([])
2313 return iter([])
2314 try:
2314 try:
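# note: jedi expects 1-based line numbers and 0-based columns,
# hence ``cursor_line + 1`` below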
2315 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2315 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2316 except Exception as e:
2316 except Exception as e:
2317 if self.debug:
2317 if self.debug:
2318 return iter(
2318 return iter(
2319 [
2319 [
2320 _FakeJediCompletion(
2320 _FakeJediCompletion(
2321 'Oops, Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2322 % (e)
2322 % (e)
2323 )
2323 )
2324 ]
2324 ]
2325 )
2325 )
2326 else:
2326 else:
2327 return iter([])
2327 return iter([])
2328
2328
2329 @context_matcher()
2329 @context_matcher()
2330 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2330 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2331 """Match attributes or global python names"""
2331 """Match attributes or global python names"""
2332 text = context.line_with_cursor
2332 text = context.line_with_cursor
2333 if "." in text:
2333 if "." in text:
2334 try:
2334 try:
2335 matches, fragment = self._attr_matches(text, include_prefix=False)
2335 matches, fragment = self._attr_matches(text, include_prefix=False)
2336 if text.endswith(".") and self.omit__names:
2336 if text.endswith(".") and self.omit__names:
2337 if self.omit__names == 1:
2337 if self.omit__names == 1:
2338 # true if txt is _not_ a __ name, false otherwise:
2338 # true if txt is _not_ a __ name, false otherwise:
2339 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2339 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2340 else:
2340 else:
2341 # true if txt is _not_ a _ name, false otherwise:
2341 # true if txt is _not_ a _ name, false otherwise:
2342 no__name = (
2342 no__name = (
2343 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2343 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2344 is None
2344 is None
2345 )
2345 )
2346 matches = filter(no__name, matches)
2346 matches = filter(no__name, matches)
2347 return _convert_matcher_v1_result_to_v2(
2347 return _convert_matcher_v1_result_to_v2(
2348 matches, type="attribute", fragment=fragment
2348 matches, type="attribute", fragment=fragment
2349 )
2349 )
2350 except NameError:
2350 except NameError:
2351 # catches <undefined attributes>.<tab>
2351 # catches <undefined attributes>.<tab>
2352 matches = []
2352 matches = []
2353 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2353 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2354 else:
2354 else:
2355 matches = self.global_matches(context.token)
2355 matches = self.global_matches(context.token)
2356 # TODO: maybe distinguish between functions, modules and just "variables"
2356 # TODO: maybe distinguish between functions, modules and just "variables"
2357 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2357 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2358
2358
2359 @completion_matcher(api_version=1)
2359 @completion_matcher(api_version=1)
2360 def python_matches(self, text: str) -> Iterable[str]:
2360 def python_matches(self, text: str) -> Iterable[str]:
2361 """Match attributes or global python names.
2361 """Match attributes or global python names.
2362
2362
2363 .. deprecated:: 8.27
2363 .. deprecated:: 8.27
2364 You can use :meth:`python_matcher` instead."""
2364 You can use :meth:`python_matcher` instead."""
2365 if "." in text:
2365 if "." in text:
2366 try:
2366 try:
2367 matches = self.attr_matches(text)
2367 matches = self.attr_matches(text)
2368 if text.endswith('.') and self.omit__names:
2368 if text.endswith('.') and self.omit__names:
2369 if self.omit__names == 1:
2369 if self.omit__names == 1:
2370 # true if txt is _not_ a __ name, false otherwise:
2370 # true if txt is _not_ a __ name, false otherwise:
2371 no__name = (lambda txt:
2371 no__name = (lambda txt:
2372 re.match(r'.*\.__.*?__',txt) is None)
2372 re.match(r'.*\.__.*?__',txt) is None)
2373 else:
2373 else:
2374 # true if txt is _not_ a _ name, false otherwise:
2374 # true if txt is _not_ a _ name, false otherwise:
2375 no__name = (lambda txt:
2375 no__name = (lambda txt:
2376 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2376 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2377 matches = filter(no__name, matches)
2377 matches = filter(no__name, matches)
2378 except NameError:
2378 except NameError:
2379 # catches <undefined attributes>.<tab>
2379 # catches <undefined attributes>.<tab>
2380 matches = []
2380 matches = []
2381 else:
2381 else:
2382 matches = self.global_matches(text)
2382 matches = self.global_matches(text)
2383 return matches
2383 return matches
2384
2384
2385 def _default_arguments_from_docstring(self, doc):
2385 def _default_arguments_from_docstring(self, doc):
2386 """Parse the first line of docstring for call signature.
2386 """Parse the first line of docstring for call signature.
2387
2387
2388 Docstring should be of the form 'min(iterable[, key=func])\n'.
2388 Docstring should be of the form 'min(iterable[, key=func])\n'.
2389 It can also parse cython docstring of the form
2389 It can also parse cython docstring of the form
2390 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2390 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2391 """
2391 """
2392 if doc is None:
2392 if doc is None:
2393 return []
2393 return []
2394
2394
2395 # care only about the first line
2396 line = doc.lstrip().splitlines()[0]
2396 line = doc.lstrip().splitlines()[0]
2397
2397
2398 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2398 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2399 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2399 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2400 sig = self.docstring_sig_re.search(line)
2400 sig = self.docstring_sig_re.search(line)
2401 if sig is None:
2401 if sig is None:
2402 return []
2402 return []
2403 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2403 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2404 sig = sig.groups()[0].split(',')
2404 sig = sig.groups()[0].split(',')
2405 ret = []
2405 ret = []
2406 for s in sig:
2406 for s in sig:
2407 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2407 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2408 ret += self.docstring_kwd_re.findall(s)
2408 ret += self.docstring_kwd_re.findall(s)
2409 return ret
2409 return ret
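# A rough sketch of what the docstring parsing above yields (``completer`` stands
# for an IPCompleter instance; outputs shown for the two docstring forms mentioned
# in the docstring):
#
#     completer._default_arguments_from_docstring("min(iterable[, key=func])\n")
#     # -> ['key']
#     completer._default_arguments_from_docstring(
#         "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)")
#     # -> ['ncall', 'resume', 'nsplit']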
2410
2410
2411 def _default_arguments(self, obj):
2411 def _default_arguments(self, obj):
2412 """Return the list of default arguments of obj if it is callable,
2412 """Return the list of default arguments of obj if it is callable,
2413 or empty list otherwise."""
2413 or empty list otherwise."""
2414 call_obj = obj
2414 call_obj = obj
2415 ret = []
2415 ret = []
2416 if inspect.isbuiltin(obj):
2416 if inspect.isbuiltin(obj):
2417 pass
2417 pass
2418 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2418 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2419 if inspect.isclass(obj):
2419 if inspect.isclass(obj):
2420 #for cython embedsignature=True the constructor docstring
2420 #for cython embedsignature=True the constructor docstring
2421 #belongs to the object itself not __init__
2421 #belongs to the object itself not __init__
2422 ret += self._default_arguments_from_docstring(
2422 ret += self._default_arguments_from_docstring(
2423 getattr(obj, '__doc__', ''))
2423 getattr(obj, '__doc__', ''))
2424 # for classes, check for __init__,__new__
2424 # for classes, check for __init__,__new__
2425 call_obj = (getattr(obj, '__init__', None) or
2425 call_obj = (getattr(obj, '__init__', None) or
2426 getattr(obj, '__new__', None))
2426 getattr(obj, '__new__', None))
2427 # for all others, check if they are __call__able
2427 # for all others, check if they are __call__able
2428 elif hasattr(obj, '__call__'):
2428 elif hasattr(obj, '__call__'):
2429 call_obj = obj.__call__
2429 call_obj = obj.__call__
2430 ret += self._default_arguments_from_docstring(
2430 ret += self._default_arguments_from_docstring(
2431 getattr(call_obj, '__doc__', ''))
2431 getattr(call_obj, '__doc__', ''))
2432
2432
2433 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2433 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2434 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2434 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2435
2435
2436 try:
2436 try:
2437 sig = inspect.signature(obj)
2437 sig = inspect.signature(obj)
2438 ret.extend(k for k, v in sig.parameters.items() if
2438 ret.extend(k for k, v in sig.parameters.items() if
2439 v.kind in _keeps)
2439 v.kind in _keeps)
2440 except ValueError:
2440 except ValueError:
2441 pass
2441 pass
2442
2442
2443 return list(set(ret))
2443 return list(set(ret))
2444
2444
2445 @context_matcher()
2445 @context_matcher()
2446 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2446 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2447 """Match named parameters (kwargs) of the last open function."""
2447 """Match named parameters (kwargs) of the last open function."""
2448 matches = self.python_func_kw_matches(context.token)
2448 matches = self.python_func_kw_matches(context.token)
2449 return _convert_matcher_v1_result_to_v2(matches, type="param")
2449 return _convert_matcher_v1_result_to_v2(matches, type="param")
2450
2450
2451 def python_func_kw_matches(self, text):
2451 def python_func_kw_matches(self, text):
2452 """Match named parameters (kwargs) of the last open function.
2452 """Match named parameters (kwargs) of the last open function.
2453
2453
2454 .. deprecated:: 8.6
2454 .. deprecated:: 8.6
2455 You can use :meth:`python_func_kw_matcher` instead.
2455 You can use :meth:`python_func_kw_matcher` instead.
2456 """
2456 """
2457
2457
2458 if "." in text: # a parameter cannot be dotted
2458 if "." in text: # a parameter cannot be dotted
2459 return []
2459 return []
2460 try: regexp = self.__funcParamsRegex
2460 try: regexp = self.__funcParamsRegex
2461 except AttributeError:
2461 except AttributeError:
2462 regexp = self.__funcParamsRegex = re.compile(r'''
2462 regexp = self.__funcParamsRegex = re.compile(r'''
2463 '.*?(?<!\\)' | # single quoted strings or
2463 '.*?(?<!\\)' | # single quoted strings or
2464 ".*?(?<!\\)" | # double quoted strings or
2464 ".*?(?<!\\)" | # double quoted strings or
2465 \w+ | # identifier
2465 \w+ | # identifier
2466 \S # other characters
2466 \S # other characters
2467 ''', re.VERBOSE | re.DOTALL)
2467 ''', re.VERBOSE | re.DOTALL)
2468 # 1. find the nearest identifier that comes before an unclosed
2468 # 1. find the nearest identifier that comes before an unclosed
2469 # parenthesis before the cursor
2469 # parenthesis before the cursor
2470 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2470 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2471 tokens = regexp.findall(self.text_until_cursor)
2471 tokens = regexp.findall(self.text_until_cursor)
2472 iterTokens = reversed(tokens); openPar = 0
2472 iterTokens = reversed(tokens); openPar = 0
2473
2473
2474 for token in iterTokens:
2474 for token in iterTokens:
2475 if token == ')':
2475 if token == ')':
2476 openPar -= 1
2476 openPar -= 1
2477 elif token == '(':
2477 elif token == '(':
2478 openPar += 1
2478 openPar += 1
2479 if openPar > 0:
2479 if openPar > 0:
2480 # found the last unclosed parenthesis
2480 # found the last unclosed parenthesis
2481 break
2481 break
2482 else:
2482 else:
2483 return []
2483 return []
2484 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2484 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2485 ids = []
2485 ids = []
2486 isId = re.compile(r'\w+$').match
2486 isId = re.compile(r'\w+$').match
2487
2487
2488 while True:
2488 while True:
2489 try:
2489 try:
2490 ids.append(next(iterTokens))
2490 ids.append(next(iterTokens))
2491 if not isId(ids[-1]):
2491 if not isId(ids[-1]):
2492 ids.pop(); break
2492 ids.pop(); break
2493 if not next(iterTokens) == '.':
2493 if not next(iterTokens) == '.':
2494 break
2494 break
2495 except StopIteration:
2495 except StopIteration:
2496 break
2496 break
2497
2497
2498 # Find all named arguments already assigned to, so as to avoid suggesting
2499 # them again
2500 usedNamedArgs = set()
2500 usedNamedArgs = set()
2501 par_level = -1
2501 par_level = -1
2502 for token, next_token in zip(tokens, tokens[1:]):
2502 for token, next_token in zip(tokens, tokens[1:]):
2503 if token == '(':
2503 if token == '(':
2504 par_level += 1
2504 par_level += 1
2505 elif token == ')':
2505 elif token == ')':
2506 par_level -= 1
2506 par_level -= 1
2507
2507
2508 if par_level != 0:
2508 if par_level != 0:
2509 continue
2509 continue
2510
2510
2511 if next_token != '=':
2511 if next_token != '=':
2512 continue
2512 continue
2513
2513
2514 usedNamedArgs.add(token)
2514 usedNamedArgs.add(token)
2515
2515
2516 argMatches = []
2516 argMatches = []
2517 try:
2517 try:
2518 callableObj = '.'.join(ids[::-1])
2518 callableObj = '.'.join(ids[::-1])
2519 namedArgs = self._default_arguments(eval(callableObj,
2519 namedArgs = self._default_arguments(eval(callableObj,
2520 self.namespace))
2520 self.namespace))
2521
2521
2522 # Remove used named arguments from the list, no need to show twice
2522 # Remove used named arguments from the list, no need to show twice
2523 for namedArg in set(namedArgs) - usedNamedArgs:
2523 for namedArg in set(namedArgs) - usedNamedArgs:
2524 if namedArg.startswith(text):
2524 if namedArg.startswith(text):
2525 argMatches.append("%s=" %namedArg)
2525 argMatches.append("%s=" %namedArg)
2526 except:
2526 except:
2527 pass
2527 pass
2528
2528
2529 return argMatches
2529 return argMatches
2530
2530
2531 @staticmethod
2531 @staticmethod
2532 def _get_keys(obj: Any) -> List[Any]:
2532 def _get_keys(obj: Any) -> List[Any]:
2533 # Objects can define their own completions by defining an
2533 # Objects can define their own completions by defining an
2534 # _ipython_key_completions_() method.
2535 method = get_real_method(obj, '_ipython_key_completions_')
2535 method = get_real_method(obj, '_ipython_key_completions_')
2536 if method is not None:
2536 if method is not None:
2537 return method()
2537 return method()
2538
2538
2539 # Special case some common in-memory dict-like types
2539 # Special case some common in-memory dict-like types
2540 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2540 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2541 try:
2541 try:
2542 return list(obj.keys())
2542 return list(obj.keys())
2543 except Exception:
2543 except Exception:
2544 return []
2544 return []
2545 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2545 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2546 try:
2546 try:
2547 return list(obj.obj.keys())
2547 return list(obj.obj.keys())
2548 except Exception:
2548 except Exception:
2549 return []
2549 return []
2550 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2550 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2551 _safe_isinstance(obj, 'numpy', 'void'):
2551 _safe_isinstance(obj, 'numpy', 'void'):
2552 return obj.dtype.names or []
2552 return obj.dtype.names or []
2553 return []
2553 return []
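# A minimal sketch of the key-completion protocol honoured above (the class and
# its keys are hypothetical):
#
#     class MyStore:
#         def __init__(self):
#             self._data = {"alpha": 1, "beta": 2}
#         def __getitem__(self, key):
#             return self._data[key]
#         def _ipython_key_completions_(self):
#             # keys offered after typing ``store["<tab>``
#             return list(self._data)
#
#     # plain dicts, pandas DataFrames / .loc indexers and numpy structured
#     # arrays are special-cased above and need no extra method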
2554
2554
2555 @context_matcher()
2555 @context_matcher()
2556 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2556 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2557 """Match string keys in a dictionary, after e.g. ``foo[``."""
2557 """Match string keys in a dictionary, after e.g. ``foo[``."""
2558 matches = self.dict_key_matches(context.token)
2558 matches = self.dict_key_matches(context.token)
2559 return _convert_matcher_v1_result_to_v2(
2559 return _convert_matcher_v1_result_to_v2(
2560 matches, type="dict key", suppress_if_matches=True
2560 matches, type="dict key", suppress_if_matches=True
2561 )
2561 )
2562
2562
2563 def dict_key_matches(self, text: str) -> List[str]:
2563 def dict_key_matches(self, text: str) -> List[str]:
2564 """Match string keys in a dictionary, after e.g. ``foo[``.
2564 """Match string keys in a dictionary, after e.g. ``foo[``.
2565
2565
2566 .. deprecated:: 8.6
2566 .. deprecated:: 8.6
2567 You can use :meth:`dict_key_matcher` instead.
2567 You can use :meth:`dict_key_matcher` instead.
2568 """
2568 """
2569
2569
2570 # Short-circuit on closed dictionary (regular expression would
2570 # Short-circuit on closed dictionary (regular expression would
2571 # not match anyway, but would take quite a while).
2571 # not match anyway, but would take quite a while).
2572 if self.text_until_cursor.strip().endswith("]"):
2572 if self.text_until_cursor.strip().endswith("]"):
2573 return []
2573 return []
2574
2574
2575 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2575 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2576
2576
2577 if match is None:
2577 if match is None:
2578 return []
2578 return []
2579
2579
2580 expr, prior_tuple_keys, key_prefix = match.groups()
2580 expr, prior_tuple_keys, key_prefix = match.groups()
2581
2581
2582 obj = self._evaluate_expr(expr)
2582 obj = self._evaluate_expr(expr)
2583
2583
2584 if obj is not_found:
2584 if obj is not_found:
2585 return []
2585 return []
2586
2586
2587 keys = self._get_keys(obj)
2587 keys = self._get_keys(obj)
2588 if not keys:
2588 if not keys:
2589 return keys
2589 return keys
2590
2590
2591 tuple_prefix = guarded_eval(
2591 tuple_prefix = guarded_eval(
2592 prior_tuple_keys,
2592 prior_tuple_keys,
2593 EvaluationContext(
2593 EvaluationContext(
2594 globals=self.global_namespace,
2594 globals=self.global_namespace,
2595 locals=self.namespace,
2595 locals=self.namespace,
2596 evaluation=self.evaluation, # type: ignore
2596 evaluation=self.evaluation, # type: ignore
2597 in_subscript=True,
2597 in_subscript=True,
2598 ),
2598 ),
2599 )
2599 )
2600
2600
2601 closing_quote, token_offset, matches = match_dict_keys(
2601 closing_quote, token_offset, matches = match_dict_keys(
2602 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2602 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2603 )
2603 )
2604 if not matches:
2604 if not matches:
2605 return []
2605 return []
2606
2606
2607 # get the cursor position of
2607 # get the cursor position of
2608 # - the text being completed
2608 # - the text being completed
2609 # - the start of the key text
2609 # - the start of the key text
2610 # - the start of the completion
2610 # - the start of the completion
2611 text_start = len(self.text_until_cursor) - len(text)
2611 text_start = len(self.text_until_cursor) - len(text)
2612 if key_prefix:
2612 if key_prefix:
2613 key_start = match.start(3)
2613 key_start = match.start(3)
2614 completion_start = key_start + token_offset
2614 completion_start = key_start + token_offset
2615 else:
2615 else:
2616 key_start = completion_start = match.end()
2616 key_start = completion_start = match.end()
2617
2617
2618 # grab the leading prefix, to make sure all completions start with `text`
2618 # grab the leading prefix, to make sure all completions start with `text`
2619 if text_start > key_start:
2619 if text_start > key_start:
2620 leading = ''
2620 leading = ''
2621 else:
2621 else:
2622 leading = text[text_start:completion_start]
2622 leading = text[text_start:completion_start]
2623
2623
2624 # append closing quote and bracket as appropriate
2624 # append closing quote and bracket as appropriate
2625 # this is *not* appropriate if the opening quote or bracket is outside
2625 # this is *not* appropriate if the opening quote or bracket is outside
2626 # the text given to this method, e.g. `d["""a\nt
2626 # the text given to this method, e.g. `d["""a\nt
2627 can_close_quote = False
2627 can_close_quote = False
2628 can_close_bracket = False
2628 can_close_bracket = False
2629
2629
2630 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2630 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2631
2631
2632 if continuation.startswith(closing_quote):
2632 if continuation.startswith(closing_quote):
2633 # do not close if already closed, e.g. `d['a<tab>'`
2633 # do not close if already closed, e.g. `d['a<tab>'`
2634 continuation = continuation[len(closing_quote) :]
2634 continuation = continuation[len(closing_quote) :]
2635 else:
2635 else:
2636 can_close_quote = True
2636 can_close_quote = True
2637
2637
2638 continuation = continuation.strip()
2638 continuation = continuation.strip()
2639
2639
2640 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2640 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2641 # handling it is out of scope, so let's avoid appending suffixes.
2641 # handling it is out of scope, so let's avoid appending suffixes.
2642 has_known_tuple_handling = isinstance(obj, dict)
2642 has_known_tuple_handling = isinstance(obj, dict)
2643
2643
2644 can_close_bracket = (
2644 can_close_bracket = (
2645 not continuation.startswith("]") and self.auto_close_dict_keys
2645 not continuation.startswith("]") and self.auto_close_dict_keys
2646 )
2646 )
2647 can_close_tuple_item = (
2647 can_close_tuple_item = (
2648 not continuation.startswith(",")
2648 not continuation.startswith(",")
2649 and has_known_tuple_handling
2649 and has_known_tuple_handling
2650 and self.auto_close_dict_keys
2650 and self.auto_close_dict_keys
2651 )
2651 )
2652 can_close_quote = can_close_quote and self.auto_close_dict_keys
2652 can_close_quote = can_close_quote and self.auto_close_dict_keys
2653
2653
2654 # fast path if a closing quote should be appended but no suffix is allowed
2654 # fast path if a closing quote should be appended but no suffix is allowed
2655 if not can_close_quote and not can_close_bracket and closing_quote:
2655 if not can_close_quote and not can_close_bracket and closing_quote:
2656 return [leading + k for k in matches]
2656 return [leading + k for k in matches]
2657
2657
2658 results = []
2658 results = []
2659
2659
2660 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2660 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2661
2661
2662 for k, state_flag in matches.items():
2662 for k, state_flag in matches.items():
2663 result = leading + k
2663 result = leading + k
2664 if can_close_quote and closing_quote:
2664 if can_close_quote and closing_quote:
2665 result += closing_quote
2665 result += closing_quote
2666
2666
2667 if state_flag == end_of_tuple_or_item:
2667 if state_flag == end_of_tuple_or_item:
2668 # We do not know which suffix to add,
2668 # We do not know which suffix to add,
2669 # e.g. both tuple item and string
2669 # e.g. both tuple item and string
2670 # match this item.
2670 # match this item.
2671 pass
2671 pass
2672
2672
2673 if state_flag in end_of_tuple_or_item and can_close_bracket:
2673 if state_flag in end_of_tuple_or_item and can_close_bracket:
2674 result += "]"
2674 result += "]"
2675 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2675 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2676 result += ", "
2676 result += ", "
2677 results.append(result)
2677 results.append(result)
2678 return results
2678 return results
2679
2679
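# A rough usage sketch of dict-key completion (illustrative only; assumes an
# interactive IPython session where ``d`` is defined in the user namespace):
#
#     ip = get_ipython()
#     ip.user_ns["d"] = {"hello": 1, "world": 2}
#     text, matches = ip.Completer.complete(line_buffer='d["he', cursor_pos=5)
#     # ``matches`` should contain a quoted form of "hello"; whether a closing
#     # quote/bracket is appended depends on ``auto_close_dict_keys``.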
2680 @context_matcher()
2680 @context_matcher()
2681 def unicode_name_matcher(self, context: CompletionContext):
2681 def unicode_name_matcher(self, context: CompletionContext):
2682 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2682 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2683 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2683 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2684 return _convert_matcher_v1_result_to_v2(
2684 return _convert_matcher_v1_result_to_v2(
2685 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2685 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2686 )
2686 )
2687
2687
2688 @staticmethod
2688 @staticmethod
2689 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2689 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2690 """Match Latex-like syntax for unicode characters base
2690 """Match Latex-like syntax for unicode characters base
2691 on the name of the character.
2691 on the name of the character.
2692
2692
2693 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2693 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2694
2694
2695 Works only on valid Python 3 identifiers, or on combining characters that
2695 Works only on valid Python 3 identifiers, or on combining characters that
2696 will combine to form a valid identifier.
2696 will combine to form a valid identifier.
2697 """
2697 """
2698 slashpos = text.rfind('\\')
2698 slashpos = text.rfind('\\')
2699 if slashpos > -1:
2699 if slashpos > -1:
2700 s = text[slashpos+1:]
2700 s = text[slashpos+1:]
2701 try:
2701 try:
2702 unic = unicodedata.lookup(s)
2702 unic = unicodedata.lookup(s)
2703 # allow combining chars
2703 # allow combining chars
2704 if ('a'+unic).isidentifier():
2704 if ('a'+unic).isidentifier():
2705 return '\\'+s,[unic]
2705 return '\\'+s,[unic]
2706 except KeyError:
2706 except KeyError:
2707 pass
2707 pass
2708 return '', []
2708 return '', []
2709
2709
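# A rough illustration of the mapping performed above (standard library only;
# ``unicode_name_matches`` is a staticmethod, so it can be called on the class):
#
#     import unicodedata
#     unicodedata.lookup("GREEK SMALL LETTER ETA")          # -> 'η'
#     IPCompleter.unicode_name_matches("x = \\GREEK SMALL LETTER ETA")
#     # -> ('\\GREEK SMALL LETTER ETA', ['η'])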
2710 @context_matcher()
2710 @context_matcher()
2711 def latex_name_matcher(self, context: CompletionContext):
2711 def latex_name_matcher(self, context: CompletionContext):
2712 """Match Latex syntax for unicode characters.
2712 """Match Latex syntax for unicode characters.
2713
2713
2714 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2714 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2715 """
2715 """
2716 fragment, matches = self.latex_matches(context.text_until_cursor)
2716 fragment, matches = self.latex_matches(context.text_until_cursor)
2717 return _convert_matcher_v1_result_to_v2(
2717 return _convert_matcher_v1_result_to_v2(
2718 matches, type="latex", fragment=fragment, suppress_if_matches=True
2718 matches, type="latex", fragment=fragment, suppress_if_matches=True
2719 )
2719 )
2720
2720
2721 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2721 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2722 """Match Latex syntax for unicode characters.
2722 """Match Latex syntax for unicode characters.
2723
2723
2724 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2724 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2725
2725
2726 .. deprecated:: 8.6
2726 .. deprecated:: 8.6
2727 You can use :meth:`latex_name_matcher` instead.
2727 You can use :meth:`latex_name_matcher` instead.
2728 """
2728 """
2729 slashpos = text.rfind('\\')
2729 slashpos = text.rfind('\\')
2730 if slashpos > -1:
2730 if slashpos > -1:
2731 s = text[slashpos:]
2731 s = text[slashpos:]
2732 if s in latex_symbols:
2732 if s in latex_symbols:
2733 # Try to complete a full latex symbol to unicode
2733 # Try to complete a full latex symbol to unicode
2734 # \\alpha -> α
2734 # \\alpha -> α
2735 return s, [latex_symbols[s]]
2735 return s, [latex_symbols[s]]
2736 else:
2736 else:
2737 # If a user has partially typed a latex symbol, give them
2737 # If a user has partially typed a latex symbol, give them
2738 # a full list of options \al -> [\aleph, \alpha]
2738 # a full list of options \al -> [\aleph, \alpha]
2739 matches = [k for k in latex_symbols if k.startswith(s)]
2739 matches = [k for k in latex_symbols if k.startswith(s)]
2740 if matches:
2740 if matches:
2741 return s, matches
2741 return s, matches
2742 return '', ()
2742 return '', ()
2743
2743
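# A rough illustration of both completion directions (the exact option list
# depends on the ``latex_symbols`` table shipped with IPython):
#
#     c = get_ipython().Completer                # inside an IPython session
#     c.latex_matches("\\alpha")                 # -> ('\\alpha', ['α'])
#     frag, options = c.latex_matches("\\al")    # options e.g. ['\\aleph', '\\alpha', ...]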
2744 @context_matcher()
2744 @context_matcher()
2745 def custom_completer_matcher(self, context):
2745 def custom_completer_matcher(self, context):
2746 """Dispatch custom completer.
2746 """Dispatch custom completer.
2747
2747
2748 If a match is found, suppresses all other matchers except for Jedi.
2748 If a match is found, suppresses all other matchers except for Jedi.
2749 """
2749 """
2750 matches = self.dispatch_custom_completer(context.token) or []
2750 matches = self.dispatch_custom_completer(context.token) or []
2751 result = _convert_matcher_v1_result_to_v2(
2751 result = _convert_matcher_v1_result_to_v2(
2752 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2752 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2753 )
2753 )
2754 result["ordered"] = True
2754 result["ordered"] = True
2755 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2755 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2756 return result
2756 return result
2757
2757
2758 def dispatch_custom_completer(self, text):
2758 def dispatch_custom_completer(self, text):
2759 """
2759 """
2760 .. deprecated:: 8.6
2760 .. deprecated:: 8.6
2761 You can use :meth:`custom_completer_matcher` instead.
2761 You can use :meth:`custom_completer_matcher` instead.
2762 """
2762 """
2763 if not self.custom_completers:
2763 if not self.custom_completers:
2764 return
2764 return
2765
2765
2766 line = self.line_buffer
2766 line = self.line_buffer
2767 if not line.strip():
2767 if not line.strip():
2768 return None
2768 return None
2769
2769
2770 # Create a little structure to pass all the relevant information about
2770 # Create a little structure to pass all the relevant information about
2771 # the current completion to any custom completer.
2771 # the current completion to any custom completer.
2772 event = SimpleNamespace()
2772 event = SimpleNamespace()
2773 event.line = line
2773 event.line = line
2774 event.symbol = text
2774 event.symbol = text
2775 cmd = line.split(None,1)[0]
2775 cmd = line.split(None,1)[0]
2776 event.command = cmd
2776 event.command = cmd
2777 event.text_until_cursor = self.text_until_cursor
2777 event.text_until_cursor = self.text_until_cursor
2778
2778
2779 # for foo etc, try also to find completer for %foo
2779 # for foo etc, try also to find completer for %foo
2780 if not cmd.startswith(self.magic_escape):
2780 if not cmd.startswith(self.magic_escape):
2781 try_magic = self.custom_completers.s_matches(
2781 try_magic = self.custom_completers.s_matches(
2782 self.magic_escape + cmd)
2782 self.magic_escape + cmd)
2783 else:
2783 else:
2784 try_magic = []
2784 try_magic = []
2785
2785
2786 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2786 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2787 try_magic,
2787 try_magic,
2788 self.custom_completers.flat_matches(self.text_until_cursor)):
2788 self.custom_completers.flat_matches(self.text_until_cursor)):
2789 try:
2789 try:
2790 res = c(event)
2790 res = c(event)
2791 if res:
2791 if res:
2792 # first, try case sensitive match
2792 # first, try case sensitive match
2793 withcase = [r for r in res if r.startswith(text)]
2793 withcase = [r for r in res if r.startswith(text)]
2794 if withcase:
2794 if withcase:
2795 return withcase
2795 return withcase
2796 # if none, then case insensitive ones are ok too
2796 # if none, then case insensitive ones are ok too
2797 text_low = text.lower()
2797 text_low = text.lower()
2798 return [r for r in res if r.lower().startswith(text_low)]
2798 return [r for r in res if r.lower().startswith(text_low)]
2799 except TryNext:
2799 except TryNext:
2800 pass
2800 pass
2801 except KeyboardInterrupt:
2801 except KeyboardInterrupt:
2802 """
2802 """
2803 If a custom completer takes too long,
2803 If a custom completer takes too long,
2804 let the keyboard interrupt abort and return nothing.
2804 let the keyboard interrupt abort and return nothing.
2805 """
2805 """
2806 break
2806 break
2807
2807
2808 return None
2808 return None
2809
2809
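# A minimal sketch of registering a custom completer that this dispatcher will
# call (the command name and the returned strings are illustrative only):
#
#     def apt_completer(self, event):
#         # ``event`` is the SimpleNamespace built above, with .line, .symbol,
#         # .command and .text_until_cursor attributes.
#         return ["install", "remove", "update"]
#
#     get_ipython().set_hook("complete_command", apt_completer, str_key="apt")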
2810 def completions(self, text: str, offset: int)->Iterator[Completion]:
2810 def completions(self, text: str, offset: int)->Iterator[Completion]:
2811 """
2811 """
2812 Returns an iterator over the possible completions
2812 Returns an iterator over the possible completions
2813
2813
2814 .. warning::
2814 .. warning::
2815
2815
2816 Unstable
2816 Unstable
2817
2817
2818 This function is unstable, API may change without warning.
2818 This function is unstable, API may change without warning.
2819 It will also raise unless used in a proper context manager.
2819 It will also raise unless used in a proper context manager.
2820
2820
2821 Parameters
2821 Parameters
2822 ----------
2822 ----------
2823 text : str
2823 text : str
2824 Full text of the current input, multi line string.
2824 Full text of the current input, multi line string.
2825 offset : int
2825 offset : int
2826 Integer representing the position of the cursor in ``text``. Offset
2826 Integer representing the position of the cursor in ``text``. Offset
2827 is 0-based indexed.
2827 is 0-based indexed.
2828
2828
2829 Yields
2829 Yields
2830 ------
2830 ------
2831 Completion
2831 Completion
2832
2832
2833 Notes
2833 Notes
2834 -----
2834 -----
2835 The cursor on a text can either be seen as being "in between"
2835 The cursor on a text can either be seen as being "in between"
2836 characters or "on" a character, depending on the interface visible to
2836 characters or "on" a character, depending on the interface visible to
2837 the user. For consistency, the cursor being "in between" characters X
2837 the user. For consistency, the cursor being "in between" characters X
2838 and Y is equivalent to the cursor being "on" character Y, that is to say
2838 and Y is equivalent to the cursor being "on" character Y, that is to say
2839 the character the cursor is on is considered as being after the cursor.
2839 the character the cursor is on is considered as being after the cursor.
2840
2840
2841 Combining characters may span more than one position in the
2841 Combining characters may span more than one position in the
2842 text.
2842 text.
2843
2843
2844 .. note::
2844 .. note::
2845
2845
2846 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2846 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2847 fake Completion token to distinguish completions returned by Jedi
2847 fake Completion token to distinguish completions returned by Jedi
2848 from the usual IPython completions.
2848 from the usual IPython completions.
2849
2849
2850 .. note::
2850 .. note::
2851
2851
2852 Completions are not completely deduplicated yet. If identical
2852 Completions are not completely deduplicated yet. If identical
2853 completions are coming from different sources this function does not
2853 completions are coming from different sources this function does not
2854 ensure that each completion object will only be present once.
2854 ensure that each completion object will only be present once.
2855 """
2855 """
2856 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2856 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2857 "It may change without warnings. "
2857 "It may change without warnings. "
2858 "Use in corresponding context manager.",
2858 "Use in corresponding context manager.",
2859 category=ProvisionalCompleterWarning, stacklevel=2)
2859 category=ProvisionalCompleterWarning, stacklevel=2)
2860
2860
2861 seen = set()
2861 seen = set()
2862 profiler:Optional[cProfile.Profile]
2862 profiler:Optional[cProfile.Profile]
2863 try:
2863 try:
2864 if self.profile_completions:
2864 if self.profile_completions:
2865 import cProfile
2865 import cProfile
2866 profiler = cProfile.Profile()
2866 profiler = cProfile.Profile()
2867 profiler.enable()
2867 profiler.enable()
2868 else:
2868 else:
2869 profiler = None
2869 profiler = None
2870
2870
2871 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2871 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2872 if c and (c in seen):
2872 if c and (c in seen):
2873 continue
2873 continue
2874 yield c
2874 yield c
2875 seen.add(c)
2875 seen.add(c)
2876 except KeyboardInterrupt:
2876 except KeyboardInterrupt:
2877 """if completions take too long and users send keyboard interrupt,
2877 """if completions take too long and users send keyboard interrupt,
2878 do not crash and return ASAP. """
2878 do not crash and return ASAP. """
2879 pass
2879 pass
2880 finally:
2880 finally:
2881 if profiler is not None:
2881 if profiler is not None:
2882 profiler.disable()
2882 profiler.disable()
2883 ensure_dir_exists(self.profiler_output_dir)
2883 ensure_dir_exists(self.profiler_output_dir)
2884 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2884 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2885 print("Writing profiler output to", output_path)
2885 print("Writing profiler output to", output_path)
2886 profiler.dump_stats(output_path)
2886 profiler.dump_stats(output_path)
2887
2887
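# A rough usage sketch of this provisional API (requires the context manager
# exported by this module; the input string is illustrative):
#
#     from IPython.core.completer import provisionalcompleter
#     c = get_ipython().Completer
#     with provisionalcompleter():
#         comps = list(c.completions("prin", 4))
#     # each item is a Completion with .start, .end, .text, .type and .signature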
2888 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2888 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2889 """
2889 """
2890 Core completion method. Same signature as :any:`completions`, with the
2890 Core completion method. Same signature as :any:`completions`, with the
2891 extra ``_timeout`` parameter (in seconds).
2891 extra ``_timeout`` parameter (in seconds).
2892
2892
2893 Computing jedi's completion ``.type`` can be quite expensive (it is a
2893 Computing jedi's completion ``.type`` can be quite expensive (it is a
2894 lazy property) and can require some warm-up, more warm-up than just
2894 lazy property) and can require some warm-up, more warm-up than just
2895 computing the ``name`` of a completion. The warm-up can be:
2895 computing the ``name`` of a completion. The warm-up can be:
2896
2896
2897 - A long warm-up the first time a module is encountered after an
2897 - A long warm-up the first time a module is encountered after an
2898 install/update: the parse/inference tree is actually built.
2898 install/update: the parse/inference tree is actually built.
2899
2899
2900 - The first time the module is encountered in a session: load the tree from
2900 - The first time the module is encountered in a session: load the tree from
2901 disk.
2901 disk.
2902
2902
2903 We don't want to block completions for tens of seconds, so we give the
2903 We don't want to block completions for tens of seconds, so we give the
2904 completer a "budget" of ``_timeout`` seconds per invocation to compute
2904 completer a "budget" of ``_timeout`` seconds per invocation to compute
2905 completion types. The completions whose types have not yet been computed
2905 completion types. The completions whose types have not yet been computed
2906 will be marked as "unknown" and will have a chance to be computed on the
2906 will be marked as "unknown" and will have a chance to be computed on the
2907 next round, as things get cached.
2907 next round, as things get cached.
2908
2908
2909 Keep in mind that Jedi is not the only thing processing the completions, so
2909 Keep in mind that Jedi is not the only thing processing the completions, so
2910 keep the timeout short-ish: if we take more than 0.3 seconds we still
2910 keep the timeout short-ish: if we take more than 0.3 seconds we still
2911 have lots of processing to do.
2911 have lots of processing to do.
2912
2912
2913 """
2913 """
2914 deadline = time.monotonic() + _timeout
2914 deadline = time.monotonic() + _timeout
2915
2915
2916 before = full_text[:offset]
2916 before = full_text[:offset]
2917 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2917 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2918
2918
2919 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2919 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2920
2920
2921 def is_non_jedi_result(
2921 def is_non_jedi_result(
2922 result: MatcherResult, identifier: str
2922 result: MatcherResult, identifier: str
2923 ) -> TypeGuard[SimpleMatcherResult]:
2923 ) -> TypeGuard[SimpleMatcherResult]:
2924 return identifier != jedi_matcher_id
2924 return identifier != jedi_matcher_id
2925
2925
2926 results = self._complete(
2926 results = self._complete(
2927 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2927 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2928 )
2928 )
2929
2929
2930 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2930 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2931 identifier: result
2931 identifier: result
2932 for identifier, result in results.items()
2932 for identifier, result in results.items()
2933 if is_non_jedi_result(result, identifier)
2933 if is_non_jedi_result(result, identifier)
2934 }
2934 }
2935
2935
2936 jedi_matches = (
2936 jedi_matches = (
2937 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2937 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2938 if jedi_matcher_id in results
2938 if jedi_matcher_id in results
2939 else ()
2939 else ()
2940 )
2940 )
2941
2941
2942 iter_jm = iter(jedi_matches)
2942 iter_jm = iter(jedi_matches)
2943 if _timeout:
2943 if _timeout:
2944 for jm in iter_jm:
2944 for jm in iter_jm:
2945 try:
2945 try:
2946 type_ = jm.type
2946 type_ = jm.type
2947 except Exception:
2947 except Exception:
2948 if self.debug:
2948 if self.debug:
2949 print("Error in Jedi getting type of ", jm)
2949 print("Error in Jedi getting type of ", jm)
2950 type_ = None
2950 type_ = None
2951 delta = len(jm.name_with_symbols) - len(jm.complete)
2951 delta = len(jm.name_with_symbols) - len(jm.complete)
2952 if type_ == 'function':
2952 if type_ == 'function':
2953 signature = _make_signature(jm)
2953 signature = _make_signature(jm)
2954 else:
2954 else:
2955 signature = ''
2955 signature = ''
2956 yield Completion(start=offset - delta,
2956 yield Completion(start=offset - delta,
2957 end=offset,
2957 end=offset,
2958 text=jm.name_with_symbols,
2958 text=jm.name_with_symbols,
2959 type=type_,
2959 type=type_,
2960 signature=signature,
2960 signature=signature,
2961 _origin='jedi')
2961 _origin='jedi')
2962
2962
2963 if time.monotonic() > deadline:
2963 if time.monotonic() > deadline:
2964 break
2964 break
2965
2965
2966 for jm in iter_jm:
2966 for jm in iter_jm:
2967 delta = len(jm.name_with_symbols) - len(jm.complete)
2967 delta = len(jm.name_with_symbols) - len(jm.complete)
2968 yield Completion(
2968 yield Completion(
2969 start=offset - delta,
2969 start=offset - delta,
2970 end=offset,
2970 end=offset,
2971 text=jm.name_with_symbols,
2971 text=jm.name_with_symbols,
2972 type=_UNKNOWN_TYPE, # don't compute type for speed
2972 type=_UNKNOWN_TYPE, # don't compute type for speed
2973 _origin="jedi",
2973 _origin="jedi",
2974 signature="",
2974 signature="",
2975 )
2975 )
2976
2976
2977 # TODO:
2977 # TODO:
2978 # Suppress this, right now just for debug.
2978 # Suppress this, right now just for debug.
2979 if jedi_matches and non_jedi_results and self.debug:
2979 if jedi_matches and non_jedi_results and self.debug:
2980 some_start_offset = before.rfind(
2980 some_start_offset = before.rfind(
2981 next(iter(non_jedi_results.values()))["matched_fragment"]
2981 next(iter(non_jedi_results.values()))["matched_fragment"]
2982 )
2982 )
2983 yield Completion(
2983 yield Completion(
2984 start=some_start_offset,
2984 start=some_start_offset,
2985 end=offset,
2985 end=offset,
2986 text="--jedi/ipython--",
2986 text="--jedi/ipython--",
2987 _origin="debug",
2987 _origin="debug",
2988 type="none",
2988 type="none",
2989 signature="",
2989 signature="",
2990 )
2990 )
2991
2991
2992 ordered: List[Completion] = []
2992 ordered: List[Completion] = []
2993 sortable: List[Completion] = []
2993 sortable: List[Completion] = []
2994
2994
2995 for origin, result in non_jedi_results.items():
2995 for origin, result in non_jedi_results.items():
2996 matched_text = result["matched_fragment"]
2996 matched_text = result["matched_fragment"]
2997 start_offset = before.rfind(matched_text)
2997 start_offset = before.rfind(matched_text)
2998 is_ordered = result.get("ordered", False)
2998 is_ordered = result.get("ordered", False)
2999 container = ordered if is_ordered else sortable
2999 container = ordered if is_ordered else sortable
3000
3000
3001 # I'm unsure if this is always true, so let's assert and see if it
3001 # I'm unsure if this is always true, so let's assert and see if it
3002 # crashes
3002 # crashes
3003 assert before.endswith(matched_text)
3003 assert before.endswith(matched_text)
3004
3004
3005 for simple_completion in result["completions"]:
3005 for simple_completion in result["completions"]:
3006 completion = Completion(
3006 completion = Completion(
3007 start=start_offset,
3007 start=start_offset,
3008 end=offset,
3008 end=offset,
3009 text=simple_completion.text,
3009 text=simple_completion.text,
3010 _origin=origin,
3010 _origin=origin,
3011 signature="",
3011 signature="",
3012 type=simple_completion.type or _UNKNOWN_TYPE,
3012 type=simple_completion.type or _UNKNOWN_TYPE,
3013 )
3013 )
3014 container.append(completion)
3014 container.append(completion)
3015
3015
3016 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
3016 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
3017 :MATCHES_LIMIT
3017 :MATCHES_LIMIT
3018 ]
3018 ]
3019
3019
3020 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
3020 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
3021 """Find completions for the given text and line context.
3021 """Find completions for the given text and line context.
3022
3022
3023 Note that both the text and the line_buffer are optional, but at least
3023 Note that both the text and the line_buffer are optional, but at least
3024 one of them must be given.
3024 one of them must be given.
3025
3025
3026 Parameters
3026 Parameters
3027 ----------
3027 ----------
3028 text : string, optional
3028 text : string, optional
3029 Text to perform the completion on. If not given, the line buffer
3029 Text to perform the completion on. If not given, the line buffer
3030 is split using the instance's CompletionSplitter object.
3030 is split using the instance's CompletionSplitter object.
3031 line_buffer : string, optional
3031 line_buffer : string, optional
3032 If not given, the completer attempts to obtain the current line
3032 If not given, the completer attempts to obtain the current line
3033 buffer via readline. This keyword allows clients which are
3033 buffer via readline. This keyword allows clients which are
3034 requesting text completions in non-readline contexts to inform
3034 requesting text completions in non-readline contexts to inform
3035 the completer of the entire text.
3035 the completer of the entire text.
3036 cursor_pos : int, optional
3036 cursor_pos : int, optional
3037 Index of the cursor in the full line buffer. Should be provided by
3037 Index of the cursor in the full line buffer. Should be provided by
3038 remote frontends where kernel has no access to frontend state.
3038 remote frontends where kernel has no access to frontend state.
3039
3039
3040 Returns
3040 Returns
3041 -------
3041 -------
3042 Tuple of two items:
3042 Tuple of two items:
3043 text : str
3043 text : str
3044 Text that was actually used in the completion.
3044 Text that was actually used in the completion.
3045 matches : list
3045 matches : list
3046 A list of completion matches.
3046 A list of completion matches.
3047
3047
3048 Notes
3048 Notes
3049 -----
3049 -----
3050 This API is likely to be deprecated and replaced by
3050 This API is likely to be deprecated and replaced by
3051 :any:`IPCompleter.completions` in the future.
3051 :any:`IPCompleter.completions` in the future.
3052
3052
3053 """
3053 """
3054 warnings.warn('`Completer.complete` is pending deprecation since '
3054 warnings.warn('`Completer.complete` is pending deprecation since '
3055 'IPython 6.0 and will be replaced by `Completer.completions`.',
3055 'IPython 6.0 and will be replaced by `Completer.completions`.',
3056 PendingDeprecationWarning)
3056 PendingDeprecationWarning)
3057 # potential todo: fold the 3rd throw-away argument of _complete
3057 # potential todo: fold the 3rd throw-away argument of _complete
3058 # into the first two.
3058 # into the first two.
3059 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3059 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3060 # TODO: should we deprecate now, or does it stay?
3060 # TODO: should we deprecate now, or does it stay?
3061
3061
3062 results = self._complete(
3062 results = self._complete(
3063 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3063 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3064 )
3064 )
3065
3065
3066 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3066 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3067
3067
3068 return self._arrange_and_extract(
3068 return self._arrange_and_extract(
3069 results,
3069 results,
3070 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3070 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3071 skip_matchers={jedi_matcher_id},
3071 skip_matchers={jedi_matcher_id},
3072 # this API does not support different start/end positions (fragments of token).
3072 # this API does not support different start/end positions (fragments of token).
3073 abort_if_offset_changes=True,
3073 abort_if_offset_changes=True,
3074 )
3074 )
3075
3075
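# A rough usage sketch of this legacy, stateful API (inside an IPython session;
# ``my_list`` is an illustrative variable name):
#
#     ip = get_ipython()
#     ip.user_ns["my_list"] = [1, 2, 3]
#     text, matches = ip.Completer.complete(line_buffer="my_li", cursor_pos=5)
#     # text == "my_li" and "my_list" should appear in ``matches``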
3076 def _arrange_and_extract(
3076 def _arrange_and_extract(
3077 self,
3077 self,
3078 results: Dict[str, MatcherResult],
3078 results: Dict[str, MatcherResult],
3079 skip_matchers: Set[str],
3079 skip_matchers: Set[str],
3080 abort_if_offset_changes: bool,
3080 abort_if_offset_changes: bool,
3081 ):
3081 ):
3082 sortable: List[AnyMatcherCompletion] = []
3082 sortable: List[AnyMatcherCompletion] = []
3083 ordered: List[AnyMatcherCompletion] = []
3083 ordered: List[AnyMatcherCompletion] = []
3084 most_recent_fragment = None
3084 most_recent_fragment = None
3085 for identifier, result in results.items():
3085 for identifier, result in results.items():
3086 if identifier in skip_matchers:
3086 if identifier in skip_matchers:
3087 continue
3087 continue
3088 if not result["completions"]:
3088 if not result["completions"]:
3089 continue
3089 continue
3090 if not most_recent_fragment:
3090 if not most_recent_fragment:
3091 most_recent_fragment = result["matched_fragment"]
3091 most_recent_fragment = result["matched_fragment"]
3092 if (
3092 if (
3093 abort_if_offset_changes
3093 abort_if_offset_changes
3094 and result["matched_fragment"] != most_recent_fragment
3094 and result["matched_fragment"] != most_recent_fragment
3095 ):
3095 ):
3096 break
3096 break
3097 if result.get("ordered", False):
3097 if result.get("ordered", False):
3098 ordered.extend(result["completions"])
3098 ordered.extend(result["completions"])
3099 else:
3099 else:
3100 sortable.extend(result["completions"])
3100 sortable.extend(result["completions"])
3101
3101
3102 if not most_recent_fragment:
3102 if not most_recent_fragment:
3103 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3103 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3104
3104
3105 return most_recent_fragment, [
3105 return most_recent_fragment, [
3106 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3106 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3107 ]
3107 ]
3108
3108
3109 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3109 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3110 full_text=None) -> _CompleteResult:
3110 full_text=None) -> _CompleteResult:
3111 """
3111 """
3112 Like complete but can also return raw Jedi completions, as well as the
3112 Like complete but can also return raw Jedi completions, as well as the
3113 origin of the completion text. This could (and should) be made much
3113 origin of the completion text. This could (and should) be made much
3114 cleaner but that will be simpler once we drop the old (and stateful)
3114 cleaner but that will be simpler once we drop the old (and stateful)
3115 :any:`complete` API.
3115 :any:`complete` API.
3116
3116
3117 With the current provisional API, cursor_pos acts (depending on the
3117 With the current provisional API, cursor_pos acts (depending on the
3118 caller) as either the offset in the ``text`` or ``line_buffer``, or as the
3118 caller) as either the offset in the ``text`` or ``line_buffer``, or as the
3119 ``column`` when passing multiline strings. This could/should be renamed,
3119 ``column`` when passing multiline strings. This could/should be renamed,
3120 but would add extra noise.
3120 but would add extra noise.
3121
3121
3122 Parameters
3122 Parameters
3123 ----------
3123 ----------
3124 cursor_line
3124 cursor_line
3125 Index of the line the cursor is on. 0 indexed.
3125 Index of the line the cursor is on. 0 indexed.
3126 cursor_pos
3126 cursor_pos
3127 Position of the cursor in the current line/line_buffer/text. 0
3127 Position of the cursor in the current line/line_buffer/text. 0
3128 indexed.
3128 indexed.
3129 line_buffer : optional, str
3129 line_buffer : optional, str
3130 The current line the cursor is in; this is mostly due to the legacy
3130 The current line the cursor is in; this is mostly due to the legacy
3131 reason that readline could only give us the single current line.
3131 reason that readline could only give us the single current line.
3132 Prefer `full_text`.
3132 Prefer `full_text`.
3133 text : str
3133 text : str
3134 The current "token" the cursor is in, mostly also for historical
3134 The current "token" the cursor is in, mostly also for historical
3135 reasons, as the completer would trigger only after the current line
3135 reasons, as the completer would trigger only after the current line
3136 was parsed.
3136 was parsed.
3137 full_text : str
3137 full_text : str
3138 Full text of the current cell.
3138 Full text of the current cell.
3139
3139
3140 Returns
3140 Returns
3141 -------
3141 -------
3142 An ordered dictionary where keys are identifiers of completion
3142 An ordered dictionary where keys are identifiers of completion
3143 matchers and values are ``MatcherResult``s.
3143 matchers and values are ``MatcherResult``s.
3144 """
3144 """
3145
3145
3146 # if the cursor position isn't given, the only sane assumption we can
3146 # if the cursor position isn't given, the only sane assumption we can
3147 # make is that it's at the end of the line (the common case)
3147 # make is that it's at the end of the line (the common case)
3148 if cursor_pos is None:
3148 if cursor_pos is None:
3149 cursor_pos = len(line_buffer) if text is None else len(text)
3149 cursor_pos = len(line_buffer) if text is None else len(text)
3150
3150
3151 if self.use_main_ns:
3151 if self.use_main_ns:
3152 self.namespace = __main__.__dict__
3152 self.namespace = __main__.__dict__
3153
3153
3154 # if text is either None or an empty string, rely on the line buffer
3154 # if text is either None or an empty string, rely on the line buffer
3155 if (not line_buffer) and full_text:
3155 if (not line_buffer) and full_text:
3156 line_buffer = full_text.split('\n')[cursor_line]
3156 line_buffer = full_text.split('\n')[cursor_line]
3157 if not text: # issue #11508: check line_buffer before calling split_line
3157 if not text: # issue #11508: check line_buffer before calling split_line
3158 text = (
3158 text = (
3159 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3159 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3160 )
3160 )
3161
3161
3162 # If no line buffer is given, assume the input text is all there was
3162 # If no line buffer is given, assume the input text is all there was
3163 if line_buffer is None:
3163 if line_buffer is None:
3164 line_buffer = text
3164 line_buffer = text
3165
3165
3166 # deprecated - do not use `line_buffer` in new code.
3166 # deprecated - do not use `line_buffer` in new code.
3167 self.line_buffer = line_buffer
3167 self.line_buffer = line_buffer
3168 self.text_until_cursor = self.line_buffer[:cursor_pos]
3168 self.text_until_cursor = self.line_buffer[:cursor_pos]
3169
3169
3170 if not full_text:
3170 if not full_text:
3171 full_text = line_buffer
3171 full_text = line_buffer
3172
3172
3173 context = CompletionContext(
3173 context = CompletionContext(
3174 full_text=full_text,
3174 full_text=full_text,
3175 cursor_position=cursor_pos,
3175 cursor_position=cursor_pos,
3176 cursor_line=cursor_line,
3176 cursor_line=cursor_line,
3177 token=text,
3177 token=text,
3178 limit=MATCHES_LIMIT,
3178 limit=MATCHES_LIMIT,
3179 )
3179 )
3180
3180
3181 # Start with a clean slate of completions
3181 # Start with a clean slate of completions
3182 results: Dict[str, MatcherResult] = {}
3182 results: Dict[str, MatcherResult] = {}
3183
3183
3184 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3184 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3185
3185
3186 suppressed_matchers: Set[str] = set()
3186 suppressed_matchers: Set[str] = set()
3187
3187
3188 matchers = {
3188 matchers = {
3189 _get_matcher_id(matcher): matcher
3189 _get_matcher_id(matcher): matcher
3190 for matcher in sorted(
3190 for matcher in sorted(
3191 self.matchers, key=_get_matcher_priority, reverse=True
3191 self.matchers, key=_get_matcher_priority, reverse=True
3192 )
3192 )
3193 }
3193 }
3194
3194
3195 for matcher_id, matcher in matchers.items():
3195 for matcher_id, matcher in matchers.items():
3196 matcher_id = _get_matcher_id(matcher)
3196 matcher_id = _get_matcher_id(matcher)
3197
3197
3198 if matcher_id in self.disable_matchers:
3198 if matcher_id in self.disable_matchers:
3199 continue
3199 continue
3200
3200
3201 if matcher_id in results:
3201 if matcher_id in results:
3202 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3202 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3203
3203
3204 if matcher_id in suppressed_matchers:
3204 if matcher_id in suppressed_matchers:
3205 continue
3205 continue
3206
3206
3207 result: MatcherResult
3207 result: MatcherResult
3208 try:
3208 try:
3209 if _is_matcher_v1(matcher):
3209 if _is_matcher_v1(matcher):
3210 result = _convert_matcher_v1_result_to_v2(
3210 result = _convert_matcher_v1_result_to_v2(
3211 matcher(text), type=_UNKNOWN_TYPE
3211 matcher(text), type=_UNKNOWN_TYPE
3212 )
3212 )
3213 elif _is_matcher_v2(matcher):
3213 elif _is_matcher_v2(matcher):
3214 result = matcher(context)
3214 result = matcher(context)
3215 else:
3215 else:
3216 api_version = _get_matcher_api_version(matcher)
3216 api_version = _get_matcher_api_version(matcher)
3217 raise ValueError(f"Unsupported API version {api_version}")
3217 raise ValueError(f"Unsupported API version {api_version}")
3218 except:
3218 except:
3219 # Show the ugly traceback if the matcher causes an
3219 # Show the ugly traceback if the matcher causes an
3220 # exception, but do NOT crash the kernel!
3220 # exception, but do NOT crash the kernel!
3221 sys.excepthook(*sys.exc_info())
3221 sys.excepthook(*sys.exc_info())
3222 continue
3222 continue
3223
3223
3224 # set default value for matched fragment if suffix was not selected.
3224 # set default value for matched fragment if suffix was not selected.
3225 result["matched_fragment"] = result.get("matched_fragment", context.token)
3225 result["matched_fragment"] = result.get("matched_fragment", context.token)
3226
3226
3227 if not suppressed_matchers:
3227 if not suppressed_matchers:
3228 suppression_recommended: Union[bool, Set[str]] = result.get(
3228 suppression_recommended: Union[bool, Set[str]] = result.get(
3229 "suppress", False
3229 "suppress", False
3230 )
3230 )
3231
3231
3232 suppression_config = (
3232 suppression_config = (
3233 self.suppress_competing_matchers.get(matcher_id, None)
3233 self.suppress_competing_matchers.get(matcher_id, None)
3234 if isinstance(self.suppress_competing_matchers, dict)
3234 if isinstance(self.suppress_competing_matchers, dict)
3235 else self.suppress_competing_matchers
3235 else self.suppress_competing_matchers
3236 )
3236 )
3237 should_suppress = (
3237 should_suppress = (
3238 (suppression_config is True)
3238 (suppression_config is True)
3239 or (suppression_recommended and (suppression_config is not False))
3239 or (suppression_recommended and (suppression_config is not False))
3240 ) and has_any_completions(result)
3240 ) and has_any_completions(result)
3241
3241
3242 if should_suppress:
3242 if should_suppress:
3243 suppression_exceptions: Set[str] = result.get(
3243 suppression_exceptions: Set[str] = result.get(
3244 "do_not_suppress", set()
3244 "do_not_suppress", set()
3245 )
3245 )
3246 if isinstance(suppression_recommended, Iterable):
3246 if isinstance(suppression_recommended, Iterable):
3247 to_suppress = set(suppression_recommended)
3247 to_suppress = set(suppression_recommended)
3248 else:
3248 else:
3249 to_suppress = set(matchers)
3249 to_suppress = set(matchers)
3250 suppressed_matchers = to_suppress - suppression_exceptions
3250 suppressed_matchers = to_suppress - suppression_exceptions
3251
3251
3252 new_results = {}
3252 new_results = {}
3253 for previous_matcher_id, previous_result in results.items():
3253 for previous_matcher_id, previous_result in results.items():
3254 if previous_matcher_id not in suppressed_matchers:
3254 if previous_matcher_id not in suppressed_matchers:
3255 new_results[previous_matcher_id] = previous_result
3255 new_results[previous_matcher_id] = previous_result
3256 results = new_results
3256 results = new_results
3257
3257
3258 results[matcher_id] = result
3258 results[matcher_id] = result
3259
3259
3260 _, matches = self._arrange_and_extract(
3260 _, matches = self._arrange_and_extract(
3261 results,
3261 results,
3262 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3262 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3263 # if it was omission, we can remove the filtering step, otherwise remove this comment.
3263 # if it was omission, we can remove the filtering step, otherwise remove this comment.
3264 skip_matchers={jedi_matcher_id},
3264 skip_matchers={jedi_matcher_id},
3265 abort_if_offset_changes=False,
3265 abort_if_offset_changes=False,
3266 )
3266 )
3267
3267
3268 # populate legacy stateful API
3268 # populate legacy stateful API
3269 self.matches = matches
3269 self.matches = matches
3270
3270
3271 return results
3271 return results
3272
3272
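# A sketch of the configuration knobs driving the suppression logic above
# (matcher identifiers and values are illustrative; see ``_get_matcher_id``
# for the exact naming scheme):
#
#     c = get_ipython().Completer
#     c.suppress_competing_matchers = True        # let any matcher suppress the rest
#     c.suppress_competing_matchers = {
#         "IPCompleter.dict_key_matcher": False,  # never let this one suppress others
#     }
#     c.disable_matchers = ["IPCompleter.latex_name_matcher"]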
3273 @staticmethod
3273 @staticmethod
3274 def _deduplicate(
3274 def _deduplicate(
3275 matches: Sequence[AnyCompletion],
3275 matches: Sequence[AnyCompletion],
3276 ) -> Iterable[AnyCompletion]:
3276 ) -> Iterable[AnyCompletion]:
3277 filtered_matches: Dict[str, AnyCompletion] = {}
3277 filtered_matches: Dict[str, AnyCompletion] = {}
3278 for match in matches:
3278 for match in matches:
3279 text = match.text
3279 text = match.text
3280 if (
3280 if (
3281 text not in filtered_matches
3281 text not in filtered_matches
3282 or filtered_matches[text].type == _UNKNOWN_TYPE
3282 or filtered_matches[text].type == _UNKNOWN_TYPE
3283 ):
3283 ):
3284 filtered_matches[text] = match
3284 filtered_matches[text] = match
3285
3285
3286 return filtered_matches.values()
3286 return filtered_matches.values()
3287
3287
3288 @staticmethod
3288 @staticmethod
3289 def _sort(matches: Sequence[AnyCompletion]):
3289 def _sort(matches: Sequence[AnyCompletion]):
3290 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3290 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3291
3291
3292 @context_matcher()
3292 @context_matcher()
3293 def fwd_unicode_matcher(self, context: CompletionContext):
3293 def fwd_unicode_matcher(self, context: CompletionContext):
3294 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3294 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3295 # TODO: use `context.limit` to terminate early once we matched the maximum
3295 # TODO: use `context.limit` to terminate early once we matched the maximum
3296 # number that will be used downstream; can be added as an optional to
3296 # number that will be used downstream; can be added as an optional to
3297 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3297 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3298 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3298 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3299 return _convert_matcher_v1_result_to_v2(
3299 return _convert_matcher_v1_result_to_v2(
3300 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3300 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3301 )
3301 )
3302
3302
3303 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3303 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3304 """
3304 """
3305 Forward match a string starting with a backslash with a list of
3305 Forward match a string starting with a backslash with a list of
3306 potential Unicode completions.
3306 potential Unicode completions.
3307
3307
3308 Will compute list of Unicode character names on first call and cache it.
3308 Will compute list of Unicode character names on first call and cache it.
3309
3309
3310 .. deprecated:: 8.6
3310 .. deprecated:: 8.6
3311 You can use :meth:`fwd_unicode_matcher` instead.
3311 You can use :meth:`fwd_unicode_matcher` instead.
3312
3312
3313 Returns
3313 Returns
3314 -------
3314 -------
3315 A tuple with:
3315 A tuple with:
3316 - matched text (empty if no matches)
3316 - matched text (empty if no matches)
3317 - list of potential completions (empty tuple if no matches)
3317 - list of potential completions (empty tuple if no matches)
3318 """
3318 """
3319 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3319 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3320 # We could do a faster match using a Trie.
3320 # We could do a faster match using a Trie.
3321
3321
3322 # Using pygtrie, the following seems to work:
3322 # Using pygtrie, the following seems to work:
3323
3323
3324 # s = PrefixSet()
3324 # s = PrefixSet()
3325
3325
3326 # for c in range(0,0x10FFFF + 1):
3326 # for c in range(0,0x10FFFF + 1):
3327 # try:
3327 # try:
3328 # s.add(unicodedata.name(chr(c)))
3328 # s.add(unicodedata.name(chr(c)))
3329 # except ValueError:
3329 # except ValueError:
3330 # pass
3330 # pass
3331 # [''.join(k) for k in s.iter(prefix)]
3331 # [''.join(k) for k in s.iter(prefix)]
3332
3332
3333 # But it needs to be timed, and it adds an extra dependency.
3333 # But it needs to be timed, and it adds an extra dependency.
3334
3334
3335 slashpos = text.rfind('\\')
3335 slashpos = text.rfind('\\')
3336 # if text contains a backslash
3336 # if text contains a backslash
3337 if slashpos > -1:
3337 if slashpos > -1:
3338 # PERF: It's important that we don't access self._unicode_names
3338 # PERF: It's important that we don't access self._unicode_names
3339 # until we're inside this if-block. _unicode_names is lazily
3339 # until we're inside this if-block. _unicode_names is lazily
3340 # initialized, and it takes a user-noticeable amount of time to
3340 # initialized, and it takes a user-noticeable amount of time to
3341 # initialize it, so we don't want to initialize it unless we're
3341 # initialize it, so we don't want to initialize it unless we're
3342 # actually going to use it.
3342 # actually going to use it.
3343 s = text[slashpos + 1 :]
3343 s = text[slashpos + 1 :]
3344 sup = s.upper()
3344 sup = s.upper()
3345 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3345 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3346 if candidates:
3346 if candidates:
3347 return s, candidates
3347 return s, candidates
3348 candidates = [x for x in self.unicode_names if sup in x]
3348 candidates = [x for x in self.unicode_names if sup in x]
3349 if candidates:
3349 if candidates:
3350 return s, candidates
3350 return s, candidates
3351 splitsup = sup.split(" ")
3351 splitsup = sup.split(" ")
3352 candidates = [
3352 candidates = [
3353 x for x in self.unicode_names if all(u in x for u in splitsup)
3353 x for x in self.unicode_names if all(u in x for u in splitsup)
3354 ]
3354 ]
3355 if candidates:
3355 if candidates:
3356 return s, candidates
3356 return s, candidates
3357
3357
3358 return "", ()
3358 return "", ()
3359
3359
3360 # if text does not contain a backslash
3360 # if text does not contain a backslash
3361 else:
3361 else:
3362 return '', ()
3362 return '', ()
3363
3363
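# A rough illustration (the candidate list depends on the Unicode data shipped
# with the running Python interpreter):
#
#     frag, names = get_ipython().Completer.fwd_unicode_match("print('\\GREEK SM")
#     # frag == "GREEK SM"; names contains e.g. "GREEK SMALL LETTER ALPHA", ...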
3364 @property
3364 @property
3365 def unicode_names(self) -> List[str]:
3365 def unicode_names(self) -> List[str]:
3366 """List of names of unicode code points that can be completed.
3366 """List of names of unicode code points that can be completed.
3367
3367
3368 The list is lazily initialized on first access.
3368 The list is lazily initialized on first access.
3369 """
3369 """
3370 if self._unicode_names is None:
3370 if self._unicode_names is None:
3371 # Names are computed from precomputed ranges; scanning the whole
3371 # Names are computed from precomputed ranges; scanning the whole
3372 # code space here would be redundant and slow.
3372 # code space here would be redundant and slow.
3377 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3377 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3378
3378
3379 return self._unicode_names
3379 return self._unicode_names
3380
3380
3381 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3381 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3382 names = []
3382 names = []
3383 for start,stop in ranges:
3383 for start,stop in ranges:
3384 for c in range(start, stop) :
3384 for c in range(start, stop) :
3385 try:
3385 try:
3386 names.append(unicodedata.name(chr(c)))
3386 names.append(unicodedata.name(chr(c)))
3387 except ValueError:
3387 except ValueError:
3388 pass
3388 pass
3389 return names
3389 return names
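# A rough illustration of the range-based computation (the range below covers
# the ASCII uppercase letters and is for illustration only):
#
#     _unicode_name_compute([(0x41, 0x5B)])
#     # -> ['LATIN CAPITAL LETTER A', ..., 'LATIN CAPITAL LETTER Z']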
@@ -1,898 +1,895
1 from inspect import isclass, signature, Signature
1 from inspect import isclass, signature, Signature
2 from typing import (
2 from typing import (
3 Annotated,
3 Annotated,
4 AnyStr,
4 AnyStr,
5 Callable,
5 Callable,
6 Dict,
6 Dict,
7 Literal,
7 Literal,
8 NamedTuple,
8 NamedTuple,
9 NewType,
9 NewType,
10 Optional,
10 Optional,
11 Protocol,
11 Protocol,
12 Set,
12 Set,
13 Sequence,
13 Sequence,
14 Tuple,
14 Tuple,
15 Type,
15 Type,
16 TypeGuard,
16 TypeGuard,
17 Union,
17 Union,
18 get_args,
18 get_args,
19 get_origin,
19 get_origin,
20 is_typeddict,
20 is_typeddict,
21 )
21 )
22 import ast
22 import ast
23 import builtins
23 import builtins
24 import collections
24 import collections
25 import operator
25 import operator
26 import sys
26 import sys
27 from functools import cached_property
27 from functools import cached_property
28 from dataclasses import dataclass, field
28 from dataclasses import dataclass, field
29 from types import MethodDescriptorType, ModuleType
29 from types import MethodDescriptorType, ModuleType
30
30
31 from IPython.utils.decorators import undoc
31 from IPython.utils.decorators import undoc
32
32
33
33
34 if sys.version_info < (3, 11):
34 if sys.version_info < (3, 11):
35 from typing_extensions import Self, LiteralString
35 from typing_extensions import Self, LiteralString
36 else:
36 else:
37 from typing import Self, LiteralString
37 from typing import Self, LiteralString
38
38
39 if sys.version_info < (3, 12):
39 if sys.version_info < (3, 12):
40 from typing_extensions import TypeAliasType
40 from typing_extensions import TypeAliasType
41 else:
41 else:
42 from typing import TypeAliasType
42 from typing import TypeAliasType
43
43
44
44
45 @undoc
45 @undoc
46 class HasGetItem(Protocol):
46 class HasGetItem(Protocol):
47 def __getitem__(self, key) -> None:
47 def __getitem__(self, key) -> None: ...
48 ...
49
48
50
49
51 @undoc
50 @undoc
52 class InstancesHaveGetItem(Protocol):
51 class InstancesHaveGetItem(Protocol):
53 def __call__(self, *args, **kwargs) -> HasGetItem:
52 def __call__(self, *args, **kwargs) -> HasGetItem: ...
54 ...
55
53
56
54
57 @undoc
55 @undoc
58 class HasGetAttr(Protocol):
56 class HasGetAttr(Protocol):
59 def __getattr__(self, key) -> None:
57 def __getattr__(self, key) -> None: ...
60 ...
61
58
62
59
63 @undoc
60 @undoc
64 class DoesNotHaveGetAttr(Protocol):
61 class DoesNotHaveGetAttr(Protocol):
65 pass
62 pass
66
63
67
64
68 # By default `__getattr__` is not explicitly implemented on most objects
65 # By default `__getattr__` is not explicitly implemented on most objects
69 MayHaveGetattr = Union[HasGetAttr, DoesNotHaveGetAttr]
66 MayHaveGetattr = Union[HasGetAttr, DoesNotHaveGetAttr]
70
67
71
68
72 def _unbind_method(func: Callable) -> Union[Callable, None]:
69 def _unbind_method(func: Callable) -> Union[Callable, None]:
73 """Get unbound method for given bound method.
70 """Get unbound method for given bound method.
74
71
75 Returns None if the unbound method cannot be obtained, or if the method is already unbound.
72 Returns None if the unbound method cannot be obtained, or if the method is already unbound.
76 """
73 """
77 owner = getattr(func, "__self__", None)
74 owner = getattr(func, "__self__", None)
78 owner_class = type(owner)
75 owner_class = type(owner)
79 name = getattr(func, "__name__", None)
76 name = getattr(func, "__name__", None)
80 instance_dict_overrides = getattr(owner, "__dict__", None)
77 instance_dict_overrides = getattr(owner, "__dict__", None)
81 if (
78 if (
82 owner is not None
79 owner is not None
83 and name
80 and name
84 and (
81 and (
85 not instance_dict_overrides
82 not instance_dict_overrides
86 or (instance_dict_overrides and name not in instance_dict_overrides)
83 or (instance_dict_overrides and name not in instance_dict_overrides)
87 )
84 )
88 ):
85 ):
89 return getattr(owner_class, name)
86 return getattr(owner_class, name)
90 return None
87 return None
91
88
92
89
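# A rough illustration (``Foo`` is a throwaway example class):
#
#     class Foo:
#         def bar(self): ...
#
#     f = Foo()
#     _unbind_method(f.bar) is Foo.bar     # bound -> unbound: True
#     _unbind_method(Foo.bar) is None      # plain function has no __self__: True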
93 @undoc
90 @undoc
94 @dataclass
91 @dataclass
95 class EvaluationPolicy:
92 class EvaluationPolicy:
96 """Definition of evaluation policy."""
93 """Definition of evaluation policy."""
97
94
98 allow_locals_access: bool = False
95 allow_locals_access: bool = False
99 allow_globals_access: bool = False
96 allow_globals_access: bool = False
100 allow_item_access: bool = False
97 allow_item_access: bool = False
101 allow_attr_access: bool = False
98 allow_attr_access: bool = False
102 allow_builtins_access: bool = False
99 allow_builtins_access: bool = False
103 allow_all_operations: bool = False
100 allow_all_operations: bool = False
104 allow_any_calls: bool = False
101 allow_any_calls: bool = False
105 allowed_calls: Set[Callable] = field(default_factory=set)
102 allowed_calls: Set[Callable] = field(default_factory=set)
106
103
107 def can_get_item(self, value, item):
104 def can_get_item(self, value, item):
108 return self.allow_item_access
105 return self.allow_item_access
109
106
110 def can_get_attr(self, value, attr):
107 def can_get_attr(self, value, attr):
111 return self.allow_attr_access
108 return self.allow_attr_access
112
109
113 def can_operate(self, dunders: Tuple[str, ...], a, b=None):
110 def can_operate(self, dunders: Tuple[str, ...], a, b=None):
114 if self.allow_all_operations:
111 if self.allow_all_operations:
115 return True
112 return True
116
113
117 def can_call(self, func):
114 def can_call(self, func):
118 if self.allow_any_calls:
115 if self.allow_any_calls:
119 return True
116 return True
120
117
121 if func in self.allowed_calls:
118 if func in self.allowed_calls:
122 return True
119 return True
123
120
124 owner_method = _unbind_method(func)
121 owner_method = _unbind_method(func)
125
122
126 if owner_method and owner_method in self.allowed_calls:
123 if owner_method and owner_method in self.allowed_calls:
127 return True
124 return True
128
125
129
126
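A hedged usage sketch: with the default flags everything is denied, and each flag or allow-list opts a single capability back in.

.. code::

    from IPython.core.guarded_eval import EvaluationPolicy

    policy = EvaluationPolicy(allow_item_access=True, allowed_calls={len})
    assert policy.can_get_item([1, 2], 0)           # enabled by allow_item_access
    assert not policy.can_get_attr([1, 2], "sort")  # attribute access still denied
    assert policy.can_call(len)                     # explicitly allow-listed
    assert not policy.can_call(print)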
130 def _get_external(module_name: str, access_path: Sequence[str]):
127 def _get_external(module_name: str, access_path: Sequence[str]):
131 """Get value from external module given a dotted access path.
128 """Get value from external module given a dotted access path.
132
129
133 Raises:
130 Raises:
134 * `KeyError` if the module was removed or not found, and
131 * `KeyError` if the module was removed or not found, and
135 * `AttributeError` if access path does not match an exported object
132 * `AttributeError` if access path does not match an exported object
136 """
133 """
137 member_type = sys.modules[module_name]
134 member_type = sys.modules[module_name]
138 for attr in access_path:
135 for attr in access_path:
139 member_type = getattr(member_type, attr)
136 member_type = getattr(member_type, attr)
140 return member_type
137 return member_type
141
138
142
139
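For example (illustrative only), the dotted access path is resolved attribute by attribute on an already-imported module:

.. code::

    import collections
    from IPython.core.guarded_eval import _get_external

    assert _get_external("collections", ["OrderedDict"]) is collections.OrderedDict
    try:
        _get_external("collections", ["NoSuchName"])
    except AttributeError:
        pass  # missing attribute on the access path, as documented above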
143 def _has_original_dunder_external(
140 def _has_original_dunder_external(
144 value,
141 value,
145 module_name: str,
142 module_name: str,
146 access_path: Sequence[str],
143 access_path: Sequence[str],
147 method_name: str,
144 method_name: str,
148 ):
145 ):
149 if module_name not in sys.modules:
146 if module_name not in sys.modules:
150 # LBYL (look before you leap) as it is faster
147 # LBYL (look before you leap) as it is faster
151 return False
148 return False
152 try:
149 try:
153 member_type = _get_external(module_name, access_path)
150 member_type = _get_external(module_name, access_path)
154 value_type = type(value)
151 value_type = type(value)
155 if type(value) == member_type:
152 if type(value) == member_type:
156 return True
153 return True
157 if method_name == "__getattribute__":
154 if method_name == "__getattribute__":
158 # we have to short-circuit here due to an unresolved issue in
155 # we have to short-circuit here due to an unresolved issue in
159 # `isinstance` implementation: https://bugs.python.org/issue32683
156 # `isinstance` implementation: https://bugs.python.org/issue32683
160 return False
157 return False
161 if isinstance(value, member_type):
158 if isinstance(value, member_type):
162 method = getattr(value_type, method_name, None)
159 method = getattr(value_type, method_name, None)
163 member_method = getattr(member_type, method_name, None)
160 member_method = getattr(member_type, method_name, None)
164 if member_method == method:
161 if member_method == method:
165 return True
162 return True
166 except (AttributeError, KeyError):
163 except (AttributeError, KeyError):
167 return False
164 return False
168
165
169
166
170 def _has_original_dunder(
167 def _has_original_dunder(
171 value, allowed_types, allowed_methods, allowed_external, method_name
168 value, allowed_types, allowed_methods, allowed_external, method_name
172 ):
169 ):
173 # note: Python ignores `__getattr__`/`__getitem__` on instances,
170 # note: Python ignores `__getattr__`/`__getitem__` on instances,
174 # we only need to check at class level
171 # we only need to check at class level
175 value_type = type(value)
172 value_type = type(value)
176
173
177 # strict type check passes → no need to check method
174 # strict type check passes → no need to check method
178 if value_type in allowed_types:
175 if value_type in allowed_types:
179 return True
176 return True
180
177
181 method = getattr(value_type, method_name, None)
178 method = getattr(value_type, method_name, None)
182
179
183 if method is None:
180 if method is None:
184 return None
181 return None
185
182
186 if method in allowed_methods:
183 if method in allowed_methods:
187 return True
184 return True
188
185
189 for module_name, *access_path in allowed_external:
186 for module_name, *access_path in allowed_external:
190 if _has_original_dunder_external(value, module_name, access_path, method_name):
187 if _has_original_dunder_external(value, module_name, access_path, method_name):
191 return True
188 return True
192
189
193 return False
190 return False
194
191
195
192
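A minimal sketch of the check (illustrative, ``NoisyDict`` is made up): an exact allow-listed type passes immediately, while a subclass overriding the dunder is rejected.

.. code::

    from IPython.core.guarded_eval import _has_original_dunder

    class NoisyDict(dict):
        def __getitem__(self, key):          # overridden dunder
            return super().__getitem__(key)

    common = dict(allowed_methods={dict.__getitem__}, allowed_external=set(),
                  method_name="__getitem__")
    assert _has_original_dunder({}, allowed_types={dict}, **common)
    assert not _has_original_dunder(NoisyDict(), allowed_types={dict}, **common)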
196 @undoc
193 @undoc
197 @dataclass
194 @dataclass
198 class SelectivePolicy(EvaluationPolicy):
195 class SelectivePolicy(EvaluationPolicy):
199 allowed_getitem: Set[InstancesHaveGetItem] = field(default_factory=set)
196 allowed_getitem: Set[InstancesHaveGetItem] = field(default_factory=set)
200 allowed_getitem_external: Set[Tuple[str, ...]] = field(default_factory=set)
197 allowed_getitem_external: Set[Tuple[str, ...]] = field(default_factory=set)
201
198
202 allowed_getattr: Set[MayHaveGetattr] = field(default_factory=set)
199 allowed_getattr: Set[MayHaveGetattr] = field(default_factory=set)
203 allowed_getattr_external: Set[Tuple[str, ...]] = field(default_factory=set)
200 allowed_getattr_external: Set[Tuple[str, ...]] = field(default_factory=set)
204
201
205 allowed_operations: Set = field(default_factory=set)
202 allowed_operations: Set = field(default_factory=set)
206 allowed_operations_external: Set[Tuple[str, ...]] = field(default_factory=set)
203 allowed_operations_external: Set[Tuple[str, ...]] = field(default_factory=set)
207
204
208 _operation_methods_cache: Dict[str, Set[Callable]] = field(
205 _operation_methods_cache: Dict[str, Set[Callable]] = field(
209 default_factory=dict, init=False
206 default_factory=dict, init=False
210 )
207 )
211
208
212 def can_get_attr(self, value, attr):
209 def can_get_attr(self, value, attr):
213 has_original_attribute = _has_original_dunder(
210 has_original_attribute = _has_original_dunder(
214 value,
211 value,
215 allowed_types=self.allowed_getattr,
212 allowed_types=self.allowed_getattr,
216 allowed_methods=self._getattribute_methods,
213 allowed_methods=self._getattribute_methods,
217 allowed_external=self.allowed_getattr_external,
214 allowed_external=self.allowed_getattr_external,
218 method_name="__getattribute__",
215 method_name="__getattribute__",
219 )
216 )
220 has_original_attr = _has_original_dunder(
217 has_original_attr = _has_original_dunder(
221 value,
218 value,
222 allowed_types=self.allowed_getattr,
219 allowed_types=self.allowed_getattr,
223 allowed_methods=self._getattr_methods,
220 allowed_methods=self._getattr_methods,
224 allowed_external=self.allowed_getattr_external,
221 allowed_external=self.allowed_getattr_external,
225 method_name="__getattr__",
222 method_name="__getattr__",
226 )
223 )
227
224
228 accept = False
225 accept = False
229
226
230 # Many objects do not have `__getattr__`; this is fine.
227 # Many objects do not have `__getattr__`; this is fine.
231 if has_original_attr is None and has_original_attribute:
228 if has_original_attr is None and has_original_attribute:
232 accept = True
229 accept = True
233 else:
230 else:
234 # Accept objects without modifications to `__getattr__` and `__getattribute__`
231 # Accept objects without modifications to `__getattr__` and `__getattribute__`
235 accept = has_original_attr and has_original_attribute
232 accept = has_original_attr and has_original_attribute
236
233
237 if accept:
234 if accept:
238 # We still need to check for overridden properties.
235 # We still need to check for overridden properties.
239
236
240 value_class = type(value)
237 value_class = type(value)
241 if not hasattr(value_class, attr):
238 if not hasattr(value_class, attr):
242 return True
239 return True
243
240
244 class_attr_val = getattr(value_class, attr)
241 class_attr_val = getattr(value_class, attr)
245 is_property = isinstance(class_attr_val, property)
242 is_property = isinstance(class_attr_val, property)
246
243
247 if not is_property:
244 if not is_property:
248 return True
245 return True
249
246
250 # Properties in allowed types are ok (although we do not include any
247 # Properties in allowed types are ok (although we do not include any
251 # properties in our default allow list currently).
248 # properties in our default allow list currently).
252 if type(value) in self.allowed_getattr:
249 if type(value) in self.allowed_getattr:
253 return True # pragma: no cover
250 return True # pragma: no cover
254
251
255 # Properties in subclasses of allowed types may be ok if not changed
252 # Properties in subclasses of allowed types may be ok if not changed
256 for module_name, *access_path in self.allowed_getattr_external:
253 for module_name, *access_path in self.allowed_getattr_external:
257 try:
254 try:
258 external_class = _get_external(module_name, access_path)
255 external_class = _get_external(module_name, access_path)
259 external_class_attr_val = getattr(external_class, attr)
256 external_class_attr_val = getattr(external_class, attr)
260 except (KeyError, AttributeError):
257 except (KeyError, AttributeError):
261 return False # pragma: no cover
258 return False # pragma: no cover
262 return class_attr_val == external_class_attr_val
259 return class_attr_val == external_class_attr_val
263
260
264 return False
261 return False
265
262
266 def can_get_item(self, value, item):
263 def can_get_item(self, value, item):
267 """Allow accessing `__getitem__` of allow-listed instances, provided it was not modified."""
264 """Allow accessing `__getitem__` of allow-listed instances, provided it was not modified."""
268 return _has_original_dunder(
265 return _has_original_dunder(
269 value,
266 value,
270 allowed_types=self.allowed_getitem,
267 allowed_types=self.allowed_getitem,
271 allowed_methods=self._getitem_methods,
268 allowed_methods=self._getitem_methods,
272 allowed_external=self.allowed_getitem_external,
269 allowed_external=self.allowed_getitem_external,
273 method_name="__getitem__",
270 method_name="__getitem__",
274 )
271 )
275
272
276 def can_operate(self, dunders: Tuple[str, ...], a, b=None):
273 def can_operate(self, dunders: Tuple[str, ...], a, b=None):
277 objects = [a]
274 objects = [a]
278 if b is not None:
275 if b is not None:
279 objects.append(b)
276 objects.append(b)
280 return all(
277 return all(
281 [
278 [
282 _has_original_dunder(
279 _has_original_dunder(
283 obj,
280 obj,
284 allowed_types=self.allowed_operations,
281 allowed_types=self.allowed_operations,
285 allowed_methods=self._operator_dunder_methods(dunder),
282 allowed_methods=self._operator_dunder_methods(dunder),
286 allowed_external=self.allowed_operations_external,
283 allowed_external=self.allowed_operations_external,
287 method_name=dunder,
284 method_name=dunder,
288 )
285 )
289 for dunder in dunders
286 for dunder in dunders
290 for obj in objects
287 for obj in objects
291 ]
288 ]
292 )
289 )
293
290
294 def _operator_dunder_methods(self, dunder: str) -> Set[Callable]:
291 def _operator_dunder_methods(self, dunder: str) -> Set[Callable]:
295 if dunder not in self._operation_methods_cache:
292 if dunder not in self._operation_methods_cache:
296 self._operation_methods_cache[dunder] = self._safe_get_methods(
293 self._operation_methods_cache[dunder] = self._safe_get_methods(
297 self.allowed_operations, dunder
294 self.allowed_operations, dunder
298 )
295 )
299 return self._operation_methods_cache[dunder]
296 return self._operation_methods_cache[dunder]
300
297
301 @cached_property
298 @cached_property
302 def _getitem_methods(self) -> Set[Callable]:
299 def _getitem_methods(self) -> Set[Callable]:
303 return self._safe_get_methods(self.allowed_getitem, "__getitem__")
300 return self._safe_get_methods(self.allowed_getitem, "__getitem__")
304
301
305 @cached_property
302 @cached_property
306 def _getattr_methods(self) -> Set[Callable]:
303 def _getattr_methods(self) -> Set[Callable]:
307 return self._safe_get_methods(self.allowed_getattr, "__getattr__")
304 return self._safe_get_methods(self.allowed_getattr, "__getattr__")
308
305
309 @cached_property
306 @cached_property
310 def _getattribute_methods(self) -> Set[Callable]:
307 def _getattribute_methods(self) -> Set[Callable]:
311 return self._safe_get_methods(self.allowed_getattr, "__getattribute__")
308 return self._safe_get_methods(self.allowed_getattr, "__getattribute__")
312
309
313 def _safe_get_methods(self, classes, name) -> Set[Callable]:
310 def _safe_get_methods(self, classes, name) -> Set[Callable]:
314 return {
311 return {
315 method
312 method
316 for class_ in classes
313 for class_ in classes
317 for method in [getattr(class_, name, None)]
314 for method in [getattr(class_, name, None)]
318 if method
315 if method
319 }
316 }
320
317
321
318
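A short sketch of the selective behaviour (same importability assumption as above; ``TrickyDict`` is made up): built-in ``dict`` instances pass, but a subclass with a modified ``__getitem__`` is rejected.

.. code::

    from IPython.core.guarded_eval import SelectivePolicy

    policy = SelectivePolicy(allowed_getitem={dict}, allowed_getattr={dict})
    assert policy.can_get_item({"a": 1}, "a")
    assert policy.can_get_attr({"a": 1}, "keys")

    class TrickyDict(dict):
        def __getitem__(self, key):
            return "surprise"

    assert not policy.can_get_item(TrickyDict(), "a")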
322 class _DummyNamedTuple(NamedTuple):
319 class _DummyNamedTuple(NamedTuple):
323 """Used internally to retrieve methods of named tuple instance."""
320 """Used internally to retrieve methods of named tuple instance."""
324
321
325
322
326 class EvaluationContext(NamedTuple):
323 class EvaluationContext(NamedTuple):
327 #: Local namespace
324 #: Local namespace
328 locals: dict
325 locals: dict
329 #: Global namespace
326 #: Global namespace
330 globals: dict
327 globals: dict
331 #: Evaluation policy identifier
328 #: Evaluation policy identifier
332 evaluation: Literal[
329 evaluation: Literal["forbidden", "minimal", "limited", "unsafe", "dangerous"] = (
333 "forbidden", "minimal", "limited", "unsafe", "dangerous"
330 "forbidden"
334 ] = "forbidden"
331 )
335 #: Whether the evaluation of code takes place inside of a subscript.
332 #: Whether the evaluation of code takes place inside of a subscript.
336 #: Useful for evaluating ``:-1, 'col'`` in ``df[:-1, 'col']``.
333 #: Useful for evaluating ``:-1, 'col'`` in ``df[:-1, 'col']``.
337 in_subscript: bool = False
334 in_subscript: bool = False
338
335
339
336
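Being a ``NamedTuple``, the context is cheap to construct and to derive variants from, e.g. (illustrative):

.. code::

    from IPython.core.guarded_eval import EvaluationContext

    context = EvaluationContext(locals={"x": [1, 2, 3]}, globals={}, evaluation="limited")
    subscript_context = context._replace(in_subscript=True)   # immutable update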
340 class _IdentitySubscript:
337 class _IdentitySubscript:
341 """Returns the key itself when item is requested via subscript."""
338 """Returns the key itself when item is requested via subscript."""
342
339
343 def __getitem__(self, key):
340 def __getitem__(self, key):
344 return key
341 return key
345
342
346
343
347 IDENTITY_SUBSCRIPT = _IdentitySubscript()
344 IDENTITY_SUBSCRIPT = _IdentitySubscript()
348 SUBSCRIPT_MARKER = "__SUBSCRIPT_SENTINEL__"
345 SUBSCRIPT_MARKER = "__SUBSCRIPT_SENTINEL__"
349 UNKNOWN_SIGNATURE = Signature()
346 UNKNOWN_SIGNATURE = Signature()
350 NOT_EVALUATED = object()
347 NOT_EVALUATED = object()
351
348
352
349
353 class GuardRejection(Exception):
350 class GuardRejection(Exception):
354 """Exception raised when guard rejects evaluation attempt."""
351 """Exception raised when guard rejects evaluation attempt."""
355
352
356 pass
353 pass
357
354
358
355
359 def guarded_eval(code: str, context: EvaluationContext):
356 def guarded_eval(code: str, context: EvaluationContext):
360 """Evaluate provided code in the evaluation context.
357 """Evaluate provided code in the evaluation context.
361
358
362 If evaluation policy given by context is set to ``forbidden``
359 If evaluation policy given by context is set to ``forbidden``
363 no evaluation will be performed; if it is set to ``dangerous``
360 no evaluation will be performed; if it is set to ``dangerous``
364 standard :func:`eval` will be used; finally, for any other,
361 standard :func:`eval` will be used; finally, for any other,
365 policy :func:`eval_node` will be called on parsed AST.
362 policy :func:`eval_node` will be called on parsed AST.
366 """
363 """
367 locals_ = context.locals
364 locals_ = context.locals
368
365
369 if context.evaluation == "forbidden":
366 if context.evaluation == "forbidden":
370 raise GuardRejection("Forbidden mode")
367 raise GuardRejection("Forbidden mode")
371
368
372 # note: not using `ast.literal_eval` as it does not implement
369 # note: not using `ast.literal_eval` as it does not implement
373 # getitem at all; for example, it fails on a simple `[0][1]`
370 # getitem at all; for example, it fails on a simple `[0][1]`
374
371
375 if context.in_subscript:
372 if context.in_subscript:
376 # the syntactic sugar for slices (:) is only available in subscripts
373 # the syntactic sugar for slices (:) is only available in subscripts
377 # so we need to trick the ast parser into thinking that we have
374 # so we need to trick the ast parser into thinking that we have
378 # a subscript, but we need to be able to later recognise that we did
375 # a subscript, but we need to be able to later recognise that we did
379 # it so we can ignore the actual __getitem__ operation
376 # it so we can ignore the actual __getitem__ operation
380 if not code:
377 if not code:
381 return tuple()
378 return tuple()
382 locals_ = locals_.copy()
379 locals_ = locals_.copy()
383 locals_[SUBSCRIPT_MARKER] = IDENTITY_SUBSCRIPT
380 locals_[SUBSCRIPT_MARKER] = IDENTITY_SUBSCRIPT
384 code = SUBSCRIPT_MARKER + "[" + code + "]"
381 code = SUBSCRIPT_MARKER + "[" + code + "]"
385 context = EvaluationContext(**{**context._asdict(), **{"locals": locals_}})
382 context = EvaluationContext(**{**context._asdict(), **{"locals": locals_}})
386
383
387 if context.evaluation == "dangerous":
384 if context.evaluation == "dangerous":
388 return eval(code, context.globals, context.locals)
385 return eval(code, context.globals, context.locals)
389
386
390 expression = ast.parse(code, mode="eval")
387 expression = ast.parse(code, mode="eval")
391
388
392 return eval_node(expression, context)
389 return eval_node(expression, context)
393
390
394
391
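A hedged end-to-end example of the ``limited`` policy (the ``data`` name is made up for illustration): plain container indexing is evaluated, while a mutating method call is rejected.

.. code::

    from IPython.core.guarded_eval import EvaluationContext, GuardRejection, guarded_eval

    context = EvaluationContext(
        locals={"data": {"x": [1, 2, 3]}}, globals={}, evaluation="limited"
    )
    assert guarded_eval("data['x'][-1]", context) == 3
    try:
        guarded_eval("data.pop('x')", context)   # dict.pop is not allow-listed
    except GuardRejection:
        pass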
395 BINARY_OP_DUNDERS: Dict[Type[ast.operator], Tuple[str]] = {
392 BINARY_OP_DUNDERS: Dict[Type[ast.operator], Tuple[str]] = {
396 ast.Add: ("__add__",),
393 ast.Add: ("__add__",),
397 ast.Sub: ("__sub__",),
394 ast.Sub: ("__sub__",),
398 ast.Mult: ("__mul__",),
395 ast.Mult: ("__mul__",),
399 ast.Div: ("__truediv__",),
396 ast.Div: ("__truediv__",),
400 ast.FloorDiv: ("__floordiv__",),
397 ast.FloorDiv: ("__floordiv__",),
401 ast.Mod: ("__mod__",),
398 ast.Mod: ("__mod__",),
402 ast.Pow: ("__pow__",),
399 ast.Pow: ("__pow__",),
403 ast.LShift: ("__lshift__",),
400 ast.LShift: ("__lshift__",),
404 ast.RShift: ("__rshift__",),
401 ast.RShift: ("__rshift__",),
405 ast.BitOr: ("__or__",),
402 ast.BitOr: ("__or__",),
406 ast.BitXor: ("__xor__",),
403 ast.BitXor: ("__xor__",),
407 ast.BitAnd: ("__and__",),
404 ast.BitAnd: ("__and__",),
408 ast.MatMult: ("__matmul__",),
405 ast.MatMult: ("__matmul__",),
409 }
406 }
410
407
411 COMP_OP_DUNDERS: Dict[Type[ast.cmpop], Tuple[str, ...]] = {
408 COMP_OP_DUNDERS: Dict[Type[ast.cmpop], Tuple[str, ...]] = {
412 ast.Eq: ("__eq__",),
409 ast.Eq: ("__eq__",),
413 ast.NotEq: ("__ne__", "__eq__"),
410 ast.NotEq: ("__ne__", "__eq__"),
414 ast.Lt: ("__lt__", "__gt__"),
411 ast.Lt: ("__lt__", "__gt__"),
415 ast.LtE: ("__le__", "__ge__"),
412 ast.LtE: ("__le__", "__ge__"),
416 ast.Gt: ("__gt__", "__lt__"),
413 ast.Gt: ("__gt__", "__lt__"),
417 ast.GtE: ("__ge__", "__le__"),
414 ast.GtE: ("__ge__", "__le__"),
418 ast.In: ("__contains__",),
415 ast.In: ("__contains__",),
419 # Note: ast.Is, ast.IsNot, ast.NotIn are handled specially
416 # Note: ast.Is, ast.IsNot, ast.NotIn are handled specially
420 }
417 }
421
418
422 UNARY_OP_DUNDERS: Dict[Type[ast.unaryop], Tuple[str, ...]] = {
419 UNARY_OP_DUNDERS: Dict[Type[ast.unaryop], Tuple[str, ...]] = {
423 ast.USub: ("__neg__",),
420 ast.USub: ("__neg__",),
424 ast.UAdd: ("__pos__",),
421 ast.UAdd: ("__pos__",),
425 # we have to check both __inv__ and __invert__!
422 # we have to check both __inv__ and __invert__!
426 ast.Invert: ("__invert__", "__inv__"),
423 ast.Invert: ("__invert__", "__inv__"),
427 ast.Not: ("__not__",),
424 ast.Not: ("__not__",),
428 }
425 }
429
426
430
427
431 class ImpersonatingDuck:
428 class ImpersonatingDuck:
432 """A dummy class used to create objects of other classes without calling their ``__init__``"""
429 """A dummy class used to create objects of other classes without calling their ``__init__``"""
433
430
434 # no-op: override __class__ to impersonate
431 # no-op: override __class__ to impersonate
435
432
436
433
437 class _Duck:
434 class _Duck:
438 """A dummy class used to create objects pretending to have given attributes"""
435 """A dummy class used to create objects pretending to have given attributes"""
439
436
440 def __init__(self, attributes: Optional[dict] = None, items: Optional[dict] = None):
437 def __init__(self, attributes: Optional[dict] = None, items: Optional[dict] = None):
441 self.attributes = attributes or {}
438 self.attributes = attributes or {}
442 self.items = items or {}
439 self.items = items or {}
443
440
444 def __getattr__(self, attr: str):
441 def __getattr__(self, attr: str):
445 return self.attributes[attr]
442 return self.attributes[attr]
446
443
447 def __hasattr__(self, attr: str):
444 def __hasattr__(self, attr: str):
448 return attr in self.attributes
445 return attr in self.attributes
449
446
450 def __dir__(self):
447 def __dir__(self):
451 return [*dir(super), *self.attributes]
448 return [*dir(super), *self.attributes]
452
449
453 def __getitem__(self, key: str):
450 def __getitem__(self, key: str):
454 return self.items[key]
451 return self.items[key]
455
452
456 def __hasitem__(self, key: str):
453 def __hasitem__(self, key: str):
457 return self.items[key]
454 return self.items[key]
458
455
459 def _ipython_key_completions_(self):
456 def _ipython_key_completions_(self):
460 return self.items.keys()
457 return self.items.keys()
461
458
462
459
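For illustration, a duck simply serves whatever attributes and items it was given:

.. code::

    from IPython.core.guarded_eval import _Duck

    duck = _Duck(attributes={"shape": (2, 2)}, items={"col": int})
    assert duck.shape == (2, 2)
    assert duck["col"] is int
    assert "shape" in dir(duck)   # advertised for completion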
463 def _find_dunder(node_op, dunders) -> Union[Tuple[str, ...], None]:
460 def _find_dunder(node_op, dunders) -> Union[Tuple[str, ...], None]:
464 dunder = None
461 dunder = None
465 for op, candidate_dunder in dunders.items():
462 for op, candidate_dunder in dunders.items():
466 if isinstance(node_op, op):
463 if isinstance(node_op, op):
467 dunder = candidate_dunder
464 dunder = candidate_dunder
468 return dunder
465 return dunder
469
466
470
467
471 def eval_node(node: Union[ast.AST, None], context: EvaluationContext):
468 def eval_node(node: Union[ast.AST, None], context: EvaluationContext):
472 """Evaluate AST node in provided context.
469 """Evaluate AST node in provided context.
473
470
474 Applies evaluation restrictions defined in the context. Currently does not support evaluation of functions with keyword arguments.
471 Applies evaluation restrictions defined in the context. Currently does not support evaluation of functions with keyword arguments.
475
472
476 Does not evaluate actions that always have side effects:
473 Does not evaluate actions that always have side effects:
477
474
478 - class definitions (``class sth: ...``)
475 - class definitions (``class sth: ...``)
479 - function definitions (``def sth: ...``)
476 - function definitions (``def sth: ...``)
480 - variable assignments (``x = 1``)
477 - variable assignments (``x = 1``)
481 - augmented assignments (``x += 1``)
478 - augmented assignments (``x += 1``)
482 - deletions (``del x``)
479 - deletions (``del x``)
483
480
484 Does not evaluate operations which do not return values:
481 Does not evaluate operations which do not return values:
485
482
486 - assertions (``assert x``)
483 - assertions (``assert x``)
487 - pass (``pass``)
484 - pass (``pass``)
488 - imports (``import x``)
485 - imports (``import x``)
489 - control flow:
486 - control flow:
490
487
491 - conditionals (``if x:``) except for ternary IfExp (``a if x else b``)
488 - conditionals (``if x:``) except for ternary IfExp (``a if x else b``)
492 - loops (``for`` and ``while``)
489 - loops (``for`` and ``while``)
493 - exception handling
490 - exception handling
494
491
495 The purpose of this function is to guard against unwanted side-effects;
492 The purpose of this function is to guard against unwanted side-effects;
496 it does not give guarantees on protection from malicious code execution.
493 it does not give guarantees on protection from malicious code execution.
497 """
494 """
498 policy = EVALUATION_POLICIES[context.evaluation]
495 policy = EVALUATION_POLICIES[context.evaluation]
499 if node is None:
496 if node is None:
500 return None
497 return None
501 if isinstance(node, ast.Expression):
498 if isinstance(node, ast.Expression):
502 return eval_node(node.body, context)
499 return eval_node(node.body, context)
503 if isinstance(node, ast.BinOp):
500 if isinstance(node, ast.BinOp):
504 left = eval_node(node.left, context)
501 left = eval_node(node.left, context)
505 right = eval_node(node.right, context)
502 right = eval_node(node.right, context)
506 dunders = _find_dunder(node.op, BINARY_OP_DUNDERS)
503 dunders = _find_dunder(node.op, BINARY_OP_DUNDERS)
507 if dunders:
504 if dunders:
508 if policy.can_operate(dunders, left, right):
505 if policy.can_operate(dunders, left, right):
509 return getattr(left, dunders[0])(right)
506 return getattr(left, dunders[0])(right)
510 else:
507 else:
511 raise GuardRejection(
508 raise GuardRejection(
512 f"Operation (`{dunders}`) for",
509 f"Operation (`{dunders}`) for",
513 type(left),
510 type(left),
514 f"not allowed in {context.evaluation} mode",
511 f"not allowed in {context.evaluation} mode",
515 )
512 )
516 if isinstance(node, ast.Compare):
513 if isinstance(node, ast.Compare):
517 left = eval_node(node.left, context)
514 left = eval_node(node.left, context)
518 all_true = True
515 all_true = True
519 negate = False
516 negate = False
520 for op, right in zip(node.ops, node.comparators):
517 for op, right in zip(node.ops, node.comparators):
521 right = eval_node(right, context)
518 right = eval_node(right, context)
522 dunder = None
519 dunder = None
523 dunders = _find_dunder(op, COMP_OP_DUNDERS)
520 dunders = _find_dunder(op, COMP_OP_DUNDERS)
524 if not dunders:
521 if not dunders:
525 if isinstance(op, ast.NotIn):
522 if isinstance(op, ast.NotIn):
526 dunders = COMP_OP_DUNDERS[ast.In]
523 dunders = COMP_OP_DUNDERS[ast.In]
527 negate = True
524 negate = True
528 if isinstance(op, ast.Is):
525 if isinstance(op, ast.Is):
529 dunder = "is_"
526 dunder = "is_"
530 if isinstance(op, ast.IsNot):
527 if isinstance(op, ast.IsNot):
531 dunder = "is_"
528 dunder = "is_"
532 negate = True
529 negate = True
533 if not dunder and dunders:
530 if not dunder and dunders:
534 dunder = dunders[0]
531 dunder = dunders[0]
535 if dunder:
532 if dunder:
536 a, b = (right, left) if dunder == "__contains__" else (left, right)
533 a, b = (right, left) if dunder == "__contains__" else (left, right)
537 if dunder == "is_" or dunders and policy.can_operate(dunders, a, b):
534 if dunder == "is_" or dunders and policy.can_operate(dunders, a, b):
538 result = getattr(operator, dunder)(a, b)
535 result = getattr(operator, dunder)(a, b)
539 if negate:
536 if negate:
540 result = not result
537 result = not result
541 if not result:
538 if not result:
542 all_true = False
539 all_true = False
543 left = right
540 left = right
544 else:
541 else:
545 raise GuardRejection(
542 raise GuardRejection(
546 f"Comparison (`{dunder}`) for",
543 f"Comparison (`{dunder}`) for",
547 type(left),
544 type(left),
548 f"not allowed in {context.evaluation} mode",
545 f"not allowed in {context.evaluation} mode",
549 )
546 )
550 else:
547 else:
551 raise ValueError(
548 raise ValueError(
552 f"Comparison `{dunder}` not supported"
549 f"Comparison `{dunder}` not supported"
553 ) # pragma: no cover
550 ) # pragma: no cover
554 return all_true
551 return all_true
555 if isinstance(node, ast.Constant):
552 if isinstance(node, ast.Constant):
556 return node.value
553 return node.value
557 if isinstance(node, ast.Tuple):
554 if isinstance(node, ast.Tuple):
558 return tuple(eval_node(e, context) for e in node.elts)
555 return tuple(eval_node(e, context) for e in node.elts)
559 if isinstance(node, ast.List):
556 if isinstance(node, ast.List):
560 return [eval_node(e, context) for e in node.elts]
557 return [eval_node(e, context) for e in node.elts]
561 if isinstance(node, ast.Set):
558 if isinstance(node, ast.Set):
562 return {eval_node(e, context) for e in node.elts}
559 return {eval_node(e, context) for e in node.elts}
563 if isinstance(node, ast.Dict):
560 if isinstance(node, ast.Dict):
564 return dict(
561 return dict(
565 zip(
562 zip(
566 [eval_node(k, context) for k in node.keys],
563 [eval_node(k, context) for k in node.keys],
567 [eval_node(v, context) for v in node.values],
564 [eval_node(v, context) for v in node.values],
568 )
565 )
569 )
566 )
570 if isinstance(node, ast.Slice):
567 if isinstance(node, ast.Slice):
571 return slice(
568 return slice(
572 eval_node(node.lower, context),
569 eval_node(node.lower, context),
573 eval_node(node.upper, context),
570 eval_node(node.upper, context),
574 eval_node(node.step, context),
571 eval_node(node.step, context),
575 )
572 )
576 if isinstance(node, ast.UnaryOp):
573 if isinstance(node, ast.UnaryOp):
577 value = eval_node(node.operand, context)
574 value = eval_node(node.operand, context)
578 dunders = _find_dunder(node.op, UNARY_OP_DUNDERS)
575 dunders = _find_dunder(node.op, UNARY_OP_DUNDERS)
579 if dunders:
576 if dunders:
580 if policy.can_operate(dunders, value):
577 if policy.can_operate(dunders, value):
581 return getattr(value, dunders[0])()
578 return getattr(value, dunders[0])()
582 else:
579 else:
583 raise GuardRejection(
580 raise GuardRejection(
584 f"Operation (`{dunders}`) for",
581 f"Operation (`{dunders}`) for",
585 type(value),
582 type(value),
586 f"not allowed in {context.evaluation} mode",
583 f"not allowed in {context.evaluation} mode",
587 )
584 )
588 if isinstance(node, ast.Subscript):
585 if isinstance(node, ast.Subscript):
589 value = eval_node(node.value, context)
586 value = eval_node(node.value, context)
590 slice_ = eval_node(node.slice, context)
587 slice_ = eval_node(node.slice, context)
591 if policy.can_get_item(value, slice_):
588 if policy.can_get_item(value, slice_):
592 return value[slice_]
589 return value[slice_]
593 raise GuardRejection(
590 raise GuardRejection(
594 "Subscript access (`__getitem__`) for",
591 "Subscript access (`__getitem__`) for",
595 type(value), # not joined to avoid calling `repr`
592 type(value), # not joined to avoid calling `repr`
596 f" not allowed in {context.evaluation} mode",
593 f" not allowed in {context.evaluation} mode",
597 )
594 )
598 if isinstance(node, ast.Name):
595 if isinstance(node, ast.Name):
599 return _eval_node_name(node.id, context)
596 return _eval_node_name(node.id, context)
600 if isinstance(node, ast.Attribute):
597 if isinstance(node, ast.Attribute):
601 value = eval_node(node.value, context)
598 value = eval_node(node.value, context)
602 if policy.can_get_attr(value, node.attr):
599 if policy.can_get_attr(value, node.attr):
603 return getattr(value, node.attr)
600 return getattr(value, node.attr)
604 raise GuardRejection(
601 raise GuardRejection(
605 "Attribute access (`__getattr__`) for",
602 "Attribute access (`__getattr__`) for",
606 type(value), # not joined to avoid calling `repr`
603 type(value), # not joined to avoid calling `repr`
607 f"not allowed in {context.evaluation} mode",
604 f"not allowed in {context.evaluation} mode",
608 )
605 )
609 if isinstance(node, ast.IfExp):
606 if isinstance(node, ast.IfExp):
610 test = eval_node(node.test, context)
607 test = eval_node(node.test, context)
611 if test:
608 if test:
612 return eval_node(node.body, context)
609 return eval_node(node.body, context)
613 else:
610 else:
614 return eval_node(node.orelse, context)
611 return eval_node(node.orelse, context)
615 if isinstance(node, ast.Call):
612 if isinstance(node, ast.Call):
616 func = eval_node(node.func, context)
613 func = eval_node(node.func, context)
617 if policy.can_call(func) and not node.keywords:
614 if policy.can_call(func) and not node.keywords:
618 args = [eval_node(arg, context) for arg in node.args]
615 args = [eval_node(arg, context) for arg in node.args]
619 return func(*args)
616 return func(*args)
620 if isclass(func):
617 if isclass(func):
621 # this code path gets entered when calling class e.g. `MyClass()`
618 # this code path gets entered when calling class e.g. `MyClass()`
622 # or `my_instance.__class__()` - in both cases `func` is `MyClass`.
619 # or `my_instance.__class__()` - in both cases `func` is `MyClass`.
623 # Should return `MyClass` if `__new__` is not overridden,
620 # Should return `MyClass` if `__new__` is not overridden,
624 # otherwise whatever `__new__` return type is.
621 # otherwise whatever `__new__` return type is.
625 overridden_return_type = _eval_return_type(func.__new__, node, context)
622 overridden_return_type = _eval_return_type(func.__new__, node, context)
626 if overridden_return_type is not NOT_EVALUATED:
623 if overridden_return_type is not NOT_EVALUATED:
627 return overridden_return_type
624 return overridden_return_type
628 return _create_duck_for_heap_type(func)
625 return _create_duck_for_heap_type(func)
629 else:
626 else:
630 return_type = _eval_return_type(func, node, context)
627 return_type = _eval_return_type(func, node, context)
631 if return_type is not NOT_EVALUATED:
628 if return_type is not NOT_EVALUATED:
632 return return_type
629 return return_type
633 raise GuardRejection(
630 raise GuardRejection(
634 "Call for",
631 "Call for",
635 func, # not joined to avoid calling `repr`
632 func, # not joined to avoid calling `repr`
636 f"not allowed in {context.evaluation} mode",
633 f"not allowed in {context.evaluation} mode",
637 )
634 )
638 raise ValueError("Unhandled node", ast.dump(node))
635 raise ValueError("Unhandled node", ast.dump(node))
639
636
640
637
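A small sketch of calling ``eval_node`` directly on a parsed expression (same assumptions as above):

.. code::

    import ast
    from IPython.core.guarded_eval import EvaluationContext, eval_node

    context = EvaluationContext(locals={}, globals={}, evaluation="limited")
    tree = ast.parse("(2 + 3) * 4", mode="eval")
    assert eval_node(tree, context) == 20   # int operations are allow-listed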
641 def _eval_return_type(func: Callable, node: ast.Call, context: EvaluationContext):
638 def _eval_return_type(func: Callable, node: ast.Call, context: EvaluationContext):
642 """Evaluate return type of a given callable function.
639 """Evaluate return type of a given callable function.
643
640
644 Returns the built-in type, a duck or NOT_EVALUATED sentinel.
641 Returns the built-in type, a duck or NOT_EVALUATED sentinel.
645 """
642 """
646 try:
643 try:
647 sig = signature(func)
644 sig = signature(func)
648 except ValueError:
645 except ValueError:
649 sig = UNKNOWN_SIGNATURE
646 sig = UNKNOWN_SIGNATURE
650 # if annotation was not stringized, or it was stringized
647 # if annotation was not stringized, or it was stringized
651 # but resolved by signature call we know the return type
648 # but resolved by signature call we know the return type
652 not_empty = sig.return_annotation is not Signature.empty
649 not_empty = sig.return_annotation is not Signature.empty
653 if not_empty:
650 if not_empty:
654 return _resolve_annotation(sig.return_annotation, sig, func, node, context)
651 return _resolve_annotation(sig.return_annotation, sig, func, node, context)
655 return NOT_EVALUATED
652 return NOT_EVALUATED
656
653
657
654
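An illustrative sketch (``tokenize_text`` is a made-up function, not part of IPython): even though the call itself is not allow-listed, its ``-> list`` return annotation lets the guarded evaluator hand back a list for completion purposes.

.. code::

    from IPython.core.guarded_eval import EvaluationContext, guarded_eval

    def tokenize_text(text: str) -> list:
        ...

    context = EvaluationContext(
        locals={"tokenize_text": tokenize_text}, globals={}, evaluation="limited"
    )
    result = guarded_eval("tokenize_text('a b c')", context)
    assert isinstance(result, list)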
658 def _resolve_annotation(
655 def _resolve_annotation(
659 annotation,
656 annotation,
660 sig: Signature,
657 sig: Signature,
661 func: Callable,
658 func: Callable,
662 node: ast.Call,
659 node: ast.Call,
663 context: EvaluationContext,
660 context: EvaluationContext,
664 ):
661 ):
665 """Resolve annotation created by user with `typing` module and custom objects."""
662 """Resolve annotation created by user with `typing` module and custom objects."""
666 annotation = (
663 annotation = (
667 _eval_node_name(annotation, context)
664 _eval_node_name(annotation, context)
668 if isinstance(annotation, str)
665 if isinstance(annotation, str)
669 else annotation
666 else annotation
670 )
667 )
671 origin = get_origin(annotation)
668 origin = get_origin(annotation)
672 if annotation is Self and hasattr(func, "__self__"):
669 if annotation is Self and hasattr(func, "__self__"):
673 return func.__self__
670 return func.__self__
674 elif origin is Literal:
671 elif origin is Literal:
675 type_args = get_args(annotation)
672 type_args = get_args(annotation)
676 if len(type_args) == 1:
673 if len(type_args) == 1:
677 return type_args[0]
674 return type_args[0]
678 elif annotation is LiteralString:
675 elif annotation is LiteralString:
679 return ""
676 return ""
680 elif annotation is AnyStr:
677 elif annotation is AnyStr:
681 index = None
678 index = None
682 for i, (key, value) in enumerate(sig.parameters.items()):
679 for i, (key, value) in enumerate(sig.parameters.items()):
683 if value.annotation is AnyStr:
680 if value.annotation is AnyStr:
684 index = i
681 index = i
685 break
682 break
686 if index is not None and index < len(node.args):
683 if index is not None and index < len(node.args):
687 return eval_node(node.args[index], context)
684 return eval_node(node.args[index], context)
688 elif origin is TypeGuard:
685 elif origin is TypeGuard:
689 return bool()
686 return bool()
690 elif origin is Union:
687 elif origin is Union:
691 attributes = [
688 attributes = [
692 attr
689 attr
693 for type_arg in get_args(annotation)
690 for type_arg in get_args(annotation)
694 for attr in dir(_resolve_annotation(type_arg, sig, func, node, context))
691 for attr in dir(_resolve_annotation(type_arg, sig, func, node, context))
695 ]
692 ]
696 return _Duck(attributes=dict.fromkeys(attributes))
693 return _Duck(attributes=dict.fromkeys(attributes))
697 elif is_typeddict(annotation):
694 elif is_typeddict(annotation):
698 return _Duck(
695 return _Duck(
699 attributes=dict.fromkeys(dir(dict())),
696 attributes=dict.fromkeys(dir(dict())),
700 items={
697 items={
701 k: _resolve_annotation(v, sig, func, node, context)
698 k: _resolve_annotation(v, sig, func, node, context)
702 for k, v in annotation.__annotations__.items()
699 for k, v in annotation.__annotations__.items()
703 },
700 },
704 )
701 )
705 elif hasattr(annotation, "_is_protocol"):
702 elif hasattr(annotation, "_is_protocol"):
706 return _Duck(attributes=dict.fromkeys(dir(annotation)))
703 return _Duck(attributes=dict.fromkeys(dir(annotation)))
707 elif origin is Annotated:
704 elif origin is Annotated:
708 type_arg = get_args(annotation)[0]
705 type_arg = get_args(annotation)[0]
709 return _resolve_annotation(type_arg, sig, func, node, context)
706 return _resolve_annotation(type_arg, sig, func, node, context)
710 elif isinstance(annotation, NewType):
707 elif isinstance(annotation, NewType):
711 return _eval_or_create_duck(annotation.__supertype__, node, context)
708 return _eval_or_create_duck(annotation.__supertype__, node, context)
712 elif isinstance(annotation, TypeAliasType):
709 elif isinstance(annotation, TypeAliasType):
713 return _eval_or_create_duck(annotation.__value__, node, context)
710 return _eval_or_create_duck(annotation.__value__, node, context)
714 else:
711 else:
715 return _eval_or_create_duck(annotation, node, context)
712 return _eval_or_create_duck(annotation, node, context)
716
713
717
714
718 def _eval_node_name(node_id: str, context: EvaluationContext):
715 def _eval_node_name(node_id: str, context: EvaluationContext):
719 policy = EVALUATION_POLICIES[context.evaluation]
716 policy = EVALUATION_POLICIES[context.evaluation]
720 if policy.allow_locals_access and node_id in context.locals:
717 if policy.allow_locals_access and node_id in context.locals:
721 return context.locals[node_id]
718 return context.locals[node_id]
722 if policy.allow_globals_access and node_id in context.globals:
719 if policy.allow_globals_access and node_id in context.globals:
723 return context.globals[node_id]
720 return context.globals[node_id]
724 if policy.allow_builtins_access and hasattr(builtins, node_id):
721 if policy.allow_builtins_access and hasattr(builtins, node_id):
725 # note: do not use __builtins__; it is an implementation detail of CPython
722 # note: do not use __builtins__; it is an implementation detail of CPython
726 return getattr(builtins, node_id)
723 return getattr(builtins, node_id)
727 if not policy.allow_globals_access and not policy.allow_locals_access:
724 if not policy.allow_globals_access and not policy.allow_locals_access:
728 raise GuardRejection(
725 raise GuardRejection(
729 f"Namespace access not allowed in {context.evaluation} mode"
726 f"Namespace access not allowed in {context.evaluation} mode"
730 )
727 )
731 else:
728 else:
732 raise NameError(f"{node_id} not found in locals, globals, nor builtins")
729 raise NameError(f"{node_id} not found in locals, globals, nor builtins")
733
730
734
731
735 def _eval_or_create_duck(duck_type, node: ast.Call, context: EvaluationContext):
732 def _eval_or_create_duck(duck_type, node: ast.Call, context: EvaluationContext):
736 policy = EVALUATION_POLICIES[context.evaluation]
733 policy = EVALUATION_POLICIES[context.evaluation]
737 # if allow-listed builtin is on type annotation, instantiate it
734 # if allow-listed builtin is on type annotation, instantiate it
738 if policy.can_call(duck_type) and not node.keywords:
735 if policy.can_call(duck_type) and not node.keywords:
739 args = [eval_node(arg, context) for arg in node.args]
736 args = [eval_node(arg, context) for arg in node.args]
740 return duck_type(*args)
737 return duck_type(*args)
741 # if custom class is in type annotation, mock it
738 # if custom class is in type annotation, mock it
742 return _create_duck_for_heap_type(duck_type)
739 return _create_duck_for_heap_type(duck_type)
743
740
744
741
745 def _create_duck_for_heap_type(duck_type):
742 def _create_duck_for_heap_type(duck_type):
746 """Create an imitation of an object of a given type (a duck).
743 """Create an imitation of an object of a given type (a duck).
747
744
748 Returns the duck or NOT_EVALUATED sentinel if duck could not be created.
745 Returns the duck or NOT_EVALUATED sentinel if duck could not be created.
749 """
746 """
750 duck = ImpersonatingDuck()
747 duck = ImpersonatingDuck()
751 try:
748 try:
752 # this only works for heap types, not builtins
749 # this only works for heap types, not builtins
753 duck.__class__ = duck_type
750 duck.__class__ = duck_type
754 return duck
751 return duck
755 except TypeError:
752 except TypeError:
756 pass
753 pass
757 return NOT_EVALUATED
754 return NOT_EVALUATED
758
755
759
756
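For example (illustrative; ``Point`` is made up), ``__class__`` reassignment works for ordinary heap types but not for builtins:

.. code::

    from IPython.core.guarded_eval import NOT_EVALUATED, _create_duck_for_heap_type

    class Point:
        x: int

    duck = _create_duck_for_heap_type(Point)
    assert isinstance(duck, Point)                            # __init__ never ran
    assert _create_duck_for_heap_type(int) is NOT_EVALUATED   # builtins are rejected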
760 SUPPORTED_EXTERNAL_GETITEM = {
757 SUPPORTED_EXTERNAL_GETITEM = {
761 ("pandas", "core", "indexing", "_iLocIndexer"),
758 ("pandas", "core", "indexing", "_iLocIndexer"),
762 ("pandas", "core", "indexing", "_LocIndexer"),
759 ("pandas", "core", "indexing", "_LocIndexer"),
763 ("pandas", "DataFrame"),
760 ("pandas", "DataFrame"),
764 ("pandas", "Series"),
761 ("pandas", "Series"),
765 ("numpy", "ndarray"),
762 ("numpy", "ndarray"),
766 ("numpy", "void"),
763 ("numpy", "void"),
767 }
764 }
768
765
769
766
770 BUILTIN_GETITEM: Set[InstancesHaveGetItem] = {
767 BUILTIN_GETITEM: Set[InstancesHaveGetItem] = {
771 dict,
768 dict,
772 str, # type: ignore[arg-type]
769 str, # type: ignore[arg-type]
773 bytes, # type: ignore[arg-type]
770 bytes, # type: ignore[arg-type]
774 list,
771 list,
775 tuple,
772 tuple,
776 collections.defaultdict,
773 collections.defaultdict,
777 collections.deque,
774 collections.deque,
778 collections.OrderedDict,
775 collections.OrderedDict,
779 collections.ChainMap,
776 collections.ChainMap,
780 collections.UserDict,
777 collections.UserDict,
781 collections.UserList,
778 collections.UserList,
782 collections.UserString, # type: ignore[arg-type]
779 collections.UserString, # type: ignore[arg-type]
783 _DummyNamedTuple,
780 _DummyNamedTuple,
784 _IdentitySubscript,
781 _IdentitySubscript,
785 }
782 }
786
783
787
784
788 def _list_methods(cls, source=None):
785 def _list_methods(cls, source=None):
789 """For use on immutable objects or with methods returning a copy"""
786 """For use on immutable objects or with methods returning a copy"""
790 return [getattr(cls, k) for k in (source if source else dir(cls))]
787 return [getattr(cls, k) for k in (source if source else dir(cls))]
791
788
792
789
793 dict_non_mutating_methods = ("copy", "keys", "values", "items")
790 dict_non_mutating_methods = ("copy", "keys", "values", "items")
794 list_non_mutating_methods = ("copy", "index", "count")
791 list_non_mutating_methods = ("copy", "index", "count")
795 set_non_mutating_methods = set(dir(set)) & set(dir(frozenset))
792 set_non_mutating_methods = set(dir(set)) & set(dir(frozenset))
796
793
797
794
798 dict_keys: Type[collections.abc.KeysView] = type({}.keys())
795 dict_keys: Type[collections.abc.KeysView] = type({}.keys())
799
796
800 NUMERICS = {int, float, complex}
797 NUMERICS = {int, float, complex}
801
798
802 ALLOWED_CALLS = {
799 ALLOWED_CALLS = {
803 bytes,
800 bytes,
804 *_list_methods(bytes),
801 *_list_methods(bytes),
805 dict,
802 dict,
806 *_list_methods(dict, dict_non_mutating_methods),
803 *_list_methods(dict, dict_non_mutating_methods),
807 dict_keys.isdisjoint,
804 dict_keys.isdisjoint,
808 list,
805 list,
809 *_list_methods(list, list_non_mutating_methods),
806 *_list_methods(list, list_non_mutating_methods),
810 set,
807 set,
811 *_list_methods(set, set_non_mutating_methods),
808 *_list_methods(set, set_non_mutating_methods),
812 frozenset,
809 frozenset,
813 *_list_methods(frozenset),
810 *_list_methods(frozenset),
814 range,
811 range,
815 str,
812 str,
816 *_list_methods(str),
813 *_list_methods(str),
817 tuple,
814 tuple,
818 *_list_methods(tuple),
815 *_list_methods(tuple),
819 *NUMERICS,
816 *NUMERICS,
820 *[method for numeric_cls in NUMERICS for method in _list_methods(numeric_cls)],
817 *[method for numeric_cls in NUMERICS for method in _list_methods(numeric_cls)],
821 collections.deque,
818 collections.deque,
822 *_list_methods(collections.deque, list_non_mutating_methods),
819 *_list_methods(collections.deque, list_non_mutating_methods),
823 collections.defaultdict,
820 collections.defaultdict,
824 *_list_methods(collections.defaultdict, dict_non_mutating_methods),
821 *_list_methods(collections.defaultdict, dict_non_mutating_methods),
825 collections.OrderedDict,
822 collections.OrderedDict,
826 *_list_methods(collections.OrderedDict, dict_non_mutating_methods),
823 *_list_methods(collections.OrderedDict, dict_non_mutating_methods),
827 collections.UserDict,
824 collections.UserDict,
828 *_list_methods(collections.UserDict, dict_non_mutating_methods),
825 *_list_methods(collections.UserDict, dict_non_mutating_methods),
829 collections.UserList,
826 collections.UserList,
830 *_list_methods(collections.UserList, list_non_mutating_methods),
827 *_list_methods(collections.UserList, list_non_mutating_methods),
831 collections.UserString,
828 collections.UserString,
832 *_list_methods(collections.UserString, dir(str)),
829 *_list_methods(collections.UserString, dir(str)),
833 collections.Counter,
830 collections.Counter,
834 *_list_methods(collections.Counter, dict_non_mutating_methods),
831 *_list_methods(collections.Counter, dict_non_mutating_methods),
835 collections.Counter.elements,
832 collections.Counter.elements,
836 collections.Counter.most_common,
833 collections.Counter.most_common,
837 }
834 }
838
835
839 BUILTIN_GETATTR: Set[MayHaveGetattr] = {
836 BUILTIN_GETATTR: Set[MayHaveGetattr] = {
840 *BUILTIN_GETITEM,
837 *BUILTIN_GETITEM,
841 set,
838 set,
842 frozenset,
839 frozenset,
843 object,
840 object,
844 type, # `type` handles a lot of generic cases, e.g. numbers as in `int.real`.
841 type, # `type` handles a lot of generic cases, e.g. numbers as in `int.real`.
845 *NUMERICS,
842 *NUMERICS,
846 dict_keys,
843 dict_keys,
847 MethodDescriptorType,
844 MethodDescriptorType,
848 ModuleType,
845 ModuleType,
849 }
846 }
850
847
851
848
852 BUILTIN_OPERATIONS = {*BUILTIN_GETATTR}
849 BUILTIN_OPERATIONS = {*BUILTIN_GETATTR}
853
850
854 EVALUATION_POLICIES = {
851 EVALUATION_POLICIES = {
855 "minimal": EvaluationPolicy(
852 "minimal": EvaluationPolicy(
856 allow_builtins_access=True,
853 allow_builtins_access=True,
857 allow_locals_access=False,
854 allow_locals_access=False,
858 allow_globals_access=False,
855 allow_globals_access=False,
859 allow_item_access=False,
856 allow_item_access=False,
860 allow_attr_access=False,
857 allow_attr_access=False,
861 allowed_calls=set(),
858 allowed_calls=set(),
862 allow_any_calls=False,
859 allow_any_calls=False,
863 allow_all_operations=False,
860 allow_all_operations=False,
864 ),
861 ),
865 "limited": SelectivePolicy(
862 "limited": SelectivePolicy(
866 allowed_getitem=BUILTIN_GETITEM,
863 allowed_getitem=BUILTIN_GETITEM,
867 allowed_getitem_external=SUPPORTED_EXTERNAL_GETITEM,
864 allowed_getitem_external=SUPPORTED_EXTERNAL_GETITEM,
868 allowed_getattr=BUILTIN_GETATTR,
865 allowed_getattr=BUILTIN_GETATTR,
869 allowed_getattr_external={
866 allowed_getattr_external={
870 # pandas Series/Frame implements custom `__getattr__`
867 # pandas Series/Frame implements custom `__getattr__`
871 ("pandas", "DataFrame"),
868 ("pandas", "DataFrame"),
872 ("pandas", "Series"),
869 ("pandas", "Series"),
873 },
870 },
874 allowed_operations=BUILTIN_OPERATIONS,
871 allowed_operations=BUILTIN_OPERATIONS,
875 allow_builtins_access=True,
872 allow_builtins_access=True,
876 allow_locals_access=True,
873 allow_locals_access=True,
877 allow_globals_access=True,
874 allow_globals_access=True,
878 allowed_calls=ALLOWED_CALLS,
875 allowed_calls=ALLOWED_CALLS,
879 ),
876 ),
880 "unsafe": EvaluationPolicy(
877 "unsafe": EvaluationPolicy(
881 allow_builtins_access=True,
878 allow_builtins_access=True,
882 allow_locals_access=True,
879 allow_locals_access=True,
883 allow_globals_access=True,
880 allow_globals_access=True,
884 allow_attr_access=True,
881 allow_attr_access=True,
885 allow_item_access=True,
882 allow_item_access=True,
886 allow_any_calls=True,
883 allow_any_calls=True,
887 allow_all_operations=True,
884 allow_all_operations=True,
888 ),
885 ),
889 }
886 }
890
887
891
888
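A quick comparison of the modes on the same expression (a sketch; the trailing comment only indicates the expected outcome):

.. code::

    from IPython.core.guarded_eval import EvaluationContext, GuardRejection, guarded_eval

    namespace = {"items": [1, 2, 3]}
    for mode in ("minimal", "limited", "unsafe"):
        context = EvaluationContext(locals=namespace, globals={}, evaluation=mode)
        try:
            print(mode, "->", guarded_eval("items[0] + 1", context))
        except GuardRejection:
            print(mode, "-> rejected")
    # minimal -> rejected (no namespace access), limited -> 2, unsafe -> 2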
892 __all__ = [
889 __all__ = [
893 "guarded_eval",
890 "guarded_eval",
894 "eval_node",
891 "eval_node",
895 "GuardRejection",
892 "GuardRejection",
896 "EvaluationContext",
893 "EvaluationContext",
897 "_unbind_method",
894 "_unbind_method",
898 ]
895 ]
@@ -1,798 +1,799
1 """DEPRECATED: Input handling and transformation machinery.
1 """DEPRECATED: Input handling and transformation machinery.
2
2
3 This module was deprecated in IPython 7.0, in favour of inputtransformer2.
3 This module was deprecated in IPython 7.0, in favour of inputtransformer2.
4
4
5 The first class in this module, :class:`InputSplitter`, is designed to tell when
5 The first class in this module, :class:`InputSplitter`, is designed to tell when
6 input from a line-oriented frontend is complete and should be executed, and when
6 input from a line-oriented frontend is complete and should be executed, and when
7 the user should be prompted for another line of code instead. The name 'input
7 the user should be prompted for another line of code instead. The name 'input
8 splitter' is largely for historical reasons.
8 splitter' is largely for historical reasons.
9
9
10 A companion, :class:`IPythonInputSplitter`, provides the same functionality but
10 A companion, :class:`IPythonInputSplitter`, provides the same functionality but
11 with full support for the extended IPython syntax (magics, system calls, etc).
11 with full support for the extended IPython syntax (magics, system calls, etc).
12 The code to actually do these transformations is in :mod:`IPython.core.inputtransformer`.
12 The code to actually do these transformations is in :mod:`IPython.core.inputtransformer`.
13 :class:`IPythonInputSplitter` feeds the raw code to the transformers in order
13 :class:`IPythonInputSplitter` feeds the raw code to the transformers in order
14 and stores the results.
14 and stores the results.
15
15
16 For more details, see the class docstrings below.
16 For more details, see the class docstrings below.
17 """
17 """
18
18 from __future__ import annotations
19 from __future__ import annotations
19
20
20 from warnings import warn
21 from warnings import warn
21
22
22 warn('IPython.core.inputsplitter is deprecated since IPython 7 in favor of `IPython.core.inputtransformer2`',
23 warn('IPython.core.inputsplitter is deprecated since IPython 7 in favor of `IPython.core.inputtransformer2`',
23 DeprecationWarning)
24 DeprecationWarning)
24
25
25 # Copyright (c) IPython Development Team.
26 # Copyright (c) IPython Development Team.
26 # Distributed under the terms of the Modified BSD License.
27 # Distributed under the terms of the Modified BSD License.
27 import ast
28 import ast
28 import codeop
29 import codeop
29 import io
30 import io
30 import re
31 import re
31 import sys
32 import sys
32 import tokenize
33 import tokenize
33 import warnings
34 import warnings
34
35
35 from typing import List, Tuple, Union, Optional, TYPE_CHECKING
36 from typing import List, Tuple, Union, Optional, TYPE_CHECKING
36 from types import CodeType
37 from types import CodeType
37
38
from IPython.core.inputtransformer import (leading_indent,
                                           classic_prompt,
                                           ipy_prompt,
                                           cellmagic,
                                           assemble_logical_lines,
                                           help_end,
                                           escaped_commands,
                                           assign_from_magic,
                                           assign_from_system,
                                           assemble_python_lines,
                                           )
from IPython.utils import tokenutil

# These are available in this module for backwards compatibility.
from IPython.core.inputtransformer import (ESC_SHELL, ESC_SH_CAP, ESC_HELP,
                                           ESC_HELP2, ESC_MAGIC, ESC_MAGIC2,
                                           ESC_QUOTE, ESC_QUOTE2, ESC_PAREN, ESC_SEQUENCES)

if TYPE_CHECKING:
    from typing_extensions import Self
#-----------------------------------------------------------------------------
# Utilities
#-----------------------------------------------------------------------------

# FIXME: These are general-purpose utilities that later can be moved to the
# general ward. Kept here for now because we're being very strict about test
# coverage with this code, and this lets us ensure that we keep 100% coverage
# while developing.

# compiled regexps for autoindent management
dedent_re = re.compile('|'.join([
    r'^\s+raise(\s.*)?$', # raise statement (+ space + other stuff, maybe)
    r'^\s+raise\([^\)]*\).*$', # wacky raise with immediate open paren
    r'^\s+return(\s.*)?$', # normal return (+ space + other stuff, maybe)
    r'^\s+return\([^\)]*\).*$', # wacky return with immediate open paren
    r'^\s+pass\s*$', # pass (optionally followed by trailing spaces)
    r'^\s+break\s*$', # break (optionally followed by trailing spaces)
    r'^\s+continue\s*$', # continue (optionally followed by trailing spaces)
    ]))
ini_spaces_re = re.compile(r'^([ \t\r\f\v]+)')

# regexp to match pure comment lines so we don't accidentally insert 'if 1:'
# before pure comments
comment_line_re = re.compile(r'^\s*\#')
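
The three regular expressions above drive the auto-indent heuristics used further
down in this module. A minimal demonstration of what they match (an editor's
sketch; it assumes this module is importable as ``IPython.core.inputsplitter``,
which emits a DeprecationWarning on import in recent IPython versions)::

    from IPython.core.inputsplitter import (dedent_re, ini_spaces_re,
                                            comment_line_re)

    print(bool(dedent_re.match("    return x")))    # True  - next line should dedent
    print(bool(dedent_re.match("    result = x")))  # False - ordinary assignment
    print(ini_spaces_re.match("    pass").end())    # 4     - length of leading whitespace
    print(bool(comment_line_re.match("  # note")))  # True  - pure comment line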


def num_ini_spaces(s):
    """Return the number of initial spaces in a string.

    Note that tabs are counted as a single space. For now, we do *not* support
    mixing of tabs and spaces in the user's input.

    Parameters
    ----------
    s : string

    Returns
    -------
    n : int
    """
    warnings.warn(
        "`num_ini_spaces` is Pending Deprecation since IPython 8.17. "
        "It is considered for removal in a future version. "
        "Please open an issue if you believe it should be kept.",
        stacklevel=2,
        category=PendingDeprecationWarning,
    )
    ini_spaces = ini_spaces_re.match(s)
    if ini_spaces:
        return ini_spaces.end()
    else:
        return 0
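
Despite the pending deprecation, the helper still works as documented; a quick
sketch of its behaviour (the warning is silenced for the demonstration)::

    import warnings
    from IPython.core.inputsplitter import num_ini_spaces

    with warnings.catch_warnings():
        warnings.simplefilter("ignore", PendingDeprecationWarning)
        print(num_ini_spaces("    pass"))   # 4
        print(num_ini_spaces("\tx = 1"))    # 1 - a tab counts as a single character
        print(num_ini_spaces("x = 1"))      # 0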

# Fake token types for partial_tokenize:
INCOMPLETE_STRING = tokenize.N_TOKENS
IN_MULTILINE_STATEMENT = tokenize.N_TOKENS + 1

# The 2 classes below have the same API as TokenInfo, but don't try to look up
# a token type name that they won't find.
class IncompleteString:
    type = exact_type = INCOMPLETE_STRING
    def __init__(self, s, start, end, line):
        self.s = s
        self.start = start
        self.end = end
        self.line = line

class InMultilineStatement:
    type = exact_type = IN_MULTILINE_STATEMENT
    def __init__(self, pos, line):
        self.s = ''
        self.start = self.end = pos
        self.line = line

def partial_tokens(s):
    """Iterate over tokens from a possibly-incomplete string of code.

    This adds two special token types: INCOMPLETE_STRING and
    IN_MULTILINE_STATEMENT. These can only occur as the last token yielded, and
    represent the two main ways for code to be incomplete.
    """
    readline = io.StringIO(s).readline
    token = tokenize.TokenInfo(tokenize.NEWLINE, '', (1, 0), (1, 0), '')
    try:
        for token in tokenutil.generate_tokens_catch_errors(readline):
            yield token
    except tokenize.TokenError as e:
        # catch EOF error
        lines = s.splitlines(keepends=True)
        end = len(lines), len(lines[-1])
        if 'multi-line string' in e.args[0]:
            l, c = start = token.end
            s = lines[l-1][c:] + ''.join(lines[l:])
            yield IncompleteString(s, start, end, lines[-1])
        elif 'multi-line statement' in e.args[0]:
            yield InMultilineStatement(end, lines[-1])
        else:
            raise

def find_next_indent(code) -> int:
    """Find the number of spaces for the next line of indentation"""
    tokens = list(partial_tokens(code))
    if tokens[-1].type == tokenize.ENDMARKER:
        tokens.pop()
    if not tokens:
        return 0

    while tokens[-1].type in {
        tokenize.DEDENT,
        tokenize.NEWLINE,
        tokenize.COMMENT,
        tokenize.ERRORTOKEN,
    }:
        tokens.pop()

    # Starting in Python 3.12, the tokenize module adds implicit newlines at the end
    # of input. We need to remove those if we're in a multiline statement
    if tokens[-1].type == IN_MULTILINE_STATEMENT:
        while tokens[-2].type in {tokenize.NL}:
            tokens.pop(-2)


    if tokens[-1].type == INCOMPLETE_STRING:
        # Inside a multiline string
        return 0

    # Find the indents used before
    prev_indents = [0]
    def _add_indent(n):
        if n != prev_indents[-1]:
            prev_indents.append(n)

    tokiter = iter(tokens)
    for tok in tokiter:
        if tok.type in {tokenize.INDENT, tokenize.DEDENT}:
            _add_indent(tok.end[1])
        elif (tok.type == tokenize.NL):
            try:
                _add_indent(next(tokiter).start[1])
            except StopIteration:
                break

    last_indent = prev_indents.pop()

    # If we've just opened a multiline statement (e.g. 'a = ['), indent more
    if tokens[-1].type == IN_MULTILINE_STATEMENT:
        if tokens[-2].exact_type in {tokenize.LPAR, tokenize.LSQB, tokenize.LBRACE}:
            return last_indent + 4
        return last_indent

    if tokens[-1].exact_type == tokenize.COLON:
        # Line ends with colon - indent
        return last_indent + 4

    if last_indent:
        # Examine the last line for dedent cues - statements like return or
        # raise which normally end a block of code.
        last_line_starts = 0
        for i, tok in enumerate(tokens):
            if tok.type == tokenize.NEWLINE:
                last_line_starts = i + 1

        last_line_tokens = tokens[last_line_starts:]
        names = [t.string for t in last_line_tokens if t.type == tokenize.NAME]
        if names and names[0] in {'raise', 'return', 'pass', 'break', 'continue'}:
            # Find the most recent indentation less than the current level
            for indent in reversed(prev_indents):
                if indent < last_indent:
                    return indent

    return last_indent
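
A rough illustration of ``find_next_indent`` on a few partial inputs (an
editor's sketch; the exact values can vary with the Python version, since
tokenization of incomplete code has changed over time)::

    from IPython.core.inputsplitter import find_next_indent

    print(find_next_indent("if True:"))            # 4 - line ends with a colon
    print(find_next_indent("if True:\n    pass"))  # 0 - 'pass' dedents the block
    print(find_next_indent("a = ["))               # 4 - an opening bracket was just typed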


def last_blank(src):
    """Determine if the input source ends in a blank.

    A blank is either a newline or a line consisting of whitespace.

    Parameters
    ----------
    src : string
        A single or multiline string.
    """
    if not src: return False
    ll = src.splitlines()[-1]
    return (ll == '') or ll.isspace()


last_two_blanks_re = re.compile(r'\n\s*\n\s*$', re.MULTILINE)
last_two_blanks_re2 = re.compile(r'.+\n\s*\n\s+$', re.MULTILINE)

def last_two_blanks(src):
    """Determine if the input source ends in two blanks.

    A blank is either a newline or a line consisting of whitespace.

    Parameters
    ----------
    src : string
        A single or multiline string.
    """
    if not src: return False
    # The logic here is tricky: I couldn't get a regexp to work and pass all
    # the tests, so I took a different approach: split the source by lines,
    # grab the last two and prepend '###\n' as a stand-in for whatever was in
    # the body before the last two lines. Then, with that structure, it's
    # possible to analyze with two regexps. Not the most elegant solution, but
    # it works. If anyone tries to change this logic, make sure to validate
    # the whole test suite first!
    new_src = '\n'.join(['###\n'] + src.splitlines()[-2:])
    return (bool(last_two_blanks_re.match(new_src)) or
            bool(last_two_blanks_re2.match(new_src)) )
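
A short sketch of both blank-detection helpers::

    from IPython.core.inputsplitter import last_blank, last_two_blanks

    print(last_blank("x = 1"))             # False
    print(last_blank("x = 1\n\n"))         # True  - the final line is empty
    print(last_blank("x = 1\n    "))       # True  - a whitespace-only line counts as blank
    print(last_two_blanks("x = 1\n"))      # False
    print(last_two_blanks("x = 1\n\n\n"))  # True  - two trailing blank lines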


def remove_comments(src):
    """Remove all comments from input source.

    Note: comments are NOT recognized inside of strings!

    Parameters
    ----------
    src : string
        A single or multiline input string.

    Returns
    -------
    String with all Python comments removed.
    """

    return re.sub('#.*', '', src)
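
A small sketch showing the normal case and the caveat from the docstring (a
``#`` inside a string literal is stripped as well)::

    from IPython.core.inputsplitter import remove_comments

    print(remove_comments("x = 1  # set x"))
    # -> 'x = 1  '   (the comment is removed, trailing spaces remain)
    print(remove_comments('url = "http://example.com/#anchor"'))
    # -> 'url = "http://example.com/'   (the string is truncated at the '#')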


def get_input_encoding():
    """Return the default standard input encoding.

    If sys.stdin has no encoding, 'ascii' is returned."""
    # There are strange environments for which sys.stdin.encoding is None. We
    # ensure that a valid encoding is returned.
    encoding = getattr(sys.stdin, 'encoding', None)
    if encoding is None:
        encoding = 'ascii'
    return encoding

#-----------------------------------------------------------------------------
# Classes and functions for normal Python syntax handling
#-----------------------------------------------------------------------------

class InputSplitter(object):
    r"""An object that can accumulate lines of Python source before execution.

    This object is designed to be fed python source line-by-line, using
    :meth:`push`. It will return on each push whether the currently pushed
    code could be executed already. In addition, it provides a method called
    :meth:`push_accepts_more` that can be used to query whether more input
    can be pushed into a single interactive block.

    This is a simple example of how an interactive terminal-based client can use
    this tool::

        isp = InputSplitter()
        while isp.push_accepts_more():
            indent = ' '*isp.indent_spaces
            prompt = '>>> ' + indent
            line = indent + input(prompt)
            isp.push(line)
        print('Input source was:\n', isp.source_reset())
    """
    # A cache for storing the current indentation
    # The first value stores the most recently processed source input
    # The second value is the number of spaces for the current indentation
    # If self.source matches the first value, the second value is a valid
    # current indentation. Otherwise, the cache is invalid and the indentation
    # must be recalculated.
    _indent_spaces_cache: Union[Tuple[None, None], Tuple[str, int]] = None, None
    # String, indicating the default input encoding. It is computed by default
    # at initialization time via get_input_encoding(), but it can be reset by a
    # client with specific knowledge of the encoding.
    encoding = ''
    # String where the current full source input is stored, properly encoded.
    # Reading this attribute is the normal way of querying the currently pushed
    # source code, that has been properly encoded.
    source: str = ""
    # Code object corresponding to the current source. It is automatically
    # synced to the source, so it can be queried at any time to obtain the code
    # object; it will be None if the source doesn't compile to valid Python.
    code: Optional[CodeType] = None

    # Private attributes

    # List with lines of input accumulated so far
    _buffer: List[str]
    # Command compiler
    _compile: codeop.CommandCompiler
    # Boolean indicating whether the current block is complete
    _is_complete: Optional[bool] = None
    # Boolean indicating whether the current block has an unrecoverable syntax error
    _is_invalid: bool = False

    def __init__(self) -> None:
        """Create a new InputSplitter instance."""
        self._buffer = []
        self._compile = codeop.CommandCompiler()
        self.encoding = get_input_encoding()

    def reset(self):
        """Reset the input buffer and associated state."""
        self._buffer[:] = []
        self.source = ''
        self.code = None
        self._is_complete = False
        self._is_invalid = False

    def source_reset(self):
        """Return the input source and perform a full reset.
        """
        out = self.source
        self.reset()
        return out

    def check_complete(self, source):
        """Return whether a block of code is ready to execute, or should be continued

        This is a non-stateful API, and will reset the state of this InputSplitter.

        Parameters
        ----------
        source : string
            Python input code, which can be multiline.

        Returns
        -------
        status : str
            One of 'complete', 'incomplete', or 'invalid' if source is not a
            prefix of valid code.
        indent_spaces : int or None
            The number of spaces by which to indent the next line of code. If
            status is not 'incomplete', this is None.
        """
        self.reset()
        try:
            self.push(source)
        except SyntaxError:
            # Transformers in IPythonInputSplitter can raise SyntaxError,
            # which push() will not catch.
            return 'invalid', None
        else:
            if self._is_invalid:
                return 'invalid', None
            elif self.push_accepts_more():
                return 'incomplete', self.get_indent_spaces()
            else:
                return 'complete', None
        finally:
            self.reset()

    def push(self, lines:str) -> bool:
        """Push one or more lines of input.

        This stores the given lines and returns a status code indicating
        whether the code forms a complete Python block or not.

        Any exceptions generated in compilation are swallowed, but if an
        exception was produced, the method returns True.

        Parameters
        ----------
        lines : string
            One or more lines of Python input.

        Returns
        -------
        is_complete : boolean
            True if the current input source (the result of the current input
            plus prior inputs) forms a complete Python execution block. Note that
            this value is also stored as a private attribute (``_is_complete``), so it
            can be queried at any time.
        """
        assert isinstance(lines, str)
        self._store(lines)
        source = self.source

        # Before calling _compile(), reset the code object to None so that if an
        # exception is raised in compilation, we don't mislead by having
        # inconsistent code/source attributes.
        self.code, self._is_complete = None, None
        self._is_invalid = False

        # Honor termination lines properly
        if source.endswith('\\\n'):
            return False

        try:
            with warnings.catch_warnings():
                warnings.simplefilter('error', SyntaxWarning)
                self.code = self._compile(source, symbol="exec")
        # Invalid syntax can produce any of a number of different errors from
        # inside the compiler, so we have to catch them all. Syntax errors
        # immediately produce a 'ready' block, so the invalid Python can be
        # sent to the kernel for evaluation with possible ipython
        # special-syntax conversion.
        except (SyntaxError, OverflowError, ValueError, TypeError,
                MemoryError, SyntaxWarning):
            self._is_complete = True
            self._is_invalid = True
        else:
            # Compilation didn't produce any exceptions (though it may not have
            # given a complete code object)
            self._is_complete = self.code is not None

        return self._is_complete

    def push_accepts_more(self):
        """Return whether a block of interactive input can accept more input.

        This method is meant to be used by line-oriented frontends, who need to
        guess whether a block is complete or not based solely on prior and
        current input lines. The InputSplitter considers it has a complete
        interactive block and will not accept more input when either:

        * A SyntaxError is raised

        * The code is complete and consists of a single line or a single
          non-compound statement

        * The code is complete and has a blank line at the end

        If the current input produces a syntax error, this method immediately
        returns False but does *not* raise the syntax error exception, as
        typically clients will want to send invalid syntax to an execution
        backend which might convert the invalid syntax into valid Python via
        one of the dynamic IPython mechanisms.
        """

        # With incomplete input, unconditionally accept more
        # A syntax error also sets _is_complete to True - see push()
        if not self._is_complete:
            #print("Not complete") # debug
            return True

        # The user can make any (complete) input execute by leaving a blank line
        last_line = self.source.splitlines()[-1]
        if (not last_line) or last_line.isspace():
            #print("Blank line") # debug
            return False

        # If there's just a single line or AST node, and we're flush left, as is
        # the case after a simple statement such as 'a=1', we want to execute it
        # straight away.
        if self.get_indent_spaces() == 0:
            if len(self.source.splitlines()) <= 1:
                return False

            try:
                code_ast = ast.parse("".join(self._buffer))
            except Exception:
                #print("Can't parse AST") # debug
                return False
            else:
                if len(code_ast.body) == 1 and \
                   not hasattr(code_ast.body[0], 'body'):
                    #print("Simple statement") # debug
                    return False

        # General fallback - accept more code
        return True

    def get_indent_spaces(self) -> int:
        sourcefor, n = self._indent_spaces_cache
        if sourcefor == self.source:
            assert n is not None
            return n

        # self.source always has a trailing newline
        n = find_next_indent(self.source[:-1])
        self._indent_spaces_cache = (self.source, n)
        return n

    # Backwards compatibility. I think all code that used .indent_spaces was
    # inside IPython, but we can leave this here until IPython 7 in case any
    # other modules are using it. -TK, November 2017
    indent_spaces = property(get_indent_spaces)

    def _store(self, lines, buffer=None, store='source'):
        """Store one or more lines of input.

        If input lines are not newline-terminated, a newline is automatically
        appended."""

        if buffer is None:
            buffer = self._buffer

        if lines.endswith('\n'):
            buffer.append(lines)
        else:
            buffer.append(lines+'\n')
        setattr(self, store, self._set_source(buffer))

    def _set_source(self, buffer):
        return u''.join(buffer)
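
A minimal sketch of driving ``InputSplitter`` through its stateless
``check_complete`` API (an editor's example, not part of the original module;
the indent values shown are what the heuristics above typically produce)::

    from IPython.core.inputsplitter import InputSplitter

    isp = InputSplitter()
    print(isp.check_complete("a = 1"))               # ('complete', None)
    print(isp.check_complete("for i in range(3):"))  # ('incomplete', 4)
    print(isp.check_complete("a = ["))               # ('incomplete', 4)
    print(isp.check_complete("a 1"))                 # ('invalid', None)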


class IPythonInputSplitter(InputSplitter):
    """An input splitter that recognizes all of IPython's special syntax."""

    # String with raw, untransformed input.
    source_raw = ''

    # Flag to track when a transformer has stored input that it hasn't given
    # back yet.
    transformer_accumulating = False

    # Flag to track when assemble_python_lines has stored input that it hasn't
    # given back yet.
    within_python_line = False

    # Private attributes

    # List with lines of raw input accumulated so far.
    _buffer_raw: List[str]

    def __init__(self, line_input_checker=True, physical_line_transforms=None,
                 logical_line_transforms=None, python_line_transforms=None):
        super(IPythonInputSplitter, self).__init__()
        self._buffer_raw = []
        self._validate = True

        if physical_line_transforms is not None:
            self.physical_line_transforms = physical_line_transforms
        else:
            self.physical_line_transforms = [
                leading_indent(),
                classic_prompt(),
                ipy_prompt(),
                cellmagic(end_on_blank_line=line_input_checker),
            ]

        self.assemble_logical_lines = assemble_logical_lines()
        if logical_line_transforms is not None:
            self.logical_line_transforms = logical_line_transforms
        else:
            self.logical_line_transforms = [
                help_end(),
                escaped_commands(),
                assign_from_magic(),
                assign_from_system(),
            ]

        self.assemble_python_lines = assemble_python_lines()
        if python_line_transforms is not None:
            self.python_line_transforms = python_line_transforms
        else:
            # We don't use any of these at present
            self.python_line_transforms = []

    @property
    def transforms(self):
        "Quick access to all transformers."
        return self.physical_line_transforms + \
               [self.assemble_logical_lines] + self.logical_line_transforms + \
               [self.assemble_python_lines] + self.python_line_transforms

    @property
    def transforms_in_use(self):
        """Transformers, excluding logical line transformers if we're in a
        Python line."""
        t = self.physical_line_transforms[:]
        if not self.within_python_line:
            t += [self.assemble_logical_lines] + self.logical_line_transforms
        return t + [self.assemble_python_lines] + self.python_line_transforms

    def reset(self):
        """Reset the input buffer and associated state."""
        super(IPythonInputSplitter, self).reset()
        self._buffer_raw[:] = []
        self.source_raw = ''
        self.transformer_accumulating = False
        self.within_python_line = False

        for t in self.transforms:
            try:
                t.reset()
            except SyntaxError:
                # Nothing that calls reset() expects to handle transformer
                # errors
                pass

    def flush_transformers(self: Self):
        def _flush(transform, outs: List[str]):
            """yield transformed lines

            always strings, never None

            transform: the current transform
            outs: an iterable of previously transformed inputs.
                Each may be multiline, which will be passed
                one line at a time to transform.
            """
            for out in outs:
                for line in out.splitlines():
                    # push one line at a time
                    tmp = transform.push(line)
                    if tmp is not None:
                        yield tmp

            # reset the transform
            tmp = transform.reset()
            if tmp is not None:
                yield tmp

        out: List[str] = []
        for t in self.transforms_in_use:
            out = _flush(t, out)

        out = list(out)
        if out:
            self._store('\n'.join(out))

    def raw_reset(self):
        """Return raw input only and perform a full reset.
        """
        out = self.source_raw
        self.reset()
        return out

    def source_reset(self):
        try:
            self.flush_transformers()
            return self.source
        finally:
            self.reset()

    def push_accepts_more(self):
        if self.transformer_accumulating:
            return True
        else:
            return super(IPythonInputSplitter, self).push_accepts_more()

    def transform_cell(self, cell):
        """Process and translate a cell of input.
        """
        self.reset()
        try:
            self.push(cell)
            self.flush_transformers()
            return self.source
        finally:
            self.reset()

    def push(self, lines:str) -> bool:
        """Push one or more lines of IPython input.

        This stores the given lines and returns a status code indicating
        whether the code forms a complete Python block or not, after processing
        all input lines for special IPython syntax.

        Any exceptions generated in compilation are swallowed, but if an
        exception was produced, the method returns True.

        Parameters
        ----------
        lines : string
            One or more lines of Python input.

        Returns
        -------
        is_complete : boolean
            True if the current input source (the result of the current input
            plus prior inputs) forms a complete Python execution block. Note that
            this value is also stored as a private attribute (_is_complete), so it
            can be queried at any time.
        """
        assert isinstance(lines, str)
        # We must ensure all input is pure unicode
        # ''.splitlines() --> [], but we need to push the empty line to transformers
        lines_list = lines.splitlines()
        if not lines_list:
            lines_list = ['']

        # Store raw source before applying any transformations to it. Note
        # that this must be done *after* the reset() call that would otherwise
        # flush the buffer.
        self._store(lines, self._buffer_raw, 'source_raw')

        transformed_lines_list = []
        for line in lines_list:
            transformed = self._transform_line(line)
            if transformed is not None:
                transformed_lines_list.append(transformed)

        if transformed_lines_list:
            transformed_lines = '\n'.join(transformed_lines_list)
            return super(IPythonInputSplitter, self).push(transformed_lines)
        else:
            # Got nothing back from transformers - they must be waiting for
            # more input.
            return False

    def _transform_line(self, line):
        """Push a line of input code through the various transformers.

        Returns any output from the transformers, or None if a transformer
        is accumulating lines.

        Sets self.transformer_accumulating as a side effect.
        """
        def _accumulating(dbg):
            #print(dbg)
            self.transformer_accumulating = True
            return None

        for transformer in self.physical_line_transforms:
            line = transformer.push(line)
            if line is None:
                return _accumulating(transformer)

        if not self.within_python_line:
            line = self.assemble_logical_lines.push(line)
            if line is None:
                return _accumulating('acc logical line')

            for transformer in self.logical_line_transforms:
                line = transformer.push(line)
                if line is None:
                    return _accumulating(transformer)

        line = self.assemble_python_lines.push(line)
        if line is None:
            self.within_python_line = True
            return _accumulating('acc python line')
        else:
            self.within_python_line = False

        for transformer in self.python_line_transforms:
            line = transformer.push(line)
            if line is None:
                return _accumulating(transformer)

        #print("transformers clear") #debug
        self.transformer_accumulating = False
        return line

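A sketch of the high-level entry point, ``transform_cell``, which rewrites
IPython-only syntax into plain Python calls on ``get_ipython()`` (the expected
outputs below are indicative; the exact strings depend on the IPython version)::

    from IPython.core.inputsplitter import IPythonInputSplitter

    isp = IPythonInputSplitter()
    print(isp.transform_cell("%timeit x = 1"))
    # roughly: get_ipython().run_line_magic('timeit', 'x = 1')
    print(isp.transform_cell("!ls -l"))
    # roughly: get_ipython().system('ls -l')
    print(isp.transform_cell("obj?"))
    # roughly: get_ipython().run_line_magic('pinfo', 'obj')
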
@@ -1,546 +1,545
1 """Tests for the Formatters."""
1 """Tests for the Formatters."""
2
2
3 from math import pi
3 from math import pi
4
4
5 try:
5 try:
6 import numpy
6 import numpy
7 except:
7 except:
8 numpy = None
8 numpy = None
9 import pytest
9 import pytest
10
10
11 from IPython import get_ipython
11 from IPython import get_ipython
12 from traitlets.config import Config
12 from traitlets.config import Config
13 from IPython.core.formatters import (
13 from IPython.core.formatters import (
14 PlainTextFormatter, HTMLFormatter, PDFFormatter, _mod_name_key,
14 PlainTextFormatter, HTMLFormatter, PDFFormatter, _mod_name_key,
15 DisplayFormatter, JSONFormatter,
15 DisplayFormatter, JSONFormatter,
16 )
16 )
17 from IPython.utils.io import capture_output
17 from IPython.utils.io import capture_output
18
18
19 class A(object):
19 class A(object):
20 def __repr__(self):
20 def __repr__(self):
21 return 'A()'
21 return 'A()'
22
22
23 class B(A):
23 class B(A):
24 def __repr__(self):
24 def __repr__(self):
25 return 'B()'
25 return 'B()'
26
26
27 class C:
27 class C:
28 pass
28 pass
29
29
30 class BadRepr(object):
30 class BadRepr(object):
31 def __repr__(self):
31 def __repr__(self):
32 raise ValueError("bad repr")
32 raise ValueError("bad repr")
33
33
34 class BadPretty(object):
34 class BadPretty(object):
35 _repr_pretty_ = None
35 _repr_pretty_ = None
36
36
37 class GoodPretty(object):
37 class GoodPretty(object):
38 def _repr_pretty_(self, pp, cycle):
38 def _repr_pretty_(self, pp, cycle):
39 pp.text('foo')
39 pp.text('foo')
40
40
41 def __repr__(self):
41 def __repr__(self):
42 return 'GoodPretty()'
42 return 'GoodPretty()'
43
43
44 def foo_printer(obj, pp, cycle):
44 def foo_printer(obj, pp, cycle):
45 pp.text('foo')
45 pp.text('foo')
46
46
47 def test_pretty():
47 def test_pretty():
48 f = PlainTextFormatter()
48 f = PlainTextFormatter()
49 f.for_type(A, foo_printer)
49 f.for_type(A, foo_printer)
50 assert f(A()) == "foo"
50 assert f(A()) == "foo"
51 assert f(B()) == "B()"
51 assert f(B()) == "B()"
52 assert f(GoodPretty()) == "foo"
52 assert f(GoodPretty()) == "foo"
53 # Just don't raise an exception for the following:
53 # Just don't raise an exception for the following:
54 f(BadPretty())
54 f(BadPretty())
55
55
56 f.pprint = False
56 f.pprint = False
57 assert f(A()) == "A()"
57 assert f(A()) == "A()"
58 assert f(B()) == "B()"
58 assert f(B()) == "B()"
59 assert f(GoodPretty()) == "GoodPretty()"
59 assert f(GoodPretty()) == "GoodPretty()"
60
60
61
61
62 def test_deferred():
62 def test_deferred():
63 f = PlainTextFormatter()
63 f = PlainTextFormatter()
64
64
65 def test_precision():
65 def test_precision():
66 """test various values for float_precision."""
66 """test various values for float_precision."""
67 f = PlainTextFormatter()
67 f = PlainTextFormatter()
68 assert f(pi) == repr(pi)
68 assert f(pi) == repr(pi)
69 f.float_precision = 0
69 f.float_precision = 0
70 if numpy:
70 if numpy:
71 po = numpy.get_printoptions()
71 po = numpy.get_printoptions()
72 assert po["precision"] == 0
72 assert po["precision"] == 0
73 assert f(pi) == "3"
73 assert f(pi) == "3"
74 f.float_precision = 2
74 f.float_precision = 2
75 if numpy:
75 if numpy:
76 po = numpy.get_printoptions()
76 po = numpy.get_printoptions()
77 assert po["precision"] == 2
77 assert po["precision"] == 2
78 assert f(pi) == "3.14"
78 assert f(pi) == "3.14"
79 f.float_precision = "%g"
79 f.float_precision = "%g"
80 if numpy:
80 if numpy:
81 po = numpy.get_printoptions()
81 po = numpy.get_printoptions()
82 assert po["precision"] == 2
82 assert po["precision"] == 2
83 assert f(pi) == "3.14159"
83 assert f(pi) == "3.14159"
84 f.float_precision = "%e"
84 f.float_precision = "%e"
85 assert f(pi) == "3.141593e+00"
85 assert f(pi) == "3.141593e+00"
86 f.float_precision = ""
86 f.float_precision = ""
87 if numpy:
87 if numpy:
88 po = numpy.get_printoptions()
88 po = numpy.get_printoptions()
89 assert po["precision"] == 8
89 assert po["precision"] == 8
90 assert f(pi) == repr(pi)
90 assert f(pi) == repr(pi)
91
91
92
92
93 def test_bad_precision():
93 def test_bad_precision():
94 """test various invalid values for float_precision."""
94 """test various invalid values for float_precision."""
95 f = PlainTextFormatter()
95 f = PlainTextFormatter()
96 def set_fp(p):
96 def set_fp(p):
97 f.float_precision = p
97 f.float_precision = p
98
98
99 pytest.raises(ValueError, set_fp, "%")
99 pytest.raises(ValueError, set_fp, "%")
100 pytest.raises(ValueError, set_fp, "%.3f%i")
100 pytest.raises(ValueError, set_fp, "%.3f%i")
101 pytest.raises(ValueError, set_fp, "foo")
101 pytest.raises(ValueError, set_fp, "foo")
102 pytest.raises(ValueError, set_fp, -1)
102 pytest.raises(ValueError, set_fp, -1)
103
103
104 def test_for_type():
104 def test_for_type():
105 f = PlainTextFormatter()
105 f = PlainTextFormatter()
106
106
107 # initial return, None
107 # initial return, None
108 assert f.for_type(C, foo_printer) is None
108 assert f.for_type(C, foo_printer) is None
109 # no func queries
109 # no func queries
110 assert f.for_type(C) is foo_printer
110 assert f.for_type(C) is foo_printer
111 # shouldn't change anything
111 # shouldn't change anything
112 assert f.for_type(C) is foo_printer
112 assert f.for_type(C) is foo_printer
113 # None should do the same
113 # None should do the same
114 assert f.for_type(C, None) is foo_printer
114 assert f.for_type(C, None) is foo_printer
115 assert f.for_type(C, None) is foo_printer
115 assert f.for_type(C, None) is foo_printer
116
116
117 def test_for_type_string():
117 def test_for_type_string():
118 f = PlainTextFormatter()
118 f = PlainTextFormatter()
119
119
120 type_str = '%s.%s' % (C.__module__, 'C')
120 type_str = '%s.%s' % (C.__module__, 'C')
121
121
122 # initial return, None
122 # initial return, None
123 assert f.for_type(type_str, foo_printer) is None
123 assert f.for_type(type_str, foo_printer) is None
124 # no func queries
124 # no func queries
125 assert f.for_type(type_str) is foo_printer
125 assert f.for_type(type_str) is foo_printer
126 assert _mod_name_key(C) in f.deferred_printers
126 assert _mod_name_key(C) in f.deferred_printers
127 assert f.for_type(C) is foo_printer
127 assert f.for_type(C) is foo_printer
128 assert _mod_name_key(C) not in f.deferred_printers
128 assert _mod_name_key(C) not in f.deferred_printers
129 assert C in f.type_printers
129 assert C in f.type_printers
130
130
131 def test_for_type_by_name():
131 def test_for_type_by_name():
132 f = PlainTextFormatter()
132 f = PlainTextFormatter()
133
133
134 mod = C.__module__
134 mod = C.__module__
135
135
136 # initial return, None
136 # initial return, None
137 assert f.for_type_by_name(mod, "C", foo_printer) is None
137 assert f.for_type_by_name(mod, "C", foo_printer) is None
138 # no func queries
138 # no func queries
139 assert f.for_type_by_name(mod, "C") is foo_printer
139 assert f.for_type_by_name(mod, "C") is foo_printer
140 # shouldn't change anything
140 # shouldn't change anything
141 assert f.for_type_by_name(mod, "C") is foo_printer
141 assert f.for_type_by_name(mod, "C") is foo_printer
142 # None should do the same
142 # None should do the same
143 assert f.for_type_by_name(mod, "C", None) is foo_printer
143 assert f.for_type_by_name(mod, "C", None) is foo_printer
144 assert f.for_type_by_name(mod, "C", None) is foo_printer
144 assert f.for_type_by_name(mod, "C", None) is foo_printer
145
145
146
146
147 def test_lookup():
147 def test_lookup():
148 f = PlainTextFormatter()
148 f = PlainTextFormatter()
149
149
150 f.for_type(C, foo_printer)
150 f.for_type(C, foo_printer)
151 assert f.lookup(C()) is foo_printer
151 assert f.lookup(C()) is foo_printer
152 with pytest.raises(KeyError):
152 with pytest.raises(KeyError):
153 f.lookup(A())
153 f.lookup(A())
154
154
155 def test_lookup_string():
155 def test_lookup_string():
156 f = PlainTextFormatter()
156 f = PlainTextFormatter()
157 type_str = '%s.%s' % (C.__module__, 'C')
157 type_str = '%s.%s' % (C.__module__, 'C')
158
158
159 f.for_type(type_str, foo_printer)
159 f.for_type(type_str, foo_printer)
160 assert f.lookup(C()) is foo_printer
160 assert f.lookup(C()) is foo_printer
161 # should move from deferred to imported dict
161 # should move from deferred to imported dict
162 assert _mod_name_key(C) not in f.deferred_printers
162 assert _mod_name_key(C) not in f.deferred_printers
163 assert C in f.type_printers
163 assert C in f.type_printers
164
164
165 def test_lookup_by_type():
165 def test_lookup_by_type():
166 f = PlainTextFormatter()
166 f = PlainTextFormatter()
167 f.for_type(C, foo_printer)
167 f.for_type(C, foo_printer)
168 assert f.lookup_by_type(C) is foo_printer
168 assert f.lookup_by_type(C) is foo_printer
169 with pytest.raises(KeyError):
169 with pytest.raises(KeyError):
170 f.lookup_by_type(A)
170 f.lookup_by_type(A)
171
171
172 def test_lookup_by_type_string():
172 def test_lookup_by_type_string():
173 f = PlainTextFormatter()
173 f = PlainTextFormatter()
174 type_str = '%s.%s' % (C.__module__, 'C')
174 type_str = '%s.%s' % (C.__module__, 'C')
175 f.for_type(type_str, foo_printer)
175 f.for_type(type_str, foo_printer)
176
176
177 # verify insertion
177 # verify insertion
178 assert _mod_name_key(C) in f.deferred_printers
178 assert _mod_name_key(C) in f.deferred_printers
179 assert C not in f.type_printers
179 assert C not in f.type_printers
180
180
181 assert f.lookup_by_type(type_str) is foo_printer
181 assert f.lookup_by_type(type_str) is foo_printer
182 # lookup by string doesn't cause import
182 # lookup by string doesn't cause import
183 assert _mod_name_key(C) in f.deferred_printers
183 assert _mod_name_key(C) in f.deferred_printers
184 assert C not in f.type_printers
184 assert C not in f.type_printers
185
185
186 assert f.lookup_by_type(C) is foo_printer
186 assert f.lookup_by_type(C) is foo_printer
187 # should move from deferred to imported dict
187 # should move from deferred to imported dict
188 assert _mod_name_key(C) not in f.deferred_printers
188 assert _mod_name_key(C) not in f.deferred_printers
189 assert C in f.type_printers
189 assert C in f.type_printers
190
190
191 def test_in_formatter():
191 def test_in_formatter():
192 f = PlainTextFormatter()
192 f = PlainTextFormatter()
193 f.for_type(C, foo_printer)
193 f.for_type(C, foo_printer)
194 type_str = '%s.%s' % (C.__module__, 'C')
194 type_str = '%s.%s' % (C.__module__, 'C')
195 assert C in f
195 assert C in f
196 assert type_str in f
196 assert type_str in f
197
197
198 def test_string_in_formatter():
198 def test_string_in_formatter():
199 f = PlainTextFormatter()
199 f = PlainTextFormatter()
200 type_str = '%s.%s' % (C.__module__, 'C')
200 type_str = '%s.%s' % (C.__module__, 'C')
201 f.for_type(type_str, foo_printer)
201 f.for_type(type_str, foo_printer)
202 assert type_str in f
202 assert type_str in f
203 assert C in f
203 assert C in f
204
204
205 def test_pop():
205 def test_pop():
206 f = PlainTextFormatter()
206 f = PlainTextFormatter()
207 f.for_type(C, foo_printer)
207 f.for_type(C, foo_printer)
208 assert f.lookup_by_type(C) is foo_printer
208 assert f.lookup_by_type(C) is foo_printer
209 assert f.pop(C, None) is foo_printer
209 assert f.pop(C, None) is foo_printer
210 f.for_type(C, foo_printer)
210 f.for_type(C, foo_printer)
211 assert f.pop(C) is foo_printer
211 assert f.pop(C) is foo_printer
212 with pytest.raises(KeyError):
212 with pytest.raises(KeyError):
213 f.lookup_by_type(C)
213 f.lookup_by_type(C)
214 with pytest.raises(KeyError):
214 with pytest.raises(KeyError):
215 f.pop(C)
215 f.pop(C)
216 with pytest.raises(KeyError):
216 with pytest.raises(KeyError):
217 f.pop(A)
217 f.pop(A)
218 assert f.pop(A, None) is None
218 assert f.pop(A, None) is None
219
219
220 def test_pop_string():
220 def test_pop_string():
221 f = PlainTextFormatter()
221 f = PlainTextFormatter()
222 type_str = '%s.%s' % (C.__module__, 'C')
222 type_str = '%s.%s' % (C.__module__, 'C')
223
223
224 with pytest.raises(KeyError):
224 with pytest.raises(KeyError):
225 f.pop(type_str)
225 f.pop(type_str)
226
226
227 f.for_type(type_str, foo_printer)
227 f.for_type(type_str, foo_printer)
228 f.pop(type_str)
228 f.pop(type_str)
229 with pytest.raises(KeyError):
229 with pytest.raises(KeyError):
230 f.lookup_by_type(C)
230 f.lookup_by_type(C)
231 with pytest.raises(KeyError):
231 with pytest.raises(KeyError):
232 f.pop(type_str)
232 f.pop(type_str)
233
233
234 f.for_type(C, foo_printer)
234 f.for_type(C, foo_printer)
235 assert f.pop(type_str, None) is foo_printer
235 assert f.pop(type_str, None) is foo_printer
236 with pytest.raises(KeyError):
236 with pytest.raises(KeyError):
237 f.lookup_by_type(C)
237 f.lookup_by_type(C)
238 with pytest.raises(KeyError):
238 with pytest.raises(KeyError):
239 f.pop(type_str)
239 f.pop(type_str)
240 assert f.pop(type_str, None) is None
240 assert f.pop(type_str, None) is None
241
241
242
242
243 def test_error_method():
243 def test_error_method():
244 f = HTMLFormatter()
244 f = HTMLFormatter()
245 class BadHTML(object):
245 class BadHTML(object):
246 def _repr_html_(self):
246 def _repr_html_(self):
247 raise ValueError("Bad HTML")
247 raise ValueError("Bad HTML")
248 bad = BadHTML()
248 bad = BadHTML()
249 with capture_output() as captured:
249 with capture_output() as captured:
250 result = f(bad)
250 result = f(bad)
251 assert result is None
251 assert result is None
252 assert "Traceback" in captured.stdout
252 assert "Traceback" in captured.stdout
253 assert "Bad HTML" in captured.stdout
253 assert "Bad HTML" in captured.stdout
254 assert "_repr_html_" in captured.stdout
254 assert "_repr_html_" in captured.stdout
255
255
256 def test_nowarn_notimplemented():
256 def test_nowarn_notimplemented():
257 f = HTMLFormatter()
257 f = HTMLFormatter()
258 class HTMLNotImplemented(object):
258 class HTMLNotImplemented(object):
259 def _repr_html_(self):
259 def _repr_html_(self):
260 raise NotImplementedError
260 raise NotImplementedError
261 h = HTMLNotImplemented()
261 h = HTMLNotImplemented()
262 with capture_output() as captured:
262 with capture_output() as captured:
263 result = f(h)
263 result = f(h)
264 assert result is None
264 assert result is None
265 assert "" == captured.stderr
265 assert "" == captured.stderr
266 assert "" == captured.stdout
266 assert "" == captured.stdout
267
267
268
268
269 def test_warn_error_for_type():
269 def test_warn_error_for_type():
270 f = HTMLFormatter()
270 f = HTMLFormatter()
271 f.for_type(int, lambda i: name_error)
271 f.for_type(int, lambda i: name_error)
272 with capture_output() as captured:
272 with capture_output() as captured:
273 result = f(5)
273 result = f(5)
274 assert result is None
274 assert result is None
275 assert "Traceback" in captured.stdout
275 assert "Traceback" in captured.stdout
276 assert "NameError" in captured.stdout
276 assert "NameError" in captured.stdout
277 assert "name_error" in captured.stdout
277 assert "name_error" in captured.stdout
278
278
279 def test_error_pretty_method():
279 def test_error_pretty_method():
280 f = PlainTextFormatter()
280 f = PlainTextFormatter()
281 class BadPretty(object):
281 class BadPretty(object):
282 def _repr_pretty_(self):
282 def _repr_pretty_(self):
283 return "hello"
283 return "hello"
284 bad = BadPretty()
284 bad = BadPretty()
285 with capture_output() as captured:
285 with capture_output() as captured:
286 result = f(bad)
286 result = f(bad)
287 assert result is None
287 assert result is None
288 assert "Traceback" in captured.stdout
288 assert "Traceback" in captured.stdout
289 assert "_repr_pretty_" in captured.stdout
289 assert "_repr_pretty_" in captured.stdout
290 assert "given" in captured.stdout
290 assert "given" in captured.stdout
291 assert "argument" in captured.stdout
291 assert "argument" in captured.stdout
292
292
293
293
294 def test_bad_repr_traceback():
294 def test_bad_repr_traceback():
295 f = PlainTextFormatter()
295 f = PlainTextFormatter()
296 bad = BadRepr()
296 bad = BadRepr()
297 with capture_output() as captured:
297 with capture_output() as captured:
298 result = f(bad)
298 result = f(bad)
299 # catches error, returns None
299 # catches error, returns None
300 assert result is None
300 assert result is None
301 assert "Traceback" in captured.stdout
301 assert "Traceback" in captured.stdout
302 assert "__repr__" in captured.stdout
302 assert "__repr__" in captured.stdout
303 assert "ValueError" in captured.stdout
303 assert "ValueError" in captured.stdout
304
304
305
305
306 class MakePDF(object):
306 class MakePDF(object):
307 def _repr_pdf_(self):
307 def _repr_pdf_(self):
308 return 'PDF'
308 return 'PDF'
309
309
310 def test_pdf_formatter():
310 def test_pdf_formatter():
311 pdf = MakePDF()
311 pdf = MakePDF()
312 f = PDFFormatter()
312 f = PDFFormatter()
313 assert f(pdf) == "PDF"
313 assert f(pdf) == "PDF"
314
314
315
315
316 def test_print_method_bound():
316 def test_print_method_bound():
317 f = HTMLFormatter()
317 f = HTMLFormatter()
318 class MyHTML(object):
318 class MyHTML(object):
319 def _repr_html_(self):
319 def _repr_html_(self):
320 return "hello"
320 return "hello"
321 with capture_output() as captured:
321 with capture_output() as captured:
322 result = f(MyHTML)
322 result = f(MyHTML)
323 assert result is None
323 assert result is None
324 assert "FormatterWarning" not in captured.stderr
324 assert "FormatterWarning" not in captured.stderr
325
325
326 with capture_output() as captured:
326 with capture_output() as captured:
327 result = f(MyHTML())
327 result = f(MyHTML())
328 assert result == "hello"
328 assert result == "hello"
329 assert captured.stderr == ""
329 assert captured.stderr == ""
330
330
331
331
332 def test_print_method_weird():
332 def test_print_method_weird():
333
333
334 class TextMagicHat(object):
334 class TextMagicHat(object):
335 def __getattr__(self, key):
335 def __getattr__(self, key):
336 return key
336 return key
337
337
338 f = HTMLFormatter()
338 f = HTMLFormatter()
339
339
340 text_hat = TextMagicHat()
340 text_hat = TextMagicHat()
341 assert text_hat._repr_html_ == "_repr_html_"
341 assert text_hat._repr_html_ == "_repr_html_"
342 with capture_output() as captured:
342 with capture_output() as captured:
343 result = f(text_hat)
343 result = f(text_hat)
344
344
345 assert result is None
345 assert result is None
346 assert "FormatterWarning" not in captured.stderr
346 assert "FormatterWarning" not in captured.stderr
347
347
348 class CallableMagicHat(object):
348 class CallableMagicHat(object):
349 def __getattr__(self, key):
349 def __getattr__(self, key):
350 return lambda : key
350 return lambda : key
351
351
352 call_hat = CallableMagicHat()
352 call_hat = CallableMagicHat()
353 with capture_output() as captured:
353 with capture_output() as captured:
354 result = f(call_hat)
354 result = f(call_hat)
355
355
356 assert result is None
356 assert result is None
357
357
358 class BadReprArgs(object):
358 class BadReprArgs(object):
359 def _repr_html_(self, extra, args):
359 def _repr_html_(self, extra, args):
360 return "html"
360 return "html"
361
361
362 bad = BadReprArgs()
362 bad = BadReprArgs()
363 with capture_output() as captured:
363 with capture_output() as captured:
364 result = f(bad)
364 result = f(bad)
365
365
366 assert result is None
366 assert result is None
367 assert "FormatterWarning" not in captured.stderr
367 assert "FormatterWarning" not in captured.stderr
368
368
369
369
370 def test_format_config():
370 def test_format_config():
371 """config objects don't pretend to support fancy reprs with lazy attrs"""
371 """config objects don't pretend to support fancy reprs with lazy attrs"""
372 f = HTMLFormatter()
372 f = HTMLFormatter()
373 cfg = Config()
373 cfg = Config()
374 with capture_output() as captured:
374 with capture_output() as captured:
375 result = f(cfg)
375 result = f(cfg)
376 assert result is None
376 assert result is None
377 assert captured.stderr == ""
377 assert captured.stderr == ""
378
378
379 with capture_output() as captured:
379 with capture_output() as captured:
380 result = f(Config)
380 result = f(Config)
381 assert result is None
381 assert result is None
382 assert captured.stderr == ""
382 assert captured.stderr == ""
383
383
384
384
385 def test_pretty_max_seq_length():
385 def test_pretty_max_seq_length():
386 f = PlainTextFormatter(max_seq_length=1)
386 f = PlainTextFormatter(max_seq_length=1)
387 lis = list(range(3))
387 lis = list(range(3))
388 text = f(lis)
388 text = f(lis)
389 assert text == "[0, ...]"
389 assert text == "[0, ...]"
390 f.max_seq_length = 0
390 f.max_seq_length = 0
391 text = f(lis)
391 text = f(lis)
392 assert text == "[0, 1, 2]"
392 assert text == "[0, 1, 2]"
393 text = f(list(range(1024)))
393 text = f(list(range(1024)))
394 lines = text.splitlines()
394 lines = text.splitlines()
395 assert len(lines) == 1024
395 assert len(lines) == 1024
396
396
397
397
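# Aside: a minimal sketch of the max_seq_length behaviour exercised just above;
# the values are illustrative.
from IPython.core.formatters import PlainTextFormatter

fmt = PlainTextFormatter(max_seq_length=3)   # truncate containers after 3 items
print(fmt(list(range(10))))                  # -> '[0, 1, 2, ...]'
fmt.max_seq_length = 0                       # 0 disables truncation entirely
print(fmt(list(range(10))))                  # -> '[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]'
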
398 def test_ipython_display_formatter():
398 def test_ipython_display_formatter():
399 """Objects with _ipython_display_ defined bypass other formatters"""
399 """Objects with _ipython_display_ defined bypass other formatters"""
400 f = get_ipython().display_formatter
400 f = get_ipython().display_formatter
401 catcher = []
401 catcher = []
402 class SelfDisplaying(object):
402 class SelfDisplaying(object):
403 def _ipython_display_(self):
403 def _ipython_display_(self):
404 catcher.append(self)
404 catcher.append(self)
405
405
406 class NotSelfDisplaying(object):
406 class NotSelfDisplaying(object):
407 def __repr__(self):
407 def __repr__(self):
408 return "NotSelfDisplaying"
408 return "NotSelfDisplaying"
409
409
410 def _ipython_display_(self):
410 def _ipython_display_(self):
411 raise NotImplementedError
411 raise NotImplementedError
412
412
413 save_enabled = f.ipython_display_formatter.enabled
413 save_enabled = f.ipython_display_formatter.enabled
414 f.ipython_display_formatter.enabled = True
414 f.ipython_display_formatter.enabled = True
415
415
416 yes = SelfDisplaying()
416 yes = SelfDisplaying()
417 no = NotSelfDisplaying()
417 no = NotSelfDisplaying()
418
418
419 d, md = f.format(no)
419 d, md = f.format(no)
420 assert d == {"text/plain": repr(no)}
420 assert d == {"text/plain": repr(no)}
421 assert md == {}
421 assert md == {}
422 assert catcher == []
422 assert catcher == []
423
423
424 d, md = f.format(yes)
424 d, md = f.format(yes)
425 assert d == {}
425 assert d == {}
426 assert md == {}
426 assert md == {}
427 assert catcher == [yes]
427 assert catcher == [yes]
428
428
429 f.ipython_display_formatter.enabled = save_enabled
429 f.ipython_display_formatter.enabled = save_enabled
430
430
431
431
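# Aside: a minimal sketch of the _ipython_display_ protocol relied on above;
# the progress-bar class is hypothetical.
from IPython.display import HTML, display

class SelfRenderingBar:
    def _ipython_display_(self):
        # The display formatter calls this hook and then skips every other
        # _repr_*_ method, so format() returns an empty data dict for it.
        display(HTML("<progress value='40' max='100'></progress>"))
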
432 def test_repr_mime():
432 def test_repr_mime():
433 class HasReprMime(object):
433 class HasReprMime(object):
434 def _repr_mimebundle_(self, include=None, exclude=None):
434 def _repr_mimebundle_(self, include=None, exclude=None):
435 return {
435 return {
436 'application/json+test.v2': {
436 'application/json+test.v2': {
437 'x': 'y'
437 'x': 'y'
438 },
438 },
439 'plain/text' : '<HasReprMime>',
439 'plain/text' : '<HasReprMime>',
440 'image/png' : 'i-overwrite'
440 'image/png' : 'i-overwrite'
441 }
441 }
442
442
443 def _repr_png_(self):
443 def _repr_png_(self):
444 return 'should-be-overwritten'
444 return 'should-be-overwritten'
445 def _repr_html_(self):
445 def _repr_html_(self):
446 return '<b>hi!</b>'
446 return '<b>hi!</b>'
447
447
448 f = get_ipython().display_formatter
448 f = get_ipython().display_formatter
449 html_f = f.formatters['text/html']
449 html_f = f.formatters['text/html']
450 save_enabled = html_f.enabled
450 save_enabled = html_f.enabled
451 html_f.enabled = True
451 html_f.enabled = True
452 obj = HasReprMime()
452 obj = HasReprMime()
453 d, md = f.format(obj)
453 d, md = f.format(obj)
454 html_f.enabled = save_enabled
454 html_f.enabled = save_enabled
455
455
456 assert sorted(d) == [
456 assert sorted(d) == [
457 "application/json+test.v2",
457 "application/json+test.v2",
458 "image/png",
458 "image/png",
459 "plain/text",
459 "plain/text",
460 "text/html",
460 "text/html",
461 "text/plain",
461 "text/plain",
462 ]
462 ]
463 assert md == {}
463 assert md == {}
464
464
465 d, md = f.format(obj, include={"image/png"})
465 d, md = f.format(obj, include={"image/png"})
466 assert list(d.keys()) == [
466 assert list(d.keys()) == [
467 "image/png"
467 "image/png"
468 ], "Include should filter out even things from repr_mimebundle"
468 ], "Include should filter out even things from repr_mimebundle"
469
469
470 assert d["image/png"] == "i-overwrite", "_repr_mimebundle_ takes precedence"
470 assert d["image/png"] == "i-overwrite", "_repr_mimebundle_ takes precedence"
471
471
472
472
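# Aside: a hedged sketch of the dict-returning _repr_mimebundle_ hook exercised
# above; the class, mime types and payloads are illustrative only.
class Report:
    def _repr_mimebundle_(self, include=None, exclude=None):
        # Keys returned here win over the matching _repr_*_ methods; the
        # formatter applies include/exclude filtering afterwards.
        return {
            "text/html": "<h1>Quarterly report</h1>",
            "text/plain": "Quarterly report",
        }
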
473 def test_pass_correct_include_exclude():
473 def test_pass_correct_include_exclude():
474 class Tester(object):
474 class Tester(object):
475
475
476 def __init__(self, include=None, exclude=None):
476 def __init__(self, include=None, exclude=None):
477 self.include = include
477 self.include = include
478 self.exclude = exclude
478 self.exclude = exclude
479
479
480 def _repr_mimebundle_(self, include, exclude, **kwargs):
480 def _repr_mimebundle_(self, include, exclude, **kwargs):
481 if include and (include != self.include):
481 if include and (include != self.include):
482 raise ValueError('include got modified: display() may be broken.')
482 raise ValueError('include got modified: display() may be broken.')
483 if exclude and (exclude != self.exclude):
483 if exclude and (exclude != self.exclude):
484 raise ValueError('exclude got modified: display() may be broken.')
484 raise ValueError('exclude got modified: display() may be broken.')
485
485
486 return None
486 return None
487
487
488 include = {'a', 'b', 'c'}
488 include = {'a', 'b', 'c'}
489 exclude = {'c', 'e' , 'f'}
489 exclude = {'c', 'e' , 'f'}
490
490
491 f = get_ipython().display_formatter
491 f = get_ipython().display_formatter
492 f.format(Tester(include=include, exclude=exclude), include=include, exclude=exclude)
492 f.format(Tester(include=include, exclude=exclude), include=include, exclude=exclude)
493 f.format(Tester(exclude=exclude), exclude=exclude)
493 f.format(Tester(exclude=exclude), exclude=exclude)
494 f.format(Tester(include=include), include=include)
494 f.format(Tester(include=include), include=include)
495
495
496
496
497 def test_repr_mime_meta():
497 def test_repr_mime_meta():
498 class HasReprMimeMeta(object):
498 class HasReprMimeMeta(object):
499 def _repr_mimebundle_(self, include=None, exclude=None):
499 def _repr_mimebundle_(self, include=None, exclude=None):
500 data = {
500 data = {
501 'image/png': 'base64-image-data',
501 'image/png': 'base64-image-data',
502 }
502 }
503 metadata = {
503 metadata = {
504 'image/png': {
504 'image/png': {
505 'width': 5,
505 'width': 5,
506 'height': 10,
506 'height': 10,
507 }
507 }
508 }
508 }
509 return (data, metadata)
509 return (data, metadata)
510
510
511 f = get_ipython().display_formatter
511 f = get_ipython().display_formatter
512 obj = HasReprMimeMeta()
512 obj = HasReprMimeMeta()
513 d, md = f.format(obj)
513 d, md = f.format(obj)
514 assert sorted(d) == ["image/png", "text/plain"]
514 assert sorted(d) == ["image/png", "text/plain"]
515 assert md == {
515 assert md == {
516 "image/png": {
516 "image/png": {
517 "width": 5,
517 "width": 5,
518 "height": 10,
518 "height": 10,
519 }
519 }
520 }
520 }
521
521
522
522
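# Aside: the two-element form checked above pairs the data dict with
# per-mime-type metadata; the class and the sizes are made up.
class Thumbnail:
    def _repr_mimebundle_(self, include=None, exclude=None):
        data = {"image/png": "base64-image-data"}
        metadata = {"image/png": {"width": 5, "height": 10}}
        # Returning (data, metadata) attaches display hints (for example the
        # intended rendering size) alongside the payload.
        return data, metadata
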
523 def test_repr_mime_failure():
523 def test_repr_mime_failure():
524 class BadReprMime(object):
524 class BadReprMime(object):
525 def _repr_mimebundle_(self, include=None, exclude=None):
525 def _repr_mimebundle_(self, include=None, exclude=None):
526 raise RuntimeError
526 raise RuntimeError
527
527
528 f = get_ipython().display_formatter
528 f = get_ipython().display_formatter
529 obj = BadReprMime()
529 obj = BadReprMime()
530 d, md = f.format(obj)
530 d, md = f.format(obj)
531 assert "text/plain" in d
531 assert "text/plain" in d
532
532
533
533
534 def test_custom_repr_namedtuple_partialmethod():
534 def test_custom_repr_namedtuple_partialmethod():
535 from functools import partialmethod
535 from functools import partialmethod
536 from typing import NamedTuple
536 from typing import NamedTuple
537
537
538 class Foo(NamedTuple):
538 class Foo(NamedTuple): ...
539 ...
540
539
541 Foo.__repr__ = partialmethod(lambda obj: "Hello World")
540 Foo.__repr__ = partialmethod(lambda obj: "Hello World")
542 foo = Foo()
541 foo = Foo()
543
542
544 f = PlainTextFormatter()
543 f = PlainTextFormatter()
545 assert f.pprint
544 assert f.pprint
546 assert f(foo) == "Hello World"
545 assert f(foo) == "Hello World"
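# Aside: taken together, the formatter tests above cover a registration API that
# can be sketched roughly as follows; the Vector class, its printer and the
# dotted module name are hypothetical.
from IPython.core.formatters import PlainTextFormatter

class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

def vector_printer(obj, p, cycle):
    # custom pretty-printer signature: (object, RepresentationPrinter, cycle)
    p.text(f"<{obj.x}, {obj.y}>")

fmt = PlainTextFormatter()
fmt.for_type(Vector, vector_printer)               # register by type ...
fmt.for_type("some_module.Other", vector_printer)  # ... or lazily by dotted name
assert fmt.lookup(Vector(1, 2)) is vector_printer
print(fmt(Vector(1, 2)))                           # -> '<1, 2>'
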
@@ -1,447 +1,448
1 """Tests for the token-based transformers in IPython.core.inputtransformer2
1 """Tests for the token-based transformers in IPython.core.inputtransformer2
2
2
3 Line-based transformers are the simpler ones; token-based transformers are
3 Line-based transformers are the simpler ones; token-based transformers are
4 more complex. See test_inputtransformer2_line for tests for line-based
4 more complex. See test_inputtransformer2_line for tests for line-based
5 transformations.
5 transformations.
6 """
6 """
7
7 import platform
8 import platform
8 import string
9 import string
9 import sys
10 import sys
10 from textwrap import dedent
11 from textwrap import dedent
11
12
12 import pytest
13 import pytest
13
14
14 from IPython.core import inputtransformer2 as ipt2
15 from IPython.core import inputtransformer2 as ipt2
15 from IPython.core.inputtransformer2 import _find_assign_op, make_tokens_by_line
16 from IPython.core.inputtransformer2 import _find_assign_op, make_tokens_by_line
16
17
17 MULTILINE_MAGIC = (
18 MULTILINE_MAGIC = (
18 """\
19 """\
19 a = f()
20 a = f()
20 %foo \\
21 %foo \\
21 bar
22 bar
22 g()
23 g()
23 """.splitlines(
24 """.splitlines(
24 keepends=True
25 keepends=True
25 ),
26 ),
26 (2, 0),
27 (2, 0),
27 """\
28 """\
28 a = f()
29 a = f()
29 get_ipython().run_line_magic('foo', ' bar')
30 get_ipython().run_line_magic('foo', ' bar')
30 g()
31 g()
31 """.splitlines(
32 """.splitlines(
32 keepends=True
33 keepends=True
33 ),
34 ),
34 )
35 )
35
36
36 INDENTED_MAGIC = (
37 INDENTED_MAGIC = (
37 """\
38 """\
38 for a in range(5):
39 for a in range(5):
39 %ls
40 %ls
40 """.splitlines(
41 """.splitlines(
41 keepends=True
42 keepends=True
42 ),
43 ),
43 (2, 4),
44 (2, 4),
44 """\
45 """\
45 for a in range(5):
46 for a in range(5):
46 get_ipython().run_line_magic('ls', '')
47 get_ipython().run_line_magic('ls', '')
47 """.splitlines(
48 """.splitlines(
48 keepends=True
49 keepends=True
49 ),
50 ),
50 )
51 )
51
52
52 CRLF_MAGIC = (
53 CRLF_MAGIC = (
53 ["a = f()\n", "%ls\r\n", "g()\n"],
54 ["a = f()\n", "%ls\r\n", "g()\n"],
54 (2, 0),
55 (2, 0),
55 ["a = f()\n", "get_ipython().run_line_magic('ls', '')\n", "g()\n"],
56 ["a = f()\n", "get_ipython().run_line_magic('ls', '')\n", "g()\n"],
56 )
57 )
57
58
58 MULTILINE_MAGIC_ASSIGN = (
59 MULTILINE_MAGIC_ASSIGN = (
59 """\
60 """\
60 a = f()
61 a = f()
61 b = %foo \\
62 b = %foo \\
62 bar
63 bar
63 g()
64 g()
64 """.splitlines(
65 """.splitlines(
65 keepends=True
66 keepends=True
66 ),
67 ),
67 (2, 4),
68 (2, 4),
68 """\
69 """\
69 a = f()
70 a = f()
70 b = get_ipython().run_line_magic('foo', ' bar')
71 b = get_ipython().run_line_magic('foo', ' bar')
71 g()
72 g()
72 """.splitlines(
73 """.splitlines(
73 keepends=True
74 keepends=True
74 ),
75 ),
75 )
76 )
76
77
77 MULTILINE_SYSTEM_ASSIGN = ("""\
78 MULTILINE_SYSTEM_ASSIGN = ("""\
78 a = f()
79 a = f()
79 b = !foo \\
80 b = !foo \\
80 bar
81 bar
81 g()
82 g()
82 """.splitlines(keepends=True), (2, 4), """\
83 """.splitlines(keepends=True), (2, 4), """\
83 a = f()
84 a = f()
84 b = get_ipython().getoutput('foo bar')
85 b = get_ipython().getoutput('foo bar')
85 g()
86 g()
86 """.splitlines(keepends=True))
87 """.splitlines(keepends=True))
87
88
88 #####
89 #####
89
90
90 MULTILINE_SYSTEM_ASSIGN_AFTER_DEDENT = (
91 MULTILINE_SYSTEM_ASSIGN_AFTER_DEDENT = (
91 """\
92 """\
92 def test():
93 def test():
93 for i in range(1):
94 for i in range(1):
94 print(i)
95 print(i)
95 res =! ls
96 res =! ls
96 """.splitlines(
97 """.splitlines(
97 keepends=True
98 keepends=True
98 ),
99 ),
99 (4, 7),
100 (4, 7),
100 """\
101 """\
101 def test():
102 def test():
102 for i in range(1):
103 for i in range(1):
103 print(i)
104 print(i)
104 res =get_ipython().getoutput(\' ls\')
105 res =get_ipython().getoutput(\' ls\')
105 """.splitlines(
106 """.splitlines(
106 keepends=True
107 keepends=True
107 ),
108 ),
108 )
109 )
109
110
110 ######
111 ######
111
112
112 AUTOCALL_QUOTE = ([",f 1 2 3\n"], (1, 0), ['f("1", "2", "3")\n'])
113 AUTOCALL_QUOTE = ([",f 1 2 3\n"], (1, 0), ['f("1", "2", "3")\n'])
113
114
114 AUTOCALL_QUOTE2 = ([";f 1 2 3\n"], (1, 0), ['f("1 2 3")\n'])
115 AUTOCALL_QUOTE2 = ([";f 1 2 3\n"], (1, 0), ['f("1 2 3")\n'])
115
116
116 AUTOCALL_PAREN = (["/f 1 2 3\n"], (1, 0), ["f(1, 2, 3)\n"])
117 AUTOCALL_PAREN = (["/f 1 2 3\n"], (1, 0), ["f(1, 2, 3)\n"])
117
118
118 SIMPLE_HELP = (["foo?\n"], (1, 0), ["get_ipython().run_line_magic('pinfo', 'foo')\n"])
119 SIMPLE_HELP = (["foo?\n"], (1, 0), ["get_ipython().run_line_magic('pinfo', 'foo')\n"])
119
120
120 DETAILED_HELP = (
121 DETAILED_HELP = (
121 ["foo??\n"],
122 ["foo??\n"],
122 (1, 0),
123 (1, 0),
123 ["get_ipython().run_line_magic('pinfo2', 'foo')\n"],
124 ["get_ipython().run_line_magic('pinfo2', 'foo')\n"],
124 )
125 )
125
126
126 MAGIC_HELP = (["%foo?\n"], (1, 0), ["get_ipython().run_line_magic('pinfo', '%foo')\n"])
127 MAGIC_HELP = (["%foo?\n"], (1, 0), ["get_ipython().run_line_magic('pinfo', '%foo')\n"])
127
128
128 HELP_IN_EXPR = (
129 HELP_IN_EXPR = (
129 ["a = b + c?\n"],
130 ["a = b + c?\n"],
130 (1, 0),
131 (1, 0),
131 ["get_ipython().run_line_magic('pinfo', 'c')\n"],
132 ["get_ipython().run_line_magic('pinfo', 'c')\n"],
132 )
133 )
133
134
134 HELP_CONTINUED_LINE = (
135 HELP_CONTINUED_LINE = (
135 """\
136 """\
136 a = \\
137 a = \\
137 zip?
138 zip?
138 """.splitlines(
139 """.splitlines(
139 keepends=True
140 keepends=True
140 ),
141 ),
141 (1, 0),
142 (1, 0),
142 [r"get_ipython().run_line_magic('pinfo', 'zip')" + "\n"],
143 [r"get_ipython().run_line_magic('pinfo', 'zip')" + "\n"],
143 )
144 )
144
145
145 HELP_MULTILINE = (
146 HELP_MULTILINE = (
146 """\
147 """\
147 (a,
148 (a,
148 b) = zip?
149 b) = zip?
149 """.splitlines(
150 """.splitlines(
150 keepends=True
151 keepends=True
151 ),
152 ),
152 (1, 0),
153 (1, 0),
153 [r"get_ipython().run_line_magic('pinfo', 'zip')" + "\n"],
154 [r"get_ipython().run_line_magic('pinfo', 'zip')" + "\n"],
154 )
155 )
155
156
156 HELP_UNICODE = (
157 HELP_UNICODE = (
157 ["Ο€.foo?\n"],
158 ["Ο€.foo?\n"],
158 (1, 0),
159 (1, 0),
159 ["get_ipython().run_line_magic('pinfo', 'Ο€.foo')\n"],
160 ["get_ipython().run_line_magic('pinfo', 'Ο€.foo')\n"],
160 )
161 )
161
162
162
163
163 def null_cleanup_transformer(lines):
164 def null_cleanup_transformer(lines):
164 """
165 """
165 A cleanup transform that returns an empty list.
166 A cleanup transform that returns an empty list.
166 """
167 """
167 return []
168 return []
168
169
169
170
170 def test_check_make_token_by_line_never_ends_empty():
171 def test_check_make_token_by_line_never_ends_empty():
171 """
172 """
172 Check that no sequence of single or double characters ends up leading to an empty list of tokens
173 Check that no sequence of single or double characters ends up leading to an empty list of tokens
173 """
174 """
174 from string import printable
175 from string import printable
175
176
176 for c in printable:
177 for c in printable:
177 assert make_tokens_by_line(c)[-1] != []
178 assert make_tokens_by_line(c)[-1] != []
178 for k in printable:
179 for k in printable:
179 assert make_tokens_by_line(c + k)[-1] != []
180 assert make_tokens_by_line(c + k)[-1] != []
180
181
181
182
182 def check_find(transformer, case, match=True):
183 def check_find(transformer, case, match=True):
183 sample, expected_start, _ = case
184 sample, expected_start, _ = case
184 tbl = make_tokens_by_line(sample)
185 tbl = make_tokens_by_line(sample)
185 res = transformer.find(tbl)
186 res = transformer.find(tbl)
186 if match:
187 if match:
187 # start_line is stored 0-indexed, expected values are 1-indexed
188 # start_line is stored 0-indexed, expected values are 1-indexed
188 assert (res.start_line + 1, res.start_col) == expected_start
189 assert (res.start_line + 1, res.start_col) == expected_start
189 return res
190 return res
190 else:
191 else:
191 assert res is None
192 assert res is None
192
193
193
194
194 def check_transform(transformer_cls, case):
195 def check_transform(transformer_cls, case):
195 lines, start, expected = case
196 lines, start, expected = case
196 transformer = transformer_cls(start)
197 transformer = transformer_cls(start)
197 assert transformer.transform(lines) == expected
198 assert transformer.transform(lines) == expected
198
199
199
200
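# Aside: a hedged end-to-end view of the rewrites these fixtures encode, as
# produced by TransformerManager.transform_cell.
from IPython.core.inputtransformer2 import TransformerManager

tm = TransformerManager()
print(tm.transform_cell("%ls"))       # get_ipython().run_line_magic('ls', '')
print(tm.transform_cell("a = !ls"))   # a = get_ipython().getoutput('ls')
print(tm.transform_cell("zip?"))      # get_ipython().run_line_magic('pinfo', 'zip')
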
200 def test_continued_line():
201 def test_continued_line():
201 lines = MULTILINE_MAGIC_ASSIGN[0]
202 lines = MULTILINE_MAGIC_ASSIGN[0]
202 assert ipt2.find_end_of_continued_line(lines, 1) == 2
203 assert ipt2.find_end_of_continued_line(lines, 1) == 2
203
204
204 assert ipt2.assemble_continued_line(lines, (1, 5), 2) == "foo bar"
205 assert ipt2.assemble_continued_line(lines, (1, 5), 2) == "foo bar"
205
206
206
207
207 def test_find_assign_magic():
208 def test_find_assign_magic():
208 check_find(ipt2.MagicAssign, MULTILINE_MAGIC_ASSIGN)
209 check_find(ipt2.MagicAssign, MULTILINE_MAGIC_ASSIGN)
209 check_find(ipt2.MagicAssign, MULTILINE_SYSTEM_ASSIGN, match=False)
210 check_find(ipt2.MagicAssign, MULTILINE_SYSTEM_ASSIGN, match=False)
210 check_find(ipt2.MagicAssign, MULTILINE_SYSTEM_ASSIGN_AFTER_DEDENT, match=False)
211 check_find(ipt2.MagicAssign, MULTILINE_SYSTEM_ASSIGN_AFTER_DEDENT, match=False)
211
212
212
213
213 def test_transform_assign_magic():
214 def test_transform_assign_magic():
214 check_transform(ipt2.MagicAssign, MULTILINE_MAGIC_ASSIGN)
215 check_transform(ipt2.MagicAssign, MULTILINE_MAGIC_ASSIGN)
215
216
216
217
217 def test_find_assign_system():
218 def test_find_assign_system():
218 check_find(ipt2.SystemAssign, MULTILINE_SYSTEM_ASSIGN)
219 check_find(ipt2.SystemAssign, MULTILINE_SYSTEM_ASSIGN)
219 check_find(ipt2.SystemAssign, MULTILINE_SYSTEM_ASSIGN_AFTER_DEDENT)
220 check_find(ipt2.SystemAssign, MULTILINE_SYSTEM_ASSIGN_AFTER_DEDENT)
220 check_find(ipt2.SystemAssign, (["a = !ls\n"], (1, 5), None))
221 check_find(ipt2.SystemAssign, (["a = !ls\n"], (1, 5), None))
221 check_find(ipt2.SystemAssign, (["a=!ls\n"], (1, 2), None))
222 check_find(ipt2.SystemAssign, (["a=!ls\n"], (1, 2), None))
222 check_find(ipt2.SystemAssign, MULTILINE_MAGIC_ASSIGN, match=False)
223 check_find(ipt2.SystemAssign, MULTILINE_MAGIC_ASSIGN, match=False)
223
224
224
225
225 def test_transform_assign_system():
226 def test_transform_assign_system():
226 check_transform(ipt2.SystemAssign, MULTILINE_SYSTEM_ASSIGN)
227 check_transform(ipt2.SystemAssign, MULTILINE_SYSTEM_ASSIGN)
227 check_transform(ipt2.SystemAssign, MULTILINE_SYSTEM_ASSIGN_AFTER_DEDENT)
228 check_transform(ipt2.SystemAssign, MULTILINE_SYSTEM_ASSIGN_AFTER_DEDENT)
228
229
229
230
230 def test_find_magic_escape():
231 def test_find_magic_escape():
231 check_find(ipt2.EscapedCommand, MULTILINE_MAGIC)
232 check_find(ipt2.EscapedCommand, MULTILINE_MAGIC)
232 check_find(ipt2.EscapedCommand, INDENTED_MAGIC)
233 check_find(ipt2.EscapedCommand, INDENTED_MAGIC)
233 check_find(ipt2.EscapedCommand, MULTILINE_MAGIC_ASSIGN, match=False)
234 check_find(ipt2.EscapedCommand, MULTILINE_MAGIC_ASSIGN, match=False)
234
235
235
236
236 def test_transform_magic_escape():
237 def test_transform_magic_escape():
237 check_transform(ipt2.EscapedCommand, MULTILINE_MAGIC)
238 check_transform(ipt2.EscapedCommand, MULTILINE_MAGIC)
238 check_transform(ipt2.EscapedCommand, INDENTED_MAGIC)
239 check_transform(ipt2.EscapedCommand, INDENTED_MAGIC)
239 check_transform(ipt2.EscapedCommand, CRLF_MAGIC)
240 check_transform(ipt2.EscapedCommand, CRLF_MAGIC)
240
241
241
242
242 def test_find_autocalls():
243 def test_find_autocalls():
243 for case in [AUTOCALL_QUOTE, AUTOCALL_QUOTE2, AUTOCALL_PAREN]:
244 for case in [AUTOCALL_QUOTE, AUTOCALL_QUOTE2, AUTOCALL_PAREN]:
244 print("Testing %r" % case[0])
245 print("Testing %r" % case[0])
245 check_find(ipt2.EscapedCommand, case)
246 check_find(ipt2.EscapedCommand, case)
246
247
247
248
248 def test_transform_autocall():
249 def test_transform_autocall():
249 for case in [AUTOCALL_QUOTE, AUTOCALL_QUOTE2, AUTOCALL_PAREN]:
250 for case in [AUTOCALL_QUOTE, AUTOCALL_QUOTE2, AUTOCALL_PAREN]:
250 print("Testing %r" % case[0])
251 print("Testing %r" % case[0])
251 check_transform(ipt2.EscapedCommand, case)
252 check_transform(ipt2.EscapedCommand, case)
252
253
253
254
254 def test_find_help():
255 def test_find_help():
255 for case in [SIMPLE_HELP, DETAILED_HELP, MAGIC_HELP, HELP_IN_EXPR]:
256 for case in [SIMPLE_HELP, DETAILED_HELP, MAGIC_HELP, HELP_IN_EXPR]:
256 check_find(ipt2.HelpEnd, case)
257 check_find(ipt2.HelpEnd, case)
257
258
258 tf = check_find(ipt2.HelpEnd, HELP_CONTINUED_LINE)
259 tf = check_find(ipt2.HelpEnd, HELP_CONTINUED_LINE)
259 assert tf.q_line == 1
260 assert tf.q_line == 1
260 assert tf.q_col == 3
261 assert tf.q_col == 3
261
262
262 tf = check_find(ipt2.HelpEnd, HELP_MULTILINE)
263 tf = check_find(ipt2.HelpEnd, HELP_MULTILINE)
263 assert tf.q_line == 1
264 assert tf.q_line == 1
264 assert tf.q_col == 8
265 assert tf.q_col == 8
265
266
266 # ? in a comment does not trigger help
267 # ? in a comment does not trigger help
267 check_find(ipt2.HelpEnd, (["foo # bar?\n"], None, None), match=False)
268 check_find(ipt2.HelpEnd, (["foo # bar?\n"], None, None), match=False)
268 # Nor in a string
269 # Nor in a string
269 check_find(ipt2.HelpEnd, (["foo = '''bar?\n"], None, None), match=False)
270 check_find(ipt2.HelpEnd, (["foo = '''bar?\n"], None, None), match=False)
270
271
271
272
272 def test_transform_help():
273 def test_transform_help():
273 tf = ipt2.HelpEnd((1, 0), (1, 9))
274 tf = ipt2.HelpEnd((1, 0), (1, 9))
274 assert tf.transform(HELP_IN_EXPR[0]) == HELP_IN_EXPR[2]
275 assert tf.transform(HELP_IN_EXPR[0]) == HELP_IN_EXPR[2]
275
276
276 tf = ipt2.HelpEnd((1, 0), (2, 3))
277 tf = ipt2.HelpEnd((1, 0), (2, 3))
277 assert tf.transform(HELP_CONTINUED_LINE[0]) == HELP_CONTINUED_LINE[2]
278 assert tf.transform(HELP_CONTINUED_LINE[0]) == HELP_CONTINUED_LINE[2]
278
279
279 tf = ipt2.HelpEnd((1, 0), (2, 8))
280 tf = ipt2.HelpEnd((1, 0), (2, 8))
280 assert tf.transform(HELP_MULTILINE[0]) == HELP_MULTILINE[2]
281 assert tf.transform(HELP_MULTILINE[0]) == HELP_MULTILINE[2]
281
282
282 tf = ipt2.HelpEnd((1, 0), (1, 0))
283 tf = ipt2.HelpEnd((1, 0), (1, 0))
283 assert tf.transform(HELP_UNICODE[0]) == HELP_UNICODE[2]
284 assert tf.transform(HELP_UNICODE[0]) == HELP_UNICODE[2]
284
285
285
286
286 def test_find_assign_op_dedent():
287 def test_find_assign_op_dedent():
287 """
288 """
288 be careful that empty tokens like DEDENT are not counted as parens
289 be careful that empty tokens like DEDENT are not counted as parens
289 """
290 """
290
291
291 class Tk:
292 class Tk:
292 def __init__(self, s):
293 def __init__(self, s):
293 self.string = s
294 self.string = s
294
295
295 assert _find_assign_op([Tk(s) for s in ("", "a", "=", "b")]) == 2
296 assert _find_assign_op([Tk(s) for s in ("", "a", "=", "b")]) == 2
296 assert (
297 assert (
297 _find_assign_op([Tk(s) for s in ("", "(", "a", "=", "b", ")", "=", "5")]) == 6
298 _find_assign_op([Tk(s) for s in ("", "(", "a", "=", "b", ")", "=", "5")]) == 6
298 )
299 )
299
300
300
301
301 extra_closing_paren_param = (
302 extra_closing_paren_param = (
302 pytest.param("(\n))", "invalid", None)
303 pytest.param("(\n))", "invalid", None)
303 if sys.version_info >= (3, 12)
304 if sys.version_info >= (3, 12)
304 else pytest.param("(\n))", "incomplete", 0)
305 else pytest.param("(\n))", "incomplete", 0)
305 )
306 )
306 examples = [
307 examples = [
307 pytest.param("a = 1", "complete", None),
308 pytest.param("a = 1", "complete", None),
308 pytest.param("for a in range(5):", "incomplete", 4),
309 pytest.param("for a in range(5):", "incomplete", 4),
309 pytest.param("for a in range(5):\n if a > 0:", "incomplete", 8),
310 pytest.param("for a in range(5):\n if a > 0:", "incomplete", 8),
310 pytest.param("raise = 2", "invalid", None),
311 pytest.param("raise = 2", "invalid", None),
311 pytest.param("a = [1,\n2,", "incomplete", 0),
312 pytest.param("a = [1,\n2,", "incomplete", 0),
312 extra_closing_paren_param,
313 extra_closing_paren_param,
313 pytest.param("\\\r\n", "incomplete", 0),
314 pytest.param("\\\r\n", "incomplete", 0),
314 pytest.param("a = '''\n hi", "incomplete", 3),
315 pytest.param("a = '''\n hi", "incomplete", 3),
315 pytest.param("def a():\n x=1\n global x", "invalid", None),
316 pytest.param("def a():\n x=1\n global x", "invalid", None),
316 pytest.param(
317 pytest.param(
317 "a \\ ",
318 "a \\ ",
318 "invalid",
319 "invalid",
319 None,
320 None,
320 marks=pytest.mark.xfail(
321 marks=pytest.mark.xfail(
321 reason="Bug in python 3.9.8 – bpo 45738",
322 reason="Bug in python 3.9.8 – bpo 45738",
322 condition=sys.version_info in [(3, 11, 0, "alpha", 2)],
323 condition=sys.version_info in [(3, 11, 0, "alpha", 2)],
323 raises=SystemError,
324 raises=SystemError,
324 strict=True,
325 strict=True,
325 ),
326 ),
326 ), # Nothing allowed after backslash,
327 ), # Nothing allowed after backslash,
327 pytest.param("1\\\n+2", "complete", None),
328 pytest.param("1\\\n+2", "complete", None),
328 ]
329 ]
329
330
330
331
331 @pytest.mark.parametrize("code, expected, number", examples)
332 @pytest.mark.parametrize("code, expected, number", examples)
332 def test_check_complete_param(code, expected, number):
333 def test_check_complete_param(code, expected, number):
333 cc = ipt2.TransformerManager().check_complete
334 cc = ipt2.TransformerManager().check_complete
334 assert cc(code) == (expected, number)
335 assert cc(code) == (expected, number)
335
336
336
337
337 @pytest.mark.xfail(platform.python_implementation() == "PyPy", reason="fail on pypy")
338 @pytest.mark.xfail(platform.python_implementation() == "PyPy", reason="fail on pypy")
338 @pytest.mark.xfail(
339 @pytest.mark.xfail(
339 reason="Bug in python 3.9.8 – bpo 45738",
340 reason="Bug in python 3.9.8 – bpo 45738",
340 condition=sys.version_info in [(3, 11, 0, "alpha", 2)],
341 condition=sys.version_info in [(3, 11, 0, "alpha", 2)],
341 raises=SystemError,
342 raises=SystemError,
342 strict=True,
343 strict=True,
343 )
344 )
344 def test_check_complete():
345 def test_check_complete():
345 cc = ipt2.TransformerManager().check_complete
346 cc = ipt2.TransformerManager().check_complete
346
347
347 example = dedent(
348 example = dedent(
348 """
349 """
349 if True:
350 if True:
350 a=1"""
351 a=1"""
351 )
352 )
352
353
353 assert cc(example) == ("incomplete", 4)
354 assert cc(example) == ("incomplete", 4)
354 assert cc(example + "\n") == ("complete", None)
355 assert cc(example + "\n") == ("complete", None)
355 assert cc(example + "\n ") == ("complete", None)
356 assert cc(example + "\n ") == ("complete", None)
356
357
357 # no need to loop on all the letters/numbers.
358 # no need to loop on all the letters/numbers.
358 short = "12abAB" + string.printable[62:]
359 short = "12abAB" + string.printable[62:]
359 for c in short:
360 for c in short:
360 # test does not raise:
361 # test does not raise:
361 cc(c)
362 cc(c)
362 for k in short:
363 for k in short:
363 cc(c + k)
364 cc(c + k)
364
365
365 assert cc("def f():\n x=0\n \\\n ") == ("incomplete", 2)
366 assert cc("def f():\n x=0\n \\\n ") == ("incomplete", 2)
366
367
367
368
368 @pytest.mark.xfail(platform.python_implementation() == "PyPy", reason="fail on pypy")
369 @pytest.mark.xfail(platform.python_implementation() == "PyPy", reason="fail on pypy")
369 @pytest.mark.parametrize(
370 @pytest.mark.parametrize(
370 "value, expected",
371 "value, expected",
371 [
372 [
372 ('''def foo():\n """''', ("incomplete", 4)),
373 ('''def foo():\n """''', ("incomplete", 4)),
373 ("""async with example:\n pass""", ("incomplete", 4)),
374 ("""async with example:\n pass""", ("incomplete", 4)),
374 ("""async with example:\n pass\n """, ("complete", None)),
375 ("""async with example:\n pass\n """, ("complete", None)),
375 ],
376 ],
376 )
377 )
377 def test_check_complete_II(value, expected):
378 def test_check_complete_II(value, expected):
378 """
379 """
379 Test that multi-line strings are properly handled.
380 Test that multi-line strings are properly handled.
380
381
381 Separate test function for convenience
382 Separate test function for convenience
382
383
383 """
384 """
384 cc = ipt2.TransformerManager().check_complete
385 cc = ipt2.TransformerManager().check_complete
385 assert cc(value) == expected
386 assert cc(value) == expected
386
387
387
388
388 @pytest.mark.parametrize(
389 @pytest.mark.parametrize(
389 "value, expected",
390 "value, expected",
390 [
391 [
391 (")", ("invalid", None)),
392 (")", ("invalid", None)),
392 ("]", ("invalid", None)),
393 ("]", ("invalid", None)),
393 ("}", ("invalid", None)),
394 ("}", ("invalid", None)),
394 (")(", ("invalid", None)),
395 (")(", ("invalid", None)),
395 ("][", ("invalid", None)),
396 ("][", ("invalid", None)),
396 ("}{", ("invalid", None)),
397 ("}{", ("invalid", None)),
397 ("]()(", ("invalid", None)),
398 ("]()(", ("invalid", None)),
398 ("())(", ("invalid", None)),
399 ("())(", ("invalid", None)),
399 (")[](", ("invalid", None)),
400 (")[](", ("invalid", None)),
400 ("()](", ("invalid", None)),
401 ("()](", ("invalid", None)),
401 ],
402 ],
402 )
403 )
403 def test_check_complete_invalidates_sunken_brackets(value, expected):
404 def test_check_complete_invalidates_sunken_brackets(value, expected):
404 """
405 """
405 Test that a single line with more closing brackets than opening ones is
406 Test that a single line with more closing brackets than opening ones is
406 interpreted as invalid
407 interpreted as invalid
407 """
408 """
408 cc = ipt2.TransformerManager().check_complete
409 cc = ipt2.TransformerManager().check_complete
409 assert cc(value) == expected
410 assert cc(value) == expected
410
411
411
412
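# Aside: check_complete is what decides whether pressing Enter executes the
# buffer or keeps editing it; a small sketch mirroring the cases above.
from IPython.core.inputtransformer2 import TransformerManager

cc = TransformerManager().check_complete
print(cc("a = 1"))                # ('complete', None)
print(cc("for a in range(5):"))   # ('incomplete', 4) -> indent next line by 4
print(cc("raise = 2"))            # ('invalid', None)
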
412 def test_null_cleanup_transformer():
413 def test_null_cleanup_transformer():
413 manager = ipt2.TransformerManager()
414 manager = ipt2.TransformerManager()
414 manager.cleanup_transforms.insert(0, null_cleanup_transformer)
415 manager.cleanup_transforms.insert(0, null_cleanup_transformer)
415 assert manager.transform_cell("") == ""
416 assert manager.transform_cell("") == ""
416
417
417
418
418 def test_side_effects_I():
419 def test_side_effects_I():
419 count = 0
420 count = 0
420
421
421 def counter(lines):
422 def counter(lines):
422 nonlocal count
423 nonlocal count
423 count += 1
424 count += 1
424 return lines
425 return lines
425
426
426 counter.has_side_effects = True
427 counter.has_side_effects = True
427
428
428 manager = ipt2.TransformerManager()
429 manager = ipt2.TransformerManager()
429 manager.cleanup_transforms.insert(0, counter)
430 manager.cleanup_transforms.insert(0, counter)
430 assert manager.check_complete("a=1\n") == ("complete", None)
431 assert manager.check_complete("a=1\n") == ("complete", None)
431 assert count == 0
432 assert count == 0
432
433
433
434
434 def test_side_effects_II():
435 def test_side_effects_II():
435 count = 0
436 count = 0
436
437
437 def counter(lines):
438 def counter(lines):
438 nonlocal count
439 nonlocal count
439 count += 1
440 count += 1
440 return lines
441 return lines
441
442
442 counter.has_side_effects = True
443 counter.has_side_effects = True
443
444
444 manager = ipt2.TransformerManager()
445 manager = ipt2.TransformerManager()
445 manager.line_transforms.insert(0, counter)
446 manager.line_transforms.insert(0, counter)
446 assert manager.check_complete("b=1\n") == ("complete", None)
447 assert manager.check_complete("b=1\n") == ("complete", None)
447 assert count == 0
448 assert count == 0
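# Aside: a hedged sketch of a user-supplied transform that opts out of the
# speculative runs done by check_complete, which is what the two tests above verify.
from IPython.core.inputtransformer2 import TransformerManager

def log_lines(lines):
    # Illustrative transform: record every cell before it is executed.
    with open("cell_log.txt", "a") as fh:
        fh.writelines(lines)
    return lines

log_lines.has_side_effects = True   # tell check_complete() to skip this transform

manager = TransformerManager()
manager.line_transforms.append(log_lines)
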
@@ -1,200 +1,202
1 import errno
1 import errno
2 import os
2 import os
3 import shutil
3 import shutil
4 import tempfile
4 import tempfile
5 import warnings
5 import warnings
6 from unittest.mock import patch
6 from unittest.mock import patch
7
7
8 from tempfile import TemporaryDirectory
8 from tempfile import TemporaryDirectory
9 from testpath import assert_isdir, assert_isfile, modified_env
9 from testpath import assert_isdir, assert_isfile, modified_env
10
10
11 from IPython import paths
11 from IPython import paths
12 from IPython.testing.decorators import skip_win32
12 from IPython.testing.decorators import skip_win32
13
13
14 TMP_TEST_DIR = os.path.realpath(tempfile.mkdtemp())
14 TMP_TEST_DIR = os.path.realpath(tempfile.mkdtemp())
15 HOME_TEST_DIR = os.path.join(TMP_TEST_DIR, "home_test_dir")
15 HOME_TEST_DIR = os.path.join(TMP_TEST_DIR, "home_test_dir")
16 XDG_TEST_DIR = os.path.join(HOME_TEST_DIR, "xdg_test_dir")
16 XDG_TEST_DIR = os.path.join(HOME_TEST_DIR, "xdg_test_dir")
17 XDG_CACHE_DIR = os.path.join(HOME_TEST_DIR, "xdg_cache_dir")
17 XDG_CACHE_DIR = os.path.join(HOME_TEST_DIR, "xdg_cache_dir")
18 IP_TEST_DIR = os.path.join(HOME_TEST_DIR,'.ipython')
18 IP_TEST_DIR = os.path.join(HOME_TEST_DIR,'.ipython')
19
19
20 def setup_module():
20 def setup_module():
21 """Set up the test environment for the module:
21 """Set up the test environment for the module:
22
22
23 - Adds dummy home dir tree
23 - Adds dummy home dir tree
24 """
24 """
25 # Do not mask exceptions here. In particular, catching WindowsError is a
25 # Do not mask exceptions here. In particular, catching WindowsError is a
26 # problem because that exception is only defined on Windows...
26 # problem because that exception is only defined on Windows...
27 os.makedirs(IP_TEST_DIR)
27 os.makedirs(IP_TEST_DIR)
28 os.makedirs(os.path.join(XDG_TEST_DIR, 'ipython'))
28 os.makedirs(os.path.join(XDG_TEST_DIR, 'ipython'))
29 os.makedirs(os.path.join(XDG_CACHE_DIR, 'ipython'))
29 os.makedirs(os.path.join(XDG_CACHE_DIR, 'ipython'))
30
30
31
31
32 def teardown_module():
32 def teardown_module():
33 """Tear down the test environment for the module:
33 """Tear down the test environment for the module:
34
34
35 - Remove dummy home dir tree
35 - Remove dummy home dir tree
36 """
36 """
37 # Note: we remove the parent test dir, which is the root of all test
37 # Note: we remove the parent test dir, which is the root of all test
38 # subdirs we may have created. Use shutil instead of os.removedirs, so
38 # subdirs we may have created. Use shutil instead of os.removedirs, so
39 # that non-empty directories are all recursively removed.
39 # that non-empty directories are all recursively removed.
40 shutil.rmtree(TMP_TEST_DIR)
40 shutil.rmtree(TMP_TEST_DIR)
41
41
42 def patch_get_home_dir(dirpath):
42 def patch_get_home_dir(dirpath):
43 return patch.object(paths, 'get_home_dir', return_value=dirpath)
43 return patch.object(paths, 'get_home_dir', return_value=dirpath)
44
44
45
45
46 def test_get_ipython_dir_1():
46 def test_get_ipython_dir_1():
47 """test_get_ipython_dir_1, Testcase to see if we can call get_ipython_dir without Exceptions."""
47 """test_get_ipython_dir_1, Testcase to see if we can call get_ipython_dir without Exceptions."""
48 env_ipdir = os.path.join("someplace", ".ipython")
48 env_ipdir = os.path.join("someplace", ".ipython")
49 with patch.object(paths, '_writable_dir', return_value=True), \
49 with patch.object(paths, '_writable_dir', return_value=True), \
50 modified_env({'IPYTHONDIR': env_ipdir}):
50 modified_env({'IPYTHONDIR': env_ipdir}):
51 ipdir = paths.get_ipython_dir()
51 ipdir = paths.get_ipython_dir()
52
52
53 assert ipdir == env_ipdir
53 assert ipdir == env_ipdir
54
54
55 def test_get_ipython_dir_2():
55 def test_get_ipython_dir_2():
56 """test_get_ipython_dir_2, Testcase to see if we can call get_ipython_dir without Exceptions."""
56 """test_get_ipython_dir_2, Testcase to see if we can call get_ipython_dir without Exceptions."""
57 with patch_get_home_dir('someplace'), \
57 with patch_get_home_dir('someplace'), \
58 patch.object(paths, 'get_xdg_dir', return_value=None), \
58 patch.object(paths, 'get_xdg_dir', return_value=None), \
59 patch.object(paths, '_writable_dir', return_value=True), \
59 patch.object(paths, '_writable_dir', return_value=True), \
60 patch('os.name', "posix"), \
60 patch('os.name', "posix"), \
61 modified_env({'IPYTHON_DIR': None,
61 modified_env({'IPYTHON_DIR': None,
62 'IPYTHONDIR': None,
62 'IPYTHONDIR': None,
63 'XDG_CONFIG_HOME': None
63 'XDG_CONFIG_HOME': None
64 }):
64 }):
65 ipdir = paths.get_ipython_dir()
65 ipdir = paths.get_ipython_dir()
66
66
67 assert ipdir == os.path.join("someplace", ".ipython")
67 assert ipdir == os.path.join("someplace", ".ipython")
68
68
69 def test_get_ipython_dir_3():
69 def test_get_ipython_dir_3():
70 """test_get_ipython_dir_3, use XDG if it is defined and exists, and .ipython doesn't exist."""
70 """test_get_ipython_dir_3, use XDG if it is defined and exists, and .ipython doesn't exist."""
71 tmphome = TemporaryDirectory()
71 tmphome = TemporaryDirectory()
72 try:
72 try:
73 with patch_get_home_dir(tmphome.name), \
73 with patch_get_home_dir(tmphome.name), \
74 patch('os.name', 'posix'), \
74 patch('os.name', 'posix'), \
75 modified_env({
75 modified_env({
76 'IPYTHON_DIR': None,
76 'IPYTHON_DIR': None,
77 'IPYTHONDIR': None,
77 'IPYTHONDIR': None,
78 'XDG_CONFIG_HOME': XDG_TEST_DIR,
78 'XDG_CONFIG_HOME': XDG_TEST_DIR,
79 }), warnings.catch_warnings(record=True) as w:
79 }), warnings.catch_warnings(record=True) as w:
80 ipdir = paths.get_ipython_dir()
80 ipdir = paths.get_ipython_dir()
81
81
82 assert ipdir == os.path.join(tmphome.name, XDG_TEST_DIR, "ipython")
82 assert ipdir == os.path.join(tmphome.name, XDG_TEST_DIR, "ipython")
83 assert len(w) == 0
83 assert len(w) == 0
84 finally:
84 finally:
85 tmphome.cleanup()
85 tmphome.cleanup()
86
86
87 def test_get_ipython_dir_4():
87 def test_get_ipython_dir_4():
88 """test_get_ipython_dir_4, warn if XDG and home both exist."""
88 """test_get_ipython_dir_4, warn if XDG and home both exist."""
89 with patch_get_home_dir(HOME_TEST_DIR), \
89 with patch_get_home_dir(HOME_TEST_DIR), \
90 patch('os.name', 'posix'):
90 patch('os.name', 'posix'):
91 try:
91 try:
92 os.mkdir(os.path.join(XDG_TEST_DIR, 'ipython'))
92 os.mkdir(os.path.join(XDG_TEST_DIR, 'ipython'))
93 except OSError as e:
93 except OSError as e:
94 if e.errno != errno.EEXIST:
94 if e.errno != errno.EEXIST:
95 raise
95 raise
96
96
97
97
98 with modified_env({
98 with modified_env({
99 'IPYTHON_DIR': None,
99 'IPYTHON_DIR': None,
100 'IPYTHONDIR': None,
100 'IPYTHONDIR': None,
101 'XDG_CONFIG_HOME': XDG_TEST_DIR,
101 'XDG_CONFIG_HOME': XDG_TEST_DIR,
102 }), warnings.catch_warnings(record=True) as w:
102 }), warnings.catch_warnings(record=True) as w:
103 ipdir = paths.get_ipython_dir()
103 ipdir = paths.get_ipython_dir()
104
104
105 assert len(w) == 1
105 assert len(w) == 1
106 assert "Ignoring" in str(w[0])
106 assert "Ignoring" in str(w[0])
107
107
108
108
109 def test_get_ipython_dir_5():
109 def test_get_ipython_dir_5():
110 """test_get_ipython_dir_5, use .ipython if it exists while XDG is defined but its directory doesn't exist."""
110 """test_get_ipython_dir_5, use .ipython if it exists while XDG is defined but its directory doesn't exist."""
111 with patch_get_home_dir(HOME_TEST_DIR), \
111 with patch_get_home_dir(HOME_TEST_DIR), \
112 patch('os.name', 'posix'):
112 patch('os.name', 'posix'):
113 try:
113 try:
114 os.rmdir(os.path.join(XDG_TEST_DIR, 'ipython'))
114 os.rmdir(os.path.join(XDG_TEST_DIR, 'ipython'))
115 except OSError as e:
115 except OSError as e:
116 if e.errno != errno.ENOENT:
116 if e.errno != errno.ENOENT:
117 raise
117 raise
118
118
119 with modified_env({
119 with modified_env({
120 'IPYTHON_DIR': None,
120 'IPYTHON_DIR': None,
121 'IPYTHONDIR': None,
121 'IPYTHONDIR': None,
122 'XDG_CONFIG_HOME': XDG_TEST_DIR,
122 'XDG_CONFIG_HOME': XDG_TEST_DIR,
123 }):
123 }):
124 ipdir = paths.get_ipython_dir()
124 ipdir = paths.get_ipython_dir()
125
125
126 assert ipdir == IP_TEST_DIR
126 assert ipdir == IP_TEST_DIR
127
127
128 def test_get_ipython_dir_6():
128 def test_get_ipython_dir_6():
129 """test_get_ipython_dir_6, use home over XDG if defined and neither exist."""
129 """test_get_ipython_dir_6, use home over XDG if defined and neither exist."""
130 xdg = os.path.join(HOME_TEST_DIR, 'somexdg')
130 xdg = os.path.join(HOME_TEST_DIR, 'somexdg')
131 os.mkdir(xdg)
131 os.mkdir(xdg)
132 shutil.rmtree(os.path.join(HOME_TEST_DIR, '.ipython'))
132 shutil.rmtree(os.path.join(HOME_TEST_DIR, '.ipython'))
133 print(paths._writable_dir)
133 print(paths._writable_dir)
134 with patch_get_home_dir(HOME_TEST_DIR), \
134 with patch_get_home_dir(HOME_TEST_DIR), \
135 patch.object(paths, 'get_xdg_dir', return_value=xdg), \
135 patch.object(paths, 'get_xdg_dir', return_value=xdg), \
136 patch('os.name', 'posix'), \
136 patch('os.name', 'posix'), \
137 modified_env({
137 modified_env({
138 'IPYTHON_DIR': None,
138 'IPYTHON_DIR': None,
139 'IPYTHONDIR': None,
139 'IPYTHONDIR': None,
140 'XDG_CONFIG_HOME': None,
140 'XDG_CONFIG_HOME': None,
141 }), warnings.catch_warnings(record=True) as w:
141 }), warnings.catch_warnings(record=True) as w:
142 ipdir = paths.get_ipython_dir()
142 ipdir = paths.get_ipython_dir()
143
143
144 assert ipdir == os.path.join(HOME_TEST_DIR, ".ipython")
144 assert ipdir == os.path.join(HOME_TEST_DIR, ".ipython")
145 assert len(w) == 0
145 assert len(w) == 0
146
146
147 def test_get_ipython_dir_7():
147 def test_get_ipython_dir_7():
148 """test_get_ipython_dir_7, test home directory expansion on IPYTHONDIR"""
148 """test_get_ipython_dir_7, test home directory expansion on IPYTHONDIR"""
149 home_dir = os.path.normpath(os.path.expanduser('~'))
149 home_dir = os.path.normpath(os.path.expanduser('~'))
150 with modified_env({'IPYTHONDIR': os.path.join('~', 'somewhere')}), \
150 with modified_env({'IPYTHONDIR': os.path.join('~', 'somewhere')}), \
151 patch.object(paths, '_writable_dir', return_value=True):
151 patch.object(paths, '_writable_dir', return_value=True):
152 ipdir = paths.get_ipython_dir()
152 ipdir = paths.get_ipython_dir()
153 assert ipdir == os.path.join(home_dir, "somewhere")
153 assert ipdir == os.path.join(home_dir, "somewhere")
154
154
155
155
156 @skip_win32
156 @skip_win32
157 def test_get_ipython_dir_8():
157 def test_get_ipython_dir_8():
158 """test_get_ipython_dir_8, test / home directory"""
158 """test_get_ipython_dir_8, test / home directory"""
159 if not os.access("/", os.W_OK):
159 if not os.access("/", os.W_OK):
160 # test only when HOME directory actually writable
160 # test only when HOME directory actually writable
161 return
161 return
162
162
163 with patch.object(paths, "_writable_dir", lambda path: bool(path)), patch.object(
163 with (
164 paths, "get_xdg_dir", return_value=None
164 patch.object(paths, "_writable_dir", lambda path: bool(path)),
165 ), modified_env(
165 patch.object(paths, "get_xdg_dir", return_value=None),
166 {
166 modified_env(
167 "IPYTHON_DIR": None,
167 {
168 "IPYTHONDIR": None,
168 "IPYTHON_DIR": None,
169 "HOME": "/",
169 "IPYTHONDIR": None,
170 }
170 "HOME": "/",
171 }
172 ),
171 ):
173 ):
172 assert paths.get_ipython_dir() == "/.ipython"
174 assert paths.get_ipython_dir() == "/.ipython"
173
175
174
176
175 def test_get_ipython_cache_dir():
177 def test_get_ipython_cache_dir():
176 with modified_env({'HOME': HOME_TEST_DIR}):
178 with modified_env({'HOME': HOME_TEST_DIR}):
177 if os.name == "posix":
179 if os.name == "posix":
178 # test default
180 # test default
179 os.makedirs(os.path.join(HOME_TEST_DIR, ".cache"))
181 os.makedirs(os.path.join(HOME_TEST_DIR, ".cache"))
180 with modified_env({'XDG_CACHE_HOME': None}):
182 with modified_env({'XDG_CACHE_HOME': None}):
181 ipdir = paths.get_ipython_cache_dir()
183 ipdir = paths.get_ipython_cache_dir()
182 assert os.path.join(HOME_TEST_DIR, ".cache", "ipython") == ipdir
184 assert os.path.join(HOME_TEST_DIR, ".cache", "ipython") == ipdir
183 assert_isdir(ipdir)
185 assert_isdir(ipdir)
184
186
185 # test env override
187 # test env override
186 with modified_env({"XDG_CACHE_HOME": XDG_CACHE_DIR}):
188 with modified_env({"XDG_CACHE_HOME": XDG_CACHE_DIR}):
187 ipdir = paths.get_ipython_cache_dir()
189 ipdir = paths.get_ipython_cache_dir()
188 assert_isdir(ipdir)
190 assert_isdir(ipdir)
189 assert ipdir == os.path.join(XDG_CACHE_DIR, "ipython")
191 assert ipdir == os.path.join(XDG_CACHE_DIR, "ipython")
190 else:
192 else:
191 assert paths.get_ipython_cache_dir() == paths.get_ipython_dir()
193 assert paths.get_ipython_cache_dir() == paths.get_ipython_dir()
192
194
193 def test_get_ipython_package_dir():
195 def test_get_ipython_package_dir():
194 ipdir = paths.get_ipython_package_dir()
196 ipdir = paths.get_ipython_package_dir()
195 assert_isdir(ipdir)
197 assert_isdir(ipdir)
196
198
197
199
198 def test_get_ipython_module_path():
200 def test_get_ipython_module_path():
199 ipapp_path = paths.get_ipython_module_path('IPython.terminal.ipapp')
201 ipapp_path = paths.get_ipython_module_path('IPython.terminal.ipapp')
200 assert_isfile(ipapp_path)
202 assert_isfile(ipapp_path)
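# Aside: a minimal sketch of the precedence the tests above pin down: IPYTHONDIR
# (with ~ expanded) wins over both the XDG location and ~/.ipython. The path is
# illustrative and assumes the location is writable.
import os
from testpath import modified_env
from IPython import paths

with modified_env({"IPYTHONDIR": os.path.join("~", "my_ipython")}):
    print(paths.get_ipython_dir())   # e.g. /home/user/my_ipython
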
@@ -1,422 +1,423
1 """
1 """
2 This module contains factory functions that attempt
2 This module contains factory functions that attempt
3 to return Qt submodules from the various python Qt bindings.
3 to return Qt submodules from the various python Qt bindings.
4
4
5 It also protects against double-importing Qt with different
5 It also protects against double-importing Qt with different
6 bindings, which is unstable and likely to crash
6 bindings, which is unstable and likely to crash
7
7
8 This is used primarily by qt and qt_for_kernel, and shouldn't
8 This is used primarily by qt and qt_for_kernel, and shouldn't
9 be accessed directly from the outside
9 be accessed directly from the outside
10 """
10 """
11
11 import importlib.abc
12 import importlib.abc
12 import sys
13 import sys
13 import os
14 import os
14 import types
15 import types
15 from functools import partial, lru_cache
16 from functools import partial, lru_cache
16 import operator
17 import operator
17
18
18 # ### Available APIs.
19 # ### Available APIs.
19 # Qt6
20 # Qt6
20 QT_API_PYQT6 = "pyqt6"
21 QT_API_PYQT6 = "pyqt6"
21 QT_API_PYSIDE6 = "pyside6"
22 QT_API_PYSIDE6 = "pyside6"
22
23
23 # Qt5
24 # Qt5
24 QT_API_PYQT5 = 'pyqt5'
25 QT_API_PYQT5 = 'pyqt5'
25 QT_API_PYSIDE2 = 'pyside2'
26 QT_API_PYSIDE2 = 'pyside2'
26
27
27 # Qt4
28 # Qt4
28 # NOTE: Here for legacy matplotlib compatibility, but not really supported on the IPython side.
29 # NOTE: Here for legacy matplotlib compatibility, but not really supported on the IPython side.
29 QT_API_PYQT = "pyqt" # Force version 2
30 QT_API_PYQT = "pyqt" # Force version 2
30 QT_API_PYQTv1 = "pyqtv1" # Force version 2
31 QT_API_PYQTv1 = "pyqtv1" # Force version 2
31 QT_API_PYSIDE = "pyside"
32 QT_API_PYSIDE = "pyside"
32
33
33 QT_API_PYQT_DEFAULT = "pyqtdefault" # use system default for version 1 vs. 2
34 QT_API_PYQT_DEFAULT = "pyqtdefault" # use system default for version 1 vs. 2
34
35
35 api_to_module = {
36 api_to_module = {
36 # Qt6
37 # Qt6
37 QT_API_PYQT6: "PyQt6",
38 QT_API_PYQT6: "PyQt6",
38 QT_API_PYSIDE6: "PySide6",
39 QT_API_PYSIDE6: "PySide6",
39 # Qt5
40 # Qt5
40 QT_API_PYQT5: "PyQt5",
41 QT_API_PYQT5: "PyQt5",
41 QT_API_PYSIDE2: "PySide2",
42 QT_API_PYSIDE2: "PySide2",
42 # Qt4
43 # Qt4
43 QT_API_PYSIDE: "PySide",
44 QT_API_PYSIDE: "PySide",
44 QT_API_PYQT: "PyQt4",
45 QT_API_PYQT: "PyQt4",
45 QT_API_PYQTv1: "PyQt4",
46 QT_API_PYQTv1: "PyQt4",
46 # default
47 # default
47 QT_API_PYQT_DEFAULT: "PyQt6",
48 QT_API_PYQT_DEFAULT: "PyQt6",
48 }
49 }
49
50
50
51
51 class ImportDenier(importlib.abc.MetaPathFinder):
52 class ImportDenier(importlib.abc.MetaPathFinder):
52 """Import Hook that will guard against bad Qt imports
53 """Import Hook that will guard against bad Qt imports
53 once IPython commits to a specific binding
54 once IPython commits to a specific binding
54 """
55 """
55
56
56 def __init__(self):
57 def __init__(self):
57 self.__forbidden = set()
58 self.__forbidden = set()
58
59
59 def forbid(self, module_name):
60 def forbid(self, module_name):
60 sys.modules.pop(module_name, None)
61 sys.modules.pop(module_name, None)
61 self.__forbidden.add(module_name)
62 self.__forbidden.add(module_name)
62
63
63 def find_spec(self, fullname, path, target=None):
64 def find_spec(self, fullname, path, target=None):
64 if path:
65 if path:
65 return
66 return
66 if fullname in self.__forbidden:
67 if fullname in self.__forbidden:
67 raise ImportError(
68 raise ImportError(
68 """
69 """
69 Importing %s disabled by IPython, which has
70 Importing %s disabled by IPython, which has
70 already imported an incompatible Qt binding: %s
71 already imported an incompatible Qt binding: %s
71 """
72 """
72 % (fullname, loaded_api())
73 % (fullname, loaded_api())
73 )
74 )
74
75
75
76
76 ID = ImportDenier()
77 ID = ImportDenier()
77 sys.meta_path.insert(0, ID)
78 sys.meta_path.insert(0, ID)
78
79
79
80
80 def commit_api(api):
81 def commit_api(api):
81 """Commit to a particular API, and trigger ImportErrors on subsequent
82 """Commit to a particular API, and trigger ImportErrors on subsequent
82 dangerous imports"""
83 dangerous imports"""
83 modules = set(api_to_module.values())
84 modules = set(api_to_module.values())
84
85
85 modules.remove(api_to_module[api])
86 modules.remove(api_to_module[api])
86 for mod in modules:
87 for mod in modules:
87 ID.forbid(mod)
88 ID.forbid(mod)
88
89
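# Illustrative sketch (added for exposition, not part of this module): once a
# binding has been committed, the ImportDenier on sys.meta_path blocks the
# competing packages. Assumes PyQt6 ends up being the chosen binding.
def _demo_commit_api():  # hypothetical helper, for illustration only
    commit_api(QT_API_PYQT6)
    try:
        import PySide2  # noqa: F401 -- forbidden once PyQt6 is committed
    except ImportError as exc:
        print(exc)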
89
90
90 def loaded_api():
91 def loaded_api():
91 """Return which API is loaded, if any
92 """Return which API is loaded, if any
92
93
93 If this returns anything besides None,
94 If this returns anything besides None,
94 importing any other Qt binding is unsafe.
95 importing any other Qt binding is unsafe.
95
96
96 Returns
97 Returns
97 -------
98 -------
98 None, 'pyside6', 'pyqt6', 'pyside2', 'pyside', 'pyqt', 'pyqt5', 'pyqtv1'
99 None, 'pyside6', 'pyqt6', 'pyside2', 'pyside', 'pyqt', 'pyqt5', 'pyqtv1'
99 """
100 """
100 if sys.modules.get("PyQt6.QtCore"):
101 if sys.modules.get("PyQt6.QtCore"):
101 return QT_API_PYQT6
102 return QT_API_PYQT6
102 elif sys.modules.get("PySide6.QtCore"):
103 elif sys.modules.get("PySide6.QtCore"):
103 return QT_API_PYSIDE6
104 return QT_API_PYSIDE6
104 elif sys.modules.get("PyQt5.QtCore"):
105 elif sys.modules.get("PyQt5.QtCore"):
105 return QT_API_PYQT5
106 return QT_API_PYQT5
106 elif sys.modules.get("PySide2.QtCore"):
107 elif sys.modules.get("PySide2.QtCore"):
107 return QT_API_PYSIDE2
108 return QT_API_PYSIDE2
108 elif sys.modules.get("PyQt4.QtCore"):
109 elif sys.modules.get("PyQt4.QtCore"):
109 if qtapi_version() == 2:
110 if qtapi_version() == 2:
110 return QT_API_PYQT
111 return QT_API_PYQT
111 else:
112 else:
112 return QT_API_PYQTv1
113 return QT_API_PYQTv1
113 elif sys.modules.get("PySide.QtCore"):
114 elif sys.modules.get("PySide.QtCore"):
114 return QT_API_PYSIDE
115 return QT_API_PYSIDE
115
116
116 return None
117 return None
117
118
118
119
119 def has_binding(api):
120 def has_binding(api):
120 """Safely check for PyQt4/5, PySide or PySide2, without importing submodules
121 """Safely check for PyQt4/5, PySide or PySide2, without importing submodules
121
122
122 Parameters
123 Parameters
123 ----------
124 ----------
124 api : str [ 'pyqtv1' | 'pyqt' | 'pyqt5' | 'pyqt6' | 'pyside' | 'pyside2' | 'pyside6' | 'pyqtdefault']
125 api : str [ 'pyqtv1' | 'pyqt' | 'pyqt5' | 'pyqt6' | 'pyside' | 'pyside2' | 'pyside6' | 'pyqtdefault']
125 Which module to check for
126 Which module to check for
126
127
127 Returns
128 Returns
128 -------
129 -------
129 True if the relevant module appears to be importable
130 True if the relevant module appears to be importable
130 """
131 """
131 module_name = api_to_module[api]
132 module_name = api_to_module[api]
132 from importlib.util import find_spec
133 from importlib.util import find_spec
133
134
134 required = ['QtCore', 'QtGui', 'QtSvg']
135 required = ['QtCore', 'QtGui', 'QtSvg']
135 if api in (QT_API_PYQT5, QT_API_PYSIDE2, QT_API_PYQT6, QT_API_PYSIDE6):
136 if api in (QT_API_PYQT5, QT_API_PYSIDE2, QT_API_PYQT6, QT_API_PYSIDE6):
136 # Qt5 and Qt6 require QtWidgets too
137 # Qt5 and Qt6 require QtWidgets too
137 required.append('QtWidgets')
138 required.append('QtWidgets')
138
139
139 for submod in required:
140 for submod in required:
140 try:
141 try:
141 spec = find_spec('%s.%s' % (module_name, submod))
142 spec = find_spec('%s.%s' % (module_name, submod))
142 except ImportError:
143 except ImportError:
143 # Package (e.g. PyQt5) not found
144 # Package (e.g. PyQt5) not found
144 return False
145 return False
145 else:
146 else:
146 if spec is None:
147 if spec is None:
147 # Submodule (e.g. PyQt5.QtCore) not found
148 # Submodule (e.g. PyQt5.QtCore) not found
148 return False
149 return False
149
150
150 if api == QT_API_PYSIDE:
151 if api == QT_API_PYSIDE:
151 # We can also safely check PySide version
152 # We can also safely check PySide version
152 import PySide
153 import PySide
153
154
154 return PySide.__version_info__ >= (1, 0, 3)
155 return PySide.__version_info__ >= (1, 0, 3)
155
156
156 return True
157 return True
157
158
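# Illustrative sketch (not part of this module): the same find_spec() probe
# used above, applied to a single submodule without importing the package.
def _pyqt6_core_available():  # hypothetical helper, mirrors has_binding()
    from importlib.util import find_spec
    try:
        return find_spec("PyQt6.QtCore") is not None
    except ImportError:
        # the parent package (PyQt6) is not installed at all
        return False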
158
159
159 def qtapi_version():
160 def qtapi_version():
160 """Return which QString API has been set, if any
161 """Return which QString API has been set, if any
161
162
162 Returns
163 Returns
163 -------
164 -------
164 The QString API version (1 or 2), or None if not set
165 The QString API version (1 or 2), or None if not set
165 """
166 """
166 try:
167 try:
167 import sip
168 import sip
168 except ImportError:
169 except ImportError:
169 # as of PyQt5 5.11, sip is no longer available as a top-level
170 # as of PyQt5 5.11, sip is no longer available as a top-level
170 # module and needs to be imported from the PyQt5 namespace
171 # module and needs to be imported from the PyQt5 namespace
171 try:
172 try:
172 from PyQt5 import sip
173 from PyQt5 import sip
173 except ImportError:
174 except ImportError:
174 return
175 return
175 try:
176 try:
176 return sip.getapi('QString')
177 return sip.getapi('QString')
177 except ValueError:
178 except ValueError:
178 return
179 return
179
180
180
181
181 def can_import(api):
182 def can_import(api):
182 """Safely query whether an API is importable, without importing it"""
183 """Safely query whether an API is importable, without importing it"""
183 if not has_binding(api):
184 if not has_binding(api):
184 return False
185 return False
185
186
186 current = loaded_api()
187 current = loaded_api()
187 if api == QT_API_PYQT_DEFAULT:
188 if api == QT_API_PYQT_DEFAULT:
188 return current in [QT_API_PYQT6, None]
189 return current in [QT_API_PYQT6, None]
189 else:
190 else:
190 return current in [api, None]
191 return current in [api, None]
191
192
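# Illustrative usage (not part of this module): pick the first binding that is
# both installed and compatible with whatever Qt is already loaded, if any.
def _first_usable_api():  # hypothetical helper, for illustration only
    candidates = [QT_API_PYQT6, QT_API_PYSIDE6, QT_API_PYQT5, QT_API_PYSIDE2]
    return next((api for api in candidates if can_import(api)), None)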
192
193
193 def import_pyqt4(version=2):
194 def import_pyqt4(version=2):
194 """
195 """
195 Import PyQt4
196 Import PyQt4
196
197
197 Parameters
198 Parameters
198 ----------
199 ----------
199 version : 1, 2, or None
200 version : 1, 2, or None
200 Which QString/QVariant API to use. Set to None to use the system
201 Which QString/QVariant API to use. Set to None to use the system
201 default
202 default
202 ImportErrors raised within this function are non-recoverable
203 ImportErrors raised within this function are non-recoverable
203 """
204 """
204 # The new-style string API (version=2) automatically
205 # The new-style string API (version=2) automatically
205 # converts QStrings to Unicode Python strings. Also, automatically unpacks
206 # converts QStrings to Unicode Python strings. Also, automatically unpacks
206 # QVariants to their underlying objects.
207 # QVariants to their underlying objects.
207 import sip
208 import sip
208
209
209 if version is not None:
210 if version is not None:
210 sip.setapi('QString', version)
211 sip.setapi('QString', version)
211 sip.setapi('QVariant', version)
212 sip.setapi('QVariant', version)
212
213
213 from PyQt4 import QtGui, QtCore, QtSvg
214 from PyQt4 import QtGui, QtCore, QtSvg
214
215
215 if QtCore.PYQT_VERSION < 0x040700:
216 if QtCore.PYQT_VERSION < 0x040700:
216 raise ImportError("IPython requires PyQt4 >= 4.7, found %s" %
217 raise ImportError("IPython requires PyQt4 >= 4.7, found %s" %
217 QtCore.PYQT_VERSION_STR)
218 QtCore.PYQT_VERSION_STR)
218
219
219 # Alias PyQt-specific functions for PySide compatibility.
220 # Alias PyQt-specific functions for PySide compatibility.
220 QtCore.Signal = QtCore.pyqtSignal
221 QtCore.Signal = QtCore.pyqtSignal
221 QtCore.Slot = QtCore.pyqtSlot
222 QtCore.Slot = QtCore.pyqtSlot
222
223
223 # query for the API version (in case version == None)
224 # query for the API version (in case version == None)
224 version = sip.getapi('QString')
225 version = sip.getapi('QString')
225 api = QT_API_PYQTv1 if version == 1 else QT_API_PYQT
226 api = QT_API_PYQTv1 if version == 1 else QT_API_PYQT
226 return QtCore, QtGui, QtSvg, api
227 return QtCore, QtGui, QtSvg, api
227
228
228
229
229 def import_pyqt5():
230 def import_pyqt5():
230 """
231 """
231 Import PyQt5
232 Import PyQt5
232
233
233 ImportErrors raised within this function are non-recoverable
234 ImportErrors raised within this function are non-recoverable
234 """
235 """
235
236
236 from PyQt5 import QtCore, QtSvg, QtWidgets, QtGui
237 from PyQt5 import QtCore, QtSvg, QtWidgets, QtGui
237
238
238 # Alias PyQt-specific functions for PySide compatibility.
239 # Alias PyQt-specific functions for PySide compatibility.
239 QtCore.Signal = QtCore.pyqtSignal
240 QtCore.Signal = QtCore.pyqtSignal
240 QtCore.Slot = QtCore.pyqtSlot
241 QtCore.Slot = QtCore.pyqtSlot
241
242
242 # Join QtGui and QtWidgets for Qt4 compatibility.
243 # Join QtGui and QtWidgets for Qt4 compatibility.
243 QtGuiCompat = types.ModuleType('QtGuiCompat')
244 QtGuiCompat = types.ModuleType('QtGuiCompat')
244 QtGuiCompat.__dict__.update(QtGui.__dict__)
245 QtGuiCompat.__dict__.update(QtGui.__dict__)
245 QtGuiCompat.__dict__.update(QtWidgets.__dict__)
246 QtGuiCompat.__dict__.update(QtWidgets.__dict__)
246
247
247 api = QT_API_PYQT5
248 api = QT_API_PYQT5
248 return QtCore, QtGuiCompat, QtSvg, api
249 return QtCore, QtGuiCompat, QtSvg, api
249
250
250
251
251 def import_pyqt6():
252 def import_pyqt6():
252 """
253 """
253 Import PyQt6
254 Import PyQt6
254
255
255 ImportErrors raised within this function are non-recoverable
256 ImportErrors raised within this function are non-recoverable
256 """
257 """
257
258
258 from PyQt6 import QtCore, QtSvg, QtWidgets, QtGui
259 from PyQt6 import QtCore, QtSvg, QtWidgets, QtGui
259
260
260 # Alias PyQt-specific functions for PySide compatibility.
261 # Alias PyQt-specific functions for PySide compatibility.
261 QtCore.Signal = QtCore.pyqtSignal
262 QtCore.Signal = QtCore.pyqtSignal
262 QtCore.Slot = QtCore.pyqtSlot
263 QtCore.Slot = QtCore.pyqtSlot
263
264
264 # Join QtGui and QtWidgets for Qt4 compatibility.
265 # Join QtGui and QtWidgets for Qt4 compatibility.
265 QtGuiCompat = types.ModuleType("QtGuiCompat")
266 QtGuiCompat = types.ModuleType("QtGuiCompat")
266 QtGuiCompat.__dict__.update(QtGui.__dict__)
267 QtGuiCompat.__dict__.update(QtGui.__dict__)
267 QtGuiCompat.__dict__.update(QtWidgets.__dict__)
268 QtGuiCompat.__dict__.update(QtWidgets.__dict__)
268
269
269 api = QT_API_PYQT6
270 api = QT_API_PYQT6
270 return QtCore, QtGuiCompat, QtSvg, api
271 return QtCore, QtGuiCompat, QtSvg, api
271
272
272
273
273 def import_pyside():
274 def import_pyside():
274 """
275 """
275 Import PySide
276 Import PySide
276
277
277 ImportErrors raised within this function are non-recoverable
278 ImportErrors raised within this function are non-recoverable
278 """
279 """
279 from PySide import QtGui, QtCore, QtSvg
280 from PySide import QtGui, QtCore, QtSvg
280 return QtCore, QtGui, QtSvg, QT_API_PYSIDE
281 return QtCore, QtGui, QtSvg, QT_API_PYSIDE
281
282
282 def import_pyside2():
283 def import_pyside2():
283 """
284 """
284 Import PySide2
285 Import PySide2
285
286
286 ImportErrors raised within this function are non-recoverable
287 ImportErrors raised within this function are non-recoverable
287 """
288 """
288 from PySide2 import QtGui, QtCore, QtSvg, QtWidgets, QtPrintSupport
289 from PySide2 import QtGui, QtCore, QtSvg, QtWidgets, QtPrintSupport
289
290
290 # Join QtGui and QtWidgets for Qt4 compatibility.
291 # Join QtGui and QtWidgets for Qt4 compatibility.
291 QtGuiCompat = types.ModuleType('QtGuiCompat')
292 QtGuiCompat = types.ModuleType('QtGuiCompat')
292 QtGuiCompat.__dict__.update(QtGui.__dict__)
293 QtGuiCompat.__dict__.update(QtGui.__dict__)
293 QtGuiCompat.__dict__.update(QtWidgets.__dict__)
294 QtGuiCompat.__dict__.update(QtWidgets.__dict__)
294 QtGuiCompat.__dict__.update(QtPrintSupport.__dict__)
295 QtGuiCompat.__dict__.update(QtPrintSupport.__dict__)
295
296
296 return QtCore, QtGuiCompat, QtSvg, QT_API_PYSIDE2
297 return QtCore, QtGuiCompat, QtSvg, QT_API_PYSIDE2
297
298
298
299
299 def import_pyside6():
300 def import_pyside6():
300 """
301 """
301 Import PySide6
302 Import PySide6
302
303
303 ImportErrors raised within this function are non-recoverable
304 ImportErrors raised within this function are non-recoverable
304 """
305 """
305
306
306 def get_attrs(module):
307 def get_attrs(module):
307 return {
308 return {
308 name: getattr(module, name)
309 name: getattr(module, name)
309 for name in dir(module)
310 for name in dir(module)
310 if not name.startswith("_")
311 if not name.startswith("_")
311 }
312 }
312
313
313 from PySide6 import QtGui, QtCore, QtSvg, QtWidgets, QtPrintSupport
314 from PySide6 import QtGui, QtCore, QtSvg, QtWidgets, QtPrintSupport
314
315
315 # Join QtGui and QtWidgets for Qt4 compatibility.
316 # Join QtGui and QtWidgets for Qt4 compatibility.
316 QtGuiCompat = types.ModuleType("QtGuiCompat")
317 QtGuiCompat = types.ModuleType("QtGuiCompat")
317 QtGuiCompat.__dict__.update(QtGui.__dict__)
318 QtGuiCompat.__dict__.update(QtGui.__dict__)
318 if QtCore.__version_info__ < (6, 7):
319 if QtCore.__version_info__ < (6, 7):
319 QtGuiCompat.__dict__.update(QtWidgets.__dict__)
320 QtGuiCompat.__dict__.update(QtWidgets.__dict__)
320 QtGuiCompat.__dict__.update(QtPrintSupport.__dict__)
321 QtGuiCompat.__dict__.update(QtPrintSupport.__dict__)
321 else:
322 else:
322 QtGuiCompat.__dict__.update(get_attrs(QtWidgets))
323 QtGuiCompat.__dict__.update(get_attrs(QtWidgets))
323 QtGuiCompat.__dict__.update(get_attrs(QtPrintSupport))
324 QtGuiCompat.__dict__.update(get_attrs(QtPrintSupport))
324
325
325 return QtCore, QtGuiCompat, QtSvg, QT_API_PYSIDE6
326 return QtCore, QtGuiCompat, QtSvg, QT_API_PYSIDE6
326
327
327
328
328 def load_qt(api_options):
329 def load_qt(api_options):
329 """
330 """
330 Attempt to import Qt, given a preference list
331 Attempt to import Qt, given a preference list
331 of permissible bindings
332 of permissible bindings
332
333
333 It is safe to call this function multiple times.
334 It is safe to call this function multiple times.
334
335
335 Parameters
336 Parameters
336 ----------
337 ----------
337 api_options : List of strings
338 api_options : List of strings
338 The order of APIs to try. Valid items are 'pyside', 'pyside2', 'pyside6',
339 The order of APIs to try. Valid items are 'pyside', 'pyside2', 'pyside6',
339 'pyqt', 'pyqt5', 'pyqt6', 'pyqtv1' and 'pyqtdefault'
340 'pyqt', 'pyqt5', 'pyqt6', 'pyqtv1' and 'pyqtdefault'
340
341
341 Returns
342 Returns
342 -------
343 -------
343 A tuple of QtCore, QtGui, QtSvg, QT_API
344 A tuple of QtCore, QtGui, QtSvg, QT_API
344 The first three are the Qt modules. The last is the
345 The first three are the Qt modules. The last is the
345 string indicating which module was loaded.
346 string indicating which module was loaded.
346
347
347 Raises
348 Raises
348 ------
349 ------
349 ImportError, if it isn't possible to import any requested
350 ImportError, if it isn't possible to import any requested
350 bindings (either because they aren't installed, or because
351 bindings (either because they aren't installed, or because
351 an incompatible library has already been imported)
352 an incompatible library has already been imported)
352 """
353 """
353 loaders = {
354 loaders = {
354 # Qt6
355 # Qt6
355 QT_API_PYQT6: import_pyqt6,
356 QT_API_PYQT6: import_pyqt6,
356 QT_API_PYSIDE6: import_pyside6,
357 QT_API_PYSIDE6: import_pyside6,
357 # Qt5
358 # Qt5
358 QT_API_PYQT5: import_pyqt5,
359 QT_API_PYQT5: import_pyqt5,
359 QT_API_PYSIDE2: import_pyside2,
360 QT_API_PYSIDE2: import_pyside2,
360 # Qt4
361 # Qt4
361 QT_API_PYSIDE: import_pyside,
362 QT_API_PYSIDE: import_pyside,
362 QT_API_PYQT: import_pyqt4,
363 QT_API_PYQT: import_pyqt4,
363 QT_API_PYQTv1: partial(import_pyqt4, version=1),
364 QT_API_PYQTv1: partial(import_pyqt4, version=1),
364 # default
365 # default
365 QT_API_PYQT_DEFAULT: import_pyqt6,
366 QT_API_PYQT_DEFAULT: import_pyqt6,
366 }
367 }
367
368
368 for api in api_options:
369 for api in api_options:
369
370
370 if api not in loaders:
371 if api not in loaders:
371 raise RuntimeError(
372 raise RuntimeError(
372 "Invalid Qt API %r, valid values are: %s" %
373 "Invalid Qt API %r, valid values are: %s" %
373 (api, ", ".join(["%r" % k for k in loaders.keys()])))
374 (api, ", ".join(["%r" % k for k in loaders.keys()])))
374
375
375 if not can_import(api):
376 if not can_import(api):
376 continue
377 continue
377
378
378 # cannot safely recover from an ImportError during this
379 # cannot safely recover from an ImportError during this
379 result = loaders[api]()
380 result = loaders[api]()
380 api = result[-1] # changed if api = QT_API_PYQT_DEFAULT
381 api = result[-1] # changed if api = QT_API_PYQT_DEFAULT
381 commit_api(api)
382 commit_api(api)
382 return result
383 return result
383 else:
384 else:
384 # Clear the environment variable since it doesn't work.
385 # Clear the environment variable since it doesn't work.
385 if "QT_API" in os.environ:
386 if "QT_API" in os.environ:
386 del os.environ["QT_API"]
387 del os.environ["QT_API"]
387
388
388 raise ImportError(
389 raise ImportError(
389 """
390 """
390 Could not load requested Qt binding. Please ensure that
391 Could not load requested Qt binding. Please ensure that
391 PyQt4 >= 4.7, PyQt5, PyQt6, PySide >= 1.0.3, PySide2, or
392 PyQt4 >= 4.7, PyQt5, PyQt6, PySide >= 1.0.3, PySide2, or
392 PySide6 is available, and only one is imported per session.
393 PySide6 is available, and only one is imported per session.
393
394
394 Currently-imported Qt library: %r
395 Currently-imported Qt library: %r
395 PyQt5 available (requires QtCore, QtGui, QtSvg, QtWidgets): %s
396 PyQt5 available (requires QtCore, QtGui, QtSvg, QtWidgets): %s
396 PyQt6 available (requires QtCore, QtGui, QtSvg, QtWidgets): %s
397 PyQt6 available (requires QtCore, QtGui, QtSvg, QtWidgets): %s
397 PySide2 installed: %s
398 PySide2 installed: %s
398 PySide6 installed: %s
399 PySide6 installed: %s
399 Tried to load: %r
400 Tried to load: %r
400 """
401 """
401 % (
402 % (
402 loaded_api(),
403 loaded_api(),
403 has_binding(QT_API_PYQT5),
404 has_binding(QT_API_PYQT5),
404 has_binding(QT_API_PYQT6),
405 has_binding(QT_API_PYQT6),
405 has_binding(QT_API_PYSIDE2),
406 has_binding(QT_API_PYSIDE2),
406 has_binding(QT_API_PYSIDE6),
407 has_binding(QT_API_PYSIDE6),
407 api_options,
408 api_options,
408 )
409 )
409 )
410 )
410
411
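# Illustrative usage (not part of this module): try the Qt6 bindings first and
# fall back to Qt5; the returned QT_API string names the binding that won.
def _load_preferring_qt6():  # hypothetical helper, for illustration only
    return load_qt([QT_API_PYQT6, QT_API_PYSIDE6, QT_API_PYQT5, QT_API_PYSIDE2])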
411
412
412 def enum_factory(QT_API, QtCore):
413 def enum_factory(QT_API, QtCore):
413 """Construct an enum helper to account for PyQt5 <-> PyQt6 changes."""
414 """Construct an enum helper to account for PyQt5 <-> PyQt6 changes."""
414
415
415 @lru_cache(None)
416 @lru_cache(None)
416 def _enum(name):
417 def _enum(name):
417 # foo.bar.Enum.Entry (PyQt6) <=> foo.bar.Entry (non-PyQt6).
418 # foo.bar.Enum.Entry (PyQt6) <=> foo.bar.Entry (non-PyQt6).
418 return operator.attrgetter(
419 return operator.attrgetter(
419 name if QT_API == QT_API_PYQT6 else name.rpartition(".")[0]
420 name if QT_API == QT_API_PYQT6 else name.rpartition(".")[0]
420 )(sys.modules[QtCore.__package__])
421 )(sys.modules[QtCore.__package__])
421
422
422 return _enum
423 return _enum
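# Illustrative usage (not part of this module): the helper hides PyQt6's extra
# enum nesting level, so call sites stay identical across bindings.
def _display_role(QtCore, QT_API):  # hypothetical helper, for illustration only
    enum = enum_factory(QT_API, QtCore)
    # PyQt6: QtCore.Qt.ItemDataRole.DisplayRole; other bindings: QtCore.Qt.DisplayRole
    return enum("QtCore.Qt.ItemDataRole").DisplayRole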
@@ -1,101 +1,102
1 """ Utilities for accessing the platform's clipboard.
1 """ Utilities for accessing the platform's clipboard.
2 """
2 """
3
3 import os
4 import os
4 import subprocess
5 import subprocess
5
6
6 from IPython.core.error import TryNext
7 from IPython.core.error import TryNext
7 import IPython.utils.py3compat as py3compat
8 import IPython.utils.py3compat as py3compat
8
9
9
10
10 class ClipboardEmpty(ValueError):
11 class ClipboardEmpty(ValueError):
11 pass
12 pass
12
13
13
14
14 def win32_clipboard_get():
15 def win32_clipboard_get():
15 """ Get the current clipboard's text on Windows.
16 """ Get the current clipboard's text on Windows.
16
17
17 Requires Mark Hammond's pywin32 extensions.
18 Requires Mark Hammond's pywin32 extensions.
18 """
19 """
19 try:
20 try:
20 import win32clipboard
21 import win32clipboard
21 except ImportError as e:
22 except ImportError as e:
22 raise TryNext("Getting text from the clipboard requires the pywin32 "
23 raise TryNext("Getting text from the clipboard requires the pywin32 "
23 "extensions: http://sourceforge.net/projects/pywin32/") from e
24 "extensions: http://sourceforge.net/projects/pywin32/") from e
24 win32clipboard.OpenClipboard()
25 win32clipboard.OpenClipboard()
25 try:
26 try:
26 text = win32clipboard.GetClipboardData(win32clipboard.CF_UNICODETEXT)
27 text = win32clipboard.GetClipboardData(win32clipboard.CF_UNICODETEXT)
27 except (TypeError, win32clipboard.error):
28 except (TypeError, win32clipboard.error):
28 try:
29 try:
29 text = win32clipboard.GetClipboardData(win32clipboard.CF_TEXT)
30 text = win32clipboard.GetClipboardData(win32clipboard.CF_TEXT)
30 text = py3compat.cast_unicode(text, py3compat.DEFAULT_ENCODING)
31 text = py3compat.cast_unicode(text, py3compat.DEFAULT_ENCODING)
31 except (TypeError, win32clipboard.error) as e:
32 except (TypeError, win32clipboard.error) as e:
32 raise ClipboardEmpty from e
33 raise ClipboardEmpty from e
33 finally:
34 finally:
34 win32clipboard.CloseClipboard()
35 win32clipboard.CloseClipboard()
35 return text
36 return text
36
37
37
38
38 def osx_clipboard_get() -> str:
39 def osx_clipboard_get() -> str:
39 """ Get the clipboard's text on OS X.
40 """ Get the clipboard's text on OS X.
40 """
41 """
41 p = subprocess.Popen(['pbpaste', '-Prefer', 'ascii'],
42 p = subprocess.Popen(['pbpaste', '-Prefer', 'ascii'],
42 stdout=subprocess.PIPE)
43 stdout=subprocess.PIPE)
43 bytes_, stderr = p.communicate()
44 bytes_, stderr = p.communicate()
44 # Text comes in with old Mac \r line endings. Change them to \n.
45 # Text comes in with old Mac \r line endings. Change them to \n.
45 bytes_ = bytes_.replace(b'\r', b'\n')
46 bytes_ = bytes_.replace(b'\r', b'\n')
46 text = py3compat.decode(bytes_)
47 text = py3compat.decode(bytes_)
47 return text
48 return text
48
49
49
50
50 def tkinter_clipboard_get():
51 def tkinter_clipboard_get():
51 """ Get the clipboard's text using Tkinter.
52 """ Get the clipboard's text using Tkinter.
52
53
53 This is the default on systems that are not Windows or OS X. It may
54 This is the default on systems that are not Windows or OS X. It may
54 interfere with other UI toolkits and should be replaced with an
55 interfere with other UI toolkits and should be replaced with an
55 implementation that uses that toolkit.
56 implementation that uses that toolkit.
56 """
57 """
57 try:
58 try:
58 from tkinter import Tk, TclError
59 from tkinter import Tk, TclError
59 except ImportError as e:
60 except ImportError as e:
60 raise TryNext("Getting text from the clipboard on this platform requires tkinter.") from e
61 raise TryNext("Getting text from the clipboard on this platform requires tkinter.") from e
61
62
62 root = Tk()
63 root = Tk()
63 root.withdraw()
64 root.withdraw()
64 try:
65 try:
65 text = root.clipboard_get()
66 text = root.clipboard_get()
66 except TclError as e:
67 except TclError as e:
67 raise ClipboardEmpty from e
68 raise ClipboardEmpty from e
68 finally:
69 finally:
69 root.destroy()
70 root.destroy()
70 text = py3compat.cast_unicode(text, py3compat.DEFAULT_ENCODING)
71 text = py3compat.cast_unicode(text, py3compat.DEFAULT_ENCODING)
71 return text
72 return text
72
73
73
74
74 def wayland_clipboard_get():
75 def wayland_clipboard_get():
75 """Get the clipboard's text under Wayland using wl-paste command.
76 """Get the clipboard's text under Wayland using wl-paste command.
76
77
77 This requires a running Wayland session and the wl-clipboard utility to be installed.
78 This requires a running Wayland session and the wl-clipboard utility to be installed.
78 """
79 """
79 if os.environ.get("XDG_SESSION_TYPE") != "wayland":
80 if os.environ.get("XDG_SESSION_TYPE") != "wayland":
80 raise TryNext("wayland is not detected")
81 raise TryNext("wayland is not detected")
81
82
82 try:
83 try:
83 with subprocess.Popen(["wl-paste"], stdout=subprocess.PIPE) as p:
84 with subprocess.Popen(["wl-paste"], stdout=subprocess.PIPE) as p:
84 raw, err = p.communicate()
85 raw, err = p.communicate()
85 if p.wait():
86 if p.wait():
86 raise TryNext(err)
87 raise TryNext(err)
87 except FileNotFoundError as e:
88 except FileNotFoundError as e:
88 raise TryNext(
89 raise TryNext(
89 "Getting text from the clipboard under Wayland requires the wl-clipboard "
90 "Getting text from the clipboard under Wayland requires the wl-clipboard "
90 "extension: https://github.com/bugaevc/wl-clipboard"
91 "extension: https://github.com/bugaevc/wl-clipboard"
91 ) from e
92 ) from e
92
93
93 if not raw:
94 if not raw:
94 raise ClipboardEmpty
95 raise ClipboardEmpty
95
96
96 try:
97 try:
97 text = py3compat.decode(raw)
98 text = py3compat.decode(raw)
98 except UnicodeDecodeError as e:
99 except UnicodeDecodeError as e:
99 raise ClipboardEmpty from e
100 raise ClipboardEmpty from e
100
101
101 return text
102 return text
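# Illustrative sketch (not part of this module): a rough platform dispatcher in
# the spirit of how IPython selects among these helpers (the real selection goes
# through IPython's hook machinery, so treat this as an approximation).
def clipboard_get_sketch():  # hypothetical helper name
    import sys
    if sys.platform == 'win32':
        return win32_clipboard_get()
    if sys.platform == 'darwin':
        return osx_clipboard_get()
    try:
        return wayland_clipboard_get()
    except TryNext:
        return tkinter_clipboard_get()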
@@ -1,1277 +1,1278
1 # -*- coding: utf-8 -*-
1 # -*- coding: utf-8 -*-
2 """
2 """
3 Sphinx directive to support embedded IPython code.
3 Sphinx directive to support embedded IPython code.
4
4
5 IPython provides an extension for `Sphinx <http://www.sphinx-doc.org/>`_ to
5 IPython provides an extension for `Sphinx <http://www.sphinx-doc.org/>`_ to
6 highlight and run code.
6 highlight and run code.
7
7
8 This directive allows pasting of entire interactive IPython sessions, prompts
8 This directive allows pasting of entire interactive IPython sessions, prompts
9 and all, and their code will actually get re-executed at doc build time, with
9 and all, and their code will actually get re-executed at doc build time, with
10 all prompts renumbered sequentially. It also allows you to input code as pure
10 all prompts renumbered sequentially. It also allows you to input code as pure
11 Python by giving the argument ``python`` to the directive. The output looks
11 Python by giving the argument ``python`` to the directive. The output looks
12 like an interactive IPython session.
12 like an interactive IPython session.
13
13
14 Here is an example of how the IPython directive can
14 Here is an example of how the IPython directive can
15 **run** python code, at build time.
15 **run** python code, at build time.
16
16
17 .. ipython::
17 .. ipython::
18
18
19 In [1]: 1+1
19 In [1]: 1+1
20
20
21 In [1]: import datetime
21 In [1]: import datetime
22 ...: datetime.date.fromisoformat('2022-02-22')
22 ...: datetime.date.fromisoformat('2022-02-22')
23
23
24 It supports IPython constructs that plain
24 It supports IPython constructs that plain
25 Python does not understand (like magics):
25 Python does not understand (like magics):
26
26
27 .. ipython::
27 .. ipython::
28
28
29 In [0]: import time
29 In [0]: import time
30
30
31 In [0]: %pdoc time.sleep
31 In [0]: %pdoc time.sleep
32
32
33 This will also support top-level async when using IPython 7.0+
33 This will also support top-level async when using IPython 7.0+
34
34
35 .. ipython::
35 .. ipython::
36
36
37 In [2]: import asyncio
37 In [2]: import asyncio
38 ...: print('before')
38 ...: print('before')
39 ...: await asyncio.sleep(1)
39 ...: await asyncio.sleep(1)
40 ...: print('after')
40 ...: print('after')
41
41
42
42
43 The namespace will persist across multiple code chunks. Let's define a variable:
43 The namespace will persist across multiple code chunks. Let's define a variable:
44
44
45 .. ipython::
45 .. ipython::
46
46
47 In [0]: who = "World"
47 In [0]: who = "World"
48
48
49 And now say hello:
49 And now say hello:
50
50
51 .. ipython::
51 .. ipython::
52
52
53 In [0]: print('Hello,', who)
53 In [0]: print('Hello,', who)
54
54
55 If the current section raises an exception, you can add the ``:okexcept:`` flag
55 If the current section raises an exception, you can add the ``:okexcept:`` flag
56 to the current block; otherwise the build will fail.
56 to the current block; otherwise the build will fail.
57
57
58 .. ipython::
58 .. ipython::
59 :okexcept:
59 :okexcept:
60
60
61 In [1]: 1/0
61 In [1]: 1/0
62
62
63 IPython Sphinx directive module
63 IPython Sphinx directive module
64 ===============================
64 ===============================
65
65
66 To enable this directive, simply list it in your Sphinx ``conf.py`` file
66 To enable this directive, simply list it in your Sphinx ``conf.py`` file
67 (making sure the directory where you placed it is visible to Sphinx, as is
67 (making sure the directory where you placed it is visible to Sphinx, as is
68 needed for all Sphinx directives). For example, to enable syntax highlighting
68 needed for all Sphinx directives). For example, to enable syntax highlighting
69 and the IPython directive::
69 and the IPython directive::
70
70
71 extensions = ['IPython.sphinxext.ipython_console_highlighting',
71 extensions = ['IPython.sphinxext.ipython_console_highlighting',
72 'IPython.sphinxext.ipython_directive']
72 'IPython.sphinxext.ipython_directive']
73
73
74 The IPython directive outputs code-blocks with the language 'ipython'. So
74 The IPython directive outputs code-blocks with the language 'ipython'. So
75 if you do not have the syntax highlighting extension enabled as well, then
75 if you do not have the syntax highlighting extension enabled as well, then
76 all rendered code-blocks will be uncolored. By default this directive assumes
76 all rendered code-blocks will be uncolored. By default this directive assumes
77 that your prompts are unchanged IPython ones, but this can be customized.
77 that your prompts are unchanged IPython ones, but this can be customized.
78 The configurable options that can be placed in conf.py are:
78 The configurable options that can be placed in conf.py are:
79
79
80 ipython_savefig_dir:
80 ipython_savefig_dir:
81 The directory in which to save the figures. This is relative to the
81 The directory in which to save the figures. This is relative to the
82 Sphinx source directory. The default is `html_static_path`.
82 Sphinx source directory. The default is `html_static_path`.
83 ipython_rgxin:
83 ipython_rgxin:
84 The compiled regular expression to denote the start of IPython input
84 The compiled regular expression to denote the start of IPython input
85 lines. The default is ``re.compile('In \\[(\\d+)\\]:\\s?(.*)\\s*')``. You
85 lines. The default is ``re.compile('In \\[(\\d+)\\]:\\s?(.*)\\s*')``. You
86 shouldn't need to change this.
86 shouldn't need to change this.
87 ipython_warning_is_error: [defaults to True]
87 ipython_warning_is_error: [defaults to True]
88 Fail the build if something unexpected happens, for example if a block raises
88 Fail the build if something unexpected happens, for example if a block raises
89 an exception but does not have the `:okexcept:` flag. The exact behavior of
89 an exception but does not have the `:okexcept:` flag. The exact behavior of
90 what is considered strict may change between versions of the Sphinx directive.
90 what is considered strict may change between versions of the Sphinx directive.
91 ipython_rgxout:
91 ipython_rgxout:
92 The compiled regular expression to denote the start of IPython output
92 The compiled regular expression to denote the start of IPython output
93 lines. The default is ``re.compile('Out\\[(\\d+)\\]:\\s?(.*)\\s*')``. You
93 lines. The default is ``re.compile('Out\\[(\\d+)\\]:\\s?(.*)\\s*')``. You
94 shouldn't need to change this.
94 shouldn't need to change this.
95 ipython_promptin:
95 ipython_promptin:
96 The string to represent the IPython input prompt in the generated ReST.
96 The string to represent the IPython input prompt in the generated ReST.
97 The default is ``'In [%d]:'``. This expects that the line numbers are used
97 The default is ``'In [%d]:'``. This expects that the line numbers are used
98 in the prompt.
98 in the prompt.
99 ipython_promptout:
99 ipython_promptout:
100 The string to represent the IPython output prompt in the generated ReST. The
100 The string to represent the IPython output prompt in the generated ReST. The
101 default is ``'Out [%d]:'``. This expects that the line numbers are used
101 default is ``'Out [%d]:'``. This expects that the line numbers are used
102 in the prompt.
102 in the prompt.
103 ipython_mplbackend:
103 ipython_mplbackend:
104 The string which specifies if the embedded Sphinx shell should import
104 The string which specifies if the embedded Sphinx shell should import
105 Matplotlib and set the backend. The value specifies a backend that is
105 Matplotlib and set the backend. The value specifies a backend that is
106 passed to `matplotlib.use()` before any lines in `ipython_execlines` are
106 passed to `matplotlib.use()` before any lines in `ipython_execlines` are
107 executed. If not specified in conf.py, then the default value of 'agg' is
107 executed. If not specified in conf.py, then the default value of 'agg' is
108 used. To use the IPython directive without matplotlib as a dependency, set
108 used. To use the IPython directive without matplotlib as a dependency, set
109 the value to `None`. Matplotlib may still end up being imported
109 the value to `None`. Matplotlib may still end up being imported
110 if the user specifies so in `ipython_execlines` or makes use of the
110 if the user specifies so in `ipython_execlines` or makes use of the
111 @savefig pseudo decorator.
111 @savefig pseudo decorator.
112 ipython_execlines:
112 ipython_execlines:
113 A list of strings to be exec'd in the embedded Sphinx shell. Typical
113 A list of strings to be exec'd in the embedded Sphinx shell. Typical
114 usage is to make certain packages always available. Set this to an empty
114 usage is to make certain packages always available. Set this to an empty
115 list if you wish to have no imports always available. If specified in
115 list if you wish to have no imports always available. If specified in
116 ``conf.py`` as `None`, then it has the effect of making no imports available.
116 ``conf.py`` as `None`, then it has the effect of making no imports available.
117 If omitted from conf.py altogether, then the default value of
117 If omitted from conf.py altogether, then the default value of
118 ['import numpy as np', 'import matplotlib.pyplot as plt'] is used.
118 ['import numpy as np', 'import matplotlib.pyplot as plt'] is used.
119 ipython_holdcount:
119 ipython_holdcount:
120 When the @suppress pseudo-decorator is used, the execution count can be
120 When the @suppress pseudo-decorator is used, the execution count can be
121 incremented or not. The default behavior is to hold the execution count,
121 incremented or not. The default behavior is to hold the execution count,
122 corresponding to a value of `True`. Set this to `False` to increment
122 corresponding to a value of `True`. Set this to `False` to increment
123 the execution count after each suppressed command.
123 the execution count after each suppressed command.
124
124
125 As an example, to use the IPython directive when `matplotlib` is not available,
125 As an example, to use the IPython directive when `matplotlib` is not available,
126 one sets the backend to `None`::
126 one sets the backend to `None`::
127
127
128 ipython_mplbackend = None
128 ipython_mplbackend = None
129
129
130 An example usage of the directive is:
130 An example usage of the directive is:
131
131
132 .. code-block:: rst
132 .. code-block:: rst
133
133
134 .. ipython::
134 .. ipython::
135
135
136 In [1]: x = 1
136 In [1]: x = 1
137
137
138 In [2]: y = x**2
138 In [2]: y = x**2
139
139
140 In [3]: print(y)
140 In [3]: print(y)
141
141
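A minimal ``conf.py`` fragment combining the options documented above might
look like the following (the values shown are illustrative, not required
defaults)::

    extensions = ['IPython.sphinxext.ipython_console_highlighting',
                  'IPython.sphinxext.ipython_directive']
    ipython_warning_is_error = True
    ipython_execlines = ['import numpy as np']
    ipython_holdcount = True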
142 See http://matplotlib.org/sampledoc/ipython_directive.html for additional
142 See http://matplotlib.org/sampledoc/ipython_directive.html for additional
143 documentation.
143 documentation.
144
144
145 Pseudo-Decorators
145 Pseudo-Decorators
146 =================
146 =================
147
147
148 Note: Only one decorator is supported per input. If more than one decorator
148 Note: Only one decorator is supported per input. If more than one decorator
149 is specified, then only the last one is used.
149 is specified, then only the last one is used.
150
150
151 In addition to the Pseudo-Decorators/options described at the above link,
151 In addition to the Pseudo-Decorators/options described at the above link,
152 several enhancements have been made. The directive will emit a message to the
152 several enhancements have been made. The directive will emit a message to the
153 console at build-time if code-execution resulted in an exception or warning.
153 console at build-time if code-execution resulted in an exception or warning.
154 You can suppress these on a per-block basis by specifying the :okexcept:
154 You can suppress these on a per-block basis by specifying the :okexcept:
155 or :okwarning: options:
155 or :okwarning: options:
156
156
157 .. code-block:: rst
157 .. code-block:: rst
158
158
159 .. ipython::
159 .. ipython::
160 :okexcept:
160 :okexcept:
161 :okwarning:
161 :okwarning:
162
162
163 In [1]: 1/0
163 In [1]: 1/0
164 In [2]: # raise warning.
164 In [2]: # raise warning.
165
165
166 To Do
166 To Do
167 =====
167 =====
168
168
169 - Turn the ad-hoc test() function into a real test suite.
169 - Turn the ad-hoc test() function into a real test suite.
170 - Break up ipython-specific functionality from matplotlib stuff into better
170 - Break up ipython-specific functionality from matplotlib stuff into better
171 separated code.
171 separated code.
172
172
173 """
173 """
174
174
175 # Authors
175 # Authors
176 # =======
176 # =======
177 #
177 #
178 # - John D Hunter: original author.
178 # - John D Hunter: original author.
179 # - Fernando Perez: refactoring, documentation, cleanups, port to 0.11.
179 # - Fernando Perez: refactoring, documentation, cleanups, port to 0.11.
180 # - Václav Šmilauer <eudoxos-AT-arcig.cz>: Prompt generalizations.
180 # - Václav Šmilauer <eudoxos-AT-arcig.cz>: Prompt generalizations.
181 # - Skipper Seabold, refactoring, cleanups, pure python addition
181 # - Skipper Seabold, refactoring, cleanups, pure python addition
182
182
183 #-----------------------------------------------------------------------------
183 #-----------------------------------------------------------------------------
184 # Imports
184 # Imports
185 #-----------------------------------------------------------------------------
185 #-----------------------------------------------------------------------------
186
186
187 # Stdlib
187 # Stdlib
188 import atexit
188 import atexit
189 import errno
189 import errno
190 import os
190 import os
191 import pathlib
191 import pathlib
192 import re
192 import re
193 import sys
193 import sys
194 import tempfile
194 import tempfile
195 import ast
195 import ast
196 import warnings
196 import warnings
197 import shutil
197 import shutil
198 from io import StringIO
198 from io import StringIO
199 from typing import Any, Dict, Set
199 from typing import Any, Dict, Set
200
200
201 # Third-party
201 # Third-party
202 from docutils.parsers.rst import directives
202 from docutils.parsers.rst import directives
203 from docutils.parsers.rst import Directive
203 from docutils.parsers.rst import Directive
204 from sphinx.util import logging
204 from sphinx.util import logging
205
205
206 # Our own
206 # Our own
207 from traitlets.config import Config
207 from traitlets.config import Config
208 from IPython import InteractiveShell
208 from IPython import InteractiveShell
209 from IPython.core.profiledir import ProfileDir
209 from IPython.core.profiledir import ProfileDir
210
210
211 use_matplotlib = False
211 use_matplotlib = False
212 try:
212 try:
213 import matplotlib
213 import matplotlib
214 use_matplotlib = True
214 use_matplotlib = True
215 except Exception:
215 except Exception:
216 pass
216 pass
217
217
218 #-----------------------------------------------------------------------------
218 #-----------------------------------------------------------------------------
219 # Globals
219 # Globals
220 #-----------------------------------------------------------------------------
220 #-----------------------------------------------------------------------------
221 # for tokenizing blocks
221 # for tokenizing blocks
222 COMMENT, INPUT, OUTPUT = range(3)
222 COMMENT, INPUT, OUTPUT = range(3)
223
223
224 PSEUDO_DECORATORS = ["suppress", "verbatim", "savefig", "doctest"]
224 PSEUDO_DECORATORS = ["suppress", "verbatim", "savefig", "doctest"]
225
225
226 #-----------------------------------------------------------------------------
226 #-----------------------------------------------------------------------------
227 # Functions and class declarations
227 # Functions and class declarations
228 #-----------------------------------------------------------------------------
228 #-----------------------------------------------------------------------------
229
229
230 def block_parser(part, rgxin, rgxout, fmtin, fmtout):
230 def block_parser(part, rgxin, rgxout, fmtin, fmtout):
231 """
231 """
232 part is a string of ipython text, comprised of at most one
232 part is a string of ipython text, comprised of at most one
233 input, one output, comments, and blank lines. The block parser
233 input, one output, comments, and blank lines. The block parser
234 parses the text into a list of::
234 parses the text into a list of::
235
235
236 blocks = [ (TOKEN0, data0), (TOKEN1, data1), ...]
236 blocks = [ (TOKEN0, data0), (TOKEN1, data1), ...]
237
237
238 where TOKEN is one of [COMMENT | INPUT | OUTPUT ] and
238 where TOKEN is one of [COMMENT | INPUT | OUTPUT ] and
239 data is, depending on the type of token::
239 data is, depending on the type of token::
240
240
241 COMMENT : the comment string
241 COMMENT : the comment string
242
242
243 INPUT: the (DECORATOR, INPUT_LINE, REST) where
243 INPUT: the (DECORATOR, INPUT_LINE, REST) where
244 DECORATOR: the input decorator (or None)
244 DECORATOR: the input decorator (or None)
245 INPUT_LINE: the input as string (possibly multi-line)
245 INPUT_LINE: the input as string (possibly multi-line)
246 REST : any stdout generated by the input line (not OUTPUT)
246 REST : any stdout generated by the input line (not OUTPUT)
247
247
248 OUTPUT: the output string, possibly multi-line
248 OUTPUT: the output string, possibly multi-line
249
249
250 """
250 """
251 block = []
251 block = []
252 lines = part.split('\n')
252 lines = part.split('\n')
253 N = len(lines)
253 N = len(lines)
254 i = 0
254 i = 0
255 decorator = None
255 decorator = None
256 while 1:
256 while 1:
257
257
258 if i==N:
258 if i==N:
259 # nothing left to parse -- the last line
259 # nothing left to parse -- the last line
260 break
260 break
261
261
262 line = lines[i]
262 line = lines[i]
263 i += 1
263 i += 1
264 line_stripped = line.strip()
264 line_stripped = line.strip()
265 if line_stripped.startswith('#'):
265 if line_stripped.startswith('#'):
266 block.append((COMMENT, line))
266 block.append((COMMENT, line))
267 continue
267 continue
268
268
269 if any(
269 if any(
270 line_stripped.startswith("@" + pseudo_decorator)
270 line_stripped.startswith("@" + pseudo_decorator)
271 for pseudo_decorator in PSEUDO_DECORATORS
271 for pseudo_decorator in PSEUDO_DECORATORS
272 ):
272 ):
273 if decorator:
273 if decorator:
274 raise RuntimeError(
274 raise RuntimeError(
275 "Applying multiple pseudo-decorators on one line is not supported"
275 "Applying multiple pseudo-decorators on one line is not supported"
276 )
276 )
277 else:
277 else:
278 decorator = line_stripped
278 decorator = line_stripped
279 continue
279 continue
280
280
281 # does this look like an input line?
281 # does this look like an input line?
282 matchin = rgxin.match(line)
282 matchin = rgxin.match(line)
283 if matchin:
283 if matchin:
284 lineno, inputline = int(matchin.group(1)), matchin.group(2)
284 lineno, inputline = int(matchin.group(1)), matchin.group(2)
285
285
286 # the ....: continuation string
286 # the ....: continuation string
287 continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
287 continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
288 Nc = len(continuation)
288 Nc = len(continuation)
289 # input lines can continue on for more than one line, if
289 # input lines can continue on for more than one line, if
290 # we have a '\' line continuation char or a function call that
290 # we have a '\' line continuation char or a function call that
291 # echoes a line (e.g. 'print'). The input line can only be
291 # echoes a line (e.g. 'print'). The input line can only be
292 # terminated by the end of the block or an output line, so
292 # terminated by the end of the block or an output line, so
293 # we parse out the rest of the input line if it is
293 # we parse out the rest of the input line if it is
294 # multiline as well as any echo text
294 # multiline as well as any echo text
295
295
296 rest = []
296 rest = []
297 while i<N:
297 while i<N:
298
298
299 # look ahead; if the next line is blank, or a comment, or
299 # look ahead; if the next line is blank, or a comment, or
300 # an output line, we're done
300 # an output line, we're done
301
301
302 nextline = lines[i]
302 nextline = lines[i]
303 matchout = rgxout.match(nextline)
303 matchout = rgxout.match(nextline)
304 # print("nextline=%s, continuation=%s, starts=%s"%(nextline, continuation, nextline.startswith(continuation)))
304 # print("nextline=%s, continuation=%s, starts=%s"%(nextline, continuation, nextline.startswith(continuation)))
305 if matchout or nextline.startswith('#'):
305 if matchout or nextline.startswith('#'):
306 break
306 break
307 elif nextline.startswith(continuation):
307 elif nextline.startswith(continuation):
308 # The default ipython_rgx* treat the space following the colon as optional.
308 # The default ipython_rgx* treat the space following the colon as optional.
309 # However, if the space is there we must consume it or code
309 # However, if the space is there we must consume it or code
310 # employing the cython_magic extension will fail to execute.
310 # employing the cython_magic extension will fail to execute.
311 #
311 #
312 # This works with the default ipython_rgx* patterns,
312 # This works with the default ipython_rgx* patterns,
313 # If you modify them, YMMV.
313 # If you modify them, YMMV.
314 nextline = nextline[Nc:]
314 nextline = nextline[Nc:]
315 if nextline and nextline[0] == ' ':
315 if nextline and nextline[0] == ' ':
316 nextline = nextline[1:]
316 nextline = nextline[1:]
317
317
318 inputline += '\n' + nextline
318 inputline += '\n' + nextline
319 else:
319 else:
320 rest.append(nextline)
320 rest.append(nextline)
321 i+= 1
321 i+= 1
322
322
323 block.append((INPUT, (decorator, inputline, '\n'.join(rest))))
323 block.append((INPUT, (decorator, inputline, '\n'.join(rest))))
324 continue
324 continue
325
325
326 # if it looks like an output line grab all the text to the end
326 # if it looks like an output line grab all the text to the end
327 # of the block
327 # of the block
328 matchout = rgxout.match(line)
328 matchout = rgxout.match(line)
329 if matchout:
329 if matchout:
330 lineno, output = int(matchout.group(1)), matchout.group(2)
330 lineno, output = int(matchout.group(1)), matchout.group(2)
331 if i<N-1:
331 if i<N-1:
332 output = '\n'.join([output] + lines[i:])
332 output = '\n'.join([output] + lines[i:])
333
333
334 block.append((OUTPUT, output))
334 block.append((OUTPUT, output))
335 break
335 break
336
336
337 return block
337 return block
338
338
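# Illustrative sketch (not part of this module): tokenizing a two-line session
# with the default prompt regexes documented above.
def _block_parser_example():  # hypothetical helper, for illustration only
    import re
    rgxin = re.compile(r'In \[(\d+)\]:\s?(.*)\s*')
    rgxout = re.compile(r'Out\[(\d+)\]:\s?(.*)\s*')
    # Expected result, roughly: [(INPUT, (None, '1 + 1', '')), (OUTPUT, '2')]
    return block_parser("In [1]: 1 + 1\nOut[1]: 2", rgxin, rgxout, 'In [%d]:', 'Out[%d]:')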
339
339
340 class EmbeddedSphinxShell(object):
340 class EmbeddedSphinxShell(object):
341 """An embedded IPython instance to run inside Sphinx"""
341 """An embedded IPython instance to run inside Sphinx"""
342
342
343 def __init__(self, exec_lines=None):
343 def __init__(self, exec_lines=None):
344
344
345 self.cout = StringIO()
345 self.cout = StringIO()
346
346
347 if exec_lines is None:
347 if exec_lines is None:
348 exec_lines = []
348 exec_lines = []
349
349
350 # Create config object for IPython
350 # Create config object for IPython
351 config = Config()
351 config = Config()
352 config.HistoryManager.hist_file = ':memory:'
352 config.HistoryManager.hist_file = ':memory:'
353 config.InteractiveShell.autocall = False
353 config.InteractiveShell.autocall = False
354 config.InteractiveShell.autoindent = False
354 config.InteractiveShell.autoindent = False
355 config.InteractiveShell.colors = 'NoColor'
355 config.InteractiveShell.colors = 'NoColor'
356
356
357 # create a profile so instance history isn't saved
357 # create a profile so instance history isn't saved
358 tmp_profile_dir = tempfile.mkdtemp(prefix='profile_')
358 tmp_profile_dir = tempfile.mkdtemp(prefix='profile_')
359 profname = 'auto_profile_sphinx_build'
359 profname = 'auto_profile_sphinx_build'
360 pdir = os.path.join(tmp_profile_dir,profname)
360 pdir = os.path.join(tmp_profile_dir,profname)
361 profile = ProfileDir.create_profile_dir(pdir)
361 profile = ProfileDir.create_profile_dir(pdir)
362
362
363 # Create and initialize global ipython, but don't start its mainloop.
363 # Create and initialize global ipython, but don't start its mainloop.
364 # This will persist across different EmbeddedSphinxShell instances.
364 # This will persist across different EmbeddedSphinxShell instances.
365 IP = InteractiveShell.instance(config=config, profile_dir=profile)
365 IP = InteractiveShell.instance(config=config, profile_dir=profile)
366 atexit.register(self.cleanup)
366 atexit.register(self.cleanup)
367
367
368 # Store a few parts of IPython we'll need.
368 # Store a few parts of IPython we'll need.
369 self.IP = IP
369 self.IP = IP
370 self.user_ns = self.IP.user_ns
370 self.user_ns = self.IP.user_ns
371 self.user_global_ns = self.IP.user_global_ns
371 self.user_global_ns = self.IP.user_global_ns
372
372
373 self.input = ''
373 self.input = ''
374 self.output = ''
374 self.output = ''
375 self.tmp_profile_dir = tmp_profile_dir
375 self.tmp_profile_dir = tmp_profile_dir
376
376
377 self.is_verbatim = False
377 self.is_verbatim = False
378 self.is_doctest = False
378 self.is_doctest = False
379 self.is_suppress = False
379 self.is_suppress = False
380
380
381 # Optionally, provide more detailed information to shell.
381 # Optionally, provide more detailed information to shell.
382 # this is assigned by the SetUp method of IPythonDirective
382 # this is assigned by the SetUp method of IPythonDirective
383 # to point at itself.
383 # to point at itself.
384 #
384 #
385 # So, you can access handy things at self.directive.state
385 # So, you can access handy things at self.directive.state
386 self.directive = None
386 self.directive = None
387
387
388 # on the first call to the savefig decorator, we'll import
388 # on the first call to the savefig decorator, we'll import
389 # pyplot as plt so we can make a call to the plt.gcf().savefig
389 # pyplot as plt so we can make a call to the plt.gcf().savefig
390 self._pyplot_imported = False
390 self._pyplot_imported = False
391
391
392 # Prepopulate the namespace.
392 # Prepopulate the namespace.
393 for line in exec_lines:
393 for line in exec_lines:
394 self.process_input_line(line, store_history=False)
394 self.process_input_line(line, store_history=False)
395
395
396 def cleanup(self):
396 def cleanup(self):
397 shutil.rmtree(self.tmp_profile_dir, ignore_errors=True)
397 shutil.rmtree(self.tmp_profile_dir, ignore_errors=True)
398
398
399 def clear_cout(self):
399 def clear_cout(self):
400 self.cout.seek(0)
400 self.cout.seek(0)
401 self.cout.truncate(0)
401 self.cout.truncate(0)
402
402
403 def process_input_line(self, line, store_history):
403 def process_input_line(self, line, store_history):
404 return self.process_input_lines([line], store_history=store_history)
404 return self.process_input_lines([line], store_history=store_history)
405
405
406 def process_input_lines(self, lines, store_history=True):
406 def process_input_lines(self, lines, store_history=True):
407 """process the input, capturing stdout"""
407 """process the input, capturing stdout"""
408 stdout = sys.stdout
408 stdout = sys.stdout
409 source_raw = '\n'.join(lines)
409 source_raw = '\n'.join(lines)
410 try:
410 try:
411 sys.stdout = self.cout
411 sys.stdout = self.cout
412 self.IP.run_cell(source_raw, store_history=store_history)
412 self.IP.run_cell(source_raw, store_history=store_history)
413 finally:
413 finally:
414 sys.stdout = stdout
414 sys.stdout = stdout
415
415
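# Illustrative sketch (not part of this class): the capture pattern used above,
# reduced to its essentials --
#
#     buf, saved = io.StringIO(), sys.stdout
#     try:
#         sys.stdout = buf
#         shell.run_cell(source, store_history=True)
#     finally:
#         sys.stdout = saved
#     captured = buf.getvalue()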
416 def process_image(self, decorator):
416 def process_image(self, decorator):
417 """
417 """
418 # build out an image directive like
418 # build out an image directive like
419 # .. image:: somefile.png
419 # .. image:: somefile.png
420 # :width 4in
420 # :width 4in
421 #
421 #
422 # from an input like
422 # from an input like
423 # savefig somefile.png width=4in
423 # savefig somefile.png width=4in
424 """
424 """
425 savefig_dir = self.savefig_dir
425 savefig_dir = self.savefig_dir
426 source_dir = self.source_dir
426 source_dir = self.source_dir
427 saveargs = decorator.split(' ')
427 saveargs = decorator.split(' ')
428 filename = saveargs[1]
428 filename = saveargs[1]
429 # insert relative path to image file in source
429 # insert relative path to image file in source
430 # as absolute path for Sphinx
430 # as absolute path for Sphinx
431 # sphinx expects a posix path, even on Windows
431 # sphinx expects a posix path, even on Windows
432 path = pathlib.Path(savefig_dir, filename)
432 path = pathlib.Path(savefig_dir, filename)
433 outfile = '/' + path.relative_to(source_dir).as_posix()
433 outfile = '/' + path.relative_to(source_dir).as_posix()
434
434
435 imagerows = ['.. image:: %s' % outfile]
435 imagerows = ['.. image:: %s' % outfile]
436
436
437 for kwarg in saveargs[2:]:
437 for kwarg in saveargs[2:]:
438 arg, val = kwarg.split('=')
438 arg, val = kwarg.split('=')
439 arg = arg.strip()
439 arg = arg.strip()
440 val = val.strip()
440 val = val.strip()
441 imagerows.append(' :%s: %s'%(arg, val))
441 imagerows.append(' :%s: %s'%(arg, val))
442
442
443 image_file = os.path.basename(outfile) # only return file name
443 image_file = os.path.basename(outfile) # only return file name
444 image_directive = '\n'.join(imagerows)
444 image_directive = '\n'.join(imagerows)
445 return image_file, image_directive
445 return image_file, image_directive
446
446
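# Illustrative sketch (not part of this class): given the pseudo-decorator
#
#     @savefig myplot.png width=4in
#
# process_image() returns roughly
#
#     image_file      == 'myplot.png'
#     image_directive == '.. image:: /<savefig dir relative to the Sphinx source dir>/myplot.png'
#                        plus the option line ':width: 4in'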
447 # Callbacks for each type of token
447 # Callbacks for each type of token
448 def process_input(self, data, input_prompt, lineno):
448 def process_input(self, data, input_prompt, lineno):
449 """
449 """
450 Process data block for INPUT token.
450 Process data block for INPUT token.
451
451
452 """
452 """
453 decorator, input, rest = data
453 decorator, input, rest = data
454 image_file = None
454 image_file = None
455 image_directive = None
455 image_directive = None
456
456
457 is_verbatim = decorator=='@verbatim' or self.is_verbatim
457 is_verbatim = decorator=='@verbatim' or self.is_verbatim
458 is_doctest = (decorator is not None and \
458 is_doctest = (decorator is not None and \
459 decorator.startswith('@doctest')) or self.is_doctest
459 decorator.startswith('@doctest')) or self.is_doctest
460 is_suppress = decorator=='@suppress' or self.is_suppress
460 is_suppress = decorator=='@suppress' or self.is_suppress
461 is_okexcept = decorator=='@okexcept' or self.is_okexcept
461 is_okexcept = decorator=='@okexcept' or self.is_okexcept
462 is_okwarning = decorator=='@okwarning' or self.is_okwarning
462 is_okwarning = decorator=='@okwarning' or self.is_okwarning
463 is_savefig = decorator is not None and \
463 is_savefig = decorator is not None and \
464 decorator.startswith('@savefig')
464 decorator.startswith('@savefig')
465
465
466 input_lines = input.split('\n')
466 input_lines = input.split('\n')
467 if len(input_lines) > 1:
467 if len(input_lines) > 1:
468 if input_lines[-1] != "":
468 if input_lines[-1] != "":
469 input_lines.append('') # make sure there's a blank line
469 input_lines.append('') # make sure there's a blank line
470 # so splitter buffer gets reset
470 # so splitter buffer gets reset
471
471
472 continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
472 continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
473
473
474 if is_savefig:
474 if is_savefig:
475 image_file, image_directive = self.process_image(decorator)
475 image_file, image_directive = self.process_image(decorator)
476
476
477 ret = []
477 ret = []
478 is_semicolon = False
478 is_semicolon = False
479
479
480 # Hold the execution count, if requested to do so.
480 # Hold the execution count, if requested to do so.
481 if is_suppress and self.hold_count:
481 if is_suppress and self.hold_count:
482 store_history = False
482 store_history = False
483 else:
483 else:
484 store_history = True
484 store_history = True
485
485
486 # Note: catch_warnings is not thread safe
486 # Note: catch_warnings is not thread safe
487 with warnings.catch_warnings(record=True) as ws:
487 with warnings.catch_warnings(record=True) as ws:
488 if input_lines[0].endswith(';'):
488 if input_lines[0].endswith(';'):
489 is_semicolon = True
489 is_semicolon = True
490 #for i, line in enumerate(input_lines):
490 #for i, line in enumerate(input_lines):
491
491
492 # process the first input line
492 # process the first input line
493 if is_verbatim:
493 if is_verbatim:
494 self.process_input_lines([''])
494 self.process_input_lines([''])
495 self.IP.execution_count += 1 # increment it anyway
495 self.IP.execution_count += 1 # increment it anyway
496 else:
496 else:
497 # only submit the line in non-verbatim mode
497 # only submit the line in non-verbatim mode
498 self.process_input_lines(input_lines, store_history=store_history)
498 self.process_input_lines(input_lines, store_history=store_history)
499
499
500 if not is_suppress:
500 if not is_suppress:
501 for i, line in enumerate(input_lines):
501 for i, line in enumerate(input_lines):
502 if i == 0:
502 if i == 0:
503 formatted_line = '%s %s'%(input_prompt, line)
503 formatted_line = '%s %s'%(input_prompt, line)
504 else:
504 else:
505 formatted_line = '%s %s'%(continuation, line)
505 formatted_line = '%s %s'%(continuation, line)
506 ret.append(formatted_line)
506 ret.append(formatted_line)
507
507
508 if not is_suppress and len(rest.strip()) and is_verbatim:
508 if not is_suppress and len(rest.strip()) and is_verbatim:
509 # The "rest" is the standard output of the input. This needs to be
509 # The "rest" is the standard output of the input. This needs to be
510 # added when in verbatim mode. If there is no "rest", then we don't
510 # added when in verbatim mode. If there is no "rest", then we don't
511 # add it, as the new line will be added by the processed output.
511 # add it, as the new line will be added by the processed output.
512 ret.append(rest)
512 ret.append(rest)
513
513
514 # Fetch the processed output. (This is not the submitted output.)
514 # Fetch the processed output. (This is not the submitted output.)
515 self.cout.seek(0)
515 self.cout.seek(0)
516 processed_output = self.cout.read()
516 processed_output = self.cout.read()
517 if not is_suppress and not is_semicolon:
517 if not is_suppress and not is_semicolon:
518 #
518 #
519 # In IPythonDirective.run, the elements of `ret` are eventually
519 # In IPythonDirective.run, the elements of `ret` are eventually
520 # combined such that '' entries correspond to newlines. So if
520 # combined such that '' entries correspond to newlines. So if
521 # `processed_output` is equal to '', then adding it to `ret`
521 # `processed_output` is equal to '', then adding it to `ret`
522 # ensures that there is a blank line between consecutive inputs
522 # ensures that there is a blank line between consecutive inputs
523 # that have no outputs, as in:
523 # that have no outputs, as in:
524 #
524 #
525 # In [1]: x = 4
525 # In [1]: x = 4
526 #
526 #
527 # In [2]: x = 5
527 # In [2]: x = 5
528 #
528 #
529 # When there is processed output, it has a '\n' at the tail end. So
529 # When there is processed output, it has a '\n' at the tail end. So
530 # adding the output to `ret` will provide the necessary spacing
530 # adding the output to `ret` will provide the necessary spacing
531 # between consecutive input/output blocks, as in:
531 # between consecutive input/output blocks, as in:
532 #
532 #
533 # In [1]: x
533 # In [1]: x
534 # Out[1]: 5
534 # Out[1]: 5
535 #
535 #
536 # In [2]: x
536 # In [2]: x
537 # Out[2]: 5
537 # Out[2]: 5
538 #
538 #
539 # When there is stdout from the input, it also has a '\n' at the
539 # When there is stdout from the input, it also has a '\n' at the
540 # tail end, and so this ensures proper spacing as well. E.g.:
540 # tail end, and so this ensures proper spacing as well. E.g.:
541 #
541 #
542 # In [1]: print(x)
542 # In [1]: print(x)
543 # 5
543 # 5
544 #
544 #
545 # In [2]: x = 5
545 # In [2]: x = 5
546 #
546 #
547 # When in verbatim mode, `processed_output` is empty (because
547 # When in verbatim mode, `processed_output` is empty (because
548 # nothing was passed to IP). Sometimes the submitted code block has
548 # nothing was passed to IP). Sometimes the submitted code block has
549 # an Out[] portion and sometimes it does not. When it does not, we
549 # an Out[] portion and sometimes it does not. When it does not, we
550 # need to ensure proper spacing, so we have to add '' to `ret`.
550 # need to ensure proper spacing, so we have to add '' to `ret`.
551 # However, if there is an Out[] in the submitted code, then we do
551 # However, if there is an Out[] in the submitted code, then we do
552 # not want to add a newline as `process_output` has stuff to add.
552 # not want to add a newline as `process_output` has stuff to add.
553 # The difficulty is that `process_input` doesn't know if
553 # The difficulty is that `process_input` doesn't know if
554 # `process_output` will be called---so it doesn't know if there is
554 # `process_output` will be called---so it doesn't know if there is
555 # Out[] in the code block. This requires that we include a hack in
555 # Out[] in the code block. This requires that we include a hack in
556 # `process_block`. See the comments there.
556 # `process_block`. See the comments there.
557 #
557 #
558 ret.append(processed_output)
558 ret.append(processed_output)
559 elif is_semicolon:
559 elif is_semicolon:
560 # Make sure there is a newline after the semicolon.
560 # Make sure there is a newline after the semicolon.
561 ret.append('')
561 ret.append('')
562
562
563 # context information
563 # context information
564 filename = "Unknown"
564 filename = "Unknown"
565 lineno = 0
565 lineno = 0
566 if self.directive.state:
566 if self.directive.state:
567 filename = self.directive.state.document.current_source
567 filename = self.directive.state.document.current_source
568 lineno = self.directive.state.document.current_line
568 lineno = self.directive.state.document.current_line
569
569
570 # Use sphinx logger for warnings
570 # Use sphinx logger for warnings
571 logger = logging.getLogger(__name__)
571 logger = logging.getLogger(__name__)
572
572
573 # output any exceptions raised during execution to stdout
573 # output any exceptions raised during execution to stdout
574 # unless :okexcept: has been specified.
574 # unless :okexcept: has been specified.
575 if not is_okexcept and (
575 if not is_okexcept and (
576 ("Traceback" in processed_output) or ("SyntaxError" in processed_output)
576 ("Traceback" in processed_output) or ("SyntaxError" in processed_output)
577 ):
577 ):
578 s = "\n>>>" + ("-" * 73) + "\n"
578 s = "\n>>>" + ("-" * 73) + "\n"
579 s += "Exception in %s at block ending on line %s\n" % (filename, lineno)
579 s += "Exception in %s at block ending on line %s\n" % (filename, lineno)
580 s += "Specify :okexcept: as an option in the ipython:: block to suppress this message\n"
580 s += "Specify :okexcept: as an option in the ipython:: block to suppress this message\n"
581 s += processed_output + "\n"
581 s += processed_output + "\n"
582 s += "<<<" + ("-" * 73)
582 s += "<<<" + ("-" * 73)
583 logger.warning(s)
583 logger.warning(s)
584 if self.warning_is_error:
584 if self.warning_is_error:
585 raise RuntimeError(
585 raise RuntimeError(
586 "Unexpected exception in `{}` line {}".format(filename, lineno)
586 "Unexpected exception in `{}` line {}".format(filename, lineno)
587 )
587 )
588
588
589 # output any warning raised during execution to stdout
589 # output any warning raised during execution to stdout
590 # unless :okwarning: has been specified.
590 # unless :okwarning: has been specified.
591 if not is_okwarning:
591 if not is_okwarning:
592 for w in ws:
592 for w in ws:
593 s = "\n>>>" + ("-" * 73) + "\n"
593 s = "\n>>>" + ("-" * 73) + "\n"
594 s += "Warning in %s at block ending on line %s\n" % (filename, lineno)
594 s += "Warning in %s at block ending on line %s\n" % (filename, lineno)
595 s += "Specify :okwarning: as an option in the ipython:: block to suppress this message\n"
595 s += "Specify :okwarning: as an option in the ipython:: block to suppress this message\n"
596 s += ("-" * 76) + "\n"
596 s += ("-" * 76) + "\n"
597 s += warnings.formatwarning(
597 s += warnings.formatwarning(
598 w.message, w.category, w.filename, w.lineno, w.line
598 w.message, w.category, w.filename, w.lineno, w.line
599 )
599 )
600 s += "<<<" + ("-" * 73)
600 s += "<<<" + ("-" * 73)
601 logger.warning(s)
601 logger.warning(s)
602 if self.warning_is_error:
602 if self.warning_is_error:
603 raise RuntimeError(
603 raise RuntimeError(
604 "Unexpected warning in `{}` line {}".format(filename, lineno)
604 "Unexpected warning in `{}` line {}".format(filename, lineno)
605 )
605 )
606
606
607 self.clear_cout()
607 self.clear_cout()
608 return (ret, input_lines, processed_output,
608 return (ret, input_lines, processed_output,
609 is_doctest, decorator, image_file, image_directive)
609 is_doctest, decorator, image_file, image_directive)
610
610
611
611
612 def process_output(self, data, output_prompt, input_lines, output,
612 def process_output(self, data, output_prompt, input_lines, output,
613 is_doctest, decorator, image_file):
613 is_doctest, decorator, image_file):
614 """
614 """
615 Process data block for OUTPUT token.
615 Process data block for OUTPUT token.
616
616
617 """
617 """
618 # Recall: `data` is the submitted output, and `output` is the processed
618 # Recall: `data` is the submitted output, and `output` is the processed
619 # output from `input_lines`.
619 # output from `input_lines`.
620
620
621 TAB = ' ' * 4
621 TAB = ' ' * 4
622
622
623 if is_doctest and output is not None:
623 if is_doctest and output is not None:
624
624
625 found = output # This is the processed output
625 found = output # This is the processed output
626 found = found.strip()
626 found = found.strip()
627 submitted = data.strip()
627 submitted = data.strip()
628
628
629 if self.directive is None:
629 if self.directive is None:
630 source = 'Unavailable'
630 source = 'Unavailable'
631 content = 'Unavailable'
631 content = 'Unavailable'
632 else:
632 else:
633 source = self.directive.state.document.current_source
633 source = self.directive.state.document.current_source
634 content = self.directive.content
634 content = self.directive.content
635 # Add tabs and join into a single string.
635 # Add tabs and join into a single string.
636 content = '\n'.join([TAB + line for line in content])
636 content = '\n'.join([TAB + line for line in content])
637
637
638 # Make sure the output contains the output prompt.
638 # Make sure the output contains the output prompt.
639 ind = found.find(output_prompt)
639 ind = found.find(output_prompt)
640 if ind < 0:
640 if ind < 0:
641 e = ('output does not contain output prompt\n\n'
641 e = ('output does not contain output prompt\n\n'
642 'Document source: {0}\n\n'
642 'Document source: {0}\n\n'
643 'Raw content: \n{1}\n\n'
643 'Raw content: \n{1}\n\n'
644 'Input line(s):\n{TAB}{2}\n\n'
644 'Input line(s):\n{TAB}{2}\n\n'
645 'Output line(s):\n{TAB}{3}\n\n')
645 'Output line(s):\n{TAB}{3}\n\n')
646 e = e.format(source, content, '\n'.join(input_lines),
646 e = e.format(source, content, '\n'.join(input_lines),
647 repr(found), TAB=TAB)
647 repr(found), TAB=TAB)
648 raise RuntimeError(e)
648 raise RuntimeError(e)
649 found = found[len(output_prompt):].strip()
649 found = found[len(output_prompt):].strip()
650
650
651 # Handle the actual doctest comparison.
651 # Handle the actual doctest comparison.
652 if decorator.strip() == '@doctest':
652 if decorator.strip() == '@doctest':
653 # Standard doctest
653 # Standard doctest
654 if found != submitted:
654 if found != submitted:
655 e = ('doctest failure\n\n'
655 e = ('doctest failure\n\n'
656 'Document source: {0}\n\n'
656 'Document source: {0}\n\n'
657 'Raw content: \n{1}\n\n'
657 'Raw content: \n{1}\n\n'
658 'On input line(s):\n{TAB}{2}\n\n'
658 'On input line(s):\n{TAB}{2}\n\n'
659 'we found output:\n{TAB}{3}\n\n'
659 'we found output:\n{TAB}{3}\n\n'
660 'instead of the expected:\n{TAB}{4}\n\n')
660 'instead of the expected:\n{TAB}{4}\n\n')
661 e = e.format(source, content, '\n'.join(input_lines),
661 e = e.format(source, content, '\n'.join(input_lines),
662 repr(found), repr(submitted), TAB=TAB)
662 repr(found), repr(submitted), TAB=TAB)
663 raise RuntimeError(e)
663 raise RuntimeError(e)
664 else:
664 else:
665 self.custom_doctest(decorator, input_lines, found, submitted)
665 self.custom_doctest(decorator, input_lines, found, submitted)
666
666
667 # When in verbatim mode, this holds additional submitted output
667 # When in verbatim mode, this holds additional submitted output
668 # to be written in the final Sphinx output.
668 # to be written in the final Sphinx output.
669 # https://github.com/ipython/ipython/issues/5776
669 # https://github.com/ipython/ipython/issues/5776
670 out_data = []
670 out_data = []
671
671
672 is_verbatim = decorator=='@verbatim' or self.is_verbatim
672 is_verbatim = decorator=='@verbatim' or self.is_verbatim
673 if is_verbatim and data.strip():
673 if is_verbatim and data.strip():
674 # Note that `ret` in `process_block` has '' as its last element if
674 # Note that `ret` in `process_block` has '' as its last element if
675 # the code block was in verbatim mode. So if there is no submitted
675 # the code block was in verbatim mode. So if there is no submitted
676 # output, then we will have proper spacing only if we do not add
676 # output, then we will have proper spacing only if we do not add
677 # an additional '' to `out_data`. This is why we condition on
677 # an additional '' to `out_data`. This is why we condition on
678 # `and data.strip()`.
678 # `and data.strip()`.
679
679
680 # The submitted output has no output prompt. If we want the
680 # The submitted output has no output prompt. If we want the
681 # prompt and the code to appear, we need to join them now
681 # prompt and the code to appear, we need to join them now
682 # instead of adding them separately---as this would create an
682 # instead of adding them separately---as this would create an
683 # undesired newline. How we do this ultimately depends on the
683 # undesired newline. How we do this ultimately depends on the
684 # format of the output regex. I'll do what works for the default
684 # format of the output regex. I'll do what works for the default
685 # prompt for now, and we might have to adjust if it doesn't work
685 # prompt for now, and we might have to adjust if it doesn't work
686 # in other cases. Finally, the submitted output does not have
686 # in other cases. Finally, the submitted output does not have
687 # a trailing newline, so we must add it manually.
687 # a trailing newline, so we must add it manually.
688 out_data.append("{0} {1}\n".format(output_prompt, data))
688 out_data.append("{0} {1}\n".format(output_prompt, data))
689
689
690 return out_data
690 return out_data
691
691
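# Editor's note (illustrative, based only on the code above): in verbatim mode
# the submitted output is re-emitted as ``"{output_prompt} {data}\n"``, i.e.
# the prompt and the submitted text joined by a single space with a trailing
# newline, so that it lines up with the input lines already appended to
# ``ret`` by ``process_input``.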
692 def process_comment(self, data):
692 def process_comment(self, data):
693 """Process data fPblock for COMMENT token."""
693 """Process data fPblock for COMMENT token."""
694 if not self.is_suppress:
694 if not self.is_suppress:
695 return [data]
695 return [data]
696
696
697 def save_image(self, image_file):
697 def save_image(self, image_file):
698 """
698 """
699 Saves the image file to disk.
699 Saves the image file to disk.
700 """
700 """
701 self.ensure_pyplot()
701 self.ensure_pyplot()
702 command = 'plt.gcf().savefig("%s")'%image_file
702 command = 'plt.gcf().savefig("%s")'%image_file
703 # print('SAVEFIG', command) # dbg
703 # print('SAVEFIG', command) # dbg
704 self.process_input_line('bookmark ipy_thisdir', store_history=False)
704 self.process_input_line('bookmark ipy_thisdir', store_history=False)
705 self.process_input_line('cd -b ipy_savedir', store_history=False)
705 self.process_input_line('cd -b ipy_savedir', store_history=False)
706 self.process_input_line(command, store_history=False)
706 self.process_input_line(command, store_history=False)
707 self.process_input_line('cd -b ipy_thisdir', store_history=False)
707 self.process_input_line('cd -b ipy_thisdir', store_history=False)
708 self.process_input_line('bookmark -d ipy_thisdir', store_history=False)
708 self.process_input_line('bookmark -d ipy_thisdir', store_history=False)
709 self.clear_cout()
709 self.clear_cout()
710
710
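# Editor's note (illustrative): ``save_image`` drives the embedded shell
# itself; the ``bookmark`` and ``cd -b`` magics temporarily switch into the
# savefig directory so that ``plt.gcf().savefig(...)`` writes the file there,
# and then the previous working directory is restored.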
711 def process_block(self, block):
711 def process_block(self, block):
712 """
712 """
713 Process a block from the block_parser and return a list of processed lines.
713 Process a block from the block_parser and return a list of processed lines.
714 """
714 """
715 ret = []
715 ret = []
716 output = None
716 output = None
717 input_lines = None
717 input_lines = None
718 lineno = self.IP.execution_count
718 lineno = self.IP.execution_count
719
719
720 input_prompt = self.promptin % lineno
720 input_prompt = self.promptin % lineno
721 output_prompt = self.promptout % lineno
721 output_prompt = self.promptout % lineno
722 image_file = None
722 image_file = None
723 image_directive = None
723 image_directive = None
724
724
725 found_input = False
725 found_input = False
726 for token, data in block:
726 for token, data in block:
727 if token == COMMENT:
727 if token == COMMENT:
728 out_data = self.process_comment(data)
728 out_data = self.process_comment(data)
729 elif token == INPUT:
729 elif token == INPUT:
730 found_input = True
730 found_input = True
731 (out_data, input_lines, output, is_doctest,
731 (out_data, input_lines, output, is_doctest,
732 decorator, image_file, image_directive) = \
732 decorator, image_file, image_directive) = \
733 self.process_input(data, input_prompt, lineno)
733 self.process_input(data, input_prompt, lineno)
734 elif token == OUTPUT:
734 elif token == OUTPUT:
735 if not found_input:
735 if not found_input:
736
736
737 TAB = ' ' * 4
737 TAB = ' ' * 4
738 linenumber = 0
738 linenumber = 0
739 source = 'Unavailable'
739 source = 'Unavailable'
740 content = 'Unavailable'
740 content = 'Unavailable'
741 if self.directive:
741 if self.directive:
742 linenumber = self.directive.state.document.current_line
742 linenumber = self.directive.state.document.current_line
743 source = self.directive.state.document.current_source
743 source = self.directive.state.document.current_source
744 content = self.directive.content
744 content = self.directive.content
745 # Add tabs and join into a single string.
745 # Add tabs and join into a single string.
746 content = '\n'.join([TAB + line for line in content])
746 content = '\n'.join([TAB + line for line in content])
747
747
748 e = ('\n\nInvalid block: Block contains an output prompt '
748 e = ('\n\nInvalid block: Block contains an output prompt '
749 'without an input prompt.\n\n'
749 'without an input prompt.\n\n'
750 'Document source: {0}\n\n'
750 'Document source: {0}\n\n'
751 'Content begins at line {1}: \n\n{2}\n\n'
751 'Content begins at line {1}: \n\n{2}\n\n'
752 'Problematic block within content: \n\n{TAB}{3}\n\n')
752 'Problematic block within content: \n\n{TAB}{3}\n\n')
753 e = e.format(source, linenumber, content, block, TAB=TAB)
753 e = e.format(source, linenumber, content, block, TAB=TAB)
754
754
755 # Write, rather than include in exception, since Sphinx
755 # Write, rather than include in exception, since Sphinx
756 # will truncate tracebacks.
756 # will truncate tracebacks.
757 sys.stdout.write(e)
757 sys.stdout.write(e)
758 raise RuntimeError('An invalid block was detected.')
758 raise RuntimeError('An invalid block was detected.')
759 out_data = \
759 out_data = \
760 self.process_output(data, output_prompt, input_lines,
760 self.process_output(data, output_prompt, input_lines,
761 output, is_doctest, decorator,
761 output, is_doctest, decorator,
762 image_file)
762 image_file)
763 if out_data:
763 if out_data:
764 # Then there was user submitted output in verbatim mode.
764 # Then there was user submitted output in verbatim mode.
765 # We need to remove the last element of `ret` that was
765 # We need to remove the last element of `ret` that was
766 # added in `process_input`, as it is '' and would introduce
766 # added in `process_input`, as it is '' and would introduce
767 # an undesirable newline.
767 # an undesirable newline.
768 assert(ret[-1] == '')
768 assert(ret[-1] == '')
769 del ret[-1]
769 del ret[-1]
770
770
771 if out_data:
771 if out_data:
772 ret.extend(out_data)
772 ret.extend(out_data)
773
773
774 # save the image files
774 # save the image files
775 if image_file is not None:
775 if image_file is not None:
776 self.save_image(image_file)
776 self.save_image(image_file)
777
777
778 return ret, image_directive
778 return ret, image_directive
779
779
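# Editor's illustrative sketch (not from the original source): ``block`` is a
# list of ``(token, data)`` pairs produced by ``block_parser`` (defined
# earlier in this file). For an INPUT token, ``data`` is the
# ``(decorator, input, rest)`` triple unpacked in ``process_input``; for an
# OUTPUT token it is the user-submitted output text. A doctest-style pair
# might look roughly like:
#
#     [(INPUT, ('@doctest', "x.upper()", '')),
#      (OUTPUT, "'HELLO WORLD'")]
#
# though the exact shapes depend on ``block_parser``, which is not shown here.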
780 def ensure_pyplot(self):
780 def ensure_pyplot(self):
781 """
781 """
782 Ensures that pyplot has been imported into the embedded IPython shell.
782 Ensures that pyplot has been imported into the embedded IPython shell.
783
783
784 Also, makes sure to set the backend appropriately if not set already.
784 Also, makes sure to set the backend appropriately if not set already.
785
785
786 """
786 """
787 # We are here if the @savefig pseudo decorator was used. Thus, it's
787 # We are here if the @savefig pseudo decorator was used. Thus, it's
788 # possible that we could be here even if ipython_mplbackend were set to
788 # possible that we could be here even if ipython_mplbackend were set to
789 # `None`. That's also strange and perhaps worthy of raising an
789 # `None`. That's also strange and perhaps worthy of raising an
790 # exception, but for now, we just set the backend to 'agg'.
790 # exception, but for now, we just set the backend to 'agg'.
791
791
792 if not self._pyplot_imported:
792 if not self._pyplot_imported:
793 if 'matplotlib.backends' not in sys.modules:
793 if 'matplotlib.backends' not in sys.modules:
794 # Then ipython_mplbackend was set to None but there was a
794 # Then ipython_mplbackend was set to None but there was a
795 # call to the @savefig decorator (and ipython_execlines did
795 # call to the @savefig decorator (and ipython_execlines did
796 # not set a backend).
796 # not set a backend).
797 #raise Exception("No backend was set, but @figure was used!")
797 #raise Exception("No backend was set, but @figure was used!")
798 import matplotlib
798 import matplotlib
799 matplotlib.use('agg')
799 matplotlib.use('agg')
800
800
801 # Always import pyplot into embedded shell.
801 # Always import pyplot into embedded shell.
802 self.process_input_line('import matplotlib.pyplot as plt',
802 self.process_input_line('import matplotlib.pyplot as plt',
803 store_history=False)
803 store_history=False)
804 self._pyplot_imported = True
804 self._pyplot_imported = True
805
805
806 def process_pure_python(self, content):
806 def process_pure_python(self, content):
807 """
807 """
808 content is a list of strings; it is the unedited directive content.
808 content is a list of strings; it is the unedited directive content.
809
809
810 This runs it line by line in the InteractiveShell, prepending
810 This runs it line by line in the InteractiveShell, prepending
811 prompts as needed while capturing stderr and stdout, then returns
811 prompts as needed while capturing stderr and stdout, then returns
812 the content as a list as if it were IPython code.
812 the content as a list as if it were IPython code.
813 """
813 """
814 output = []
814 output = []
815 savefig = False # track whether we need to clear the figure
815 savefig = False # track whether we need to clear the figure
816 multiline = False # to handle line continuation
816 multiline = False # to handle line continuation
817 multiline_start = None
817 multiline_start = None
818 fmtin = self.promptin
818 fmtin = self.promptin
819
819
820 ct = 0
820 ct = 0
821
821
822 for lineno, line in enumerate(content):
822 for lineno, line in enumerate(content):
823
823
824 line_stripped = line.strip()
824 line_stripped = line.strip()
825 if not len(line):
825 if not len(line):
826 output.append(line)
826 output.append(line)
827 continue
827 continue
828
828
829 # handle pseudo-decorators, whilst ensuring real python decorators are treated as input
829 # handle pseudo-decorators, whilst ensuring real python decorators are treated as input
830 if any(
830 if any(
831 line_stripped.startswith("@" + pseudo_decorator)
831 line_stripped.startswith("@" + pseudo_decorator)
832 for pseudo_decorator in PSEUDO_DECORATORS
832 for pseudo_decorator in PSEUDO_DECORATORS
833 ):
833 ):
834 output.extend([line])
834 output.extend([line])
835 if 'savefig' in line:
835 if 'savefig' in line:
836 savefig = True # and need to clear figure
836 savefig = True # and need to clear figure
837 continue
837 continue
838
838
839 # handle comments
839 # handle comments
840 if line_stripped.startswith('#'):
840 if line_stripped.startswith('#'):
841 output.extend([line])
841 output.extend([line])
842 continue
842 continue
843
843
844 # deal with lines checking for multiline
844 # deal with lines checking for multiline
845 continuation = u' %s:'% ''.join(['.']*(len(str(ct))+2))
845 continuation = u' %s:'% ''.join(['.']*(len(str(ct))+2))
846 if not multiline:
846 if not multiline:
847 modified = u"%s %s" % (fmtin % ct, line_stripped)
847 modified = u"%s %s" % (fmtin % ct, line_stripped)
848 output.append(modified)
848 output.append(modified)
849 ct += 1
849 ct += 1
850 try:
850 try:
851 ast.parse(line_stripped)
851 ast.parse(line_stripped)
852 output.append(u'')
852 output.append(u'')
853 except Exception: # on a multiline
853 except Exception: # on a multiline
854 multiline = True
854 multiline = True
855 multiline_start = lineno
855 multiline_start = lineno
856 else: # still on a multiline
856 else: # still on a multiline
857 modified = u'%s %s' % (continuation, line)
857 modified = u'%s %s' % (continuation, line)
858 output.append(modified)
858 output.append(modified)
859
859
860 # if the next line is indented, it should be part of multiline
860 # if the next line is indented, it should be part of multiline
861 if len(content) > lineno + 1:
861 if len(content) > lineno + 1:
862 nextline = content[lineno + 1]
862 nextline = content[lineno + 1]
863 if len(nextline) - len(nextline.lstrip()) > 3:
863 if len(nextline) - len(nextline.lstrip()) > 3:
864 continue
864 continue
865 try:
865 try:
866 mod = ast.parse(
866 mod = ast.parse(
867 '\n'.join(content[multiline_start:lineno+1]))
867 '\n'.join(content[multiline_start:lineno+1]))
868 if isinstance(mod.body[0], ast.FunctionDef):
868 if isinstance(mod.body[0], ast.FunctionDef):
869 # check to see if we have the whole function
869 # check to see if we have the whole function
870 for element in mod.body[0].body:
870 for element in mod.body[0].body:
871 if isinstance(element, ast.Return):
871 if isinstance(element, ast.Return):
872 multiline = False
872 multiline = False
873 else:
873 else:
874 output.append(u'')
874 output.append(u'')
875 multiline = False
875 multiline = False
876 except Exception:
876 except Exception:
877 pass
877 pass
878
878
879 if savefig: # clear figure if plotted
879 if savefig: # clear figure if plotted
880 self.ensure_pyplot()
880 self.ensure_pyplot()
881 self.process_input_line('plt.clf()', store_history=False)
881 self.process_input_line('plt.clf()', store_history=False)
882 self.clear_cout()
882 self.clear_cout()
883 savefig = False
883 savefig = False
884
884
885 return output
885 return output
886
886
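# Editor's illustrative sketch (assumed behaviour, inferred from the code
# above): with the default ``In [%d]:`` prompt, plain directive content such
# as ``['x = 1', 'print(x)']`` comes back as roughly
# ``['In [0]: x = 1', '', 'In [1]: print(x)', '']``; each line that parses on
# its own gets a prompt plus a blank separator, while lines that fail
# ``ast.parse`` start a multiline run rendered with a dotted ``...:``-style
# continuation prefix. Note that the local counter ``ct`` starts at 0 and is
# independent of the shell's real execution count.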
887 def custom_doctest(self, decorator, input_lines, found, submitted):
887 def custom_doctest(self, decorator, input_lines, found, submitted):
888 """
888 """
889 Perform a specialized doctest.
889 Perform a specialized doctest.
890
890
891 """
891 """
892 from .custom_doctests import doctests
892 from .custom_doctests import doctests
893
893
894 args = decorator.split()
894 args = decorator.split()
895 doctest_type = args[1]
895 doctest_type = args[1]
896 if doctest_type in doctests:
896 if doctest_type in doctests:
897 doctests[doctest_type](self, args, input_lines, found, submitted)
897 doctests[doctest_type](self, args, input_lines, found, submitted)
898 else:
898 else:
899 e = "Invalid option to @doctest: {0}".format(doctest_type)
899 e = "Invalid option to @doctest: {0}".format(doctest_type)
900 raise Exception(e)
900 raise Exception(e)
901
901
902
902
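# Editor's note (illustrative): a pseudo-decorator such as ``@doctest float``
# (used in the smoke-test examples at the bottom of this file) is dispatched
# here as ``doctests['float'](self, args, input_lines, found, submitted)``,
# delegating the comparison to the registered handler instead of the exact
# string match used for a plain ``@doctest``.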
903 class IPythonDirective(Directive):
903 class IPythonDirective(Directive):
904
904
905 has_content: bool = True
905 has_content: bool = True
906 required_arguments: int = 0
906 required_arguments: int = 0
907 optional_arguments: int = 4 # python, suppress, verbatim, doctest
907 optional_arguments: int = 4 # python, suppress, verbatim, doctest
908 final_argument_whitespace: bool = True
908 final_argument_whitespace: bool = True
909 option_spec: Dict[str, Any] = { 'python': directives.unchanged,
909 option_spec: Dict[str, Any] = {
910 'suppress' : directives.flag,
910 "python": directives.unchanged,
911 'verbatim' : directives.flag,
911 "suppress": directives.flag,
912 'doctest' : directives.flag,
912 "verbatim": directives.flag,
913 'okexcept': directives.flag,
913 "doctest": directives.flag,
914 'okwarning': directives.flag
914 "okexcept": directives.flag,
915 }
915 "okwarning": directives.flag,
916 }
916
917
917 shell = None
918 shell = None
918
919
919 seen_docs: Set = set()
920 seen_docs: Set = set()
920
921
921 def get_config_options(self):
922 def get_config_options(self):
922 # contains sphinx configuration variables
923 # contains sphinx configuration variables
923 config = self.state.document.settings.env.config
924 config = self.state.document.settings.env.config
924
925
925 # get config variables to set figure output directory
926 # get config variables to set figure output directory
926 savefig_dir = config.ipython_savefig_dir
927 savefig_dir = config.ipython_savefig_dir
927 source_dir = self.state.document.settings.env.srcdir
928 source_dir = self.state.document.settings.env.srcdir
928 savefig_dir = os.path.join(source_dir, savefig_dir)
929 savefig_dir = os.path.join(source_dir, savefig_dir)
929
930
930 # get regex and prompt stuff
931 # get regex and prompt stuff
931 rgxin = config.ipython_rgxin
932 rgxin = config.ipython_rgxin
932 rgxout = config.ipython_rgxout
933 rgxout = config.ipython_rgxout
933 warning_is_error= config.ipython_warning_is_error
934 warning_is_error= config.ipython_warning_is_error
934 promptin = config.ipython_promptin
935 promptin = config.ipython_promptin
935 promptout = config.ipython_promptout
936 promptout = config.ipython_promptout
936 mplbackend = config.ipython_mplbackend
937 mplbackend = config.ipython_mplbackend
937 exec_lines = config.ipython_execlines
938 exec_lines = config.ipython_execlines
938 hold_count = config.ipython_holdcount
939 hold_count = config.ipython_holdcount
939
940
940 return (savefig_dir, source_dir, rgxin, rgxout,
941 return (savefig_dir, source_dir, rgxin, rgxout,
941 promptin, promptout, mplbackend, exec_lines, hold_count, warning_is_error)
942 promptin, promptout, mplbackend, exec_lines, hold_count, warning_is_error)
942
943
943 def setup(self):
944 def setup(self):
944 # Get configuration values.
945 # Get configuration values.
945 (savefig_dir, source_dir, rgxin, rgxout, promptin, promptout,
946 (savefig_dir, source_dir, rgxin, rgxout, promptin, promptout,
946 mplbackend, exec_lines, hold_count, warning_is_error) = self.get_config_options()
947 mplbackend, exec_lines, hold_count, warning_is_error) = self.get_config_options()
947
948
948 try:
949 try:
949 os.makedirs(savefig_dir)
950 os.makedirs(savefig_dir)
950 except OSError as e:
951 except OSError as e:
951 if e.errno != errno.EEXIST:
952 if e.errno != errno.EEXIST:
952 raise
953 raise
953
954
954 if self.shell is None:
955 if self.shell is None:
955 # We will be here many times. However, when the
956 # We will be here many times. However, when the
956 # EmbeddedSphinxShell is created, its interactive shell member
957 # EmbeddedSphinxShell is created, its interactive shell member
957 # is the same for each instance.
958 # is the same for each instance.
958
959
959 if mplbackend and 'matplotlib.backends' not in sys.modules and use_matplotlib:
960 if mplbackend and 'matplotlib.backends' not in sys.modules and use_matplotlib:
960 import matplotlib
961 import matplotlib
961 matplotlib.use(mplbackend)
962 matplotlib.use(mplbackend)
962
963
963 # Must be called after (potentially) importing matplotlib and
964 # Must be called after (potentially) importing matplotlib and
964 # setting its backend since exec_lines might import pylab.
965 # setting its backend since exec_lines might import pylab.
965 self.shell = EmbeddedSphinxShell(exec_lines)
966 self.shell = EmbeddedSphinxShell(exec_lines)
966
967
967 # Store IPython directive to enable better error messages
968 # Store IPython directive to enable better error messages
968 self.shell.directive = self
969 self.shell.directive = self
969
970
970 # reset the execution count if we haven't processed this doc
971 # reset the execution count if we haven't processed this doc
971 #NOTE: this may be borked if there are multiple seen_doc tmp files
972 #NOTE: this may be borked if there are multiple seen_doc tmp files
972 #check time stamp?
973 #check time stamp?
973 if not self.state.document.current_source in self.seen_docs:
974 if not self.state.document.current_source in self.seen_docs:
974 self.shell.IP.history_manager.reset()
975 self.shell.IP.history_manager.reset()
975 self.shell.IP.execution_count = 1
976 self.shell.IP.execution_count = 1
976 self.seen_docs.add(self.state.document.current_source)
977 self.seen_docs.add(self.state.document.current_source)
977
978
978 # and attach to shell so we don't have to pass them around
979 # and attach to shell so we don't have to pass them around
979 self.shell.rgxin = rgxin
980 self.shell.rgxin = rgxin
980 self.shell.rgxout = rgxout
981 self.shell.rgxout = rgxout
981 self.shell.promptin = promptin
982 self.shell.promptin = promptin
982 self.shell.promptout = promptout
983 self.shell.promptout = promptout
983 self.shell.savefig_dir = savefig_dir
984 self.shell.savefig_dir = savefig_dir
984 self.shell.source_dir = source_dir
985 self.shell.source_dir = source_dir
985 self.shell.hold_count = hold_count
986 self.shell.hold_count = hold_count
986 self.shell.warning_is_error = warning_is_error
987 self.shell.warning_is_error = warning_is_error
987
988
988 # setup bookmark for saving figures directory
989 # setup bookmark for saving figures directory
989 self.shell.process_input_line(
990 self.shell.process_input_line(
990 'bookmark ipy_savedir "%s"' % savefig_dir, store_history=False
991 'bookmark ipy_savedir "%s"' % savefig_dir, store_history=False
991 )
992 )
992 self.shell.clear_cout()
993 self.shell.clear_cout()
993
994
994 return rgxin, rgxout, promptin, promptout
995 return rgxin, rgxout, promptin, promptout
995
996
996 def teardown(self):
997 def teardown(self):
997 # delete last bookmark
998 # delete last bookmark
998 self.shell.process_input_line('bookmark -d ipy_savedir',
999 self.shell.process_input_line('bookmark -d ipy_savedir',
999 store_history=False)
1000 store_history=False)
1000 self.shell.clear_cout()
1001 self.shell.clear_cout()
1001
1002
1002 def run(self):
1003 def run(self):
1003 debug = False
1004 debug = False
1004
1005
1005 #TODO, any reason block_parser can't be a method of embeddable shell
1006 #TODO, any reason block_parser can't be a method of embeddable shell
1006 # then we wouldn't have to carry these around
1007 # then we wouldn't have to carry these around
1007 rgxin, rgxout, promptin, promptout = self.setup()
1008 rgxin, rgxout, promptin, promptout = self.setup()
1008
1009
1009 options = self.options
1010 options = self.options
1010 self.shell.is_suppress = 'suppress' in options
1011 self.shell.is_suppress = 'suppress' in options
1011 self.shell.is_doctest = 'doctest' in options
1012 self.shell.is_doctest = 'doctest' in options
1012 self.shell.is_verbatim = 'verbatim' in options
1013 self.shell.is_verbatim = 'verbatim' in options
1013 self.shell.is_okexcept = 'okexcept' in options
1014 self.shell.is_okexcept = 'okexcept' in options
1014 self.shell.is_okwarning = 'okwarning' in options
1015 self.shell.is_okwarning = 'okwarning' in options
1015
1016
1016 # handle pure python code
1017 # handle pure python code
1017 if 'python' in self.arguments:
1018 if 'python' in self.arguments:
1018 content = self.content
1019 content = self.content
1019 self.content = self.shell.process_pure_python(content)
1020 self.content = self.shell.process_pure_python(content)
1020
1021
1021 # parts consist of all the text within the ipython-block.
1022 # parts consist of all the text within the ipython-block.
1022 # Each part is an input/output block.
1023 # Each part is an input/output block.
1023 parts = '\n'.join(self.content).split('\n\n')
1024 parts = '\n'.join(self.content).split('\n\n')
1024
1025
1025 lines = ['.. code-block:: ipython', '']
1026 lines = ['.. code-block:: ipython', '']
1026 figures = []
1027 figures = []
1027
1028
1028 # Use sphinx logger for warnings
1029 # Use sphinx logger for warnings
1029 logger = logging.getLogger(__name__)
1030 logger = logging.getLogger(__name__)
1030
1031
1031 for part in parts:
1032 for part in parts:
1032 block = block_parser(part, rgxin, rgxout, promptin, promptout)
1033 block = block_parser(part, rgxin, rgxout, promptin, promptout)
1033 if len(block):
1034 if len(block):
1034 rows, figure = self.shell.process_block(block)
1035 rows, figure = self.shell.process_block(block)
1035 for row in rows:
1036 for row in rows:
1036 lines.extend([' {0}'.format(line)
1037 lines.extend([' {0}'.format(line)
1037 for line in row.split('\n')])
1038 for line in row.split('\n')])
1038
1039
1039 if figure is not None:
1040 if figure is not None:
1040 figures.append(figure)
1041 figures.append(figure)
1041 else:
1042 else:
1042 message = 'Code input with no code at {}, line {}'\
1043 message = 'Code input with no code at {}, line {}'\
1043 .format(
1044 .format(
1044 self.state.document.current_source,
1045 self.state.document.current_source,
1045 self.state.document.current_line)
1046 self.state.document.current_line)
1046 if self.shell.warning_is_error:
1047 if self.shell.warning_is_error:
1047 raise RuntimeError(message)
1048 raise RuntimeError(message)
1048 else:
1049 else:
1049 logger.warning(message)
1050 logger.warning(message)
1050
1051
1051 for figure in figures:
1052 for figure in figures:
1052 lines.append('')
1053 lines.append('')
1053 lines.extend(figure.split('\n'))
1054 lines.extend(figure.split('\n'))
1054 lines.append('')
1055 lines.append('')
1055
1056
1056 if len(lines) > 2:
1057 if len(lines) > 2:
1057 if debug:
1058 if debug:
1058 print('\n'.join(lines))
1059 print('\n'.join(lines))
1059 else:
1060 else:
1060 # This has to do with input, not output. But if we comment
1061 # This has to do with input, not output. But if we comment
1061 # these lines out, then no IPython code will appear in the
1062 # these lines out, then no IPython code will appear in the
1062 # final output.
1063 # final output.
1063 self.state_machine.insert_input(
1064 self.state_machine.insert_input(
1064 lines, self.state_machine.input_lines.source(0))
1065 lines, self.state_machine.input_lines.source(0))
1065
1066
1066 # cleanup
1067 # cleanup
1067 self.teardown()
1068 self.teardown()
1068
1069
1069 return []
1070 return []
1070
1071
1071 # Enable as a proper Sphinx directive
1072 # Enable as a proper Sphinx directive
1072 def setup(app):
1073 def setup(app):
1073 setup.app = app
1074 setup.app = app
1074
1075
1075 app.add_directive('ipython', IPythonDirective)
1076 app.add_directive('ipython', IPythonDirective)
1076 app.add_config_value('ipython_savefig_dir', 'savefig', 'env')
1077 app.add_config_value('ipython_savefig_dir', 'savefig', 'env')
1077 app.add_config_value('ipython_warning_is_error', True, 'env')
1078 app.add_config_value('ipython_warning_is_error', True, 'env')
1078 app.add_config_value('ipython_rgxin',
1079 app.add_config_value('ipython_rgxin',
1079 re.compile(r'In \[(\d+)\]:\s?(.*)\s*'), 'env')
1080 re.compile(r'In \[(\d+)\]:\s?(.*)\s*'), 'env')
1080 app.add_config_value('ipython_rgxout',
1081 app.add_config_value('ipython_rgxout',
1081 re.compile(r'Out\[(\d+)\]:\s?(.*)\s*'), 'env')
1082 re.compile(r'Out\[(\d+)\]:\s?(.*)\s*'), 'env')
1082 app.add_config_value('ipython_promptin', 'In [%d]:', 'env')
1083 app.add_config_value('ipython_promptin', 'In [%d]:', 'env')
1083 app.add_config_value('ipython_promptout', 'Out[%d]:', 'env')
1084 app.add_config_value('ipython_promptout', 'Out[%d]:', 'env')
1084
1085
1085 # We could just let matplotlib pick whatever is specified as the default
1086 # We could just let matplotlib pick whatever is specified as the default
1086 # backend in the matplotlibrc file, but this would cause issues if the
1087 # backend in the matplotlibrc file, but this would cause issues if the
1087 # backend didn't work in headless environments. For this reason, 'agg'
1088 # backend didn't work in headless environments. For this reason, 'agg'
1088 # is a good default backend choice.
1089 # is a good default backend choice.
1089 app.add_config_value('ipython_mplbackend', 'agg', 'env')
1090 app.add_config_value('ipython_mplbackend', 'agg', 'env')
1090
1091
1091 # If the user sets this config value to `None`, then EmbeddedSphinxShell's
1092 # If the user sets this config value to `None`, then EmbeddedSphinxShell's
1092 # __init__ method will treat it as [].
1093 # __init__ method will treat it as [].
1093 execlines = ['import numpy as np']
1094 execlines = ['import numpy as np']
1094 if use_matplotlib:
1095 if use_matplotlib:
1095 execlines.append('import matplotlib.pyplot as plt')
1096 execlines.append('import matplotlib.pyplot as plt')
1096 app.add_config_value('ipython_execlines', execlines, 'env')
1097 app.add_config_value('ipython_execlines', execlines, 'env')
1097
1098
1098 app.add_config_value('ipython_holdcount', True, 'env')
1099 app.add_config_value('ipython_holdcount', True, 'env')
1099
1100
1100 metadata = {'parallel_read_safe': True, 'parallel_write_safe': True}
1101 metadata = {'parallel_read_safe': True, 'parallel_write_safe': True}
1101 return metadata
1102 return metadata
1102
1103
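# Editor's illustrative sketch (not part of the original module): in a Sphinx
# project the directive is enabled from ``conf.py`` like any other extension,
# and the config values registered above can be overridden there, e.g.:
#
#     extensions = ['IPython.sphinxext.ipython_directive']
#     ipython_savefig_dir = 'savefig'
#     ipython_mplbackend = 'agg'
#     ipython_execlines = ['import numpy as np']
#     ipython_warning_is_error = False
#
# (the module path above assumes the extension ships inside IPython as usual;
# adjust it if the file is vendored elsewhere).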
1103 # Simple smoke test, needs to be converted to a proper automatic test.
1104 # Simple smoke test, needs to be converted to a proper automatic test.
1104 def test():
1105 def test():
1105
1106
1106 examples = [
1107 examples = [
1107 r"""
1108 r"""
1108 In [9]: pwd
1109 In [9]: pwd
1109 Out[9]: '/home/jdhunter/py4science/book'
1110 Out[9]: '/home/jdhunter/py4science/book'
1110
1111
1111 In [10]: cd bookdata/
1112 In [10]: cd bookdata/
1112 /home/jdhunter/py4science/book/bookdata
1113 /home/jdhunter/py4science/book/bookdata
1113
1114
1114 In [2]: from pylab import *
1115 In [2]: from pylab import *
1115
1116
1116 In [2]: ion()
1117 In [2]: ion()
1117
1118
1118 In [3]: im = imread('stinkbug.png')
1119 In [3]: im = imread('stinkbug.png')
1119
1120
1120 @savefig mystinkbug.png width=4in
1121 @savefig mystinkbug.png width=4in
1121 In [4]: imshow(im)
1122 In [4]: imshow(im)
1122 Out[4]: <matplotlib.image.AxesImage object at 0x39ea850>
1123 Out[4]: <matplotlib.image.AxesImage object at 0x39ea850>
1123
1124
1124 """,
1125 """,
1125 r"""
1126 r"""
1126
1127
1127 In [1]: x = 'hello world'
1128 In [1]: x = 'hello world'
1128
1129
1129 # string methods can be
1130 # string methods can be
1130 # used to alter the string
1131 # used to alter the string
1131 @doctest
1132 @doctest
1132 In [2]: x.upper()
1133 In [2]: x.upper()
1133 Out[2]: 'HELLO WORLD'
1134 Out[2]: 'HELLO WORLD'
1134
1135
1135 @verbatim
1136 @verbatim
1136 In [3]: x.st<TAB>
1137 In [3]: x.st<TAB>
1137 x.startswith x.strip
1138 x.startswith x.strip
1138 """,
1139 """,
1139 r"""
1140 r"""
1140
1141
1141 In [130]: url = 'http://ichart.finance.yahoo.com/table.csv?s=CROX\
1142 In [130]: url = 'http://ichart.finance.yahoo.com/table.csv?s=CROX\
1142 .....: &d=9&e=22&f=2009&g=d&a=1&br=8&c=2006&ignore=.csv'
1143 .....: &d=9&e=22&f=2009&g=d&a=1&br=8&c=2006&ignore=.csv'
1143
1144
1144 In [131]: print url.split('&')
1145 In [131]: print url.split('&')
1145 ['http://ichart.finance.yahoo.com/table.csv?s=CROX', 'd=9', 'e=22', 'f=2009', 'g=d', 'a=1', 'b=8', 'c=2006', 'ignore=.csv']
1146 ['http://ichart.finance.yahoo.com/table.csv?s=CROX', 'd=9', 'e=22', 'f=2009', 'g=d', 'a=1', 'b=8', 'c=2006', 'ignore=.csv']
1146
1147
1147 In [60]: import urllib
1148 In [60]: import urllib
1148
1149
1149 """,
1150 """,
1150 r"""\
1151 r"""\
1151
1152
1152 In [133]: import numpy.random
1153 In [133]: import numpy.random
1153
1154
1154 @suppress
1155 @suppress
1155 In [134]: numpy.random.seed(2358)
1156 In [134]: numpy.random.seed(2358)
1156
1157
1157 @doctest
1158 @doctest
1158 In [135]: numpy.random.rand(10,2)
1159 In [135]: numpy.random.rand(10,2)
1159 Out[135]:
1160 Out[135]:
1160 array([[ 0.64524308, 0.59943846],
1161 array([[ 0.64524308, 0.59943846],
1161 [ 0.47102322, 0.8715456 ],
1162 [ 0.47102322, 0.8715456 ],
1162 [ 0.29370834, 0.74776844],
1163 [ 0.29370834, 0.74776844],
1163 [ 0.99539577, 0.1313423 ],
1164 [ 0.99539577, 0.1313423 ],
1164 [ 0.16250302, 0.21103583],
1165 [ 0.16250302, 0.21103583],
1165 [ 0.81626524, 0.1312433 ],
1166 [ 0.81626524, 0.1312433 ],
1166 [ 0.67338089, 0.72302393],
1167 [ 0.67338089, 0.72302393],
1167 [ 0.7566368 , 0.07033696],
1168 [ 0.7566368 , 0.07033696],
1168 [ 0.22591016, 0.77731835],
1169 [ 0.22591016, 0.77731835],
1169 [ 0.0072729 , 0.34273127]])
1170 [ 0.0072729 , 0.34273127]])
1170
1171
1171 """,
1172 """,
1172
1173
1173 r"""
1174 r"""
1174 In [106]: print x
1175 In [106]: print x
1175 jdh
1176 jdh
1176
1177
1177 In [109]: for i in range(10):
1178 In [109]: for i in range(10):
1178 .....: print i
1179 .....: print i
1179 .....:
1180 .....:
1180 .....:
1181 .....:
1181 0
1182 0
1182 1
1183 1
1183 2
1184 2
1184 3
1185 3
1185 4
1186 4
1186 5
1187 5
1187 6
1188 6
1188 7
1189 7
1189 8
1190 8
1190 9
1191 9
1191 """,
1192 """,
1192
1193
1193 r"""
1194 r"""
1194
1195
1195 In [144]: from pylab import *
1196 In [144]: from pylab import *
1196
1197
1197 In [145]: ion()
1198 In [145]: ion()
1198
1199
1199 # use a semicolon to suppress the output
1200 # use a semicolon to suppress the output
1200 @savefig test_hist.png width=4in
1201 @savefig test_hist.png width=4in
1201 In [151]: hist(np.random.randn(10000), 100);
1202 In [151]: hist(np.random.randn(10000), 100);
1202
1203
1203
1204
1204 @savefig test_plot.png width=4in
1205 @savefig test_plot.png width=4in
1205 In [151]: plot(np.random.randn(10000), 'o');
1206 In [151]: plot(np.random.randn(10000), 'o');
1206 """,
1207 """,
1207
1208
1208 r"""
1209 r"""
1209 # use a semicolon to suppress the output
1210 # use a semicolon to suppress the output
1210 In [151]: plt.clf()
1211 In [151]: plt.clf()
1211
1212
1212 @savefig plot_simple.png width=4in
1213 @savefig plot_simple.png width=4in
1213 In [151]: plot([1,2,3])
1214 In [151]: plot([1,2,3])
1214
1215
1215 @savefig hist_simple.png width=4in
1216 @savefig hist_simple.png width=4in
1216 In [151]: hist(np.random.randn(10000), 100);
1217 In [151]: hist(np.random.randn(10000), 100);
1217
1218
1218 """,
1219 """,
1219 r"""
1220 r"""
1220 # update the current fig
1221 # update the current fig
1221 In [151]: ylabel('number')
1222 In [151]: ylabel('number')
1222
1223
1223 In [152]: title('normal distribution')
1224 In [152]: title('normal distribution')
1224
1225
1225
1226
1226 @savefig hist_with_text.png
1227 @savefig hist_with_text.png
1227 In [153]: grid(True)
1228 In [153]: grid(True)
1228
1229
1229 @doctest float
1230 @doctest float
1230 In [154]: 0.1 + 0.2
1231 In [154]: 0.1 + 0.2
1231 Out[154]: 0.3
1232 Out[154]: 0.3
1232
1233
1233 @doctest float
1234 @doctest float
1234 In [155]: np.arange(16).reshape(4,4)
1235 In [155]: np.arange(16).reshape(4,4)
1235 Out[155]:
1236 Out[155]:
1236 array([[ 0, 1, 2, 3],
1237 array([[ 0, 1, 2, 3],
1237 [ 4, 5, 6, 7],
1238 [ 4, 5, 6, 7],
1238 [ 8, 9, 10, 11],
1239 [ 8, 9, 10, 11],
1239 [12, 13, 14, 15]])
1240 [12, 13, 14, 15]])
1240
1241
1241 In [1]: x = np.arange(16, dtype=float).reshape(4,4)
1242 In [1]: x = np.arange(16, dtype=float).reshape(4,4)
1242
1243
1243 In [2]: x[0,0] = np.inf
1244 In [2]: x[0,0] = np.inf
1244
1245
1245 In [3]: x[0,1] = np.nan
1246 In [3]: x[0,1] = np.nan
1246
1247
1247 @doctest float
1248 @doctest float
1248 In [4]: x
1249 In [4]: x
1249 Out[4]:
1250 Out[4]:
1250 array([[ inf, nan, 2., 3.],
1251 array([[ inf, nan, 2., 3.],
1251 [ 4., 5., 6., 7.],
1252 [ 4., 5., 6., 7.],
1252 [ 8., 9., 10., 11.],
1253 [ 8., 9., 10., 11.],
1253 [ 12., 13., 14., 15.]])
1254 [ 12., 13., 14., 15.]])
1254
1255
1255
1256
1256 """,
1257 """,
1257 ]
1258 ]
1258 # skip the first example, which depends on a local file:
1259 # skip the first example, which depends on a local file:
1259 examples = examples[1:]
1260 examples = examples[1:]
1260
1261
1261 #ipython_directive.DEBUG = True # dbg
1262 #ipython_directive.DEBUG = True # dbg
1262 #options = dict(suppress=True) # dbg
1263 #options = dict(suppress=True) # dbg
1263 options = {}
1264 options = {}
1264 for example in examples:
1265 for example in examples:
1265 content = example.split('\n')
1266 content = example.split('\n')
1266 IPythonDirective('debug', arguments=None, options=options,
1267 IPythonDirective('debug', arguments=None, options=options,
1267 content=content, lineno=0,
1268 content=content, lineno=0,
1268 content_offset=None, block_text=None,
1269 content_offset=None, block_text=None,
1269 state=None, state_machine=None,
1270 state=None, state_machine=None,
1270 )
1271 )
1271
1272
1272 # Run test suite as a script
1273 # Run test suite as a script
1273 if __name__=='__main__':
1274 if __name__=='__main__':
1274 if not os.path.isdir('_static'):
1275 if not os.path.isdir('_static'):
1275 os.mkdir('_static')
1276 os.mkdir('_static')
1276 test()
1277 test()
1277 print('All OK? Check figures in _static/')
1278 print('All OK? Check figures in _static/')
@@ -1,1021 +1,1023
1 """IPython terminal interface using prompt_toolkit"""
1 """IPython terminal interface using prompt_toolkit"""
2
2
3 import os
3 import os
4 import sys
4 import sys
5 import inspect
5 import inspect
6 from warnings import warn
6 from warnings import warn
7 from typing import Union as UnionType, Optional
7 from typing import Union as UnionType, Optional
8
8
9 from IPython.core.async_helpers import get_asyncio_loop
9 from IPython.core.async_helpers import get_asyncio_loop
10 from IPython.core.interactiveshell import InteractiveShell, InteractiveShellABC
10 from IPython.core.interactiveshell import InteractiveShell, InteractiveShellABC
11 from IPython.utils.py3compat import input
11 from IPython.utils.py3compat import input
12 from IPython.utils.terminal import toggle_set_term_title, set_term_title, restore_term_title
12 from IPython.utils.terminal import toggle_set_term_title, set_term_title, restore_term_title
13 from IPython.utils.process import abbrev_cwd
13 from IPython.utils.process import abbrev_cwd
14 from traitlets import (
14 from traitlets import (
15 Bool,
15 Bool,
16 Unicode,
16 Unicode,
17 Dict,
17 Dict,
18 Integer,
18 Integer,
19 List,
19 List,
20 observe,
20 observe,
21 Instance,
21 Instance,
22 Type,
22 Type,
23 default,
23 default,
24 Enum,
24 Enum,
25 Union,
25 Union,
26 Any,
26 Any,
27 validate,
27 validate,
28 Float,
28 Float,
29 )
29 )
30
30
31 from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
31 from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
32 from prompt_toolkit.enums import DEFAULT_BUFFER, EditingMode
32 from prompt_toolkit.enums import DEFAULT_BUFFER, EditingMode
33 from prompt_toolkit.filters import HasFocus, Condition, IsDone
33 from prompt_toolkit.filters import HasFocus, Condition, IsDone
34 from prompt_toolkit.formatted_text import PygmentsTokens
34 from prompt_toolkit.formatted_text import PygmentsTokens
35 from prompt_toolkit.history import History
35 from prompt_toolkit.history import History
36 from prompt_toolkit.layout.processors import ConditionalProcessor, HighlightMatchingBracketProcessor
36 from prompt_toolkit.layout.processors import ConditionalProcessor, HighlightMatchingBracketProcessor
37 from prompt_toolkit.output import ColorDepth
37 from prompt_toolkit.output import ColorDepth
38 from prompt_toolkit.patch_stdout import patch_stdout
38 from prompt_toolkit.patch_stdout import patch_stdout
39 from prompt_toolkit.shortcuts import PromptSession, CompleteStyle, print_formatted_text
39 from prompt_toolkit.shortcuts import PromptSession, CompleteStyle, print_formatted_text
40 from prompt_toolkit.styles import DynamicStyle, merge_styles
40 from prompt_toolkit.styles import DynamicStyle, merge_styles
41 from prompt_toolkit.styles.pygments import style_from_pygments_cls, style_from_pygments_dict
41 from prompt_toolkit.styles.pygments import style_from_pygments_cls, style_from_pygments_dict
42 from prompt_toolkit import __version__ as ptk_version
42 from prompt_toolkit import __version__ as ptk_version
43
43
44 from pygments.styles import get_style_by_name
44 from pygments.styles import get_style_by_name
45 from pygments.style import Style
45 from pygments.style import Style
46 from pygments.token import Token
46 from pygments.token import Token
47
47
48 from .debugger import TerminalPdb, Pdb
48 from .debugger import TerminalPdb, Pdb
49 from .magics import TerminalMagics
49 from .magics import TerminalMagics
50 from .pt_inputhooks import get_inputhook_name_and_func
50 from .pt_inputhooks import get_inputhook_name_and_func
51 from .prompts import Prompts, ClassicPrompts, RichPromptDisplayHook
51 from .prompts import Prompts, ClassicPrompts, RichPromptDisplayHook
52 from .ptutils import IPythonPTCompleter, IPythonPTLexer
52 from .ptutils import IPythonPTCompleter, IPythonPTLexer
53 from .shortcuts import (
53 from .shortcuts import (
54 KEY_BINDINGS,
54 KEY_BINDINGS,
55 create_ipython_shortcuts,
55 create_ipython_shortcuts,
56 create_identifier,
56 create_identifier,
57 RuntimeBinding,
57 RuntimeBinding,
58 add_binding,
58 add_binding,
59 )
59 )
60 from .shortcuts.filters import KEYBINDING_FILTERS, filter_from_string
60 from .shortcuts.filters import KEYBINDING_FILTERS, filter_from_string
61 from .shortcuts.auto_suggest import (
61 from .shortcuts.auto_suggest import (
62 NavigableAutoSuggestFromHistory,
62 NavigableAutoSuggestFromHistory,
63 AppendAutoSuggestionInAnyLine,
63 AppendAutoSuggestionInAnyLine,
64 )
64 )
65
65
66 PTK3 = ptk_version.startswith('3.')
66 PTK3 = ptk_version.startswith('3.')
67
67
68
68
69 class _NoStyle(Style):
69 class _NoStyle(Style):
70 pass
70 pass
71
71
72
72
73 _style_overrides_light_bg = {
73 _style_overrides_light_bg = {
74 Token.Prompt: '#ansibrightblue',
74 Token.Prompt: '#ansibrightblue',
75 Token.PromptNum: '#ansiblue bold',
75 Token.PromptNum: '#ansiblue bold',
76 Token.OutPrompt: '#ansibrightred',
76 Token.OutPrompt: '#ansibrightred',
77 Token.OutPromptNum: '#ansired bold',
77 Token.OutPromptNum: '#ansired bold',
78 }
78 }
79
79
80 _style_overrides_linux = {
80 _style_overrides_linux = {
81 Token.Prompt: '#ansibrightgreen',
81 Token.Prompt: '#ansibrightgreen',
82 Token.PromptNum: '#ansigreen bold',
82 Token.PromptNum: '#ansigreen bold',
83 Token.OutPrompt: '#ansibrightred',
83 Token.OutPrompt: '#ansibrightred',
84 Token.OutPromptNum: '#ansired bold',
84 Token.OutPromptNum: '#ansired bold',
85 }
85 }
86
86
87
87
88 def _backward_compat_continuation_prompt_tokens(method, width: int, *, lineno: int):
88 def _backward_compat_continuation_prompt_tokens(method, width: int, *, lineno: int):
89 """
89 """
90 Sagemath uses custom prompts and we broke them in 8.19.
90 Sagemath uses custom prompts and we broke them in 8.19.
91 """
91 """
92 sig = inspect.signature(method)
92 sig = inspect.signature(method)
93 if "lineno" in inspect.signature(method).parameters or any(
93 if "lineno" in inspect.signature(method).parameters or any(
94 [p.kind == p.VAR_KEYWORD for p in sig.parameters.values()]
94 [p.kind == p.VAR_KEYWORD for p in sig.parameters.values()]
95 ):
95 ):
96 return method(width, lineno=lineno)
96 return method(width, lineno=lineno)
97 else:
97 else:
98 return method(width)
98 return method(width)
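# A hedged illustration of the dispatch above (the function names are made up
# for the example): a legacy signature is called without ``lineno``, newer
# signatures (explicit keyword or ``**kwargs``) receive it.
#
#     def legacy_tokens(width): ...                # called as legacy_tokens(width)
#     def modern_tokens(width, *, lineno): ...     # called as modern_tokens(width, lineno=lineno)
#     def kwargs_tokens(width, **kwargs): ...      # also receives lineno=lineno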
99
99
100
100
101 def get_default_editor():
101 def get_default_editor():
102 try:
102 try:
103 return os.environ['EDITOR']
103 return os.environ['EDITOR']
104 except KeyError:
104 except KeyError:
105 pass
105 pass
106 except UnicodeError:
106 except UnicodeError:
107 warn("$EDITOR environment variable is not pure ASCII. Using platform "
107 warn("$EDITOR environment variable is not pure ASCII. Using platform "
108 "default editor.")
108 "default editor.")
109
109
110 if os.name == 'posix':
110 if os.name == 'posix':
111 return 'vi' # the only one guaranteed to be there!
111 return 'vi' # the only one guaranteed to be there!
112 else:
112 else:
113 return "notepad" # same in Windows!
113 return "notepad" # same in Windows!
114
114
115
115
116 # conservatively check for tty
116 # conservatively check for tty
117 # overridden streams can result in things like:
117 # overridden streams can result in things like:
118 # - sys.stdin = None
118 # - sys.stdin = None
119 # - no isatty method
119 # - no isatty method
120 for _name in ('stdin', 'stdout', 'stderr'):
120 for _name in ('stdin', 'stdout', 'stderr'):
121 _stream = getattr(sys, _name)
121 _stream = getattr(sys, _name)
122 try:
122 try:
123 if not _stream or not hasattr(_stream, "isatty") or not _stream.isatty():
123 if not _stream or not hasattr(_stream, "isatty") or not _stream.isatty():
124 _is_tty = False
124 _is_tty = False
125 break
125 break
126 except ValueError:
126 except ValueError:
127 # stream is closed
127 # stream is closed
128 _is_tty = False
128 _is_tty = False
129 break
129 break
130 else:
130 else:
131 _is_tty = True
131 _is_tty = True
132
132
133
133
134 _use_simple_prompt = ('IPY_TEST_SIMPLE_PROMPT' in os.environ) or (not _is_tty)
134 _use_simple_prompt = ('IPY_TEST_SIMPLE_PROMPT' in os.environ) or (not _is_tty)
135
135
136 def black_reformat_handler(text_before_cursor):
136 def black_reformat_handler(text_before_cursor):
137 """
137 """
138 We do not need to protect against errors;
138 We do not need to protect against errors;
139 this is taken care of at a higher level, where any reformatting error is ignored.
139 this is taken care of at a higher level, where any reformatting error is ignored.
140 Indeed, we may call reformatting on incomplete code.
140 Indeed, we may call reformatting on incomplete code.
141 """
141 """
142 import black
142 import black
143
143
144 formatted_text = black.format_str(text_before_cursor, mode=black.FileMode())
144 formatted_text = black.format_str(text_before_cursor, mode=black.FileMode())
145 if not text_before_cursor.endswith("\n") and formatted_text.endswith("\n"):
145 if not text_before_cursor.endswith("\n") and formatted_text.endswith("\n"):
146 formatted_text = formatted_text[:-1]
146 formatted_text = formatted_text[:-1]
147 return formatted_text
147 return formatted_text
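# Rough usage sketch (illustrative only; assumes ``black`` is importable):
#
#     black_reformat_handler("x=[1,2 ,3]")   # -> "x = [1, 2, 3]"
#     # the trailing newline added by black is trimmed because the input
#     # did not end with one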
148
148
149
149
150 def yapf_reformat_handler(text_before_cursor):
150 def yapf_reformat_handler(text_before_cursor):
151 from yapf.yapflib import file_resources
151 from yapf.yapflib import file_resources
152 from yapf.yapflib import yapf_api
152 from yapf.yapflib import yapf_api
153
153
154 style_config = file_resources.GetDefaultStyleForDir(os.getcwd())
154 style_config = file_resources.GetDefaultStyleForDir(os.getcwd())
155 formatted_text, was_formatted = yapf_api.FormatCode(
155 formatted_text, was_formatted = yapf_api.FormatCode(
156 text_before_cursor, style_config=style_config
156 text_before_cursor, style_config=style_config
157 )
157 )
158 if was_formatted:
158 if was_formatted:
159 if not text_before_cursor.endswith("\n") and formatted_text.endswith("\n"):
159 if not text_before_cursor.endswith("\n") and formatted_text.endswith("\n"):
160 formatted_text = formatted_text[:-1]
160 formatted_text = formatted_text[:-1]
161 return formatted_text
161 return formatted_text
162 else:
162 else:
163 return text_before_cursor
163 return text_before_cursor
164
164
165
165
166 class PtkHistoryAdapter(History):
166 class PtkHistoryAdapter(History):
167 """
167 """
168 Prompt toolkit has its own way of handling history, where it assumes it can
168 Prompt toolkit has its own way of handling history, where it assumes it can
169 push/pull from history.
169 push/pull from history.
170
170
171 """
171 """
172
172
173 def __init__(self, shell):
173 def __init__(self, shell):
174 super().__init__()
174 super().__init__()
175 self.shell = shell
175 self.shell = shell
176 self._refresh()
176 self._refresh()
177
177
178 def append_string(self, string):
178 def append_string(self, string):
179 # we rely on sql for that.
179 # we rely on sql for that.
180 self._loaded = False
180 self._loaded = False
181 self._refresh()
181 self._refresh()
182
182
183 def _refresh(self):
183 def _refresh(self):
184 if not self._loaded:
184 if not self._loaded:
185 self._loaded_strings = list(self.load_history_strings())
185 self._loaded_strings = list(self.load_history_strings())
186
186
187 def load_history_strings(self):
187 def load_history_strings(self):
188 last_cell = ""
188 last_cell = ""
189 res = []
189 res = []
190 for __, ___, cell in self.shell.history_manager.get_tail(
190 for __, ___, cell in self.shell.history_manager.get_tail(
191 self.shell.history_load_length, include_latest=True
191 self.shell.history_load_length, include_latest=True
192 ):
192 ):
193 # Ignore blank lines and consecutive duplicates
193 # Ignore blank lines and consecutive duplicates
194 cell = cell.rstrip()
194 cell = cell.rstrip()
195 if cell and (cell != last_cell):
195 if cell and (cell != last_cell):
196 res.append(cell)
196 res.append(cell)
197 last_cell = cell
197 last_cell = cell
198 yield from res[::-1]
198 yield from res[::-1]
199
199
200 def store_string(self, string: str) -> None:
200 def store_string(self, string: str) -> None:
201 pass
201 pass
202
202
203 class TerminalInteractiveShell(InteractiveShell):
203 class TerminalInteractiveShell(InteractiveShell):
204 mime_renderers = Dict().tag(config=True)
204 mime_renderers = Dict().tag(config=True)
205
205
206 space_for_menu = Integer(6, help='Number of lines at the bottom of the screen '
206 space_for_menu = Integer(6, help='Number of lines at the bottom of the screen '
207 'to reserve for the tab completion menu, '
207 'to reserve for the tab completion menu, '
208 'search history, etc.; the height of '
208 'search history, etc.; the height of '
209 'these menus will be at most this value. '
209 'these menus will be at most this value. '
210 'Increase it if you prefer long and skinny '
210 'Increase it if you prefer long and skinny '
211 'menus, decrease for short and wide.'
211 'menus, decrease for short and wide.'
212 ).tag(config=True)
212 ).tag(config=True)
213
213
214 pt_app: UnionType[PromptSession, None] = None
214 pt_app: UnionType[PromptSession, None] = None
215 auto_suggest: UnionType[
215 auto_suggest: UnionType[
216 AutoSuggestFromHistory, NavigableAutoSuggestFromHistory, None
216 AutoSuggestFromHistory, NavigableAutoSuggestFromHistory, None
217 ] = None
217 ] = None
218 debugger_history = None
218 debugger_history = None
219
219
220 debugger_history_file = Unicode(
220 debugger_history_file = Unicode(
221 "~/.pdbhistory", help="File in which to store and read history"
221 "~/.pdbhistory", help="File in which to store and read history"
222 ).tag(config=True)
222 ).tag(config=True)
223
223
224 simple_prompt = Bool(_use_simple_prompt,
224 simple_prompt = Bool(_use_simple_prompt,
225 help="""Use `raw_input` for the REPL, without completion and prompt colors.
225 help="""Use `raw_input` for the REPL, without completion and prompt colors.
226
226
227 Useful when controlling IPython as a subprocess, and piping
227 Useful when controlling IPython as a subprocess, and piping
228 STDIN/OUT/ERR. Known uses are: IPython's own testing machinery,
228 STDIN/OUT/ERR. Known uses are: IPython's own testing machinery,
229 and emacs' inferior-python subprocess (assuming you have set
229 and emacs' inferior-python subprocess (assuming you have set
230 `python-shell-interpreter` to "ipython") available through the
230 `python-shell-interpreter` to "ipython") available through the
231 built-in `M-x run-python` and third party packages such as elpy.
231 built-in `M-x run-python` and third party packages such as elpy.
232
232
233 This mode defaults to `True` if the `IPY_TEST_SIMPLE_PROMPT`
233 This mode defaults to `True` if the `IPY_TEST_SIMPLE_PROMPT`
234 environment variable is set, or if the current terminal is not a tty.
234 environment variable is set, or if the current terminal is not a tty.
235 Thus the default value reported in --help-all, or config, will often
235 Thus the default value reported in --help-all, or config, will often
236 be incorrect.
236 be incorrect.
237 """,
237 """,
238 ).tag(config=True)
238 ).tag(config=True)
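# Illustrative ways to end up in simple-prompt mode (a sketch, not exhaustive):
#
#     $ IPY_TEST_SIMPLE_PROMPT=1 ipython      # via the environment variable checked above
#     $ ipython --simple-prompt               # via the command-line flag
#     # or, in ipython_config.py:
#     # c.TerminalInteractiveShell.simple_prompt = True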
239
239
240 @property
240 @property
241 def debugger_cls(self):
241 def debugger_cls(self):
242 return Pdb if self.simple_prompt else TerminalPdb
242 return Pdb if self.simple_prompt else TerminalPdb
243
243
244 confirm_exit = Bool(True,
244 confirm_exit = Bool(True,
245 help="""
245 help="""
246 Set to confirm when you try to exit IPython with an EOF (Control-D
246 Set to confirm when you try to exit IPython with an EOF (Control-D
247 in Unix, Control-Z/Enter in Windows). By typing 'exit' or 'quit',
247 in Unix, Control-Z/Enter in Windows). By typing 'exit' or 'quit',
248 you can force a direct exit without any confirmation.""",
248 you can force a direct exit without any confirmation.""",
249 ).tag(config=True)
249 ).tag(config=True)
250
250
251 editing_mode = Unicode('emacs',
251 editing_mode = Unicode('emacs',
252 help="Shortcut style to use at the prompt. 'vi' or 'emacs'.",
252 help="Shortcut style to use at the prompt. 'vi' or 'emacs'.",
253 ).tag(config=True)
253 ).tag(config=True)
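# Example config sketch; 'vim' and 'default' are also accepted and are
# normalised to 'vi'/'emacs' by the validator below:
#
#     c.TerminalInteractiveShell.editing_mode = "vi"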
254
254
255 emacs_bindings_in_vi_insert_mode = Bool(
255 emacs_bindings_in_vi_insert_mode = Bool(
256 True,
256 True,
257 help="Add shortcuts from 'emacs' insert mode to 'vi' insert mode.",
257 help="Add shortcuts from 'emacs' insert mode to 'vi' insert mode.",
258 ).tag(config=True)
258 ).tag(config=True)
259
259
260 modal_cursor = Bool(
260 modal_cursor = Bool(
261 True,
261 True,
262 help="""
262 help="""
263 Cursor shape changes depending on vi mode: beam in vi insert mode,
263 Cursor shape changes depending on vi mode: beam in vi insert mode,
264 block in nav mode, underscore in replace mode.""",
264 block in nav mode, underscore in replace mode.""",
265 ).tag(config=True)
265 ).tag(config=True)
266
266
267 ttimeoutlen = Float(
267 ttimeoutlen = Float(
268 0.01,
268 0.01,
269 help="""The time in milliseconds that is waited for a key code
269 help="""The time in milliseconds that is waited for a key code
270 to complete.""",
270 to complete.""",
271 ).tag(config=True)
271 ).tag(config=True)
272
272
273 timeoutlen = Float(
273 timeoutlen = Float(
274 0.5,
274 0.5,
275 help="""The time in milliseconds that is waited for a mapped key
275 help="""The time in milliseconds that is waited for a mapped key
276 sequence to complete.""",
276 sequence to complete.""",
277 ).tag(config=True)
277 ).tag(config=True)
278
278
279 autoformatter = Unicode(
279 autoformatter = Unicode(
280 None,
280 None,
281 help="Autoformatter to reformat Terminal code. Can be `'black'`, `'yapf'` or `None`",
281 help="Autoformatter to reformat Terminal code. Can be `'black'`, `'yapf'` or `None`",
282 allow_none=True
282 allow_none=True
283 ).tag(config=True)
283 ).tag(config=True)
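# Minimal config sketch (assumes the chosen formatter is installed):
#
#     # in ipython_config.py
#     c.TerminalInteractiveShell.autoformatter = "black"   # or "yapf", or None to disable
#
#     # or at runtime:
#     %config TerminalInteractiveShell.autoformatter = "black"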
284
284
285 auto_match = Bool(
285 auto_match = Bool(
286 False,
286 False,
287 help="""
287 help="""
288 Automatically add/delete closing bracket or quote when opening bracket or quote is entered/deleted.
288 Automatically add/delete closing bracket or quote when opening bracket or quote is entered/deleted.
289 Brackets: (), [], {}
289 Brackets: (), [], {}
290 Quotes: '', \"\"
290 Quotes: '', \"\"
291 """,
291 """,
292 ).tag(config=True)
292 ).tag(config=True)
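# Hedged example: with auto_match enabled, typing ``(`` inserts ``()`` and leaves
# the cursor between the brackets; deleting the opening bracket removes its pair.
#
#     c.TerminalInteractiveShell.auto_match = True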
293
293
294 mouse_support = Bool(False,
294 mouse_support = Bool(False,
295 help="Enable mouse support in the prompt\n(Note: prevents selecting text with the mouse)"
295 help="Enable mouse support in the prompt\n(Note: prevents selecting text with the mouse)"
296 ).tag(config=True)
296 ).tag(config=True)
297
297
298 # We don't load the list of styles for the help string, because loading
298 # We don't load the list of styles for the help string, because loading
299 # Pygments plugins takes time and can cause unexpected errors.
299 # Pygments plugins takes time and can cause unexpected errors.
300 highlighting_style = Union([Unicode('legacy'), Type(klass=Style)],
300 highlighting_style = Union([Unicode('legacy'), Type(klass=Style)],
301 help="""The name or class of a Pygments style to use for syntax
301 help="""The name or class of a Pygments style to use for syntax
302 highlighting. To see available styles, run `pygmentize -L styles`."""
302 highlighting. To see available styles, run `pygmentize -L styles`."""
303 ).tag(config=True)
303 ).tag(config=True)
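# Illustrative config sketch: any installed Pygments style name (or a Style
# subclass) is accepted; 'legacy' derives the style from the `colors` option.
#
#     c.TerminalInteractiveShell.highlighting_style = "monokai"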
304
304
305 @validate('editing_mode')
305 @validate('editing_mode')
306 def _validate_editing_mode(self, proposal):
306 def _validate_editing_mode(self, proposal):
307 if proposal['value'].lower() == 'vim':
307 if proposal['value'].lower() == 'vim':
308 proposal['value']= 'vi'
308 proposal['value']= 'vi'
309 elif proposal['value'].lower() == 'default':
309 elif proposal['value'].lower() == 'default':
310 proposal['value']= 'emacs'
310 proposal['value']= 'emacs'
311
311
312 if hasattr(EditingMode, proposal['value'].upper()):
312 if hasattr(EditingMode, proposal['value'].upper()):
313 return proposal['value'].lower()
313 return proposal['value'].lower()
314
314
315 return self.editing_mode
315 return self.editing_mode
316
316
317 @observe('editing_mode')
317 @observe('editing_mode')
318 def _editing_mode(self, change):
318 def _editing_mode(self, change):
319 if self.pt_app:
319 if self.pt_app:
320 self.pt_app.editing_mode = getattr(EditingMode, change.new.upper())
320 self.pt_app.editing_mode = getattr(EditingMode, change.new.upper())
321
321
322 def _set_formatter(self, formatter):
322 def _set_formatter(self, formatter):
323 if formatter is None:
323 if formatter is None:
324 self.reformat_handler = lambda x:x
324 self.reformat_handler = lambda x:x
325 elif formatter == 'black':
325 elif formatter == 'black':
326 self.reformat_handler = black_reformat_handler
326 self.reformat_handler = black_reformat_handler
327 elif formatter == "yapf":
327 elif formatter == "yapf":
328 self.reformat_handler = yapf_reformat_handler
328 self.reformat_handler = yapf_reformat_handler
329 else:
329 else:
330 raise ValueError
330 raise ValueError
331
331
332 @observe("autoformatter")
332 @observe("autoformatter")
333 def _autoformatter_changed(self, change):
333 def _autoformatter_changed(self, change):
334 formatter = change.new
334 formatter = change.new
335 self._set_formatter(formatter)
335 self._set_formatter(formatter)
336
336
337 @observe('highlighting_style')
337 @observe('highlighting_style')
338 @observe('colors')
338 @observe('colors')
339 def _highlighting_style_changed(self, change):
339 def _highlighting_style_changed(self, change):
340 self.refresh_style()
340 self.refresh_style()
341
341
342 def refresh_style(self):
342 def refresh_style(self):
343 self._style = self._make_style_from_name_or_cls(self.highlighting_style)
343 self._style = self._make_style_from_name_or_cls(self.highlighting_style)
344
344
345 highlighting_style_overrides = Dict(
345 highlighting_style_overrides = Dict(
346 help="Override highlighting format for specific tokens"
346 help="Override highlighting format for specific tokens"
347 ).tag(config=True)
347 ).tag(config=True)
348
348
349 true_color = Bool(False,
349 true_color = Bool(False,
350 help="""Use 24bit colors instead of 256 colors in prompt highlighting.
350 help="""Use 24bit colors instead of 256 colors in prompt highlighting.
351 If your terminal supports true color, the following command should
351 If your terminal supports true color, the following command should
352 print ``TRUECOLOR`` in orange::
352 print ``TRUECOLOR`` in orange::
353
353
354 printf \"\\x1b[38;2;255;100;0mTRUECOLOR\\x1b[0m\\n\"
354 printf \"\\x1b[38;2;255;100;0mTRUECOLOR\\x1b[0m\\n\"
355 """,
355 """,
356 ).tag(config=True)
356 ).tag(config=True)
357
357
358 editor = Unicode(get_default_editor(),
358 editor = Unicode(get_default_editor(),
359 help="Set the editor used by IPython (default to $EDITOR/vi/notepad)."
359 help="Set the editor used by IPython (default to $EDITOR/vi/notepad)."
360 ).tag(config=True)
360 ).tag(config=True)
361
361
362 prompts_class = Type(Prompts, help='Class used to generate Prompt token for prompt_toolkit').tag(config=True)
362 prompts_class = Type(Prompts, help='Class used to generate Prompt token for prompt_toolkit').tag(config=True)
363
363
364 prompts = Instance(Prompts)
364 prompts = Instance(Prompts)
365
365
366 @default('prompts')
366 @default('prompts')
367 def _prompts_default(self):
367 def _prompts_default(self):
368 return self.prompts_class(self)
368 return self.prompts_class(self)
369
369
370 # @observe('prompts')
370 # @observe('prompts')
371 # def _(self, change):
371 # def _(self, change):
372 # self._update_layout()
372 # self._update_layout()
373
373
374 @default('displayhook_class')
374 @default('displayhook_class')
375 def _displayhook_class_default(self):
375 def _displayhook_class_default(self):
376 return RichPromptDisplayHook
376 return RichPromptDisplayHook
377
377
378 term_title = Bool(True,
378 term_title = Bool(True,
379 help="Automatically set the terminal title"
379 help="Automatically set the terminal title"
380 ).tag(config=True)
380 ).tag(config=True)
381
381
382 term_title_format = Unicode("IPython: {cwd}",
382 term_title_format = Unicode("IPython: {cwd}",
383 help="Customize the terminal title format. This is a python format string. " +
383 help="Customize the terminal title format. This is a python format string. " +
384 "Available substitutions are: {cwd}."
384 "Available substitutions are: {cwd}."
385 ).tag(config=True)
385 ).tag(config=True)
386
386
387 display_completions = Enum(('column', 'multicolumn','readlinelike'),
387 display_completions = Enum(('column', 'multicolumn','readlinelike'),
388 help= ( "Options for displaying tab completions, 'column', 'multicolumn', and "
388 help= ( "Options for displaying tab completions, 'column', 'multicolumn', and "
389 "'readlinelike'. These options are for `prompt_toolkit`, see "
389 "'readlinelike'. These options are for `prompt_toolkit`, see "
390 "`prompt_toolkit` documentation for more information."
390 "`prompt_toolkit` documentation for more information."
391 ),
391 ),
392 default_value='multicolumn').tag(config=True)
392 default_value='multicolumn').tag(config=True)
393
393
394 highlight_matching_brackets = Bool(True,
394 highlight_matching_brackets = Bool(True,
395 help="Highlight matching brackets.",
395 help="Highlight matching brackets.",
396 ).tag(config=True)
396 ).tag(config=True)
397
397
398 extra_open_editor_shortcuts = Bool(False,
398 extra_open_editor_shortcuts = Bool(False,
399 help="Enable vi (v) or Emacs (C-X C-E) shortcuts to open an external editor. "
399 help="Enable vi (v) or Emacs (C-X C-E) shortcuts to open an external editor. "
400 "This is in addition to the F2 binding, which is always enabled."
400 "This is in addition to the F2 binding, which is always enabled."
401 ).tag(config=True)
401 ).tag(config=True)
402
402
403 handle_return = Any(None,
403 handle_return = Any(None,
404 help="Provide an alternative handler to be called when the user presses "
404 help="Provide an alternative handler to be called when the user presses "
405 "Return. This is an advanced option intended for debugging, which "
405 "Return. This is an advanced option intended for debugging, which "
406 "may be changed or removed in later releases."
406 "may be changed or removed in later releases."
407 ).tag(config=True)
407 ).tag(config=True)
408
408
409 enable_history_search = Bool(True,
409 enable_history_search = Bool(True,
410 help="Allows to enable/disable the prompt toolkit history search"
410 help="Allows to enable/disable the prompt toolkit history search"
411 ).tag(config=True)
411 ).tag(config=True)
412
412
413 autosuggestions_provider = Unicode(
413 autosuggestions_provider = Unicode(
414 "NavigableAutoSuggestFromHistory",
414 "NavigableAutoSuggestFromHistory",
415 help="Specifies from which source automatic suggestions are provided. "
415 help="Specifies from which source automatic suggestions are provided. "
416 "Can be set to ``'NavigableAutoSuggestFromHistory'`` (:kbd:`up` and "
416 "Can be set to ``'NavigableAutoSuggestFromHistory'`` (:kbd:`up` and "
417 ":kbd:`down` swap suggestions), ``'AutoSuggestFromHistory'``, "
417 ":kbd:`down` swap suggestions), ``'AutoSuggestFromHistory'``, "
418 " or ``None`` to disable automatic suggestions. "
418 " or ``None`` to disable automatic suggestions. "
419 "Default is `'NavigableAutoSuggestFromHistory`'.",
419 "Default is `'NavigableAutoSuggestFromHistory`'.",
420 allow_none=True,
420 allow_none=True,
421 ).tag(config=True)
421 ).tag(config=True)
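# Hedged config sketch: switch the suggestion source, or disable it entirely.
#
#     c.TerminalInteractiveShell.autosuggestions_provider = "AutoSuggestFromHistory"
#     # or
#     c.TerminalInteractiveShell.autosuggestions_provider = None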
422
422
423 def _set_autosuggestions(self, provider):
423 def _set_autosuggestions(self, provider):
424 # disconnect old handler
424 # disconnect old handler
425 if self.auto_suggest and isinstance(
425 if self.auto_suggest and isinstance(
426 self.auto_suggest, NavigableAutoSuggestFromHistory
426 self.auto_suggest, NavigableAutoSuggestFromHistory
427 ):
427 ):
428 self.auto_suggest.disconnect()
428 self.auto_suggest.disconnect()
429 if provider is None:
429 if provider is None:
430 self.auto_suggest = None
430 self.auto_suggest = None
431 elif provider == "AutoSuggestFromHistory":
431 elif provider == "AutoSuggestFromHistory":
432 self.auto_suggest = AutoSuggestFromHistory()
432 self.auto_suggest = AutoSuggestFromHistory()
433 elif provider == "NavigableAutoSuggestFromHistory":
433 elif provider == "NavigableAutoSuggestFromHistory":
434 self.auto_suggest = NavigableAutoSuggestFromHistory()
434 self.auto_suggest = NavigableAutoSuggestFromHistory()
435 else:
435 else:
436 raise ValueError("No valid provider.")
436 raise ValueError("No valid provider.")
437 if self.pt_app:
437 if self.pt_app:
438 self.pt_app.auto_suggest = self.auto_suggest
438 self.pt_app.auto_suggest = self.auto_suggest
439
439
440 @observe("autosuggestions_provider")
440 @observe("autosuggestions_provider")
441 def _autosuggestions_provider_changed(self, change):
441 def _autosuggestions_provider_changed(self, change):
442 provider = change.new
442 provider = change.new
443 self._set_autosuggestions(provider)
443 self._set_autosuggestions(provider)
444
444
445 shortcuts = List(
445 shortcuts = List(
446 trait=Dict(
446 trait=Dict(
447 key_trait=Enum(
447 key_trait=Enum(
448 [
448 [
449 "command",
449 "command",
450 "match_keys",
450 "match_keys",
451 "match_filter",
451 "match_filter",
452 "new_keys",
452 "new_keys",
453 "new_filter",
453 "new_filter",
454 "create",
454 "create",
455 ]
455 ]
456 ),
456 ),
457 per_key_traits={
457 per_key_traits={
458 "command": Unicode(),
458 "command": Unicode(),
459 "match_keys": List(Unicode()),
459 "match_keys": List(Unicode()),
460 "match_filter": Unicode(),
460 "match_filter": Unicode(),
461 "new_keys": List(Unicode()),
461 "new_keys": List(Unicode()),
462 "new_filter": Unicode(),
462 "new_filter": Unicode(),
463 "create": Bool(False),
463 "create": Bool(False),
464 },
464 },
465 ),
465 ),
466 help="""Add, disable or modifying shortcuts.
466 help="""Add, disable or modifying shortcuts.
467
467
468 Each entry on the list should be a dictionary with ``command`` key
468 Each entry on the list should be a dictionary with ``command`` key
469 identifying the target function executed by the shortcut and at least
469 identifying the target function executed by the shortcut and at least
470 one of the following:
470 one of the following:
471
471
472 - ``match_keys``: list of keys used to match an existing shortcut,
472 - ``match_keys``: list of keys used to match an existing shortcut,
473 - ``match_filter``: shortcut filter used to match an existing shortcut,
473 - ``match_filter``: shortcut filter used to match an existing shortcut,
474 - ``new_keys``: list of keys to set,
474 - ``new_keys``: list of keys to set,
475 - ``new_filter``: a new shortcut filter to set
475 - ``new_filter``: a new shortcut filter to set
476
476
477 The filters have to be composed of pre-defined verbs and joined by one
477 The filters have to be composed of pre-defined verbs and joined by one
478 of the following conjunctions: ``&`` (and), ``|`` (or), ``~`` (not).
478 of the following conjunctions: ``&`` (and), ``|`` (or), ``~`` (not).
479 The pre-defined verbs are:
479 The pre-defined verbs are:
480
480
481 {}
481 {}
482
482
483
483
484 To disable a shortcut set ``new_keys`` to an empty list.
484 To disable a shortcut set ``new_keys`` to an empty list.
485 To add a shortcut add key ``create`` with value ``True``.
485 To add a shortcut add key ``create`` with value ``True``.
486
486
487 When modifying/disabling shortcuts, ``match_keys``/``match_filter`` can
487 When modifying/disabling shortcuts, ``match_keys``/``match_filter`` can
488 be omitted if the provided specification uniquely identifies a shortcut
488 be omitted if the provided specification uniquely identifies a shortcut
489 to be modified/disabled. When modifying a shortcut ``new_filter`` or
489 to be modified/disabled. When modifying a shortcut ``new_filter`` or
490 ``new_keys`` can be omitted which will result in reuse of the existing
490 ``new_keys`` can be omitted which will result in reuse of the existing
491 filter/keys.
491 filter/keys.
492
492
493 Only shortcuts defined in IPython (and not default prompt-toolkit
493 Only shortcuts defined in IPython (and not default prompt-toolkit
494 shortcuts) can be modified or disabled. The full list of shortcuts,
494 shortcuts) can be modified or disabled. The full list of shortcuts,
495 command identifiers and filters is available under
495 command identifiers and filters is available under
496 :ref:`terminal-shortcuts-list`.
496 :ref:`terminal-shortcuts-list`.
497 """.format(
497 """.format(
498 "\n ".join([f"- `{k}`" for k in KEYBINDING_FILTERS])
498 "\n ".join([f"- `{k}`" for k in KEYBINDING_FILTERS])
499 ),
499 ),
500 ).tag(config=True)
500 ).tag(config=True)
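# Illustrative sketch of the three kinds of entries described in the help text
# above; the command identifiers below are placeholders, see
# :ref:`terminal-shortcuts-list` for the real identifiers and filters:
#
#     c.TerminalInteractiveShell.shortcuts = [
#         # modify an existing IPython shortcut: rebind it to new keys
#         {"command": "IPython:auto_suggest.accept_word", "new_keys": ["c-right"]},
#         # disable an existing shortcut
#         {"command": "IPython:auto_suggest.accept", "new_keys": []},
#         # add a new binding for an already-registered command
#         {"command": "IPython:auto_suggest.discard", "new_keys": ["c-q"], "create": True},
#     ]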
501
501
502 @observe("shortcuts")
502 @observe("shortcuts")
503 def _shortcuts_changed(self, change):
503 def _shortcuts_changed(self, change):
504 if self.pt_app:
504 if self.pt_app:
505 self.pt_app.key_bindings = self._merge_shortcuts(user_shortcuts=change.new)
505 self.pt_app.key_bindings = self._merge_shortcuts(user_shortcuts=change.new)
506
506
507 def _merge_shortcuts(self, user_shortcuts):
507 def _merge_shortcuts(self, user_shortcuts):
508 # rebuild the bindings list from scratch
508 # rebuild the bindings list from scratch
509 key_bindings = create_ipython_shortcuts(self)
509 key_bindings = create_ipython_shortcuts(self)
510
510
511 # for now we only allow adding shortcuts for commands which are already
511 # for now we only allow adding shortcuts for commands which are already
512 # registered; this is a security precaution.
512 # registered; this is a security precaution.
513 known_commands = {
513 known_commands = {
514 create_identifier(binding.command): binding.command
514 create_identifier(binding.command): binding.command
515 for binding in KEY_BINDINGS
515 for binding in KEY_BINDINGS
516 }
516 }
517 shortcuts_to_skip = []
517 shortcuts_to_skip = []
518 shortcuts_to_add = []
518 shortcuts_to_add = []
519
519
520 for shortcut in user_shortcuts:
520 for shortcut in user_shortcuts:
521 command_id = shortcut["command"]
521 command_id = shortcut["command"]
522 if command_id not in known_commands:
522 if command_id not in known_commands:
523 allowed_commands = "\n - ".join(known_commands)
523 allowed_commands = "\n - ".join(known_commands)
524 raise ValueError(
524 raise ValueError(
525 f"{command_id} is not a known shortcut command."
525 f"{command_id} is not a known shortcut command."
526 f" Allowed commands are: \n - {allowed_commands}"
526 f" Allowed commands are: \n - {allowed_commands}"
527 )
527 )
528 old_keys = shortcut.get("match_keys", None)
528 old_keys = shortcut.get("match_keys", None)
529 old_filter = (
529 old_filter = (
530 filter_from_string(shortcut["match_filter"])
530 filter_from_string(shortcut["match_filter"])
531 if "match_filter" in shortcut
531 if "match_filter" in shortcut
532 else None
532 else None
533 )
533 )
534 matching = [
534 matching = [
535 binding
535 binding
536 for binding in KEY_BINDINGS
536 for binding in KEY_BINDINGS
537 if (
537 if (
538 (old_filter is None or binding.filter == old_filter)
538 (old_filter is None or binding.filter == old_filter)
539 and (old_keys is None or [k for k in binding.keys] == old_keys)
539 and (old_keys is None or [k for k in binding.keys] == old_keys)
540 and create_identifier(binding.command) == command_id
540 and create_identifier(binding.command) == command_id
541 )
541 )
542 ]
542 ]
543
543
544 new_keys = shortcut.get("new_keys", None)
544 new_keys = shortcut.get("new_keys", None)
545 new_filter = shortcut.get("new_filter", None)
545 new_filter = shortcut.get("new_filter", None)
546
546
547 command = known_commands[command_id]
547 command = known_commands[command_id]
548
548
549 creating_new = shortcut.get("create", False)
549 creating_new = shortcut.get("create", False)
550 modifying_existing = not creating_new and (
550 modifying_existing = not creating_new and (
551 new_keys is not None or new_filter
551 new_keys is not None or new_filter
552 )
552 )
553
553
554 if creating_new and new_keys == []:
554 if creating_new and new_keys == []:
555 raise ValueError("Cannot add a shortcut without keys")
555 raise ValueError("Cannot add a shortcut without keys")
556
556
557 if modifying_existing:
557 if modifying_existing:
558 specification = {
558 specification = {
559 key: shortcut[key]
559 key: shortcut[key]
560 for key in ["command", "filter"]
560 for key in ["command", "filter"]
561 if key in shortcut
561 if key in shortcut
562 }
562 }
563 if len(matching) == 0:
563 if len(matching) == 0:
564 raise ValueError(
564 raise ValueError(
565 f"No shortcuts matching {specification} found in {KEY_BINDINGS}"
565 f"No shortcuts matching {specification} found in {KEY_BINDINGS}"
566 )
566 )
567 elif len(matching) > 1:
567 elif len(matching) > 1:
568 raise ValueError(
568 raise ValueError(
569 f"Multiple shortcuts matching {specification} found,"
569 f"Multiple shortcuts matching {specification} found,"
570 f" please add keys/filter to select one of: {matching}"
570 f" please add keys/filter to select one of: {matching}"
571 )
571 )
572
572
573 matched = matching[0]
573 matched = matching[0]
574 old_filter = matched.filter
574 old_filter = matched.filter
575 old_keys = list(matched.keys)
575 old_keys = list(matched.keys)
576 shortcuts_to_skip.append(
576 shortcuts_to_skip.append(
577 RuntimeBinding(
577 RuntimeBinding(
578 command,
578 command,
579 keys=old_keys,
579 keys=old_keys,
580 filter=old_filter,
580 filter=old_filter,
581 )
581 )
582 )
582 )
583
583
584 if new_keys != []:
584 if new_keys != []:
585 shortcuts_to_add.append(
585 shortcuts_to_add.append(
586 RuntimeBinding(
586 RuntimeBinding(
587 command,
587 command,
588 keys=new_keys or old_keys,
588 keys=new_keys or old_keys,
589 filter=filter_from_string(new_filter)
590 if new_filter is not None
591 else (
592 old_filter
593 if old_filter is not None
594 else filter_from_string("always")
595 ),
596 )
597 )
589 filter=(
590 filter_from_string(new_filter)
591 if new_filter is not None
592 else (
593 old_filter
594 if old_filter is not None
595 else filter_from_string("always")
596 )
597 ),
598 )
599 )
598
600
599 # rebuild the bindings list from scratch
601 # rebuild the bindings list from scratch
600 key_bindings = create_ipython_shortcuts(self, skip=shortcuts_to_skip)
602 key_bindings = create_ipython_shortcuts(self, skip=shortcuts_to_skip)
601 for binding in shortcuts_to_add:
603 for binding in shortcuts_to_add:
602 add_binding(key_bindings, binding)
604 add_binding(key_bindings, binding)
603
605
604 return key_bindings
606 return key_bindings
605
607
606 prompt_includes_vi_mode = Bool(True,
608 prompt_includes_vi_mode = Bool(True,
607 help="Display the current vi mode (when using vi editing mode)."
609 help="Display the current vi mode (when using vi editing mode)."
608 ).tag(config=True)
610 ).tag(config=True)
609
611
610 prompt_line_number_format = Unicode(
612 prompt_line_number_format = Unicode(
611 "",
613 "",
612 help="The format for line numbering, will be passed `line` (int, 1 based)"
614 help="The format for line numbering, will be passed `line` (int, 1 based)"
613 " the current line number and `rel_line` the relative line number."
615 " the current line number and `rel_line` the relative line number."
614 " for example to display both you can use the following template string :"
616 " for example to display both you can use the following template string :"
615 " c.TerminalInteractiveShell.prompt_line_number_format='{line: 4d}/{rel_line:+03d} | '"
617 " c.TerminalInteractiveShell.prompt_line_number_format='{line: 4d}/{rel_line:+03d} | '"
616 " This will display the current line number, with leading space and a width of at least 4"
618 " This will display the current line number, with leading space and a width of at least 4"
617 " character, as well as the relative line number 0 padded and always with a + or - sign."
619 " character, as well as the relative line number 0 padded and always with a + or - sign."
618 " Note that when using Emacs mode the prompt of the first line may not update.",
620 " Note that when using Emacs mode the prompt of the first line may not update.",
619 ).tag(config=True)
621 ).tag(config=True)
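# With the template shown in the help text above, the continuation prompt on the
# second line of a cell would render roughly as "   2/+01 | " (a sketch; the exact
# output depends on the format spec and the current line).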
620
622
621 @observe('term_title')
623 @observe('term_title')
622 def init_term_title(self, change=None):
624 def init_term_title(self, change=None):
623 # Enable or disable the terminal title.
625 # Enable or disable the terminal title.
624 if self.term_title and _is_tty:
626 if self.term_title and _is_tty:
625 toggle_set_term_title(True)
627 toggle_set_term_title(True)
626 set_term_title(self.term_title_format.format(cwd=abbrev_cwd()))
628 set_term_title(self.term_title_format.format(cwd=abbrev_cwd()))
627 else:
629 else:
628 toggle_set_term_title(False)
630 toggle_set_term_title(False)
629
631
630 def restore_term_title(self):
632 def restore_term_title(self):
631 if self.term_title and _is_tty:
633 if self.term_title and _is_tty:
632 restore_term_title()
634 restore_term_title()
633
635
634 def init_display_formatter(self):
636 def init_display_formatter(self):
635 super(TerminalInteractiveShell, self).init_display_formatter()
637 super(TerminalInteractiveShell, self).init_display_formatter()
636 # terminal only supports plain text
638 # terminal only supports plain text
637 self.display_formatter.active_types = ["text/plain"]
639 self.display_formatter.active_types = ["text/plain"]
638
640
639 def init_prompt_toolkit_cli(self):
641 def init_prompt_toolkit_cli(self):
640 if self.simple_prompt:
642 if self.simple_prompt:
641 # Fall back to plain non-interactive output for tests.
643 # Fall back to plain non-interactive output for tests.
642 # This is very limited.
644 # This is very limited.
643 def prompt():
645 def prompt():
644 prompt_text = "".join(x[1] for x in self.prompts.in_prompt_tokens())
646 prompt_text = "".join(x[1] for x in self.prompts.in_prompt_tokens())
645 lines = [input(prompt_text)]
647 lines = [input(prompt_text)]
646 prompt_continuation = "".join(x[1] for x in self.prompts.continuation_prompt_tokens())
648 prompt_continuation = "".join(x[1] for x in self.prompts.continuation_prompt_tokens())
647 while self.check_complete('\n'.join(lines))[0] == 'incomplete':
649 while self.check_complete('\n'.join(lines))[0] == 'incomplete':
648 lines.append( input(prompt_continuation) )
650 lines.append( input(prompt_continuation) )
649 return '\n'.join(lines)
651 return '\n'.join(lines)
650 self.prompt_for_code = prompt
652 self.prompt_for_code = prompt
651 return
653 return
652
654
653 # Set up keyboard shortcuts
655 # Set up keyboard shortcuts
654 key_bindings = self._merge_shortcuts(user_shortcuts=self.shortcuts)
656 key_bindings = self._merge_shortcuts(user_shortcuts=self.shortcuts)
655
657
656 # Pre-populate history from IPython's history database
658 # Pre-populate history from IPython's history database
657 history = PtkHistoryAdapter(self)
659 history = PtkHistoryAdapter(self)
658
660
659 self._style = self._make_style_from_name_or_cls(self.highlighting_style)
661 self._style = self._make_style_from_name_or_cls(self.highlighting_style)
660 self.style = DynamicStyle(lambda: self._style)
662 self.style = DynamicStyle(lambda: self._style)
661
663
662 editing_mode = getattr(EditingMode, self.editing_mode.upper())
664 editing_mode = getattr(EditingMode, self.editing_mode.upper())
663
665
664 self._use_asyncio_inputhook = False
666 self._use_asyncio_inputhook = False
665 self.pt_app = PromptSession(
667 self.pt_app = PromptSession(
666 auto_suggest=self.auto_suggest,
668 auto_suggest=self.auto_suggest,
667 editing_mode=editing_mode,
669 editing_mode=editing_mode,
668 key_bindings=key_bindings,
670 key_bindings=key_bindings,
669 history=history,
671 history=history,
670 completer=IPythonPTCompleter(shell=self),
672 completer=IPythonPTCompleter(shell=self),
671 enable_history_search=self.enable_history_search,
673 enable_history_search=self.enable_history_search,
672 style=self.style,
674 style=self.style,
673 include_default_pygments_style=False,
675 include_default_pygments_style=False,
674 mouse_support=self.mouse_support,
676 mouse_support=self.mouse_support,
675 enable_open_in_editor=self.extra_open_editor_shortcuts,
677 enable_open_in_editor=self.extra_open_editor_shortcuts,
676 color_depth=self.color_depth,
678 color_depth=self.color_depth,
677 tempfile_suffix=".py",
679 tempfile_suffix=".py",
678 **self._extra_prompt_options(),
680 **self._extra_prompt_options(),
679 )
681 )
680 if isinstance(self.auto_suggest, NavigableAutoSuggestFromHistory):
682 if isinstance(self.auto_suggest, NavigableAutoSuggestFromHistory):
681 self.auto_suggest.connect(self.pt_app)
683 self.auto_suggest.connect(self.pt_app)
682
684
683 def _make_style_from_name_or_cls(self, name_or_cls):
685 def _make_style_from_name_or_cls(self, name_or_cls):
684 """
686 """
685 Small wrapper that makes an IPython-compatible style from a style name.
687 Small wrapper that makes an IPython-compatible style from a style name.
686
688
687 We need this to add styles for the prompt, etc.
689 We need this to add styles for the prompt, etc.
688 """
690 """
689 style_overrides = {}
691 style_overrides = {}
690 if name_or_cls == 'legacy':
692 if name_or_cls == 'legacy':
691 legacy = self.colors.lower()
693 legacy = self.colors.lower()
692 if legacy == 'linux':
694 if legacy == 'linux':
693 style_cls = get_style_by_name('monokai')
695 style_cls = get_style_by_name('monokai')
694 style_overrides = _style_overrides_linux
696 style_overrides = _style_overrides_linux
695 elif legacy == 'lightbg':
697 elif legacy == 'lightbg':
696 style_overrides = _style_overrides_light_bg
698 style_overrides = _style_overrides_light_bg
697 style_cls = get_style_by_name('pastie')
699 style_cls = get_style_by_name('pastie')
698 elif legacy == 'neutral':
700 elif legacy == 'neutral':
699 # The default theme needs to be visible on both a dark background
701 # The default theme needs to be visible on both a dark background
700 # and a light background, because we can't tell what the terminal
702 # and a light background, because we can't tell what the terminal
701 # looks like. These tweaks to the default theme help with that.
703 # looks like. These tweaks to the default theme help with that.
702 style_cls = get_style_by_name('default')
704 style_cls = get_style_by_name('default')
703 style_overrides.update({
705 style_overrides.update({
704 Token.Number: '#ansigreen',
706 Token.Number: '#ansigreen',
705 Token.Operator: 'noinherit',
707 Token.Operator: 'noinherit',
706 Token.String: '#ansiyellow',
708 Token.String: '#ansiyellow',
707 Token.Name.Function: '#ansiblue',
709 Token.Name.Function: '#ansiblue',
708 Token.Name.Class: 'bold #ansiblue',
710 Token.Name.Class: 'bold #ansiblue',
709 Token.Name.Namespace: 'bold #ansiblue',
711 Token.Name.Namespace: 'bold #ansiblue',
710 Token.Name.Variable.Magic: '#ansiblue',
712 Token.Name.Variable.Magic: '#ansiblue',
711 Token.Prompt: '#ansigreen',
713 Token.Prompt: '#ansigreen',
712 Token.PromptNum: '#ansibrightgreen bold',
714 Token.PromptNum: '#ansibrightgreen bold',
713 Token.OutPrompt: '#ansired',
715 Token.OutPrompt: '#ansired',
714 Token.OutPromptNum: '#ansibrightred bold',
716 Token.OutPromptNum: '#ansibrightred bold',
715 })
717 })
716
718
717 # Hack: Due to limited color support on the Windows console
719 # Hack: Due to limited color support on the Windows console
718 # the prompt colors will be wrong without this
720 # the prompt colors will be wrong without this
719 if os.name == 'nt':
721 if os.name == 'nt':
720 style_overrides.update({
722 style_overrides.update({
721 Token.Prompt: '#ansidarkgreen',
723 Token.Prompt: '#ansidarkgreen',
722 Token.PromptNum: '#ansigreen bold',
724 Token.PromptNum: '#ansigreen bold',
723 Token.OutPrompt: '#ansidarkred',
725 Token.OutPrompt: '#ansidarkred',
724 Token.OutPromptNum: '#ansired bold',
726 Token.OutPromptNum: '#ansired bold',
725 })
727 })
726 elif legacy =='nocolor':
728 elif legacy =='nocolor':
727 style_cls=_NoStyle
729 style_cls=_NoStyle
728 style_overrides = {}
730 style_overrides = {}
729 else :
731 else :
730 raise ValueError('Got unknown colors: ', legacy)
732 raise ValueError('Got unknown colors: ', legacy)
731 else :
733 else :
732 if isinstance(name_or_cls, str):
734 if isinstance(name_or_cls, str):
733 style_cls = get_style_by_name(name_or_cls)
735 style_cls = get_style_by_name(name_or_cls)
734 else:
736 else:
735 style_cls = name_or_cls
737 style_cls = name_or_cls
736 style_overrides = {
738 style_overrides = {
737 Token.Prompt: '#ansigreen',
739 Token.Prompt: '#ansigreen',
738 Token.PromptNum: '#ansibrightgreen bold',
740 Token.PromptNum: '#ansibrightgreen bold',
739 Token.OutPrompt: '#ansired',
741 Token.OutPrompt: '#ansired',
740 Token.OutPromptNum: '#ansibrightred bold',
742 Token.OutPromptNum: '#ansibrightred bold',
741 }
743 }
742 style_overrides.update(self.highlighting_style_overrides)
744 style_overrides.update(self.highlighting_style_overrides)
743 style = merge_styles([
745 style = merge_styles([
744 style_from_pygments_cls(style_cls),
746 style_from_pygments_cls(style_cls),
745 style_from_pygments_dict(style_overrides),
747 style_from_pygments_dict(style_overrides),
746 ])
748 ])
747
749
748 return style
750 return style
749
751
750 @property
752 @property
751 def pt_complete_style(self):
753 def pt_complete_style(self):
752 return {
754 return {
753 'multicolumn': CompleteStyle.MULTI_COLUMN,
755 'multicolumn': CompleteStyle.MULTI_COLUMN,
754 'column': CompleteStyle.COLUMN,
756 'column': CompleteStyle.COLUMN,
755 'readlinelike': CompleteStyle.READLINE_LIKE,
757 'readlinelike': CompleteStyle.READLINE_LIKE,
756 }[self.display_completions]
758 }[self.display_completions]
757
759
758 @property
760 @property
759 def color_depth(self):
761 def color_depth(self):
760 return (ColorDepth.TRUE_COLOR if self.true_color else None)
762 return (ColorDepth.TRUE_COLOR if self.true_color else None)
761
763
762 def _extra_prompt_options(self):
764 def _extra_prompt_options(self):
763 """
765 """
764 Return the current layout options for the current TerminalInteractiveShell.
766 Return the current layout options for the current TerminalInteractiveShell.
765 """
767 """
766 def get_message():
768 def get_message():
767 return PygmentsTokens(self.prompts.in_prompt_tokens())
769 return PygmentsTokens(self.prompts.in_prompt_tokens())
768
770
769 if self.editing_mode == "emacs" and self.prompt_line_number_format == "":
771 if self.editing_mode == "emacs" and self.prompt_line_number_format == "":
770 # with emacs mode the prompt is (usually) static, so we call only
772 # with emacs mode the prompt is (usually) static, so we call only
771 # the function once. With VI mode it can toggle between [ins] and
773 # the function once. With VI mode it can toggle between [ins] and
772 # [nor] so we can't precompute.
774 # [nor] so we can't precompute.
773 # here I'm going to favor the default keybinding which almost
775 # here I'm going to favor the default keybinding which almost
774 # everybody uses to decrease CPU usage.
776 # everybody uses to decrease CPU usage.
775 # if we have issues with users with custom Prompts we can see how to
777 # if we have issues with users with custom Prompts we can see how to
776 # work around this.
778 # work around this.
777 get_message = get_message()
779 get_message = get_message()
778
780
779 options = {
781 options = {
780 "complete_in_thread": False,
782 "complete_in_thread": False,
781 "lexer": IPythonPTLexer(),
783 "lexer": IPythonPTLexer(),
782 "reserve_space_for_menu": self.space_for_menu,
784 "reserve_space_for_menu": self.space_for_menu,
783 "message": get_message,
785 "message": get_message,
784 "prompt_continuation": (
786 "prompt_continuation": (
785 lambda width, lineno, is_soft_wrap: PygmentsTokens(
787 lambda width, lineno, is_soft_wrap: PygmentsTokens(
786 _backward_compat_continuation_prompt_tokens(
788 _backward_compat_continuation_prompt_tokens(
787 self.prompts.continuation_prompt_tokens, width, lineno=lineno
789 self.prompts.continuation_prompt_tokens, width, lineno=lineno
788 )
790 )
789 )
791 )
790 ),
792 ),
791 "multiline": True,
793 "multiline": True,
792 "complete_style": self.pt_complete_style,
794 "complete_style": self.pt_complete_style,
793 "input_processors": [
795 "input_processors": [
794 # Highlight matching brackets, but only when this setting is
796 # Highlight matching brackets, but only when this setting is
795 # enabled, and only when the DEFAULT_BUFFER has the focus.
797 # enabled, and only when the DEFAULT_BUFFER has the focus.
796 ConditionalProcessor(
798 ConditionalProcessor(
797 processor=HighlightMatchingBracketProcessor(chars="[](){}"),
799 processor=HighlightMatchingBracketProcessor(chars="[](){}"),
798 filter=HasFocus(DEFAULT_BUFFER)
800 filter=HasFocus(DEFAULT_BUFFER)
799 & ~IsDone()
801 & ~IsDone()
800 & Condition(lambda: self.highlight_matching_brackets),
802 & Condition(lambda: self.highlight_matching_brackets),
801 ),
803 ),
802 # Show auto-suggestion in lines other than the last line.
804 # Show auto-suggestion in lines other than the last line.
803 ConditionalProcessor(
805 ConditionalProcessor(
804 processor=AppendAutoSuggestionInAnyLine(),
806 processor=AppendAutoSuggestionInAnyLine(),
805 filter=HasFocus(DEFAULT_BUFFER)
807 filter=HasFocus(DEFAULT_BUFFER)
806 & ~IsDone()
808 & ~IsDone()
807 & Condition(
809 & Condition(
808 lambda: isinstance(
810 lambda: isinstance(
809 self.auto_suggest, NavigableAutoSuggestFromHistory
811 self.auto_suggest, NavigableAutoSuggestFromHistory
810 )
812 )
811 ),
813 ),
812 ),
814 ),
813 ],
815 ],
814 }
816 }
815 if not PTK3:
817 if not PTK3:
816 options['inputhook'] = self.inputhook
818 options['inputhook'] = self.inputhook
817
819
818 return options
820 return options
819
821
820 def prompt_for_code(self):
822 def prompt_for_code(self):
821 if self.rl_next_input:
823 if self.rl_next_input:
822 default = self.rl_next_input
824 default = self.rl_next_input
823 self.rl_next_input = None
825 self.rl_next_input = None
824 else:
826 else:
825 default = ''
827 default = ''
826
828
827 # In order to make sure that asyncio code written in the
829 # In order to make sure that asyncio code written in the
828 # interactive shell doesn't interfere with the prompt, we run the
830 # interactive shell doesn't interfere with the prompt, we run the
829 # prompt in a different event loop.
831 # prompt in a different event loop.
830 # If we don't do this, people could spawn a coroutine with a
832 # If we don't do this, people could spawn a coroutine with a
831 # while True inside, which would freeze the prompt.
833 # while True inside, which would freeze the prompt.
832
834
833 with patch_stdout(raw=True):
835 with patch_stdout(raw=True):
834 if self._use_asyncio_inputhook:
836 if self._use_asyncio_inputhook:
835 # When we integrate the asyncio event loop, run the UI in the
837 # When we integrate the asyncio event loop, run the UI in the
836 # same event loop as the rest of the code. don't use an actual
838 # same event loop as the rest of the code. don't use an actual
837 # input hook. (Asyncio is not made for nesting event loops.)
839 # input hook. (Asyncio is not made for nesting event loops.)
838 asyncio_loop = get_asyncio_loop()
840 asyncio_loop = get_asyncio_loop()
839 text = asyncio_loop.run_until_complete(
841 text = asyncio_loop.run_until_complete(
840 self.pt_app.prompt_async(
842 self.pt_app.prompt_async(
841 default=default, **self._extra_prompt_options()
843 default=default, **self._extra_prompt_options()
842 )
844 )
843 )
845 )
844 else:
846 else:
845 text = self.pt_app.prompt(
847 text = self.pt_app.prompt(
846 default=default,
848 default=default,
847 inputhook=self._inputhook,
849 inputhook=self._inputhook,
848 **self._extra_prompt_options(),
850 **self._extra_prompt_options(),
849 )
851 )
850
852
851 return text
853 return text
852
854
853 def enable_win_unicode_console(self):
855 def enable_win_unicode_console(self):
854 # Since IPython 7.10 no longer supports Python < 3.6, and PEP 528 makes Python use the unicode APIs for the Windows
856 # Since IPython 7.10 no longer supports Python < 3.6, and PEP 528 makes Python use the unicode APIs for the Windows
855 # console by default, WUC shouldn't be needed.
857 # console by default, WUC shouldn't be needed.
856 warn("`enable_win_unicode_console` is deprecated since IPython 7.10, does not do anything and will be removed in the future",
858 warn("`enable_win_unicode_console` is deprecated since IPython 7.10, does not do anything and will be removed in the future",
857 DeprecationWarning,
859 DeprecationWarning,
858 stacklevel=2)
860 stacklevel=2)
859
861
860 def init_io(self):
862 def init_io(self):
861 if sys.platform not in {'win32', 'cli'}:
863 if sys.platform not in {'win32', 'cli'}:
862 return
864 return
863
865
864 import colorama
866 import colorama
865 colorama.init()
867 colorama.init()
866
868
867 def init_magics(self):
869 def init_magics(self):
868 super(TerminalInteractiveShell, self).init_magics()
870 super(TerminalInteractiveShell, self).init_magics()
869 self.register_magics(TerminalMagics)
871 self.register_magics(TerminalMagics)
870
872
871 def init_alias(self):
873 def init_alias(self):
872 # The parent class defines aliases that can be safely used with any
874 # The parent class defines aliases that can be safely used with any
873 # frontend.
875 # frontend.
874 super(TerminalInteractiveShell, self).init_alias()
876 super(TerminalInteractiveShell, self).init_alias()
875
877
876 # Now define aliases that only make sense on the terminal, because they
878 # Now define aliases that only make sense on the terminal, because they
877 # need direct access to the console in a way that we can't emulate in
879 # need direct access to the console in a way that we can't emulate in
878 # GUI or web frontend
880 # GUI or web frontend
879 if os.name == 'posix':
881 if os.name == 'posix':
880 for cmd in ('clear', 'more', 'less', 'man'):
882 for cmd in ('clear', 'more', 'less', 'man'):
881 self.alias_manager.soft_define_alias(cmd, cmd)
883 self.alias_manager.soft_define_alias(cmd, cmd)
882
884
883 def __init__(self, *args, **kwargs) -> None:
885 def __init__(self, *args, **kwargs) -> None:
884 super(TerminalInteractiveShell, self).__init__(*args, **kwargs)
886 super(TerminalInteractiveShell, self).__init__(*args, **kwargs)
885 self._set_autosuggestions(self.autosuggestions_provider)
887 self._set_autosuggestions(self.autosuggestions_provider)
886 self.init_prompt_toolkit_cli()
888 self.init_prompt_toolkit_cli()
887 self.init_term_title()
889 self.init_term_title()
888 self.keep_running = True
890 self.keep_running = True
889 self._set_formatter(self.autoformatter)
891 self._set_formatter(self.autoformatter)
890
892
891 def ask_exit(self):
893 def ask_exit(self):
892 self.keep_running = False
894 self.keep_running = False
893
895
894 rl_next_input = None
896 rl_next_input = None
895
897
896 def interact(self):
898 def interact(self):
897 self.keep_running = True
899 self.keep_running = True
898 while self.keep_running:
900 while self.keep_running:
899 print(self.separate_in, end='')
901 print(self.separate_in, end='')
900
902
901 try:
903 try:
902 code = self.prompt_for_code()
904 code = self.prompt_for_code()
903 except EOFError:
905 except EOFError:
904 if (not self.confirm_exit) \
906 if (not self.confirm_exit) \
905 or self.ask_yes_no('Do you really want to exit ([y]/n)?','y','n'):
907 or self.ask_yes_no('Do you really want to exit ([y]/n)?','y','n'):
906 self.ask_exit()
908 self.ask_exit()
907
909
908 else:
910 else:
909 if code:
911 if code:
910 self.run_cell(code, store_history=True)
912 self.run_cell(code, store_history=True)
911
913
912 def mainloop(self):
914 def mainloop(self):
913 # An extra layer of protection in case someone mashing Ctrl-C breaks
915 # An extra layer of protection in case someone mashing Ctrl-C breaks
914 # out of our internal code.
916 # out of our internal code.
915 while True:
917 while True:
916 try:
918 try:
917 self.interact()
919 self.interact()
918 break
920 break
919 except KeyboardInterrupt as e:
921 except KeyboardInterrupt as e:
920 print("\n%s escaped interact()\n" % type(e).__name__)
922 print("\n%s escaped interact()\n" % type(e).__name__)
921 finally:
923 finally:
922 # An interrupt during the eventloop will mess up the
924 # An interrupt during the eventloop will mess up the
923 # internal state of the prompt_toolkit library.
925 # internal state of the prompt_toolkit library.
924 # Stopping the eventloop fixes this, see
926 # Stopping the eventloop fixes this, see
925 # https://github.com/ipython/ipython/pull/9867
927 # https://github.com/ipython/ipython/pull/9867
926 if hasattr(self, '_eventloop'):
928 if hasattr(self, '_eventloop'):
927 self._eventloop.stop()
929 self._eventloop.stop()
928
930
929 self.restore_term_title()
931 self.restore_term_title()
930
932
931 # try to call some at-exit operation optimistically as some things can't
933 # try to call some at-exit operation optimistically as some things can't
932 # be done during interpreter shutdown. this is technically inaccurate as
934 # be done during interpreter shutdown. this is technically inaccurate as
933 # this make mainlool not re-callable, but that should be a rare if not
935 # this make mainlool not re-callable, but that should be a rare if not
934 # in existent use case.
936 # in existent use case.
935
937
936 self._atexit_once()
938 self._atexit_once()
937
939
938 _inputhook = None
940 _inputhook = None
939 def inputhook(self, context):
941 def inputhook(self, context):
940 if self._inputhook is not None:
942 if self._inputhook is not None:
941 self._inputhook(context)
943 self._inputhook(context)
942
944
943 active_eventloop: Optional[str] = None
945 active_eventloop: Optional[str] = None
944
946
945 def enable_gui(self, gui: Optional[str] = None) -> None:
947 def enable_gui(self, gui: Optional[str] = None) -> None:
946 if gui:
948 if gui:
947 from ..core.pylabtools import _convert_gui_from_matplotlib
949 from ..core.pylabtools import _convert_gui_from_matplotlib
948
950
949 gui = _convert_gui_from_matplotlib(gui)
951 gui = _convert_gui_from_matplotlib(gui)
950
952
951 if self.simple_prompt is True and gui is not None:
953 if self.simple_prompt is True and gui is not None:
952 print(
954 print(
953 f'Cannot install event loop hook for "{gui}" when running with `--simple-prompt`.'
955 f'Cannot install event loop hook for "{gui}" when running with `--simple-prompt`.'
954 )
956 )
955 print(
957 print(
956 "NOTE: Tk is supported natively; use Tk apps and Tk backends with `--simple-prompt`."
958 "NOTE: Tk is supported natively; use Tk apps and Tk backends with `--simple-prompt`."
957 )
959 )
958 return
960 return
959
961
960 if self._inputhook is None and gui is None:
962 if self._inputhook is None and gui is None:
961 print("No event loop hook running.")
963 print("No event loop hook running.")
962 return
964 return
963
965
964 if self._inputhook is not None and gui is not None:
966 if self._inputhook is not None and gui is not None:
965 newev, newinhook = get_inputhook_name_and_func(gui)
967 newev, newinhook = get_inputhook_name_and_func(gui)
966 if self._inputhook == newinhook:
968 if self._inputhook == newinhook:
967 # same inputhook, do nothing
969 # same inputhook, do nothing
968 self.log.info(
970 self.log.info(
969 f"Shell is already running the {self.active_eventloop} eventloop. Doing nothing"
971 f"Shell is already running the {self.active_eventloop} eventloop. Doing nothing"
970 )
972 )
971 return
973 return
972 self.log.warning(
974 self.log.warning(
973 f"Shell is already running a different gui event loop for {self.active_eventloop}. "
975 f"Shell is already running a different gui event loop for {self.active_eventloop}. "
974 "Call with no arguments to disable the current loop."
976 "Call with no arguments to disable the current loop."
975 )
977 )
976 return
978 return
977 if self._inputhook is not None and gui is None:
979 if self._inputhook is not None and gui is None:
978 self.active_eventloop = self._inputhook = None
980 self.active_eventloop = self._inputhook = None
979
981
980 if gui and (gui not in {None, "webagg"}):
982 if gui and (gui not in {None, "webagg"}):
981 # This hook runs with each cycle of `prompt_toolkit`'s event loop.
983 # This hook runs with each cycle of `prompt_toolkit`'s event loop.
982 self.active_eventloop, self._inputhook = get_inputhook_name_and_func(gui)
984 self.active_eventloop, self._inputhook = get_inputhook_name_and_func(gui)
983 else:
985 else:
984 self.active_eventloop = self._inputhook = None
986 self.active_eventloop = self._inputhook = None
985
987
986 self._use_asyncio_inputhook = gui == "asyncio"
988 self._use_asyncio_inputhook = gui == "asyncio"
987
989
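For context on the branching above, a hedged usage sketch (not part of the diff): `enable_gui` is what the `%gui` line magic ultimately drives, and it can also be called directly on the shell instance. The "qt" hook below assumes a Qt binding is installed; only `enable_gui` and `active_eventloop` come from this diff.

    ip = get_ipython()      # the running TerminalInteractiveShell instance
    ip.enable_gui("qt")     # install the Qt inputhook for the prompt_toolkit loop
    ip.active_eventloop     # name of the active hook, e.g. "qt"
    ip.enable_gui()         # no argument: remove the current hook again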
988 # Run !system commands directly, not through pipes, so terminal programs
990 # Run !system commands directly, not through pipes, so terminal programs
989 # work correctly.
991 # work correctly.
990 system = InteractiveShell.system_raw
992 system = InteractiveShell.system_raw
991
993
992 def auto_rewrite_input(self, cmd):
994 def auto_rewrite_input(self, cmd):
993 """Overridden from the parent class to use fancy rewriting prompt"""
995 """Overridden from the parent class to use fancy rewriting prompt"""
994 if not self.show_rewritten_input:
996 if not self.show_rewritten_input:
995 return
997 return
996
998
997 tokens = self.prompts.rewrite_prompt_tokens()
999 tokens = self.prompts.rewrite_prompt_tokens()
998 if self.pt_app:
1000 if self.pt_app:
999 print_formatted_text(PygmentsTokens(tokens), end='',
1001 print_formatted_text(PygmentsTokens(tokens), end='',
1000 style=self.pt_app.app.style)
1002 style=self.pt_app.app.style)
1001 print(cmd)
1003 print(cmd)
1002 else:
1004 else:
1003 prompt = ''.join(s for t, s in tokens)
1005 prompt = ''.join(s for t, s in tokens)
1004 print(prompt, cmd, sep='')
1006 print(prompt, cmd, sep='')
1005
1007
1006 _prompts_before = None
1008 _prompts_before = None
1007 def switch_doctest_mode(self, mode):
1009 def switch_doctest_mode(self, mode):
1008 """Switch prompts to classic for %doctest_mode"""
1010 """Switch prompts to classic for %doctest_mode"""
1009 if mode:
1011 if mode:
1010 self._prompts_before = self.prompts
1012 self._prompts_before = self.prompts
1011 self.prompts = ClassicPrompts(self)
1013 self.prompts = ClassicPrompts(self)
1012 elif self._prompts_before:
1014 elif self._prompts_before:
1013 self.prompts = self._prompts_before
1015 self.prompts = self._prompts_before
1014 self._prompts_before = None
1016 self._prompts_before = None
1015 # self._update_layout()
1017 # self._update_layout()
1016
1018
1017
1019
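`switch_doctest_mode` above is the hook behind the `%doctest_mode` magic: it swaps the fancy prompts for the classic `>>>`/`...` style and back. Roughly, in a session (exact wording varies between IPython versions):

    In [1]: %doctest_mode
    Exception reporting mode: Plain
    Doctest mode is: ON
    >>> 1 + 1
    2
    >>> %doctest_mode
    Exception reporting mode: Context
    Doctest mode is: OFF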
1018 InteractiveShellABC.register(TerminalInteractiveShell)
1020 InteractiveShellABC.register(TerminalInteractiveShell)
1019
1021
1020 if __name__ == '__main__':
1022 if __name__ == '__main__':
1021 TerminalInteractiveShell.instance().interact()
1023 TerminalInteractiveShell.instance().interact()
@@ -1,104 +1,105
1 """
1 """
2 Utility functions for keybindings with prompt_toolkit.
2 Utility functions for keybindings with prompt_toolkit.
3
3
4 These will be bound to specific key presses and filter modes,
4 These will be bound to specific key presses and filter modes,
5 such as whether we are in edit mode and whether the completer is open.
5 such as whether we are in edit mode and whether the completer is open.
6 """
6 """
7
7 import re
8 import re
8 from prompt_toolkit.key_binding import KeyPressEvent
9 from prompt_toolkit.key_binding import KeyPressEvent
9
10
10
11
11 def parenthesis(event: KeyPressEvent):
12 def parenthesis(event: KeyPressEvent):
12 """Auto-close parenthesis"""
13 """Auto-close parenthesis"""
13 event.current_buffer.insert_text("()")
14 event.current_buffer.insert_text("()")
14 event.current_buffer.cursor_left()
15 event.current_buffer.cursor_left()
15
16
16
17
17 def brackets(event: KeyPressEvent):
18 def brackets(event: KeyPressEvent):
18 """Auto-close brackets"""
19 """Auto-close brackets"""
19 event.current_buffer.insert_text("[]")
20 event.current_buffer.insert_text("[]")
20 event.current_buffer.cursor_left()
21 event.current_buffer.cursor_left()
21
22
22
23
23 def braces(event: KeyPressEvent):
24 def braces(event: KeyPressEvent):
24 """Auto-close braces"""
25 """Auto-close braces"""
25 event.current_buffer.insert_text("{}")
26 event.current_buffer.insert_text("{}")
26 event.current_buffer.cursor_left()
27 event.current_buffer.cursor_left()
27
28
28
29
29 def double_quote(event: KeyPressEvent):
30 def double_quote(event: KeyPressEvent):
30 """Auto-close double quotes"""
31 """Auto-close double quotes"""
31 event.current_buffer.insert_text('""')
32 event.current_buffer.insert_text('""')
32 event.current_buffer.cursor_left()
33 event.current_buffer.cursor_left()
33
34
34
35
35 def single_quote(event: KeyPressEvent):
36 def single_quote(event: KeyPressEvent):
36 """Auto-close single quotes"""
37 """Auto-close single quotes"""
37 event.current_buffer.insert_text("''")
38 event.current_buffer.insert_text("''")
38 event.current_buffer.cursor_left()
39 event.current_buffer.cursor_left()
39
40
40
41
41 def docstring_double_quotes(event: KeyPressEvent):
42 def docstring_double_quotes(event: KeyPressEvent):
42 """Auto-close docstring (double quotes)"""
43 """Auto-close docstring (double quotes)"""
43 event.current_buffer.insert_text('""""')
44 event.current_buffer.insert_text('""""')
44 event.current_buffer.cursor_left(3)
45 event.current_buffer.cursor_left(3)
45
46
46
47
47 def docstring_single_quotes(event: KeyPressEvent):
48 def docstring_single_quotes(event: KeyPressEvent):
48 """Auto-close docstring (single quotes)"""
49 """Auto-close docstring (single quotes)"""
49 event.current_buffer.insert_text("''''")
50 event.current_buffer.insert_text("''''")
50 event.current_buffer.cursor_left(3)
51 event.current_buffer.cursor_left(3)
51
52
52
53
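A quick worked trace of the two docstring handlers above, assuming (as a plausible binding, not shown in this file) they fire when the quote just typed is preceded by two identical quotes:

    # buffer before the keypress:   ""|          (| marks the cursor)
    # handler inserts '""""':       """"""|
    # cursor_left(3):               """|"""      six quotes total, cursor in the middle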
53 def raw_string_parenthesis(event: KeyPressEvent):
54 def raw_string_parenthesis(event: KeyPressEvent):
54 """Auto-close parenthesis in raw strings"""
55 """Auto-close parenthesis in raw strings"""
55 matches = re.match(
56 matches = re.match(
56 r".*(r|R)[\"'](-*)",
57 r".*(r|R)[\"'](-*)",
57 event.current_buffer.document.current_line_before_cursor,
58 event.current_buffer.document.current_line_before_cursor,
58 )
59 )
59 dashes = matches.group(2) if matches else ""
60 dashes = matches.group(2) if matches else ""
60 event.current_buffer.insert_text("()" + dashes)
61 event.current_buffer.insert_text("()" + dashes)
61 event.current_buffer.cursor_left(len(dashes) + 1)
62 event.current_buffer.cursor_left(len(dashes) + 1)
62
63
63
64
64 def raw_string_bracket(event: KeyPressEvent):
65 def raw_string_bracket(event: KeyPressEvent):
65 """Auto-close bracker in raw strings"""
66 """Auto-close bracker in raw strings"""
66 matches = re.match(
67 matches = re.match(
67 r".*(r|R)[\"'](-*)",
68 r".*(r|R)[\"'](-*)",
68 event.current_buffer.document.current_line_before_cursor,
69 event.current_buffer.document.current_line_before_cursor,
69 )
70 )
70 dashes = matches.group(2) if matches else ""
71 dashes = matches.group(2) if matches else ""
71 event.current_buffer.insert_text("[]" + dashes)
72 event.current_buffer.insert_text("[]" + dashes)
72 event.current_buffer.cursor_left(len(dashes) + 1)
73 event.current_buffer.cursor_left(len(dashes) + 1)
73
74
74
75
75 def raw_string_braces(event: KeyPressEvent):
76 def raw_string_braces(event: KeyPressEvent):
76 """Auto-close braces in raw strings"""
77 """Auto-close braces in raw strings"""
77 matches = re.match(
78 matches = re.match(
78 r".*(r|R)[\"'](-*)",
79 r".*(r|R)[\"'](-*)",
79 event.current_buffer.document.current_line_before_cursor,
80 event.current_buffer.document.current_line_before_cursor,
80 )
81 )
81 dashes = matches.group(2) if matches else ""
82 dashes = matches.group(2) if matches else ""
82 event.current_buffer.insert_text("{}" + dashes)
83 event.current_buffer.insert_text("{}" + dashes)
83 event.current_buffer.cursor_left(len(dashes) + 1)
84 event.current_buffer.cursor_left(len(dashes) + 1)
84
85
85
86
86 def skip_over(event: KeyPressEvent):
87 def skip_over(event: KeyPressEvent):
87 """Skip over automatically added parenthesis/quote.
88 """Skip over automatically added parenthesis/quote.
88
89
89 (rather than adding another parenthesis/quote)"""
90 (rather than adding another parenthesis/quote)"""
90 event.current_buffer.cursor_right()
91 event.current_buffer.cursor_right()
91
92
92
93
93 def delete_pair(event: KeyPressEvent):
94 def delete_pair(event: KeyPressEvent):
94 """Delete auto-closed parenthesis"""
95 """Delete auto-closed parenthesis"""
95 event.current_buffer.delete()
96 event.current_buffer.delete()
96 event.current_buffer.delete_before_cursor()
97 event.current_buffer.delete_before_cursor()
97
98
98
99
99 auto_match_parens = {"(": parenthesis, "[": brackets, "{": braces}
100 auto_match_parens = {"(": parenthesis, "[": brackets, "{": braces}
100 auto_match_parens_raw_string = {
101 auto_match_parens_raw_string = {
101 "(": raw_string_parenthesis,
102 "(": raw_string_parenthesis,
102 "[": raw_string_bracket,
103 "[": raw_string_bracket,
103 "{": raw_string_braces,
104 "{": raw_string_braces,
104 }
105 }
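The two dictionaries above map an opening character to its handler; the real key bindings and their filters live elsewhere in IPython's shortcuts package. A minimal, hedged sketch of wiring these handlers into a standalone prompt_toolkit session, assuming the names defined above (auto_match_parens, double_quote, single_quote) are in scope; the unconditional quote bindings are an illustration, not IPython's actual filters:

    from prompt_toolkit import PromptSession
    from prompt_toolkit.key_binding import KeyBindings

    kb = KeyBindings()

    # one binding per opening character; each handler inserts the pair itself
    for key, handler in auto_match_parens.items():
        kb.add(key)(handler)

    # quotes bound unconditionally for the sketch; IPython gates these on context
    kb.add('"')(double_quote)
    kb.add("'")(single_quote)

    session = PromptSession(key_bindings=kb)
    session.prompt(">>> ")  # typing "(" now yields "()" with the cursor inside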