reformat with darker
Matthias Bussonnier
@@ -1,3347 +1,3346 @@
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now support a wide variety of completion mechanism both available
7 This module now support a wide variety of completion mechanism both available
8 for normal classic Python code, as well as completer for IPython specific
8 for normal classic Python code, as well as completer for IPython specific
9 Syntax like magics.
9 Syntax like magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends not only can complete your code, but can help
14 IPython and compatible frontends not only can complete your code, but can help
15 you to input a wide range of characters. In particular we allow you to insert
15 you to input a wide range of characters. In particular we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
Forward completion allows you to easily type a unicode character using its latex
name, or unicode long description. To do so type a backslash followed by the
relevant name and press tab:


Using latex completion:

.. code::

    \\alpha<tab>
    α

or using unicode completion:


.. code::

    \\GREEK SMALL LETTER ALPHA<tab>
    α


Only valid Python identifiers will complete. Combining characters (like arrows or
dots) are also available; unlike latex, they need to be put after their
counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.

Some browsers are known to display combining characters incorrectly.

Backward latex completion
-------------------------

It is sometimes challenging to know how to type a character. If you are using
IPython, or any compatible frontend, you can prepend a backslash to the character
and press :kbd:`Tab` to expand it to its latex form.

.. code::

    \\α<tab>
    \\alpha


Both forward and backward completions can be deactivated by setting the
:std:configtrait:`Completer.backslash_combining_completions` option to
``False``.


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions both using static analysis of the code, and dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and
will raise unless used in an :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if the current
IPython completer's pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is an integer without
executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
option.

Be sure to update :any:`jedi` to the latest stable version or to try the
current development version to get better completions.

Matchers
========

All completion routines are implemented using a unified *Matchers* API.
The matchers API is provisional and subject to change without notice.

The built-in matchers include:

- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
- :any:`IPCompleter.magic_matcher`: completions for magics,
- :any:`IPCompleter.unicode_name_matcher`,
  :any:`IPCompleter.fwd_unicode_matcher`
  and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
- :any:`IPCompleter.file_matcher`: paths to files and directories,
- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
  implementation in :any:`InteractiveShell` which uses the IPython hooks system
  (`complete_command`) with string dispatch (including regular expressions).
  Unlike other matchers, ``custom_completer_matcher`` will not suppress
  Jedi results, to match behaviour in earlier IPython versions.

Custom matchers can be added by appending to the ``IPCompleter.custom_matchers``
list, as in the sketch below.
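
For instance, a minimal sketch of a v1 matcher registered at runtime (the
``my_command_matcher`` name and the ``arg1``/``arg2`` candidates are purely
illustrative, and ``ip`` is assumed to come from ``get_ipython()``):

.. code-block:: python

    def my_command_matcher(text: str) -> list[str]:
        # Matcher API v1: receive the token being completed, return candidates.
        return [w for w in ("arg1", "arg2") if w.startswith(text)]

    ip = get_ipython()
    ip.Completer.custom_matchers.append(my_command_matcher)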

Matcher API
-----------

Simplifying some details, the ``Matcher`` interface can be described as

.. code-block::

    MatcherAPIv1 = Callable[[str], list[str]]
    MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]

    Matcher = MatcherAPIv1 | MatcherAPIv2

The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
and remains supported as the simplest way of generating completions. This is also
currently the only API supported by the IPython hooks system `complete_command`.

To distinguish between matcher versions, the ``matcher_api_version`` attribute is
used. More precisely, the API allows omitting ``matcher_api_version`` for v1
Matchers, and requires a literal ``2`` for v2 Matchers.

Once the API stabilises, future versions may relax the requirement for specifying
``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore
please do not rely on the presence of ``matcher_api_version`` for any purposes.
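
As an illustration only (not an API reference), a v2 matcher might look like the
following sketch; the greeting candidates and the ``greeting_matcher`` name are
made up for the example:

.. code-block:: python

    @context_matcher()
    def greeting_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # Matcher API v2: receive the full context, return a dict-like result.
        token = context.token
        candidates = [w for w in ("hello", "hola") if w.startswith(token)]
        return {
            "completions": [SimpleCompletion(text=w, type="keyword") for w in candidates],
            "suppress": False,
        }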

Suppression of competing matchers
---------------------------------

By default, results from all matchers are combined, in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results from
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a list of identifiers of matchers which
should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.

The suppression behaviour is user-configurable via
:std:configtrait:`IPCompleter.suppress_competing_matchers`.
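
For example, a hypothetical matcher (names and completions invented for the
sketch) could hide every other matcher except one:

.. code-block:: python

    @context_matcher()
    def exclusive_matcher(context: CompletionContext) -> SimpleMatcherResult:
        return {
            "completions": [SimpleCompletion(text="%%timeit", type="magic")],
            # Hide results from all other matchers...
            "suppress": True,
            # ...except this one (hypothetical matcher identifier).
            "do_not_suppress": {"some_other_matcher"},
        }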
171 """
171 """
172
172
173
173
174 # Copyright (c) IPython Development Team.
174 # Copyright (c) IPython Development Team.
175 # Distributed under the terms of the Modified BSD License.
175 # Distributed under the terms of the Modified BSD License.
176 #
176 #
177 # Some of this code originated from rlcompleter in the Python standard library
177 # Some of this code originated from rlcompleter in the Python standard library
178 # Copyright (C) 2001 Python Software Foundation, www.python.org
178 # Copyright (C) 2001 Python Software Foundation, www.python.org
179
179
from __future__ import annotations
import builtins as builtin_mod
import enum
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import tokenize
import time
import unicodedata
import uuid
import warnings
from ast import literal_eval
from collections import defaultdict
from contextlib import contextmanager
from dataclasses import dataclass
from functools import cached_property, partial
from types import SimpleNamespace
from typing import (
    Iterable,
    Iterator,
    List,
    Tuple,
    Union,
    Any,
    Sequence,
    Dict,
    Optional,
    TYPE_CHECKING,
    Set,
    Sized,
    TypeVar,
    Literal,
)

from IPython.core.guarded_eval import guarded_eval, EvaluationContext
from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
from IPython.utils.decorators import sphinx_options
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.docs import GENERATING_DOCUMENTATION
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import (
    Bool,
    Enum,
    Int,
    List as ListTrait,
    Unicode,
    Dict as DictTrait,
    Union as UnionTrait,
    observe,
)
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
__skip_doctest__ = True


try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False


if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
    from typing import cast
    from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
else:
    from typing import Generic

    def cast(type_, obj):
        """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
        return obj

    # do not require at runtime
    NotRequired = Tuple  # requires Python >=3.11
    TypedDict = Dict  # by extension of `NotRequired` requires 3.11 too
    Protocol = object  # requires Python >=3.8
    TypeAlias = Any  # requires Python >=3.10
    TypeGuard = Generic  # requires Python >=3.10
    if GENERATING_DOCUMENTATION:
        from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
# -----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be more fine
# grained, but is it worth it for performance? While unicode has characters in
# the range 0-0x110000, we seem to have names for only about 10% of those
# (131808 as I write this). With the ranges below we cover them all, with a
# density of ~67%; the biggest next gap we could consider only adds about 1%
# density and there are 600 gaps that would need hard coding.
_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

# sentinel value to signal lack of a match
not_found = object()

class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in the :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things()  # works

    >>> completer.do_experimental_things()  # raises.

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should report
        a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements on
        any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False
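
# Illustrative only (not part of the original module): how has_open_quotes behaves.
#
#   has_open_quotes('print("hello')   # -> '"'  (one unmatched double quote)
#   has_open_quotes("a = 'x' + 'y'")  # -> False (all quotes are balanced)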


def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s
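
# Illustrative only: characters in PROTECTABLES are backslash-escaped on POSIX,
# while on Windows the whole string is quoted instead.
#
#   protect_filename("my file.txt")   # POSIX  -> 'my\\ file.txt'
#   protect_filename("my file.txt")   # win32  -> '"my file.txt"'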


def expand_user(path:str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path)-1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val


def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
    """Does the opposite of expand_user, with its outputs.
    """
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path
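
# Illustrative only (the expanded path depends on the local home directory):
#
#   expand_user("~/notebooks")
#   # -> ("/home/alice/notebooks", True, "/home/alice")
#   compress_user("/home/alice/notebooks", True, "/home/alice")
#   # -> "~/notebooks"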


def completions_sorting_key(word):
    """key for sorting completions

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in the alphabetical order
      by their name
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if not "%" in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if not "%" in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2
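
# Illustrative only: sorting with completions_sorting_key interleaves magics with
# plain names by their bare name and pushes underscore-prefixed entries last.
#
#   sorted(["_private", "%%time", "abs", "%ls"], key=completions_sorting_key)
#   # -> ["abs", "%ls", "%%time", "_private"]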


class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0 so should likely be removed for 7.0

    """

    def __init__(self, name):

        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ""
        self._origin = "fake"
        self.text = "crashed"

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]


class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in the proper context manager.

    This acts as a middle-ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how the
    code should be run/inspected, Prompt Toolkit (and other frontends) mostly
    need user-facing information.

    - Which range should be replaced by what.
    - Some metadata (like completion type), or meta information to be displayed
      to the user.

    For debugging purposes we can also store the origin of the completion (``jedi``,
    ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(
        self,
        start: int,
        end: int,
        text: str,
        *,
        type: Optional[str] = None,
        _origin="",
        signature="",
    ) -> None:
        warnings.warn(
            "``Completion`` is a provisional API (as of IPython 6.0). "
            "It may change without warnings. "
            "Use in corresponding context manager.",
            category=ProvisionalCompleterWarning,
            stacklevel=2,
        )

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
            (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other) -> bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer the type), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing them, as it depends on surrounding text, which Completions are
        not aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))


class SimpleCompletion:
    """Completion item to be included in the dictionary returned by new-style Matcher (API v2).

    .. warning::

        Provisional

        This class is used to describe the currently supported attributes of
        simple completion items, and any additional implementation details
        should not be relied on. Additional attributes may be included in
        future versions, and the meaning of ``text`` disambiguated from its
        current dual meaning of "text to insert" and "text to use as a label".
    """

    __slots__ = ["text", "type"]

    def __init__(self, text: str, *, type: Optional[str] = None):
        self.text = text
        self.type = type

    def __repr__(self):
        return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"


class _MatcherResultBase(TypedDict):
    """Definition of dictionary to be returned by new-style Matcher (API v2)."""

    #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
    matched_fragment: NotRequired[str]

    #: Whether to suppress results from all other matchers (True), some
    #: matchers (set of identifiers) or none (False); default is False.
    suppress: NotRequired[Union[bool, Set[str]]]

    #: Identifiers of matchers which should NOT be suppressed when this matcher
    #: requests to suppress all other matchers; defaults to an empty set.
    do_not_suppress: NotRequired[Set[str]]

    #: Are completions already ordered and should be left as-is? default is False.
    ordered: NotRequired[bool]


@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
class SimpleMatcherResult(_MatcherResultBase, TypedDict):
    """Result of new-style completion matcher."""

    # note: TypedDict is added again to the inheritance chain
    # in order to get __orig_bases__ for documentation

    #: List of candidate completions
    completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]


class _JediMatcherResult(_MatcherResultBase):
    """Matching result returned by Jedi (will be processed differently)"""

    #: list of candidate completions
    completions: Iterator[_JediCompletionLike]


AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)


@dataclass
class CompletionContext:
    """Completion context provided as an argument to matchers in the Matcher API v2."""

    # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
    # which was not explicitly visible as an argument of the matcher, making any refactor
    # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
    # from the completer, and make substituting them in sub-classes easier.

    #: Relevant fragment of code directly preceding the cursor.
    #: The extraction of token is implemented via splitter heuristic
    #: (following readline behaviour for legacy reasons), which is user configurable
    #: (by switching the greedy mode).
    token: str

    #: The full available content of the editor or buffer
    full_text: str

    #: Cursor position in the line (the same for ``full_text`` and ``text``).
    cursor_position: int

    #: Cursor line in ``full_text``.
    cursor_line: int

    #: The maximum number of completions that will be used downstream.
    #: Matchers can use this information to abort early.
    #: The built-in Jedi matcher is currently excepted from this limit.
    # If not given, return all possible completions.
    limit: Optional[int]

    @cached_property
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

    @cached_property
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]


#: Matcher results for API v2.
MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]


class _MatcherAPIv1Base(Protocol):
    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
    #: API version
    matcher_api_version: Optional[Literal[1]]

    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...


#: Protocol describing Matcher API v1.
MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]


class MatcherAPIv2(Protocol):
    """Protocol describing Matcher API v2."""

    #: API version
    matcher_api_version: Literal[2] = 2

    def __call__(self, context: CompletionContext) -> MatcherResult:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]


def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 1


def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 2


def _is_sizable(value: Any) -> TypeGuard[Sized]:
    """Determines whether the object is sizable"""
    return hasattr(value, "__len__")


def _is_iterator(value: Any) -> TypeGuard[Iterator]:
    """Determines whether the object is an iterator"""
    return hasattr(value, "__next__")


def has_any_completions(result: MatcherResult) -> bool:
    """Check if the given result includes any completions."""
    completions = result["completions"]
    if _is_sizable(completions):
        return len(completions) != 0
    if _is_iterator(completions):
        try:
            old_iterator = completions
            first = next(old_iterator)
            result["completions"] = cast(
                Iterator[SimpleCompletion],
                itertools.chain([first], old_iterator),
            )
            return True
        except StopIteration:
            return False
    raise ValueError(
        "Completions returned by matcher need to be an Iterator or a Sizable"
    )


def completion_matcher(
    *,
    priority: Optional[float] = None,
    identifier: Optional[str] = None,
    api_version: int = 1,
):
    """Adds attributes describing the matcher.

    Parameters
    ----------
    priority : Optional[float]
        The priority of the matcher, determines the order of execution of matchers.
        Higher priority means that the matcher will be executed first. Defaults to 0.
    identifier : Optional[str]
        identifier of the matcher allowing users to modify the behaviour via traitlets,
        and also used for debugging (will be passed as ``origin`` with the completions).

        Defaults to matcher function's ``__qualname__`` (for example,
        ``IPCompleter.file_matcher`` for the built-in matcher defined
        as a ``file_matcher`` method of the ``IPCompleter`` class).
    api_version: Optional[int]
        version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.
    """

    def wrapper(func: Matcher):
        func.matcher_priority = priority or 0  # type: ignore
        func.matcher_identifier = identifier or func.__qualname__  # type: ignore
        func.matcher_api_version = api_version  # type: ignore
        if TYPE_CHECKING:
            if api_version == 1:
                func = cast(MatcherAPIv1, func)
            elif api_version == 2:
                func = cast(MatcherAPIv2, func)
        return func

    return wrapper


def _get_matcher_priority(matcher: Matcher):
    return getattr(matcher, "matcher_priority", 0)


def _get_matcher_id(matcher: Matcher):
    return getattr(matcher, "matcher_identifier", matcher.__qualname__)


def _get_matcher_api_version(matcher):
    return getattr(matcher, "matcher_api_version", 1)


context_matcher = partial(completion_matcher, api_version=2)


_IC = Iterable[Completion]


def _deduplicate_completions(text: str, completions: _IC) -> _IC:
    """
    Deduplicate a set of completions.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to deduplicate

    Yields
    ------
    `Completions` objects
        Completions coming from multiple sources may be different but end up having
        the same effect when applied to ``text``. If this is the case, this will
        consider completions as equal and only emit the first encountered.
        Not folded in `completions()` yet for debugging purposes, and to detect when
        the IPython completer does return things that Jedi does not, but should be
        at some point.
    """
    completions = list(completions)
    if not completions:
        return

    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)

    seen = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            yield c
            seen.add(new_text)


def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
    """
    Rectify a set of completions to all have the same ``start`` and ``end``.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in the proper context manager.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to rectify
    _debug : bool
        Log failed completion

    Notes
    -----
    :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
    the Jupyter Protocol requires them to behave like so. This will readjust
    the completion to have the same ``start`` and ``end`` by padding both
    extremities with surrounding text.

    During stabilisation this should support a ``_debug`` option to log which
    completions are returned by the IPython completer and not found in Jedi, in
    order to make upstream bug reports.
    """
    warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
                  "It may change without warnings. "
                  "Use in corresponding context manager.",
                  category=ProvisionalCompleterWarning, stacklevel=2)

    completions = list(completions)
    if not completions:
        return
    starts = (c.start for c in completions)
    ends = (c.end for c in completions)

    new_start = min(starts)
    new_end = max(ends)

    seen_jedi = set()
    seen_python_matches = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if c._origin == 'jedi':
            seen_jedi.add(new_text)
        elif c._origin == 'IPCompleter.python_matches':
            seen_python_matches.add(new_text)
        yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
    diff = seen_python_matches.difference(seen_jedi)
    if diff and _debug:
        print('IPython.python matches have extras:', diff)
907
907
908
908
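# Illustrative sketch (not part of the module): how ``rectify_completions`` pads
# completions so that they all share the same ``start``/``end``.  Here
# ``provisionalcompleter`` is assumed to be the context manager defined earlier
# in this module that silences the ProvisionalCompleterWarning.
#
#     >>> with provisionalcompleter():
#     ...     cs = [Completion(2, 5, "foobar", type="function", _origin="jedi", signature=""),
#     ...           Completion(0, 5, "d.food", type="statement", _origin="jedi", signature="")]
#     ...     [(c.start, c.end, c.text) for c in rectify_completions("d.foo", cs)]
#     [(0, 5, 'd.foobar'), (0, 5, 'd.food')]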
909 if sys.platform == 'win32':
909 if sys.platform == 'win32':
910 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
910 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
911 else:
911 else:
912 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
912 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
913
913
914 GREEDY_DELIMS = ' =\r\n'
914 GREEDY_DELIMS = ' =\r\n'
915
915
916
916
917 class CompletionSplitter(object):
917 class CompletionSplitter(object):
918 """An object to split an input line in a manner similar to readline.
918 """An object to split an input line in a manner similar to readline.
919
919
920 By having our own implementation, we can expose readline-like completion in
920 By having our own implementation, we can expose readline-like completion in
921 a uniform manner to all frontends. This object only needs to be given the
921 a uniform manner to all frontends. This object only needs to be given the
922 line of text to be split and the cursor position on said line, and it
922 line of text to be split and the cursor position on said line, and it
923 returns the 'word' to be completed on at the cursor after splitting the
923 returns the 'word' to be completed on at the cursor after splitting the
924 entire line.
924 entire line.
925
925
926 What characters are used as splitting delimiters can be controlled by
926 What characters are used as splitting delimiters can be controlled by
927 setting the ``delims`` attribute (this is a property that internally
927 setting the ``delims`` attribute (this is a property that internally
928 automatically builds the necessary regular expression)"""
928 automatically builds the necessary regular expression)"""
929
929
930 # Private interface
930 # Private interface
931
931
932 # A string of delimiter characters. The default value makes sense for
932 # A string of delimiter characters. The default value makes sense for
933 # IPython's most typical usage patterns.
933 # IPython's most typical usage patterns.
934 _delims = DELIMS
934 _delims = DELIMS
935
935
936 # The expression (a normal string) to be compiled into a regular expression
936 # The expression (a normal string) to be compiled into a regular expression
937 # for actual splitting. We store it as an attribute mostly for ease of
937 # for actual splitting. We store it as an attribute mostly for ease of
938 # debugging, since this type of code can be so tricky to debug.
938 # debugging, since this type of code can be so tricky to debug.
939 _delim_expr = None
939 _delim_expr = None
940
940
941 # The regular expression that does the actual splitting
941 # The regular expression that does the actual splitting
942 _delim_re = None
942 _delim_re = None
943
943
944 def __init__(self, delims=None):
944 def __init__(self, delims=None):
945 delims = CompletionSplitter._delims if delims is None else delims
945 delims = CompletionSplitter._delims if delims is None else delims
946 self.delims = delims
946 self.delims = delims
947
947
948 @property
948 @property
949 def delims(self):
949 def delims(self):
950 """Return the string of delimiter characters."""
950 """Return the string of delimiter characters."""
951 return self._delims
951 return self._delims
952
952
953 @delims.setter
953 @delims.setter
954 def delims(self, delims):
954 def delims(self, delims):
955 """Set the delimiters for line splitting."""
955 """Set the delimiters for line splitting."""
956 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
956 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
957 self._delim_re = re.compile(expr)
957 self._delim_re = re.compile(expr)
958 self._delims = delims
958 self._delims = delims
959 self._delim_expr = expr
959 self._delim_expr = expr
960
960
961 def split_line(self, line, cursor_pos=None):
961 def split_line(self, line, cursor_pos=None):
962 """Split a line of text with a cursor at the given position.
962 """Split a line of text with a cursor at the given position.
963 """
963 """
964 l = line if cursor_pos is None else line[:cursor_pos]
964 l = line if cursor_pos is None else line[:cursor_pos]
965 return self._delim_re.split(l)[-1]
965 return self._delim_re.split(l)[-1]
966
966
967
967
968
968
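# Illustrative sketch (not part of the module): with the default delimiters a
# dot is *not* a split character, so attribute access survives splitting, while
# spaces, operators and brackets do not.
#
#     >>> sp = CompletionSplitter()
#     >>> sp.split_line("a = np.ran")
#     'np.ran'
#     >>> sp.split_line("print(obj.attr, oth")
#     'oth'
#     >>> sp.split_line("a = np.ran", cursor_pos=6)  # only text left of the cursor counts
#     'np'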
969 class Completer(Configurable):
969 class Completer(Configurable):
970
970
971 greedy = Bool(
971 greedy = Bool(
972 False,
972 False,
973 help="""Activate greedy completion.
973 help="""Activate greedy completion.
974
974
975 .. deprecated:: 8.8
975 .. deprecated:: 8.8
976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977
977
978 When enabled in IPython 8.8 or newer, changes configuration as follows:
978 When enabled in IPython 8.8 or newer, changes configuration as follows:
979
979
980 - ``Completer.evaluation = 'unsafe'``
980 - ``Completer.evaluation = 'unsafe'``
981 - ``Completer.auto_close_dict_keys = True``
981 - ``Completer.auto_close_dict_keys = True``
982 """,
982 """,
983 ).tag(config=True)
983 ).tag(config=True)
984
984
985 evaluation = Enum(
985 evaluation = Enum(
986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 default_value="limited",
987 default_value="limited",
988 help="""Policy for code evaluation under completion.
988 help="""Policy for code evaluation under completion.
989
989
990 Successive options enable more eager evaluation for better
990 Successive options enable more eager evaluation for better
991 completion suggestions, including for nested dictionaries, nested lists,
991 completion suggestions, including for nested dictionaries, nested lists,
992 or even results of function calls.
992 or even results of function calls.
993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995
995
996 Allowed values are:
996 Allowed values are:
997
997
998 - ``forbidden``: no evaluation of code is permitted,
998 - ``forbidden``: no evaluation of code is permitted,
999 - ``minimal``: evaluation of literals and access to built-in namespace;
999 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 no item/attribute evaluation, no access to locals/globals,
1000 no item/attribute evaluation, no access to locals/globals,
1001 no evaluation of any operations or comparisons.
1001 no evaluation of any operations or comparisons.
1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 - ``unsafe``: evaluation of all methods and function calls but not of
1006 - ``unsafe``: evaluation of all methods and function calls but not of
1007 syntax with side-effects like ``del x``,
1007 syntax with side-effects like ``del x``,
1008 - ``dangerous``: completely arbitrary evaluation.
1008 - ``dangerous``: completely arbitrary evaluation.
1009 """,
1009 """,
1010 ).tag(config=True)
1010 ).tag(config=True)
1011
1011
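# Illustrative sketch (not part of the class definition): ``evaluation`` is a
# regular traitlets option, so it can be set from a config file or on a live
# completer instance, e.g. (hypothetical session):
#
#     c.Completer.evaluation = "minimal"      # in ipython_config.py
#     ip.Completer.evaluation = "unsafe"      # interactively, with ip = get_ipython()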
1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 help="Experimental: Use Jedi to generate autocompletions. "
1013 help="Experimental: Use Jedi to generate autocompletions. "
1014 "Default to True if jedi is installed.").tag(config=True)
1014 "Default to True if jedi is installed.").tag(config=True)
1015
1015
1016 jedi_compute_type_timeout = Int(default_value=400,
1016 jedi_compute_type_timeout = Int(default_value=400,
1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1018 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1019 performance by preventing jedi from building its cache.
1019 performance by preventing jedi from building its cache.
1020 """).tag(config=True)
1020 """).tag(config=True)
1021
1021
1022 debug = Bool(default_value=False,
1022 debug = Bool(default_value=False,
1023 help='Enable debug for the Completer. Mostly print extra '
1023 help='Enable debug for the Completer. Mostly print extra '
1024 'information for experimental jedi integration.')\
1024 'information for experimental jedi integration.')\
1025 .tag(config=True)
1025 .tag(config=True)
1026
1026
1027 backslash_combining_completions = Bool(True,
1027 backslash_combining_completions = Bool(True,
1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 "Includes completion of latex commands, unicode names, and expanding "
1029 "Includes completion of latex commands, unicode names, and expanding "
1030 "unicode characters back to latex commands.").tag(config=True)
1030 "unicode characters back to latex commands.").tag(config=True)
1031
1031
1032 auto_close_dict_keys = Bool(
1032 auto_close_dict_keys = Bool(
1033 False,
1033 False,
1034 help="""
1034 help="""
1035 Enable auto-closing dictionary keys.
1035 Enable auto-closing dictionary keys.
1036
1036
1037 When enabled, string keys will be suffixed with a final quote
1037 When enabled, string keys will be suffixed with a final quote
1038 (matching the opening quote), tuple keys will also receive a
1038 (matching the opening quote), tuple keys will also receive a
1039 separating comma if needed, and keys which are final will
1039 separating comma if needed, and keys which are final will
1040 receive a closing bracket (``]``).
1040 receive a closing bracket (``]``).
1041 """,
1041 """,
1042 ).tag(config=True)
1042 ).tag(config=True)
1043
1043
1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 """Create a new completer for the command line.
1045 """Create a new completer for the command line.
1046
1046
1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048
1048
1049 If unspecified, the default namespace where completions are performed
1049 If unspecified, the default namespace where completions are performed
1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 given as dictionaries.
1051 given as dictionaries.
1052
1052
1053 An optional second namespace can be given. This allows the completer
1053 An optional second namespace can be given. This allows the completer
1054 to handle cases where both the local and global scopes need to be
1054 to handle cases where both the local and global scopes need to be
1055 distinguished.
1055 distinguished.
1056 """
1056 """
1057
1057
1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 # specific namespace or to use __main__.__dict__. This will allow us
1059 # specific namespace or to use __main__.__dict__. This will allow us
1060 # to bind to __main__.__dict__ at completion time, not now.
1060 # to bind to __main__.__dict__ at completion time, not now.
1061 if namespace is None:
1061 if namespace is None:
1062 self.use_main_ns = True
1062 self.use_main_ns = True
1063 else:
1063 else:
1064 self.use_main_ns = False
1064 self.use_main_ns = False
1065 self.namespace = namespace
1065 self.namespace = namespace
1066
1066
1067 # The global namespace, if given, can be bound directly
1067 # The global namespace, if given, can be bound directly
1068 if global_namespace is None:
1068 if global_namespace is None:
1069 self.global_namespace = {}
1069 self.global_namespace = {}
1070 else:
1070 else:
1071 self.global_namespace = global_namespace
1071 self.global_namespace = global_namespace
1072
1072
1073 self.custom_matchers = []
1073 self.custom_matchers = []
1074
1074
1075 super(Completer, self).__init__(**kwargs)
1075 super(Completer, self).__init__(**kwargs)
1076
1076
1077 def complete(self, text, state):
1077 def complete(self, text, state):
1078 """Return the next possible completion for 'text'.
1078 """Return the next possible completion for 'text'.
1079
1079
1080 This is called successively with state == 0, 1, 2, ... until it
1080 This is called successively with state == 0, 1, 2, ... until it
1081 returns None. The completion should begin with 'text'.
1081 returns None. The completion should begin with 'text'.
1082
1082
1083 """
1083 """
1084 if self.use_main_ns:
1084 if self.use_main_ns:
1085 self.namespace = __main__.__dict__
1085 self.namespace = __main__.__dict__
1086
1086
1087 if state == 0:
1087 if state == 0:
1088 if "." in text:
1088 if "." in text:
1089 self.matches = self.attr_matches(text)
1089 self.matches = self.attr_matches(text)
1090 else:
1090 else:
1091 self.matches = self.global_matches(text)
1091 self.matches = self.global_matches(text)
1092 try:
1092 try:
1093 return self.matches[state]
1093 return self.matches[state]
1094 except IndexError:
1094 except IndexError:
1095 return None
1095 return None
1096
1096
1097 def global_matches(self, text):
1097 def global_matches(self, text):
1098 """Compute matches when text is a simple name.
1098 """Compute matches when text is a simple name.
1099
1099
1100 Return a list of all keywords, built-in functions and names currently
1100 Return a list of all keywords, built-in functions and names currently
1101 defined in self.namespace or self.global_namespace that match.
1101 defined in self.namespace or self.global_namespace that match.
1102
1102
1103 """
1103 """
1104 matches = []
1104 matches = []
1105 match_append = matches.append
1105 match_append = matches.append
1106 n = len(text)
1106 n = len(text)
1107 for lst in [
1107 for lst in [
1108 keyword.kwlist,
1108 keyword.kwlist,
1109 builtin_mod.__dict__.keys(),
1109 builtin_mod.__dict__.keys(),
1110 list(self.namespace.keys()),
1110 list(self.namespace.keys()),
1111 list(self.global_namespace.keys()),
1111 list(self.global_namespace.keys()),
1112 ]:
1112 ]:
1113 for word in lst:
1113 for word in lst:
1114 if word[:n] == text and word != "__builtins__":
1114 if word[:n] == text and word != "__builtins__":
1115 match_append(word)
1115 match_append(word)
1116
1116
1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 shortened = {
1119 shortened = {
1120 "_".join([sub[0] for sub in word.split("_")]): word
1120 "_".join([sub[0] for sub in word.split("_")]): word
1121 for word in lst
1121 for word in lst
1122 if snake_case_re.match(word)
1122 if snake_case_re.match(word)
1123 }
1123 }
1124 for word in shortened.keys():
1124 for word in shortened.keys():
1125 if word[:n] == text and word != "__builtins__":
1125 if word[:n] == text and word != "__builtins__":
1126 match_append(shortened[word])
1126 match_append(shortened[word])
1127 return matches
1127 return matches
1128
1128
1129 def attr_matches(self, text):
1129 def attr_matches(self, text):
1130 """Compute matches when text contains a dot.
1130 """Compute matches when text contains a dot.
1131
1131
1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 evaluatable in self.namespace or self.global_namespace, it will be
1133 evaluatable in self.namespace or self.global_namespace, it will be
1134 evaluated and its attributes (as revealed by dir()) are used as
1134 evaluated and its attributes (as revealed by dir()) are used as
1135 possible completions. (For class instances, class members are
1135 possible completions. (For class instances, class members are
1136 also considered.)
1136 also considered.)
1137
1137
1138 WARNING: this can still invoke arbitrary C code, if an object
1138 WARNING: this can still invoke arbitrary C code, if an object
1139 with a __getattr__ hook is evaluated.
1139 with a __getattr__ hook is evaluated.
1140
1140
1141 """
1141 """
1142 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1142 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1143 if not m2:
1143 if not m2:
1144 return []
1144 return []
1145 expr, attr = m2.group(1, 2)
1145 expr, attr = m2.group(1, 2)
1146
1146
1147 obj = self._evaluate_expr(expr)
1147 obj = self._evaluate_expr(expr)
1148
1148
1149 if obj is not_found:
1149 if obj is not_found:
1150 return []
1150 return []
1151
1151
1152 if self.limit_to__all__ and hasattr(obj, '__all__'):
1152 if self.limit_to__all__ and hasattr(obj, '__all__'):
1153 words = get__all__entries(obj)
1153 words = get__all__entries(obj)
1154 else:
1154 else:
1155 words = dir2(obj)
1155 words = dir2(obj)
1156
1156
1157 try:
1157 try:
1158 words = generics.complete_object(obj, words)
1158 words = generics.complete_object(obj, words)
1159 except TryNext:
1159 except TryNext:
1160 pass
1160 pass
1161 except AssertionError:
1161 except AssertionError:
1162 raise
1162 raise
1163 except Exception:
1163 except Exception:
1164 # Silence errors from completion function
1164 # Silence errors from completion function
1165 pass
1165 pass
1166 # Build match list to return
1166 # Build match list to return
1167 n = len(attr)
1167 n = len(attr)
1168
1168
1169 # Note: ideally we would just return words here and the prefix
1169 # Note: ideally we would just return words here and the prefix
1170 # reconciliator would know that we intend to append to rather than
1170 # reconciliator would know that we intend to append to rather than
1171 # replace the input text; this requires refactoring to return the range
1171 # replace the input text; this requires refactoring to return the range
1172 # which ought to be replaced (as jedi does).
1172 # which ought to be replaced (as jedi does).
1173 tokens = _parse_tokens(expr)
1173 tokens = _parse_tokens(expr)
1174 rev_tokens = reversed(tokens)
1174 rev_tokens = reversed(tokens)
1175 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1175 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1176 name_turn = True
1176 name_turn = True
1177
1177
1178 parts = []
1178 parts = []
1179 for token in rev_tokens:
1179 for token in rev_tokens:
1180 if token.type in skip_over:
1180 if token.type in skip_over:
1181 continue
1181 continue
1182 if token.type == tokenize.NAME and name_turn:
1182 if token.type == tokenize.NAME and name_turn:
1183 parts.append(token.string)
1183 parts.append(token.string)
1184 name_turn = False
1184 name_turn = False
1185 elif token.type == tokenize.OP and token.string == "." and not name_turn:
1185 elif token.type == tokenize.OP and token.string == "." and not name_turn:
1186 parts.append(token.string)
1186 parts.append(token.string)
1187 name_turn = True
1187 name_turn = True
1188 else:
1188 else:
1189 # short-circuit if the token is neither a name nor a '.' in the expected position
1189 # short-circuit if the token is neither a name nor a '.' in the expected position
1190 break
1190 break
1191
1191
1192 prefix_after_space = "".join(reversed(parts))
1192 prefix_after_space = "".join(reversed(parts))
1193
1193
1194 return ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr]
1194 return ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr]
1195
1195
1196 def _evaluate_expr(self, expr):
1196 def _evaluate_expr(self, expr):
1197 obj = not_found
1197 obj = not_found
1198 done = False
1198 done = False
1199 while not done and expr:
1199 while not done and expr:
1200 try:
1200 try:
1201 obj = guarded_eval(
1201 obj = guarded_eval(
1202 expr,
1202 expr,
1203 EvaluationContext(
1203 EvaluationContext(
1204 globals=self.global_namespace,
1204 globals=self.global_namespace,
1205 locals=self.namespace,
1205 locals=self.namespace,
1206 evaluation=self.evaluation,
1206 evaluation=self.evaluation,
1207 ),
1207 ),
1208 )
1208 )
1209 done = True
1209 done = True
1210 except Exception as e:
1210 except Exception as e:
1211 if self.debug:
1211 if self.debug:
1212 print("Evaluation exception", e)
1212 print("Evaluation exception", e)
1213 # trim the expression to remove any invalid prefix
1213 # trim the expression to remove any invalid prefix
1214 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1214 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1215 # where parenthesis is not closed.
1215 # where parenthesis is not closed.
1216 # TODO: make this faster by reusing parts of the computation?
1216 # TODO: make this faster by reusing parts of the computation?
1217 expr = expr[1:]
1217 expr = expr[1:]
1218 return obj
1218 return obj
1219
1219
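# Illustrative sketch (not part of the module): the readline-style protocol of
# ``Completer.complete`` -- call it with state 0, 1, 2, ... until it returns None.
# Matches are gathered from keywords, builtins and the given namespaces, in that order.
#
#     >>> c = Completer(namespace={"alpha": 1, "alphabet": 2})
#     >>> c.complete("alph", 0)
#     'alpha'
#     >>> c.complete("alph", 1)
#     'alphabet'
#     >>> c.complete("alph", 2) is None
#     True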
1220 def get__all__entries(obj):
1220 def get__all__entries(obj):
1221 """returns the strings in the __all__ attribute"""
1221 """returns the strings in the __all__ attribute"""
1222 try:
1222 try:
1223 words = getattr(obj, '__all__')
1223 words = getattr(obj, '__all__')
1224 except:
1224 except:
1225 return []
1225 return []
1226
1226
1227 return [w for w in words if isinstance(w, str)]
1227 return [w for w in words if isinstance(w, str)]
1228
1228
1229
1229
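# Illustrative sketch (not part of the module): only string entries of
# ``__all__`` are kept, and objects without ``__all__`` yield an empty list.
#
#     >>> import types
#     >>> mod = types.SimpleNamespace(__all__=["foo", "bar", 3])
#     >>> get__all__entries(mod)
#     ['foo', 'bar']
#     >>> get__all__entries(object())
#     []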
1230 class _DictKeyState(enum.Flag):
1230 class _DictKeyState(enum.Flag):
1231 """Represent state of the key match in context of other possible matches.
1231 """Represent state of the key match in context of other possible matches.
1232
1232
1233 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1233 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1234 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1234 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1235 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1235 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1236 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM & END_OF_TUPLE}`
1236 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM & END_OF_TUPLE}`
1237 """
1237 """
1238
1238
1239 BASELINE = 0
1239 BASELINE = 0
1240 END_OF_ITEM = enum.auto()
1240 END_OF_ITEM = enum.auto()
1241 END_OF_TUPLE = enum.auto()
1241 END_OF_TUPLE = enum.auto()
1242 IN_TUPLE = enum.auto()
1242 IN_TUPLE = enum.auto()
1243
1243
1244
1244
1245 def _parse_tokens(c):
1245 def _parse_tokens(c):
1246 """Parse tokens even if there is an error."""
1246 """Parse tokens even if there is an error."""
1247 tokens = []
1247 tokens = []
1248 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1248 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1249 while True:
1249 while True:
1250 try:
1250 try:
1251 tokens.append(next(token_generator))
1251 tokens.append(next(token_generator))
1252 except tokenize.TokenError:
1252 except tokenize.TokenError:
1253 return tokens
1253 return tokens
1254 except StopIteration:
1254 except StopIteration:
1255 return tokens
1255 return tokens
1256
1256
1257
1257
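# Illustrative sketch (not part of the module): ``_parse_tokens`` keeps whatever
# tokens were read before a tokenization error, which is what makes completion
# on syntactically incomplete input (e.g. an unclosed bracket) possible.
#
#     >>> [t.string for t in _parse_tokens("(d") if t.string]
#     ['(', 'd']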
1258 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1258 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1259 """Match any valid Python numeric literal in a prefix of dictionary keys.
1259 """Match any valid Python numeric literal in a prefix of dictionary keys.
1260
1260
1261 References:
1261 References:
1262 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1262 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1263 - https://docs.python.org/3/library/tokenize.html
1263 - https://docs.python.org/3/library/tokenize.html
1264 """
1264 """
1265 if prefix[-1].isspace():
1265 if prefix[-1].isspace():
1266 # if user typed a space we do not have anything to complete
1266 # if user typed a space we do not have anything to complete
1267 # even if there was a valid number token before
1267 # even if there was a valid number token before
1268 return None
1268 return None
1269 tokens = _parse_tokens(prefix)
1269 tokens = _parse_tokens(prefix)
1270 rev_tokens = reversed(tokens)
1270 rev_tokens = reversed(tokens)
1271 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1271 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1272 number = None
1272 number = None
1273 for token in rev_tokens:
1273 for token in rev_tokens:
1274 if token.type in skip_over:
1274 if token.type in skip_over:
1275 continue
1275 continue
1276 if number is None:
1276 if number is None:
1277 if token.type == tokenize.NUMBER:
1277 if token.type == tokenize.NUMBER:
1278 number = token.string
1278 number = token.string
1279 continue
1279 continue
1280 else:
1280 else:
1281 # we did not match a number
1281 # we did not match a number
1282 return None
1282 return None
1283 if token.type == tokenize.OP:
1283 if token.type == tokenize.OP:
1284 if token.string == ",":
1284 if token.string == ",":
1285 break
1285 break
1286 if token.string in {"+", "-"}:
1286 if token.string in {"+", "-"}:
1287 number = token.string + number
1287 number = token.string + number
1288 else:
1288 else:
1289 return None
1289 return None
1290 return number
1290 return number
1291
1291
1292
1292
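# Illustrative sketch (not part of the module): the helper above recognises a
# trailing numeric literal (with an optional sign) and gives up after a space.
#
#     >>> _match_number_in_dict_key_prefix("-12")
#     '-12'
#     >>> _match_number_in_dict_key_prefix("0x1f")
#     '0x1f'
#     >>> _match_number_in_dict_key_prefix("12 ") is None
#     True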
1293 _INT_FORMATS = {
1293 _INT_FORMATS = {
1294 "0b": bin,
1294 "0b": bin,
1295 "0o": oct,
1295 "0o": oct,
1296 "0x": hex,
1296 "0x": hex,
1297 }
1297 }
1298
1298
1299
1299
1300 def match_dict_keys(
1300 def match_dict_keys(
1301 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1301 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1302 prefix: str,
1302 prefix: str,
1303 delims: str,
1303 delims: str,
1304 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1304 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1305 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1305 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1306 """Used by dict_key_matches, matching the prefix to a list of keys
1306 """Used by dict_key_matches, matching the prefix to a list of keys
1307
1307
1308 Parameters
1308 Parameters
1309 ----------
1309 ----------
1310 keys
1310 keys
1311 list of keys in dictionary currently being completed.
1311 list of keys in dictionary currently being completed.
1312 prefix
1312 prefix
1313 Part of the text already typed by the user. E.g. `mydict[b'fo`
1313 Part of the text already typed by the user. E.g. `mydict[b'fo`
1314 delims
1314 delims
1315 String of delimiters to consider when finding the current key.
1315 String of delimiters to consider when finding the current key.
1316 extra_prefix : optional
1316 extra_prefix : optional
1317 Part of the text already typed in multi-key index cases. E.g. for
1317 Part of the text already typed in multi-key index cases. E.g. for
1318 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1318 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1319
1319
1320 Returns
1320 Returns
1321 -------
1321 -------
1322 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1322 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1323 ``quote`` being the quote that needs to be used to close the current string,
1323 ``quote`` being the quote that needs to be used to close the current string,
1324 ``token_start`` the position where the replacement should start occurring, and
1324 ``token_start`` the position where the replacement should start occurring, and
1325 ``matched`` a dictionary mapping each completion text to the
1325 ``matched`` a dictionary mapping each completion text to the
1326 :any:`_DictKeyState` of the corresponding key.
1326 :any:`_DictKeyState` of the corresponding key.
1327 """
1327 """
1328 prefix_tuple = extra_prefix if extra_prefix else ()
1328 prefix_tuple = extra_prefix if extra_prefix else ()
1329
1329
1330 prefix_tuple_size = sum(
1330 prefix_tuple_size = sum(
1331 [
1331 [
1332 # for pandas, do not count slices as taking space
1332 # for pandas, do not count slices as taking space
1333 not isinstance(k, slice)
1333 not isinstance(k, slice)
1334 for k in prefix_tuple
1334 for k in prefix_tuple
1335 ]
1335 ]
1336 )
1336 )
1337 text_serializable_types = (str, bytes, int, float, slice)
1337 text_serializable_types = (str, bytes, int, float, slice)
1338
1338
1339 def filter_prefix_tuple(key):
1339 def filter_prefix_tuple(key):
1340 # Reject too short keys
1340 # Reject too short keys
1341 if len(key) <= prefix_tuple_size:
1341 if len(key) <= prefix_tuple_size:
1342 return False
1342 return False
1343 # Reject keys which cannot be serialised to text
1343 # Reject keys which cannot be serialised to text
1344 for k in key:
1344 for k in key:
1345 if not isinstance(k, text_serializable_types):
1345 if not isinstance(k, text_serializable_types):
1346 return False
1346 return False
1347 # Reject keys that do not match the prefix
1347 # Reject keys that do not match the prefix
1348 for k, pt in zip(key, prefix_tuple):
1348 for k, pt in zip(key, prefix_tuple):
1349 if k != pt and not isinstance(pt, slice):
1349 if k != pt and not isinstance(pt, slice):
1350 return False
1350 return False
1351 # All checks passed!
1351 # All checks passed!
1352 return True
1352 return True
1353
1353
1354 filtered_key_is_final: Dict[
1354 filtered_key_is_final: Dict[
1355 Union[str, bytes, int, float], _DictKeyState
1355 Union[str, bytes, int, float], _DictKeyState
1356 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1356 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1357
1357
1358 for k in keys:
1358 for k in keys:
1359 # If at least one of the matches is not final, mark as undetermined.
1359 # If at least one of the matches is not final, mark as undetermined.
1360 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1360 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1361 # `111` appears final on first match but is not final on the second.
1361 # `111` appears final on first match but is not final on the second.
1362
1362
1363 if isinstance(k, tuple):
1363 if isinstance(k, tuple):
1364 if filter_prefix_tuple(k):
1364 if filter_prefix_tuple(k):
1365 key_fragment = k[prefix_tuple_size]
1365 key_fragment = k[prefix_tuple_size]
1366 filtered_key_is_final[key_fragment] |= (
1366 filtered_key_is_final[key_fragment] |= (
1367 _DictKeyState.END_OF_TUPLE
1367 _DictKeyState.END_OF_TUPLE
1368 if len(k) == prefix_tuple_size + 1
1368 if len(k) == prefix_tuple_size + 1
1369 else _DictKeyState.IN_TUPLE
1369 else _DictKeyState.IN_TUPLE
1370 )
1370 )
1371 elif prefix_tuple_size > 0:
1371 elif prefix_tuple_size > 0:
1372 # we are completing a tuple but this key is not a tuple,
1372 # we are completing a tuple but this key is not a tuple,
1373 # so we should ignore it
1373 # so we should ignore it
1374 pass
1374 pass
1375 else:
1375 else:
1376 if isinstance(k, text_serializable_types):
1376 if isinstance(k, text_serializable_types):
1377 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1377 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1378
1378
1379 filtered_keys = filtered_key_is_final.keys()
1379 filtered_keys = filtered_key_is_final.keys()
1380
1380
1381 if not prefix:
1381 if not prefix:
1382 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1382 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1383
1383
1384 quote_match = re.search("(?:\"|')", prefix)
1384 quote_match = re.search("(?:\"|')", prefix)
1385 is_user_prefix_numeric = False
1385 is_user_prefix_numeric = False
1386
1386
1387 if quote_match:
1387 if quote_match:
1388 quote = quote_match.group()
1388 quote = quote_match.group()
1389 valid_prefix = prefix + quote
1389 valid_prefix = prefix + quote
1390 try:
1390 try:
1391 prefix_str = literal_eval(valid_prefix)
1391 prefix_str = literal_eval(valid_prefix)
1392 except Exception:
1392 except Exception:
1393 return "", 0, {}
1393 return "", 0, {}
1394 else:
1394 else:
1395 # If it does not look like a string, let's assume
1395 # If it does not look like a string, let's assume
1396 # we are dealing with a number or variable.
1396 # we are dealing with a number or variable.
1397 number_match = _match_number_in_dict_key_prefix(prefix)
1397 number_match = _match_number_in_dict_key_prefix(prefix)
1398
1398
1399 # We do not want the key matcher to suggest variable names, so we bail out here:
1399 # We do not want the key matcher to suggest variable names, so we bail out here:
1400 if number_match is None:
1400 if number_match is None:
1401 # The alternative would be to assume that the user forgot the quote
1401 # The alternative would be to assume that the user forgot the quote
1402 # and if the substring matches, suggest adding it at the start.
1402 # and if the substring matches, suggest adding it at the start.
1403 return "", 0, {}
1403 return "", 0, {}
1404
1404
1405 prefix_str = number_match
1405 prefix_str = number_match
1406 is_user_prefix_numeric = True
1406 is_user_prefix_numeric = True
1407 quote = ""
1407 quote = ""
1408
1408
1409 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1409 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1410 token_match = re.search(pattern, prefix, re.UNICODE)
1410 token_match = re.search(pattern, prefix, re.UNICODE)
1411 assert token_match is not None # silence mypy
1411 assert token_match is not None # silence mypy
1412 token_start = token_match.start()
1412 token_start = token_match.start()
1413 token_prefix = token_match.group()
1413 token_prefix = token_match.group()
1414
1414
1415 matched: Dict[str, _DictKeyState] = {}
1415 matched: Dict[str, _DictKeyState] = {}
1416
1416
1417 str_key: Union[str, bytes]
1417 str_key: Union[str, bytes]
1418
1418
1419 for key in filtered_keys:
1419 for key in filtered_keys:
1420 if isinstance(key, (int, float)):
1420 if isinstance(key, (int, float)):
1421 # User typed a number but this key is not a number.
1421 # User typed a number but this key is not a number.
1422 if not is_user_prefix_numeric:
1422 if not is_user_prefix_numeric:
1423 continue
1423 continue
1424 str_key = str(key)
1424 str_key = str(key)
1425 if isinstance(key, int):
1425 if isinstance(key, int):
1426 int_base = prefix_str[:2].lower()
1426 int_base = prefix_str[:2].lower()
1427 # if user typed integer using binary/oct/hex notation:
1427 # if user typed integer using binary/oct/hex notation:
1428 if int_base in _INT_FORMATS:
1428 if int_base in _INT_FORMATS:
1429 int_format = _INT_FORMATS[int_base]
1429 int_format = _INT_FORMATS[int_base]
1430 str_key = int_format(key)
1430 str_key = int_format(key)
1431 else:
1431 else:
1432 # User typed a string but this key is a number.
1432 # User typed a string but this key is a number.
1433 if is_user_prefix_numeric:
1433 if is_user_prefix_numeric:
1434 continue
1434 continue
1435 str_key = key
1435 str_key = key
1436 try:
1436 try:
1437 if not str_key.startswith(prefix_str):
1437 if not str_key.startswith(prefix_str):
1438 continue
1438 continue
1439 except (AttributeError, TypeError, UnicodeError) as e:
1439 except (AttributeError, TypeError, UnicodeError) as e:
1440 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1440 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1441 continue
1441 continue
1442
1442
1443 # reformat remainder of key to begin with prefix
1443 # reformat remainder of key to begin with prefix
1444 rem = str_key[len(prefix_str) :]
1444 rem = str_key[len(prefix_str) :]
1445 # force repr wrapped in '
1445 # force repr wrapped in '
1446 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1446 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1447 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1447 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1448 if quote == '"':
1448 if quote == '"':
1449 # The entered prefix is quoted with ",
1449 # The entered prefix is quoted with ",
1450 # but the match is quoted with '.
1450 # but the match is quoted with '.
1451 # A contained " hence needs escaping for comparison:
1451 # A contained " hence needs escaping for comparison:
1452 rem_repr = rem_repr.replace('"', '\\"')
1452 rem_repr = rem_repr.replace('"', '\\"')
1453
1453
1454 # then reinsert prefix from start of token
1454 # then reinsert prefix from start of token
1455 match = "%s%s" % (token_prefix, rem_repr)
1455 match = "%s%s" % (token_prefix, rem_repr)
1456
1456
1457 matched[match] = filtered_key_is_final[key]
1457 matched[match] = filtered_key_is_final[key]
1458 return quote, token_start, matched
1458 return quote, token_start, matched
1459
1459
1460
1460
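# Illustrative sketch (not part of the module): matching string keys against a
# partially typed, quoted prefix, using the module-level DELIMS defined above.
#
#     >>> quote, start, matched = match_dict_keys(["foo", "food", "bar"], "'fo", DELIMS)
#     >>> quote, start, sorted(matched)
#     ("'", 1, ['foo', 'food'])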
1461 def cursor_to_position(text:str, line:int, column:int)->int:
1461 def cursor_to_position(text:str, line:int, column:int)->int:
1462 """
1462 """
1463 Convert the (line,column) position of the cursor in text to an offset in a
1463 Convert the (line,column) position of the cursor in text to an offset in a
1464 string.
1464 string.
1465
1465
1466 Parameters
1466 Parameters
1467 ----------
1467 ----------
1468 text : str
1468 text : str
1469 The text in which to calculate the cursor offset
1469 The text in which to calculate the cursor offset
1470 line : int
1470 line : int
1471 Line of the cursor; 0-indexed
1471 Line of the cursor; 0-indexed
1472 column : int
1472 column : int
1473 Column of the cursor 0-indexed
1473 Column of the cursor 0-indexed
1474
1474
1475 Returns
1475 Returns
1476 -------
1476 -------
1477 Position of the cursor in ``text``, 0-indexed.
1477 Position of the cursor in ``text``, 0-indexed.
1478
1478
1479 See Also
1479 See Also
1480 --------
1480 --------
1481 position_to_cursor : reciprocal of this function
1481 position_to_cursor : reciprocal of this function
1482
1482
1483 """
1483 """
1484 lines = text.split('\n')
1484 lines = text.split('\n')
1485 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1485 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1486
1486
1487 return sum(len(l) + 1 for l in lines[:line]) + column
1487 return sum(len(l) + 1 for l in lines[:line]) + column
1488
1488
1489 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1489 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1490 """
1490 """
1491 Convert the position of the cursor in text (0-indexed) to a line
1491 Convert the position of the cursor in text (0-indexed) to a line
1492 number (0-indexed) and a column number (0-indexed) pair.
1492 number (0-indexed) and a column number (0-indexed) pair.
1493
1493
1494 Position should be a valid position in ``text``.
1494 Position should be a valid position in ``text``.
1495
1495
1496 Parameters
1496 Parameters
1497 ----------
1497 ----------
1498 text : str
1498 text : str
1499 The text in which to calculate the cursor offset
1499 The text in which to calculate the cursor offset
1500 offset : int
1500 offset : int
1501 Position of the cursor in ``text``, 0-indexed.
1501 Position of the cursor in ``text``, 0-indexed.
1502
1502
1503 Returns
1503 Returns
1504 -------
1504 -------
1505 (line, column) : (int, int)
1505 (line, column) : (int, int)
1506 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1506 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1507
1507
1508 See Also
1508 See Also
1509 --------
1509 --------
1510 cursor_to_position : reciprocal of this function
1510 cursor_to_position : reciprocal of this function
1511
1511
1512 """
1512 """
1513
1513
1514 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1514 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1515
1515
1516 before = text[:offset]
1516 before = text[:offset]
1517 blines = before.split('\n') # ! splitlines trims a trailing \n
1517 blines = before.split('\n') # ! splitlines trims a trailing \n
1518 line = before.count('\n')
1518 line = before.count('\n')
1519 col = len(blines[-1])
1519 col = len(blines[-1])
1520 return line, col
1520 return line, col
1521
1521
1522
1522
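# Illustrative sketch (not part of the module): the two helpers above are
# inverses of each other for valid positions.
#
#     >>> text = "ab\ncd\nef"
#     >>> cursor_to_position(text, 1, 1)  # line 1, column 1 -> offset of 'd'
#     4
#     >>> position_to_cursor(text, 4)
#     (1, 1)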
1523 def _safe_isinstance(obj, module, class_name, *attrs):
1523 def _safe_isinstance(obj, module, class_name, *attrs):
1524 """Checks if obj is an instance of module.class_name if loaded
1524 """Checks if obj is an instance of module.class_name if loaded
1525 """
1525 """
1526 if module in sys.modules:
1526 if module in sys.modules:
1527 m = sys.modules[module]
1527 m = sys.modules[module]
1528 for attr in [class_name, *attrs]:
1528 for attr in [class_name, *attrs]:
1529 m = getattr(m, attr)
1529 m = getattr(m, attr)
1530 return isinstance(obj, m)
1530 return isinstance(obj, m)
1531
1531
1532
1532
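# Illustrative sketch (not part of the module): the check never imports the
# module itself, so it is a no-op (returns None, which is falsy) when the module
# has not been loaded yet.
#
#     >>> import collections
#     >>> _safe_isinstance(collections.OrderedDict(), "collections", "OrderedDict")
#     True
#     >>> _safe_isinstance(1, "module_that_was_never_imported", "SomeClass")  # returns None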
1533 @context_matcher()
1533 @context_matcher()
1534 def back_unicode_name_matcher(context: CompletionContext):
1534 def back_unicode_name_matcher(context: CompletionContext):
1535 """Match Unicode characters back to Unicode name
1535 """Match Unicode characters back to Unicode name
1536
1536
1537 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1537 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1538 """
1538 """
1539 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1539 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1540 return _convert_matcher_v1_result_to_v2(
1540 return _convert_matcher_v1_result_to_v2(
1541 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1541 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1542 )
1542 )
1543
1543
1544
1544
1545 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1545 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1546 """Match Unicode characters back to Unicode name
1546 """Match Unicode characters back to Unicode name
1547
1547
1548 This does ``β˜ƒ`` -> ``\\snowman``
1548 This does ``β˜ƒ`` -> ``\\snowman``
1549
1549
1550 Note that snowman is not a valid python3 combining character but will be expanded.
1550 Note that snowman is not a valid python3 combining character but will be expanded.
1551 It will not, however, be recombined back into the snowman character by the completion machinery.
1551 It will not, however, be recombined back into the snowman character by the completion machinery.
1552
1552
1553 This will also not back-complete standard escape sequences like \\n, \\b ...
1553 This will also not back-complete standard escape sequences like \\n, \\b ...
1554
1554
1555 .. deprecated:: 8.6
1555 .. deprecated:: 8.6
1556 You can use :meth:`back_unicode_name_matcher` instead.
1556 You can use :meth:`back_unicode_name_matcher` instead.
1557
1557
1558 Returns
1558 Returns
1559 =======
1559 =======
1560
1560
1561 Return a tuple with two elements:
1561 Return a tuple with two elements:
1562
1562
1563 - The Unicode character that was matched (preceded with a backslash), or
1563 - The Unicode character that was matched (preceded with a backslash), or
1564 empty string,
1564 empty string,
1565 - a sequence (of length 1) with the name of the matched Unicode character,
1565 - a sequence (of length 1) with the name of the matched Unicode character,
1566 preceded by a backslash, or empty if no match.
1566 preceded by a backslash, or empty if no match.
1567 """
1567 """
1568 if len(text)<2:
1568 if len(text)<2:
1569 return '', ()
1569 return '', ()
1570 maybe_slash = text[-2]
1570 maybe_slash = text[-2]
1571 if maybe_slash != '\\':
1571 if maybe_slash != '\\':
1572 return '', ()
1572 return '', ()
1573
1573
1574 char = text[-1]
1574 char = text[-1]
1575 # no expand on quote for completion in strings.
1575 # no expand on quote for completion in strings.
1576 # nor backcomplete standard ascii keys
1576 # nor backcomplete standard ascii keys
1577 if char in string.ascii_letters or char in ('"',"'"):
1577 if char in string.ascii_letters or char in ('"',"'"):
1578 return '', ()
1578 return '', ()
1579 try :
1579 try :
1580 unic = unicodedata.name(char)
1580 unic = unicodedata.name(char)
1581 return '\\'+char,('\\'+unic,)
1581 return '\\'+char,('\\'+unic,)
1582 except KeyError:
1582 except KeyError:
1583 pass
1583 pass
1584 return '', ()
1584 return '', ()
1585
1585
1586
1586
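# Illustrative sketch (not part of the module): back-completing a character to
# its Unicode name.
#
#     >>> back_unicode_name_matches("print('\\β˜ƒ")
#     ('\\β˜ƒ', ('\\SNOWMAN',))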
1587 @context_matcher()
1587 @context_matcher()
1588 def back_latex_name_matcher(context: CompletionContext):
1588 def back_latex_name_matcher(context: CompletionContext):
1589 """Match latex characters back to unicode name
1589 """Match latex characters back to unicode name
1590
1590
1591 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1591 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1592 """
1592 """
1593 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1593 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1594 return _convert_matcher_v1_result_to_v2(
1594 return _convert_matcher_v1_result_to_v2(
1595 matches, type="latex", fragment=fragment, suppress_if_matches=True
1595 matches, type="latex", fragment=fragment, suppress_if_matches=True
1596 )
1596 )
1597
1597
1598
1598
1599 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1599 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1600 """Match latex characters back to unicode name
1600 """Match latex characters back to unicode name
1601
1601
1602 This does ``\\β„΅`` -> ``\\aleph``
1602 This does ``\\β„΅`` -> ``\\aleph``
1603
1603
1604 .. deprecated:: 8.6
1604 .. deprecated:: 8.6
1605 You can use :meth:`back_latex_name_matcher` instead.
1605 You can use :meth:`back_latex_name_matcher` instead.
1606 """
1606 """
1607 if len(text)<2:
1607 if len(text)<2:
1608 return '', ()
1608 return '', ()
1609 maybe_slash = text[-2]
1609 maybe_slash = text[-2]
1610 if maybe_slash != '\\':
1610 if maybe_slash != '\\':
1611 return '', ()
1611 return '', ()
1612
1612
1613
1613
1614 char = text[-1]
1614 char = text[-1]
1615 # no expand on quote for completion in strings.
1615 # no expand on quote for completion in strings.
1616 # nor backcomplete standard ascii keys
1616 # nor backcomplete standard ascii keys
1617 if char in string.ascii_letters or char in ('"',"'"):
1617 if char in string.ascii_letters or char in ('"',"'"):
1618 return '', ()
1618 return '', ()
1619 try :
1619 try :
1620 latex = reverse_latex_symbol[char]
1620 latex = reverse_latex_symbol[char]
1621 # '\\' replaces the \ as well
1621 # '\\' replaces the \ as well
1622 return '\\'+char,[latex]
1622 return '\\'+char,[latex]
1623 except KeyError:
1623 except KeyError:
1624 pass
1624 pass
1625 return '', ()
1625 return '', ()
1626
1626
1627
1627
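# Illustrative sketch (not part of the module): back-completing a character to
# its latex name (assuming the usual ``reverse_latex_symbol`` table is loaded).
#
#     >>> back_latex_name_matches("x = \\Ξ±")
#     ('\\Ξ±', ['\\alpha'])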
1628 def _formatparamchildren(parameter) -> str:
1628 def _formatparamchildren(parameter) -> str:
1629 """
1629 """
1630 Get parameter name and value from Jedi Private API
1630 Get parameter name and value from Jedi Private API
1631
1631
1632 Jedi does not expose a simple way to get `param=value` from its API.
1632 Jedi does not expose a simple way to get `param=value` from its API.
1633
1633
1634 Parameters
1634 Parameters
1635 ----------
1635 ----------
1636 parameter
1636 parameter
1637 Jedi's function `Param`
1637 Jedi's function `Param`
1638
1638
1639 Returns
1639 Returns
1640 -------
1640 -------
1641 A string like 'a', 'b=1', '*args', '**kwargs'
1641 A string like 'a', 'b=1', '*args', '**kwargs'
1642
1642
1643 """
1643 """
1644 description = parameter.description
1644 description = parameter.description
1645 if not description.startswith('param '):
1645 if not description.startswith('param '):
1646 raise ValueError('Jedi function parameter description has changed format. '
1646 raise ValueError('Jedi function parameter description has changed format. '
1647 'Expected "param ...", found %r.' % description)
1647 'Expected "param ...", found %r.' % description)
1648 return description[6:]
1648 return description[6:]
1649
1649
1650 def _make_signature(completion)-> str:
1650 def _make_signature(completion)-> str:
1651 """
1651 """
1652 Make the signature from a jedi completion
1652 Make the signature from a jedi completion
1653
1653
1654 Parameters
1654 Parameters
1655 ----------
1655 ----------
1656 completion : jedi.Completion
1656 completion : jedi.Completion
1657 object from which to build the signature; expected to complete to a callable
1657 object from which to build the signature; expected to complete to a callable
1658
1658
1659 Returns
1659 Returns
1660 -------
1660 -------
1661 a string consisting of the function signature, with the parenthesis but
1661 a string consisting of the function signature, with the parenthesis but
1662 without the function name. example:
1662 without the function name. example:
1663 `(a, *args, b=1, **kwargs)`
1663 `(a, *args, b=1, **kwargs)`
1664
1664
1665 """
1665 """
1666
1666
1667 # it looks like this might work on jedi 0.17
1667 # it looks like this might work on jedi 0.17
1668 if hasattr(completion, 'get_signatures'):
1668 if hasattr(completion, 'get_signatures'):
1669 signatures = completion.get_signatures()
1669 signatures = completion.get_signatures()
1670 if not signatures:
1670 if not signatures:
1671 return '(?)'
1671 return '(?)'
1672
1672
1673 c0 = completion.get_signatures()[0]
1673 c0 = completion.get_signatures()[0]
1674 return '('+c0.to_string().split('(', maxsplit=1)[1]
1674 return '('+c0.to_string().split('(', maxsplit=1)[1]
1675
1675
1676 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1676 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1677 for p in signature.defined_names()) if f])
1677 for p in signature.defined_names()) if f])
1678
1678
1679
1679
1680 _CompleteResult = Dict[str, MatcherResult]
1680 _CompleteResult = Dict[str, MatcherResult]
1681
1681
1682
1682
1683 DICT_MATCHER_REGEX = re.compile(
1683 DICT_MATCHER_REGEX = re.compile(
1684 r"""(?x)
1684 r"""(?x)
1685 ( # match dict-referring - or any get item object - expression
1685 ( # match dict-referring - or any get item object - expression
1686 .+
1686 .+
1687 )
1687 )
1688 \[ # open bracket
1688 \[ # open bracket
1689 \s* # and optional whitespace
1689 \s* # and optional whitespace
1690 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1690 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1691 # and slices
1691 # and slices
1692 ((?:(?:
1692 ((?:(?:
1693 (?: # closed string
1693 (?: # closed string
1694 [uUbB]? # string prefix (r not handled)
1694 [uUbB]? # string prefix (r not handled)
1695 (?:
1695 (?:
1696 '(?:[^']|(?<!\\)\\')*'
1696 '(?:[^']|(?<!\\)\\')*'
1697 |
1697 |
1698 "(?:[^"]|(?<!\\)\\")*"
1698 "(?:[^"]|(?<!\\)\\")*"
1699 )
1699 )
1700 )
1700 )
1701 |
1701 |
1702 # capture integers and slices
1702 # capture integers and slices
1703 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1703 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1704 |
1704 |
1705 # integer in bin/hex/oct notation
1705 # integer in bin/hex/oct notation
1706 0[bBxXoO]_?(?:\w|\d)+
1706 0[bBxXoO]_?(?:\w|\d)+
1707 )
1707 )
1708 \s*,\s*
1708 \s*,\s*
1709 )*)
1709 )*)
1710 ((?:
1710 ((?:
1711 (?: # unclosed string
1711 (?: # unclosed string
1712 [uUbB]? # string prefix (r not handled)
1712 [uUbB]? # string prefix (r not handled)
1713 (?:
1713 (?:
1714 '(?:[^']|(?<!\\)\\')*
1714 '(?:[^']|(?<!\\)\\')*
1715 |
1715 |
1716 "(?:[^"]|(?<!\\)\\")*
1716 "(?:[^"]|(?<!\\)\\")*
1717 )
1717 )
1718 )
1718 )
1719 |
1719 |
1720 # unfinished integer
1720 # unfinished integer
1721 (?:[-+]?\d+)
1721 (?:[-+]?\d+)
1722 |
1722 |
1723 # integer in bin/hex/oct notation
1723 # integer in bin/hex/oct notation
1724 0[bBxXoO]_?(?:\w|\d)+
1724 0[bBxXoO]_?(?:\w|\d)+
1725 )
1725 )
1726 )?
1726 )?
1727 $
1727 $
1728 """
1728 """
1729 )
1729 )
1730
1730
1731
1731
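# Illustrative sketch (not part of the module): the three groups captured by
# DICT_MATCHER_REGEX are the subscripted expression, the already-closed keys and
# the unfinished key prefix.
#
#     >>> m = DICT_MATCHER_REGEX.match("d['a', 'b")
#     >>> m.group(1), m.group(2), m.group(3)
#     ('d', "'a', ", "'b")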
1732 def _convert_matcher_v1_result_to_v2(
1732 def _convert_matcher_v1_result_to_v2(
1733 matches: Sequence[str],
1733 matches: Sequence[str],
1734 type: str,
1734 type: str,
1735 fragment: Optional[str] = None,
1735 fragment: Optional[str] = None,
1736 suppress_if_matches: bool = False,
1736 suppress_if_matches: bool = False,
1737 ) -> SimpleMatcherResult:
1737 ) -> SimpleMatcherResult:
1738 """Utility to help with transition"""
1738 """Utility to help with transition"""
1739 result = {
1739 result = {
1740 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1740 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1741 "suppress": (True if matches else False) if suppress_if_matches else False,
1741 "suppress": (True if matches else False) if suppress_if_matches else False,
1742 }
1742 }
1743 if fragment is not None:
1743 if fragment is not None:
1744 result["matched_fragment"] = fragment
1744 result["matched_fragment"] = fragment
1745 return cast(SimpleMatcherResult, result)
1745 return cast(SimpleMatcherResult, result)
1746
1746
1747
1747
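# Illustrative sketch (not part of the module): converting a legacy
# list-of-strings matcher result into the v2 ``SimpleMatcherResult`` structure.
#
#     >>> res = _convert_matcher_v1_result_to_v2(
#     ...     ["%time", "%timeit"], type="magic", fragment="%ti", suppress_if_matches=True)
#     >>> [c.text for c in res["completions"]], res["suppress"], res["matched_fragment"]
#     (['%time', '%timeit'], True, '%ti')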
1748 class IPCompleter(Completer):
1748 class IPCompleter(Completer):
1749 """Extension of the completer class with IPython-specific features"""
1749 """Extension of the completer class with IPython-specific features"""
1750
1750
1751 @observe('greedy')
1751 @observe('greedy')
1752 def _greedy_changed(self, change):
1752 def _greedy_changed(self, change):
1753 """update the splitter and readline delims when greedy is changed"""
1753 """update the splitter and readline delims when greedy is changed"""
1754 if change["new"]:
1754 if change["new"]:
1755 self.evaluation = "unsafe"
1755 self.evaluation = "unsafe"
1756 self.auto_close_dict_keys = True
1756 self.auto_close_dict_keys = True
1757 self.splitter.delims = GREEDY_DELIMS
1757 self.splitter.delims = GREEDY_DELIMS
1758 else:
1758 else:
1759 self.evaluation = "limited"
1759 self.evaluation = "limited"
1760 self.auto_close_dict_keys = False
1760 self.auto_close_dict_keys = False
1761 self.splitter.delims = DELIMS
1761 self.splitter.delims = DELIMS
1762
1762
1763 dict_keys_only = Bool(
1763 dict_keys_only = Bool(
1764 False,
1764 False,
1765 help="""
1765 help="""
1766 Whether to show dict key matches only.
1766 Whether to show dict key matches only.
1767
1767
1768 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1768 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1769 """,
1769 """,
1770 )
1770 )
1771
1771
1772 suppress_competing_matchers = UnionTrait(
1772 suppress_competing_matchers = UnionTrait(
1773 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1773 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1774 default_value=None,
1774 default_value=None,
1775 help="""
1775 help="""
1776 Whether to suppress completions from other *Matchers*.
1776 Whether to suppress completions from other *Matchers*.
1777
1777
1778 When set to ``None`` (default) the matchers will attempt to auto-detect
1778 When set to ``None`` (default) the matchers will attempt to auto-detect
1779 whether suppression of other matchers is desirable. For example, after
1779 whether suppression of other matchers is desirable. For example, after
1780 `%` at the beginning of a line we expect a magic completion
1780 `%` at the beginning of a line we expect a magic completion
1781 to be the only applicable option, and after ``my_dict['`` we usually
1781 to be the only applicable option, and after ``my_dict['`` we usually
1782 expect a completion with an existing dictionary key.
1782 expect a completion with an existing dictionary key.
1783
1783
1784 If you want to disable this heuristic and see completions from all matchers,
1784 If you want to disable this heuristic and see completions from all matchers,
1785 set ``IPCompleter.suppress_competing_matchers = False``.
1785 set ``IPCompleter.suppress_competing_matchers = False``.
1786 To disable the heuristic for specific matchers provide a dictionary mapping:
1786 To disable the heuristic for specific matchers provide a dictionary mapping:
1787 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1787 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1788
1788
1789 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1789 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1790 completions to the set of matchers with the highest priority;
1790 completions to the set of matchers with the highest priority;
1791 this is equivalent to ``IPCompleter.merge_completions`` and
1791 this is equivalent to ``IPCompleter.merge_completions`` and
1792 can be beneficial for performance, but will sometimes omit relevant
1792 can be beneficial for performance, but will sometimes omit relevant
1793 candidates from matchers further down the priority list.
1793 candidates from matchers further down the priority list.
1794 """,
1794 """,
1795 ).tag(config=True)
1795 ).tag(config=True)
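# Illustrative configuration sketch (lines for an IPython config file such as
# ``ipython_config.py``; not executed here). The dictionary form disables the
# suppression heuristic for a single matcher, mirroring the help text above:
#
#     c.IPCompleter.suppress_competing_matchers = {
#         "IPCompleter.dict_key_matcher": False,
#     }
#
# while ``c.IPCompleter.suppress_competing_matchers = False`` shows
# completions from all matchers unconditionally.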
1796
1796
1797 merge_completions = Bool(
1797 merge_completions = Bool(
1798 True,
1798 True,
1799 help="""Whether to merge completion results into a single list
1799 help="""Whether to merge completion results into a single list
1800
1800
1801 If False, only the completion results from the first non-empty
1801 If False, only the completion results from the first non-empty
1802 completer will be returned.
1802 completer will be returned.
1803
1803
1804 As of version 8.6.0, setting the value to ``False`` is an alias for:
1804 As of version 8.6.0, setting the value to ``False`` is an alias for:
1805 ``IPCompleter.suppress_competing_matchers = True``.
1805 ``IPCompleter.suppress_competing_matchers = True``.
1806 """,
1806 """,
1807 ).tag(config=True)
1807 ).tag(config=True)
1808
1808
1809 disable_matchers = ListTrait(
1809 disable_matchers = ListTrait(
1810 Unicode(),
1810 Unicode(),
1811 help="""List of matchers to disable.
1811 help="""List of matchers to disable.
1812
1812
1813 The list should contain matcher identifiers (see :any:`completion_matcher`).
1813 The list should contain matcher identifiers (see :any:`completion_matcher`).
1814 """,
1814 """,
1815 ).tag(config=True)
1815 ).tag(config=True)
1816
1816
1817 omit__names = Enum(
1817 omit__names = Enum(
1818 (0, 1, 2),
1818 (0, 1, 2),
1819 default_value=2,
1819 default_value=2,
1820 help="""Instruct the completer to omit private method names
1820 help="""Instruct the completer to omit private method names
1821
1821
1822 Specifically, when completing on ``object.<tab>``.
1822 Specifically, when completing on ``object.<tab>``.
1823
1823
1824 When 2 [default]: all names that start with '_' will be excluded.
1824 When 2 [default]: all names that start with '_' will be excluded.
1825
1825
1826 When 1: all 'magic' names (``__foo__``) will be excluded.
1826 When 1: all 'magic' names (``__foo__``) will be excluded.
1827
1827
1828 When 0: nothing will be excluded.
1828 When 0: nothing will be excluded.
1829 """
1829 """
1830 ).tag(config=True)
1830 ).tag(config=True)
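# Illustrative configuration sketch (for an IPython config file): to include
# single-underscore and dunder names when completing ``object.<tab>``:
#
#     c.IPCompleter.omit__names = 0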
1831 limit_to__all__ = Bool(False,
1831 limit_to__all__ = Bool(False,
1832 help="""
1832 help="""
1833 DEPRECATED as of version 5.0.
1833 DEPRECATED as of version 5.0.
1834
1834
1835 Instruct the completer to use __all__ for the completion
1835 Instruct the completer to use __all__ for the completion
1836
1836
1837 Specifically, when completing on ``object.<tab>``.
1837 Specifically, when completing on ``object.<tab>``.
1838
1838
1839 When True: only those names in obj.__all__ will be included.
1839 When True: only those names in obj.__all__ will be included.
1840
1840
1841 When False [default]: the __all__ attribute is ignored
1841 When False [default]: the __all__ attribute is ignored
1842 """,
1842 """,
1843 ).tag(config=True)
1843 ).tag(config=True)
1844
1844
1845 profile_completions = Bool(
1845 profile_completions = Bool(
1846 default_value=False,
1846 default_value=False,
1847 help="If True, emit profiling data for completion subsystem using cProfile."
1847 help="If True, emit profiling data for completion subsystem using cProfile."
1848 ).tag(config=True)
1848 ).tag(config=True)
1849
1849
1850 profiler_output_dir = Unicode(
1850 profiler_output_dir = Unicode(
1851 default_value=".completion_profiles",
1851 default_value=".completion_profiles",
1852 help="Template for path at which to output profile data for completions."
1852 help="Template for path at which to output profile data for completions."
1853 ).tag(config=True)
1853 ).tag(config=True)
1854
1854
1855 @observe('limit_to__all__')
1855 @observe('limit_to__all__')
1856 def _limit_to_all_changed(self, change):
1856 def _limit_to_all_changed(self, change):
1857 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1857 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1858 'value has been deprecated since IPython 5.0. It will be made to have '
1858 'value has been deprecated since IPython 5.0. It will be made to have '
1859 'no effect and then be removed in a future version of IPython.',
1859 'no effect and then be removed in a future version of IPython.',
1860 UserWarning)
1860 UserWarning)
1861
1861
1862 def __init__(
1862 def __init__(
1863 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1863 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1864 ):
1864 ):
1865 """IPCompleter() -> completer
1865 """IPCompleter() -> completer
1866
1866
1867 Return a completer object.
1867 Return a completer object.
1868
1868
1869 Parameters
1869 Parameters
1870 ----------
1870 ----------
1871 shell
1871 shell
1872 a pointer to the ipython shell itself. This is needed
1872 a pointer to the ipython shell itself. This is needed
1873 because this completer knows about magic functions, and those can
1873 because this completer knows about magic functions, and those can
1874 only be accessed via the ipython instance.
1874 only be accessed via the ipython instance.
1875 namespace : dict, optional
1875 namespace : dict, optional
1876 an optional dict where completions are performed.
1876 an optional dict where completions are performed.
1877 global_namespace : dict, optional
1877 global_namespace : dict, optional
1878 secondary optional dict for completions, to
1878 secondary optional dict for completions, to
1879 handle cases (such as IPython embedded inside functions) where
1879 handle cases (such as IPython embedded inside functions) where
1880 both Python scopes are visible.
1880 both Python scopes are visible.
1881 config : Config
1881 config : Config
1882 traitlets Config object
1882 traitlets Config object
1883 **kwargs
1883 **kwargs
1884 passed to super class unmodified.
1884 passed to super class unmodified.
1885 """
1885 """
1886
1886
1887 self.magic_escape = ESC_MAGIC
1887 self.magic_escape = ESC_MAGIC
1888 self.splitter = CompletionSplitter()
1888 self.splitter = CompletionSplitter()
1889
1889
1890 # _greedy_changed() depends on splitter and readline being defined:
1890 # _greedy_changed() depends on splitter and readline being defined:
1891 super().__init__(
1891 super().__init__(
1892 namespace=namespace,
1892 namespace=namespace,
1893 global_namespace=global_namespace,
1893 global_namespace=global_namespace,
1894 config=config,
1894 config=config,
1895 **kwargs,
1895 **kwargs,
1896 )
1896 )
1897
1897
1898 # List where completion matches will be stored
1898 # List where completion matches will be stored
1899 self.matches = []
1899 self.matches = []
1900 self.shell = shell
1900 self.shell = shell
1901 # Regexp to split filenames with spaces in them
1901 # Regexp to split filenames with spaces in them
1902 self.space_name_re = re.compile(r'([^\\] )')
1902 self.space_name_re = re.compile(r'([^\\] )')
1903 # Hold a local ref. to glob.glob for speed
1903 # Hold a local ref. to glob.glob for speed
1904 self.glob = glob.glob
1904 self.glob = glob.glob
1905
1905
1906 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1906 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1907 # buffers, to avoid completion problems.
1907 # buffers, to avoid completion problems.
1908 term = os.environ.get('TERM','xterm')
1908 term = os.environ.get('TERM','xterm')
1909 self.dumb_terminal = term in ['dumb','emacs']
1909 self.dumb_terminal = term in ['dumb','emacs']
1910
1910
1911 # Special handling of backslashes needed in win32 platforms
1911 # Special handling of backslashes needed in win32 platforms
1912 if sys.platform == "win32":
1912 if sys.platform == "win32":
1913 self.clean_glob = self._clean_glob_win32
1913 self.clean_glob = self._clean_glob_win32
1914 else:
1914 else:
1915 self.clean_glob = self._clean_glob
1915 self.clean_glob = self._clean_glob
1916
1916
1917 #regexp to parse docstring for function signature
1917 #regexp to parse docstring for function signature
1918 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1918 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1919 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1919 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1920 #use this if positional argument name is also needed
1920 #use this if positional argument name is also needed
1921 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1921 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1922
1922
1923 self.magic_arg_matchers = [
1923 self.magic_arg_matchers = [
1924 self.magic_config_matcher,
1924 self.magic_config_matcher,
1925 self.magic_color_matcher,
1925 self.magic_color_matcher,
1926 ]
1926 ]
1927
1927
1928 # This is set externally by InteractiveShell
1928 # This is set externally by InteractiveShell
1929 self.custom_completers = None
1929 self.custom_completers = None
1930
1930
1931 # This is a list of names of unicode characters that can be completed
1931 # This is a list of names of unicode characters that can be completed
1932 # into their corresponding unicode value. The list is large, so we
1932 # into their corresponding unicode value. The list is large, so we
1933 # lazily initialize it on first use. Consuming code should access this
1933 # lazily initialize it on first use. Consuming code should access this
1934 # attribute through the `unicode_names` property.
1934 # attribute through the `unicode_names` property.
1935 self._unicode_names = None
1935 self._unicode_names = None
1936
1936
1937 self._backslash_combining_matchers = [
1937 self._backslash_combining_matchers = [
1938 self.latex_name_matcher,
1938 self.latex_name_matcher,
1939 self.unicode_name_matcher,
1939 self.unicode_name_matcher,
1940 back_latex_name_matcher,
1940 back_latex_name_matcher,
1941 back_unicode_name_matcher,
1941 back_unicode_name_matcher,
1942 self.fwd_unicode_matcher,
1942 self.fwd_unicode_matcher,
1943 ]
1943 ]
1944
1944
1945 if not self.backslash_combining_completions:
1945 if not self.backslash_combining_completions:
1946 for matcher in self._backslash_combining_matchers:
1946 for matcher in self._backslash_combining_matchers:
1947 self.disable_matchers.append(_get_matcher_id(matcher))
1947 self.disable_matchers.append(_get_matcher_id(matcher))
1948
1948
1949 if not self.merge_completions:
1949 if not self.merge_completions:
1950 self.suppress_competing_matchers = True
1950 self.suppress_competing_matchers = True
1951
1951
1952 @property
1952 @property
1953 def matchers(self) -> List[Matcher]:
1953 def matchers(self) -> List[Matcher]:
1954 """All active matcher routines for completion"""
1954 """All active matcher routines for completion"""
1955 if self.dict_keys_only:
1955 if self.dict_keys_only:
1956 return [self.dict_key_matcher]
1956 return [self.dict_key_matcher]
1957
1957
1958 if self.use_jedi:
1958 if self.use_jedi:
1959 return [
1959 return [
1960 *self.custom_matchers,
1960 *self.custom_matchers,
1961 *self._backslash_combining_matchers,
1961 *self._backslash_combining_matchers,
1962 *self.magic_arg_matchers,
1962 *self.magic_arg_matchers,
1963 self.custom_completer_matcher,
1963 self.custom_completer_matcher,
1964 self.magic_matcher,
1964 self.magic_matcher,
1965 self._jedi_matcher,
1965 self._jedi_matcher,
1966 self.dict_key_matcher,
1966 self.dict_key_matcher,
1967 self.file_matcher,
1967 self.file_matcher,
1968 ]
1968 ]
1969 else:
1969 else:
1970 return [
1970 return [
1971 *self.custom_matchers,
1971 *self.custom_matchers,
1972 *self._backslash_combining_matchers,
1972 *self._backslash_combining_matchers,
1973 *self.magic_arg_matchers,
1973 *self.magic_arg_matchers,
1974 self.custom_completer_matcher,
1974 self.custom_completer_matcher,
1975 self.dict_key_matcher,
1975 self.dict_key_matcher,
1976 # TODO: convert python_matches to v2 API
1976 # TODO: convert python_matches to v2 API
1977 self.magic_matcher,
1977 self.magic_matcher,
1978 self.python_matches,
1978 self.python_matches,
1979 self.file_matcher,
1979 self.file_matcher,
1980 self.python_func_kw_matcher,
1980 self.python_func_kw_matcher,
1981 ]
1981 ]
1982
1982
1983 def all_completions(self, text:str) -> List[str]:
1983 def all_completions(self, text:str) -> List[str]:
1984 """
1984 """
1985 Wrapper around the completion methods for the benefit of emacs.
1985 Wrapper around the completion methods for the benefit of emacs.
1986 """
1986 """
1987 prefix = text.rpartition('.')[0]
1987 prefix = text.rpartition('.')[0]
1988 with provisionalcompleter():
1988 with provisionalcompleter():
1989 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1989 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1990 for c in self.completions(text, len(text))]
1990 for c in self.completions(text, len(text))]
1991
1991
1992 return self.complete(text)[1]
1992 return self.complete(text)[1]
1993
1993
1994 def _clean_glob(self, text:str):
1994 def _clean_glob(self, text:str):
1995 return self.glob("%s*" % text)
1995 return self.glob("%s*" % text)
1996
1996
1997 def _clean_glob_win32(self, text:str):
1997 def _clean_glob_win32(self, text:str):
1998 return [f.replace("\\","/")
1998 return [f.replace("\\","/")
1999 for f in self.glob("%s*" % text)]
1999 for f in self.glob("%s*" % text)]
2000
2000
2001 @context_matcher()
2001 @context_matcher()
2002 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2002 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2003 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2003 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2004 matches = self.file_matches(context.token)
2004 matches = self.file_matches(context.token)
2005 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2005 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2006 # starts with `/home/`, `C:\`, etc)
2006 # starts with `/home/`, `C:\`, etc)
2007 return _convert_matcher_v1_result_to_v2(matches, type="path")
2007 return _convert_matcher_v1_result_to_v2(matches, type="path")
2008
2008
2009 def file_matches(self, text: str) -> List[str]:
2009 def file_matches(self, text: str) -> List[str]:
2010 """Match filenames, expanding ~USER type strings.
2010 """Match filenames, expanding ~USER type strings.
2011
2011
2012 Most of the seemingly convoluted logic in this completer is an
2012 Most of the seemingly convoluted logic in this completer is an
2013 attempt to handle filenames with spaces in them. And yet it's not
2013 attempt to handle filenames with spaces in them. And yet it's not
2014 quite perfect, because Python's readline doesn't expose all of the
2014 quite perfect, because Python's readline doesn't expose all of the
2015 GNU readline details needed for this to be done correctly.
2015 GNU readline details needed for this to be done correctly.
2016
2016
2017 For a filename with a space in it, the printed completions will be
2017 For a filename with a space in it, the printed completions will be
2018 only the parts after what's already been typed (instead of the
2018 only the parts after what's already been typed (instead of the
2019 full completions, as is normally done). I don't think with the
2019 full completions, as is normally done). I don't think with the
2020 current (as of Python 2.3) Python readline it's possible to do
2020 current (as of Python 2.3) Python readline it's possible to do
2021 better.
2021 better.
2022
2022
2023 .. deprecated:: 8.6
2023 .. deprecated:: 8.6
2024 You can use :meth:`file_matcher` instead.
2024 You can use :meth:`file_matcher` instead.
2025 """
2025 """
2026
2026
2027 # chars that require escaping with backslash - i.e. chars
2027 # chars that require escaping with backslash - i.e. chars
2028 # that readline treats incorrectly as delimiters, but we
2028 # that readline treats incorrectly as delimiters, but we
2029 # don't want to treat as delimiters in filename matching
2029 # don't want to treat as delimiters in filename matching
2030 # when escaped with backslash
2030 # when escaped with backslash
2031 if text.startswith('!'):
2031 if text.startswith('!'):
2032 text = text[1:]
2032 text = text[1:]
2033 text_prefix = u'!'
2033 text_prefix = u'!'
2034 else:
2034 else:
2035 text_prefix = u''
2035 text_prefix = u''
2036
2036
2037 text_until_cursor = self.text_until_cursor
2037 text_until_cursor = self.text_until_cursor
2038 # track strings with open quotes
2038 # track strings with open quotes
2039 open_quotes = has_open_quotes(text_until_cursor)
2039 open_quotes = has_open_quotes(text_until_cursor)
2040
2040
2041 if '(' in text_until_cursor or '[' in text_until_cursor:
2041 if '(' in text_until_cursor or '[' in text_until_cursor:
2042 lsplit = text
2042 lsplit = text
2043 else:
2043 else:
2044 try:
2044 try:
2045 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2045 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2046 lsplit = arg_split(text_until_cursor)[-1]
2046 lsplit = arg_split(text_until_cursor)[-1]
2047 except ValueError:
2047 except ValueError:
2048 # typically an unmatched ", or backslash without escaped char.
2048 # typically an unmatched ", or backslash without escaped char.
2049 if open_quotes:
2049 if open_quotes:
2050 lsplit = text_until_cursor.split(open_quotes)[-1]
2050 lsplit = text_until_cursor.split(open_quotes)[-1]
2051 else:
2051 else:
2052 return []
2052 return []
2053 except IndexError:
2053 except IndexError:
2054 # tab pressed on empty line
2054 # tab pressed on empty line
2055 lsplit = ""
2055 lsplit = ""
2056
2056
2057 if not open_quotes and lsplit != protect_filename(lsplit):
2057 if not open_quotes and lsplit != protect_filename(lsplit):
2058 # if protectables are found, do matching on the whole escaped name
2058 # if protectables are found, do matching on the whole escaped name
2059 has_protectables = True
2059 has_protectables = True
2060 text0,text = text,lsplit
2060 text0,text = text,lsplit
2061 else:
2061 else:
2062 has_protectables = False
2062 has_protectables = False
2063 text = os.path.expanduser(text)
2063 text = os.path.expanduser(text)
2064
2064
2065 if text == "":
2065 if text == "":
2066 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2066 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2067
2067
2068 # Compute the matches from the filesystem
2068 # Compute the matches from the filesystem
2069 if sys.platform == 'win32':
2069 if sys.platform == 'win32':
2070 m0 = self.clean_glob(text)
2070 m0 = self.clean_glob(text)
2071 else:
2071 else:
2072 m0 = self.clean_glob(text.replace('\\', ''))
2072 m0 = self.clean_glob(text.replace('\\', ''))
2073
2073
2074 if has_protectables:
2074 if has_protectables:
2075 # If we had protectables, we need to revert our changes to the
2075 # If we had protectables, we need to revert our changes to the
2076 # beginning of filename so that we don't double-write the part
2076 # beginning of filename so that we don't double-write the part
2077 # of the filename we have so far
2077 # of the filename we have so far
2078 len_lsplit = len(lsplit)
2078 len_lsplit = len(lsplit)
2079 matches = [text_prefix + text0 +
2079 matches = [text_prefix + text0 +
2080 protect_filename(f[len_lsplit:]) for f in m0]
2080 protect_filename(f[len_lsplit:]) for f in m0]
2081 else:
2081 else:
2082 if open_quotes:
2082 if open_quotes:
2083 # if we have a string with an open quote, we don't need to
2083 # if we have a string with an open quote, we don't need to
2084 # protect the names beyond the quote (and we _shouldn't_, as
2084 # protect the names beyond the quote (and we _shouldn't_, as
2085 # it would cause bugs when the filesystem call is made).
2085 # it would cause bugs when the filesystem call is made).
2086 matches = m0 if sys.platform == "win32" else\
2086 matches = m0 if sys.platform == "win32" else\
2087 [protect_filename(f, open_quotes) for f in m0]
2087 [protect_filename(f, open_quotes) for f in m0]
2088 else:
2088 else:
2089 matches = [text_prefix +
2089 matches = [text_prefix +
2090 protect_filename(f) for f in m0]
2090 protect_filename(f) for f in m0]
2091
2091
2092 # Mark directories in input list by appending '/' to their names.
2092 # Mark directories in input list by appending '/' to their names.
2093 return [x+'/' if os.path.isdir(x) else x for x in matches]
2093 return [x+'/' if os.path.isdir(x) else x for x in matches]
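# Illustrative behaviour sketch (file names are assumptions): completing
# ``./no<tab>`` in a directory containing ``notes.txt`` and ``notebooks/``
# would typically yield
#
#     ['./notes.txt', './notebooks/']
#
# i.e. directories get a trailing '/', and a leading '!' (shell escape) on
# the completed token is preserved in every returned match.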
2094
2094
2095 @context_matcher()
2095 @context_matcher()
2096 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2096 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2097 """Match magics."""
2097 """Match magics."""
2098 text = context.token
2098 text = context.token
2099 matches = self.magic_matches(text)
2099 matches = self.magic_matches(text)
2100 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2100 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2101 is_magic_prefix = len(text) > 0 and text[0] == "%"
2101 is_magic_prefix = len(text) > 0 and text[0] == "%"
2102 result["suppress"] = is_magic_prefix and bool(result["completions"])
2102 result["suppress"] = is_magic_prefix and bool(result["completions"])
2103 return result
2103 return result
2104
2104
2105 def magic_matches(self, text: str):
2105 def magic_matches(self, text: str):
2106 """Match magics.
2106 """Match magics.
2107
2107
2108 .. deprecated:: 8.6
2108 .. deprecated:: 8.6
2109 You can use :meth:`magic_matcher` instead.
2109 You can use :meth:`magic_matcher` instead.
2110 """
2110 """
2111 # Get all shell magics now rather than statically, so magics loaded at
2111 # Get all shell magics now rather than statically, so magics loaded at
2112 # runtime show up too.
2112 # runtime show up too.
2113 lsm = self.shell.magics_manager.lsmagic()
2113 lsm = self.shell.magics_manager.lsmagic()
2114 line_magics = lsm['line']
2114 line_magics = lsm['line']
2115 cell_magics = lsm['cell']
2115 cell_magics = lsm['cell']
2116 pre = self.magic_escape
2116 pre = self.magic_escape
2117 pre2 = pre+pre
2117 pre2 = pre+pre
2118
2118
2119 explicit_magic = text.startswith(pre)
2119 explicit_magic = text.startswith(pre)
2120
2120
2121 # Completion logic:
2121 # Completion logic:
2122 # - user gives %%: only do cell magics
2122 # - user gives %%: only do cell magics
2123 # - user gives %: do both line and cell magics
2123 # - user gives %: do both line and cell magics
2124 # - no prefix: do both
2124 # - no prefix: do both
2125 # In other words, line magics are skipped if the user gives %% explicitly
2125 # In other words, line magics are skipped if the user gives %% explicitly
2126 #
2126 #
2127 # We also exclude magics that match any currently visible names:
2127 # We also exclude magics that match any currently visible names:
2128 # https://github.com/ipython/ipython/issues/4877, unless the user has
2128 # https://github.com/ipython/ipython/issues/4877, unless the user has
2129 # typed a %:
2129 # typed a %:
2130 # https://github.com/ipython/ipython/issues/10754
2130 # https://github.com/ipython/ipython/issues/10754
2131 bare_text = text.lstrip(pre)
2131 bare_text = text.lstrip(pre)
2132 global_matches = self.global_matches(bare_text)
2132 global_matches = self.global_matches(bare_text)
2133 if not explicit_magic:
2133 if not explicit_magic:
2134 def matches(magic):
2134 def matches(magic):
2135 """
2135 """
2136 Filter magics, in particular remove magics that match
2136 Filter magics, in particular remove magics that match
2137 a name present in global namespace.
2137 a name present in global namespace.
2138 """
2138 """
2139 return ( magic.startswith(bare_text) and
2139 return ( magic.startswith(bare_text) and
2140 magic not in global_matches )
2140 magic not in global_matches )
2141 else:
2141 else:
2142 def matches(magic):
2142 def matches(magic):
2143 return magic.startswith(bare_text)
2143 return magic.startswith(bare_text)
2144
2144
2145 comp = [ pre2+m for m in cell_magics if matches(m)]
2145 comp = [ pre2+m for m in cell_magics if matches(m)]
2146 if not text.startswith(pre2):
2146 if not text.startswith(pre2):
2147 comp += [ pre+m for m in line_magics if matches(m)]
2147 comp += [ pre+m for m in line_magics if matches(m)]
2148
2148
2149 return comp
2149 return comp
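# Illustrative behaviour sketch (assumes the built-in ``timeit`` magics are
# registered and not shadowed by a variable of the same name):
#
#     completer.magic_matches('%%ti')  # -> ['%%timeit']  (cell magics only)
#     completer.magic_matches('%ti')   # -> ['%%timeit', '%timeit']
#     completer.magic_matches('ti')    # both forms as well, unless a global
#                                      # name starting with 'ti' hides them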
2150
2150
2151 @context_matcher()
2151 @context_matcher()
2152 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2152 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2153 """Match class names and attributes for %config magic."""
2153 """Match class names and attributes for %config magic."""
2154 # NOTE: uses `line_buffer` equivalent for compatibility
2154 # NOTE: uses `line_buffer` equivalent for compatibility
2155 matches = self.magic_config_matches(context.line_with_cursor)
2155 matches = self.magic_config_matches(context.line_with_cursor)
2156 return _convert_matcher_v1_result_to_v2(matches, type="param")
2156 return _convert_matcher_v1_result_to_v2(matches, type="param")
2157
2157
2158 def magic_config_matches(self, text: str) -> List[str]:
2158 def magic_config_matches(self, text: str) -> List[str]:
2159 """Match class names and attributes for %config magic.
2159 """Match class names and attributes for %config magic.
2160
2160
2161 .. deprecated:: 8.6
2161 .. deprecated:: 8.6
2162 You can use :meth:`magic_config_matcher` instead.
2162 You can use :meth:`magic_config_matcher` instead.
2163 """
2163 """
2164 texts = text.strip().split()
2164 texts = text.strip().split()
2165
2165
2166 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2166 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2167 # get all configuration classes
2167 # get all configuration classes
2168 classes = sorted(set([ c for c in self.shell.configurables
2168 classes = sorted(set([ c for c in self.shell.configurables
2169 if c.__class__.class_traits(config=True)
2169 if c.__class__.class_traits(config=True)
2170 ]), key=lambda x: x.__class__.__name__)
2170 ]), key=lambda x: x.__class__.__name__)
2171 classnames = [ c.__class__.__name__ for c in classes ]
2171 classnames = [ c.__class__.__name__ for c in classes ]
2172
2172
2173 # return all classnames if config or %config is given
2173 # return all classnames if config or %config is given
2174 if len(texts) == 1:
2174 if len(texts) == 1:
2175 return classnames
2175 return classnames
2176
2176
2177 # match classname
2177 # match classname
2178 classname_texts = texts[1].split('.')
2178 classname_texts = texts[1].split('.')
2179 classname = classname_texts[0]
2179 classname = classname_texts[0]
2180 classname_matches = [ c for c in classnames
2180 classname_matches = [ c for c in classnames
2181 if c.startswith(classname) ]
2181 if c.startswith(classname) ]
2182
2182
2183 # return matched classes or the matched class with attributes
2183 # return matched classes or the matched class with attributes
2184 if texts[1].find('.') < 0:
2184 if texts[1].find('.') < 0:
2185 return classname_matches
2185 return classname_matches
2186 elif len(classname_matches) == 1 and \
2186 elif len(classname_matches) == 1 and \
2187 classname_matches[0] == classname:
2187 classname_matches[0] == classname:
2188 cls = classes[classnames.index(classname)].__class__
2188 cls = classes[classnames.index(classname)].__class__
2189 help = cls.class_get_help()
2189 help = cls.class_get_help()
2190 # strip leading '--' from cl-args:
2190 # strip leading '--' from cl-args:
2191 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2191 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2192 return [ attr.split('=')[0]
2192 return [ attr.split('=')[0]
2193 for attr in help.strip().splitlines()
2193 for attr in help.strip().splitlines()
2194 if attr.startswith(texts[1]) ]
2194 if attr.startswith(texts[1]) ]
2195 return []
2195 return []
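# Illustrative behaviour sketch (the available class and trait names depend
# on the running shell's configurables):
#
#     completer.magic_config_matches('%config IPC')
#     # -> ['IPCompleter', ...]
#     completer.magic_config_matches('%config IPCompleter.gr')
#     # -> e.g. ['IPCompleter.greedy'] (attributes read from the class help)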
2196
2196
2197 @context_matcher()
2197 @context_matcher()
2198 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2198 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2199 """Match color schemes for %colors magic."""
2199 """Match color schemes for %colors magic."""
2200 # NOTE: uses `line_buffer` equivalent for compatibility
2200 # NOTE: uses `line_buffer` equivalent for compatibility
2201 matches = self.magic_color_matches(context.line_with_cursor)
2201 matches = self.magic_color_matches(context.line_with_cursor)
2202 return _convert_matcher_v1_result_to_v2(matches, type="param")
2202 return _convert_matcher_v1_result_to_v2(matches, type="param")
2203
2203
2204 def magic_color_matches(self, text: str) -> List[str]:
2204 def magic_color_matches(self, text: str) -> List[str]:
2205 """Match color schemes for %colors magic.
2205 """Match color schemes for %colors magic.
2206
2206
2207 .. deprecated:: 8.6
2207 .. deprecated:: 8.6
2208 You can use :meth:`magic_color_matcher` instead.
2208 You can use :meth:`magic_color_matcher` instead.
2209 """
2209 """
2210 texts = text.split()
2210 texts = text.split()
2211 if text.endswith(' '):
2211 if text.endswith(' '):
2212 # .split() strips off the trailing whitespace. Add '' back
2212 # .split() strips off the trailing whitespace. Add '' back
2213 # so that: '%colors ' -> ['%colors', '']
2213 # so that: '%colors ' -> ['%colors', '']
2214 texts.append('')
2214 texts.append('')
2215
2215
2216 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2216 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2217 prefix = texts[1]
2217 prefix = texts[1]
2218 return [ color for color in InspectColors.keys()
2218 return [ color for color in InspectColors.keys()
2219 if color.startswith(prefix) ]
2219 if color.startswith(prefix) ]
2220 return []
2220 return []
2221
2221
2222 @context_matcher(identifier="IPCompleter.jedi_matcher")
2222 @context_matcher(identifier="IPCompleter.jedi_matcher")
2223 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2223 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2224 matches = self._jedi_matches(
2224 matches = self._jedi_matches(
2225 cursor_column=context.cursor_position,
2225 cursor_column=context.cursor_position,
2226 cursor_line=context.cursor_line,
2226 cursor_line=context.cursor_line,
2227 text=context.full_text,
2227 text=context.full_text,
2228 )
2228 )
2229 return {
2229 return {
2230 "completions": matches,
2230 "completions": matches,
2231 # static analysis should not suppress other matchers
2231 # static analysis should not suppress other matchers
2232 "suppress": False,
2232 "suppress": False,
2233 }
2233 }
2234
2234
2235 def _jedi_matches(
2235 def _jedi_matches(
2236 self, cursor_column: int, cursor_line: int, text: str
2236 self, cursor_column: int, cursor_line: int, text: str
2237 ) -> Iterator[_JediCompletionLike]:
2237 ) -> Iterator[_JediCompletionLike]:
2238 """
2238 """
2239 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
2239 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
2240 cursor position.
2240 cursor position.
2241
2241
2242 Parameters
2242 Parameters
2243 ----------
2243 ----------
2244 cursor_column : int
2244 cursor_column : int
2245 column position of the cursor in ``text``, 0-indexed.
2245 column position of the cursor in ``text``, 0-indexed.
2246 cursor_line : int
2246 cursor_line : int
2247 line position of the cursor in ``text``, 0-indexed
2247 line position of the cursor in ``text``, 0-indexed
2248 text : str
2248 text : str
2249 text to complete
2249 text to complete
2250
2250
2251 Notes
2251 Notes
2252 -----
2252 -----
2253 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
2253 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
2254 object containing a string with the Jedi debug information attached.
2254 object containing a string with the Jedi debug information attached.
2255
2255
2256 .. deprecated:: 8.6
2256 .. deprecated:: 8.6
2257 You can use :meth:`_jedi_matcher` instead.
2257 You can use :meth:`_jedi_matcher` instead.
2258 """
2258 """
2259 namespaces = [self.namespace]
2259 namespaces = [self.namespace]
2260 if self.global_namespace is not None:
2260 if self.global_namespace is not None:
2261 namespaces.append(self.global_namespace)
2261 namespaces.append(self.global_namespace)
2262
2262
2263 completion_filter = lambda x:x
2263 completion_filter = lambda x:x
2264 offset = cursor_to_position(text, cursor_line, cursor_column)
2264 offset = cursor_to_position(text, cursor_line, cursor_column)
2265 # filter output if we are completing for object members
2265 # filter output if we are completing for object members
2266 if offset:
2266 if offset:
2267 pre = text[offset-1]
2267 pre = text[offset-1]
2268 if pre == '.':
2268 if pre == '.':
2269 if self.omit__names == 2:
2269 if self.omit__names == 2:
2270 completion_filter = lambda c:not c.name.startswith('_')
2270 completion_filter = lambda c:not c.name.startswith('_')
2271 elif self.omit__names == 1:
2271 elif self.omit__names == 1:
2272 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2272 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2273 elif self.omit__names == 0:
2273 elif self.omit__names == 0:
2274 completion_filter = lambda x:x
2274 completion_filter = lambda x:x
2275 else:
2275 else:
2276 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2276 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2277
2277
2278 interpreter = jedi.Interpreter(text[:offset], namespaces)
2278 interpreter = jedi.Interpreter(text[:offset], namespaces)
2279 try_jedi = True
2279 try_jedi = True
2280
2280
2281 try:
2281 try:
2282 # find the first token in the current tree -- if it is a ' or " then we are in a string
2282 # find the first token in the current tree -- if it is a ' or " then we are in a string
2283 completing_string = False
2283 completing_string = False
2284 try:
2284 try:
2285 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2285 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2286 except StopIteration:
2286 except StopIteration:
2287 pass
2287 pass
2288 else:
2288 else:
2289 # note the value may be ', ", or it may also be ''' or """, or
2289 # note the value may be ', ", or it may also be ''' or """, or
2290 # in some cases, """what/you/typed..., but all of these are
2290 # in some cases, """what/you/typed..., but all of these are
2291 # strings.
2291 # strings.
2292 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2292 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2293
2293
2294 # if we are in a string jedi is likely not the right candidate for
2294 # if we are in a string jedi is likely not the right candidate for
2295 # now. Skip it.
2295 # now. Skip it.
2296 try_jedi = not completing_string
2296 try_jedi = not completing_string
2297 except Exception as e:
2297 except Exception as e:
2298 # many things can go wrong; we are using a private API, so just don't crash.
2298 # many things can go wrong; we are using a private API, so just don't crash.
2299 if self.debug:
2299 if self.debug:
2300 print("Error detecting if completing a non-finished string :", e, '|')
2300 print("Error detecting if completing a non-finished string :", e, '|')
2301
2301
2302 if not try_jedi:
2302 if not try_jedi:
2303 return iter([])
2303 return iter([])
2304 try:
2304 try:
2305 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2305 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2306 except Exception as e:
2306 except Exception as e:
2307 if self.debug:
2307 if self.debug:
2308 return iter(
2308 return iter(
2309 [
2309 [
2310 _FakeJediCompletion(
2310 _FakeJediCompletion(
2311 'Oops, Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2311 'Oops, Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2312 % (e)
2312 % (e)
2313 )
2313 )
2314 ]
2314 ]
2315 )
2315 )
2316 else:
2316 else:
2317 return iter([])
2317 return iter([])
2318
2318
2319 @completion_matcher(api_version=1)
2319 @completion_matcher(api_version=1)
2320 def python_matches(self, text: str) -> Iterable[str]:
2320 def python_matches(self, text: str) -> Iterable[str]:
2321 """Match attributes or global python names"""
2321 """Match attributes or global python names"""
2322 if "." in text:
2322 if "." in text:
2323 try:
2323 try:
2324 matches = self.attr_matches(text)
2324 matches = self.attr_matches(text)
2325 if text.endswith('.') and self.omit__names:
2325 if text.endswith('.') and self.omit__names:
2326 if self.omit__names == 1:
2326 if self.omit__names == 1:
2327 # true if txt is _not_ a __ name, false otherwise:
2327 # true if txt is _not_ a __ name, false otherwise:
2328 no__name = (lambda txt:
2328 no__name = (lambda txt:
2329 re.match(r'.*\.__.*?__',txt) is None)
2329 re.match(r'.*\.__.*?__',txt) is None)
2330 else:
2330 else:
2331 # true if txt is _not_ a _ name, false otherwise:
2331 # true if txt is _not_ a _ name, false otherwise:
2332 no__name = (lambda txt:
2332 no__name = (lambda txt:
2333 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2333 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2334 matches = filter(no__name, matches)
2334 matches = filter(no__name, matches)
2335 except NameError:
2335 except NameError:
2336 # catches <undefined attributes>.<tab>
2336 # catches <undefined attributes>.<tab>
2337 matches = []
2337 matches = []
2338 else:
2338 else:
2339 matches = self.global_matches(text)
2339 matches = self.global_matches(text)
2340 return matches
2340 return matches
2341
2341
2342 def _default_arguments_from_docstring(self, doc):
2342 def _default_arguments_from_docstring(self, doc):
2343 """Parse the first line of docstring for call signature.
2343 """Parse the first line of docstring for call signature.
2344
2344
2345 Docstring should be of the form 'min(iterable[, key=func])\n'.
2345 Docstring should be of the form 'min(iterable[, key=func])\n'.
2346 It can also parse a Cython docstring of the form
2346 It can also parse a Cython docstring of the form
2347 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2347 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2348 """
2348 """
2349 if doc is None:
2349 if doc is None:
2350 return []
2350 return []
2351
2351
2352 # care only about the first line
2352 # care only about the first line
2353 line = doc.lstrip().splitlines()[0]
2353 line = doc.lstrip().splitlines()[0]
2354
2354
2355 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2355 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2356 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2356 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2357 sig = self.docstring_sig_re.search(line)
2357 sig = self.docstring_sig_re.search(line)
2358 if sig is None:
2358 if sig is None:
2359 return []
2359 return []
2360 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2360 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2361 sig = sig.groups()[0].split(',')
2361 sig = sig.groups()[0].split(',')
2362 ret = []
2362 ret = []
2363 for s in sig:
2363 for s in sig:
2364 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2364 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2365 ret += self.docstring_kwd_re.findall(s)
2365 ret += self.docstring_kwd_re.findall(s)
2366 return ret
2366 return ret
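# Illustrative sketch of the parsing above, using the two signature styles
# named in the docstring:
#
#     self._default_arguments_from_docstring('min(iterable[, key=func])\n')
#     # -> ['key']   (only ``name=default`` entries are captured)
#     self._default_arguments_from_docstring(
#         'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)')
#     # -> ['ncall', 'resume', 'nsplit']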
2367
2367
2368 def _default_arguments(self, obj):
2368 def _default_arguments(self, obj):
2369 """Return the list of default arguments of obj if it is callable,
2369 """Return the list of default arguments of obj if it is callable,
2370 or empty list otherwise."""
2370 or empty list otherwise."""
2371 call_obj = obj
2371 call_obj = obj
2372 ret = []
2372 ret = []
2373 if inspect.isbuiltin(obj):
2373 if inspect.isbuiltin(obj):
2374 pass
2374 pass
2375 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2375 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2376 if inspect.isclass(obj):
2376 if inspect.isclass(obj):
2377 #for cython embedsignature=True the constructor docstring
2377 #for cython embedsignature=True the constructor docstring
2378 #belongs to the object itself not __init__
2378 #belongs to the object itself not __init__
2379 ret += self._default_arguments_from_docstring(
2379 ret += self._default_arguments_from_docstring(
2380 getattr(obj, '__doc__', ''))
2380 getattr(obj, '__doc__', ''))
2381 # for classes, check for __init__,__new__
2381 # for classes, check for __init__,__new__
2382 call_obj = (getattr(obj, '__init__', None) or
2382 call_obj = (getattr(obj, '__init__', None) or
2383 getattr(obj, '__new__', None))
2383 getattr(obj, '__new__', None))
2384 # for all others, check if they are __call__able
2384 # for all others, check if they are __call__able
2385 elif hasattr(obj, '__call__'):
2385 elif hasattr(obj, '__call__'):
2386 call_obj = obj.__call__
2386 call_obj = obj.__call__
2387 ret += self._default_arguments_from_docstring(
2387 ret += self._default_arguments_from_docstring(
2388 getattr(call_obj, '__doc__', ''))
2388 getattr(call_obj, '__doc__', ''))
2389
2389
2390 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2390 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2391 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2391 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2392
2392
2393 try:
2393 try:
2394 sig = inspect.signature(obj)
2394 sig = inspect.signature(obj)
2395 ret.extend(k for k, v in sig.parameters.items() if
2395 ret.extend(k for k, v in sig.parameters.items() if
2396 v.kind in _keeps)
2396 v.kind in _keeps)
2397 except ValueError:
2397 except ValueError:
2398 pass
2398 pass
2399
2399
2400 return list(set(ret))
2400 return list(set(ret))
2401
2401
2402 @context_matcher()
2402 @context_matcher()
2403 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2403 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2404 """Match named parameters (kwargs) of the last open function."""
2404 """Match named parameters (kwargs) of the last open function."""
2405 matches = self.python_func_kw_matches(context.token)
2405 matches = self.python_func_kw_matches(context.token)
2406 return _convert_matcher_v1_result_to_v2(matches, type="param")
2406 return _convert_matcher_v1_result_to_v2(matches, type="param")
2407
2407
2408 def python_func_kw_matches(self, text):
2408 def python_func_kw_matches(self, text):
2409 """Match named parameters (kwargs) of the last open function.
2409 """Match named parameters (kwargs) of the last open function.
2410
2410
2411 .. deprecated:: 8.6
2411 .. deprecated:: 8.6
2412 You can use :meth:`python_func_kw_matcher` instead.
2412 You can use :meth:`python_func_kw_matcher` instead.
2413 """
2413 """
2414
2414
2415 if "." in text: # a parameter cannot be dotted
2415 if "." in text: # a parameter cannot be dotted
2416 return []
2416 return []
2417 try: regexp = self.__funcParamsRegex
2417 try: regexp = self.__funcParamsRegex
2418 except AttributeError:
2418 except AttributeError:
2419 regexp = self.__funcParamsRegex = re.compile(r'''
2419 regexp = self.__funcParamsRegex = re.compile(r'''
2420 '.*?(?<!\\)' | # single quoted strings or
2420 '.*?(?<!\\)' | # single quoted strings or
2421 ".*?(?<!\\)" | # double quoted strings or
2421 ".*?(?<!\\)" | # double quoted strings or
2422 \w+ | # identifier
2422 \w+ | # identifier
2423 \S # other characters
2423 \S # other characters
2424 ''', re.VERBOSE | re.DOTALL)
2424 ''', re.VERBOSE | re.DOTALL)
2425 # 1. find the nearest identifier that comes before an unclosed
2425 # 1. find the nearest identifier that comes before an unclosed
2426 # parenthesis before the cursor
2426 # parenthesis before the cursor
2427 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2427 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2428 tokens = regexp.findall(self.text_until_cursor)
2428 tokens = regexp.findall(self.text_until_cursor)
2429 iterTokens = reversed(tokens); openPar = 0
2429 iterTokens = reversed(tokens); openPar = 0
2430
2430
2431 for token in iterTokens:
2431 for token in iterTokens:
2432 if token == ')':
2432 if token == ')':
2433 openPar -= 1
2433 openPar -= 1
2434 elif token == '(':
2434 elif token == '(':
2435 openPar += 1
2435 openPar += 1
2436 if openPar > 0:
2436 if openPar > 0:
2437 # found the last unclosed parenthesis
2437 # found the last unclosed parenthesis
2438 break
2438 break
2439 else:
2439 else:
2440 return []
2440 return []
2441 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2441 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2442 ids = []
2442 ids = []
2443 isId = re.compile(r'\w+$').match
2443 isId = re.compile(r'\w+$').match
2444
2444
2445 while True:
2445 while True:
2446 try:
2446 try:
2447 ids.append(next(iterTokens))
2447 ids.append(next(iterTokens))
2448 if not isId(ids[-1]):
2448 if not isId(ids[-1]):
2449 ids.pop(); break
2449 ids.pop(); break
2450 if not next(iterTokens) == '.':
2450 if not next(iterTokens) == '.':
2451 break
2451 break
2452 except StopIteration:
2452 except StopIteration:
2453 break
2453 break
2454
2454
2455 # Find all named arguments already assigned to, so as to avoid suggesting
2455 # Find all named arguments already assigned to, so as to avoid suggesting
2456 # them again
2456 # them again
2457 usedNamedArgs = set()
2457 usedNamedArgs = set()
2458 par_level = -1
2458 par_level = -1
2459 for token, next_token in zip(tokens, tokens[1:]):
2459 for token, next_token in zip(tokens, tokens[1:]):
2460 if token == '(':
2460 if token == '(':
2461 par_level += 1
2461 par_level += 1
2462 elif token == ')':
2462 elif token == ')':
2463 par_level -= 1
2463 par_level -= 1
2464
2464
2465 if par_level != 0:
2465 if par_level != 0:
2466 continue
2466 continue
2467
2467
2468 if next_token != '=':
2468 if next_token != '=':
2469 continue
2469 continue
2470
2470
2471 usedNamedArgs.add(token)
2471 usedNamedArgs.add(token)
2472
2472
2473 argMatches = []
2473 argMatches = []
2474 try:
2474 try:
2475 callableObj = '.'.join(ids[::-1])
2475 callableObj = '.'.join(ids[::-1])
2476 namedArgs = self._default_arguments(eval(callableObj,
2476 namedArgs = self._default_arguments(eval(callableObj,
2477 self.namespace))
2477 self.namespace))
2478
2478
2479 # Remove used named arguments from the list, no need to show twice
2479 # Remove used named arguments from the list, no need to show twice
2480 for namedArg in set(namedArgs) - usedNamedArgs:
2480 for namedArg in set(namedArgs) - usedNamedArgs:
2481 if namedArg.startswith(text):
2481 if namedArg.startswith(text):
2482 argMatches.append("%s=" %namedArg)
2482 argMatches.append("%s=" %namedArg)
2483 except:
2483 except:
2484 pass
2484 pass
2485
2485
2486 return argMatches
2486 return argMatches
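# Illustrative sketch (``plot`` is a hypothetical function, not part of
# IPython): given
#
#     def plot(data, padding=0, params=None): ...
#
# completing ``plot(data, pa<tab>`` offers ``padding=`` and ``params=``,
# while in ``plot(data, padding=1, pa<tab>`` only ``params=`` remains,
# because keyword arguments already assigned in the call are filtered out.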
2487
2487
2488 @staticmethod
2488 @staticmethod
2489 def _get_keys(obj: Any) -> List[Any]:
2489 def _get_keys(obj: Any) -> List[Any]:
2490 # Objects can define their own completions by defining an
2490 # Objects can define their own completions by defining an
2491 # _ipython_key_completions_() method.
2491 # _ipython_key_completions_() method.
2492 method = get_real_method(obj, '_ipython_key_completions_')
2492 method = get_real_method(obj, '_ipython_key_completions_')
2493 if method is not None:
2493 if method is not None:
2494 return method()
2494 return method()
2495
2495
2496 # Special case some common in-memory dict-like types
2496 # Special case some common in-memory dict-like types
2497 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2497 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2498 try:
2498 try:
2499 return list(obj.keys())
2499 return list(obj.keys())
2500 except Exception:
2500 except Exception:
2501 return []
2501 return []
2502 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2502 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2503 try:
2503 try:
2504 return list(obj.obj.keys())
2504 return list(obj.obj.keys())
2505 except Exception:
2505 except Exception:
2506 return []
2506 return []
2507 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2507 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2508 _safe_isinstance(obj, 'numpy', 'void'):
2508 _safe_isinstance(obj, 'numpy', 'void'):
2509 return obj.dtype.names or []
2509 return obj.dtype.names or []
2510 return []
2510 return []
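# Illustrative sketch (the class is an assumption; the hook name is the real
# IPython protocol checked above): any object can advertise its keys to the
# completer by implementing ``_ipython_key_completions_``:
#
#     class Catalog:
#         def __init__(self):
#             self._data = {"alpha": 1, "beta": 2}
#         def __getitem__(self, key):
#             return self._data[key]
#         def _ipython_key_completions_(self):
#             return list(self._data)  # keys offered after ``catalog[``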
2511
2511
2512 @context_matcher()
2512 @context_matcher()
2513 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2513 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2514 """Match string keys in a dictionary, after e.g. ``foo[``."""
2514 """Match string keys in a dictionary, after e.g. ``foo[``."""
2515 matches = self.dict_key_matches(context.token)
2515 matches = self.dict_key_matches(context.token)
2516 return _convert_matcher_v1_result_to_v2(
2516 return _convert_matcher_v1_result_to_v2(
2517 matches, type="dict key", suppress_if_matches=True
2517 matches, type="dict key", suppress_if_matches=True
2518 )
2518 )
2519
2519
2520 def dict_key_matches(self, text: str) -> List[str]:
2520 def dict_key_matches(self, text: str) -> List[str]:
2521 """Match string keys in a dictionary, after e.g. ``foo[``.
2521 """Match string keys in a dictionary, after e.g. ``foo[``.
2522
2522
2523 .. deprecated:: 8.6
2523 .. deprecated:: 8.6
2524 You can use :meth:`dict_key_matcher` instead.
2524 You can use :meth:`dict_key_matcher` instead.
2525 """
2525 """
2526
2526
2527 # Short-circuit on closed dictionary (regular expression would
2527 # Short-circuit on closed dictionary (regular expression would
2528 # not match anyway, but would take quite a while).
2528 # not match anyway, but would take quite a while).
2529 if self.text_until_cursor.strip().endswith("]"):
2529 if self.text_until_cursor.strip().endswith("]"):
2530 return []
2530 return []
2531
2531
2532 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2532 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2533
2533
2534 if match is None:
2534 if match is None:
2535 return []
2535 return []
2536
2536
2537 expr, prior_tuple_keys, key_prefix = match.groups()
2537 expr, prior_tuple_keys, key_prefix = match.groups()
2538
2538
2539 obj = self._evaluate_expr(expr)
2539 obj = self._evaluate_expr(expr)
2540
2540
2541 if obj is not_found:
2541 if obj is not_found:
2542 return []
2542 return []
2543
2543
2544 keys = self._get_keys(obj)
2544 keys = self._get_keys(obj)
2545 if not keys:
2545 if not keys:
2546 return keys
2546 return keys
2547
2547
2548 tuple_prefix = guarded_eval(
2548 tuple_prefix = guarded_eval(
2549 prior_tuple_keys,
2549 prior_tuple_keys,
2550 EvaluationContext(
2550 EvaluationContext(
2551 globals=self.global_namespace,
2551 globals=self.global_namespace,
2552 locals=self.namespace,
2552 locals=self.namespace,
2553 evaluation=self.evaluation,
2553 evaluation=self.evaluation,
2554 in_subscript=True,
2554 in_subscript=True,
2555 ),
2555 ),
2556 )
2556 )
2557
2557
2558 closing_quote, token_offset, matches = match_dict_keys(
2558 closing_quote, token_offset, matches = match_dict_keys(
2559 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2559 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2560 )
2560 )
2561 if not matches:
2561 if not matches:
2562 return []
2562 return []
2563
2563
2564 # get the cursor position of
2564 # get the cursor position of
2565 # - the text being completed
2565 # - the text being completed
2566 # - the start of the key text
2566 # - the start of the key text
2567 # - the start of the completion
2567 # - the start of the completion
2568 text_start = len(self.text_until_cursor) - len(text)
2568 text_start = len(self.text_until_cursor) - len(text)
2569 if key_prefix:
2569 if key_prefix:
2570 key_start = match.start(3)
2570 key_start = match.start(3)
2571 completion_start = key_start + token_offset
2571 completion_start = key_start + token_offset
2572 else:
2572 else:
2573 key_start = completion_start = match.end()
2573 key_start = completion_start = match.end()
2574
2574
2575 # grab the leading prefix, to make sure all completions start with `text`
2575 # grab the leading prefix, to make sure all completions start with `text`
2576 if text_start > key_start:
2576 if text_start > key_start:
2577 leading = ''
2577 leading = ''
2578 else:
2578 else:
2579 leading = text[text_start:completion_start]
2579 leading = text[text_start:completion_start]
2580
2580
2581 # append closing quote and bracket as appropriate
2581 # append closing quote and bracket as appropriate
2582 # this is *not* appropriate if the opening quote or bracket is outside
2582 # this is *not* appropriate if the opening quote or bracket is outside
2583 # the text given to this method, e.g. `d["""a\nt
2583 # the text given to this method, e.g. `d["""a\nt
2584 can_close_quote = False
2584 can_close_quote = False
2585 can_close_bracket = False
2585 can_close_bracket = False
2586
2586
2587 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2587 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2588
2588
2589 if continuation.startswith(closing_quote):
2589 if continuation.startswith(closing_quote):
2590 # do not close if already closed, e.g. `d['a<tab>'`
2590 # do not close if already closed, e.g. `d['a<tab>'`
2591 continuation = continuation[len(closing_quote) :]
2591 continuation = continuation[len(closing_quote) :]
2592 else:
2592 else:
2593 can_close_quote = True
2593 can_close_quote = True
2594
2594
2595 continuation = continuation.strip()
2595 continuation = continuation.strip()
2596
2596
2597 # e.g. `pandas.DataFrame` has different tuple indexer behaviour;
2597 # e.g. `pandas.DataFrame` has different tuple indexer behaviour;
2598 # handling it is out of scope, so let's avoid appending suffixes.
2598 # handling it is out of scope, so let's avoid appending suffixes.
2599 has_known_tuple_handling = isinstance(obj, dict)
2599 has_known_tuple_handling = isinstance(obj, dict)
2600
2600
2601 can_close_bracket = (
2601 can_close_bracket = (
2602 not continuation.startswith("]") and self.auto_close_dict_keys
2602 not continuation.startswith("]") and self.auto_close_dict_keys
2603 )
2603 )
2604 can_close_tuple_item = (
2604 can_close_tuple_item = (
2605 not continuation.startswith(",")
2605 not continuation.startswith(",")
2606 and has_known_tuple_handling
2606 and has_known_tuple_handling
2607 and self.auto_close_dict_keys
2607 and self.auto_close_dict_keys
2608 )
2608 )
2609 can_close_quote = can_close_quote and self.auto_close_dict_keys
2609 can_close_quote = can_close_quote and self.auto_close_dict_keys
2610
2610
2611 # fast path if closing quote should be appended but no suffix is allowed
2611 # fast path if closing quote should be appended but no suffix is allowed
2612 if not can_close_quote and not can_close_bracket and closing_quote:
2612 if not can_close_quote and not can_close_bracket and closing_quote:
2613 return [leading + k for k in matches]
2613 return [leading + k for k in matches]
2614
2614
2615 results = []
2615 results = []
2616
2616
2617 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2617 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2618
2618
2619 for k, state_flag in matches.items():
2619 for k, state_flag in matches.items():
2620 result = leading + k
2620 result = leading + k
2621 if can_close_quote and closing_quote:
2621 if can_close_quote and closing_quote:
2622 result += closing_quote
2622 result += closing_quote
2623
2623
2624 if state_flag == end_of_tuple_or_item:
2624 if state_flag == end_of_tuple_or_item:
2625 # We do not know which suffix to add,
2625 # We do not know which suffix to add,
2626 # e.g. both tuple item and string
2626 # e.g. both tuple item and string
2627 # match this item.
2627 # match this item.
2628 pass
2628 pass
2629
2629
2630 if state_flag in end_of_tuple_or_item and can_close_bracket:
2630 if state_flag in end_of_tuple_or_item and can_close_bracket:
2631 result += "]"
2631 result += "]"
2632 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2632 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2633 result += ", "
2633 result += ", "
2634 results.append(result)
2634 results.append(result)
2635 return results
2635 return results
2636
2636
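The heavy lifting for the key matching above is done by the module-level ``match_dict_keys`` helper. A minimal sketch of calling it directly, assuming the signature visible at its call site (``keys, prefix, delims, extra_prefix``) and using a short stand-in delimiter string in place of ``self.splitter.delims``:

.. code::

    from IPython.core.completer import match_dict_keys

    keys = ["foo", "food", ("bar", 1)]
    # completing `d["fo` -- the prefix carries the opening quote
    closing_quote, token_offset, matches = match_dict_keys(
        keys, '"fo', ' \t\n', extra_prefix=None
    )
    # closing_quote is the quote character that would close the key,
    # token_offset locates the key text inside the prefix, and matches maps
    # each candidate completion to a _DictKeyState flag (end of item, end of
    # tuple, or inside a tuple), which drives the suffix logic above.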
2637 @context_matcher()
2637 @context_matcher()
2638 def unicode_name_matcher(self, context: CompletionContext):
2638 def unicode_name_matcher(self, context: CompletionContext):
2639 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2639 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2640 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2640 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2641 return _convert_matcher_v1_result_to_v2(
2641 return _convert_matcher_v1_result_to_v2(
2642 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2642 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2643 )
2643 )
2644
2644
2645 @staticmethod
2645 @staticmethod
2646 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2646 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2647 """Match Latex-like syntax for unicode characters base
2647 """Match Latex-like syntax for unicode characters base
2648 on the name of the character.
2648 on the name of the character.
2649
2649
2650 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2650 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2651
2651
2652 Works only on valid Python 3 identifiers, or on combining characters that
2652 Works only on valid Python 3 identifiers, or on combining characters that
2653 will combine to form a valid identifier.
2653 will combine to form a valid identifier.
2654 """
2654 """
2655 slashpos = text.rfind('\\')
2655 slashpos = text.rfind('\\')
2656 if slashpos > -1:
2656 if slashpos > -1:
2657 s = text[slashpos+1:]
2657 s = text[slashpos+1:]
2658 try:
2658 try:
2659 unic = unicodedata.lookup(s)
2659 unic = unicodedata.lookup(s)
2660 # allow combining chars
2660 # allow combining chars
2661 if ('a'+unic).isidentifier():
2661 if ('a'+unic).isidentifier():
2662 return '\\'+s,[unic]
2662 return '\\'+s,[unic]
2663 except KeyError:
2663 except KeyError:
2664 pass
2664 pass
2665 return '', []
2665 return '', []
2666
2666
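For illustration, the static helper above can be called directly; both results follow from the code itself:

.. code::

    from IPython.core.completer import IPCompleter

    IPCompleter.unicode_name_matches("\\GREEK SMALL LETTER ALPHA")
    # -> ('\\GREEK SMALL LETTER ALPHA', ['α'])

    IPCompleter.unicode_name_matches("no backslash here")
    # -> ('', [])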
2667 @context_matcher()
2667 @context_matcher()
2668 def latex_name_matcher(self, context: CompletionContext):
2668 def latex_name_matcher(self, context: CompletionContext):
2669 """Match Latex syntax for unicode characters.
2669 """Match Latex syntax for unicode characters.
2670
2670
2671 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2671 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2672 """
2672 """
2673 fragment, matches = self.latex_matches(context.text_until_cursor)
2673 fragment, matches = self.latex_matches(context.text_until_cursor)
2674 return _convert_matcher_v1_result_to_v2(
2674 return _convert_matcher_v1_result_to_v2(
2675 matches, type="latex", fragment=fragment, suppress_if_matches=True
2675 matches, type="latex", fragment=fragment, suppress_if_matches=True
2676 )
2676 )
2677
2677
2678 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2678 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2679 """Match Latex syntax for unicode characters.
2679 """Match Latex syntax for unicode characters.
2680
2680
2681 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2681 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2682
2682
2683 .. deprecated:: 8.6
2683 .. deprecated:: 8.6
2684 You can use :meth:`latex_name_matcher` instead.
2684 You can use :meth:`latex_name_matcher` instead.
2685 """
2685 """
2686 slashpos = text.rfind('\\')
2686 slashpos = text.rfind('\\')
2687 if slashpos > -1:
2687 if slashpos > -1:
2688 s = text[slashpos:]
2688 s = text[slashpos:]
2689 if s in latex_symbols:
2689 if s in latex_symbols:
2690 # Try to complete a full latex symbol to unicode
2690 # Try to complete a full latex symbol to unicode
2691 # \\alpha -> α
2691 # \\alpha -> α
2692 return s, [latex_symbols[s]]
2692 return s, [latex_symbols[s]]
2693 else:
2693 else:
2694 # If a user has partially typed a latex symbol, give them
2694 # If a user has partially typed a latex symbol, give them
2695 # a full list of options \al -> [\aleph, \alpha]
2695 # a full list of options \al -> [\aleph, \alpha]
2696 matches = [k for k in latex_symbols if k.startswith(s)]
2696 matches = [k for k in latex_symbols if k.startswith(s)]
2697 if matches:
2697 if matches:
2698 return s, matches
2698 return s, matches
2699 return '', ()
2699 return '', ()
2700
2700
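The mapping that drives both branches above lives in ``IPython.core.latex_symbols``; a short sketch of the exact-name and prefix cases, using only that table:

.. code::

    from IPython.core.latex_symbols import latex_symbols

    # a full latex name expands straight to the unicode character
    latex_symbols["\\alpha"]
    # -> 'α'

    # a partial name yields every symbol sharing the prefix, as latex_matches does
    [k for k in latex_symbols if k.startswith("\\alp")]
    # -> ['\\alpha', ...]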
2701 @context_matcher()
2701 @context_matcher()
2702 def custom_completer_matcher(self, context):
2702 def custom_completer_matcher(self, context):
2703 """Dispatch custom completer.
2703 """Dispatch custom completer.
2704
2704
2705 If a match is found, suppresses all other matchers except for Jedi.
2705 If a match is found, suppresses all other matchers except for Jedi.
2706 """
2706 """
2707 matches = self.dispatch_custom_completer(context.token) or []
2707 matches = self.dispatch_custom_completer(context.token) or []
2708 result = _convert_matcher_v1_result_to_v2(
2708 result = _convert_matcher_v1_result_to_v2(
2709 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2709 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2710 )
2710 )
2711 result["ordered"] = True
2711 result["ordered"] = True
2712 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2712 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2713 return result
2713 return result
2714
2714
2715 def dispatch_custom_completer(self, text):
2715 def dispatch_custom_completer(self, text):
2716 """
2716 """
2717 .. deprecated:: 8.6
2717 .. deprecated:: 8.6
2718 You can use :meth:`custom_completer_matcher` instead.
2718 You can use :meth:`custom_completer_matcher` instead.
2719 """
2719 """
2720 if not self.custom_completers:
2720 if not self.custom_completers:
2721 return
2721 return
2722
2722
2723 line = self.line_buffer
2723 line = self.line_buffer
2724 if not line.strip():
2724 if not line.strip():
2725 return None
2725 return None
2726
2726
2727 # Create a little structure to pass all the relevant information about
2727 # Create a little structure to pass all the relevant information about
2728 # the current completion to any custom completer.
2728 # the current completion to any custom completer.
2729 event = SimpleNamespace()
2729 event = SimpleNamespace()
2730 event.line = line
2730 event.line = line
2731 event.symbol = text
2731 event.symbol = text
2732 cmd = line.split(None,1)[0]
2732 cmd = line.split(None,1)[0]
2733 event.command = cmd
2733 event.command = cmd
2734 event.text_until_cursor = self.text_until_cursor
2734 event.text_until_cursor = self.text_until_cursor
2735
2735
2736 # for foo etc, try also to find completer for %foo
2736 # for foo etc, try also to find completer for %foo
2737 if not cmd.startswith(self.magic_escape):
2737 if not cmd.startswith(self.magic_escape):
2738 try_magic = self.custom_completers.s_matches(
2738 try_magic = self.custom_completers.s_matches(
2739 self.magic_escape + cmd)
2739 self.magic_escape + cmd)
2740 else:
2740 else:
2741 try_magic = []
2741 try_magic = []
2742
2742
2743 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2743 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2744 try_magic,
2744 try_magic,
2745 self.custom_completers.flat_matches(self.text_until_cursor)):
2745 self.custom_completers.flat_matches(self.text_until_cursor)):
2746 try:
2746 try:
2747 res = c(event)
2747 res = c(event)
2748 if res:
2748 if res:
2749 # first, try case sensitive match
2749 # first, try case sensitive match
2750 withcase = [r for r in res if r.startswith(text)]
2750 withcase = [r for r in res if r.startswith(text)]
2751 if withcase:
2751 if withcase:
2752 return withcase
2752 return withcase
2753 # if none, then case insensitive ones are ok too
2753 # if none, then case insensitive ones are ok too
2754 text_low = text.lower()
2754 text_low = text.lower()
2755 return [r for r in res if r.lower().startswith(text_low)]
2755 return [r for r in res if r.lower().startswith(text_low)]
2756 except TryNext:
2756 except TryNext:
2757 pass
2757 pass
2758 except KeyboardInterrupt:
2758 except KeyboardInterrupt:
2759 """
2759 """
2760 If the custom completer takes too long,
2760 If the custom completer takes too long,
2761 let the keyboard interrupt abort it and return nothing.
2761 let the keyboard interrupt abort it and return nothing.
2762 """
2762 """
2763 break
2763 break
2764
2764
2765 return None
2765 return None
2766
2766
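The ``event`` namespace assembled above is what a user-registered completer receives. A hedged sketch of wiring one up through the classic ``complete_command`` hook (``set_hook`` and its ``re_key`` argument are the standard IPython mechanism; the command pattern and the returned strings are made up):

.. code::

    from IPython import get_ipython

    def apt_completer(self, event):
        # event.line, event.symbol, event.command and event.text_until_cursor
        # carry the state built by dispatch_custom_completer above; raising
        # TryNext hands the completion over to the next completer.
        return ["update", "upgrade", "install", "remove"]

    get_ipython().set_hook("complete_command", apt_completer, re_key=".*apt-get")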
2767 def completions(self, text: str, offset: int)->Iterator[Completion]:
2767 def completions(self, text: str, offset: int)->Iterator[Completion]:
2768 """
2768 """
2769 Returns an iterator over the possible completions
2769 Returns an iterator over the possible completions
2770
2770
2771 .. warning::
2771 .. warning::
2772
2772
2773 Unstable
2773 Unstable
2774
2774
2775 This function is unstable, API may change without warning.
2775 This function is unstable, API may change without warning.
2776 It will also raise unless used in the proper context manager.
2776 It will also raise unless used in the proper context manager.
2777
2777
2778 Parameters
2778 Parameters
2779 ----------
2779 ----------
2780 text : str
2780 text : str
2781 Full text of the current input, multi line string.
2781 Full text of the current input, multi line string.
2782 offset : int
2782 offset : int
2783 Integer representing the position of the cursor in ``text``. Offset
2783 Integer representing the position of the cursor in ``text``. Offset
2784 is 0-based indexed.
2784 is 0-based indexed.
2785
2785
2786 Yields
2786 Yields
2787 ------
2787 ------
2788 Completion
2788 Completion
2789
2789
2790 Notes
2790 Notes
2791 -----
2791 -----
2792 The cursor on a text can either be seen as being "in between"
2792 The cursor on a text can either be seen as being "in between"
2793 characters or "on" a character, depending on the interface visible to
2793 characters or "on" a character, depending on the interface visible to
2794 the user. For consistency, the cursor being "in between" characters X
2794 the user. For consistency, the cursor being "in between" characters X
2795 and Y is equivalent to the cursor being "on" character Y, that is to say
2795 and Y is equivalent to the cursor being "on" character Y, that is to say
2796 the character the cursor is on is considered as being after the cursor.
2796 the character the cursor is on is considered as being after the cursor.
2797
2797
2798 Combining characters may span more than one position in the
2798 Combining characters may span more than one position in the
2799 text.
2799 text.
2800
2800
2801 .. note::
2801 .. note::
2802
2802
2803 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2803 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2804 fake Completion token to distinguish completions returned by Jedi
2804 fake Completion token to distinguish completions returned by Jedi
2805 from the usual IPython completions.
2805 from the usual IPython completions.
2806
2806
2807 .. note::
2807 .. note::
2808
2808
2809 Completions are not completely deduplicated yet. If identical
2809 Completions are not completely deduplicated yet. If identical
2810 completions are coming from different sources this function does not
2810 completions are coming from different sources this function does not
2811 ensure that each completion object will only be present once.
2811 ensure that each completion object will only be present once.
2812 """
2812 """
2813 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2813 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2814 "It may change without warnings. "
2814 "It may change without warnings. "
2815 "Use in corresponding context manager.",
2815 "Use in corresponding context manager.",
2816 category=ProvisionalCompleterWarning, stacklevel=2)
2816 category=ProvisionalCompleterWarning, stacklevel=2)
2817
2817
2818 seen = set()
2818 seen = set()
2819 profiler: Optional[cProfile.Profile]
2819 profiler: Optional[cProfile.Profile]
2820 try:
2820 try:
2821 if self.profile_completions:
2821 if self.profile_completions:
2822 import cProfile
2822 import cProfile
2823 profiler = cProfile.Profile()
2823 profiler = cProfile.Profile()
2824 profiler.enable()
2824 profiler.enable()
2825 else:
2825 else:
2826 profiler = None
2826 profiler = None
2827
2827
2828 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2828 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2829 if c and (c in seen):
2829 if c and (c in seen):
2830 continue
2830 continue
2831 yield c
2831 yield c
2832 seen.add(c)
2832 seen.add(c)
2833 except KeyboardInterrupt:
2833 except KeyboardInterrupt:
2834 """if completions take too long and users send keyboard interrupt,
2834 """if completions take too long and users send keyboard interrupt,
2835 do not crash and return ASAP. """
2835 do not crash and return ASAP. """
2836 pass
2836 pass
2837 finally:
2837 finally:
2838 if profiler is not None:
2838 if profiler is not None:
2839 profiler.disable()
2839 profiler.disable()
2840 ensure_dir_exists(self.profiler_output_dir)
2840 ensure_dir_exists(self.profiler_output_dir)
2841 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2841 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2842 print("Writing profiler output to", output_path)
2842 print("Writing profiler output to", output_path)
2843 profiler.dump_stats(output_path)
2843 profiler.dump_stats(output_path)
2844
2844
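Because of the provisional status enforced above, callers opt in through the ``provisionalcompleter`` context manager. A minimal sketch, assuming an interactive session where ``get_ipython()`` is available:

.. code::

    from IPython import get_ipython
    from IPython.core.completer import provisionalcompleter

    ip = get_ipython()
    with provisionalcompleter():
        # offset is 0-based and counts positions in the given text
        comps = list(ip.Completer.completions("import o", 8))
    [(c.text, c.type) for c in comps[:3]]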
2845 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2845 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2846 """
2846 """
2847 Core completion module. Same signature as :any:`completions`, with the
2847 Core completion module. Same signature as :any:`completions`, with the
2848 extra ``_timeout`` parameter (in seconds).
2848 extra ``_timeout`` parameter (in seconds).
2849
2849
2850 Computing jedi's completion ``.type`` can be quite expensive (it is a
2850 Computing jedi's completion ``.type`` can be quite expensive (it is a
2851 lazy property) and can require some warm-up, more warm-up than just
2851 lazy property) and can require some warm-up, more warm-up than just
2852 computing the ``name`` of a completion. The warm-up can be:
2852 computing the ``name`` of a completion. The warm-up can be:
2853
2853
2854 - Long warm-up the first time a module is encountered after
2854 - Long warm-up the first time a module is encountered after
2855 install/update: actually build parse/inference tree.
2855 install/update: actually build parse/inference tree.
2856
2856
2857 - first time the module is encountered in a session: load tree from
2857 - first time the module is encountered in a session: load tree from
2858 disk.
2858 disk.
2859
2859
2860 We don't want to block completions for tens of seconds so we give the
2860 We don't want to block completions for tens of seconds so we give the
2861 completer a "budget" of ``_timeout`` seconds per invocation to compute
2861 completer a "budget" of ``_timeout`` seconds per invocation to compute
2862 completion types; the completions that have not yet been computed will
2862 completion types; the completions that have not yet been computed will
2863 be marked as "unknown" and will have a chance to be computed next round
2863 be marked as "unknown" and will have a chance to be computed next round
2864 as things get cached.
2864 as things get cached.
2865
2865
2866 Keep in mind that Jedi is not the only thing processing the completions, so
2866 Keep in mind that Jedi is not the only thing processing the completions, so
2867 keep the timeout short-ish: if we take more than 0.3 seconds we still
2867 keep the timeout short-ish: if we take more than 0.3 seconds we still
2868 have lots of processing to do.
2868 have lots of processing to do.
2869
2869
2870 """
2870 """
2871 deadline = time.monotonic() + _timeout
2871 deadline = time.monotonic() + _timeout
2872
2872
2873 before = full_text[:offset]
2873 before = full_text[:offset]
2874 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2874 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2875
2875
2876 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2876 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2877
2877
2878 def is_non_jedi_result(
2878 def is_non_jedi_result(
2879 result: MatcherResult, identifier: str
2879 result: MatcherResult, identifier: str
2880 ) -> TypeGuard[SimpleMatcherResult]:
2880 ) -> TypeGuard[SimpleMatcherResult]:
2881 return identifier != jedi_matcher_id
2881 return identifier != jedi_matcher_id
2882
2882
2883 results = self._complete(
2883 results = self._complete(
2884 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2884 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2885 )
2885 )
2886
2886
2887 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2887 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2888 identifier: result
2888 identifier: result
2889 for identifier, result in results.items()
2889 for identifier, result in results.items()
2890 if is_non_jedi_result(result, identifier)
2890 if is_non_jedi_result(result, identifier)
2891 }
2891 }
2892
2892
2893 jedi_matches = (
2893 jedi_matches = (
2894 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2894 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2895 if jedi_matcher_id in results
2895 if jedi_matcher_id in results
2896 else ()
2896 else ()
2897 )
2897 )
2898
2898
2899 iter_jm = iter(jedi_matches)
2899 iter_jm = iter(jedi_matches)
2900 if _timeout:
2900 if _timeout:
2901 for jm in iter_jm:
2901 for jm in iter_jm:
2902 try:
2902 try:
2903 type_ = jm.type
2903 type_ = jm.type
2904 except Exception:
2904 except Exception:
2905 if self.debug:
2905 if self.debug:
2906 print("Error in Jedi getting type of ", jm)
2906 print("Error in Jedi getting type of ", jm)
2907 type_ = None
2907 type_ = None
2908 delta = len(jm.name_with_symbols) - len(jm.complete)
2908 delta = len(jm.name_with_symbols) - len(jm.complete)
2909 if type_ == 'function':
2909 if type_ == 'function':
2910 signature = _make_signature(jm)
2910 signature = _make_signature(jm)
2911 else:
2911 else:
2912 signature = ''
2912 signature = ''
2913 yield Completion(start=offset - delta,
2913 yield Completion(start=offset - delta,
2914 end=offset,
2914 end=offset,
2915 text=jm.name_with_symbols,
2915 text=jm.name_with_symbols,
2916 type=type_,
2916 type=type_,
2917 signature=signature,
2917 signature=signature,
2918 _origin='jedi')
2918 _origin='jedi')
2919
2919
2920 if time.monotonic() > deadline:
2920 if time.monotonic() > deadline:
2921 break
2921 break
2922
2922
2923 for jm in iter_jm:
2923 for jm in iter_jm:
2924 delta = len(jm.name_with_symbols) - len(jm.complete)
2924 delta = len(jm.name_with_symbols) - len(jm.complete)
2925 yield Completion(
2925 yield Completion(
2926 start=offset - delta,
2926 start=offset - delta,
2927 end=offset,
2927 end=offset,
2928 text=jm.name_with_symbols,
2928 text=jm.name_with_symbols,
2929 type=_UNKNOWN_TYPE, # don't compute type for speed
2929 type=_UNKNOWN_TYPE, # don't compute type for speed
2930 _origin="jedi",
2930 _origin="jedi",
2931 signature="",
2931 signature="",
2932 )
2932 )
2933
2933
2934 # TODO:
2934 # TODO:
2935 # Suppress this, right now just for debug.
2935 # Suppress this, right now just for debug.
2936 if jedi_matches and non_jedi_results and self.debug:
2936 if jedi_matches and non_jedi_results and self.debug:
2937 some_start_offset = before.rfind(
2937 some_start_offset = before.rfind(
2938 next(iter(non_jedi_results.values()))["matched_fragment"]
2938 next(iter(non_jedi_results.values()))["matched_fragment"]
2939 )
2939 )
2940 yield Completion(
2940 yield Completion(
2941 start=some_start_offset,
2941 start=some_start_offset,
2942 end=offset,
2942 end=offset,
2943 text="--jedi/ipython--",
2943 text="--jedi/ipython--",
2944 _origin="debug",
2944 _origin="debug",
2945 type="none",
2945 type="none",
2946 signature="",
2946 signature="",
2947 )
2947 )
2948
2948
2949 ordered: List[Completion] = []
2949 ordered: List[Completion] = []
2950 sortable: List[Completion] = []
2950 sortable: List[Completion] = []
2951
2951
2952 for origin, result in non_jedi_results.items():
2952 for origin, result in non_jedi_results.items():
2953 matched_text = result["matched_fragment"]
2953 matched_text = result["matched_fragment"]
2954 start_offset = before.rfind(matched_text)
2954 start_offset = before.rfind(matched_text)
2955 is_ordered = result.get("ordered", False)
2955 is_ordered = result.get("ordered", False)
2956 container = ordered if is_ordered else sortable
2956 container = ordered if is_ordered else sortable
2957
2957
2958 # I'm unsure if this is always true, so let's assert and see if it
2958 # I'm unsure if this is always true, so let's assert and see if it
2959 # crashes
2959 # crashes
2960 assert before.endswith(matched_text)
2960 assert before.endswith(matched_text)
2961
2961
2962 for simple_completion in result["completions"]:
2962 for simple_completion in result["completions"]:
2963 completion = Completion(
2963 completion = Completion(
2964 start=start_offset,
2964 start=start_offset,
2965 end=offset,
2965 end=offset,
2966 text=simple_completion.text,
2966 text=simple_completion.text,
2967 _origin=origin,
2967 _origin=origin,
2968 signature="",
2968 signature="",
2969 type=simple_completion.type or _UNKNOWN_TYPE,
2969 type=simple_completion.type or _UNKNOWN_TYPE,
2970 )
2970 )
2971 container.append(completion)
2971 container.append(completion)
2972
2972
2973 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2973 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2974 :MATCHES_LIMIT
2974 :MATCHES_LIMIT
2975 ]
2975 ]
2976
2976
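The per-invocation budget used above comes from ``jedi_compute_type_timeout`` (milliseconds, divided by 1000 before being passed as ``_timeout``; a value of 0 makes every Jedi completion fall through to the untyped loop). Assuming these attributes are exposed as configurable traits, a configuration sketch with illustrative values only:

.. code::

    # ipython_config.py -- illustrative values only
    c.IPCompleter.jedi_compute_type_timeout = 800   # ms budget for Jedi types
    c.IPCompleter.profile_completions = True        # dump cProfile stats per call
    c.IPCompleter.profiler_output_dir = "/tmp/ipython-completer-profiles"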
2977 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2977 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2978 """Find completions for the given text and line context.
2978 """Find completions for the given text and line context.
2979
2979
2980 Note that both the text and the line_buffer are optional, but at least
2980 Note that both the text and the line_buffer are optional, but at least
2981 one of them must be given.
2981 one of them must be given.
2982
2982
2983 Parameters
2983 Parameters
2984 ----------
2984 ----------
2985 text : string, optional
2985 text : string, optional
2986 Text to perform the completion on. If not given, the line buffer
2986 Text to perform the completion on. If not given, the line buffer
2987 is split using the instance's CompletionSplitter object.
2987 is split using the instance's CompletionSplitter object.
2988 line_buffer : string, optional
2988 line_buffer : string, optional
2989 If not given, the completer attempts to obtain the current line
2989 If not given, the completer attempts to obtain the current line
2990 buffer via readline. This keyword allows clients which are
2990 buffer via readline. This keyword allows clients which are
2991 requesting for text completions in non-readline contexts to inform
2991 requesting for text completions in non-readline contexts to inform
2992 the completer of the entire text.
2992 the completer of the entire text.
2993 cursor_pos : int, optional
2993 cursor_pos : int, optional
2994 Index of the cursor in the full line buffer. Should be provided by
2994 Index of the cursor in the full line buffer. Should be provided by
2995 remote frontends where kernel has no access to frontend state.
2995 remote frontends where kernel has no access to frontend state.
2996
2996
2997 Returns
2997 Returns
2998 -------
2998 -------
2999 Tuple of two items:
2999 Tuple of two items:
3000 text : str
3000 text : str
3001 Text that was actually used in the completion.
3001 Text that was actually used in the completion.
3002 matches : list
3002 matches : list
3003 A list of completion matches.
3003 A list of completion matches.
3004
3004
3005 Notes
3005 Notes
3006 -----
3006 -----
3007 This API is likely to be deprecated and replaced by
3007 This API is likely to be deprecated and replaced by
3008 :any:`IPCompleter.completions` in the future.
3008 :any:`IPCompleter.completions` in the future.
3009
3009
3010 """
3010 """
3011 warnings.warn('`Completer.complete` is pending deprecation since '
3011 warnings.warn('`Completer.complete` is pending deprecation since '
3012 'IPython 6.0 and will be replaced by `Completer.completions`.',
3012 'IPython 6.0 and will be replaced by `Completer.completions`.',
3013 PendingDeprecationWarning)
3013 PendingDeprecationWarning)
3014 # potential todo: FOLD the 3rd throw-away argument of _complete
3014 # potential todo: FOLD the 3rd throw-away argument of _complete
3015 # into the first 2 ones.
3015 # into the first 2 ones.
3016 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3016 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3017 # TODO: should we deprecate now, or does it stay?
3017 # TODO: should we deprecate now, or does it stay?
3018
3018
3019 results = self._complete(
3019 results = self._complete(
3020 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3020 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3021 )
3021 )
3022
3022
3023 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3023 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3024
3024
3025 return self._arrange_and_extract(
3025 return self._arrange_and_extract(
3026 results,
3026 results,
3027 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3027 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3028 skip_matchers={jedi_matcher_id},
3028 skip_matchers={jedi_matcher_id},
3029 # this API does not support different start/end positions (fragments of token).
3029 # this API does not support different start/end positions (fragments of token).
3030 abort_if_offset_changes=True,
3030 abort_if_offset_changes=True,
3031 )
3031 )
3032
3032
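For comparison with the generator API above, the legacy call returns plain strings; a short sketch using the parameters documented in its docstring:

.. code::

    from IPython import get_ipython

    ip = get_ipython()
    text, matches = ip.Completer.complete(line_buffer="import o", cursor_pos=8)
    # text is the token that was completed, matches is a list of strings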
3033 def _arrange_and_extract(
3033 def _arrange_and_extract(
3034 self,
3034 self,
3035 results: Dict[str, MatcherResult],
3035 results: Dict[str, MatcherResult],
3036 skip_matchers: Set[str],
3036 skip_matchers: Set[str],
3037 abort_if_offset_changes: bool,
3037 abort_if_offset_changes: bool,
3038 ):
3038 ):
3039
3040 sortable: List[AnyMatcherCompletion] = []
3039 sortable: List[AnyMatcherCompletion] = []
3041 ordered: List[AnyMatcherCompletion] = []
3040 ordered: List[AnyMatcherCompletion] = []
3042 most_recent_fragment = None
3041 most_recent_fragment = None
3043 for identifier, result in results.items():
3042 for identifier, result in results.items():
3044 if identifier in skip_matchers:
3043 if identifier in skip_matchers:
3045 continue
3044 continue
3046 if not result["completions"]:
3045 if not result["completions"]:
3047 continue
3046 continue
3048 if not most_recent_fragment:
3047 if not most_recent_fragment:
3049 most_recent_fragment = result["matched_fragment"]
3048 most_recent_fragment = result["matched_fragment"]
3050 if (
3049 if (
3051 abort_if_offset_changes
3050 abort_if_offset_changes
3052 and result["matched_fragment"] != most_recent_fragment
3051 and result["matched_fragment"] != most_recent_fragment
3053 ):
3052 ):
3054 break
3053 break
3055 if result.get("ordered", False):
3054 if result.get("ordered", False):
3056 ordered.extend(result["completions"])
3055 ordered.extend(result["completions"])
3057 else:
3056 else:
3058 sortable.extend(result["completions"])
3057 sortable.extend(result["completions"])
3059
3058
3060 if not most_recent_fragment:
3059 if not most_recent_fragment:
3061 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3060 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3062
3061
3063 return most_recent_fragment, [
3062 return most_recent_fragment, [
3064 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3063 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3065 ]
3064 ]
3066
3065
3067 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3066 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3068 full_text=None) -> _CompleteResult:
3067 full_text=None) -> _CompleteResult:
3069 """
3068 """
3070 Like complete but can also return raw jedi completions as well as the
3069 Like complete but can also return raw jedi completions as well as the
3071 origin of the completion text. This could (and should) be made much
3070 origin of the completion text. This could (and should) be made much
3072 cleaner but that will be simpler once we drop the old (and stateful)
3071 cleaner but that will be simpler once we drop the old (and stateful)
3073 :any:`complete` API.
3072 :any:`complete` API.
3074
3073
3075 With the current provisional API, cursor_pos acts both (depending on the
3074 With the current provisional API, cursor_pos acts both (depending on the
3076 caller) as the offset in the ``text`` or ``line_buffer``, or as the
3075 caller) as the offset in the ``text`` or ``line_buffer``, or as the
3077 ``column`` when passing multiline strings; this could/should be renamed
3076 ``column`` when passing multiline strings; this could/should be renamed
3078 but would add extra noise.
3077 but would add extra noise.
3079
3078
3080 Parameters
3079 Parameters
3081 ----------
3080 ----------
3082 cursor_line
3081 cursor_line
3083 Index of the line the cursor is on. 0 indexed.
3082 Index of the line the cursor is on. 0 indexed.
3084 cursor_pos
3083 cursor_pos
3085 Position of the cursor in the current line/line_buffer/text. 0
3084 Position of the cursor in the current line/line_buffer/text. 0
3086 indexed.
3085 indexed.
3087 line_buffer : optional, str
3086 line_buffer : optional, str
3088 The current line the cursor is in, this is mostly due to legacy
3087 The current line the cursor is in, this is mostly due to legacy
3089 reasons: readline could only give us the single current line.
3088 reasons: readline could only give us the single current line.
3090 Prefer `full_text`.
3089 Prefer `full_text`.
3091 text : str
3090 text : str
3092 The current "token" the cursor is in, mostly also for historical
3091 The current "token" the cursor is in, mostly also for historical
3093 reasons, as the completer would trigger only after the current line
3092 reasons, as the completer would trigger only after the current line
3094 was parsed.
3093 was parsed.
3095 full_text : str
3094 full_text : str
3096 Full text of the current cell.
3095 Full text of the current cell.
3097
3096
3098 Returns
3097 Returns
3099 -------
3098 -------
3100 An ordered dictionary where keys are identifiers of completion
3099 An ordered dictionary where keys are identifiers of completion
3101 matchers and values are ``MatcherResult``s.
3100 matchers and values are ``MatcherResult``s.
3102 """
3101 """
3103
3102
3104 # if the cursor position isn't given, the only sane assumption we can
3103 # if the cursor position isn't given, the only sane assumption we can
3105 # make is that it's at the end of the line (the common case)
3104 # make is that it's at the end of the line (the common case)
3106 if cursor_pos is None:
3105 if cursor_pos is None:
3107 cursor_pos = len(line_buffer) if text is None else len(text)
3106 cursor_pos = len(line_buffer) if text is None else len(text)
3108
3107
3109 if self.use_main_ns:
3108 if self.use_main_ns:
3110 self.namespace = __main__.__dict__
3109 self.namespace = __main__.__dict__
3111
3110
3112 # if text is either None or an empty string, rely on the line buffer
3111 # if text is either None or an empty string, rely on the line buffer
3113 if (not line_buffer) and full_text:
3112 if (not line_buffer) and full_text:
3114 line_buffer = full_text.split('\n')[cursor_line]
3113 line_buffer = full_text.split('\n')[cursor_line]
3115 if not text: # issue #11508: check line_buffer before calling split_line
3114 if not text: # issue #11508: check line_buffer before calling split_line
3116 text = (
3115 text = (
3117 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3116 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3118 )
3117 )
3119
3118
3120 # If no line buffer is given, assume the input text is all there was
3119 # If no line buffer is given, assume the input text is all there was
3121 if line_buffer is None:
3120 if line_buffer is None:
3122 line_buffer = text
3121 line_buffer = text
3123
3122
3124 # deprecated - do not use `line_buffer` in new code.
3123 # deprecated - do not use `line_buffer` in new code.
3125 self.line_buffer = line_buffer
3124 self.line_buffer = line_buffer
3126 self.text_until_cursor = self.line_buffer[:cursor_pos]
3125 self.text_until_cursor = self.line_buffer[:cursor_pos]
3127
3126
3128 if not full_text:
3127 if not full_text:
3129 full_text = line_buffer
3128 full_text = line_buffer
3130
3129
3131 context = CompletionContext(
3130 context = CompletionContext(
3132 full_text=full_text,
3131 full_text=full_text,
3133 cursor_position=cursor_pos,
3132 cursor_position=cursor_pos,
3134 cursor_line=cursor_line,
3133 cursor_line=cursor_line,
3135 token=text,
3134 token=text,
3136 limit=MATCHES_LIMIT,
3135 limit=MATCHES_LIMIT,
3137 )
3136 )
3138
3137
3139 # Start with a clean slate of completions
3138 # Start with a clean slate of completions
3140 results: Dict[str, MatcherResult] = {}
3139 results: Dict[str, MatcherResult] = {}
3141
3140
3142 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3141 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3143
3142
3144 suppressed_matchers: Set[str] = set()
3143 suppressed_matchers: Set[str] = set()
3145
3144
3146 matchers = {
3145 matchers = {
3147 _get_matcher_id(matcher): matcher
3146 _get_matcher_id(matcher): matcher
3148 for matcher in sorted(
3147 for matcher in sorted(
3149 self.matchers, key=_get_matcher_priority, reverse=True
3148 self.matchers, key=_get_matcher_priority, reverse=True
3150 )
3149 )
3151 }
3150 }
3152
3151
3153 for matcher_id, matcher in matchers.items():
3152 for matcher_id, matcher in matchers.items():
3154 matcher_id = _get_matcher_id(matcher)
3153 matcher_id = _get_matcher_id(matcher)
3155
3154
3156 if matcher_id in self.disable_matchers:
3155 if matcher_id in self.disable_matchers:
3157 continue
3156 continue
3158
3157
3159 if matcher_id in results:
3158 if matcher_id in results:
3160 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3159 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3161
3160
3162 if matcher_id in suppressed_matchers:
3161 if matcher_id in suppressed_matchers:
3163 continue
3162 continue
3164
3163
3165 result: MatcherResult
3164 result: MatcherResult
3166 try:
3165 try:
3167 if _is_matcher_v1(matcher):
3166 if _is_matcher_v1(matcher):
3168 result = _convert_matcher_v1_result_to_v2(
3167 result = _convert_matcher_v1_result_to_v2(
3169 matcher(text), type=_UNKNOWN_TYPE
3168 matcher(text), type=_UNKNOWN_TYPE
3170 )
3169 )
3171 elif _is_matcher_v2(matcher):
3170 elif _is_matcher_v2(matcher):
3172 result = matcher(context)
3171 result = matcher(context)
3173 else:
3172 else:
3174 api_version = _get_matcher_api_version(matcher)
3173 api_version = _get_matcher_api_version(matcher)
3175 raise ValueError(f"Unsupported API version {api_version}")
3174 raise ValueError(f"Unsupported API version {api_version}")
3176 except:
3175 except:
3177 # Show the ugly traceback if the matcher causes an
3176 # Show the ugly traceback if the matcher causes an
3178 # exception, but do NOT crash the kernel!
3177 # exception, but do NOT crash the kernel!
3179 sys.excepthook(*sys.exc_info())
3178 sys.excepthook(*sys.exc_info())
3180 continue
3179 continue
3181
3180
3182 # set default value for matched fragment if suffix was not selected.
3181 # set default value for matched fragment if suffix was not selected.
3183 result["matched_fragment"] = result.get("matched_fragment", context.token)
3182 result["matched_fragment"] = result.get("matched_fragment", context.token)
3184
3183
3185 if not suppressed_matchers:
3184 if not suppressed_matchers:
3186 suppression_recommended: Union[bool, Set[str]] = result.get(
3185 suppression_recommended: Union[bool, Set[str]] = result.get(
3187 "suppress", False
3186 "suppress", False
3188 )
3187 )
3189
3188
3190 suppression_config = (
3189 suppression_config = (
3191 self.suppress_competing_matchers.get(matcher_id, None)
3190 self.suppress_competing_matchers.get(matcher_id, None)
3192 if isinstance(self.suppress_competing_matchers, dict)
3191 if isinstance(self.suppress_competing_matchers, dict)
3193 else self.suppress_competing_matchers
3192 else self.suppress_competing_matchers
3194 )
3193 )
3195 should_suppress = (
3194 should_suppress = (
3196 (suppression_config is True)
3195 (suppression_config is True)
3197 or (suppression_recommended and (suppression_config is not False))
3196 or (suppression_recommended and (suppression_config is not False))
3198 ) and has_any_completions(result)
3197 ) and has_any_completions(result)
3199
3198
3200 if should_suppress:
3199 if should_suppress:
3201 suppression_exceptions: Set[str] = result.get(
3200 suppression_exceptions: Set[str] = result.get(
3202 "do_not_suppress", set()
3201 "do_not_suppress", set()
3203 )
3202 )
3204 if isinstance(suppression_recommended, Iterable):
3203 if isinstance(suppression_recommended, Iterable):
3205 to_suppress = set(suppression_recommended)
3204 to_suppress = set(suppression_recommended)
3206 else:
3205 else:
3207 to_suppress = set(matchers)
3206 to_suppress = set(matchers)
3208 suppressed_matchers = to_suppress - suppression_exceptions
3207 suppressed_matchers = to_suppress - suppression_exceptions
3209
3208
3210 new_results = {}
3209 new_results = {}
3211 for previous_matcher_id, previous_result in results.items():
3210 for previous_matcher_id, previous_result in results.items():
3212 if previous_matcher_id not in suppressed_matchers:
3211 if previous_matcher_id not in suppressed_matchers:
3213 new_results[previous_matcher_id] = previous_result
3212 new_results[previous_matcher_id] = previous_result
3214 results = new_results
3213 results = new_results
3215
3214
3216 results[matcher_id] = result
3215 results[matcher_id] = result
3217
3216
3218 _, matches = self._arrange_and_extract(
3217 _, matches = self._arrange_and_extract(
3219 results,
3218 results,
3220 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3219 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3221 # if it was an omission, we can remove the filtering step, otherwise remove this comment.
3220 # if it was an omission, we can remove the filtering step, otherwise remove this comment.
3222 skip_matchers={jedi_matcher_id},
3221 skip_matchers={jedi_matcher_id},
3223 abort_if_offset_changes=False,
3222 abort_if_offset_changes=False,
3224 )
3223 )
3225
3224
3226 # populate legacy stateful API
3225 # populate legacy stateful API
3227 self.matches = matches
3226 self.matches = matches
3228
3227
3229 return results
3228 return results
3230
3229
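The suppression logic above is driven both by what a matcher returns (``suppress``, ``do_not_suppress``) and by the ``suppress_competing_matchers`` configurable. A hedged sketch of a v2-style matcher built from the helpers defined in this module (the matcher and its completions are made up; registration typically goes through ``get_ipython().Completer.custom_matchers``):

.. code::

    from IPython.core.completer import (
        CompletionContext,
        SimpleCompletion,
        SimpleMatcherResult,
        context_matcher,
    )

    @context_matcher()
    def color_matcher(context: CompletionContext) -> SimpleMatcherResult:
        names = ["red", "green", "blue"]
        return {
            "completions": [
                SimpleCompletion(text=n, type="param")
                for n in names
                if n.startswith(context.token)
            ],
            # ask for other matchers to be hidden whenever we have matches;
            # _complete above honours this unless configuration says otherwise
            "suppress": True,
        }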
3231 @staticmethod
3230 @staticmethod
3232 def _deduplicate(
3231 def _deduplicate(
3233 matches: Sequence[AnyCompletion],
3232 matches: Sequence[AnyCompletion],
3234 ) -> Iterable[AnyCompletion]:
3233 ) -> Iterable[AnyCompletion]:
3235 filtered_matches: Dict[str, AnyCompletion] = {}
3234 filtered_matches: Dict[str, AnyCompletion] = {}
3236 for match in matches:
3235 for match in matches:
3237 text = match.text
3236 text = match.text
3238 if (
3237 if (
3239 text not in filtered_matches
3238 text not in filtered_matches
3240 or filtered_matches[text].type == _UNKNOWN_TYPE
3239 or filtered_matches[text].type == _UNKNOWN_TYPE
3241 ):
3240 ):
3242 filtered_matches[text] = match
3241 filtered_matches[text] = match
3243
3242
3244 return filtered_matches.values()
3243 return filtered_matches.values()
3245
3244
3246 @staticmethod
3245 @staticmethod
3247 def _sort(matches: Sequence[AnyCompletion]):
3246 def _sort(matches: Sequence[AnyCompletion]):
3248 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3247 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3249
3248
3250 @context_matcher()
3249 @context_matcher()
3251 def fwd_unicode_matcher(self, context: CompletionContext):
3250 def fwd_unicode_matcher(self, context: CompletionContext):
3252 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3251 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3253 # TODO: use `context.limit` to terminate early once we matched the maximum
3252 # TODO: use `context.limit` to terminate early once we matched the maximum
3254 # number that will be used downstream; can be added as an optional to
3253 # number that will be used downstream; can be added as an optional to
3255 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3254 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3256 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3255 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3257 return _convert_matcher_v1_result_to_v2(
3256 return _convert_matcher_v1_result_to_v2(
3258 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3257 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3259 )
3258 )
3260
3259
3261 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3260 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3262 """
3261 """
3263 Forward match a string starting with a backslash with a list of
3262 Forward match a string starting with a backslash with a list of
3264 potential Unicode completions.
3263 potential Unicode completions.
3265
3264
3266 Will compute list of Unicode character names on first call and cache it.
3265 Will compute list of Unicode character names on first call and cache it.
3267
3266
3268 .. deprecated:: 8.6
3267 .. deprecated:: 8.6
3269 You can use :meth:`fwd_unicode_matcher` instead.
3268 You can use :meth:`fwd_unicode_matcher` instead.
3270
3269
3271 Returns
3270 Returns
3272 -------
3271 -------
3273 A tuple with:
3272 A tuple with:
3274 - matched text (empty if no matches)
3273 - matched text (empty if no matches)
3275 - list of potential completions (an empty tuple if there are no matches)
3274 - list of potential completions (an empty tuple if there are no matches)
3276 """
3275 """
3277 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3276 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3278 # We could do a faster match using a Trie.
3277 # We could do a faster match using a Trie.
3279
3278
3280 # Using pygtrie the following seems to work:
3279 # Using pygtrie the following seems to work:
3281
3280
3282 # s = PrefixSet()
3281 # s = PrefixSet()
3283
3282
3284 # for c in range(0,0x10FFFF + 1):
3283 # for c in range(0,0x10FFFF + 1):
3285 # try:
3284 # try:
3286 # s.add(unicodedata.name(chr(c)))
3285 # s.add(unicodedata.name(chr(c)))
3287 # except ValueError:
3286 # except ValueError:
3288 # pass
3287 # pass
3289 # [''.join(k) for k in s.iter(prefix)]
3288 # [''.join(k) for k in s.iter(prefix)]
3290
3289
3291 # But this needs to be timed and adds an extra dependency.
3290 # But this needs to be timed and adds an extra dependency.
3292
3291
3293 slashpos = text.rfind('\\')
3292 slashpos = text.rfind('\\')
3294 # if text starts with slash
3293 # if text starts with slash
3295 if slashpos > -1:
3294 if slashpos > -1:
3296 # PERF: It's important that we don't access self._unicode_names
3295 # PERF: It's important that we don't access self._unicode_names
3297 # until we're inside this if-block. _unicode_names is lazily
3296 # until we're inside this if-block. _unicode_names is lazily
3298 # initialized, and it takes a user-noticeable amount of time to
3297 # initialized, and it takes a user-noticeable amount of time to
3299 # initialize it, so we don't want to initialize it unless we're
3298 # initialize it, so we don't want to initialize it unless we're
3300 # actually going to use it.
3299 # actually going to use it.
3301 s = text[slashpos + 1 :]
3300 s = text[slashpos + 1 :]
3302 sup = s.upper()
3301 sup = s.upper()
3303 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3302 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3304 if candidates:
3303 if candidates:
3305 return s, candidates
3304 return s, candidates
3306 candidates = [x for x in self.unicode_names if sup in x]
3305 candidates = [x for x in self.unicode_names if sup in x]
3307 if candidates:
3306 if candidates:
3308 return s, candidates
3307 return s, candidates
3309 splitsup = sup.split(" ")
3308 splitsup = sup.split(" ")
3310 candidates = [
3309 candidates = [
3311 x for x in self.unicode_names if all(u in x for u in splitsup)
3310 x for x in self.unicode_names if all(u in x for u in splitsup)
3312 ]
3311 ]
3313 if candidates:
3312 if candidates:
3314 return s, candidates
3313 return s, candidates
3315
3314
3316 return "", ()
3315 return "", ()
3317
3316
3318 # if text does not start with slash
3317 # if text does not start with slash
3319 else:
3318 else:
3320 return '', ()
3319 return '', ()
3321
3320
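The three fallbacks above (prefix match, substring match, all-words match) can be exercised against a live completer instance:

.. code::

    from IPython import get_ipython

    ip = get_ipython()
    fragment, names = ip.Completer.fwd_unicode_match("\\GREEK SM")
    # fragment -> 'GREEK SM' (the text after the backslash)
    # names    -> full unicode character names sharing that prefix,
    #             e.g. 'GREEK SMALL LETTER ALPHA'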
3322 @property
3321 @property
3323 def unicode_names(self) -> List[str]:
3322 def unicode_names(self) -> List[str]:
3324 """List of names of unicode code points that can be completed.
3323 """List of names of unicode code points that can be completed.
3325
3324
3326 The list is lazily initialized on first access.
3325 The list is lazily initialized on first access.
3327 """
3326 """
3328 if self._unicode_names is None:
3327 if self._unicode_names is None:
3329 names = []
3328 names = []
3330 for c in range(0,0x10FFFF + 1):
3329 for c in range(0,0x10FFFF + 1):
3331 try:
3330 try:
3332 names.append(unicodedata.name(chr(c)))
3331 names.append(unicodedata.name(chr(c)))
3333 except ValueError:
3332 except ValueError:
3334 pass
3333 pass
3335 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3334 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3336
3335
3337 return self._unicode_names
3336 return self._unicode_names
3338
3337
3339 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3338 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3340 names = []
3339 names = []
3341 for start,stop in ranges:
3340 for start,stop in ranges:
3342 for c in range(start, stop) :
3341 for c in range(start, stop) :
3343 try:
3342 try:
3344 names.append(unicodedata.name(chr(c)))
3343 names.append(unicodedata.name(chr(c)))
3345 except ValueError:
3344 except ValueError:
3346 pass
3345 pass
3347 return names
3346 return names
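Both loops above rely on :func:`unicodedata.name`, whose inverse is the :func:`unicodedata.lookup` call used earlier by ``unicode_name_matches``:

.. code::

    import unicodedata

    unicodedata.name("α")                            # -> 'GREEK SMALL LETTER ALPHA'
    unicodedata.lookup("GREEK SMALL LETTER ALPHA")   # -> 'α'

    # unassigned code points raise ValueError, which is why both loops above
    # wrap the call in try/except and simply skip them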
@@ -1,998 +1,997 b''
1 # -*- coding: utf-8 -*-
1 # -*- coding: utf-8 -*-
2 """
2 """
3 Pdb debugger class.
3 Pdb debugger class.
4
4
5
5
6 This is an extension to PDB which adds a number of new features.
6 This is an extension to PDB which adds a number of new features.
7 Note that there is also the `IPython.terminal.debugger` class which provides UI
7 Note that there is also the `IPython.terminal.debugger` class which provides UI
8 improvements.
8 improvements.
9
9
10 We also strongly recommend to use this via the `ipdb` package, which provides
10 We also strongly recommend to use this via the `ipdb` package, which provides
11 extra configuration options.
11 extra configuration options.
12
12
13 Among other things, this subclass of PDB:
13 Among other things, this subclass of PDB:
14 - supports many IPython magics like pdef/psource
14 - supports many IPython magics like pdef/psource
15 - hides frames in tracebacks based on `__tracebackhide__`
15 - hides frames in tracebacks based on `__tracebackhide__`
16 - allows skipping frames based on `__debuggerskip__`
16 - allows skipping frames based on `__debuggerskip__`
17
17
18 The skipping and hiding of frames are configurable via the `skip_predicates`
18 The skipping and hiding of frames are configurable via the `skip_predicates`
19 command.
19 command.
20
20
21 By default, frames from readonly files will be hidden, and frames containing
21 By default, frames from readonly files will be hidden, and frames containing
22 ``__tracebackhide__=True`` will be hidden.
22 ``__tracebackhide__=True`` will be hidden.
23
23
24 Frames containing ``__debuggerskip__`` will be stepped over, and frames whose parent
24 Frames containing ``__debuggerskip__`` will be stepped over, and frames whose parent
25 frame's value of ``__debuggerskip__`` is ``True`` will be skipped.
25 frame's value of ``__debuggerskip__`` is ``True`` will be skipped.
26
26
27 >>> def helpers_helper():
27 >>> def helpers_helper():
28 ... pass
28 ... pass
29 ...
29 ...
30 ... def helper_1():
30 ... def helper_1():
31 ... print("don't step in me")
31 ... print("don't step in me")
32 ... helpers_helper() # will be stepped over unless a breakpoint is set.
32 ... helpers_helper() # will be stepped over unless a breakpoint is set.
33 ...
33 ...
34 ...
34 ...
35 ... def helper_2():
35 ... def helper_2():
36 ... print("in me neither")
36 ... print("in me neither")
37 ...
37 ...
38
38
39 One can define a decorator that wraps a function between the two helpers:
39 One can define a decorator that wraps a function between the two helpers:
40
40
41 >>> def pdb_skipped_decorator(function):
41 >>> def pdb_skipped_decorator(function):
42 ...
42 ...
43 ...
43 ...
44 ... def wrapped_fn(*args, **kwargs):
44 ... def wrapped_fn(*args, **kwargs):
45 ... __debuggerskip__ = True
45 ... __debuggerskip__ = True
46 ... helper_1()
46 ... helper_1()
47 ... __debuggerskip__ = False
47 ... __debuggerskip__ = False
48 ... result = function(*args, **kwargs)
48 ... result = function(*args, **kwargs)
49 ... __debuggerskip__ = True
49 ... __debuggerskip__ = True
50 ... helper_2()
50 ... helper_2()
51 ... # setting __debuggerskip__ to False again is not necessary
51 ... # setting __debuggerskip__ to False again is not necessary
52 ... return result
52 ... return result
53 ...
53 ...
54 ... return wrapped_fn
54 ... return wrapped_fn
55
55
56 When decorating a function, ipdb will directly step into ``bar()`` by
56 When decorating a function, ipdb will directly step into ``bar()`` by
57 default:
57 default:
58
58
59 >>> @pdb_skipped_decorator
59 >>> @pdb_skipped_decorator
60 ... def bar(x, y):
60 ... def bar(x, y):
61 ... return x * y
61 ... return x * y
62
62
63
63
64 You can toggle the behavior with
64 You can toggle the behavior with
65
65
66 ipdb> skip_predicates debuggerskip false
66 ipdb> skip_predicates debuggerskip false
67
67
68 or configure it in your ``.pdbrc``
68 or configure it in your ``.pdbrc``
69
69
70
70
71
71
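For the hiding side, tagging a frame is enough; a minimal sketch (the function is made up, ``__tracebackhide__`` is the special name):

.. code::

    def fragile_helper():
        # hidden from tracebacks and from this debugger by default
        __tracebackhide__ = True
        raise ValueError("boom")

The ``tbhide`` predicate listed in ``default_predicates`` below controls this behaviour, so ``skip_predicates tbhide false`` brings such frames back.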
72 License
72 License
73 -------
73 -------
74
74
75 Modified from the standard pdb.Pdb class to avoid including readline, so that
75 Modified from the standard pdb.Pdb class to avoid including readline, so that
76 the command line completion of other programs which include this isn't
76 the command line completion of other programs which include this isn't
77 damaged.
77 damaged.
78
78
79 In the future, this class will be expanded with improvements over the standard
79 In the future, this class will be expanded with improvements over the standard
80 pdb.
80 pdb.
81
81
82 The original code in this file is mainly lifted out of cmd.py in Python 2.2,
82 The original code in this file is mainly lifted out of cmd.py in Python 2.2,
83 with minor changes. Licensing should therefore be under the standard Python
83 with minor changes. Licensing should therefore be under the standard Python
84 terms. For details on the PSF (Python Software Foundation) standard license,
84 terms. For details on the PSF (Python Software Foundation) standard license,
85 see:
85 see:
86
86
87 https://docs.python.org/2/license.html
87 https://docs.python.org/2/license.html
88
88
89
89
90 All the changes since then are under the same license as IPython.
90 All the changes since then are under the same license as IPython.
91
91
92 """
92 """
93
93
94 #*****************************************************************************
94 #*****************************************************************************
95 #
95 #
96 # This file is licensed under the PSF license.
96 # This file is licensed under the PSF license.
97 #
97 #
98 # Copyright (C) 2001 Python Software Foundation, www.python.org
98 # Copyright (C) 2001 Python Software Foundation, www.python.org
99 # Copyright (C) 2005-2006 Fernando Perez. <fperez@colorado.edu>
99 # Copyright (C) 2005-2006 Fernando Perez. <fperez@colorado.edu>
100 #
100 #
101 #
101 #
102 #*****************************************************************************
102 #*****************************************************************************
103
103
104 import inspect
104 import inspect
105 import linecache
105 import linecache
106 import sys
106 import sys
107 import re
107 import re
108 import os
108 import os
109
109
110 from IPython import get_ipython
110 from IPython import get_ipython
111 from IPython.utils import PyColorize
111 from IPython.utils import PyColorize
112 from IPython.utils import coloransi, py3compat
112 from IPython.utils import coloransi, py3compat
113 from IPython.core.excolors import exception_colors
113 from IPython.core.excolors import exception_colors
114
114
115 # skip module doctests
115 # skip module doctests
116 __skip_doctest__ = True
116 __skip_doctest__ = True
117
117
118 prompt = 'ipdb> '
118 prompt = 'ipdb> '
119
119
120 # We have to check this directly from sys.argv, config struct not yet available
120 # We have to check this directly from sys.argv, config struct not yet available
121 from pdb import Pdb as OldPdb
121 from pdb import Pdb as OldPdb
122
122
123 # Allow the set_trace code to operate outside of an ipython instance, even if
123 # Allow the set_trace code to operate outside of an ipython instance, even if
124 # it does so with some limitations. The rest of this support is implemented in
124 # it does so with some limitations. The rest of this support is implemented in
125 # the Tracer constructor.
125 # the Tracer constructor.
126
126
127 DEBUGGERSKIP = "__debuggerskip__"
127 DEBUGGERSKIP = "__debuggerskip__"
128
128
129
129
130 def make_arrow(pad):
130 def make_arrow(pad):
131 """generate the leading arrow in front of traceback or debugger"""
131 """generate the leading arrow in front of traceback or debugger"""
132 if pad >= 2:
132 if pad >= 2:
133 return '-'*(pad-2) + '> '
133 return '-'*(pad-2) + '> '
134 elif pad == 1:
134 elif pad == 1:
135 return '>'
135 return '>'
136 return ''
136 return ''
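# A minimal illustration of the helper above (hedged sketch, not from the original
# source); the 7-column gutter matches ``numbers_width`` used later in this file:
#
#     make_arrow(7)   # '-----> '
#     make_arrow(1)   # '>'
#     make_arrow(0)   # ''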
137
137
138
138
139 def BdbQuit_excepthook(et, ev, tb, excepthook=None):
139 def BdbQuit_excepthook(et, ev, tb, excepthook=None):
140 """Exception hook which handles `BdbQuit` exceptions.
140 """Exception hook which handles `BdbQuit` exceptions.
141
141
142 All other exceptions are processed using the `excepthook`
142 All other exceptions are processed using the `excepthook`
143 parameter.
143 parameter.
144 """
144 """
145 raise ValueError(
145 raise ValueError(
146 "`BdbQuit_excepthook` is deprecated since version 5.1",
146 "`BdbQuit_excepthook` is deprecated since version 5.1",
147 )
147 )
148
148
149
149
150 def BdbQuit_IPython_excepthook(self, et, ev, tb, tb_offset=None):
150 def BdbQuit_IPython_excepthook(self, et, ev, tb, tb_offset=None):
151 raise ValueError(
151 raise ValueError(
152 "`BdbQuit_IPython_excepthook` is deprecated since version 5.1",
152 "`BdbQuit_IPython_excepthook` is deprecated since version 5.1",
153 )
153 )
154
154
155
155
156 RGX_EXTRA_INDENT = re.compile(r'(?<=\n)\s+')
156 RGX_EXTRA_INDENT = re.compile(r'(?<=\n)\s+')
157
157
158
158
159 def strip_indentation(multiline_string):
159 def strip_indentation(multiline_string):
160 return RGX_EXTRA_INDENT.sub('', multiline_string)
160 return RGX_EXTRA_INDENT.sub('', multiline_string)
161
161
162
162
163 def decorate_fn_with_doc(new_fn, old_fn, additional_text=""):
163 def decorate_fn_with_doc(new_fn, old_fn, additional_text=""):
164 """Make new_fn have old_fn's doc string. This is particularly useful
164 """Make new_fn have old_fn's doc string. This is particularly useful
165 for the ``do_...`` commands that hook into the help system.
165 for the ``do_...`` commands that hook into the help system.
166 Adapted from a comp.lang.python posting
166 Adapted from a comp.lang.python posting
167 by Duncan Booth."""
167 by Duncan Booth."""
168 def wrapper(*args, **kw):
168 def wrapper(*args, **kw):
169 return new_fn(*args, **kw)
169 return new_fn(*args, **kw)
170 if old_fn.__doc__:
170 if old_fn.__doc__:
171 wrapper.__doc__ = strip_indentation(old_fn.__doc__) + additional_text
171 wrapper.__doc__ = strip_indentation(old_fn.__doc__) + additional_text
172 return wrapper
172 return wrapper
173
173
174
174
175 class Pdb(OldPdb):
175 class Pdb(OldPdb):
176 """Modified Pdb class, does not load readline.
176 """Modified Pdb class, does not load readline.
177
177
178 For a standalone version that uses prompt_toolkit, see
178 For a standalone version that uses prompt_toolkit, see
179 `IPython.terminal.debugger.TerminalPdb` and
179 `IPython.terminal.debugger.TerminalPdb` and
180 `IPython.terminal.debugger.set_trace()`
180 `IPython.terminal.debugger.set_trace()`
181
181
182
182
183 This debugger can hide and skip frames that are tagged according to some predicates.
183 This debugger can hide and skip frames that are tagged according to some predicates.
184 See the `skip_predicates` command.
184 See the `skip_predicates` command.
185
185
186 """
186 """
187
187
188 default_predicates = {
188 default_predicates = {
189 "tbhide": True,
189 "tbhide": True,
190 "readonly": False,
190 "readonly": False,
191 "ipython_internal": True,
191 "ipython_internal": True,
192 "debuggerskip": True,
192 "debuggerskip": True,
193 }
193 }
194
194
195 def __init__(self, completekey=None, stdin=None, stdout=None, context=5, **kwargs):
195 def __init__(self, completekey=None, stdin=None, stdout=None, context=5, **kwargs):
196 """Create a new IPython debugger.
196 """Create a new IPython debugger.
197
197
198 Parameters
198 Parameters
199 ----------
199 ----------
200 completekey : default None
200 completekey : default None
201 Passed to pdb.Pdb.
201 Passed to pdb.Pdb.
202 stdin : default None
202 stdin : default None
203 Passed to pdb.Pdb.
203 Passed to pdb.Pdb.
204 stdout : default None
204 stdout : default None
205 Passed to pdb.Pdb.
205 Passed to pdb.Pdb.
206 context : int
206 context : int
207 Number of lines of source code context to show when
207 Number of lines of source code context to show when
208 displaying stacktrace information.
208 displaying stacktrace information.
209 **kwargs
209 **kwargs
210 Passed to pdb.Pdb.
210 Passed to pdb.Pdb.
211
211
212 Notes
212 Notes
213 -----
213 -----
214 The possibilities are Python-version dependent; see the Python
214 The possibilities are Python-version dependent; see the Python
215 docs for more info.
215 docs for more info.
216 """
216 """
217
217
218 # Parent constructor:
218 # Parent constructor:
219 try:
219 try:
220 self.context = int(context)
220 self.context = int(context)
221 if self.context <= 0:
221 if self.context <= 0:
222 raise ValueError("Context must be a positive integer")
222 raise ValueError("Context must be a positive integer")
223 except (TypeError, ValueError) as e:
223 except (TypeError, ValueError) as e:
224 raise ValueError("Context must be a positive integer") from e
224 raise ValueError("Context must be a positive integer") from e
225
225
226 # `kwargs` ensures full compatibility with stdlib's `pdb.Pdb`.
226 # `kwargs` ensures full compatibility with stdlib's `pdb.Pdb`.
227 OldPdb.__init__(self, completekey, stdin, stdout, **kwargs)
227 OldPdb.__init__(self, completekey, stdin, stdout, **kwargs)
228
228
229 # IPython changes...
229 # IPython changes...
230 self.shell = get_ipython()
230 self.shell = get_ipython()
231
231
232 if self.shell is None:
232 if self.shell is None:
233 save_main = sys.modules['__main__']
233 save_main = sys.modules['__main__']
234 # No IPython instance running, we must create one
234 # No IPython instance running, we must create one
235 from IPython.terminal.interactiveshell import \
235 from IPython.terminal.interactiveshell import \
236 TerminalInteractiveShell
236 TerminalInteractiveShell
237 self.shell = TerminalInteractiveShell.instance()
237 self.shell = TerminalInteractiveShell.instance()
238 # needed by any code which calls __import__("__main__") after
238 # needed by any code which calls __import__("__main__") after
239 # the debugger was entered. See also #9941.
239 # the debugger was entered. See also #9941.
240 sys.modules["__main__"] = save_main
240 sys.modules["__main__"] = save_main
241
241
242
242
243 color_scheme = self.shell.colors
243 color_scheme = self.shell.colors
244
244
245 self.aliases = {}
245 self.aliases = {}
246
246
247 # Create color table: we copy the default one from the traceback
247 # Create color table: we copy the default one from the traceback
248 # module and add a few attributes needed for debugging
248 # module and add a few attributes needed for debugging
249 self.color_scheme_table = exception_colors()
249 self.color_scheme_table = exception_colors()
250
250
251 # shorthands
251 # shorthands
252 C = coloransi.TermColors
252 C = coloransi.TermColors
253 cst = self.color_scheme_table
253 cst = self.color_scheme_table
254
254
255 cst['NoColor'].colors.prompt = C.NoColor
255 cst['NoColor'].colors.prompt = C.NoColor
256 cst['NoColor'].colors.breakpoint_enabled = C.NoColor
256 cst['NoColor'].colors.breakpoint_enabled = C.NoColor
257 cst['NoColor'].colors.breakpoint_disabled = C.NoColor
257 cst['NoColor'].colors.breakpoint_disabled = C.NoColor
258
258
259 cst['Linux'].colors.prompt = C.Green
259 cst['Linux'].colors.prompt = C.Green
260 cst['Linux'].colors.breakpoint_enabled = C.LightRed
260 cst['Linux'].colors.breakpoint_enabled = C.LightRed
261 cst['Linux'].colors.breakpoint_disabled = C.Red
261 cst['Linux'].colors.breakpoint_disabled = C.Red
262
262
263 cst['LightBG'].colors.prompt = C.Blue
263 cst['LightBG'].colors.prompt = C.Blue
264 cst['LightBG'].colors.breakpoint_enabled = C.LightRed
264 cst['LightBG'].colors.breakpoint_enabled = C.LightRed
265 cst['LightBG'].colors.breakpoint_disabled = C.Red
265 cst['LightBG'].colors.breakpoint_disabled = C.Red
266
266
267 cst['Neutral'].colors.prompt = C.Blue
267 cst['Neutral'].colors.prompt = C.Blue
268 cst['Neutral'].colors.breakpoint_enabled = C.LightRed
268 cst['Neutral'].colors.breakpoint_enabled = C.LightRed
269 cst['Neutral'].colors.breakpoint_disabled = C.Red
269 cst['Neutral'].colors.breakpoint_disabled = C.Red
270
270
271 # Add a python parser so we can syntax highlight source while
271 # Add a python parser so we can syntax highlight source while
272 # debugging.
272 # debugging.
273 self.parser = PyColorize.Parser(style=color_scheme)
273 self.parser = PyColorize.Parser(style=color_scheme)
274 self.set_colors(color_scheme)
274 self.set_colors(color_scheme)
275
275
276 # Set the prompt - the default prompt is '(Pdb)'
276 # Set the prompt - the default prompt is '(Pdb)'
277 self.prompt = prompt
277 self.prompt = prompt
278 self.skip_hidden = True
278 self.skip_hidden = True
279 self.report_skipped = True
279 self.report_skipped = True
280
280
281 # list of predicates we use to skip frames
281 # list of predicates we use to skip frames
282 self._predicates = self.default_predicates
282 self._predicates = self.default_predicates
283
283
284 #
284 #
285 def set_colors(self, scheme):
285 def set_colors(self, scheme):
286 """Shorthand access to the color table scheme selector method."""
286 """Shorthand access to the color table scheme selector method."""
287 self.color_scheme_table.set_active_scheme(scheme)
287 self.color_scheme_table.set_active_scheme(scheme)
288 self.parser.style = scheme
288 self.parser.style = scheme
289
289
290 def set_trace(self, frame=None):
290 def set_trace(self, frame=None):
291 if frame is None:
291 if frame is None:
292 frame = sys._getframe().f_back
292 frame = sys._getframe().f_back
293 self.initial_frame = frame
293 self.initial_frame = frame
294 return super().set_trace(frame)
294 return super().set_trace(frame)
295
295
296 def _hidden_predicate(self, frame):
296 def _hidden_predicate(self, frame):
297 """
297 """
298 Given a frame, return whether it should be hidden or not by IPython.
298 Given a frame, return whether it should be hidden or not by IPython.
299 """
299 """
300
300
301 if self._predicates["readonly"]:
301 if self._predicates["readonly"]:
302 fname = frame.f_code.co_filename
302 fname = frame.f_code.co_filename
303 # We need to check for file existence; interactively defined
303 # We need to check for file existence; interactively defined
304 # functions would otherwise appear as read-only (RO).
304 # functions would otherwise appear as read-only (RO).
305 if os.path.isfile(fname) and not os.access(fname, os.W_OK):
305 if os.path.isfile(fname) and not os.access(fname, os.W_OK):
306 return True
306 return True
307
307
308 if self._predicates["tbhide"]:
308 if self._predicates["tbhide"]:
309 if frame in (self.curframe, getattr(self, "initial_frame", None)):
309 if frame in (self.curframe, getattr(self, "initial_frame", None)):
310 return False
310 return False
311 frame_locals = self._get_frame_locals(frame)
311 frame_locals = self._get_frame_locals(frame)
312 if "__tracebackhide__" not in frame_locals:
312 if "__tracebackhide__" not in frame_locals:
313 return False
313 return False
314 return frame_locals["__tracebackhide__"]
314 return frame_locals["__tracebackhide__"]
315 return False
315 return False
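# Hedged illustration of the ``__tracebackhide__`` convention honoured above; the
# helper name below is hypothetical and not part of IPython:
#
#     def _internal_helper():
#         __tracebackhide__ = True   # this frame is hidden from where/up/down
#         raise RuntimeError("boom")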
316
316
317 def hidden_frames(self, stack):
317 def hidden_frames(self, stack):
318 """
318 """
319 Given a stack, return for each frame whether it should be skipped.
319 Given a stack, return for each frame whether it should be skipped.
320
320
321 This is used in up/down and where to skip frames.
321 This is used in up/down and where to skip frames.
322 """
322 """
323 # The f_locals dictionary is updated from the actual frame
323 # The f_locals dictionary is updated from the actual frame
324 # locals whenever the .f_locals accessor is called, so we
324 # locals whenever the .f_locals accessor is called, so we
325 # avoid calling it here to preserve self.curframe_locals.
325 # avoid calling it here to preserve self.curframe_locals.
326 # Furthermore, there is no good reason to hide the current frame.
326 # Furthermore, there is no good reason to hide the current frame.
327 ip_hide = [self._hidden_predicate(s[0]) for s in stack]
327 ip_hide = [self._hidden_predicate(s[0]) for s in stack]
328 ip_start = [i for i, s in enumerate(ip_hide) if s == "__ipython_bottom__"]
328 ip_start = [i for i, s in enumerate(ip_hide) if s == "__ipython_bottom__"]
329 if ip_start and self._predicates["ipython_internal"]:
329 if ip_start and self._predicates["ipython_internal"]:
330 ip_hide = [h if i > ip_start[0] else True for (i, h) in enumerate(ip_hide)]
330 ip_hide = [h if i > ip_start[0] else True for (i, h) in enumerate(ip_hide)]
331 return ip_hide
331 return ip_hide
332
332
333 def interaction(self, frame, traceback):
333 def interaction(self, frame, traceback):
334 try:
334 try:
335 OldPdb.interaction(self, frame, traceback)
335 OldPdb.interaction(self, frame, traceback)
336 except KeyboardInterrupt:
336 except KeyboardInterrupt:
337 self.stdout.write("\n" + self.shell.get_exception_only())
337 self.stdout.write("\n" + self.shell.get_exception_only())
338
338
339 def precmd(self, line):
339 def precmd(self, line):
340 """Perform useful escapes on the command before it is executed."""
340 """Perform useful escapes on the command before it is executed."""
341
341
342 if line.endswith("??"):
342 if line.endswith("??"):
343 line = "pinfo2 " + line[:-2]
343 line = "pinfo2 " + line[:-2]
344 elif line.endswith("?"):
344 elif line.endswith("?"):
345 line = "pinfo " + line[:-1]
345 line = "pinfo " + line[:-1]
346
346
347 line = super().precmd(line)
347 line = super().precmd(line)
348
348
349 return line
349 return line
350
350
351 def new_do_frame(self, arg):
351 def new_do_frame(self, arg):
352 OldPdb.do_frame(self, arg)
352 OldPdb.do_frame(self, arg)
353
353
354 def new_do_quit(self, arg):
354 def new_do_quit(self, arg):
355
355
356 if hasattr(self, 'old_all_completions'):
356 if hasattr(self, 'old_all_completions'):
357 self.shell.Completer.all_completions = self.old_all_completions
357 self.shell.Completer.all_completions = self.old_all_completions
358
358
359 return OldPdb.do_quit(self, arg)
359 return OldPdb.do_quit(self, arg)
360
360
361 do_q = do_quit = decorate_fn_with_doc(new_do_quit, OldPdb.do_quit)
361 do_q = do_quit = decorate_fn_with_doc(new_do_quit, OldPdb.do_quit)
362
362
363 def new_do_restart(self, arg):
363 def new_do_restart(self, arg):
364 """Restart command. In the context of ipython this is exactly the same
364 """Restart command. In the context of ipython this is exactly the same
365 thing as 'quit'."""
365 thing as 'quit'."""
366 self.msg("Restart doesn't make sense here. Using 'quit' instead.")
366 self.msg("Restart doesn't make sense here. Using 'quit' instead.")
367 return self.do_quit(arg)
367 return self.do_quit(arg)
368
368
369 def print_stack_trace(self, context=None):
369 def print_stack_trace(self, context=None):
370 Colors = self.color_scheme_table.active_colors
370 Colors = self.color_scheme_table.active_colors
371 ColorsNormal = Colors.Normal
371 ColorsNormal = Colors.Normal
372 if context is None:
372 if context is None:
373 context = self.context
373 context = self.context
374 try:
374 try:
375 context = int(context)
375 context = int(context)
376 if context <= 0:
376 if context <= 0:
377 raise ValueError("Context must be a positive integer")
377 raise ValueError("Context must be a positive integer")
378 except (TypeError, ValueError) as e:
378 except (TypeError, ValueError) as e:
379 raise ValueError("Context must be a positive integer") from e
379 raise ValueError("Context must be a positive integer") from e
380 try:
380 try:
381 skipped = 0
381 skipped = 0
382 for hidden, frame_lineno in zip(self.hidden_frames(self.stack), self.stack):
382 for hidden, frame_lineno in zip(self.hidden_frames(self.stack), self.stack):
383 if hidden and self.skip_hidden:
383 if hidden and self.skip_hidden:
384 skipped += 1
384 skipped += 1
385 continue
385 continue
386 if skipped:
386 if skipped:
387 print(
387 print(
388 f"{Colors.excName} [... skipping {skipped} hidden frame(s)]{ColorsNormal}\n"
388 f"{Colors.excName} [... skipping {skipped} hidden frame(s)]{ColorsNormal}\n"
389 )
389 )
390 skipped = 0
390 skipped = 0
391 self.print_stack_entry(frame_lineno, context=context)
391 self.print_stack_entry(frame_lineno, context=context)
392 if skipped:
392 if skipped:
393 print(
393 print(
394 f"{Colors.excName} [... skipping {skipped} hidden frame(s)]{ColorsNormal}\n"
394 f"{Colors.excName} [... skipping {skipped} hidden frame(s)]{ColorsNormal}\n"
395 )
395 )
396 except KeyboardInterrupt:
396 except KeyboardInterrupt:
397 pass
397 pass
398
398
399 def print_stack_entry(self, frame_lineno, prompt_prefix='\n-> ',
399 def print_stack_entry(self, frame_lineno, prompt_prefix='\n-> ',
400 context=None):
400 context=None):
401 if context is None:
401 if context is None:
402 context = self.context
402 context = self.context
403 try:
403 try:
404 context = int(context)
404 context = int(context)
405 if context <= 0:
405 if context <= 0:
406 raise ValueError("Context must be a positive integer")
406 raise ValueError("Context must be a positive integer")
407 except (TypeError, ValueError) as e:
407 except (TypeError, ValueError) as e:
408 raise ValueError("Context must be a positive integer") from e
408 raise ValueError("Context must be a positive integer") from e
409 print(self.format_stack_entry(frame_lineno, '', context), file=self.stdout)
409 print(self.format_stack_entry(frame_lineno, '', context), file=self.stdout)
410
410
411 # vds: >>
411 # vds: >>
412 frame, lineno = frame_lineno
412 frame, lineno = frame_lineno
413 filename = frame.f_code.co_filename
413 filename = frame.f_code.co_filename
414 self.shell.hooks.synchronize_with_editor(filename, lineno, 0)
414 self.shell.hooks.synchronize_with_editor(filename, lineno, 0)
415 # vds: <<
415 # vds: <<
416
416
417 def _get_frame_locals(self, frame):
417 def _get_frame_locals(self, frame):
418 """ "
418 """ "
419 Accessing f_locals of the current frame resets the namespace, so we want to
419 Accessing f_locals of the current frame resets the namespace, so we want to
420 avoid that, or the following can happen:
420 avoid that, or the following can happen:
421
421
422 ipdb> foo
422 ipdb> foo
423 "old"
423 "old"
424 ipdb> foo = "new"
424 ipdb> foo = "new"
425 ipdb> foo
425 ipdb> foo
426 "new"
426 "new"
427 ipdb> where
427 ipdb> where
428 ipdb> foo
428 ipdb> foo
429 "old"
429 "old"
430
430
431 So if frame is self.curframe we instead return self.curframe_locals.
431 So if frame is self.curframe we instead return self.curframe_locals.
432
432
433 """
433 """
434 if frame is self.curframe:
434 if frame is self.curframe:
435 return self.curframe_locals
435 return self.curframe_locals
436 else:
436 else:
437 return frame.f_locals
437 return frame.f_locals
438
438
439 def format_stack_entry(self, frame_lineno, lprefix=': ', context=None):
439 def format_stack_entry(self, frame_lineno, lprefix=': ', context=None):
440 if context is None:
440 if context is None:
441 context = self.context
441 context = self.context
442 try:
442 try:
443 context = int(context)
443 context = int(context)
444 if context <= 0:
444 if context <= 0:
445 print("Context must be a positive integer", file=self.stdout)
445 print("Context must be a positive integer", file=self.stdout)
446 except (TypeError, ValueError):
446 except (TypeError, ValueError):
447 print("Context must be a positive integer", file=self.stdout)
447 print("Context must be a positive integer", file=self.stdout)
448
448
449 import reprlib
449 import reprlib
450
450
451 ret = []
451 ret = []
452
452
453 Colors = self.color_scheme_table.active_colors
453 Colors = self.color_scheme_table.active_colors
454 ColorsNormal = Colors.Normal
454 ColorsNormal = Colors.Normal
455 tpl_link = "%s%%s%s" % (Colors.filenameEm, ColorsNormal)
455 tpl_link = "%s%%s%s" % (Colors.filenameEm, ColorsNormal)
456 tpl_call = "%s%%s%s%%s%s" % (Colors.vName, Colors.valEm, ColorsNormal)
456 tpl_call = "%s%%s%s%%s%s" % (Colors.vName, Colors.valEm, ColorsNormal)
457 tpl_line = "%%s%s%%s %s%%s" % (Colors.lineno, ColorsNormal)
457 tpl_line = "%%s%s%%s %s%%s" % (Colors.lineno, ColorsNormal)
458 tpl_line_em = "%%s%s%%s %s%%s%s" % (Colors.linenoEm, Colors.line, ColorsNormal)
458 tpl_line_em = "%%s%s%%s %s%%s%s" % (Colors.linenoEm, Colors.line, ColorsNormal)
459
459
460 frame, lineno = frame_lineno
460 frame, lineno = frame_lineno
461
461
462 return_value = ''
462 return_value = ''
463 loc_frame = self._get_frame_locals(frame)
463 loc_frame = self._get_frame_locals(frame)
464 if "__return__" in loc_frame:
464 if "__return__" in loc_frame:
465 rv = loc_frame["__return__"]
465 rv = loc_frame["__return__"]
466 # return_value += '->'
466 # return_value += '->'
467 return_value += reprlib.repr(rv) + "\n"
467 return_value += reprlib.repr(rv) + "\n"
468 ret.append(return_value)
468 ret.append(return_value)
469
469
470 #s = filename + '(' + `lineno` + ')'
470 #s = filename + '(' + `lineno` + ')'
471 filename = self.canonic(frame.f_code.co_filename)
471 filename = self.canonic(frame.f_code.co_filename)
472 link = tpl_link % py3compat.cast_unicode(filename)
472 link = tpl_link % py3compat.cast_unicode(filename)
473
473
474 if frame.f_code.co_name:
474 if frame.f_code.co_name:
475 func = frame.f_code.co_name
475 func = frame.f_code.co_name
476 else:
476 else:
477 func = "<lambda>"
477 func = "<lambda>"
478
478
479 call = ""
479 call = ""
480 if func != "?":
480 if func != "?":
481 if "__args__" in loc_frame:
481 if "__args__" in loc_frame:
482 args = reprlib.repr(loc_frame["__args__"])
482 args = reprlib.repr(loc_frame["__args__"])
483 else:
483 else:
484 args = '()'
484 args = '()'
485 call = tpl_call % (func, args)
485 call = tpl_call % (func, args)
486
486
487 # The level info should be generated in the same format pdb uses, to
487 # The level info should be generated in the same format pdb uses, to
488 # avoid breaking the pdbtrack functionality of python-mode in *emacs.
488 # avoid breaking the pdbtrack functionality of python-mode in *emacs.
489 if frame is self.curframe:
489 if frame is self.curframe:
490 ret.append('> ')
490 ret.append('> ')
491 else:
491 else:
492 ret.append(" ")
492 ret.append(" ")
493 ret.append("%s(%s)%s\n" % (link, lineno, call))
493 ret.append("%s(%s)%s\n" % (link, lineno, call))
494
494
495 start = lineno - 1 - context//2
495 start = lineno - 1 - context//2
496 lines = linecache.getlines(filename)
496 lines = linecache.getlines(filename)
497 start = min(start, len(lines) - context)
497 start = min(start, len(lines) - context)
498 start = max(start, 0)
498 start = max(start, 0)
499 lines = lines[start : start + context]
499 lines = lines[start : start + context]
500
500
501 for i, line in enumerate(lines):
501 for i, line in enumerate(lines):
502 show_arrow = start + 1 + i == lineno
502 show_arrow = start + 1 + i == lineno
503 linetpl = (frame is self.curframe or show_arrow) and tpl_line_em or tpl_line
503 linetpl = (frame is self.curframe or show_arrow) and tpl_line_em or tpl_line
504 ret.append(
504 ret.append(
505 self.__format_line(
505 self.__format_line(
506 linetpl, filename, start + 1 + i, line, arrow=show_arrow
506 linetpl, filename, start + 1 + i, line, arrow=show_arrow
507 )
507 )
508 )
508 )
509 return "".join(ret)
509 return "".join(ret)
510
510
511 def __format_line(self, tpl_line, filename, lineno, line, arrow=False):
511 def __format_line(self, tpl_line, filename, lineno, line, arrow=False):
512 bp_mark = ""
512 bp_mark = ""
513 bp_mark_color = ""
513 bp_mark_color = ""
514
514
515 new_line, err = self.parser.format2(line, 'str')
515 new_line, err = self.parser.format2(line, 'str')
516 if not err:
516 if not err:
517 line = new_line
517 line = new_line
518
518
519 bp = None
519 bp = None
520 if lineno in self.get_file_breaks(filename):
520 if lineno in self.get_file_breaks(filename):
521 bps = self.get_breaks(filename, lineno)
521 bps = self.get_breaks(filename, lineno)
522 bp = bps[-1]
522 bp = bps[-1]
523
523
524 if bp:
524 if bp:
525 Colors = self.color_scheme_table.active_colors
525 Colors = self.color_scheme_table.active_colors
526 bp_mark = str(bp.number)
526 bp_mark = str(bp.number)
527 bp_mark_color = Colors.breakpoint_enabled
527 bp_mark_color = Colors.breakpoint_enabled
528 if not bp.enabled:
528 if not bp.enabled:
529 bp_mark_color = Colors.breakpoint_disabled
529 bp_mark_color = Colors.breakpoint_disabled
530
530
531 numbers_width = 7
531 numbers_width = 7
532 if arrow:
532 if arrow:
533 # This is the line with the error
533 # This is the line with the error
534 pad = numbers_width - len(str(lineno)) - len(bp_mark)
534 pad = numbers_width - len(str(lineno)) - len(bp_mark)
535 num = '%s%s' % (make_arrow(pad), str(lineno))
535 num = '%s%s' % (make_arrow(pad), str(lineno))
536 else:
536 else:
537 num = '%*s' % (numbers_width - len(bp_mark), str(lineno))
537 num = '%*s' % (numbers_width - len(bp_mark), str(lineno))
538
538
539 return tpl_line % (bp_mark_color + bp_mark, num, line)
539 return tpl_line % (bp_mark_color + bp_mark, num, line)
540
540
541 def print_list_lines(self, filename, first, last):
541 def print_list_lines(self, filename, first, last):
542 """The printing (as opposed to the parsing part of a 'list'
542 """The printing (as opposed to the parsing part of a 'list'
543 command."""
543 command."""
544 try:
544 try:
545 Colors = self.color_scheme_table.active_colors
545 Colors = self.color_scheme_table.active_colors
546 ColorsNormal = Colors.Normal
546 ColorsNormal = Colors.Normal
547 tpl_line = '%%s%s%%s %s%%s' % (Colors.lineno, ColorsNormal)
547 tpl_line = '%%s%s%%s %s%%s' % (Colors.lineno, ColorsNormal)
548 tpl_line_em = '%%s%s%%s %s%%s%s' % (Colors.linenoEm, Colors.line, ColorsNormal)
548 tpl_line_em = '%%s%s%%s %s%%s%s' % (Colors.linenoEm, Colors.line, ColorsNormal)
549 src = []
549 src = []
550 if filename == "<string>" and hasattr(self, "_exec_filename"):
550 if filename == "<string>" and hasattr(self, "_exec_filename"):
551 filename = self._exec_filename
551 filename = self._exec_filename
552
552
553 for lineno in range(first, last+1):
553 for lineno in range(first, last+1):
554 line = linecache.getline(filename, lineno)
554 line = linecache.getline(filename, lineno)
555 if not line:
555 if not line:
556 break
556 break
557
557
558 if lineno == self.curframe.f_lineno:
558 if lineno == self.curframe.f_lineno:
559 line = self.__format_line(
559 line = self.__format_line(
560 tpl_line_em, filename, lineno, line, arrow=True
560 tpl_line_em, filename, lineno, line, arrow=True
561 )
561 )
562 else:
562 else:
563 line = self.__format_line(
563 line = self.__format_line(
564 tpl_line, filename, lineno, line, arrow=False
564 tpl_line, filename, lineno, line, arrow=False
565 )
565 )
566
566
567 src.append(line)
567 src.append(line)
568 self.lineno = lineno
568 self.lineno = lineno
569
569
570 print(''.join(src), file=self.stdout)
570 print(''.join(src), file=self.stdout)
571
571
572 except KeyboardInterrupt:
572 except KeyboardInterrupt:
573 pass
573 pass
574
574
575 def do_skip_predicates(self, args):
575 def do_skip_predicates(self, args):
576 """
576 """
577 Turn on/off individual predicates as to whether a frame should be hidden/skipped.
577 Turn on/off individual predicates as to whether a frame should be hidden/skipped.
578
578
579 The global option to skip (or not) hidden frames is set with ``skip_hidden``.
579 The global option to skip (or not) hidden frames is set with ``skip_hidden``.
580
580
581 To change the value of a predicate
581 To change the value of a predicate
582
582
583 skip_predicates key [true|false]
583 skip_predicates key [true|false]
584
584
585 Call without arguments to see the current values.
585 Call without arguments to see the current values.
586
586
587 To permanently change the value of an option add the corresponding
587 To permanently change the value of an option add the corresponding
588 command to your ``~/.pdbrc`` file. If you are programmatically using the
588 command to your ``~/.pdbrc`` file. If you are programmatically using the
589 Pdb instance you can also change the ``default_predicates`` class
589 Pdb instance you can also change the ``default_predicates`` class
590 attribute.
590 attribute.
591 """
591 """
592 if not args.strip():
592 if not args.strip():
593 print("current predicates:")
593 print("current predicates:")
594 for (p, v) in self._predicates.items():
594 for p, v in self._predicates.items():
595 print(" ", p, ":", v)
595 print(" ", p, ":", v)
596 return
596 return
597 type_value = args.strip().split(" ")
597 type_value = args.strip().split(" ")
598 if len(type_value) != 2:
598 if len(type_value) != 2:
599 print(
599 print(
600 f"Usage: skip_predicates <type> <value>, with <type> one of {set(self._predicates.keys())}"
600 f"Usage: skip_predicates <type> <value>, with <type> one of {set(self._predicates.keys())}"
601 )
601 )
602 return
602 return
603
603
604 type_, value = type_value
604 type_, value = type_value
605 if type_ not in self._predicates:
605 if type_ not in self._predicates:
606 print(f"{type_!r} not in {set(self._predicates.keys())}")
606 print(f"{type_!r} not in {set(self._predicates.keys())}")
607 return
607 return
608 if value.lower() not in ("true", "yes", "1", "no", "false", "0"):
608 if value.lower() not in ("true", "yes", "1", "no", "false", "0"):
609 print(
609 print(
610 f"{value!r} is invalid - use one of ('true', 'yes', '1', 'no', 'false', '0')"
610 f"{value!r} is invalid - use one of ('true', 'yes', '1', 'no', 'false', '0')"
611 )
611 )
612 return
612 return
613
613
614 self._predicates[type_] = value.lower() in ("true", "yes", "1")
614 self._predicates[type_] = value.lower() in ("true", "yes", "1")
615 if not any(self._predicates.values()):
615 if not any(self._predicates.values()):
616 print(
616 print(
617 "Warning, all predicates set to False, skip_hidden may not have any effects."
617 "Warning, all predicates set to False, skip_hidden may not have any effects."
618 )
618 )
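# Illustrative ipdb session for the command above (hedged and abbreviated, not
# captured from a real run):
#
#     ipdb> skip_predicates
#     current predicates:
#       tbhide : True
#       readonly : False
#       ipython_internal : True
#       debuggerskip : True
#     ipdb> skip_predicates debuggerskip false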
619
619
620 def do_skip_hidden(self, arg):
620 def do_skip_hidden(self, arg):
621 """
621 """
622 Change whether or not we should skip frames with the
622 Change whether or not we should skip frames with the
623 __tracebackhide__ attribute.
623 __tracebackhide__ attribute.
624 """
624 """
625 if not arg.strip():
625 if not arg.strip():
626 print(
626 print(
627 f"skip_hidden = {self.skip_hidden}, use 'yes','no', 'true', or 'false' to change."
627 f"skip_hidden = {self.skip_hidden}, use 'yes','no', 'true', or 'false' to change."
628 )
628 )
629 elif arg.strip().lower() in ("true", "yes"):
629 elif arg.strip().lower() in ("true", "yes"):
630 self.skip_hidden = True
630 self.skip_hidden = True
631 elif arg.strip().lower() in ("false", "no"):
631 elif arg.strip().lower() in ("false", "no"):
632 self.skip_hidden = False
632 self.skip_hidden = False
633 if not any(self._predicates.values()):
633 if not any(self._predicates.values()):
634 print(
634 print(
635 "Warning, all predicates set to False, skip_hidden may not have any effects."
635 "Warning, all predicates set to False, skip_hidden may not have any effects."
636 )
636 )
637
637
638 def do_list(self, arg):
638 def do_list(self, arg):
639 """Print lines of code from the current stack frame
639 """Print lines of code from the current stack frame
640 """
640 """
641 self.lastcmd = 'list'
641 self.lastcmd = 'list'
642 last = None
642 last = None
643 if arg:
643 if arg:
644 try:
644 try:
645 x = eval(arg, {}, {})
645 x = eval(arg, {}, {})
646 if type(x) == type(()):
646 if type(x) == type(()):
647 first, last = x
647 first, last = x
648 first = int(first)
648 first = int(first)
649 last = int(last)
649 last = int(last)
650 if last < first:
650 if last < first:
651 # Assume it's a count
651 # Assume it's a count
652 last = first + last
652 last = first + last
653 else:
653 else:
654 first = max(1, int(x) - 5)
654 first = max(1, int(x) - 5)
655 except:
655 except:
656 print('*** Error in argument:', repr(arg), file=self.stdout)
656 print('*** Error in argument:', repr(arg), file=self.stdout)
657 return
657 return
658 elif self.lineno is None:
658 elif self.lineno is None:
659 first = max(1, self.curframe.f_lineno - 5)
659 first = max(1, self.curframe.f_lineno - 5)
660 else:
660 else:
661 first = self.lineno + 1
661 first = self.lineno + 1
662 if last is None:
662 if last is None:
663 last = first + 10
663 last = first + 10
664 self.print_list_lines(self.curframe.f_code.co_filename, first, last)
664 self.print_list_lines(self.curframe.f_code.co_filename, first, last)
665
665
666 # vds: >>
666 # vds: >>
667 lineno = first
667 lineno = first
668 filename = self.curframe.f_code.co_filename
668 filename = self.curframe.f_code.co_filename
669 self.shell.hooks.synchronize_with_editor(filename, lineno, 0)
669 self.shell.hooks.synchronize_with_editor(filename, lineno, 0)
670 # vds: <<
670 # vds: <<
671
671
672 do_l = do_list
672 do_l = do_list
673
673
674 def getsourcelines(self, obj):
674 def getsourcelines(self, obj):
675 lines, lineno = inspect.findsource(obj)
675 lines, lineno = inspect.findsource(obj)
676 if inspect.isframe(obj) and obj.f_globals is self._get_frame_locals(obj):
676 if inspect.isframe(obj) and obj.f_globals is self._get_frame_locals(obj):
677 # must be a module frame: do not try to cut a block out of it
677 # must be a module frame: do not try to cut a block out of it
678 return lines, 1
678 return lines, 1
679 elif inspect.ismodule(obj):
679 elif inspect.ismodule(obj):
680 return lines, 1
680 return lines, 1
681 return inspect.getblock(lines[lineno:]), lineno+1
681 return inspect.getblock(lines[lineno:]), lineno+1
682
682
683 def do_longlist(self, arg):
683 def do_longlist(self, arg):
684 """Print lines of code from the current stack frame.
684 """Print lines of code from the current stack frame.
685
685
686 Shows more lines than 'list' does.
686 Shows more lines than 'list' does.
687 """
687 """
688 self.lastcmd = 'longlist'
688 self.lastcmd = 'longlist'
689 try:
689 try:
690 lines, lineno = self.getsourcelines(self.curframe)
690 lines, lineno = self.getsourcelines(self.curframe)
691 except OSError as err:
691 except OSError as err:
692 self.error(err)
692 self.error(err)
693 return
693 return
694 last = lineno + len(lines)
694 last = lineno + len(lines)
695 self.print_list_lines(self.curframe.f_code.co_filename, lineno, last)
695 self.print_list_lines(self.curframe.f_code.co_filename, lineno, last)
696 do_ll = do_longlist
696 do_ll = do_longlist
697
697
698 def do_debug(self, arg):
698 def do_debug(self, arg):
699 """debug code
699 """debug code
700 Enter a recursive debugger that steps through the code
700 Enter a recursive debugger that steps through the code
701 argument (which is an arbitrary expression or statement to be
701 argument (which is an arbitrary expression or statement to be
702 executed in the current environment).
702 executed in the current environment).
703 """
703 """
704 trace_function = sys.gettrace()
704 trace_function = sys.gettrace()
705 sys.settrace(None)
705 sys.settrace(None)
706 globals = self.curframe.f_globals
706 globals = self.curframe.f_globals
707 locals = self.curframe_locals
707 locals = self.curframe_locals
708 p = self.__class__(completekey=self.completekey,
708 p = self.__class__(completekey=self.completekey,
709 stdin=self.stdin, stdout=self.stdout)
709 stdin=self.stdin, stdout=self.stdout)
710 p.use_rawinput = self.use_rawinput
710 p.use_rawinput = self.use_rawinput
711 p.prompt = "(%s) " % self.prompt.strip()
711 p.prompt = "(%s) " % self.prompt.strip()
712 self.message("ENTERING RECURSIVE DEBUGGER")
712 self.message("ENTERING RECURSIVE DEBUGGER")
713 sys.call_tracing(p.run, (arg, globals, locals))
713 sys.call_tracing(p.run, (arg, globals, locals))
714 self.message("LEAVING RECURSIVE DEBUGGER")
714 self.message("LEAVING RECURSIVE DEBUGGER")
715 sys.settrace(trace_function)
715 sys.settrace(trace_function)
716 self.lastcmd = p.lastcmd
716 self.lastcmd = p.lastcmd
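# Example of the recursive debugger described above (hedged; ``some_call`` is an
# illustrative name, and output is abbreviated):
#
#     ipdb> debug some_call(3)
#     ENTERING RECURSIVE DEBUGGER
#     (ipdb>) continue
#     LEAVING RECURSIVE DEBUGGER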
717
717
718 def do_pdef(self, arg):
718 def do_pdef(self, arg):
719 """Print the call signature for any callable object.
719 """Print the call signature for any callable object.
720
720
721 The debugger interface to %pdef"""
721 The debugger interface to %pdef"""
722 namespaces = [
722 namespaces = [
723 ("Locals", self.curframe_locals),
723 ("Locals", self.curframe_locals),
724 ("Globals", self.curframe.f_globals),
724 ("Globals", self.curframe.f_globals),
725 ]
725 ]
726 self.shell.find_line_magic("pdef")(arg, namespaces=namespaces)
726 self.shell.find_line_magic("pdef")(arg, namespaces=namespaces)
727
727
728 def do_pdoc(self, arg):
728 def do_pdoc(self, arg):
729 """Print the docstring for an object.
729 """Print the docstring for an object.
730
730
731 The debugger interface to %pdoc."""
731 The debugger interface to %pdoc."""
732 namespaces = [
732 namespaces = [
733 ("Locals", self.curframe_locals),
733 ("Locals", self.curframe_locals),
734 ("Globals", self.curframe.f_globals),
734 ("Globals", self.curframe.f_globals),
735 ]
735 ]
736 self.shell.find_line_magic("pdoc")(arg, namespaces=namespaces)
736 self.shell.find_line_magic("pdoc")(arg, namespaces=namespaces)
737
737
738 def do_pfile(self, arg):
738 def do_pfile(self, arg):
739 """Print (or run through pager) the file where an object is defined.
739 """Print (or run through pager) the file where an object is defined.
740
740
741 The debugger interface to %pfile.
741 The debugger interface to %pfile.
742 """
742 """
743 namespaces = [
743 namespaces = [
744 ("Locals", self.curframe_locals),
744 ("Locals", self.curframe_locals),
745 ("Globals", self.curframe.f_globals),
745 ("Globals", self.curframe.f_globals),
746 ]
746 ]
747 self.shell.find_line_magic("pfile")(arg, namespaces=namespaces)
747 self.shell.find_line_magic("pfile")(arg, namespaces=namespaces)
748
748
749 def do_pinfo(self, arg):
749 def do_pinfo(self, arg):
750 """Provide detailed information about an object.
750 """Provide detailed information about an object.
751
751
752 The debugger interface to %pinfo, i.e., obj?."""
752 The debugger interface to %pinfo, i.e., obj?."""
753 namespaces = [
753 namespaces = [
754 ("Locals", self.curframe_locals),
754 ("Locals", self.curframe_locals),
755 ("Globals", self.curframe.f_globals),
755 ("Globals", self.curframe.f_globals),
756 ]
756 ]
757 self.shell.find_line_magic("pinfo")(arg, namespaces=namespaces)
757 self.shell.find_line_magic("pinfo")(arg, namespaces=namespaces)
758
758
759 def do_pinfo2(self, arg):
759 def do_pinfo2(self, arg):
760 """Provide extra detailed information about an object.
760 """Provide extra detailed information about an object.
761
761
762 The debugger interface to %pinfo2, i.e., obj??."""
762 The debugger interface to %pinfo2, i.e., obj??."""
763 namespaces = [
763 namespaces = [
764 ("Locals", self.curframe_locals),
764 ("Locals", self.curframe_locals),
765 ("Globals", self.curframe.f_globals),
765 ("Globals", self.curframe.f_globals),
766 ]
766 ]
767 self.shell.find_line_magic("pinfo2")(arg, namespaces=namespaces)
767 self.shell.find_line_magic("pinfo2")(arg, namespaces=namespaces)
768
768
769 def do_psource(self, arg):
769 def do_psource(self, arg):
770 """Print (or run through pager) the source code for an object."""
770 """Print (or run through pager) the source code for an object."""
771 namespaces = [
771 namespaces = [
772 ("Locals", self.curframe_locals),
772 ("Locals", self.curframe_locals),
773 ("Globals", self.curframe.f_globals),
773 ("Globals", self.curframe.f_globals),
774 ]
774 ]
775 self.shell.find_line_magic("psource")(arg, namespaces=namespaces)
775 self.shell.find_line_magic("psource")(arg, namespaces=namespaces)
776
776
777 def do_where(self, arg):
777 def do_where(self, arg):
778 """w(here)
778 """w(here)
779 Print a stack trace, with the most recent frame at the bottom.
779 Print a stack trace, with the most recent frame at the bottom.
780 An arrow indicates the "current frame", which determines the
780 An arrow indicates the "current frame", which determines the
781 context of most commands. 'bt' is an alias for this command.
781 context of most commands. 'bt' is an alias for this command.
782
782
783 Takes a number as an (optional) argument: the number of context lines to
783 Takes a number as an (optional) argument: the number of context lines to
784 print."""
784 print."""
785 if arg:
785 if arg:
786 try:
786 try:
787 context = int(arg)
787 context = int(arg)
788 except ValueError as err:
788 except ValueError as err:
789 self.error(err)
789 self.error(err)
790 return
790 return
791 self.print_stack_trace(context)
791 self.print_stack_trace(context)
792 else:
792 else:
793 self.print_stack_trace()
793 self.print_stack_trace()
794
794
795 do_w = do_where
795 do_w = do_where
796
796
797 def break_anywhere(self, frame):
797 def break_anywhere(self, frame):
798 """
798 """
799 _stop_in_decorator_internals is overly restrictive, as we may still want
799 _stop_in_decorator_internals is overly restrictive, as we may still want
800 to trace function calls, so we need to also update break_anywhere so
800 to trace function calls, so we need to also update break_anywhere so
801 that if we don't `stop_here` because of debugger skip, we may still
801 that if we don't `stop_here` because of debugger skip, we may still
802 stop at any point inside the function.
802 stop at any point inside the function.
803
803
804 """
804 """
805
805
806 sup = super().break_anywhere(frame)
806 sup = super().break_anywhere(frame)
807 if sup:
807 if sup:
808 return sup
808 return sup
809 if self._predicates["debuggerskip"]:
809 if self._predicates["debuggerskip"]:
810 if DEBUGGERSKIP in frame.f_code.co_varnames:
810 if DEBUGGERSKIP in frame.f_code.co_varnames:
811 return True
811 return True
812 if frame.f_back and self._get_frame_locals(frame.f_back).get(DEBUGGERSKIP):
812 if frame.f_back and self._get_frame_locals(frame.f_back).get(DEBUGGERSKIP):
813 return True
813 return True
814 return False
814 return False
815
815
816 def _is_in_decorator_internal_and_should_skip(self, frame):
816 def _is_in_decorator_internal_and_should_skip(self, frame):
817 """
817 """
818 Utility to tell us whether we are in decorator internals and should skip.
818 Utility to tell us whether we are in decorator internals and should skip.
819
819
820 """
820 """
821
821
822 # if we are disabled don't skip
822 # if we are disabled don't skip
823 if not self._predicates["debuggerskip"]:
823 if not self._predicates["debuggerskip"]:
824 return False
824 return False
825
825
826 # if frame is tagged, skip by default.
826 # if frame is tagged, skip by default.
827 if DEBUGGERSKIP in frame.f_code.co_varnames:
827 if DEBUGGERSKIP in frame.f_code.co_varnames:
828 return True
828 return True
829
829
830 # if one of the parent frames has this value set to True, skip as well.
830 # if one of the parent frames has this value set to True, skip as well.
831
831
832 cframe = frame
832 cframe = frame
833 while getattr(cframe, "f_back", None):
833 while getattr(cframe, "f_back", None):
834 cframe = cframe.f_back
834 cframe = cframe.f_back
835 if self._get_frame_locals(cframe).get(DEBUGGERSKIP):
835 if self._get_frame_locals(cframe).get(DEBUGGERSKIP):
836 return True
836 return True
837
837
838 return False
838 return False
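# Hedged sketch of the ``__debuggerskip__`` convention checked above; the decorator
# below is illustrative, not part of IPython:
#
#     import functools
#
#     def hide_from_debugger(func):
#         @functools.wraps(func)
#         def wrapper(*args, **kwargs):
#             __debuggerskip__ = True    # debugger steps straight through this frame
#             result = func(*args, **kwargs)
#             __debuggerskip__ = False
#             return result
#         return wrapper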
839
839
840 def stop_here(self, frame):
840 def stop_here(self, frame):
841
842 if self._is_in_decorator_internal_and_should_skip(frame) is True:
841 if self._is_in_decorator_internal_and_should_skip(frame) is True:
843 return False
842 return False
844
843
845 hidden = False
844 hidden = False
846 if self.skip_hidden:
845 if self.skip_hidden:
847 hidden = self._hidden_predicate(frame)
846 hidden = self._hidden_predicate(frame)
848 if hidden:
847 if hidden:
849 if self.report_skipped:
848 if self.report_skipped:
850 Colors = self.color_scheme_table.active_colors
849 Colors = self.color_scheme_table.active_colors
851 ColorsNormal = Colors.Normal
850 ColorsNormal = Colors.Normal
852 print(
851 print(
853 f"{Colors.excName} [... skipped 1 hidden frame]{ColorsNormal}\n"
852 f"{Colors.excName} [... skipped 1 hidden frame]{ColorsNormal}\n"
854 )
853 )
855 return super().stop_here(frame)
854 return super().stop_here(frame)
856
855
857 def do_up(self, arg):
856 def do_up(self, arg):
858 """u(p) [count]
857 """u(p) [count]
859 Move the current frame count (default one) levels up in the
858 Move the current frame count (default one) levels up in the
860 stack trace (to an older frame).
859 stack trace (to an older frame).
861
860
862 Will skip hidden frames.
861 Will skip hidden frames.
863 """
862 """
864 # modified version of upstream that skips
863 # modified version of upstream that skips
865 # frames with __tracebackhide__
864 # frames with __tracebackhide__
866 if self.curindex == 0:
865 if self.curindex == 0:
867 self.error("Oldest frame")
866 self.error("Oldest frame")
868 return
867 return
869 try:
868 try:
870 count = int(arg or 1)
869 count = int(arg or 1)
871 except ValueError:
870 except ValueError:
872 self.error("Invalid frame count (%s)" % arg)
871 self.error("Invalid frame count (%s)" % arg)
873 return
872 return
874 skipped = 0
873 skipped = 0
875 if count < 0:
874 if count < 0:
876 _newframe = 0
875 _newframe = 0
877 else:
876 else:
878 counter = 0
877 counter = 0
879 hidden_frames = self.hidden_frames(self.stack)
878 hidden_frames = self.hidden_frames(self.stack)
880 for i in range(self.curindex - 1, -1, -1):
879 for i in range(self.curindex - 1, -1, -1):
881 if hidden_frames[i] and self.skip_hidden:
880 if hidden_frames[i] and self.skip_hidden:
882 skipped += 1
881 skipped += 1
883 continue
882 continue
884 counter += 1
883 counter += 1
885 if counter >= count:
884 if counter >= count:
886 break
885 break
887 else:
886 else:
888 # if no break occurred.
887 # if no break occurred.
889 self.error(
888 self.error(
890 "all frames above hidden, use `skip_hidden False` to get get into those."
889 "all frames above hidden, use `skip_hidden False` to get get into those."
891 )
890 )
892 return
891 return
893
892
894 Colors = self.color_scheme_table.active_colors
893 Colors = self.color_scheme_table.active_colors
895 ColorsNormal = Colors.Normal
894 ColorsNormal = Colors.Normal
896 _newframe = i
895 _newframe = i
897 self._select_frame(_newframe)
896 self._select_frame(_newframe)
898 if skipped:
897 if skipped:
899 print(
898 print(
900 f"{Colors.excName} [... skipped {skipped} hidden frame(s)]{ColorsNormal}\n"
899 f"{Colors.excName} [... skipped {skipped} hidden frame(s)]{ColorsNormal}\n"
901 )
900 )
902
901
903 def do_down(self, arg):
902 def do_down(self, arg):
904 """d(own) [count]
903 """d(own) [count]
905 Move the current frame count (default one) levels down in the
904 Move the current frame count (default one) levels down in the
906 stack trace (to a newer frame).
905 stack trace (to a newer frame).
907
906
908 Will skip hidden frames.
907 Will skip hidden frames.
909 """
908 """
910 if self.curindex + 1 == len(self.stack):
909 if self.curindex + 1 == len(self.stack):
911 self.error("Newest frame")
910 self.error("Newest frame")
912 return
911 return
913 try:
912 try:
914 count = int(arg or 1)
913 count = int(arg or 1)
915 except ValueError:
914 except ValueError:
916 self.error("Invalid frame count (%s)" % arg)
915 self.error("Invalid frame count (%s)" % arg)
917 return
916 return
918 if count < 0:
917 if count < 0:
919 _newframe = len(self.stack) - 1
918 _newframe = len(self.stack) - 1
920 else:
919 else:
921 counter = 0
920 counter = 0
922 skipped = 0
921 skipped = 0
923 hidden_frames = self.hidden_frames(self.stack)
922 hidden_frames = self.hidden_frames(self.stack)
924 for i in range(self.curindex + 1, len(self.stack)):
923 for i in range(self.curindex + 1, len(self.stack)):
925 if hidden_frames[i] and self.skip_hidden:
924 if hidden_frames[i] and self.skip_hidden:
926 skipped += 1
925 skipped += 1
927 continue
926 continue
928 counter += 1
927 counter += 1
929 if counter >= count:
928 if counter >= count:
930 break
929 break
931 else:
930 else:
932 self.error(
931 self.error(
933 "all frames below hidden, use `skip_hidden False` to get get into those."
932 "all frames below hidden, use `skip_hidden False` to get get into those."
934 )
933 )
935 return
934 return
936
935
937 Colors = self.color_scheme_table.active_colors
936 Colors = self.color_scheme_table.active_colors
938 ColorsNormal = Colors.Normal
937 ColorsNormal = Colors.Normal
939 if skipped:
938 if skipped:
940 print(
939 print(
941 f"{Colors.excName} [... skipped {skipped} hidden frame(s)]{ColorsNormal}\n"
940 f"{Colors.excName} [... skipped {skipped} hidden frame(s)]{ColorsNormal}\n"
942 )
941 )
943 _newframe = i
942 _newframe = i
944
943
945 self._select_frame(_newframe)
944 self._select_frame(_newframe)
946
945
947 do_d = do_down
946 do_d = do_down
948 do_u = do_up
947 do_u = do_up
949
948
950 def do_context(self, context):
949 def do_context(self, context):
951 """context number_of_lines
950 """context number_of_lines
952 Set the number of lines of source code to show when displaying
951 Set the number of lines of source code to show when displaying
953 stacktrace information.
952 stacktrace information.
954 """
953 """
955 try:
954 try:
956 new_context = int(context)
955 new_context = int(context)
957 if new_context <= 0:
956 if new_context <= 0:
958 raise ValueError()
957 raise ValueError()
959 self.context = new_context
958 self.context = new_context
960 except ValueError:
959 except ValueError:
961 self.error("The 'context' command requires a positive integer argument.")
960 self.error("The 'context' command requires a positive integer argument.")
962
961
963
962
964 class InterruptiblePdb(Pdb):
963 class InterruptiblePdb(Pdb):
965 """Version of debugger where KeyboardInterrupt exits the debugger altogether."""
964 """Version of debugger where KeyboardInterrupt exits the debugger altogether."""
966
965
967 def cmdloop(self, intro=None):
966 def cmdloop(self, intro=None):
968 """Wrap cmdloop() such that KeyboardInterrupt stops the debugger."""
967 """Wrap cmdloop() such that KeyboardInterrupt stops the debugger."""
969 try:
968 try:
970 return OldPdb.cmdloop(self, intro=intro)
969 return OldPdb.cmdloop(self, intro=intro)
971 except KeyboardInterrupt:
970 except KeyboardInterrupt:
972 self.stop_here = lambda frame: False
971 self.stop_here = lambda frame: False
973 self.do_quit("")
972 self.do_quit("")
974 sys.settrace(None)
973 sys.settrace(None)
975 self.quitting = False
974 self.quitting = False
976 raise
975 raise
977
976
978 def _cmdloop(self):
977 def _cmdloop(self):
979 while True:
978 while True:
980 try:
979 try:
981 # keyboard interrupts allow for an easy way to cancel
980 # keyboard interrupts allow for an easy way to cancel
982 # the current command, so allow them during interactive input
981 # the current command, so allow them during interactive input
983 self.allow_kbdint = True
982 self.allow_kbdint = True
984 self.cmdloop()
983 self.cmdloop()
985 self.allow_kbdint = False
984 self.allow_kbdint = False
986 break
985 break
987 except KeyboardInterrupt:
986 except KeyboardInterrupt:
988 self.message('--KeyboardInterrupt--')
987 self.message('--KeyboardInterrupt--')
989 raise
988 raise
990
989
991
990
992 def set_trace(frame=None):
991 def set_trace(frame=None):
993 """
992 """
994 Start debugging from `frame`.
993 Start debugging from `frame`.
995
994
996 If frame is not specified, debugging starts from caller's frame.
995 If frame is not specified, debugging starts from caller's frame.
997 """
996 """
998 Pdb().set_trace(frame or sys._getframe().f_back)
997 Pdb().set_trace(frame or sys._getframe().f_back)
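# A minimal usage sketch of the function above (hedged; ``divide`` is an
# illustrative name):
#
#     from IPython.core.debugger import set_trace
#
#     def divide(a, b):
#         set_trace()        # drops into the ipdb> prompt at this point
#         return a / b
#
#     divide(4, 2)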
@@ -1,575 +1,574 b''
1 """Tests for debugging machinery.
1 """Tests for debugging machinery.
2 """
2 """
3
3
4 # Copyright (c) IPython Development Team.
4 # Copyright (c) IPython Development Team.
5 # Distributed under the terms of the Modified BSD License.
5 # Distributed under the terms of the Modified BSD License.
6
6
7 import builtins
7 import builtins
8 import os
8 import os
9 import sys
9 import sys
10 import platform
10 import platform
11
11
12 from tempfile import NamedTemporaryFile
12 from tempfile import NamedTemporaryFile
13 from textwrap import dedent
13 from textwrap import dedent
14 from unittest.mock import patch
14 from unittest.mock import patch
15
15
16 from IPython.core import debugger
16 from IPython.core import debugger
17 from IPython.testing import IPYTHON_TESTING_TIMEOUT_SCALE
17 from IPython.testing import IPYTHON_TESTING_TIMEOUT_SCALE
18 from IPython.testing.decorators import skip_win32
18 from IPython.testing.decorators import skip_win32
19 import pytest
19 import pytest
20
20
21 #-----------------------------------------------------------------------------
21 #-----------------------------------------------------------------------------
22 # Helper classes, from CPython's Pdb test suite
22 # Helper classes, from CPython's Pdb test suite
23 #-----------------------------------------------------------------------------
23 #-----------------------------------------------------------------------------
24
24
25 class _FakeInput(object):
25 class _FakeInput(object):
26 """
26 """
27 A fake input stream for pdb's interactive debugger. Whenever a
27 A fake input stream for pdb's interactive debugger. Whenever a
28 line is read, print it (to simulate the user typing it), and then
28 line is read, print it (to simulate the user typing it), and then
29 return it. The set of lines to return is specified in the
29 return it. The set of lines to return is specified in the
30 constructor; they should not have trailing newlines.
30 constructor; they should not have trailing newlines.
31 """
31 """
32 def __init__(self, lines):
32 def __init__(self, lines):
33 self.lines = iter(lines)
33 self.lines = iter(lines)
34
34
35 def readline(self):
35 def readline(self):
36 line = next(self.lines)
36 line = next(self.lines)
37 print(line)
37 print(line)
38 return line+'\n'
38 return line+'\n'
39
39
40 class PdbTestInput(object):
40 class PdbTestInput(object):
41 """Context manager that makes testing Pdb in doctests easier."""
41 """Context manager that makes testing Pdb in doctests easier."""
42
42
43 def __init__(self, input):
43 def __init__(self, input):
44 self.input = input
44 self.input = input
45
45
46 def __enter__(self):
46 def __enter__(self):
47 self.real_stdin = sys.stdin
47 self.real_stdin = sys.stdin
48 sys.stdin = _FakeInput(self.input)
48 sys.stdin = _FakeInput(self.input)
49
49
50 def __exit__(self, *exc):
50 def __exit__(self, *exc):
51 sys.stdin = self.real_stdin
51 sys.stdin = self.real_stdin
52
52
53 #-----------------------------------------------------------------------------
53 #-----------------------------------------------------------------------------
54 # Tests
54 # Tests
55 #-----------------------------------------------------------------------------
55 #-----------------------------------------------------------------------------
56
56
57 def test_ipdb_magics():
57 def test_ipdb_magics():
58 '''Test calling some IPython magics from ipdb.
58 '''Test calling some IPython magics from ipdb.
59
59
60 First, set up some test functions and classes which we can inspect.
60 First, set up some test functions and classes which we can inspect.
61
61
62 >>> class ExampleClass(object):
62 >>> class ExampleClass(object):
63 ... """Docstring for ExampleClass."""
63 ... """Docstring for ExampleClass."""
64 ... def __init__(self):
64 ... def __init__(self):
65 ... """Docstring for ExampleClass.__init__"""
65 ... """Docstring for ExampleClass.__init__"""
66 ... pass
66 ... pass
67 ... def __str__(self):
67 ... def __str__(self):
68 ... return "ExampleClass()"
68 ... return "ExampleClass()"
69
69
70 >>> def example_function(x, y, z="hello"):
70 >>> def example_function(x, y, z="hello"):
71 ... """Docstring for example_function."""
71 ... """Docstring for example_function."""
72 ... pass
72 ... pass
73
73
74 >>> old_trace = sys.gettrace()
74 >>> old_trace = sys.gettrace()
75
75
76 Create a function which triggers ipdb.
76 Create a function which triggers ipdb.
77
77
78 >>> def trigger_ipdb():
78 >>> def trigger_ipdb():
79 ... a = ExampleClass()
79 ... a = ExampleClass()
80 ... debugger.Pdb().set_trace()
80 ... debugger.Pdb().set_trace()
81
81
82 >>> with PdbTestInput([
82 >>> with PdbTestInput([
83 ... 'pdef example_function',
83 ... 'pdef example_function',
84 ... 'pdoc ExampleClass',
84 ... 'pdoc ExampleClass',
85 ... 'up',
85 ... 'up',
86 ... 'down',
86 ... 'down',
87 ... 'list',
87 ... 'list',
88 ... 'pinfo a',
88 ... 'pinfo a',
89 ... 'll',
89 ... 'll',
90 ... 'continue',
90 ... 'continue',
91 ... ]):
91 ... ]):
92 ... trigger_ipdb()
92 ... trigger_ipdb()
93 --Return--
93 --Return--
94 None
94 None
95 > <doctest ...>(3)trigger_ipdb()
95 > <doctest ...>(3)trigger_ipdb()
96 1 def trigger_ipdb():
96 1 def trigger_ipdb():
97 2 a = ExampleClass()
97 2 a = ExampleClass()
98 ----> 3 debugger.Pdb().set_trace()
98 ----> 3 debugger.Pdb().set_trace()
99 <BLANKLINE>
99 <BLANKLINE>
100 ipdb> pdef example_function
100 ipdb> pdef example_function
101 example_function(x, y, z='hello')
101 example_function(x, y, z='hello')
102 ipdb> pdoc ExampleClass
102 ipdb> pdoc ExampleClass
103 Class docstring:
103 Class docstring:
104 Docstring for ExampleClass.
104 Docstring for ExampleClass.
105 Init docstring:
105 Init docstring:
106 Docstring for ExampleClass.__init__
106 Docstring for ExampleClass.__init__
107 ipdb> up
107 ipdb> up
108 > <doctest ...>(11)<module>()
108 > <doctest ...>(11)<module>()
109 7 'pinfo a',
109 7 'pinfo a',
110 8 'll',
110 8 'll',
111 9 'continue',
111 9 'continue',
112 10 ]):
112 10 ]):
113 ---> 11 trigger_ipdb()
113 ---> 11 trigger_ipdb()
114 <BLANKLINE>
114 <BLANKLINE>
115 ipdb> down
115 ipdb> down
116 None
116 None
117 > <doctest ...>(3)trigger_ipdb()
117 > <doctest ...>(3)trigger_ipdb()
118 1 def trigger_ipdb():
118 1 def trigger_ipdb():
119 2 a = ExampleClass()
119 2 a = ExampleClass()
120 ----> 3 debugger.Pdb().set_trace()
120 ----> 3 debugger.Pdb().set_trace()
121 <BLANKLINE>
121 <BLANKLINE>
122 ipdb> list
122 ipdb> list
123 1 def trigger_ipdb():
123 1 def trigger_ipdb():
124 2 a = ExampleClass()
124 2 a = ExampleClass()
125 ----> 3 debugger.Pdb().set_trace()
125 ----> 3 debugger.Pdb().set_trace()
126 <BLANKLINE>
126 <BLANKLINE>
127 ipdb> pinfo a
127 ipdb> pinfo a
128 Type: ExampleClass
128 Type: ExampleClass
129 String form: ExampleClass()
129 String form: ExampleClass()
130 Namespace: Local...
130 Namespace: Local...
131 Docstring: Docstring for ExampleClass.
131 Docstring: Docstring for ExampleClass.
132 Init docstring: Docstring for ExampleClass.__init__
132 Init docstring: Docstring for ExampleClass.__init__
133 ipdb> ll
133 ipdb> ll
134 1 def trigger_ipdb():
134 1 def trigger_ipdb():
135 2 a = ExampleClass()
135 2 a = ExampleClass()
136 ----> 3 debugger.Pdb().set_trace()
136 ----> 3 debugger.Pdb().set_trace()
137 <BLANKLINE>
137 <BLANKLINE>
138 ipdb> continue
138 ipdb> continue
139
139
140 Restore previous trace function, e.g. for coverage.py
140 Restore previous trace function, e.g. for coverage.py
141
141
142 >>> sys.settrace(old_trace)
142 >>> sys.settrace(old_trace)
143 '''
143 '''
144
144
145 def test_ipdb_magics2():
145 def test_ipdb_magics2():
146 '''Test ipdb with a very short function.
146 '''Test ipdb with a very short function.
147
147
148 >>> old_trace = sys.gettrace()
148 >>> old_trace = sys.gettrace()
149
149
150 >>> def bar():
150 >>> def bar():
151 ... pass
151 ... pass
152
152
153 Run ipdb.
153 Run ipdb.
154
154
155 >>> with PdbTestInput([
155 >>> with PdbTestInput([
156 ... 'continue',
156 ... 'continue',
157 ... ]):
157 ... ]):
158 ... debugger.Pdb().runcall(bar)
158 ... debugger.Pdb().runcall(bar)
159 > <doctest ...>(2)bar()
159 > <doctest ...>(2)bar()
160 1 def bar():
160 1 def bar():
161 ----> 2 pass
161 ----> 2 pass
162 <BLANKLINE>
162 <BLANKLINE>
163 ipdb> continue
163 ipdb> continue
164
164
165 Restore previous trace function, e.g. for coverage.py
165 Restore previous trace function, e.g. for coverage.py
166
166
167 >>> sys.settrace(old_trace)
167 >>> sys.settrace(old_trace)
168 '''
168 '''
169
169
170 def can_quit():
170 def can_quit():
171 '''Test that quit works in ipdb
171 '''Test that quit works in ipdb
172
172
173 >>> old_trace = sys.gettrace()
173 >>> old_trace = sys.gettrace()
174
174
175 >>> def bar():
175 >>> def bar():
176 ... pass
176 ... pass
177
177
178 >>> with PdbTestInput([
178 >>> with PdbTestInput([
179 ... 'quit',
179 ... 'quit',
180 ... ]):
180 ... ]):
181 ... debugger.Pdb().runcall(bar)
181 ... debugger.Pdb().runcall(bar)
182 > <doctest ...>(2)bar()
182 > <doctest ...>(2)bar()
183 1 def bar():
183 1 def bar():
184 ----> 2 pass
184 ----> 2 pass
185 <BLANKLINE>
185 <BLANKLINE>
186 ipdb> quit
186 ipdb> quit
187
187
188 Restore previous trace function, e.g. for coverage.py
188 Restore previous trace function, e.g. for coverage.py
189
189
190 >>> sys.settrace(old_trace)
190 >>> sys.settrace(old_trace)
191 '''
191 '''
192
192
193
193
194 def can_exit():
194 def can_exit():
195 '''Test that exit works in ipdb
195 '''Test that exit works in ipdb
196
196
197 >>> old_trace = sys.gettrace()
197 >>> old_trace = sys.gettrace()
198
198
199 >>> def bar():
199 >>> def bar():
200 ... pass
200 ... pass
201
201
202 >>> with PdbTestInput([
202 >>> with PdbTestInput([
203 ... 'exit',
203 ... 'exit',
204 ... ]):
204 ... ]):
205 ... debugger.Pdb().runcall(bar)
205 ... debugger.Pdb().runcall(bar)
206 > <doctest ...>(2)bar()
206 > <doctest ...>(2)bar()
207 1 def bar():
207 1 def bar():
208 ----> 2 pass
208 ----> 2 pass
209 <BLANKLINE>
209 <BLANKLINE>
210 ipdb> exit
210 ipdb> exit
211
211
212 Restore previous trace function, e.g. for coverage.py
212 Restore previous trace function, e.g. for coverage.py
213
213
214 >>> sys.settrace(old_trace)
214 >>> sys.settrace(old_trace)
215 '''
215 '''
216
216
217
217
218 def test_interruptible_core_debugger():
218 def test_interruptible_core_debugger():
219 """The debugger can be interrupted.
219 """The debugger can be interrupted.
220
220
221 The presumption is there is some mechanism that causes a KeyboardInterrupt
221 The presumption is there is some mechanism that causes a KeyboardInterrupt
222 (this is implemented in ipykernel). We want to ensure the
222 (this is implemented in ipykernel). We want to ensure the
223 KeyboardInterrupt causes debugging to cease.
223 KeyboardInterrupt causes debugging to cease.
224 """
224 """
225 def raising_input(msg="", called=[0]):
225 def raising_input(msg="", called=[0]):
226 called[0] += 1
226 called[0] += 1
227 assert called[0] == 1, "input() should only be called once!"
227 assert called[0] == 1, "input() should only be called once!"
228 raise KeyboardInterrupt()
228 raise KeyboardInterrupt()
229
229
230 tracer_orig = sys.gettrace()
230 tracer_orig = sys.gettrace()
231 try:
231 try:
232 with patch.object(builtins, "input", raising_input):
232 with patch.object(builtins, "input", raising_input):
233 debugger.InterruptiblePdb().set_trace()
233 debugger.InterruptiblePdb().set_trace()
234 # The way this test will fail is by set_trace() never exiting,
234 # The way this test will fail is by set_trace() never exiting,
235 # resulting in a timeout by the test runner. The alternative
235 # resulting in a timeout by the test runner. The alternative
236 # implementation would involve a subprocess, but that adds issues
236 # implementation would involve a subprocess, but that adds issues
237 # with interrupting subprocesses that are rather complex, so it's
237 # with interrupting subprocesses that are rather complex, so it's
238 # simpler just to do it this way.
238 # simpler just to do it this way.
239 finally:
239 finally:
240 # restore the original trace function
240 # restore the original trace function
241 sys.settrace(tracer_orig)
241 sys.settrace(tracer_orig)
242
242
243
243
244 @skip_win32
244 @skip_win32
245 def test_xmode_skip():
245 def test_xmode_skip():
246 """that xmode skip frames
246 """that xmode skip frames
247
247
248 Not written as a doctest, as pytest does not run doctests.
248 Not written as a doctest, as pytest does not run doctests.
249 """
249 """
250 import pexpect
250 import pexpect
251 env = os.environ.copy()
251 env = os.environ.copy()
252 env["IPY_TEST_SIMPLE_PROMPT"] = "1"
252 env["IPY_TEST_SIMPLE_PROMPT"] = "1"
253
253
254 child = pexpect.spawn(
254 child = pexpect.spawn(
255 sys.executable, ["-m", "IPython", "--colors=nocolor"], env=env
255 sys.executable, ["-m", "IPython", "--colors=nocolor"], env=env
256 )
256 )
257 child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE
257 child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE
258
258
259 child.expect("IPython")
259 child.expect("IPython")
260 child.expect("\n")
260 child.expect("\n")
261 child.expect_exact("In [1]")
261 child.expect_exact("In [1]")
262
262
263 block = dedent(
263 block = dedent(
264 """
264 """
265 def f():
265 def f():
266 __tracebackhide__ = True
266 __tracebackhide__ = True
267 g()
267 g()
268
268
269 def g():
269 def g():
270 raise ValueError
270 raise ValueError
271
271
272 f()
272 f()
273 """
273 """
274 )
274 )
275
275
276 for line in block.splitlines():
276 for line in block.splitlines():
277 child.sendline(line)
277 child.sendline(line)
278 child.expect_exact(line)
278 child.expect_exact(line)
279 child.expect_exact("skipping")
279 child.expect_exact("skipping")
280
280
281 block = dedent(
281 block = dedent(
282 """
282 """
283 def f():
283 def f():
284 __tracebackhide__ = True
284 __tracebackhide__ = True
285 g()
285 g()
286
286
287 def g():
287 def g():
288 from IPython.core.debugger import set_trace
288 from IPython.core.debugger import set_trace
289 set_trace()
289 set_trace()
290
290
291 f()
291 f()
292 """
292 """
293 )
293 )
294
294
295 for line in block.splitlines():
295 for line in block.splitlines():
296 child.sendline(line)
296 child.sendline(line)
297 child.expect_exact(line)
297 child.expect_exact(line)
298
298
299 child.expect("ipdb>")
299 child.expect("ipdb>")
300 child.sendline("w")
300 child.sendline("w")
301 child.expect("hidden")
301 child.expect("hidden")
302 child.expect("ipdb>")
302 child.expect("ipdb>")
303 child.sendline("skip_hidden false")
303 child.sendline("skip_hidden false")
304 child.sendline("w")
304 child.sendline("w")
305 child.expect("__traceba")
305 child.expect("__traceba")
306 child.expect("ipdb>")
306 child.expect("ipdb>")
307
307
308 child.close()
308 child.close()
309
309
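For context, the behaviour exercised above can be reproduced by hand in an interactive IPython session; the sketch below is illustrative, with the resulting output paraphrased in comments rather than quoted exactly.

    def f():
        __tracebackhide__ = True  # marks this frame as hidden
        g()

    def g():
        raise ValueError

    f()
    # IPython's traceback reports that hidden frames were skipped.  Inside
    # %debug, `w` hides them as well, and `skip_hidden false` followed by
    # `w` makes the __tracebackhide__ frame visible again.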
310
310
311 skip_decorators_blocks = (
311 skip_decorators_blocks = (
312 """
312 """
313 def helpers_helper():
313 def helpers_helper():
314 pass # should not stop here except breakpoint
314 pass # should not stop here except breakpoint
315 """,
315 """,
316 """
316 """
317 def helper_1():
317 def helper_1():
318 helpers_helper() # should not stop here
318 helpers_helper() # should not stop here
319 """,
319 """,
320 """
320 """
321 def helper_2():
321 def helper_2():
322 pass # should not stop here
322 pass # should not stop here
323 """,
323 """,
324 """
324 """
325 def pdb_skipped_decorator2(function):
325 def pdb_skipped_decorator2(function):
326 def wrapped_fn(*args, **kwargs):
326 def wrapped_fn(*args, **kwargs):
327 __debuggerskip__ = True
327 __debuggerskip__ = True
328 helper_2()
328 helper_2()
329 __debuggerskip__ = False
329 __debuggerskip__ = False
330 result = function(*args, **kwargs)
330 result = function(*args, **kwargs)
331 __debuggerskip__ = True
331 __debuggerskip__ = True
332 helper_2()
332 helper_2()
333 return result
333 return result
334 return wrapped_fn
334 return wrapped_fn
335 """,
335 """,
336 """
336 """
337 def pdb_skipped_decorator(function):
337 def pdb_skipped_decorator(function):
338 def wrapped_fn(*args, **kwargs):
338 def wrapped_fn(*args, **kwargs):
339 __debuggerskip__ = True
339 __debuggerskip__ = True
340 helper_1()
340 helper_1()
341 __debuggerskip__ = False
341 __debuggerskip__ = False
342 result = function(*args, **kwargs)
342 result = function(*args, **kwargs)
343 __debuggerskip__ = True
343 __debuggerskip__ = True
344 helper_2()
344 helper_2()
345 return result
345 return result
346 return wrapped_fn
346 return wrapped_fn
347 """,
347 """,
348 """
348 """
349 @pdb_skipped_decorator
349 @pdb_skipped_decorator
350 @pdb_skipped_decorator2
350 @pdb_skipped_decorator2
351 def bar(x, y):
351 def bar(x, y):
352 return x * y
352 return x * y
353 """,
353 """,
354 """import IPython.terminal.debugger as ipdb""",
354 """import IPython.terminal.debugger as ipdb""",
355 """
355 """
356 def f():
356 def f():
357 ipdb.set_trace()
357 ipdb.set_trace()
358 bar(3, 4)
358 bar(3, 4)
359 """,
359 """,
360 """
360 """
361 f()
361 f()
362 """,
362 """,
363 )
363 )
364
364
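These blocks set up the `__debuggerskip__` convention used by the tests below: while a frame's local `__debuggerskip__` is True, the debugger steps over it and the helpers it calls, so a single `step` lands the user in the decorated function. A rough, hedged summary of the sessions the tests script:

    # Once f() stops at set_trace(), per the tests below (abbreviated):
    #   ipdb> step      # skips the wrapped_fn bookkeeping, enters the decorated call
    #   ipdb> step      # stops at `return x * y` inside bar
    # With `skip_predicates debuggerskip False`, the same commands stop on every
    # `__debuggerskip__` assignment in the wrappers instead, and an explicit
    # breakpoint always takes precedence over the skip.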
365
365
366 def _decorator_skip_setup():
366 def _decorator_skip_setup():
367 import pexpect
367 import pexpect
368
368
369 env = os.environ.copy()
369 env = os.environ.copy()
370 env["IPY_TEST_SIMPLE_PROMPT"] = "1"
370 env["IPY_TEST_SIMPLE_PROMPT"] = "1"
371 env["PROMPT_TOOLKIT_NO_CPR"] = "1"
371 env["PROMPT_TOOLKIT_NO_CPR"] = "1"
372
372
373 child = pexpect.spawn(
373 child = pexpect.spawn(
374 sys.executable, ["-m", "IPython", "--colors=nocolor"], env=env
374 sys.executable, ["-m", "IPython", "--colors=nocolor"], env=env
375 )
375 )
376 child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE
376 child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE
377
377
378 child.expect("IPython")
378 child.expect("IPython")
379 child.expect("\n")
379 child.expect("\n")
380
380
381 child.timeout = 5 * IPYTHON_TESTING_TIMEOUT_SCALE
381 child.timeout = 5 * IPYTHON_TESTING_TIMEOUT_SCALE
382 child.str_last_chars = 500
382 child.str_last_chars = 500
383
383
384 dedented_blocks = [dedent(b).strip() for b in skip_decorators_blocks]
384 dedented_blocks = [dedent(b).strip() for b in skip_decorators_blocks]
385 in_prompt_number = 1
385 in_prompt_number = 1
386 for cblock in dedented_blocks:
386 for cblock in dedented_blocks:
387 child.expect_exact(f"In [{in_prompt_number}]:")
387 child.expect_exact(f"In [{in_prompt_number}]:")
388 in_prompt_number += 1
388 in_prompt_number += 1
389 for line in cblock.splitlines():
389 for line in cblock.splitlines():
390 child.sendline(line)
390 child.sendline(line)
391 child.expect_exact(line)
391 child.expect_exact(line)
392 child.sendline("")
392 child.sendline("")
393 return child
393 return child
394
394
395
395
396 @pytest.mark.skip(reason="recently fail for unknown reason on CI")
396 @pytest.mark.skip(reason="recently fail for unknown reason on CI")
397 @skip_win32
397 @skip_win32
398 def test_decorator_skip():
398 def test_decorator_skip():
399 """test that decorator frames can be skipped."""
399 """test that decorator frames can be skipped."""
400
400
401 child = _decorator_skip_setup()
401 child = _decorator_skip_setup()
402
402
403 child.expect_exact("ipython-input-8")
403 child.expect_exact("ipython-input-8")
404 child.expect_exact("3 bar(3, 4)")
404 child.expect_exact("3 bar(3, 4)")
405 child.expect("ipdb>")
405 child.expect("ipdb>")
406
406
407 child.expect("ipdb>")
407 child.expect("ipdb>")
408 child.sendline("step")
408 child.sendline("step")
409 child.expect_exact("step")
409 child.expect_exact("step")
410 child.expect_exact("--Call--")
410 child.expect_exact("--Call--")
411 child.expect_exact("ipython-input-6")
411 child.expect_exact("ipython-input-6")
412
412
413 child.expect_exact("1 @pdb_skipped_decorator")
413 child.expect_exact("1 @pdb_skipped_decorator")
414
414
415 child.sendline("s")
415 child.sendline("s")
416 child.expect_exact("return x * y")
416 child.expect_exact("return x * y")
417
417
418 child.close()
418 child.close()
419
419
420
420
421 @pytest.mark.skip(reason="recently fail for unknown reason on CI")
421 @pytest.mark.skip(reason="recently fail for unknown reason on CI")
422 @pytest.mark.skipif(platform.python_implementation() == "PyPy", reason="issues on PyPy")
422 @pytest.mark.skipif(platform.python_implementation() == "PyPy", reason="issues on PyPy")
423 @skip_win32
423 @skip_win32
424 def test_decorator_skip_disabled():
424 def test_decorator_skip_disabled():
425 """test that decorator frame skipping can be disabled"""
425 """test that decorator frame skipping can be disabled"""
426
426
427 child = _decorator_skip_setup()
427 child = _decorator_skip_setup()
428
428
429 child.expect_exact("3 bar(3, 4)")
429 child.expect_exact("3 bar(3, 4)")
430
430
431 for input_, expected in [
431 for input_, expected in [
432 ("skip_predicates debuggerskip False", ""),
432 ("skip_predicates debuggerskip False", ""),
433 ("skip_predicates", "debuggerskip : False"),
433 ("skip_predicates", "debuggerskip : False"),
434 ("step", "---> 2 def wrapped_fn"),
434 ("step", "---> 2 def wrapped_fn"),
435 ("step", "----> 3 __debuggerskip__"),
435 ("step", "----> 3 __debuggerskip__"),
436 ("step", "----> 4 helper_1()"),
436 ("step", "----> 4 helper_1()"),
437 ("step", "---> 1 def helper_1():"),
437 ("step", "---> 1 def helper_1():"),
438 ("next", "----> 2 helpers_helper()"),
438 ("next", "----> 2 helpers_helper()"),
439 ("next", "--Return--"),
439 ("next", "--Return--"),
440 ("next", "----> 5 __debuggerskip__ = False"),
440 ("next", "----> 5 __debuggerskip__ = False"),
441 ]:
441 ]:
442 child.expect("ipdb>")
442 child.expect("ipdb>")
443 child.sendline(input_)
443 child.sendline(input_)
444 child.expect_exact(input_)
444 child.expect_exact(input_)
445 child.expect_exact(expected)
445 child.expect_exact(expected)
446
446
447 child.close()
447 child.close()
448
448
449
449
450 @pytest.mark.skipif(platform.python_implementation() == "PyPy", reason="issues on PyPy")
450 @pytest.mark.skipif(platform.python_implementation() == "PyPy", reason="issues on PyPy")
451 @skip_win32
451 @skip_win32
452 def test_decorator_skip_with_breakpoint():
452 def test_decorator_skip_with_breakpoint():
453 """test that decorator frame skipping can be disabled"""
453 """test that decorator frame skipping can be disabled"""
454
454
455 import pexpect
455 import pexpect
456
456
457 env = os.environ.copy()
457 env = os.environ.copy()
458 env["IPY_TEST_SIMPLE_PROMPT"] = "1"
458 env["IPY_TEST_SIMPLE_PROMPT"] = "1"
459 env["PROMPT_TOOLKIT_NO_CPR"] = "1"
459 env["PROMPT_TOOLKIT_NO_CPR"] = "1"
460
460
461 child = pexpect.spawn(
461 child = pexpect.spawn(
462 sys.executable, ["-m", "IPython", "--colors=nocolor"], env=env
462 sys.executable, ["-m", "IPython", "--colors=nocolor"], env=env
463 )
463 )
464 child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE
464 child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE
465 child.str_last_chars = 500
465 child.str_last_chars = 500
466
466
467 child.expect("IPython")
467 child.expect("IPython")
468 child.expect("\n")
468 child.expect("\n")
469
469
470 child.timeout = 5 * IPYTHON_TESTING_TIMEOUT_SCALE
470 child.timeout = 5 * IPYTHON_TESTING_TIMEOUT_SCALE
471
471
472 ### we need a filename, so we need to exec the full block with a filename
472 ### we need a filename, so we need to exec the full block with a filename
473 with NamedTemporaryFile(suffix=".py", dir=".", delete=True) as tf:
473 with NamedTemporaryFile(suffix=".py", dir=".", delete=True) as tf:
474
475 name = tf.name[:-3].split("/")[-1]
474 name = tf.name[:-3].split("/")[-1]
476 tf.write("\n".join([dedent(x) for x in skip_decorators_blocks[:-1]]).encode())
475 tf.write("\n".join([dedent(x) for x in skip_decorators_blocks[:-1]]).encode())
477 tf.flush()
476 tf.flush()
478 codeblock = f"from {name} import f"
477 codeblock = f"from {name} import f"
479
478
480 dedented_blocks = [
479 dedented_blocks = [
481 codeblock,
480 codeblock,
482 "f()",
481 "f()",
483 ]
482 ]
484
483
485 in_prompt_number = 1
484 in_prompt_number = 1
486 for cblock in dedented_blocks:
485 for cblock in dedented_blocks:
487 child.expect_exact(f"In [{in_prompt_number}]:")
486 child.expect_exact(f"In [{in_prompt_number}]:")
488 in_prompt_number += 1
487 in_prompt_number += 1
489 for line in cblock.splitlines():
488 for line in cblock.splitlines():
490 child.sendline(line)
489 child.sendline(line)
491 child.expect_exact(line)
490 child.expect_exact(line)
492 child.sendline("")
491 child.sendline("")
493
492
494 # as the filename does not exist, we'll rely on the filename prompt
493 # as the filename does not exist, we'll rely on the filename prompt
495 child.expect_exact("47 bar(3, 4)")
494 child.expect_exact("47 bar(3, 4)")
496
495
497 for input_, expected in [
496 for input_, expected in [
498 (f"b {name}.py:3", ""),
497 (f"b {name}.py:3", ""),
499 ("step", "1---> 3 pass # should not stop here except"),
498 ("step", "1---> 3 pass # should not stop here except"),
500 ("step", "---> 38 @pdb_skipped_decorator"),
499 ("step", "---> 38 @pdb_skipped_decorator"),
501 ("continue", ""),
500 ("continue", ""),
502 ]:
501 ]:
503 child.expect("ipdb>")
502 child.expect("ipdb>")
504 child.sendline(input_)
503 child.sendline(input_)
505 child.expect_exact(input_)
504 child.expect_exact(input_)
506 child.expect_exact(expected)
505 child.expect_exact(expected)
507
506
508 child.close()
507 child.close()
509
508
510
509
511 @skip_win32
510 @skip_win32
512 def test_where_erase_value():
511 def test_where_erase_value():
513 """Test that `where` does not access f_locals and erase values."""
512 """Test that `where` does not access f_locals and erase values."""
514 import pexpect
513 import pexpect
515
514
516 env = os.environ.copy()
515 env = os.environ.copy()
517 env["IPY_TEST_SIMPLE_PROMPT"] = "1"
516 env["IPY_TEST_SIMPLE_PROMPT"] = "1"
518
517
519 child = pexpect.spawn(
518 child = pexpect.spawn(
520 sys.executable, ["-m", "IPython", "--colors=nocolor"], env=env
519 sys.executable, ["-m", "IPython", "--colors=nocolor"], env=env
521 )
520 )
522 child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE
521 child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE
523
522
524 child.expect("IPython")
523 child.expect("IPython")
525 child.expect("\n")
524 child.expect("\n")
526 child.expect_exact("In [1]")
525 child.expect_exact("In [1]")
527
526
528 block = dedent(
527 block = dedent(
529 """
528 """
530 def simple_f():
529 def simple_f():
531 myvar = 1
530 myvar = 1
532 print(myvar)
531 print(myvar)
533 1/0
532 1/0
534 print(myvar)
533 print(myvar)
535 simple_f() """
534 simple_f() """
536 )
535 )
537
536
538 for line in block.splitlines():
537 for line in block.splitlines():
539 child.sendline(line)
538 child.sendline(line)
540 child.expect_exact(line)
539 child.expect_exact(line)
541 child.expect_exact("ZeroDivisionError")
540 child.expect_exact("ZeroDivisionError")
542 child.expect_exact("In [2]:")
541 child.expect_exact("In [2]:")
543
542
544 child.sendline("%debug")
543 child.sendline("%debug")
545
544
546 ##
545 ##
547 child.expect("ipdb>")
546 child.expect("ipdb>")
548
547
549 child.sendline("myvar")
548 child.sendline("myvar")
550 child.expect("1")
549 child.expect("1")
551
550
552 ##
551 ##
553 child.expect("ipdb>")
552 child.expect("ipdb>")
554
553
555 child.sendline("myvar = 2")
554 child.sendline("myvar = 2")
556
555
557 ##
556 ##
558 child.expect_exact("ipdb>")
557 child.expect_exact("ipdb>")
559
558
560 child.sendline("myvar")
559 child.sendline("myvar")
561
560
562 child.expect_exact("2")
561 child.expect_exact("2")
563
562
564 ##
563 ##
565 child.expect("ipdb>")
564 child.expect("ipdb>")
566 child.sendline("where")
565 child.sendline("where")
567
566
568 ##
567 ##
569 child.expect("ipdb>")
568 child.expect("ipdb>")
570 child.sendline("myvar")
569 child.sendline("myvar")
571
570
572 child.expect_exact("2")
571 child.expect_exact("2")
573 child.expect("ipdb>")
572 child.expect("ipdb>")
574
573
575 child.close()
574 child.close()
@@ -1,1513 +1,1512 b''
1 # -*- coding: utf-8 -*-
1 # -*- coding: utf-8 -*-
2 """Tests for various magic functions."""
2 """Tests for various magic functions."""
3
3
4 import gc
4 import gc
5 import io
5 import io
6 import os
6 import os
7 import re
7 import re
8 import shlex
8 import shlex
9 import sys
9 import sys
10 import warnings
10 import warnings
11 from importlib import invalidate_caches
11 from importlib import invalidate_caches
12 from io import StringIO
12 from io import StringIO
13 from pathlib import Path
13 from pathlib import Path
14 from textwrap import dedent
14 from textwrap import dedent
15 from unittest import TestCase, mock
15 from unittest import TestCase, mock
16
16
17 import pytest
17 import pytest
18
18
19 from IPython import get_ipython
19 from IPython import get_ipython
20 from IPython.core import magic
20 from IPython.core import magic
21 from IPython.core.error import UsageError
21 from IPython.core.error import UsageError
22 from IPython.core.magic import (
22 from IPython.core.magic import (
23 Magics,
23 Magics,
24 cell_magic,
24 cell_magic,
25 line_magic,
25 line_magic,
26 magics_class,
26 magics_class,
27 register_cell_magic,
27 register_cell_magic,
28 register_line_magic,
28 register_line_magic,
29 )
29 )
30 from IPython.core.magics import code, execution, logging, osm, script
30 from IPython.core.magics import code, execution, logging, osm, script
31 from IPython.testing import decorators as dec
31 from IPython.testing import decorators as dec
32 from IPython.testing import tools as tt
32 from IPython.testing import tools as tt
33 from IPython.utils.io import capture_output
33 from IPython.utils.io import capture_output
34 from IPython.utils.process import find_cmd
34 from IPython.utils.process import find_cmd
35 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
35 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
36 from IPython.utils.syspathcontext import prepended_to_syspath
36 from IPython.utils.syspathcontext import prepended_to_syspath
37
37
38 from .test_debugger import PdbTestInput
38 from .test_debugger import PdbTestInput
39
39
40 from tempfile import NamedTemporaryFile
40 from tempfile import NamedTemporaryFile
41
41
42 @magic.magics_class
42 @magic.magics_class
43 class DummyMagics(magic.Magics): pass
43 class DummyMagics(magic.Magics): pass
44
44
45 def test_extract_code_ranges():
45 def test_extract_code_ranges():
46 instr = "1 3 5-6 7-9 10:15 17: :10 10- -13 :"
46 instr = "1 3 5-6 7-9 10:15 17: :10 10- -13 :"
47 expected = [
47 expected = [
48 (0, 1),
48 (0, 1),
49 (2, 3),
49 (2, 3),
50 (4, 6),
50 (4, 6),
51 (6, 9),
51 (6, 9),
52 (9, 14),
52 (9, 14),
53 (16, None),
53 (16, None),
54 (None, 9),
54 (None, 9),
55 (9, None),
55 (9, None),
56 (None, 13),
56 (None, 13),
57 (None, None),
57 (None, None),
58 ]
58 ]
59 actual = list(code.extract_code_ranges(instr))
59 actual = list(code.extract_code_ranges(instr))
60 assert actual == expected
60 assert actual == expected
61
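For readers unfamiliar with the history-range syntax, the expected pairs above encode that `N-M` is a one-based inclusive range while `N:M` is a half-open slice; both are normalised to zero-based `(start, stop)` tuples. A small standalone check, with values taken from the test above:

    from IPython.core.magics import code

    # "5-6": lines 5 and 6, one-based and inclusive   -> (4, 6)
    # "10:15": slice-style, the stop line is excluded -> (9, 14)
    assert list(code.extract_code_ranges("5-6 10:15")) == [(4, 6), (9, 14)]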
61
62 def test_extract_symbols():
62 def test_extract_symbols():
63 source = """import foo\na = 10\ndef b():\n return 42\n\n\nclass A: pass\n\n\n"""
63 source = """import foo\na = 10\ndef b():\n return 42\n\n\nclass A: pass\n\n\n"""
64 symbols_args = ["a", "b", "A", "A,b", "A,a", "z"]
64 symbols_args = ["a", "b", "A", "A,b", "A,a", "z"]
65 expected = [([], ['a']),
65 expected = [([], ['a']),
66 (["def b():\n return 42\n"], []),
66 (["def b():\n return 42\n"], []),
67 (["class A: pass\n"], []),
67 (["class A: pass\n"], []),
68 (["class A: pass\n", "def b():\n return 42\n"], []),
68 (["class A: pass\n", "def b():\n return 42\n"], []),
69 (["class A: pass\n"], ['a']),
69 (["class A: pass\n"], ['a']),
70 ([], ['z'])]
70 ([], ['z'])]
71 for symbols, exp in zip(symbols_args, expected):
71 for symbols, exp in zip(symbols_args, expected):
72 assert code.extract_symbols(source, symbols) == exp
72 assert code.extract_symbols(source, symbols) == exp
73
73
74
74
75 def test_extract_symbols_raises_exception_with_non_python_code():
75 def test_extract_symbols_raises_exception_with_non_python_code():
76 source = ("=begin A Ruby program :)=end\n"
76 source = ("=begin A Ruby program :)=end\n"
77 "def hello\n"
77 "def hello\n"
78 "puts 'Hello world'\n"
78 "puts 'Hello world'\n"
79 "end")
79 "end")
80 with pytest.raises(SyntaxError):
80 with pytest.raises(SyntaxError):
81 code.extract_symbols(source, "hello")
81 code.extract_symbols(source, "hello")
82
82
83
83
84 def test_magic_not_found():
84 def test_magic_not_found():
85 # magic not found raises UsageError
85 # magic not found raises UsageError
86 with pytest.raises(UsageError):
86 with pytest.raises(UsageError):
87 _ip.run_line_magic("doesntexist", "")
87 _ip.run_line_magic("doesntexist", "")
88
88
89 # ensure result isn't success when a magic isn't found
89 # ensure result isn't success when a magic isn't found
90 result = _ip.run_cell('%doesntexist')
90 result = _ip.run_cell('%doesntexist')
91 assert isinstance(result.error_in_exec, UsageError)
91 assert isinstance(result.error_in_exec, UsageError)
92
92
93
93
94 def test_cell_magic_not_found():
94 def test_cell_magic_not_found():
95 # magic not found raises UsageError
95 # magic not found raises UsageError
96 with pytest.raises(UsageError):
96 with pytest.raises(UsageError):
97 _ip.run_cell_magic('doesntexist', 'line', 'cell')
97 _ip.run_cell_magic('doesntexist', 'line', 'cell')
98
98
99 # ensure result isn't success when a magic isn't found
99 # ensure result isn't success when a magic isn't found
100 result = _ip.run_cell('%%doesntexist')
100 result = _ip.run_cell('%%doesntexist')
101 assert isinstance(result.error_in_exec, UsageError)
101 assert isinstance(result.error_in_exec, UsageError)
102
102
103
103
104 def test_magic_error_status():
104 def test_magic_error_status():
105 def fail(shell):
105 def fail(shell):
106 1/0
106 1/0
107 _ip.register_magic_function(fail)
107 _ip.register_magic_function(fail)
108 result = _ip.run_cell('%fail')
108 result = _ip.run_cell('%fail')
109 assert isinstance(result.error_in_exec, ZeroDivisionError)
109 assert isinstance(result.error_in_exec, ZeroDivisionError)
110
110
111
111
112 def test_config():
112 def test_config():
113 """ test that config magic does not raise
113 """ test that config magic does not raise
114 can happen if Configurable init is moved too early into
114 can happen if Configurable init is moved too early into
115 Magics.__init__ as then a Config object will be registered as a
115 Magics.__init__ as then a Config object will be registered as a
116 magic.
116 magic.
117 """
117 """
118 ## should not raise.
118 ## should not raise.
119 _ip.run_line_magic("config", "")
119 _ip.run_line_magic("config", "")
120
120
121
121
122 def test_config_available_configs():
122 def test_config_available_configs():
123 """ test that config magic prints available configs in unique and
123 """ test that config magic prints available configs in unique and
124 sorted order. """
124 sorted order. """
125 with capture_output() as captured:
125 with capture_output() as captured:
126 _ip.run_line_magic("config", "")
126 _ip.run_line_magic("config", "")
127
127
128 stdout = captured.stdout
128 stdout = captured.stdout
129 config_classes = stdout.strip().split('\n')[1:]
129 config_classes = stdout.strip().split('\n')[1:]
130 assert config_classes == sorted(set(config_classes))
130 assert config_classes == sorted(set(config_classes))
131
131
132 def test_config_print_class():
132 def test_config_print_class():
133 """ test that config with a classname prints the class's options. """
133 """ test that config with a classname prints the class's options. """
134 with capture_output() as captured:
134 with capture_output() as captured:
135 _ip.run_line_magic("config", "TerminalInteractiveShell")
135 _ip.run_line_magic("config", "TerminalInteractiveShell")
136
136
137 stdout = captured.stdout
137 stdout = captured.stdout
138 assert re.match(
138 assert re.match(
139 "TerminalInteractiveShell.* options", stdout.splitlines()[0]
139 "TerminalInteractiveShell.* options", stdout.splitlines()[0]
140 ), f"{stdout}\n\n1st line of stdout not like 'TerminalInteractiveShell.* options'"
140 ), f"{stdout}\n\n1st line of stdout not like 'TerminalInteractiveShell.* options'"
141
141
142
142
143 def test_rehashx():
143 def test_rehashx():
144 # clear up everything
144 # clear up everything
145 _ip.alias_manager.clear_aliases()
145 _ip.alias_manager.clear_aliases()
146 del _ip.db['syscmdlist']
146 del _ip.db['syscmdlist']
147
147
148 _ip.run_line_magic("rehashx", "")
148 _ip.run_line_magic("rehashx", "")
149 # Practically ALL ipython development systems will have more than 10 aliases
149 # Practically ALL ipython development systems will have more than 10 aliases
150
150
151 assert len(_ip.alias_manager.aliases) > 10
151 assert len(_ip.alias_manager.aliases) > 10
152 for name, cmd in _ip.alias_manager.aliases:
152 for name, cmd in _ip.alias_manager.aliases:
153 # we must strip dots from alias names
153 # we must strip dots from alias names
154 assert "." not in name
154 assert "." not in name
155
155
156 # rehashx must fill up syscmdlist
156 # rehashx must fill up syscmdlist
157 scoms = _ip.db['syscmdlist']
157 scoms = _ip.db['syscmdlist']
158 assert len(scoms) > 10
158 assert len(scoms) > 10
159
159
160
160
161 def test_magic_parse_options():
161 def test_magic_parse_options():
162 """Test that we don't mangle paths when parsing magic options."""
162 """Test that we don't mangle paths when parsing magic options."""
163 ip = get_ipython()
163 ip = get_ipython()
164 path = 'c:\\x'
164 path = 'c:\\x'
165 m = DummyMagics(ip)
165 m = DummyMagics(ip)
166 opts = m.parse_options('-f %s' % path,'f:')[0]
166 opts = m.parse_options('-f %s' % path,'f:')[0]
167 # argv splitting is os-dependent
167 # argv splitting is os-dependent
168 if os.name == 'posix':
168 if os.name == 'posix':
169 expected = 'c:x'
169 expected = 'c:x'
170 else:
170 else:
171 expected = path
171 expected = path
172 assert opts["f"] == expected
172 assert opts["f"] == expected
173
173
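The platform-dependent expectation above comes from shell-style argument splitting: under posix rules a backslash escapes the following character. A small sketch, independent of IPython and assuming plain `shlex` semantics for the splitter, shows the same difference:

    import shlex

    # posix rules: the backslash escapes 'x', so the path collapses to 'c:x'
    assert shlex.split("c:\\x", posix=True) == ["c:x"]
    # non-posix rules keep the backslash intact
    assert shlex.split("c:\\x", posix=False) == ["c:\\x"]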
174
174
175 def test_magic_parse_long_options():
175 def test_magic_parse_long_options():
176 """Magic.parse_options can handle --foo=bar long options"""
176 """Magic.parse_options can handle --foo=bar long options"""
177 ip = get_ipython()
177 ip = get_ipython()
178 m = DummyMagics(ip)
178 m = DummyMagics(ip)
179 opts, _ = m.parse_options("--foo --bar=bubble", "a", "foo", "bar=")
179 opts, _ = m.parse_options("--foo --bar=bubble", "a", "foo", "bar=")
180 assert "foo" in opts
180 assert "foo" in opts
181 assert "bar" in opts
181 assert "bar" in opts
182 assert opts["bar"] == "bubble"
182 assert opts["bar"] == "bubble"
183
183
184
184
185 def doctest_hist_f():
185 def doctest_hist_f():
186 """Test %hist -f with temporary filename.
186 """Test %hist -f with temporary filename.
187
187
188 In [9]: import tempfile
188 In [9]: import tempfile
189
189
190 In [10]: tfile = tempfile.mktemp('.py','tmp-ipython-')
190 In [10]: tfile = tempfile.mktemp('.py','tmp-ipython-')
191
191
192 In [11]: %hist -nl -f $tfile 3
192 In [11]: %hist -nl -f $tfile 3
193
193
194 In [13]: import os; os.unlink(tfile)
194 In [13]: import os; os.unlink(tfile)
195 """
195 """
196
196
197
197
198 def doctest_hist_op():
198 def doctest_hist_op():
199 """Test %hist -op
199 """Test %hist -op
200
200
201 In [1]: class b(float):
201 In [1]: class b(float):
202 ...: pass
202 ...: pass
203 ...:
203 ...:
204
204
205 In [2]: class s(object):
205 In [2]: class s(object):
206 ...: def __str__(self):
206 ...: def __str__(self):
207 ...: return 's'
207 ...: return 's'
208 ...:
208 ...:
209
209
210 In [3]:
210 In [3]:
211
211
212 In [4]: class r(b):
212 In [4]: class r(b):
213 ...: def __repr__(self):
213 ...: def __repr__(self):
214 ...: return 'r'
214 ...: return 'r'
215 ...:
215 ...:
216
216
217 In [5]: class sr(s,r): pass
217 In [5]: class sr(s,r): pass
218 ...:
218 ...:
219
219
220 In [6]:
220 In [6]:
221
221
222 In [7]: bb=b()
222 In [7]: bb=b()
223
223
224 In [8]: ss=s()
224 In [8]: ss=s()
225
225
226 In [9]: rr=r()
226 In [9]: rr=r()
227
227
228 In [10]: ssrr=sr()
228 In [10]: ssrr=sr()
229
229
230 In [11]: 4.5
230 In [11]: 4.5
231 Out[11]: 4.5
231 Out[11]: 4.5
232
232
233 In [12]: str(ss)
233 In [12]: str(ss)
234 Out[12]: 's'
234 Out[12]: 's'
235
235
236 In [13]:
236 In [13]:
237
237
238 In [14]: %hist -op
238 In [14]: %hist -op
239 >>> class b:
239 >>> class b:
240 ... pass
240 ... pass
241 ...
241 ...
242 >>> class s(b):
242 >>> class s(b):
243 ... def __str__(self):
243 ... def __str__(self):
244 ... return 's'
244 ... return 's'
245 ...
245 ...
246 >>>
246 >>>
247 >>> class r(b):
247 >>> class r(b):
248 ... def __repr__(self):
248 ... def __repr__(self):
249 ... return 'r'
249 ... return 'r'
250 ...
250 ...
251 >>> class sr(s,r): pass
251 >>> class sr(s,r): pass
252 >>>
252 >>>
253 >>> bb=b()
253 >>> bb=b()
254 >>> ss=s()
254 >>> ss=s()
255 >>> rr=r()
255 >>> rr=r()
256 >>> ssrr=sr()
256 >>> ssrr=sr()
257 >>> 4.5
257 >>> 4.5
258 4.5
258 4.5
259 >>> str(ss)
259 >>> str(ss)
260 's'
260 's'
261 >>>
261 >>>
262 """
262 """
263
263
264 def test_hist_pof():
264 def test_hist_pof():
265 ip = get_ipython()
265 ip = get_ipython()
266 ip.run_cell("1+2", store_history=True)
266 ip.run_cell("1+2", store_history=True)
267 #raise Exception(ip.history_manager.session_number)
267 #raise Exception(ip.history_manager.session_number)
268 #raise Exception(list(ip.history_manager._get_range_session()))
268 #raise Exception(list(ip.history_manager._get_range_session()))
269 with TemporaryDirectory() as td:
269 with TemporaryDirectory() as td:
270 tf = os.path.join(td, 'hist.py')
270 tf = os.path.join(td, 'hist.py')
271 ip.run_line_magic('history', '-pof %s' % tf)
271 ip.run_line_magic('history', '-pof %s' % tf)
272 assert os.path.isfile(tf)
272 assert os.path.isfile(tf)
273
273
274
274
275 def test_macro():
275 def test_macro():
276 ip = get_ipython()
276 ip = get_ipython()
277 ip.history_manager.reset() # Clear any existing history.
277 ip.history_manager.reset() # Clear any existing history.
278 cmds = ["a=1", "def b():\n return a**2", "print(a,b())"]
278 cmds = ["a=1", "def b():\n return a**2", "print(a,b())"]
279 for i, cmd in enumerate(cmds, start=1):
279 for i, cmd in enumerate(cmds, start=1):
280 ip.history_manager.store_inputs(i, cmd)
280 ip.history_manager.store_inputs(i, cmd)
281 ip.run_line_magic("macro", "test 1-3")
281 ip.run_line_magic("macro", "test 1-3")
282 assert ip.user_ns["test"].value == "\n".join(cmds) + "\n"
282 assert ip.user_ns["test"].value == "\n".join(cmds) + "\n"
283
283
284 # List macros
284 # List macros
285 assert "test" in ip.run_line_magic("macro", "")
285 assert "test" in ip.run_line_magic("macro", "")
286
286
287
287
288 def test_macro_run():
288 def test_macro_run():
289 """Test that we can run a multi-line macro successfully."""
289 """Test that we can run a multi-line macro successfully."""
290 ip = get_ipython()
290 ip = get_ipython()
291 ip.history_manager.reset()
291 ip.history_manager.reset()
292 cmds = ["a=10", "a+=1", "print(a)", "%macro test 2-3"]
292 cmds = ["a=10", "a+=1", "print(a)", "%macro test 2-3"]
293 for cmd in cmds:
293 for cmd in cmds:
294 ip.run_cell(cmd, store_history=True)
294 ip.run_cell(cmd, store_history=True)
295 assert ip.user_ns["test"].value == "a+=1\nprint(a)\n"
295 assert ip.user_ns["test"].value == "a+=1\nprint(a)\n"
296 with tt.AssertPrints("12"):
296 with tt.AssertPrints("12"):
297 ip.run_cell("test")
297 ip.run_cell("test")
298 with tt.AssertPrints("13"):
298 with tt.AssertPrints("13"):
299 ip.run_cell("test")
299 ip.run_cell("test")
300
300
301
301
302 def test_magic_magic():
302 def test_magic_magic():
303 """Test %magic"""
303 """Test %magic"""
304 ip = get_ipython()
304 ip = get_ipython()
305 with capture_output() as captured:
305 with capture_output() as captured:
306 ip.run_line_magic("magic", "")
306 ip.run_line_magic("magic", "")
307
307
308 stdout = captured.stdout
308 stdout = captured.stdout
309 assert "%magic" in stdout
309 assert "%magic" in stdout
310 assert "IPython" in stdout
310 assert "IPython" in stdout
311 assert "Available" in stdout
311 assert "Available" in stdout
312
312
313
313
314 @dec.skipif_not_numpy
314 @dec.skipif_not_numpy
315 def test_numpy_reset_array_undec():
315 def test_numpy_reset_array_undec():
316 "Test '%reset array' functionality"
316 "Test '%reset array' functionality"
317 _ip.ex("import numpy as np")
317 _ip.ex("import numpy as np")
318 _ip.ex("a = np.empty(2)")
318 _ip.ex("a = np.empty(2)")
319 assert "a" in _ip.user_ns
319 assert "a" in _ip.user_ns
320 _ip.run_line_magic("reset", "-f array")
320 _ip.run_line_magic("reset", "-f array")
321 assert "a" not in _ip.user_ns
321 assert "a" not in _ip.user_ns
322
322
323
323
324 def test_reset_out():
324 def test_reset_out():
325 "Test '%reset out' magic"
325 "Test '%reset out' magic"
326 _ip.run_cell("parrot = 'dead'", store_history=True)
326 _ip.run_cell("parrot = 'dead'", store_history=True)
327 # test '%reset -f out', make an Out prompt
327 # test '%reset -f out', make an Out prompt
328 _ip.run_cell("parrot", store_history=True)
328 _ip.run_cell("parrot", store_history=True)
329 assert "dead" in [_ip.user_ns[x] for x in ("_", "__", "___")]
329 assert "dead" in [_ip.user_ns[x] for x in ("_", "__", "___")]
330 _ip.run_line_magic("reset", "-f out")
330 _ip.run_line_magic("reset", "-f out")
331 assert "dead" not in [_ip.user_ns[x] for x in ("_", "__", "___")]
331 assert "dead" not in [_ip.user_ns[x] for x in ("_", "__", "___")]
332 assert len(_ip.user_ns["Out"]) == 0
332 assert len(_ip.user_ns["Out"]) == 0
333
333
334
334
335 def test_reset_in():
335 def test_reset_in():
336 "Test '%reset in' magic"
336 "Test '%reset in' magic"
337 # test '%reset -f in'
337 # test '%reset -f in'
338 _ip.run_cell("parrot", store_history=True)
338 _ip.run_cell("parrot", store_history=True)
339 assert "parrot" in [_ip.user_ns[x] for x in ("_i", "_ii", "_iii")]
339 assert "parrot" in [_ip.user_ns[x] for x in ("_i", "_ii", "_iii")]
340 _ip.run_line_magic("reset", "-f in")
340 _ip.run_line_magic("reset", "-f in")
341 assert "parrot" not in [_ip.user_ns[x] for x in ("_i", "_ii", "_iii")]
341 assert "parrot" not in [_ip.user_ns[x] for x in ("_i", "_ii", "_iii")]
342 assert len(set(_ip.user_ns["In"])) == 1
342 assert len(set(_ip.user_ns["In"])) == 1
343
343
344
344
345 def test_reset_dhist():
345 def test_reset_dhist():
346 "Test '%reset dhist' magic"
346 "Test '%reset dhist' magic"
347 _ip.run_cell("tmp = [d for d in _dh]") # copy before clearing
347 _ip.run_cell("tmp = [d for d in _dh]") # copy before clearing
348 _ip.run_line_magic("cd", os.path.dirname(pytest.__file__))
348 _ip.run_line_magic("cd", os.path.dirname(pytest.__file__))
349 _ip.run_line_magic("cd", "-")
349 _ip.run_line_magic("cd", "-")
350 assert len(_ip.user_ns["_dh"]) > 0
350 assert len(_ip.user_ns["_dh"]) > 0
351 _ip.run_line_magic("reset", "-f dhist")
351 _ip.run_line_magic("reset", "-f dhist")
352 assert len(_ip.user_ns["_dh"]) == 0
352 assert len(_ip.user_ns["_dh"]) == 0
353 _ip.run_cell("_dh = [d for d in tmp]") # restore
353 _ip.run_cell("_dh = [d for d in tmp]") # restore
354
354
355
355
356 def test_reset_in_length():
356 def test_reset_in_length():
357 "Test that '%reset in' preserves In[] length"
357 "Test that '%reset in' preserves In[] length"
358 _ip.run_cell("print 'foo'")
358 _ip.run_cell("print 'foo'")
359 _ip.run_cell("reset -f in")
359 _ip.run_cell("reset -f in")
360 assert len(_ip.user_ns["In"]) == _ip.displayhook.prompt_count + 1
360 assert len(_ip.user_ns["In"]) == _ip.displayhook.prompt_count + 1
361
361
362
362
363 class TestResetErrors(TestCase):
363 class TestResetErrors(TestCase):
364
364
365 def test_reset_redefine(self):
365 def test_reset_redefine(self):
366
366
367 @magics_class
367 @magics_class
368 class KernelMagics(Magics):
368 class KernelMagics(Magics):
369 @line_magic
369 @line_magic
370 def less(self, shell): pass
370 def less(self, shell): pass
371
371
372 _ip.register_magics(KernelMagics)
372 _ip.register_magics(KernelMagics)
373
373
374 with self.assertLogs() as cm:
374 with self.assertLogs() as cm:
375 # hack: we just want to capture the logs, but assertLogs fails if no
375 # hack: we just want to capture the logs, but assertLogs fails if no
376 # logs get produced,
376 # logs get produced,
377 # so log one thing we ignore.
377 # so log one thing we ignore.
378 import logging as log_mod
378 import logging as log_mod
379 log = log_mod.getLogger()
379 log = log_mod.getLogger()
380 log.info('Nothing')
380 log.info('Nothing')
381 # end hack.
381 # end hack.
382 _ip.run_cell("reset -f")
382 _ip.run_cell("reset -f")
383
383
384 assert len(cm.output) == 1
384 assert len(cm.output) == 1
385 for out in cm.output:
385 for out in cm.output:
386 assert "Invalid alias" not in out
386 assert "Invalid alias" not in out
387
387
388 def test_tb_syntaxerror():
388 def test_tb_syntaxerror():
389 """test %tb after a SyntaxError"""
389 """test %tb after a SyntaxError"""
390 ip = get_ipython()
390 ip = get_ipython()
391 ip.run_cell("for")
391 ip.run_cell("for")
392
392
393 # trap and validate stdout
393 # trap and validate stdout
394 save_stdout = sys.stdout
394 save_stdout = sys.stdout
395 try:
395 try:
396 sys.stdout = StringIO()
396 sys.stdout = StringIO()
397 ip.run_cell("%tb")
397 ip.run_cell("%tb")
398 out = sys.stdout.getvalue()
398 out = sys.stdout.getvalue()
399 finally:
399 finally:
400 sys.stdout = save_stdout
400 sys.stdout = save_stdout
401 # trim output, and only check the last line
401 # trim output, and only check the last line
402 last_line = out.rstrip().splitlines()[-1].strip()
402 last_line = out.rstrip().splitlines()[-1].strip()
403 assert last_line == "SyntaxError: invalid syntax"
403 assert last_line == "SyntaxError: invalid syntax"
404
404
405
405
406 def test_time():
406 def test_time():
407 ip = get_ipython()
407 ip = get_ipython()
408
408
409 with tt.AssertPrints("Wall time: "):
409 with tt.AssertPrints("Wall time: "):
410 ip.run_cell("%time None")
410 ip.run_cell("%time None")
411
411
412 ip.run_cell("def f(kmjy):\n"
412 ip.run_cell("def f(kmjy):\n"
413 " %time print (2*kmjy)")
413 " %time print (2*kmjy)")
414
414
415 with tt.AssertPrints("Wall time: "):
415 with tt.AssertPrints("Wall time: "):
416 with tt.AssertPrints("hihi", suppress=False):
416 with tt.AssertPrints("hihi", suppress=False):
417 ip.run_cell("f('hi')")
417 ip.run_cell("f('hi')")
418
418
419
419
420 # A ';' at the end of %time prevents the expression's value from being printed.
420 # A ';' at the end of %time prevents the expression's value from being printed.
421 # This tests the fix for #13837.
421 # This tests the fix for #13837.
422 def test_time_no_output_with_semicolon():
422 def test_time_no_output_with_semicolon():
423 ip = get_ipython()
423 ip = get_ipython()
424
424
425 # Test %time cases
425 # Test %time cases
426 with tt.AssertPrints(" 123456"):
426 with tt.AssertPrints(" 123456"):
427 with tt.AssertPrints("Wall time: ", suppress=False):
427 with tt.AssertPrints("Wall time: ", suppress=False):
428 with tt.AssertPrints("CPU times: ", suppress=False):
428 with tt.AssertPrints("CPU times: ", suppress=False):
429 ip.run_cell("%time 123000+456")
429 ip.run_cell("%time 123000+456")
430
430
431 with tt.AssertNotPrints(" 123456"):
431 with tt.AssertNotPrints(" 123456"):
432 with tt.AssertPrints("Wall time: ", suppress=False):
432 with tt.AssertPrints("Wall time: ", suppress=False):
433 with tt.AssertPrints("CPU times: ", suppress=False):
433 with tt.AssertPrints("CPU times: ", suppress=False):
434 ip.run_cell("%time 123000+456;")
434 ip.run_cell("%time 123000+456;")
435
435
436 with tt.AssertPrints(" 123456"):
436 with tt.AssertPrints(" 123456"):
437 with tt.AssertPrints("Wall time: ", suppress=False):
437 with tt.AssertPrints("Wall time: ", suppress=False):
438 with tt.AssertPrints("CPU times: ", suppress=False):
438 with tt.AssertPrints("CPU times: ", suppress=False):
439 ip.run_cell("%time 123000+456 # Comment")
439 ip.run_cell("%time 123000+456 # Comment")
440
440
441 with tt.AssertNotPrints(" 123456"):
441 with tt.AssertNotPrints(" 123456"):
442 with tt.AssertPrints("Wall time: ", suppress=False):
442 with tt.AssertPrints("Wall time: ", suppress=False):
443 with tt.AssertPrints("CPU times: ", suppress=False):
443 with tt.AssertPrints("CPU times: ", suppress=False):
444 ip.run_cell("%time 123000+456; # Comment")
444 ip.run_cell("%time 123000+456; # Comment")
445
445
446 with tt.AssertPrints(" 123456"):
446 with tt.AssertPrints(" 123456"):
447 with tt.AssertPrints("Wall time: ", suppress=False):
447 with tt.AssertPrints("Wall time: ", suppress=False):
448 with tt.AssertPrints("CPU times: ", suppress=False):
448 with tt.AssertPrints("CPU times: ", suppress=False):
449 ip.run_cell("%time 123000+456 # ;Comment")
449 ip.run_cell("%time 123000+456 # ;Comment")
450
450
451 # Test %%time cases
451 # Test %%time cases
452 with tt.AssertPrints("123456"):
452 with tt.AssertPrints("123456"):
453 with tt.AssertPrints("Wall time: ", suppress=False):
453 with tt.AssertPrints("Wall time: ", suppress=False):
454 with tt.AssertPrints("CPU times: ", suppress=False):
454 with tt.AssertPrints("CPU times: ", suppress=False):
455 ip.run_cell("%%time\n123000+456\n\n\n")
455 ip.run_cell("%%time\n123000+456\n\n\n")
456
456
457 with tt.AssertNotPrints("123456"):
457 with tt.AssertNotPrints("123456"):
458 with tt.AssertPrints("Wall time: ", suppress=False):
458 with tt.AssertPrints("Wall time: ", suppress=False):
459 with tt.AssertPrints("CPU times: ", suppress=False):
459 with tt.AssertPrints("CPU times: ", suppress=False):
460 ip.run_cell("%%time\n123000+456;\n\n\n")
460 ip.run_cell("%%time\n123000+456;\n\n\n")
461
461
462 with tt.AssertPrints("123456"):
462 with tt.AssertPrints("123456"):
463 with tt.AssertPrints("Wall time: ", suppress=False):
463 with tt.AssertPrints("Wall time: ", suppress=False):
464 with tt.AssertPrints("CPU times: ", suppress=False):
464 with tt.AssertPrints("CPU times: ", suppress=False):
465 ip.run_cell("%%time\n123000+456 # Comment\n\n\n")
465 ip.run_cell("%%time\n123000+456 # Comment\n\n\n")
466
466
467 with tt.AssertNotPrints("123456"):
467 with tt.AssertNotPrints("123456"):
468 with tt.AssertPrints("Wall time: ", suppress=False):
468 with tt.AssertPrints("Wall time: ", suppress=False):
469 with tt.AssertPrints("CPU times: ", suppress=False):
469 with tt.AssertPrints("CPU times: ", suppress=False):
470 ip.run_cell("%%time\n123000+456; # Comment\n\n\n")
470 ip.run_cell("%%time\n123000+456; # Comment\n\n\n")
471
471
472 with tt.AssertPrints("123456"):
472 with tt.AssertPrints("123456"):
473 with tt.AssertPrints("Wall time: ", suppress=False):
473 with tt.AssertPrints("Wall time: ", suppress=False):
474 with tt.AssertPrints("CPU times: ", suppress=False):
474 with tt.AssertPrints("CPU times: ", suppress=False):
475 ip.run_cell("%%time\n123000+456 # ;Comment\n\n\n")
475 ip.run_cell("%%time\n123000+456 # ;Comment\n\n\n")
476
476
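Interactively, the behaviour pinned down above looks roughly like this (timing lines abbreviated, actual values will differ); the trailing semicolon suppresses only the Out[...] value, not the timing report:

    In [1]: %time 123000+456
    CPU times: ...
    Wall time: ...
    Out[1]: 123456

    In [2]: %time 123000+456;
    CPU times: ...
    Wall time: ...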
477
477
478 def test_time_last_not_expression():
478 def test_time_last_not_expression():
479 ip.run_cell("%%time\n"
479 ip.run_cell("%%time\n"
480 "var_1 = 1\n"
480 "var_1 = 1\n"
481 "var_2 = 2\n")
481 "var_2 = 2\n")
482 assert ip.user_ns['var_1'] == 1
482 assert ip.user_ns['var_1'] == 1
483 del ip.user_ns['var_1']
483 del ip.user_ns['var_1']
484 assert ip.user_ns['var_2'] == 2
484 assert ip.user_ns['var_2'] == 2
485 del ip.user_ns['var_2']
485 del ip.user_ns['var_2']
486
486
487
487
488 @dec.skip_win32
488 @dec.skip_win32
489 def test_time2():
489 def test_time2():
490 ip = get_ipython()
490 ip = get_ipython()
491
491
492 with tt.AssertPrints("CPU times: user "):
492 with tt.AssertPrints("CPU times: user "):
493 ip.run_cell("%time None")
493 ip.run_cell("%time None")
494
494
495 def test_time3():
495 def test_time3():
496 """Erroneous magic function calls, issue gh-3334"""
496 """Erroneous magic function calls, issue gh-3334"""
497 ip = get_ipython()
497 ip = get_ipython()
498 ip.user_ns.pop('run', None)
498 ip.user_ns.pop('run', None)
499
499
500 with tt.AssertNotPrints("not found", channel='stderr'):
500 with tt.AssertNotPrints("not found", channel='stderr'):
501 ip.run_cell("%%time\n"
501 ip.run_cell("%%time\n"
502 "run = 0\n"
502 "run = 0\n"
503 "run += 1")
503 "run += 1")
504
504
505 def test_multiline_time():
505 def test_multiline_time():
506 """Make sure last statement from time return a value."""
506 """Make sure last statement from time return a value."""
507 ip = get_ipython()
507 ip = get_ipython()
508 ip.user_ns.pop('run', None)
508 ip.user_ns.pop('run', None)
509
509
510 ip.run_cell(
510 ip.run_cell(
511 dedent(
511 dedent(
512 """\
512 """\
513 %%time
513 %%time
514 a = "ho"
514 a = "ho"
515 b = "hey"
515 b = "hey"
516 a+b
516 a+b
517 """
517 """
518 )
518 )
519 )
519 )
520 assert ip.user_ns_hidden["_"] == "hohey"
520 assert ip.user_ns_hidden["_"] == "hohey"
521
521
522
522
523 def test_time_local_ns():
523 def test_time_local_ns():
524 """
524 """
525 Test that local_ns is actually global_ns when running a cell magic
525 Test that local_ns is actually global_ns when running a cell magic
526 """
526 """
527 ip = get_ipython()
527 ip = get_ipython()
528 ip.run_cell("%%time\n" "myvar = 1")
528 ip.run_cell("%%time\n" "myvar = 1")
529 assert ip.user_ns["myvar"] == 1
529 assert ip.user_ns["myvar"] == 1
530 del ip.user_ns["myvar"]
530 del ip.user_ns["myvar"]
531
531
532
532
533 def test_doctest_mode():
533 def test_doctest_mode():
534 "Toggle doctest_mode twice, it should be a no-op and run without error"
534 "Toggle doctest_mode twice, it should be a no-op and run without error"
535 _ip.run_line_magic("doctest_mode", "")
535 _ip.run_line_magic("doctest_mode", "")
536 _ip.run_line_magic("doctest_mode", "")
536 _ip.run_line_magic("doctest_mode", "")
537
537
538
538
539 def test_parse_options():
539 def test_parse_options():
540 """Tests for basic options parsing in magics."""
540 """Tests for basic options parsing in magics."""
541 # These are only the most minimal of tests, more should be added later. At
541 # These are only the most minimal of tests, more should be added later. At
542 # the very least we check that basic text/unicode calls work OK.
542 # the very least we check that basic text/unicode calls work OK.
543 m = DummyMagics(_ip)
543 m = DummyMagics(_ip)
544 assert m.parse_options("foo", "")[1] == "foo"
544 assert m.parse_options("foo", "")[1] == "foo"
545 assert m.parse_options("foo", "")[1] == "foo"
545 assert m.parse_options("foo", "")[1] == "foo"
546
546
547
547
548 def test_parse_options_preserve_non_option_string():
548 def test_parse_options_preserve_non_option_string():
549 """Test to assert preservation of non-option part of magic-block, while parsing magic options."""
549 """Test to assert preservation of non-option part of magic-block, while parsing magic options."""
550 m = DummyMagics(_ip)
550 m = DummyMagics(_ip)
551 opts, stmt = m.parse_options(
551 opts, stmt = m.parse_options(
552 " -n1 -r 13 _ = 314 + foo", "n:r:", preserve_non_opts=True
552 " -n1 -r 13 _ = 314 + foo", "n:r:", preserve_non_opts=True
553 )
553 )
554 assert opts == {"n": "1", "r": "13"}
554 assert opts == {"n": "1", "r": "13"}
555 assert stmt == "_ = 314 + foo"
555 assert stmt == "_ = 314 + foo"
556
556
557
557
558 def test_run_magic_preserve_code_block():
558 def test_run_magic_preserve_code_block():
559 """Test to assert preservation of non-option part of magic-block, while running magic."""
559 """Test to assert preservation of non-option part of magic-block, while running magic."""
560 _ip.user_ns["spaces"] = []
560 _ip.user_ns["spaces"] = []
561 _ip.run_line_magic(
561 _ip.run_line_magic(
562 "timeit", "-n1 -r1 spaces.append([s.count(' ') for s in ['document']])"
562 "timeit", "-n1 -r1 spaces.append([s.count(' ') for s in ['document']])"
563 )
563 )
564 assert _ip.user_ns["spaces"] == [[0]]
564 assert _ip.user_ns["spaces"] == [[0]]
565
565
566
566
567 def test_dirops():
567 def test_dirops():
568 """Test various directory handling operations."""
568 """Test various directory handling operations."""
569 # curpath = lambda :os.path.splitdrive(os.getcwd())[1].replace('\\','/')
569 # curpath = lambda :os.path.splitdrive(os.getcwd())[1].replace('\\','/')
570 curpath = os.getcwd
570 curpath = os.getcwd
571 startdir = os.getcwd()
571 startdir = os.getcwd()
572 ipdir = os.path.realpath(_ip.ipython_dir)
572 ipdir = os.path.realpath(_ip.ipython_dir)
573 try:
573 try:
574 _ip.run_line_magic("cd", '"%s"' % ipdir)
574 _ip.run_line_magic("cd", '"%s"' % ipdir)
575 assert curpath() == ipdir
575 assert curpath() == ipdir
576 _ip.run_line_magic("cd", "-")
576 _ip.run_line_magic("cd", "-")
577 assert curpath() == startdir
577 assert curpath() == startdir
578 _ip.run_line_magic("pushd", '"%s"' % ipdir)
578 _ip.run_line_magic("pushd", '"%s"' % ipdir)
579 assert curpath() == ipdir
579 assert curpath() == ipdir
580 _ip.run_line_magic("popd", "")
580 _ip.run_line_magic("popd", "")
581 assert curpath() == startdir
581 assert curpath() == startdir
582 finally:
582 finally:
583 os.chdir(startdir)
583 os.chdir(startdir)
584
584
585
585
586 def test_cd_force_quiet():
586 def test_cd_force_quiet():
587 """Test OSMagics.cd_force_quiet option"""
587 """Test OSMagics.cd_force_quiet option"""
588 _ip.config.OSMagics.cd_force_quiet = True
588 _ip.config.OSMagics.cd_force_quiet = True
589 osmagics = osm.OSMagics(shell=_ip)
589 osmagics = osm.OSMagics(shell=_ip)
590
590
591 startdir = os.getcwd()
591 startdir = os.getcwd()
592 ipdir = os.path.realpath(_ip.ipython_dir)
592 ipdir = os.path.realpath(_ip.ipython_dir)
593
593
594 try:
594 try:
595 with tt.AssertNotPrints(ipdir):
595 with tt.AssertNotPrints(ipdir):
596 osmagics.cd('"%s"' % ipdir)
596 osmagics.cd('"%s"' % ipdir)
597 with tt.AssertNotPrints(startdir):
597 with tt.AssertNotPrints(startdir):
598 osmagics.cd('-')
598 osmagics.cd('-')
599 finally:
599 finally:
600 os.chdir(startdir)
600 os.chdir(startdir)
601
601
602
602
603 def test_xmode():
603 def test_xmode():
604 # Calling xmode three times should be a no-op
604 # Calling xmode three times should be a no-op
605 xmode = _ip.InteractiveTB.mode
605 xmode = _ip.InteractiveTB.mode
606 for i in range(4):
606 for i in range(4):
607 _ip.run_line_magic("xmode", "")
607 _ip.run_line_magic("xmode", "")
608 assert _ip.InteractiveTB.mode == xmode
608 assert _ip.InteractiveTB.mode == xmode
609
609
610 def test_reset_hard():
610 def test_reset_hard():
611 monitor = []
611 monitor = []
612 class A(object):
612 class A(object):
613 def __del__(self):
613 def __del__(self):
614 monitor.append(1)
614 monitor.append(1)
615 def __repr__(self):
615 def __repr__(self):
616 return "<A instance>"
616 return "<A instance>"
617
617
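# A.__del__ appends to monitor, so an empty list means the instance has not been collected yet.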
618 _ip.user_ns["a"] = A()
618 _ip.user_ns["a"] = A()
619 _ip.run_cell("a")
619 _ip.run_cell("a")
620
620
621 assert monitor == []
621 assert monitor == []
622 _ip.run_line_magic("reset", "-f")
622 _ip.run_line_magic("reset", "-f")
623 assert monitor == [1]
623 assert monitor == [1]
624
624
625 class TestXdel(tt.TempFileMixin):
625 class TestXdel(tt.TempFileMixin):
626 def test_xdel(self):
626 def test_xdel(self):
627 """Test that references from %run are cleared by xdel."""
627 """Test that references from %run are cleared by xdel."""
628 src = ("class A(object):\n"
628 src = ("class A(object):\n"
629 " monitor = []\n"
629 " monitor = []\n"
630 " def __del__(self):\n"
630 " def __del__(self):\n"
631 " self.monitor.append(1)\n"
631 " self.monitor.append(1)\n"
632 "a = A()\n")
632 "a = A()\n")
633 self.mktmp(src)
633 self.mktmp(src)
634 # %run creates some hidden references...
634 # %run creates some hidden references...
635 _ip.run_line_magic("run", "%s" % self.fname)
635 _ip.run_line_magic("run", "%s" % self.fname)
636 # ... as does the displayhook.
636 # ... as does the displayhook.
637 _ip.run_cell("a")
637 _ip.run_cell("a")
638
638
639 monitor = _ip.user_ns["A"].monitor
639 monitor = _ip.user_ns["A"].monitor
640 assert monitor == []
640 assert monitor == []
641
641
642 _ip.run_line_magic("xdel", "a")
642 _ip.run_line_magic("xdel", "a")
643
643
644 # Check that a's __del__ method has been called.
644 # Check that a's __del__ method has been called.
645 gc.collect(0)
645 gc.collect(0)
646 assert monitor == [1]
646 assert monitor == [1]
647
647
648 def doctest_who():
648 def doctest_who():
649 """doctest for %who
649 """doctest for %who
650
650
651 In [1]: %reset -sf
651 In [1]: %reset -sf
652
652
653 In [2]: alpha = 123
653 In [2]: alpha = 123
654
654
655 In [3]: beta = 'beta'
655 In [3]: beta = 'beta'
656
656
657 In [4]: %who int
657 In [4]: %who int
658 alpha
658 alpha
659
659
660 In [5]: %who str
660 In [5]: %who str
661 beta
661 beta
662
662
663 In [6]: %whos
663 In [6]: %whos
664 Variable Type Data/Info
664 Variable Type Data/Info
665 ----------------------------
665 ----------------------------
666 alpha int 123
666 alpha int 123
667 beta str beta
667 beta str beta
668
668
669 In [7]: %who_ls
669 In [7]: %who_ls
670 Out[7]: ['alpha', 'beta']
670 Out[7]: ['alpha', 'beta']
671 """
671 """
672
672
673 def test_whos():
673 def test_whos():
674 """Check that whos is protected against objects where repr() fails."""
674 """Check that whos is protected against objects where repr() fails."""
675 class A(object):
675 class A(object):
676 def __repr__(self):
676 def __repr__(self):
677 raise Exception()
677 raise Exception()
678 _ip.user_ns['a'] = A()
678 _ip.user_ns['a'] = A()
679 _ip.run_line_magic("whos", "")
679 _ip.run_line_magic("whos", "")
680
680
681 def doctest_precision():
681 def doctest_precision():
682 """doctest for %precision
682 """doctest for %precision
683
683
684 In [1]: f = get_ipython().display_formatter.formatters['text/plain']
684 In [1]: f = get_ipython().display_formatter.formatters['text/plain']
685
685
686 In [2]: %precision 5
686 In [2]: %precision 5
687 Out[2]: '%.5f'
687 Out[2]: '%.5f'
688
688
689 In [3]: f.float_format
689 In [3]: f.float_format
690 Out[3]: '%.5f'
690 Out[3]: '%.5f'
691
691
692 In [4]: %precision %e
692 In [4]: %precision %e
693 Out[4]: '%e'
693 Out[4]: '%e'
694
694
695 In [5]: f(3.1415927)
695 In [5]: f(3.1415927)
696 Out[5]: '3.141593e+00'
696 Out[5]: '3.141593e+00'
697 """
697 """
698
698
699 def test_debug_magic():
699 def test_debug_magic():
700 """Test debugging a small code with %debug
700 """Test debugging a small code with %debug
701
701
702 In [1]: with PdbTestInput(['c']):
702 In [1]: with PdbTestInput(['c']):
703 ...: %debug print("a b") #doctest: +ELLIPSIS
703 ...: %debug print("a b") #doctest: +ELLIPSIS
704 ...:
704 ...:
705 ...
705 ...
706 ipdb> c
706 ipdb> c
707 a b
707 a b
708 In [2]:
708 In [2]:
709 """
709 """
710
710
711 def test_psearch():
711 def test_psearch():
712 with tt.AssertPrints("dict.fromkeys"):
712 with tt.AssertPrints("dict.fromkeys"):
713 _ip.run_cell("dict.fr*?")
713 _ip.run_cell("dict.fr*?")
714 with tt.AssertPrints("π.is_integer"):
714 with tt.AssertPrints("π.is_integer"):
715 _ip.run_cell("π = 3.14;\nπ.is_integ*?")
715 _ip.run_cell("π = 3.14;\nπ.is_integ*?")
716
716
717 def test_timeit_shlex():
717 def test_timeit_shlex():
718 """test shlex issues with timeit (#1109)"""
718 """test shlex issues with timeit (#1109)"""
719 _ip.ex("def f(*a,**kw): pass")
719 _ip.ex("def f(*a,**kw): pass")
720 _ip.run_line_magic("timeit", '-n1 "this is a bug".count(" ")')
720 _ip.run_line_magic("timeit", '-n1 "this is a bug".count(" ")')
721 _ip.run_line_magic("timeit", '-r1 -n1 f(" ", 1)')
721 _ip.run_line_magic("timeit", '-r1 -n1 f(" ", 1)')
722 _ip.run_line_magic("timeit", '-r1 -n1 f(" ", 1, " ", 2, " ")')
722 _ip.run_line_magic("timeit", '-r1 -n1 f(" ", 1, " ", 2, " ")')
723 _ip.run_line_magic("timeit", '-r1 -n1 ("a " + "b")')
723 _ip.run_line_magic("timeit", '-r1 -n1 ("a " + "b")')
724 _ip.run_line_magic("timeit", '-r1 -n1 f("a " + "b")')
724 _ip.run_line_magic("timeit", '-r1 -n1 f("a " + "b")')
725 _ip.run_line_magic("timeit", '-r1 -n1 f("a " + "b ")')
725 _ip.run_line_magic("timeit", '-r1 -n1 f("a " + "b ")')
726
726
727
727
728 def test_timeit_special_syntax():
728 def test_timeit_special_syntax():
729 "Test %%timeit with IPython special syntax"
729 "Test %%timeit with IPython special syntax"
730 @register_line_magic
730 @register_line_magic
731 def lmagic(line):
731 def lmagic(line):
732 ip = get_ipython()
732 ip = get_ipython()
733 ip.user_ns['lmagic_out'] = line
733 ip.user_ns['lmagic_out'] = line
734
734
735 # line mode test
735 # line mode test
736 _ip.run_line_magic("timeit", "-n1 -r1 %lmagic my line")
736 _ip.run_line_magic("timeit", "-n1 -r1 %lmagic my line")
737 assert _ip.user_ns["lmagic_out"] == "my line"
737 assert _ip.user_ns["lmagic_out"] == "my line"
738 # cell mode test
738 # cell mode test
739 _ip.run_cell_magic("timeit", "-n1 -r1", "%lmagic my line2")
739 _ip.run_cell_magic("timeit", "-n1 -r1", "%lmagic my line2")
740 assert _ip.user_ns["lmagic_out"] == "my line2"
740 assert _ip.user_ns["lmagic_out"] == "my line2"
741
741
742
742
743 def test_timeit_return():
743 def test_timeit_return():
744 """
744 """
745 Test whether timeit -o returns a result object.
745 Test whether timeit -o returns a result object.
746 """
746 """
747
747
748 res = _ip.run_line_magic('timeit','-n10 -r10 -o 1')
748 res = _ip.run_line_magic('timeit','-n10 -r10 -o 1')
749 assert(res is not None)
749 assert(res is not None)
750
750
751 def test_timeit_quiet():
751 def test_timeit_quiet():
752 """
752 """
753 Test the quiet option of the timeit magic.
753 Test the quiet option of the timeit magic.
754 """
754 """
755 with tt.AssertNotPrints("loops"):
755 with tt.AssertNotPrints("loops"):
756 _ip.run_cell("%timeit -n1 -r1 -q 1")
756 _ip.run_cell("%timeit -n1 -r1 -q 1")
757
757
758 def test_timeit_return_quiet():
758 def test_timeit_return_quiet():
759 with tt.AssertNotPrints("loops"):
759 with tt.AssertNotPrints("loops"):
760 res = _ip.run_line_magic('timeit', '-n1 -r1 -q -o 1')
760 res = _ip.run_line_magic('timeit', '-n1 -r1 -q -o 1')
761 assert (res is not None)
761 assert (res is not None)
762
762
763 def test_timeit_invalid_return():
763 def test_timeit_invalid_return():
764 with pytest.raises(SyntaxError):
764 with pytest.raises(SyntaxError):
765 _ip.run_line_magic('timeit', 'return')
765 _ip.run_line_magic('timeit', 'return')
766
766
767 @dec.skipif(execution.profile is None)
767 @dec.skipif(execution.profile is None)
768 def test_prun_special_syntax():
768 def test_prun_special_syntax():
769 "Test %%prun with IPython special syntax"
769 "Test %%prun with IPython special syntax"
770 @register_line_magic
770 @register_line_magic
771 def lmagic(line):
771 def lmagic(line):
772 ip = get_ipython()
772 ip = get_ipython()
773 ip.user_ns['lmagic_out'] = line
773 ip.user_ns['lmagic_out'] = line
774
774
775 # line mode test
775 # line mode test
776 _ip.run_line_magic("prun", "-q %lmagic my line")
776 _ip.run_line_magic("prun", "-q %lmagic my line")
777 assert _ip.user_ns["lmagic_out"] == "my line"
777 assert _ip.user_ns["lmagic_out"] == "my line"
778 # cell mode test
778 # cell mode test
779 _ip.run_cell_magic("prun", "-q", "%lmagic my line2")
779 _ip.run_cell_magic("prun", "-q", "%lmagic my line2")
780 assert _ip.user_ns["lmagic_out"] == "my line2"
780 assert _ip.user_ns["lmagic_out"] == "my line2"
781
781
782
782
783 @dec.skipif(execution.profile is None)
783 @dec.skipif(execution.profile is None)
784 def test_prun_quotes():
784 def test_prun_quotes():
785 "Test that prun does not clobber string escapes (GH #1302)"
785 "Test that prun does not clobber string escapes (GH #1302)"
786 _ip.magic(r"prun -q x = '\t'")
786 _ip.magic(r"prun -q x = '\t'")
787 assert _ip.user_ns["x"] == "\t"
787 assert _ip.user_ns["x"] == "\t"
788
788
789
789
790 def test_extension():
790 def test_extension():
791 # Debugging information for failures of this test
791 # Debugging information for failures of this test
792 print('sys.path:')
792 print('sys.path:')
793 for p in sys.path:
793 for p in sys.path:
794 print(' ', p)
794 print(' ', p)
795 print('CWD', os.getcwd())
795 print('CWD', os.getcwd())
796
796
797 pytest.raises(ImportError, _ip.run_line_magic, "load_ext", "daft_extension")
797 pytest.raises(ImportError, _ip.run_line_magic, "load_ext", "daft_extension")
798 daft_path = os.path.join(os.path.dirname(__file__), "daft_extension")
798 daft_path = os.path.join(os.path.dirname(__file__), "daft_extension")
799 sys.path.insert(0, daft_path)
799 sys.path.insert(0, daft_path)
800 try:
800 try:
801 _ip.user_ns.pop('arq', None)
801 _ip.user_ns.pop('arq', None)
802 invalidate_caches() # Clear import caches
802 invalidate_caches() # Clear import caches
803 _ip.run_line_magic("load_ext", "daft_extension")
803 _ip.run_line_magic("load_ext", "daft_extension")
804 assert _ip.user_ns["arq"] == 185
804 assert _ip.user_ns["arq"] == 185
805 _ip.run_line_magic("unload_ext", "daft_extension")
805 _ip.run_line_magic("unload_ext", "daft_extension")
806 assert 'arq' not in _ip.user_ns
806 assert 'arq' not in _ip.user_ns
807 finally:
807 finally:
808 sys.path.remove(daft_path)
808 sys.path.remove(daft_path)
809
809
810
810
811 def test_notebook_export_json():
811 def test_notebook_export_json():
812 pytest.importorskip("nbformat")
812 pytest.importorskip("nbformat")
813 _ip = get_ipython()
813 _ip = get_ipython()
814 _ip.history_manager.reset() # Clear any existing history.
814 _ip.history_manager.reset() # Clear any existing history.
815 cmds = ["a=1", "def b():\n return a**2", "print('noΓ«l, Γ©tΓ©', b())"]
815 cmds = ["a=1", "def b():\n return a**2", "print('noΓ«l, Γ©tΓ©', b())"]
816 for i, cmd in enumerate(cmds, start=1):
816 for i, cmd in enumerate(cmds, start=1):
817 _ip.history_manager.store_inputs(i, cmd)
817 _ip.history_manager.store_inputs(i, cmd)
818 with TemporaryDirectory() as td:
818 with TemporaryDirectory() as td:
819 outfile = os.path.join(td, "nb.ipynb")
819 outfile = os.path.join(td, "nb.ipynb")
820 _ip.run_line_magic("notebook", "%s" % outfile)
820 _ip.run_line_magic("notebook", "%s" % outfile)
821
821
822
822
823 class TestEnv(TestCase):
823 class TestEnv(TestCase):
824
824
825 def test_env(self):
825 def test_env(self):
826 env = _ip.run_line_magic("env", "")
826 env = _ip.run_line_magic("env", "")
827 self.assertIsInstance(env, dict)
827 self.assertIsInstance(env, dict)
828
828
829 def test_env_secret(self):
829 def test_env_secret(self):
830 env = _ip.run_line_magic("env", "")
830 env = _ip.run_line_magic("env", "")
831 hidden = "<hidden>"
831 hidden = "<hidden>"
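# Variables whose names look sensitive (KEY, SECRET, TOKEN) should be shown as <hidden> by %env.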
832 with mock.patch.dict(
832 with mock.patch.dict(
833 os.environ,
833 os.environ,
834 {
834 {
835 "API_KEY": "abc123",
835 "API_KEY": "abc123",
836 "SECRET_THING": "ssshhh",
836 "SECRET_THING": "ssshhh",
837 "JUPYTER_TOKEN": "",
837 "JUPYTER_TOKEN": "",
838 "VAR": "abc"
838 "VAR": "abc"
839 }
839 }
840 ):
840 ):
841 env = _ip.run_line_magic("env", "")
841 env = _ip.run_line_magic("env", "")
842 assert env["API_KEY"] == hidden
842 assert env["API_KEY"] == hidden
843 assert env["SECRET_THING"] == hidden
843 assert env["SECRET_THING"] == hidden
844 assert env["JUPYTER_TOKEN"] == hidden
844 assert env["JUPYTER_TOKEN"] == hidden
845 assert env["VAR"] == "abc"
845 assert env["VAR"] == "abc"
846
846
847 def test_env_get_set_simple(self):
847 def test_env_get_set_simple(self):
848 env = _ip.run_line_magic("env", "var val1")
848 env = _ip.run_line_magic("env", "var val1")
849 self.assertEqual(env, None)
849 self.assertEqual(env, None)
850 self.assertEqual(os.environ["var"], "val1")
850 self.assertEqual(os.environ["var"], "val1")
851 self.assertEqual(_ip.run_line_magic("env", "var"), "val1")
851 self.assertEqual(_ip.run_line_magic("env", "var"), "val1")
852 env = _ip.run_line_magic("env", "var=val2")
852 env = _ip.run_line_magic("env", "var=val2")
853 self.assertEqual(env, None)
853 self.assertEqual(env, None)
854 self.assertEqual(os.environ['var'], 'val2')
854 self.assertEqual(os.environ['var'], 'val2')
855
855
856 def test_env_get_set_complex(self):
856 def test_env_get_set_complex(self):
857 env = _ip.run_line_magic("env", "var 'val1 '' 'val2")
857 env = _ip.run_line_magic("env", "var 'val1 '' 'val2")
858 self.assertEqual(env, None)
858 self.assertEqual(env, None)
859 self.assertEqual(os.environ['var'], "'val1 '' 'val2")
859 self.assertEqual(os.environ['var'], "'val1 '' 'val2")
860 self.assertEqual(_ip.run_line_magic("env", "var"), "'val1 '' 'val2")
860 self.assertEqual(_ip.run_line_magic("env", "var"), "'val1 '' 'val2")
861 env = _ip.run_line_magic("env", 'var=val2 val3="val4')
861 env = _ip.run_line_magic("env", 'var=val2 val3="val4')
862 self.assertEqual(env, None)
862 self.assertEqual(env, None)
863 self.assertEqual(os.environ['var'], 'val2 val3="val4')
863 self.assertEqual(os.environ['var'], 'val2 val3="val4')
864
864
865 def test_env_set_bad_input(self):
865 def test_env_set_bad_input(self):
866 self.assertRaises(UsageError, lambda: _ip.run_line_magic("set_env", "var"))
866 self.assertRaises(UsageError, lambda: _ip.run_line_magic("set_env", "var"))
867
867
868 def test_env_set_whitespace(self):
868 def test_env_set_whitespace(self):
869 self.assertRaises(UsageError, lambda: _ip.run_line_magic("env", "var A=B"))
869 self.assertRaises(UsageError, lambda: _ip.run_line_magic("env", "var A=B"))
870
870
871
871
872 class CellMagicTestCase(TestCase):
872 class CellMagicTestCase(TestCase):
873
873
874 def check_ident(self, magic):
874 def check_ident(self, magic):
875 # Manually called, we get the result
875 # Manually called, we get the result
876 out = _ip.run_cell_magic(magic, "a", "b")
876 out = _ip.run_cell_magic(magic, "a", "b")
877 assert out == ("a", "b")
877 assert out == ("a", "b")
878 # Via run_cell, it goes into the user's namespace via displayhook
878 # Via run_cell, it goes into the user's namespace via displayhook
879 _ip.run_cell("%%" + magic + " c\nd\n")
879 _ip.run_cell("%%" + magic + " c\nd\n")
880 assert _ip.user_ns["_"] == ("c", "d\n")
880 assert _ip.user_ns["_"] == ("c", "d\n")
881
881
882 def test_cell_magic_func_deco(self):
882 def test_cell_magic_func_deco(self):
883 "Cell magic using simple decorator"
883 "Cell magic using simple decorator"
884 @register_cell_magic
884 @register_cell_magic
885 def cellm(line, cell):
885 def cellm(line, cell):
886 return line, cell
886 return line, cell
887
887
888 self.check_ident('cellm')
888 self.check_ident('cellm')
889
889
890 def test_cell_magic_reg(self):
890 def test_cell_magic_reg(self):
891 "Cell magic manually registered"
891 "Cell magic manually registered"
892 def cellm(line, cell):
892 def cellm(line, cell):
893 return line, cell
893 return line, cell
894
894
895 _ip.register_magic_function(cellm, 'cell', 'cellm2')
895 _ip.register_magic_function(cellm, 'cell', 'cellm2')
896 self.check_ident('cellm2')
896 self.check_ident('cellm2')
897
897
898 def test_cell_magic_class(self):
898 def test_cell_magic_class(self):
899 "Cell magics declared via a class"
899 "Cell magics declared via a class"
900 @magics_class
900 @magics_class
901 class MyMagics(Magics):
901 class MyMagics(Magics):
902
902
903 @cell_magic
903 @cell_magic
904 def cellm3(self, line, cell):
904 def cellm3(self, line, cell):
905 return line, cell
905 return line, cell
906
906
907 _ip.register_magics(MyMagics)
907 _ip.register_magics(MyMagics)
908 self.check_ident('cellm3')
908 self.check_ident('cellm3')
909
909
910 def test_cell_magic_class2(self):
910 def test_cell_magic_class2(self):
911 "Cell magics declared via a class, #2"
911 "Cell magics declared via a class, #2"
912 @magics_class
912 @magics_class
913 class MyMagics2(Magics):
913 class MyMagics2(Magics):
914
914
915 @cell_magic('cellm4')
915 @cell_magic('cellm4')
916 def cellm33(self, line, cell):
916 def cellm33(self, line, cell):
917 return line, cell
917 return line, cell
918
918
919 _ip.register_magics(MyMagics2)
919 _ip.register_magics(MyMagics2)
920 self.check_ident('cellm4')
920 self.check_ident('cellm4')
921 # Check that nothing is registered as 'cellm33'
921 # Check that nothing is registered as 'cellm33'
922 c33 = _ip.find_cell_magic('cellm33')
922 c33 = _ip.find_cell_magic('cellm33')
923 assert c33 is None
923 assert c33 is None
924
924
925 def test_file():
925 def test_file():
926 """Basic %%writefile"""
926 """Basic %%writefile"""
927 ip = get_ipython()
927 ip = get_ipython()
928 with TemporaryDirectory() as td:
928 with TemporaryDirectory() as td:
929 fname = os.path.join(td, "file1")
929 fname = os.path.join(td, "file1")
930 ip.run_cell_magic(
930 ip.run_cell_magic(
931 "writefile",
931 "writefile",
932 fname,
932 fname,
933 "\n".join(
933 "\n".join(
934 [
934 [
935 "line1",
935 "line1",
936 "line2",
936 "line2",
937 ]
937 ]
938 ),
938 ),
939 )
939 )
940 s = Path(fname).read_text(encoding="utf-8")
940 s = Path(fname).read_text(encoding="utf-8")
941 assert "line1\n" in s
941 assert "line1\n" in s
942 assert "line2" in s
942 assert "line2" in s
943
943
944
944
945 @dec.skip_win32
945 @dec.skip_win32
946 def test_file_single_quote():
946 def test_file_single_quote():
947 """Basic %%writefile with embedded single quotes"""
947 """Basic %%writefile with embedded single quotes"""
948 ip = get_ipython()
948 ip = get_ipython()
949 with TemporaryDirectory() as td:
949 with TemporaryDirectory() as td:
950 fname = os.path.join(td, "'file1'")
950 fname = os.path.join(td, "'file1'")
951 ip.run_cell_magic(
951 ip.run_cell_magic(
952 "writefile",
952 "writefile",
953 fname,
953 fname,
954 "\n".join(
954 "\n".join(
955 [
955 [
956 "line1",
956 "line1",
957 "line2",
957 "line2",
958 ]
958 ]
959 ),
959 ),
960 )
960 )
961 s = Path(fname).read_text(encoding="utf-8")
961 s = Path(fname).read_text(encoding="utf-8")
962 assert "line1\n" in s
962 assert "line1\n" in s
963 assert "line2" in s
963 assert "line2" in s
964
964
965
965
966 @dec.skip_win32
966 @dec.skip_win32
967 def test_file_double_quote():
967 def test_file_double_quote():
968 """Basic %%writefile with embedded double quotes"""
968 """Basic %%writefile with embedded double quotes"""
969 ip = get_ipython()
969 ip = get_ipython()
970 with TemporaryDirectory() as td:
970 with TemporaryDirectory() as td:
971 fname = os.path.join(td, '"file1"')
971 fname = os.path.join(td, '"file1"')
972 ip.run_cell_magic(
972 ip.run_cell_magic(
973 "writefile",
973 "writefile",
974 fname,
974 fname,
975 "\n".join(
975 "\n".join(
976 [
976 [
977 "line1",
977 "line1",
978 "line2",
978 "line2",
979 ]
979 ]
980 ),
980 ),
981 )
981 )
982 s = Path(fname).read_text(encoding="utf-8")
982 s = Path(fname).read_text(encoding="utf-8")
983 assert "line1\n" in s
983 assert "line1\n" in s
984 assert "line2" in s
984 assert "line2" in s
985
985
986
986
987 def test_file_var_expand():
987 def test_file_var_expand():
988 """%%writefile $filename"""
988 """%%writefile $filename"""
989 ip = get_ipython()
989 ip = get_ipython()
990 with TemporaryDirectory() as td:
990 with TemporaryDirectory() as td:
991 fname = os.path.join(td, "file1")
991 fname = os.path.join(td, "file1")
992 ip.user_ns["filename"] = fname
992 ip.user_ns["filename"] = fname
993 ip.run_cell_magic(
993 ip.run_cell_magic(
994 "writefile",
994 "writefile",
995 "$filename",
995 "$filename",
996 "\n".join(
996 "\n".join(
997 [
997 [
998 "line1",
998 "line1",
999 "line2",
999 "line2",
1000 ]
1000 ]
1001 ),
1001 ),
1002 )
1002 )
1003 s = Path(fname).read_text(encoding="utf-8")
1003 s = Path(fname).read_text(encoding="utf-8")
1004 assert "line1\n" in s
1004 assert "line1\n" in s
1005 assert "line2" in s
1005 assert "line2" in s
1006
1006
1007
1007
1008 def test_file_unicode():
1008 def test_file_unicode():
1009 """%%writefile with unicode cell"""
1009 """%%writefile with unicode cell"""
1010 ip = get_ipython()
1010 ip = get_ipython()
1011 with TemporaryDirectory() as td:
1011 with TemporaryDirectory() as td:
1012 fname = os.path.join(td, 'file1')
1012 fname = os.path.join(td, 'file1')
1013 ip.run_cell_magic("writefile", fname, u'\n'.join([
1013 ip.run_cell_magic("writefile", fname, u'\n'.join([
1014 u'liné1',
1014 u'liné1',
1015 u'liné2',
1015 u'liné2',
1016 ]))
1016 ]))
1017 with io.open(fname, encoding='utf-8') as f:
1017 with io.open(fname, encoding='utf-8') as f:
1018 s = f.read()
1018 s = f.read()
1019 assert "linΓ©1\n" in s
1019 assert "linΓ©1\n" in s
1020 assert "linΓ©2" in s
1020 assert "linΓ©2" in s
1021
1021
1022
1022
1023 def test_file_amend():
1023 def test_file_amend():
1024 """%%writefile -a amends files"""
1024 """%%writefile -a amends files"""
1025 ip = get_ipython()
1025 ip = get_ipython()
1026 with TemporaryDirectory() as td:
1026 with TemporaryDirectory() as td:
1027 fname = os.path.join(td, "file2")
1027 fname = os.path.join(td, "file2")
1028 ip.run_cell_magic(
1028 ip.run_cell_magic(
1029 "writefile",
1029 "writefile",
1030 fname,
1030 fname,
1031 "\n".join(
1031 "\n".join(
1032 [
1032 [
1033 "line1",
1033 "line1",
1034 "line2",
1034 "line2",
1035 ]
1035 ]
1036 ),
1036 ),
1037 )
1037 )
1038 ip.run_cell_magic(
1038 ip.run_cell_magic(
1039 "writefile",
1039 "writefile",
1040 "-a %s" % fname,
1040 "-a %s" % fname,
1041 "\n".join(
1041 "\n".join(
1042 [
1042 [
1043 "line3",
1043 "line3",
1044 "line4",
1044 "line4",
1045 ]
1045 ]
1046 ),
1046 ),
1047 )
1047 )
1048 s = Path(fname).read_text(encoding="utf-8")
1048 s = Path(fname).read_text(encoding="utf-8")
1049 assert "line1\n" in s
1049 assert "line1\n" in s
1050 assert "line3\n" in s
1050 assert "line3\n" in s
1051
1051
1052
1052
1053 def test_file_spaces():
1053 def test_file_spaces():
1054 """%%file with spaces in filename"""
1054 """%%file with spaces in filename"""
1055 ip = get_ipython()
1055 ip = get_ipython()
1056 with TemporaryWorkingDirectory() as td:
1056 with TemporaryWorkingDirectory() as td:
1057 fname = "file name"
1057 fname = "file name"
1058 ip.run_cell_magic(
1058 ip.run_cell_magic(
1059 "file",
1059 "file",
1060 '"%s"' % fname,
1060 '"%s"' % fname,
1061 "\n".join(
1061 "\n".join(
1062 [
1062 [
1063 "line1",
1063 "line1",
1064 "line2",
1064 "line2",
1065 ]
1065 ]
1066 ),
1066 ),
1067 )
1067 )
1068 s = Path(fname).read_text(encoding="utf-8")
1068 s = Path(fname).read_text(encoding="utf-8")
1069 assert "line1\n" in s
1069 assert "line1\n" in s
1070 assert "line2" in s
1070 assert "line2" in s
1071
1071
1072
1072
1073 def test_script_config():
1073 def test_script_config():
1074 ip = get_ipython()
1074 ip = get_ipython()
1075 ip.config.ScriptMagics.script_magics = ['whoda']
1075 ip.config.ScriptMagics.script_magics = ['whoda']
1076 sm = script.ScriptMagics(shell=ip)
1076 sm = script.ScriptMagics(shell=ip)
1077 assert "whoda" in sm.magics["cell"]
1077 assert "whoda" in sm.magics["cell"]
1078
1078
1079
1079
1080 def test_script_out():
1080 def test_script_out():
1081 ip = get_ipython()
1081 ip = get_ipython()
1082 ip.run_cell_magic("script", f"--out output {sys.executable}", "print('hi')")
1082 ip.run_cell_magic("script", f"--out output {sys.executable}", "print('hi')")
1083 assert ip.user_ns["output"].strip() == "hi"
1083 assert ip.user_ns["output"].strip() == "hi"
1084
1084
1085
1085
1086 def test_script_err():
1086 def test_script_err():
1087 ip = get_ipython()
1087 ip = get_ipython()
1088 ip.run_cell_magic(
1088 ip.run_cell_magic(
1089 "script",
1089 "script",
1090 f"--err error {sys.executable}",
1090 f"--err error {sys.executable}",
1091 "import sys; print('hello', file=sys.stderr)",
1091 "import sys; print('hello', file=sys.stderr)",
1092 )
1092 )
1093 assert ip.user_ns["error"].strip() == "hello"
1093 assert ip.user_ns["error"].strip() == "hello"
1094
1094
1095
1095
1096 def test_script_out_err():
1096 def test_script_out_err():
1097
1098 ip = get_ipython()
1097 ip = get_ipython()
1099 ip.run_cell_magic(
1098 ip.run_cell_magic(
1100 "script",
1099 "script",
1101 f"--out output --err error {sys.executable}",
1100 f"--out output --err error {sys.executable}",
1102 "\n".join(
1101 "\n".join(
1103 [
1102 [
1104 "import sys",
1103 "import sys",
1105 "print('hi')",
1104 "print('hi')",
1106 "print('hello', file=sys.stderr)",
1105 "print('hello', file=sys.stderr)",
1107 ]
1106 ]
1108 ),
1107 ),
1109 )
1108 )
1110 assert ip.user_ns["output"].strip() == "hi"
1109 assert ip.user_ns["output"].strip() == "hi"
1111 assert ip.user_ns["error"].strip() == "hello"
1110 assert ip.user_ns["error"].strip() == "hello"
1112
1111
1113
1112
1114 async def test_script_bg_out():
1113 async def test_script_bg_out():
1115 ip = get_ipython()
1114 ip = get_ipython()
1116 ip.run_cell_magic("script", f"--bg --out output {sys.executable}", "print('hi')")
1115 ip.run_cell_magic("script", f"--bg --out output {sys.executable}", "print('hi')")
1117 assert (await ip.user_ns["output"].read()).strip() == b"hi"
1116 assert (await ip.user_ns["output"].read()).strip() == b"hi"
1118 assert ip.user_ns["output"].at_eof()
1117 assert ip.user_ns["output"].at_eof()
1119
1118
1120
1119
1121 async def test_script_bg_err():
1120 async def test_script_bg_err():
1122 ip = get_ipython()
1121 ip = get_ipython()
1123 ip.run_cell_magic(
1122 ip.run_cell_magic(
1124 "script",
1123 "script",
1125 f"--bg --err error {sys.executable}",
1124 f"--bg --err error {sys.executable}",
1126 "import sys; print('hello', file=sys.stderr)",
1125 "import sys; print('hello', file=sys.stderr)",
1127 )
1126 )
1128 assert (await ip.user_ns["error"].read()).strip() == b"hello"
1127 assert (await ip.user_ns["error"].read()).strip() == b"hello"
1129 assert ip.user_ns["error"].at_eof()
1128 assert ip.user_ns["error"].at_eof()
1130
1129
1131
1130
1132 async def test_script_bg_out_err():
1131 async def test_script_bg_out_err():
1133 ip = get_ipython()
1132 ip = get_ipython()
1134 ip.run_cell_magic(
1133 ip.run_cell_magic(
1135 "script",
1134 "script",
1136 f"--bg --out output --err error {sys.executable}",
1135 f"--bg --out output --err error {sys.executable}",
1137 "\n".join(
1136 "\n".join(
1138 [
1137 [
1139 "import sys",
1138 "import sys",
1140 "print('hi')",
1139 "print('hi')",
1141 "print('hello', file=sys.stderr)",
1140 "print('hello', file=sys.stderr)",
1142 ]
1141 ]
1143 ),
1142 ),
1144 )
1143 )
1145 assert (await ip.user_ns["output"].read()).strip() == b"hi"
1144 assert (await ip.user_ns["output"].read()).strip() == b"hi"
1146 assert (await ip.user_ns["error"].read()).strip() == b"hello"
1145 assert (await ip.user_ns["error"].read()).strip() == b"hello"
1147 assert ip.user_ns["output"].at_eof()
1146 assert ip.user_ns["output"].at_eof()
1148 assert ip.user_ns["error"].at_eof()
1147 assert ip.user_ns["error"].at_eof()
1149
1148
1150
1149
1151 async def test_script_bg_proc():
1150 async def test_script_bg_proc():
1152 ip = get_ipython()
1151 ip = get_ipython()
1153 ip.run_cell_magic(
1152 ip.run_cell_magic(
1154 "script",
1153 "script",
1155 f"--bg --out output --proc p {sys.executable}",
1154 f"--bg --out output --proc p {sys.executable}",
1156 "\n".join(
1155 "\n".join(
1157 [
1156 [
1158 "import sys",
1157 "import sys",
1159 "print('hi')",
1158 "print('hi')",
1160 "print('hello', file=sys.stderr)",
1159 "print('hello', file=sys.stderr)",
1161 ]
1160 ]
1162 ),
1161 ),
1163 )
1162 )
1164 p = ip.user_ns["p"]
1163 p = ip.user_ns["p"]
1165 await p.wait()
1164 await p.wait()
1166 assert p.returncode == 0
1165 assert p.returncode == 0
1167 assert (await p.stdout.read()).strip() == b"hi"
1166 assert (await p.stdout.read()).strip() == b"hi"
1168 # not captured, so empty
1167 # not captured, so empty
1169 assert (await p.stderr.read()) == b""
1168 assert (await p.stderr.read()) == b""
1170 assert p.stdout.at_eof()
1169 assert p.stdout.at_eof()
1171 assert p.stderr.at_eof()
1170 assert p.stderr.at_eof()
1172
1171
1173
1172
1174 def test_script_defaults():
1173 def test_script_defaults():
1175 ip = get_ipython()
1174 ip = get_ipython()
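# Every interpreter that can be found on PATH should be exposed as a default cell magic.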
1176 for cmd in ['sh', 'bash', 'perl', 'ruby']:
1175 for cmd in ['sh', 'bash', 'perl', 'ruby']:
1177 try:
1176 try:
1178 find_cmd(cmd)
1177 find_cmd(cmd)
1179 except Exception:
1178 except Exception:
1180 pass
1179 pass
1181 else:
1180 else:
1182 assert cmd in ip.magics_manager.magics["cell"]
1181 assert cmd in ip.magics_manager.magics["cell"]
1183
1182
1184
1183
1185 @magics_class
1184 @magics_class
1186 class FooFoo(Magics):
1185 class FooFoo(Magics):
1187 """class with both %foo and %%foo magics"""
1186 """class with both %foo and %%foo magics"""
1188 @line_magic('foo')
1187 @line_magic('foo')
1189 def line_foo(self, line):
1188 def line_foo(self, line):
1190 "I am line foo"
1189 "I am line foo"
1191 pass
1190 pass
1192
1191
1193 @cell_magic("foo")
1192 @cell_magic("foo")
1194 def cell_foo(self, line, cell):
1193 def cell_foo(self, line, cell):
1195 "I am cell foo, not line foo"
1194 "I am cell foo, not line foo"
1196 pass
1195 pass
1197
1196
1198 def test_line_cell_info():
1197 def test_line_cell_info():
1199 """%%foo and %foo magics are distinguishable to inspect"""
1198 """%%foo and %foo magics are distinguishable to inspect"""
1200 ip = get_ipython()
1199 ip = get_ipython()
1201 ip.magics_manager.register(FooFoo)
1200 ip.magics_manager.register(FooFoo)
1202 oinfo = ip.object_inspect("foo")
1201 oinfo = ip.object_inspect("foo")
1203 assert oinfo["found"] is True
1202 assert oinfo["found"] is True
1204 assert oinfo["ismagic"] is True
1203 assert oinfo["ismagic"] is True
1205
1204
1206 oinfo = ip.object_inspect("%%foo")
1205 oinfo = ip.object_inspect("%%foo")
1207 assert oinfo["found"] is True
1206 assert oinfo["found"] is True
1208 assert oinfo["ismagic"] is True
1207 assert oinfo["ismagic"] is True
1209 assert oinfo["docstring"] == FooFoo.cell_foo.__doc__
1208 assert oinfo["docstring"] == FooFoo.cell_foo.__doc__
1210
1209
1211 oinfo = ip.object_inspect("%foo")
1210 oinfo = ip.object_inspect("%foo")
1212 assert oinfo["found"] is True
1211 assert oinfo["found"] is True
1213 assert oinfo["ismagic"] is True
1212 assert oinfo["ismagic"] is True
1214 assert oinfo["docstring"] == FooFoo.line_foo.__doc__
1213 assert oinfo["docstring"] == FooFoo.line_foo.__doc__
1215
1214
1216
1215
1217 def test_multiple_magics():
1216 def test_multiple_magics():
1218 ip = get_ipython()
1217 ip = get_ipython()
1219 foo1 = FooFoo(ip)
1218 foo1 = FooFoo(ip)
1220 foo2 = FooFoo(ip)
1219 foo2 = FooFoo(ip)
1221 mm = ip.magics_manager
1220 mm = ip.magics_manager
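# Registering a second instance rebinds %foo: the most recently registered instance wins.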
1222 mm.register(foo1)
1221 mm.register(foo1)
1223 assert mm.magics["line"]["foo"].__self__ is foo1
1222 assert mm.magics["line"]["foo"].__self__ is foo1
1224 mm.register(foo2)
1223 mm.register(foo2)
1225 assert mm.magics["line"]["foo"].__self__ is foo2
1224 assert mm.magics["line"]["foo"].__self__ is foo2
1226
1225
1227
1226
1228 def test_alias_magic():
1227 def test_alias_magic():
1229 """Test %alias_magic."""
1228 """Test %alias_magic."""
1230 ip = get_ipython()
1229 ip = get_ipython()
1231 mm = ip.magics_manager
1230 mm = ip.magics_manager
1232
1231
1233 # Basic operation: both cell and line magics are created, if possible.
1232 # Basic operation: both cell and line magics are created, if possible.
1234 ip.run_line_magic("alias_magic", "timeit_alias timeit")
1233 ip.run_line_magic("alias_magic", "timeit_alias timeit")
1235 assert "timeit_alias" in mm.magics["line"]
1234 assert "timeit_alias" in mm.magics["line"]
1236 assert "timeit_alias" in mm.magics["cell"]
1235 assert "timeit_alias" in mm.magics["cell"]
1237
1236
1238 # --cell is specified, line magic not created.
1237 # --cell is specified, line magic not created.
1239 ip.run_line_magic("alias_magic", "--cell timeit_cell_alias timeit")
1238 ip.run_line_magic("alias_magic", "--cell timeit_cell_alias timeit")
1240 assert "timeit_cell_alias" not in mm.magics["line"]
1239 assert "timeit_cell_alias" not in mm.magics["line"]
1241 assert "timeit_cell_alias" in mm.magics["cell"]
1240 assert "timeit_cell_alias" in mm.magics["cell"]
1242
1241
1243 # Test that line alias is created successfully.
1242 # Test that line alias is created successfully.
1244 ip.run_line_magic("alias_magic", "--line env_alias env")
1243 ip.run_line_magic("alias_magic", "--line env_alias env")
1245 assert ip.run_line_magic("env", "") == ip.run_line_magic("env_alias", "")
1244 assert ip.run_line_magic("env", "") == ip.run_line_magic("env_alias", "")
1246
1245
1247 # Test that line alias with parameters passed in is created successfully.
1246 # Test that line alias with parameters passed in is created successfully.
1248 ip.run_line_magic(
1247 ip.run_line_magic(
1249 "alias_magic", "--line history_alias history --params " + shlex.quote("3")
1248 "alias_magic", "--line history_alias history --params " + shlex.quote("3")
1250 )
1249 )
1251 assert "history_alias" in mm.magics["line"]
1250 assert "history_alias" in mm.magics["line"]
1252
1251
1253
1252
1254 def test_save():
1253 def test_save():
1255 """Test %save."""
1254 """Test %save."""
1256 ip = get_ipython()
1255 ip = get_ipython()
1257 ip.history_manager.reset() # Clear any existing history.
1256 ip.history_manager.reset() # Clear any existing history.
1258 cmds = ["a=1", "def b():\n return a**2", "print(a, b())"]
1257 cmds = ["a=1", "def b():\n return a**2", "print(a, b())"]
1259 for i, cmd in enumerate(cmds, start=1):
1258 for i, cmd in enumerate(cmds, start=1):
1260 ip.history_manager.store_inputs(i, cmd)
1259 ip.history_manager.store_inputs(i, cmd)
1261 with TemporaryDirectory() as tmpdir:
1260 with TemporaryDirectory() as tmpdir:
1262 file = os.path.join(tmpdir, "testsave.py")
1261 file = os.path.join(tmpdir, "testsave.py")
1263 ip.run_line_magic("save", "%s 1-10" % file)
1262 ip.run_line_magic("save", "%s 1-10" % file)
1264 content = Path(file).read_text(encoding="utf-8")
1263 content = Path(file).read_text(encoding="utf-8")
1265 assert content.count(cmds[0]) == 1
1264 assert content.count(cmds[0]) == 1
1266 assert "coding: utf-8" in content
1265 assert "coding: utf-8" in content
1267 ip.run_line_magic("save", "-a %s 1-10" % file)
1266 ip.run_line_magic("save", "-a %s 1-10" % file)
1268 content = Path(file).read_text(encoding="utf-8")
1267 content = Path(file).read_text(encoding="utf-8")
1269 assert content.count(cmds[0]) == 2
1268 assert content.count(cmds[0]) == 2
1270 assert "coding: utf-8" in content
1269 assert "coding: utf-8" in content
1271
1270
1272
1271
1273 def test_save_with_no_args():
1272 def test_save_with_no_args():
1274 ip = get_ipython()
1273 ip = get_ipython()
1275 ip.history_manager.reset() # Clear any existing history.
1274 ip.history_manager.reset() # Clear any existing history.
1276 cmds = ["a=1", "def b():\n return a**2", "print(a, b())", "%save"]
1275 cmds = ["a=1", "def b():\n return a**2", "print(a, b())", "%save"]
1277 for i, cmd in enumerate(cmds, start=1):
1276 for i, cmd in enumerate(cmds, start=1):
1278 ip.history_manager.store_inputs(i, cmd)
1277 ip.history_manager.store_inputs(i, cmd)
1279
1278
1280 with TemporaryDirectory() as tmpdir:
1279 with TemporaryDirectory() as tmpdir:
1281 path = os.path.join(tmpdir, "testsave.py")
1280 path = os.path.join(tmpdir, "testsave.py")
1282 ip.run_line_magic("save", path)
1281 ip.run_line_magic("save", path)
1283 content = Path(path).read_text(encoding="utf-8")
1282 content = Path(path).read_text(encoding="utf-8")
1284 expected_content = dedent(
1283 expected_content = dedent(
1285 """\
1284 """\
1286 # coding: utf-8
1285 # coding: utf-8
1287 a=1
1286 a=1
1288 def b():
1287 def b():
1289 return a**2
1288 return a**2
1290 print(a, b())
1289 print(a, b())
1291 """
1290 """
1292 )
1291 )
1293 assert content == expected_content
1292 assert content == expected_content
1294
1293
1295
1294
1296 def test_store():
1295 def test_store():
1297 """Test %store."""
1296 """Test %store."""
1298 ip = get_ipython()
1297 ip = get_ipython()
1299 ip.run_line_magic('load_ext', 'storemagic')
1298 ip.run_line_magic('load_ext', 'storemagic')
1300
1299
1301 # make sure the storage is empty
1300 # make sure the storage is empty
1302 ip.run_line_magic("store", "-z")
1301 ip.run_line_magic("store", "-z")
1303 ip.user_ns["var"] = 42
1302 ip.user_ns["var"] = 42
1304 ip.run_line_magic("store", "var")
1303 ip.run_line_magic("store", "var")
1305 ip.user_ns["var"] = 39
1304 ip.user_ns["var"] = 39
1306 ip.run_line_magic("store", "-r")
1305 ip.run_line_magic("store", "-r")
1307 assert ip.user_ns["var"] == 42
1306 assert ip.user_ns["var"] == 42
1308
1307
1309 ip.run_line_magic("store", "-d var")
1308 ip.run_line_magic("store", "-d var")
1310 ip.user_ns["var"] = 39
1309 ip.user_ns["var"] = 39
1311 ip.run_line_magic("store", "-r")
1310 ip.run_line_magic("store", "-r")
1312 assert ip.user_ns["var"] == 39
1311 assert ip.user_ns["var"] == 39
1313
1312
1314
1313
1315 def _run_edit_test(arg_s, exp_filename=None,
1314 def _run_edit_test(arg_s, exp_filename=None,
1316 exp_lineno=-1,
1315 exp_lineno=-1,
1317 exp_contents=None,
1316 exp_contents=None,
1318 exp_is_temp=None):
1317 exp_is_temp=None):
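# Helper: resolve %edit's target from arg_s and compare filename/lineno/contents/temp-flag with the expected values.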
1319 ip = get_ipython()
1318 ip = get_ipython()
1320 M = code.CodeMagics(ip)
1319 M = code.CodeMagics(ip)
1321 last_call = ['','']
1320 last_call = ['','']
1322 opts,args = M.parse_options(arg_s,'prxn:')
1321 opts,args = M.parse_options(arg_s,'prxn:')
1323 filename, lineno, is_temp = M._find_edit_target(ip, args, opts, last_call)
1322 filename, lineno, is_temp = M._find_edit_target(ip, args, opts, last_call)
1324
1323
1325 if exp_filename is not None:
1324 if exp_filename is not None:
1326 assert exp_filename == filename
1325 assert exp_filename == filename
1327 if exp_contents is not None:
1326 if exp_contents is not None:
1328 with io.open(filename, 'r', encoding='utf-8') as f:
1327 with io.open(filename, 'r', encoding='utf-8') as f:
1329 contents = f.read()
1328 contents = f.read()
1330 assert exp_contents == contents
1329 assert exp_contents == contents
1331 if exp_lineno != -1:
1330 if exp_lineno != -1:
1332 assert exp_lineno == lineno
1331 assert exp_lineno == lineno
1333 if exp_is_temp is not None:
1332 if exp_is_temp is not None:
1334 assert exp_is_temp == is_temp
1333 assert exp_is_temp == is_temp
1335
1334
1336
1335
1337 def test_edit_interactive():
1336 def test_edit_interactive():
1338 """%edit on interactively defined objects"""
1337 """%edit on interactively defined objects"""
1339 ip = get_ipython()
1338 ip = get_ipython()
1340 n = ip.execution_count
1339 n = ip.execution_count
1341 ip.run_cell("def foo(): return 1", store_history=True)
1340 ip.run_cell("def foo(): return 1", store_history=True)
1342
1341
1343 with pytest.raises(code.InteractivelyDefined) as e:
1342 with pytest.raises(code.InteractivelyDefined) as e:
1344 _run_edit_test("foo")
1343 _run_edit_test("foo")
1345 assert e.value.index == n
1344 assert e.value.index == n
1346
1345
1347
1346
1348 def test_edit_cell():
1347 def test_edit_cell():
1349 """%edit [cell id]"""
1348 """%edit [cell id]"""
1350 ip = get_ipython()
1349 ip = get_ipython()
1351
1350
1352 ip.run_cell("def foo(): return 1", store_history=True)
1351 ip.run_cell("def foo(): return 1", store_history=True)
1353
1352
1354 # test
1353 # test
1355 _run_edit_test("1", exp_contents=ip.user_ns['In'][1], exp_is_temp=True)
1354 _run_edit_test("1", exp_contents=ip.user_ns['In'][1], exp_is_temp=True)
1356
1355
1357 def test_edit_fname():
1356 def test_edit_fname():
1358 """%edit file"""
1357 """%edit file"""
1359 # test
1358 # test
1360 _run_edit_test("test file.py", exp_filename="test file.py")
1359 _run_edit_test("test file.py", exp_filename="test file.py")
1361
1360
1362 def test_bookmark():
1361 def test_bookmark():
1363 ip = get_ipython()
1362 ip = get_ipython()
1364 ip.run_line_magic('bookmark', 'bmname')
1363 ip.run_line_magic('bookmark', 'bmname')
1365 with tt.AssertPrints('bmname'):
1364 with tt.AssertPrints('bmname'):
1366 ip.run_line_magic('bookmark', '-l')
1365 ip.run_line_magic('bookmark', '-l')
1367 ip.run_line_magic('bookmark', '-d bmname')
1366 ip.run_line_magic('bookmark', '-d bmname')
1368
1367
1369 def test_ls_magic():
1368 def test_ls_magic():
1370 ip = get_ipython()
1369 ip = get_ipython()
1371 json_formatter = ip.display_formatter.formatters['application/json']
1370 json_formatter = ip.display_formatter.formatters['application/json']
1372 json_formatter.enabled = True
1371 json_formatter.enabled = True
1373 lsmagic = ip.run_line_magic("lsmagic", "")
1372 lsmagic = ip.run_line_magic("lsmagic", "")
1374 with warnings.catch_warnings(record=True) as w:
1373 with warnings.catch_warnings(record=True) as w:
1375 j = json_formatter(lsmagic)
1374 j = json_formatter(lsmagic)
1376 assert sorted(j) == ["cell", "line"]
1375 assert sorted(j) == ["cell", "line"]
1377 assert w == [] # no warnings
1376 assert w == [] # no warnings
1378
1377
1379
1378
1380 def test_strip_initial_indent():
1379 def test_strip_initial_indent():
1381 def sii(s):
1380 def sii(s):
1382 lines = s.splitlines()
1381 lines = s.splitlines()
1383 return '\n'.join(code.strip_initial_indent(lines))
1382 return '\n'.join(code.strip_initial_indent(lines))
1384
1383
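# The indentation of the first line is stripped from the block until a line with less indentation is reached.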
1385 assert sii(" a = 1\nb = 2") == "a = 1\nb = 2"
1384 assert sii(" a = 1\nb = 2") == "a = 1\nb = 2"
1386 assert sii(" a\n b\nc") == "a\n b\nc"
1385 assert sii(" a\n b\nc") == "a\n b\nc"
1387 assert sii("a\n b") == "a\n b"
1386 assert sii("a\n b") == "a\n b"
1388
1387
1389 def test_logging_magic_quiet_from_arg():
1388 def test_logging_magic_quiet_from_arg():
1390 _ip.config.LoggingMagics.quiet = False
1389 _ip.config.LoggingMagics.quiet = False
1391 lm = logging.LoggingMagics(shell=_ip)
1390 lm = logging.LoggingMagics(shell=_ip)
1392 with TemporaryDirectory() as td:
1391 with TemporaryDirectory() as td:
1393 try:
1392 try:
1394 with tt.AssertNotPrints(re.compile("Activating.*")):
1393 with tt.AssertNotPrints(re.compile("Activating.*")):
1395 lm.logstart('-q {}'.format(
1394 lm.logstart('-q {}'.format(
1396 os.path.join(td, "quiet_from_arg.log")))
1395 os.path.join(td, "quiet_from_arg.log")))
1397 finally:
1396 finally:
1398 _ip.logger.logstop()
1397 _ip.logger.logstop()
1399
1398
1400 def test_logging_magic_quiet_from_config():
1399 def test_logging_magic_quiet_from_config():
1401 _ip.config.LoggingMagics.quiet = True
1400 _ip.config.LoggingMagics.quiet = True
1402 lm = logging.LoggingMagics(shell=_ip)
1401 lm = logging.LoggingMagics(shell=_ip)
1403 with TemporaryDirectory() as td:
1402 with TemporaryDirectory() as td:
1404 try:
1403 try:
1405 with tt.AssertNotPrints(re.compile("Activating.*")):
1404 with tt.AssertNotPrints(re.compile("Activating.*")):
1406 lm.logstart(os.path.join(td, "quiet_from_config.log"))
1405 lm.logstart(os.path.join(td, "quiet_from_config.log"))
1407 finally:
1406 finally:
1408 _ip.logger.logstop()
1407 _ip.logger.logstop()
1409
1408
1410
1409
1411 def test_logging_magic_not_quiet():
1410 def test_logging_magic_not_quiet():
1412 _ip.config.LoggingMagics.quiet = False
1411 _ip.config.LoggingMagics.quiet = False
1413 lm = logging.LoggingMagics(shell=_ip)
1412 lm = logging.LoggingMagics(shell=_ip)
1414 with TemporaryDirectory() as td:
1413 with TemporaryDirectory() as td:
1415 try:
1414 try:
1416 with tt.AssertPrints(re.compile("Activating.*")):
1415 with tt.AssertPrints(re.compile("Activating.*")):
1417 lm.logstart(os.path.join(td, "not_quiet.log"))
1416 lm.logstart(os.path.join(td, "not_quiet.log"))
1418 finally:
1417 finally:
1419 _ip.logger.logstop()
1418 _ip.logger.logstop()
1420
1419
1421
1420
1422 def test_time_no_var_expand():
1421 def test_time_no_var_expand():
1423 _ip.user_ns["a"] = 5
1422 _ip.user_ns["a"] = 5
1424 _ip.user_ns["b"] = []
1423 _ip.user_ns["b"] = []
1425 _ip.run_line_magic("time", 'b.append("{a}")')
1424 _ip.run_line_magic("time", 'b.append("{a}")')
1426 assert _ip.user_ns["b"] == ["{a}"]
1425 assert _ip.user_ns["b"] == ["{a}"]
1427
1426
1428
1427
1429 # This is slow, so keep it at the end for local testing.
1428 # This is slow, so keep it at the end for local testing.
1430 def test_timeit_arguments():
1429 def test_timeit_arguments():
1431 "Test valid timeit arguments, should not cause SyntaxError (GH #1269)"
1430 "Test valid timeit arguments, should not cause SyntaxError (GH #1269)"
1432 _ip.run_line_magic("timeit", "-n1 -r1 a=('#')")
1431 _ip.run_line_magic("timeit", "-n1 -r1 a=('#')")
1433
1432
1434
1433
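# Source of a throwaway extension module; test_lazy_magics writes it to a temp file and registers it lazily.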
1435 MINIMAL_LAZY_MAGIC = """
1434 MINIMAL_LAZY_MAGIC = """
1436 from IPython.core.magic import (
1435 from IPython.core.magic import (
1437 Magics,
1436 Magics,
1438 magics_class,
1437 magics_class,
1439 line_magic,
1438 line_magic,
1440 cell_magic,
1439 cell_magic,
1441 )
1440 )
1442
1441
1443
1442
1444 @magics_class
1443 @magics_class
1445 class LazyMagics(Magics):
1444 class LazyMagics(Magics):
1446 @line_magic
1445 @line_magic
1447 def lazy_line(self, line):
1446 def lazy_line(self, line):
1448 print("Lazy Line")
1447 print("Lazy Line")
1449
1448
1450 @cell_magic
1449 @cell_magic
1451 def lazy_cell(self, line, cell):
1450 def lazy_cell(self, line, cell):
1452 print("Lazy Cell")
1451 print("Lazy Cell")
1453
1452
1454
1453
1455 def load_ipython_extension(ipython):
1454 def load_ipython_extension(ipython):
1456 ipython.register_magics(LazyMagics)
1455 ipython.register_magics(LazyMagics)
1457 """
1456 """
1458
1457
1459
1458
1460 def test_lazy_magics():
1459 def test_lazy_magics():
1461 with pytest.raises(UsageError):
1460 with pytest.raises(UsageError):
1462 ip.run_line_magic("lazy_line", "")
1461 ip.run_line_magic("lazy_line", "")
1463
1462
1464 startdir = os.getcwd()
1463 startdir = os.getcwd()
1465
1464
1466 with TemporaryDirectory() as tmpdir:
1465 with TemporaryDirectory() as tmpdir:
1467 with prepended_to_syspath(tmpdir):
1466 with prepended_to_syspath(tmpdir):
1468 ptempdir = Path(tmpdir)
1467 ptempdir = Path(tmpdir)
1469 tf = ptempdir / "lazy_magic_module.py"
1468 tf = ptempdir / "lazy_magic_module.py"
1470 tf.write_text(MINIMAL_LAZY_MAGIC)
1469 tf.write_text(MINIMAL_LAZY_MAGIC)
1471 ip.magics_manager.register_lazy("lazy_line", Path(tf.name).name[:-3])
1470 ip.magics_manager.register_lazy("lazy_line", Path(tf.name).name[:-3])
1472 with tt.AssertPrints("Lazy Line"):
1471 with tt.AssertPrints("Lazy Line"):
1473 ip.run_line_magic("lazy_line", "")
1472 ip.run_line_magic("lazy_line", "")
1474
1473
1475
1474
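# Module source used below; it prints an extra line when executed as a script (__main__) rather than imported.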
1476 TEST_MODULE = """
1475 TEST_MODULE = """
1477 print('Loaded my_tmp')
1476 print('Loaded my_tmp')
1478 if __name__ == "__main__":
1477 if __name__ == "__main__":
1479 print('I just ran a script')
1478 print('I just ran a script')
1480 """
1479 """
1481
1480
1482 def test_run_module_from_import_hook():
1481 def test_run_module_from_import_hook():
1483 "Test that a module can be loaded via an import hook"
1482 "Test that a module can be loaded via an import hook"
1484 with TemporaryDirectory() as tmpdir:
1483 with TemporaryDirectory() as tmpdir:
1485 fullpath = os.path.join(tmpdir, "my_tmp.py")
1484 fullpath = os.path.join(tmpdir, "my_tmp.py")
1486 Path(fullpath).write_text(TEST_MODULE, encoding="utf-8")
1485 Path(fullpath).write_text(TEST_MODULE, encoding="utf-8")
1487
1486
1488 import importlib.abc
1487 import importlib.abc
1489 import importlib.util
1488 import importlib.util
1490
1489
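# Minimal meta-path finder/loader that serves the temporary "my_tmp" module from fullpath.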
1491 class MyTempImporter(importlib.abc.MetaPathFinder, importlib.abc.SourceLoader):
1490 class MyTempImporter(importlib.abc.MetaPathFinder, importlib.abc.SourceLoader):
1492 def find_spec(self, fullname, path, target=None):
1491 def find_spec(self, fullname, path, target=None):
1493 if fullname == "my_tmp":
1492 if fullname == "my_tmp":
1494 return importlib.util.spec_from_loader(fullname, self)
1493 return importlib.util.spec_from_loader(fullname, self)
1495
1494
1496 def get_filename(self, fullname):
1495 def get_filename(self, fullname):
1497 assert fullname == "my_tmp"
1496 assert fullname == "my_tmp"
1498 return fullpath
1497 return fullpath
1499
1498
1500 def get_data(self, path):
1499 def get_data(self, path):
1501 assert Path(path).samefile(fullpath)
1500 assert Path(path).samefile(fullpath)
1502 return Path(fullpath).read_text(encoding="utf-8")
1501 return Path(fullpath).read_text(encoding="utf-8")
1503
1502
1504 sys.meta_path.insert(0, MyTempImporter())
1503 sys.meta_path.insert(0, MyTempImporter())
1505
1504
1506 with capture_output() as captured:
1505 with capture_output() as captured:
1507 _ip.run_line_magic("run", "-m my_tmp")
1506 _ip.run_line_magic("run", "-m my_tmp")
1508 _ip.run_cell("import my_tmp")
1507 _ip.run_cell("import my_tmp")
1509
1508
1510 output = "Loaded my_tmp\nI just ran a script\nLoaded my_tmp\n"
1509 output = "Loaded my_tmp\nI just ran a script\nLoaded my_tmp\n"
1511 assert output == captured.stdout
1510 assert output == captured.stdout
1512
1511
1513 sys.meta_path.pop(0)
1512 sys.meta_path.pop(0)
@@ -1,271 +1,270 b''
1 """Tests for pylab tools module.
1 """Tests for pylab tools module.
2 """
2 """
3
3
4 # Copyright (c) IPython Development Team.
4 # Copyright (c) IPython Development Team.
5 # Distributed under the terms of the Modified BSD License.
5 # Distributed under the terms of the Modified BSD License.
6
6
7
7
8 from binascii import a2b_base64
8 from binascii import a2b_base64
9 from io import BytesIO
9 from io import BytesIO
10
10
11 import pytest
11 import pytest
12
12
13 matplotlib = pytest.importorskip("matplotlib")
13 matplotlib = pytest.importorskip("matplotlib")
14 matplotlib.use('Agg')
14 matplotlib.use('Agg')
15 from matplotlib.figure import Figure
15 from matplotlib.figure import Figure
16
16
17 from matplotlib import pyplot as plt
17 from matplotlib import pyplot as plt
18 from matplotlib_inline import backend_inline
18 from matplotlib_inline import backend_inline
19 import numpy as np
19 import numpy as np
20
20
21 from IPython.core.getipython import get_ipython
21 from IPython.core.getipython import get_ipython
22 from IPython.core.interactiveshell import InteractiveShell
22 from IPython.core.interactiveshell import InteractiveShell
23 from IPython.core.display import _PNG, _JPEG
23 from IPython.core.display import _PNG, _JPEG
24 from .. import pylabtools as pt
24 from .. import pylabtools as pt
25
25
26 from IPython.testing import decorators as dec
26 from IPython.testing import decorators as dec
27
27
28
28
29 def test_figure_to_svg():
29 def test_figure_to_svg():
30 # simple empty-figure test
30 # simple empty-figure test
31 fig = plt.figure()
31 fig = plt.figure()
32 assert pt.print_figure(fig, "svg") is None
32 assert pt.print_figure(fig, "svg") is None
33
33
34 plt.close('all')
34 plt.close('all')
35
35
36 # simple check for at least svg-looking output
36 # simple check for at least svg-looking output
37 fig = plt.figure()
37 fig = plt.figure()
38 ax = fig.add_subplot(1,1,1)
38 ax = fig.add_subplot(1,1,1)
39 ax.plot([1,2,3])
39 ax.plot([1,2,3])
40 plt.draw()
40 plt.draw()
41 svg = pt.print_figure(fig, "svg")[:100].lower()
41 svg = pt.print_figure(fig, "svg")[:100].lower()
42 assert "doctype svg" in svg
42 assert "doctype svg" in svg
43
43
44
44
45 def _check_pil_jpeg_bytes():
45 def _check_pil_jpeg_bytes():
46 """Skip if PIL can't write JPEGs to BytesIO objects"""
46 """Skip if PIL can't write JPEGs to BytesIO objects"""
47 # Old PIL's JPEG plugin could not write to BytesIO objects;
47 # Old PIL's JPEG plugin could not write to BytesIO objects;
48 # Pillow fixed this.
48 # Pillow fixed this.
49 from PIL import Image
49 from PIL import Image
50 buf = BytesIO()
50 buf = BytesIO()
51 img = Image.new("RGB", (4,4))
51 img = Image.new("RGB", (4,4))
52 try:
52 try:
53 img.save(buf, 'jpeg')
53 img.save(buf, 'jpeg')
54 except Exception as e:
54 except Exception as e:
55 ename = e.__class__.__name__
55 ename = e.__class__.__name__
56 raise pytest.skip("PIL can't write JPEG to BytesIO: %s: %s" % (ename, e)) from e
56 raise pytest.skip("PIL can't write JPEG to BytesIO: %s: %s" % (ename, e)) from e
57
57
58 @dec.skip_without("PIL.Image")
58 @dec.skip_without("PIL.Image")
59 def test_figure_to_jpeg():
59 def test_figure_to_jpeg():
60 _check_pil_jpeg_bytes()
60 _check_pil_jpeg_bytes()
61 # simple check for at least jpeg-looking output
61 # simple check for at least jpeg-looking output
62 fig = plt.figure()
62 fig = plt.figure()
63 ax = fig.add_subplot(1,1,1)
63 ax = fig.add_subplot(1,1,1)
64 ax.plot([1,2,3])
64 ax.plot([1,2,3])
65 plt.draw()
65 plt.draw()
66 jpeg = pt.print_figure(fig, 'jpeg', pil_kwargs={'optimize': 50})[:100].lower()
66 jpeg = pt.print_figure(fig, 'jpeg', pil_kwargs={'optimize': 50})[:100].lower()
67 assert jpeg.startswith(_JPEG)
67 assert jpeg.startswith(_JPEG)
68
68
69 def test_retina_figure():
69 def test_retina_figure():
70 # simple empty-figure test
70 # simple empty-figure test
71 fig = plt.figure()
71 fig = plt.figure()
72 assert pt.retina_figure(fig) is None
72 assert pt.retina_figure(fig) is None
73 plt.close('all')
73 plt.close('all')
74
74
75 fig = plt.figure()
75 fig = plt.figure()
76 ax = fig.add_subplot(1,1,1)
76 ax = fig.add_subplot(1,1,1)
77 ax.plot([1,2,3])
77 ax.plot([1,2,3])
78 plt.draw()
78 plt.draw()
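# retina_figure returns PNG bytes plus width/height metadata for high-DPI ("retina") display.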
79 png, md = pt.retina_figure(fig)
79 png, md = pt.retina_figure(fig)
80 assert png.startswith(_PNG)
80 assert png.startswith(_PNG)
81 assert "width" in md
81 assert "width" in md
82 assert "height" in md
82 assert "height" in md
83
83
84
84
85 _fmt_mime_map = {
85 _fmt_mime_map = {
86 'png': 'image/png',
86 'png': 'image/png',
87 'jpeg': 'image/jpeg',
87 'jpeg': 'image/jpeg',
88 'pdf': 'application/pdf',
88 'pdf': 'application/pdf',
89 'retina': 'image/png',
89 'retina': 'image/png',
90 'svg': 'image/svg+xml',
90 'svg': 'image/svg+xml',
91 }
91 }
92
92
93 def test_select_figure_formats_str():
93 def test_select_figure_formats_str():
94 ip = get_ipython()
94 ip = get_ipython()
95 for fmt, active_mime in _fmt_mime_map.items():
95 for fmt, active_mime in _fmt_mime_map.items():
96 pt.select_figure_formats(ip, fmt)
96 pt.select_figure_formats(ip, fmt)
97 for mime, f in ip.display_formatter.formatters.items():
97 for mime, f in ip.display_formatter.formatters.items():
98 if mime == active_mime:
98 if mime == active_mime:
99 assert Figure in f
99 assert Figure in f
100 else:
100 else:
101 assert Figure not in f
101 assert Figure not in f
102
102
103 def test_select_figure_formats_kwargs():
103 def test_select_figure_formats_kwargs():
104 ip = get_ipython()
104 ip = get_ipython()
105 kwargs = dict(bbox_inches="tight")
105 kwargs = dict(bbox_inches="tight")
106 pt.select_figure_formats(ip, "png", **kwargs)
106 pt.select_figure_formats(ip, "png", **kwargs)
107 formatter = ip.display_formatter.formatters["image/png"]
107 formatter = ip.display_formatter.formatters["image/png"]
108 f = formatter.lookup_by_type(Figure)
108 f = formatter.lookup_by_type(Figure)
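# The registered Figure formatter should have captured our kwargs plus the base64/fmt defaults.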
109 cell = f.keywords
109 cell = f.keywords
110 expected = kwargs
110 expected = kwargs
111 expected["base64"] = True
111 expected["base64"] = True
112 expected["fmt"] = "png"
112 expected["fmt"] = "png"
113 assert cell == expected
113 assert cell == expected
114
114
115 # check that the formatter doesn't raise
115 # check that the formatter doesn't raise
116 fig = plt.figure()
116 fig = plt.figure()
117 ax = fig.add_subplot(1,1,1)
117 ax = fig.add_subplot(1,1,1)
118 ax.plot([1,2,3])
118 ax.plot([1,2,3])
119 plt.draw()
119 plt.draw()
120 formatter.enabled = True
120 formatter.enabled = True
121 png = formatter(fig)
121 png = formatter(fig)
122 assert isinstance(png, str)
122 assert isinstance(png, str)
123 png_bytes = a2b_base64(png)
123 png_bytes = a2b_base64(png)
124 assert png_bytes.startswith(_PNG)
124 assert png_bytes.startswith(_PNG)
125
125
126 def test_select_figure_formats_set():
126 def test_select_figure_formats_set():
127 ip = get_ipython()
127 ip = get_ipython()
128 for fmts in [
128 for fmts in [
129 {'png', 'svg'},
129 {'png', 'svg'},
130 ['png'],
130 ['png'],
131 ('jpeg', 'pdf', 'retina'),
131 ('jpeg', 'pdf', 'retina'),
132 {'svg'},
132 {'svg'},
133 ]:
133 ]:
134 active_mimes = {_fmt_mime_map[fmt] for fmt in fmts}
134 active_mimes = {_fmt_mime_map[fmt] for fmt in fmts}
135 pt.select_figure_formats(ip, fmts)
135 pt.select_figure_formats(ip, fmts)
136 for mime, f in ip.display_formatter.formatters.items():
136 for mime, f in ip.display_formatter.formatters.items():
137 if mime in active_mimes:
137 if mime in active_mimes:
138 assert Figure in f
138 assert Figure in f
139 else:
139 else:
140 assert Figure not in f
140 assert Figure not in f
141
141
142 def test_select_figure_formats_bad():
142 def test_select_figure_formats_bad():
143 ip = get_ipython()
143 ip = get_ipython()
144 with pytest.raises(ValueError):
144 with pytest.raises(ValueError):
145 pt.select_figure_formats(ip, 'foo')
145 pt.select_figure_formats(ip, 'foo')
146 with pytest.raises(ValueError):
146 with pytest.raises(ValueError):
147 pt.select_figure_formats(ip, {'png', 'foo'})
147 pt.select_figure_formats(ip, {'png', 'foo'})
148 with pytest.raises(ValueError):
148 with pytest.raises(ValueError):
149 pt.select_figure_formats(ip, ['retina', 'pdf', 'bar', 'bad'])
149 pt.select_figure_formats(ip, ['retina', 'pdf', 'bar', 'bad'])
150
150
151 def test_import_pylab():
151 def test_import_pylab():
152 ns = {}
152 ns = {}
153 pt.import_pylab(ns, import_all=False)
153 pt.import_pylab(ns, import_all=False)
154 assert "plt" in ns
154 assert "plt" in ns
155 assert ns["np"] == np
155 assert ns["np"] == np
156
156
157
157
158 class TestPylabSwitch(object):
158 class TestPylabSwitch(object):
159 class Shell(InteractiveShell):
159 class Shell(InteractiveShell):
160 def init_history(self):
160 def init_history(self):
161 """Sets up the command history, and starts regular autosaves."""
161 """Sets up the command history, and starts regular autosaves."""
162 self.config.HistoryManager.hist_file = ":memory:"
162 self.config.HistoryManager.hist_file = ":memory:"
163 super().init_history()
163 super().init_history()
164
164
165 def enable_gui(self, gui):
165 def enable_gui(self, gui):
166 pass
166 pass
167
167
168 def setup(self):
168 def setup(self):
169 import matplotlib
169 import matplotlib
170 def act_mpl(backend):
170 def act_mpl(backend):
171 matplotlib.rcParams['backend'] = backend
171 matplotlib.rcParams['backend'] = backend
172
172
173 # Save rcParams since they get modified
173 # Save rcParams since they get modified
174 self._saved_rcParams = matplotlib.rcParams
174 self._saved_rcParams = matplotlib.rcParams
175 self._saved_rcParamsOrig = matplotlib.rcParamsOrig
175 self._saved_rcParamsOrig = matplotlib.rcParamsOrig
176 matplotlib.rcParams = dict(backend='Qt4Agg')
176 matplotlib.rcParams = dict(backend='Qt4Agg')
177 matplotlib.rcParamsOrig = dict(backend='Qt4Agg')
177 matplotlib.rcParamsOrig = dict(backend='Qt4Agg')
178
178
179 # Mock out functions
179 # Mock out functions
180 self._save_am = pt.activate_matplotlib
180 self._save_am = pt.activate_matplotlib
181 pt.activate_matplotlib = act_mpl
181 pt.activate_matplotlib = act_mpl
182 self._save_ip = pt.import_pylab
182 self._save_ip = pt.import_pylab
183 pt.import_pylab = lambda *a,**kw:None
183 pt.import_pylab = lambda *a,**kw:None
184 self._save_cis = backend_inline.configure_inline_support
184 self._save_cis = backend_inline.configure_inline_support
185 backend_inline.configure_inline_support = lambda *a, **kw: None
185 backend_inline.configure_inline_support = lambda *a, **kw: None
186
186
187 def teardown(self):
187 def teardown(self):
188 pt.activate_matplotlib = self._save_am
188 pt.activate_matplotlib = self._save_am
189 pt.import_pylab = self._save_ip
189 pt.import_pylab = self._save_ip
190 backend_inline.configure_inline_support = self._save_cis
190 backend_inline.configure_inline_support = self._save_cis
191 import matplotlib
191 import matplotlib
192 matplotlib.rcParams = self._saved_rcParams
192 matplotlib.rcParams = self._saved_rcParams
193 matplotlib.rcParamsOrig = self._saved_rcParamsOrig
193 matplotlib.rcParamsOrig = self._saved_rcParamsOrig
194
194
195 def test_qt(self):
195 def test_qt(self):
196
197 s = self.Shell()
196 s = self.Shell()
198 gui, backend = s.enable_matplotlib(None)
197 gui, backend = s.enable_matplotlib(None)
199 assert gui == "qt"
198 assert gui == "qt"
200 assert s.pylab_gui_select == "qt"
199 assert s.pylab_gui_select == "qt"
201
200
202 gui, backend = s.enable_matplotlib("inline")
201 gui, backend = s.enable_matplotlib("inline")
203 assert gui == "inline"
202 assert gui == "inline"
204 assert s.pylab_gui_select == "qt"
203 assert s.pylab_gui_select == "qt"
205
204
206 gui, backend = s.enable_matplotlib("qt")
205 gui, backend = s.enable_matplotlib("qt")
207 assert gui == "qt"
206 assert gui == "qt"
208 assert s.pylab_gui_select == "qt"
207 assert s.pylab_gui_select == "qt"
209
208
210 gui, backend = s.enable_matplotlib("inline")
209 gui, backend = s.enable_matplotlib("inline")
211 assert gui == "inline"
210 assert gui == "inline"
212 assert s.pylab_gui_select == "qt"
211 assert s.pylab_gui_select == "qt"
213
212
214 gui, backend = s.enable_matplotlib()
213 gui, backend = s.enable_matplotlib()
215 assert gui == "qt"
214 assert gui == "qt"
216 assert s.pylab_gui_select == "qt"
215 assert s.pylab_gui_select == "qt"
217
216
218 def test_inline(self):
217 def test_inline(self):
219 s = self.Shell()
218 s = self.Shell()
220 gui, backend = s.enable_matplotlib("inline")
219 gui, backend = s.enable_matplotlib("inline")
221 assert gui == "inline"
220 assert gui == "inline"
222 assert s.pylab_gui_select == None
221 assert s.pylab_gui_select == None
223
222
224 gui, backend = s.enable_matplotlib("inline")
223 gui, backend = s.enable_matplotlib("inline")
225 assert gui == "inline"
224 assert gui == "inline"
226 assert s.pylab_gui_select == None
225 assert s.pylab_gui_select == None
227
226
228 gui, backend = s.enable_matplotlib("qt")
227 gui, backend = s.enable_matplotlib("qt")
229 assert gui == "qt"
228 assert gui == "qt"
230 assert s.pylab_gui_select == "qt"
229 assert s.pylab_gui_select == "qt"
231
230
232 def test_inline_twice(self):
231 def test_inline_twice(self):
233 "Using '%matplotlib inline' twice should not reset formatters"
232 "Using '%matplotlib inline' twice should not reset formatters"
234
233
235 ip = self.Shell()
234 ip = self.Shell()
236 gui, backend = ip.enable_matplotlib("inline")
235 gui, backend = ip.enable_matplotlib("inline")
237 assert gui == "inline"
236 assert gui == "inline"
238
237
239 fmts = {'png'}
238 fmts = {'png'}
240 active_mimes = {_fmt_mime_map[fmt] for fmt in fmts}
239 active_mimes = {_fmt_mime_map[fmt] for fmt in fmts}
241 pt.select_figure_formats(ip, fmts)
240 pt.select_figure_formats(ip, fmts)
242
241
243 gui, backend = ip.enable_matplotlib("inline")
242 gui, backend = ip.enable_matplotlib("inline")
244 assert gui == "inline"
243 assert gui == "inline"
245
244
246 for mime, f in ip.display_formatter.formatters.items():
245 for mime, f in ip.display_formatter.formatters.items():
247 if mime in active_mimes:
246 if mime in active_mimes:
248 assert Figure in f
247 assert Figure in f
249 else:
248 else:
250 assert Figure not in f
249 assert Figure not in f
251
250
252 def test_qt_gtk(self):
251 def test_qt_gtk(self):
253 s = self.Shell()
252 s = self.Shell()
254 gui, backend = s.enable_matplotlib("qt")
253 gui, backend = s.enable_matplotlib("qt")
255 assert gui == "qt"
254 assert gui == "qt"
256 assert s.pylab_gui_select == "qt"
255 assert s.pylab_gui_select == "qt"
257
256
258 gui, backend = s.enable_matplotlib("gtk")
257 gui, backend = s.enable_matplotlib("gtk")
259 assert gui == "qt"
258 assert gui == "qt"
260 assert s.pylab_gui_select == "qt"
259 assert s.pylab_gui_select == "qt"
261
260
262
261
263 def test_no_gui_backends():
262 def test_no_gui_backends():
264 for k in ['agg', 'svg', 'pdf', 'ps']:
263 for k in ['agg', 'svg', 'pdf', 'ps']:
265 assert k not in pt.backend2gui
264 assert k not in pt.backend2gui
266
265
267
266
268 def test_figure_no_canvas():
267 def test_figure_no_canvas():
269 fig = Figure()
268 fig = Figure()
270 fig.canvas = None
269 fig.canvas = None
271 pt.print_figure(fig)
270 pt.print_figure(fig)
@@ -1,1354 +1,1377 b''
1 # -*- coding: utf-8 -*-
1 # -*- coding: utf-8 -*-
2 """
2 """
3 Verbose and colourful traceback formatting.
3 Verbose and colourful traceback formatting.
4
4
5 **ColorTB**
5 **ColorTB**
6
6
7 I've always found it a bit hard to visually parse tracebacks in Python. The
7 I've always found it a bit hard to visually parse tracebacks in Python. The
8 ColorTB class is a solution to that problem. It colors the different parts of a
8 ColorTB class is a solution to that problem. It colors the different parts of a
9 traceback in a manner similar to what you would expect from a syntax-highlighting
9 traceback in a manner similar to what you would expect from a syntax-highlighting
10 text editor.
10 text editor.
11
11
12 Installation instructions for ColorTB::
12 Installation instructions for ColorTB::
13
13
14 import sys,ultratb
14 import sys,ultratb
15 sys.excepthook = ultratb.ColorTB()
15 sys.excepthook = ultratb.ColorTB()
16
16
17 **VerboseTB**
17 **VerboseTB**
18
18
19 I've also included a port of Ka-Ping Yee's "cgitb.py" that produces all kinds
19 I've also included a port of Ka-Ping Yee's "cgitb.py" that produces all kinds
20 of useful info when a traceback occurs. Ping originally had it spit out HTML
20 of useful info when a traceback occurs. Ping originally had it spit out HTML
21 and intended it for CGI programmers, but why should they have all the fun? I
21 and intended it for CGI programmers, but why should they have all the fun? I
22 altered it to spit out colored text to the terminal. It's a bit overwhelming,
22 altered it to spit out colored text to the terminal. It's a bit overwhelming,
23 but kind of neat, and maybe useful for long-running programs that you believe
23 but kind of neat, and maybe useful for long-running programs that you believe
24 are bug-free. If a crash *does* occur in that type of program you want details.
24 are bug-free. If a crash *does* occur in that type of program you want details.
25 Give it a shot--you'll love it or you'll hate it.
25 Give it a shot--you'll love it or you'll hate it.
26
26
27 .. note::
27 .. note::
28
28
29 The Verbose mode prints the variables currently visible where the exception
29 The Verbose mode prints the variables currently visible where the exception
30 happened (shortening their strings if too long). This can potentially be
30 happened (shortening their strings if too long). This can potentially be
31 very slow, if you happen to have a huge data structure whose string
31 very slow, if you happen to have a huge data structure whose string
32 representation is complex to compute. Your computer may appear to freeze for
32 representation is complex to compute. Your computer may appear to freeze for
33 a while with CPU usage at 100%. If this occurs, you can cancel the traceback
33 a while with CPU usage at 100%. If this occurs, you can cancel the traceback
34 with Ctrl-C (maybe hitting it more than once).
34 with Ctrl-C (maybe hitting it more than once).
35
35
36 If you encounter this kind of situation often, you may want to use the
36 If you encounter this kind of situation often, you may want to use the
37 Verbose_novars mode instead of the regular Verbose, which avoids formatting
37 Verbose_novars mode instead of the regular Verbose, which avoids formatting
38 variables (but otherwise includes the information and context given by
38 variables (but otherwise includes the information and context given by
39 Verbose).
39 Verbose).
40
40
41 .. note::
41 .. note::
42
42
43 The verbose mode prints all variables in the stack, which means it can
43 The verbose mode prints all variables in the stack, which means it can
44 potentially leak sensitive information like access keys or unencrypted
44 potentially leak sensitive information like access keys or unencrypted
45 passwords.
45 passwords.
46
46
47 Installation instructions for VerboseTB::
47 Installation instructions for VerboseTB::
48
48
49 import sys,ultratb
49 import sys,ultratb
50 sys.excepthook = ultratb.VerboseTB()
50 sys.excepthook = ultratb.VerboseTB()
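Inside a running IPython session you normally do not install the hook yourself; as a rough equivalent, the ``%xmode`` magic switches between the ``Plain``, ``Context`` and ``Verbose`` formatters::

    %xmode Verbose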
51
51
52 Note: Much of the code in this module was lifted verbatim from the standard
52 Note: Much of the code in this module was lifted verbatim from the standard
53 library module 'traceback.py' and Ka-Ping Yee's 'cgitb.py'.
53 library module 'traceback.py' and Ka-Ping Yee's 'cgitb.py'.
54
54
55 Color schemes
55 Color schemes
56 -------------
56 -------------
57
57
58 The colors are defined in the class TBTools through the use of the
58 The colors are defined in the class TBTools through the use of the
59 ColorSchemeTable class. Currently the following exist:
59 ColorSchemeTable class. Currently the following exist:
60
60
61 - NoColor: allows all of this module to be used in any terminal (the color
61 - NoColor: allows all of this module to be used in any terminal (the color
62 escapes are just dummy blank strings).
62 escapes are just dummy blank strings).
63
63
64 - Linux: is meant to look good in a terminal like the Linux console (black
64 - Linux: is meant to look good in a terminal like the Linux console (black
65 or very dark background).
65 or very dark background).
66
66
67 - LightBG: similar to Linux but swaps dark/light colors to be more readable
67 - LightBG: similar to Linux but swaps dark/light colors to be more readable
68 in light background terminals.
68 in light background terminals.
69
69
70 - Neutral: a neutral color scheme that should be readable on both light and
70 - Neutral: a neutral color scheme that should be readable on both light and
71 dark backgrounds.
71 dark backgrounds.
72
72
73 You can implement other color schemes easily; the syntax is fairly
73 You can implement other color schemes easily; the syntax is fairly
74 self-explanatory. Please send back new schemes you develop to the author for
74 self-explanatory. Please send back new schemes you develop to the author for
75 possible inclusion in future releases.
75 possible inclusion in future releases.
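A minimal sketch of picking and switching schemes, using only the ``TBTools`` helpers defined further down in this module (``set_colors`` and ``color_toggle``)::

    import sys
    from IPython.core import ultratb

    sys.excepthook = ultratb.ColorTB(color_scheme="LightBG")
    sys.excepthook.set_colors("Linux")   # switch to another scheme in place
    sys.excepthook.color_toggle()        # flip between the active scheme and NoColor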
76
76
77 Inheritance diagram:
77 Inheritance diagram:
78
78
79 .. inheritance-diagram:: IPython.core.ultratb
79 .. inheritance-diagram:: IPython.core.ultratb
80 :parts: 3
80 :parts: 3
81 """
81 """
82
82
83 #*****************************************************************************
83 #*****************************************************************************
84 # Copyright (C) 2001 Nathaniel Gray <n8gray@caltech.edu>
84 # Copyright (C) 2001 Nathaniel Gray <n8gray@caltech.edu>
85 # Copyright (C) 2001-2004 Fernando Perez <fperez@colorado.edu>
85 # Copyright (C) 2001-2004 Fernando Perez <fperez@colorado.edu>
86 #
86 #
87 # Distributed under the terms of the BSD License. The full license is in
87 # Distributed under the terms of the BSD License. The full license is in
88 # the file COPYING, distributed as part of this software.
88 # the file COPYING, distributed as part of this software.
89 #*****************************************************************************
89 #*****************************************************************************
90
90
91
91
92 import inspect
92 import inspect
93 import linecache
93 import linecache
94 import pydoc
94 import pydoc
95 import sys
95 import sys
96 import time
96 import time
97 import traceback
97 import traceback
98 from types import TracebackType
98 from types import TracebackType
99 from typing import Tuple, List, Any, Optional
99 from typing import Tuple, List, Any, Optional
100
100
101 import stack_data
101 import stack_data
102 from stack_data import FrameInfo as SDFrameInfo
102 from stack_data import FrameInfo as SDFrameInfo
103 from pygments.formatters.terminal256 import Terminal256Formatter
103 from pygments.formatters.terminal256 import Terminal256Formatter
104 from pygments.styles import get_style_by_name
104 from pygments.styles import get_style_by_name
105
105
106 # IPython's own modules
106 # IPython's own modules
107 from IPython import get_ipython
107 from IPython import get_ipython
108 from IPython.core import debugger
108 from IPython.core import debugger
109 from IPython.core.display_trap import DisplayTrap
109 from IPython.core.display_trap import DisplayTrap
110 from IPython.core.excolors import exception_colors
110 from IPython.core.excolors import exception_colors
111 from IPython.utils import PyColorize
111 from IPython.utils import PyColorize
112 from IPython.utils import path as util_path
112 from IPython.utils import path as util_path
113 from IPython.utils import py3compat
113 from IPython.utils import py3compat
114 from IPython.utils.terminal import get_terminal_size
114 from IPython.utils.terminal import get_terminal_size
115
115
116 import IPython.utils.colorable as colorable
116 import IPython.utils.colorable as colorable
117
117
118 # Globals
118 # Globals
119 # amount of space to put line numbers before verbose tracebacks
119 # amount of space to put line numbers before verbose tracebacks
120 INDENT_SIZE = 8
120 INDENT_SIZE = 8
121
121
122 # Default color scheme. This is used, for example, by the traceback
122 # Default color scheme. This is used, for example, by the traceback
123 # formatter. When running in an actual IPython instance, the user's rc.colors
123 # formatter. When running in an actual IPython instance, the user's rc.colors
124 # value is used, but having a module global makes this functionality available
124 # value is used, but having a module global makes this functionality available
125 # to users of ultratb who are NOT running inside ipython.
125 # to users of ultratb who are NOT running inside ipython.
126 DEFAULT_SCHEME = 'NoColor'
126 DEFAULT_SCHEME = 'NoColor'
127 FAST_THRESHOLD = 10_000
127 FAST_THRESHOLD = 10_000
128
128
129 # ---------------------------------------------------------------------------
129 # ---------------------------------------------------------------------------
130 # Code begins
130 # Code begins
131
131
132 # Helper function -- largely belongs to VerboseTB, but we need the same
132 # Helper function -- largely belongs to VerboseTB, but we need the same
133 # functionality to produce a pseudo verbose TB for SyntaxErrors, so that they
133 # functionality to produce a pseudo verbose TB for SyntaxErrors, so that they
134 # can be recognized properly by ipython.el's py-traceback-line-re
134 # can be recognized properly by ipython.el's py-traceback-line-re
135 # (SyntaxErrors have to be treated specially because they have no traceback)
135 # (SyntaxErrors have to be treated specially because they have no traceback)
136
136
137
137
138 def _format_traceback_lines(lines, Colors, has_colors: bool, lvals):
138 def _format_traceback_lines(lines, Colors, has_colors: bool, lvals):
139 """
139 """
140 Format traceback lines with a pointing arrow and leading line numbers.
140 Format traceback lines with a pointing arrow and leading line numbers.
141
141
142 Parameters
142 Parameters
143 ----------
143 ----------
144 lines : list[Line]
144 lines : list[Line]
145 Colors
145 Colors
146 ColorScheme used.
146 ColorScheme used.
147 lvals : str
147 lvals : str
148 Values of local variables, already colored, to inject just after the error line.
148 Values of local variables, already colored, to inject just after the error line.
149 """
149 """
150 numbers_width = INDENT_SIZE - 1
150 numbers_width = INDENT_SIZE - 1
151 res = []
151 res = []
152
152
153 for stack_line in lines:
153 for stack_line in lines:
154 if stack_line is stack_data.LINE_GAP:
154 if stack_line is stack_data.LINE_GAP:
155 res.append('%s (...)%s\n' % (Colors.linenoEm, Colors.Normal))
155 res.append('%s (...)%s\n' % (Colors.linenoEm, Colors.Normal))
156 continue
156 continue
157
157
158 line = stack_line.render(pygmented=has_colors).rstrip('\n') + '\n'
158 line = stack_line.render(pygmented=has_colors).rstrip('\n') + '\n'
159 lineno = stack_line.lineno
159 lineno = stack_line.lineno
160 if stack_line.is_current:
160 if stack_line.is_current:
161 # This is the line with the error
161 # This is the line with the error
162 pad = numbers_width - len(str(lineno))
162 pad = numbers_width - len(str(lineno))
163 num = '%s%s' % (debugger.make_arrow(pad), str(lineno))
163 num = '%s%s' % (debugger.make_arrow(pad), str(lineno))
164 start_color = Colors.linenoEm
164 start_color = Colors.linenoEm
165 else:
165 else:
166 num = '%*s' % (numbers_width, lineno)
166 num = '%*s' % (numbers_width, lineno)
167 start_color = Colors.lineno
167 start_color = Colors.lineno
168
168
169 line = '%s%s%s %s' % (start_color, num, Colors.Normal, line)
169 line = '%s%s%s %s' % (start_color, num, Colors.Normal, line)
170
170
171 res.append(line)
171 res.append(line)
172 if lvals and stack_line.is_current:
172 if lvals and stack_line.is_current:
173 res.append(lvals + '\n')
173 res.append(lvals + '\n')
174 return res
174 return res
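Rendered, the frames look like the familiar numbered listing with an arrow marking the current line; schematically (colour escapes omitted, exact padding governed by ``INDENT_SIZE``)::

          5     def divide(a, b):
    ----> 6         return a / b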
175
175
176 def _simple_format_traceback_lines(lnum, index, lines, Colors, lvals, _line_format):
176 def _simple_format_traceback_lines(lnum, index, lines, Colors, lvals, _line_format):
177 """
177 """
178 Format traceback lines with a pointing arrow and leading line numbers.
178 Format traceback lines with a pointing arrow and leading line numbers.
179
179
180 Parameters
180 Parameters
181 ----------
181 ----------
182
182
183 lnum: int
183 lnum: int
184 number of the target line of code.
184 number of the target line of code.
185 index: int
185 index: int
186 which line in the list should be highlighted.
186 which line in the list should be highlighted.
187 lines: list[string]
187 lines: list[string]
188 Colors:
188 Colors:
189 ColorScheme used.
189 ColorScheme used.
190 lvals: str
190 lvals: str
191 Values of local variables, already colored, to inject just after the error line.
191 Values of local variables, already colored, to inject just after the error line.
192 _line_format: f (str) -> (str, bool)
192 _line_format: f (str) -> (str, bool)
193 return (colorized version of str, failure to do so)
193 return (colorized version of str, failure to do so)
194 """
194 """
195 numbers_width = INDENT_SIZE - 1
195 numbers_width = INDENT_SIZE - 1
196 res = []
196 res = []
197
197
198 for i,line in enumerate(lines, lnum-index):
198 for i, line in enumerate(lines, lnum - index):
199 line = py3compat.cast_unicode(line)
199 line = py3compat.cast_unicode(line)
200
200
201 new_line, err = _line_format(line, 'str')
201 new_line, err = _line_format(line, "str")
202 if not err:
202 if not err:
203 line = new_line
203 line = new_line
204
204
205 if i == lnum:
205 if i == lnum:
206 # This is the line with the error
206 # This is the line with the error
207 pad = numbers_width - len(str(i))
207 pad = numbers_width - len(str(i))
208 num = '%s%s' % (debugger.make_arrow(pad), str(lnum))
208 num = "%s%s" % (debugger.make_arrow(pad), str(lnum))
209 line = '%s%s%s %s%s' % (Colors.linenoEm, num,
209 line = "%s%s%s %s%s" % (
210 Colors.line, line, Colors.Normal)
210 Colors.linenoEm,
211 num,
212 Colors.line,
213 line,
214 Colors.Normal,
215 )
211 else:
216 else:
212 num = '%*s' % (numbers_width, i)
217 num = "%*s" % (numbers_width, i)
213 line = '%s%s%s %s' % (Colors.lineno, num,
218 line = "%s%s%s %s" % (Colors.lineno, num, Colors.Normal, line)
214 Colors.Normal, line)
215
219
216 res.append(line)
220 res.append(line)
217 if lvals and i == lnum:
221 if lvals and i == lnum:
218 res.append(lvals + '\n')
222 res.append(lvals + "\n")
219 return res
223 return res
220
224
225
221 def _format_filename(file, ColorFilename, ColorNormal, *, lineno=None):
226 def _format_filename(file, ColorFilename, ColorNormal, *, lineno=None):
222 """
227 """
223 Format a filename line, using the caching compiler's custom code name when available and falling back to ``File *.py`` otherwise
228 Format a filename line, using the caching compiler's custom code name when available and falling back to ``File *.py`` otherwise
224
229
225 Parameters
230 Parameters
226 ----------
231 ----------
227 file : str
232 file : str
228 ColorFilename
233 ColorFilename
229 ColorScheme's filename coloring to be used.
234 ColorScheme's filename coloring to be used.
230 ColorNormal
235 ColorNormal
231 ColorScheme's normal coloring to be used.
236 ColorScheme's normal coloring to be used.
232 """
237 """
233 ipinst = get_ipython()
238 ipinst = get_ipython()
234 if (
239 if (
235 ipinst is not None
240 ipinst is not None
236 and (data := ipinst.compile.format_code_name(file)) is not None
241 and (data := ipinst.compile.format_code_name(file)) is not None
237 ):
242 ):
238 label, name = data
243 label, name = data
239 if lineno is None:
244 if lineno is None:
240 tpl_link = f"{{label}} {ColorFilename}{{name}}{ColorNormal}"
245 tpl_link = f"{{label}} {ColorFilename}{{name}}{ColorNormal}"
241 else:
246 else:
242 tpl_link = (
247 tpl_link = (
243 f"{{label}} {ColorFilename}{{name}}, line {{lineno}}{ColorNormal}"
248 f"{{label}} {ColorFilename}{{name}}, line {{lineno}}{ColorNormal}"
244 )
249 )
245 else:
250 else:
246 label = "File"
251 label = "File"
247 name = util_path.compress_user(
252 name = util_path.compress_user(
248 py3compat.cast_unicode(file, util_path.fs_encoding)
253 py3compat.cast_unicode(file, util_path.fs_encoding)
249 )
254 )
250 if lineno is None:
255 if lineno is None:
251 tpl_link = f"{{label}} {ColorFilename}{{name}}{ColorNormal}"
256 tpl_link = f"{{label}} {ColorFilename}{{name}}{ColorNormal}"
252 else:
257 else:
253 # can we make this the more friendly ", line {{lineno}}", or do we need to preserve the formatting with the colon?
258 # can we make this the more friendly ", line {{lineno}}", or do we need to preserve the formatting with the colon?
254 tpl_link = f"{{label}} {ColorFilename}{{name}}:{{lineno}}{ColorNormal}"
259 tpl_link = f"{{label}} {ColorFilename}{{name}}:{{lineno}}{ColorNormal}"
255
260
256 return tpl_link.format(label=label, name=name, lineno=lineno)
261 return tpl_link.format(label=label, name=name, lineno=lineno)
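Outside of IPython (``get_ipython()`` returning ``None``) the fallback branch is used, so with empty colour strings the result reads roughly ``File ~/project/script.py:42``; a small sketch with a hypothetical path::

    _format_filename("/home/user/project/script.py", "", "", lineno=42)
    # -> 'File ~/project/script.py:42' (exact form depends on compress_user)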
257
262
258 #---------------------------------------------------------------------------
263 #---------------------------------------------------------------------------
259 # Module classes
264 # Module classes
260 class TBTools(colorable.Colorable):
265 class TBTools(colorable.Colorable):
261 """Basic tools used by all traceback printer classes."""
266 """Basic tools used by all traceback printer classes."""
262
267
263 # Number of frames to skip when reporting tracebacks
268 # Number of frames to skip when reporting tracebacks
264 tb_offset = 0
269 tb_offset = 0
265
270
266 def __init__(
271 def __init__(
267 self,
272 self,
268 color_scheme="NoColor",
273 color_scheme="NoColor",
269 call_pdb=False,
274 call_pdb=False,
270 ostream=None,
275 ostream=None,
271 parent=None,
276 parent=None,
272 config=None,
277 config=None,
273 *,
278 *,
274 debugger_cls=None,
279 debugger_cls=None,
275 ):
280 ):
276 # Whether to call the interactive pdb debugger after printing
281 # Whether to call the interactive pdb debugger after printing
277 # tracebacks or not
282 # tracebacks or not
278 super(TBTools, self).__init__(parent=parent, config=config)
283 super(TBTools, self).__init__(parent=parent, config=config)
279 self.call_pdb = call_pdb
284 self.call_pdb = call_pdb
280
285
281 # Output stream to write to. Note that we store the original value in
286 # Output stream to write to. Note that we store the original value in
282 # a private attribute and then make the public ostream a property, so
287 # a private attribute and then make the public ostream a property, so
283 # that we can delay accessing sys.stdout until runtime. The way
288 # that we can delay accessing sys.stdout until runtime. The way
284 # things are written now, the sys.stdout object is dynamically managed
289 # things are written now, the sys.stdout object is dynamically managed
285 # so a reference to it should NEVER be stored statically. This
290 # so a reference to it should NEVER be stored statically. This
286 # property approach confines this detail to a single location, and all
291 # property approach confines this detail to a single location, and all
287 # subclasses can simply access self.ostream for writing.
292 # subclasses can simply access self.ostream for writing.
288 self._ostream = ostream
293 self._ostream = ostream
289
294
290 # Create color table
295 # Create color table
291 self.color_scheme_table = exception_colors()
296 self.color_scheme_table = exception_colors()
292
297
293 self.set_colors(color_scheme)
298 self.set_colors(color_scheme)
294 self.old_scheme = color_scheme # save initial value for toggles
299 self.old_scheme = color_scheme # save initial value for toggles
295 self.debugger_cls = debugger_cls or debugger.Pdb
300 self.debugger_cls = debugger_cls or debugger.Pdb
296
301
297 if call_pdb:
302 if call_pdb:
298 self.pdb = self.debugger_cls()
303 self.pdb = self.debugger_cls()
299 else:
304 else:
300 self.pdb = None
305 self.pdb = None
301
306
302 def _get_ostream(self):
307 def _get_ostream(self):
303 """Output stream that exceptions are written to.
308 """Output stream that exceptions are written to.
304
309
305 Valid values are:
310 Valid values are:
306
311
307 - None: the default, which means that IPython will dynamically resolve
312 - None: the default, which means that IPython will dynamically resolve
308 to sys.stdout. This ensures compatibility with most tools, including
313 to sys.stdout. This ensures compatibility with most tools, including
309 Windows (where plain stdout doesn't recognize ANSI escapes).
314 Windows (where plain stdout doesn't recognize ANSI escapes).
310
315
311 - Any object with 'write' and 'flush' attributes.
316 - Any object with 'write' and 'flush' attributes.
312 """
317 """
313 return sys.stdout if self._ostream is None else self._ostream
318 return sys.stdout if self._ostream is None else self._ostream
314
319
315 def _set_ostream(self, val):
320 def _set_ostream(self, val):
316 assert val is None or (hasattr(val, 'write') and hasattr(val, 'flush'))
321 assert val is None or (hasattr(val, 'write') and hasattr(val, 'flush'))
317 self._ostream = val
322 self._ostream = val
318
323
319 ostream = property(_get_ostream, _set_ostream)
324 ostream = property(_get_ostream, _set_ostream)
320
325
321 @staticmethod
326 @staticmethod
322 def _get_chained_exception(exception_value):
327 def _get_chained_exception(exception_value):
323 cause = getattr(exception_value, "__cause__", None)
328 cause = getattr(exception_value, "__cause__", None)
324 if cause:
329 if cause:
325 return cause
330 return cause
326 if getattr(exception_value, "__suppress_context__", False):
331 if getattr(exception_value, "__suppress_context__", False):
327 return None
332 return None
328 return getattr(exception_value, "__context__", None)
333 return getattr(exception_value, "__context__", None)
329
334
330 def get_parts_of_chained_exception(
335 def get_parts_of_chained_exception(
331 self, evalue
336 self, evalue
332 ) -> Optional[Tuple[type, BaseException, TracebackType]]:
337 ) -> Optional[Tuple[type, BaseException, TracebackType]]:
333
334 chained_evalue = self._get_chained_exception(evalue)
338 chained_evalue = self._get_chained_exception(evalue)
335
339
336 if chained_evalue:
340 if chained_evalue:
337 return chained_evalue.__class__, chained_evalue, chained_evalue.__traceback__
341 return chained_evalue.__class__, chained_evalue, chained_evalue.__traceback__
338 return None
342 return None
339
343
340 def prepare_chained_exception_message(self, cause) -> List[Any]:
344 def prepare_chained_exception_message(self, cause) -> List[Any]:
341 direct_cause = "\nThe above exception was the direct cause of the following exception:\n"
345 direct_cause = "\nThe above exception was the direct cause of the following exception:\n"
342 exception_during_handling = "\nDuring handling of the above exception, another exception occurred:\n"
346 exception_during_handling = "\nDuring handling of the above exception, another exception occurred:\n"
343
347
344 if cause:
348 if cause:
345 message = [[direct_cause]]
349 message = [[direct_cause]]
346 else:
350 else:
347 message = [[exception_during_handling]]
351 message = [[exception_during_handling]]
348 return message
352 return message
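``_get_chained_exception`` follows standard Python chaining rules: an explicit ``raise ... from ...`` sets ``__cause__`` (reported as the "direct cause"), an exception raised while handling another only sets ``__context__`` (reported as "during handling"), and ``raise ... from None`` sets ``__suppress_context__``. A short, self-contained illustration::

    try:
        try:
            1 / 0
        except ZeroDivisionError as exc:
            raise ValueError("bad input") from exc   # sets __cause__ (and __context__)
    except ValueError as err:
        assert err.__cause__ is err.__context__      # both point at the ZeroDivisionError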
349
353
350 @property
354 @property
351 def has_colors(self) -> bool:
355 def has_colors(self) -> bool:
352 return self.color_scheme_table.active_scheme_name.lower() != "nocolor"
356 return self.color_scheme_table.active_scheme_name.lower() != "nocolor"
353
357
354 def set_colors(self, *args, **kw):
358 def set_colors(self, *args, **kw):
355 """Shorthand access to the color table scheme selector method."""
359 """Shorthand access to the color table scheme selector method."""
356
360
357 # Set own color table
361 # Set own color table
358 self.color_scheme_table.set_active_scheme(*args, **kw)
362 self.color_scheme_table.set_active_scheme(*args, **kw)
359 # for convenience, set Colors to the active scheme
363 # for convenience, set Colors to the active scheme
360 self.Colors = self.color_scheme_table.active_colors
364 self.Colors = self.color_scheme_table.active_colors
361 # Also set colors of debugger
365 # Also set colors of debugger
362 if hasattr(self, 'pdb') and self.pdb is not None:
366 if hasattr(self, 'pdb') and self.pdb is not None:
363 self.pdb.set_colors(*args, **kw)
367 self.pdb.set_colors(*args, **kw)
364
368
365 def color_toggle(self):
369 def color_toggle(self):
366 """Toggle between the currently active color scheme and NoColor."""
370 """Toggle between the currently active color scheme and NoColor."""
367
371
368 if self.color_scheme_table.active_scheme_name == 'NoColor':
372 if self.color_scheme_table.active_scheme_name == 'NoColor':
369 self.color_scheme_table.set_active_scheme(self.old_scheme)
373 self.color_scheme_table.set_active_scheme(self.old_scheme)
370 self.Colors = self.color_scheme_table.active_colors
374 self.Colors = self.color_scheme_table.active_colors
371 else:
375 else:
372 self.old_scheme = self.color_scheme_table.active_scheme_name
376 self.old_scheme = self.color_scheme_table.active_scheme_name
373 self.color_scheme_table.set_active_scheme('NoColor')
377 self.color_scheme_table.set_active_scheme('NoColor')
374 self.Colors = self.color_scheme_table.active_colors
378 self.Colors = self.color_scheme_table.active_colors
375
379
376 def stb2text(self, stb):
380 def stb2text(self, stb):
377 """Convert a structured traceback (a list) to a string."""
381 """Convert a structured traceback (a list) to a string."""
378 return '\n'.join(stb)
382 return '\n'.join(stb)
379
383
380 def text(self, etype, value, tb, tb_offset: Optional[int] = None, context=5):
384 def text(self, etype, value, tb, tb_offset: Optional[int] = None, context=5):
381 """Return formatted traceback.
385 """Return formatted traceback.
382
386
383 Subclasses may override this if they add extra arguments.
387 Subclasses may override this if they add extra arguments.
384 """
388 """
385 tb_list = self.structured_traceback(etype, value, tb,
389 tb_list = self.structured_traceback(etype, value, tb,
386 tb_offset, context)
390 tb_offset, context)
387 return self.stb2text(tb_list)
391 return self.stb2text(tb_list)
388
392
389 def structured_traceback(
393 def structured_traceback(
390 self, etype, evalue, tb, tb_offset: Optional[int] = None, context=5, mode=None
394 self, etype, evalue, tb, tb_offset: Optional[int] = None, context=5, mode=None
391 ):
395 ):
392 """Return a list of traceback frames.
396 """Return a list of traceback frames.
393
397
394 Must be implemented by each class.
398 Must be implemented by each class.
395 """
399 """
396 raise NotImplementedError()
400 raise NotImplementedError()
397
401
398
402
399 #---------------------------------------------------------------------------
403 #---------------------------------------------------------------------------
400 class ListTB(TBTools):
404 class ListTB(TBTools):
401 """Print traceback information from a traceback list, with optional color.
405 """Print traceback information from a traceback list, with optional color.
402
406
403 Calling requires 3 arguments: (etype, evalue, elist)
407 Calling requires 3 arguments: (etype, evalue, elist)
404 as would be obtained by::
408 as would be obtained by::
405
409
406 etype, evalue, tb = sys.exc_info()
410 etype, evalue, tb = sys.exc_info()
407 if tb:
411 if tb:
408 elist = traceback.extract_tb(tb)
412 elist = traceback.extract_tb(tb)
409 else:
413 else:
410 elist = None
414 elist = None
411
415
412 It can thus be used by programs which need to process the traceback before
416 It can thus be used by programs which need to process the traceback before
413 printing (such as console replacements based on the code module from the
417 printing (such as console replacements based on the code module from the
414 standard library).
418 standard library).
415
419
416 Because they are meant to be called without a full traceback (only a
420 Because they are meant to be called without a full traceback (only a
417 list), instances of this class can't call the interactive pdb debugger."""
421 list), instances of this class can't call the interactive pdb debugger."""
418
422
419
423
420 def __call__(self, etype, value, elist):
424 def __call__(self, etype, value, elist):
421 self.ostream.flush()
425 self.ostream.flush()
422 self.ostream.write(self.text(etype, value, elist))
426 self.ostream.write(self.text(etype, value, elist))
423 self.ostream.write('\n')
427 self.ostream.write('\n')
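A minimal, self-contained use of ``ListTB``, following the recipe from the class docstring (the exception is only illustrative; output goes to ``sys.stdout`` by default)::

    import sys, traceback
    from IPython.core.ultratb import ListTB

    handler = ListTB()
    try:
        {}["missing"]
    except KeyError:
        etype, evalue, tb = sys.exc_info()
        handler(etype, evalue, traceback.extract_tb(tb))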
424
428
425 def _extract_tb(self, tb):
429 def _extract_tb(self, tb):
426 if tb:
430 if tb:
427 return traceback.extract_tb(tb)
431 return traceback.extract_tb(tb)
428 else:
432 else:
429 return None
433 return None
430
434
431 def structured_traceback(
435 def structured_traceback(
432 self,
436 self,
433 etype: type,
437 etype: type,
434 evalue: BaseException,
438 evalue: BaseException,
435 etb: Optional[TracebackType] = None,
439 etb: Optional[TracebackType] = None,
436 tb_offset: Optional[int] = None,
440 tb_offset: Optional[int] = None,
437 context=5,
441 context=5,
438 ):
442 ):
439 """Return a color formatted string with the traceback info.
443 """Return a color formatted string with the traceback info.
440
444
441 Parameters
445 Parameters
442 ----------
446 ----------
443 etype : exception type
447 etype : exception type
444 Type of the exception raised.
448 Type of the exception raised.
445 evalue : object
449 evalue : object
446 Data stored in the exception
450 Data stored in the exception
447 etb : list | TracebackType | None
451 etb : list | TracebackType | None
448 If list: List of frames, see class docstring for details.
452 If list: List of frames, see class docstring for details.
449 If Traceback: Traceback of the exception.
453 If Traceback: Traceback of the exception.
450 tb_offset : int, optional
454 tb_offset : int, optional
451 Number of frames in the traceback to skip. If not given, the
455 Number of frames in the traceback to skip. If not given, the
452 instance value is used (set in constructor).
456 instance value is used (set in constructor).
453 context : int, optional
457 context : int, optional
454 Number of lines of context information to print.
458 Number of lines of context information to print.
455
459
456 Returns
460 Returns
457 -------
461 -------
458 String with formatted exception.
462 String with formatted exception.
459 """
463 """
460 # This is a workaround to get chained_exc_ids in recursive calls
464 # This is a workaround to get chained_exc_ids in recursive calls
461 # etb should not be a tuple if structured_traceback is not recursive
465 # etb should not be a tuple if structured_traceback is not recursive
462 if isinstance(etb, tuple):
466 if isinstance(etb, tuple):
463 etb, chained_exc_ids = etb
467 etb, chained_exc_ids = etb
464 else:
468 else:
465 chained_exc_ids = set()
469 chained_exc_ids = set()
466
470
467 if isinstance(etb, list):
471 if isinstance(etb, list):
468 elist = etb
472 elist = etb
469 elif etb is not None:
473 elif etb is not None:
470 elist = self._extract_tb(etb)
474 elist = self._extract_tb(etb)
471 else:
475 else:
472 elist = []
476 elist = []
473 tb_offset = self.tb_offset if tb_offset is None else tb_offset
477 tb_offset = self.tb_offset if tb_offset is None else tb_offset
474 assert isinstance(tb_offset, int)
478 assert isinstance(tb_offset, int)
475 Colors = self.Colors
479 Colors = self.Colors
476 out_list = []
480 out_list = []
477 if elist:
481 if elist:
478
482
479 if tb_offset and len(elist) > tb_offset:
483 if tb_offset and len(elist) > tb_offset:
480 elist = elist[tb_offset:]
484 elist = elist[tb_offset:]
481
485
482 out_list.append('Traceback %s(most recent call last)%s:' %
486 out_list.append('Traceback %s(most recent call last)%s:' %
483 (Colors.normalEm, Colors.Normal) + '\n')
487 (Colors.normalEm, Colors.Normal) + '\n')
484 out_list.extend(self._format_list(elist))
488 out_list.extend(self._format_list(elist))
485 # The exception info should be a single entry in the list.
489 # The exception info should be a single entry in the list.
486 lines = ''.join(self._format_exception_only(etype, evalue))
490 lines = ''.join(self._format_exception_only(etype, evalue))
487 out_list.append(lines)
491 out_list.append(lines)
488
492
489 exception = self.get_parts_of_chained_exception(evalue)
493 exception = self.get_parts_of_chained_exception(evalue)
490
494
491 if exception and not id(exception[1]) in chained_exc_ids:
495 if exception and not id(exception[1]) in chained_exc_ids:
492 chained_exception_message = self.prepare_chained_exception_message(
496 chained_exception_message = self.prepare_chained_exception_message(
493 evalue.__cause__)[0]
497 evalue.__cause__)[0]
494 etype, evalue, etb = exception
498 etype, evalue, etb = exception
495 # Trace exception to avoid infinite 'cause' loop
499 # Trace exception to avoid infinite 'cause' loop
496 chained_exc_ids.add(id(exception[1]))
500 chained_exc_ids.add(id(exception[1]))
497 chained_exceptions_tb_offset = 0
501 chained_exceptions_tb_offset = 0
498 out_list = (
502 out_list = (
499 self.structured_traceback(
503 self.structured_traceback(
500 etype, evalue, (etb, chained_exc_ids),
504 etype, evalue, (etb, chained_exc_ids),
501 chained_exceptions_tb_offset, context)
505 chained_exceptions_tb_offset, context)
502 + chained_exception_message
506 + chained_exception_message
503 + out_list)
507 + out_list)
504
508
505 return out_list
509 return out_list
506
510
507 def _format_list(self, extracted_list):
511 def _format_list(self, extracted_list):
508 """Format a list of traceback entry tuples for printing.
512 """Format a list of traceback entry tuples for printing.
509
513
510 Given a list of tuples as returned by extract_tb() or
514 Given a list of tuples as returned by extract_tb() or
511 extract_stack(), return a list of strings ready for printing.
515 extract_stack(), return a list of strings ready for printing.
512 Each string in the resulting list corresponds to the item with the
516 Each string in the resulting list corresponds to the item with the
513 same index in the argument list. Each string ends in a newline;
517 same index in the argument list. Each string ends in a newline;
514 the strings may contain internal newlines as well, for those items
518 the strings may contain internal newlines as well, for those items
515 whose source text line is not None.
519 whose source text line is not None.
516
520
517 Lifted almost verbatim from traceback.py
521 Lifted almost verbatim from traceback.py
518 """
522 """
519
523
520 Colors = self.Colors
524 Colors = self.Colors
521 list = []
525 list = []
522 for ind, (filename, lineno, name, line) in enumerate(extracted_list):
526 for ind, (filename, lineno, name, line) in enumerate(extracted_list):
523 normalCol, nameCol, fileCol, lineCol = (
527 normalCol, nameCol, fileCol, lineCol = (
524 # Emphasize the last entry
528 # Emphasize the last entry
525 (Colors.normalEm, Colors.nameEm, Colors.filenameEm, Colors.line)
529 (Colors.normalEm, Colors.nameEm, Colors.filenameEm, Colors.line)
526 if ind == len(extracted_list) - 1
530 if ind == len(extracted_list) - 1
527 else (Colors.Normal, Colors.name, Colors.filename, "")
531 else (Colors.Normal, Colors.name, Colors.filename, "")
528 )
532 )
529
533
530 fns = _format_filename(filename, fileCol, normalCol, lineno=lineno)
534 fns = _format_filename(filename, fileCol, normalCol, lineno=lineno)
531 item = f"{normalCol} {fns}"
535 item = f"{normalCol} {fns}"
532
536
533 if name != "<module>":
537 if name != "<module>":
534 item += f" in {nameCol}{name}{normalCol}\n"
538 item += f" in {nameCol}{name}{normalCol}\n"
535 else:
539 else:
536 item += "\n"
540 item += "\n"
537 if line:
541 if line:
538 item += f"{lineCol} {line.strip()}{normalCol}\n"
542 item += f"{lineCol} {line.strip()}{normalCol}\n"
539 list.append(item)
543 list.append(item)
540
544
541 return list
545 return list
542
546
543 def _format_exception_only(self, etype, value):
547 def _format_exception_only(self, etype, value):
544 """Format the exception part of a traceback.
548 """Format the exception part of a traceback.
545
549
546 The arguments are the exception type and value such as given by
550 The arguments are the exception type and value such as given by
547 sys.exc_info()[:2]. The return value is a list of strings, each ending
551 sys.exc_info()[:2]. The return value is a list of strings, each ending
548 in a newline. Normally, the list contains a single string; however,
552 in a newline. Normally, the list contains a single string; however,
549 for SyntaxError exceptions, it contains several lines that (when
553 for SyntaxError exceptions, it contains several lines that (when
550 printed) display detailed information about where the syntax error
554 printed) display detailed information about where the syntax error
551 occurred. The message indicating which exception occurred is always
555 occurred. The message indicating which exception occurred is always
552 the last string in the list.
556 the last string in the list.
553
557
554 Also lifted nearly verbatim from traceback.py
558 Also lifted nearly verbatim from traceback.py
555 """
559 """
556 have_filedata = False
560 have_filedata = False
557 Colors = self.Colors
561 Colors = self.Colors
558 list = []
562 list = []
559 stype = py3compat.cast_unicode(Colors.excName + etype.__name__ + Colors.Normal)
563 stype = py3compat.cast_unicode(Colors.excName + etype.__name__ + Colors.Normal)
560 if value is None:
564 if value is None:
561 # Not sure if this can still happen in Python 2.6 and above
565 # Not sure if this can still happen in Python 2.6 and above
562 list.append(stype + '\n')
566 list.append(stype + '\n')
563 else:
567 else:
564 if issubclass(etype, SyntaxError):
568 if issubclass(etype, SyntaxError):
565 have_filedata = True
569 have_filedata = True
566 if not value.filename: value.filename = "<string>"
570 if not value.filename: value.filename = "<string>"
567 if value.lineno:
571 if value.lineno:
568 lineno = value.lineno
572 lineno = value.lineno
569 textline = linecache.getline(value.filename, value.lineno)
573 textline = linecache.getline(value.filename, value.lineno)
570 else:
574 else:
571 lineno = "unknown"
575 lineno = "unknown"
572 textline = ""
576 textline = ""
573 list.append(
577 list.append(
574 "%s %s%s\n"
578 "%s %s%s\n"
575 % (
579 % (
576 Colors.normalEm,
580 Colors.normalEm,
577 _format_filename(
581 _format_filename(
578 value.filename,
582 value.filename,
579 Colors.filenameEm,
583 Colors.filenameEm,
580 Colors.normalEm,
584 Colors.normalEm,
581 lineno=(None if lineno == "unknown" else lineno),
585 lineno=(None if lineno == "unknown" else lineno),
582 ),
586 ),
583 Colors.Normal,
587 Colors.Normal,
584 )
588 )
585 )
589 )
586 if textline == "":
590 if textline == "":
587 textline = py3compat.cast_unicode(value.text, "utf-8")
591 textline = py3compat.cast_unicode(value.text, "utf-8")
588
592
589 if textline is not None:
593 if textline is not None:
590 i = 0
594 i = 0
591 while i < len(textline) and textline[i].isspace():
595 while i < len(textline) and textline[i].isspace():
592 i += 1
596 i += 1
593 list.append('%s %s%s\n' % (Colors.line,
597 list.append('%s %s%s\n' % (Colors.line,
594 textline.strip(),
598 textline.strip(),
595 Colors.Normal))
599 Colors.Normal))
596 if value.offset is not None:
600 if value.offset is not None:
597 s = ' '
601 s = ' '
598 for c in textline[i:value.offset - 1]:
602 for c in textline[i:value.offset - 1]:
599 if c.isspace():
603 if c.isspace():
600 s += c
604 s += c
601 else:
605 else:
602 s += ' '
606 s += ' '
603 list.append('%s%s^%s\n' % (Colors.caret, s,
607 list.append('%s%s^%s\n' % (Colors.caret, s,
604 Colors.Normal))
608 Colors.Normal))
605
609
606 try:
610 try:
607 s = value.msg
611 s = value.msg
608 except Exception:
612 except Exception:
609 s = self._some_str(value)
613 s = self._some_str(value)
610 if s:
614 if s:
611 list.append('%s%s:%s %s\n' % (stype, Colors.excName,
615 list.append('%s%s:%s %s\n' % (stype, Colors.excName,
612 Colors.Normal, s))
616 Colors.Normal, s))
613 else:
617 else:
614 list.append('%s\n' % stype)
618 list.append('%s\n' % stype)
615
619
616 # sync with user hooks
620 # sync with user hooks
617 if have_filedata:
621 if have_filedata:
618 ipinst = get_ipython()
622 ipinst = get_ipython()
619 if ipinst is not None:
623 if ipinst is not None:
620 ipinst.hooks.synchronize_with_editor(value.filename, value.lineno, 0)
624 ipinst.hooks.synchronize_with_editor(value.filename, value.lineno, 0)
621
625
622 return list
626 return list
623
627
624 def get_exception_only(self, etype, value):
628 def get_exception_only(self, etype, value):
625 """Only print the exception type and message, without a traceback.
629 """Only print the exception type and message, without a traceback.
626
630
627 Parameters
631 Parameters
628 ----------
632 ----------
629 etype : exception type
633 etype : exception type
630 value : exception value
634 value : exception value
631 """
635 """
632 return ListTB.structured_traceback(self, etype, value)
636 return ListTB.structured_traceback(self, etype, value)
633
637
634 def show_exception_only(self, etype, evalue):
638 def show_exception_only(self, etype, evalue):
635 """Only print the exception type and message, without a traceback.
639 """Only print the exception type and message, without a traceback.
636
640
637 Parameters
641 Parameters
638 ----------
642 ----------
639 etype : exception type
643 etype : exception type
640 evalue : exception value
644 evalue : exception value
641 """
645 """
642 # This method needs to use __call__ from *this* class, not the one from
646 # This method needs to use __call__ from *this* class, not the one from
643 # a subclass whose signature or behavior may be different
647 # a subclass whose signature or behavior may be different
644 ostream = self.ostream
648 ostream = self.ostream
645 ostream.flush()
649 ostream.flush()
646 ostream.write('\n'.join(self.get_exception_only(etype, evalue)))
650 ostream.write('\n'.join(self.get_exception_only(etype, evalue)))
647 ostream.flush()
651 ostream.flush()
648
652
649 def _some_str(self, value):
653 def _some_str(self, value):
650 # Lifted from traceback.py
654 # Lifted from traceback.py
651 try:
655 try:
652 return py3compat.cast_unicode(str(value))
656 return py3compat.cast_unicode(str(value))
653 except:
657 except:
654 return u'<unprintable %s object>' % type(value).__name__
658 return u'<unprintable %s object>' % type(value).__name__
655
659
656
660
657 class FrameInfo:
661 class FrameInfo:
658 """
662 """
659 Mirror of stack_data's FrameInfo, so that we can bypass highlighting on
663 Mirror of stack_data's FrameInfo, so that we can bypass highlighting on
660 really long frames.
664 really long frames.
661 """
665 """
662
666
663 description : Optional[str]
667 description: Optional[str]
664 filename : str
668 filename: str
665 lineno : int
669 lineno: int
666
670
667 @classmethod
671 @classmethod
668 def _from_stack_data_FrameInfo(cls, frame_info):
672 def _from_stack_data_FrameInfo(cls, frame_info):
669 return cls(
673 return cls(
670 getattr(frame_info, "description", None),
674 getattr(frame_info, "description", None),
671 getattr(frame_info, "filename", None),
675 getattr(frame_info, "filename", None),
672 getattr(frame_info, "lineno", None),
676 getattr(frame_info, "lineno", None),
673 getattr(frame_info, "frame", None),
677 getattr(frame_info, "frame", None),
674 getattr(frame_info, "code", None),
678 getattr(frame_info, "code", None),
675 sd=frame_info,
679 sd=frame_info,
676 )
680 )
677
681
678 def __init__(self, description, filename, lineno, frame, code, sd=None):
682 def __init__(self, description, filename, lineno, frame, code, sd=None):
679 self.description = description
683 self.description = description
680 self.filename = filename
684 self.filename = filename
681 self.lineno = lineno
685 self.lineno = lineno
682 self.frame = frame
686 self.frame = frame
683 self.code = code
687 self.code = code
684 self._sd = sd
688 self._sd = sd
685
689
686 # self.lines = []
690 # self.lines = []
687 if sd is None:
691 if sd is None:
688 ix = inspect.getsourcelines(frame)
692 ix = inspect.getsourcelines(frame)
689 self.raw_lines = ix[0]
693 self.raw_lines = ix[0]
690
694
691 @property
695 @property
692 def variables_in_executing_piece(self):
696 def variables_in_executing_piece(self):
693 if self._sd:
697 if self._sd:
694 return self._sd.variables_in_executing_piece
698 return self._sd.variables_in_executing_piece
695 else:
699 else:
696 return []
700 return []
697
701
698 @property
702 @property
699 def lines(self):
703 def lines(self):
700 return self._sd.lines
704 return self._sd.lines
701
705
702 @property
706 @property
703 def executing(self):
707 def executing(self):
704 if self._sd:
708 if self._sd:
705 return self._sd.executing
709 return self._sd.executing
706 else:
710 else:
707 return None
711 return None
708
712
709
713
710
714 # ----------------------------------------------------------------------------
711
712 #----------------------------------------------------------------------------
713 class VerboseTB(TBTools):
715 class VerboseTB(TBTools):
714 """A port of Ka-Ping Yee's cgitb.py module that outputs color text instead
716 """A port of Ka-Ping Yee's cgitb.py module that outputs color text instead
715 of HTML. Requires inspect and pydoc. Crazy, man.
717 of HTML. Requires inspect and pydoc. Crazy, man.
716
718
717 Modified version which optionally strips the topmost entries from the
719 Modified version which optionally strips the topmost entries from the
718 traceback, to be used with alternate interpreters (because their own code
720 traceback, to be used with alternate interpreters (because their own code
719 would appear in the traceback)."""
721 would appear in the traceback)."""
720
722
721 _tb_highlight = "bg:ansiyellow"
723 _tb_highlight = "bg:ansiyellow"
722
724
723 def __init__(
725 def __init__(
724 self,
726 self,
725 color_scheme: str = "Linux",
727 color_scheme: str = "Linux",
726 call_pdb: bool = False,
728 call_pdb: bool = False,
727 ostream=None,
729 ostream=None,
728 tb_offset: int = 0,
730 tb_offset: int = 0,
729 long_header: bool = False,
731 long_header: bool = False,
730 include_vars: bool = True,
732 include_vars: bool = True,
731 check_cache=None,
733 check_cache=None,
732 debugger_cls=None,
734 debugger_cls=None,
733 parent=None,
735 parent=None,
734 config=None,
736 config=None,
735 ):
737 ):
736 """Specify traceback offset, headers and color scheme.
738 """Specify traceback offset, headers and color scheme.
737
739
738 Define how many frames to drop from the tracebacks. Calling it with
740 Define how many frames to drop from the tracebacks. Calling it with
739 tb_offset=1 allows use of this handler in interpreters which will have
741 tb_offset=1 allows use of this handler in interpreters which will have
740 their own code at the top of the traceback (VerboseTB will first
742 their own code at the top of the traceback (VerboseTB will first
741 remove that frame before printing the traceback info)."""
743 remove that frame before printing the traceback info)."""
742 TBTools.__init__(
744 TBTools.__init__(
743 self,
745 self,
744 color_scheme=color_scheme,
746 color_scheme=color_scheme,
745 call_pdb=call_pdb,
747 call_pdb=call_pdb,
746 ostream=ostream,
748 ostream=ostream,
747 parent=parent,
749 parent=parent,
748 config=config,
750 config=config,
749 debugger_cls=debugger_cls,
751 debugger_cls=debugger_cls,
750 )
752 )
751 self.tb_offset = tb_offset
753 self.tb_offset = tb_offset
752 self.long_header = long_header
754 self.long_header = long_header
753 self.include_vars = include_vars
755 self.include_vars = include_vars
754 # By default we use linecache.checkcache, but the user can provide a
756 # By default we use linecache.checkcache, but the user can provide a
755 # different check_cache implementation. This was formerly used by the
757 # different check_cache implementation. This was formerly used by the
756 # IPython kernel for interactive code, but is no longer necessary.
758 # IPython kernel for interactive code, but is no longer necessary.
757 if check_cache is None:
759 if check_cache is None:
758 check_cache = linecache.checkcache
760 check_cache = linecache.checkcache
759 self.check_cache = check_cache
761 self.check_cache = check_cache
760
762
761 self.skip_hidden = True
763 self.skip_hidden = True
762
764
763 def format_record(self, frame_info:FrameInfo):
765 def format_record(self, frame_info: FrameInfo):
764 """Format a single stack frame"""
766 """Format a single stack frame"""
765 assert isinstance(frame_info, FrameInfo)
767 assert isinstance(frame_info, FrameInfo)
766 Colors = self.Colors # just a shorthand + quicker name lookup
768 Colors = self.Colors # just a shorthand + quicker name lookup
767 ColorsNormal = Colors.Normal # used a lot
769 ColorsNormal = Colors.Normal # used a lot
768
770
769 if isinstance(frame_info._sd, stack_data.RepeatedFrames):
771 if isinstance(frame_info._sd, stack_data.RepeatedFrames):
770 return ' %s[... skipping similar frames: %s]%s\n' % (
772 return ' %s[... skipping similar frames: %s]%s\n' % (
771 Colors.excName, frame_info.description, ColorsNormal)
773 Colors.excName, frame_info.description, ColorsNormal)
772
774
773 indent = " " * INDENT_SIZE
775 indent = " " * INDENT_SIZE
774 em_normal = "%s\n%s%s" % (Colors.valEm, indent, ColorsNormal)
776 em_normal = "%s\n%s%s" % (Colors.valEm, indent, ColorsNormal)
775 tpl_call = f"in {Colors.vName}{{file}}{Colors.valEm}{{scope}}{ColorsNormal}"
777 tpl_call = f"in {Colors.vName}{{file}}{Colors.valEm}{{scope}}{ColorsNormal}"
776 tpl_call_fail = "in %s%%s%s(***failed resolving arguments***)%s" % (
778 tpl_call_fail = "in %s%%s%s(***failed resolving arguments***)%s" % (
777 Colors.vName,
779 Colors.vName,
778 Colors.valEm,
780 Colors.valEm,
779 ColorsNormal,
781 ColorsNormal,
780 )
782 )
781 tpl_name_val = "%%s %s= %%s%s" % (Colors.valEm, ColorsNormal)
783 tpl_name_val = "%%s %s= %%s%s" % (Colors.valEm, ColorsNormal)
782
784
783 link = _format_filename(
785 link = _format_filename(
784 frame_info.filename,
786 frame_info.filename,
785 Colors.filenameEm,
787 Colors.filenameEm,
786 ColorsNormal,
788 ColorsNormal,
787 lineno=frame_info.lineno,
789 lineno=frame_info.lineno,
788 )
790 )
789 args, varargs, varkw, locals_ = inspect.getargvalues(frame_info.frame)
791 args, varargs, varkw, locals_ = inspect.getargvalues(frame_info.frame)
790 if frame_info.executing is not None:
792 if frame_info.executing is not None:
791 func = frame_info.executing.code_qualname()
793 func = frame_info.executing.code_qualname()
792 else:
794 else:
793 func = '?'
795 func = "?"
794 if func == "<module>":
796 if func == "<module>":
795 call = ""
797 call = ""
796 else:
798 else:
797 # Decide whether to include variable details or not
799 # Decide whether to include variable details or not
798 var_repr = eqrepr if self.include_vars else nullrepr
800 var_repr = eqrepr if self.include_vars else nullrepr
799 try:
801 try:
800 scope = inspect.formatargvalues(
802 scope = inspect.formatargvalues(
801 args, varargs, varkw, locals_, formatvalue=var_repr
803 args, varargs, varkw, locals_, formatvalue=var_repr
802 )
804 )
803 call = tpl_call.format(file=func, scope=scope)
805 call = tpl_call.format(file=func, scope=scope)
804 except KeyError:
806 except KeyError:
805 # This happens in situations like errors inside generator
807 # This happens in situations like errors inside generator
806 # expressions, where local variables are listed in the
808 # expressions, where local variables are listed in the
807 # line, but can't be extracted from the frame. I'm not
809 # line, but can't be extracted from the frame. I'm not
808 # 100% sure this isn't actually a bug in inspect itself,
810 # 100% sure this isn't actually a bug in inspect itself,
809 # but since there's no info for us to compute with, the
811 # but since there's no info for us to compute with, the
810 # best we can do is report the failure and move on. Here
812 # best we can do is report the failure and move on. Here
811 # we must *not* call any traceback construction again,
813 # we must *not* call any traceback construction again,
812 # because that would mess up use of %debug later on. So we
814 # because that would mess up use of %debug later on. So we
813 # simply report the failure and move on. The only
815 # simply report the failure and move on. The only
814 # limitation will be that this frame won't have locals
816 # limitation will be that this frame won't have locals
815 # listed in the call signature. Quite subtle problem...
817 # listed in the call signature. Quite subtle problem...
816 # I can't think of a good way to validate this in a unit
818 # I can't think of a good way to validate this in a unit
817 # test, but running a script consisting of:
819 # test, but running a script consisting of:
818 # dict( (k,v.strip()) for (k,v) in range(10) )
820 # dict( (k,v.strip()) for (k,v) in range(10) )
819 # will illustrate the error, if this exception catch is
821 # will illustrate the error, if this exception catch is
820 # disabled.
822 # disabled.
821 call = tpl_call_fail % func
823 call = tpl_call_fail % func
822
824
823 lvals = ''
825 lvals = ''
824 lvals_list = []
826 lvals_list = []
825 if self.include_vars:
827 if self.include_vars:
826 try:
828 try:
827 # we likely want to fix stackdata at some point, but
829 # we likely want to fix stackdata at some point, but
828 # still need a workaround.
830 # still need a workaround.
829 fibp = frame_info.variables_in_executing_piece
831 fibp = frame_info.variables_in_executing_piece
830 for var in fibp:
832 for var in fibp:
831 lvals_list.append(tpl_name_val % (var.name, repr(var.value)))
833 lvals_list.append(tpl_name_val % (var.name, repr(var.value)))
832 except Exception:
834 except Exception:
833 lvals_list.append(
835 lvals_list.append(
834 "Exception trying to inspect frame. No more locals available."
836 "Exception trying to inspect frame. No more locals available."
835 )
837 )
836 if lvals_list:
838 if lvals_list:
837 lvals = '%s%s' % (indent, em_normal.join(lvals_list))
839 lvals = '%s%s' % (indent, em_normal.join(lvals_list))
838
840
839 result = f'{link}{", " if call else ""}{call}\n'
841 result = f'{link}{", " if call else ""}{call}\n'
840 if frame_info._sd is None:
842 if frame_info._sd is None:
842 # fast fallback if file is too long
844 # fast fallback if file is too long
843 tpl_link = '%s%%s%s' % (Colors.filenameEm, ColorsNormal)
845 tpl_link = "%s%%s%s" % (Colors.filenameEm, ColorsNormal)
844 link = tpl_link % util_path.compress_user(frame_info.filename)
846 link = tpl_link % util_path.compress_user(frame_info.filename)
845 level = '%s %s\n' % (link, call)
847 level = "%s %s\n" % (link, call)
846 _line_format = PyColorize.Parser(style=self.color_scheme_table.active_scheme_name, parent=self).format2
848 _line_format = PyColorize.Parser(
849 style=self.color_scheme_table.active_scheme_name, parent=self
850 ).format2
847 first_line = frame_info.code.co_firstlineno
851 first_line = frame_info.code.co_firstlineno
848 current_line = frame_info.lineno[0]
852 current_line = frame_info.lineno[0]
849 return '%s%s' % (level, ''.join(
853 return "%s%s" % (
850 _simple_format_traceback_lines(current_line, current_line-first_line, frame_info.raw_lines, Colors, lvals,
854 level,
851 _line_format)))
855 "".join(
852 #result += "\n".join(frame_info.raw_lines)
856 _simple_format_traceback_lines(
857 current_line,
858 current_line - first_line,
859 frame_info.raw_lines,
860 Colors,
861 lvals,
862 _line_format,
863 )
864 ),
865 )
866 # result += "\n".join(frame_info.raw_lines)
853 else:
867 else:
854 result += "".join(
868 result += "".join(
855 _format_traceback_lines(
869 _format_traceback_lines(
856 frame_info.lines, Colors, self.has_colors, lvals
870 frame_info.lines, Colors, self.has_colors, lvals
857 )
871 )
858 )
872 )
859 return result
873 return result
860
874
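# A hedged illustration of the KeyError comment in format_record() above: errors
# raised inside generator expressions produce frames whose locals cannot always
# be resolved by inspect.formatargvalues.  Whether the KeyError actually fires
# depends on the Python version, but the snippet below exercises that code path
# either way (the class lives in IPython.core.ultratb).
import sys
from IPython.core.ultratb import VerboseTB

try:
    dict((k, v.strip()) for (k, v) in range(10))  # fails while unpacking ints
except Exception:
    vtb = VerboseTB(color_scheme="NoColor")
    print(vtb.text(*sys.exc_info()))  # the genexpr frame may lack call-signature locals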
861 def prepare_header(self, etype, long_version=False):
875 def prepare_header(self, etype, long_version=False):
862 colors = self.Colors # just a shorthand + quicker name lookup
876 colors = self.Colors # just a shorthand + quicker name lookup
863 colorsnormal = colors.Normal # used a lot
877 colorsnormal = colors.Normal # used a lot
864 exc = '%s%s%s' % (colors.excName, etype, colorsnormal)
878 exc = '%s%s%s' % (colors.excName, etype, colorsnormal)
865 width = min(75, get_terminal_size()[0])
879 width = min(75, get_terminal_size()[0])
866 if long_version:
880 if long_version:
867 # Header with the exception type, python version, and date
881 # Header with the exception type, python version, and date
868 pyver = 'Python ' + sys.version.split()[0] + ': ' + sys.executable
882 pyver = 'Python ' + sys.version.split()[0] + ': ' + sys.executable
869 date = time.ctime(time.time())
883 date = time.ctime(time.time())
870
884
871 head = '%s%s%s\n%s%s%s\n%s' % (colors.topline, '-' * width, colorsnormal,
885 head = '%s%s%s\n%s%s%s\n%s' % (colors.topline, '-' * width, colorsnormal,
872 exc, ' ' * (width - len(str(etype)) - len(pyver)),
886 exc, ' ' * (width - len(str(etype)) - len(pyver)),
873 pyver, date.rjust(width) )
887 pyver, date.rjust(width) )
874 head += "\nA problem occurred executing Python code. Here is the sequence of function" \
888 head += "\nA problem occurred executing Python code. Here is the sequence of function" \
875 "\ncalls leading up to the error, with the most recent (innermost) call last."
889 "\ncalls leading up to the error, with the most recent (innermost) call last."
876 else:
890 else:
877 # Simplified header
891 # Simplified header
878 head = '%s%s' % (exc, 'Traceback (most recent call last)'. \
892 head = '%s%s' % (exc, 'Traceback (most recent call last)'. \
879 rjust(width - len(str(etype))) )
893 rjust(width - len(str(etype))) )
880
894
881 return head
895 return head
882
896
883 def format_exception(self, etype, evalue):
897 def format_exception(self, etype, evalue):
884 colors = self.Colors # just a shorthand + quicker name lookup
898 colors = self.Colors # just a shorthand + quicker name lookup
885 colorsnormal = colors.Normal # used a lot
899 colorsnormal = colors.Normal # used a lot
886 # Get (safely) a string form of the exception info
900 # Get (safely) a string form of the exception info
887 try:
901 try:
888 etype_str, evalue_str = map(str, (etype, evalue))
902 etype_str, evalue_str = map(str, (etype, evalue))
889 except:
903 except:
890 # User exception is improperly defined.
904 # User exception is improperly defined.
891 etype, evalue = str, sys.exc_info()[:2]
905 etype, evalue = str, sys.exc_info()[:2]
892 etype_str, evalue_str = map(str, (etype, evalue))
906 etype_str, evalue_str = map(str, (etype, evalue))
893 # ... and format it
907 # ... and format it
894 return ['%s%s%s: %s' % (colors.excName, etype_str,
908 return ['%s%s%s: %s' % (colors.excName, etype_str,
895 colorsnormal, py3compat.cast_unicode(evalue_str))]
909 colorsnormal, py3compat.cast_unicode(evalue_str))]
896
910
897 def format_exception_as_a_whole(
911 def format_exception_as_a_whole(
898 self,
912 self,
899 etype: type,
913 etype: type,
900 evalue: BaseException,
914 evalue: BaseException,
901 etb: Optional[TracebackType],
915 etb: Optional[TracebackType],
902 number_of_lines_of_context,
916 number_of_lines_of_context,
903 tb_offset: Optional[int],
917 tb_offset: Optional[int],
904 ):
918 ):
905 """Formats the header, traceback and exception message for a single exception.
919 """Formats the header, traceback and exception message for a single exception.
906
920
907 This may be called multiple times by Python 3 exception chaining
921 This may be called multiple times by Python 3 exception chaining
908 (PEP 3134).
922 (PEP 3134).
909 """
923 """
910 # some locals
924 # some locals
911 orig_etype = etype
925 orig_etype = etype
912 try:
926 try:
913 etype = etype.__name__
927 etype = etype.__name__
914 except AttributeError:
928 except AttributeError:
915 pass
929 pass
916
930
917 tb_offset = self.tb_offset if tb_offset is None else tb_offset
931 tb_offset = self.tb_offset if tb_offset is None else tb_offset
918 assert isinstance(tb_offset, int)
932 assert isinstance(tb_offset, int)
919 head = self.prepare_header(etype, self.long_header)
933 head = self.prepare_header(etype, self.long_header)
920 records = (
934 records = (
921 self.get_records(etb, number_of_lines_of_context, tb_offset) if etb else []
935 self.get_records(etb, number_of_lines_of_context, tb_offset) if etb else []
922 )
936 )
923
937
924 frames = []
938 frames = []
925 skipped = 0
939 skipped = 0
926 lastrecord = len(records) - 1
940 lastrecord = len(records) - 1
927 for i, record in enumerate(records):
941 for i, record in enumerate(records):
928 if not isinstance(record._sd, stack_data.RepeatedFrames) and self.skip_hidden:
942 if (
929 if record.frame.f_locals.get("__tracebackhide__", 0) and i != lastrecord:
943 not isinstance(record._sd, stack_data.RepeatedFrames)
944 and self.skip_hidden
945 ):
946 if (
947 record.frame.f_locals.get("__tracebackhide__", 0)
948 and i != lastrecord
949 ):
930 skipped += 1
950 skipped += 1
931 continue
951 continue
932 if skipped:
952 if skipped:
933 Colors = self.Colors # just a shorthand + quicker name lookup
953 Colors = self.Colors # just a shorthand + quicker name lookup
934 ColorsNormal = Colors.Normal # used a lot
954 ColorsNormal = Colors.Normal # used a lot
935 frames.append(
955 frames.append(
936 " %s[... skipping hidden %s frame]%s\n"
956 " %s[... skipping hidden %s frame]%s\n"
937 % (Colors.excName, skipped, ColorsNormal)
957 % (Colors.excName, skipped, ColorsNormal)
938 )
958 )
939 skipped = 0
959 skipped = 0
940 frames.append(self.format_record(record))
960 frames.append(self.format_record(record))
941 if skipped:
961 if skipped:
942 Colors = self.Colors # just a shorthand + quicker name lookup
962 Colors = self.Colors # just a shorthand + quicker name lookup
943 ColorsNormal = Colors.Normal # used a lot
963 ColorsNormal = Colors.Normal # used a lot
944 frames.append(
964 frames.append(
945 " %s[... skipping hidden %s frame]%s\n"
965 " %s[... skipping hidden %s frame]%s\n"
946 % (Colors.excName, skipped, ColorsNormal)
966 % (Colors.excName, skipped, ColorsNormal)
947 )
967 )
948
968
949 formatted_exception = self.format_exception(etype, evalue)
969 formatted_exception = self.format_exception(etype, evalue)
950 if records:
970 if records:
951 frame_info = records[-1]
971 frame_info = records[-1]
952 ipinst = get_ipython()
972 ipinst = get_ipython()
953 if ipinst is not None:
973 if ipinst is not None:
954 ipinst.hooks.synchronize_with_editor(frame_info.filename, frame_info.lineno, 0)
974 ipinst.hooks.synchronize_with_editor(frame_info.filename, frame_info.lineno, 0)
955
975
956 return [[head] + frames + [''.join(formatted_exception[0])]]
976 return [[head] + frames + [''.join(formatted_exception[0])]]
957
977
958 def get_records(
978 def get_records(
959 self, etb: TracebackType, number_of_lines_of_context: int, tb_offset: int
979 self, etb: TracebackType, number_of_lines_of_context: int, tb_offset: int
960 ):
980 ):
961 assert etb is not None
981 assert etb is not None
962 context = number_of_lines_of_context - 1
982 context = number_of_lines_of_context - 1
963 after = context // 2
983 after = context // 2
964 before = context - after
984 before = context - after
965 if self.has_colors:
985 if self.has_colors:
966 style = get_style_by_name("default")
986 style = get_style_by_name("default")
967 style = stack_data.style_with_executing_node(style, self._tb_highlight)
987 style = stack_data.style_with_executing_node(style, self._tb_highlight)
968 formatter = Terminal256Formatter(style=style)
988 formatter = Terminal256Formatter(style=style)
969 else:
989 else:
970 formatter = None
990 formatter = None
971 options = stack_data.Options(
991 options = stack_data.Options(
972 before=before,
992 before=before,
973 after=after,
993 after=after,
974 pygments_formatter=formatter,
994 pygments_formatter=formatter,
975 )
995 )
976
996
977 # let's estimate the amount of code we will have to parse/highlight.
997 # let's estimate the amount of code we will have to parse/highlight.
978 cf = etb
998 cf = etb
979 max_len = 0
999 max_len = 0
980 tbs = []
1000 tbs = []
981 while cf is not None:
1001 while cf is not None:
982 source_file = inspect.getsourcefile(cf.tb_frame)
1002 source_file = inspect.getsourcefile(cf.tb_frame)
983 lines, first = inspect.getsourcelines(cf.tb_frame)
1003 lines, first = inspect.getsourcelines(cf.tb_frame)
984 max_len = max(max_len, first+len(lines))
1004 max_len = max(max_len, first + len(lines))
985 tbs.append(cf)
1005 tbs.append(cf)
986 cf = cf.tb_next
1006 cf = cf.tb_next
987
1007
988
989
990 if max_len > FAST_THRESHOLD:
1008 if max_len > FAST_THRESHOLD:
991 FIs = []
1009 FIs = []
992 for tb in tbs:
1010 for tb in tbs:
993 frame = tb.tb_frame
1011 frame = tb.tb_frame
994 lineno = frame.f_lineno,
1012 lineno = (frame.f_lineno,)
995 code = frame.f_code
1013 code = frame.f_code
996 filename = code.co_filename
1014 filename = code.co_filename
997 FIs.append( FrameInfo("Raw frame", filename, lineno, frame, code))
1015 FIs.append(FrameInfo("Raw frame", filename, lineno, frame, code))
998 return FIs
1016 return FIs
999 res = list(stack_data.FrameInfo.stack_data(etb, options=options))[tb_offset:]
1017 res = list(stack_data.FrameInfo.stack_data(etb, options=options))[tb_offset:]
1000 res = [FrameInfo._from_stack_data_FrameInfo(r) for r in res]
1018 res = [FrameInfo._from_stack_data_FrameInfo(r) for r in res]
1001 return res
1019 return res
1002
1020
1003 def structured_traceback(
1021 def structured_traceback(
1004 self,
1022 self,
1005 etype: type,
1023 etype: type,
1006 evalue: Optional[BaseException],
1024 evalue: Optional[BaseException],
1007 etb: Optional[TracebackType],
1025 etb: Optional[TracebackType],
1008 tb_offset: Optional[int] = None,
1026 tb_offset: Optional[int] = None,
1009 number_of_lines_of_context: int = 5,
1027 number_of_lines_of_context: int = 5,
1010 ):
1028 ):
1011 """Return a nice text document describing the traceback."""
1029 """Return a nice text document describing the traceback."""
1012 formatted_exception = self.format_exception_as_a_whole(etype, evalue, etb, number_of_lines_of_context,
1030 formatted_exception = self.format_exception_as_a_whole(etype, evalue, etb, number_of_lines_of_context,
1013 tb_offset)
1031 tb_offset)
1014
1032
1015 colors = self.Colors # just a shorthand + quicker name lookup
1033 colors = self.Colors # just a shorthand + quicker name lookup
1016 colorsnormal = colors.Normal # used a lot
1034 colorsnormal = colors.Normal # used a lot
1017 head = '%s%s%s' % (colors.topline, '-' * min(75, get_terminal_size()[0]), colorsnormal)
1035 head = '%s%s%s' % (colors.topline, '-' * min(75, get_terminal_size()[0]), colorsnormal)
1018 structured_traceback_parts = [head]
1036 structured_traceback_parts = [head]
1019 chained_exceptions_tb_offset = 0
1037 chained_exceptions_tb_offset = 0
1020 lines_of_context = 3
1038 lines_of_context = 3
1021 formatted_exceptions = formatted_exception
1039 formatted_exceptions = formatted_exception
1022 exception = self.get_parts_of_chained_exception(evalue)
1040 exception = self.get_parts_of_chained_exception(evalue)
1023 if exception:
1041 if exception:
1024 assert evalue is not None
1042 assert evalue is not None
1025 formatted_exceptions += self.prepare_chained_exception_message(evalue.__cause__)
1043 formatted_exceptions += self.prepare_chained_exception_message(evalue.__cause__)
1026 etype, evalue, etb = exception
1044 etype, evalue, etb = exception
1027 else:
1045 else:
1028 evalue = None
1046 evalue = None
1029 chained_exc_ids = set()
1047 chained_exc_ids = set()
1030 while evalue:
1048 while evalue:
1031 formatted_exceptions += self.format_exception_as_a_whole(etype, evalue, etb, lines_of_context,
1049 formatted_exceptions += self.format_exception_as_a_whole(etype, evalue, etb, lines_of_context,
1032 chained_exceptions_tb_offset)
1050 chained_exceptions_tb_offset)
1033 exception = self.get_parts_of_chained_exception(evalue)
1051 exception = self.get_parts_of_chained_exception(evalue)
1034
1052
1035 if exception and not id(exception[1]) in chained_exc_ids:
1053 if exception and not id(exception[1]) in chained_exc_ids:
1036 chained_exc_ids.add(id(exception[1])) # trace exception to avoid infinite 'cause' loop
1054 chained_exc_ids.add(id(exception[1])) # trace exception to avoid infinite 'cause' loop
1037 formatted_exceptions += self.prepare_chained_exception_message(evalue.__cause__)
1055 formatted_exceptions += self.prepare_chained_exception_message(evalue.__cause__)
1038 etype, evalue, etb = exception
1056 etype, evalue, etb = exception
1039 else:
1057 else:
1040 evalue = None
1058 evalue = None
1041
1059
1042 # we want to see exceptions in a reversed order:
1060 # we want to see exceptions in a reversed order:
1043 # the first exception should be on top
1061 # the first exception should be on top
1044 for formatted_exception in reversed(formatted_exceptions):
1062 for formatted_exception in reversed(formatted_exceptions):
1045 structured_traceback_parts += formatted_exception
1063 structured_traceback_parts += formatted_exception
1046
1064
1047 return structured_traceback_parts
1065 return structured_traceback_parts
1048
1066
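# Sketch of how the chained-exception walk above shows up in practice: an
# exception raised with `raise ... from ...` is followed through __cause__, and
# the reversed() at the end means the original (innermost) exception is printed
# first.  Assumes the standard IPython.core.ultratb import path.
import sys
from IPython.core.ultratb import VerboseTB

try:
    try:
        int("not a number")
    except ValueError as err:
        raise RuntimeError("could not parse the input") from err
except RuntimeError:
    print(VerboseTB(color_scheme="NoColor").text(*sys.exc_info()))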
1049 def debugger(self, force: bool = False):
1067 def debugger(self, force: bool = False):
1050 """Call up the pdb debugger if desired, always clean up the tb
1068 """Call up the pdb debugger if desired, always clean up the tb
1051 reference.
1069 reference.
1052
1070
1053 Keywords:
1071 Keywords:
1054
1072
1055 - force(False): by default, this routine checks the instance call_pdb
1073 - force(False): by default, this routine checks the instance call_pdb
1056 flag and does not actually invoke the debugger if the flag is false.
1074 flag and does not actually invoke the debugger if the flag is false.
1057 The 'force' option forces the debugger to activate even if the flag
1075 The 'force' option forces the debugger to activate even if the flag
1058 is false.
1076 is false.
1059
1077
1060 If the call_pdb flag is set, the pdb interactive debugger is
1078 If the call_pdb flag is set, the pdb interactive debugger is
1061 invoked. In all cases, the self.tb reference to the current traceback
1079 invoked. In all cases, the self.tb reference to the current traceback
1062 is deleted to prevent lingering references which hamper memory
1080 is deleted to prevent lingering references which hamper memory
1063 management.
1081 management.
1064
1082
1065 Note that each call to pdb() does an 'import readline', so if your app
1083 Note that each call to pdb() does an 'import readline', so if your app
1066 requires a special setup for the readline completers, you'll have to
1084 requires a special setup for the readline completers, you'll have to
1067 fix that by hand after invoking the exception handler."""
1085 fix that by hand after invoking the exception handler."""
1068
1086
1069 if force or self.call_pdb:
1087 if force or self.call_pdb:
1070 if self.pdb is None:
1088 if self.pdb is None:
1071 self.pdb = self.debugger_cls()
1089 self.pdb = self.debugger_cls()
1072 # the system displayhook may have changed, restore the original
1090 # the system displayhook may have changed, restore the original
1073 # for pdb
1091 # for pdb
1074 display_trap = DisplayTrap(hook=sys.__displayhook__)
1092 display_trap = DisplayTrap(hook=sys.__displayhook__)
1075 with display_trap:
1093 with display_trap:
1076 self.pdb.reset()
1094 self.pdb.reset()
1077 # Find the right frame so we don't pop up inside ipython itself
1095 # Find the right frame so we don't pop up inside ipython itself
1078 if hasattr(self, 'tb') and self.tb is not None:
1096 if hasattr(self, 'tb') and self.tb is not None:
1079 etb = self.tb
1097 etb = self.tb
1080 else:
1098 else:
1081 etb = self.tb = sys.last_traceback
1099 etb = self.tb = sys.last_traceback
1082 while self.tb is not None and self.tb.tb_next is not None:
1100 while self.tb is not None and self.tb.tb_next is not None:
1083 assert self.tb.tb_next is not None
1101 assert self.tb.tb_next is not None
1084 self.tb = self.tb.tb_next
1102 self.tb = self.tb.tb_next
1085 if etb and etb.tb_next:
1103 if etb and etb.tb_next:
1086 etb = etb.tb_next
1104 etb = etb.tb_next
1087 self.pdb.botframe = etb.tb_frame
1105 self.pdb.botframe = etb.tb_frame
1088 self.pdb.interaction(None, etb)
1106 self.pdb.interaction(None, etb)
1089
1107
1090 if hasattr(self, 'tb'):
1108 if hasattr(self, 'tb'):
1091 del self.tb
1109 del self.tb
1092
1110
1093 def handler(self, info=None):
1111 def handler(self, info=None):
1094 (etype, evalue, etb) = info or sys.exc_info()
1112 (etype, evalue, etb) = info or sys.exc_info()
1095 self.tb = etb
1113 self.tb = etb
1096 ostream = self.ostream
1114 ostream = self.ostream
1097 ostream.flush()
1115 ostream.flush()
1098 ostream.write(self.text(etype, evalue, etb))
1116 ostream.write(self.text(etype, evalue, etb))
1099 ostream.write('\n')
1117 ostream.write('\n')
1100 ostream.flush()
1118 ostream.flush()
1101
1119
1102 # Changed so an instance can just be called as VerboseTB_inst() and print
1120 # Changed so an instance can just be called as VerboseTB_inst() and print
1103 # out the right info on its own.
1121 # out the right info on its own.
1104 def __call__(self, etype=None, evalue=None, etb=None):
1122 def __call__(self, etype=None, evalue=None, etb=None):
1105 """This hook can replace sys.excepthook (for Python 2.1 or higher)."""
1123 """This hook can replace sys.excepthook (for Python 2.1 or higher)."""
1106 if etb is None:
1124 if etb is None:
1107 self.handler()
1125 self.handler()
1108 else:
1126 else:
1109 self.handler((etype, evalue, etb))
1127 self.handler((etype, evalue, etb))
1110 try:
1128 try:
1111 self.debugger()
1129 self.debugger()
1112 except KeyboardInterrupt:
1130 except KeyboardInterrupt:
1113 print("\nKeyboardInterrupt")
1131 print("\nKeyboardInterrupt")
1114
1132
1115
1133
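# Minimal usage sketch: a VerboseTB instance is callable with (etype, evalue, etb),
# so it can stand in for sys.excepthook in a plain script.  Parameter choices here
# are illustrative only.
import sys
from IPython.core.ultratb import VerboseTB

sys.excepthook = VerboseTB(color_scheme="Linux", include_vars=True)

def divide(a, b):
    return a / b

# divide(1, 0)  # uncomment: the uncaught ZeroDivisionError is rendered by VerboseTB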
1116 #----------------------------------------------------------------------------
1134 #----------------------------------------------------------------------------
1117 class FormattedTB(VerboseTB, ListTB):
1135 class FormattedTB(VerboseTB, ListTB):
1118 """Subclass ListTB but allow calling with a traceback.
1136 """Subclass ListTB but allow calling with a traceback.
1119
1137
1120 It can thus be used as a sys.excepthook for Python > 2.1.
1138 It can thus be used as a sys.excepthook for Python > 2.1.
1121
1139
1122 Also adds 'Context' and 'Verbose' modes, not available in ListTB.
1140 Also adds 'Context' and 'Verbose' modes, not available in ListTB.
1123
1141
1124 Allows a tb_offset to be specified. This is useful for situations where
1142 Allows a tb_offset to be specified. This is useful for situations where
1125 one needs to remove a number of topmost frames from the traceback (such as
1143 one needs to remove a number of topmost frames from the traceback (such as
1126 occurs with python programs that themselves execute other python code,
1144 occurs with python programs that themselves execute other python code,
1127 like Python shells). """
1145 like Python shells). """
1128
1146
1129 mode: str
1147 mode: str
1130
1148
1131 def __init__(self, mode='Plain', color_scheme='Linux', call_pdb=False,
1149 def __init__(self, mode='Plain', color_scheme='Linux', call_pdb=False,
1132 ostream=None,
1150 ostream=None,
1133 tb_offset=0, long_header=False, include_vars=False,
1151 tb_offset=0, long_header=False, include_vars=False,
1134 check_cache=None, debugger_cls=None,
1152 check_cache=None, debugger_cls=None,
1135 parent=None, config=None):
1153 parent=None, config=None):
1136
1154
1137 # NEVER change the order of this list. Put new modes at the end:
1155 # NEVER change the order of this list. Put new modes at the end:
1138 self.valid_modes = ['Plain', 'Context', 'Verbose', 'Minimal']
1156 self.valid_modes = ['Plain', 'Context', 'Verbose', 'Minimal']
1139 self.verbose_modes = self.valid_modes[1:3]
1157 self.verbose_modes = self.valid_modes[1:3]
1140
1158
1141 VerboseTB.__init__(self, color_scheme=color_scheme, call_pdb=call_pdb,
1159 VerboseTB.__init__(self, color_scheme=color_scheme, call_pdb=call_pdb,
1142 ostream=ostream, tb_offset=tb_offset,
1160 ostream=ostream, tb_offset=tb_offset,
1143 long_header=long_header, include_vars=include_vars,
1161 long_header=long_header, include_vars=include_vars,
1144 check_cache=check_cache, debugger_cls=debugger_cls,
1162 check_cache=check_cache, debugger_cls=debugger_cls,
1145 parent=parent, config=config)
1163 parent=parent, config=config)
1146
1164
1147 # Different types of tracebacks are joined with different separators to
1165 # Different types of tracebacks are joined with different separators to
1148 # form a single string. They are taken from this dict
1166 # form a single string. They are taken from this dict
1149 self._join_chars = dict(Plain='', Context='\n', Verbose='\n',
1167 self._join_chars = dict(Plain='', Context='\n', Verbose='\n',
1150 Minimal='')
1168 Minimal='')
1151 # set_mode also sets the tb_join_char attribute
1169 # set_mode also sets the tb_join_char attribute
1152 self.set_mode(mode)
1170 self.set_mode(mode)
1153
1171
1154 def structured_traceback(self, etype, value, tb, tb_offset=None, number_of_lines_of_context=5):
1172 def structured_traceback(self, etype, value, tb, tb_offset=None, number_of_lines_of_context=5):
1155 tb_offset = self.tb_offset if tb_offset is None else tb_offset
1173 tb_offset = self.tb_offset if tb_offset is None else tb_offset
1156 mode = self.mode
1174 mode = self.mode
1157 if mode in self.verbose_modes:
1175 if mode in self.verbose_modes:
1158 # Verbose modes need a full traceback
1176 # Verbose modes need a full traceback
1159 return VerboseTB.structured_traceback(
1177 return VerboseTB.structured_traceback(
1160 self, etype, value, tb, tb_offset, number_of_lines_of_context
1178 self, etype, value, tb, tb_offset, number_of_lines_of_context
1161 )
1179 )
1162 elif mode == 'Minimal':
1180 elif mode == 'Minimal':
1163 return ListTB.get_exception_only(self, etype, value)
1181 return ListTB.get_exception_only(self, etype, value)
1164 else:
1182 else:
1165 # We must check the source cache because otherwise we can print
1183 # We must check the source cache because otherwise we can print
1166 # out-of-date source code.
1184 # out-of-date source code.
1167 self.check_cache()
1185 self.check_cache()
1168 # Now we can extract and format the exception
1186 # Now we can extract and format the exception
1169 return ListTB.structured_traceback(
1187 return ListTB.structured_traceback(
1170 self, etype, value, tb, tb_offset, number_of_lines_of_context
1188 self, etype, value, tb, tb_offset, number_of_lines_of_context
1171 )
1189 )
1172
1190
1173 def stb2text(self, stb):
1191 def stb2text(self, stb):
1174 """Convert a structured traceback (a list) to a string."""
1192 """Convert a structured traceback (a list) to a string."""
1175 return self.tb_join_char.join(stb)
1193 return self.tb_join_char.join(stb)
1176
1194
1177 def set_mode(self, mode: Optional[str] = None):
1195 def set_mode(self, mode: Optional[str] = None):
1178 """Switch to the desired mode.
1196 """Switch to the desired mode.
1179
1197
1180 If mode is not specified, cycles through the available modes."""
1198 If mode is not specified, cycles through the available modes."""
1181
1199
1182 if not mode:
1200 if not mode:
1183 new_idx = (self.valid_modes.index(self.mode) + 1 ) % \
1201 new_idx = (self.valid_modes.index(self.mode) + 1 ) % \
1184 len(self.valid_modes)
1202 len(self.valid_modes)
1185 self.mode = self.valid_modes[new_idx]
1203 self.mode = self.valid_modes[new_idx]
1186 elif mode not in self.valid_modes:
1204 elif mode not in self.valid_modes:
1187 raise ValueError(
1205 raise ValueError(
1188 "Unrecognized mode in FormattedTB: <" + mode + ">\n"
1206 "Unrecognized mode in FormattedTB: <" + mode + ">\n"
1189 "Valid modes: " + str(self.valid_modes)
1207 "Valid modes: " + str(self.valid_modes)
1190 )
1208 )
1191 else:
1209 else:
1192 assert isinstance(mode, str)
1210 assert isinstance(mode, str)
1193 self.mode = mode
1211 self.mode = mode
1194 # include variable details only in 'Verbose' mode
1212 # include variable details only in 'Verbose' mode
1195 self.include_vars = (self.mode == self.valid_modes[2])
1213 self.include_vars = (self.mode == self.valid_modes[2])
1196 # Set the join character for generating text tracebacks
1214 # Set the join character for generating text tracebacks
1197 self.tb_join_char = self._join_chars[self.mode]
1215 self.tb_join_char = self._join_chars[self.mode]
1198
1216
1199 # some convenient shortcuts
1217 # some convenient shortcuts
1200 def plain(self):
1218 def plain(self):
1201 self.set_mode(self.valid_modes[0])
1219 self.set_mode(self.valid_modes[0])
1202
1220
1203 def context(self):
1221 def context(self):
1204 self.set_mode(self.valid_modes[1])
1222 self.set_mode(self.valid_modes[1])
1205
1223
1206 def verbose(self):
1224 def verbose(self):
1207 self.set_mode(self.valid_modes[2])
1225 self.set_mode(self.valid_modes[2])
1208
1226
1209 def minimal(self):
1227 def minimal(self):
1210 self.set_mode(self.valid_modes[3])
1228 self.set_mode(self.valid_modes[3])
1211
1229
1212
1230
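# Sketch of the mode machinery defined above: set_mode(None) cycles through
# valid_modes in order, the plain()/context()/verbose()/minimal() shortcuts jump
# straight to one mode, and include_vars is only enabled for 'Verbose'.
from IPython.core.ultratb import FormattedTB

ftb = FormattedTB(mode="Plain", color_scheme="NoColor")
ftb.verbose()             # same as ftb.set_mode('Verbose'); include_vars becomes True
ftb.set_mode()            # no argument: cycle to the next mode ('Minimal' here)
print(ftb.mode, ftb.include_vars)   # -> Minimal False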
1213 #----------------------------------------------------------------------------
1231 #----------------------------------------------------------------------------
1214 class AutoFormattedTB(FormattedTB):
1232 class AutoFormattedTB(FormattedTB):
1215 """A traceback printer which can be called on the fly.
1233 """A traceback printer which can be called on the fly.
1216
1234
1217 It will find out about exceptions by itself.
1235 It will find out about exceptions by itself.
1218
1236
1219 A brief example::
1237 A brief example::
1220
1238
1221 AutoTB = AutoFormattedTB(mode = 'Verbose',color_scheme='Linux')
1239 AutoTB = AutoFormattedTB(mode = 'Verbose',color_scheme='Linux')
1222 try:
1240 try:
1223 ...
1241 ...
1224 except:
1242 except:
1225 AutoTB() # or AutoTB(out=logfile) where logfile is an open file object
1243 AutoTB() # or AutoTB(out=logfile) where logfile is an open file object
1226 """
1244 """
1227
1245
1228 def __call__(self, etype=None, evalue=None, etb=None,
1246 def __call__(self, etype=None, evalue=None, etb=None,
1229 out=None, tb_offset=None):
1247 out=None, tb_offset=None):
1230 """Print out a formatted exception traceback.
1248 """Print out a formatted exception traceback.
1231
1249
1232 Optional arguments:
1250 Optional arguments:
1233 - out: an open file-like object to direct output to.
1251 - out: an open file-like object to direct output to.
1234
1252
1235 - tb_offset: the number of frames to skip over in the stack, on a
1253 - tb_offset: the number of frames to skip over in the stack, on a
1236 per-call basis (this overrides temporarily the instance's tb_offset
1254 per-call basis (this overrides temporarily the instance's tb_offset
1237 given at initialization time."""
1255 given at initialization time."""
1238
1256
1239 if out is None:
1257 if out is None:
1240 out = self.ostream
1258 out = self.ostream
1241 out.flush()
1259 out.flush()
1242 out.write(self.text(etype, evalue, etb, tb_offset))
1260 out.write(self.text(etype, evalue, etb, tb_offset))
1243 out.write('\n')
1261 out.write('\n')
1244 out.flush()
1262 out.flush()
1245 # FIXME: we should remove the auto pdb behavior from here and leave
1263 # FIXME: we should remove the auto pdb behavior from here and leave
1246 # that to the clients.
1264 # that to the clients.
1247 try:
1265 try:
1248 self.debugger()
1266 self.debugger()
1249 except KeyboardInterrupt:
1267 except KeyboardInterrupt:
1250 print("\nKeyboardInterrupt")
1268 print("\nKeyboardInterrupt")
1251
1269
1252 def structured_traceback(self, etype=None, value=None, tb=None,
1270 def structured_traceback(
1253 tb_offset=None, number_of_lines_of_context=5):
1271 self,
1254
1272 etype=None,
1273 value=None,
1274 tb=None,
1275 tb_offset=None,
1276 number_of_lines_of_context=5,
1277 ):
1255 etype: type
1278 etype: type
1256 value: BaseException
1279 value: BaseException
1257 # tb: TracebackType or tuple of tb types?
1280 # tb: TracebackType or tuple of tb types?
1258 if etype is None:
1281 if etype is None:
1259 etype, value, tb = sys.exc_info()
1282 etype, value, tb = sys.exc_info()
1260 if isinstance(tb, tuple):
1283 if isinstance(tb, tuple):
1261 # tb is a tuple if this is a chained exception.
1284 # tb is a tuple if this is a chained exception.
1262 self.tb = tb[0]
1285 self.tb = tb[0]
1263 else:
1286 else:
1264 self.tb = tb
1287 self.tb = tb
1265 return FormattedTB.structured_traceback(
1288 return FormattedTB.structured_traceback(
1266 self, etype, value, tb, tb_offset, number_of_lines_of_context)
1289 self, etype, value, tb, tb_offset, number_of_lines_of_context)
1267
1290
1268
1291
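# Sketch: because AutoFormattedTB falls back to sys.exc_info(), it can be called
# with no exception arguments at all, and `out` lets the formatted traceback be
# captured rather than written to the default ostream.
import io
from IPython.core.ultratb import AutoFormattedTB

auto_tb = AutoFormattedTB(mode="Context", color_scheme="NoColor")
buf = io.StringIO()
try:
    {}["missing"]
except KeyError:
    auto_tb(out=buf)          # formats the currently active exception into buf
print(buf.getvalue())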
1269 #---------------------------------------------------------------------------
1292 #---------------------------------------------------------------------------
1270
1293
1271 # A simple class to preserve Nathan's original functionality.
1294 # A simple class to preserve Nathan's original functionality.
1272 class ColorTB(FormattedTB):
1295 class ColorTB(FormattedTB):
1273 """Shorthand to initialize a FormattedTB in Linux colors mode."""
1296 """Shorthand to initialize a FormattedTB in Linux colors mode."""
1274
1297
1275 def __init__(self, color_scheme='Linux', call_pdb=0, **kwargs):
1298 def __init__(self, color_scheme='Linux', call_pdb=0, **kwargs):
1276 FormattedTB.__init__(self, color_scheme=color_scheme,
1299 FormattedTB.__init__(self, color_scheme=color_scheme,
1277 call_pdb=call_pdb, **kwargs)
1300 call_pdb=call_pdb, **kwargs)
1278
1301
1279
1302
1280 class SyntaxTB(ListTB):
1303 class SyntaxTB(ListTB):
1281 """Extension which holds some state: the last exception value"""
1304 """Extension which holds some state: the last exception value"""
1282
1305
1283 def __init__(self, color_scheme='NoColor', parent=None, config=None):
1306 def __init__(self, color_scheme='NoColor', parent=None, config=None):
1284 ListTB.__init__(self, color_scheme, parent=parent, config=config)
1307 ListTB.__init__(self, color_scheme, parent=parent, config=config)
1285 self.last_syntax_error = None
1308 self.last_syntax_error = None
1286
1309
1287 def __call__(self, etype, value, elist):
1310 def __call__(self, etype, value, elist):
1288 self.last_syntax_error = value
1311 self.last_syntax_error = value
1289
1312
1290 ListTB.__call__(self, etype, value, elist)
1313 ListTB.__call__(self, etype, value, elist)
1291
1314
1292 def structured_traceback(self, etype, value, elist, tb_offset=None,
1315 def structured_traceback(self, etype, value, elist, tb_offset=None,
1293 context=5):
1316 context=5):
1294 # If the source file has been edited, the line in the syntax error can
1317 # If the source file has been edited, the line in the syntax error can
1295 # be wrong (retrieved from an outdated cache). This replaces it with
1318 # be wrong (retrieved from an outdated cache). This replaces it with
1296 # the current value.
1319 # the current value.
1297 if isinstance(value, SyntaxError) \
1320 if isinstance(value, SyntaxError) \
1298 and isinstance(value.filename, str) \
1321 and isinstance(value.filename, str) \
1299 and isinstance(value.lineno, int):
1322 and isinstance(value.lineno, int):
1300 linecache.checkcache(value.filename)
1323 linecache.checkcache(value.filename)
1301 newtext = linecache.getline(value.filename, value.lineno)
1324 newtext = linecache.getline(value.filename, value.lineno)
1302 if newtext:
1325 if newtext:
1303 value.text = newtext
1326 value.text = newtext
1304 self.last_syntax_error = value
1327 self.last_syntax_error = value
1305 return super(SyntaxTB, self).structured_traceback(etype, value, elist,
1328 return super(SyntaxTB, self).structured_traceback(etype, value, elist,
1306 tb_offset=tb_offset, context=context)
1329 tb_offset=tb_offset, context=context)
1307
1330
1308 def clear_err_state(self):
1331 def clear_err_state(self):
1309 """Return the current error state and clear it"""
1332 """Return the current error state and clear it"""
1310 e = self.last_syntax_error
1333 e = self.last_syntax_error
1311 self.last_syntax_error = None
1334 self.last_syntax_error = None
1312 return e
1335 return e
1313
1336
1314 def stb2text(self, stb):
1337 def stb2text(self, stb):
1315 """Convert a structured traceback (a list) to a string."""
1338 """Convert a structured traceback (a list) to a string."""
1316 return ''.join(stb)
1339 return ''.join(stb)
1317
1340
1318
1341
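# Sketch of the state-holding behaviour documented above: SyntaxTB remembers the
# last SyntaxError until clear_err_state() hands it back and resets the state.
# Passing an empty elist mirrors how the shell reports pure syntax errors.
from IPython.core.ultratb import SyntaxTB

stb = SyntaxTB(color_scheme="NoColor")
try:
    compile("def broken(:\n    pass", "<example>", "exec")
except SyntaxError as err:
    stb(type(err), err, [])      # prints the formatted syntax error
print(stb.clear_err_state())     # the stored SyntaxError; state is now cleared
print(stb.clear_err_state())     # None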
1319 # some internal-use functions
1342 # some internal-use functions
1320 def text_repr(value):
1343 def text_repr(value):
1321 """Hopefully pretty robust repr equivalent."""
1344 """Hopefully pretty robust repr equivalent."""
1322 # this is pretty horrible but should always return *something*
1345 # this is pretty horrible but should always return *something*
1323 try:
1346 try:
1324 return pydoc.text.repr(value)
1347 return pydoc.text.repr(value)
1325 except KeyboardInterrupt:
1348 except KeyboardInterrupt:
1326 raise
1349 raise
1327 except:
1350 except:
1328 try:
1351 try:
1329 return repr(value)
1352 return repr(value)
1330 except KeyboardInterrupt:
1353 except KeyboardInterrupt:
1331 raise
1354 raise
1332 except:
1355 except:
1333 try:
1356 try:
1334 # all still in an except block so we catch
1357 # all still in an except block so we catch
1335 # getattr raising
1358 # getattr raising
1336 name = getattr(value, '__name__', None)
1359 name = getattr(value, '__name__', None)
1337 if name:
1360 if name:
1338 # ick, recursion
1361 # ick, recursion
1339 return text_repr(name)
1362 return text_repr(name)
1340 klass = getattr(value, '__class__', None)
1363 klass = getattr(value, '__class__', None)
1341 if klass:
1364 if klass:
1342 return '%s instance' % text_repr(klass)
1365 return '%s instance' % text_repr(klass)
1343 except KeyboardInterrupt:
1366 except KeyboardInterrupt:
1344 raise
1367 raise
1345 except:
1368 except:
1346 return 'UNRECOVERABLE REPR FAILURE'
1369 return 'UNRECOVERABLE REPR FAILURE'
1347
1370
1348
1371
1349 def eqrepr(value, repr=text_repr):
1372 def eqrepr(value, repr=text_repr):
1350 return '=%s' % repr(value)
1373 return '=%s' % repr(value)
1351
1374
1352
1375
1353 def nullrepr(value, repr=text_repr):
1376 def nullrepr(value, repr=text_repr):
1354 return ''
1377 return ''
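# Small demonstration sketch of the helpers above: text_repr() is meant to be a
# never-raise repr, which is what eqrepr() relies on when formatting frame locals.
from IPython.core.ultratb import eqrepr, text_repr

class BrokenRepr:
    def __repr__(self):
        raise RuntimeError("boom")

print(text_repr(BrokenRepr()))   # falls back to a class-based description, no exception
print(eqrepr(3.14))              # '=3.14'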
@@ -1,178 +1,177 b''
1 import asyncio
1 import asyncio
2 import os
2 import os
3 import sys
3 import sys
4
4
5 from IPython.core.debugger import Pdb
5 from IPython.core.debugger import Pdb
6 from IPython.core.completer import IPCompleter
6 from IPython.core.completer import IPCompleter
7 from .ptutils import IPythonPTCompleter
7 from .ptutils import IPythonPTCompleter
8 from .shortcuts import create_ipython_shortcuts
8 from .shortcuts import create_ipython_shortcuts
9 from . import embed
9 from . import embed
10
10
11 from pathlib import Path
11 from pathlib import Path
12 from pygments.token import Token
12 from pygments.token import Token
13 from prompt_toolkit.shortcuts.prompt import PromptSession
13 from prompt_toolkit.shortcuts.prompt import PromptSession
14 from prompt_toolkit.enums import EditingMode
14 from prompt_toolkit.enums import EditingMode
15 from prompt_toolkit.formatted_text import PygmentsTokens
15 from prompt_toolkit.formatted_text import PygmentsTokens
16 from prompt_toolkit.history import InMemoryHistory, FileHistory
16 from prompt_toolkit.history import InMemoryHistory, FileHistory
17 from concurrent.futures import ThreadPoolExecutor
17 from concurrent.futures import ThreadPoolExecutor
18
18
19 from prompt_toolkit import __version__ as ptk_version
19 from prompt_toolkit import __version__ as ptk_version
20 PTK3 = ptk_version.startswith('3.')
20 PTK3 = ptk_version.startswith('3.')
21
21
22
22
23 # we want to avoid ptk as much as possible when using subprocesses
23 # we want to avoid ptk as much as possible when using subprocesses
24 # as it uses cursor-positioning requests, deletes colors, etc.
24 # as it uses cursor-positioning requests, deletes colors, etc.
25 _use_simple_prompt = "IPY_TEST_SIMPLE_PROMPT" in os.environ
25 _use_simple_prompt = "IPY_TEST_SIMPLE_PROMPT" in os.environ
26
26
27
27
28 class TerminalPdb(Pdb):
28 class TerminalPdb(Pdb):
29 """Standalone IPython debugger."""
29 """Standalone IPython debugger."""
30
30
31 def __init__(self, *args, pt_session_options=None, **kwargs):
31 def __init__(self, *args, pt_session_options=None, **kwargs):
32 Pdb.__init__(self, *args, **kwargs)
32 Pdb.__init__(self, *args, **kwargs)
33 self._ptcomp = None
33 self._ptcomp = None
34 self.pt_init(pt_session_options)
34 self.pt_init(pt_session_options)
35 self.thread_executor = ThreadPoolExecutor(1)
35 self.thread_executor = ThreadPoolExecutor(1)
36
36
37 def pt_init(self, pt_session_options=None):
37 def pt_init(self, pt_session_options=None):
38 """Initialize the prompt session and the prompt loop
38 """Initialize the prompt session and the prompt loop
39 and store them in self.pt_app and self.pt_loop.
39 and store them in self.pt_app and self.pt_loop.
40
40
41 Additional keyword arguments for the PromptSession class
41 Additional keyword arguments for the PromptSession class
42 can be specified in pt_session_options.
42 can be specified in pt_session_options.
43 """
43 """
44 if pt_session_options is None:
44 if pt_session_options is None:
45 pt_session_options = {}
45 pt_session_options = {}
46
46
47 def get_prompt_tokens():
47 def get_prompt_tokens():
48 return [(Token.Prompt, self.prompt)]
48 return [(Token.Prompt, self.prompt)]
49
49
50 if self._ptcomp is None:
50 if self._ptcomp is None:
51 compl = IPCompleter(
51 compl = IPCompleter(
52 shell=self.shell, namespace={}, global_namespace={}, parent=self.shell
52 shell=self.shell, namespace={}, global_namespace={}, parent=self.shell
53 )
53 )
54 # add a completer for all the do_ methods
54 # add a completer for all the do_ methods
55 methods_names = [m[3:] for m in dir(self) if m.startswith("do_")]
55 methods_names = [m[3:] for m in dir(self) if m.startswith("do_")]
56
56
57 def gen_comp(self, text):
57 def gen_comp(self, text):
58 return [m for m in methods_names if m.startswith(text)]
58 return [m for m in methods_names if m.startswith(text)]
59 import types
59 import types
60 newcomp = types.MethodType(gen_comp, compl)
60 newcomp = types.MethodType(gen_comp, compl)
61 compl.custom_matchers.insert(0, newcomp)
61 compl.custom_matchers.insert(0, newcomp)
62 # end add completer.
62 # end add completer.
63
63
64 self._ptcomp = IPythonPTCompleter(compl)
64 self._ptcomp = IPythonPTCompleter(compl)
65
65
66 # setup history only when we start pdb
66 # setup history only when we start pdb
67 if self.shell.debugger_history is None:
67 if self.shell.debugger_history is None:
68 if self.shell.debugger_history_file is not None:
68 if self.shell.debugger_history_file is not None:
69
70 p = Path(self.shell.debugger_history_file).expanduser()
69 p = Path(self.shell.debugger_history_file).expanduser()
71 if not p.exists():
70 if not p.exists():
72 p.touch()
71 p.touch()
73 self.debugger_history = FileHistory(os.path.expanduser(str(p)))
72 self.debugger_history = FileHistory(os.path.expanduser(str(p)))
74 else:
73 else:
75 self.debugger_history = InMemoryHistory()
74 self.debugger_history = InMemoryHistory()
76 else:
75 else:
77 self.debugger_history = self.shell.debugger_history
76 self.debugger_history = self.shell.debugger_history
78
77
79 options = dict(
78 options = dict(
80 message=(lambda: PygmentsTokens(get_prompt_tokens())),
79 message=(lambda: PygmentsTokens(get_prompt_tokens())),
81 editing_mode=getattr(EditingMode, self.shell.editing_mode.upper()),
80 editing_mode=getattr(EditingMode, self.shell.editing_mode.upper()),
82 key_bindings=create_ipython_shortcuts(self.shell),
81 key_bindings=create_ipython_shortcuts(self.shell),
83 history=self.debugger_history,
82 history=self.debugger_history,
84 completer=self._ptcomp,
83 completer=self._ptcomp,
85 enable_history_search=True,
84 enable_history_search=True,
86 mouse_support=self.shell.mouse_support,
85 mouse_support=self.shell.mouse_support,
87 complete_style=self.shell.pt_complete_style,
86 complete_style=self.shell.pt_complete_style,
88 style=getattr(self.shell, "style", None),
87 style=getattr(self.shell, "style", None),
89 color_depth=self.shell.color_depth,
88 color_depth=self.shell.color_depth,
90 )
89 )
91
90
92 if not PTK3:
91 if not PTK3:
93 options['inputhook'] = self.shell.inputhook
92 options['inputhook'] = self.shell.inputhook
94 options.update(pt_session_options)
93 options.update(pt_session_options)
95 if not _use_simple_prompt:
94 if not _use_simple_prompt:
96 self.pt_loop = asyncio.new_event_loop()
95 self.pt_loop = asyncio.new_event_loop()
97 self.pt_app = PromptSession(**options)
96 self.pt_app = PromptSession(**options)
98
97
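# Sketch: pt_session_options is passed straight through options.update() above,
# so any prompt_toolkit PromptSession keyword can be overridden per debugger
# instance.  The keyword shown is only an example.
from IPython.terminal.debugger import TerminalPdb

dbg = TerminalPdb(pt_session_options={"multiline": False})
# dbg.set_trace()  # would open the customised debugger prompt at this point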
99 def cmdloop(self, intro=None):
98 def cmdloop(self, intro=None):
100 """Repeatedly issue a prompt, accept input, parse an initial prefix
99 """Repeatedly issue a prompt, accept input, parse an initial prefix
101 off the received input, and dispatch to action methods, passing them
100 off the received input, and dispatch to action methods, passing them
102 the remainder of the line as argument.
101 the remainder of the line as argument.
103
102
104 This overrides the corresponding method from cmd.Cmd to provide a prompt_toolkit replacement.
103 This overrides the corresponding method from cmd.Cmd to provide a prompt_toolkit replacement.
105 """
104 """
106 if not self.use_rawinput:
105 if not self.use_rawinput:
107 raise ValueError('Sorry ipdb does not support use_rawinput=False')
106 raise ValueError('Sorry ipdb does not support use_rawinput=False')
108
107
109 # In order to make sure that the prompt, which uses asyncio, doesn't
108 # In order to make sure that the prompt, which uses asyncio, doesn't
110 # interfere with applications in which it's used, we always run the
109 # interfere with applications in which it's used, we always run the
111 # prompt itself in a different thread (we can't start an event loop
110 # prompt itself in a different thread (we can't start an event loop
112 # within an event loop). This new thread won't have any event loop
111 # within an event loop). This new thread won't have any event loop
113 # running, and here we run our prompt-loop.
112 # running, and here we run our prompt-loop.
114 self.preloop()
113 self.preloop()
115
114
116 try:
115 try:
117 if intro is not None:
116 if intro is not None:
118 self.intro = intro
117 self.intro = intro
119 if self.intro:
118 if self.intro:
120 print(self.intro, file=self.stdout)
119 print(self.intro, file=self.stdout)
121 stop = None
120 stop = None
122 while not stop:
121 while not stop:
123 if self.cmdqueue:
122 if self.cmdqueue:
124 line = self.cmdqueue.pop(0)
123 line = self.cmdqueue.pop(0)
125 else:
124 else:
126 self._ptcomp.ipy_completer.namespace = self.curframe_locals
125 self._ptcomp.ipy_completer.namespace = self.curframe_locals
127 self._ptcomp.ipy_completer.global_namespace = self.curframe.f_globals
126 self._ptcomp.ipy_completer.global_namespace = self.curframe.f_globals
128
127
129 # Run the prompt in a different thread.
128 # Run the prompt in a different thread.
130 if not _use_simple_prompt:
129 if not _use_simple_prompt:
131 try:
130 try:
132 line = self.thread_executor.submit(
131 line = self.thread_executor.submit(
133 self.pt_app.prompt
132 self.pt_app.prompt
134 ).result()
133 ).result()
135 except EOFError:
134 except EOFError:
136 line = "EOF"
135 line = "EOF"
137 else:
136 else:
138 line = input("ipdb> ")
137 line = input("ipdb> ")
139
138
140 line = self.precmd(line)
139 line = self.precmd(line)
141 stop = self.onecmd(line)
140 stop = self.onecmd(line)
142 stop = self.postcmd(stop, line)
141 stop = self.postcmd(stop, line)
143 self.postloop()
142 self.postloop()
144 except Exception:
143 except Exception:
145 raise
144 raise
146
145
147 def do_interact(self, arg):
146 def do_interact(self, arg):
148 ipshell = embed.InteractiveShellEmbed(
147 ipshell = embed.InteractiveShellEmbed(
149 config=self.shell.config,
148 config=self.shell.config,
150 banner1="*interactive*",
149 banner1="*interactive*",
151 exit_msg="*exiting interactive console...*",
150 exit_msg="*exiting interactive console...*",
152 )
151 )
153 global_ns = self.curframe.f_globals
152 global_ns = self.curframe.f_globals
154 ipshell(
153 ipshell(
155 module=sys.modules.get(global_ns["__name__"], None),
154 module=sys.modules.get(global_ns["__name__"], None),
156 local_ns=self.curframe_locals,
155 local_ns=self.curframe_locals,
157 )
156 )
158
157
159
158
160 def set_trace(frame=None):
159 def set_trace(frame=None):
161 """
160 """
162 Start debugging from `frame`.
161 Start debugging from `frame`.
163
162
164 If frame is not specified, debugging starts from caller's frame.
163 If frame is not specified, debugging starts from caller's frame.
165 """
164 """
166 TerminalPdb().set_trace(frame or sys._getframe().f_back)
165 TerminalPdb().set_trace(frame or sys._getframe().f_back)
167
166
168
167
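# Usage sketch for the helper above: calling set_trace() inside ordinary code
# opens the prompt_toolkit-based debugger in the caller's frame, much like
# pdb.set_trace().  Requires an interactive terminal, so the call is commented out.
from IPython.terminal.debugger import set_trace

def buggy(x):
    set_trace()          # would stop here with an ipdb-style prompt
    return x + 1

# buggy(41)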
169 if __name__ == '__main__':
168 if __name__ == '__main__':
170 import pdb
169 import pdb
171 # IPython.core.debugger.Pdb.trace_dispatch shall not catch
170 # IPython.core.debugger.Pdb.trace_dispatch shall not catch
172 # bdb.BdbQuit. When started through __main__ and an exception
171 # bdb.BdbQuit. When started through __main__ and an exception
173 # happened after hitting "c", this is needed in order to
172 # happened after hitting "c", this is needed in order to
174 # be able to quit the debugging session (see #9950).
173 # be able to quit the debugging session (see #9950).
175 old_trace_dispatch = pdb.Pdb.trace_dispatch
174 old_trace_dispatch = pdb.Pdb.trace_dispatch
176 pdb.Pdb = TerminalPdb # type: ignore
175 pdb.Pdb = TerminalPdb # type: ignore
177 pdb.Pdb.trace_dispatch = old_trace_dispatch # type: ignore
176 pdb.Pdb.trace_dispatch = old_trace_dispatch # type: ignore
178 pdb.main()
177 pdb.main()
@@ -1,860 +1,859 b''
1 # Based on Pytest doctest.py
1 # Based on Pytest doctest.py
2 # Original license:
2 # Original license:
3 # The MIT License (MIT)
3 # The MIT License (MIT)
4 #
4 #
5 # Copyright (c) 2004-2021 Holger Krekel and others
5 # Copyright (c) 2004-2021 Holger Krekel and others
6 """Discover and run ipdoctests in modules and test files."""
6 """Discover and run ipdoctests in modules and test files."""
7 import builtins
7 import builtins
8 import bdb
8 import bdb
9 import inspect
9 import inspect
10 import os
10 import os
11 import platform
11 import platform
12 import sys
12 import sys
13 import traceback
13 import traceback
14 import types
14 import types
15 import warnings
15 import warnings
16 from contextlib import contextmanager
16 from contextlib import contextmanager
17 from pathlib import Path
17 from pathlib import Path
18 from typing import Any
18 from typing import Any
19 from typing import Callable
19 from typing import Callable
20 from typing import Dict
20 from typing import Dict
21 from typing import Generator
21 from typing import Generator
22 from typing import Iterable
22 from typing import Iterable
23 from typing import List
23 from typing import List
24 from typing import Optional
24 from typing import Optional
25 from typing import Pattern
25 from typing import Pattern
26 from typing import Sequence
26 from typing import Sequence
27 from typing import Tuple
27 from typing import Tuple
28 from typing import Type
28 from typing import Type
29 from typing import TYPE_CHECKING
29 from typing import TYPE_CHECKING
30 from typing import Union
30 from typing import Union
31
31
32 import pytest
32 import pytest
33 from _pytest import outcomes
33 from _pytest import outcomes
34 from _pytest._code.code import ExceptionInfo
34 from _pytest._code.code import ExceptionInfo
35 from _pytest._code.code import ReprFileLocation
35 from _pytest._code.code import ReprFileLocation
36 from _pytest._code.code import TerminalRepr
36 from _pytest._code.code import TerminalRepr
37 from _pytest._io import TerminalWriter
37 from _pytest._io import TerminalWriter
38 from _pytest.compat import safe_getattr
38 from _pytest.compat import safe_getattr
39 from _pytest.config import Config
39 from _pytest.config import Config
40 from _pytest.config.argparsing import Parser
40 from _pytest.config.argparsing import Parser
41 from _pytest.fixtures import FixtureRequest
41 from _pytest.fixtures import FixtureRequest
42 from _pytest.nodes import Collector
42 from _pytest.nodes import Collector
43 from _pytest.outcomes import OutcomeException
43 from _pytest.outcomes import OutcomeException
44 from _pytest.pathlib import fnmatch_ex
44 from _pytest.pathlib import fnmatch_ex
45 from _pytest.pathlib import import_path
45 from _pytest.pathlib import import_path
46 from _pytest.python_api import approx
46 from _pytest.python_api import approx
47 from _pytest.warning_types import PytestWarning
47 from _pytest.warning_types import PytestWarning
48
48
49 if TYPE_CHECKING:
49 if TYPE_CHECKING:
50 import doctest
50 import doctest
51
51
52 DOCTEST_REPORT_CHOICE_NONE = "none"
52 DOCTEST_REPORT_CHOICE_NONE = "none"
53 DOCTEST_REPORT_CHOICE_CDIFF = "cdiff"
53 DOCTEST_REPORT_CHOICE_CDIFF = "cdiff"
54 DOCTEST_REPORT_CHOICE_NDIFF = "ndiff"
54 DOCTEST_REPORT_CHOICE_NDIFF = "ndiff"
55 DOCTEST_REPORT_CHOICE_UDIFF = "udiff"
55 DOCTEST_REPORT_CHOICE_UDIFF = "udiff"
56 DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE = "only_first_failure"
56 DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE = "only_first_failure"
57
57
58 DOCTEST_REPORT_CHOICES = (
58 DOCTEST_REPORT_CHOICES = (
59 DOCTEST_REPORT_CHOICE_NONE,
59 DOCTEST_REPORT_CHOICE_NONE,
60 DOCTEST_REPORT_CHOICE_CDIFF,
60 DOCTEST_REPORT_CHOICE_CDIFF,
61 DOCTEST_REPORT_CHOICE_NDIFF,
61 DOCTEST_REPORT_CHOICE_NDIFF,
62 DOCTEST_REPORT_CHOICE_UDIFF,
62 DOCTEST_REPORT_CHOICE_UDIFF,
63 DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE,
63 DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE,
64 )
64 )
65
65
66 # Lazy definition of runner class
66 # Lazy definition of runner class
67 RUNNER_CLASS = None
67 RUNNER_CLASS = None
68 # Lazy definition of output checker class
68 # Lazy definition of output checker class
69 CHECKER_CLASS: Optional[Type["IPDoctestOutputChecker"]] = None
69 CHECKER_CLASS: Optional[Type["IPDoctestOutputChecker"]] = None
70
70
71
71
72 def pytest_addoption(parser: Parser) -> None:
72 def pytest_addoption(parser: Parser) -> None:
73 parser.addini(
73 parser.addini(
74 "ipdoctest_optionflags",
74 "ipdoctest_optionflags",
75 "option flags for ipdoctests",
75 "option flags for ipdoctests",
76 type="args",
76 type="args",
77 default=["ELLIPSIS"],
77 default=["ELLIPSIS"],
78 )
78 )
79 parser.addini(
79 parser.addini(
80 "ipdoctest_encoding", "encoding used for ipdoctest files", default="utf-8"
80 "ipdoctest_encoding", "encoding used for ipdoctest files", default="utf-8"
81 )
81 )
82 group = parser.getgroup("collect")
82 group = parser.getgroup("collect")
83 group.addoption(
83 group.addoption(
84 "--ipdoctest-modules",
84 "--ipdoctest-modules",
85 action="store_true",
85 action="store_true",
86 default=False,
86 default=False,
87 help="run ipdoctests in all .py modules",
87 help="run ipdoctests in all .py modules",
88 dest="ipdoctestmodules",
88 dest="ipdoctestmodules",
89 )
89 )
90 group.addoption(
90 group.addoption(
91 "--ipdoctest-report",
91 "--ipdoctest-report",
92 type=str.lower,
92 type=str.lower,
93 default="udiff",
93 default="udiff",
94 help="choose another output format for diffs on ipdoctest failure",
94 help="choose another output format for diffs on ipdoctest failure",
95 choices=DOCTEST_REPORT_CHOICES,
95 choices=DOCTEST_REPORT_CHOICES,
96 dest="ipdoctestreport",
96 dest="ipdoctestreport",
97 )
97 )
98 group.addoption(
98 group.addoption(
99 "--ipdoctest-glob",
99 "--ipdoctest-glob",
100 action="append",
100 action="append",
101 default=[],
101 default=[],
102 metavar="pat",
102 metavar="pat",
103 help="ipdoctests file matching pattern, default: test*.txt",
103 help="ipdoctests file matching pattern, default: test*.txt",
104 dest="ipdoctestglob",
104 dest="ipdoctestglob",
105 )
105 )
106 group.addoption(
106 group.addoption(
107 "--ipdoctest-ignore-import-errors",
107 "--ipdoctest-ignore-import-errors",
108 action="store_true",
108 action="store_true",
109 default=False,
109 default=False,
110 help="ignore ipdoctest ImportErrors",
110 help="ignore ipdoctest ImportErrors",
111 dest="ipdoctest_ignore_import_errors",
111 dest="ipdoctest_ignore_import_errors",
112 )
112 )
113 group.addoption(
113 group.addoption(
114 "--ipdoctest-continue-on-failure",
114 "--ipdoctest-continue-on-failure",
115 action="store_true",
115 action="store_true",
116 default=False,
116 default=False,
117 help="for a given ipdoctest, continue to run after the first failure",
117 help="for a given ipdoctest, continue to run after the first failure",
118 dest="ipdoctest_continue_on_failure",
118 dest="ipdoctest_continue_on_failure",
119 )
119 )
120
120
121
121
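The options registered above are ordinary pytest CLI/ini options. A minimal sketch of driving them programmatically, assuming this plugin is installed; the glob pattern is only an illustrative assumption:

import pytest

if __name__ == "__main__":
    # Equivalent to: pytest --ipdoctest-modules --ipdoctest-glob='*.rst'
    # (flag names taken from the addoption() calls above).
    raise SystemExit(
        pytest.main(["--ipdoctest-modules", "--ipdoctest-glob=*.rst"])
    )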
122 def pytest_unconfigure() -> None:
122 def pytest_unconfigure() -> None:
123 global RUNNER_CLASS
123 global RUNNER_CLASS
124
124
125 RUNNER_CLASS = None
125 RUNNER_CLASS = None
126
126
127
127
128 def pytest_collect_file(
128 def pytest_collect_file(
129 file_path: Path,
129 file_path: Path,
130 parent: Collector,
130 parent: Collector,
131 ) -> Optional[Union["IPDoctestModule", "IPDoctestTextfile"]]:
131 ) -> Optional[Union["IPDoctestModule", "IPDoctestTextfile"]]:
132 config = parent.config
132 config = parent.config
133 if file_path.suffix == ".py":
133 if file_path.suffix == ".py":
134 if config.option.ipdoctestmodules and not any(
134 if config.option.ipdoctestmodules and not any(
135 (_is_setup_py(file_path), _is_main_py(file_path))
135 (_is_setup_py(file_path), _is_main_py(file_path))
136 ):
136 ):
137 mod: IPDoctestModule = IPDoctestModule.from_parent(parent, path=file_path)
137 mod: IPDoctestModule = IPDoctestModule.from_parent(parent, path=file_path)
138 return mod
138 return mod
139 elif _is_ipdoctest(config, file_path, parent):
139 elif _is_ipdoctest(config, file_path, parent):
140 txt: IPDoctestTextfile = IPDoctestTextfile.from_parent(parent, path=file_path)
140 txt: IPDoctestTextfile = IPDoctestTextfile.from_parent(parent, path=file_path)
141 return txt
141 return txt
142 return None
142 return None
143
143
144
144
145 if int(pytest.__version__.split(".")[0]) < 7:
145 if int(pytest.__version__.split(".")[0]) < 7:
146 _collect_file = pytest_collect_file
146 _collect_file = pytest_collect_file
147
147
148 def pytest_collect_file(
148 def pytest_collect_file(
149 path,
149 path,
150 parent: Collector,
150 parent: Collector,
151 ) -> Optional[Union["IPDoctestModule", "IPDoctestTextfile"]]:
151 ) -> Optional[Union["IPDoctestModule", "IPDoctestTextfile"]]:
152 return _collect_file(Path(path), parent)
152 return _collect_file(Path(path), parent)
153
153
154 _import_path = import_path
154 _import_path = import_path
155
155
156 def import_path(path, root):
156 def import_path(path, root):
157 import py.path
157 import py.path
158
158
159 return _import_path(py.path.local(path))
159 return _import_path(py.path.local(path))
160
160
161
161
162 def _is_setup_py(path: Path) -> bool:
162 def _is_setup_py(path: Path) -> bool:
163 if path.name != "setup.py":
163 if path.name != "setup.py":
164 return False
164 return False
165 contents = path.read_bytes()
165 contents = path.read_bytes()
166 return b"setuptools" in contents or b"distutils" in contents
166 return b"setuptools" in contents or b"distutils" in contents
167
167
168
168
169 def _is_ipdoctest(config: Config, path: Path, parent: Collector) -> bool:
169 def _is_ipdoctest(config: Config, path: Path, parent: Collector) -> bool:
170 if path.suffix in (".txt", ".rst") and parent.session.isinitpath(path):
170 if path.suffix in (".txt", ".rst") and parent.session.isinitpath(path):
171 return True
171 return True
172 globs = config.getoption("ipdoctestglob") or ["test*.txt"]
172 globs = config.getoption("ipdoctestglob") or ["test*.txt"]
173 return any(fnmatch_ex(glob, path) for glob in globs)
173 return any(fnmatch_ex(glob, path) for glob in globs)
174
174
175
175
176 def _is_main_py(path: Path) -> bool:
176 def _is_main_py(path: Path) -> bool:
177 return path.name == "__main__.py"
177 return path.name == "__main__.py"
178
178
179
179
180 class ReprFailDoctest(TerminalRepr):
180 class ReprFailDoctest(TerminalRepr):
181 def __init__(
181 def __init__(
182 self, reprlocation_lines: Sequence[Tuple[ReprFileLocation, Sequence[str]]]
182 self, reprlocation_lines: Sequence[Tuple[ReprFileLocation, Sequence[str]]]
183 ) -> None:
183 ) -> None:
184 self.reprlocation_lines = reprlocation_lines
184 self.reprlocation_lines = reprlocation_lines
185
185
186 def toterminal(self, tw: TerminalWriter) -> None:
186 def toterminal(self, tw: TerminalWriter) -> None:
187 for reprlocation, lines in self.reprlocation_lines:
187 for reprlocation, lines in self.reprlocation_lines:
188 for line in lines:
188 for line in lines:
189 tw.line(line)
189 tw.line(line)
190 reprlocation.toterminal(tw)
190 reprlocation.toterminal(tw)
191
191
192
192
193 class MultipleDoctestFailures(Exception):
193 class MultipleDoctestFailures(Exception):
194 def __init__(self, failures: Sequence["doctest.DocTestFailure"]) -> None:
194 def __init__(self, failures: Sequence["doctest.DocTestFailure"]) -> None:
195 super().__init__()
195 super().__init__()
196 self.failures = failures
196 self.failures = failures
197
197
198
198
199 def _init_runner_class() -> Type["IPDocTestRunner"]:
199 def _init_runner_class() -> Type["IPDocTestRunner"]:
200 import doctest
200 import doctest
201 from .ipdoctest import IPDocTestRunner
201 from .ipdoctest import IPDocTestRunner
202
202
203 class PytestDoctestRunner(IPDocTestRunner):
203 class PytestDoctestRunner(IPDocTestRunner):
204 """Runner to collect failures.
204 """Runner to collect failures.
205
205
206 Note that the out variable in this case is a list instead of a
206 Note that the out variable in this case is a list instead of a
207 stdout-like object.
207 stdout-like object.
208 """
208 """
209
209
210 def __init__(
210 def __init__(
211 self,
211 self,
212 checker: Optional["IPDoctestOutputChecker"] = None,
212 checker: Optional["IPDoctestOutputChecker"] = None,
213 verbose: Optional[bool] = None,
213 verbose: Optional[bool] = None,
214 optionflags: int = 0,
214 optionflags: int = 0,
215 continue_on_failure: bool = True,
215 continue_on_failure: bool = True,
216 ) -> None:
216 ) -> None:
217 super().__init__(checker=checker, verbose=verbose, optionflags=optionflags)
217 super().__init__(checker=checker, verbose=verbose, optionflags=optionflags)
218 self.continue_on_failure = continue_on_failure
218 self.continue_on_failure = continue_on_failure
219
219
220 def report_failure(
220 def report_failure(
221 self,
221 self,
222 out,
222 out,
223 test: "doctest.DocTest",
223 test: "doctest.DocTest",
224 example: "doctest.Example",
224 example: "doctest.Example",
225 got: str,
225 got: str,
226 ) -> None:
226 ) -> None:
227 failure = doctest.DocTestFailure(test, example, got)
227 failure = doctest.DocTestFailure(test, example, got)
228 if self.continue_on_failure:
228 if self.continue_on_failure:
229 out.append(failure)
229 out.append(failure)
230 else:
230 else:
231 raise failure
231 raise failure
232
232
233 def report_unexpected_exception(
233 def report_unexpected_exception(
234 self,
234 self,
235 out,
235 out,
236 test: "doctest.DocTest",
236 test: "doctest.DocTest",
237 example: "doctest.Example",
237 example: "doctest.Example",
238 exc_info: Tuple[Type[BaseException], BaseException, types.TracebackType],
238 exc_info: Tuple[Type[BaseException], BaseException, types.TracebackType],
239 ) -> None:
239 ) -> None:
240 if isinstance(exc_info[1], OutcomeException):
240 if isinstance(exc_info[1], OutcomeException):
241 raise exc_info[1]
241 raise exc_info[1]
242 if isinstance(exc_info[1], bdb.BdbQuit):
242 if isinstance(exc_info[1], bdb.BdbQuit):
243 outcomes.exit("Quitting debugger")
243 outcomes.exit("Quitting debugger")
244 failure = doctest.UnexpectedException(test, example, exc_info)
244 failure = doctest.UnexpectedException(test, example, exc_info)
245 if self.continue_on_failure:
245 if self.continue_on_failure:
246 out.append(failure)
246 out.append(failure)
247 else:
247 else:
248 raise failure
248 raise failure
249
249
250 return PytestDoctestRunner
250 return PytestDoctestRunner
251
251
252
252
253 def _get_runner(
253 def _get_runner(
254 checker: Optional["IPDoctestOutputChecker"] = None,
254 checker: Optional["IPDoctestOutputChecker"] = None,
255 verbose: Optional[bool] = None,
255 verbose: Optional[bool] = None,
256 optionflags: int = 0,
256 optionflags: int = 0,
257 continue_on_failure: bool = True,
257 continue_on_failure: bool = True,
258 ) -> "IPDocTestRunner":
258 ) -> "IPDocTestRunner":
259 # We need this in order to do a lazy import on doctest
259 # We need this in order to do a lazy import on doctest
260 global RUNNER_CLASS
260 global RUNNER_CLASS
261 if RUNNER_CLASS is None:
261 if RUNNER_CLASS is None:
262 RUNNER_CLASS = _init_runner_class()
262 RUNNER_CLASS = _init_runner_class()
263 # Type ignored because the continue_on_failure argument is only defined on
263 # Type ignored because the continue_on_failure argument is only defined on
264 # PytestDoctestRunner, which is lazily defined so can't be used as a type.
264 # PytestDoctestRunner, which is lazily defined so can't be used as a type.
265 return RUNNER_CLASS( # type: ignore
265 return RUNNER_CLASS( # type: ignore
266 checker=checker,
266 checker=checker,
267 verbose=verbose,
267 verbose=verbose,
268 optionflags=optionflags,
268 optionflags=optionflags,
269 continue_on_failure=continue_on_failure,
269 continue_on_failure=continue_on_failure,
270 )
270 )
271
271
272
272
273 class IPDoctestItem(pytest.Item):
273 class IPDoctestItem(pytest.Item):
274 def __init__(
274 def __init__(
275 self,
275 self,
276 name: str,
276 name: str,
277 parent: "Union[IPDoctestTextfile, IPDoctestModule]",
277 parent: "Union[IPDoctestTextfile, IPDoctestModule]",
278 runner: Optional["IPDocTestRunner"] = None,
278 runner: Optional["IPDocTestRunner"] = None,
279 dtest: Optional["doctest.DocTest"] = None,
279 dtest: Optional["doctest.DocTest"] = None,
280 ) -> None:
280 ) -> None:
281 super().__init__(name, parent)
281 super().__init__(name, parent)
282 self.runner = runner
282 self.runner = runner
283 self.dtest = dtest
283 self.dtest = dtest
284 self.obj = None
284 self.obj = None
285 self.fixture_request: Optional[FixtureRequest] = None
285 self.fixture_request: Optional[FixtureRequest] = None
286
286
287 @classmethod
287 @classmethod
288 def from_parent( # type: ignore
288 def from_parent( # type: ignore
289 cls,
289 cls,
290 parent: "Union[IPDoctestTextfile, IPDoctestModule]",
290 parent: "Union[IPDoctestTextfile, IPDoctestModule]",
291 *,
291 *,
292 name: str,
292 name: str,
293 runner: "IPDocTestRunner",
293 runner: "IPDocTestRunner",
294 dtest: "doctest.DocTest",
294 dtest: "doctest.DocTest",
295 ):
295 ):
296 # incompatible signature due to imposed limits on subclass
296 # incompatible signature due to imposed limits on subclass
297 """The public named constructor."""
297 """The public named constructor."""
298 return super().from_parent(name=name, parent=parent, runner=runner, dtest=dtest)
298 return super().from_parent(name=name, parent=parent, runner=runner, dtest=dtest)
299
299
300 def setup(self) -> None:
300 def setup(self) -> None:
301 if self.dtest is not None:
301 if self.dtest is not None:
302 self.fixture_request = _setup_fixtures(self)
302 self.fixture_request = _setup_fixtures(self)
303 globs = dict(getfixture=self.fixture_request.getfixturevalue)
303 globs = dict(getfixture=self.fixture_request.getfixturevalue)
304 for name, value in self.fixture_request.getfixturevalue(
304 for name, value in self.fixture_request.getfixturevalue(
305 "ipdoctest_namespace"
305 "ipdoctest_namespace"
306 ).items():
306 ).items():
307 globs[name] = value
307 globs[name] = value
308 self.dtest.globs.update(globs)
308 self.dtest.globs.update(globs)
309
309
310 from .ipdoctest import IPExample
310 from .ipdoctest import IPExample
311
311
312 if isinstance(self.dtest.examples[0], IPExample):
312 if isinstance(self.dtest.examples[0], IPExample):
313 # for IPython examples *only*, we swap the globals with the ipython
313 # for IPython examples *only*, we swap the globals with the ipython
314 # namespace, after updating it with the globals (which doctest
314 # namespace, after updating it with the globals (which doctest
315 # fills with the necessary info from the module being tested).
315 # fills with the necessary info from the module being tested).
316 self._user_ns_orig = {}
316 self._user_ns_orig = {}
317 self._user_ns_orig.update(_ip.user_ns)
317 self._user_ns_orig.update(_ip.user_ns)
318 _ip.user_ns.update(self.dtest.globs)
318 _ip.user_ns.update(self.dtest.globs)
319 # We must remove the _ key in the namespace, so that Python's
319 # We must remove the _ key in the namespace, so that Python's
320 # doctest code sets it naturally
320 # doctest code sets it naturally
321 _ip.user_ns.pop("_", None)
321 _ip.user_ns.pop("_", None)
322 _ip.user_ns["__builtins__"] = builtins
322 _ip.user_ns["__builtins__"] = builtins
323 self.dtest.globs = _ip.user_ns
323 self.dtest.globs = _ip.user_ns
324
324
325 def teardown(self) -> None:
325 def teardown(self) -> None:
326 from .ipdoctest import IPExample
326 from .ipdoctest import IPExample
327
327
328 # Undo the test.globs reassignment we made
328 # Undo the test.globs reassignment we made
329 if isinstance(self.dtest.examples[0], IPExample):
329 if isinstance(self.dtest.examples[0], IPExample):
330 self.dtest.globs = {}
330 self.dtest.globs = {}
331 _ip.user_ns.clear()
331 _ip.user_ns.clear()
332 _ip.user_ns.update(self._user_ns_orig)
332 _ip.user_ns.update(self._user_ns_orig)
333 del self._user_ns_orig
333 del self._user_ns_orig
334
334
335 self.dtest.globs.clear()
335 self.dtest.globs.clear()
336
336
337 def runtest(self) -> None:
337 def runtest(self) -> None:
338 assert self.dtest is not None
338 assert self.dtest is not None
339 assert self.runner is not None
339 assert self.runner is not None
340 _check_all_skipped(self.dtest)
340 _check_all_skipped(self.dtest)
341 self._disable_output_capturing_for_darwin()
341 self._disable_output_capturing_for_darwin()
342 failures: List["doctest.DocTestFailure"] = []
342 failures: List["doctest.DocTestFailure"] = []
343
343
344 # exec(compile(..., "single", ...), ...) puts result in builtins._
344 # exec(compile(..., "single", ...), ...) puts result in builtins._
345 had_underscore_value = hasattr(builtins, "_")
345 had_underscore_value = hasattr(builtins, "_")
346 underscore_original_value = getattr(builtins, "_", None)
346 underscore_original_value = getattr(builtins, "_", None)
347
347
348 # Save our current directory and switch out to the one where the
348 # Save our current directory and switch out to the one where the
349 # test was originally created, in case another doctest did a
349 # test was originally created, in case another doctest did a
350 # directory change. We'll restore this in the finally clause.
350 # directory change. We'll restore this in the finally clause.
351 curdir = os.getcwd()
351 curdir = os.getcwd()
352 os.chdir(self.fspath.dirname)
352 os.chdir(self.fspath.dirname)
353 try:
353 try:
354 # Type ignored because we change the type of `out` from what
354 # Type ignored because we change the type of `out` from what
355 # ipdoctest expects.
355 # ipdoctest expects.
356 self.runner.run(self.dtest, out=failures, clear_globs=False) # type: ignore[arg-type]
356 self.runner.run(self.dtest, out=failures, clear_globs=False) # type: ignore[arg-type]
357 finally:
357 finally:
358 os.chdir(curdir)
358 os.chdir(curdir)
359 if had_underscore_value:
359 if had_underscore_value:
360 setattr(builtins, "_", underscore_original_value)
360 setattr(builtins, "_", underscore_original_value)
361 elif hasattr(builtins, "_"):
361 elif hasattr(builtins, "_"):
362 delattr(builtins, "_")
362 delattr(builtins, "_")
363
363
364 if failures:
364 if failures:
365 raise MultipleDoctestFailures(failures)
365 raise MultipleDoctestFailures(failures)
366
366
367 def _disable_output_capturing_for_darwin(self) -> None:
367 def _disable_output_capturing_for_darwin(self) -> None:
368 """Disable output capturing. Otherwise, stdout is lost to ipdoctest (pytest#985)."""
368 """Disable output capturing. Otherwise, stdout is lost to ipdoctest (pytest#985)."""
369 if platform.system() != "Darwin":
369 if platform.system() != "Darwin":
370 return
370 return
371 capman = self.config.pluginmanager.getplugin("capturemanager")
371 capman = self.config.pluginmanager.getplugin("capturemanager")
372 if capman:
372 if capman:
373 capman.suspend_global_capture(in_=True)
373 capman.suspend_global_capture(in_=True)
374 out, err = capman.read_global_capture()
374 out, err = capman.read_global_capture()
375 sys.stdout.write(out)
375 sys.stdout.write(out)
376 sys.stderr.write(err)
376 sys.stderr.write(err)
377
377
378 # TODO: Type ignored -- breaks Liskov Substitution.
378 # TODO: Type ignored -- breaks Liskov Substitution.
379 def repr_failure( # type: ignore[override]
379 def repr_failure( # type: ignore[override]
380 self,
380 self,
381 excinfo: ExceptionInfo[BaseException],
381 excinfo: ExceptionInfo[BaseException],
382 ) -> Union[str, TerminalRepr]:
382 ) -> Union[str, TerminalRepr]:
383 import doctest
383 import doctest
384
384
385 failures: Optional[
385 failures: Optional[
386 Sequence[Union[doctest.DocTestFailure, doctest.UnexpectedException]]
386 Sequence[Union[doctest.DocTestFailure, doctest.UnexpectedException]]
387 ] = None
387 ] = None
388 if isinstance(
388 if isinstance(
389 excinfo.value, (doctest.DocTestFailure, doctest.UnexpectedException)
389 excinfo.value, (doctest.DocTestFailure, doctest.UnexpectedException)
390 ):
390 ):
391 failures = [excinfo.value]
391 failures = [excinfo.value]
392 elif isinstance(excinfo.value, MultipleDoctestFailures):
392 elif isinstance(excinfo.value, MultipleDoctestFailures):
393 failures = excinfo.value.failures
393 failures = excinfo.value.failures
394
394
395 if failures is None:
395 if failures is None:
396 return super().repr_failure(excinfo)
396 return super().repr_failure(excinfo)
397
397
398 reprlocation_lines = []
398 reprlocation_lines = []
399 for failure in failures:
399 for failure in failures:
400 example = failure.example
400 example = failure.example
401 test = failure.test
401 test = failure.test
402 filename = test.filename
402 filename = test.filename
403 if test.lineno is None:
403 if test.lineno is None:
404 lineno = None
404 lineno = None
405 else:
405 else:
406 lineno = test.lineno + example.lineno + 1
406 lineno = test.lineno + example.lineno + 1
407 message = type(failure).__name__
407 message = type(failure).__name__
408 # TODO: ReprFileLocation doesn't expect a None lineno.
408 # TODO: ReprFileLocation doesn't expect a None lineno.
409 reprlocation = ReprFileLocation(filename, lineno, message) # type: ignore[arg-type]
409 reprlocation = ReprFileLocation(filename, lineno, message) # type: ignore[arg-type]
410 checker = _get_checker()
410 checker = _get_checker()
411 report_choice = _get_report_choice(self.config.getoption("ipdoctestreport"))
411 report_choice = _get_report_choice(self.config.getoption("ipdoctestreport"))
412 if lineno is not None:
412 if lineno is not None:
413 assert failure.test.docstring is not None
413 assert failure.test.docstring is not None
414 lines = failure.test.docstring.splitlines(False)
414 lines = failure.test.docstring.splitlines(False)
415 # add line numbers to the left of the error message
415 # add line numbers to the left of the error message
416 assert test.lineno is not None
416 assert test.lineno is not None
417 lines = [
417 lines = [
418 "%03d %s" % (i + test.lineno + 1, x) for (i, x) in enumerate(lines)
418 "%03d %s" % (i + test.lineno + 1, x) for (i, x) in enumerate(lines)
419 ]
419 ]
420 # trim docstring error lines to 10
420 # trim docstring error lines to 10
421 lines = lines[max(example.lineno - 9, 0) : example.lineno + 1]
421 lines = lines[max(example.lineno - 9, 0) : example.lineno + 1]
422 else:
422 else:
423 lines = [
423 lines = [
424 "EXAMPLE LOCATION UNKNOWN, not showing all tests of that example"
424 "EXAMPLE LOCATION UNKNOWN, not showing all tests of that example"
425 ]
425 ]
426 indent = ">>>"
426 indent = ">>>"
427 for line in example.source.splitlines():
427 for line in example.source.splitlines():
428 lines.append(f"??? {indent} {line}")
428 lines.append(f"??? {indent} {line}")
429 indent = "..."
429 indent = "..."
430 if isinstance(failure, doctest.DocTestFailure):
430 if isinstance(failure, doctest.DocTestFailure):
431 lines += checker.output_difference(
431 lines += checker.output_difference(
432 example, failure.got, report_choice
432 example, failure.got, report_choice
433 ).split("\n")
433 ).split("\n")
434 else:
434 else:
435 inner_excinfo = ExceptionInfo.from_exc_info(failure.exc_info)
435 inner_excinfo = ExceptionInfo.from_exc_info(failure.exc_info)
436 lines += ["UNEXPECTED EXCEPTION: %s" % repr(inner_excinfo.value)]
436 lines += ["UNEXPECTED EXCEPTION: %s" % repr(inner_excinfo.value)]
437 lines += [
437 lines += [
438 x.strip("\n") for x in traceback.format_exception(*failure.exc_info)
438 x.strip("\n") for x in traceback.format_exception(*failure.exc_info)
439 ]
439 ]
440 reprlocation_lines.append((reprlocation, lines))
440 reprlocation_lines.append((reprlocation, lines))
441 return ReprFailDoctest(reprlocation_lines)
441 return ReprFailDoctest(reprlocation_lines)
442
442
443 def reportinfo(self) -> Tuple[Union["os.PathLike[str]", str], Optional[int], str]:
443 def reportinfo(self) -> Tuple[Union["os.PathLike[str]", str], Optional[int], str]:
444 assert self.dtest is not None
444 assert self.dtest is not None
445 return self.path, self.dtest.lineno, "[ipdoctest] %s" % self.name
445 return self.path, self.dtest.lineno, "[ipdoctest] %s" % self.name
446
446
447 if int(pytest.__version__.split(".")[0]) < 7:
447 if int(pytest.__version__.split(".")[0]) < 7:
448
448
449 @property
449 @property
450 def path(self) -> Path:
450 def path(self) -> Path:
451 return Path(self.fspath)
451 return Path(self.fspath)
452
452
453
453
454 def _get_flag_lookup() -> Dict[str, int]:
454 def _get_flag_lookup() -> Dict[str, int]:
455 import doctest
455 import doctest
456
456
457 return dict(
457 return dict(
458 DONT_ACCEPT_TRUE_FOR_1=doctest.DONT_ACCEPT_TRUE_FOR_1,
458 DONT_ACCEPT_TRUE_FOR_1=doctest.DONT_ACCEPT_TRUE_FOR_1,
459 DONT_ACCEPT_BLANKLINE=doctest.DONT_ACCEPT_BLANKLINE,
459 DONT_ACCEPT_BLANKLINE=doctest.DONT_ACCEPT_BLANKLINE,
460 NORMALIZE_WHITESPACE=doctest.NORMALIZE_WHITESPACE,
460 NORMALIZE_WHITESPACE=doctest.NORMALIZE_WHITESPACE,
461 ELLIPSIS=doctest.ELLIPSIS,
461 ELLIPSIS=doctest.ELLIPSIS,
462 IGNORE_EXCEPTION_DETAIL=doctest.IGNORE_EXCEPTION_DETAIL,
462 IGNORE_EXCEPTION_DETAIL=doctest.IGNORE_EXCEPTION_DETAIL,
463 COMPARISON_FLAGS=doctest.COMPARISON_FLAGS,
463 COMPARISON_FLAGS=doctest.COMPARISON_FLAGS,
464 ALLOW_UNICODE=_get_allow_unicode_flag(),
464 ALLOW_UNICODE=_get_allow_unicode_flag(),
465 ALLOW_BYTES=_get_allow_bytes_flag(),
465 ALLOW_BYTES=_get_allow_bytes_flag(),
466 NUMBER=_get_number_flag(),
466 NUMBER=_get_number_flag(),
467 )
467 )
468
468
469
469
470 def get_optionflags(parent):
470 def get_optionflags(parent):
471 optionflags_str = parent.config.getini("ipdoctest_optionflags")
471 optionflags_str = parent.config.getini("ipdoctest_optionflags")
472 flag_lookup_table = _get_flag_lookup()
472 flag_lookup_table = _get_flag_lookup()
473 flag_acc = 0
473 flag_acc = 0
474 for flag in optionflags_str:
474 for flag in optionflags_str:
475 flag_acc |= flag_lookup_table[flag]
475 flag_acc |= flag_lookup_table[flag]
476 return flag_acc
476 return flag_acc
477
477
478
478
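For reference, get_optionflags() above simply ORs the named doctest flags into one bitmask. A standalone sketch of the same idea using the stdlib lookup table; the ini value shown is a made-up example:

import doctest

ini_value = ["ELLIPSIS", "NORMALIZE_WHITESPACE"]  # hypothetical ipdoctest_optionflags
flags = 0
for name in ini_value:
    # doctest keeps a name -> flag mapping for all registered option flags.
    flags |= doctest.OPTIONFLAGS_BY_NAME[name]
assert flags & doctest.ELLIPSIS and flags & doctest.NORMALIZE_WHITESPACE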
479 def _get_continue_on_failure(config):
479 def _get_continue_on_failure(config):
480 continue_on_failure = config.getvalue("ipdoctest_continue_on_failure")
480 continue_on_failure = config.getvalue("ipdoctest_continue_on_failure")
481 if continue_on_failure:
481 if continue_on_failure:
482 # We need to turn off this if we use pdb since we should stop at
482 # We need to turn off this if we use pdb since we should stop at
483 # the first failure.
483 # the first failure.
484 if config.getvalue("usepdb"):
484 if config.getvalue("usepdb"):
485 continue_on_failure = False
485 continue_on_failure = False
486 return continue_on_failure
486 return continue_on_failure
487
487
488
488
489 class IPDoctestTextfile(pytest.Module):
489 class IPDoctestTextfile(pytest.Module):
490 obj = None
490 obj = None
491
491
492 def collect(self) -> Iterable[IPDoctestItem]:
492 def collect(self) -> Iterable[IPDoctestItem]:
493 import doctest
493 import doctest
494 from .ipdoctest import IPDocTestParser
494 from .ipdoctest import IPDocTestParser
495
495
496 # Inspired by doctest.testfile; ideally we would use it directly,
496 # Inspired by doctest.testfile; ideally we would use it directly,
497 # but it doesn't support passing a custom checker.
497 # but it doesn't support passing a custom checker.
498 encoding = self.config.getini("ipdoctest_encoding")
498 encoding = self.config.getini("ipdoctest_encoding")
499 text = self.path.read_text(encoding)
499 text = self.path.read_text(encoding)
500 filename = str(self.path)
500 filename = str(self.path)
501 name = self.path.name
501 name = self.path.name
502 globs = {"__name__": "__main__"}
502 globs = {"__name__": "__main__"}
503
503
504 optionflags = get_optionflags(self)
504 optionflags = get_optionflags(self)
505
505
506 runner = _get_runner(
506 runner = _get_runner(
507 verbose=False,
507 verbose=False,
508 optionflags=optionflags,
508 optionflags=optionflags,
509 checker=_get_checker(),
509 checker=_get_checker(),
510 continue_on_failure=_get_continue_on_failure(self.config),
510 continue_on_failure=_get_continue_on_failure(self.config),
511 )
511 )
512
512
513 parser = IPDocTestParser()
513 parser = IPDocTestParser()
514 test = parser.get_doctest(text, globs, name, filename, 0)
514 test = parser.get_doctest(text, globs, name, filename, 0)
515 if test.examples:
515 if test.examples:
516 yield IPDoctestItem.from_parent(
516 yield IPDoctestItem.from_parent(
517 self, name=test.name, runner=runner, dtest=test
517 self, name=test.name, runner=runner, dtest=test
518 )
518 )
519
519
520 if int(pytest.__version__.split(".")[0]) < 7:
520 if int(pytest.__version__.split(".")[0]) < 7:
521
521
522 @property
522 @property
523 def path(self) -> Path:
523 def path(self) -> Path:
524 return Path(self.fspath)
524 return Path(self.fspath)
525
525
526 @classmethod
526 @classmethod
527 def from_parent(
527 def from_parent(
528 cls,
528 cls,
529 parent,
529 parent,
530 *,
530 *,
531 fspath=None,
531 fspath=None,
532 path: Optional[Path] = None,
532 path: Optional[Path] = None,
533 **kw,
533 **kw,
534 ):
534 ):
535 if path is not None:
535 if path is not None:
536 import py.path
536 import py.path
537
537
538 fspath = py.path.local(path)
538 fspath = py.path.local(path)
539 return super().from_parent(parent=parent, fspath=fspath, **kw)
539 return super().from_parent(parent=parent, fspath=fspath, **kw)
540
540
541
541
542 def _check_all_skipped(test: "doctest.DocTest") -> None:
542 def _check_all_skipped(test: "doctest.DocTest") -> None:
543 """Raise pytest.skip() if all examples in the given DocTest have the SKIP
543 """Raise pytest.skip() if all examples in the given DocTest have the SKIP
544 option set."""
544 option set."""
545 import doctest
545 import doctest
546
546
547 all_skipped = all(x.options.get(doctest.SKIP, False) for x in test.examples)
547 all_skipped = all(x.options.get(doctest.SKIP, False) for x in test.examples)
548 if all_skipped:
548 if all_skipped:
549 pytest.skip("all doctests skipped by +SKIP option")
549 pytest.skip("all doctests skipped by +SKIP option")
550
550
551
551
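_check_all_skipped() turns a doctest whose examples all carry +SKIP into a pytest skip. A small illustration of the kind of input it is looking for, using a hypothetical function and standard doctest directive syntax:

def add(a, b):
    """
    >>> add(1, 2)  # doctest: +SKIP
    3
    """
    return a + b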
552 def _is_mocked(obj: object) -> bool:
552 def _is_mocked(obj: object) -> bool:
553 """Return if an object is possibly a mock object by checking the
553 """Return if an object is possibly a mock object by checking the
554 existence of a highly improbable attribute."""
554 existence of a highly improbable attribute."""
555 return (
555 return (
556 safe_getattr(obj, "pytest_mock_example_attribute_that_shouldnt_exist", None)
556 safe_getattr(obj, "pytest_mock_example_attribute_that_shouldnt_exist", None)
557 is not None
557 is not None
558 )
558 )
559
559
560
560
561 @contextmanager
561 @contextmanager
562 def _patch_unwrap_mock_aware() -> Generator[None, None, None]:
562 def _patch_unwrap_mock_aware() -> Generator[None, None, None]:
563 """Context manager which replaces ``inspect.unwrap`` with a version
563 """Context manager which replaces ``inspect.unwrap`` with a version
564 that's aware of mock objects and doesn't recurse into them."""
564 that's aware of mock objects and doesn't recurse into them."""
565 real_unwrap = inspect.unwrap
565 real_unwrap = inspect.unwrap
566
566
567 def _mock_aware_unwrap(
567 def _mock_aware_unwrap(
568 func: Callable[..., Any], *, stop: Optional[Callable[[Any], Any]] = None
568 func: Callable[..., Any], *, stop: Optional[Callable[[Any], Any]] = None
569 ) -> Any:
569 ) -> Any:
570 try:
570 try:
571 if stop is None or stop is _is_mocked:
571 if stop is None or stop is _is_mocked:
572 return real_unwrap(func, stop=_is_mocked)
572 return real_unwrap(func, stop=_is_mocked)
573 _stop = stop
573 _stop = stop
574 return real_unwrap(func, stop=lambda obj: _is_mocked(obj) or _stop(func))
574 return real_unwrap(func, stop=lambda obj: _is_mocked(obj) or _stop(func))
575 except Exception as e:
575 except Exception as e:
576 warnings.warn(
576 warnings.warn(
577 "Got %r when unwrapping %r. This is usually caused "
577 "Got %r when unwrapping %r. This is usually caused "
578 "by a violation of Python's object protocol; see e.g. "
578 "by a violation of Python's object protocol; see e.g. "
579 "https://github.com/pytest-dev/pytest/issues/5080" % (e, func),
579 "https://github.com/pytest-dev/pytest/issues/5080" % (e, func),
580 PytestWarning,
580 PytestWarning,
581 )
581 )
582 raise
582 raise
583
583
584 inspect.unwrap = _mock_aware_unwrap
584 inspect.unwrap = _mock_aware_unwrap
585 try:
585 try:
586 yield
586 yield
587 finally:
587 finally:
588 inspect.unwrap = real_unwrap
588 inspect.unwrap = real_unwrap
589
589
590
590
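Why the mock-aware unwrap above matters: unittest.mock objects fabricate any attribute on access, so the improbable attribute probed by _is_mocked() "exists" on them and unwrapping stops there instead of recursing into the mock. A quick check:

from unittest import mock

m = mock.MagicMock()
# MagicMock auto-creates attributes, so this prints True and _is_mocked()
# would treat m as a mock.
print(hasattr(m, "pytest_mock_example_attribute_that_shouldnt_exist"))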
591 class IPDoctestModule(pytest.Module):
591 class IPDoctestModule(pytest.Module):
592 def collect(self) -> Iterable[IPDoctestItem]:
592 def collect(self) -> Iterable[IPDoctestItem]:
593 import doctest
593 import doctest
594 from .ipdoctest import DocTestFinder, IPDocTestParser
594 from .ipdoctest import DocTestFinder, IPDocTestParser
595
595
596 class MockAwareDocTestFinder(DocTestFinder):
596 class MockAwareDocTestFinder(DocTestFinder):
597 """A hackish ipdoctest finder that overrides stdlib internals to fix a stdlib bug.
597 """A hackish ipdoctest finder that overrides stdlib internals to fix a stdlib bug.
598
598
599 https://github.com/pytest-dev/pytest/issues/3456
599 https://github.com/pytest-dev/pytest/issues/3456
600 https://bugs.python.org/issue25532
600 https://bugs.python.org/issue25532
601 """
601 """
602
602
603 def _find_lineno(self, obj, source_lines):
603 def _find_lineno(self, obj, source_lines):
604 """Doctest code does not take into account `@property`, this
604 """Doctest code does not take into account `@property`, this
605 is a hackish way to fix it. https://bugs.python.org/issue17446
605 is a hackish way to fix it. https://bugs.python.org/issue17446
606
606
607 Wrapped Doctests will need to be unwrapped so the correct
607 Wrapped Doctests will need to be unwrapped so the correct
608 line number is returned. This will be reported upstream. #8796
608 line number is returned. This will be reported upstream. #8796
609 """
609 """
610 if isinstance(obj, property):
610 if isinstance(obj, property):
611 obj = getattr(obj, "fget", obj)
611 obj = getattr(obj, "fget", obj)
612
612
613 if hasattr(obj, "__wrapped__"):
613 if hasattr(obj, "__wrapped__"):
614 # Get the main obj in case of it being wrapped
614 # Get the main obj in case of it being wrapped
615 obj = inspect.unwrap(obj)
615 obj = inspect.unwrap(obj)
616
616
617 # Type ignored because this is a private function.
617 # Type ignored because this is a private function.
618 return super()._find_lineno( # type:ignore[misc]
618 return super()._find_lineno( # type:ignore[misc]
619 obj,
619 obj,
620 source_lines,
620 source_lines,
621 )
621 )
622
622
623 def _find(
623 def _find(
624 self, tests, obj, name, module, source_lines, globs, seen
624 self, tests, obj, name, module, source_lines, globs, seen
625 ) -> None:
625 ) -> None:
626 if _is_mocked(obj):
626 if _is_mocked(obj):
627 return
627 return
628 with _patch_unwrap_mock_aware():
628 with _patch_unwrap_mock_aware():
629
630 # Type ignored because this is a private function.
629 # Type ignored because this is a private function.
631 super()._find( # type:ignore[misc]
630 super()._find( # type:ignore[misc]
632 tests, obj, name, module, source_lines, globs, seen
631 tests, obj, name, module, source_lines, globs, seen
633 )
632 )
634
633
635 if self.path.name == "conftest.py":
634 if self.path.name == "conftest.py":
636 if int(pytest.__version__.split(".")[0]) < 7:
635 if int(pytest.__version__.split(".")[0]) < 7:
637 module = self.config.pluginmanager._importconftest(
636 module = self.config.pluginmanager._importconftest(
638 self.path,
637 self.path,
639 self.config.getoption("importmode"),
638 self.config.getoption("importmode"),
640 )
639 )
641 else:
640 else:
642 module = self.config.pluginmanager._importconftest(
641 module = self.config.pluginmanager._importconftest(
643 self.path,
642 self.path,
644 self.config.getoption("importmode"),
643 self.config.getoption("importmode"),
645 rootpath=self.config.rootpath,
644 rootpath=self.config.rootpath,
646 )
645 )
647 else:
646 else:
648 try:
647 try:
649 module = import_path(self.path, root=self.config.rootpath)
648 module = import_path(self.path, root=self.config.rootpath)
650 except ImportError:
649 except ImportError:
651 if self.config.getvalue("ipdoctest_ignore_import_errors"):
650 if self.config.getvalue("ipdoctest_ignore_import_errors"):
652 pytest.skip("unable to import module %r" % self.path)
651 pytest.skip("unable to import module %r" % self.path)
653 else:
652 else:
654 raise
653 raise
655 # Uses internal doctest module parsing mechanism.
654 # Uses internal doctest module parsing mechanism.
656 finder = MockAwareDocTestFinder(parser=IPDocTestParser())
655 finder = MockAwareDocTestFinder(parser=IPDocTestParser())
657 optionflags = get_optionflags(self)
656 optionflags = get_optionflags(self)
658 runner = _get_runner(
657 runner = _get_runner(
659 verbose=False,
658 verbose=False,
660 optionflags=optionflags,
659 optionflags=optionflags,
661 checker=_get_checker(),
660 checker=_get_checker(),
662 continue_on_failure=_get_continue_on_failure(self.config),
661 continue_on_failure=_get_continue_on_failure(self.config),
663 )
662 )
664
663
665 for test in finder.find(module, module.__name__):
664 for test in finder.find(module, module.__name__):
666 if test.examples: # skip empty ipdoctests
665 if test.examples: # skip empty ipdoctests
667 yield IPDoctestItem.from_parent(
666 yield IPDoctestItem.from_parent(
668 self, name=test.name, runner=runner, dtest=test
667 self, name=test.name, runner=runner, dtest=test
669 )
668 )
670
669
671 if int(pytest.__version__.split(".")[0]) < 7:
670 if int(pytest.__version__.split(".")[0]) < 7:
672
671
673 @property
672 @property
674 def path(self) -> Path:
673 def path(self) -> Path:
675 return Path(self.fspath)
674 return Path(self.fspath)
676
675
677 @classmethod
676 @classmethod
678 def from_parent(
677 def from_parent(
679 cls,
678 cls,
680 parent,
679 parent,
681 *,
680 *,
682 fspath=None,
681 fspath=None,
683 path: Optional[Path] = None,
682 path: Optional[Path] = None,
684 **kw,
683 **kw,
685 ):
684 ):
686 if path is not None:
685 if path is not None:
687 import py.path
686 import py.path
688
687
689 fspath = py.path.local(path)
688 fspath = py.path.local(path)
690 return super().from_parent(parent=parent, fspath=fspath, **kw)
689 return super().from_parent(parent=parent, fspath=fspath, **kw)
691
690
692
691
693 def _setup_fixtures(doctest_item: IPDoctestItem) -> FixtureRequest:
692 def _setup_fixtures(doctest_item: IPDoctestItem) -> FixtureRequest:
694 """Used by IPDoctestTextfile and IPDoctestItem to setup fixture information."""
693 """Used by IPDoctestTextfile and IPDoctestItem to setup fixture information."""
695
694
696 def func() -> None:
695 def func() -> None:
697 pass
696 pass
698
697
699 doctest_item.funcargs = {} # type: ignore[attr-defined]
698 doctest_item.funcargs = {} # type: ignore[attr-defined]
700 fm = doctest_item.session._fixturemanager
699 fm = doctest_item.session._fixturemanager
701 doctest_item._fixtureinfo = fm.getfixtureinfo( # type: ignore[attr-defined]
700 doctest_item._fixtureinfo = fm.getfixtureinfo( # type: ignore[attr-defined]
702 node=doctest_item, func=func, cls=None, funcargs=False
701 node=doctest_item, func=func, cls=None, funcargs=False
703 )
702 )
704 fixture_request = FixtureRequest(doctest_item, _ispytest=True)
703 fixture_request = FixtureRequest(doctest_item, _ispytest=True)
705 fixture_request._fillfixtures()
704 fixture_request._fillfixtures()
706 return fixture_request
705 return fixture_request
707
706
708
707
709 def _init_checker_class() -> Type["IPDoctestOutputChecker"]:
708 def _init_checker_class() -> Type["IPDoctestOutputChecker"]:
710 import doctest
709 import doctest
711 import re
710 import re
712 from .ipdoctest import IPDoctestOutputChecker
711 from .ipdoctest import IPDoctestOutputChecker
713
712
714 class LiteralsOutputChecker(IPDoctestOutputChecker):
713 class LiteralsOutputChecker(IPDoctestOutputChecker):
715 # Based on doctest_nose_plugin.py from the nltk project
714 # Based on doctest_nose_plugin.py from the nltk project
716 # (https://github.com/nltk/nltk) and on the "numtest" doctest extension
715 # (https://github.com/nltk/nltk) and on the "numtest" doctest extension
717 # by Sebastien Boisgerault (https://github.com/boisgera/numtest).
716 # by Sebastien Boisgerault (https://github.com/boisgera/numtest).
718
717
719 _unicode_literal_re = re.compile(r"(\W|^)[uU]([rR]?[\'\"])", re.UNICODE)
718 _unicode_literal_re = re.compile(r"(\W|^)[uU]([rR]?[\'\"])", re.UNICODE)
720 _bytes_literal_re = re.compile(r"(\W|^)[bB]([rR]?[\'\"])", re.UNICODE)
719 _bytes_literal_re = re.compile(r"(\W|^)[bB]([rR]?[\'\"])", re.UNICODE)
721 _number_re = re.compile(
720 _number_re = re.compile(
722 r"""
721 r"""
723 (?P<number>
722 (?P<number>
724 (?P<mantissa>
723 (?P<mantissa>
725 (?P<integer1> [+-]?\d*)\.(?P<fraction>\d+)
724 (?P<integer1> [+-]?\d*)\.(?P<fraction>\d+)
726 |
725 |
727 (?P<integer2> [+-]?\d+)\.
726 (?P<integer2> [+-]?\d+)\.
728 )
727 )
729 (?:
728 (?:
730 [Ee]
729 [Ee]
731 (?P<exponent1> [+-]?\d+)
730 (?P<exponent1> [+-]?\d+)
732 )?
731 )?
733 |
732 |
734 (?P<integer3> [+-]?\d+)
733 (?P<integer3> [+-]?\d+)
735 (?:
734 (?:
736 [Ee]
735 [Ee]
737 (?P<exponent2> [+-]?\d+)
736 (?P<exponent2> [+-]?\d+)
738 )
737 )
739 )
738 )
740 """,
739 """,
741 re.VERBOSE,
740 re.VERBOSE,
742 )
741 )
743
742
744 def check_output(self, want: str, got: str, optionflags: int) -> bool:
743 def check_output(self, want: str, got: str, optionflags: int) -> bool:
745 if super().check_output(want, got, optionflags):
744 if super().check_output(want, got, optionflags):
746 return True
745 return True
747
746
748 allow_unicode = optionflags & _get_allow_unicode_flag()
747 allow_unicode = optionflags & _get_allow_unicode_flag()
749 allow_bytes = optionflags & _get_allow_bytes_flag()
748 allow_bytes = optionflags & _get_allow_bytes_flag()
750 allow_number = optionflags & _get_number_flag()
749 allow_number = optionflags & _get_number_flag()
751
750
752 if not allow_unicode and not allow_bytes and not allow_number:
751 if not allow_unicode and not allow_bytes and not allow_number:
753 return False
752 return False
754
753
755 def remove_prefixes(regex: Pattern[str], txt: str) -> str:
754 def remove_prefixes(regex: Pattern[str], txt: str) -> str:
756 return re.sub(regex, r"\1\2", txt)
755 return re.sub(regex, r"\1\2", txt)
757
756
758 if allow_unicode:
757 if allow_unicode:
759 want = remove_prefixes(self._unicode_literal_re, want)
758 want = remove_prefixes(self._unicode_literal_re, want)
760 got = remove_prefixes(self._unicode_literal_re, got)
759 got = remove_prefixes(self._unicode_literal_re, got)
761
760
762 if allow_bytes:
761 if allow_bytes:
763 want = remove_prefixes(self._bytes_literal_re, want)
762 want = remove_prefixes(self._bytes_literal_re, want)
764 got = remove_prefixes(self._bytes_literal_re, got)
763 got = remove_prefixes(self._bytes_literal_re, got)
765
764
766 if allow_number:
765 if allow_number:
767 got = self._remove_unwanted_precision(want, got)
766 got = self._remove_unwanted_precision(want, got)
768
767
769 return super().check_output(want, got, optionflags)
768 return super().check_output(want, got, optionflags)
770
769
771 def _remove_unwanted_precision(self, want: str, got: str) -> str:
770 def _remove_unwanted_precision(self, want: str, got: str) -> str:
772 wants = list(self._number_re.finditer(want))
771 wants = list(self._number_re.finditer(want))
773 gots = list(self._number_re.finditer(got))
772 gots = list(self._number_re.finditer(got))
774 if len(wants) != len(gots):
773 if len(wants) != len(gots):
775 return got
774 return got
776 offset = 0
775 offset = 0
777 for w, g in zip(wants, gots):
776 for w, g in zip(wants, gots):
778 fraction: Optional[str] = w.group("fraction")
777 fraction: Optional[str] = w.group("fraction")
779 exponent: Optional[str] = w.group("exponent1")
778 exponent: Optional[str] = w.group("exponent1")
780 if exponent is None:
779 if exponent is None:
781 exponent = w.group("exponent2")
780 exponent = w.group("exponent2")
782 precision = 0 if fraction is None else len(fraction)
781 precision = 0 if fraction is None else len(fraction)
783 if exponent is not None:
782 if exponent is not None:
784 precision -= int(exponent)
783 precision -= int(exponent)
785 if float(w.group()) == approx(float(g.group()), abs=10**-precision):
784 if float(w.group()) == approx(float(g.group()), abs=10**-precision):
786 # They're close enough. Replace the text we actually
785 # They're close enough. Replace the text we actually
787 # got with the text we want, so that it will match when we
786 # got with the text we want, so that it will match when we
788 # check the string literally.
787 # check the string literally.
789 got = (
788 got = (
790 got[: g.start() + offset] + w.group() + got[g.end() + offset :]
789 got[: g.start() + offset] + w.group() + got[g.end() + offset :]
791 )
790 )
792 offset += w.end() - w.start() - (g.end() - g.start())
791 offset += w.end() - w.start() - (g.end() - g.start())
793 return got
792 return got
794
793
795 return LiteralsOutputChecker
794 return LiteralsOutputChecker
796
795
797
796
798 def _get_checker() -> "IPDoctestOutputChecker":
797 def _get_checker() -> "IPDoctestOutputChecker":
799 """Return a IPDoctestOutputChecker subclass that supports some
798 """Return a IPDoctestOutputChecker subclass that supports some
800 additional options:
799 additional options:
801
800
802 * ALLOW_UNICODE and ALLOW_BYTES options to ignore u'' and b''
801 * ALLOW_UNICODE and ALLOW_BYTES options to ignore u'' and b''
803 prefixes (respectively) in string literals. Useful when the same
802 prefixes (respectively) in string literals. Useful when the same
804 ipdoctest should run in Python 2 and Python 3.
803 ipdoctest should run in Python 2 and Python 3.
805
804
806 * NUMBER to ignore floating-point differences smaller than the
805 * NUMBER to ignore floating-point differences smaller than the
807 precision of the literal number in the ipdoctest.
806 precision of the literal number in the ipdoctest.
808
807
809 An inner class is used to avoid importing "ipdoctest" at the module
808 An inner class is used to avoid importing "ipdoctest" at the module
810 level.
809 level.
811 """
810 """
812 global CHECKER_CLASS
811 global CHECKER_CLASS
813 if CHECKER_CLASS is None:
812 if CHECKER_CLASS is None:
814 CHECKER_CLASS = _init_checker_class()
813 CHECKER_CLASS = _init_checker_class()
815 return CHECKER_CLASS()
814 return CHECKER_CLASS()
816
815
817
816
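A hedged illustration of the NUMBER behaviour described in the docstring above, assuming the NUMBER flag has been registered (as _get_number_flag() below does): with +NUMBER, the checker rewrites the actual output to the precision of the expected literal, so extra digits do not fail the example. The function name is made up:

def third():
    """
    >>> third()  # doctest: +NUMBER
    0.333
    """
    return 1 / 3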
818 def _get_allow_unicode_flag() -> int:
817 def _get_allow_unicode_flag() -> int:
819 """Register and return the ALLOW_UNICODE flag."""
818 """Register and return the ALLOW_UNICODE flag."""
820 import doctest
819 import doctest
821
820
822 return doctest.register_optionflag("ALLOW_UNICODE")
821 return doctest.register_optionflag("ALLOW_UNICODE")
823
822
824
823
825 def _get_allow_bytes_flag() -> int:
824 def _get_allow_bytes_flag() -> int:
826 """Register and return the ALLOW_BYTES flag."""
825 """Register and return the ALLOW_BYTES flag."""
827 import doctest
826 import doctest
828
827
829 return doctest.register_optionflag("ALLOW_BYTES")
828 return doctest.register_optionflag("ALLOW_BYTES")
830
829
831
830
832 def _get_number_flag() -> int:
831 def _get_number_flag() -> int:
833 """Register and return the NUMBER flag."""
832 """Register and return the NUMBER flag."""
834 import doctest
833 import doctest
835
834
836 return doctest.register_optionflag("NUMBER")
835 return doctest.register_optionflag("NUMBER")
837
836
838
837
839 def _get_report_choice(key: str) -> int:
838 def _get_report_choice(key: str) -> int:
840 """Return the actual `ipdoctest` module flag value.
839 """Return the actual `ipdoctest` module flag value.
841
840
842 We want to do it as late as possible to avoid importing `ipdoctest` and all
841 We want to do it as late as possible to avoid importing `ipdoctest` and all
843 its dependencies when parsing options, as it adds overhead and breaks tests.
842 its dependencies when parsing options, as it adds overhead and breaks tests.
844 """
843 """
845 import doctest
844 import doctest
846
845
847 return {
846 return {
848 DOCTEST_REPORT_CHOICE_UDIFF: doctest.REPORT_UDIFF,
847 DOCTEST_REPORT_CHOICE_UDIFF: doctest.REPORT_UDIFF,
849 DOCTEST_REPORT_CHOICE_CDIFF: doctest.REPORT_CDIFF,
848 DOCTEST_REPORT_CHOICE_CDIFF: doctest.REPORT_CDIFF,
850 DOCTEST_REPORT_CHOICE_NDIFF: doctest.REPORT_NDIFF,
849 DOCTEST_REPORT_CHOICE_NDIFF: doctest.REPORT_NDIFF,
851 DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE: doctest.REPORT_ONLY_FIRST_FAILURE,
850 DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE: doctest.REPORT_ONLY_FIRST_FAILURE,
852 DOCTEST_REPORT_CHOICE_NONE: 0,
851 DOCTEST_REPORT_CHOICE_NONE: 0,
853 }[key]
852 }[key]
854
853
855
854
856 @pytest.fixture(scope="session")
855 @pytest.fixture(scope="session")
857 def ipdoctest_namespace() -> Dict[str, Any]:
856 def ipdoctest_namespace() -> Dict[str, Any]:
858 """Fixture that returns a :py:class:`dict` that will be injected into the
857 """Fixture that returns a :py:class:`dict` that will be injected into the
859 namespace of ipdoctests."""
858 namespace of ipdoctests."""
860 return dict()
859 return dict()
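By analogy with pytest's built-in doctest_namespace fixture, a project can override this fixture in its conftest.py so that every ipdoctest sees extra names without importing them. A sketch assuming numpy is available in the test environment:

import numpy as np
import pytest

@pytest.fixture(scope="session")
def ipdoctest_namespace():
    # These keys become globals in every collected ipdoctest.
    return {"np": np}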