1 1 """Completion for IPython.
2 2
3 3 This module started as fork of the rlcompleter module in the Python standard
4 4 library. The original enhancements made to rlcompleter have been sent
5 5 upstream and were accepted as of Python 2.3,
6 6
7 7 This module now support a wide variety of completion mechanism both available
8 8 for normal classic Python code, as well as completer for IPython specific
9 9 Syntax like magics.
10 10
Latex and Unicode completion
============================

IPython and compatible frontends can not only complete your code, but can help
you to input a wide range of characters. In particular we allow you to insert
a unicode character using the tab completion mechanism.

Forward latex/unicode completion
--------------------------------

Forward completion allows you to easily type a unicode character using its latex
name, or unicode long description. To do so, type a backslash followed by the
relevant name and press tab:


Using latex completion:

.. code::

    \\alpha<tab>
    α

or using unicode completion:


.. code::

    \\GREEK SMALL LETTER ALPHA<tab>
    α


Only valid Python identifiers will complete. Combining characters (like arrows or
dots) are also available; unlike latex, they need to be put after their
counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.

Some browsers are known to display combining characters incorrectly.

Backward latex completion
-------------------------

It is sometimes challenging to know how to type a character. If you are using
IPython or any compatible frontend, you can prepend a backslash to the character
and press :kbd:`Tab` to expand it to its latex form.

.. code::

    \\α<tab>
    \\alpha


Both forward and backward completions can be deactivated by setting the
:std:configtrait:`Completer.backslash_combining_completions` option to
``False``.


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions, both using static analysis of the code and by dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and will
raise unless used in a :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if the current
IPython completer pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is a number almost
without executing any code, unlike the deprecated :any:`IPCompleter.greedy`
option.

Be sure to update :any:`jedi` to the latest stable version or to try the
current development version to get better completions.

Matchers
========

All completion routines are implemented using the unified *Matchers* API.
The matchers API is provisional and subject to change without notice.

The built-in matchers include:

- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
- :any:`IPCompleter.magic_matcher`: completions for magics,
- :any:`IPCompleter.unicode_name_matcher`,
  :any:`IPCompleter.fwd_unicode_matcher`
  and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
- :any:`IPCompleter.file_matcher`: paths to files and directories,
- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
  implementation in :any:`InteractiveShell`, which uses the IPython hooks system
  (`complete_command`) with string dispatch (including regular expressions).
  Unlike other matchers, ``custom_completer_matcher`` will not suppress
  Jedi results, to match behaviour in earlier IPython versions.

Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.

Matcher API
-----------

Simplifying some details, the ``Matcher`` interface can be described as

.. code-block::

    MatcherAPIv1 = Callable[[str], list[str]]
    MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]

    Matcher = MatcherAPIv1 | MatcherAPIv2

The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
and remains supported as the simplest way of generating completions. This is also
currently the only API supported by the IPython hooks system `complete_command`.

To distinguish between matcher versions, the ``matcher_api_version`` attribute is
used. More precisely, the API allows omitting ``matcher_api_version`` for v1
matchers, and requires a literal ``2`` for v2 matchers.
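
As an illustrative sketch (the function names below are hypothetical, not part
of IPython), declaring a matcher of each version could look like:

.. code-block:: python

    def my_matcher_v1(text):
        # v1: receives the token, returns a list of completion strings;
        # ``matcher_api_version`` may be omitted entirely.
        return [text + "_completion"]

    def my_matcher_v2(context):
        # v2: receives a ``CompletionContext``, returns a ``MatcherResult``-like dict.
        return {"completions": []}

    # v2 matchers must declare their API version with a literal ``2``:
    my_matcher_v2.matcher_api_version = 2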

Once the API stabilises, future versions may relax the requirement for specifying
``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore,
please do not rely on the presence of ``matcher_api_version`` for any purposes.

Suppression of competing matchers
---------------------------------

By default, results from all matchers are combined in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results of
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a list of identifiers of matchers which
should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.
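
As an illustrative sketch, a v2 matcher wishing to suppress everything except
the magic matcher could return:

.. code-block:: python

    result = {
        "completions": [],  # SimpleCompletion candidates would go here
        "suppress": True,  # suppress results from all other matchers...
        "do_not_suppress": {"IPCompleter.magic_matcher"},  # ...except this one
    }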

The suppression behaviour is user-configurable via
:std:configtrait:`IPCompleter.suppress_competing_matchers`.
"""


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org

from __future__ import annotations
import builtins as builtin_mod
import enum
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import tokenize
import time
import unicodedata
import uuid
import warnings
from ast import literal_eval
from collections import defaultdict
from contextlib import contextmanager
from dataclasses import dataclass
from functools import cached_property, partial
from types import SimpleNamespace
from typing import (
    Iterable,
    Iterator,
    List,
    Tuple,
    Union,
    Any,
    Sequence,
    Dict,
    Optional,
    TYPE_CHECKING,
    Set,
    Sized,
    TypeVar,
    Literal,
)

from IPython.core.guarded_eval import guarded_eval, EvaluationContext
from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
from IPython.utils.decorators import sphinx_options
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.docs import GENERATING_DOCUMENTATION
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import (
    Bool,
    Enum,
    Int,
    List as ListTrait,
    Unicode,
    Dict as DictTrait,
    Union as UnionTrait,
    observe,
)
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
__skip_doctest__ = True


try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False

if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
    from typing import cast
    from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
else:
    from typing import Generic

    def cast(type_, obj):
        """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
        return obj

    # do not require on runtime
    NotRequired = Tuple  # requires Python >=3.11
    TypedDict = Dict  # by extension of `NotRequired` requires 3.11 too
    Protocol = object  # requires Python >=3.8
    TypeAlias = Any  # requires Python >=3.10
    TypeGuard = Generic  # requires Python >=3.10
    if GENERATING_DOCUMENTATION:
        from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
# -----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be more finely
# grained, but is it worth it for performance? While unicode has characters in
# the range 0, 0x110000, we seem to have names for only about 10% of those
# (131808 as I write this). With the ranges below we cover them all, with a
# density of ~67%; the biggest next gap we consider would only add about 1%
# density and there are 600 gaps that would need hard coding.
_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

# sentinel value to signal lack of a match
not_found = object()

class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in a :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things()  # works

    >>> completer.do_experimental_things()  # raises.

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should report
        a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements on
        any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
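
    A few examples of the return values:

    >>> has_open_quotes('print("hello')
    '"'
    >>> has_open_quotes("it's")
    "'"
    >>> has_open_quotes('"closed"')
    False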
360 360 """
361 361 # We check " first, then ', so complex cases with nested quotes will get
362 362 # the " to take precedence.
363 363 if s.count('"') % 2:
364 364 return '"'
365 365 elif s.count("'") % 2:
366 366 return "'"
367 367 else:
368 368 return False
369 369
370 370
def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s


def expand_user(path: str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val


def compress_user(path: str, tilde_expand: bool, tilde_val: str) -> str:
    """Does the opposite of expand_user, with its outputs."""
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path


def completions_sorting_key(word):
    """key for sorting completions

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in the alphabetical order
      by their name
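
    For example:

    >>> completions_sorting_key('__init__')
    (2, '__init__', 0)
    >>> completions_sorting_key('%%timeit')
    (0, 'timeit', 2)
    >>> completions_sorting_key('%alias')
    (0, 'alias', 1)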
439 439 """
440 440 prio1, prio2 = 0, 0
441 441
442 442 if word.startswith('__'):
443 443 prio1 = 2
444 444 elif word.startswith('_'):
445 445 prio1 = 1
446 446
447 447 if word.endswith('='):
448 448 prio1 = -1
449 449
450 450 if word.startswith('%%'):
451 451 # If there's another % in there, this is something else, so leave it alone
452 452 if not "%" in word[2:]:
453 453 word = word[2:]
454 454 prio2 = 2
455 455 elif word.startswith('%'):
456 456 if not "%" in word[1:]:
457 457 word = word[1:]
458 458 prio2 = 1
459 459
460 460 return prio1, word, prio2
461 461
462 462
class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0, so should likely be removed for 7.0.
    """

    def __init__(self, name):
        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ""
        self._origin = "fake"
        self.text = "crashed"

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]


class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This class is unstable, the API may change without warning.
        It will also raise unless used in a proper context manager.

    This acts as a middle ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how the
    code should be run/inspected, Prompt Toolkit (and other frontends) mostly
    need user-facing information:

    - Which range should be replaced by what.
    - Some metadata (like completion type), or meta information to be displayed
      to the user.

    For debugging purposes we can also store the origin of the completion (``jedi``,
    ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(
        self,
        start: int,
        end: int,
        text: str,
        *,
        type: Optional[str] = None,
        _origin="",
        signature="",
    ) -> None:
        warnings.warn(
            "``Completion`` is a provisional API (as of IPython 6.0). "
            "It may change without warnings. "
            "Use in corresponding context manager.",
            category=ProvisionalCompleterWarning,
            stacklevel=2,
        )

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
            (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other) -> bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer the type), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing, as it depends on surrounding text, which Completions are not
        aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))


class SimpleCompletion:
    """Completion item to be included in the dictionary returned by new-style Matcher (API v2).

    .. warning::

        Provisional

        This class is used to describe the currently supported attributes of
        simple completion items, and any additional implementation details
        should not be relied on. Additional attributes may be included in
        future versions, and the meaning of text disambiguated from the current
        dual meaning of "text to insert" and "text to use as a label".
    """

    __slots__ = ["text", "type"]

    def __init__(self, text: str, *, type: Optional[str] = None):
        self.text = text
        self.type = type

    def __repr__(self):
        return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"


class _MatcherResultBase(TypedDict):
    """Definition of dictionary to be returned by new-style Matcher (API v2)."""

    #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
    matched_fragment: NotRequired[str]

    #: Whether to suppress results from all other matchers (True), some
    #: matchers (set of identifiers) or none (False); default is False.
    suppress: NotRequired[Union[bool, Set[str]]]

    #: Identifiers of matchers which should NOT be suppressed when this matcher
    #: requests to suppress all other matchers; defaults to an empty set.
    do_not_suppress: NotRequired[Set[str]]

    #: Are completions already ordered and should be left as-is? default is False.
    ordered: NotRequired[bool]


@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
class SimpleMatcherResult(_MatcherResultBase, TypedDict):
    """Result of new-style completion matcher."""

    # note: TypedDict is added again to the inheritance chain
    # in order to get __orig_bases__ for documentation

    #: List of candidate completions
    completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]


class _JediMatcherResult(_MatcherResultBase):
    """Matching result returned by Jedi (will be processed differently)"""

    #: list of candidate completions
    completions: Iterator[_JediCompletionLike]


AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)


@dataclass
class CompletionContext:
    """Completion context provided as an argument to matchers in the Matcher API v2."""

    # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
    # which was not explicitly visible as an argument of the matcher, making any refactor
    # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
    # from the completer, and make substituting them in sub-classes easier.

    #: Relevant fragment of code directly preceding the cursor.
    #: The extraction of token is implemented via splitter heuristic
    #: (following readline behaviour for legacy reasons), which is user configurable
    #: (by switching the greedy mode).
    token: str

    #: The full available content of the editor or buffer
    full_text: str

    #: Cursor position in the line (the same for ``full_text`` and ``text``).
    cursor_position: int

    #: Cursor line in ``full_text``.
    cursor_line: int

    #: The maximum number of completions that will be used downstream.
    #: Matchers can use this information to abort early.
    #: The built-in Jedi matcher is currently excepted from this limit.
    #: If not given, return all possible completions.
    limit: Optional[int]

    @cached_property
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

    @cached_property
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]


#: Matcher results for API v2.
MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]


class _MatcherAPIv1Base(Protocol):
    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
    #: API version
    matcher_api_version: Optional[Literal[1]]

    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...


#: Protocol describing Matcher API v1.
MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]


class MatcherAPIv2(Protocol):
    """Protocol describing Matcher API v2."""

    #: API version
    matcher_api_version: Literal[2] = 2

    def __call__(self, context: CompletionContext) -> MatcherResult:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]


def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 1


def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 2


def _is_sizable(value: Any) -> TypeGuard[Sized]:
    """Determines whether the object is sizable"""
    return hasattr(value, "__len__")


def _is_iterator(value: Any) -> TypeGuard[Iterator]:
    """Determines whether the object is an iterator"""
    return hasattr(value, "__next__")


def has_any_completions(result: MatcherResult) -> bool:
    """Check if the result includes any completions."""
    completions = result["completions"]
    if _is_sizable(completions):
        return len(completions) != 0
    if _is_iterator(completions):
        try:
            old_iterator = completions
            first = next(old_iterator)
            result["completions"] = cast(
                Iterator[SimpleCompletion],
                itertools.chain([first], old_iterator),
            )
            return True
        except StopIteration:
            return False
    raise ValueError(
        "Completions returned by matcher need to be an Iterator or a Sizable"
    )


def completion_matcher(
    *,
    priority: Optional[float] = None,
    identifier: Optional[str] = None,
    api_version: int = 1,
):
    """Adds attributes describing the matcher.

    Parameters
    ----------
    priority : Optional[float]
        The priority of the matcher, determines the order of execution of matchers.
        Higher priority means that the matcher will be executed first. Defaults to 0.
    identifier : Optional[str]
        identifier of the matcher allowing users to modify the behaviour via traitlets,
        and also used for debugging (will be passed as ``origin`` with the completions).

        Defaults to the matcher function's ``__qualname__`` (for example,
        ``IPCompleter.file_matcher`` for the built-in matcher defined
        as a ``file_matcher`` method of the ``IPCompleter`` class).
    api_version : Optional[int]
        version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.
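
    Examples
    --------
    A hypothetical v1 matcher (the name and behaviour are illustrative only):

    .. code-block:: python

        @completion_matcher(identifier="my_matcher", priority=0.5)
        def my_matcher(text):
            return ["example"] if "example".startswith(text) else []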
774 774 """
775 775
776 776 def wrapper(func: Matcher):
777 777 func.matcher_priority = priority or 0 # type: ignore
778 778 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
779 779 func.matcher_api_version = api_version # type: ignore
780 780 if TYPE_CHECKING:
781 781 if api_version == 1:
782 782 func = cast(MatcherAPIv1, func)
783 783 elif api_version == 2:
784 784 func = cast(MatcherAPIv2, func)
785 785 return func
786 786
787 787 return wrapper
788 788
789 789
def _get_matcher_priority(matcher: Matcher):
    return getattr(matcher, "matcher_priority", 0)


def _get_matcher_id(matcher: Matcher):
    return getattr(matcher, "matcher_identifier", matcher.__qualname__)


def _get_matcher_api_version(matcher):
    return getattr(matcher, "matcher_api_version", 1)


context_matcher = partial(completion_matcher, api_version=2)


_IC = Iterable[Completion]


def _deduplicate_completions(text: str, completions: _IC) -> _IC:
    """
    Deduplicate a set of completions.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.

    Parameters
    ----------
    text : str
        text that should be completed.
    completions : Iterator[Completion]
        iterator over the completions to deduplicate

    Yields
    ------
    `Completions` objects
        Completions coming from multiple sources may be different, but end up having
        the same effect when applied to ``text``. If this is the case, this will
        consider completions as equal and only emit the first encountered.
        Not folded in `completions()` yet for debugging purposes, and to detect when
        the IPython completer does return things that Jedi does not, but should be
        at some point.
    """
    completions = list(completions)
    if not completions:
        return

    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)

    seen = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            yield c
            seen.add(new_text)


850 850 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
851 851 """
852 852 Rectify a set of completions to all have the same ``start`` and ``end``
853 853
854 854 .. warning::
855 855
856 856 Unstable
857 857
858 858 This function is unstable, API may change without warning.
859 859 It will also raise unless use in proper context manager.
860 860
861 861 Parameters
862 862 ----------
863 863 text : str
864 864 text that should be completed.
865 865 completions : Iterator[Completion]
866 866 iterator over the completions to rectify
867 867 _debug : bool
868 868 Log failed completion
869 869
870 870 Notes
871 871 -----
872 872 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
873 873 the Jupyter Protocol requires them to behave like so. This will readjust
874 874 the completion to have the same ``start`` and ``end`` by padding both
875 875 extremities with surrounding text.
876 876
877 877 During stabilisation should support a ``_debug`` option to log which
878 878 completion are return by the IPython completer and not found in Jedi in
879 879 order to make upstream bug report.
880 880 """
881 881 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
882 882 "It may change without warnings. "
883 883 "Use in corresponding context manager.",
884 884 category=ProvisionalCompleterWarning, stacklevel=2)
885 885
886 886 completions = list(completions)
887 887 if not completions:
888 888 return
889 889 starts = (c.start for c in completions)
890 890 ends = (c.end for c in completions)
891 891
892 892 new_start = min(starts)
893 893 new_end = max(ends)
894 894
895 895 seen_jedi = set()
896 896 seen_python_matches = set()
897 897 for c in completions:
898 898 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
899 899 if c._origin == 'jedi':
900 900 seen_jedi.add(new_text)
901 901 elif c._origin == 'IPCompleter.python_matches':
902 902 seen_python_matches.add(new_text)
903 903 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
904 904 diff = seen_python_matches.difference(seen_jedi)
905 905 if diff and _debug:
906 906 print('IPython.python matches have extras:', diff)
907 907
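The same padding, used to emit adjusted completions rather than to filter them, can be sketched as follows (again with a hypothetical minimal ``Completion``):

```python
from collections import namedtuple

Completion = namedtuple("Completion", ["start", "end", "text"])

def rectify(text, completions):
    # Pad every completion so that all share the widest (start, end) range.
    completions = list(completions)
    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)
    return [
        Completion(new_start, new_end,
                   text[new_start:c.start] + c.text + text[c.end:new_end])
        for c in completions
    ]

out = rectify("foo.ba", [Completion(4, 6, "bar"), Completion(0, 6, "foo.baz")])
```

After rectification every candidate replaces the same span of the input, as the Jupyter protocol expects.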
908 908
909 909 if sys.platform == 'win32':
910 910 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
911 911 else:
912 912 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
913 913
914 914 GREEDY_DELIMS = ' =\r\n'
915 915
916 916
917 917 class CompletionSplitter(object):
918 918 """An object to split an input line in a manner similar to readline.
919 919
920 920 By having our own implementation, we can expose readline-like completion in
921 921 a uniform manner to all frontends. This object only needs to be given the
922 922 line of text to be split and the cursor position on said line, and it
923 923 returns the 'word' to be completed on at the cursor after splitting the
924 924 entire line.
925 925
926 926 What characters are used as splitting delimiters can be controlled by
927 927 setting the ``delims`` attribute (this is a property that internally
928 928 automatically builds the necessary regular expression)"""
929 929
930 930 # Private interface
931 931
932 932 # A string of delimiter characters. The default value makes sense for
933 933 # IPython's most typical usage patterns.
934 934 _delims = DELIMS
935 935
936 936 # The expression (a normal string) to be compiled into a regular expression
937 937 # for actual splitting. We store it as an attribute mostly for ease of
938 938 # debugging, since this type of code can be so tricky to debug.
939 939 _delim_expr = None
940 940
941 941 # The regular expression that does the actual splitting
942 942 _delim_re = None
943 943
944 944 def __init__(self, delims=None):
945 945 delims = CompletionSplitter._delims if delims is None else delims
946 946 self.delims = delims
947 947
948 948 @property
949 949 def delims(self):
950 950 """Return the string of delimiter characters."""
951 951 return self._delims
952 952
953 953 @delims.setter
954 954 def delims(self, delims):
955 955 """Set the delimiters for line splitting."""
956 956 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
957 957 self._delim_re = re.compile(expr)
958 958 self._delims = delims
959 959 self._delim_expr = expr
960 960
961 961 def split_line(self, line, cursor_pos=None):
962 962 """Split a line of text with a cursor at the given position.
963 963 """
964 964 l = line if cursor_pos is None else line[:cursor_pos]
965 965 return self._delim_re.split(l)[-1]
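The delimiter-driven split that ``CompletionSplitter`` performs amounts to this (using the non-Windows ``DELIMS`` shown above):

```python
import re

delims = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
# Character class escaping every delimiter, as in the delims setter above.
delim_re = re.compile('[' + ''.join('\\' + c for c in delims) + ']')

def split_line(line, cursor_pos=None):
    # Return the 'word' under completion at the cursor.
    part = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(part)[-1]

word = split_line("run(foo.ba")
```

Note that ``.`` is not a delimiter, so a dotted attribute chain stays in one piece.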
966 966
967 967
968 968
969 969 class Completer(Configurable):
970 970
971 971 greedy = Bool(
972 972 False,
973 973 help="""Activate greedy completion.
974 974
975 975 .. deprecated:: 8.8
976 976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977 977
978 978 When enabled in IPython 8.8 or newer, changes configuration as follows:
979 979
980 980 - ``Completer.evaluation = 'unsafe'``
981 981 - ``Completer.auto_close_dict_keys = True``
982 982 """,
983 983 ).tag(config=True)
984 984
985 985 evaluation = Enum(
986 986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 987 default_value="limited",
988 988 help="""Policy for code evaluation under completion.
989 989
990 990 Successive options allow enabling more eager evaluation for better
991 991 completion suggestions, including for nested dictionaries, nested lists,
992 992 or even results of function calls.
993 993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995 995
996 996 Allowed values are:
997 997
998 998 - ``forbidden``: no evaluation of code is permitted,
999 999 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 1000 no item/attribute evaluation, no access to locals/globals,
1001 1001 no evaluation of any operations or comparisons.
1002 1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 1006 - ``unsafe``: evaluation of all methods and function calls but not of
1007 1007 syntax with side-effects like `del x`,
1008 1008 - ``dangerous``: completely arbitrary evaluation.
1009 1009 """,
1010 1010 ).tag(config=True)
1011 1011
1012 1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 1013 help="Experimental: Use Jedi to generate autocompletions. "
1014 1014 "Defaults to True if jedi is installed.").tag(config=True)
1015 1015
1016 1016 jedi_compute_type_timeout = Int(default_value=400,
1017 1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 1018 Set to 0 to stop computing types. Non-zero values lower than 100ms may hurt
1019 1019 performance by preventing jedi from building its cache.
1020 1020 """).tag(config=True)
1021 1021
1022 1022 debug = Bool(default_value=False,
1023 1023 help='Enable debug for the Completer. Mostly prints extra '
1024 1024 'information for experimental jedi integration.')\
1025 1025 .tag(config=True)
1026 1026
1027 1027 backslash_combining_completions = Bool(True,
1028 1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 1029 "Includes completion of latex commands, unicode names, and expanding "
1030 1030 "unicode characters back to latex commands.").tag(config=True)
1031 1031
1032 1032 auto_close_dict_keys = Bool(
1033 1033 False,
1034 1034 help="""
1035 1035 Enable auto-closing dictionary keys.
1036 1036
1037 1037 When enabled string keys will be suffixed with a final quote
1038 1038 (matching the opening quote), tuple keys will also receive a
1039 1039 separating comma if needed, and keys which are final will
1040 1040 receive a closing bracket (``]``).
1041 1041 """,
1042 1042 ).tag(config=True)
1043 1043
1044 1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 1045 """Create a new completer for the command line.
1046 1046
1047 1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048 1048
1049 1049 If unspecified, the default namespace where completions are performed
1050 1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 1051 given as dictionaries.
1052 1052
1053 1053 An optional second namespace can be given. This allows the completer
1054 1054 to handle cases where both the local and global scopes need to be
1055 1055 distinguished.
1056 1056 """
1057 1057
1058 1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 1059 # specific namespace or to use __main__.__dict__. This will allow us
1060 1060 # to bind to __main__.__dict__ at completion time, not now.
1061 1061 if namespace is None:
1062 1062 self.use_main_ns = True
1063 1063 else:
1064 1064 self.use_main_ns = False
1065 1065 self.namespace = namespace
1066 1066
1067 1067 # The global namespace, if given, can be bound directly
1068 1068 if global_namespace is None:
1069 1069 self.global_namespace = {}
1070 1070 else:
1071 1071 self.global_namespace = global_namespace
1072 1072
1073 1073 self.custom_matchers = []
1074 1074
1075 1075 super(Completer, self).__init__(**kwargs)
1076 1076
1077 1077 def complete(self, text, state):
1078 1078 """Return the next possible completion for 'text'.
1079 1079
1080 1080 This is called successively with state == 0, 1, 2, ... until it
1081 1081 returns None. The completion should begin with 'text'.
1082 1082
1083 1083 """
1084 1084 if self.use_main_ns:
1085 1085 self.namespace = __main__.__dict__
1086 1086
1087 1087 if state == 0:
1088 1088 if "." in text:
1089 1089 self.matches = self.attr_matches(text)
1090 1090 else:
1091 1091 self.matches = self.global_matches(text)
1092 1092 try:
1093 1093 return self.matches[state]
1094 1094 except IndexError:
1095 1095 return None
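The ``state`` protocol is readline's: the completer is polled with ``state = 0, 1, 2, ...`` until it answers ``None``. A toy completer (hypothetical, fixed word list) shows the calling convention:

```python
class TinyCompleter:
    # Illustration only: completes against a fixed word list.
    words = ["print", "property", "pass"]

    def complete(self, text, state):
        if state == 0:
            # Compute matches once, on the first poll.
            self.matches = [w for w in self.words if w.startswith(text)]
        try:
            return self.matches[state]
        except IndexError:
            return None

completer = TinyCompleter()
results = []
state = 0
while (match := completer.complete("pr", state)) is not None:
    results.append(match)
    state += 1
```

Each call with ``state == 0`` recomputes the match list; later calls just index into it.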
1096 1096
1097 1097 def global_matches(self, text):
1098 1098 """Compute matches when text is a simple name.
1099 1099
1100 1100 Return a list of all keywords, built-in functions and names currently
1101 1101 defined in self.namespace or self.global_namespace that match.
1102 1102
1103 1103 """
1104 1104 matches = []
1105 1105 match_append = matches.append
1106 1106 n = len(text)
1107 1107 for lst in [
1108 1108 keyword.kwlist,
1109 1109 builtin_mod.__dict__.keys(),
1110 1110 list(self.namespace.keys()),
1111 1111 list(self.global_namespace.keys()),
1112 1112 ]:
1113 1113 for word in lst:
1114 1114 if word[:n] == text and word != "__builtins__":
1115 1115 match_append(word)
1116 1116
1117 1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 1119 shortened = {
1120 1120 "_".join([sub[0] for sub in word.split("_")]): word
1121 1121 for word in lst
1122 1122 if snake_case_re.match(word)
1123 1123 }
1124 1124 for word in shortened.keys():
1125 1125 if word[:n] == text and word != "__builtins__":
1126 1126 match_append(shortened[word])
1127 1127 return matches
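The second loop above implements abbreviation completion for snake_case names (``d_f`` matching ``display_formatter``); in isolation the scheme looks like this:

```python
import re

snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

def abbrev_matches(text, names):
    # Map each snake_case name to its initials ('display_formatter' -> 'd_f')
    # and return the full names whose initials start with `text`.
    shortened = {
        "_".join(sub[0] for sub in word.split("_")): word
        for word in names
        if snake_case_re.match(word)
    }
    n = len(text)
    return [full for short, full in shortened.items() if short[:n] == text]

matches = abbrev_matches("d_f", ["display_formatter", "run_cell", "data"])
```

Names whose initials collide overwrite one another in the dict, mirroring the original; names without an underscore never enter the map.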
1128 1128
1129 1129 def attr_matches(self, text):
1130 1130 """Compute matches when text contains a dot.
1131 1131
1132 1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 1133 evaluatable in self.namespace or self.global_namespace, it will be
1134 1134 evaluated and its attributes (as revealed by dir()) are used as
1135 1135 possible completions. (For class instances, class members are
1136 1136 also considered.)
1137 1137
1138 1138 WARNING: this can still invoke arbitrary C code, if an object
1139 1139 with a __getattr__ hook is evaluated.
1140 1140
1141 1141 """
1142 1142 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1143 1143 if not m2:
1144 1144 return []
1145 1145 expr, attr = m2.group(1, 2)
1146 1146
1147 1147 obj = self._evaluate_expr(expr)
1148 1148
1149 1149 if obj is not_found:
1150 1150 return []
1151 1151
1152 1152 if self.limit_to__all__ and hasattr(obj, '__all__'):
1153 1153 words = get__all__entries(obj)
1154 1154 else:
1155 1155 words = dir2(obj)
1156 1156
1157 1157 try:
1158 1158 words = generics.complete_object(obj, words)
1159 1159 except TryNext:
1160 1160 pass
1161 1161 except AssertionError:
1162 1162 raise
1163 1163 except Exception:
1164 1164 # Silence errors from completion function
1165 1165 pass
1166 1166 # Build match list to return
1167 1167 n = len(attr)
1168 1168
1169 1169 # Note: ideally we would just return words here and the prefix
1170 1170 # reconciliator would know that we intend to append to rather than
1171 1171 # replace the input text; this requires refactoring to return the range
1172 1172 # which ought to be replaced (as jedi does).
1173 1173 tokens = _parse_tokens(expr)
1174 1174 rev_tokens = reversed(tokens)
1175 1175 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1176 1176 name_turn = True
1177 1177
1178 1178 parts = []
1179 1179 for token in rev_tokens:
1180 1180 if token.type in skip_over:
1181 1181 continue
1182 1182 if token.type == tokenize.NAME and name_turn:
1183 1183 parts.append(token.string)
1184 1184 name_turn = False
1185 1185 elif token.type == tokenize.OP and token.string == "." and not name_turn:
1186 1186 parts.append(token.string)
1187 1187 name_turn = True
1188 1188 else:
1189 1189 # short-circuit on any token that is not the expected name or dot
1190 1190 break
1191 1191
1192 1192 prefix_after_space = "".join(reversed(parts))
1193 1193
1194 1194 return ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr]
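The reversed token walk above recovers only the trailing dotted-name part of the expression; isolated, it can be sketched as (``trailing_dotted_name`` is a hypothetical helper name):

```python
import tokenize
from io import StringIO

def trailing_dotted_name(expr):
    # Tokenize, then walk backwards collecting alternating NAME and '.' tokens.
    tokens = []
    try:
        for tok in tokenize.generate_tokens(StringIO(expr).readline):
            tokens.append(tok)
    except tokenize.TokenError:
        pass
    skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
    parts, name_turn = [], True
    for token in reversed(tokens):
        if token.type in skip_over:
            continue
        if token.type == tokenize.NAME and name_turn:
            parts.append(token.string)
            name_turn = False
        elif token.type == tokenize.OP and token.string == "." and not name_turn:
            parts.append(token.string)
            name_turn = True
        else:
            break
    return "".join(reversed(parts))

prefix = trailing_dotted_name("obj.attr.sub")
```

The walk stops at the first token that breaks the name/dot alternation, e.g. a closing parenthesis.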
1195 1195
1196 1196 def _evaluate_expr(self, expr):
1197 1197 obj = not_found
1198 1198 done = False
1199 1199 while not done and expr:
1200 1200 try:
1201 1201 obj = guarded_eval(
1202 1202 expr,
1203 1203 EvaluationContext(
1204 1204 globals=self.global_namespace,
1205 1205 locals=self.namespace,
1206 1206 evaluation=self.evaluation,
1207 1207 ),
1208 1208 )
1209 1209 done = True
1210 1210 except Exception as e:
1211 1211 if self.debug:
1212 1212 print("Evaluation exception", e)
1213 1213 # trim the expression to remove any invalid prefix
1214 1214 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1215 1215 # where parenthesis is not closed.
1216 1216 # TODO: make this faster by reusing parts of the computation?
1217 1217 expr = expr[1:]
1218 1218 return obj
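The retry loop above copes with truncated prefixes by shaving characters off the front. A sketch with plain ``eval`` (note: IPython uses ``guarded_eval`` precisely because bare ``eval`` is unsafe; this is illustration only):

```python
def evaluate_with_trim(expr, namespace):
    not_found = object()
    obj = not_found
    done = False
    while not done and expr:
        try:
            obj = eval(expr, {"__builtins__": {}}, dict(namespace))
            done = True
        except Exception:
            # e.g. user typed '(d[' so expr is '(d': drop the '(' and retry.
            expr = expr[1:]
    return obj

value = evaluate_with_trim("(d", {"d": {"a": 1}})
```

The first attempt fails with a ``SyntaxError``; trimming the stray ``(`` leaves a valid expression.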
1219 1219
1220 1220 def get__all__entries(obj):
1221 1221 """Return the strings in the ``__all__`` attribute"""
1222 1222 try:
1223 1223 words = getattr(obj, '__all__')
1224 1224 except:
1225 1225 return []
1226 1226
1227 1227 return [w for w in words if isinstance(w, str)]
1228 1228
1229 1229
1230 1230 class _DictKeyState(enum.Flag):
1231 1231 """Represent state of the key match in context of other possible matches.
1232 1232
1233 1233 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1234 1234 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1235 1235 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1236 1236 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | END_OF_TUPLE}`
1237 1237 """
1238 1238
1239 1239 BASELINE = 0
1240 1240 END_OF_ITEM = enum.auto()
1241 1241 END_OF_TUPLE = enum.auto()
1242 1242 IN_TUPLE = enum.auto()
1243 1243
1244 1244
1245 1245 def _parse_tokens(c):
1246 1246 """Parse tokens even if there is an error."""
1247 1247 tokens = []
1248 1248 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1249 1249 while True:
1250 1250 try:
1251 1251 tokens.append(next(token_generator))
1252 1252 except tokenize.TokenError:
1253 1253 return tokens
1254 1254 except StopIteration:
1255 1255 return tokens
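Because user input is frequently incomplete, the tokens gathered before the error are still useful; for example an unclosed bracket still yields the leading tokens:

```python
import tokenize

def parse_tokens(c):
    # Collect tokens, returning whatever was produced before any error.
    tokens = []
    gen = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(gen))
        except (tokenize.TokenError, StopIteration):
            return tokens

toks = parse_tokens("foo([1,\n2")
names = [t.string for t in toks if t.type == tokenize.NAME]
```

The unbalanced ``(`` and ``[`` raise ``TokenError`` at end of input, but the name and number tokens seen before that are kept.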
1256 1256
1257 1257
1258 1258 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1259 1259 """Match any valid Python numeric literal in a prefix of dictionary keys.
1260 1260
1261 1261 References:
1262 1262 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1263 1263 - https://docs.python.org/3/library/tokenize.html
1264 1264 """
1265 1265 if prefix[-1].isspace():
1266 1266 # if user typed a space we do not have anything to complete
1267 1267 # even if there was a valid number token before
1268 1268 return None
1269 1269 tokens = _parse_tokens(prefix)
1270 1270 rev_tokens = reversed(tokens)
1271 1271 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1272 1272 number = None
1273 1273 for token in rev_tokens:
1274 1274 if token.type in skip_over:
1275 1275 continue
1276 1276 if number is None:
1277 1277 if token.type == tokenize.NUMBER:
1278 1278 number = token.string
1279 1279 continue
1280 1280 else:
1281 1281 # we did not match a number
1282 1282 return None
1283 1283 if token.type == tokenize.OP:
1284 1284 if token.string == ",":
1285 1285 break
1286 1286 if token.string in {"+", "-"}:
1287 1287 number = token.string + number
1288 1288 else:
1289 1289 return None
1290 1290 return number
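Restated in isolation (with its tolerant tokenizer inlined), the backwards number scan accepts signed literals and the alternative integer notations:

```python
import tokenize

def _tokens(c):
    toks = []
    gen = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            toks.append(next(gen))
        except (tokenize.TokenError, StopIteration):
            return toks

def match_trailing_number(prefix):
    # Walk tokens backwards: a NUMBER, optionally preceded by '+' or '-'.
    if prefix[-1].isspace():
        return None
    skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
    number = None
    for token in reversed(_tokens(prefix)):
        if token.type in skip_over:
            continue
        if number is None:
            if token.type != tokenize.NUMBER:
                return None
            number = token.string
            continue
        if token.type == tokenize.OP:
            if token.string == ",":
                break
            if token.string in {"+", "-"}:
                number = token.string + number
            else:
                return None
    return number
```

A trailing space means the number is already complete, so nothing is matched.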
1291 1291
1292 1292
1293 1293 _INT_FORMATS = {
1294 1294 "0b": bin,
1295 1295 "0o": oct,
1296 1296 "0x": hex,
1297 1297 }
1298 1298
1299 1299
1300 1300 def match_dict_keys(
1301 1301 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1302 1302 prefix: str,
1303 1303 delims: str,
1304 1304 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1305 1305 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1306 1306 """Used by dict_key_matches, matching the prefix to a list of keys
1307 1307
1308 1308 Parameters
1309 1309 ----------
1310 1310 keys
1311 1311 list of keys in dictionary currently being completed.
1312 1312 prefix
1313 1313 Part of the text already typed by the user. E.g. `mydict[b'fo`
1314 1314 delims
1315 1315 String of delimiters to consider when finding the current key.
1316 1316 extra_prefix : optional
1317 1317 Part of the text already typed in multi-key index cases. E.g. for
1318 1318 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1319 1319
1320 1320 Returns
1321 1321 -------
1322 1322 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1323 1323 ``quote`` being the quote that needs to be used to close the current string,
1324 1324 ``token_start`` the position where the replacement should start occurring,
1325 1325 and ``matched`` a dictionary mapping each replacement/completion string to
1326 1326 a ``_DictKeyState`` value describing the state of that key match.
1327 1327 """
1328 1328 prefix_tuple = extra_prefix if extra_prefix else ()
1329 1329
1330 1330 prefix_tuple_size = sum(
1331 1331 [
1332 1332 # for pandas, do not count slices as taking space
1333 1333 not isinstance(k, slice)
1334 1334 for k in prefix_tuple
1335 1335 ]
1336 1336 )
1337 1337 text_serializable_types = (str, bytes, int, float, slice)
1338 1338
1339 1339 def filter_prefix_tuple(key):
1340 1340 # Reject too short keys
1341 1341 if len(key) <= prefix_tuple_size:
1342 1342 return False
1343 1343 # Reject keys which cannot be serialised to text
1344 1344 for k in key:
1345 1345 if not isinstance(k, text_serializable_types):
1346 1346 return False
1347 1347 # Reject keys that do not match the prefix
1348 1348 for k, pt in zip(key, prefix_tuple):
1349 1349 if k != pt and not isinstance(pt, slice):
1350 1350 return False
1351 1351 # All checks passed!
1352 1352 return True
1353 1353
1354 1354 filtered_key_is_final: Dict[
1355 1355 Union[str, bytes, int, float], _DictKeyState
1356 1356 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1357 1357
1358 1358 for k in keys:
1359 1359 # If at least one of the matches is not final, mark as undetermined.
1360 1360 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1361 1361 # `111` appears final on first match but is not final on the second.
1362 1362
1363 1363 if isinstance(k, tuple):
1364 1364 if filter_prefix_tuple(k):
1365 1365 key_fragment = k[prefix_tuple_size]
1366 1366 filtered_key_is_final[key_fragment] |= (
1367 1367 _DictKeyState.END_OF_TUPLE
1368 1368 if len(k) == prefix_tuple_size + 1
1369 1369 else _DictKeyState.IN_TUPLE
1370 1370 )
1371 1371 elif prefix_tuple_size > 0:
1372 1372 # we are completing a tuple but this key is not a tuple,
1373 1373 # so we should ignore it
1374 1374 pass
1375 1375 else:
1376 1376 if isinstance(k, text_serializable_types):
1377 1377 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1378 1378
1379 1379 filtered_keys = filtered_key_is_final.keys()
1380 1380
1381 1381 if not prefix:
1382 1382 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1383 1383
1384 1384 quote_match = re.search("(?:\"|')", prefix)
1385 1385 is_user_prefix_numeric = False
1386 1386
1387 1387 if quote_match:
1388 1388 quote = quote_match.group()
1389 1389 valid_prefix = prefix + quote
1390 1390 try:
1391 1391 prefix_str = literal_eval(valid_prefix)
1392 1392 except Exception:
1393 1393 return "", 0, {}
1394 1394 else:
1395 1395 # If it does not look like a string, let's assume
1396 1396 # we are dealing with a number or variable.
1397 1397 number_match = _match_number_in_dict_key_prefix(prefix)
1398 1398
1399 1399 # We do not want the key matcher to suggest variable names so we yield:
1400 1400 if number_match is None:
1401 1401 # The alternative would be to assume that the user forgot the quote
1402 1402 # and, if the substring matches, suggest adding it at the start.
1403 1403 return "", 0, {}
1404 1404
1405 1405 prefix_str = number_match
1406 1406 is_user_prefix_numeric = True
1407 1407 quote = ""
1408 1408
1409 1409 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1410 1410 token_match = re.search(pattern, prefix, re.UNICODE)
1411 1411 assert token_match is not None # silence mypy
1412 1412 token_start = token_match.start()
1413 1413 token_prefix = token_match.group()
1414 1414
1415 1415 matched: Dict[str, _DictKeyState] = {}
1416 1416
1417 1417 str_key: Union[str, bytes]
1418 1418
1419 1419 for key in filtered_keys:
1420 1420 if isinstance(key, (int, float)):
1421 1421 # The key is a number but the user did not type a number.
1422 1422 if not is_user_prefix_numeric:
1423 1423 continue
1424 1424 str_key = str(key)
1425 1425 if isinstance(key, int):
1426 1426 int_base = prefix_str[:2].lower()
1427 1427 # if user typed integer using binary/oct/hex notation:
1428 1428 if int_base in _INT_FORMATS:
1429 1429 int_format = _INT_FORMATS[int_base]
1430 1430 str_key = int_format(key)
1431 1431 else:
1432 1432 # The key is a string but the user typed a number.
1433 1433 if is_user_prefix_numeric:
1434 1434 continue
1435 1435 str_key = key
1436 1436 try:
1437 1437 if not str_key.startswith(prefix_str):
1438 1438 continue
1439 1439 except (AttributeError, TypeError, UnicodeError) as e:
1440 1440 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1441 1441 continue
1442 1442
1443 1443 # reformat remainder of key to begin with prefix
1444 1444 rem = str_key[len(prefix_str) :]
1445 1445 # force repr wrapped in '
1446 1446 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1447 1447 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1448 1448 if quote == '"':
1449 1449 # The entered prefix is quoted with ",
1450 1450 # but the match is quoted with '.
1451 1451 # A contained " hence needs escaping for comparison:
1452 1452 rem_repr = rem_repr.replace('"', '\\"')
1453 1453
1454 1454 # then reinsert prefix from start of token
1455 1455 match = "%s%s" % (token_prefix, rem_repr)
1456 1456
1457 1457 matched[match] = filtered_key_is_final[key]
1458 1458 return quote, token_start, matched
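Of the three return values, ``token_start`` comes from a simple idea: the replacement begins at the last delimiter-free run of characters in the prefix. A sketch (``token_start_of`` is a hypothetical helper; the delimiter set here is illustrative):

```python
import re

def token_start_of(prefix, delims):
    # Find the longest delimiter-free suffix of `prefix`; the completion
    # replacement starts where that suffix begins.
    pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
    match = re.search(pattern, prefix, re.UNICODE)
    return match.start(), match.group()

start, token_prefix = token_start_of("d['fo", " '[")
```

For ``d['fo`` the last delimiter is the opening quote, so the replacement starts at the ``f``.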
1459 1459
1460 1460
1461 1461 def cursor_to_position(text:str, line:int, column:int)->int:
1462 1462 """
1463 1463 Convert the (line,column) position of the cursor in text to an offset in a
1464 1464 string.
1465 1465
1466 1466 Parameters
1467 1467 ----------
1468 1468 text : str
1469 1469 The text in which to calculate the cursor offset
1470 1470 line : int
1471 1471 Line of the cursor; 0-indexed
1472 1472 column : int
1473 1473 Column of the cursor 0-indexed
1474 1474
1475 1475 Returns
1476 1476 -------
1477 1477 Position of the cursor in ``text``, 0-indexed.
1478 1478
1479 1479 See Also
1480 1480 --------
1481 1481 position_to_cursor : reciprocal of this function
1482 1482
1483 1483 """
1484 1484 lines = text.split('\n')
1485 1485 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1486 1486
1487 1487 return sum(len(l) + 1 for l in lines[:line]) + column
1488 1488
1489 1489 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1490 1490 """
1491 1491 Convert the position of the cursor in text (0 indexed) to a line
1492 1492 number(0-indexed) and a column number (0-indexed) pair
1493 1493
1494 1494 Position should be a valid position in ``text``.
1495 1495
1496 1496 Parameters
1497 1497 ----------
1498 1498 text : str
1499 1499 The text in which to calculate the cursor offset
1500 1500 offset : int
1501 1501 Position of the cursor in ``text``, 0-indexed.
1502 1502
1503 1503 Returns
1504 1504 -------
1505 1505 (line, column) : (int, int)
1506 1506 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1507 1507
1508 1508 See Also
1509 1509 --------
1510 1510 cursor_to_position : reciprocal of this function
1511 1511
1512 1512 """
1513 1513
1514 1514 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1515 1515
1516 1516 before = text[:offset]
1517 1517 blines = before.split('\n') # ! splitlines trims a trailing \n
1518 1518 line = before.count('\n')
1519 1519 col = len(blines[-1])
1520 1520 return line, col
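The two conversions are inverses; a compact restatement makes the roundtrip visible:

```python
def cursor_to_position(text, line, column):
    # Each preceding line contributes its length plus one for the newline.
    lines = text.split('\n')
    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text, offset):
    before = text[:offset]
    return before.count('\n'), len(before.split('\n')[-1])

text = "ab\ncd"
offset = cursor_to_position(text, 1, 1)      # cursor on the 'd'
line, col = position_to_cursor(text, offset)
```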
1521 1521
1522 1522
1523 1523 def _safe_isinstance(obj, module, class_name, *attrs):
1524 1524 """Checks if obj is an instance of module.class_name if loaded
1525 1525 """
1526 1526 if module in sys.modules:
1527 1527 m = sys.modules[module]
1528 1528 for attr in [class_name, *attrs]:
1529 1529 m = getattr(m, attr)
1530 1530 return isinstance(obj, m)
1531 1531
1532 1532
1533 1533 @context_matcher()
1534 1534 def back_unicode_name_matcher(context: CompletionContext):
1535 1535 """Match Unicode characters back to Unicode name
1536 1536
1537 1537 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1538 1538 """
1539 1539 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1540 1540 return _convert_matcher_v1_result_to_v2(
1541 1541 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1542 1542 )
1543 1543
1544 1544
1545 1545 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1546 1546 """Match Unicode characters back to Unicode name
1547 1547
1548 1548 This does ``β˜ƒ`` -> ``\\snowman``
1549 1549
1550 1550 Note that snowman is not a valid python3 combining character; it will be
1551 1551 expanded, but the completion machinery will not recombine it back into the snowman character.
1552 1552
1553 1553 Nor will this back-complete standard escape sequences like \\n, \\b ...
1554 1554
1555 1555 .. deprecated:: 8.6
1556 1556 You can use :meth:`back_unicode_name_matcher` instead.
1557 1557
1558 1558 Returns
1559 1559 =======
1560 1560
1561 1561 Return a tuple with two elements:
1562 1562
1563 1563 - The Unicode character that was matched (preceded with a backslash), or
1564 1564 empty string,
1565 1565 - a sequence (of length 1) with the name of the matched Unicode character,
1566 1566 preceded by a backslash, or empty if no match.
1567 1567 """
1568 1568 if len(text)<2:
1569 1569 return '', ()
1570 1570 maybe_slash = text[-2]
1571 1571 if maybe_slash != '\\':
1572 1572 return '', ()
1573 1573
1574 1574 char = text[-1]
1575 1575 # no expand on quote for completion in strings.
1576 1576 # nor backcomplete standard ascii keys
1577 1577 if char in string.ascii_letters or char in ('"',"'"):
1578 1578 return '', ()
1579 1579 try :
1580 1580 unic = unicodedata.name(char)
1581 1581 return '\\'+char,('\\'+unic,)
1582 1582 except KeyError:
1583 1583 pass
1584 1584 return '', ()
1585 1585
1586 1586
1587 1587 @context_matcher()
1588 1588 def back_latex_name_matcher(context: CompletionContext):
1589 1589 """Match latex characters back to unicode name
1590 1590
1591 1591 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1592 1592 """
1593 1593 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1594 1594 return _convert_matcher_v1_result_to_v2(
1595 1595 matches, type="latex", fragment=fragment, suppress_if_matches=True
1596 1596 )
1597 1597
1598 1598
1599 1599 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1600 1600 """Match latex characters back to unicode name
1601 1601
1602 1602 This does ``\\β„΅`` -> ``\\aleph``
1603 1603
1604 1604 .. deprecated:: 8.6
1605 1605 You can use :meth:`back_latex_name_matcher` instead.
1606 1606 """
1607 1607 if len(text)<2:
1608 1608 return '', ()
1609 1609 maybe_slash = text[-2]
1610 1610 if maybe_slash != '\\':
1611 1611 return '', ()
1612 1612
1613 1613
1614 1614 char = text[-1]
1615 1615 # no expand on quote for completion in strings.
1616 1616 # nor backcomplete standard ascii keys
1617 1617 if char in string.ascii_letters or char in ('"',"'"):
1618 1618 return '', ()
1619 1619 try :
1620 1620 latex = reverse_latex_symbol[char]
1621 1621 # '\\' replaces the \ as well
1622 1622 return '\\'+char,[latex]
1623 1623 except KeyError:
1624 1624 pass
1625 1625 return '', ()
1626 1626
1627 1627
1628 1628 def _formatparamchildren(parameter) -> str:
1629 1629 """
1630 1630 Get parameter name and value from Jedi Private API
1631 1631
1632 1632 Jedi does not expose a simple way to get `param=value` from its API.
1633 1633
1634 1634 Parameters
1635 1635 ----------
1636 1636 parameter
1637 1637 Jedi's function `Param`
1638 1638
1639 1639 Returns
1640 1640 -------
1641 1641 A string like 'a', 'b=1', '*args', '**kwargs'
1642 1642
1643 1643 """
1644 1644 description = parameter.description
1645 1645 if not description.startswith('param '):
1646 1646 raise ValueError('Jedi function parameter description has changed format. '
1647 1647 'Expected "param ...", found %r.' % description)
1648 1648 return description[6:]
1649 1649
1650 1650 def _make_signature(completion)-> str:
1651 1651 """
1652 1652 Make the signature from a jedi completion
1653 1653
1654 1654 Parameters
1655 1655 ----------
1656 1656 completion : jedi.Completion
1657 1657 the completion object (may not complete to a function type)
1658 1658
1659 1659 Returns
1660 1660 -------
1661 1661 a string consisting of the function signature, with the parenthesis but
1662 1662 without the function name. example:
1663 1663 `(a, *args, b=1, **kwargs)`
1664 1664
1665 1665 """
1666 1666
1667 1667 # it looks like this might work on jedi 0.17
1668 1668 if hasattr(completion, 'get_signatures'):
1669 1669 signatures = completion.get_signatures()
1670 1670 if not signatures:
1671 1671 return '(?)'
1672 1672
1673 1673 c0 = completion.get_signatures()[0]
1674 1674 return '('+c0.to_string().split('(', maxsplit=1)[1]
1675 1675
1676 1676 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1677 1677 for p in signature.defined_names()) if f])
1678 1678
1679 1679
1680 1680 _CompleteResult = Dict[str, MatcherResult]
1681 1681
1682 1682
1683 1683 DICT_MATCHER_REGEX = re.compile(
1684 1684 r"""(?x)
1685 1685 ( # match dict-referring - or any get item object - expression
1686 1686 .+
1687 1687 )
1688 1688 \[ # open bracket
1689 1689 \s* # and optional whitespace
1690 1690 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1691 1691 # and slices
1692 1692 ((?:(?:
1693 1693 (?: # closed string
1694 1694 [uUbB]? # string prefix (r not handled)
1695 1695 (?:
1696 1696 '(?:[^']|(?<!\\)\\')*'
1697 1697 |
1698 1698 "(?:[^"]|(?<!\\)\\")*"
1699 1699 )
1700 1700 )
1701 1701 |
1702 1702 # capture integers and slices
1703 1703 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1704 1704 |
1705 1705 # integer in bin/hex/oct notation
1706 1706 0[bBxXoO]_?(?:\w|\d)+
1707 1707 )
1708 1708 \s*,\s*
1709 1709 )*)
1710 1710 ((?:
1711 1711 (?: # unclosed string
1712 1712 [uUbB]? # string prefix (r not handled)
1713 1713 (?:
1714 1714 '(?:[^']|(?<!\\)\\')*
1715 1715 |
1716 1716 "(?:[^"]|(?<!\\)\\")*
1717 1717 )
1718 1718 )
1719 1719 |
1720 1720 # unfinished integer
1721 1721 (?:[-+]?\d+)
1722 1722 |
1723 1723 # integer in bin/hex/oct notation
1724 1724 0[bBxXoO]_?(?:\w|\d)+
1725 1725 )
1726 1726 )?
1727 1727 $
1728 1728 """
1729 1729 )
1730 1730
1731 1731
1732 1732 def _convert_matcher_v1_result_to_v2(
1733 1733 matches: Sequence[str],
1734 1734 type: str,
1735 1735 fragment: Optional[str] = None,
1736 1736 suppress_if_matches: bool = False,
1737 1737 ) -> SimpleMatcherResult:
1738 1738 """Utility to help with transition"""
1739 1739 result = {
1740 1740 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1741 1741 "suppress": bool(matches) if suppress_if_matches else False,
1742 1742 }
1743 1743 if fragment is not None:
1744 1744 result["matched_fragment"] = fragment
1745 1745 return cast(SimpleMatcherResult, result)
1746 1746
1747 1747
1748 1748 class IPCompleter(Completer):
1749 1749 """Extension of the completer class with IPython-specific features"""
1750 1750
1751 1751 @observe('greedy')
1752 1752 def _greedy_changed(self, change):
1753 1753 """update the splitter and readline delims when greedy is changed"""
1754 1754 if change["new"]:
1755 1755 self.evaluation = "unsafe"
1756 1756 self.auto_close_dict_keys = True
1757 1757 self.splitter.delims = GREEDY_DELIMS
1758 1758 else:
1759 1759 self.evaluation = "limited"
1760 1760 self.auto_close_dict_keys = False
1761 1761 self.splitter.delims = DELIMS
1762 1762
1763 1763 dict_keys_only = Bool(
1764 1764 False,
1765 1765 help="""
1766 1766 Whether to show dict key matches only.
1767 1767
1768 1768 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1769 1769 """,
1770 1770 )
1771 1771
1772 1772 suppress_competing_matchers = UnionTrait(
1773 1773 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1774 1774 default_value=None,
1775 1775 help="""
1776 1776 Whether to suppress completions from other *Matchers*.
1777 1777
1778 1778 When set to ``None`` (default) the matchers will attempt to auto-detect
1779 1779 whether suppression of other matchers is desirable. For example, at
1780 1780         whether suppression of other matchers is desirable. For example, after
1781 1781         ``%`` at the beginning of a line we expect a magic completion
1782 1782 expect a completion with an existing dictionary key.
1783 1783
1784 1784 If you want to disable this heuristic and see completions from all matchers,
1785 1785 set ``IPCompleter.suppress_competing_matchers = False``.
1786 1786 To disable the heuristic for specific matchers provide a dictionary mapping:
1787 1787 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1788 1788
1789 1789 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1790 1790 completions to the set of matchers with the highest priority;
1791 1791 this is equivalent to ``IPCompleter.merge_completions`` and
1792 1792 can be beneficial for performance, but will sometimes omit relevant
1793 1793 candidates from matchers further down the priority list.
1794 1794 """,
1795 1795 ).tag(config=True)
1796 1796
1797 1797 merge_completions = Bool(
1798 1798 True,
1799 1799 help="""Whether to merge completion results into a single list
1800 1800
1801 1801 If False, only the completion results from the first non-empty
1802 1802 completer will be returned.
1803 1803
1804 1804 As of version 8.6.0, setting the value to ``False`` is an alias for:
1805 1805         ``IPCompleter.suppress_competing_matchers = True``.
1806 1806 """,
1807 1807 ).tag(config=True)
1808 1808
1809 1809 disable_matchers = ListTrait(
1810 1810 Unicode(),
1811 1811 help="""List of matchers to disable.
1812 1812
1813 1813 The list should contain matcher identifiers (see :any:`completion_matcher`).
1814 1814 """,
1815 1815 ).tag(config=True)
1816 1816
1817 1817 omit__names = Enum(
1818 1818 (0, 1, 2),
1819 1819 default_value=2,
1820 1820 help="""Instruct the completer to omit private method names
1821 1821
1822 1822 Specifically, when completing on ``object.<tab>``.
1823 1823
1824 1824 When 2 [default]: all names that start with '_' will be excluded.
1825 1825
1826 1826 When 1: all 'magic' names (``__foo__``) will be excluded.
1827 1827
1828 1828 When 0: nothing will be excluded.
1829 1829 """
1830 1830 ).tag(config=True)
1831 1831 limit_to__all__ = Bool(False,
1832 1832 help="""
1833 1833 DEPRECATED as of version 5.0.
1834 1834
1835 1835 Instruct the completer to use __all__ for the completion
1836 1836
1837 1837 Specifically, when completing on ``object.<tab>``.
1838 1838
1839 1839 When True: only those names in obj.__all__ will be included.
1840 1840
1841 1841 When False [default]: the __all__ attribute is ignored
1842 1842 """,
1843 1843 ).tag(config=True)
1844 1844
1845 1845 profile_completions = Bool(
1846 1846 default_value=False,
1847 1847 help="If True, emit profiling data for completion subsystem using cProfile."
1848 1848 ).tag(config=True)
1849 1849
1850 1850 profiler_output_dir = Unicode(
1851 1851 default_value=".completion_profiles",
1852 1852 help="Template for path at which to output profile data for completions."
1853 1853 ).tag(config=True)
1854 1854
1855 1855 @observe('limit_to__all__')
1856 1856 def _limit_to_all_changed(self, change):
1857 1857 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1858 1858             'value has been deprecated since IPython 5.0, will be made to have '
1859 1859             'no effect and then removed in a future version of IPython.',
1860 1860 UserWarning)
1861 1861
1862 1862 def __init__(
1863 1863 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1864 1864 ):
1865 1865 """IPCompleter() -> completer
1866 1866
1867 1867 Return a completer object.
1868 1868
1869 1869 Parameters
1870 1870 ----------
1871 1871 shell
1872 1872 a pointer to the ipython shell itself. This is needed
1873 1873 because this completer knows about magic functions, and those can
1874 1874 only be accessed via the ipython instance.
1875 1875 namespace : dict, optional
1876 1876 an optional dict where completions are performed.
1877 1877 global_namespace : dict, optional
1878 1878 secondary optional dict for completions, to
1879 1879 handle cases (such as IPython embedded inside functions) where
1880 1880 both Python scopes are visible.
1881 1881 config : Config
1882 1882 traitlet's config object
1883 1883 **kwargs
1884 1884 passed to super class unmodified.
1885 1885 """
1886 1886
1887 1887 self.magic_escape = ESC_MAGIC
1888 1888 self.splitter = CompletionSplitter()
1889 1889
1890 1890 # _greedy_changed() depends on splitter and readline being defined:
1891 1891 super().__init__(
1892 1892 namespace=namespace,
1893 1893 global_namespace=global_namespace,
1894 1894 config=config,
1895 1895 **kwargs,
1896 1896 )
1897 1897
1898 1898 # List where completion matches will be stored
1899 1899 self.matches = []
1900 1900 self.shell = shell
1901 1901 # Regexp to split filenames with spaces in them
1902 1902 self.space_name_re = re.compile(r'([^\\] )')
1903 1903 # Hold a local ref. to glob.glob for speed
1904 1904 self.glob = glob.glob
1905 1905
1906 1906 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1907 1907 # buffers, to avoid completion problems.
1908 1908 term = os.environ.get('TERM','xterm')
1909 1909 self.dumb_terminal = term in ['dumb','emacs']
1910 1910
1911 1911 # Special handling of backslashes needed in win32 platforms
1912 1912 if sys.platform == "win32":
1913 1913 self.clean_glob = self._clean_glob_win32
1914 1914 else:
1915 1915 self.clean_glob = self._clean_glob
1916 1916
1917 1917 #regexp to parse docstring for function signature
1918 1918 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1919 1919 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1920 1920 #use this if positional argument name is also needed
1921 1921 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1922 1922
1923 1923 self.magic_arg_matchers = [
1924 1924 self.magic_config_matcher,
1925 1925 self.magic_color_matcher,
1926 1926 ]
1927 1927
1928 1928 # This is set externally by InteractiveShell
1929 1929 self.custom_completers = None
1930 1930
1931 1931 # This is a list of names of unicode characters that can be completed
1932 1932 # into their corresponding unicode value. The list is large, so we
1933 1933 # lazily initialize it on first use. Consuming code should access this
1934 1934 # attribute through the `@unicode_names` property.
1935 1935 self._unicode_names = None
1936 1936
1937 1937 self._backslash_combining_matchers = [
1938 1938 self.latex_name_matcher,
1939 1939 self.unicode_name_matcher,
1940 1940 back_latex_name_matcher,
1941 1941 back_unicode_name_matcher,
1942 1942 self.fwd_unicode_matcher,
1943 1943 ]
1944 1944
1945 1945 if not self.backslash_combining_completions:
1946 1946 for matcher in self._backslash_combining_matchers:
1947 1947 self.disable_matchers.append(_get_matcher_id(matcher))
1948 1948
1949 1949 if not self.merge_completions:
1950 1950 self.suppress_competing_matchers = True
1951 1951
1952 1952 @property
1953 1953 def matchers(self) -> List[Matcher]:
1954 1954 """All active matcher routines for completion"""
1955 1955 if self.dict_keys_only:
1956 1956 return [self.dict_key_matcher]
1957 1957
1958 1958 if self.use_jedi:
1959 1959 return [
1960 1960 *self.custom_matchers,
1961 1961 *self._backslash_combining_matchers,
1962 1962 *self.magic_arg_matchers,
1963 1963 self.custom_completer_matcher,
1964 1964 self.magic_matcher,
1965 1965 self._jedi_matcher,
1966 1966 self.dict_key_matcher,
1967 1967 self.file_matcher,
1968 1968 ]
1969 1969 else:
1970 1970 return [
1971 1971 *self.custom_matchers,
1972 1972 *self._backslash_combining_matchers,
1973 1973 *self.magic_arg_matchers,
1974 1974 self.custom_completer_matcher,
1975 1975 self.dict_key_matcher,
1976 1976 # TODO: convert python_matches to v2 API
1977 1977 self.magic_matcher,
1978 1978 self.python_matches,
1979 1979 self.file_matcher,
1980 1980 self.python_func_kw_matcher,
1981 1981 ]
1982 1982
1983 1983 def all_completions(self, text:str) -> List[str]:
1984 1984 """
1985 1985 Wrapper around the completion methods for the benefit of emacs.
1986 1986 """
1987 1987 prefix = text.rpartition('.')[0]
1988 1988 with provisionalcompleter():
1989 1989 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1990 1990 for c in self.completions(text, len(text))]
1993 1993
1994 1994 def _clean_glob(self, text:str):
1995 1995 return self.glob("%s*" % text)
1996 1996
1997 1997 def _clean_glob_win32(self, text:str):
1998 1998 return [f.replace("\\","/")
1999 1999 for f in self.glob("%s*" % text)]
2000 2000
2001 2001 @context_matcher()
2002 2002 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2003 2003 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2004 2004 matches = self.file_matches(context.token)
2005 2005 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2006 2006 # starts with `/home/`, `C:\`, etc)
2007 2007 return _convert_matcher_v1_result_to_v2(matches, type="path")
2008 2008
2009 2009 def file_matches(self, text: str) -> List[str]:
2010 2010 """Match filenames, expanding ~USER type strings.
2011 2011
2012 2012 Most of the seemingly convoluted logic in this completer is an
2013 2013 attempt to handle filenames with spaces in them. And yet it's not
2014 2014 quite perfect, because Python's readline doesn't expose all of the
2015 2015 GNU readline details needed for this to be done correctly.
2016 2016
2017 2017 For a filename with a space in it, the printed completions will be
2018 2018 only the parts after what's already been typed (instead of the
2019 2019 full completions, as is normally done). I don't think with the
2020 2020 current (as of Python 2.3) Python readline it's possible to do
2021 2021 better.
2022 2022
2023 2023 .. deprecated:: 8.6
2024 2024 You can use :meth:`file_matcher` instead.
2025 2025 """
2026 2026
2027 2027 # chars that require escaping with backslash - i.e. chars
2028 2028 # that readline treats incorrectly as delimiters, but we
2029 2029 # don't want to treat as delimiters in filename matching
2030 2030 # when escaped with backslash
2031 2031 if text.startswith('!'):
2032 2032 text = text[1:]
2033 2033 text_prefix = u'!'
2034 2034 else:
2035 2035 text_prefix = u''
2036 2036
2037 2037 text_until_cursor = self.text_until_cursor
2038 2038 # track strings with open quotes
2039 2039 open_quotes = has_open_quotes(text_until_cursor)
2040 2040
2041 2041 if '(' in text_until_cursor or '[' in text_until_cursor:
2042 2042 lsplit = text
2043 2043 else:
2044 2044 try:
2045 2045 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2046 2046 lsplit = arg_split(text_until_cursor)[-1]
2047 2047 except ValueError:
2048 2048 # typically an unmatched ", or backslash without escaped char.
2049 2049 if open_quotes:
2050 2050 lsplit = text_until_cursor.split(open_quotes)[-1]
2051 2051 else:
2052 2052 return []
2053 2053 except IndexError:
2054 2054 # tab pressed on empty line
2055 2055 lsplit = ""
2056 2056
2057 2057 if not open_quotes and lsplit != protect_filename(lsplit):
2058 2058 # if protectables are found, do matching on the whole escaped name
2059 2059 has_protectables = True
2060 2060 text0,text = text,lsplit
2061 2061 else:
2062 2062 has_protectables = False
2063 2063 text = os.path.expanduser(text)
2064 2064
2065 2065 if text == "":
2066 2066 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2067 2067
2068 2068 # Compute the matches from the filesystem
2069 2069 if sys.platform == 'win32':
2070 2070 m0 = self.clean_glob(text)
2071 2071 else:
2072 2072 m0 = self.clean_glob(text.replace('\\', ''))
2073 2073
2074 2074 if has_protectables:
2075 2075 # If we had protectables, we need to revert our changes to the
2076 2076 # beginning of filename so that we don't double-write the part
2077 2077 # of the filename we have so far
2078 2078 len_lsplit = len(lsplit)
2079 2079 matches = [text_prefix + text0 +
2080 2080 protect_filename(f[len_lsplit:]) for f in m0]
2081 2081 else:
2082 2082 if open_quotes:
2083 2083 # if we have a string with an open quote, we don't need to
2084 2084 # protect the names beyond the quote (and we _shouldn't_, as
2085 2085 # it would cause bugs when the filesystem call is made).
2086 2086 matches = m0 if sys.platform == "win32" else\
2087 2087 [protect_filename(f, open_quotes) for f in m0]
2088 2088 else:
2089 2089 matches = [text_prefix +
2090 2090 protect_filename(f) for f in m0]
2091 2091
2092 2092 # Mark directories in input list by appending '/' to their names.
2093 2093 return [x+'/' if os.path.isdir(x) else x for x in matches]
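The final directory-marking step can be seen in a small standalone sketch that globs a prefix directly, skipping the escaping and quoting logic of the full method:

```python
import glob
import os
import tempfile

def file_matches_sketch(text):
    # simplified: glob on the prefix, then mark directories with a
    # trailing '/', as the full method does on its matches
    matches = glob.glob(text + "*")
    return [m + "/" if os.path.isdir(m) else m for m in matches]

with tempfile.TemporaryDirectory() as tmp:
    os.mkdir(os.path.join(tmp, "data"))
    open(os.path.join(tmp, "data.txt"), "w").close()
    found = file_matches_sketch(os.path.join(tmp, "da"))
    # the directory is marked with '/', the plain file is not
```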
2094 2094
2095 2095 @context_matcher()
2096 2096 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2097 2097 """Match magics."""
2098 2098 text = context.token
2099 2099 matches = self.magic_matches(text)
2100 2100 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2101 2101 is_magic_prefix = len(text) > 0 and text[0] == "%"
2102 2102 result["suppress"] = is_magic_prefix and bool(result["completions"])
2103 2103 return result
2104 2104
2105 2105 def magic_matches(self, text: str):
2106 2106 """Match magics.
2107 2107
2108 2108 .. deprecated:: 8.6
2109 2109 You can use :meth:`magic_matcher` instead.
2110 2110 """
2111 2111 # Get all shell magics now rather than statically, so magics loaded at
2112 2112 # runtime show up too.
2113 2113 lsm = self.shell.magics_manager.lsmagic()
2114 2114 line_magics = lsm['line']
2115 2115 cell_magics = lsm['cell']
2116 2116 pre = self.magic_escape
2117 2117 pre2 = pre+pre
2118 2118
2119 2119 explicit_magic = text.startswith(pre)
2120 2120
2121 2121 # Completion logic:
2122 2122 # - user gives %%: only do cell magics
2123 2123 # - user gives %: do both line and cell magics
2124 2124 # - no prefix: do both
2125 2125 # In other words, line magics are skipped if the user gives %% explicitly
2126 2126 #
2127 2127 # We also exclude magics that match any currently visible names:
2128 2128 # https://github.com/ipython/ipython/issues/4877, unless the user has
2129 2129 # typed a %:
2130 2130 # https://github.com/ipython/ipython/issues/10754
2131 2131 bare_text = text.lstrip(pre)
2132 2132 global_matches = self.global_matches(bare_text)
2133 2133 if not explicit_magic:
2134 2134 def matches(magic):
2135 2135 """
2136 2136 Filter magics, in particular remove magics that match
2137 2137                 a name present in the global namespace.
2138 2138 """
2139 2139 return ( magic.startswith(bare_text) and
2140 2140 magic not in global_matches )
2141 2141 else:
2142 2142 def matches(magic):
2143 2143 return magic.startswith(bare_text)
2144 2144
2145 2145 comp = [ pre2+m for m in cell_magics if matches(m)]
2146 2146 if not text.startswith(pre2):
2147 2147 comp += [ pre+m for m in line_magics if matches(m)]
2148 2148
2149 2149 return comp
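The completion rules in the comment above can be exercised in isolation; this sketch takes the magic names as parameters, since the real method pulls them from `lsmagic()` at runtime (the names below are hypothetical stand-ins):

```python
def magic_matches_sketch(text, line_magics, cell_magics, global_names=()):
    pre, pre2 = "%", "%%"
    explicit_magic = text.startswith(pre)
    bare = text.lstrip(pre)

    def ok(magic):
        # magics shadowed by visible names are dropped unless '%' was typed
        return magic.startswith(bare) and (explicit_magic or magic not in global_names)

    comp = [pre2 + m for m in cell_magics if ok(m)]
    if not text.startswith(pre2):
        # line magics are skipped only when the user gives %% explicitly
        comp += [pre + m for m in line_magics if ok(m)]
    return comp

line_magics = ["time", "timeit"]
cell_magics = ["timeit", "writefile"]
```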
2150 2150
2151 2151 @context_matcher()
2152 2152 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2153 2153 """Match class names and attributes for %config magic."""
2154 2154 # NOTE: uses `line_buffer` equivalent for compatibility
2155 2155 matches = self.magic_config_matches(context.line_with_cursor)
2156 2156 return _convert_matcher_v1_result_to_v2(matches, type="param")
2157 2157
2158 2158 def magic_config_matches(self, text: str) -> List[str]:
2159 2159 """Match class names and attributes for %config magic.
2160 2160
2161 2161 .. deprecated:: 8.6
2162 2162 You can use :meth:`magic_config_matcher` instead.
2163 2163 """
2164 2164 texts = text.strip().split()
2165 2165
2166 2166 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2167 2167 # get all configuration classes
2168 2168 classes = sorted(set([ c for c in self.shell.configurables
2169 2169 if c.__class__.class_traits(config=True)
2170 2170 ]), key=lambda x: x.__class__.__name__)
2171 2171 classnames = [ c.__class__.__name__ for c in classes ]
2172 2172
2173 2173 # return all classnames if config or %config is given
2174 2174 if len(texts) == 1:
2175 2175 return classnames
2176 2176
2177 2177 # match classname
2178 2178 classname_texts = texts[1].split('.')
2179 2179 classname = classname_texts[0]
2180 2180 classname_matches = [ c for c in classnames
2181 2181 if c.startswith(classname) ]
2182 2182
2183 2183 # return matched classes or the matched class with attributes
2184 2184 if texts[1].find('.') < 0:
2185 2185 return classname_matches
2186 2186 elif len(classname_matches) == 1 and \
2187 2187 classname_matches[0] == classname:
2188 2188 cls = classes[classnames.index(classname)].__class__
2189 2189 help = cls.class_get_help()
2190 2190 # strip leading '--' from cl-args:
2191 2191 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2192 2192 return [ attr.split('=')[0]
2193 2193 for attr in help.strip().splitlines()
2194 2194 if attr.startswith(texts[1]) ]
2195 2195 return []
2196 2196
2197 2197 @context_matcher()
2198 2198 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2199 2199 """Match color schemes for %colors magic."""
2200 2200 # NOTE: uses `line_buffer` equivalent for compatibility
2201 2201 matches = self.magic_color_matches(context.line_with_cursor)
2202 2202 return _convert_matcher_v1_result_to_v2(matches, type="param")
2203 2203
2204 2204 def magic_color_matches(self, text: str) -> List[str]:
2205 2205 """Match color schemes for %colors magic.
2206 2206
2207 2207 .. deprecated:: 8.6
2208 2208 You can use :meth:`magic_color_matcher` instead.
2209 2209 """
2210 2210 texts = text.split()
2211 2211 if text.endswith(' '):
2212 2212 # .split() strips off the trailing whitespace. Add '' back
2213 2213 # so that: '%colors ' -> ['%colors', '']
2214 2214 texts.append('')
2215 2215
2216 2216 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2217 2217 prefix = texts[1]
2218 2218 return [ color for color in InspectColors.keys()
2219 2219 if color.startswith(prefix) ]
2220 2220 return []
2221 2221
2222 2222 @context_matcher(identifier="IPCompleter.jedi_matcher")
2223 2223 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2224 2224 matches = self._jedi_matches(
2225 2225 cursor_column=context.cursor_position,
2226 2226 cursor_line=context.cursor_line,
2227 2227 text=context.full_text,
2228 2228 )
2229 2229 return {
2230 2230 "completions": matches,
2231 2231 # static analysis should not suppress other matchers
2232 2232 "suppress": False,
2233 2233 }
2234 2234
2235 2235 def _jedi_matches(
2236 2236 self, cursor_column: int, cursor_line: int, text: str
2237 2237 ) -> Iterator[_JediCompletionLike]:
2238 2238 """
2239 2239 Return a list of :any:`jedi.api.Completion`s object from a ``text`` and
2240 2240 cursor position.
2241 2241
2242 2242 Parameters
2243 2243 ----------
2244 2244 cursor_column : int
2245 2245 column position of the cursor in ``text``, 0-indexed.
2246 2246 cursor_line : int
2247 2247 line position of the cursor in ``text``, 0-indexed
2248 2248 text : str
2249 2249 text to complete
2250 2250
2251 2251 Notes
2252 2252 -----
2253 2253         If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
2254 2254 object containing a string with the Jedi debug information attached.
2255 2255
2256 2256 .. deprecated:: 8.6
2257 2257 You can use :meth:`_jedi_matcher` instead.
2258 2258 """
2259 2259 namespaces = [self.namespace]
2260 2260 if self.global_namespace is not None:
2261 2261 namespaces.append(self.global_namespace)
2262 2262
2263 2263 completion_filter = lambda x:x
2264 2264 offset = cursor_to_position(text, cursor_line, cursor_column)
2265 2265 # filter output if we are completing for object members
2266 2266 if offset:
2267 2267 pre = text[offset-1]
2268 2268 if pre == '.':
2269 2269 if self.omit__names == 2:
2270 2270 completion_filter = lambda c:not c.name.startswith('_')
2271 2271 elif self.omit__names == 1:
2272 2272 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2273 2273 elif self.omit__names == 0:
2274 2274 completion_filter = lambda x:x
2275 2275 else:
2276 2276 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2277 2277
2278 2278 interpreter = jedi.Interpreter(text[:offset], namespaces)
2279 2279 try_jedi = True
2280 2280
2281 2281 try:
2282 2282 # find the first token in the current tree -- if it is a ' or " then we are in a string
2283 2283 completing_string = False
2284 2284 try:
2285 2285 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2286 2286 except StopIteration:
2287 2287 pass
2288 2288 else:
2289 2289 # note the value may be ', ", or it may also be ''' or """, or
2290 2290 # in some cases, """what/you/typed..., but all of these are
2291 2291 # strings.
2292 2292 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2293 2293
2294 2294 # if we are in a string jedi is likely not the right candidate for
2295 2295 # now. Skip it.
2296 2296 try_jedi = not completing_string
2297 2297 except Exception as e:
2298 2298             # many things can go wrong; we are using a private API, so just don't crash.
2299 2299 if self.debug:
2300 2300                 print("Error detecting if completing a non-finished string:", e, '|')
2301 2301
2302 2302 if not try_jedi:
2303 2303 return iter([])
2304 2304 try:
2305 2305 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2306 2306 except Exception as e:
2307 2307 if self.debug:
2308 2308 return iter(
2309 2309 [
2310 2310 _FakeJediCompletion(
2311 2311                         'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2312 2312 % (e)
2313 2313 )
2314 2314 ]
2315 2315 )
2316 2316 else:
2317 2317 return iter([])
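The `(line, column)` to offset conversion used above (`cursor_to_position`) presumably amounts to summing the lengths of the preceding lines plus their newlines; a minimal sketch under that assumption:

```python
def cursor_to_position_sketch(text: str, line: int, column: int) -> int:
    # line and column are 0-indexed; each preceding line contributes
    # its length plus one for the newline character
    lines = text.split("\n")
    return sum(len(l) + 1 for l in lines[:line]) + column
```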
2318 2318
2319 2319 @completion_matcher(api_version=1)
2320 2320 def python_matches(self, text: str) -> Iterable[str]:
2321 2321 """Match attributes or global python names"""
2322 2322 if "." in text:
2323 2323 try:
2324 2324 matches = self.attr_matches(text)
2325 2325 if text.endswith('.') and self.omit__names:
2326 2326 if self.omit__names == 1:
2327 2327 # true if txt is _not_ a __ name, false otherwise:
2328 2328 no__name = (lambda txt:
2329 2329 re.match(r'.*\.__.*?__',txt) is None)
2330 2330 else:
2331 2331 # true if txt is _not_ a _ name, false otherwise:
2332 2332 no__name = (lambda txt:
2333 2333 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2334 2334 matches = filter(no__name, matches)
2335 2335 except NameError:
2336 2336 # catches <undefined attributes>.<tab>
2337 2337 matches = []
2338 2338 else:
2339 2339 matches = self.global_matches(text)
2340 2340 return matches
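The two `no__name` filters above can be exercised standalone with the same regular expressions, applied to hypothetical attribute matches:

```python
import re

# omit__names == 1: drop dunder names only
no_dunder = lambda txt: re.match(r".*\.__.*?__", txt) is None
# omit__names == 2 (the default): drop anything after '._'
no_underscore = lambda txt: re.match(r"\._.*?", txt[txt.rindex("."):]) is None

matches = ["obj.x", "obj._private", "obj.__init__"]
public = [m for m in matches if no_underscore(m)]   # ['obj.x']
non_dunder = [m for m in matches if no_dunder(m)]   # ['obj.x', 'obj._private']
```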
2341 2341
2342 2342 def _default_arguments_from_docstring(self, doc):
2343 2343 """Parse the first line of docstring for call signature.
2344 2344
2345 2345 Docstring should be of the form 'min(iterable[, key=func])\n'.
2346 2346 It can also parse cython docstring of the form
2347 2347 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2348 2348 """
2349 2349 if doc is None:
2350 2350 return []
2351 2351
2352 2352         # care only about the first line
2353 2353 line = doc.lstrip().splitlines()[0]
2354 2354
2355 2355 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2356 2356 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2357 2357 sig = self.docstring_sig_re.search(line)
2358 2358 if sig is None:
2359 2359 return []
2360 2360         # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
2361 2361 sig = sig.groups()[0].split(',')
2362 2362 ret = []
2363 2363 for s in sig:
2364 2364 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2365 2365 ret += self.docstring_kwd_re.findall(s)
2366 2366 return ret
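Using the two regular expressions compiled in `__init__` (`docstring_sig_re` and `docstring_kwd_re`), the parse above works like this on the docstring from its own example:

```python
import re

docstring_sig_re = re.compile(r"^[\w|\s.]+\(([^)]*)\).*")
docstring_kwd_re = re.compile(r"[\s|\[]*(\w+)(?:\s*=\s*.*)")

line = "min(iterable[, key=func])"
sig = docstring_sig_re.search(line).groups()[0]   # 'iterable[, key=func]'
ret = []
for s in sig.split(","):
    # only fragments containing '=' yield a keyword name
    ret += docstring_kwd_re.findall(s)
# ret == ['key']
```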
2367 2367
2368 2368 def _default_arguments(self, obj):
2369 2369 """Return the list of default arguments of obj if it is callable,
2370 2370 or empty list otherwise."""
2371 2371 call_obj = obj
2372 2372 ret = []
2373 2373 if inspect.isbuiltin(obj):
2374 2374 pass
2375 2375 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2376 2376 if inspect.isclass(obj):
2377 2377 #for cython embedsignature=True the constructor docstring
2378 2378 #belongs to the object itself not __init__
2379 2379 ret += self._default_arguments_from_docstring(
2380 2380 getattr(obj, '__doc__', ''))
2381 2381 # for classes, check for __init__,__new__
2382 2382 call_obj = (getattr(obj, '__init__', None) or
2383 2383 getattr(obj, '__new__', None))
2384 2384 # for all others, check if they are __call__able
2385 2385 elif hasattr(obj, '__call__'):
2386 2386 call_obj = obj.__call__
2387 2387 ret += self._default_arguments_from_docstring(
2388 2388 getattr(call_obj, '__doc__', ''))
2389 2389
2390 2390 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2391 2391 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2392 2392
2393 2393 try:
2394 2394 sig = inspect.signature(obj)
2395 2395 ret.extend(k for k, v in sig.parameters.items() if
2396 2396 v.kind in _keeps)
2397 2397 except ValueError:
2398 2398 pass
2399 2399
2400 2400 return list(set(ret))
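The `inspect.signature` part of the lookup can be shown on a hypothetical function; `*args` and `**kwargs` fall outside the kept parameter kinds:

```python
import inspect

def example(a, b=1, *args, c=2, **kwargs):  # hypothetical target
    pass

_keeps = (inspect.Parameter.KEYWORD_ONLY,
          inspect.Parameter.POSITIONAL_OR_KEYWORD)

sig = inspect.signature(example)
names = [k for k, v in sig.parameters.items() if v.kind in _keeps]
# names == ['a', 'b', 'c'] -- *args and **kwargs are excluded
```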
2401 2401
2402 2402 @context_matcher()
2403 2403 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2404 2404 """Match named parameters (kwargs) of the last open function."""
2405 2405 matches = self.python_func_kw_matches(context.token)
2406 2406 return _convert_matcher_v1_result_to_v2(matches, type="param")
2407 2407
2408 2408 def python_func_kw_matches(self, text):
2409 2409 """Match named parameters (kwargs) of the last open function.
2410 2410
2411 2411 .. deprecated:: 8.6
2412 2412 You can use :meth:`python_func_kw_matcher` instead.
2413 2413 """
2414 2414
2415 2415 if "." in text: # a parameter cannot be dotted
2416 2416 return []
2417 2417 try: regexp = self.__funcParamsRegex
2418 2418 except AttributeError:
2419 2419 regexp = self.__funcParamsRegex = re.compile(r'''
2420 2420 '.*?(?<!\\)' | # single quoted strings or
2421 2421 ".*?(?<!\\)" | # double quoted strings or
2422 2422 \w+ | # identifier
2423 2423 \S # other characters
2424 2424 ''', re.VERBOSE | re.DOTALL)
2425 2425 # 1. find the nearest identifier that comes before an unclosed
2426 2426 # parenthesis before the cursor
2427 2427 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2428 2428 tokens = regexp.findall(self.text_until_cursor)
2429 2429 iterTokens = reversed(tokens); openPar = 0
2430 2430
2431 2431 for token in iterTokens:
2432 2432 if token == ')':
2433 2433 openPar -= 1
2434 2434 elif token == '(':
2435 2435 openPar += 1
2436 2436 if openPar > 0:
2437 2437 # found the last unclosed parenthesis
2438 2438 break
2439 2439 else:
2440 2440 return []
2441 2441 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2442 2442 ids = []
2443 2443 isId = re.compile(r'\w+$').match
2444 2444
2445 2445 while True:
2446 2446 try:
2447 2447 ids.append(next(iterTokens))
2448 2448 if not isId(ids[-1]):
2449 2449 ids.pop(); break
2450 2450 if not next(iterTokens) == '.':
2451 2451 break
2452 2452 except StopIteration:
2453 2453 break
2454 2454
2455 2455         # Find all named arguments already assigned to, so as to avoid suggesting
2456 2456 # them again
2457 2457 usedNamedArgs = set()
2458 2458 par_level = -1
2459 2459 for token, next_token in zip(tokens, tokens[1:]):
2460 2460 if token == '(':
2461 2461 par_level += 1
2462 2462 elif token == ')':
2463 2463 par_level -= 1
2464 2464
2465 2465 if par_level != 0:
2466 2466 continue
2467 2467
2468 2468 if next_token != '=':
2469 2469 continue
2470 2470
2471 2471 usedNamedArgs.add(token)
2472 2472
2473 2473 argMatches = []
2474 2474 try:
2475 2475 callableObj = '.'.join(ids[::-1])
2476 2476 namedArgs = self._default_arguments(eval(callableObj,
2477 2477 self.namespace))
2478 2478
2479 2479 # Remove used named arguments from the list, no need to show twice
2480 2480 for namedArg in set(namedArgs) - usedNamedArgs:
2481 2481 if namedArg.startswith(text):
2482 2482 argMatches.append("%s=" %namedArg)
2483 2483 except:
2484 2484 pass
2485 2485
2486 2486 return argMatches
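Step 1 above, locating the identifier before the last unclosed parenthesis, can be sketched standalone with the same token regular expression (step 2's dotted-name concatenation included):

```python
import re

token_re = re.compile(r"""
    '.*?(?<!\\)' |   # single quoted strings or
    ".*?(?<!\\)" |   # double quoted strings or
    \w+ |            # identifier
    \S               # other characters
    """, re.VERBOSE | re.DOTALL)

def func_before_open_paren(text_until_cursor):
    tokens = token_re.findall(text_until_cursor)
    it = reversed(tokens)
    open_par = 0
    for token in it:
        if token == ")":
            open_par -= 1
        elif token == "(":
            open_par += 1
            if open_par > 0:
                break  # found the last unclosed parenthesis
    else:
        return None
    # concatenate dotted names right before the parenthesis
    is_id = re.compile(r"\w+$").match
    ids = []
    while True:
        try:
            ids.append(next(it))
            if not is_id(ids[-1]):
                ids.pop()
                break
            if next(it) != ".":
                break
        except StopIteration:
            break
    return ".".join(ids[::-1]) or None
```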
2487 2487
2488 2488 @staticmethod
2489 2489 def _get_keys(obj: Any) -> List[Any]:
2490 2490 # Objects can define their own completions by defining an
2491 2491         # _ipython_key_completions_() method.
2492 2492 method = get_real_method(obj, '_ipython_key_completions_')
2493 2493 if method is not None:
2494 2494 return method()
2495 2495
2496 2496 # Special case some common in-memory dict-like types
2497 2497 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2498 2498 try:
2499 2499 return list(obj.keys())
2500 2500 except Exception:
2501 2501 return []
2502 2502 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2503 2503 try:
2504 2504 return list(obj.obj.keys())
2505 2505 except Exception:
2506 2506 return []
2507 2507 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2508 2508 _safe_isinstance(obj, 'numpy', 'void'):
2509 2509 return obj.dtype.names or []
2510 2510 return []
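The `_ipython_key_completions_` protocol checked first here lets any object supply its own completion keys; below is a hypothetical object and a simplified lookup (using plain `getattr` in place of `get_real_method`, and only the dict fallback):

```python
class FakeStore:
    """Hypothetical object implementing the key-completion protocol."""
    def _ipython_key_completions_(self):
        return ["alpha", "beta"]

def get_keys_sketch(obj):
    # mirrors _get_keys: protocol method first, then dict-like fallback
    method = getattr(obj, "_ipython_key_completions_", None)
    if method is not None:
        return method()
    if isinstance(obj, dict):
        try:
            return list(obj.keys())
        except Exception:
            return []
    return []
```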
2511 2511
2512 2512 @context_matcher()
2513 2513 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2514 2514 """Match string keys in a dictionary, after e.g. ``foo[``."""
2515 2515 matches = self.dict_key_matches(context.token)
2516 2516 return _convert_matcher_v1_result_to_v2(
2517 2517 matches, type="dict key", suppress_if_matches=True
2518 2518 )
2519 2519
2520 2520 def dict_key_matches(self, text: str) -> List[str]:
2521 2521 """Match string keys in a dictionary, after e.g. ``foo[``.
2522 2522
2523 2523 .. deprecated:: 8.6
2524 2524 You can use :meth:`dict_key_matcher` instead.
2525 2525 """
2526 2526
2527 2527 # Short-circuit on closed dictionary (regular expression would
2528 2528 # not match anyway, but would take quite a while).
2529 2529 if self.text_until_cursor.strip().endswith("]"):
2530 2530 return []
2531 2531
2532 2532 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2533 2533
2534 2534 if match is None:
2535 2535 return []
2536 2536
2537 2537 expr, prior_tuple_keys, key_prefix = match.groups()
2538 2538
2539 2539 obj = self._evaluate_expr(expr)
2540 2540
2541 2541 if obj is not_found:
2542 2542 return []
2543 2543
2544 2544 keys = self._get_keys(obj)
2545 2545 if not keys:
2546 2546 return keys
2547 2547
2548 2548 tuple_prefix = guarded_eval(
2549 2549 prior_tuple_keys,
2550 2550 EvaluationContext(
2551 2551 globals=self.global_namespace,
2552 2552 locals=self.namespace,
2553 2553 evaluation=self.evaluation,
2554 2554 in_subscript=True,
2555 2555 ),
2556 2556 )
2557 2557
2558 2558 closing_quote, token_offset, matches = match_dict_keys(
2559 2559 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2560 2560 )
2561 2561 if not matches:
2562 2562 return []
2563 2563
2564 2564 # get the cursor position of
2565 2565 # - the text being completed
2566 2566 # - the start of the key text
2567 2567 # - the start of the completion
2568 2568 text_start = len(self.text_until_cursor) - len(text)
2569 2569 if key_prefix:
2570 2570 key_start = match.start(3)
2571 2571 completion_start = key_start + token_offset
2572 2572 else:
2573 2573 key_start = completion_start = match.end()
2574 2574
2575 2575 # grab the leading prefix, to make sure all completions start with `text`
2576 2576 if text_start > key_start:
2577 2577 leading = ''
2578 2578 else:
2579 2579 leading = text[text_start:completion_start]
2580 2580
2581 2581 # append closing quote and bracket as appropriate
2582 2582 # this is *not* appropriate if the opening quote or bracket is outside
2583 2583 # the text given to this method, e.g. `d["""a\nt
2584 2584 can_close_quote = False
2585 2585 can_close_bracket = False
2586 2586
2587 2587 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2588 2588
2589 2589 if continuation.startswith(closing_quote):
2590 2590 # do not close if already closed, e.g. `d['a<tab>'`
2591 2591 continuation = continuation[len(closing_quote) :]
2592 2592 else:
2593 2593 can_close_quote = True
2594 2594
2595 2595 continuation = continuation.strip()
2596 2596
2597 2597 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2598 2598 # handling it is out of scope, so let's avoid appending suffixes.
2599 2599 has_known_tuple_handling = isinstance(obj, dict)
2600 2600
2601 2601 can_close_bracket = (
2602 2602 not continuation.startswith("]") and self.auto_close_dict_keys
2603 2603 )
2604 2604 can_close_tuple_item = (
2605 2605 not continuation.startswith(",")
2606 2606 and has_known_tuple_handling
2607 2607 and self.auto_close_dict_keys
2608 2608 )
2609 2609 can_close_quote = can_close_quote and self.auto_close_dict_keys
2610 2610
2611 2611 # fast path if a closing quote should be appended but no suffix is allowed
2612 2612 if not can_close_quote and not can_close_bracket and closing_quote:
2613 2613 return [leading + k for k in matches]
2614 2614
2615 2615 results = []
2616 2616
2617 2617 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2618 2618
2619 2619 for k, state_flag in matches.items():
2620 2620 result = leading + k
2621 2621 if can_close_quote and closing_quote:
2622 2622 result += closing_quote
2623 2623
2624 2624 if state_flag == end_of_tuple_or_item:
2625 2625 # We do not know which suffix to add,
2626 2626 # e.g. both tuple item and string
2627 2627 # match this item.
2628 2628 pass
2629 2629
2630 2630 if state_flag in end_of_tuple_or_item and can_close_bracket:
2631 2631 result += "]"
2632 2632 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2633 2633 result += ", "
2634 2634 results.append(result)
2635 2635 return results
2636 2636
2637 2637 @context_matcher()
2638 2638 def unicode_name_matcher(self, context: CompletionContext):
2639 2639 """Same as :any:`unicode_name_matches`, but adapted to the new Matcher API."""
2640 2640 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2641 2641 return _convert_matcher_v1_result_to_v2(
2642 2642 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2643 2643 )
2644 2644
2645 2645 @staticmethod
2646 2646 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2647 2647 """Match Latex-like syntax for unicode characters based
2648 2648 on the name of the character.
2649 2649
2650 2650 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2651 2651
2652 2652 Works only on valid Python 3 identifiers, or on combining characters that
2653 2653 will combine to form a valid identifier.
2654 2654 """
2655 2655 slashpos = text.rfind('\\')
2656 2656 if slashpos > -1:
2657 2657 s = text[slashpos + 1 :]
2658 2658 try:
2659 2659 unic = unicodedata.lookup(s)
2660 2660 # allow combining chars
2661 2661 if ("a" + unic).isidentifier():
2662 2662 return "\\" + s, [unic]
2663 2663 except KeyError:
2664 2664 pass
2665 2665 return '', []
2666 2666
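For reference, the lookup logic above can be exercised as a standalone sketch (the function name `unicode_name_match` here is hypothetical, and this ignores IPython's line splitting; it is not the actual API):

```python
import unicodedata

def unicode_name_match(text):
    """Standalone sketch of backslash + unicode-name expansion."""
    slashpos = text.rfind("\\")
    if slashpos > -1:
        name = text[slashpos + 1 :]
        try:
            char = unicodedata.lookup(name)
        except KeyError:
            return "", []
        # allow combining characters: test identifier-ness with a leading letter
        if ("a" + char).isidentifier():
            return "\\" + name, [char]
    return "", []
```

For example, `unicode_name_match("\\GREEK SMALL LETTER ALPHA")` yields the fragment and the single character `Ξ±`.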
2667 2667 @context_matcher()
2668 2668 def latex_name_matcher(self, context: CompletionContext):
2669 2669 """Match Latex syntax for unicode characters.
2670 2670
2671 2671 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2672 2672 """
2673 2673 fragment, matches = self.latex_matches(context.text_until_cursor)
2674 2674 return _convert_matcher_v1_result_to_v2(
2675 2675 matches, type="latex", fragment=fragment, suppress_if_matches=True
2676 2676 )
2677 2677
2678 2678 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2679 2679 """Match Latex syntax for unicode characters.
2680 2680
2681 2681 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2682 2682
2683 2683 .. deprecated:: 8.6
2684 2684 You can use :meth:`latex_name_matcher` instead.
2685 2685 """
2686 2686 slashpos = text.rfind('\\')
2687 2687 if slashpos > -1:
2688 2688 s = text[slashpos:]
2689 2689 if s in latex_symbols:
2690 2690 # Try to complete a full latex symbol to unicode
2691 2691 # \\alpha -> Ξ±
2692 2692 return s, [latex_symbols[s]]
2693 2693 else:
2694 2694 # If a user has partially typed a latex symbol, give them
2695 2695 # a full list of options \al -> [\aleph, \alpha]
2696 2696 matches = [k for k in latex_symbols if k.startswith(s)]
2697 2697 if matches:
2698 2698 return s, matches
2699 2699 return '', ()
2700 2700
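A minimal sketch of the two-way matching above, using a stand-in symbol table (assumption: the real `latex_symbols` mapping shipped with IPython is much larger; `latex_match` is a hypothetical name):

```python
# Stand-in for IPython's latex_symbols table (the real one is much larger).
latex_symbols = {"\\alpha": "\u03b1", "\\aleph": "\u2135", "\\beta": "\u03b2"}

def latex_match(text):
    """Sketch mirroring latex_matches: \\alp -> [\\alpha, \\aleph], \\alpha -> alpha char."""
    slashpos = text.rfind("\\")
    if slashpos > -1:
        s = text[slashpos:]
        if s in latex_symbols:
            # full symbol: expand to the unicode character
            return s, [latex_symbols[s]]
        # partial symbol: offer all names sharing the prefix
        matches = [k for k in latex_symbols if k.startswith(s)]
        if matches:
            return s, matches
    return "", ()
```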
2701 2701 @context_matcher()
2702 2702 def custom_completer_matcher(self, context):
2703 2703 """Dispatch custom completer.
2704 2704
2705 2705 If a match is found, suppresses all other matchers except for Jedi.
2706 2706 """
2707 2707 matches = self.dispatch_custom_completer(context.token) or []
2708 2708 result = _convert_matcher_v1_result_to_v2(
2709 2709 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2710 2710 )
2711 2711 result["ordered"] = True
2712 2712 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2713 2713 return result
2714 2714
2715 2715 def dispatch_custom_completer(self, text):
2716 2716 """
2717 2717 .. deprecated:: 8.6
2718 2718 You can use :meth:`custom_completer_matcher` instead.
2719 2719 """
2720 2720 if not self.custom_completers:
2721 2721 return
2722 2722
2723 2723 line = self.line_buffer
2724 2724 if not line.strip():
2725 2725 return None
2726 2726
2727 2727 # Create a little structure to pass all the relevant information about
2728 2728 # the current completion to any custom completer.
2729 2729 event = SimpleNamespace()
2730 2730 event.line = line
2731 2731 event.symbol = text
2732 2732 cmd = line.split(None, 1)[0]
2733 2733 event.command = cmd
2734 2734 event.text_until_cursor = self.text_until_cursor
2735 2735
2736 2736 # for foo etc, try also to find completer for %foo
2737 2737 if not cmd.startswith(self.magic_escape):
2738 2738 try_magic = self.custom_completers.s_matches(
2739 2739 self.magic_escape + cmd)
2740 2740 else:
2741 2741 try_magic = []
2742 2742
2743 2743 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2744 2744 try_magic,
2745 2745 self.custom_completers.flat_matches(self.text_until_cursor)):
2746 2746 try:
2747 2747 res = c(event)
2748 2748 if res:
2749 2749 # first, try case sensitive match
2750 2750 withcase = [r for r in res if r.startswith(text)]
2751 2751 if withcase:
2752 2752 return withcase
2753 2753 # if none, then case insensitive ones are ok too
2754 2754 text_low = text.lower()
2755 2755 return [r for r in res if r.lower().startswith(text_low)]
2756 2756 except TryNext:
2757 2757 pass
2758 2758 except KeyboardInterrupt:
2759 2759 """
2760 2760 If a custom completer takes too long,
2761 2761 let the keyboard interrupt abort and return nothing.
2762 2762 """
2763 2763 break
2764 2764
2765 2765 return None
2766 2766
2767 2767 def completions(self, text: str, offset: int) -> Iterator[Completion]:
2768 2768 """
2769 2769 Returns an iterator over the possible completions
2770 2770
2771 2771 .. warning::
2772 2772
2773 2773 Unstable
2774 2774
2775 2775 This function is unstable, API may change without warning.
2776 2776 It will also raise unless used in a proper context manager.
2777 2777
2778 2778 Parameters
2779 2779 ----------
2780 2780 text : str
2781 2781 Full text of the current input, multi line string.
2782 2782 offset : int
2783 2783 Integer representing the position of the cursor in ``text``. Offset
2784 2784 is 0-based indexed.
2785 2785
2786 2786 Yields
2787 2787 ------
2788 2788 Completion
2789 2789
2790 2790 Notes
2791 2791 -----
2792 2792 The cursor on a text can either be seen as being "in between"
2793 2793 characters or "On" a character depending on the interface visible to
2794 2794 the user. For consistency, the cursor being "in between" characters X
2795 2795 and Y is equivalent to the cursor being "on" character Y; that is to say,
2796 2796 the character the cursor is on is considered as being after the cursor.
2797 2797
2798 2798 Combining characters may span more than one position in the
2799 2799 text.
2800 2800
2801 2801 .. note::
2802 2802
2803 2803 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2804 2804 fake Completion token to distinguish completion returned by Jedi
2805 2805 and usual IPython completion.
2806 2806
2807 2807 .. note::
2808 2808
2809 2809 Completions are not completely deduplicated yet. If identical
2810 2810 completions are coming from different sources this function does not
2811 2811 ensure that each completion object will only be present once.
2812 2812 """
2813 2813 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2814 2814 "It may change without warning. "
2815 2815 "Use in the corresponding context manager.",
2816 2816 category=ProvisionalCompleterWarning, stacklevel=2)
2817 2817
2818 2818 seen = set()
2819 2819 profiler: Optional[cProfile.Profile]
2820 2820 try:
2821 2821 if self.profile_completions:
2822 2822 import cProfile
2823 2823 profiler = cProfile.Profile()
2824 2824 profiler.enable()
2825 2825 else:
2826 2826 profiler = None
2827 2827
2828 2828 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2829 2829 if c and (c in seen):
2830 2830 continue
2831 2831 yield c
2832 2832 seen.add(c)
2833 2833 except KeyboardInterrupt:
2834 2834 """if completions take too long and the user sends a keyboard interrupt,
2835 2835 do not crash and return ASAP."""
2836 2836 pass
2837 2837 finally:
2838 2838 if profiler is not None:
2839 2839 profiler.disable()
2840 2840 ensure_dir_exists(self.profiler_output_dir)
2841 2841 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2842 2842 print("Writing profiler output to", output_path)
2843 2843 profiler.dump_stats(output_path)
2844 2844
2845 2845 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2846 2846 """
2847 2847 Core completion method. Same signature as :any:`completions`, with the
2848 2848 extra ``_timeout`` parameter (in seconds).
2849 2849
2850 2850 Computing jedi's completion ``.type`` can be quite expensive (it is a
2851 2851 lazy property) and can require some warm-up, more warm-up than just
2852 2852 computing the ``name`` of a completion. The warm-up can be:
2853 2853
2854 2854 - Long warm-up the first time a module is encountered after
2855 2855 install/update: actually build parse/inference tree.
2856 2856
2857 2857 - first time the module is encountered in a session: load tree from
2858 2858 disk.
2859 2859
2860 2860 We don't want to block completions for tens of seconds so we give the
2861 2861 completer a "budget" of ``_timeout`` seconds per invocation to compute
2862 2862 completion types; the completions that have not yet been computed will
2863 2863 be marked as "unknown" and will have a chance to be computed next round
2864 2864 as things get cached.
2865 2865
2866 2866 Keep in mind that Jedi is not the only thing processing the completion,
2867 2867 so keep the timeout short-ish: if we take more than 0.3 seconds we
2868 2868 still have lots of processing to do.
2869 2869
2870 2870 """
2871 2871 deadline = time.monotonic() + _timeout
2872 2872
2873 2873 before = full_text[:offset]
2874 2874 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2875 2875
2876 2876 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2877 2877
2878 2878 def is_non_jedi_result(
2879 2879 result: MatcherResult, identifier: str
2880 2880 ) -> TypeGuard[SimpleMatcherResult]:
2881 2881 return identifier != jedi_matcher_id
2882 2882
2883 2883 results = self._complete(
2884 2884 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2885 2885 )
2886 2886
2887 2887 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2888 2888 identifier: result
2889 2889 for identifier, result in results.items()
2890 2890 if is_non_jedi_result(result, identifier)
2891 2891 }
2892 2892
2893 2893 jedi_matches = (
2894 2894 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2895 2895 if jedi_matcher_id in results
2896 2896 else ()
2897 2897 )
2898 2898
2899 2899 iter_jm = iter(jedi_matches)
2900 2900 if _timeout:
2901 2901 for jm in iter_jm:
2902 2902 try:
2903 2903 type_ = jm.type
2904 2904 except Exception:
2905 2905 if self.debug:
2906 2906 print("Error in Jedi getting type of ", jm)
2907 2907 type_ = None
2908 2908 delta = len(jm.name_with_symbols) - len(jm.complete)
2909 2909 if type_ == 'function':
2910 2910 signature = _make_signature(jm)
2911 2911 else:
2912 2912 signature = ''
2913 2913 yield Completion(start=offset - delta,
2914 2914 end=offset,
2915 2915 text=jm.name_with_symbols,
2916 2916 type=type_,
2917 2917 signature=signature,
2918 2918 _origin='jedi')
2919 2919
2920 2920 if time.monotonic() > deadline:
2921 2921 break
2922 2922
2923 2923 for jm in iter_jm:
2924 2924 delta = len(jm.name_with_symbols) - len(jm.complete)
2925 2925 yield Completion(
2926 2926 start=offset - delta,
2927 2927 end=offset,
2928 2928 text=jm.name_with_symbols,
2929 2929 type=_UNKNOWN_TYPE, # don't compute type for speed
2930 2930 _origin="jedi",
2931 2931 signature="",
2932 2932 )
2933 2933
2934 2934 # TODO:
2935 2935 # Suppress this, right now just for debug.
2936 2936 if jedi_matches and non_jedi_results and self.debug:
2937 2937 some_start_offset = before.rfind(
2938 2938 next(iter(non_jedi_results.values()))["matched_fragment"]
2939 2939 )
2940 2940 yield Completion(
2941 2941 start=some_start_offset,
2942 2942 end=offset,
2943 2943 text="--jedi/ipython--",
2944 2944 _origin="debug",
2945 2945 type="none",
2946 2946 signature="",
2947 2947 )
2948 2948
2949 2949 ordered: List[Completion] = []
2950 2950 sortable: List[Completion] = []
2951 2951
2952 2952 for origin, result in non_jedi_results.items():
2953 2953 matched_text = result["matched_fragment"]
2954 2954 start_offset = before.rfind(matched_text)
2955 2955 is_ordered = result.get("ordered", False)
2956 2956 container = ordered if is_ordered else sortable
2957 2957
2958 2958 # I'm unsure if this is always true, so let's assert and see if it
2959 2959 # crashes
2960 2960 assert before.endswith(matched_text)
2961 2961
2962 2962 for simple_completion in result["completions"]:
2963 2963 completion = Completion(
2964 2964 start=start_offset,
2965 2965 end=offset,
2966 2966 text=simple_completion.text,
2967 2967 _origin=origin,
2968 2968 signature="",
2969 2969 type=simple_completion.type or _UNKNOWN_TYPE,
2970 2970 )
2971 2971 container.append(completion)
2972 2972
2973 2973 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2974 2974 :MATCHES_LIMIT
2975 2975 ]
2976 2976
2977 2977 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2978 2978 """Find completions for the given text and line context.
2979 2979
2980 2980 Note that both the text and the line_buffer are optional, but at least
2981 2981 one of them must be given.
2982 2982
2983 2983 Parameters
2984 2984 ----------
2985 2985 text : string, optional
2986 2986 Text to perform the completion on. If not given, the line buffer
2987 2987 is split using the instance's CompletionSplitter object.
2988 2988 line_buffer : string, optional
2989 2989 If not given, the completer attempts to obtain the current line
2990 2990 buffer via readline. This keyword allows clients which are
2991 2991 requesting for text completions in non-readline contexts to inform
2992 2992 the completer of the entire text.
2993 2993 cursor_pos : int, optional
2994 2994 Index of the cursor in the full line buffer. Should be provided by
2995 2995 remote frontends where kernel has no access to frontend state.
2996 2996
2997 2997 Returns
2998 2998 -------
2999 2999 Tuple of two items:
3000 3000 text : str
3001 3001 Text that was actually used in the completion.
3002 3002 matches : list
3003 3003 A list of completion matches.
3004 3004
3005 3005 Notes
3006 3006 -----
3007 3007 This API is likely to be deprecated and replaced by
3008 3008 :any:`IPCompleter.completions` in the future.
3009 3009
3010 3010 """
3011 3011 warnings.warn('`Completer.complete` is pending deprecation since '
3012 3012 'IPython 6.0 and will be replaced by `Completer.completions`.',
3013 3013 PendingDeprecationWarning)
3014 3014 # potential todo: fold the 3rd throw-away argument of _complete
3015 3015 # into the first two.
3016 3016 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3017 3017 # TODO: should we deprecate now, or does it stay?
3018 3018
3019 3019 results = self._complete(
3020 3020 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3021 3021 )
3022 3022
3023 3023 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3024 3024
3025 3025 return self._arrange_and_extract(
3026 3026 results,
3027 3027 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3028 3028 skip_matchers={jedi_matcher_id},
3029 3029 # this API does not support different start/end positions (fragments of token).
3030 3030 abort_if_offset_changes=True,
3031 3031 )
3032 3032
3033 3033 def _arrange_and_extract(
3034 3034 self,
3035 3035 results: Dict[str, MatcherResult],
3036 3036 skip_matchers: Set[str],
3037 3037 abort_if_offset_changes: bool,
3038 3038 ):
3039
3040 3039 sortable: List[AnyMatcherCompletion] = []
3041 3040 ordered: List[AnyMatcherCompletion] = []
3042 3041 most_recent_fragment = None
3043 3042 for identifier, result in results.items():
3044 3043 if identifier in skip_matchers:
3045 3044 continue
3046 3045 if not result["completions"]:
3047 3046 continue
3048 3047 if not most_recent_fragment:
3049 3048 most_recent_fragment = result["matched_fragment"]
3050 3049 if (
3051 3050 abort_if_offset_changes
3052 3051 and result["matched_fragment"] != most_recent_fragment
3053 3052 ):
3054 3053 break
3055 3054 if result.get("ordered", False):
3056 3055 ordered.extend(result["completions"])
3057 3056 else:
3058 3057 sortable.extend(result["completions"])
3059 3058
3060 3059 if not most_recent_fragment:
3061 3060 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3062 3061
3063 3062 return most_recent_fragment, [
3064 3063 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3065 3064 ]
3066 3065
3067 3066 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3068 3067 full_text=None) -> _CompleteResult:
3069 3068 """
3070 3069 Like complete but can also return raw jedi completions as well as the
3071 3070 origin of the completion text. This could (and should) be made much
3072 3071 cleaner but that will be simpler once we drop the old (and stateful)
3073 3072 :any:`complete` API.
3074 3073
3075 3074 With the current provisional API, cursor_pos acts (depending on the
3076 3075 caller) as the offset in the ``text`` or ``line_buffer``, or as the
3077 3076 ``column`` when passing multiline strings. This could/should be renamed
3078 3077 but would add extra noise.
3079 3078
3080 3079 Parameters
3081 3080 ----------
3082 3081 cursor_line
3083 3082 Index of the line the cursor is on. 0 indexed.
3084 3083 cursor_pos
3085 3084 Position of the cursor in the current line/line_buffer/text. 0
3086 3085 indexed.
3087 3086 line_buffer : optional, str
3088 3087 The current line the cursor is in; this is mostly for the legacy
3089 3088 reason that readline could only give us the single current line.
3090 3089 Prefer `full_text`.
3091 3090 text : str
3092 3091 The current "token" the cursor is in, mostly also for historical
3093 3092 reasons, as the completer would trigger only after the current line
3094 3093 was parsed.
3095 3094 full_text : str
3096 3095 Full text of the current cell.
3097 3096
3098 3097 Returns
3099 3098 -------
3100 3099 An ordered dictionary where keys are identifiers of completion
3101 3100 matchers and values are ``MatcherResult``s.
3102 3101 """
3103 3102
3104 3103 # if the cursor position isn't given, the only sane assumption we can
3105 3104 # make is that it's at the end of the line (the common case)
3106 3105 if cursor_pos is None:
3107 3106 cursor_pos = len(line_buffer) if text is None else len(text)
3108 3107
3109 3108 if self.use_main_ns:
3110 3109 self.namespace = __main__.__dict__
3111 3110
3112 3111 # if text is either None or an empty string, rely on the line buffer
3113 3112 if (not line_buffer) and full_text:
3114 3113 line_buffer = full_text.split('\n')[cursor_line]
3115 3114 if not text: # issue #11508: check line_buffer before calling split_line
3116 3115 text = (
3117 3116 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3118 3117 )
3119 3118
3120 3119 # If no line buffer is given, assume the input text is all there was
3121 3120 if line_buffer is None:
3122 3121 line_buffer = text
3123 3122
3124 3123 # deprecated - do not use `line_buffer` in new code.
3125 3124 self.line_buffer = line_buffer
3126 3125 self.text_until_cursor = self.line_buffer[:cursor_pos]
3127 3126
3128 3127 if not full_text:
3129 3128 full_text = line_buffer
3130 3129
3131 3130 context = CompletionContext(
3132 3131 full_text=full_text,
3133 3132 cursor_position=cursor_pos,
3134 3133 cursor_line=cursor_line,
3135 3134 token=text,
3136 3135 limit=MATCHES_LIMIT,
3137 3136 )
3138 3137
3139 3138 # Start with a clean slate of completions
3140 3139 results: Dict[str, MatcherResult] = {}
3141 3140
3142 3141 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3143 3142
3144 3143 suppressed_matchers: Set[str] = set()
3145 3144
3146 3145 matchers = {
3147 3146 _get_matcher_id(matcher): matcher
3148 3147 for matcher in sorted(
3149 3148 self.matchers, key=_get_matcher_priority, reverse=True
3150 3149 )
3151 3150 }
3152 3151
3153 3152 for matcher_id, matcher in matchers.items():
3154 3153 matcher_id = _get_matcher_id(matcher)
3155 3154
3156 3155 if matcher_id in self.disable_matchers:
3157 3156 continue
3158 3157
3159 3158 if matcher_id in results:
3160 3159 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3161 3160
3162 3161 if matcher_id in suppressed_matchers:
3163 3162 continue
3164 3163
3165 3164 result: MatcherResult
3166 3165 try:
3167 3166 if _is_matcher_v1(matcher):
3168 3167 result = _convert_matcher_v1_result_to_v2(
3169 3168 matcher(text), type=_UNKNOWN_TYPE
3170 3169 )
3171 3170 elif _is_matcher_v2(matcher):
3172 3171 result = matcher(context)
3173 3172 else:
3174 3173 api_version = _get_matcher_api_version(matcher)
3175 3174 raise ValueError(f"Unsupported API version {api_version}")
3176 3175 except:
3177 3176 # Show the ugly traceback if the matcher causes an
3178 3177 # exception, but do NOT crash the kernel!
3179 3178 sys.excepthook(*sys.exc_info())
3180 3179 continue
3181 3180
3182 3181 # set default value for matched fragment if suffix was not selected.
3183 3182 result["matched_fragment"] = result.get("matched_fragment", context.token)
3184 3183
3185 3184 if not suppressed_matchers:
3186 3185 suppression_recommended: Union[bool, Set[str]] = result.get(
3187 3186 "suppress", False
3188 3187 )
3189 3188
3190 3189 suppression_config = (
3191 3190 self.suppress_competing_matchers.get(matcher_id, None)
3192 3191 if isinstance(self.suppress_competing_matchers, dict)
3193 3192 else self.suppress_competing_matchers
3194 3193 )
3195 3194 should_suppress = (
3196 3195 (suppression_config is True)
3197 3196 or (suppression_recommended and (suppression_config is not False))
3198 3197 ) and has_any_completions(result)
3199 3198
3200 3199 if should_suppress:
3201 3200 suppression_exceptions: Set[str] = result.get(
3202 3201 "do_not_suppress", set()
3203 3202 )
3204 3203 if isinstance(suppression_recommended, Iterable):
3205 3204 to_suppress = set(suppression_recommended)
3206 3205 else:
3207 3206 to_suppress = set(matchers)
3208 3207 suppressed_matchers = to_suppress - suppression_exceptions
3209 3208
3210 3209 new_results = {}
3211 3210 for previous_matcher_id, previous_result in results.items():
3212 3211 if previous_matcher_id not in suppressed_matchers:
3213 3212 new_results[previous_matcher_id] = previous_result
3214 3213 results = new_results
3215 3214
3216 3215 results[matcher_id] = result
3217 3216
3218 3217 _, matches = self._arrange_and_extract(
3219 3218 results,
3220 3219 # TODO: Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3221 3220 # if it was an omission, we can remove the filtering step; otherwise remove this comment.
3222 3221 skip_matchers={jedi_matcher_id},
3223 3222 abort_if_offset_changes=False,
3224 3223 )
3225 3224
3226 3225 # populate legacy stateful API
3227 3226 self.matches = matches
3228 3227
3229 3228 return results
3230 3229
3231 3230 @staticmethod
3232 3231 def _deduplicate(
3233 3232 matches: Sequence[AnyCompletion],
3234 3233 ) -> Iterable[AnyCompletion]:
3235 3234 filtered_matches: Dict[str, AnyCompletion] = {}
3236 3235 for match in matches:
3237 3236 text = match.text
3238 3237 if (
3239 3238 text not in filtered_matches
3240 3239 or filtered_matches[text].type == _UNKNOWN_TYPE
3241 3240 ):
3242 3241 filtered_matches[text] = match
3243 3242
3244 3243 return filtered_matches.values()
3245 3244
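The deduplication above keeps the first completion seen for each text, unless a later duplicate carries a known type. A self-contained sketch, with a stand-in completion class (`FakeCompletion` and `deduplicate` are hypothetical names):

```python
from dataclasses import dataclass

_UNKNOWN_TYPE = "<unknown>"

@dataclass(frozen=True)
class FakeCompletion:
    # stand-in for IPython's completion objects; only .text and .type matter here
    text: str
    type: str

def deduplicate(matches):
    filtered = {}
    for match in matches:
        # keep the first occurrence, but let a typed duplicate replace an untyped one
        if match.text not in filtered or filtered[match.text].type == _UNKNOWN_TYPE:
            filtered[match.text] = match
    return list(filtered.values())

deduped = deduplicate(
    [
        FakeCompletion("open", _UNKNOWN_TYPE),
        FakeCompletion("open", "function"),
        FakeCompletion("os", "module"),
    ]
)
# two entries remain; "open" keeps the typed variant
```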
3246 3245 @staticmethod
3247 3246 def _sort(matches: Sequence[AnyCompletion]):
3248 3247 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3249 3248
3250 3249 @context_matcher()
3251 3250 def fwd_unicode_matcher(self, context: CompletionContext):
3252 3251 """Same as :any:`fwd_unicode_match`, but adapted to the new Matcher API."""
3253 3252 # TODO: use `context.limit` to terminate early once we matched the maximum
3254 3253 # number that will be used downstream; can be added as an optional to
3255 3254 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3256 3255 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3257 3256 return _convert_matcher_v1_result_to_v2(
3258 3257 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3259 3258 )
3260 3259
3261 3260 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3262 3261 """
3263 3262 Forward match a string starting with a backslash with a list of
3264 3263 potential Unicode completions.
3265 3264
3266 3265 Will compute list of Unicode character names on first call and cache it.
3267 3266
3268 3267 .. deprecated:: 8.6
3269 3268 You can use :meth:`fwd_unicode_matcher` instead.
3270 3269
3271 3270 Returns
3272 3271 -------
3273 3272 A tuple with:
3274 3273 - matched text (empty if no matches)
3275 3274 - list of potential completions (empty tuple if no matches)
3276 3275 """
3277 3276 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call.
3278 3277 # We could do a faster match using a Trie.
3279 3278
3280 3279 # Using pygtrie the following seems to work:
3281 3280
3282 3281 # s = PrefixSet()
3283 3282
3284 3283 # for c in range(0,0x10FFFF + 1):
3285 3284 # try:
3286 3285 # s.add(unicodedata.name(chr(c)))
3287 3286 # except ValueError:
3288 3287 # pass
3289 3288 # [''.join(k) for k in s.iter(prefix)]
3290 3289
3291 3290 # But this needs to be timed and adds an extra dependency.
3292 3291
3293 3292 slashpos = text.rfind('\\')
3294 3293 # if the text contains a backslash
3295 3294 if slashpos > -1:
3296 3295 # PERF: It's important that we don't access self._unicode_names
3297 3296 # until we're inside this if-block. _unicode_names is lazily
3298 3297 # initialized, and it takes a user-noticeable amount of time to
3299 3298 # initialize it, so we don't want to initialize it unless we're
3300 3299 # actually going to use it.
3301 3300 s = text[slashpos + 1 :]
3302 3301 sup = s.upper()
3303 3302 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3304 3303 if candidates:
3305 3304 return s, candidates
3306 3305 candidates = [x for x in self.unicode_names if sup in x]
3307 3306 if candidates:
3308 3307 return s, candidates
3309 3308 splitsup = sup.split(" ")
3310 3309 candidates = [
3311 3310 x for x in self.unicode_names if all(u in x for u in splitsup)
3312 3311 ]
3313 3312 if candidates:
3314 3313 return s, candidates
3315 3314
3316 3315 return "", ()
3317 3316
3318 3317 # if the text does not contain a backslash
3319 3318 else:
3320 3319 return '', ()
3321 3320
3322 3321 @property
3323 3322 def unicode_names(self) -> List[str]:
3324 3323 """List of names of unicode code points that can be completed.
3325 3324
3326 3325 The list is lazily initialized on first access.
3327 3326 """
3328 3327 if self._unicode_names is None:
3335 3334 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3336 3335
3337 3336 return self._unicode_names
3338 3337
3339 3338 def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
3340 3339 names = []
3341 3340 for start, stop in ranges:
3342 3341 for c in range(start, stop):
3343 3342 try:
3344 3343 names.append(unicodedata.name(chr(c)))
3345 3344 except ValueError:
3346 3345 pass
3347 3346 return names
@@ -1,998 +1,997 b''
1 1 # -*- coding: utf-8 -*-
2 2 """
3 3 Pdb debugger class.
4 4
5 5
6 6 This is an extension to PDB which adds a number of new features.
7 7 Note that there is also the `IPython.terminal.debugger` class which provides UI
8 8 improvements.
9 9
10 10 We also strongly recommend using this via the `ipdb` package, which provides
11 11 extra configuration options.
12 12
13 13 Among other things, this subclass of PDB:
14 14 - supports many IPython magics like pdef/psource
15 15 - hides frames in tracebacks based on `__tracebackhide__`
16 16 - allows skipping frames based on `__debuggerskip__`
17 17
18 18 Frame skipping and hiding are configurable via the `skip_predicates`
19 19 command.
20 20
21 21 By default, frames from read-only files will be hidden, and frames containing
22 22 ``__tracebackhide__=True`` will be hidden.
23 23
24 24 Frames containing ``__debuggerskip__`` will be stepped over, and frames whose
25 25 parent frame's value of ``__debuggerskip__`` is ``True`` will be skipped.
26 26
27 27 >>> def helpers_helper():
28 28 ... pass
29 29 ...
30 30 ... def helper_1():
31 31 ... print("don't step in me")
32 32 ... helpers_helper() # will be stepped over unless a breakpoint is set.
33 33 ...
34 34 ...
35 35 ... def helper_2():
36 36 ... print("in me neither")
37 37 ...
38 38
39 39 One can define a decorator that wraps a function between the two helpers:
40 40
41 41 >>> def pdb_skipped_decorator(function):
42 42 ...
43 43 ...
44 44 ... def wrapped_fn(*args, **kwargs):
45 45 ... __debuggerskip__ = True
46 46 ... helper_1()
47 47 ... __debuggerskip__ = False
48 48 ... result = function(*args, **kwargs)
49 49 ... __debuggerskip__ = True
50 50 ... helper_2()
51 51 ... # setting __debuggerskip__ to False again is not necessary
52 52 ... return result
53 53 ...
54 54 ... return wrapped_fn
55 55
56 56 When decorating a function, ipdb will directly step into ``bar()`` by
57 57 default:
58 58
59 59 >>> @pdb_skipped_decorator
60 60 ... def bar(x, y):
61 61 ... return x * y
62 62
63 63
64 64 You can toggle the behavior with
65 65
66 66 ipdb> skip_predicates debuggerskip false
67 67
68 68 or configure it in your ``.pdbrc``
69 69
70 70
71 71
72 72 License
73 73 -------
74 74
75 75 Modified from the standard pdb.Pdb class to avoid including readline, so that
76 76 the command line completion of other programs which include this isn't
77 77 damaged.
78 78
79 79 In the future, this class will be expanded with improvements over the standard
80 80 pdb.
81 81
82 82 The original code in this file is mainly lifted out of cmd.py in Python 2.2,
83 83 with minor changes. Licensing should therefore be under the standard Python
84 84 terms. For details on the PSF (Python Software Foundation) standard license,
85 85 see:
86 86
87 87 https://docs.python.org/3/license.html
88 88
89 89
90 90 All the changes since then are under the same license as IPython.
91 91
92 92 """
93 93
94 94 #*****************************************************************************
95 95 #
96 96 # This file is licensed under the PSF license.
97 97 #
98 98 # Copyright (C) 2001 Python Software Foundation, www.python.org
99 99 # Copyright (C) 2005-2006 Fernando Perez. <fperez@colorado.edu>
100 100 #
101 101 #
102 102 #*****************************************************************************
103 103
104 104 import inspect
105 105 import linecache
106 106 import sys
107 107 import re
108 108 import os
109 109
110 110 from IPython import get_ipython
111 111 from IPython.utils import PyColorize
112 112 from IPython.utils import coloransi, py3compat
113 113 from IPython.core.excolors import exception_colors
114 114
115 115 # skip module doctests
116 116 __skip_doctest__ = True
117 117
118 118 prompt = 'ipdb> '
119 119
120 120 # We have to check this directly from sys.argv, config struct not yet available
121 121 from pdb import Pdb as OldPdb
122 122
123 123 # Allow the set_trace code to operate outside of an ipython instance, even if
124 124 # it does so with some limitations. The rest of this support is implemented in
125 125 # the Tracer constructor.
126 126
127 127 DEBUGGERSKIP = "__debuggerskip__"
128 128
129 129
130 130 def make_arrow(pad):
131 131 """generate the leading arrow in front of traceback or debugger"""
132 132 if pad >= 2:
133 133 return '-'*(pad-2) + '> '
134 134 elif pad == 1:
135 135 return '>'
136 136 return ''
137 137
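As a sanity check, here is a standalone copy of ``make_arrow`` with its edge cases; when ``pad >= 2``, the dashes plus ``'> '`` always occupy exactly ``pad`` characters:

```python
def make_arrow(pad):
    """Generate the leading arrow in front of traceback or debugger lines."""
    if pad >= 2:
        # (pad - 2) dashes plus the two-character '> ' tip
        return '-' * (pad - 2) + '> '
    elif pad == 1:
        return '>'
    return ''

print(repr(make_arrow(5)))  # '---> '
print(repr(make_arrow(1)))  # '>'
print(repr(make_arrow(0)))  # ''
```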
138 138
139 139 def BdbQuit_excepthook(et, ev, tb, excepthook=None):
140 140 """Exception hook which handles `BdbQuit` exceptions.
141 141
142 142 All other exceptions are processed using the `excepthook`
143 143 parameter.
144 144 """
145 145 raise ValueError(
146 146 "`BdbQuit_excepthook` is deprecated since version 5.1",
147 147 )
148 148
149 149
150 150 def BdbQuit_IPython_excepthook(self, et, ev, tb, tb_offset=None):
151 151 raise ValueError(
152 152 "`BdbQuit_IPython_excepthook` is deprecated since version 5.1",
153 153 )
154 154
155 155
156 156 RGX_EXTRA_INDENT = re.compile(r'(?<=\n)\s+')
157 157
158 158
159 159 def strip_indentation(multiline_string):
160 160 return RGX_EXTRA_INDENT.sub('', multiline_string)
161 161
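A quick standalone sketch of what ``strip_indentation`` does: the regex removes any whitespace that immediately follows a newline, flattening docstring indentation:

```python
import re

# same pattern as RGX_EXTRA_INDENT above
RGX_EXTRA_INDENT = re.compile(r'(?<=\n)\s+')

def strip_indentation(multiline_string):
    # drop whitespace that immediately follows a newline
    return RGX_EXTRA_INDENT.sub('', multiline_string)

doc = "Usage:\n    command arg\n        more detail"
print(strip_indentation(doc))
```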
162 162
163 163 def decorate_fn_with_doc(new_fn, old_fn, additional_text=""):
164 164 """Make new_fn have old_fn's doc string. This is particularly useful
165 165 for the ``do_...`` commands that hook into the help system.
166 166 Adapted from a comp.lang.python posting
167 167 by Duncan Booth."""
168 168 def wrapper(*args, **kw):
169 169 return new_fn(*args, **kw)
170 170 if old_fn.__doc__:
171 171 wrapper.__doc__ = strip_indentation(old_fn.__doc__) + additional_text
172 172 return wrapper
173 173
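A self-contained sketch of how ``decorate_fn_with_doc`` is used (bundling a copy of the de-indentation helper): the wrapper runs the new function but carries the old docstring plus any extra help text:

```python
import re

RGX_EXTRA_INDENT = re.compile(r'(?<=\n)\s+')

def strip_indentation(s):
    return RGX_EXTRA_INDENT.sub('', s)

def decorate_fn_with_doc(new_fn, old_fn, additional_text=""):
    # give new_fn old_fn's (de-indented) docstring, plus extra help text
    def wrapper(*args, **kw):
        return new_fn(*args, **kw)
    if old_fn.__doc__:
        wrapper.__doc__ = strip_indentation(old_fn.__doc__) + additional_text
    return wrapper

def original(x):
    """Original help text."""
    return x

def replacement(x):
    return x * 2

wrapped = decorate_fn_with_doc(replacement, original, " Extra notes.")
print(wrapped(3))        # 6
print(wrapped.__doc__)   # Original help text. Extra notes.
```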
174 174
175 175 class Pdb(OldPdb):
176 176 """Modified Pdb class, does not load readline.
177 177
178 178 for a standalone version that uses prompt_toolkit, see
179 179 `IPython.terminal.debugger.TerminalPdb` and
180 180 `IPython.terminal.debugger.set_trace()`
181 181
182 182
183 183 This debugger can hide and skip frames that are tagged according to some predicates.
184 184 See the `skip_predicates` command.
185 185
186 186 """
187 187
188 188 default_predicates = {
189 189 "tbhide": True,
190 190 "readonly": False,
191 191 "ipython_internal": True,
192 192 "debuggerskip": True,
193 193 }
194 194
195 195 def __init__(self, completekey=None, stdin=None, stdout=None, context=5, **kwargs):
196 196 """Create a new IPython debugger.
197 197
198 198 Parameters
199 199 ----------
200 200 completekey : default None
201 201 Passed to pdb.Pdb.
202 202 stdin : default None
203 203 Passed to pdb.Pdb.
204 204 stdout : default None
205 205 Passed to pdb.Pdb.
206 206 context : int
207 207 Number of lines of source code context to show when
208 208 displaying stacktrace information.
209 209 **kwargs
210 210 Passed to pdb.Pdb.
211 211
212 212 Notes
213 213 -----
214 214 The possibilities are python version dependent, see the python
215 215 docs for more info.
216 216 """
217 217
218 218 # Parent constructor:
219 219 try:
220 220 self.context = int(context)
221 221 if self.context <= 0:
222 222 raise ValueError("Context must be a positive integer")
223 223 except (TypeError, ValueError) as e:
224 224 raise ValueError("Context must be a positive integer") from e
225 225
226 226 # `kwargs` ensures full compatibility with stdlib's `pdb.Pdb`.
227 227 OldPdb.__init__(self, completekey, stdin, stdout, **kwargs)
228 228
229 229 # IPython changes...
230 230 self.shell = get_ipython()
231 231
232 232 if self.shell is None:
233 233 save_main = sys.modules['__main__']
234 234 # No IPython instance running, we must create one
235 235 from IPython.terminal.interactiveshell import \
236 236 TerminalInteractiveShell
237 237 self.shell = TerminalInteractiveShell.instance()
238 238 # needed by any code which calls __import__("__main__") after
239 239 # the debugger was entered. See also #9941.
240 240 sys.modules["__main__"] = save_main
241 241
242 242
243 243 color_scheme = self.shell.colors
244 244
245 245 self.aliases = {}
246 246
247 247 # Create color table: we copy the default one from the traceback
248 248 # module and add a few attributes needed for debugging
249 249 self.color_scheme_table = exception_colors()
250 250
251 251 # shorthands
252 252 C = coloransi.TermColors
253 253 cst = self.color_scheme_table
254 254
255 255 cst['NoColor'].colors.prompt = C.NoColor
256 256 cst['NoColor'].colors.breakpoint_enabled = C.NoColor
257 257 cst['NoColor'].colors.breakpoint_disabled = C.NoColor
258 258
259 259 cst['Linux'].colors.prompt = C.Green
260 260 cst['Linux'].colors.breakpoint_enabled = C.LightRed
261 261 cst['Linux'].colors.breakpoint_disabled = C.Red
262 262
263 263 cst['LightBG'].colors.prompt = C.Blue
264 264 cst['LightBG'].colors.breakpoint_enabled = C.LightRed
265 265 cst['LightBG'].colors.breakpoint_disabled = C.Red
266 266
267 267 cst['Neutral'].colors.prompt = C.Blue
268 268 cst['Neutral'].colors.breakpoint_enabled = C.LightRed
269 269 cst['Neutral'].colors.breakpoint_disabled = C.Red
270 270
271 271 # Add a python parser so we can syntax highlight source while
272 272 # debugging.
273 273 self.parser = PyColorize.Parser(style=color_scheme)
274 274 self.set_colors(color_scheme)
275 275
276 276 # Set the prompt - the default prompt is '(Pdb)'
277 277 self.prompt = prompt
278 278 self.skip_hidden = True
279 279 self.report_skipped = True
280 280
281 281 # list of predicates we use to skip frames
282 282 self._predicates = self.default_predicates
283 283
284 284 #
285 285 def set_colors(self, scheme):
286 286 """Shorthand access to the color table scheme selector method."""
287 287 self.color_scheme_table.set_active_scheme(scheme)
288 288 self.parser.style = scheme
289 289
290 290 def set_trace(self, frame=None):
291 291 if frame is None:
292 292 frame = sys._getframe().f_back
293 293 self.initial_frame = frame
294 294 return super().set_trace(frame)
295 295
296 296 def _hidden_predicate(self, frame):
297 297 """
298 298 Given a frame, return whether it should be hidden by IPython.
299 299 """
300 300
301 301 if self._predicates["readonly"]:
302 302 fname = frame.f_code.co_filename
303 303 # we need to check for file existence: interactively defined
304 304 # functions would otherwise appear as read-only.
305 305 if os.path.isfile(fname) and not os.access(fname, os.W_OK):
306 306 return True
307 307
308 308 if self._predicates["tbhide"]:
309 309 if frame in (self.curframe, getattr(self, "initial_frame", None)):
310 310 return False
311 311 frame_locals = self._get_frame_locals(frame)
312 312 if "__tracebackhide__" not in frame_locals:
313 313 return False
314 314 return frame_locals["__tracebackhide__"]
315 315 return False
316 316
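The read-only check above can be sketched in isolation; note the ``os.path.isfile`` guard, without which interactively defined code (which has no file on disk) would appear read-only. The helper name below is hypothetical:

```python
import os
import tempfile

def frame_file_is_readonly(fname):
    # mirrors the `readonly` predicate: only real, non-writable files count,
    # so interactively defined code (no file on disk) is never hidden
    return os.path.isfile(fname) and not os.access(fname, os.W_OK)

# a freshly created temp file is writable, so it would not be hidden
with tempfile.NamedTemporaryFile() as tf:
    print(frame_file_is_readonly(tf.name))  # False

# a made-up filename (like an interactive input cell) is not a file at all
print(frame_file_is_readonly("<ipython-input-1>"))  # False
```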
317 317 def hidden_frames(self, stack):
318 318 """
319 319 Given a stack, return for each frame whether it should be skipped.
320 320
321 321 This is used in up/down and where to skip frames.
322 322 """
323 323 # The f_locals dictionary is updated from the actual frame
324 324 # locals whenever the .f_locals accessor is called, so we
325 325 # avoid calling it here to preserve self.curframe_locals.
326 326 # Furthermore, there is no good reason to hide the current frame.
327 327 ip_hide = [self._hidden_predicate(s[0]) for s in stack]
328 328 ip_start = [i for i, s in enumerate(ip_hide) if s == "__ipython_bottom__"]
329 329 if ip_start and self._predicates["ipython_internal"]:
330 330 ip_hide = [h if i > ip_start[0] else True for (i, h) in enumerate(ip_hide)]
331 331 return ip_hide
332 332
333 333 def interaction(self, frame, traceback):
334 334 try:
335 335 OldPdb.interaction(self, frame, traceback)
336 336 except KeyboardInterrupt:
337 337 self.stdout.write("\n" + self.shell.get_exception_only())
338 338
339 339 def precmd(self, line):
340 340 """Perform useful escapes on the command before it is executed."""
341 341
342 342 if line.endswith("??"):
343 343 line = "pinfo2 " + line[:-2]
344 344 elif line.endswith("?"):
345 345 line = "pinfo " + line[:-1]
346 346
347 347 line = super().precmd(line)
348 348
349 349 return line
350 350
351 351 def new_do_frame(self, arg):
352 352 OldPdb.do_frame(self, arg)
353 353
354 354 def new_do_quit(self, arg):
355 355
356 356 if hasattr(self, 'old_all_completions'):
357 357 self.shell.Completer.all_completions = self.old_all_completions
358 358
359 359 return OldPdb.do_quit(self, arg)
360 360
361 361 do_q = do_quit = decorate_fn_with_doc(new_do_quit, OldPdb.do_quit)
362 362
363 363 def new_do_restart(self, arg):
364 364 """Restart command. In the context of ipython this is exactly the same
365 365 thing as 'quit'."""
366 366 self.msg("Restart doesn't make sense here. Using 'quit' instead.")
367 367 return self.do_quit(arg)
368 368
369 369 def print_stack_trace(self, context=None):
370 370 Colors = self.color_scheme_table.active_colors
371 371 ColorsNormal = Colors.Normal
372 372 if context is None:
373 373 context = self.context
374 374 try:
375 375 context = int(context)
376 376 if context <= 0:
377 377 raise ValueError("Context must be a positive integer")
378 378 except (TypeError, ValueError) as e:
379 379 raise ValueError("Context must be a positive integer") from e
380 380 try:
381 381 skipped = 0
382 382 for hidden, frame_lineno in zip(self.hidden_frames(self.stack), self.stack):
383 383 if hidden and self.skip_hidden:
384 384 skipped += 1
385 385 continue
386 386 if skipped:
387 387 print(
388 388 f"{Colors.excName} [... skipping {skipped} hidden frame(s)]{ColorsNormal}\n"
389 389 )
390 390 skipped = 0
391 391 self.print_stack_entry(frame_lineno, context=context)
392 392 if skipped:
393 393 print(
394 394 f"{Colors.excName} [... skipping {skipped} hidden frame(s)]{ColorsNormal}\n"
395 395 )
396 396 except KeyboardInterrupt:
397 397 pass
398 398
399 399 def print_stack_entry(self, frame_lineno, prompt_prefix='\n-> ',
400 400 context=None):
401 401 if context is None:
402 402 context = self.context
403 403 try:
404 404 context = int(context)
405 405 if context <= 0:
406 406 raise ValueError("Context must be a positive integer")
407 407 except (TypeError, ValueError) as e:
408 408 raise ValueError("Context must be a positive integer") from e
409 409 print(self.format_stack_entry(frame_lineno, '', context), file=self.stdout)
410 410
411 411 # vds: >>
412 412 frame, lineno = frame_lineno
413 413 filename = frame.f_code.co_filename
414 414 self.shell.hooks.synchronize_with_editor(filename, lineno, 0)
415 415 # vds: <<
416 416
417 417 def _get_frame_locals(self, frame):
418 418 """ "
419 419 Accessing f_local of current frame reset the namespace, so we want to avoid
420 420 that or the following can happen
421 421
422 422 ipdb> foo
423 423 "old"
424 424 ipdb> foo = "new"
425 425 ipdb> foo
426 426 "new"
427 427 ipdb> where
428 428 ipdb> foo
429 429 "old"
430 430
431 431 So if frame is self.curframe, we instead return self.curframe_locals.
432 432
433 433 """
434 434 if frame is self.curframe:
435 435 return self.curframe_locals
436 436 else:
437 437 return frame.f_locals
438 438
439 439 def format_stack_entry(self, frame_lineno, lprefix=': ', context=None):
440 440 if context is None:
441 441 context = self.context
442 442 try:
443 443 context = int(context)
444 444 if context <= 0:
445 445 print("Context must be a positive integer", file=self.stdout)
446 446 except (TypeError, ValueError):
447 447 print("Context must be a positive integer", file=self.stdout)
448 448
449 449 import reprlib
450 450
451 451 ret = []
452 452
453 453 Colors = self.color_scheme_table.active_colors
454 454 ColorsNormal = Colors.Normal
455 455 tpl_link = "%s%%s%s" % (Colors.filenameEm, ColorsNormal)
456 456 tpl_call = "%s%%s%s%%s%s" % (Colors.vName, Colors.valEm, ColorsNormal)
457 457 tpl_line = "%%s%s%%s %s%%s" % (Colors.lineno, ColorsNormal)
458 458 tpl_line_em = "%%s%s%%s %s%%s%s" % (Colors.linenoEm, Colors.line, ColorsNormal)
459 459
460 460 frame, lineno = frame_lineno
461 461
462 462 return_value = ''
463 463 loc_frame = self._get_frame_locals(frame)
464 464 if "__return__" in loc_frame:
465 465 rv = loc_frame["__return__"]
466 466 # return_value += '->'
467 467 return_value += reprlib.repr(rv) + "\n"
468 468 ret.append(return_value)
469 469
471 471 filename = self.canonic(frame.f_code.co_filename)
472 472 link = tpl_link % py3compat.cast_unicode(filename)
473 473
474 474 if frame.f_code.co_name:
475 475 func = frame.f_code.co_name
476 476 else:
477 477 func = "<lambda>"
478 478
479 479 call = ""
480 480 if func != "?":
481 481 if "__args__" in loc_frame:
482 482 args = reprlib.repr(loc_frame["__args__"])
483 483 else:
484 484 args = '()'
485 485 call = tpl_call % (func, args)
486 486
487 487 # The level info should be generated in the same format pdb uses, to
488 488 # avoid breaking the pdbtrack functionality of python-mode in Emacs.
489 489 if frame is self.curframe:
490 490 ret.append('> ')
491 491 else:
492 492 ret.append(" ")
493 493 ret.append("%s(%s)%s\n" % (link, lineno, call))
494 494
495 495 start = lineno - 1 - context//2
496 496 lines = linecache.getlines(filename)
497 497 start = min(start, len(lines) - context)
498 498 start = max(start, 0)
499 499 lines = lines[start : start + context]
500 500
501 501 for i, line in enumerate(lines):
502 502 show_arrow = start + 1 + i == lineno
503 503 linetpl = (frame is self.curframe or show_arrow) and tpl_line_em or tpl_line
504 504 ret.append(
505 505 self.__format_line(
506 506 linetpl, filename, start + 1 + i, line, arrow=show_arrow
507 507 )
508 508 )
509 509 return "".join(ret)
510 510
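The context-window arithmetic above (center ``context`` lines around ``lineno``, clamped to the file boundaries) can be sketched separately; the helper name is hypothetical:

```python
def context_window(lineno, context, nlines):
    # mirrors the slicing in format_stack_entry: start half a window above
    # the current line, then clamp to [0, nlines - context]
    start = lineno - 1 - context // 2
    start = min(start, nlines - context)
    start = max(start, 0)
    return start, start + context

print(context_window(lineno=50, context=5, nlines=100))  # (47, 52)
print(context_window(lineno=1, context=5, nlines=100))   # (0, 5)
print(context_window(lineno=99, context=5, nlines=100))  # (95, 100)
```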
511 511 def __format_line(self, tpl_line, filename, lineno, line, arrow=False):
512 512 bp_mark = ""
513 513 bp_mark_color = ""
514 514
515 515 new_line, err = self.parser.format2(line, 'str')
516 516 if not err:
517 517 line = new_line
518 518
519 519 bp = None
520 520 if lineno in self.get_file_breaks(filename):
521 521 bps = self.get_breaks(filename, lineno)
522 522 bp = bps[-1]
523 523
524 524 if bp:
525 525 Colors = self.color_scheme_table.active_colors
526 526 bp_mark = str(bp.number)
527 527 bp_mark_color = Colors.breakpoint_enabled
528 528 if not bp.enabled:
529 529 bp_mark_color = Colors.breakpoint_disabled
530 530
531 531 numbers_width = 7
532 532 if arrow:
533 533 # This is the line with the error
534 534 pad = numbers_width - len(str(lineno)) - len(bp_mark)
535 535 num = '%s%s' % (make_arrow(pad), str(lineno))
536 536 else:
537 537 num = '%*s' % (numbers_width - len(bp_mark), str(lineno))
538 538
539 539 return tpl_line % (bp_mark_color + bp_mark, num, line)
540 540
541 541 def print_list_lines(self, filename, first, last):
542 542 """The printing (as opposed to the parsing part of a 'list'
543 543 command."""
544 544 try:
545 545 Colors = self.color_scheme_table.active_colors
546 546 ColorsNormal = Colors.Normal
547 547 tpl_line = '%%s%s%%s %s%%s' % (Colors.lineno, ColorsNormal)
548 548 tpl_line_em = '%%s%s%%s %s%%s%s' % (Colors.linenoEm, Colors.line, ColorsNormal)
549 549 src = []
550 550 if filename == "<string>" and hasattr(self, "_exec_filename"):
551 551 filename = self._exec_filename
552 552
553 553 for lineno in range(first, last+1):
554 554 line = linecache.getline(filename, lineno)
555 555 if not line:
556 556 break
557 557
558 558 if lineno == self.curframe.f_lineno:
559 559 line = self.__format_line(
560 560 tpl_line_em, filename, lineno, line, arrow=True
561 561 )
562 562 else:
563 563 line = self.__format_line(
564 564 tpl_line, filename, lineno, line, arrow=False
565 565 )
566 566
567 567 src.append(line)
568 568 self.lineno = lineno
569 569
570 570 print(''.join(src), file=self.stdout)
571 571
572 572 except KeyboardInterrupt:
573 573 pass
574 574
575 575 def do_skip_predicates(self, args):
576 576 """
577 577 Turn on/off individual predicates as to whether a frame should be hidden/skipped.
578 578
579 579 The global option to skip (or not) hidden frames is set with ``skip_hidden``.
580 580
581 581 To change the value of a predicate
582 582
583 583 skip_predicates key [true|false]
584 584
585 585 Call without arguments to see the current values.
586 586
587 587 To permanently change the value of an option add the corresponding
588 588 command to your ``~/.pdbrc`` file. If you are programmatically using the
589 589 Pdb instance you can also change the ``default_predicates`` class
590 590 attribute.
591 591 """
592 592 if not args.strip():
593 593 print("current predicates:")
594 for p, v in self._predicates.items():
595 595 print(" ", p, ":", v)
596 596 return
597 597 type_value = args.strip().split(" ")
598 598 if len(type_value) != 2:
599 599 print(
600 600 f"Usage: skip_predicates <type> <value>, with <type> one of {set(self._predicates.keys())}"
601 601 )
602 602 return
603 603
604 604 type_, value = type_value
605 605 if type_ not in self._predicates:
606 606 print(f"{type_!r} not in {set(self._predicates.keys())}")
607 607 return
608 608 if value.lower() not in ("true", "yes", "1", "no", "false", "0"):
609 609 print(
610 610 f"{value!r} is invalid - use one of ('true', 'yes', '1', 'no', 'false', '0')"
611 611 )
612 612 return
613 613
614 614 self._predicates[type_] = value.lower() in ("true", "yes", "1")
615 615 if not any(self._predicates.values()):
616 616 print(
617 617 "Warning, all predicates set to False, skip_hidden may not have any effects."
618 618 )
619 619
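The boolean parsing used by ``skip_predicates`` boils down to the following; this is a hypothetical standalone helper mirroring the accepted values, not part of the Pdb class:

```python
# values accepted by the skip_predicates command
TRUTHY = ("true", "yes", "1")
FALSY = ("no", "false", "0")

def parse_toggle(value):
    # case-insensitive; anything outside the known values is rejected
    value = value.lower()
    if value not in TRUTHY + FALSY:
        raise ValueError(f"{value!r} is invalid - use one of {TRUTHY + FALSY}")
    return value in TRUTHY

print(parse_toggle("Yes"))    # True
print(parse_toggle("false"))  # False
```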
620 620 def do_skip_hidden(self, arg):
621 621 """
622 622 Change whether or not we should skip frames with the
623 623 __tracebackhide__ attribute.
624 624 """
625 625 if not arg.strip():
626 626 print(
627 627 f"skip_hidden = {self.skip_hidden}, use 'yes','no', 'true', or 'false' to change."
628 628 )
629 629 elif arg.strip().lower() in ("true", "yes"):
630 630 self.skip_hidden = True
631 631 elif arg.strip().lower() in ("false", "no"):
632 632 self.skip_hidden = False
633 633 if not any(self._predicates.values()):
634 634 print(
635 635 "Warning, all predicates set to False, skip_hidden may not have any effects."
636 636 )
637 637
638 638 def do_list(self, arg):
639 639 """Print lines of code from the current stack frame
640 640 """
641 641 self.lastcmd = 'list'
642 642 last = None
643 643 if arg:
644 644 try:
645 645 x = eval(arg, {}, {})
646 646 if isinstance(x, tuple):
647 647 first, last = x
648 648 first = int(first)
649 649 last = int(last)
650 650 if last < first:
651 651 # Assume it's a count
652 652 last = first + last
653 653 else:
654 654 first = max(1, int(x) - 5)
655 655 except:
656 656 print('*** Error in argument:', repr(arg), file=self.stdout)
657 657 return
658 658 elif self.lineno is None:
659 659 first = max(1, self.curframe.f_lineno - 5)
660 660 else:
661 661 first = self.lineno + 1
662 662 if last is None:
663 663 last = first + 10
664 664 self.print_list_lines(self.curframe.f_code.co_filename, first, last)
665 665
666 666 # vds: >>
667 667 lineno = first
668 668 filename = self.curframe.f_code.co_filename
669 669 self.shell.hooks.synchronize_with_editor(filename, lineno, 0)
670 670 # vds: <<
671 671
672 672 do_l = do_list
673 673
674 674 def getsourcelines(self, obj):
675 675 lines, lineno = inspect.findsource(obj)
676 676 if inspect.isframe(obj) and obj.f_globals is self._get_frame_locals(obj):
677 677 # must be a module frame: do not try to cut a block out of it
678 678 return lines, 1
679 679 elif inspect.ismodule(obj):
680 680 return lines, 1
681 681 return inspect.getblock(lines[lineno:]), lineno+1
682 682
683 683 def do_longlist(self, arg):
684 684 """Print lines of code from the current stack frame.
685 685
686 686 Shows more lines than 'list' does.
687 687 """
688 688 self.lastcmd = 'longlist'
689 689 try:
690 690 lines, lineno = self.getsourcelines(self.curframe)
691 691 except OSError as err:
692 692 self.error(err)
693 693 return
694 694 last = lineno + len(lines)
695 695 self.print_list_lines(self.curframe.f_code.co_filename, lineno, last)
696 696 do_ll = do_longlist
697 697
698 698 def do_debug(self, arg):
699 699 """debug code
700 700 Enter a recursive debugger that steps through the code
701 701 argument (which is an arbitrary expression or statement to be
702 702 executed in the current environment).
703 703 """
704 704 trace_function = sys.gettrace()
705 705 sys.settrace(None)
706 706 globals = self.curframe.f_globals
707 707 locals = self.curframe_locals
708 708 p = self.__class__(completekey=self.completekey,
709 709 stdin=self.stdin, stdout=self.stdout)
710 710 p.use_rawinput = self.use_rawinput
711 711 p.prompt = "(%s) " % self.prompt.strip()
712 712 self.message("ENTERING RECURSIVE DEBUGGER")
713 713 sys.call_tracing(p.run, (arg, globals, locals))
714 714 self.message("LEAVING RECURSIVE DEBUGGER")
715 715 sys.settrace(trace_function)
716 716 self.lastcmd = p.lastcmd
717 717
718 718 def do_pdef(self, arg):
719 719 """Print the call signature for any callable object.
720 720
721 721 The debugger interface to %pdef"""
722 722 namespaces = [
723 723 ("Locals", self.curframe_locals),
724 724 ("Globals", self.curframe.f_globals),
725 725 ]
726 726 self.shell.find_line_magic("pdef")(arg, namespaces=namespaces)
727 727
728 728 def do_pdoc(self, arg):
729 729 """Print the docstring for an object.
730 730
731 731 The debugger interface to %pdoc."""
732 732 namespaces = [
733 733 ("Locals", self.curframe_locals),
734 734 ("Globals", self.curframe.f_globals),
735 735 ]
736 736 self.shell.find_line_magic("pdoc")(arg, namespaces=namespaces)
737 737
738 738 def do_pfile(self, arg):
739 739 """Print (or run through pager) the file where an object is defined.
740 740
741 741 The debugger interface to %pfile.
742 742 """
743 743 namespaces = [
744 744 ("Locals", self.curframe_locals),
745 745 ("Globals", self.curframe.f_globals),
746 746 ]
747 747 self.shell.find_line_magic("pfile")(arg, namespaces=namespaces)
748 748
749 749 def do_pinfo(self, arg):
750 750 """Provide detailed information about an object.
751 751
752 752 The debugger interface to %pinfo, i.e., obj?."""
753 753 namespaces = [
754 754 ("Locals", self.curframe_locals),
755 755 ("Globals", self.curframe.f_globals),
756 756 ]
757 757 self.shell.find_line_magic("pinfo")(arg, namespaces=namespaces)
758 758
759 759 def do_pinfo2(self, arg):
760 760 """Provide extra detailed information about an object.
761 761
762 762 The debugger interface to %pinfo2, i.e., obj??."""
763 763 namespaces = [
764 764 ("Locals", self.curframe_locals),
765 765 ("Globals", self.curframe.f_globals),
766 766 ]
767 767 self.shell.find_line_magic("pinfo2")(arg, namespaces=namespaces)
768 768
769 769 def do_psource(self, arg):
770 770 """Print (or run through pager) the source code for an object."""
771 771 namespaces = [
772 772 ("Locals", self.curframe_locals),
773 773 ("Globals", self.curframe.f_globals),
774 774 ]
775 775 self.shell.find_line_magic("psource")(arg, namespaces=namespaces)
776 776
777 777 def do_where(self, arg):
778 778 """w(here)
779 779 Print a stack trace, with the most recent frame at the bottom.
780 780 An arrow indicates the "current frame", which determines the
781 781 context of most commands. 'bt' is an alias for this command.
782 782
783 783 Takes an (optional) number as argument: the number of context lines to
784 784 print."""
785 785 if arg:
786 786 try:
787 787 context = int(arg)
788 788 except ValueError as err:
789 789 self.error(err)
790 790 return
791 791 self.print_stack_trace(context)
792 792 else:
793 793 self.print_stack_trace()
794 794
795 795 do_w = do_where
796 796
797 797 def break_anywhere(self, frame):
798 798 """
799 799 _stop_in_decorator_internals is overly restrictive, as we may still want
800 800 to trace function calls, so we also need to update break_anywhere so
801 801 that, even when we don't `stop_here` because of debugger skip, we may
802 802 still stop at any point inside the function.
803 803
804 804 """
805 805
806 806 sup = super().break_anywhere(frame)
807 807 if sup:
808 808 return sup
809 809 if self._predicates["debuggerskip"]:
810 810 if DEBUGGERSKIP in frame.f_code.co_varnames:
811 811 return True
812 812 if frame.f_back and self._get_frame_locals(frame.f_back).get(DEBUGGERSKIP):
813 813 return True
814 814 return False
815 815
816 816 def _is_in_decorator_internal_and_should_skip(self, frame):
817 817 """
818 818 Utility to tell us whether we are in a decorator internal and should skip.
819 819
820 820 """
821 821
822 822 # if we are disabled don't skip
823 823 if not self._predicates["debuggerskip"]:
824 824 return False
825 825
826 826 # if frame is tagged, skip by default.
827 827 if DEBUGGERSKIP in frame.f_code.co_varnames:
828 828 return True
829 829
830 830 # if one of the parent frames has __debuggerskip__ set to True, skip as well.
831 831
832 832 cframe = frame
833 833 while getattr(cframe, "f_back", None):
834 834 cframe = cframe.f_back
835 835 if self._get_frame_locals(cframe).get(DEBUGGERSKIP):
836 836 return True
837 837
838 838 return False
839 839
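The parent-frame walk above can be demonstrated standalone with ``sys._getframe``: a helper called beneath a frame that sets ``__debuggerskip__ = True`` is detected, while a top-level call is not. The function names here are hypothetical:

```python
import sys

DEBUGGERSKIP = "__debuggerskip__"

def caller_requests_skip():
    # walk parent frames looking for __debuggerskip__ set to a true value
    frame = sys._getframe(1)
    while frame is not None:
        if frame.f_locals.get(DEBUGGERSKIP):
            return True
        frame = frame.f_back
    return False

def helper():
    return caller_requests_skip()

def wrapped():
    __debuggerskip__ = True  # tag this frame: helpers below should be skipped
    return helper()

print(wrapped())  # True
print(helper())   # False
```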
840 840 def stop_here(self, frame):
842 841 if self._is_in_decorator_internal_and_should_skip(frame) is True:
843 842 return False
844 843
845 844 hidden = False
846 845 if self.skip_hidden:
847 846 hidden = self._hidden_predicate(frame)
848 847 if hidden:
849 848 if self.report_skipped:
850 849 Colors = self.color_scheme_table.active_colors
851 850 ColorsNormal = Colors.Normal
852 851 print(
853 852 f"{Colors.excName} [... skipped 1 hidden frame]{ColorsNormal}\n"
854 853 )
855 854 return super().stop_here(frame)
856 855
857 856 def do_up(self, arg):
858 857 """u(p) [count]
859 858 Move the current frame count (default one) levels up in the
860 859 stack trace (to an older frame).
861 860
862 861 Will skip hidden frames.
863 862 """
864 863 # modified version of upstream that skips
865 864 # frames with __tracebackhide__
866 865 if self.curindex == 0:
867 866 self.error("Oldest frame")
868 867 return
869 868 try:
870 869 count = int(arg or 1)
871 870 except ValueError:
872 871 self.error("Invalid frame count (%s)" % arg)
873 872 return
874 873 skipped = 0
875 874 if count < 0:
876 875 _newframe = 0
877 876 else:
878 877 counter = 0
879 878 hidden_frames = self.hidden_frames(self.stack)
880 879 for i in range(self.curindex - 1, -1, -1):
881 880 if hidden_frames[i] and self.skip_hidden:
882 881 skipped += 1
883 882 continue
884 883 counter += 1
885 884 if counter >= count:
886 885 break
887 886 else:
888 887 # if no break occurred.
889 888 self.error(
890 889 "all frames above hidden, use `skip_hidden False` to get get into those."
891 890 )
892 891 return
893 892
894 893 Colors = self.color_scheme_table.active_colors
895 894 ColorsNormal = Colors.Normal
896 895 _newframe = i
897 896 self._select_frame(_newframe)
898 897 if skipped:
899 898 print(
900 899 f"{Colors.excName} [... skipped {skipped} hidden frame(s)]{ColorsNormal}\n"
901 900 )
902 901
903 902 def do_down(self, arg):
904 903 """d(own) [count]
905 904 Move the current frame count (default one) levels down in the
906 905 stack trace (to a newer frame).
907 906
908 907 Will skip hidden frames.
909 908 """
910 909 if self.curindex + 1 == len(self.stack):
911 910 self.error("Newest frame")
912 911 return
913 912 try:
914 913 count = int(arg or 1)
915 914 except ValueError:
916 915 self.error("Invalid frame count (%s)" % arg)
917 916 return
918 917 if count < 0:
919 918 _newframe = len(self.stack) - 1
920 919 else:
921 920 counter = 0
922 921 skipped = 0
923 922 hidden_frames = self.hidden_frames(self.stack)
924 923 for i in range(self.curindex + 1, len(self.stack)):
925 924 if hidden_frames[i] and self.skip_hidden:
926 925 skipped += 1
927 926 continue
928 927 counter += 1
929 928 if counter >= count:
930 929 break
931 930 else:
932 931 self.error(
933 932 "all frames below hidden, use `skip_hidden False` to get get into those."
934 933 )
935 934 return
936 935
937 936 Colors = self.color_scheme_table.active_colors
938 937 ColorsNormal = Colors.Normal
939 938 if skipped:
940 939 print(
941 940 f"{Colors.excName} [... skipped {skipped} hidden frame(s)]{ColorsNormal}\n"
942 941 )
943 942 _newframe = i
944 943
945 944 self._select_frame(_newframe)
946 945
947 946 do_d = do_down
948 947 do_u = do_up
949 948
950 949 def do_context(self, context):
951 950 """context number_of_lines
952 951 Set the number of lines of source code to show when displaying
953 952 stacktrace information.
954 953 """
955 954 try:
956 955 new_context = int(context)
957 956 if new_context <= 0:
958 957 raise ValueError()
959 958 self.context = new_context
960 959 except ValueError:
961 960 self.error("The 'context' command requires a positive integer argument.")
962 961
963 962
964 963 class InterruptiblePdb(Pdb):
965 964 """Version of debugger where KeyboardInterrupt exits the debugger altogether."""
966 965
967 966 def cmdloop(self, intro=None):
968 967 """Wrap cmdloop() such that KeyboardInterrupt stops the debugger."""
969 968 try:
970 969 return OldPdb.cmdloop(self, intro=intro)
971 970 except KeyboardInterrupt:
972 971 self.stop_here = lambda frame: False
973 972 self.do_quit("")
974 973 sys.settrace(None)
975 974 self.quitting = False
976 975 raise
977 976
978 977 def _cmdloop(self):
979 978 while True:
980 979 try:
981 980 # keyboard interrupts allow for an easy way to cancel
982 981 # the current command, so allow them during interactive input
983 982 self.allow_kbdint = True
984 983 self.cmdloop()
985 984 self.allow_kbdint = False
986 985 break
987 986 except KeyboardInterrupt:
988 987 self.message('--KeyboardInterrupt--')
989 988 raise
990 989
991 990
992 991 def set_trace(frame=None):
993 992 """
994 993 Start debugging from `frame`.
995 994
996 995 If frame is not specified, debugging starts from caller's frame.
997 996 """
998 997 Pdb().set_trace(frame or sys._getframe().f_back)
@@ -1,575 +1,574 b''
1 1 """Tests for debugging machinery.
2 2 """
3 3
4 4 # Copyright (c) IPython Development Team.
5 5 # Distributed under the terms of the Modified BSD License.
6 6
7 7 import builtins
8 8 import os
9 9 import sys
10 10 import platform
11 11
12 12 from tempfile import NamedTemporaryFile
13 13 from textwrap import dedent
14 14 from unittest.mock import patch
15 15
16 16 from IPython.core import debugger
17 17 from IPython.testing import IPYTHON_TESTING_TIMEOUT_SCALE
18 18 from IPython.testing.decorators import skip_win32
19 19 import pytest
20 20
21 21 #-----------------------------------------------------------------------------
22 22 # Helper classes, from CPython's Pdb test suite
23 23 #-----------------------------------------------------------------------------
24 24
25 25 class _FakeInput(object):
26 26 """
27 27 A fake input stream for pdb's interactive debugger. Whenever a
28 28 line is read, print it (to simulate the user typing it), and then
29 29 return it. The set of lines to return is specified in the
30 30 constructor; they should not have trailing newlines.
31 31 """
32 32 def __init__(self, lines):
33 33 self.lines = iter(lines)
34 34
35 35 def readline(self):
36 36 line = next(self.lines)
37 37 print(line)
38 38 return line+'\n'
39 39
40 40 class PdbTestInput(object):
41 41 """Context manager that makes testing Pdb in doctests easier."""
42 42
43 43 def __init__(self, input):
44 44 self.input = input
45 45
46 46 def __enter__(self):
47 47 self.real_stdin = sys.stdin
48 48 sys.stdin = _FakeInput(self.input)
49 49
50 50 def __exit__(self, *exc):
51 51 sys.stdin = self.real_stdin
52 52
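The `_FakeInput`/`PdbTestInput` helpers above work by swapping `sys.stdin` for a scripted stream for the duration of a `with` block. A minimal self-contained sketch of the same pattern (the class names here are illustrative stand-ins, not the ones used by the test suite):

```python
import sys

class _ScriptedInput:
    # Minimal stand-in for _FakeInput: yields one scripted line per
    # readline() call, with a trailing newline appended.
    def __init__(self, lines):
        self._lines = iter(lines)

    def readline(self):
        return next(self._lines) + "\n"

class ScriptedStdin:
    # Context manager mirroring PdbTestInput: swap sys.stdin for the
    # scripted stream on entry, restore the real one on exit.
    def __init__(self, lines):
        self._lines = lines

    def __enter__(self):
        self._real = sys.stdin
        sys.stdin = _ScriptedInput(self._lines)

    def __exit__(self, *exc):
        sys.stdin = self._real

with ScriptedStdin(["step", "continue"]):
    first = sys.stdin.readline()
    second = sys.stdin.readline()
```

Anything that reads from `sys.stdin` inside the block (such as pdb's command loop) consumes the scripted lines in order, which is how the doctests below drive the debugger non-interactively.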
53 53 #-----------------------------------------------------------------------------
54 54 # Tests
55 55 #-----------------------------------------------------------------------------
56 56
57 57 def test_ipdb_magics():
58 58 '''Test calling some IPython magics from ipdb.
59 59
60 60 First, set up some test functions and classes which we can inspect.
61 61
62 62 >>> class ExampleClass(object):
63 63 ... """Docstring for ExampleClass."""
64 64 ... def __init__(self):
65 65 ... """Docstring for ExampleClass.__init__"""
66 66 ... pass
67 67 ... def __str__(self):
68 68 ... return "ExampleClass()"
69 69
70 70 >>> def example_function(x, y, z="hello"):
71 71 ... """Docstring for example_function."""
72 72 ... pass
73 73
74 74 >>> old_trace = sys.gettrace()
75 75
76 76 Create a function which triggers ipdb.
77 77
78 78 >>> def trigger_ipdb():
79 79 ... a = ExampleClass()
80 80 ... debugger.Pdb().set_trace()
81 81
82 82 >>> with PdbTestInput([
83 83 ... 'pdef example_function',
84 84 ... 'pdoc ExampleClass',
85 85 ... 'up',
86 86 ... 'down',
87 87 ... 'list',
88 88 ... 'pinfo a',
89 89 ... 'll',
90 90 ... 'continue',
91 91 ... ]):
92 92 ... trigger_ipdb()
93 93 --Return--
94 94 None
95 95 > <doctest ...>(3)trigger_ipdb()
96 96 1 def trigger_ipdb():
97 97 2 a = ExampleClass()
98 98 ----> 3 debugger.Pdb().set_trace()
99 99 <BLANKLINE>
100 100 ipdb> pdef example_function
101 101 example_function(x, y, z='hello')
102 102 ipdb> pdoc ExampleClass
103 103 Class docstring:
104 104 Docstring for ExampleClass.
105 105 Init docstring:
106 106 Docstring for ExampleClass.__init__
107 107 ipdb> up
108 108 > <doctest ...>(11)<module>()
109 109 7 'pinfo a',
110 110 8 'll',
111 111 9 'continue',
112 112 10 ]):
113 113 ---> 11 trigger_ipdb()
114 114 <BLANKLINE>
115 115 ipdb> down
116 116 None
117 117 > <doctest ...>(3)trigger_ipdb()
118 118 1 def trigger_ipdb():
119 119 2 a = ExampleClass()
120 120 ----> 3 debugger.Pdb().set_trace()
121 121 <BLANKLINE>
122 122 ipdb> list
123 123 1 def trigger_ipdb():
124 124 2 a = ExampleClass()
125 125 ----> 3 debugger.Pdb().set_trace()
126 126 <BLANKLINE>
127 127 ipdb> pinfo a
128 128 Type: ExampleClass
129 129 String form: ExampleClass()
130 130 Namespace: Local...
131 131 Docstring: Docstring for ExampleClass.
132 132 Init docstring: Docstring for ExampleClass.__init__
133 133 ipdb> ll
134 134 1 def trigger_ipdb():
135 135 2 a = ExampleClass()
136 136 ----> 3 debugger.Pdb().set_trace()
137 137 <BLANKLINE>
138 138 ipdb> continue
139 139
140 140 Restore previous trace function, e.g. for coverage.py
141 141
142 142 >>> sys.settrace(old_trace)
143 143 '''
144 144
145 145 def test_ipdb_magics2():
146 146 '''Test ipdb with a very short function.
147 147
148 148 >>> old_trace = sys.gettrace()
149 149
150 150 >>> def bar():
151 151 ... pass
152 152
153 153 Run ipdb.
154 154
155 155 >>> with PdbTestInput([
156 156 ... 'continue',
157 157 ... ]):
158 158 ... debugger.Pdb().runcall(bar)
159 159 > <doctest ...>(2)bar()
160 160 1 def bar():
161 161 ----> 2 pass
162 162 <BLANKLINE>
163 163 ipdb> continue
164 164
165 165 Restore previous trace function, e.g. for coverage.py
166 166
167 167 >>> sys.settrace(old_trace)
168 168 '''
169 169
170 170 def can_quit():
171 171 '''Test that quit work in ipydb
172 172
173 173 >>> old_trace = sys.gettrace()
174 174
175 175 >>> def bar():
176 176 ... pass
177 177
178 178 >>> with PdbTestInput([
179 179 ... 'quit',
180 180 ... ]):
181 181 ... debugger.Pdb().runcall(bar)
182 182 > <doctest ...>(2)bar()
183 183 1 def bar():
184 184 ----> 2 pass
185 185 <BLANKLINE>
186 186 ipdb> quit
187 187
188 188 Restore previous trace function, e.g. for coverage.py
189 189
190 190 >>> sys.settrace(old_trace)
191 191 '''
192 192
193 193
194 194 def can_exit():
195 195 '''Test that quit work in ipydb
196 196
197 197 >>> old_trace = sys.gettrace()
198 198
199 199 >>> def bar():
200 200 ... pass
201 201
202 202 >>> with PdbTestInput([
203 203 ... 'exit',
204 204 ... ]):
205 205 ... debugger.Pdb().runcall(bar)
206 206 > <doctest ...>(2)bar()
207 207 1 def bar():
208 208 ----> 2 pass
209 209 <BLANKLINE>
210 210 ipdb> exit
211 211
212 212 Restore previous trace function, e.g. for coverage.py
213 213
214 214 >>> sys.settrace(old_trace)
215 215 '''
216 216
217 217
218 218 def test_interruptible_core_debugger():
219 219 """The debugger can be interrupted.
220 220
221 221 The presumption is there is some mechanism that causes a KeyboardInterrupt
222 222 (this is implemented in ipykernel). We want to ensure the
 223 223 KeyboardInterrupt causes debugging to cease.
224 224 """
225 225 def raising_input(msg="", called=[0]):
226 226 called[0] += 1
227 227 assert called[0] == 1, "input() should only be called once!"
228 228 raise KeyboardInterrupt()
229 229
230 230 tracer_orig = sys.gettrace()
231 231 try:
232 232 with patch.object(builtins, "input", raising_input):
233 233 debugger.InterruptiblePdb().set_trace()
234 234 # The way this test will fail is by set_trace() never exiting,
235 235 # resulting in a timeout by the test runner. The alternative
236 236 # implementation would involve a subprocess, but that adds issues
237 237 # with interrupting subprocesses that are rather complex, so it's
238 238 # simpler just to do it this way.
239 239 finally:
240 240 # restore the original trace function
241 241 sys.settrace(tracer_orig)
242 242
243 243
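The interruption mechanism exercised by `test_interruptible_core_debugger` can be modeled without a real debugger: patch `input` to raise `KeyboardInterrupt` and confirm the command-reading loop stops. This is a simplified sketch under the same assumptions; the `interactive_loop` helper is hypothetical, not IPython code:

```python
import builtins
from unittest.mock import patch

def raising_input(prompt=""):
    # Stand-in for ipykernel delivering a KeyboardInterrupt while the
    # debugger waits at its prompt.
    raise KeyboardInterrupt

def interactive_loop():
    # Minimal model of a debugger REPL: keep reading commands until a
    # KeyboardInterrupt tells us to stop debugging entirely.
    commands = []
    try:
        while True:
            commands.append(builtins.input("ipdb> "))
    except KeyboardInterrupt:
        return commands

with patch.object(builtins, "input", raising_input):
    result = interactive_loop()
```

The loop exits on the first prompt, which is the behavior the real test asserts: an interrupt at the prompt must end the debugging session rather than re-prompt.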
244 244 @skip_win32
245 245 def test_xmode_skip():
 246 246 """test that xmode skips frames
247 247
 248 248 Not written as a doctest, as pytest does not run doctests.
249 249 """
250 250 import pexpect
251 251 env = os.environ.copy()
252 252 env["IPY_TEST_SIMPLE_PROMPT"] = "1"
253 253
254 254 child = pexpect.spawn(
255 255 sys.executable, ["-m", "IPython", "--colors=nocolor"], env=env
256 256 )
257 257 child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE
258 258
259 259 child.expect("IPython")
260 260 child.expect("\n")
261 261 child.expect_exact("In [1]")
262 262
263 263 block = dedent(
264 264 """
265 265 def f():
266 266 __tracebackhide__ = True
267 267 g()
268 268
269 269 def g():
270 270 raise ValueError
271 271
272 272 f()
273 273 """
274 274 )
275 275
276 276 for line in block.splitlines():
277 277 child.sendline(line)
278 278 child.expect_exact(line)
279 279 child.expect_exact("skipping")
280 280
281 281 block = dedent(
282 282 """
283 283 def f():
284 284 __tracebackhide__ = True
285 285 g()
286 286
287 287 def g():
288 288 from IPython.core.debugger import set_trace
289 289 set_trace()
290 290
291 291 f()
292 292 """
293 293 )
294 294
295 295 for line in block.splitlines():
296 296 child.sendline(line)
297 297 child.expect_exact(line)
298 298
299 299 child.expect("ipdb>")
300 300 child.sendline("w")
301 301 child.expect("hidden")
302 302 child.expect("ipdb>")
303 303 child.sendline("skip_hidden false")
304 304 child.sendline("w")
305 305 child.expect("__traceba")
306 306 child.expect("ipdb>")
307 307
308 308 child.close()
309 309
310 310
311 311 skip_decorators_blocks = (
312 312 """
313 313 def helpers_helper():
314 314 pass # should not stop here except breakpoint
315 315 """,
316 316 """
317 317 def helper_1():
318 318 helpers_helper() # should not stop here
319 319 """,
320 320 """
321 321 def helper_2():
322 322 pass # should not stop here
323 323 """,
324 324 """
325 325 def pdb_skipped_decorator2(function):
326 326 def wrapped_fn(*args, **kwargs):
327 327 __debuggerskip__ = True
328 328 helper_2()
329 329 __debuggerskip__ = False
330 330 result = function(*args, **kwargs)
331 331 __debuggerskip__ = True
332 332 helper_2()
333 333 return result
334 334 return wrapped_fn
335 335 """,
336 336 """
337 337 def pdb_skipped_decorator(function):
338 338 def wrapped_fn(*args, **kwargs):
339 339 __debuggerskip__ = True
340 340 helper_1()
341 341 __debuggerskip__ = False
342 342 result = function(*args, **kwargs)
343 343 __debuggerskip__ = True
344 344 helper_2()
345 345 return result
346 346 return wrapped_fn
347 347 """,
348 348 """
349 349 @pdb_skipped_decorator
350 350 @pdb_skipped_decorator2
351 351 def bar(x, y):
352 352 return x * y
353 353 """,
354 354 """import IPython.terminal.debugger as ipdb""",
355 355 """
356 356 def f():
357 357 ipdb.set_trace()
358 358 bar(3, 4)
359 359 """,
360 360 """
361 361 f()
362 362 """,
363 363 )
364 364
365 365
366 366 def _decorator_skip_setup():
367 367 import pexpect
368 368
369 369 env = os.environ.copy()
370 370 env["IPY_TEST_SIMPLE_PROMPT"] = "1"
371 371 env["PROMPT_TOOLKIT_NO_CPR"] = "1"
372 372
373 373 child = pexpect.spawn(
374 374 sys.executable, ["-m", "IPython", "--colors=nocolor"], env=env
375 375 )
376 376 child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE
377 377
378 378 child.expect("IPython")
379 379 child.expect("\n")
380 380
381 381 child.timeout = 5 * IPYTHON_TESTING_TIMEOUT_SCALE
382 382 child.str_last_chars = 500
383 383
384 384 dedented_blocks = [dedent(b).strip() for b in skip_decorators_blocks]
385 385 in_prompt_number = 1
386 386 for cblock in dedented_blocks:
387 387 child.expect_exact(f"In [{in_prompt_number}]:")
388 388 in_prompt_number += 1
389 389 for line in cblock.splitlines():
390 390 child.sendline(line)
391 391 child.expect_exact(line)
392 392 child.sendline("")
393 393 return child
394 394
395 395
396 396 @pytest.mark.skip(reason="recently fail for unknown reason on CI")
397 397 @skip_win32
398 398 def test_decorator_skip():
399 399 """test that decorator frames can be skipped."""
400 400
401 401 child = _decorator_skip_setup()
402 402
403 403 child.expect_exact("ipython-input-8")
404 404 child.expect_exact("3 bar(3, 4)")
405 405 child.expect("ipdb>")
406 406
407 407 child.expect("ipdb>")
408 408 child.sendline("step")
409 409 child.expect_exact("step")
410 410 child.expect_exact("--Call--")
411 411 child.expect_exact("ipython-input-6")
412 412
413 413 child.expect_exact("1 @pdb_skipped_decorator")
414 414
415 415 child.sendline("s")
416 416 child.expect_exact("return x * y")
417 417
418 418 child.close()
419 419
420 420
421 421 @pytest.mark.skip(reason="recently fail for unknown reason on CI")
422 422 @pytest.mark.skipif(platform.python_implementation() == "PyPy", reason="issues on PyPy")
423 423 @skip_win32
424 424 def test_decorator_skip_disabled():
425 425 """test that decorator frame skipping can be disabled"""
426 426
427 427 child = _decorator_skip_setup()
428 428
429 429 child.expect_exact("3 bar(3, 4)")
430 430
431 431 for input_, expected in [
432 432 ("skip_predicates debuggerskip False", ""),
433 433 ("skip_predicates", "debuggerskip : False"),
434 434 ("step", "---> 2 def wrapped_fn"),
435 435 ("step", "----> 3 __debuggerskip__"),
436 436 ("step", "----> 4 helper_1()"),
437 437 ("step", "---> 1 def helper_1():"),
438 438 ("next", "----> 2 helpers_helper()"),
439 439 ("next", "--Return--"),
440 440 ("next", "----> 5 __debuggerskip__ = False"),
441 441 ]:
442 442 child.expect("ipdb>")
443 443 child.sendline(input_)
444 444 child.expect_exact(input_)
445 445 child.expect_exact(expected)
446 446
447 447 child.close()
448 448
449 449
450 450 @pytest.mark.skipif(platform.python_implementation() == "PyPy", reason="issues on PyPy")
451 451 @skip_win32
452 452 def test_decorator_skip_with_breakpoint():
 453 453 """test that breakpoints inside skipped decorator frames are still hit"""
454 454
455 455 import pexpect
456 456
457 457 env = os.environ.copy()
458 458 env["IPY_TEST_SIMPLE_PROMPT"] = "1"
459 459 env["PROMPT_TOOLKIT_NO_CPR"] = "1"
460 460
461 461 child = pexpect.spawn(
462 462 sys.executable, ["-m", "IPython", "--colors=nocolor"], env=env
463 463 )
464 464 child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE
465 465 child.str_last_chars = 500
466 466
467 467 child.expect("IPython")
468 468 child.expect("\n")
469 469
470 470 child.timeout = 5 * IPYTHON_TESTING_TIMEOUT_SCALE
471 471
472 472 ### we need a filename, so we need to exec the full block with a filename
473 473 with NamedTemporaryFile(suffix=".py", dir=".", delete=True) as tf:
474
475 474 name = tf.name[:-3].split("/")[-1]
476 475 tf.write("\n".join([dedent(x) for x in skip_decorators_blocks[:-1]]).encode())
477 476 tf.flush()
478 477 codeblock = f"from {name} import f"
479 478
480 479 dedented_blocks = [
481 480 codeblock,
482 481 "f()",
483 482 ]
484 483
485 484 in_prompt_number = 1
486 485 for cblock in dedented_blocks:
487 486 child.expect_exact(f"In [{in_prompt_number}]:")
488 487 in_prompt_number += 1
489 488 for line in cblock.splitlines():
490 489 child.sendline(line)
491 490 child.expect_exact(line)
492 491 child.sendline("")
493 492
 494 493 # as the filename does not exist, we'll rely on the filename prompt
495 494 child.expect_exact("47 bar(3, 4)")
496 495
497 496 for input_, expected in [
498 497 (f"b {name}.py:3", ""),
499 498 ("step", "1---> 3 pass # should not stop here except"),
500 499 ("step", "---> 38 @pdb_skipped_decorator"),
501 500 ("continue", ""),
502 501 ]:
503 502 child.expect("ipdb>")
504 503 child.sendline(input_)
505 504 child.expect_exact(input_)
506 505 child.expect_exact(expected)
507 506
508 507 child.close()
509 508
510 509
511 510 @skip_win32
512 511 def test_where_erase_value():
513 512 """Test that `where` does not access f_locals and erase values."""
514 513 import pexpect
515 514
516 515 env = os.environ.copy()
517 516 env["IPY_TEST_SIMPLE_PROMPT"] = "1"
518 517
519 518 child = pexpect.spawn(
520 519 sys.executable, ["-m", "IPython", "--colors=nocolor"], env=env
521 520 )
522 521 child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE
523 522
524 523 child.expect("IPython")
525 524 child.expect("\n")
526 525 child.expect_exact("In [1]")
527 526
528 527 block = dedent(
529 528 """
530 529 def simple_f():
531 530 myvar = 1
532 531 print(myvar)
533 532 1/0
534 533 print(myvar)
535 534 simple_f() """
536 535 )
537 536
538 537 for line in block.splitlines():
539 538 child.sendline(line)
540 539 child.expect_exact(line)
541 540 child.expect_exact("ZeroDivisionError")
542 541 child.expect_exact("In [2]:")
543 542
544 543 child.sendline("%debug")
545 544
546 545 ##
547 546 child.expect("ipdb>")
548 547
549 548 child.sendline("myvar")
550 549 child.expect("1")
551 550
552 551 ##
553 552 child.expect("ipdb>")
554 553
555 554 child.sendline("myvar = 2")
556 555
557 556 ##
558 557 child.expect_exact("ipdb>")
559 558
560 559 child.sendline("myvar")
561 560
562 561 child.expect_exact("2")
563 562
564 563 ##
565 564 child.expect("ipdb>")
566 565 child.sendline("where")
567 566
568 567 ##
569 568 child.expect("ipdb>")
570 569 child.sendline("myvar")
571 570
572 571 child.expect_exact("2")
573 572 child.expect("ipdb>")
574 573
575 574 child.close()
@@ -1,1513 +1,1512 b''
1 1 # -*- coding: utf-8 -*-
2 2 """Tests for various magic functions."""
3 3
4 4 import gc
5 5 import io
6 6 import os
7 7 import re
8 8 import shlex
9 9 import sys
10 10 import warnings
11 11 from importlib import invalidate_caches
12 12 from io import StringIO
13 13 from pathlib import Path
14 14 from textwrap import dedent
15 15 from unittest import TestCase, mock
16 16
17 17 import pytest
18 18
19 19 from IPython import get_ipython
20 20 from IPython.core import magic
21 21 from IPython.core.error import UsageError
22 22 from IPython.core.magic import (
23 23 Magics,
24 24 cell_magic,
25 25 line_magic,
26 26 magics_class,
27 27 register_cell_magic,
28 28 register_line_magic,
29 29 )
30 30 from IPython.core.magics import code, execution, logging, osm, script
31 31 from IPython.testing import decorators as dec
32 32 from IPython.testing import tools as tt
33 33 from IPython.utils.io import capture_output
34 34 from IPython.utils.process import find_cmd
35 35 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
36 36 from IPython.utils.syspathcontext import prepended_to_syspath
37 37
38 38 from .test_debugger import PdbTestInput
39 39
40 40 from tempfile import NamedTemporaryFile
41 41
42 42 @magic.magics_class
43 43 class DummyMagics(magic.Magics): pass
44 44
45 45 def test_extract_code_ranges():
46 46 instr = "1 3 5-6 7-9 10:15 17: :10 10- -13 :"
47 47 expected = [
48 48 (0, 1),
49 49 (2, 3),
50 50 (4, 6),
51 51 (6, 9),
52 52 (9, 14),
53 53 (16, None),
54 54 (None, 9),
55 55 (9, None),
56 56 (None, 13),
57 57 (None, None),
58 58 ]
59 59 actual = list(code.extract_code_ranges(instr))
60 60 assert actual == expected
61 61
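The range grammar exercised above ('-' ranges are 1-based inclusive, ':' ranges are 1-based half-open, bare numbers select one line, and either end may be omitted) can be sketched with a small parser. `parse_ranges` is a hypothetical illustration, not the actual `code.extract_code_ranges` implementation:

```python
def parse_ranges(instr):
    # Illustrative re-implementation of the range grammar from
    # test_extract_code_ranges, yielding 0-based half-open ranges.
    for token in instr.split():
        if "-" in token:
            sep, end_off = "-", 0   # '5-6' is inclusive: end stays as-is
        elif ":" in token:
            sep, end_off = ":", -1  # '10:15' is half-open: end shifts by one
        else:
            n = int(token)          # bare '3' selects the single line 3
            yield (n - 1, n)
            continue
        start, end = token.split(sep)
        yield (int(start) - 1 if start else None,
               int(end) + end_off if end else None)

result = list(parse_ranges("1 3 5-6 7-9 10:15 17: :10 10- -13 :"))
```

Each token maps onto one `(start, end)` pair in the expected list of the test, with `None` marking an open end.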
62 62 def test_extract_symbols():
63 63 source = """import foo\na = 10\ndef b():\n return 42\n\n\nclass A: pass\n\n\n"""
64 64 symbols_args = ["a", "b", "A", "A,b", "A,a", "z"]
65 65 expected = [([], ['a']),
66 66 (["def b():\n return 42\n"], []),
67 67 (["class A: pass\n"], []),
68 68 (["class A: pass\n", "def b():\n return 42\n"], []),
69 69 (["class A: pass\n"], ['a']),
70 70 ([], ['z'])]
71 71 for symbols, exp in zip(symbols_args, expected):
72 72 assert code.extract_symbols(source, symbols) == exp
73 73
74 74
75 75 def test_extract_symbols_raises_exception_with_non_python_code():
76 76 source = ("=begin A Ruby program :)=end\n"
77 77 "def hello\n"
78 78 "puts 'Hello world'\n"
79 79 "end")
80 80 with pytest.raises(SyntaxError):
81 81 code.extract_symbols(source, "hello")
82 82
83 83
84 84 def test_magic_not_found():
85 85 # magic not found raises UsageError
86 86 with pytest.raises(UsageError):
87 87 _ip.run_line_magic("doesntexist", "")
88 88
89 89 # ensure result isn't success when a magic isn't found
90 90 result = _ip.run_cell('%doesntexist')
91 91 assert isinstance(result.error_in_exec, UsageError)
92 92
93 93
94 94 def test_cell_magic_not_found():
95 95 # magic not found raises UsageError
96 96 with pytest.raises(UsageError):
97 97 _ip.run_cell_magic('doesntexist', 'line', 'cell')
98 98
99 99 # ensure result isn't success when a magic isn't found
100 100 result = _ip.run_cell('%%doesntexist')
101 101 assert isinstance(result.error_in_exec, UsageError)
102 102
103 103
104 104 def test_magic_error_status():
105 105 def fail(shell):
106 106 1/0
107 107 _ip.register_magic_function(fail)
108 108 result = _ip.run_cell('%fail')
109 109 assert isinstance(result.error_in_exec, ZeroDivisionError)
110 110
111 111
112 112 def test_config():
 113 113 """ test that the config magic does not raise;
 114 114 this can happen if Configurable init is moved too early into
 115 115 Magics.__init__, as then a Config object will be registered as a
 116 116 magic.
117 117 """
118 118 ## should not raise.
119 119 _ip.run_line_magic("config", "")
120 120
121 121
122 122 def test_config_available_configs():
123 123 """ test that config magic prints available configs in unique and
124 124 sorted order. """
125 125 with capture_output() as captured:
126 126 _ip.run_line_magic("config", "")
127 127
128 128 stdout = captured.stdout
129 129 config_classes = stdout.strip().split('\n')[1:]
130 130 assert config_classes == sorted(set(config_classes))
131 131
132 132 def test_config_print_class():
133 133 """ test that config with a classname prints the class's options. """
134 134 with capture_output() as captured:
135 135 _ip.run_line_magic("config", "TerminalInteractiveShell")
136 136
137 137 stdout = captured.stdout
138 138 assert re.match(
139 139 "TerminalInteractiveShell.* options", stdout.splitlines()[0]
140 140 ), f"{stdout}\n\n1st line of stdout not like 'TerminalInteractiveShell.* options'"
141 141
142 142
143 143 def test_rehashx():
144 144 # clear up everything
145 145 _ip.alias_manager.clear_aliases()
146 146 del _ip.db['syscmdlist']
147 147
148 148 _ip.run_line_magic("rehashx", "")
149 149 # Practically ALL ipython development systems will have more than 10 aliases
150 150
151 151 assert len(_ip.alias_manager.aliases) > 10
152 152 for name, cmd in _ip.alias_manager.aliases:
153 153 # we must strip dots from alias names
154 154 assert "." not in name
155 155
156 156 # rehashx must fill up syscmdlist
157 157 scoms = _ip.db['syscmdlist']
158 158 assert len(scoms) > 10
159 159
160 160
161 161 def test_magic_parse_options():
162 162 """Test that we don't mangle paths when parsing magic options."""
163 163 ip = get_ipython()
164 164 path = 'c:\\x'
165 165 m = DummyMagics(ip)
166 166 opts = m.parse_options('-f %s' % path,'f:')[0]
167 167 # argv splitting is os-dependent
168 168 if os.name == 'posix':
169 169 expected = 'c:x'
170 170 else:
171 171 expected = path
172 172 assert opts["f"] == expected
173 173
174 174
175 175 def test_magic_parse_long_options():
176 176 """Magic.parse_options can handle --foo=bar long options"""
177 177 ip = get_ipython()
178 178 m = DummyMagics(ip)
179 179 opts, _ = m.parse_options("--foo --bar=bubble", "a", "foo", "bar=")
180 180 assert "foo" in opts
181 181 assert "bar" in opts
182 182 assert opts["bar"] == "bubble"
183 183
184 184
185 185 def doctest_hist_f():
186 186 """Test %hist -f with temporary filename.
187 187
188 188 In [9]: import tempfile
189 189
190 190 In [10]: tfile = tempfile.mktemp('.py','tmp-ipython-')
191 191
192 192 In [11]: %hist -nl -f $tfile 3
193 193
194 194 In [13]: import os; os.unlink(tfile)
195 195 """
196 196
197 197
198 198 def doctest_hist_op():
199 199 """Test %hist -op
200 200
201 201 In [1]: class b(float):
202 202 ...: pass
203 203 ...:
204 204
205 205 In [2]: class s(object):
206 206 ...: def __str__(self):
207 207 ...: return 's'
208 208 ...:
209 209
210 210 In [3]:
211 211
212 212 In [4]: class r(b):
213 213 ...: def __repr__(self):
214 214 ...: return 'r'
215 215 ...:
216 216
217 217 In [5]: class sr(s,r): pass
218 218 ...:
219 219
220 220 In [6]:
221 221
222 222 In [7]: bb=b()
223 223
224 224 In [8]: ss=s()
225 225
226 226 In [9]: rr=r()
227 227
228 228 In [10]: ssrr=sr()
229 229
230 230 In [11]: 4.5
231 231 Out[11]: 4.5
232 232
233 233 In [12]: str(ss)
234 234 Out[12]: 's'
235 235
236 236 In [13]:
237 237
238 238 In [14]: %hist -op
239 239 >>> class b:
240 240 ... pass
241 241 ...
242 242 >>> class s(b):
243 243 ... def __str__(self):
244 244 ... return 's'
245 245 ...
246 246 >>>
247 247 >>> class r(b):
248 248 ... def __repr__(self):
249 249 ... return 'r'
250 250 ...
251 251 >>> class sr(s,r): pass
252 252 >>>
253 253 >>> bb=b()
254 254 >>> ss=s()
255 255 >>> rr=r()
256 256 >>> ssrr=sr()
257 257 >>> 4.5
258 258 4.5
259 259 >>> str(ss)
260 260 's'
261 261 >>>
262 262 """
263 263
264 264 def test_hist_pof():
265 265 ip = get_ipython()
266 266 ip.run_cell("1+2", store_history=True)
267 267 #raise Exception(ip.history_manager.session_number)
268 268 #raise Exception(list(ip.history_manager._get_range_session()))
269 269 with TemporaryDirectory() as td:
270 270 tf = os.path.join(td, 'hist.py')
271 271 ip.run_line_magic('history', '-pof %s' % tf)
272 272 assert os.path.isfile(tf)
273 273
274 274
275 275 def test_macro():
276 276 ip = get_ipython()
277 277 ip.history_manager.reset() # Clear any existing history.
278 278 cmds = ["a=1", "def b():\n return a**2", "print(a,b())"]
279 279 for i, cmd in enumerate(cmds, start=1):
280 280 ip.history_manager.store_inputs(i, cmd)
281 281 ip.run_line_magic("macro", "test 1-3")
282 282 assert ip.user_ns["test"].value == "\n".join(cmds) + "\n"
283 283
284 284 # List macros
285 285 assert "test" in ip.run_line_magic("macro", "")
286 286
287 287
288 288 def test_macro_run():
289 289 """Test that we can run a multi-line macro successfully."""
290 290 ip = get_ipython()
291 291 ip.history_manager.reset()
292 292 cmds = ["a=10", "a+=1", "print(a)", "%macro test 2-3"]
293 293 for cmd in cmds:
294 294 ip.run_cell(cmd, store_history=True)
295 295 assert ip.user_ns["test"].value == "a+=1\nprint(a)\n"
296 296 with tt.AssertPrints("12"):
297 297 ip.run_cell("test")
298 298 with tt.AssertPrints("13"):
299 299 ip.run_cell("test")
300 300
301 301
302 302 def test_magic_magic():
303 303 """Test %magic"""
304 304 ip = get_ipython()
305 305 with capture_output() as captured:
306 306 ip.run_line_magic("magic", "")
307 307
308 308 stdout = captured.stdout
309 309 assert "%magic" in stdout
310 310 assert "IPython" in stdout
311 311 assert "Available" in stdout
312 312
313 313
314 314 @dec.skipif_not_numpy
315 315 def test_numpy_reset_array_undec():
316 316 "Test '%reset array' functionality"
317 317 _ip.ex("import numpy as np")
318 318 _ip.ex("a = np.empty(2)")
319 319 assert "a" in _ip.user_ns
320 320 _ip.run_line_magic("reset", "-f array")
321 321 assert "a" not in _ip.user_ns
322 322
323 323
324 324 def test_reset_out():
325 325 "Test '%reset out' magic"
326 326 _ip.run_cell("parrot = 'dead'", store_history=True)
327 327 # test '%reset -f out', make an Out prompt
328 328 _ip.run_cell("parrot", store_history=True)
329 329 assert "dead" in [_ip.user_ns[x] for x in ("_", "__", "___")]
330 330 _ip.run_line_magic("reset", "-f out")
331 331 assert "dead" not in [_ip.user_ns[x] for x in ("_", "__", "___")]
332 332 assert len(_ip.user_ns["Out"]) == 0
333 333
334 334
335 335 def test_reset_in():
336 336 "Test '%reset in' magic"
337 337 # test '%reset -f in'
338 338 _ip.run_cell("parrot", store_history=True)
339 339 assert "parrot" in [_ip.user_ns[x] for x in ("_i", "_ii", "_iii")]
340 340 _ip.run_line_magic("reset", "-f in")
341 341 assert "parrot" not in [_ip.user_ns[x] for x in ("_i", "_ii", "_iii")]
342 342 assert len(set(_ip.user_ns["In"])) == 1
343 343
344 344
345 345 def test_reset_dhist():
346 346 "Test '%reset dhist' magic"
347 347 _ip.run_cell("tmp = [d for d in _dh]") # copy before clearing
348 348 _ip.run_line_magic("cd", os.path.dirname(pytest.__file__))
349 349 _ip.run_line_magic("cd", "-")
350 350 assert len(_ip.user_ns["_dh"]) > 0
351 351 _ip.run_line_magic("reset", "-f dhist")
352 352 assert len(_ip.user_ns["_dh"]) == 0
353 353 _ip.run_cell("_dh = [d for d in tmp]") # restore
354 354
355 355
356 356 def test_reset_in_length():
357 357 "Test that '%reset in' preserves In[] length"
358 358 _ip.run_cell("print 'foo'")
359 359 _ip.run_cell("reset -f in")
360 360 assert len(_ip.user_ns["In"]) == _ip.displayhook.prompt_count + 1
361 361
362 362
363 363 class TestResetErrors(TestCase):
364 364
365 365 def test_reset_redefine(self):
366 366
367 367 @magics_class
368 368 class KernelMagics(Magics):
369 369 @line_magic
370 370 def less(self, shell): pass
371 371
372 372 _ip.register_magics(KernelMagics)
373 373
374 374 with self.assertLogs() as cm:
 375 375 # hack: we just want to capture logs, but assertLogs fails if no
 376 376 # logs get produced,
 377 377 # so log one thing we ignore.
378 378 import logging as log_mod
379 379 log = log_mod.getLogger()
380 380 log.info('Nothing')
381 381 # end hack.
382 382 _ip.run_cell("reset -f")
383 383
384 384 assert len(cm.output) == 1
385 385 for out in cm.output:
386 386 assert "Invalid alias" not in out
387 387
388 388 def test_tb_syntaxerror():
389 389 """test %tb after a SyntaxError"""
390 390 ip = get_ipython()
391 391 ip.run_cell("for")
392 392
393 393 # trap and validate stdout
394 394 save_stdout = sys.stdout
395 395 try:
396 396 sys.stdout = StringIO()
397 397 ip.run_cell("%tb")
398 398 out = sys.stdout.getvalue()
399 399 finally:
400 400 sys.stdout = save_stdout
401 401 # trim output, and only check the last line
402 402 last_line = out.rstrip().splitlines()[-1].strip()
403 403 assert last_line == "SyntaxError: invalid syntax"
404 404
405 405
406 406 def test_time():
407 407 ip = get_ipython()
408 408
409 409 with tt.AssertPrints("Wall time: "):
410 410 ip.run_cell("%time None")
411 411
412 412 ip.run_cell("def f(kmjy):\n"
413 413 " %time print (2*kmjy)")
414 414
415 415 with tt.AssertPrints("Wall time: "):
416 416 with tt.AssertPrints("hihi", suppress=False):
417 417 ip.run_cell("f('hi')")
418 418
419 419
 420 420 # ';' at the end of %time prevents the expression's value from being printed.
421 421 # This tests fix for #13837.
422 422 def test_time_no_output_with_semicolon():
423 423 ip = get_ipython()
424 424
425 425 # Test %time cases
426 426 with tt.AssertPrints(" 123456"):
427 427 with tt.AssertPrints("Wall time: ", suppress=False):
428 428 with tt.AssertPrints("CPU times: ", suppress=False):
429 429 ip.run_cell("%time 123000+456")
430 430
431 431 with tt.AssertNotPrints(" 123456"):
432 432 with tt.AssertPrints("Wall time: ", suppress=False):
433 433 with tt.AssertPrints("CPU times: ", suppress=False):
434 434 ip.run_cell("%time 123000+456;")
435 435
436 436 with tt.AssertPrints(" 123456"):
437 437 with tt.AssertPrints("Wall time: ", suppress=False):
438 438 with tt.AssertPrints("CPU times: ", suppress=False):
439 439 ip.run_cell("%time 123000+456 # Comment")
440 440
441 441 with tt.AssertNotPrints(" 123456"):
442 442 with tt.AssertPrints("Wall time: ", suppress=False):
443 443 with tt.AssertPrints("CPU times: ", suppress=False):
444 444 ip.run_cell("%time 123000+456; # Comment")
445 445
446 446 with tt.AssertPrints(" 123456"):
447 447 with tt.AssertPrints("Wall time: ", suppress=False):
448 448 with tt.AssertPrints("CPU times: ", suppress=False):
449 449 ip.run_cell("%time 123000+456 # ;Comment")
450 450
451 451 # Test %%time cases
452 452 with tt.AssertPrints("123456"):
453 453 with tt.AssertPrints("Wall time: ", suppress=False):
454 454 with tt.AssertPrints("CPU times: ", suppress=False):
455 455 ip.run_cell("%%time\n123000+456\n\n\n")
456 456
457 457 with tt.AssertNotPrints("123456"):
458 458 with tt.AssertPrints("Wall time: ", suppress=False):
459 459 with tt.AssertPrints("CPU times: ", suppress=False):
460 460 ip.run_cell("%%time\n123000+456;\n\n\n")
461 461
462 462 with tt.AssertPrints("123456"):
463 463 with tt.AssertPrints("Wall time: ", suppress=False):
464 464 with tt.AssertPrints("CPU times: ", suppress=False):
465 465 ip.run_cell("%%time\n123000+456 # Comment\n\n\n")
466 466
467 467 with tt.AssertNotPrints("123456"):
468 468 with tt.AssertPrints("Wall time: ", suppress=False):
469 469 with tt.AssertPrints("CPU times: ", suppress=False):
470 470 ip.run_cell("%%time\n123000+456; # Comment\n\n\n")
471 471
472 472 with tt.AssertPrints("123456"):
473 473 with tt.AssertPrints("Wall time: ", suppress=False):
474 474 with tt.AssertPrints("CPU times: ", suppress=False):
475 475 ip.run_cell("%%time\n123000+456 # ;Comment\n\n\n")
476 476
477 477
478 478 def test_time_last_not_expression():
479 479 ip.run_cell("%%time\n"
480 480 "var_1 = 1\n"
481 481 "var_2 = 2\n")
482 482 assert ip.user_ns['var_1'] == 1
483 483 del ip.user_ns['var_1']
484 484 assert ip.user_ns['var_2'] == 2
485 485 del ip.user_ns['var_2']
486 486
487 487
488 488 @dec.skip_win32
489 489 def test_time2():
490 490 ip = get_ipython()
491 491
492 492 with tt.AssertPrints("CPU times: user "):
493 493 ip.run_cell("%time None")
494 494
495 495 def test_time3():
496 496 """Erroneous magic function calls, issue gh-3334"""
497 497 ip = get_ipython()
498 498 ip.user_ns.pop('run', None)
499 499
500 500 with tt.AssertNotPrints("not found", channel='stderr'):
501 501 ip.run_cell("%%time\n"
502 502 "run = 0\n"
503 503 "run += 1")
504 504
505 505 def test_multiline_time():
 506 506 """Make sure the last statement in %%time returns a value."""
507 507 ip = get_ipython()
508 508 ip.user_ns.pop('run', None)
509 509
510 510 ip.run_cell(
511 511 dedent(
512 512 """\
513 513 %%time
514 514 a = "ho"
515 515 b = "hey"
516 516 a+b
517 517 """
518 518 )
519 519 )
520 520 assert ip.user_ns_hidden["_"] == "hohey"
521 521
522 522
523 523 def test_time_local_ns():
524 524 """
525 525 Test that local_ns is actually global_ns when running a cell magic
526 526 """
527 527 ip = get_ipython()
528 528 ip.run_cell("%%time\n" "myvar = 1")
529 529 assert ip.user_ns["myvar"] == 1
530 530 del ip.user_ns["myvar"]
531 531
532 532
533 533 def test_doctest_mode():
534 534     "Toggle doctest_mode twice; it should be a no-op and run without error"
535 535 _ip.run_line_magic("doctest_mode", "")
536 536 _ip.run_line_magic("doctest_mode", "")
537 537
538 538
539 539 def test_parse_options():
540 540 """Tests for basic options parsing in magics."""
541 541     # These are only the most minimal of tests; more should be added later.
542 542     # At the very least we check that a basic string call works OK.
543 543     m = DummyMagics(_ip)
544 544     assert m.parse_options("foo", "")[1] == "foo"
546 546
547 547
548 548 def test_parse_options_preserve_non_option_string():
549 549 """Test to assert preservation of non-option part of magic-block, while parsing magic options."""
550 550 m = DummyMagics(_ip)
551 551 opts, stmt = m.parse_options(
552 552 " -n1 -r 13 _ = 314 + foo", "n:r:", preserve_non_opts=True
553 553 )
554 554 assert opts == {"n": "1", "r": "13"}
555 555 assert stmt == "_ = 314 + foo"
556 556
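For readers unfamiliar with how an option string like `-n1 -r 13 _ = 314 + foo` can be split, here is a minimal sketch using the stdlib `getopt` and `shlex` modules. This is only an illustration, not IPython's actual `parse_options` implementation, and unlike `preserve_non_opts=True` it does not preserve the original spacing of the trailing statement; the function name is hypothetical.

```python
import getopt
import shlex

def split_magic_args(arg_str, optspec):
    """Split leading short options from the trailing statement.

    getopt stops at the first non-option token, which mirrors how a
    magic's options are consumed before the code to run.
    """
    words = shlex.split(arg_str)
    opts, rest = getopt.getopt(words, optspec)
    # Normalize [('-n', '1'), ('-r', '13')] into {'n': '1', 'r': '13'}
    return {flag.lstrip("-"): value for flag, value in opts}, " ".join(rest)

opts, stmt = split_magic_args("-n1 -r 13 _ = 314 + foo", "n:r:")
print(opts)  # {'n': '1', 'r': '13'}
print(stmt)  # _ = 314 + foo
```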
557 557
558 558 def test_run_magic_preserve_code_block():
559 559 """Test to assert preservation of non-option part of magic-block, while running magic."""
560 560 _ip.user_ns["spaces"] = []
561 561 _ip.run_line_magic(
562 562 "timeit", "-n1 -r1 spaces.append([s.count(' ') for s in ['document']])"
563 563 )
564 564 assert _ip.user_ns["spaces"] == [[0]]
565 565
566 566
567 567 def test_dirops():
568 568 """Test various directory handling operations."""
569 569 # curpath = lambda :os.path.splitdrive(os.getcwd())[1].replace('\\','/')
570 570 curpath = os.getcwd
571 571 startdir = os.getcwd()
572 572 ipdir = os.path.realpath(_ip.ipython_dir)
573 573 try:
574 574 _ip.run_line_magic("cd", '"%s"' % ipdir)
575 575 assert curpath() == ipdir
576 576 _ip.run_line_magic("cd", "-")
577 577 assert curpath() == startdir
578 578 _ip.run_line_magic("pushd", '"%s"' % ipdir)
579 579 assert curpath() == ipdir
580 580 _ip.run_line_magic("popd", "")
581 581 assert curpath() == startdir
582 582 finally:
583 583 os.chdir(startdir)
584 584
585 585
586 586 def test_cd_force_quiet():
587 587 """Test OSMagics.cd_force_quiet option"""
588 588 _ip.config.OSMagics.cd_force_quiet = True
589 589 osmagics = osm.OSMagics(shell=_ip)
590 590
591 591 startdir = os.getcwd()
592 592 ipdir = os.path.realpath(_ip.ipython_dir)
593 593
594 594 try:
595 595 with tt.AssertNotPrints(ipdir):
596 596 osmagics.cd('"%s"' % ipdir)
597 597 with tt.AssertNotPrints(startdir):
598 598 osmagics.cd('-')
599 599 finally:
600 600 os.chdir(startdir)
601 601
602 602
603 603 def test_xmode():
604 604     # Cycling xmode four times should return it to the original mode
605 605 xmode = _ip.InteractiveTB.mode
606 606 for i in range(4):
607 607 _ip.run_line_magic("xmode", "")
608 608 assert _ip.InteractiveTB.mode == xmode
609 609
610 610 def test_reset_hard():
611 611 monitor = []
612 612 class A(object):
613 613 def __del__(self):
614 614 monitor.append(1)
615 615 def __repr__(self):
616 616 return "<A instance>"
617 617
618 618 _ip.user_ns["a"] = A()
619 619 _ip.run_cell("a")
620 620
621 621 assert monitor == []
622 622 _ip.run_line_magic("reset", "-f")
623 623 assert monitor == [1]
624 624
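The reset test above depends on CPython's reference counting: `__del__` fires as soon as the last reference to the object (including the one held via the display hook) goes away. A standalone illustration of that mechanism, independent of IPython:

```python
import gc

monitor = []

class A:
    def __del__(self):
        monitor.append(1)

a = A()
extra = a            # a second reference keeps the object alive
del a
assert monitor == []  # still referenced by `extra`, so no __del__ yet
del extra             # last reference gone; CPython finalizes immediately
gc.collect()          # belt-and-braces for non-refcounting implementations
assert monitor == [1]
```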
625 625 class TestXdel(tt.TempFileMixin):
626 626 def test_xdel(self):
627 627 """Test that references from %run are cleared by xdel."""
628 628 src = ("class A(object):\n"
629 629 " monitor = []\n"
630 630 " def __del__(self):\n"
631 631 " self.monitor.append(1)\n"
632 632 "a = A()\n")
633 633 self.mktmp(src)
634 634 # %run creates some hidden references...
635 635 _ip.run_line_magic("run", "%s" % self.fname)
636 636 # ... as does the displayhook.
637 637 _ip.run_cell("a")
638 638
639 639 monitor = _ip.user_ns["A"].monitor
640 640 assert monitor == []
641 641
642 642 _ip.run_line_magic("xdel", "a")
643 643
644 644 # Check that a's __del__ method has been called.
645 645 gc.collect(0)
646 646 assert monitor == [1]
647 647
648 648 def doctest_who():
649 649 """doctest for %who
650 650
651 651 In [1]: %reset -sf
652 652
653 653 In [2]: alpha = 123
654 654
655 655 In [3]: beta = 'beta'
656 656
657 657 In [4]: %who int
658 658 alpha
659 659
660 660 In [5]: %who str
661 661 beta
662 662
663 663 In [6]: %whos
664 664     Variable   Type    Data/Info
665 665     ----------------------------
666 666     alpha      int     123
667 667     beta       str     beta
668 668
669 669 In [7]: %who_ls
670 670 Out[7]: ['alpha', 'beta']
671 671 """
672 672
673 673 def test_whos():
674 674 """Check that whos is protected against objects where repr() fails."""
675 675 class A(object):
676 676 def __repr__(self):
677 677 raise Exception()
678 678 _ip.user_ns['a'] = A()
679 679 _ip.run_line_magic("whos", "")
680 680
681 681 def doctest_precision():
682 682 """doctest for %precision
683 683
684 684 In [1]: f = get_ipython().display_formatter.formatters['text/plain']
685 685
686 686 In [2]: %precision 5
687 687 Out[2]: '%.5f'
688 688
689 689 In [3]: f.float_format
690 690 Out[3]: '%.5f'
691 691
692 692 In [4]: %precision %e
693 693 Out[4]: '%e'
694 694
695 695 In [5]: f(3.1415927)
696 696 Out[5]: '3.141593e+00'
697 697 """
698 698
699 699 def test_debug_magic():
700 700 """Test debugging a small code with %debug
701 701
702 702 In [1]: with PdbTestInput(['c']):
703 703 ...: %debug print("a b") #doctest: +ELLIPSIS
704 704 ...:
705 705 ...
706 706 ipdb> c
707 707 a b
708 708 In [2]:
709 709 """
710 710
711 711 def test_psearch():
712 712 with tt.AssertPrints("dict.fromkeys"):
713 713 _ip.run_cell("dict.fr*?")
714 714     with tt.AssertPrints("π.is_integer"):
715 715         _ip.run_cell("π = 3.14;\nπ.is_integ*?")
716 716
717 717 def test_timeit_shlex():
718 718 """test shlex issues with timeit (#1109)"""
719 719 _ip.ex("def f(*a,**kw): pass")
720 720 _ip.run_line_magic("timeit", '-n1 "this is a bug".count(" ")')
721 721 _ip.run_line_magic("timeit", '-r1 -n1 f(" ", 1)')
722 722 _ip.run_line_magic("timeit", '-r1 -n1 f(" ", 1, " ", 2, " ")')
723 723 _ip.run_line_magic("timeit", '-r1 -n1 ("a " + "b")')
724 724 _ip.run_line_magic("timeit", '-r1 -n1 f("a " + "b")')
725 725 _ip.run_line_magic("timeit", '-r1 -n1 f("a " + "b ")')
726 726
727 727
728 728 def test_timeit_special_syntax():
729 729 "Test %%timeit with IPython special syntax"
730 730 @register_line_magic
731 731 def lmagic(line):
732 732 ip = get_ipython()
733 733 ip.user_ns['lmagic_out'] = line
734 734
735 735 # line mode test
736 736 _ip.run_line_magic("timeit", "-n1 -r1 %lmagic my line")
737 737 assert _ip.user_ns["lmagic_out"] == "my line"
738 738 # cell mode test
739 739 _ip.run_cell_magic("timeit", "-n1 -r1", "%lmagic my line2")
740 740 assert _ip.user_ns["lmagic_out"] == "my line2"
741 741
742 742
743 743 def test_timeit_return():
744 744 """
745 745     test whether timeit -o returns an object
746 746 """
747 747
748 748 res = _ip.run_line_magic('timeit','-n10 -r10 -o 1')
749 749     assert res is not None
750 750
751 751 def test_timeit_quiet():
752 752 """
753 753 test quiet option of timeit magic
754 754 """
755 755 with tt.AssertNotPrints("loops"):
756 756 _ip.run_cell("%timeit -n1 -r1 -q 1")
757 757
758 758 def test_timeit_return_quiet():
759 759 with tt.AssertNotPrints("loops"):
760 760 res = _ip.run_line_magic('timeit', '-n1 -r1 -q -o 1')
761 761         assert res is not None
762 762
763 763 def test_timeit_invalid_return():
764 764 with pytest.raises(SyntaxError):
765 765 _ip.run_line_magic('timeit', 'return')
766 766
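The SyntaxError exercised in `test_timeit_invalid_return` comes from compiling the snippet at module level, where a bare `return` is illegal. The same error is reproducible with the builtin `compile`:

```python
# A bare `return` is only valid inside a function body, so compiling it
# in "exec" mode (i.e. at module level) raises SyntaxError.
try:
    compile("return", "<timeit-body>", "exec")
    outcome = "compiled"
except SyntaxError:
    outcome = "SyntaxError"
print(outcome)  # SyntaxError
```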
767 767 @dec.skipif(execution.profile is None)
768 768 def test_prun_special_syntax():
769 769 "Test %%prun with IPython special syntax"
770 770 @register_line_magic
771 771 def lmagic(line):
772 772 ip = get_ipython()
773 773 ip.user_ns['lmagic_out'] = line
774 774
775 775 # line mode test
776 776 _ip.run_line_magic("prun", "-q %lmagic my line")
777 777 assert _ip.user_ns["lmagic_out"] == "my line"
778 778 # cell mode test
779 779 _ip.run_cell_magic("prun", "-q", "%lmagic my line2")
780 780 assert _ip.user_ns["lmagic_out"] == "my line2"
781 781
782 782
783 783 @dec.skipif(execution.profile is None)
784 784 def test_prun_quotes():
785 785 "Test that prun does not clobber string escapes (GH #1302)"
786 786 _ip.magic(r"prun -q x = '\t'")
787 787 assert _ip.user_ns["x"] == "\t"
788 788
789 789
790 790 def test_extension():
791 791 # Debugging information for failures of this test
792 792 print('sys.path:')
793 793 for p in sys.path:
794 794 print(' ', p)
795 795 print('CWD', os.getcwd())
796 796
797 797 pytest.raises(ImportError, _ip.magic, "load_ext daft_extension")
798 798 daft_path = os.path.join(os.path.dirname(__file__), "daft_extension")
799 799 sys.path.insert(0, daft_path)
800 800 try:
801 801 _ip.user_ns.pop('arq', None)
802 802 invalidate_caches() # Clear import caches
803 803 _ip.run_line_magic("load_ext", "daft_extension")
804 804 assert _ip.user_ns["arq"] == 185
805 805 _ip.run_line_magic("unload_ext", "daft_extension")
806 806 assert 'arq' not in _ip.user_ns
807 807 finally:
808 808 sys.path.remove(daft_path)
809 809
810 810
811 811 def test_notebook_export_json():
812 812 pytest.importorskip("nbformat")
813 813 _ip = get_ipython()
814 814 _ip.history_manager.reset() # Clear any existing history.
815 815     cmds = ["a=1", "def b():\n return a**2", "print('noël, été', b())"]
816 816 for i, cmd in enumerate(cmds, start=1):
817 817 _ip.history_manager.store_inputs(i, cmd)
818 818 with TemporaryDirectory() as td:
819 819 outfile = os.path.join(td, "nb.ipynb")
820 820 _ip.run_line_magic("notebook", "%s" % outfile)
821 821
822 822
823 823 class TestEnv(TestCase):
824 824
825 825 def test_env(self):
826 826 env = _ip.run_line_magic("env", "")
827 827 self.assertTrue(isinstance(env, dict))
828 828
829 829 def test_env_secret(self):
830 830 env = _ip.run_line_magic("env", "")
831 831 hidden = "<hidden>"
832 832 with mock.patch.dict(
833 833 os.environ,
834 834 {
835 835 "API_KEY": "abc123",
836 836 "SECRET_THING": "ssshhh",
837 837 "JUPYTER_TOKEN": "",
838 838 "VAR": "abc"
839 839 }
840 840 ):
841 841 env = _ip.run_line_magic("env", "")
842 842 assert env["API_KEY"] == hidden
843 843 assert env["SECRET_THING"] == hidden
844 844 assert env["JUPYTER_TOKEN"] == hidden
845 845 assert env["VAR"] == "abc"
846 846
847 847 def test_env_get_set_simple(self):
848 848 env = _ip.run_line_magic("env", "var val1")
849 849 self.assertEqual(env, None)
850 850 self.assertEqual(os.environ["var"], "val1")
851 851 self.assertEqual(_ip.run_line_magic("env", "var"), "val1")
852 852 env = _ip.run_line_magic("env", "var=val2")
853 853 self.assertEqual(env, None)
854 854 self.assertEqual(os.environ['var'], 'val2')
855 855
856 856 def test_env_get_set_complex(self):
857 857 env = _ip.run_line_magic("env", "var 'val1 '' 'val2")
858 858 self.assertEqual(env, None)
859 859 self.assertEqual(os.environ['var'], "'val1 '' 'val2")
860 860 self.assertEqual(_ip.run_line_magic("env", "var"), "'val1 '' 'val2")
861 861 env = _ip.run_line_magic("env", 'var=val2 val3="val4')
862 862 self.assertEqual(env, None)
863 863 self.assertEqual(os.environ['var'], 'val2 val3="val4')
864 864
865 865 def test_env_set_bad_input(self):
866 866 self.assertRaises(UsageError, lambda: _ip.run_line_magic("set_env", "var"))
867 867
868 868 def test_env_set_whitespace(self):
869 869 self.assertRaises(UsageError, lambda: _ip.run_line_magic("env", "var A=B"))
870 870
871 871
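The get/set cases in `TestEnv` boil down to splitting the argument either on the first `=` or, failing that, on the first space. A hypothetical sketch of that split (the function name is mine, not the `%env` implementation; the real magic additionally rejects names containing whitespace, which this sketch does not):

```python
def split_env_assignment(line):
    """Split a '%env' argument into (name, value).

    'var=val2 val3="val4' -> ('var', 'val2 val3="val4')  (first '=' wins)
    "var 'val1 '' 'val2"  -> ('var', "'val1 '' 'val2")   (first space wins)
    """
    if "=" in line:
        name, value = line.split("=", 1)
    else:
        name, _, value = line.partition(" ")
    return name.strip(), value

print(split_env_assignment('var=val2 val3="val4'))
print(split_env_assignment("var 'val1 '' 'val2"))
```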
872 872 class CellMagicTestCase(TestCase):
873 873
874 874 def check_ident(self, magic):
875 875 # Manually called, we get the result
876 876 out = _ip.run_cell_magic(magic, "a", "b")
877 877 assert out == ("a", "b")
878 878 # Via run_cell, it goes into the user's namespace via displayhook
879 879 _ip.run_cell("%%" + magic + " c\nd\n")
880 880 assert _ip.user_ns["_"] == ("c", "d\n")
881 881
882 882 def test_cell_magic_func_deco(self):
883 883 "Cell magic using simple decorator"
884 884 @register_cell_magic
885 885 def cellm(line, cell):
886 886 return line, cell
887 887
888 888 self.check_ident('cellm')
889 889
890 890 def test_cell_magic_reg(self):
891 891 "Cell magic manually registered"
892 892 def cellm(line, cell):
893 893 return line, cell
894 894
895 895 _ip.register_magic_function(cellm, 'cell', 'cellm2')
896 896 self.check_ident('cellm2')
897 897
898 898 def test_cell_magic_class(self):
899 899 "Cell magics declared via a class"
900 900 @magics_class
901 901 class MyMagics(Magics):
902 902
903 903 @cell_magic
904 904 def cellm3(self, line, cell):
905 905 return line, cell
906 906
907 907 _ip.register_magics(MyMagics)
908 908 self.check_ident('cellm3')
909 909
910 910 def test_cell_magic_class2(self):
911 911 "Cell magics declared via a class, #2"
912 912 @magics_class
913 913 class MyMagics2(Magics):
914 914
915 915 @cell_magic('cellm4')
916 916 def cellm33(self, line, cell):
917 917 return line, cell
918 918
919 919 _ip.register_magics(MyMagics2)
920 920 self.check_ident('cellm4')
921 921 # Check that nothing is registered as 'cellm33'
922 922 c33 = _ip.find_cell_magic('cellm33')
923 923         assert c33 is None
924 924
925 925 def test_file():
926 926 """Basic %%writefile"""
927 927 ip = get_ipython()
928 928 with TemporaryDirectory() as td:
929 929 fname = os.path.join(td, "file1")
930 930 ip.run_cell_magic(
931 931 "writefile",
932 932 fname,
933 933 "\n".join(
934 934 [
935 935 "line1",
936 936 "line2",
937 937 ]
938 938 ),
939 939 )
940 940 s = Path(fname).read_text(encoding="utf-8")
941 941 assert "line1\n" in s
942 942 assert "line2" in s
943 943
944 944
945 945 @dec.skip_win32
946 946 def test_file_single_quote():
947 947 """Basic %%writefile with embedded single quotes"""
948 948 ip = get_ipython()
949 949 with TemporaryDirectory() as td:
950 950 fname = os.path.join(td, "'file1'")
951 951 ip.run_cell_magic(
952 952 "writefile",
953 953 fname,
954 954 "\n".join(
955 955 [
956 956 "line1",
957 957 "line2",
958 958 ]
959 959 ),
960 960 )
961 961 s = Path(fname).read_text(encoding="utf-8")
962 962 assert "line1\n" in s
963 963 assert "line2" in s
964 964
965 965
966 966 @dec.skip_win32
967 967 def test_file_double_quote():
968 968 """Basic %%writefile with embedded double quotes"""
969 969 ip = get_ipython()
970 970 with TemporaryDirectory() as td:
971 971 fname = os.path.join(td, '"file1"')
972 972 ip.run_cell_magic(
973 973 "writefile",
974 974 fname,
975 975 "\n".join(
976 976 [
977 977 "line1",
978 978 "line2",
979 979 ]
980 980 ),
981 981 )
982 982 s = Path(fname).read_text(encoding="utf-8")
983 983 assert "line1\n" in s
984 984 assert "line2" in s
985 985
986 986
987 987 def test_file_var_expand():
988 988 """%%writefile $filename"""
989 989 ip = get_ipython()
990 990 with TemporaryDirectory() as td:
991 991 fname = os.path.join(td, "file1")
992 992 ip.user_ns["filename"] = fname
993 993 ip.run_cell_magic(
994 994 "writefile",
995 995 "$filename",
996 996 "\n".join(
997 997 [
998 998 "line1",
999 999 "line2",
1000 1000 ]
1001 1001 ),
1002 1002 )
1003 1003 s = Path(fname).read_text(encoding="utf-8")
1004 1004 assert "line1\n" in s
1005 1005 assert "line2" in s
1006 1006
1007 1007
1008 1008 def test_file_unicode():
1009 1009 """%%writefile with unicode cell"""
1010 1010 ip = get_ipython()
1011 1011 with TemporaryDirectory() as td:
1012 1012 fname = os.path.join(td, 'file1')
1013 1013 ip.run_cell_magic("writefile", fname, u'\n'.join([
1014 1014             u'liné1',
1015 1015             u'liné2',
1016 1016 ]))
1017 1017 with io.open(fname, encoding='utf-8') as f:
1018 1018 s = f.read()
1019 1019         assert "liné1\n" in s
1020 1020         assert "liné2" in s
1021 1021
1022 1022
1023 1023 def test_file_amend():
1024 1024 """%%writefile -a amends files"""
1025 1025 ip = get_ipython()
1026 1026 with TemporaryDirectory() as td:
1027 1027 fname = os.path.join(td, "file2")
1028 1028 ip.run_cell_magic(
1029 1029 "writefile",
1030 1030 fname,
1031 1031 "\n".join(
1032 1032 [
1033 1033 "line1",
1034 1034 "line2",
1035 1035 ]
1036 1036 ),
1037 1037 )
1038 1038 ip.run_cell_magic(
1039 1039 "writefile",
1040 1040 "-a %s" % fname,
1041 1041 "\n".join(
1042 1042 [
1043 1043 "line3",
1044 1044 "line4",
1045 1045 ]
1046 1046 ),
1047 1047 )
1048 1048 s = Path(fname).read_text(encoding="utf-8")
1049 1049 assert "line1\n" in s
1050 1050 assert "line3\n" in s
1051 1051
1052 1052
1053 1053 def test_file_spaces():
1054 1054 """%%file with spaces in filename"""
1055 1055 ip = get_ipython()
1056 1056 with TemporaryWorkingDirectory() as td:
1057 1057 fname = "file name"
1058 1058 ip.run_cell_magic(
1059 1059 "file",
1060 1060 '"%s"' % fname,
1061 1061 "\n".join(
1062 1062 [
1063 1063 "line1",
1064 1064 "line2",
1065 1065 ]
1066 1066 ),
1067 1067 )
1068 1068 s = Path(fname).read_text(encoding="utf-8")
1069 1069 assert "line1\n" in s
1070 1070 assert "line2" in s
1071 1071
1072 1072
1073 1073 def test_script_config():
1074 1074 ip = get_ipython()
1075 1075 ip.config.ScriptMagics.script_magics = ['whoda']
1076 1076 sm = script.ScriptMagics(shell=ip)
1077 1077 assert "whoda" in sm.magics["cell"]
1078 1078
1079 1079
1080 1080 def test_script_out():
1081 1081 ip = get_ipython()
1082 1082 ip.run_cell_magic("script", f"--out output {sys.executable}", "print('hi')")
1083 1083 assert ip.user_ns["output"].strip() == "hi"
1084 1084
1085 1085
1086 1086 def test_script_err():
1087 1087 ip = get_ipython()
1088 1088 ip.run_cell_magic(
1089 1089 "script",
1090 1090 f"--err error {sys.executable}",
1091 1091 "import sys; print('hello', file=sys.stderr)",
1092 1092 )
1093 1093 assert ip.user_ns["error"].strip() == "hello"
1094 1094
1095 1095
1096 1096 def test_script_out_err():
1097
1098 1097 ip = get_ipython()
1099 1098 ip.run_cell_magic(
1100 1099 "script",
1101 1100 f"--out output --err error {sys.executable}",
1102 1101 "\n".join(
1103 1102 [
1104 1103 "import sys",
1105 1104 "print('hi')",
1106 1105 "print('hello', file=sys.stderr)",
1107 1106 ]
1108 1107 ),
1109 1108 )
1110 1109 assert ip.user_ns["output"].strip() == "hi"
1111 1110 assert ip.user_ns["error"].strip() == "hello"
1112 1111
1113 1112
1114 1113 async def test_script_bg_out():
1115 1114 ip = get_ipython()
1116 1115 ip.run_cell_magic("script", f"--bg --out output {sys.executable}", "print('hi')")
1117 1116 assert (await ip.user_ns["output"].read()).strip() == b"hi"
1118 1117 assert ip.user_ns["output"].at_eof()
1119 1118
1120 1119
1121 1120 async def test_script_bg_err():
1122 1121 ip = get_ipython()
1123 1122 ip.run_cell_magic(
1124 1123 "script",
1125 1124 f"--bg --err error {sys.executable}",
1126 1125 "import sys; print('hello', file=sys.stderr)",
1127 1126 )
1128 1127 assert (await ip.user_ns["error"].read()).strip() == b"hello"
1129 1128 assert ip.user_ns["error"].at_eof()
1130 1129
1131 1130
1132 1131 async def test_script_bg_out_err():
1133 1132 ip = get_ipython()
1134 1133 ip.run_cell_magic(
1135 1134 "script",
1136 1135 f"--bg --out output --err error {sys.executable}",
1137 1136 "\n".join(
1138 1137 [
1139 1138 "import sys",
1140 1139 "print('hi')",
1141 1140 "print('hello', file=sys.stderr)",
1142 1141 ]
1143 1142 ),
1144 1143 )
1145 1144 assert (await ip.user_ns["output"].read()).strip() == b"hi"
1146 1145 assert (await ip.user_ns["error"].read()).strip() == b"hello"
1147 1146 assert ip.user_ns["output"].at_eof()
1148 1147 assert ip.user_ns["error"].at_eof()
1149 1148
1150 1149
1151 1150 async def test_script_bg_proc():
1152 1151 ip = get_ipython()
1153 1152 ip.run_cell_magic(
1154 1153 "script",
1155 1154 f"--bg --out output --proc p {sys.executable}",
1156 1155 "\n".join(
1157 1156 [
1158 1157 "import sys",
1159 1158 "print('hi')",
1160 1159 "print('hello', file=sys.stderr)",
1161 1160 ]
1162 1161 ),
1163 1162 )
1164 1163 p = ip.user_ns["p"]
1165 1164 await p.wait()
1166 1165 assert p.returncode == 0
1167 1166 assert (await p.stdout.read()).strip() == b"hi"
1168 1167 # not captured, so empty
1169 1168 assert (await p.stderr.read()) == b""
1170 1169 assert p.stdout.at_eof()
1171 1170 assert p.stderr.at_eof()
1172 1171
1173 1172
1174 1173 def test_script_defaults():
1175 1174 ip = get_ipython()
1176 1175 for cmd in ['sh', 'bash', 'perl', 'ruby']:
1177 1176 try:
1178 1177 find_cmd(cmd)
1179 1178 except Exception:
1180 1179 pass
1181 1180 else:
1182 1181 assert cmd in ip.magics_manager.magics["cell"]
1183 1182
1184 1183
1185 1184 @magics_class
1186 1185 class FooFoo(Magics):
1187 1186 """class with both %foo and %%foo magics"""
1188 1187 @line_magic('foo')
1189 1188 def line_foo(self, line):
1190 1189 "I am line foo"
1191 1190 pass
1192 1191
1193 1192 @cell_magic("foo")
1194 1193 def cell_foo(self, line, cell):
1195 1194 "I am cell foo, not line foo"
1196 1195 pass
1197 1196
1198 1197 def test_line_cell_info():
1199 1198 """%%foo and %foo magics are distinguishable to inspect"""
1200 1199 ip = get_ipython()
1201 1200 ip.magics_manager.register(FooFoo)
1202 1201 oinfo = ip.object_inspect("foo")
1203 1202 assert oinfo["found"] is True
1204 1203 assert oinfo["ismagic"] is True
1205 1204
1206 1205 oinfo = ip.object_inspect("%%foo")
1207 1206 assert oinfo["found"] is True
1208 1207 assert oinfo["ismagic"] is True
1209 1208 assert oinfo["docstring"] == FooFoo.cell_foo.__doc__
1210 1209
1211 1210 oinfo = ip.object_inspect("%foo")
1212 1211 assert oinfo["found"] is True
1213 1212 assert oinfo["ismagic"] is True
1214 1213 assert oinfo["docstring"] == FooFoo.line_foo.__doc__
1215 1214
1216 1215
1217 1216 def test_multiple_magics():
1218 1217 ip = get_ipython()
1219 1218 foo1 = FooFoo(ip)
1220 1219 foo2 = FooFoo(ip)
1221 1220 mm = ip.magics_manager
1222 1221 mm.register(foo1)
1223 1222 assert mm.magics["line"]["foo"].__self__ is foo1
1224 1223 mm.register(foo2)
1225 1224 assert mm.magics["line"]["foo"].__self__ is foo2
1226 1225
1227 1226
1228 1227 def test_alias_magic():
1229 1228 """Test %alias_magic."""
1230 1229 ip = get_ipython()
1231 1230 mm = ip.magics_manager
1232 1231
1233 1232 # Basic operation: both cell and line magics are created, if possible.
1234 1233 ip.run_line_magic("alias_magic", "timeit_alias timeit")
1235 1234 assert "timeit_alias" in mm.magics["line"]
1236 1235 assert "timeit_alias" in mm.magics["cell"]
1237 1236
1238 1237 # --cell is specified, line magic not created.
1239 1238 ip.run_line_magic("alias_magic", "--cell timeit_cell_alias timeit")
1240 1239 assert "timeit_cell_alias" not in mm.magics["line"]
1241 1240 assert "timeit_cell_alias" in mm.magics["cell"]
1242 1241
1243 1242 # Test that line alias is created successfully.
1244 1243 ip.run_line_magic("alias_magic", "--line env_alias env")
1245 1244 assert ip.run_line_magic("env", "") == ip.run_line_magic("env_alias", "")
1246 1245
1247 1246 # Test that line alias with parameters passed in is created successfully.
1248 1247 ip.run_line_magic(
1249 1248 "alias_magic", "--line history_alias history --params " + shlex.quote("3")
1250 1249 )
1251 1250 assert "history_alias" in mm.magics["line"]
1252 1251
1253 1252
1254 1253 def test_save():
1255 1254 """Test %save."""
1256 1255 ip = get_ipython()
1257 1256 ip.history_manager.reset() # Clear any existing history.
1258 1257 cmds = ["a=1", "def b():\n return a**2", "print(a, b())"]
1259 1258 for i, cmd in enumerate(cmds, start=1):
1260 1259 ip.history_manager.store_inputs(i, cmd)
1261 1260 with TemporaryDirectory() as tmpdir:
1262 1261 file = os.path.join(tmpdir, "testsave.py")
1263 1262 ip.run_line_magic("save", "%s 1-10" % file)
1264 1263 content = Path(file).read_text(encoding="utf-8")
1265 1264 assert content.count(cmds[0]) == 1
1266 1265 assert "coding: utf-8" in content
1267 1266 ip.run_line_magic("save", "-a %s 1-10" % file)
1268 1267 content = Path(file).read_text(encoding="utf-8")
1269 1268 assert content.count(cmds[0]) == 2
1270 1269 assert "coding: utf-8" in content
1271 1270
1272 1271
1273 1272 def test_save_with_no_args():
1274 1273 ip = get_ipython()
1275 1274 ip.history_manager.reset() # Clear any existing history.
1276 1275 cmds = ["a=1", "def b():\n return a**2", "print(a, b())", "%save"]
1277 1276 for i, cmd in enumerate(cmds, start=1):
1278 1277 ip.history_manager.store_inputs(i, cmd)
1279 1278
1280 1279 with TemporaryDirectory() as tmpdir:
1281 1280 path = os.path.join(tmpdir, "testsave.py")
1282 1281 ip.run_line_magic("save", path)
1283 1282 content = Path(path).read_text(encoding="utf-8")
1284 1283 expected_content = dedent(
1285 1284 """\
1286 1285 # coding: utf-8
1287 1286 a=1
1288 1287 def b():
1289 1288 return a**2
1290 1289 print(a, b())
1291 1290 """
1292 1291 )
1293 1292 assert content == expected_content
1294 1293
1295 1294
1296 1295 def test_store():
1297 1296 """Test %store."""
1298 1297 ip = get_ipython()
1299 1298 ip.run_line_magic('load_ext', 'storemagic')
1300 1299
1301 1300 # make sure the storage is empty
1302 1301 ip.run_line_magic("store", "-z")
1303 1302 ip.user_ns["var"] = 42
1304 1303 ip.run_line_magic("store", "var")
1305 1304 ip.user_ns["var"] = 39
1306 1305 ip.run_line_magic("store", "-r")
1307 1306 assert ip.user_ns["var"] == 42
1308 1307
1309 1308 ip.run_line_magic("store", "-d var")
1310 1309 ip.user_ns["var"] = 39
1311 1310 ip.run_line_magic("store", "-r")
1312 1311 assert ip.user_ns["var"] == 39
1313 1312
1314 1313
1315 1314 def _run_edit_test(arg_s, exp_filename=None,
1316 1315 exp_lineno=-1,
1317 1316 exp_contents=None,
1318 1317 exp_is_temp=None):
1319 1318 ip = get_ipython()
1320 1319 M = code.CodeMagics(ip)
1321 1320 last_call = ['','']
1322 1321 opts,args = M.parse_options(arg_s,'prxn:')
1323 1322 filename, lineno, is_temp = M._find_edit_target(ip, args, opts, last_call)
1324 1323
1325 1324 if exp_filename is not None:
1326 1325 assert exp_filename == filename
1327 1326 if exp_contents is not None:
1328 1327 with io.open(filename, 'r', encoding='utf-8') as f:
1329 1328 contents = f.read()
1330 1329 assert exp_contents == contents
1331 1330 if exp_lineno != -1:
1332 1331 assert exp_lineno == lineno
1333 1332 if exp_is_temp is not None:
1334 1333 assert exp_is_temp == is_temp
1335 1334
1336 1335
1337 1336 def test_edit_interactive():
1338 1337 """%edit on interactively defined objects"""
1339 1338 ip = get_ipython()
1340 1339 n = ip.execution_count
1341 1340 ip.run_cell("def foo(): return 1", store_history=True)
1342 1341
1343 1342 with pytest.raises(code.InteractivelyDefined) as e:
1344 1343 _run_edit_test("foo")
1345 1344 assert e.value.index == n
1346 1345
1347 1346
1348 1347 def test_edit_cell():
1349 1348 """%edit [cell id]"""
1350 1349 ip = get_ipython()
1351 1350
1352 1351 ip.run_cell("def foo(): return 1", store_history=True)
1353 1352
1354 1353 # test
1355 1354 _run_edit_test("1", exp_contents=ip.user_ns['In'][1], exp_is_temp=True)
1356 1355
1357 1356 def test_edit_fname():
1358 1357 """%edit file"""
1359 1358 # test
1360 1359 _run_edit_test("test file.py", exp_filename="test file.py")
1361 1360
1362 1361 def test_bookmark():
1363 1362 ip = get_ipython()
1364 1363 ip.run_line_magic('bookmark', 'bmname')
1365 1364 with tt.AssertPrints('bmname'):
1366 1365 ip.run_line_magic('bookmark', '-l')
1367 1366 ip.run_line_magic('bookmark', '-d bmname')
1368 1367
1369 1368 def test_ls_magic():
1370 1369 ip = get_ipython()
1371 1370 json_formatter = ip.display_formatter.formatters['application/json']
1372 1371 json_formatter.enabled = True
1373 1372 lsmagic = ip.run_line_magic("lsmagic", "")
1374 1373 with warnings.catch_warnings(record=True) as w:
1375 1374 j = json_formatter(lsmagic)
1376 1375 assert sorted(j) == ["cell", "line"]
1377 1376 assert w == [] # no warnings
1378 1377
1379 1378
1380 1379 def test_strip_initial_indent():
1381 1380 def sii(s):
1382 1381 lines = s.splitlines()
1383 1382 return '\n'.join(code.strip_initial_indent(lines))
1384 1383
1385 1384 assert sii(" a = 1\nb = 2") == "a = 1\nb = 2"
1386 1385 assert sii(" a\n b\nc") == "a\n b\nc"
1387 1386 assert sii("a\n b") == "a\n b"
1388 1387
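For contrast with `strip_initial_indent`, which keys off the first line's indentation (the first assert above strips the leading spaces even though `b = 2` is flush left), the stdlib's `textwrap.dedent` removes only whitespace common to every line, so a single unindented line disables it entirely:

```python
import textwrap

# No common margin: "b = 2" starts at column 0, so nothing is stripped.
assert textwrap.dedent("  a = 1\nb = 2") == "  a = 1\nb = 2"

# A shared two-space margin is removed from every line.
assert textwrap.dedent("  a\n  b") == "a\nb"
```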
1389 1388 def test_logging_magic_quiet_from_arg():
1390 1389 _ip.config.LoggingMagics.quiet = False
1391 1390 lm = logging.LoggingMagics(shell=_ip)
1392 1391 with TemporaryDirectory() as td:
1393 1392 try:
1394 1393 with tt.AssertNotPrints(re.compile("Activating.*")):
1395 1394 lm.logstart('-q {}'.format(
1396 1395 os.path.join(td, "quiet_from_arg.log")))
1397 1396 finally:
1398 1397 _ip.logger.logstop()
1399 1398
1400 1399 def test_logging_magic_quiet_from_config():
1401 1400 _ip.config.LoggingMagics.quiet = True
1402 1401 lm = logging.LoggingMagics(shell=_ip)
1403 1402 with TemporaryDirectory() as td:
1404 1403 try:
1405 1404 with tt.AssertNotPrints(re.compile("Activating.*")):
1406 1405 lm.logstart(os.path.join(td, "quiet_from_config.log"))
1407 1406 finally:
1408 1407 _ip.logger.logstop()
1409 1408
1410 1409
1411 1410 def test_logging_magic_not_quiet():
1412 1411 _ip.config.LoggingMagics.quiet = False
1413 1412 lm = logging.LoggingMagics(shell=_ip)
1414 1413 with TemporaryDirectory() as td:
1415 1414 try:
1416 1415 with tt.AssertPrints(re.compile("Activating.*")):
1417 1416 lm.logstart(os.path.join(td, "not_quiet.log"))
1418 1417 finally:
1419 1418 _ip.logger.logstop()
1420 1419
1421 1420
1422 1421 def test_time_no_var_expand():
1423 1422 _ip.user_ns["a"] = 5
1424 1423 _ip.user_ns["b"] = []
1425 1424 _ip.run_line_magic("time", 'b.append("{a}")')
1426 1425 assert _ip.user_ns["b"] == ["{a}"]
1427 1426
1428 1427
1429 1428 # this is slow, put at the end for local testing.
1430 1429 def test_timeit_arguments():
1431 1430 "Test valid timeit arguments, should not cause SyntaxError (GH #1269)"
1432 1431 _ip.run_line_magic("timeit", "-n1 -r1 a=('#')")
1433 1432
1434 1433
1435 1434 MINIMAL_LAZY_MAGIC = """
1436 1435 from IPython.core.magic import (
1437 1436 Magics,
1438 1437 magics_class,
1439 1438 line_magic,
1440 1439 cell_magic,
1441 1440 )
1442 1441
1443 1442
1444 1443 @magics_class
1445 1444 class LazyMagics(Magics):
1446 1445 @line_magic
1447 1446 def lazy_line(self, line):
1448 1447 print("Lazy Line")
1449 1448
1450 1449 @cell_magic
1451 1450 def lazy_cell(self, line, cell):
1452 1451 print("Lazy Cell")
1453 1452
1454 1453
1455 1454 def load_ipython_extension(ipython):
1456 1455 ipython.register_magics(LazyMagics)
1457 1456 """
1458 1457
1459 1458
1460 1459 def test_lazy_magics():
1461 1460 with pytest.raises(UsageError):
1462 1461 ip.run_line_magic("lazy_line", "")
1463 1462
1464 1463 startdir = os.getcwd()
1465 1464
1466 1465 with TemporaryDirectory() as tmpdir:
1467 1466 with prepended_to_syspath(tmpdir):
1468 1467 ptempdir = Path(tmpdir)
1469 1468 tf = ptempdir / "lazy_magic_module.py"
1470 1469             tf.write_text(MINIMAL_LAZY_MAGIC, encoding="utf-8")
1471 1470 ip.magics_manager.register_lazy("lazy_line", Path(tf.name).name[:-3])
1472 1471 with tt.AssertPrints("Lazy Line"):
1473 1472 ip.run_line_magic("lazy_line", "")
1474 1473
1475 1474
1476 1475 TEST_MODULE = """
1477 1476 print('Loaded my_tmp')
1478 1477 if __name__ == "__main__":
1479 1478 print('I just ran a script')
1480 1479 """
1481 1480
1482 1481 def test_run_module_from_import_hook():
1483 1482 "Test that a module can be loaded via an import hook"
1484 1483 with TemporaryDirectory() as tmpdir:
1485 1484 fullpath = os.path.join(tmpdir, "my_tmp.py")
1486 1485 Path(fullpath).write_text(TEST_MODULE, encoding="utf-8")
1487 1486
1488 1487 import importlib.abc
1489 1488 import importlib.util
1490 1489
1491 1490 class MyTempImporter(importlib.abc.MetaPathFinder, importlib.abc.SourceLoader):
1492 1491 def find_spec(self, fullname, path, target=None):
1493 1492 if fullname == "my_tmp":
1494 1493 return importlib.util.spec_from_loader(fullname, self)
1495 1494
1496 1495 def get_filename(self, fullname):
1497 1496 assert fullname == "my_tmp"
1498 1497 return fullpath
1499 1498
1500 1499 def get_data(self, path):
1501 1500 assert Path(path).samefile(fullpath)
1502 1501 return Path(fullpath).read_text(encoding="utf-8")
1503 1502
1504 1503 sys.meta_path.insert(0, MyTempImporter())
1505 1504
1506 1505 with capture_output() as captured:
1507 1506 _ip.run_line_magic("run", "-m my_tmp")
1508 1507 _ip.run_cell("import my_tmp")
1509 1508
1510 1509 output = "Loaded my_tmp\nI just ran a script\nLoaded my_tmp\n"
1511 1510 assert output == captured.stdout
1512 1511
1513 1512 sys.meta_path.pop(0)
@@ -1,271 +1,270 b''
1 1 """Tests for pylab tools module.
2 2 """
3 3
4 4 # Copyright (c) IPython Development Team.
5 5 # Distributed under the terms of the Modified BSD License.
6 6
7 7
8 8 from binascii import a2b_base64
9 9 from io import BytesIO
10 10
11 11 import pytest
12 12
13 13 matplotlib = pytest.importorskip("matplotlib")
14 14 matplotlib.use('Agg')
15 15 from matplotlib.figure import Figure
16 16
17 17 from matplotlib import pyplot as plt
18 18 from matplotlib_inline import backend_inline
19 19 import numpy as np
20 20
21 21 from IPython.core.getipython import get_ipython
22 22 from IPython.core.interactiveshell import InteractiveShell
23 23 from IPython.core.display import _PNG, _JPEG
24 24 from .. import pylabtools as pt
25 25
26 26 from IPython.testing import decorators as dec
27 27
28 28
29 29 def test_figure_to_svg():
30 30 # simple empty-figure test
31 31 fig = plt.figure()
32 32 assert pt.print_figure(fig, "svg") is None
33 33
34 34 plt.close('all')
35 35
36 36 # simple check for at least svg-looking output
37 37 fig = plt.figure()
38 38 ax = fig.add_subplot(1,1,1)
39 39 ax.plot([1,2,3])
40 40 plt.draw()
41 41 svg = pt.print_figure(fig, "svg")[:100].lower()
42 42 assert "doctype svg" in svg
43 43
44 44
45 45 def _check_pil_jpeg_bytes():
46 46 """Skip if PIL can't write JPEGs to BytesIO objects"""
47 47 # PIL's JPEG plugin can't write to BytesIO objects
48 48 # Pillow fixes this
49 49 from PIL import Image
50 50 buf = BytesIO()
51 51 img = Image.new("RGB", (4,4))
52 52 try:
53 53 img.save(buf, 'jpeg')
54 54 except Exception as e:
55 55 ename = e.__class__.__name__
56 56 raise pytest.skip("PIL can't write JPEG to BytesIO: %s: %s" % (ename, e)) from e
57 57
58 58 @dec.skip_without("PIL.Image")
59 59 def test_figure_to_jpeg():
60 60 _check_pil_jpeg_bytes()
61 61 # simple check for at least jpeg-looking output
62 62 fig = plt.figure()
63 63 ax = fig.add_subplot(1,1,1)
64 64 ax.plot([1,2,3])
65 65 plt.draw()
66 66 jpeg = pt.print_figure(fig, 'jpeg', pil_kwargs={'optimize': 50})[:100].lower()
67 67 assert jpeg.startswith(_JPEG)
68 68
69 69 def test_retina_figure():
70 70 # simple empty-figure test
71 71 fig = plt.figure()
72 72 assert pt.retina_figure(fig) == None
73 73 plt.close('all')
74 74
75 75 fig = plt.figure()
76 76 ax = fig.add_subplot(1,1,1)
77 77 ax.plot([1,2,3])
78 78 plt.draw()
79 79 png, md = pt.retina_figure(fig)
80 80 assert png.startswith(_PNG)
81 81 assert "width" in md
82 82 assert "height" in md
83 83
84 84
85 85 _fmt_mime_map = {
86 86 'png': 'image/png',
87 87 'jpeg': 'image/jpeg',
88 88 'pdf': 'application/pdf',
89 89 'retina': 'image/png',
90 90 'svg': 'image/svg+xml',
91 91 }
92 92
93 93 def test_select_figure_formats_str():
94 94 ip = get_ipython()
95 95 for fmt, active_mime in _fmt_mime_map.items():
96 96 pt.select_figure_formats(ip, fmt)
97 97 for mime, f in ip.display_formatter.formatters.items():
98 98 if mime == active_mime:
99 99 assert Figure in f
100 100 else:
101 101 assert Figure not in f
102 102
103 103 def test_select_figure_formats_kwargs():
104 104 ip = get_ipython()
105 105 kwargs = dict(bbox_inches="tight")
106 106 pt.select_figure_formats(ip, "png", **kwargs)
107 107 formatter = ip.display_formatter.formatters["image/png"]
108 108 f = formatter.lookup_by_type(Figure)
109 109 cell = f.keywords
110 110 expected = kwargs
111 111 expected["base64"] = True
112 112 expected["fmt"] = "png"
113 113 assert cell == expected
114 114
115 115 # check that the formatter doesn't raise
116 116 fig = plt.figure()
117 117 ax = fig.add_subplot(1,1,1)
118 118 ax.plot([1,2,3])
119 119 plt.draw()
120 120 formatter.enabled = True
121 121 png = formatter(fig)
122 122 assert isinstance(png, str)
123 123 png_bytes = a2b_base64(png)
124 124 assert png_bytes.startswith(_PNG)
125 125
126 126 def test_select_figure_formats_set():
127 127 ip = get_ipython()
128 128 for fmts in [
129 129 {'png', 'svg'},
130 130 ['png'],
131 131 ('jpeg', 'pdf', 'retina'),
132 132 {'svg'},
133 133 ]:
134 134 active_mimes = {_fmt_mime_map[fmt] for fmt in fmts}
135 135 pt.select_figure_formats(ip, fmts)
136 136 for mime, f in ip.display_formatter.formatters.items():
137 137 if mime in active_mimes:
138 138 assert Figure in f
139 139 else:
140 140 assert Figure not in f
141 141
142 142 def test_select_figure_formats_bad():
143 143 ip = get_ipython()
144 144 with pytest.raises(ValueError):
145 145 pt.select_figure_formats(ip, 'foo')
146 146 with pytest.raises(ValueError):
147 147 pt.select_figure_formats(ip, {'png', 'foo'})
148 148 with pytest.raises(ValueError):
149 149 pt.select_figure_formats(ip, ['retina', 'pdf', 'bar', 'bad'])
150 150
151 151 def test_import_pylab():
152 152 ns = {}
153 153 pt.import_pylab(ns, import_all=False)
154 154 assert "plt" in ns
155 155 assert ns["np"] == np
156 156
157 157
158 158 class TestPylabSwitch(object):
159 159 class Shell(InteractiveShell):
160 160 def init_history(self):
161 161 """Sets up the command history, and starts regular autosaves."""
162 162 self.config.HistoryManager.hist_file = ":memory:"
163 163 super().init_history()
164 164
165 165 def enable_gui(self, gui):
166 166 pass
167 167
168 168 def setup(self):
169 169 import matplotlib
170 170 def act_mpl(backend):
171 171 matplotlib.rcParams['backend'] = backend
172 172
173 173 # Save rcParams since they get modified
174 174 self._saved_rcParams = matplotlib.rcParams
175 175 self._saved_rcParamsOrig = matplotlib.rcParamsOrig
176 176 matplotlib.rcParams = dict(backend='Qt4Agg')
177 177 matplotlib.rcParamsOrig = dict(backend='Qt4Agg')
178 178
179 179 # Mock out functions
180 180 self._save_am = pt.activate_matplotlib
181 181 pt.activate_matplotlib = act_mpl
182 182 self._save_ip = pt.import_pylab
183 183 pt.import_pylab = lambda *a,**kw:None
184 184 self._save_cis = backend_inline.configure_inline_support
185 185 backend_inline.configure_inline_support = lambda *a, **kw: None
186 186
187 187 def teardown(self):
188 188 pt.activate_matplotlib = self._save_am
189 189 pt.import_pylab = self._save_ip
190 190 backend_inline.configure_inline_support = self._save_cis
191 191 import matplotlib
192 192 matplotlib.rcParams = self._saved_rcParams
193 193 matplotlib.rcParamsOrig = self._saved_rcParamsOrig
194 194
195 195 def test_qt(self):
196
197 196 s = self.Shell()
198 197 gui, backend = s.enable_matplotlib(None)
199 198 assert gui == "qt"
200 199 assert s.pylab_gui_select == "qt"
201 200
202 201 gui, backend = s.enable_matplotlib("inline")
203 202 assert gui == "inline"
204 203 assert s.pylab_gui_select == "qt"
205 204
206 205 gui, backend = s.enable_matplotlib("qt")
207 206 assert gui == "qt"
208 207 assert s.pylab_gui_select == "qt"
209 208
210 209 gui, backend = s.enable_matplotlib("inline")
211 210 assert gui == "inline"
212 211 assert s.pylab_gui_select == "qt"
213 212
214 213 gui, backend = s.enable_matplotlib()
215 214 assert gui == "qt"
216 215 assert s.pylab_gui_select == "qt"
217 216
218 217 def test_inline(self):
219 218 s = self.Shell()
220 219 gui, backend = s.enable_matplotlib("inline")
221 220 assert gui == "inline"
222 221 assert s.pylab_gui_select == None
223 222
224 223 gui, backend = s.enable_matplotlib("inline")
225 224 assert gui == "inline"
226 225 assert s.pylab_gui_select == None
227 226
228 227 gui, backend = s.enable_matplotlib("qt")
229 228 assert gui == "qt"
230 229 assert s.pylab_gui_select == "qt"
231 230
232 231 def test_inline_twice(self):
233 232 "Using '%matplotlib inline' twice should not reset formatters"
234 233
235 234 ip = self.Shell()
236 235 gui, backend = ip.enable_matplotlib("inline")
237 236 assert gui == "inline"
238 237
239 238 fmts = {'png'}
240 239 active_mimes = {_fmt_mime_map[fmt] for fmt in fmts}
241 240 pt.select_figure_formats(ip, fmts)
242 241
243 242 gui, backend = ip.enable_matplotlib("inline")
244 243 assert gui == "inline"
245 244
246 245 for mime, f in ip.display_formatter.formatters.items():
247 246 if mime in active_mimes:
248 247 assert Figure in f
249 248 else:
250 249 assert Figure not in f
251 250
252 251 def test_qt_gtk(self):
253 252 s = self.Shell()
254 253 gui, backend = s.enable_matplotlib("qt")
255 254 assert gui == "qt"
256 255 assert s.pylab_gui_select == "qt"
257 256
258 257 gui, backend = s.enable_matplotlib("gtk")
259 258 assert gui == "qt"
260 259 assert s.pylab_gui_select == "qt"
261 260
262 261
263 262 def test_no_gui_backends():
264 263 for k in ['agg', 'svg', 'pdf', 'ps']:
265 264 assert k not in pt.backend2gui
266 265
267 266
268 267 def test_figure_no_canvas():
269 268 fig = Figure()
270 269 fig.canvas = None
271 270 pt.print_figure(fig)
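The `_fmt_mime_map` tests above exercise two behaviors of `select_figure_formats`: mapping format names to mime types (accepting a single string or any iterable) and raising `ValueError` for unknown formats. A simplified, dict-only sketch of that validation logic (the real implementation lives in `IPython.core.pylabtools` and also mutates the display formatters):

```python
FMT_MIME = {
    "png": "image/png",
    "jpeg": "image/jpeg",
    "pdf": "application/pdf",
    "retina": "image/png",  # retina output is PNG at doubled resolution
    "svg": "image/svg+xml",
}


def active_mimes(formats):
    """Return the set of mime types enabled by *formats*.

    Accepts a single format name or an iterable of names; raises
    ValueError on unknown formats, mirroring select_figure_formats.
    """
    if isinstance(formats, str):
        formats = {formats}
    bad = set(formats) - set(FMT_MIME)
    if bad:
        raise ValueError(
            "supported formats are: %s, not %s" % (sorted(FMT_MIME), sorted(bad))
        )
    return {FMT_MIME[fmt] for fmt in formats}
```

Note how `'png'` and `'retina'` collapse to the same mime type, which is why the tests compare *sets* of active mimes rather than format names.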
@@ -1,1354 +1,1377 b''
1 1 # -*- coding: utf-8 -*-
2 2 """
3 3 Verbose and colourful traceback formatting.
4 4
5 5 **ColorTB**
6 6
7 7 I've always found it a bit hard to visually parse tracebacks in Python. The
8 8 ColorTB class is a solution to that problem. It colors the different parts of a
9 9 traceback in a manner similar to what you would expect from a syntax-highlighting
10 10 text editor.
11 11
12 12 Installation instructions for ColorTB::
13 13
14 14 import sys,ultratb
15 15 sys.excepthook = ultratb.ColorTB()
16 16
17 17 **VerboseTB**
18 18
19 19 I've also included a port of Ka-Ping Yee's "cgitb.py" that produces all kinds
20 20 of useful info when a traceback occurs. Ping originally had it spit out HTML
21 21 and intended it for CGI programmers, but why should they have all the fun? I
22 22 altered it to spit out colored text to the terminal. It's a bit overwhelming,
23 23 but kind of neat, and maybe useful for long-running programs that you believe
24 24 are bug-free. If a crash *does* occur in that type of program you want details.
25 25 Give it a shot--you'll love it or you'll hate it.
26 26
27 27 .. note::
28 28
29 29 The Verbose mode prints the variables currently visible where the exception
30 30 happened (shortening their strings if too long). This can potentially be
31 31 very slow, if you happen to have a huge data structure whose string
32 32 representation is complex to compute. Your computer may appear to freeze for
33 33 a while with cpu usage at 100%. If this occurs, you can cancel the traceback
34 34 with Ctrl-C (maybe hitting it more than once).
35 35
36 36 If you encounter this kind of situation often, you may want to use the
37 37 Verbose_novars mode instead of the regular Verbose, which avoids formatting
38 38 variables (but otherwise includes the information and context given by
39 39 Verbose).
40 40
41 41 .. note::
42 42
 43 43 The verbose mode prints all variables in the stack, which means it can
 44 44 potentially leak sensitive information like access keys or unencrypted
 45 45 passwords.
46 46
47 47 Installation instructions for VerboseTB::
48 48
49 49 import sys,ultratb
50 50 sys.excepthook = ultratb.VerboseTB()
51 51
52 52 Note: Much of the code in this module was lifted verbatim from the standard
53 53 library module 'traceback.py' and Ka-Ping Yee's 'cgitb.py'.
54 54
55 55 Color schemes
56 56 -------------
57 57
58 58 The colors are defined in the class TBTools through the use of the
59 59 ColorSchemeTable class. Currently the following exist:
60 60
61 61 - NoColor: allows all of this module to be used in any terminal (the color
62 62 escapes are just dummy blank strings).
63 63
64 64 - Linux: is meant to look good in a terminal like the Linux console (black
65 65 or very dark background).
66 66
67 67 - LightBG: similar to Linux but swaps dark/light colors to be more readable
68 68 in light background terminals.
69 69
70 70 - Neutral: a neutral color scheme that should be readable on both light and
71 71 dark background
72 72
 73 73 You can implement other color schemes easily; the syntax is fairly
 74 74 self-explanatory. Please send back new schemes you develop to the author for
75 75 possible inclusion in future releases.
76 76
77 77 Inheritance diagram:
78 78
79 79 .. inheritance-diagram:: IPython.core.ultratb
80 80 :parts: 3
81 81 """
82 82
83 83 #*****************************************************************************
84 84 # Copyright (C) 2001 Nathaniel Gray <n8gray@caltech.edu>
85 85 # Copyright (C) 2001-2004 Fernando Perez <fperez@colorado.edu>
86 86 #
87 87 # Distributed under the terms of the BSD License. The full license is in
88 88 # the file COPYING, distributed as part of this software.
89 89 #*****************************************************************************
90 90
91 91
92 92 import inspect
93 93 import linecache
94 94 import pydoc
95 95 import sys
96 96 import time
97 97 import traceback
98 98 from types import TracebackType
99 99 from typing import Tuple, List, Any, Optional
100 100
101 101 import stack_data
102 102 from stack_data import FrameInfo as SDFrameInfo
103 103 from pygments.formatters.terminal256 import Terminal256Formatter
104 104 from pygments.styles import get_style_by_name
105 105
106 106 # IPython's own modules
107 107 from IPython import get_ipython
108 108 from IPython.core import debugger
109 109 from IPython.core.display_trap import DisplayTrap
110 110 from IPython.core.excolors import exception_colors
111 111 from IPython.utils import PyColorize
112 112 from IPython.utils import path as util_path
113 113 from IPython.utils import py3compat
114 114 from IPython.utils.terminal import get_terminal_size
115 115
116 116 import IPython.utils.colorable as colorable
117 117
118 118 # Globals
119 119 # amount of space to put line numbers before verbose tracebacks
120 120 INDENT_SIZE = 8
121 121
122 122 # Default color scheme. This is used, for example, by the traceback
123 123 # formatter. When running in an actual IPython instance, the user's rc.colors
124 124 # value is used, but having a module global makes this functionality available
125 125 # to users of ultratb who are NOT running inside ipython.
126 126 DEFAULT_SCHEME = 'NoColor'
127 127 FAST_THRESHOLD = 10_000
128 128
129 129 # ---------------------------------------------------------------------------
130 130 # Code begins
131 131
132 132 # Helper function -- largely belongs to VerboseTB, but we need the same
133 133 # functionality to produce a pseudo verbose TB for SyntaxErrors, so that they
134 134 # can be recognized properly by ipython.el's py-traceback-line-re
135 135 # (SyntaxErrors have to be treated specially because they have no traceback)
136 136
137 137
138 138 def _format_traceback_lines(lines, Colors, has_colors: bool, lvals):
139 139 """
 140 140 Format traceback lines with a pointing arrow and leading line numbers.
141 141
142 142 Parameters
143 143 ----------
144 144 lines : list[Line]
145 145 Colors
146 146 ColorScheme used.
147 147 lvals : str
148 148 Values of local variables, already colored, to inject just after the error line.
149 149 """
150 150 numbers_width = INDENT_SIZE - 1
151 151 res = []
152 152
153 153 for stack_line in lines:
154 154 if stack_line is stack_data.LINE_GAP:
155 155 res.append('%s (...)%s\n' % (Colors.linenoEm, Colors.Normal))
156 156 continue
157 157
158 158 line = stack_line.render(pygmented=has_colors).rstrip('\n') + '\n'
159 159 lineno = stack_line.lineno
160 160 if stack_line.is_current:
161 161 # This is the line with the error
162 162 pad = numbers_width - len(str(lineno))
163 163 num = '%s%s' % (debugger.make_arrow(pad), str(lineno))
164 164 start_color = Colors.linenoEm
165 165 else:
166 166 num = '%*s' % (numbers_width, lineno)
167 167 start_color = Colors.lineno
168 168
169 169 line = '%s%s%s %s' % (start_color, num, Colors.Normal, line)
170 170
171 171 res.append(line)
172 172 if lvals and stack_line.is_current:
173 173 res.append(lvals + '\n')
174 174 return res
175 175
176 176 def _simple_format_traceback_lines(lnum, index, lines, Colors, lvals, _line_format):
177 177 """
 178 178 Format traceback lines with a pointing arrow and leading line numbers.
179 179
180 180 Parameters
181 181 ==========
182 182
183 183 lnum: int
184 184 number of the target line of code.
185 185 index: int
186 186 which line in the list should be highlighted.
187 187 lines: list[string]
188 188 Colors:
189 189 ColorScheme used.
190 190 lvals: bytes
191 191 Values of local variables, already colored, to inject just after the error line.
192 192 _line_format: f (str) -> (str, bool)
 193 193 returns (colorized version of str, whether colorizing failed)
194 194 """
195 195 numbers_width = INDENT_SIZE - 1
196 196 res = []
197 197
198 198 for i,line in enumerate(lines, lnum-index):
199 199 line = py3compat.cast_unicode(line)
200 200
201 new_line, err = _line_format(line, 'str')
201 new_line, err = _line_format(line, "str")
202 202 if not err:
203 203 line = new_line
204 204
205 205 if i == lnum:
206 206 # This is the line with the error
207 207 pad = numbers_width - len(str(i))
208 num = '%s%s' % (debugger.make_arrow(pad), str(lnum))
209 line = '%s%s%s %s%s' % (Colors.linenoEm, num,
210 Colors.line, line, Colors.Normal)
208 num = "%s%s" % (debugger.make_arrow(pad), str(lnum))
209 line = "%s%s%s %s%s" % (
210 Colors.linenoEm,
211 num,
212 Colors.line,
213 line,
214 Colors.Normal,
215 )
211 216 else:
212 num = '%*s' % (numbers_width, i)
213 line = '%s%s%s %s' % (Colors.lineno, num,
214 Colors.Normal, line)
217 num = "%*s" % (numbers_width, i)
218 line = "%s%s%s %s" % (Colors.lineno, num, Colors.Normal, line)
215 219
216 220 res.append(line)
217 221 if lvals and i == lnum:
218 res.append(lvals + '\n')
222 res.append(lvals + "\n")
219 223 return res
220 224
225
221 226 def _format_filename(file, ColorFilename, ColorNormal, *, lineno=None):
222 227 """
 222 227 Format a filename line, using the caching compiler's custom formatting when available, or `File *.py` by default
224 229
225 230 Parameters
226 231 ----------
227 232 file : str
228 233 ColorFilename
229 234 ColorScheme's filename coloring to be used.
230 235 ColorNormal
231 236 ColorScheme's normal coloring to be used.
232 237 """
233 238 ipinst = get_ipython()
234 239 if (
235 240 ipinst is not None
236 241 and (data := ipinst.compile.format_code_name(file)) is not None
237 242 ):
238 243 label, name = data
239 244 if lineno is None:
240 245 tpl_link = f"{{label}} {ColorFilename}{{name}}{ColorNormal}"
241 246 else:
242 247 tpl_link = (
243 248 f"{{label}} {ColorFilename}{{name}}, line {{lineno}}{ColorNormal}"
244 249 )
245 250 else:
246 251 label = "File"
247 252 name = util_path.compress_user(
248 253 py3compat.cast_unicode(file, util_path.fs_encoding)
249 254 )
250 255 if lineno is None:
251 256 tpl_link = f"{{label}} {ColorFilename}{{name}}{ColorNormal}"
252 257 else:
253 258 # can we make this the more friendly ", line {{lineno}}", or do we need to preserve the formatting with the colon?
254 259 tpl_link = f"{{label}} {ColorFilename}{{name}}:{{lineno}}{ColorNormal}"
255 260
256 261 return tpl_link.format(label=label, name=name, lineno=lineno)
257 262
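`_format_filename` above uses a two-stage templating trick: an f-string interpolates the color escape codes immediately, while doubled braces (`{{label}}`) survive as literal `{label}` placeholders for a later `str.format` call. A small sketch, with assumed ANSI escapes for illustration:

```python
# Hypothetical color escapes standing in for the ColorScheme values.
ColorFilename = "\x1b[32m"  # green
ColorNormal = "\x1b[0m"     # reset

# Stage 1: f-string fills in the colors now; {{...}} stays as {...}.
tpl_link = f"{{label}} {ColorFilename}{{name}}, line {{lineno}}{ColorNormal}"

# Stage 2: str.format fills in the per-frame fields later.
rendered = tpl_link.format(label="File", name="example.py", lineno=12)
```

This keeps the color handling in one place while deferring the frame-specific fields (`label`, `name`, `lineno`) to the single `tpl_link.format(...)` call at the end of the function.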
258 263 #---------------------------------------------------------------------------
259 264 # Module classes
260 265 class TBTools(colorable.Colorable):
261 266 """Basic tools used by all traceback printer classes."""
262 267
263 268 # Number of frames to skip when reporting tracebacks
264 269 tb_offset = 0
265 270
266 271 def __init__(
267 272 self,
268 273 color_scheme="NoColor",
269 274 call_pdb=False,
270 275 ostream=None,
271 276 parent=None,
272 277 config=None,
273 278 *,
274 279 debugger_cls=None,
275 280 ):
276 281 # Whether to call the interactive pdb debugger after printing
277 282 # tracebacks or not
278 283 super(TBTools, self).__init__(parent=parent, config=config)
279 284 self.call_pdb = call_pdb
280 285
281 286 # Output stream to write to. Note that we store the original value in
282 287 # a private attribute and then make the public ostream a property, so
283 288 # that we can delay accessing sys.stdout until runtime. The way
284 289 # things are written now, the sys.stdout object is dynamically managed
285 290 # so a reference to it should NEVER be stored statically. This
286 291 # property approach confines this detail to a single location, and all
287 292 # subclasses can simply access self.ostream for writing.
288 293 self._ostream = ostream
289 294
290 295 # Create color table
291 296 self.color_scheme_table = exception_colors()
292 297
293 298 self.set_colors(color_scheme)
294 299 self.old_scheme = color_scheme # save initial value for toggles
295 300 self.debugger_cls = debugger_cls or debugger.Pdb
296 301
297 302 if call_pdb:
298 303 self.pdb = self.debugger_cls()
299 304 else:
300 305 self.pdb = None
301 306
302 307 def _get_ostream(self):
303 308 """Output stream that exceptions are written to.
304 309
305 310 Valid values are:
306 311
307 312 - None: the default, which means that IPython will dynamically resolve
308 313 to sys.stdout. This ensures compatibility with most tools, including
309 314 Windows (where plain stdout doesn't recognize ANSI escapes).
310 315
311 316 - Any object with 'write' and 'flush' attributes.
312 317 """
313 318 return sys.stdout if self._ostream is None else self._ostream
314 319
315 320 def _set_ostream(self, val):
316 321 assert val is None or (hasattr(val, 'write') and hasattr(val, 'flush'))
317 322 self._ostream = val
318 323
319 324 ostream = property(_get_ostream, _set_ostream)
320 325
321 326 @staticmethod
322 327 def _get_chained_exception(exception_value):
323 328 cause = getattr(exception_value, "__cause__", None)
324 329 if cause:
325 330 return cause
326 331 if getattr(exception_value, "__suppress_context__", False):
327 332 return None
328 333 return getattr(exception_value, "__context__", None)
329 334
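`_get_chained_exception` above encodes Python's exception-chaining rules: `raise ... from ...` sets `__cause__` (and `__suppress_context__`), while a plain `raise` inside an `except` block sets only `__context__`. A simplified mirror of that lookup order, with a small demonstration of both chaining styles:

```python
def chained(exc):
    # Simplified mirror of TBTools._get_chained_exception's lookup order.
    if exc.__cause__ is not None:
        return exc.__cause__
    if exc.__suppress_context__:
        return None
    return exc.__context__


# Explicit chaining: `raise ... from ...` sets __cause__.
try:
    try:
        raise KeyError("inner")
    except KeyError as inner:
        raise ValueError("outer") from inner
except ValueError as e:
    explicit = e

# Implicit chaining: a bare raise inside except sets only __context__.
try:
    try:
        raise KeyError("inner")
    except KeyError:
        raise ValueError("outer")
except ValueError as e:
    implicit = e
```

In both cases `chained()` recovers the inner `KeyError`, which is how the traceback machinery walks the chain to print "The above exception was the direct cause..." versus "During handling of the above exception...".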
330 335 def get_parts_of_chained_exception(
331 336 self, evalue
332 337 ) -> Optional[Tuple[type, BaseException, TracebackType]]:
333
334 338 chained_evalue = self._get_chained_exception(evalue)
335 339
336 340 if chained_evalue:
337 341 return chained_evalue.__class__, chained_evalue, chained_evalue.__traceback__
338 342 return None
339 343
340 344 def prepare_chained_exception_message(self, cause) -> List[Any]:
341 345 direct_cause = "\nThe above exception was the direct cause of the following exception:\n"
342 346 exception_during_handling = "\nDuring handling of the above exception, another exception occurred:\n"
343 347
344 348 if cause:
345 349 message = [[direct_cause]]
346 350 else:
347 351 message = [[exception_during_handling]]
348 352 return message
349 353
350 354 @property
351 355 def has_colors(self) -> bool:
352 356 return self.color_scheme_table.active_scheme_name.lower() != "nocolor"
353 357
354 358 def set_colors(self, *args, **kw):
355 359 """Shorthand access to the color table scheme selector method."""
356 360
357 361 # Set own color table
358 362 self.color_scheme_table.set_active_scheme(*args, **kw)
359 363 # for convenience, set Colors to the active scheme
360 364 self.Colors = self.color_scheme_table.active_colors
361 365 # Also set colors of debugger
362 366 if hasattr(self, 'pdb') and self.pdb is not None:
363 367 self.pdb.set_colors(*args, **kw)
364 368
365 369 def color_toggle(self):
366 370 """Toggle between the currently active color scheme and NoColor."""
367 371
368 372 if self.color_scheme_table.active_scheme_name == 'NoColor':
369 373 self.color_scheme_table.set_active_scheme(self.old_scheme)
370 374 self.Colors = self.color_scheme_table.active_colors
371 375 else:
372 376 self.old_scheme = self.color_scheme_table.active_scheme_name
373 377 self.color_scheme_table.set_active_scheme('NoColor')
374 378 self.Colors = self.color_scheme_table.active_colors
375 379
376 380 def stb2text(self, stb):
377 381 """Convert a structured traceback (a list) to a string."""
378 382 return '\n'.join(stb)
379 383
380 384 def text(self, etype, value, tb, tb_offset: Optional[int] = None, context=5):
381 385 """Return formatted traceback.
382 386
383 387 Subclasses may override this if they add extra arguments.
384 388 """
385 389 tb_list = self.structured_traceback(etype, value, tb,
386 390 tb_offset, context)
387 391 return self.stb2text(tb_list)
388 392
389 393 def structured_traceback(
390 394 self, etype, evalue, tb, tb_offset: Optional[int] = None, context=5, mode=None
391 395 ):
392 396 """Return a list of traceback frames.
393 397
394 398 Must be implemented by each class.
395 399 """
396 400 raise NotImplementedError()
397 401
398 402
399 403 #---------------------------------------------------------------------------
400 404 class ListTB(TBTools):
401 405 """Print traceback information from a traceback list, with optional color.
402 406
403 407 Calling requires 3 arguments: (etype, evalue, elist)
404 408 as would be obtained by::
405 409
406 410 etype, evalue, tb = sys.exc_info()
407 411 if tb:
408 412 elist = traceback.extract_tb(tb)
409 413 else:
410 414 elist = None
411 415
412 416 It can thus be used by programs which need to process the traceback before
413 417 printing (such as console replacements based on the code module from the
414 418 standard library).
415 419
416 420 Because they are meant to be called without a full traceback (only a
417 421 list), instances of this class can't call the interactive pdb debugger."""
418 422
419 423
420 424 def __call__(self, etype, value, elist):
421 425 self.ostream.flush()
422 426 self.ostream.write(self.text(etype, value, elist))
423 427 self.ostream.write('\n')
424 428
425 429 def _extract_tb(self, tb):
426 430 if tb:
427 431 return traceback.extract_tb(tb)
428 432 else:
429 433 return None
430 434
431 435 def structured_traceback(
432 436 self,
433 437 etype: type,
434 438 evalue: BaseException,
435 439 etb: Optional[TracebackType] = None,
436 440 tb_offset: Optional[int] = None,
437 441 context=5,
438 442 ):
439 443 """Return a color formatted string with the traceback info.
440 444
441 445 Parameters
442 446 ----------
443 447 etype : exception type
444 448 Type of the exception raised.
445 449 evalue : object
446 450 Data stored in the exception
447 451 etb : list | TracebackType | None
448 452 If list: List of frames, see class docstring for details.
449 453 If Traceback: Traceback of the exception.
450 454 tb_offset : int, optional
451 455 Number of frames in the traceback to skip. If not given, the
452 456 instance evalue is used (set in constructor).
453 457 context : int, optional
454 458 Number of lines of context information to print.
455 459
456 460 Returns
457 461 -------
458 462 String with formatted exception.
459 463 """
460 464 # This is a workaround to get chained_exc_ids in recursive calls
461 465 # etb should not be a tuple if structured_traceback is not recursive
462 466 if isinstance(etb, tuple):
463 467 etb, chained_exc_ids = etb
464 468 else:
465 469 chained_exc_ids = set()
466 470
467 471 if isinstance(etb, list):
468 472 elist = etb
469 473 elif etb is not None:
470 474 elist = self._extract_tb(etb)
471 475 else:
472 476 elist = []
473 477 tb_offset = self.tb_offset if tb_offset is None else tb_offset
474 478 assert isinstance(tb_offset, int)
475 479 Colors = self.Colors
476 480 out_list = []
477 481 if elist:
478 482
479 483 if tb_offset and len(elist) > tb_offset:
480 484 elist = elist[tb_offset:]
481 485
482 486 out_list.append('Traceback %s(most recent call last)%s:' %
483 487 (Colors.normalEm, Colors.Normal) + '\n')
484 488 out_list.extend(self._format_list(elist))
485 489 # The exception info should be a single entry in the list.
486 490 lines = ''.join(self._format_exception_only(etype, evalue))
487 491 out_list.append(lines)
488 492
489 493 exception = self.get_parts_of_chained_exception(evalue)
490 494
491 495 if exception and not id(exception[1]) in chained_exc_ids:
492 496 chained_exception_message = self.prepare_chained_exception_message(
493 497 evalue.__cause__)[0]
494 498 etype, evalue, etb = exception
495 499 # Trace exception to avoid infinite 'cause' loop
496 500 chained_exc_ids.add(id(exception[1]))
497 501 chained_exceptions_tb_offset = 0
498 502 out_list = (
499 503 self.structured_traceback(
500 504 etype, evalue, (etb, chained_exc_ids),
501 505 chained_exceptions_tb_offset, context)
502 506 + chained_exception_message
503 507 + out_list)
504 508
505 509 return out_list
506 510
507 511 def _format_list(self, extracted_list):
508 512 """Format a list of traceback entry tuples for printing.
509 513
510 514 Given a list of tuples as returned by extract_tb() or
511 515 extract_stack(), return a list of strings ready for printing.
512 516 Each string in the resulting list corresponds to the item with the
513 517 same index in the argument list. Each string ends in a newline;
514 518 the strings may contain internal newlines as well, for those items
515 519 whose source text line is not None.
516 520
517 521 Lifted almost verbatim from traceback.py
518 522 """
519 523
520 524 Colors = self.Colors
521 525 list = []
522 526 for ind, (filename, lineno, name, line) in enumerate(extracted_list):
523 527 normalCol, nameCol, fileCol, lineCol = (
524 528 # Emphasize the last entry
525 529 (Colors.normalEm, Colors.nameEm, Colors.filenameEm, Colors.line)
526 530 if ind == len(extracted_list) - 1
527 531 else (Colors.Normal, Colors.name, Colors.filename, "")
528 532 )
529 533
530 534 fns = _format_filename(filename, fileCol, normalCol, lineno=lineno)
531 535 item = f"{normalCol} {fns}"
532 536
533 537 if name != "<module>":
534 538 item += f" in {nameCol}{name}{normalCol}\n"
535 539 else:
536 540 item += "\n"
537 541 if line:
538 542 item += f"{lineCol} {line.strip()}{normalCol}\n"
539 543 list.append(item)
540 544
541 545 return list
542 546
543 547 def _format_exception_only(self, etype, value):
544 548 """Format the exception part of a traceback.
545 549
546 550 The arguments are the exception type and value such as given by
547 551 sys.exc_info()[:2]. The return value is a list of strings, each ending
548 552 in a newline. Normally, the list contains a single string; however,
549 553 for SyntaxError exceptions, it contains several lines that (when
550 554 printed) display detailed information about where the syntax error
551 555 occurred. The message indicating which exception occurred is the
552 556 always last string in the list.
553 557
554 558 Also lifted nearly verbatim from traceback.py
555 559 """
556 560 have_filedata = False
557 561 Colors = self.Colors
558 562 list = []
559 563 stype = py3compat.cast_unicode(Colors.excName + etype.__name__ + Colors.Normal)
560 564 if value is None:
561 565 # Not sure if this can still happen in Python 2.6 and above
562 566 list.append(stype + '\n')
563 567 else:
564 568 if issubclass(etype, SyntaxError):
565 569 have_filedata = True
566 570 if not value.filename: value.filename = "<string>"
567 571 if value.lineno:
568 572 lineno = value.lineno
569 573 textline = linecache.getline(value.filename, value.lineno)
570 574 else:
571 575 lineno = "unknown"
572 576 textline = ""
573 577 list.append(
574 578 "%s %s%s\n"
575 579 % (
576 580 Colors.normalEm,
577 581 _format_filename(
578 582 value.filename,
579 583 Colors.filenameEm,
580 584 Colors.normalEm,
581 585 lineno=(None if lineno == "unknown" else lineno),
582 586 ),
583 587 Colors.Normal,
584 588 )
585 589 )
586 590 if textline == "":
587 591 textline = py3compat.cast_unicode(value.text, "utf-8")
588 592
589 593 if textline is not None:
590 594 i = 0
591 595 while i < len(textline) and textline[i].isspace():
592 596 i += 1
593 597 list.append('%s %s%s\n' % (Colors.line,
594 598 textline.strip(),
595 599 Colors.Normal))
596 600 if value.offset is not None:
597 601 s = ' '
598 602 for c in textline[i:value.offset - 1]:
599 603 if c.isspace():
600 604 s += c
601 605 else:
602 606 s += ' '
603 607 list.append('%s%s^%s\n' % (Colors.caret, s,
604 608 Colors.Normal))
605 609
606 610 try:
607 611 s = value.msg
608 612 except Exception:
609 613 s = self._some_str(value)
610 614 if s:
611 615 list.append('%s%s:%s %s\n' % (stype, Colors.excName,
612 616 Colors.Normal, s))
613 617 else:
614 618 list.append('%s\n' % stype)
615 619
616 620 # sync with user hooks
617 621 if have_filedata:
618 622 ipinst = get_ipython()
619 623 if ipinst is not None:
620 624 ipinst.hooks.synchronize_with_editor(value.filename, value.lineno, 0)
621 625
622 626 return list
623 627
624 628 def get_exception_only(self, etype, value):
625 629 """Only print the exception type and message, without a traceback.
626 630
627 631 Parameters
628 632 ----------
629 633 etype : exception type
630 634 value : exception value
631 635 """
632 636 return ListTB.structured_traceback(self, etype, value)
633 637
634 638 def show_exception_only(self, etype, evalue):
635 639 """Only print the exception type and message, without a traceback.
636 640
637 641 Parameters
638 642 ----------
639 643 etype : exception type
640 644 evalue : exception value
641 645 """
642 646 # This method needs to use __call__ from *this* class, not the one from
643 647 # a subclass whose signature or behavior may be different
644 648 ostream = self.ostream
645 649 ostream.flush()
646 650 ostream.write('\n'.join(self.get_exception_only(etype, evalue)))
647 651 ostream.flush()
648 652
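The two methods above render only the exception type and message, with no frames. The standard library offers the same behavior directly; this hedged sketch (stdlib only, not the IPython classes) shows the equivalent building block:

```python
import traceback

# traceback.format_exception_only renders just "Type: message" lines,
# which is what get_exception_only/show_exception_only produce here.
try:
    int("not a number")
except ValueError as e:
    only = traceback.format_exception_only(type(e), e)

# The last (usually only) entry is the "ValueError: ..." line.
print(only[-1], end="")
```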
649 653 def _some_str(self, value):
650 654 # Lifted from traceback.py
651 655 try:
652 656 return py3compat.cast_unicode(str(value))
653 657 except:
654 658 return u'<unprintable %s object>' % type(value).__name__
655 659
656 660
657 661 class FrameInfo:
658 662 """
659 663 Mirror of stack data's FrameInfo, but so that we can bypass highlighting on
660 664 really long frames.
661 665 """
662 666
663 667 description : Optional[str]
664 668 filename : str
665 669 lineno : int
666 670
667 671 @classmethod
668 672 def _from_stack_data_FrameInfo(cls, frame_info):
669 673 return cls(
670 674 getattr(frame_info, "description", None),
671 675 getattr(frame_info, "filename", None),
672 676 getattr(frame_info, "lineno", None),
673 677 getattr(frame_info, "frame", None),
674 678 getattr(frame_info, "code", None),
675 679 sd=frame_info,
676 680 )
677 681
678 682 def __init__(self, description, filename, lineno, frame, code, sd=None):
679 683 self.description = description
680 684 self.filename = filename
681 685 self.lineno = lineno
682 686 self.frame = frame
683 687 self.code = code
684 688 self._sd = sd
685 689
686 690 # self.lines = []
687 691 if sd is None:
688 692 ix = inspect.getsourcelines(frame)
689 693 self.raw_lines = ix[0]
690 694
691 695 @property
692 696 def variables_in_executing_piece(self):
693 697 if self._sd:
694 698 return self._sd.variables_in_executing_piece
695 699 else:
696 700 return []
697 701
698 702 @property
699 703 def lines(self):
700 704 return self._sd.lines
701 705
702 706 @property
703 707 def executing(self):
704 708 if self._sd:
705 709 return self._sd.executing
706 710 else:
707 711 return None
708 712
709 713
710
711
712 714 #----------------------------------------------------------------------------
713 715 class VerboseTB(TBTools):
714 716 """A port of Ka-Ping Yee's cgitb.py module that outputs color text instead
715 717 of HTML. Requires inspect and pydoc. Crazy, man.
716 718
717 719 Modified version which optionally strips the topmost entries from the
718 720 traceback, to be used with alternate interpreters (because their own code
719 721 would appear in the traceback)."""
720 722
721 723 _tb_highlight = "bg:ansiyellow"
722 724
723 725 def __init__(
724 726 self,
725 727 color_scheme: str = "Linux",
726 728 call_pdb: bool = False,
727 729 ostream=None,
728 730 tb_offset: int = 0,
729 731 long_header: bool = False,
730 732 include_vars: bool = True,
731 733 check_cache=None,
732 734 debugger_cls=None,
733 735 parent=None,
734 736 config=None,
735 737 ):
736 738 """Specify traceback offset, headers and color scheme.
737 739
738 740 Define how many frames to drop from the tracebacks. Calling it with
739 741 tb_offset=1 allows use of this handler in interpreters which will have
740 742 their own code at the top of the traceback (VerboseTB will first
741 743 remove that frame before printing the traceback info)."""
742 744 TBTools.__init__(
743 745 self,
744 746 color_scheme=color_scheme,
745 747 call_pdb=call_pdb,
746 748 ostream=ostream,
747 749 parent=parent,
748 750 config=config,
749 751 debugger_cls=debugger_cls,
750 752 )
751 753 self.tb_offset = tb_offset
752 754 self.long_header = long_header
753 755 self.include_vars = include_vars
754 756 # By default we use linecache.checkcache, but the user can provide a
755 757 # different check_cache implementation. This was formerly used by the
756 758 # IPython kernel for interactive code, but is no longer necessary.
757 759 if check_cache is None:
758 760 check_cache = linecache.checkcache
759 761 self.check_cache = check_cache
760 762
761 763 self.skip_hidden = True
762 764
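The `tb_offset` idea in the docstring above — drop the topmost frames so an embedding interpreter's own code does not appear in the report — can be sketched with the stdlib alone. This is an illustrative model, not VerboseTB itself; `format_with_offset` is a hypothetical helper name:

```python
import traceback

def format_with_offset(exc: BaseException, tb_offset: int = 0) -> str:
    """Format a traceback, skipping the topmost tb_offset frames."""
    tb = exc.__traceback__
    for _ in range(tb_offset):
        if tb is None or tb.tb_next is None:
            break
        tb = tb.tb_next  # walk past one outer frame
    return "".join(traceback.format_exception(type(exc), exc, tb))

def outer():
    def inner():
        raise ValueError("boom")
    inner()

try:
    outer()
except ValueError as e:
    full = format_with_offset(e, 0)      # includes the module-level frame
    trimmed = format_with_offset(e, 1)   # that frame is dropped
```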
 763 765     def format_record(self, frame_info: FrameInfo):
764 766 """Format a single stack frame"""
765 767 assert isinstance(frame_info, FrameInfo)
766 768 Colors = self.Colors # just a shorthand + quicker name lookup
767 769 ColorsNormal = Colors.Normal # used a lot
768 770
769 771 if isinstance(frame_info._sd, stack_data.RepeatedFrames):
770 772 return ' %s[... skipping similar frames: %s]%s\n' % (
771 773 Colors.excName, frame_info.description, ColorsNormal)
772 774
773 775 indent = " " * INDENT_SIZE
774 776 em_normal = "%s\n%s%s" % (Colors.valEm, indent, ColorsNormal)
775 777 tpl_call = f"in {Colors.vName}{{file}}{Colors.valEm}{{scope}}{ColorsNormal}"
776 778 tpl_call_fail = "in %s%%s%s(***failed resolving arguments***)%s" % (
777 779 Colors.vName,
778 780 Colors.valEm,
779 781 ColorsNormal,
780 782 )
781 783 tpl_name_val = "%%s %s= %%s%s" % (Colors.valEm, ColorsNormal)
782 784
783 785 link = _format_filename(
784 786 frame_info.filename,
785 787 Colors.filenameEm,
786 788 ColorsNormal,
787 789 lineno=frame_info.lineno,
788 790 )
789 791 args, varargs, varkw, locals_ = inspect.getargvalues(frame_info.frame)
790 792 if frame_info.executing is not None:
791 793 func = frame_info.executing.code_qualname()
792 794 else:
793 func = '?'
795 func = "?"
794 796 if func == "<module>":
795 797 call = ""
796 798 else:
797 799 # Decide whether to include variable details or not
798 800 var_repr = eqrepr if self.include_vars else nullrepr
799 801 try:
800 802 scope = inspect.formatargvalues(
801 803 args, varargs, varkw, locals_, formatvalue=var_repr
802 804 )
803 805 call = tpl_call.format(file=func, scope=scope)
804 806 except KeyError:
805 807 # This happens in situations like errors inside generator
806 808 # expressions, where local variables are listed in the
807 809 # line, but can't be extracted from the frame. I'm not
808 810 # 100% sure this isn't actually a bug in inspect itself,
809 811 # but since there's no info for us to compute with, the
810 812 # best we can do is report the failure and move on. Here
811 813 # we must *not* call any traceback construction again,
812 814 # because that would mess up use of %debug later on. So we
813 815 # simply report the failure and move on. The only
814 816 # limitation will be that this frame won't have locals
815 817 # listed in the call signature. Quite subtle problem...
816 818 # I can't think of a good way to validate this in a unit
817 819 # test, but running a script consisting of:
818 820 # dict( (k,v.strip()) for (k,v) in range(10) )
819 821 # will illustrate the error, if this exception catch is
820 822 # disabled.
821 823 call = tpl_call_fail % func
822 824
823 825 lvals = ''
824 826 lvals_list = []
825 827 if self.include_vars:
826 828 try:
827 829 # we likely want to fix stackdata at some point, but
828 830 # still need a workaround.
829 831 fibp = frame_info.variables_in_executing_piece
830 832 for var in fibp:
831 833 lvals_list.append(tpl_name_val % (var.name, repr(var.value)))
832 834 except Exception:
833 835 lvals_list.append(
834 836 "Exception trying to inspect frame. No more locals available."
835 837 )
836 838 if lvals_list:
837 839 lvals = '%s%s' % (indent, em_normal.join(lvals_list))
838 840
839 841 result = f'{link}{", " if call else ""}{call}\n'
840 842 if frame_info._sd is None:
841 843 assert False
842 844 # fast fallback if file is too long
843 tpl_link = '%s%%s%s' % (Colors.filenameEm, ColorsNormal)
845 tpl_link = "%s%%s%s" % (Colors.filenameEm, ColorsNormal)
844 846 link = tpl_link % util_path.compress_user(frame_info.filename)
845 level = '%s %s\n' % (link, call)
846 _line_format = PyColorize.Parser(style=self.color_scheme_table.active_scheme_name, parent=self).format2
847 level = "%s %s\n" % (link, call)
848 _line_format = PyColorize.Parser(
849 style=self.color_scheme_table.active_scheme_name, parent=self
850 ).format2
847 851 first_line = frame_info.code.co_firstlineno
848 852 current_line = frame_info.lineno[0]
849 return '%s%s' % (level, ''.join(
850 _simple_format_traceback_lines(current_line, current_line-first_line, frame_info.raw_lines, Colors, lvals,
851 _line_format)))
853 return "%s%s" % (
854 level,
855 "".join(
856 _simple_format_traceback_lines(
857 current_line,
858 current_line - first_line,
859 frame_info.raw_lines,
860 Colors,
861 lvals,
862 _line_format,
863 )
864 ),
865 )
852 866 #result += "\n".join(frame_info.raw_lines)
853 867 else:
854 868 result += "".join(
855 869 _format_traceback_lines(
856 870 frame_info.lines, Colors, self.has_colors, lvals
857 871 )
858 872 )
859 873 return result
860 874
861 875 def prepare_header(self, etype, long_version=False):
862 876 colors = self.Colors # just a shorthand + quicker name lookup
863 877 colorsnormal = colors.Normal # used a lot
864 878 exc = '%s%s%s' % (colors.excName, etype, colorsnormal)
865 879 width = min(75, get_terminal_size()[0])
866 880 if long_version:
867 881 # Header with the exception type, python version, and date
868 882 pyver = 'Python ' + sys.version.split()[0] + ': ' + sys.executable
869 883 date = time.ctime(time.time())
870 884
871 885 head = '%s%s%s\n%s%s%s\n%s' % (colors.topline, '-' * width, colorsnormal,
872 886 exc, ' ' * (width - len(str(etype)) - len(pyver)),
873 887 pyver, date.rjust(width) )
874 888 head += "\nA problem occurred executing Python code. Here is the sequence of function" \
875 889 "\ncalls leading up to the error, with the most recent (innermost) call last."
876 890 else:
877 891 # Simplified header
878 892 head = '%s%s' % (exc, 'Traceback (most recent call last)'. \
879 893 rjust(width - len(str(etype))) )
880 894
881 895 return head
882 896
883 897 def format_exception(self, etype, evalue):
884 898 colors = self.Colors # just a shorthand + quicker name lookup
885 899 colorsnormal = colors.Normal # used a lot
886 900 # Get (safely) a string form of the exception info
887 901 try:
888 902 etype_str, evalue_str = map(str, (etype, evalue))
889 903 except:
890 904 # User exception is improperly defined.
891 905 etype, evalue = str, sys.exc_info()[:2]
892 906 etype_str, evalue_str = map(str, (etype, evalue))
893 907 # ... and format it
894 908 return ['%s%s%s: %s' % (colors.excName, etype_str,
895 909 colorsnormal, py3compat.cast_unicode(evalue_str))]
896 910
897 911 def format_exception_as_a_whole(
898 912 self,
899 913 etype: type,
900 914 evalue: BaseException,
901 915 etb: Optional[TracebackType],
902 916 number_of_lines_of_context,
903 917 tb_offset: Optional[int],
904 918 ):
905 919 """Formats the header, traceback and exception message for a single exception.
906 920
907 921 This may be called multiple times by Python 3 exception chaining
908 922 (PEP 3134).
909 923 """
910 924 # some locals
911 925 orig_etype = etype
912 926 try:
913 927 etype = etype.__name__
914 928 except AttributeError:
915 929 pass
916 930
917 931 tb_offset = self.tb_offset if tb_offset is None else tb_offset
918 932 assert isinstance(tb_offset, int)
919 933 head = self.prepare_header(etype, self.long_header)
920 934 records = (
921 935 self.get_records(etb, number_of_lines_of_context, tb_offset) if etb else []
922 936 )
923 937
924 938 frames = []
925 939 skipped = 0
926 940 lastrecord = len(records) - 1
927 941 for i, record in enumerate(records):
928 if not isinstance(record._sd, stack_data.RepeatedFrames) and self.skip_hidden:
929 if record.frame.f_locals.get("__tracebackhide__", 0) and i != lastrecord:
942 if (
943 not isinstance(record._sd, stack_data.RepeatedFrames)
944 and self.skip_hidden
945 ):
946 if (
947 record.frame.f_locals.get("__tracebackhide__", 0)
948 and i != lastrecord
949 ):
930 950 skipped += 1
931 951 continue
932 952 if skipped:
933 953 Colors = self.Colors # just a shorthand + quicker name lookup
934 954 ColorsNormal = Colors.Normal # used a lot
935 955 frames.append(
936 956 " %s[... skipping hidden %s frame]%s\n"
937 957 % (Colors.excName, skipped, ColorsNormal)
938 958 )
939 959 skipped = 0
940 960 frames.append(self.format_record(record))
941 961 if skipped:
942 962 Colors = self.Colors # just a shorthand + quicker name lookup
943 963 ColorsNormal = Colors.Normal # used a lot
944 964 frames.append(
945 965 " %s[... skipping hidden %s frame]%s\n"
946 966 % (Colors.excName, skipped, ColorsNormal)
947 967 )
948 968
949 969 formatted_exception = self.format_exception(etype, evalue)
950 970 if records:
951 971 frame_info = records[-1]
952 972 ipinst = get_ipython()
953 973 if ipinst is not None:
954 974 ipinst.hooks.synchronize_with_editor(frame_info.filename, frame_info.lineno, 0)
955 975
956 976 return [[head] + frames + [''.join(formatted_exception[0])]]
957 977
958 978 def get_records(
959 979 self, etb: TracebackType, number_of_lines_of_context: int, tb_offset: int
960 980 ):
961 981 assert etb is not None
962 982 context = number_of_lines_of_context - 1
963 983 after = context // 2
964 984 before = context - after
965 985 if self.has_colors:
966 986 style = get_style_by_name("default")
967 987 style = stack_data.style_with_executing_node(style, self._tb_highlight)
968 988 formatter = Terminal256Formatter(style=style)
969 989 else:
970 990 formatter = None
971 991 options = stack_data.Options(
972 992 before=before,
973 993 after=after,
974 994 pygments_formatter=formatter,
975 995 )
976 996
 977 997         # let's estimate the amount of code we will have to parse/highlight.
978 998 cf = etb
979 999 max_len = 0
980 1000 tbs = []
981 1001 while cf is not None:
 982 1002             source_file = inspect.getsourcefile(cf.tb_frame)
 983 1003             lines, first = inspect.getsourcelines(cf.tb_frame)
984 1004 max_len = max(max_len, first+len(lines))
985 1005 tbs.append(cf)
986 1006 cf = cf.tb_next
987 1007
988
989
990 1008 if max_len > FAST_THRESHOLD:
991 1009 FIs = []
992 1010 for tb in tbs:
993 1011 frame = tb.tb_frame
994 lineno = frame.f_lineno,
1012 lineno = (frame.f_lineno,)
995 1013 code = frame.f_code
996 1014 filename = code.co_filename
 997 1015                 FIs.append(FrameInfo("Raw frame", filename, lineno, frame, code))
998 1016 return FIs
999 1017 res = list(stack_data.FrameInfo.stack_data(etb, options=options))[tb_offset:]
1000 1018 res = [FrameInfo._from_stack_data_FrameInfo(r) for r in res]
1001 1019 return res
1002 1020
1003 1021 def structured_traceback(
1004 1022 self,
1005 1023 etype: type,
1006 1024 evalue: Optional[BaseException],
1007 1025 etb: Optional[TracebackType],
1008 1026 tb_offset: Optional[int] = None,
1009 1027 number_of_lines_of_context: int = 5,
1010 1028 ):
1011 1029 """Return a nice text document describing the traceback."""
1012 1030 formatted_exception = self.format_exception_as_a_whole(etype, evalue, etb, number_of_lines_of_context,
1013 1031 tb_offset)
1014 1032
1015 1033 colors = self.Colors # just a shorthand + quicker name lookup
1016 1034 colorsnormal = colors.Normal # used a lot
1017 1035 head = '%s%s%s' % (colors.topline, '-' * min(75, get_terminal_size()[0]), colorsnormal)
1018 1036 structured_traceback_parts = [head]
1019 1037 chained_exceptions_tb_offset = 0
1020 1038 lines_of_context = 3
1021 1039 formatted_exceptions = formatted_exception
1022 1040 exception = self.get_parts_of_chained_exception(evalue)
1023 1041 if exception:
1024 1042 assert evalue is not None
1025 1043 formatted_exceptions += self.prepare_chained_exception_message(evalue.__cause__)
1026 1044 etype, evalue, etb = exception
1027 1045 else:
1028 1046 evalue = None
1029 1047 chained_exc_ids = set()
1030 1048 while evalue:
1031 1049 formatted_exceptions += self.format_exception_as_a_whole(etype, evalue, etb, lines_of_context,
1032 1050 chained_exceptions_tb_offset)
1033 1051 exception = self.get_parts_of_chained_exception(evalue)
1034 1052
1035 1053 if exception and not id(exception[1]) in chained_exc_ids:
1036 1054 chained_exc_ids.add(id(exception[1])) # trace exception to avoid infinite 'cause' loop
1037 1055 formatted_exceptions += self.prepare_chained_exception_message(evalue.__cause__)
1038 1056 etype, evalue, etb = exception
1039 1057 else:
1040 1058 evalue = None
1041 1059
1042 1060 # we want to see exceptions in a reversed order:
1043 1061 # the first exception should be on top
1044 1062 for formatted_exception in reversed(formatted_exceptions):
1045 1063 structured_traceback_parts += formatted_exception
1046 1064
1047 1065 return structured_traceback_parts
1048 1066
1049 1067 def debugger(self, force: bool = False):
1050 1068 """Call up the pdb debugger if desired, always clean up the tb
1051 1069 reference.
1052 1070
1053 1071 Keywords:
1054 1072
1055 1073 - force(False): by default, this routine checks the instance call_pdb
1056 1074 flag and does not actually invoke the debugger if the flag is false.
1057 1075 The 'force' option forces the debugger to activate even if the flag
1058 1076 is false.
1059 1077
1060 1078 If the call_pdb flag is set, the pdb interactive debugger is
1061 1079 invoked. In all cases, the self.tb reference to the current traceback
1062 1080 is deleted to prevent lingering references which hamper memory
1063 1081 management.
1064 1082
1065 1083 Note that each call to pdb() does an 'import readline', so if your app
1066 1084 requires a special setup for the readline completers, you'll have to
1067 1085 fix that by hand after invoking the exception handler."""
1068 1086
1069 1087 if force or self.call_pdb:
1070 1088 if self.pdb is None:
1071 1089 self.pdb = self.debugger_cls()
1072 1090 # the system displayhook may have changed, restore the original
1073 1091 # for pdb
1074 1092 display_trap = DisplayTrap(hook=sys.__displayhook__)
1075 1093 with display_trap:
1076 1094 self.pdb.reset()
1077 1095 # Find the right frame so we don't pop up inside ipython itself
1078 1096 if hasattr(self, 'tb') and self.tb is not None:
1079 1097 etb = self.tb
1080 1098 else:
1081 1099 etb = self.tb = sys.last_traceback
1082 1100 while self.tb is not None and self.tb.tb_next is not None:
1083 1101 assert self.tb.tb_next is not None
1084 1102 self.tb = self.tb.tb_next
1085 1103 if etb and etb.tb_next:
1086 1104 etb = etb.tb_next
1087 1105 self.pdb.botframe = etb.tb_frame
1088 1106 self.pdb.interaction(None, etb)
1089 1107
1090 1108 if hasattr(self, 'tb'):
1091 1109 del self.tb
1092 1110
1093 1111 def handler(self, info=None):
1094 1112 (etype, evalue, etb) = info or sys.exc_info()
1095 1113 self.tb = etb
1096 1114 ostream = self.ostream
1097 1115 ostream.flush()
1098 1116 ostream.write(self.text(etype, evalue, etb))
1099 1117 ostream.write('\n')
1100 1118 ostream.flush()
1101 1119
1102 1120 # Changed so an instance can just be called as VerboseTB_inst() and print
1103 1121 # out the right info on its own.
1104 1122 def __call__(self, etype=None, evalue=None, etb=None):
1105 1123 """This hook can replace sys.excepthook (for Python 2.1 or higher)."""
1106 1124 if etb is None:
1107 1125 self.handler()
1108 1126 else:
1109 1127 self.handler((etype, evalue, etb))
1110 1128 try:
1111 1129 self.debugger()
1112 1130 except KeyboardInterrupt:
1113 1131 print("\nKeyboardInterrupt")
1114 1132
1115 1133
1116 1134 #----------------------------------------------------------------------------
1117 1135 class FormattedTB(VerboseTB, ListTB):
1118 1136 """Subclass ListTB but allow calling with a traceback.
1119 1137
1120 1138 It can thus be used as a sys.excepthook for Python > 2.1.
1121 1139
1122 1140 Also adds 'Context' and 'Verbose' modes, not available in ListTB.
1123 1141
1124 1142 Allows a tb_offset to be specified. This is useful for situations where
1125 1143 one needs to remove a number of topmost frames from the traceback (such as
1126 1144 occurs with python programs that themselves execute other python code,
1127 1145 like Python shells). """
1128 1146
1129 1147 mode: str
1130 1148
1131 1149 def __init__(self, mode='Plain', color_scheme='Linux', call_pdb=False,
1132 1150 ostream=None,
1133 1151 tb_offset=0, long_header=False, include_vars=False,
1134 1152 check_cache=None, debugger_cls=None,
1135 1153 parent=None, config=None):
1136 1154
1137 1155 # NEVER change the order of this list. Put new modes at the end:
1138 1156 self.valid_modes = ['Plain', 'Context', 'Verbose', 'Minimal']
1139 1157 self.verbose_modes = self.valid_modes[1:3]
1140 1158
1141 1159 VerboseTB.__init__(self, color_scheme=color_scheme, call_pdb=call_pdb,
1142 1160 ostream=ostream, tb_offset=tb_offset,
1143 1161 long_header=long_header, include_vars=include_vars,
1144 1162 check_cache=check_cache, debugger_cls=debugger_cls,
1145 1163 parent=parent, config=config)
1146 1164
1147 1165 # Different types of tracebacks are joined with different separators to
1148 1166 # form a single string. They are taken from this dict
1149 1167 self._join_chars = dict(Plain='', Context='\n', Verbose='\n',
1150 1168 Minimal='')
1151 1169 # set_mode also sets the tb_join_char attribute
1152 1170 self.set_mode(mode)
1153 1171
1154 1172 def structured_traceback(self, etype, value, tb, tb_offset=None, number_of_lines_of_context=5):
1155 1173 tb_offset = self.tb_offset if tb_offset is None else tb_offset
1156 1174 mode = self.mode
1157 1175 if mode in self.verbose_modes:
1158 1176 # Verbose modes need a full traceback
1159 1177 return VerboseTB.structured_traceback(
1160 1178 self, etype, value, tb, tb_offset, number_of_lines_of_context
1161 1179 )
1162 1180 elif mode == 'Minimal':
1163 1181 return ListTB.get_exception_only(self, etype, value)
1164 1182 else:
1165 1183 # We must check the source cache because otherwise we can print
1166 1184 # out-of-date source code.
1167 1185 self.check_cache()
1168 1186 # Now we can extract and format the exception
1169 1187 return ListTB.structured_traceback(
1170 1188 self, etype, value, tb, tb_offset, number_of_lines_of_context
1171 1189 )
1172 1190
1173 1191 def stb2text(self, stb):
1174 1192 """Convert a structured traceback (a list) to a string."""
1175 1193 return self.tb_join_char.join(stb)
1176 1194
1177 1195 def set_mode(self, mode: Optional[str] = None):
1178 1196 """Switch to the desired mode.
1179 1197
1180 1198 If mode is not specified, cycles through the available modes."""
1181 1199
1182 1200 if not mode:
1183 1201 new_idx = (self.valid_modes.index(self.mode) + 1 ) % \
1184 1202 len(self.valid_modes)
1185 1203 self.mode = self.valid_modes[new_idx]
1186 1204 elif mode not in self.valid_modes:
1187 1205 raise ValueError(
1188 1206 "Unrecognized mode in FormattedTB: <" + mode + ">\n"
1189 1207 "Valid modes: " + str(self.valid_modes)
1190 1208 )
1191 1209 else:
1192 1210 assert isinstance(mode, str)
1193 1211 self.mode = mode
1194 1212 # include variable details only in 'Verbose' mode
1195 1213 self.include_vars = (self.mode == self.valid_modes[2])
1196 1214 # Set the join character for generating text tracebacks
1197 1215 self.tb_join_char = self._join_chars[self.mode]
1198 1216
1199 1217 # some convenient shortcuts
1200 1218 def plain(self):
1201 1219 self.set_mode(self.valid_modes[0])
1202 1220
1203 1221 def context(self):
1204 1222 self.set_mode(self.valid_modes[1])
1205 1223
1206 1224 def verbose(self):
1207 1225 self.set_mode(self.valid_modes[2])
1208 1226
1209 1227 def minimal(self):
1210 1228 self.set_mode(self.valid_modes[3])
1211 1229
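FormattedTB's mode handling above has two behaviors worth noting: calling `set_mode()` with no argument cycles through the valid modes in order, and variable details are enabled only in 'Verbose'. A minimal standalone model of that logic (an assumption-laden sketch, not the real class):

```python
class ModeCycler:
    """Simplified model of FormattedTB.set_mode's cycling behavior."""

    def __init__(self):
        self.valid_modes = ["Plain", "Context", "Verbose", "Minimal"]
        self.mode = "Plain"
        self.include_vars = False

    def set_mode(self, mode=None):
        if not mode:
            # no argument: advance to the next mode, wrapping around
            i = (self.valid_modes.index(self.mode) + 1) % len(self.valid_modes)
            self.mode = self.valid_modes[i]
        elif mode not in self.valid_modes:
            raise ValueError("Unrecognized mode: %r" % mode)
        else:
            self.mode = mode
        # variable details only in 'Verbose', as in FormattedTB
        self.include_vars = self.mode == "Verbose"

ftb = ModeCycler()
ftb.set_mode()   # Plain -> Context
ftb.set_mode()   # Context -> Verbose
```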
1212 1230
1213 1231 #----------------------------------------------------------------------------
1214 1232 class AutoFormattedTB(FormattedTB):
1215 1233 """A traceback printer which can be called on the fly.
1216 1234
1217 1235 It will find out about exceptions by itself.
1218 1236
1219 1237 A brief example::
1220 1238
1221 1239 AutoTB = AutoFormattedTB(mode = 'Verbose',color_scheme='Linux')
1222 1240 try:
1223 1241 ...
1224 1242 except:
1225 1243 AutoTB() # or AutoTB(out=logfile) where logfile is an open file object
1226 1244 """
1227 1245
1228 1246 def __call__(self, etype=None, evalue=None, etb=None,
1229 1247 out=None, tb_offset=None):
1230 1248 """Print out a formatted exception traceback.
1231 1249
1232 1250 Optional arguments:
1233 1251 - out: an open file-like object to direct output to.
1234 1252
1235 1253 - tb_offset: the number of frames to skip over in the stack, on a
 1236 1254           per-call basis (this temporarily overrides the instance's tb_offset
 1237 1255           given at initialization time)."""
1238 1256
1239 1257 if out is None:
1240 1258 out = self.ostream
1241 1259 out.flush()
1242 1260 out.write(self.text(etype, evalue, etb, tb_offset))
1243 1261 out.write('\n')
1244 1262 out.flush()
1245 1263 # FIXME: we should remove the auto pdb behavior from here and leave
1246 1264 # that to the clients.
1247 1265 try:
1248 1266 self.debugger()
1249 1267 except KeyboardInterrupt:
1250 1268 print("\nKeyboardInterrupt")
1251 1269
1252 def structured_traceback(self, etype=None, value=None, tb=None,
1253 tb_offset=None, number_of_lines_of_context=5):
1254
1270 def structured_traceback(
1271 self,
1272 etype=None,
1273 value=None,
1274 tb=None,
1275 tb_offset=None,
1276 number_of_lines_of_context=5,
1277 ):
1255 1278 etype: type
1256 1279 value: BaseException
 1257 1280         # tb: TracebackType or tuple of tb types?
1258 1281 if etype is None:
1259 1282 etype, value, tb = sys.exc_info()
1260 1283 if isinstance(tb, tuple):
1261 1284 # tb is a tuple if this is a chained exception.
1262 1285 self.tb = tb[0]
1263 1286 else:
1264 1287 self.tb = tb
1265 1288 return FormattedTB.structured_traceback(
1266 1289 self, etype, value, tb, tb_offset, number_of_lines_of_context)
1267 1290
1268 1291
1269 1292 #---------------------------------------------------------------------------
1270 1293
1271 1294 # A simple class to preserve Nathan's original functionality.
1272 1295 class ColorTB(FormattedTB):
1273 1296 """Shorthand to initialize a FormattedTB in Linux colors mode."""
1274 1297
1275 1298 def __init__(self, color_scheme='Linux', call_pdb=0, **kwargs):
1276 1299 FormattedTB.__init__(self, color_scheme=color_scheme,
1277 1300 call_pdb=call_pdb, **kwargs)
1278 1301
1279 1302
1280 1303 class SyntaxTB(ListTB):
1281 1304 """Extension which holds some state: the last exception value"""
1282 1305
1283 1306 def __init__(self, color_scheme='NoColor', parent=None, config=None):
1284 1307 ListTB.__init__(self, color_scheme, parent=parent, config=config)
1285 1308 self.last_syntax_error = None
1286 1309
1287 1310 def __call__(self, etype, value, elist):
1288 1311 self.last_syntax_error = value
1289 1312
1290 1313 ListTB.__call__(self, etype, value, elist)
1291 1314
1292 1315 def structured_traceback(self, etype, value, elist, tb_offset=None,
1293 1316 context=5):
1294 1317 # If the source file has been edited, the line in the syntax error can
1295 1318 # be wrong (retrieved from an outdated cache). This replaces it with
1296 1319 # the current value.
1297 1320 if isinstance(value, SyntaxError) \
1298 1321 and isinstance(value.filename, str) \
1299 1322 and isinstance(value.lineno, int):
1300 1323 linecache.checkcache(value.filename)
1301 1324 newtext = linecache.getline(value.filename, value.lineno)
1302 1325 if newtext:
1303 1326 value.text = newtext
1304 1327 self.last_syntax_error = value
1305 1328 return super(SyntaxTB, self).structured_traceback(etype, value, elist,
1306 1329 tb_offset=tb_offset, context=context)
1307 1330
1308 1331 def clear_err_state(self):
1309 1332 """Return the current error state and clear it"""
1310 1333 e = self.last_syntax_error
1311 1334 self.last_syntax_error = None
1312 1335 return e
1313 1336
1314 1337 def stb2text(self, stb):
1315 1338 """Convert a structured traceback (a list) to a string."""
1316 1339 return ''.join(stb)
1317 1340
1318 1341
1319 1342 # some internal-use functions
1320 1343 def text_repr(value):
1321 1344 """Hopefully pretty robust repr equivalent."""
1322 1345 # this is pretty horrible but should always return *something*
1323 1346 try:
1324 1347 return pydoc.text.repr(value)
1325 1348 except KeyboardInterrupt:
1326 1349 raise
1327 1350 except:
1328 1351 try:
1329 1352 return repr(value)
1330 1353 except KeyboardInterrupt:
1331 1354 raise
1332 1355 except:
1333 1356 try:
1334 1357 # all still in an except block so we catch
1335 1358 # getattr raising
1336 1359 name = getattr(value, '__name__', None)
1337 1360 if name:
1338 1361 # ick, recursion
1339 1362 return text_repr(name)
1340 1363 klass = getattr(value, '__class__', None)
1341 1364 if klass:
1342 1365 return '%s instance' % text_repr(klass)
1343 1366 except KeyboardInterrupt:
1344 1367 raise
1345 1368 except:
1346 1369 return 'UNRECOVERABLE REPR FAILURE'
1347 1370
1348 1371
1349 1372 def eqrepr(value, repr=text_repr):
1350 1373 return '=%s' % repr(value)
1351 1374
1352 1375
1353 1376 def nullrepr(value, repr=text_repr):
1354 1377 return ''
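The point of `text_repr` above is that traceback printing must survive objects whose `__repr__` raises. A compact sketch of the same defensive pattern (illustrative helper name, not the one used by IPython):

```python
def safe_repr(value):
    """Best-effort repr that never propagates a broken __repr__."""
    try:
        return repr(value)
    except KeyboardInterrupt:
        raise  # never swallow user interrupts
    except Exception:
        name = getattr(value, "__name__", None)
        if name:
            return safe_repr(name)
        klass = getattr(value, "__class__", None)
        if klass is not None:
            return "%s instance" % safe_repr(klass)
        return "UNRECOVERABLE REPR FAILURE"

class Broken:
    def __repr__(self):
        raise RuntimeError("repr exploded")
```

Even `safe_repr(Broken())` returns a usable string instead of raising.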
@@ -1,178 +1,177 b''
1 1 import asyncio
2 2 import os
3 3 import sys
4 4
5 5 from IPython.core.debugger import Pdb
6 6 from IPython.core.completer import IPCompleter
7 7 from .ptutils import IPythonPTCompleter
8 8 from .shortcuts import create_ipython_shortcuts
9 9 from . import embed
10 10
11 11 from pathlib import Path
12 12 from pygments.token import Token
13 13 from prompt_toolkit.shortcuts.prompt import PromptSession
14 14 from prompt_toolkit.enums import EditingMode
15 15 from prompt_toolkit.formatted_text import PygmentsTokens
16 16 from prompt_toolkit.history import InMemoryHistory, FileHistory
17 17 from concurrent.futures import ThreadPoolExecutor
18 18
19 19 from prompt_toolkit import __version__ as ptk_version
20 20 PTK3 = ptk_version.startswith('3.')
21 21
22 22
23 23 # we want to avoid ptk as much as possible when using subprocesses
 24 24 # as it uses cursor positioning requests, deletes colors, etc.
25 25 _use_simple_prompt = "IPY_TEST_SIMPLE_PROMPT" in os.environ
26 26
27 27
28 28 class TerminalPdb(Pdb):
29 29 """Standalone IPython debugger."""
30 30
31 31 def __init__(self, *args, pt_session_options=None, **kwargs):
32 32 Pdb.__init__(self, *args, **kwargs)
33 33 self._ptcomp = None
34 34 self.pt_init(pt_session_options)
35 35 self.thread_executor = ThreadPoolExecutor(1)
36 36
37 37 def pt_init(self, pt_session_options=None):
38 38 """Initialize the prompt session and the prompt loop
39 39 and store them in self.pt_app and self.pt_loop.
40 40
41 41 Additional keyword arguments for the PromptSession class
42 42 can be specified in pt_session_options.
43 43 """
44 44 if pt_session_options is None:
45 45 pt_session_options = {}
46 46
47 47 def get_prompt_tokens():
48 48 return [(Token.Prompt, self.prompt)]
49 49
50 50 if self._ptcomp is None:
51 51 compl = IPCompleter(
52 52 shell=self.shell, namespace={}, global_namespace={}, parent=self.shell
53 53 )
54 54 # add a completer for all the do_ methods
55 55 methods_names = [m[3:] for m in dir(self) if m.startswith("do_")]
56 56
57 57 def gen_comp(self, text):
58 58 return [m for m in methods_names if m.startswith(text)]
59 59 import types
60 60 newcomp = types.MethodType(gen_comp, compl)
61 61 compl.custom_matchers.insert(0, newcomp)
62 62 # end add completer.
63 63
64 64 self._ptcomp = IPythonPTCompleter(compl)
65 65
66 66 # setup history only when we start pdb
67 67 if self.shell.debugger_history is None:
68 68 if self.shell.debugger_history_file is not None:
69
70 69 p = Path(self.shell.debugger_history_file).expanduser()
71 70 if not p.exists():
72 71 p.touch()
73 72 self.debugger_history = FileHistory(os.path.expanduser(str(p)))
74 73 else:
75 74 self.debugger_history = InMemoryHistory()
76 75 else:
77 76 self.debugger_history = self.shell.debugger_history
78 77
79 78 options = dict(
80 79 message=(lambda: PygmentsTokens(get_prompt_tokens())),
81 80 editing_mode=getattr(EditingMode, self.shell.editing_mode.upper()),
82 81 key_bindings=create_ipython_shortcuts(self.shell),
83 82 history=self.debugger_history,
84 83 completer=self._ptcomp,
85 84 enable_history_search=True,
86 85 mouse_support=self.shell.mouse_support,
87 86 complete_style=self.shell.pt_complete_style,
88 87 style=getattr(self.shell, "style", None),
89 88 color_depth=self.shell.color_depth,
90 89 )
91 90
92 91 if not PTK3:
93 92 options['inputhook'] = self.shell.inputhook
94 93 options.update(pt_session_options)
95 94 if not _use_simple_prompt:
96 95 self.pt_loop = asyncio.new_event_loop()
97 96 self.pt_app = PromptSession(**options)
98 97
99 98 def cmdloop(self, intro=None):
100 99 """Repeatedly issue a prompt, accept input, parse an initial prefix
101 100 off the received input, and dispatch to action methods, passing them
102 101 the remainder of the line as argument.
103 102
104 103 Override the same methods from cmd.Cmd to provide a prompt_toolkit replacement.
105 104 """
106 105 if not self.use_rawinput:
107 106 raise ValueError('Sorry ipdb does not support use_rawinput=False')
108 107
109 108 # In order to make sure that prompt, which uses asyncio doesn't
110 109 # interfere with applications in which it's used, we always run the
111 110 # prompt itself in a different thread (we can't start an event loop
112 111 # within an event loop). This new thread won't have any event loop
113 112 # running, and here we run our prompt-loop.
114 113 self.preloop()
115 114
116 115 try:
117 116 if intro is not None:
118 117 self.intro = intro
119 118 if self.intro:
120 119 print(self.intro, file=self.stdout)
121 120 stop = None
122 121 while not stop:
123 122 if self.cmdqueue:
124 123 line = self.cmdqueue.pop(0)
125 124 else:
126 125 self._ptcomp.ipy_completer.namespace = self.curframe_locals
127 126 self._ptcomp.ipy_completer.global_namespace = self.curframe.f_globals
128 127
129 128 # Run the prompt in a different thread.
130 129 if not _use_simple_prompt:
131 130 try:
132 131 line = self.thread_executor.submit(
133 132 self.pt_app.prompt
134 133 ).result()
135 134 except EOFError:
136 135 line = "EOF"
137 136 else:
138 137 line = input("ipdb> ")
139 138
140 139 line = self.precmd(line)
141 140 stop = self.onecmd(line)
142 141 stop = self.postcmd(stop, line)
143 142 self.postloop()
144 143 except Exception:
145 144 raise
146 145
147 146 def do_interact(self, arg):
148 147 ipshell = embed.InteractiveShellEmbed(
149 148 config=self.shell.config,
150 149 banner1="*interactive*",
151 150 exit_msg="*exiting interactive console...*",
152 151 )
153 152 global_ns = self.curframe.f_globals
154 153 ipshell(
155 154 module=sys.modules.get(global_ns["__name__"], None),
156 155 local_ns=self.curframe_locals,
157 156 )
158 157
159 158
160 159 def set_trace(frame=None):
161 160 """
162 161 Start debugging from `frame`.
163 162
164 163 If frame is not specified, debugging starts from caller's frame.
165 164 """
166 165 TerminalPdb().set_trace(frame or sys._getframe().f_back)
167 166
168 167
169 168 if __name__ == '__main__':
170 169 import pdb
171 170 # IPython.core.debugger.Pdb.trace_dispatch shall not catch
172 171 # bdb.BdbQuit. When started through __main__ and an exception
173 172 # happened after hitting "c", this is needed in order to
174 173 # be able to quit the debugging session (see #9950).
175 174 old_trace_dispatch = pdb.Pdb.trace_dispatch
176 175 pdb.Pdb = TerminalPdb # type: ignore
177 176 pdb.Pdb.trace_dispatch = old_trace_dispatch # type: ignore
178 177 pdb.main()
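The `pt_init` hunk above builds a per-instance completer for the debugger's `do_*` commands: a plain function is bound onto the completer object at runtime with `types.MethodType` and inserted as the highest-priority custom matcher. A minimal sketch of that binding pattern (the `Completer` class here is a hypothetical stand-in for `IPCompleter`):

```python
import types

class Completer:
    """Hypothetical stand-in for IPCompleter: matchers are tried in order."""
    def __init__(self):
        self.custom_matchers = []

# command names, as derived from do_* methods in the real code
method_names = ["break", "continue", "down", "up", "where"]

def gen_comp(self, text):
    # return the command names that start with the typed prefix
    return [m for m in method_names if m.startswith(text)]

compl = Completer()
# bind gen_comp to this specific instance and register it with top priority
compl.custom_matchers.insert(0, types.MethodType(gen_comp, compl))

print(compl.custom_matchers[0]("c"))  # → ['continue']
```

Because the function is bound per instance rather than defined on the class, each debugger session gets its own matcher closed over its own command list.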
@@ -1,860 +1,859 b''
1 1 # Based on Pytest doctest.py
2 2 # Original license:
3 3 # The MIT License (MIT)
4 4 #
5 5 # Copyright (c) 2004-2021 Holger Krekel and others
6 6 """Discover and run ipdoctests in modules and test files."""
7 7 import builtins
8 8 import bdb
9 9 import inspect
10 10 import os
11 11 import platform
12 12 import sys
13 13 import traceback
14 14 import types
15 15 import warnings
16 16 from contextlib import contextmanager
17 17 from pathlib import Path
18 18 from typing import Any
19 19 from typing import Callable
20 20 from typing import Dict
21 21 from typing import Generator
22 22 from typing import Iterable
23 23 from typing import List
24 24 from typing import Optional
25 25 from typing import Pattern
26 26 from typing import Sequence
27 27 from typing import Tuple
28 28 from typing import Type
29 29 from typing import TYPE_CHECKING
30 30 from typing import Union
31 31
32 32 import pytest
33 33 from _pytest import outcomes
34 34 from _pytest._code.code import ExceptionInfo
35 35 from _pytest._code.code import ReprFileLocation
36 36 from _pytest._code.code import TerminalRepr
37 37 from _pytest._io import TerminalWriter
38 38 from _pytest.compat import safe_getattr
39 39 from _pytest.config import Config
40 40 from _pytest.config.argparsing import Parser
41 41 from _pytest.fixtures import FixtureRequest
42 42 from _pytest.nodes import Collector
43 43 from _pytest.outcomes import OutcomeException
44 44 from _pytest.pathlib import fnmatch_ex
45 45 from _pytest.pathlib import import_path
46 46 from _pytest.python_api import approx
47 47 from _pytest.warning_types import PytestWarning
48 48
49 49 if TYPE_CHECKING:
50 50 import doctest
51 51
52 52 DOCTEST_REPORT_CHOICE_NONE = "none"
53 53 DOCTEST_REPORT_CHOICE_CDIFF = "cdiff"
54 54 DOCTEST_REPORT_CHOICE_NDIFF = "ndiff"
55 55 DOCTEST_REPORT_CHOICE_UDIFF = "udiff"
56 56 DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE = "only_first_failure"
57 57
58 58 DOCTEST_REPORT_CHOICES = (
59 59 DOCTEST_REPORT_CHOICE_NONE,
60 60 DOCTEST_REPORT_CHOICE_CDIFF,
61 61 DOCTEST_REPORT_CHOICE_NDIFF,
62 62 DOCTEST_REPORT_CHOICE_UDIFF,
63 63 DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE,
64 64 )
65 65
66 66 # Lazy definition of runner class
67 67 RUNNER_CLASS = None
68 68 # Lazy definition of output checker class
69 69 CHECKER_CLASS: Optional[Type["IPDoctestOutputChecker"]] = None
70 70
71 71
72 72 def pytest_addoption(parser: Parser) -> None:
73 73 parser.addini(
74 74 "ipdoctest_optionflags",
75 75 "option flags for ipdoctests",
76 76 type="args",
77 77 default=["ELLIPSIS"],
78 78 )
79 79 parser.addini(
80 80 "ipdoctest_encoding", "encoding used for ipdoctest files", default="utf-8"
81 81 )
82 82 group = parser.getgroup("collect")
83 83 group.addoption(
84 84 "--ipdoctest-modules",
85 85 action="store_true",
86 86 default=False,
87 87 help="run ipdoctests in all .py modules",
88 88 dest="ipdoctestmodules",
89 89 )
90 90 group.addoption(
91 91 "--ipdoctest-report",
92 92 type=str.lower,
93 93 default="udiff",
94 94 help="choose another output format for diffs on ipdoctest failure",
95 95 choices=DOCTEST_REPORT_CHOICES,
96 96 dest="ipdoctestreport",
97 97 )
98 98 group.addoption(
99 99 "--ipdoctest-glob",
100 100 action="append",
101 101 default=[],
102 102 metavar="pat",
103 103 help="ipdoctests file matching pattern, default: test*.txt",
104 104 dest="ipdoctestglob",
105 105 )
106 106 group.addoption(
107 107 "--ipdoctest-ignore-import-errors",
108 108 action="store_true",
109 109 default=False,
110 110 help="ignore ipdoctest ImportErrors",
111 111 dest="ipdoctest_ignore_import_errors",
112 112 )
113 113 group.addoption(
114 114 "--ipdoctest-continue-on-failure",
115 115 action="store_true",
116 116 default=False,
117 117 help="for a given ipdoctest, continue to run after the first failure",
118 118 dest="ipdoctest_continue_on_failure",
119 119 )
120 120
121 121
122 122 def pytest_unconfigure() -> None:
123 123 global RUNNER_CLASS
124 124
125 125 RUNNER_CLASS = None
126 126
127 127
128 128 def pytest_collect_file(
129 129 file_path: Path,
130 130 parent: Collector,
131 131 ) -> Optional[Union["IPDoctestModule", "IPDoctestTextfile"]]:
132 132 config = parent.config
133 133 if file_path.suffix == ".py":
134 134 if config.option.ipdoctestmodules and not any(
135 135 (_is_setup_py(file_path), _is_main_py(file_path))
136 136 ):
137 137 mod: IPDoctestModule = IPDoctestModule.from_parent(parent, path=file_path)
138 138 return mod
139 139 elif _is_ipdoctest(config, file_path, parent):
140 140 txt: IPDoctestTextfile = IPDoctestTextfile.from_parent(parent, path=file_path)
141 141 return txt
142 142 return None
143 143
144 144
145 145 if int(pytest.__version__.split(".")[0]) < 7:
146 146 _collect_file = pytest_collect_file
147 147
148 148 def pytest_collect_file(
149 149 path,
150 150 parent: Collector,
151 151 ) -> Optional[Union["IPDoctestModule", "IPDoctestTextfile"]]:
152 152 return _collect_file(Path(path), parent)
153 153
154 154 _import_path = import_path
155 155
156 156 def import_path(path, root):
157 157 import py.path
158 158
159 159 return _import_path(py.path.local(path))
160 160
161 161
162 162 def _is_setup_py(path: Path) -> bool:
163 163 if path.name != "setup.py":
164 164 return False
165 165 contents = path.read_bytes()
166 166 return b"setuptools" in contents or b"distutils" in contents
167 167
168 168
169 169 def _is_ipdoctest(config: Config, path: Path, parent: Collector) -> bool:
170 170 if path.suffix in (".txt", ".rst") and parent.session.isinitpath(path):
171 171 return True
172 172 globs = config.getoption("ipdoctestglob") or ["test*.txt"]
173 173 return any(fnmatch_ex(glob, path) for glob in globs)
174 174
175 175
176 176 def _is_main_py(path: Path) -> bool:
177 177 return path.name == "__main__.py"
178 178
179 179
180 180 class ReprFailDoctest(TerminalRepr):
181 181 def __init__(
182 182 self, reprlocation_lines: Sequence[Tuple[ReprFileLocation, Sequence[str]]]
183 183 ) -> None:
184 184 self.reprlocation_lines = reprlocation_lines
185 185
186 186 def toterminal(self, tw: TerminalWriter) -> None:
187 187 for reprlocation, lines in self.reprlocation_lines:
188 188 for line in lines:
189 189 tw.line(line)
190 190 reprlocation.toterminal(tw)
191 191
192 192
193 193 class MultipleDoctestFailures(Exception):
194 194 def __init__(self, failures: Sequence["doctest.DocTestFailure"]) -> None:
195 195 super().__init__()
196 196 self.failures = failures
197 197
198 198
199 199 def _init_runner_class() -> Type["IPDocTestRunner"]:
200 200 import doctest
201 201 from .ipdoctest import IPDocTestRunner
202 202
203 203 class PytestDoctestRunner(IPDocTestRunner):
204 204 """Runner to collect failures.
205 205
206 206 Note that the out variable in this case is a list instead of a
207 207 stdout-like object.
208 208 """
209 209
210 210 def __init__(
211 211 self,
212 212 checker: Optional["IPDoctestOutputChecker"] = None,
213 213 verbose: Optional[bool] = None,
214 214 optionflags: int = 0,
215 215 continue_on_failure: bool = True,
216 216 ) -> None:
217 217 super().__init__(checker=checker, verbose=verbose, optionflags=optionflags)
218 218 self.continue_on_failure = continue_on_failure
219 219
220 220 def report_failure(
221 221 self,
222 222 out,
223 223 test: "doctest.DocTest",
224 224 example: "doctest.Example",
225 225 got: str,
226 226 ) -> None:
227 227 failure = doctest.DocTestFailure(test, example, got)
228 228 if self.continue_on_failure:
229 229 out.append(failure)
230 230 else:
231 231 raise failure
232 232
233 233 def report_unexpected_exception(
234 234 self,
235 235 out,
236 236 test: "doctest.DocTest",
237 237 example: "doctest.Example",
238 238 exc_info: Tuple[Type[BaseException], BaseException, types.TracebackType],
239 239 ) -> None:
240 240 if isinstance(exc_info[1], OutcomeException):
241 241 raise exc_info[1]
242 242 if isinstance(exc_info[1], bdb.BdbQuit):
243 243 outcomes.exit("Quitting debugger")
244 244 failure = doctest.UnexpectedException(test, example, exc_info)
245 245 if self.continue_on_failure:
246 246 out.append(failure)
247 247 else:
248 248 raise failure
249 249
250 250 return PytestDoctestRunner
251 251
252 252
253 253 def _get_runner(
254 254 checker: Optional["IPDoctestOutputChecker"] = None,
255 255 verbose: Optional[bool] = None,
256 256 optionflags: int = 0,
257 257 continue_on_failure: bool = True,
258 258 ) -> "IPDocTestRunner":
259 259 # We need this in order to do a lazy import on doctest
260 260 global RUNNER_CLASS
261 261 if RUNNER_CLASS is None:
262 262 RUNNER_CLASS = _init_runner_class()
263 263 # Type ignored because the continue_on_failure argument is only defined on
264 264 # PytestDoctestRunner, which is lazily defined so can't be used as a type.
265 265 return RUNNER_CLASS( # type: ignore
266 266 checker=checker,
267 267 verbose=verbose,
268 268 optionflags=optionflags,
269 269 continue_on_failure=continue_on_failure,
270 270 )
271 271
272 272
273 273 class IPDoctestItem(pytest.Item):
274 274 def __init__(
275 275 self,
276 276 name: str,
277 277 parent: "Union[IPDoctestTextfile, IPDoctestModule]",
278 278 runner: Optional["IPDocTestRunner"] = None,
279 279 dtest: Optional["doctest.DocTest"] = None,
280 280 ) -> None:
281 281 super().__init__(name, parent)
282 282 self.runner = runner
283 283 self.dtest = dtest
284 284 self.obj = None
285 285 self.fixture_request: Optional[FixtureRequest] = None
286 286
287 287 @classmethod
288 288 def from_parent( # type: ignore
289 289 cls,
290 290 parent: "Union[IPDoctestTextfile, IPDoctestModule]",
291 291 *,
292 292 name: str,
293 293 runner: "IPDocTestRunner",
294 294 dtest: "doctest.DocTest",
295 295 ):
296 296 # incompatible signature due to imposed limits on subclass
297 297 """The public named constructor."""
298 298 return super().from_parent(name=name, parent=parent, runner=runner, dtest=dtest)
299 299
300 300 def setup(self) -> None:
301 301 if self.dtest is not None:
302 302 self.fixture_request = _setup_fixtures(self)
303 303 globs = dict(getfixture=self.fixture_request.getfixturevalue)
304 304 for name, value in self.fixture_request.getfixturevalue(
305 305 "ipdoctest_namespace"
306 306 ).items():
307 307 globs[name] = value
308 308 self.dtest.globs.update(globs)
309 309
310 310 from .ipdoctest import IPExample
311 311
312 312 if isinstance(self.dtest.examples[0], IPExample):
313 313 # for IPython examples *only*, we swap the globals with the ipython
314 314 # namespace, after updating it with the globals (which doctest
315 315 # fills with the necessary info from the module being tested).
316 316 self._user_ns_orig = {}
317 317 self._user_ns_orig.update(_ip.user_ns)
318 318 _ip.user_ns.update(self.dtest.globs)
319 319 # We must remove the _ key in the namespace, so that Python's
320 320 # doctest code sets it naturally
321 321 _ip.user_ns.pop("_", None)
322 322 _ip.user_ns["__builtins__"] = builtins
323 323 self.dtest.globs = _ip.user_ns
324 324
325 325 def teardown(self) -> None:
326 326 from .ipdoctest import IPExample
327 327
328 328 # Undo the test.globs reassignment we made
329 329 if isinstance(self.dtest.examples[0], IPExample):
330 330 self.dtest.globs = {}
331 331 _ip.user_ns.clear()
332 332 _ip.user_ns.update(self._user_ns_orig)
333 333 del self._user_ns_orig
334 334
335 335 self.dtest.globs.clear()
336 336
337 337 def runtest(self) -> None:
338 338 assert self.dtest is not None
339 339 assert self.runner is not None
340 340 _check_all_skipped(self.dtest)
341 341 self._disable_output_capturing_for_darwin()
342 342 failures: List["doctest.DocTestFailure"] = []
343 343
344 344 # exec(compile(..., "single", ...), ...) puts result in builtins._
345 345 had_underscore_value = hasattr(builtins, "_")
346 346 underscore_original_value = getattr(builtins, "_", None)
347 347
348 348 # Save our current directory and switch out to the one where the
349 349 # test was originally created, in case another doctest did a
350 350 # directory change. We'll restore this in the finally clause.
351 351 curdir = os.getcwd()
352 352 os.chdir(self.fspath.dirname)
353 353 try:
354 354 # Type ignored because we change the type of `out` from what
355 355 # ipdoctest expects.
356 356 self.runner.run(self.dtest, out=failures, clear_globs=False) # type: ignore[arg-type]
357 357 finally:
358 358 os.chdir(curdir)
359 359 if had_underscore_value:
360 360 setattr(builtins, "_", underscore_original_value)
361 361 elif hasattr(builtins, "_"):
362 362 delattr(builtins, "_")
363 363
364 364 if failures:
365 365 raise MultipleDoctestFailures(failures)
366 366
367 367 def _disable_output_capturing_for_darwin(self) -> None:
368 368 """Disable output capturing. Otherwise, stdout is lost to ipdoctest (pytest#985)."""
369 369 if platform.system() != "Darwin":
370 370 return
371 371 capman = self.config.pluginmanager.getplugin("capturemanager")
372 372 if capman:
373 373 capman.suspend_global_capture(in_=True)
374 374 out, err = capman.read_global_capture()
375 375 sys.stdout.write(out)
376 376 sys.stderr.write(err)
377 377
378 378 # TODO: Type ignored -- breaks Liskov Substitution.
379 379 def repr_failure( # type: ignore[override]
380 380 self,
381 381 excinfo: ExceptionInfo[BaseException],
382 382 ) -> Union[str, TerminalRepr]:
383 383 import doctest
384 384
385 385 failures: Optional[
386 386 Sequence[Union[doctest.DocTestFailure, doctest.UnexpectedException]]
387 387 ] = None
388 388 if isinstance(
389 389 excinfo.value, (doctest.DocTestFailure, doctest.UnexpectedException)
390 390 ):
391 391 failures = [excinfo.value]
392 392 elif isinstance(excinfo.value, MultipleDoctestFailures):
393 393 failures = excinfo.value.failures
394 394
395 395 if failures is None:
396 396 return super().repr_failure(excinfo)
397 397
398 398 reprlocation_lines = []
399 399 for failure in failures:
400 400 example = failure.example
401 401 test = failure.test
402 402 filename = test.filename
403 403 if test.lineno is None:
404 404 lineno = None
405 405 else:
406 406 lineno = test.lineno + example.lineno + 1
407 407 message = type(failure).__name__
408 408 # TODO: ReprFileLocation doesn't expect a None lineno.
409 409 reprlocation = ReprFileLocation(filename, lineno, message) # type: ignore[arg-type]
410 410 checker = _get_checker()
411 411 report_choice = _get_report_choice(self.config.getoption("ipdoctestreport"))
412 412 if lineno is not None:
413 413 assert failure.test.docstring is not None
414 414 lines = failure.test.docstring.splitlines(False)
415 415 # add line numbers to the left of the error message
416 416 assert test.lineno is not None
417 417 lines = [
418 418 "%03d %s" % (i + test.lineno + 1, x) for (i, x) in enumerate(lines)
419 419 ]
420 420 # trim docstring error lines to 10
421 421 lines = lines[max(example.lineno - 9, 0) : example.lineno + 1]
422 422 else:
423 423 lines = [
424 424 "EXAMPLE LOCATION UNKNOWN, not showing all tests of that example"
425 425 ]
426 426 indent = ">>>"
427 427 for line in example.source.splitlines():
428 428 lines.append(f"??? {indent} {line}")
429 429 indent = "..."
430 430 if isinstance(failure, doctest.DocTestFailure):
431 431 lines += checker.output_difference(
432 432 example, failure.got, report_choice
433 433 ).split("\n")
434 434 else:
435 435 inner_excinfo = ExceptionInfo.from_exc_info(failure.exc_info)
436 436 lines += ["UNEXPECTED EXCEPTION: %s" % repr(inner_excinfo.value)]
437 437 lines += [
438 438 x.strip("\n") for x in traceback.format_exception(*failure.exc_info)
439 439 ]
440 440 reprlocation_lines.append((reprlocation, lines))
441 441 return ReprFailDoctest(reprlocation_lines)
442 442
443 443 def reportinfo(self) -> Tuple[Union["os.PathLike[str]", str], Optional[int], str]:
444 444 assert self.dtest is not None
445 445 return self.path, self.dtest.lineno, "[ipdoctest] %s" % self.name
446 446
447 447 if int(pytest.__version__.split(".")[0]) < 7:
448 448
449 449 @property
450 450 def path(self) -> Path:
451 451 return Path(self.fspath)
452 452
453 453
454 454 def _get_flag_lookup() -> Dict[str, int]:
455 455 import doctest
456 456
457 457 return dict(
458 458 DONT_ACCEPT_TRUE_FOR_1=doctest.DONT_ACCEPT_TRUE_FOR_1,
459 459 DONT_ACCEPT_BLANKLINE=doctest.DONT_ACCEPT_BLANKLINE,
460 460 NORMALIZE_WHITESPACE=doctest.NORMALIZE_WHITESPACE,
461 461 ELLIPSIS=doctest.ELLIPSIS,
462 462 IGNORE_EXCEPTION_DETAIL=doctest.IGNORE_EXCEPTION_DETAIL,
463 463 COMPARISON_FLAGS=doctest.COMPARISON_FLAGS,
464 464 ALLOW_UNICODE=_get_allow_unicode_flag(),
465 465 ALLOW_BYTES=_get_allow_bytes_flag(),
466 466 NUMBER=_get_number_flag(),
467 467 )
468 468
469 469
470 470 def get_optionflags(parent):
471 471 optionflags_str = parent.config.getini("ipdoctest_optionflags")
472 472 flag_lookup_table = _get_flag_lookup()
473 473 flag_acc = 0
474 474 for flag in optionflags_str:
475 475 flag_acc |= flag_lookup_table[flag]
476 476 return flag_acc
477 477
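`get_optionflags` above ORs each named flag from the ini option into a single bitmask. The same accumulation can be sketched directly against the stdlib `doctest` flags (the `accumulate` helper name is illustrative):

```python
import doctest

# a small lookup table, mirroring the shape of _get_flag_lookup()
flag_lookup = {
    "ELLIPSIS": doctest.ELLIPSIS,
    "NORMALIZE_WHITESPACE": doctest.NORMALIZE_WHITESPACE,
}

def accumulate(names):
    # OR each named flag into one bitmask, as get_optionflags does
    acc = 0
    for name in names:
        acc |= flag_lookup[name]
    return acc

flags = accumulate(["ELLIPSIS", "NORMALIZE_WHITESPACE"])
assert flags == doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE
```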
478 478
479 479 def _get_continue_on_failure(config):
480 480 continue_on_failure = config.getvalue("ipdoctest_continue_on_failure")
481 481 if continue_on_failure:
482 482 # We need to turn off this if we use pdb since we should stop at
483 483 # the first failure.
484 484 if config.getvalue("usepdb"):
485 485 continue_on_failure = False
486 486 return continue_on_failure
487 487
488 488
489 489 class IPDoctestTextfile(pytest.Module):
490 490 obj = None
491 491
492 492 def collect(self) -> Iterable[IPDoctestItem]:
493 493 import doctest
494 494 from .ipdoctest import IPDocTestParser
495 495
496 496 # Inspired by doctest.testfile; ideally we would use it directly,
497 497 # but it doesn't support passing a custom checker.
498 498 encoding = self.config.getini("ipdoctest_encoding")
499 499 text = self.path.read_text(encoding)
500 500 filename = str(self.path)
501 501 name = self.path.name
502 502 globs = {"__name__": "__main__"}
503 503
504 504 optionflags = get_optionflags(self)
505 505
506 506 runner = _get_runner(
507 507 verbose=False,
508 508 optionflags=optionflags,
509 509 checker=_get_checker(),
510 510 continue_on_failure=_get_continue_on_failure(self.config),
511 511 )
512 512
513 513 parser = IPDocTestParser()
514 514 test = parser.get_doctest(text, globs, name, filename, 0)
515 515 if test.examples:
516 516 yield IPDoctestItem.from_parent(
517 517 self, name=test.name, runner=runner, dtest=test
518 518 )
519 519
520 520 if int(pytest.__version__.split(".")[0]) < 7:
521 521
522 522 @property
523 523 def path(self) -> Path:
524 524 return Path(self.fspath)
525 525
526 526 @classmethod
527 527 def from_parent(
528 528 cls,
529 529 parent,
530 530 *,
531 531 fspath=None,
532 532 path: Optional[Path] = None,
533 533 **kw,
534 534 ):
535 535 if path is not None:
536 536 import py.path
537 537
538 538 fspath = py.path.local(path)
539 539 return super().from_parent(parent=parent, fspath=fspath, **kw)
540 540
541 541
542 542 def _check_all_skipped(test: "doctest.DocTest") -> None:
543 543 """Raise pytest.skip() if all examples in the given DocTest have the SKIP
544 544 option set."""
545 545 import doctest
546 546
547 547 all_skipped = all(x.options.get(doctest.SKIP, False) for x in test.examples)
548 548 if all_skipped:
549 549 pytest.skip("all doctests skipped by +SKIP option")
550 550
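`_check_all_skipped` reduces the per-example option dicts with `all()`: the test is skipped only when every example carries `doctest.SKIP`. A sketch with handcrafted `doctest.Example` objects:

```python
import doctest

skipped = doctest.Example("1 + 1\n", "2\n", options={doctest.SKIP: True})
active = doctest.Example("2 + 2\n", "4\n")

def all_skipped(examples):
    # True only when every example has the SKIP option set
    return all(ex.options.get(doctest.SKIP, False) for ex in examples)

print(all_skipped([skipped, skipped]))  # → True
print(all_skipped([skipped, active]))   # → False
```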
551 551
552 552 def _is_mocked(obj: object) -> bool:
553 553 """Return if an object is possibly a mock object by checking the
554 554 existence of a highly improbable attribute."""
555 555 return (
556 556 safe_getattr(obj, "pytest_mock_example_attribute_that_shouldnt_exist", None)
557 557 is not None
558 558 )
559 559
560 560
561 561 @contextmanager
562 562 def _patch_unwrap_mock_aware() -> Generator[None, None, None]:
563 563 """Context manager which replaces ``inspect.unwrap`` with a version
564 564 that's aware of mock objects and doesn't recurse into them."""
565 565 real_unwrap = inspect.unwrap
566 566
567 567 def _mock_aware_unwrap(
568 568 func: Callable[..., Any], *, stop: Optional[Callable[[Any], Any]] = None
569 569 ) -> Any:
570 570 try:
571 571 if stop is None or stop is _is_mocked:
572 572 return real_unwrap(func, stop=_is_mocked)
573 573 _stop = stop
574 574 return real_unwrap(func, stop=lambda obj: _is_mocked(obj) or _stop(func))
575 575 except Exception as e:
576 576 warnings.warn(
577 577 "Got %r when unwrapping %r. This is usually caused "
578 578 "by a violation of Python's object protocol; see e.g. "
579 579 "https://github.com/pytest-dev/pytest/issues/5080" % (e, func),
580 580 PytestWarning,
581 581 )
582 582 raise
583 583
584 584 inspect.unwrap = _mock_aware_unwrap
585 585 try:
586 586 yield
587 587 finally:
588 588 inspect.unwrap = real_unwrap
589 589
590 590
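The context manager above swaps `inspect.unwrap` for a variant whose `stop` predicate halts at mock-like objects, so the doctest finder never recurses into a mock's endless attribute chain. The stop mechanism can be sketched with a sentinel attribute standing in for a real mock (the `looks_mocked` detector is hypothetical):

```python
import functools
import inspect

def looks_mocked(obj):
    # hypothetical mock detector: checks for a sentinel attribute
    return getattr(obj, "_sentinel_mock_marker", False)

def base():
    return "base"

@functools.wraps(base)
def wrapper(*args, **kwargs):
    return base(*args, **kwargs)

# normal case: unwrap follows __wrapped__ back to the original function
assert inspect.unwrap(wrapper) is base

# "mocked" case: the stop predicate halts unwrapping at the wrapper itself
wrapper._sentinel_mock_marker = True
assert inspect.unwrap(wrapper, stop=looks_mocked) is wrapper
```

`inspect.unwrap` consults `stop(f)` before following each `__wrapped__` link, which is exactly why `_mock_aware_unwrap` can short-circuit on mocks without touching genuine wrappers.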
591 591 class IPDoctestModule(pytest.Module):
592 592 def collect(self) -> Iterable[IPDoctestItem]:
593 593 import doctest
594 594 from .ipdoctest import DocTestFinder, IPDocTestParser
595 595
596 596 class MockAwareDocTestFinder(DocTestFinder):
597 597 """A hackish ipdoctest finder that overrides stdlib internals to fix a stdlib bug.
598 598
599 599 https://github.com/pytest-dev/pytest/issues/3456
600 600 https://bugs.python.org/issue25532
601 601 """
602 602
603 603 def _find_lineno(self, obj, source_lines):
604 604 """Doctest code does not take into account `@property`, this
605 605 is a hackish way to fix it. https://bugs.python.org/issue17446
606 606
607 607 Wrapped Doctests will need to be unwrapped so the correct
608 608 line number is returned. This will be reported upstream. #8796
609 609 """
610 610 if isinstance(obj, property):
611 611 obj = getattr(obj, "fget", obj)
612 612
613 613 if hasattr(obj, "__wrapped__"):
614 614 # Get the main obj in case of it being wrapped
615 615 obj = inspect.unwrap(obj)
616 616
617 617 # Type ignored because this is a private function.
618 618 return super()._find_lineno( # type:ignore[misc]
619 619 obj,
620 620 source_lines,
621 621 )
622 622
623 623 def _find(
624 624 self, tests, obj, name, module, source_lines, globs, seen
625 625 ) -> None:
626 626 if _is_mocked(obj):
627 627 return
628 628 with _patch_unwrap_mock_aware():
629
630 629 # Type ignored because this is a private function.
631 630 super()._find( # type:ignore[misc]
632 631 tests, obj, name, module, source_lines, globs, seen
633 632 )
634 633
635 634 if self.path.name == "conftest.py":
636 635 if int(pytest.__version__.split(".")[0]) < 7:
637 636 module = self.config.pluginmanager._importconftest(
638 637 self.path,
639 638 self.config.getoption("importmode"),
640 639 )
641 640 else:
642 641 module = self.config.pluginmanager._importconftest(
643 642 self.path,
644 643 self.config.getoption("importmode"),
645 644 rootpath=self.config.rootpath,
646 645 )
647 646 else:
648 647 try:
649 648 module = import_path(self.path, root=self.config.rootpath)
650 649 except ImportError:
651 650 if self.config.getvalue("ipdoctest_ignore_import_errors"):
652 651 pytest.skip("unable to import module %r" % self.path)
653 652 else:
654 653 raise
655 654 # Uses internal doctest module parsing mechanism.
656 655 finder = MockAwareDocTestFinder(parser=IPDocTestParser())
657 656 optionflags = get_optionflags(self)
658 657 runner = _get_runner(
659 658 verbose=False,
660 659 optionflags=optionflags,
661 660 checker=_get_checker(),
662 661 continue_on_failure=_get_continue_on_failure(self.config),
663 662 )
664 663
665 664 for test in finder.find(module, module.__name__):
666 665 if test.examples: # skip empty ipdoctests
667 666 yield IPDoctestItem.from_parent(
668 667 self, name=test.name, runner=runner, dtest=test
669 668 )
670 669
671 670 if int(pytest.__version__.split(".")[0]) < 7:
672 671
673 672 @property
674 673 def path(self) -> Path:
675 674 return Path(self.fspath)
676 675
677 676 @classmethod
678 677 def from_parent(
679 678 cls,
680 679 parent,
681 680 *,
682 681 fspath=None,
683 682 path: Optional[Path] = None,
684 683 **kw,
685 684 ):
686 685 if path is not None:
687 686 import py.path
688 687
689 688 fspath = py.path.local(path)
690 689 return super().from_parent(parent=parent, fspath=fspath, **kw)
691 690
692 691
693 692 def _setup_fixtures(doctest_item: IPDoctestItem) -> FixtureRequest:
694 693 """Used by IPDoctestTextfile and IPDoctestItem to setup fixture information."""
695 694
696 695 def func() -> None:
697 696 pass
698 697
699 698 doctest_item.funcargs = {} # type: ignore[attr-defined]
700 699 fm = doctest_item.session._fixturemanager
701 700 doctest_item._fixtureinfo = fm.getfixtureinfo( # type: ignore[attr-defined]
702 701 node=doctest_item, func=func, cls=None, funcargs=False
703 702 )
704 703 fixture_request = FixtureRequest(doctest_item, _ispytest=True)
705 704 fixture_request._fillfixtures()
706 705 return fixture_request
707 706
708 707
709 708 def _init_checker_class() -> Type["IPDoctestOutputChecker"]:
710 709 import doctest
711 710 import re
712 711 from .ipdoctest import IPDoctestOutputChecker
713 712
714 713 class LiteralsOutputChecker(IPDoctestOutputChecker):
715 714 # Based on doctest_nose_plugin.py from the nltk project
716 715 # (https://github.com/nltk/nltk) and on the "numtest" doctest extension
717 716 # by Sebastien Boisgerault (https://github.com/boisgera/numtest).
718 717
719 718 _unicode_literal_re = re.compile(r"(\W|^)[uU]([rR]?[\'\"])", re.UNICODE)
720 719 _bytes_literal_re = re.compile(r"(\W|^)[bB]([rR]?[\'\"])", re.UNICODE)
721 720 _number_re = re.compile(
722 721 r"""
723 722 (?P<number>
724 723 (?P<mantissa>
725 724 (?P<integer1> [+-]?\d*)\.(?P<fraction>\d+)
726 725 |
727 726 (?P<integer2> [+-]?\d+)\.
728 727 )
729 728 (?:
730 729 [Ee]
731 730 (?P<exponent1> [+-]?\d+)
732 731 )?
733 732 |
734 733 (?P<integer3> [+-]?\d+)
735 734 (?:
736 735 [Ee]
737 736 (?P<exponent2> [+-]?\d+)
738 737 )
739 738 )
740 739 """,
741 740 re.VERBOSE,
742 741 )
743 742
744 743 def check_output(self, want: str, got: str, optionflags: int) -> bool:
745 744 if super().check_output(want, got, optionflags):
746 745 return True
747 746
748 747 allow_unicode = optionflags & _get_allow_unicode_flag()
749 748 allow_bytes = optionflags & _get_allow_bytes_flag()
750 749 allow_number = optionflags & _get_number_flag()
751 750
752 751 if not allow_unicode and not allow_bytes and not allow_number:
753 752 return False
754 753
755 754 def remove_prefixes(regex: Pattern[str], txt: str) -> str:
756 755 return re.sub(regex, r"\1\2", txt)
757 756
758 757 if allow_unicode:
759 758 want = remove_prefixes(self._unicode_literal_re, want)
760 759 got = remove_prefixes(self._unicode_literal_re, got)
761 760
762 761 if allow_bytes:
763 762 want = remove_prefixes(self._bytes_literal_re, want)
764 763 got = remove_prefixes(self._bytes_literal_re, got)
765 764
766 765 if allow_number:
767 766 got = self._remove_unwanted_precision(want, got)
768 767
769 768 return super().check_output(want, got, optionflags)
770 769
771 770 def _remove_unwanted_precision(self, want: str, got: str) -> str:
772 771 wants = list(self._number_re.finditer(want))
773 772 gots = list(self._number_re.finditer(got))
774 773 if len(wants) != len(gots):
775 774 return got
776 775 offset = 0
777 776 for w, g in zip(wants, gots):
778 777 fraction: Optional[str] = w.group("fraction")
779 778 exponent: Optional[str] = w.group("exponent1")
780 779 if exponent is None:
781 780 exponent = w.group("exponent2")
782 781 precision = 0 if fraction is None else len(fraction)
783 782 if exponent is not None:
784 783 precision -= int(exponent)
785 784 if float(w.group()) == approx(float(g.group()), abs=10**-precision):
786 785 # They're close enough. Replace the text we actually
787 786 # got with the text we want, so that it will match when we
788 787 # check the string literally.
789 788 got = (
790 789 got[: g.start() + offset] + w.group() + got[g.end() + offset :]
791 790 )
792 791 offset += w.end() - w.start() - (g.end() - g.start())
793 792 return got
794 793
795 794 return LiteralsOutputChecker
796 795
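The two mechanisms inside `LiteralsOutputChecker` can be exercised in isolation: the prefix regex rewrites `u''` literals so both spellings compare equal, and the NUMBER logic derives a precision from the *expected* literal and tolerates differences below it. A sketch using the same unicode-prefix regular expression (the `close_enough` helper simplifies `_remove_unwanted_precision` and is illustrative only):

```python
import re

# same pattern the checker uses to drop u/U string prefixes
unicode_literal_re = re.compile(r"(\W|^)[uU]([rR]?[\'\"])", re.UNICODE)

def remove_prefixes(regex, txt):
    # keep the leading context (\1) and the quote (\2), drop the prefix
    return re.sub(regex, r"\1\2", txt)

print(remove_prefixes(unicode_literal_re, "u'abc'"))  # → 'abc'

# NUMBER-style comparison: precision comes from the wanted literal's fraction
def close_enough(want: str, got: str) -> bool:
    fraction = want.split(".")[1] if "." in want else ""
    precision = len(fraction)
    return abs(float(want) - float(got)) <= 10 ** -precision

print(close_enough("3.14", "3.14159"))   # → True
print(close_enough("3.141", "3.14959"))  # → False
```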
797 796
798 797 def _get_checker() -> "IPDoctestOutputChecker":
799 798 """Return an IPDoctestOutputChecker subclass that supports some
800 799 additional options:
801 800
802 801 * ALLOW_UNICODE and ALLOW_BYTES options to ignore u'' and b''
803 802 prefixes (respectively) in string literals. Useful when the same
804 803 ipdoctest should run in Python 2 and Python 3.
805 804
806 805 * NUMBER to ignore floating-point differences smaller than the
807 806 precision of the literal number in the ipdoctest.
808 807
809 808 An inner class is used to avoid importing "ipdoctest" at the module
810 809 level.
811 810 """
812 811 global CHECKER_CLASS
813 812 if CHECKER_CLASS is None:
814 813 CHECKER_CLASS = _init_checker_class()
815 814 return CHECKER_CLASS()
816 815
817 816
818 817 def _get_allow_unicode_flag() -> int:
819 818 """Register and return the ALLOW_UNICODE flag."""
820 819 import doctest
821 820
822 821 return doctest.register_optionflag("ALLOW_UNICODE")
823 822
824 823
825 824 def _get_allow_bytes_flag() -> int:
826 825 """Register and return the ALLOW_BYTES flag."""
827 826 import doctest
828 827
829 828 return doctest.register_optionflag("ALLOW_BYTES")
830 829
831 830
832 831 def _get_number_flag() -> int:
833 832 """Register and return the NUMBER flag."""
834 833 import doctest
835 834
836 835 return doctest.register_optionflag("NUMBER")
837 836
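The three `_get_*_flag` helpers above rely on `doctest.register_optionflag` being idempotent: re-registering a name returns the already-allocated bit, so the lazy helpers are safe to call repeatedly. A quick sketch (the flag name here is made up to avoid clashing with the plugin's real flags):

```python
import doctest

# registering a new name allocates a fresh single-bit flag
flag_a = doctest.register_optionflag("SKETCH_ALLOW_UNICODE")
# re-registering the same name returns the existing bit, not a new one
flag_b = doctest.register_optionflag("SKETCH_ALLOW_UNICODE")

assert flag_a == flag_b
assert flag_a != 0 and flag_a & (flag_a - 1) == 0  # exactly one bit set
```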
838 837
839 838 def _get_report_choice(key: str) -> int:
840 839 """Return the actual `ipdoctest` module flag value.
841 840
842 841 We want to do it as late as possible to avoid importing `ipdoctest` and all
843 842 its dependencies when parsing options, as it adds overhead and breaks tests.
844 843 """
845 844 import doctest
846 845
847 846 return {
848 847 DOCTEST_REPORT_CHOICE_UDIFF: doctest.REPORT_UDIFF,
849 848 DOCTEST_REPORT_CHOICE_CDIFF: doctest.REPORT_CDIFF,
850 849 DOCTEST_REPORT_CHOICE_NDIFF: doctest.REPORT_NDIFF,
851 850 DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE: doctest.REPORT_ONLY_FIRST_FAILURE,
852 851 DOCTEST_REPORT_CHOICE_NONE: 0,
853 852 }[key]
854 853
855 854
856 855 @pytest.fixture(scope="session")
857 856 def ipdoctest_namespace() -> Dict[str, Any]:
858 857 """Fixture that returns a :py:class:`dict` that will be injected into the
859 858 namespace of ipdoctests."""
860 859 return dict()