Fix typos (#14533)...
M Bussonnier -
r28891:03646ad1 merge

@@ -1,3389 +1,3389
1 1 """Completion for IPython.
2 2
3 3 This module started as fork of the rlcompleter module in the Python standard
4 4 library. The original enhancements made to rlcompleter have been sent
5 5 upstream and were accepted as of Python 2.3,
6 6
7 7 This module now supports a wide variety of completion mechanisms, available both
8 8 for normal classic Python code and as a completer for IPython-specific
9 9 syntax like magics.
10 10
11 11 Latex and Unicode completion
12 12 ============================
13 13
14 14 IPython and compatible frontends can not only complete your code, but also help
15 15 you input a wide range of characters. In particular we allow you to insert
16 16 a unicode character using the tab completion mechanism.
17 17
18 18 Forward latex/unicode completion
19 19 --------------------------------
20 20
21 21 Forward completion allows you to easily type a unicode character using its latex
22 22 name, or unicode long description. To do so, type a backslash followed by the
23 23 relevant name and press tab:
24 24
25 25
26 26 Using latex completion:
27 27
28 28 .. code::
29 29
30 30 \\alpha<tab>
31 31 α
32 32
33 33 or using unicode completion:
34 34
35 35
36 36 .. code::
37 37
38 38 \\GREEK SMALL LETTER ALPHA<tab>
39 39 α
40 40
41 41
42 42 Only valid Python identifiers will complete. Combining characters (like arrows or
43 43 dots) are also available; unlike latex, they need to be put after their
44 44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45 45
46 46 Some browsers are known to display combining characters incorrectly.
47 47
48 48 Backward latex completion
49 49 -------------------------
50 50
51 51 It is sometimes challenging to know how to type a character. If you are using
52 52 IPython or any compatible frontend, you can prepend a backslash to the character
53 53 and press :kbd:`Tab` to expand it to its latex form.
54 54
55 55 .. code::
56 56
57 57 \\α<tab>
58 58 \\alpha
59 59
60 60
61 61 Both forward and backward completions can be deactivated by setting the
62 62 :std:configtrait:`Completer.backslash_combining_completions` option to
63 63 ``False``.
64 64
65 65
66 66 Experimental
67 67 ============
68 68
69 69 Starting with IPython 6.0, this module can make use of the Jedi library to
70 70 generate completions both using static analysis of the code, and dynamically
71 71 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
72 72 library for Python. The APIs attached to this new mechanism are unstable and will
73 73 raise unless used in a :any:`provisionalcompleter` context manager.
74 74
75 75 You will find that the following are experimental:
76 76
77 77 - :any:`provisionalcompleter`
78 78 - :any:`IPCompleter.completions`
79 79 - :any:`Completion`
80 80 - :any:`rectify_completions`
81 81
82 82 .. note::
83 83
84 84 better name for :any:`rectify_completions` ?
85 85
86 86 We welcome any feedback on these new APIs, and we also encourage you to try this
87 87 module in debug mode (start IPython with ``--Completer.debug=True``) in order
88 88 to have extra logging information if :any:`jedi` is crashing, or if the current
89 89 IPython completer's pending deprecations are returning results not yet handled
90 90 by :any:`jedi`.
91 91
92 92 Using Jedi for tab completion allows snippets like the following to work without
93 93 having to execute any code:
94 94
95 95 >>> myvar = ['hello', 42]
96 96 ... myvar[1].bi<tab>
97 97
98 98 Tab completion will be able to infer that ``myvar[1]`` is an integer without
99 99 executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
100 100 option.
101 101
102 102 Be sure to update :any:`jedi` to the latest stable version or to try the
103 103 current development version to get better completions.
104 104
105 105 Matchers
106 106 ========
107 107
108 108 All completion routines are implemented using the unified *Matchers* API.
109 109 The matchers API is provisional and subject to change without notice.
110 110
111 111 The built-in matchers include:
112 112
113 113 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
114 114 - :any:`IPCompleter.magic_matcher`: completions for magics,
115 115 - :any:`IPCompleter.unicode_name_matcher`,
116 116 :any:`IPCompleter.fwd_unicode_matcher`
117 117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
118 118 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
119 119 - :any:`IPCompleter.file_matcher`: paths to files and directories,
120 120 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
121 121 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
122 122 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
123 123 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
124 124 implementation in :any:`InteractiveShell` which uses IPython hooks system
125 125 (`complete_command`) with string dispatch (including regular expressions).
126 126 Unlike other matchers, ``custom_completer_matcher`` will not suppress
127 127 Jedi results, to match behaviour in earlier IPython versions.
128 128
129 129 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
130 130
131 131 Matcher API
132 132 -----------
133 133
134 134 Simplifying some details, the ``Matcher`` interface can be described as
135 135
136 136 .. code-block::
137 137
138 138 MatcherAPIv1 = Callable[[str], list[str]]
139 139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
140 140
141 141 Matcher = MatcherAPIv1 | MatcherAPIv2
142 142
143 143 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
144 144 and remains supported as the simplest way of generating completions. This is also
145 145 currently the only API supported by the IPython hooks system `complete_command`.
146 146
147 147 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
148 148 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
149 149 and requires a literal ``2`` for v2 Matchers.
150 150
151 151 Once the API stabilises, future versions may relax the requirement for specifying
152 152 ``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore,
153 153 please do not rely on the presence of ``matcher_api_version`` for any purpose.
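
As an illustrative sketch only (the ``fruit_matcher`` name and its word list are
invented for this example and are not part of IPython), an API v2 matcher could
look like:

.. code-block::

    @context_matcher()
    def fruit_matcher(context: CompletionContext) -> SimpleMatcherResult:
        fruits = ["apple", "apricot", "banana"]
        matches = [f for f in fruits if f.startswith(context.token)]
        return {"completions": [SimpleCompletion(text=f, type="fruit") for f in matches]}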
154 154
155 155 Suppression of competing matchers
156 156 ---------------------------------
157 157
158 158 By default results from all matchers are combined, in the order determined by
159 159 their priority. Matchers can request to suppress results from subsequent
160 160 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
161 161
162 When multiple matchers simultaneously request surpression, the results from of
162 When multiple matchers simultaneously request suppression, the results of
163 163 the matcher with higher priority will be returned.
164 164
165 165 Sometimes it is desirable to suppress most but not all other matchers;
166 166 this can be achieved by adding a set of identifiers of matchers which
167 167 should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.
168 168
169 169 The suppression behaviour is user-configurable via
170 170 :std:configtrait:`IPCompleter.suppress_competing_matchers`.
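
As a hedged sketch, a matcher that suppresses everything except one other matcher
could be written as follows (the ``exclusive_matcher`` name and the identifier in
``do_not_suppress`` are hypothetical; identifiers default to the matcher's
``__qualname__``):

.. code-block::

    @context_matcher()
    def exclusive_matcher(context: CompletionContext) -> SimpleMatcherResult:
        return {
            "completions": [SimpleCompletion("my_match")],
            "suppress": True,
            "do_not_suppress": {"IPCompleter.some_other_matcher"},
        }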
171 171 """
172 172
173 173
174 174 # Copyright (c) IPython Development Team.
175 175 # Distributed under the terms of the Modified BSD License.
176 176 #
177 177 # Some of this code originated from rlcompleter in the Python standard library
178 178 # Copyright (C) 2001 Python Software Foundation, www.python.org
179 179
180 180 from __future__ import annotations
181 181 import builtins as builtin_mod
182 182 import enum
183 183 import glob
184 184 import inspect
185 185 import itertools
186 186 import keyword
187 187 import os
188 188 import re
189 189 import string
190 190 import sys
191 191 import tokenize
192 192 import time
193 193 import unicodedata
194 194 import uuid
195 195 import warnings
196 196 from ast import literal_eval
197 197 from collections import defaultdict
198 198 from contextlib import contextmanager
199 199 from dataclasses import dataclass
200 200 from functools import cached_property, partial
201 201 from types import SimpleNamespace
202 202 from typing import (
203 203 Iterable,
204 204 Iterator,
205 205 List,
206 206 Tuple,
207 207 Union,
208 208 Any,
209 209 Sequence,
210 210 Dict,
211 211 Optional,
212 212 TYPE_CHECKING,
213 213 Set,
214 214 Sized,
215 215 TypeVar,
216 216 Literal,
217 217 )
218 218
219 219 from IPython.core.guarded_eval import guarded_eval, EvaluationContext
220 220 from IPython.core.error import TryNext
221 221 from IPython.core.inputtransformer2 import ESC_MAGIC
222 222 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
223 223 from IPython.core.oinspect import InspectColors
224 224 from IPython.testing.skipdoctest import skip_doctest
225 225 from IPython.utils import generics
226 226 from IPython.utils.decorators import sphinx_options
227 227 from IPython.utils.dir2 import dir2, get_real_method
228 228 from IPython.utils.docs import GENERATING_DOCUMENTATION
229 229 from IPython.utils.path import ensure_dir_exists
230 230 from IPython.utils.process import arg_split
231 231 from traitlets import (
232 232 Bool,
233 233 Enum,
234 234 Int,
235 235 List as ListTrait,
236 236 Unicode,
237 237 Dict as DictTrait,
238 238 Union as UnionTrait,
239 239 observe,
240 240 )
241 241 from traitlets.config.configurable import Configurable
242 242
243 243 import __main__
244 244
245 245 # skip module doctests
246 246 __skip_doctest__ = True
247 247
248 248
249 249 try:
250 250 import jedi
251 251 jedi.settings.case_insensitive_completion = False
252 252 import jedi.api.helpers
253 253 import jedi.api.classes
254 254 JEDI_INSTALLED = True
255 255 except ImportError:
256 256 JEDI_INSTALLED = False
257 257
258 258
259 259 if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
260 260 from typing import cast
261 261 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
262 262 else:
263 263 from typing import Generic
264 264
265 265 def cast(type_, obj):
266 266 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
267 267 return obj
268 268
269 269 # do not require at runtime
270 270 NotRequired = Tuple # requires Python >=3.11
271 271 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
272 272 Protocol = object # requires Python >=3.8
273 273 TypeAlias = Any # requires Python >=3.10
274 274 TypeGuard = Generic # requires Python >=3.10
275 275 if GENERATING_DOCUMENTATION:
276 276 from typing import TypedDict
277 277
278 278 # -----------------------------------------------------------------------------
279 279 # Globals
280 280 #-----------------------------------------------------------------------------
281 281
282 282 # ranges where we have most of the valid unicode names. We could be more finely
283 283 # grained, but is it worth it for performance? While unicode has characters in the
284 284 # range 0, 0x110000, we seem to have names for about 10% of those (131808 as I
285 285 # write this). With the ranges below we cover them all, with a density of ~67%;
286 286 # the biggest next gap we could consider only adds about 1% density and there are 600
287 287 # gaps that would need hard coding.
288 288 _UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]
289 289
290 290 # Public API
291 291 __all__ = ["Completer", "IPCompleter"]
292 292
293 293 if sys.platform == 'win32':
294 294 PROTECTABLES = ' '
295 295 else:
296 296 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
297 297
298 298 # Protect against returning an enormous number of completions which the frontend
299 299 # may have trouble processing.
300 300 MATCHES_LIMIT = 500
301 301
302 302 # Completion type reported when no type can be inferred.
303 303 _UNKNOWN_TYPE = "<unknown>"
304 304
305 305 # sentinel value to signal lack of a match
306 306 not_found = object()
307 307
308 308 class ProvisionalCompleterWarning(FutureWarning):
309 309 """
310 310 Exception raised by an experimental feature in this module.
311 311
312 312 Wrap code in :any:`provisionalcompleter` context manager if you
313 313 are certain you want to use an unstable feature.
314 314 """
315 315 pass
316 316
317 317 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
318 318
319 319
320 320 @skip_doctest
321 321 @contextmanager
322 322 def provisionalcompleter(action='ignore'):
323 323 """
324 324 This context manager has to be used in any place where unstable completer
325 325 behavior and API may be called.
326 326
327 327 >>> with provisionalcompleter():
328 328 ... completer.do_experimental_things() # works
329 329
330 330 >>> completer.do_experimental_things() # raises.
331 331
332 332 .. note::
333 333
334 334 Unstable
335 335
336 336 By using this context manager you agree that the API in use may change
337 337 without warning, and that you won't complain if it does so.
338 338
339 339 You also understand that, if the API is not to your liking, you should report
340 340 a bug to explain your use case upstream.
341 341
342 342 We'll be happy to get your feedback, feature requests, and improvements on
343 343 any of the unstable APIs!
344 344 """
345 345 with warnings.catch_warnings():
346 346 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
347 347 yield
348 348
349 349
350 350 def has_open_quotes(s):
351 351 """Return whether a string has open quotes.
352 352
353 353 This simply counts whether the number of quote characters of either type in
354 354 the string is odd.
355 355
356 356 Returns
357 357 -------
358 358 If there is an open quote, the quote character is returned. Else, return
359 359 False.
360 360 """
361 361 # We check " first, then ', so complex cases with nested quotes will get
362 362 # the " to take precedence.
363 363 if s.count('"') % 2:
364 364 return '"'
365 365 elif s.count("'") % 2:
366 366 return "'"
367 367 else:
368 368 return False
369 369
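For illustration, a quick sketch of the expected behaviour on two sample strings
(not part of the original module):

    >>> has_open_quotes('print("hello')
    '"'
    >>> has_open_quotes('print("hello")')
    False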
370 370
371 371 def protect_filename(s, protectables=PROTECTABLES):
372 372 """Escape a string to protect certain characters."""
373 373 if set(s) & set(protectables):
374 374 if sys.platform == "win32":
375 375 return '"' + s + '"'
376 376 else:
377 377 return "".join(("\\" + c if c in protectables else c) for c in s)
378 378 else:
379 379 return s
380 380
381 381
382 382 def expand_user(path:str) -> Tuple[str, bool, str]:
383 383 """Expand ``~``-style usernames in strings.
384 384
385 385 This is similar to :func:`os.path.expanduser`, but it computes and returns
386 386 extra information that will be useful if the input was being used in
387 387 computing completions, and you wish to return the completions with the
388 388 original '~' instead of its expanded value.
389 389
390 390 Parameters
391 391 ----------
392 392 path : str
393 393 String to be expanded. If no ~ is present, the output is the same as the
394 394 input.
395 395
396 396 Returns
397 397 -------
398 398 newpath : str
399 399 Result of ~ expansion in the input path.
400 400 tilde_expand : bool
401 401 Whether any expansion was performed or not.
402 402 tilde_val : str
403 403 The value that ~ was replaced with.
404 404 """
405 405 # Default values
406 406 tilde_expand = False
407 407 tilde_val = ''
408 408 newpath = path
409 409
410 410 if path.startswith('~'):
411 411 tilde_expand = True
412 412 rest = len(path)-1
413 413 newpath = os.path.expanduser(path)
414 414 if rest:
415 415 tilde_val = newpath[:-rest]
416 416 else:
417 417 tilde_val = newpath
418 418
419 419 return newpath, tilde_expand, tilde_val
420 420
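An illustrative sketch of the return value, assuming the home directory expands
to ``/home/user``:

    >>> expand_user('~/notes.txt')
    ('/home/user/notes.txt', True, '/home/user')
    >>> expand_user('notes.txt')
    ('notes.txt', False, '')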
421 421
422 422 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
423 423 """Does the opposite of expand_user, with its outputs.
424 424 """
425 425 if tilde_expand:
426 426 return path.replace(tilde_val, '~')
427 427 else:
428 428 return path
429 429
430 430
431 431 def completions_sorting_key(word):
432 432 """key for sorting completions
433 433
434 434 This does several things:
435 435
436 436 - Demote any completions starting with underscores to the end
437 437 - Insert any %magic and %%cellmagic completions in the alphabetical order
438 438 by their name
439 439 """
440 440 prio1, prio2 = 0, 0
441 441
442 442 if word.startswith('__'):
443 443 prio1 = 2
444 444 elif word.startswith('_'):
445 445 prio1 = 1
446 446
447 447 if word.endswith('='):
448 448 prio1 = -1
449 449
450 450 if word.startswith('%%'):
451 451 # If there's another % in there, this is something else, so leave it alone
452 452 if not "%" in word[2:]:
453 453 word = word[2:]
454 454 prio2 = 2
455 455 elif word.startswith('%'):
456 456 if not "%" in word[1:]:
457 457 word = word[1:]
458 458 prio2 = 1
459 459
460 460 return prio1, word, prio2
461 461
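For illustration, a sketch of how this key orders a few invented completions:

    >>> sorted(["_private", "%cd", "%%time", "apple"], key=completions_sorting_key)
    ['apple', '%cd', '%%time', '_private']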
462 462
463 463 class _FakeJediCompletion:
464 464 """
465 465 This is a workaround to communicate to the UI that Jedi has crashed and to
466 466 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
467 467
468 468 Added in IPython 6.0 so should likely be removed for 7.0
469 469
470 470 """
471 471
472 472 def __init__(self, name):
473 473
474 474 self.name = name
475 475 self.complete = name
476 476 self.type = 'crashed'
477 477 self.name_with_symbols = name
478 478 self.signature = ""
479 479 self._origin = "fake"
480 480 self.text = "crashed"
481 481
482 482 def __repr__(self):
483 483 return '<Fake completion object jedi has crashed>'
484 484
485 485
486 486 _JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
487 487
488 488
489 489 class Completion:
490 490 """
491 491 Completion object used and returned by IPython completers.
492 492
493 493 .. warning::
494 494
495 495 Unstable
496 496
497 497 This function is unstable, API may change without warning.
498 498 It will also raise unless used in a proper context manager.
499 499
500 500 This acts as a middle ground :any:`Completion` object between the
501 501 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
502 502 object. While Jedi needs a lot of information about the evaluator and how the
503 503 code should be run/inspected, PromptToolkit (and other frontends) mostly
504 504 need user-facing information.
505 505
506 506 - Which range should be replaced by what.
507 507 - Some metadata (like completion type), or meta information to be displayed to
508 508 the user.
509 509
510 510 For debugging purposes we can also store the origin of the completion (``jedi``,
511 511 ``IPython.python_matches``, ``IPython.magics_matches``...).
512 512 """
513 513
514 514 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
515 515
516 516 def __init__(
517 517 self,
518 518 start: int,
519 519 end: int,
520 520 text: str,
521 521 *,
522 522 type: Optional[str] = None,
523 523 _origin="",
524 524 signature="",
525 525 ) -> None:
526 526 warnings.warn(
527 527 "``Completion`` is a provisional API (as of IPython 6.0). "
528 528 "It may change without warnings. "
529 529 "Use in corresponding context manager.",
530 530 category=ProvisionalCompleterWarning,
531 531 stacklevel=2,
532 532 )
533 533
534 534 self.start = start
535 535 self.end = end
536 536 self.text = text
537 537 self.type = type
538 538 self.signature = signature
539 539 self._origin = _origin
540 540
541 541 def __repr__(self):
542 542 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
543 543 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
544 544
545 545 def __eq__(self, other) -> bool:
546 546 """
547 547 Equality and hash do not hash the type (as some completers may not be
548 548 able to infer the type), but are used to (partially) de-duplicate
549 549 completions.
550 550
551 551 Completely de-duplicating completions is a bit trickier than just
552 552 comparing, as it depends on surrounding text, which Completions are not
553 553 aware of.
554 554 """
555 555 return self.start == other.start and \
556 556 self.end == other.end and \
557 557 self.text == other.text
558 558
559 559 def __hash__(self):
560 560 return hash((self.start, self.end, self.text))
561 561
562 562
563 563 class SimpleCompletion:
564 564 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
565 565
566 566 .. warning::
567 567
568 568 Provisional
569 569
570 570 This class is used to describe the currently supported attributes of
571 571 simple completion items, and any additional implementation details
572 572 should not be relied on. Additional attributes may be included in
573 573 future versions, and the meaning of text disambiguated from the current
574 574 dual meaning of "text to insert" and "text to use as a label".
575 575 """
576 576
577 577 __slots__ = ["text", "type"]
578 578
579 579 def __init__(self, text: str, *, type: Optional[str] = None):
580 580 self.text = text
581 581 self.type = type
582 582
583 583 def __repr__(self):
584 584 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
585 585
586 586
587 587 class _MatcherResultBase(TypedDict):
588 588 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
589 589
590 590 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
591 591 matched_fragment: NotRequired[str]
592 592
593 593 #: Whether to suppress results from all other matchers (True), some
594 594 #: matchers (set of identifiers) or none (False); default is False.
595 595 suppress: NotRequired[Union[bool, Set[str]]]
596 596
597 597 #: Identifiers of matchers which should NOT be suppressed when this matcher
598 598 #: requests to suppress all other matchers; defaults to an empty set.
599 599 do_not_suppress: NotRequired[Set[str]]
600 600
601 601 #: Are completions already ordered and should be left as-is? default is False.
602 602 ordered: NotRequired[bool]
603 603
604 604
605 605 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
606 606 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
607 607 """Result of new-style completion matcher."""
608 608
609 609 # note: TypedDict is added again to the inheritance chain
610 610 # in order to get __orig_bases__ for documentation
611 611
612 612 #: List of candidate completions
613 613 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
614 614
615 615
616 616 class _JediMatcherResult(_MatcherResultBase):
617 617 """Matching result returned by Jedi (will be processed differently)"""
618 618
619 619 #: list of candidate completions
620 620 completions: Iterator[_JediCompletionLike]
621 621
622 622
623 623 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
624 624 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
625 625
626 626
627 627 @dataclass
628 628 class CompletionContext:
629 629 """Completion context provided as an argument to matchers in the Matcher API v2."""
630 630
631 631 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
632 632 # which was not explicitly visible as an argument of the matcher, making any refactor
633 633 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
634 634 # from the completer, and make substituting them in sub-classes easier.
635 635
636 636 #: Relevant fragment of code directly preceding the cursor.
637 637 #: The extraction of token is implemented via splitter heuristic
638 638 #: (following readline behaviour for legacy reasons), which is user configurable
639 639 #: (by switching the greedy mode).
640 640 token: str
641 641
642 642 #: The full available content of the editor or buffer
643 643 full_text: str
644 644
645 645 #: Cursor position in the line (the same for ``full_text`` and ``text``).
646 646 cursor_position: int
647 647
648 648 #: Cursor line in ``full_text``.
649 649 cursor_line: int
650 650
651 651 #: The maximum number of completions that will be used downstream.
652 652 #: Matchers can use this information to abort early.
653 653 #: The built-in Jedi matcher is currently excepted from this limit.
654 654 # If not given, return all possible completions.
655 655 limit: Optional[int]
656 656
657 657 @cached_property
658 658 def text_until_cursor(self) -> str:
659 659 return self.line_with_cursor[: self.cursor_position]
660 660
661 661 @cached_property
662 662 def line_with_cursor(self) -> str:
663 663 return self.full_text.split("\n")[self.cursor_line]
664 664
665 665
666 666 #: Matcher results for API v2.
667 667 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
668 668
669 669
670 670 class _MatcherAPIv1Base(Protocol):
671 671 def __call__(self, text: str) -> List[str]:
672 672 """Call signature."""
673 673 ...
674 674
675 675 #: Used to construct the default matcher identifier
676 676 __qualname__: str
677 677
678 678
679 679 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
680 680 #: API version
681 681 matcher_api_version: Optional[Literal[1]]
682 682
683 683 def __call__(self, text: str) -> List[str]:
684 684 """Call signature."""
685 685 ...
686 686
687 687
688 688 #: Protocol describing Matcher API v1.
689 689 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
690 690
691 691
692 692 class MatcherAPIv2(Protocol):
693 693 """Protocol describing Matcher API v2."""
694 694
695 695 #: API version
696 696 matcher_api_version: Literal[2] = 2
697 697
698 698 def __call__(self, context: CompletionContext) -> MatcherResult:
699 699 """Call signature."""
700 700 ...
701 701
702 702 #: Used to construct the default matcher identifier
703 703 __qualname__: str
704 704
705 705
706 706 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
707 707
708 708
709 709 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
710 710 api_version = _get_matcher_api_version(matcher)
711 711 return api_version == 1
712 712
713 713
714 714 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
715 715 api_version = _get_matcher_api_version(matcher)
716 716 return api_version == 2
717 717
718 718
719 719 def _is_sizable(value: Any) -> TypeGuard[Sized]:
720 720 """Determines whether objects is sizable"""
721 721 return hasattr(value, "__len__")
722 722
723 723
724 724 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
725 725 """Determines whether objects is sizable"""
726 726 return hasattr(value, "__next__")
727 727
728 728
729 729 def has_any_completions(result: MatcherResult) -> bool:
730 730 """Check if any result includes any completions."""
731 731 completions = result["completions"]
732 732 if _is_sizable(completions):
733 733 return len(completions) != 0
734 734 if _is_iterator(completions):
735 735 try:
736 736 old_iterator = completions
737 737 first = next(old_iterator)
738 738 result["completions"] = cast(
739 739 Iterator[SimpleCompletion],
740 740 itertools.chain([first], old_iterator),
741 741 )
742 742 return True
743 743 except StopIteration:
744 744 return False
745 745 raise ValueError(
746 746 "Completions returned by matcher need to be an Iterator or a Sizable"
747 747 )
748 748
749 749
750 750 def completion_matcher(
751 751 *,
752 752 priority: Optional[float] = None,
753 753 identifier: Optional[str] = None,
754 754 api_version: int = 1,
755 755 ):
756 756 """Adds attributes describing the matcher.
757 757
758 758 Parameters
759 759 ----------
760 760 priority : Optional[float]
761 761 The priority of the matcher, determines the order of execution of matchers.
762 762 Higher priority means that the matcher will be executed first. Defaults to 0.
763 763 identifier : Optional[str]
764 764 identifier of the matcher allowing users to modify the behaviour via traitlets,
765 765 and also used for debugging (will be passed as ``origin`` with the completions).
766 766
767 767 Defaults to matcher function's ``__qualname__`` (for example,
768 768 ``IPCompleter.file_matcher`` for the built-in matcher defined
769 769 as a ``file_matcher`` method of the ``IPCompleter`` class).
770 770 api_version: Optional[int]
771 771 version of the Matcher API used by this matcher.
772 772 Currently supported values are 1 and 2.
773 773 Defaults to 1.
774 774 """
775 775
776 776 def wrapper(func: Matcher):
777 777 func.matcher_priority = priority or 0 # type: ignore
778 778 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
779 779 func.matcher_api_version = api_version # type: ignore
780 780 if TYPE_CHECKING:
781 781 if api_version == 1:
782 782 func = cast(MatcherAPIv1, func)
783 783 elif api_version == 2:
784 784 func = cast(MatcherAPIv2, func)
785 785 return func
786 786
787 787 return wrapper
788 788
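A hedged usage sketch of the decorator on a legacy (API v1) matcher; the
``animal_matcher`` name and its word list are invented for illustration:

    @completion_matcher(identifier="animal_matcher", priority=5)
    def animal_matcher(text: str) -> List[str]:
        animals = ["cat", "camel", "dog"]
        return [a for a in animals if a.startswith(text)]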
789 789
790 790 def _get_matcher_priority(matcher: Matcher):
791 791 return getattr(matcher, "matcher_priority", 0)
792 792
793 793
794 794 def _get_matcher_id(matcher: Matcher):
795 795 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
796 796
797 797
798 798 def _get_matcher_api_version(matcher):
799 799 return getattr(matcher, "matcher_api_version", 1)
800 800
801 801
802 802 context_matcher = partial(completion_matcher, api_version=2)
803 803
804 804
805 805 _IC = Iterable[Completion]
806 806
807 807
808 808 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
809 809 """
810 810 Deduplicate a set of completions.
811 811
812 812 .. warning::
813 813
814 814 Unstable
815 815
816 816 This function is unstable, API may change without warning.
817 817
818 818 Parameters
819 819 ----------
820 820 text : str
821 821 text that should be completed.
822 822 completions : Iterator[Completion]
823 823 iterator over the completions to deduplicate
824 824
825 825 Yields
826 826 ------
827 827 `Completions` objects
828 828 Completions coming from multiple sources may be different but end up having
829 829 the same effect when applied to ``text``. If this is the case, this will
830 830 consider completions as equal and only emit the first encountered.
831 831 Not folded in `completions()` yet for debugging purposes, and to detect when
832 832 the IPython completer does return things that Jedi does not, but should be
833 833 at some point.
834 834 """
835 835 completions = list(completions)
836 836 if not completions:
837 837 return
838 838
839 839 new_start = min(c.start for c in completions)
840 840 new_end = max(c.end for c in completions)
841 841
842 842 seen = set()
843 843 for c in completions:
844 844 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
845 845 if new_text not in seen:
846 846 yield c
847 847 seen.add(new_text)
848 848
849 849
850 850 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
851 851 """
852 852 Rectify a set of completions to all have the same ``start`` and ``end``
853 853
854 854 .. warning::
855 855
856 856 Unstable
857 857
858 858 This function is unstable, API may change without warning.
859 859 It will also raise unless used in a proper context manager.
860 860
861 861 Parameters
862 862 ----------
863 863 text : str
864 864 text that should be completed.
865 865 completions : Iterator[Completion]
866 866 iterator over the completions to rectify
867 867 _debug : bool
868 868 Log failed completion
869 869
870 870 Notes
871 871 -----
872 872 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
873 873 the Jupyter Protocol requires them to behave like so. This will readjust
874 874 the completion to have the same ``start`` and ``end`` by padding both
875 875 extremities with surrounding text.
876 876
877 877 During stabilisation this should support a ``_debug`` option to log which
878 878 completions are returned by the IPython completer and not found in Jedi, in
879 879 order to make upstream bug reports.
880 880 """
881 881 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
882 882 "It may change without warnings. "
883 883 "Use in corresponding context manager.",
884 884 category=ProvisionalCompleterWarning, stacklevel=2)
885 885
886 886 completions = list(completions)
887 887 if not completions:
888 888 return
889 889 starts = (c.start for c in completions)
890 890 ends = (c.end for c in completions)
891 891
892 892 new_start = min(starts)
893 893 new_end = max(ends)
894 894
895 895 seen_jedi = set()
896 896 seen_python_matches = set()
897 897 for c in completions:
898 898 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
899 899 if c._origin == 'jedi':
900 900 seen_jedi.add(new_text)
901 901 elif c._origin == "IPCompleter.python_matcher":
902 902 seen_python_matches.add(new_text)
903 903 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
904 904 diff = seen_python_matches.difference(seen_jedi)
905 905 if diff and _debug:
906 906 print('IPython.python matches have extras:', diff)
907 907
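A hedged usage sketch of the provisional API (the ``completer`` instance, assumed
to be an :any:`IPCompleter`, and the sample buffer are not part of this module):

    >>> text = "myvar = ['hello', 42]\nmyvar[1].bi"
    >>> with provisionalcompleter():
    ...     raw = completer.completions(text, len(text))
    ...     aligned = list(rectify_completions(text, raw))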
908 908
909 909 if sys.platform == 'win32':
910 910 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
911 911 else:
912 912 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
913 913
914 914 GREEDY_DELIMS = ' =\r\n'
915 915
916 916
917 917 class CompletionSplitter(object):
918 918 """An object to split an input line in a manner similar to readline.
919 919
920 920 By having our own implementation, we can expose readline-like completion in
921 921 a uniform manner to all frontends. This object only needs to be given the
922 922 line of text to be split and the cursor position on said line, and it
923 923 returns the 'word' to be completed on at the cursor after splitting the
924 924 entire line.
925 925
926 926 What characters are used as splitting delimiters can be controlled by
927 927 setting the ``delims`` attribute (this is a property that internally
928 928 automatically builds the necessary regular expression)"""
929 929
930 930 # Private interface
931 931
932 932 # A string of delimiter characters. The default value makes sense for
933 933 # IPython's most typical usage patterns.
934 934 _delims = DELIMS
935 935
936 936 # The expression (a normal string) to be compiled into a regular expression
937 937 # for actual splitting. We store it as an attribute mostly for ease of
938 938 # debugging, since this type of code can be so tricky to debug.
939 939 _delim_expr = None
940 940
941 941 # The regular expression that does the actual splitting
942 942 _delim_re = None
943 943
944 944 def __init__(self, delims=None):
945 945 delims = CompletionSplitter._delims if delims is None else delims
946 946 self.delims = delims
947 947
948 948 @property
949 949 def delims(self):
950 950 """Return the string of delimiter characters."""
951 951 return self._delims
952 952
953 953 @delims.setter
954 954 def delims(self, delims):
955 955 """Set the delimiters for line splitting."""
956 956 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
957 957 self._delim_re = re.compile(expr)
958 958 self._delims = delims
959 959 self._delim_expr = expr
960 960
961 961 def split_line(self, line, cursor_pos=None):
962 962 """Split a line of text with a cursor at the given position.
963 963 """
964 964 l = line if cursor_pos is None else line[:cursor_pos]
965 965 return self._delim_re.split(l)[-1]
966 966
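A quick sketch of the splitting behaviour on an invented line (the word at the
cursor is returned after splitting on delimiters):

    >>> CompletionSplitter().split_line('print(foo.ba')
    'foo.ba'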
967 967
968 968
969 969 class Completer(Configurable):
970 970
971 971 greedy = Bool(
972 972 False,
973 973 help="""Activate greedy completion.
974 974
975 975 .. deprecated:: 8.8
976 976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977 977
978 978 When enabled in IPython 8.8 or newer, changes configuration as follows:
979 979
980 980 - ``Completer.evaluation = 'unsafe'``
981 981 - ``Completer.auto_close_dict_keys = True``
982 982 """,
983 983 ).tag(config=True)
984 984
985 985 evaluation = Enum(
986 986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 987 default_value="limited",
988 988 help="""Policy for code evaluation under completion.
989 989
990 990 Successive options allow enabling more eager evaluation for better
991 991 completion suggestions, including for nested dictionaries, nested lists,
992 992 or even results of function calls.
993 993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995 995
996 996 Allowed values are:
997 997
998 998 - ``forbidden``: no evaluation of code is permitted,
999 999 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 1000 no item/attribute evaluation, no access to locals/globals,
1001 1001 no evaluation of any operations or comparisons.
1002 1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 1006 - ``unsafe``: evaluation of all methods and function calls but not of
1007 1007 syntax with side-effects like `del x`,
1008 1008 - ``dangerous``: completely arbitrary evaluation.
1009 1009 """,
1010 1010 ).tag(config=True)
1011 1011
1012 1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 1013 help="Experimental: Use Jedi to generate autocompletions. "
1014 1014 "Default to True if jedi is installed.").tag(config=True)
1015 1015
1016 1016 jedi_compute_type_timeout = Int(default_value=400,
1017 1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 1018 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1019 1019 performance by preventing jedi from building its cache.
1020 1020 """).tag(config=True)
1021 1021
1022 1022 debug = Bool(default_value=False,
1023 1023 help='Enable debug for the Completer. Mostly print extra '
1024 1024 'information for experimental jedi integration.')\
1025 1025 .tag(config=True)
1026 1026
1027 1027 backslash_combining_completions = Bool(True,
1028 1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 1029 "Includes completion of latex commands, unicode names, and expanding "
1030 1030 "unicode characters back to latex commands.").tag(config=True)
1031 1031
1032 1032 auto_close_dict_keys = Bool(
1033 1033 False,
1034 1034 help="""
1035 1035 Enable auto-closing dictionary keys.
1036 1036
1037 1037 When enabled, string keys will be suffixed with a final quote
1038 1038 (matching the opening quote), tuple keys will also receive a
1039 1039 separating comma if needed, and keys which are final will
1040 1040 receive a closing bracket (``]``).
1041 1041 """,
1042 1042 ).tag(config=True)
1043 1043
1044 1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 1045 """Create a new completer for the command line.
1046 1046
1047 1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048 1048
1049 1049 If unspecified, the default namespace where completions are performed
1050 1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 1051 given as dictionaries.
1052 1052
1053 1053 An optional second namespace can be given. This allows the completer
1054 1054 to handle cases where both the local and global scopes need to be
1055 1055 distinguished.
1056 1056 """
1057 1057
1058 1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 1059 # specific namespace or to use __main__.__dict__. This will allow us
1060 1060 # to bind to __main__.__dict__ at completion time, not now.
1061 1061 if namespace is None:
1062 1062 self.use_main_ns = True
1063 1063 else:
1064 1064 self.use_main_ns = False
1065 1065 self.namespace = namespace
1066 1066
1067 1067 # The global namespace, if given, can be bound directly
1068 1068 if global_namespace is None:
1069 1069 self.global_namespace = {}
1070 1070 else:
1071 1071 self.global_namespace = global_namespace
1072 1072
1073 1073 self.custom_matchers = []
1074 1074
1075 1075 super(Completer, self).__init__(**kwargs)
1076 1076
1077 1077 def complete(self, text, state):
1078 1078 """Return the next possible completion for 'text'.
1079 1079
1080 1080 This is called successively with state == 0, 1, 2, ... until it
1081 1081 returns None. The completion should begin with 'text'.
1082 1082
1083 1083 """
1084 1084 if self.use_main_ns:
1085 1085 self.namespace = __main__.__dict__
1086 1086
1087 1087 if state == 0:
1088 1088 if "." in text:
1089 1089 self.matches = self.attr_matches(text)
1090 1090 else:
1091 1091 self.matches = self.global_matches(text)
1092 1092 try:
1093 1093 return self.matches[state]
1094 1094 except IndexError:
1095 1095 return None
1096 1096
1097 1097 def global_matches(self, text):
1098 1098 """Compute matches when text is a simple name.
1099 1099
1100 1100 Return a list of all keywords, built-in functions and names currently
1101 1101 defined in self.namespace or self.global_namespace that match.
1102 1102
1103 1103 """
1104 1104 matches = []
1105 1105 match_append = matches.append
1106 1106 n = len(text)
1107 1107 for lst in [
1108 1108 keyword.kwlist,
1109 1109 builtin_mod.__dict__.keys(),
1110 1110 list(self.namespace.keys()),
1111 1111 list(self.global_namespace.keys()),
1112 1112 ]:
1113 1113 for word in lst:
1114 1114 if word[:n] == text and word != "__builtins__":
1115 1115 match_append(word)
1116 1116
1117 1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 1119 shortened = {
1120 1120 "_".join([sub[0] for sub in word.split("_")]): word
1121 1121 for word in lst
1122 1122 if snake_case_re.match(word)
1123 1123 }
1124 1124 for word in shortened.keys():
1125 1125 if word[:n] == text and word != "__builtins__":
1126 1126 match_append(shortened[word])
1127 1127 return matches
1128 1128
1129 1129 def attr_matches(self, text):
1130 1130 """Compute matches when text contains a dot.
1131 1131
1132 1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 1133 evaluatable in self.namespace or self.global_namespace, it will be
1134 1134 evaluated and its attributes (as revealed by dir()) are used as
1135 1135 possible completions. (For class instances, class members are
1136 1136 also considered.)
1137 1137
1138 1138 WARNING: this can still invoke arbitrary C code, if an object
1139 1139 with a __getattr__ hook is evaluated.
1140 1140
1141 1141 """
1142 1142 return self._attr_matches(text)[0]
1143 1143
1144 1144 def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
1145 1145 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1146 1146 if not m2:
1147 1147 return [], ""
1148 1148 expr, attr = m2.group(1, 2)
1149 1149
1150 1150 obj = self._evaluate_expr(expr)
1151 1151
1152 1152 if obj is not_found:
1153 1153 return [], ""
1154 1154
1155 1155 if self.limit_to__all__ and hasattr(obj, '__all__'):
1156 1156 words = get__all__entries(obj)
1157 1157 else:
1158 1158 words = dir2(obj)
1159 1159
1160 1160 try:
1161 1161 words = generics.complete_object(obj, words)
1162 1162 except TryNext:
1163 1163 pass
1164 1164 except AssertionError:
1165 1165 raise
1166 1166 except Exception:
1167 1167 # Silence errors from completion function
1168 1168 pass
1169 1169 # Build match list to return
1170 1170 n = len(attr)
1171 1171
1172 1172 # Note: ideally we would just return words here and the prefix
1173 1173 # reconciliator would know that we intend to append to rather than
1174 1174 # replace the input text; this requires refactoring to return range
1175 1175 # which ought to be replaced (as does jedi).
1176 1176 if include_prefix:
1177 1177 tokens = _parse_tokens(expr)
1178 1178 rev_tokens = reversed(tokens)
1179 1179 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1180 1180 name_turn = True
1181 1181
1182 1182 parts = []
1183 1183 for token in rev_tokens:
1184 1184 if token.type in skip_over:
1185 1185 continue
1186 1186 if token.type == tokenize.NAME and name_turn:
1187 1187 parts.append(token.string)
1188 1188 name_turn = False
1189 1189 elif (
1190 1190 token.type == tokenize.OP and token.string == "." and not name_turn
1191 1191 ):
1192 1192 parts.append(token.string)
1193 1193 name_turn = True
1194 1194 else:
1195 1195 # short-circuit if not empty nor name token
1196 1196 break
1197 1197
1198 1198 prefix_after_space = "".join(reversed(parts))
1199 1199 else:
1200 1200 prefix_after_space = ""
1201 1201
1202 1202 return (
1203 1203 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1204 1204 "." + attr,
1205 1205 )
1206 1206
1207 1207 def _evaluate_expr(self, expr):
1208 1208 obj = not_found
1209 1209 done = False
1210 1210 while not done and expr:
1211 1211 try:
1212 1212 obj = guarded_eval(
1213 1213 expr,
1214 1214 EvaluationContext(
1215 1215 globals=self.global_namespace,
1216 1216 locals=self.namespace,
1217 1217 evaluation=self.evaluation,
1218 1218 ),
1219 1219 )
1220 1220 done = True
1221 1221 except Exception as e:
1222 1222 if self.debug:
1223 1223 print("Evaluation exception", e)
1224 1224 # trim the expression to remove any invalid prefix
1225 1225 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1226 1226 # where parenthesis is not closed.
1227 1227 # TODO: make this faster by reusing parts of the computation?
1228 1228 expr = expr[1:]
1229 1229 return obj
1230 1230
1231 1231 def get__all__entries(obj):
1232 1232 """returns the strings in the __all__ attribute"""
1233 1233 try:
1234 1234 words = getattr(obj, '__all__')
1235 1235 except:
1236 1236 return []
1237 1237
1238 1238 return [w for w in words if isinstance(w, str)]
1239 1239
1240 1240
1241 1241 class _DictKeyState(enum.Flag):
1242 1242 """Represent state of the key match in context of other possible matches.
1243 1243
1244 1244 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1245 1245 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1246 1246 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1247 1247 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | IN_TUPLE}`
1248 1248 """
1249 1249
1250 1250 BASELINE = 0
1251 1251 END_OF_ITEM = enum.auto()
1252 1252 END_OF_TUPLE = enum.auto()
1253 1253 IN_TUPLE = enum.auto()
1254 1254
1255 1255
1256 1256 def _parse_tokens(c):
1257 1257 """Parse tokens even if there is an error."""
1258 1258 tokens = []
1259 1259 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1260 1260 while True:
1261 1261 try:
1262 1262 tokens.append(next(token_generator))
1263 1263 except tokenize.TokenError:
1264 1264 return tokens
1265 1265 except StopIteration:
1266 1266 return tokens
1267 1267
1268 1268
1269 1269 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1270 1270 """Match any valid Python numeric literal in a prefix of dictionary keys.
1271 1271
1272 1272 References:
1273 1273 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1274 1274 - https://docs.python.org/3/library/tokenize.html
1275 1275 """
1276 1276 if prefix[-1].isspace():
1277 1277 # if user typed a space we do not have anything to complete
1278 1278 # even if there was a valid number token before
1279 1279 return None
1280 1280 tokens = _parse_tokens(prefix)
1281 1281 rev_tokens = reversed(tokens)
1282 1282 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1283 1283 number = None
1284 1284 for token in rev_tokens:
1285 1285 if token.type in skip_over:
1286 1286 continue
1287 1287 if number is None:
1288 1288 if token.type == tokenize.NUMBER:
1289 1289 number = token.string
1290 1290 continue
1291 1291 else:
1292 1292 # we did not match a number
1293 1293 return None
1294 1294 if token.type == tokenize.OP:
1295 1295 if token.string == ",":
1296 1296 break
1297 1297 if token.string in {"+", "-"}:
1298 1298 number = token.string + number
1299 1299 else:
1300 1300 return None
1301 1301 return number
1302 1302
1303 1303
1304 1304 _INT_FORMATS = {
1305 1305 "0b": bin,
1306 1306 "0o": oct,
1307 1307 "0x": hex,
1308 1308 }
1309 1309
1310 1310
1311 1311 def match_dict_keys(
1312 1312 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1313 1313 prefix: str,
1314 1314 delims: str,
1315 1315 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1316 1316 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1317 1317 """Used by dict_key_matches, matching the prefix to a list of keys
1318 1318
1319 1319 Parameters
1320 1320 ----------
1321 1321 keys
1322 1322 list of keys in dictionary currently being completed.
1323 1323 prefix
1324 1324 Part of the text already typed by the user. E.g. `mydict[b'fo`
1325 1325 delims
1326 1326 String of delimiters to consider when finding the current key.
1327 1327 extra_prefix : optional
1328 1328 Part of the text already typed in multi-key index cases. E.g. for
1329 1329 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1330 1330
1331 1331 Returns
1332 1332 -------
1333 1333 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1334 1334 ``quote`` being the quote that needs to be used to close the current string,
1335 1335 ``token_start`` the position where the replacement should start occurring,
1336 1336 ``matched`` a dictionary of replacement/completion strings as keys, with values
1337 1337 indicating the match state.
1338 1338 """
1339 1339 prefix_tuple = extra_prefix if extra_prefix else ()
1340 1340
1341 1341 prefix_tuple_size = sum(
1342 1342 [
1343 1343 # for pandas, do not count slices as taking space
1344 1344 not isinstance(k, slice)
1345 1345 for k in prefix_tuple
1346 1346 ]
1347 1347 )
1348 1348 text_serializable_types = (str, bytes, int, float, slice)
1349 1349
1350 1350 def filter_prefix_tuple(key):
1351 1351 # Reject too short keys
1352 1352 if len(key) <= prefix_tuple_size:
1353 1353 return False
1354 1354 # Reject keys which cannot be serialised to text
1355 1355 for k in key:
1356 1356 if not isinstance(k, text_serializable_types):
1357 1357 return False
1358 1358 # Reject keys that do not match the prefix
1359 1359 for k, pt in zip(key, prefix_tuple):
1360 1360 if k != pt and not isinstance(pt, slice):
1361 1361 return False
1362 1362 # All checks passed!
1363 1363 return True
1364 1364
1365 1365 filtered_key_is_final: Dict[
1366 1366 Union[str, bytes, int, float], _DictKeyState
1367 1367 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1368 1368
1369 1369 for k in keys:
1370 1370 # If at least one of the matches is not final, mark as undetermined.
1371 1371 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1372 1372 # `111` appears final on first match but is not final on the second.
1373 1373
1374 1374 if isinstance(k, tuple):
1375 1375 if filter_prefix_tuple(k):
1376 1376 key_fragment = k[prefix_tuple_size]
1377 1377 filtered_key_is_final[key_fragment] |= (
1378 1378 _DictKeyState.END_OF_TUPLE
1379 1379 if len(k) == prefix_tuple_size + 1
1380 1380 else _DictKeyState.IN_TUPLE
1381 1381 )
1382 1382 elif prefix_tuple_size > 0:
1383 1383 # we are completing a tuple but this key is not a tuple,
1384 1384 # so we should ignore it
1385 1385 pass
1386 1386 else:
1387 1387 if isinstance(k, text_serializable_types):
1388 1388 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1389 1389
1390 1390 filtered_keys = filtered_key_is_final.keys()
1391 1391
1392 1392 if not prefix:
1393 1393 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1394 1394
1395 1395 quote_match = re.search("(?:\"|')", prefix)
1396 1396 is_user_prefix_numeric = False
1397 1397
1398 1398 if quote_match:
1399 1399 quote = quote_match.group()
1400 1400 valid_prefix = prefix + quote
1401 1401 try:
1402 1402 prefix_str = literal_eval(valid_prefix)
1403 1403 except Exception:
1404 1404 return "", 0, {}
1405 1405 else:
1406 1406 # If it does not look like a string, let's assume
1407 1407 # we are dealing with a number or variable.
1408 1408 number_match = _match_number_in_dict_key_prefix(prefix)
1409 1409
1410 1410 # We do not want the key matcher to suggest variable names so we yield:
1411 1411 if number_match is None:
1412 1412 # The alternative would be to assume that user forgot the quote
1413 1413 # and if the substring matches, suggest adding it at the start.
1414 1414 return "", 0, {}
1415 1415
1416 1416 prefix_str = number_match
1417 1417 is_user_prefix_numeric = True
1418 1418 quote = ""
1419 1419
1420 1420 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1421 1421 token_match = re.search(pattern, prefix, re.UNICODE)
1422 1422 assert token_match is not None # silence mypy
1423 1423 token_start = token_match.start()
1424 1424 token_prefix = token_match.group()
1425 1425
1426 1426 matched: Dict[str, _DictKeyState] = {}
1427 1427
1428 1428 str_key: Union[str, bytes]
1429 1429
1430 1430 for key in filtered_keys:
1431 1431 if isinstance(key, (int, float)):
1432 1432 # User typed a number but this key is not a number.
1433 1433 if not is_user_prefix_numeric:
1434 1434 continue
1435 1435 str_key = str(key)
1436 1436 if isinstance(key, int):
1437 1437 int_base = prefix_str[:2].lower()
1438 1438 # if user typed integer using binary/oct/hex notation:
1439 1439 if int_base in _INT_FORMATS:
1440 1440 int_format = _INT_FORMATS[int_base]
1441 1441 str_key = int_format(key)
1442 1442 else:
1443 1443 # User typed a string but this key is a number.
1444 1444 if is_user_prefix_numeric:
1445 1445 continue
1446 1446 str_key = key
1447 1447 try:
1448 1448 if not str_key.startswith(prefix_str):
1449 1449 continue
1450 1450 except (AttributeError, TypeError, UnicodeError) as e:
1451 1451 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1452 1452 continue
1453 1453
1454 1454 # reformat remainder of key to begin with prefix
1455 1455 rem = str_key[len(prefix_str) :]
1456 1456 # force repr wrapped in '
1457 1457 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1458 1458 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1459 1459 if quote == '"':
1460 1460 # The entered prefix is quoted with ",
1461 1461 # but the match is quoted with '.
1462 1462 # A contained " hence needs escaping for comparison:
1463 1463 rem_repr = rem_repr.replace('"', '\\"')
1464 1464
1465 1465 # then reinsert prefix from start of token
1466 1466 match = "%s%s" % (token_prefix, rem_repr)
1467 1467
1468 1468 matched[match] = filtered_key_is_final[key]
1469 1469 return quote, token_start, matched
1470 1470
1471 1471
1472 1472 def cursor_to_position(text:str, line:int, column:int)->int:
1473 1473 """
1474 1474 Convert the (line,column) position of the cursor in text to an offset in a
1475 1475 string.
1476 1476
1477 1477 Parameters
1478 1478 ----------
1479 1479 text : str
1480 1480 The text in which to calculate the cursor offset
1481 1481 line : int
1482 1482 Line of the cursor; 0-indexed
1483 1483 column : int
1484 1484 Column of the cursor 0-indexed
1485 1485
1486 1486 Returns
1487 1487 -------
1488 1488 Position of the cursor in ``text``, 0-indexed.
1489 1489
1490 1490 See Also
1491 1491 --------
1492 1492 position_to_cursor : reciprocal of this function
1493 1493
1494 1494 """
1495 1495 lines = text.split('\n')
1496 1496 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1497 1497
1498 1498 return sum(len(l) + 1 for l in lines[:line]) + column
1499 1499
1500 1500 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1501 1501 """
1502 1502 Convert the position of the cursor in text (0 indexed) to a line
1503 1503 number(0-indexed) and a column number (0-indexed) pair
1504 1504
1505 1505 Position should be a valid position in ``text``.
1506 1506
1507 1507 Parameters
1508 1508 ----------
1509 1509 text : str
1510 1510 The text in which to calculate the cursor offset
1511 1511 offset : int
1512 1512 Position of the cursor in ``text``, 0-indexed.
1513 1513
1514 1514 Returns
1515 1515 -------
1516 1516 (line, column) : (int, int)
1517 1517 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1518 1518
1519 1519 See Also
1520 1520 --------
1521 1521 cursor_to_position : reciprocal of this function
1522 1522
1523 1523 """
1524 1524
1525 1525 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1526 1526
1527 1527 before = text[:offset]
1528 1528 blines = before.split('\n') # ! splitlines trims trailing \n
1529 1529 line = before.count('\n')
1530 1530 col = len(blines[-1])
1531 1531 return line, col
1532 1532
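An illustrative round trip between the two reciprocal helpers, using an invented
two-line text:

    >>> cursor_to_position('abc\ndef', 1, 2)
    6
    >>> position_to_cursor('abc\ndef', 6)
    (1, 2)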
1533 1533
1534 1534 def _safe_isinstance(obj, module, class_name, *attrs):
1535 1535 """Checks if obj is an instance of module.class_name if loaded
1536 1536 """
1537 1537 if module in sys.modules:
1538 1538 m = sys.modules[module]
1539 1539 for attr in [class_name, *attrs]:
1540 1540 m = getattr(m, attr)
1541 1541 return isinstance(obj, m)
1542 1542
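# Illustrative sketch of how ``_safe_isinstance`` is meant to be used: it only
# performs the isinstance check when the module is already imported, so the
# completer never triggers a potentially expensive import itself. The calls
# below mirror the real call sites further down in this file:
#
#     _safe_isinstance(obj, "pandas", "DataFrame")
#     _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer")
#     _safe_isinstance(obj, "numpy", "ndarray")
#
# If the module is not in ``sys.modules`` the function implicitly returns None,
# which is falsy, so callers can treat the result like a plain ``isinstance``.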
1543 1543
1544 1544 @context_matcher()
1545 1545 def back_unicode_name_matcher(context: CompletionContext):
1546 1546 """Match Unicode characters back to Unicode name
1547 1547
1548 1548 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1549 1549 """
1550 1550 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1551 1551 return _convert_matcher_v1_result_to_v2(
1552 1552 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1553 1553 )
1554 1554
1555 1555
1556 1556 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1557 1557 """Match Unicode characters back to Unicode name
1558 1558
1559 1559 This does ``☃`` -> ``\\snowman``
1560 1560
1561 1561 Note that snowman is not a valid python3 combining character, but it will still be expanded.
1562 1562 It will, however, not be recombined back into the snowman character by the completion machinery.
1563 1563
1564 1564 Nor will this back-complete standard escape sequences like \\n, \\b ...
1565 1565
1566 1566 .. deprecated:: 8.6
1567 1567 You can use :meth:`back_unicode_name_matcher` instead.
1568 1568
1569 1569 Returns
1570 1570 -------
1571 1571
1572 1572 Return a tuple with two elements:
1573 1573
1574 1574 - The Unicode character that was matched (preceded with a backslash), or
1575 1575 empty string,
1576 1576 - a sequence of one element: the name of the matched Unicode character,
1577 1577 preceded by a backslash, or an empty sequence if there was no match.
1578 1578 """
1579 1579 if len(text)<2:
1580 1580 return '', ()
1581 1581 maybe_slash = text[-2]
1582 1582 if maybe_slash != '\\':
1583 1583 return '', ()
1584 1584
1585 1585 char = text[-1]
1586 1586 # do not expand quotes, to avoid interfering with completion in strings,
1587 1587 # and do not back-complete standard ascii letters
1588 1588 if char in string.ascii_letters or char in ('"',"'"):
1589 1589 return '', ()
1590 1590 try :
1591 1591 unic = unicodedata.name(char)
1592 1592 return '\\'+char,('\\'+unic,)
1593 1593 except KeyError:
1594 1594 pass
1595 1595 return '', ()
1596 1596
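# Illustrative usage sketch (assuming ``☃`` resolves to the Unicode name
# SNOWMAN via ``unicodedata.name``):
#
#     >>> back_unicode_name_matches("snow \\☃")
#     ('\\☃', ('\\SNOWMAN',))
#     >>> back_unicode_name_matches("no backslash ☃")
#     ('', ())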
1597 1597
1598 1598 @context_matcher()
1599 1599 def back_latex_name_matcher(context: CompletionContext):
1600 1600 """Match latex characters back to unicode name
1601 1601
1602 1602 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1603 1603 """
1604 1604 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1605 1605 return _convert_matcher_v1_result_to_v2(
1606 1606 matches, type="latex", fragment=fragment, suppress_if_matches=True
1607 1607 )
1608 1608
1609 1609
1610 1610 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1611 1611 """Match latex characters back to unicode name
1612 1612
1613 1613 This does ``\\ℵ`` -> ``\\aleph``
1614 1614
1615 1615 .. deprecated:: 8.6
1616 1616 You can use :meth:`back_latex_name_matcher` instead.
1617 1617 """
1618 1618 if len(text)<2:
1619 1619 return '', ()
1620 1620 maybe_slash = text[-2]
1621 1621 if maybe_slash != '\\':
1622 1622 return '', ()
1623 1623
1624 1624
1625 1625 char = text[-1]
1626 1626 # do not expand quotes, to avoid interfering with completion in strings,
1627 1627 # and do not back-complete standard ascii letters
1628 1628 if char in string.ascii_letters or char in ('"',"'"):
1629 1629 return '', ()
1630 1630 try :
1631 1631 latex = reverse_latex_symbol[char]
1632 1632 # the returned fragment '\\'+char replaces the backslash as well
1633 1633 return '\\'+char,[latex]
1634 1634 except KeyError:
1635 1635 pass
1636 1636 return '', ()
1637 1637
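# Illustrative usage sketch (assuming ``ℵ`` is present in
# ``reverse_latex_symbol`` and maps to ``\\aleph``, as the docstring suggests):
#
#     >>> back_latex_name_matches("x = \\ℵ")
#     ('\\ℵ', ['\\aleph'])
#     >>> back_latex_name_matches("no backslash ℵ")
#     ('', ())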
1638 1638
1639 1639 def _formatparamchildren(parameter) -> str:
1640 1640 """
1641 1641 Get parameter name and value from Jedi Private API
1642 1642
1643 1643 Jedi does not expose a simple way to get `param=value` from its API.
1644 1644
1645 1645 Parameters
1646 1646 ----------
1647 1647 parameter
1648 1648 Jedi's function `Param`
1649 1649
1650 1650 Returns
1651 1651 -------
1652 1652 A string like 'a', 'b=1', '*args', '**kwargs'
1653 1653
1654 1654 """
1655 1655 description = parameter.description
1656 1656 if not description.startswith('param '):
1657 1657 raise ValueError('Jedi function parameter description has changed format. '
1658 1658 'Expected "param ...", found %r.' % description)
1659 1659 return description[6:]
1660 1660
1661 1661 def _make_signature(completion)-> str:
1662 1662 """
1663 1663 Make the signature from a jedi completion
1664 1664
1665 1665 Parameters
1666 1666 ----------
1667 1667 completion : jedi.Completion
1668 1668 the Jedi completion object; if it does not resolve to a callable, a placeholder signature ``(?)`` is returned
1669 1669
1670 1670 Returns
1671 1671 -------
1672 1672 a string consisting of the function signature, with the parentheses but
1673 1673 without the function name, for example:
1674 1674 `(a, *args, b=1, **kwargs)`
1675 1675
1676 1676 """
1677 1677
1678 1678 # it looks like this might work on jedi 0.17
1679 1679 if hasattr(completion, 'get_signatures'):
1680 1680 signatures = completion.get_signatures()
1681 1681 if not signatures:
1682 1682 return '(?)'
1683 1683
1684 1684 c0 = completion.get_signatures()[0]
1685 1685 return '('+c0.to_string().split('(', maxsplit=1)[1]
1686 1686
1687 1687 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1688 1688 for p in signature.defined_names()) if f])
1689 1689
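# Illustrative, hedged sketch: given a Jedi completion that resolves to a
# callable, ``_make_signature`` returns only the parenthesised parameter list.
# The snippet assumes a recent Jedi exposing ``Interpreter(...).complete()``:
#
#     import jedi
#     comp = jedi.Interpreter("divmod", [{}]).complete()[0]
#     _make_signature(comp)   # e.g. '(x, y, /)' -- exact text depends on Jedi/Python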
1690 1690
1691 1691 _CompleteResult = Dict[str, MatcherResult]
1692 1692
1693 1693
1694 1694 DICT_MATCHER_REGEX = re.compile(
1695 1695 r"""(?x)
1696 1696 ( # match dict-referring - or any get item object - expression
1697 1697 .+
1698 1698 )
1699 1699 \[ # open bracket
1700 1700 \s* # and optional whitespace
1701 1701 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1702 1702 # and slices
1703 1703 ((?:(?:
1704 1704 (?: # closed string
1705 1705 [uUbB]? # string prefix (r not handled)
1706 1706 (?:
1707 1707 '(?:[^']|(?<!\\)\\')*'
1708 1708 |
1709 1709 "(?:[^"]|(?<!\\)\\")*"
1710 1710 )
1711 1711 )
1712 1712 |
1713 1713 # capture integers and slices
1714 1714 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1715 1715 |
1716 1716 # integer in bin/hex/oct notation
1717 1717 0[bBxXoO]_?(?:\w|\d)+
1718 1718 )
1719 1719 \s*,\s*
1720 1720 )*)
1721 1721 ((?:
1722 1722 (?: # unclosed string
1723 1723 [uUbB]? # string prefix (r not handled)
1724 1724 (?:
1725 1725 '(?:[^']|(?<!\\)\\')*
1726 1726 |
1727 1727 "(?:[^"]|(?<!\\)\\")*
1728 1728 )
1729 1729 )
1730 1730 |
1731 1731 # unfinished integer
1732 1732 (?:[-+]?\d+)
1733 1733 |
1734 1734 # integer in bin/hex/oct notation
1735 1735 0[bBxXoO]_?(?:\w|\d)+
1736 1736 )
1737 1737 )?
1738 1738 $
1739 1739 """
1740 1740 )
1741 1741
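# Illustrative sketch of what the three capture groups hold (hedged; group
# contents shown approximately, for a buffer ending in an unfinished tuple key):
#
#     m = DICT_MATCHER_REGEX.search("data['population', 'fem")
#     m.group(1)   # the subscripted expression, here "data"
#     m.group(2)   # keys already closed inside the subscript, roughly "'population', "
#     m.group(3)   # the unfinished key prefix, here "'fem"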
1742 1742
1743 1743 def _convert_matcher_v1_result_to_v2(
1744 1744 matches: Sequence[str],
1745 1745 type: str,
1746 1746 fragment: Optional[str] = None,
1747 1747 suppress_if_matches: bool = False,
1748 1748 ) -> SimpleMatcherResult:
1749 1749 """Utility to help with transition"""
1750 1750 result = {
1751 1751 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1752 1752 "suppress": (True if matches else False) if suppress_if_matches else False,
1753 1753 }
1754 1754 if fragment is not None:
1755 1755 result["matched_fragment"] = fragment
1756 1756 return cast(SimpleMatcherResult, result)
1757 1757
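# Illustrative usage sketch (hedged): wrapping a legacy list-of-strings result
# in the v2 ``SimpleMatcherResult`` shape used by the new Matcher API:
#
#     res = _convert_matcher_v1_result_to_v2(
#         ["%%timeit", "%%time"], type="magic", fragment="%%ti", suppress_if_matches=True
#     )
#     # res["completions"] is a list of SimpleCompletion objects,
#     # res["suppress"] is True because at least one match was found,
#     # and res["matched_fragment"] is "%%ti".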
1758 1758
1759 1759 class IPCompleter(Completer):
1760 1760 """Extension of the completer class with IPython-specific features"""
1761 1761
1762 1762 @observe('greedy')
1763 1763 def _greedy_changed(self, change):
1764 1764 """update the splitter and readline delims when greedy is changed"""
1765 1765 if change["new"]:
1766 1766 self.evaluation = "unsafe"
1767 1767 self.auto_close_dict_keys = True
1768 1768 self.splitter.delims = GREEDY_DELIMS
1769 1769 else:
1770 1770 self.evaluation = "limited"
1771 1771 self.auto_close_dict_keys = False
1772 1772 self.splitter.delims = DELIMS
1773 1773
1774 1774 dict_keys_only = Bool(
1775 1775 False,
1776 1776 help="""
1777 1777 Whether to show dict key matches only.
1778 1778
1779 1779 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1780 1780 """,
1781 1781 )
1782 1782
1783 1783 suppress_competing_matchers = UnionTrait(
1784 1784 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1785 1785 default_value=None,
1786 1786 help="""
1787 1787 Whether to suppress completions from other *Matchers*.
1788 1788
1789 1789 When set to ``None`` (default) the matchers will attempt to auto-detect
1790 1790 whether suppression of other matchers is desirable. For example, when
1791 1791 a line begins with ``%`` we expect a magic completion
1792 1792 to be the only applicable option, and after ``my_dict['`` we usually
1793 1793 expect a completion with an existing dictionary key.
1794 1794
1795 1795 If you want to disable this heuristic and see completions from all matchers,
1796 1796 set ``IPCompleter.suppress_competing_matchers = False``.
1797 1797 To disable the heuristic for specific matchers provide a dictionary mapping:
1798 1798 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1799 1799
1800 1800 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1801 1801 completions to the set of matchers with the highest priority;
1802 1802 this is equivalent to ``IPCompleter.merge_completions`` and
1803 1803 can be beneficial for performance, but will sometimes omit relevant
1804 1804 candidates from matchers further down the priority list.
1805 1805 """,
1806 1806 ).tag(config=True)
1807 1807
1808 1808 merge_completions = Bool(
1809 1809 True,
1810 1810 help="""Whether to merge completion results into a single list
1811 1811
1812 1812 If False, only the completion results from the first non-empty
1813 1813 completer will be returned.
1814 1814
1815 1815 As of version 8.6.0, setting the value to ``False`` is an alias for:
1816 1816 ``IPCompleter.suppress_competing_matchers = True``.
1817 1817 """,
1818 1818 ).tag(config=True)
1819 1819
1820 1820 disable_matchers = ListTrait(
1821 1821 Unicode(),
1822 1822 help="""List of matchers to disable.
1823 1823
1824 1824 The list should contain matcher identifiers (see :any:`completion_matcher`).
1825 1825 """,
1826 1826 ).tag(config=True)
1827 1827
1828 1828 omit__names = Enum(
1829 1829 (0, 1, 2),
1830 1830 default_value=2,
1831 1831 help="""Instruct the completer to omit private method names
1832 1832
1833 1833 Specifically, when completing on ``object.<tab>``.
1834 1834
1835 1835 When 2 [default]: all names that start with '_' will be excluded.
1836 1836
1837 1837 When 1: all 'magic' names (``__foo__``) will be excluded.
1838 1838
1839 1839 When 0: nothing will be excluded.
1840 1840 """
1841 1841 ).tag(config=True)
1842 1842 limit_to__all__ = Bool(False,
1843 1843 help="""
1844 1844 DEPRECATED as of version 5.0.
1845 1845
1846 1846 Instruct the completer to use __all__ for the completion
1847 1847
1848 1848 Specifically, when completing on ``object.<tab>``.
1849 1849
1850 1850 When True: only those names in obj.__all__ will be included.
1851 1851
1852 1852 When False [default]: the __all__ attribute is ignored
1853 1853 """,
1854 1854 ).tag(config=True)
1855 1855
1856 1856 profile_completions = Bool(
1857 1857 default_value=False,
1858 1858 help="If True, emit profiling data for completion subsystem using cProfile."
1859 1859 ).tag(config=True)
1860 1860
1861 1861 profiler_output_dir = Unicode(
1862 1862 default_value=".completion_profiles",
1863 1863 help="Template for path at which to output profile data for completions."
1864 1864 ).tag(config=True)
1865 1865
1866 1866 @observe('limit_to__all__')
1867 1867 def _limit_to_all_changed(self, change):
1868 1868 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1869 1869 'value has been deprecated since IPython 5.0, will be made to have '
1870 1870 'no effect and then removed in a future version of IPython.',
1871 1871 UserWarning)
1872 1872
1873 1873 def __init__(
1874 1874 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1875 1875 ):
1876 1876 """IPCompleter() -> completer
1877 1877
1878 1878 Return a completer object.
1879 1879
1880 1880 Parameters
1881 1881 ----------
1882 1882 shell
1883 1883 a pointer to the ipython shell itself. This is needed
1884 1884 because this completer knows about magic functions, and those can
1885 1885 only be accessed via the ipython instance.
1886 1886 namespace : dict, optional
1887 1887 an optional dict where completions are performed.
1888 1888 global_namespace : dict, optional
1889 1889 secondary optional dict for completions, to
1890 1890 handle cases (such as IPython embedded inside functions) where
1891 1891 both Python scopes are visible.
1892 1892 config : Config
1893 1893 traitlet's config object
1894 1894 **kwargs
1895 1895 passed to super class unmodified.
1896 1896 """
1897 1897
1898 1898 self.magic_escape = ESC_MAGIC
1899 1899 self.splitter = CompletionSplitter()
1900 1900
1901 1901 # _greedy_changed() depends on splitter and readline being defined:
1902 1902 super().__init__(
1903 1903 namespace=namespace,
1904 1904 global_namespace=global_namespace,
1905 1905 config=config,
1906 1906 **kwargs,
1907 1907 )
1908 1908
1909 1909 # List where completion matches will be stored
1910 1910 self.matches = []
1911 1911 self.shell = shell
1912 1912 # Regexp to split filenames with spaces in them
1913 1913 self.space_name_re = re.compile(r'([^\\] )')
1914 1914 # Hold a local ref. to glob.glob for speed
1915 1915 self.glob = glob.glob
1916 1916
1917 1917 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1918 1918 # buffers, to avoid completion problems.
1919 1919 term = os.environ.get('TERM','xterm')
1920 1920 self.dumb_terminal = term in ['dumb','emacs']
1921 1921
1922 1922 # Special handling of backslashes needed in win32 platforms
1923 1923 if sys.platform == "win32":
1924 1924 self.clean_glob = self._clean_glob_win32
1925 1925 else:
1926 1926 self.clean_glob = self._clean_glob
1927 1927
1928 1928 #regexp to parse docstring for function signature
1929 1929 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1930 1930 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1931 1931 #use this if positional argument name is also needed
1932 1932 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1933 1933
1934 1934 self.magic_arg_matchers = [
1935 1935 self.magic_config_matcher,
1936 1936 self.magic_color_matcher,
1937 1937 ]
1938 1938
1939 1939 # This is set externally by InteractiveShell
1940 1940 self.custom_completers = None
1941 1941
1942 1942 # This is a list of names of unicode characters that can be completed
1943 1943 # into their corresponding unicode value. The list is large, so we
1944 1944 # lazily initialize it on first use. Consuming code should access this
1945 1945 # attribute through the `@unicode_names` property.
1946 1946 self._unicode_names = None
1947 1947
1948 1948 self._backslash_combining_matchers = [
1949 1949 self.latex_name_matcher,
1950 1950 self.unicode_name_matcher,
1951 1951 back_latex_name_matcher,
1952 1952 back_unicode_name_matcher,
1953 1953 self.fwd_unicode_matcher,
1954 1954 ]
1955 1955
1956 1956 if not self.backslash_combining_completions:
1957 1957 for matcher in self._backslash_combining_matchers:
1958 1958 self.disable_matchers.append(_get_matcher_id(matcher))
1959 1959
1960 1960 if not self.merge_completions:
1961 1961 self.suppress_competing_matchers = True
1962 1962
1963 1963 @property
1964 1964 def matchers(self) -> List[Matcher]:
1965 1965 """All active matcher routines for completion"""
1966 1966 if self.dict_keys_only:
1967 1967 return [self.dict_key_matcher]
1968 1968
1969 1969 if self.use_jedi:
1970 1970 return [
1971 1971 *self.custom_matchers,
1972 1972 *self._backslash_combining_matchers,
1973 1973 *self.magic_arg_matchers,
1974 1974 self.custom_completer_matcher,
1975 1975 self.magic_matcher,
1976 1976 self._jedi_matcher,
1977 1977 self.dict_key_matcher,
1978 1978 self.file_matcher,
1979 1979 ]
1980 1980 else:
1981 1981 return [
1982 1982 *self.custom_matchers,
1983 1983 *self._backslash_combining_matchers,
1984 1984 *self.magic_arg_matchers,
1985 1985 self.custom_completer_matcher,
1986 1986 self.dict_key_matcher,
1987 1987 self.magic_matcher,
1988 1988 self.python_matcher,
1989 1989 self.file_matcher,
1990 1990 self.python_func_kw_matcher,
1991 1991 ]
1992 1992
1993 1993 def all_completions(self, text:str) -> List[str]:
1994 1994 """
1995 1995 Wrapper around the completion methods for the benefit of emacs.
1996 1996 """
1997 1997 prefix = text.rpartition('.')[0]
1998 1998 with provisionalcompleter():
1999 1999 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
2000 2000 for c in self.completions(text, len(text))]
2001 2001
2002 2002 return self.complete(text)[1]
2003 2003
2004 2004 def _clean_glob(self, text:str):
2005 2005 return self.glob("%s*" % text)
2006 2006
2007 2007 def _clean_glob_win32(self, text:str):
2008 2008 return [f.replace("\\","/")
2009 2009 for f in self.glob("%s*" % text)]
2010 2010
2011 2011 @context_matcher()
2012 2012 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2013 2013 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2014 2014 matches = self.file_matches(context.token)
2015 2015 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2016 2016 # starts with `/home/`, `C:\`, etc)
2017 2017 return _convert_matcher_v1_result_to_v2(matches, type="path")
2018 2018
2019 2019 def file_matches(self, text: str) -> List[str]:
2020 2020 """Match filenames, expanding ~USER type strings.
2021 2021
2022 2022 Most of the seemingly convoluted logic in this completer is an
2023 2023 attempt to handle filenames with spaces in them. And yet it's not
2024 2024 quite perfect, because Python's readline doesn't expose all of the
2025 2025 GNU readline details needed for this to be done correctly.
2026 2026
2027 2027 For a filename with a space in it, the printed completions will be
2028 2028 only the parts after what's already been typed (instead of the
2029 2029 full completions, as is normally done). I don't think with the
2030 2030 current (as of Python 2.3) Python readline it's possible to do
2031 2031 better.
2032 2032
2033 2033 .. deprecated:: 8.6
2034 2034 You can use :meth:`file_matcher` instead.
2035 2035 """
2036 2036
2037 2037 # chars that require escaping with backslash - i.e. chars
2038 2038 # that readline treats incorrectly as delimiters, but we
2039 2039 # don't want to treat as delimiters in filename matching
2040 2040 # when escaped with backslash
2041 2041 if text.startswith('!'):
2042 2042 text = text[1:]
2043 2043 text_prefix = u'!'
2044 2044 else:
2045 2045 text_prefix = u''
2046 2046
2047 2047 text_until_cursor = self.text_until_cursor
2048 2048 # track strings with open quotes
2049 2049 open_quotes = has_open_quotes(text_until_cursor)
2050 2050
2051 2051 if '(' in text_until_cursor or '[' in text_until_cursor:
2052 2052 lsplit = text
2053 2053 else:
2054 2054 try:
2055 2055 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2056 2056 lsplit = arg_split(text_until_cursor)[-1]
2057 2057 except ValueError:
2058 2058 # typically an unmatched ", or backslash without escaped char.
2059 2059 if open_quotes:
2060 2060 lsplit = text_until_cursor.split(open_quotes)[-1]
2061 2061 else:
2062 2062 return []
2063 2063 except IndexError:
2064 2064 # tab pressed on empty line
2065 2065 lsplit = ""
2066 2066
2067 2067 if not open_quotes and lsplit != protect_filename(lsplit):
2068 2068 # if protectables are found, do matching on the whole escaped name
2069 2069 has_protectables = True
2070 2070 text0,text = text,lsplit
2071 2071 else:
2072 2072 has_protectables = False
2073 2073 text = os.path.expanduser(text)
2074 2074
2075 2075 if text == "":
2076 2076 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2077 2077
2078 2078 # Compute the matches from the filesystem
2079 2079 if sys.platform == 'win32':
2080 2080 m0 = self.clean_glob(text)
2081 2081 else:
2082 2082 m0 = self.clean_glob(text.replace('\\', ''))
2083 2083
2084 2084 if has_protectables:
2085 2085 # If we had protectables, we need to revert our changes to the
2086 2086 # beginning of filename so that we don't double-write the part
2087 2087 # of the filename we have so far
2088 2088 len_lsplit = len(lsplit)
2089 2089 matches = [text_prefix + text0 +
2090 2090 protect_filename(f[len_lsplit:]) for f in m0]
2091 2091 else:
2092 2092 if open_quotes:
2093 2093 # if we have a string with an open quote, we don't need to
2094 2094 # protect the names beyond the quote (and we _shouldn't_, as
2095 2095 # it would cause bugs when the filesystem call is made).
2096 2096 matches = m0 if sys.platform == "win32" else\
2097 2097 [protect_filename(f, open_quotes) for f in m0]
2098 2098 else:
2099 2099 matches = [text_prefix +
2100 2100 protect_filename(f) for f in m0]
2101 2101
2102 2102 # Mark directories in input list by appending '/' to their names.
2103 2103 return [x+'/' if os.path.isdir(x) else x for x in matches]
2104 2104
2105 2105 @context_matcher()
2106 2106 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2107 2107 """Match magics."""
2108 2108 text = context.token
2109 2109 matches = self.magic_matches(text)
2110 2110 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2111 2111 is_magic_prefix = len(text) > 0 and text[0] == "%"
2112 2112 result["suppress"] = is_magic_prefix and bool(result["completions"])
2113 2113 return result
2114 2114
2115 2115 def magic_matches(self, text: str):
2116 2116 """Match magics.
2117 2117
2118 2118 .. deprecated:: 8.6
2119 2119 You can use :meth:`magic_matcher` instead.
2120 2120 """
2121 2121 # Get all shell magics now rather than statically, so magics loaded at
2122 2122 # runtime show up too.
2123 2123 lsm = self.shell.magics_manager.lsmagic()
2124 2124 line_magics = lsm['line']
2125 2125 cell_magics = lsm['cell']
2126 2126 pre = self.magic_escape
2127 2127 pre2 = pre+pre
2128 2128
2129 2129 explicit_magic = text.startswith(pre)
2130 2130
2131 2131 # Completion logic:
2132 2132 # - user gives %%: only do cell magics
2133 2133 # - user gives %: do both line and cell magics
2134 2134 # - no prefix: do both
2135 2135 # In other words, line magics are skipped if the user gives %% explicitly
2136 2136 #
2137 2137 # We also exclude magics that match any currently visible names:
2138 2138 # https://github.com/ipython/ipython/issues/4877, unless the user has
2139 2139 # typed a %:
2140 2140 # https://github.com/ipython/ipython/issues/10754
2141 2141 bare_text = text.lstrip(pre)
2142 2142 global_matches = self.global_matches(bare_text)
2143 2143 if not explicit_magic:
2144 2144 def matches(magic):
2145 2145 """
2146 2146 Filter magics, in particular remove magics that match
2147 2147 a name present in global namespace.
2148 2148 """
2149 2149 return ( magic.startswith(bare_text) and
2150 2150 magic not in global_matches )
2151 2151 else:
2152 2152 def matches(magic):
2153 2153 return magic.startswith(bare_text)
2154 2154
2155 2155 comp = [ pre2+m for m in cell_magics if matches(m)]
2156 2156 if not text.startswith(pre2):
2157 2157 comp += [ pre+m for m in line_magics if matches(m)]
2158 2158
2159 2159 return comp
2160 2160
2161 2161 @context_matcher()
2162 2162 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2163 2163 """Match class names and attributes for %config magic."""
2164 2164 # NOTE: uses `line_buffer` equivalent for compatibility
2165 2165 matches = self.magic_config_matches(context.line_with_cursor)
2166 2166 return _convert_matcher_v1_result_to_v2(matches, type="param")
2167 2167
2168 2168 def magic_config_matches(self, text: str) -> List[str]:
2169 2169 """Match class names and attributes for %config magic.
2170 2170
2171 2171 .. deprecated:: 8.6
2172 2172 You can use :meth:`magic_config_matcher` instead.
2173 2173 """
2174 2174 texts = text.strip().split()
2175 2175
2176 2176 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2177 2177 # get all configuration classes
2178 2178 classes = sorted(set([ c for c in self.shell.configurables
2179 2179 if c.__class__.class_traits(config=True)
2180 2180 ]), key=lambda x: x.__class__.__name__)
2181 2181 classnames = [ c.__class__.__name__ for c in classes ]
2182 2182
2183 2183 # return all classnames if config or %config is given
2184 2184 if len(texts) == 1:
2185 2185 return classnames
2186 2186
2187 2187 # match classname
2188 2188 classname_texts = texts[1].split('.')
2189 2189 classname = classname_texts[0]
2190 2190 classname_matches = [ c for c in classnames
2191 2191 if c.startswith(classname) ]
2192 2192
2193 2193 # return matched classes or the matched class with attributes
2194 2194 if texts[1].find('.') < 0:
2195 2195 return classname_matches
2196 2196 elif len(classname_matches) == 1 and \
2197 2197 classname_matches[0] == classname:
2198 2198 cls = classes[classnames.index(classname)].__class__
2199 2199 help = cls.class_get_help()
2200 2200 # strip leading '--' from cl-args:
2201 2201 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2202 2202 return [ attr.split('=')[0]
2203 2203 for attr in help.strip().splitlines()
2204 2204 if attr.startswith(texts[1]) ]
2205 2205 return []
2206 2206
2207 2207 @context_matcher()
2208 2208 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2209 2209 """Match color schemes for %colors magic."""
2210 2210 # NOTE: uses `line_buffer` equivalent for compatibility
2211 2211 matches = self.magic_color_matches(context.line_with_cursor)
2212 2212 return _convert_matcher_v1_result_to_v2(matches, type="param")
2213 2213
2214 2214 def magic_color_matches(self, text: str) -> List[str]:
2215 2215 """Match color schemes for %colors magic.
2216 2216
2217 2217 .. deprecated:: 8.6
2218 2218 You can use :meth:`magic_color_matcher` instead.
2219 2219 """
2220 2220 texts = text.split()
2221 2221 if text.endswith(' '):
2222 2222 # .split() strips off the trailing whitespace. Add '' back
2223 2223 # so that: '%colors ' -> ['%colors', '']
2224 2224 texts.append('')
2225 2225
2226 2226 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2227 2227 prefix = texts[1]
2228 2228 return [ color for color in InspectColors.keys()
2229 2229 if color.startswith(prefix) ]
2230 2230 return []
2231 2231
2232 2232 @context_matcher(identifier="IPCompleter.jedi_matcher")
2233 2233 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2234 2234 matches = self._jedi_matches(
2235 2235 cursor_column=context.cursor_position,
2236 2236 cursor_line=context.cursor_line,
2237 2237 text=context.full_text,
2238 2238 )
2239 2239 return {
2240 2240 "completions": matches,
2241 2241 # static analysis should not suppress other matchers
2242 2242 "suppress": False,
2243 2243 }
2244 2244
2245 2245 def _jedi_matches(
2246 2246 self, cursor_column: int, cursor_line: int, text: str
2247 2247 ) -> Iterator[_JediCompletionLike]:
2248 2248 """
2249 2249 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
2250 2250 cursor position.
2251 2251
2252 2252 Parameters
2253 2253 ----------
2254 2254 cursor_column : int
2255 2255 column position of the cursor in ``text``, 0-indexed.
2256 2256 cursor_line : int
2257 2257 line position of the cursor in ``text``, 0-indexed
2258 2258 text : str
2259 2259 text to complete
2260 2260
2261 2261 Notes
2262 2262 -----
2263 2263 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
2264 2264 object containing a string with the Jedi debug information attached.
2265 2265
2266 2266 .. deprecated:: 8.6
2267 2267 You can use :meth:`_jedi_matcher` instead.
2268 2268 """
2269 2269 namespaces = [self.namespace]
2270 2270 if self.global_namespace is not None:
2271 2271 namespaces.append(self.global_namespace)
2272 2272
2273 2273 completion_filter = lambda x:x
2274 2274 offset = cursor_to_position(text, cursor_line, cursor_column)
2275 2275 # filter output if we are completing for object members
2276 2276 if offset:
2277 2277 pre = text[offset-1]
2278 2278 if pre == '.':
2279 2279 if self.omit__names == 2:
2280 2280 completion_filter = lambda c:not c.name.startswith('_')
2281 2281 elif self.omit__names == 1:
2282 2282 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2283 2283 elif self.omit__names == 0:
2284 2284 completion_filter = lambda x:x
2285 2285 else:
2286 2286 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2287 2287
2288 2288 interpreter = jedi.Interpreter(text[:offset], namespaces)
2289 2289 try_jedi = True
2290 2290
2291 2291 try:
2292 2292 # find the first token in the current tree -- if it is a ' or " then we are in a string
2293 2293 completing_string = False
2294 2294 try:
2295 2295 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2296 2296 except StopIteration:
2297 2297 pass
2298 2298 else:
2299 2299 # note the value may be ', ", or it may also be ''' or """, or
2300 2300 # in some cases, """what/you/typed..., but all of these are
2301 2301 # strings.
2302 2302 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2303 2303
2304 2304 # if we are in a string jedi is likely not the right candidate for
2305 2305 # now. Skip it.
2306 2306 try_jedi = not completing_string
2307 2307 except Exception as e:
2308 2308 # many things can go wrong; we are using a private API, just don't crash.
2309 2309 if self.debug:
2310 2310 print("Error detecting if completing a non-finished string :", e, '|')
2311 2311
2312 2312 if not try_jedi:
2313 2313 return iter([])
2314 2314 try:
2315 2315 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2316 2316 except Exception as e:
2317 2317 if self.debug:
2318 2318 return iter(
2319 2319 [
2320 2320 _FakeJediCompletion(
2321 2321 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2322 2322 % (e)
2323 2323 )
2324 2324 ]
2325 2325 )
2326 2326 else:
2327 2327 return iter([])
2328 2328
2329 2329 @context_matcher()
2330 2330 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2331 2331 """Match attributes or global python names"""
2332 2332 text = context.line_with_cursor
2333 2333 if "." in text:
2334 2334 try:
2335 2335 matches, fragment = self._attr_matches(text, include_prefix=False)
2336 2336 if text.endswith(".") and self.omit__names:
2337 2337 if self.omit__names == 1:
2338 2338 # true if txt is _not_ a __ name, false otherwise:
2339 2339 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2340 2340 else:
2341 2341 # true if txt is _not_ a _ name, false otherwise:
2342 2342 no__name = (
2343 2343 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2344 2344 is None
2345 2345 )
2346 2346 matches = filter(no__name, matches)
2347 2347 return _convert_matcher_v1_result_to_v2(
2348 2348 matches, type="attribute", fragment=fragment
2349 2349 )
2350 2350 except NameError:
2351 2351 # catches <undefined attributes>.<tab>
2352 2352 matches = []
2353 2353 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2354 2354 else:
2355 2355 matches = self.global_matches(context.token)
2356 2356 # TODO: maybe distinguish between functions, modules and just "variables"
2357 2357 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2358 2358
2359 2359 @completion_matcher(api_version=1)
2360 2360 def python_matches(self, text: str) -> Iterable[str]:
2361 2361 """Match attributes or global python names.
2362 2362
2363 2363 .. deprecated:: 8.27
2364 2364 You can use :meth:`python_matcher` instead."""
2365 2365 if "." in text:
2366 2366 try:
2367 2367 matches = self.attr_matches(text)
2368 2368 if text.endswith('.') and self.omit__names:
2369 2369 if self.omit__names == 1:
2370 2370 # true if txt is _not_ a __ name, false otherwise:
2371 2371 no__name = (lambda txt:
2372 2372 re.match(r'.*\.__.*?__',txt) is None)
2373 2373 else:
2374 2374 # true if txt is _not_ a _ name, false otherwise:
2375 2375 no__name = (lambda txt:
2376 2376 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2377 2377 matches = filter(no__name, matches)
2378 2378 except NameError:
2379 2379 # catches <undefined attributes>.<tab>
2380 2380 matches = []
2381 2381 else:
2382 2382 matches = self.global_matches(text)
2383 2383 return matches
2384 2384
2385 2385 def _default_arguments_from_docstring(self, doc):
2386 2386 """Parse the first line of docstring for call signature.
2387 2387
2388 2388 Docstring should be of the form 'min(iterable[, key=func])\n'.
2389 2389 It can also parse cython docstring of the form
2390 2390 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2391 2391 """
2392 2392 if doc is None:
2393 2393 return []
2394 2394
2395 2395 # care only about the first line
2396 2396 line = doc.lstrip().splitlines()[0]
2397 2397
2398 2398 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2399 2399 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2400 2400 sig = self.docstring_sig_re.search(line)
2401 2401 if sig is None:
2402 2402 return []
2403 2403 # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
2404 2404 sig = sig.groups()[0].split(',')
2405 2405 ret = []
2406 2406 for s in sig:
2407 2407 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2408 2408 ret += self.docstring_kwd_re.findall(s)
2409 2409 return ret
2410 2410
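    # Illustrative sketch, reusing the examples from the docstring above
    # (``completer`` stands for an ``IPCompleter`` instance; results hedged):
    #
    #     completer._default_arguments_from_docstring('min(iterable[, key=func])\n')
    #     # -> ['key']   (only keyword-style parameters are extracted)
    #     completer._default_arguments_from_docstring(
    #         'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)')
    #     # -> e.g. ['ncall', 'resume', 'nsplit'] for the cython-style form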
2411 2411 def _default_arguments(self, obj):
2412 2412 """Return the list of default arguments of obj if it is callable,
2413 2413 or empty list otherwise."""
2414 2414 call_obj = obj
2415 2415 ret = []
2416 2416 if inspect.isbuiltin(obj):
2417 2417 pass
2418 2418 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2419 2419 if inspect.isclass(obj):
2420 2420 #for cython embedsignature=True the constructor docstring
2421 2421 #belongs to the object itself not __init__
2422 2422 ret += self._default_arguments_from_docstring(
2423 2423 getattr(obj, '__doc__', ''))
2424 2424 # for classes, check for __init__,__new__
2425 2425 call_obj = (getattr(obj, '__init__', None) or
2426 2426 getattr(obj, '__new__', None))
2427 2427 # for all others, check if they are __call__able
2428 2428 elif hasattr(obj, '__call__'):
2429 2429 call_obj = obj.__call__
2430 2430 ret += self._default_arguments_from_docstring(
2431 2431 getattr(call_obj, '__doc__', ''))
2432 2432
2433 2433 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2434 2434 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2435 2435
2436 2436 try:
2437 2437 sig = inspect.signature(obj)
2438 2438 ret.extend(k for k, v in sig.parameters.items() if
2439 2439 v.kind in _keeps)
2440 2440 except ValueError:
2441 2441 pass
2442 2442
2443 2443 return list(set(ret))
2444 2444
2445 2445 @context_matcher()
2446 2446 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2447 2447 """Match named parameters (kwargs) of the last open function."""
2448 2448 matches = self.python_func_kw_matches(context.token)
2449 2449 return _convert_matcher_v1_result_to_v2(matches, type="param")
2450 2450
2451 2451 def python_func_kw_matches(self, text):
2452 2452 """Match named parameters (kwargs) of the last open function.
2453 2453
2454 2454 .. deprecated:: 8.6
2455 2455 You can use :meth:`python_func_kw_matcher` instead.
2456 2456 """
2457 2457
2458 2458 if "." in text: # a parameter cannot be dotted
2459 2459 return []
2460 2460 try: regexp = self.__funcParamsRegex
2461 2461 except AttributeError:
2462 2462 regexp = self.__funcParamsRegex = re.compile(r'''
2463 2463 '.*?(?<!\\)' | # single quoted strings or
2464 2464 ".*?(?<!\\)" | # double quoted strings or
2465 2465 \w+ | # identifier
2466 2466 \S # other characters
2467 2467 ''', re.VERBOSE | re.DOTALL)
2468 2468 # 1. find the nearest identifier that comes before an unclosed
2469 2469 # parenthesis before the cursor
2470 2470 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2471 2471 tokens = regexp.findall(self.text_until_cursor)
2472 2472 iterTokens = reversed(tokens); openPar = 0
2473 2473
2474 2474 for token in iterTokens:
2475 2475 if token == ')':
2476 2476 openPar -= 1
2477 2477 elif token == '(':
2478 2478 openPar += 1
2479 2479 if openPar > 0:
2480 2480 # found the last unclosed parenthesis
2481 2481 break
2482 2482 else:
2483 2483 return []
2484 2484 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2485 2485 ids = []
2486 2486 isId = re.compile(r'\w+$').match
2487 2487
2488 2488 while True:
2489 2489 try:
2490 2490 ids.append(next(iterTokens))
2491 2491 if not isId(ids[-1]):
2492 2492 ids.pop(); break
2493 2493 if not next(iterTokens) == '.':
2494 2494 break
2495 2495 except StopIteration:
2496 2496 break
2497 2497
2498 2498 # Find all named arguments already assigned to, so as to avoid suggesting
2499 2499 # them again
2500 2500 usedNamedArgs = set()
2501 2501 par_level = -1
2502 2502 for token, next_token in zip(tokens, tokens[1:]):
2503 2503 if token == '(':
2504 2504 par_level += 1
2505 2505 elif token == ')':
2506 2506 par_level -= 1
2507 2507
2508 2508 if par_level != 0:
2509 2509 continue
2510 2510
2511 2511 if next_token != '=':
2512 2512 continue
2513 2513
2514 2514 usedNamedArgs.add(token)
2515 2515
2516 2516 argMatches = []
2517 2517 try:
2518 2518 callableObj = '.'.join(ids[::-1])
2519 2519 namedArgs = self._default_arguments(eval(callableObj,
2520 2520 self.namespace))
2521 2521
2522 2522 # Remove used named arguments from the list, no need to show twice
2523 2523 for namedArg in set(namedArgs) - usedNamedArgs:
2524 2524 if namedArg.startswith(text):
2525 2525 argMatches.append("%s=" %namedArg)
2526 2526 except:
2527 2527 pass
2528 2528
2529 2529 return argMatches
2530 2530
2531 2531 @staticmethod
2532 2532 def _get_keys(obj: Any) -> List[Any]:
2533 2533 # Objects can define their own completions by defining an
2534 2534 # _ipython_key_completions_() method.
2535 2535 method = get_real_method(obj, '_ipython_key_completions_')
2536 2536 if method is not None:
2537 2537 return method()
2538 2538
2539 2539 # Special case some common in-memory dict-like types
2540 2540 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2541 2541 try:
2542 2542 return list(obj.keys())
2543 2543 except Exception:
2544 2544 return []
2545 2545 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2546 2546 try:
2547 2547 return list(obj.obj.keys())
2548 2548 except Exception:
2549 2549 return []
2550 2550 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2551 2551 _safe_isinstance(obj, 'numpy', 'void'):
2552 2552 return obj.dtype.names or []
2553 2553 return []
2554 2554
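    # Illustrative sketch of the key-completion hook mentioned above: any object
    # can opt in to dict-style key completion by defining
    # ``_ipython_key_completions_``. A hypothetical example:
    #
    #     class Bucket:
    #         def __init__(self, data):
    #             self._data = dict(data)
    #         def __getitem__(self, key):
    #             return self._data[key]
    #         def _ipython_key_completions_(self):
    #             return list(self._data)
    #
    # With ``b = Bucket({"alpha": 1, "beta": 2})``, typing ``b["<tab>`` offers
    # "alpha" and "beta".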
2555 2555 @context_matcher()
2556 2556 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2557 2557 """Match string keys in a dictionary, after e.g. ``foo[``."""
2558 2558 matches = self.dict_key_matches(context.token)
2559 2559 return _convert_matcher_v1_result_to_v2(
2560 2560 matches, type="dict key", suppress_if_matches=True
2561 2561 )
2562 2562
2563 2563 def dict_key_matches(self, text: str) -> List[str]:
2564 2564 """Match string keys in a dictionary, after e.g. ``foo[``.
2565 2565
2566 2566 .. deprecated:: 8.6
2567 2567 You can use :meth:`dict_key_matcher` instead.
2568 2568 """
2569 2569
2570 2570 # Short-circuit on closed dictionary (regular expression would
2571 2571 # not match anyway, but would take quite a while).
2572 2572 if self.text_until_cursor.strip().endswith("]"):
2573 2573 return []
2574 2574
2575 2575 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2576 2576
2577 2577 if match is None:
2578 2578 return []
2579 2579
2580 2580 expr, prior_tuple_keys, key_prefix = match.groups()
2581 2581
2582 2582 obj = self._evaluate_expr(expr)
2583 2583
2584 2584 if obj is not_found:
2585 2585 return []
2586 2586
2587 2587 keys = self._get_keys(obj)
2588 2588 if not keys:
2589 2589 return keys
2590 2590
2591 2591 tuple_prefix = guarded_eval(
2592 2592 prior_tuple_keys,
2593 2593 EvaluationContext(
2594 2594 globals=self.global_namespace,
2595 2595 locals=self.namespace,
2596 2596 evaluation=self.evaluation, # type: ignore
2597 2597 in_subscript=True,
2598 2598 ),
2599 2599 )
2600 2600
2601 2601 closing_quote, token_offset, matches = match_dict_keys(
2602 2602 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2603 2603 )
2604 2604 if not matches:
2605 2605 return []
2606 2606
2607 2607 # get the cursor position of
2608 2608 # - the text being completed
2609 2609 # - the start of the key text
2610 2610 # - the start of the completion
2611 2611 text_start = len(self.text_until_cursor) - len(text)
2612 2612 if key_prefix:
2613 2613 key_start = match.start(3)
2614 2614 completion_start = key_start + token_offset
2615 2615 else:
2616 2616 key_start = completion_start = match.end()
2617 2617
2618 2618 # grab the leading prefix, to make sure all completions start with `text`
2619 2619 if text_start > key_start:
2620 2620 leading = ''
2621 2621 else:
2622 2622 leading = text[text_start:completion_start]
2623 2623
2624 2624 # append closing quote and bracket as appropriate
2625 2625 # this is *not* appropriate if the opening quote or bracket is outside
2626 2626 # the text given to this method, e.g. `d["""a\nt
2627 2627 can_close_quote = False
2628 2628 can_close_bracket = False
2629 2629
2630 2630 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2631 2631
2632 2632 if continuation.startswith(closing_quote):
2633 2633 # do not close if already closed, e.g. `d['a<tab>'`
2634 2634 continuation = continuation[len(closing_quote) :]
2635 2635 else:
2636 2636 can_close_quote = True
2637 2637
2638 2638 continuation = continuation.strip()
2639 2639
2640 2640 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2641 2641 # handling it is out of scope, so let's avoid appending suffixes.
2642 2642 has_known_tuple_handling = isinstance(obj, dict)
2643 2643
2644 2644 can_close_bracket = (
2645 2645 not continuation.startswith("]") and self.auto_close_dict_keys
2646 2646 )
2647 2647 can_close_tuple_item = (
2648 2648 not continuation.startswith(",")
2649 2649 and has_known_tuple_handling
2650 2650 and self.auto_close_dict_keys
2651 2651 )
2652 2652 can_close_quote = can_close_quote and self.auto_close_dict_keys
2653 2653
2654 # fast path if closing qoute should be appended but not suffix is allowed
2654 # fast path if a closing quote should be appended but no suffix is allowed
2655 2655 if not can_close_quote and not can_close_bracket and closing_quote:
2656 2656 return [leading + k for k in matches]
2657 2657
2658 2658 results = []
2659 2659
2660 2660 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2661 2661
2662 2662 for k, state_flag in matches.items():
2663 2663 result = leading + k
2664 2664 if can_close_quote and closing_quote:
2665 2665 result += closing_quote
2666 2666
2667 2667 if state_flag == end_of_tuple_or_item:
2668 2668 # We do not know which suffix to add,
2669 2669 # e.g. both tuple item and string
2670 2670 # match this item.
2671 2671 pass
2672 2672
2673 2673 if state_flag in end_of_tuple_or_item and can_close_bracket:
2674 2674 result += "]"
2675 2675 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2676 2676 result += ", "
2677 2677 results.append(result)
2678 2678 return results
2679 2679
2680 2680 @context_matcher()
2681 2681 def unicode_name_matcher(self, context: CompletionContext):
2682 2682 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2683 2683 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2684 2684 return _convert_matcher_v1_result_to_v2(
2685 2685 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2686 2686 )
2687 2687
2688 2688 @staticmethod
2689 2689 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2690 2690 """Match Latex-like syntax for unicode characters base
2691 2691 on the name of the character.
2692 2692
2693 2693 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2694 2694
2695 2695 Works only on valid python 3 identifiers, or on combining characters that
2696 2696 will combine to form a valid identifier.
2697 2697 """
2698 2698 slashpos = text.rfind('\\')
2699 2699 if slashpos > -1:
2700 2700 s = text[slashpos+1:]
2701 2701 try :
2702 2702 unic = unicodedata.lookup(s)
2703 2703 # allow combining chars
2704 2704 if ('a'+unic).isidentifier():
2705 2705 return '\\'+s,[unic]
2706 2706 except KeyError:
2707 2707 pass
2708 2708 return '', []
2709 2709
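    # Illustrative usage sketch (assuming the names resolve via
    # ``unicodedata.lookup`` as documented above):
    #
    #     IPCompleter.unicode_name_matches("x = \\GREEK SMALL LETTER ETA")
    #     # -> ('\\GREEK SMALL LETTER ETA', ['η'])
    #     IPCompleter.unicode_name_matches("no backslash here")
    #     # -> ('', [])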
2710 2710 @context_matcher()
2711 2711 def latex_name_matcher(self, context: CompletionContext):
2712 2712 """Match Latex syntax for unicode characters.
2713 2713
2714 2714 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2715 2715 """
2716 2716 fragment, matches = self.latex_matches(context.text_until_cursor)
2717 2717 return _convert_matcher_v1_result_to_v2(
2718 2718 matches, type="latex", fragment=fragment, suppress_if_matches=True
2719 2719 )
2720 2720
2721 2721 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2722 2722 """Match Latex syntax for unicode characters.
2723 2723
2724 2724 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2725 2725
2726 2726 .. deprecated:: 8.6
2727 2727 You can use :meth:`latex_name_matcher` instead.
2728 2728 """
2729 2729 slashpos = text.rfind('\\')
2730 2730 if slashpos > -1:
2731 2731 s = text[slashpos:]
2732 2732 if s in latex_symbols:
2733 2733 # Try to complete a full latex symbol to unicode
2734 2734 # \\alpha -> α
2735 2735 return s, [latex_symbols[s]]
2736 2736 else:
2737 2737 # If a user has partially typed a latex symbol, give them
2738 2738 # a full list of options \al -> [\aleph, \alpha]
2739 2739 matches = [k for k in latex_symbols if k.startswith(s)]
2740 2740 if matches:
2741 2741 return s, matches
2742 2742 return '', ()
2743 2743
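    # Illustrative usage sketch (assuming the entries exist in ``latex_symbols``;
    # ``completer`` stands for an ``IPCompleter`` instance):
    #
    #     completer.latex_matches("\\alpha")   # -> ('\\alpha', ['α'])
    #     completer.latex_matches("\\alp")     # -> ('\\alp', [...]) all symbols starting with \alp
    #     completer.latex_matches("alpha")     # -> ('', ()) -- no backslash, nothing to match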
2744 2744 @context_matcher()
2745 2745 def custom_completer_matcher(self, context):
2746 2746 """Dispatch custom completer.
2747 2747
2748 2748 If a match is found, suppresses all other matchers except for Jedi.
2749 2749 """
2750 2750 matches = self.dispatch_custom_completer(context.token) or []
2751 2751 result = _convert_matcher_v1_result_to_v2(
2752 2752 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2753 2753 )
2754 2754 result["ordered"] = True
2755 2755 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2756 2756 return result
2757 2757
2758 2758 def dispatch_custom_completer(self, text):
2759 2759 """
2760 2760 .. deprecated:: 8.6
2761 2761 You can use :meth:`custom_completer_matcher` instead.
2762 2762 """
2763 2763 if not self.custom_completers:
2764 2764 return
2765 2765
2766 2766 line = self.line_buffer
2767 2767 if not line.strip():
2768 2768 return None
2769 2769
2770 2770 # Create a little structure to pass all the relevant information about
2771 2771 # the current completion to any custom completer.
2772 2772 event = SimpleNamespace()
2773 2773 event.line = line
2774 2774 event.symbol = text
2775 2775 cmd = line.split(None,1)[0]
2776 2776 event.command = cmd
2777 2777 event.text_until_cursor = self.text_until_cursor
2778 2778
2779 2779 # for foo etc, try also to find completer for %foo
2780 2780 if not cmd.startswith(self.magic_escape):
2781 2781 try_magic = self.custom_completers.s_matches(
2782 2782 self.magic_escape + cmd)
2783 2783 else:
2784 2784 try_magic = []
2785 2785
2786 2786 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2787 2787 try_magic,
2788 2788 self.custom_completers.flat_matches(self.text_until_cursor)):
2789 2789 try:
2790 2790 res = c(event)
2791 2791 if res:
2792 2792 # first, try case sensitive match
2793 2793 withcase = [r for r in res if r.startswith(text)]
2794 2794 if withcase:
2795 2795 return withcase
2796 2796 # if none, then case insensitive ones are ok too
2797 2797 text_low = text.lower()
2798 2798 return [r for r in res if r.lower().startswith(text_low)]
2799 2799 except TryNext:
2800 2800 pass
2801 2801 except KeyboardInterrupt:
2802 2802 """
2803 2803 If a custom completer takes too long,
2804 2804 let the keyboard interrupt abort it and return nothing.
2805 2805 """
2806 2806 break
2807 2807
2808 2808 return None
2809 2809
2810 2810 def completions(self, text: str, offset: int)->Iterator[Completion]:
2811 2811 """
2812 2812 Returns an iterator over the possible completions
2813 2813
2814 2814 .. warning::
2815 2815
2816 2816 Unstable
2817 2817
2818 2818 This function is unstable; the API may change without warning.
2819 2819 It will also raise unless used in the proper context manager.
2820 2820
2821 2821 Parameters
2822 2822 ----------
2823 2823 text : str
2824 2824 Full text of the current input, multi line string.
2825 2825 offset : int
2826 2826 Integer representing the position of the cursor in ``text``. Offset
2827 2827 is 0-based indexed.
2828 2828
2829 2829 Yields
2830 2830 ------
2831 2831 Completion
2832 2832
2833 2833 Notes
2834 2834 -----
2835 2835 The cursor on a text can either be seen as being "in between"
2836 2836 characters or "On" a character depending on the interface visible to
2837 2837 the user. For consistency, the cursor being "in between" characters X
2838 2838 and Y is equivalent to the cursor being "on" character Y, that is to say
2839 2839 the character the cursor is on is considered as being after the cursor.
2840 2840
2841 2841 Combining characters may span more than one position in the
2842 2842 text.
2843 2843
2844 2844 .. note::
2845 2845
2846 2846 If ``IPCompleter.debug`` is :any:`True`, this will yield a fake ``--jedi/ipython--``
2847 2847 Completion token to distinguish completions returned by Jedi
2848 2848 from the usual IPython completions.
2849 2849
2850 2850 .. note::
2851 2851
2852 2852 Completions are not completely deduplicated yet. If identical
2853 2853 completions are coming from different sources this function does not
2854 2854 ensure that each completion object will only be present once.
2855 2855 """
2856 2856 warnings.warn("`completions` is a provisional API (as of IPython 6.0). "
2857 2857 "It may change without warnings. "
2858 2858 "Use in corresponding context manager.",
2859 2859 category=ProvisionalCompleterWarning, stacklevel=2)
2860 2860
2861 2861 seen = set()
2862 2862 profiler:Optional[cProfile.Profile]
2863 2863 try:
2864 2864 if self.profile_completions:
2865 2865 import cProfile
2866 2866 profiler = cProfile.Profile()
2867 2867 profiler.enable()
2868 2868 else:
2869 2869 profiler = None
2870 2870
2871 2871 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2872 2872 if c and (c in seen):
2873 2873 continue
2874 2874 yield c
2875 2875 seen.add(c)
2876 2876 except KeyboardInterrupt:
2877 2877 """if completions take too long and users send keyboard interrupt,
2878 2878 do not crash and return ASAP. """
2879 2879 pass
2880 2880 finally:
2881 2881 if profiler is not None:
2882 2882 profiler.disable()
2883 2883 ensure_dir_exists(self.profiler_output_dir)
2884 2884 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2885 2885 print("Writing profiler output to", output_path)
2886 2886 profiler.dump_stats(output_path)
2887 2887
2888 2888 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2889 2889 """
2890 2890 Core completion method. Same signature as :any:`completions`, with the
2891 2891 extra `timeout` parameter (in seconds).
2892 2892
2893 2893 Computing jedi's completion ``.type`` can be quite expensive (it is a
2894 2894 lazy property) and can require some warm-up, more warm-up than just
2895 2895 computing the ``name`` of a completion. The warm-up can be:
2896 2896
2897 2897 - Long warm-up the first time a module is encountered after
2898 2898 install/update: actually build parse/inference tree.
2899 2899
2900 2900 - first time the module is encountered in a session: load tree from
2901 2901 disk.
2902 2902
2903 2903 We don't want to block completions for tens of seconds so we give the
2904 2904 completer a "budget" of ``_timeout`` seconds per invocation to compute
2905 2905 completion types; the completions that have not yet been computed will
2906 2906 be marked as "unknown" and will have a chance to be computed next round
2907 2907 as things get cached.
2908 2908
2909 2909 Keep in mind that Jedi is not the only thing processing the completions, so
2910 2910 keep the timeout short-ish: if we take more than 0.3 seconds we still
2911 2911 have lots of processing to do.
2912 2912
2913 2913 """
2914 2914 deadline = time.monotonic() + _timeout
2915 2915
2916 2916 before = full_text[:offset]
2917 2917 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2918 2918
2919 2919 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2920 2920
2921 2921 def is_non_jedi_result(
2922 2922 result: MatcherResult, identifier: str
2923 2923 ) -> TypeGuard[SimpleMatcherResult]:
2924 2924 return identifier != jedi_matcher_id
2925 2925
2926 2926 results = self._complete(
2927 2927 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2928 2928 )
2929 2929
2930 2930 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2931 2931 identifier: result
2932 2932 for identifier, result in results.items()
2933 2933 if is_non_jedi_result(result, identifier)
2934 2934 }
2935 2935
2936 2936 jedi_matches = (
2937 2937 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2938 2938 if jedi_matcher_id in results
2939 2939 else ()
2940 2940 )
2941 2941
2942 2942 iter_jm = iter(jedi_matches)
2943 2943 if _timeout:
2944 2944 for jm in iter_jm:
2945 2945 try:
2946 2946 type_ = jm.type
2947 2947 except Exception:
2948 2948 if self.debug:
2949 2949 print("Error in Jedi getting type of ", jm)
2950 2950 type_ = None
2951 2951 delta = len(jm.name_with_symbols) - len(jm.complete)
2952 2952 if type_ == 'function':
2953 2953 signature = _make_signature(jm)
2954 2954 else:
2955 2955 signature = ''
2956 2956 yield Completion(start=offset - delta,
2957 2957 end=offset,
2958 2958 text=jm.name_with_symbols,
2959 2959 type=type_,
2960 2960 signature=signature,
2961 2961 _origin='jedi')
2962 2962
2963 2963 if time.monotonic() > deadline:
2964 2964 break
2965 2965
2966 2966 for jm in iter_jm:
2967 2967 delta = len(jm.name_with_symbols) - len(jm.complete)
2968 2968 yield Completion(
2969 2969 start=offset - delta,
2970 2970 end=offset,
2971 2971 text=jm.name_with_symbols,
2972 2972 type=_UNKNOWN_TYPE, # don't compute type for speed
2973 2973 _origin="jedi",
2974 2974 signature="",
2975 2975 )
2976 2976
2977 2977 # TODO:
2978 2978 # Suppress this, right now just for debug.
2979 2979 if jedi_matches and non_jedi_results and self.debug:
2980 2980 some_start_offset = before.rfind(
2981 2981 next(iter(non_jedi_results.values()))["matched_fragment"]
2982 2982 )
2983 2983 yield Completion(
2984 2984 start=some_start_offset,
2985 2985 end=offset,
2986 2986 text="--jedi/ipython--",
2987 2987 _origin="debug",
2988 2988 type="none",
2989 2989 signature="",
2990 2990 )
2991 2991
2992 2992 ordered: List[Completion] = []
2993 2993 sortable: List[Completion] = []
2994 2994
2995 2995 for origin, result in non_jedi_results.items():
2996 2996 matched_text = result["matched_fragment"]
2997 2997 start_offset = before.rfind(matched_text)
2998 2998 is_ordered = result.get("ordered", False)
2999 2999 container = ordered if is_ordered else sortable
3000 3000
3001 3001 # I'm unsure if this is always true, so let's assert and see if it
3002 3002 # crashes
3003 3003 assert before.endswith(matched_text)
3004 3004
3005 3005 for simple_completion in result["completions"]:
3006 3006 completion = Completion(
3007 3007 start=start_offset,
3008 3008 end=offset,
3009 3009 text=simple_completion.text,
3010 3010 _origin=origin,
3011 3011 signature="",
3012 3012 type=simple_completion.type or _UNKNOWN_TYPE,
3013 3013 )
3014 3014 container.append(completion)
3015 3015
3016 3016 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
3017 3017 :MATCHES_LIMIT
3018 3018 ]
3019 3019
3020 3020 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
3021 3021 """Find completions for the given text and line context.
3022 3022
3023 3023 Note that both the text and the line_buffer are optional, but at least
3024 3024 one of them must be given.
3025 3025
3026 3026 Parameters
3027 3027 ----------
3028 3028 text : string, optional
3029 3029 Text to perform the completion on. If not given, the line buffer
3030 3030 is split using the instance's CompletionSplitter object.
3031 3031 line_buffer : string, optional
3032 3032 If not given, the completer attempts to obtain the current line
3033 3033 buffer via readline. This keyword allows clients which are
3034 3034 requesting text completions in non-readline contexts to inform
3035 3035 the completer of the entire text.
3036 3036 cursor_pos : int, optional
3037 3037 Index of the cursor in the full line buffer. Should be provided by
3038 3038 remote frontends where the kernel has no access to frontend state.
3039 3039
3040 3040 Returns
3041 3041 -------
3042 3042 Tuple of two items:
3043 3043 text : str
3044 3044 Text that was actually used in the completion.
3045 3045 matches : list
3046 3046 A list of completion matches.
3047 3047
3048 3048 Notes
3049 3049 -----
3050 3050 This API is likely to be deprecated and replaced by
3051 3051 :any:`IPCompleter.completions` in the future.
3052 3052
3053 3053 """
3054 3054 warnings.warn('`Completer.complete` is pending deprecation since '
3055 3055 'IPython 6.0 and will be replaced by `Completer.completions`.',
3056 3056 PendingDeprecationWarning)
3057 3057 # potential todo, FOLD the 3rd throw-away argument of _complete
3058 3058 # into the first two.
3059 3059 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3060 3060 # TODO: should we deprecate now, or does it stay?
3061 3061
3062 3062 results = self._complete(
3063 3063 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3064 3064 )
3065 3065
3066 3066 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3067 3067
3068 3068 return self._arrange_and_extract(
3069 3069 results,
3070 3070 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3071 3071 skip_matchers={jedi_matcher_id},
3072 3072 # this API does not support different start/end positions (fragments of token).
3073 3073 abort_if_offset_changes=True,
3074 3074 )
3075 3075
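# Illustrative sketch (not part of this class): a minimal use of the legacy
# `complete` API documented above. It assumes it runs inside an IPython
# session so `get_ipython()` returns the active shell, and it will emit the
# PendingDeprecationWarning mentioned above.
from IPython import get_ipython

ip = get_ipython()
completer = ip.Completer
text, matches = completer.complete(line_buffer="imp", cursor_pos=3)
# `text` is the fragment that was actually completed and `matches` is the list
# of candidate strings (for "imp" this should include the keyword "import").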
3076 3076 def _arrange_and_extract(
3077 3077 self,
3078 3078 results: Dict[str, MatcherResult],
3079 3079 skip_matchers: Set[str],
3080 3080 abort_if_offset_changes: bool,
3081 3081 ):
3082 3082 sortable: List[AnyMatcherCompletion] = []
3083 3083 ordered: List[AnyMatcherCompletion] = []
3084 3084 most_recent_fragment = None
3085 3085 for identifier, result in results.items():
3086 3086 if identifier in skip_matchers:
3087 3087 continue
3088 3088 if not result["completions"]:
3089 3089 continue
3090 3090 if not most_recent_fragment:
3091 3091 most_recent_fragment = result["matched_fragment"]
3092 3092 if (
3093 3093 abort_if_offset_changes
3094 3094 and result["matched_fragment"] != most_recent_fragment
3095 3095 ):
3096 3096 break
3097 3097 if result.get("ordered", False):
3098 3098 ordered.extend(result["completions"])
3099 3099 else:
3100 3100 sortable.extend(result["completions"])
3101 3101
3102 3102 if not most_recent_fragment:
3103 3103 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3104 3104
3105 3105 return most_recent_fragment, [
3106 3106 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3107 3107 ]
3108 3108
3109 3109 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3110 3110 full_text=None) -> _CompleteResult:
3111 3111 """
3112 3112 Like complete but can also return raw jedi completions as well as the
3113 3113 origin of the completion text. This could (and should) be made much
3114 3114 cleaner but that will be simpler once we drop the old (and stateful)
3115 3115 :any:`complete` API.
3116 3116
3117 3117 With the current provisional API, cursor_pos acts both (depending on the
3118 3118 caller) as the offset in the ``text`` or ``line_buffer``, or as the
3119 3119 ``column`` when passing multiline strings; this could/should be renamed
3120 3120 but would add extra noise.
3121 3121
3122 3122 Parameters
3123 3123 ----------
3124 3124 cursor_line
3125 3125 Index of the line the cursor is on. 0 indexed.
3126 3126 cursor_pos
3127 3127 Position of the cursor in the current line/line_buffer/text. 0
3128 3128 indexed.
3129 3129 line_buffer : optional, str
3130 3130 The current line the cursor is in; this is mostly due to the legacy
3131 3131 reason that readline could only give us the single current line.
3132 3132 Prefer `full_text`.
3133 3133 text : str
3134 3134 The current "token" the cursor is in, mostly also for historical
3135 3135 reasons, as the completer would trigger only after the current line
3136 3136 was parsed.
3137 3137 full_text : str
3138 3138 Full text of the current cell.
3139 3139
3140 3140 Returns
3141 3141 -------
3142 3142 An ordered dictionary where keys are identifiers of completion
3143 3143 matchers and values are ``MatcherResult``s.
3144 3144 """
3145 3145
3146 3146 # if the cursor position isn't given, the only sane assumption we can
3147 3147 # make is that it's at the end of the line (the common case)
3148 3148 if cursor_pos is None:
3149 3149 cursor_pos = len(line_buffer) if text is None else len(text)
3150 3150
3151 3151 if self.use_main_ns:
3152 3152 self.namespace = __main__.__dict__
3153 3153
3154 3154 # if text is either None or an empty string, rely on the line buffer
3155 3155 if (not line_buffer) and full_text:
3156 3156 line_buffer = full_text.split('\n')[cursor_line]
3157 3157 if not text: # issue #11508: check line_buffer before calling split_line
3158 3158 text = (
3159 3159 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3160 3160 )
3161 3161
3162 3162 # If no line buffer is given, assume the input text is all there was
3163 3163 if line_buffer is None:
3164 3164 line_buffer = text
3165 3165
3166 3166 # deprecated - do not use `line_buffer` in new code.
3167 3167 self.line_buffer = line_buffer
3168 3168 self.text_until_cursor = self.line_buffer[:cursor_pos]
3169 3169
3170 3170 if not full_text:
3171 3171 full_text = line_buffer
3172 3172
3173 3173 context = CompletionContext(
3174 3174 full_text=full_text,
3175 3175 cursor_position=cursor_pos,
3176 3176 cursor_line=cursor_line,
3177 3177 token=text,
3178 3178 limit=MATCHES_LIMIT,
3179 3179 )
3180 3180
3181 3181 # Start with a clean slate of completions
3182 3182 results: Dict[str, MatcherResult] = {}
3183 3183
3184 3184 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3185 3185
3186 3186 suppressed_matchers: Set[str] = set()
3187 3187
3188 3188 matchers = {
3189 3189 _get_matcher_id(matcher): matcher
3190 3190 for matcher in sorted(
3191 3191 self.matchers, key=_get_matcher_priority, reverse=True
3192 3192 )
3193 3193 }
3194 3194
3195 3195 for matcher_id, matcher in matchers.items():
3196 3196 matcher_id = _get_matcher_id(matcher)
3197 3197
3198 3198 if matcher_id in self.disable_matchers:
3199 3199 continue
3200 3200
3201 3201 if matcher_id in results:
3202 3202 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3203 3203
3204 3204 if matcher_id in suppressed_matchers:
3205 3205 continue
3206 3206
3207 3207 result: MatcherResult
3208 3208 try:
3209 3209 if _is_matcher_v1(matcher):
3210 3210 result = _convert_matcher_v1_result_to_v2(
3211 3211 matcher(text), type=_UNKNOWN_TYPE
3212 3212 )
3213 3213 elif _is_matcher_v2(matcher):
3214 3214 result = matcher(context)
3215 3215 else:
3216 3216 api_version = _get_matcher_api_version(matcher)
3217 3217 raise ValueError(f"Unsupported API version {api_version}")
3218 3218 except:
3219 3219 # Show the ugly traceback if the matcher causes an
3220 3220 # exception, but do NOT crash the kernel!
3221 3221 sys.excepthook(*sys.exc_info())
3222 3222 continue
3223 3223
3224 3224 # set default value for matched fragment if suffix was not selected.
3225 3225 result["matched_fragment"] = result.get("matched_fragment", context.token)
3226 3226
3227 3227 if not suppressed_matchers:
3228 3228 suppression_recommended: Union[bool, Set[str]] = result.get(
3229 3229 "suppress", False
3230 3230 )
3231 3231
3232 3232 suppression_config = (
3233 3233 self.suppress_competing_matchers.get(matcher_id, None)
3234 3234 if isinstance(self.suppress_competing_matchers, dict)
3235 3235 else self.suppress_competing_matchers
3236 3236 )
3237 3237 should_suppress = (
3238 3238 (suppression_config is True)
3239 3239 or (suppression_recommended and (suppression_config is not False))
3240 3240 ) and has_any_completions(result)
3241 3241
3242 3242 if should_suppress:
3243 3243 suppression_exceptions: Set[str] = result.get(
3244 3244 "do_not_suppress", set()
3245 3245 )
3246 3246 if isinstance(suppression_recommended, Iterable):
3247 3247 to_suppress = set(suppression_recommended)
3248 3248 else:
3249 3249 to_suppress = set(matchers)
3250 3250 suppressed_matchers = to_suppress - suppression_exceptions
3251 3251
3252 3252 new_results = {}
3253 3253 for previous_matcher_id, previous_result in results.items():
3254 3254 if previous_matcher_id not in suppressed_matchers:
3255 3255 new_results[previous_matcher_id] = previous_result
3256 3256 results = new_results
3257 3257
3258 3258 results[matcher_id] = result
3259 3259
3260 3260 _, matches = self._arrange_and_extract(
3261 3261 results,
3262 3262 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3263 3263 # if it was an omission, we can remove the filtering step, otherwise remove this comment.
3264 3264 skip_matchers={jedi_matcher_id},
3265 3265 abort_if_offset_changes=False,
3266 3266 )
3267 3267
3268 3268 # populate legacy stateful API
3269 3269 self.matches = matches
3270 3270
3271 3271 return results
3272 3272
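# Illustrative sketch (not part of this class): steering the matcher pipeline
# above from user code. The matcher identifier below is hypothetical; real
# identifiers are the qualified names produced by `_get_matcher_id`.
from IPython import get_ipython

ip = get_ipython()            # assumes a running IPython session
completer = ip.Completer
# Skip a matcher entirely (hypothetical identifier):
completer.disable_matchers = ["IPCompleter.some_matcher"]
# Never let one matcher suppress the results of the others:
completer.suppress_competing_matchers = False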
3273 3273 @staticmethod
3274 3274 def _deduplicate(
3275 3275 matches: Sequence[AnyCompletion],
3276 3276 ) -> Iterable[AnyCompletion]:
3277 3277 filtered_matches: Dict[str, AnyCompletion] = {}
3278 3278 for match in matches:
3279 3279 text = match.text
3280 3280 if (
3281 3281 text not in filtered_matches
3282 3282 or filtered_matches[text].type == _UNKNOWN_TYPE
3283 3283 ):
3284 3284 filtered_matches[text] = match
3285 3285
3286 3286 return filtered_matches.values()
3287 3287
3288 3288 @staticmethod
3289 3289 def _sort(matches: Sequence[AnyCompletion]):
3290 3290 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3291 3291
3292 3292 @context_matcher()
3293 3293 def fwd_unicode_matcher(self, context: CompletionContext):
3294 3294 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3295 3295 # TODO: use `context.limit` to terminate early once we matched the maximum
3296 3296 # number that will be used downstream; can be added as an optional argument to
3297 3297 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3298 3298 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3299 3299 return _convert_matcher_v1_result_to_v2(
3300 3300 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3301 3301 )
3302 3302
3303 3303 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3304 3304 """
3305 3305 Forward match a string starting with a backslash with a list of
3306 3306 potential Unicode completions.
3307 3307
3308 3308 Will compute list of Unicode character names on first call and cache it.
3309 3309
3310 3310 .. deprecated:: 8.6
3311 3311 You can use :meth:`fwd_unicode_matcher` instead.
3312 3312
3313 3313 Returns
3314 3314 -------
3315 3315 A tuple with:
3316 3316 - matched text (empty if no matches)
3317 3317 - list of potential completions (empty tuple if there are none)
3318 3318 """
3319 3319 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3320 3320 # We could do a faster match using a Trie.
3321 3321
3322 3322 # Using pygtrie the following seems to work:
3323 3323
3324 3324 # s = PrefixSet()
3325 3325
3326 3326 # for c in range(0,0x10FFFF + 1):
3327 3327 # try:
3328 3328 # s.add(unicodedata.name(chr(c)))
3329 3329 # except ValueError:
3330 3330 # pass
3331 3331 # [''.join(k) for k in s.iter(prefix)]
3332 3332
3333 3333 # But this needs to be timed and adds an extra dependency.
3334 3334
3335 3335 slashpos = text.rfind('\\')
3336 3336 # if the text contains a backslash
3337 3337 if slashpos > -1:
3338 3338 # PERF: It's important that we don't access self._unicode_names
3339 3339 # until we're inside this if-block. _unicode_names is lazily
3340 3340 # initialized, and it takes a user-noticeable amount of time to
3341 3341 # initialize it, so we don't want to initialize it unless we're
3342 3342 # actually going to use it.
3343 3343 s = text[slashpos + 1 :]
3344 3344 sup = s.upper()
3345 3345 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3346 3346 if candidates:
3347 3347 return s, candidates
3348 3348 candidates = [x for x in self.unicode_names if sup in x]
3349 3349 if candidates:
3350 3350 return s, candidates
3351 3351 splitsup = sup.split(" ")
3352 3352 candidates = [
3353 3353 x for x in self.unicode_names if all(u in x for u in splitsup)
3354 3354 ]
3355 3355 if candidates:
3356 3356 return s, candidates
3357 3357
3358 3358 return "", ()
3359 3359
3360 3360 # if the text does not contain a backslash
3361 3361 else:
3362 3362 return '', ()
3363 3363
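# Illustrative sketch (not part of this class) of the forward unicode matching
# implemented above; it assumes a bare IPCompleter can be constructed standalone.
from IPython.core.completer import IPCompleter

completer = IPCompleter()
fragment, names = completer.fwd_unicode_match("\\GREEK SMALL LETTER AL")
# `fragment` is the text after the backslash and `names` contains the matching
# code point names, e.g. "GREEK SMALL LETTER ALPHA". The first call is slow
# because the name list is computed lazily, as noted in the docstring above.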
3364 3364 @property
3365 3365 def unicode_names(self) -> List[str]:
3366 3366 """List of names of unicode code points that can be completed.
3367 3367
3368 3368 The list is lazily initialized on first access.
3369 3369 """
3370 3370 if self._unicode_names is None:
3371 3371 names = []
3372 3372 for c in range(0,0x10FFFF + 1):
3373 3373 try:
3374 3374 names.append(unicodedata.name(chr(c)))
3375 3375 except ValueError:
3376 3376 pass
3377 3377 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3378 3378
3379 3379 return self._unicode_names
3380 3380
3381 3381 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3382 3382 names = []
3383 3383 for start,stop in ranges:
3384 3384 for c in range(start, stop) :
3385 3385 try:
3386 3386 names.append(unicodedata.name(chr(c)))
3387 3387 except ValueError:
3388 3388 pass
3389 3389 return names
@@ -1,382 +1,382
1 1 # encoding: utf-8
2 2 """Implementations for various useful completers.
3 3
4 4 These are all loaded by default by IPython.
5 5 """
6 6 #-----------------------------------------------------------------------------
7 7 # Copyright (C) 2010-2011 The IPython Development Team.
8 8 #
9 9 # Distributed under the terms of the BSD License.
10 10 #
11 11 # The full license is in the file COPYING.txt, distributed with this software.
12 12 #-----------------------------------------------------------------------------
13 13
14 14 #-----------------------------------------------------------------------------
15 15 # Imports
16 16 #-----------------------------------------------------------------------------
17 17
18 18 # Stdlib imports
19 19 import glob
20 20 import inspect
21 21 import os
22 22 import re
23 23 import sys
24 24 from importlib import import_module
25 25 from importlib.machinery import all_suffixes
26 26
27 27
28 28 # Third-party imports
29 29 from time import time
30 30 from zipimport import zipimporter
31 31
32 32 # Our own imports
33 33 from .completer import expand_user, compress_user
34 34 from .error import TryNext
35 35 from ..utils._process_common import arg_split
36 36
37 37 # FIXME: this should be pulled in with the right call via the component system
38 38 from IPython import get_ipython
39 39
40 40 from typing import List
41 41
42 42 #-----------------------------------------------------------------------------
43 43 # Globals and constants
44 44 #-----------------------------------------------------------------------------
45 45 _suffixes = all_suffixes()
46 46
47 47 # Time in seconds after which the rootmodules will be stored permanently in the
48 48 # ipython ip.db database (kept in the user's .ipython dir).
49 49 TIMEOUT_STORAGE = 2
50 50
51 51 # Time in seconds after which we give up
52 52 TIMEOUT_GIVEUP = 20
53 53
54 54 # Regular expression for the python import statement
55 55 import_re = re.compile(r'(?P<name>[^\W\d]\w*?)'
56 56 r'(?P<package>[/\\]__init__)?'
57 57 r'(?P<suffix>%s)$' %
58 58 r'|'.join(re.escape(s) for s in _suffixes))
59 59
60 60 # RE for the ipython %run command (python + ipython scripts)
61 61 magic_run_re = re.compile(r'.*(\.ipy|\.ipynb|\.py[w]?)$')
62 62
63 63 #-----------------------------------------------------------------------------
64 64 # Local utilities
65 65 #-----------------------------------------------------------------------------
66 66
67 67
68 68 def module_list(path: str) -> List[str]:
69 69 """
70 70 Return the list containing the names of the modules available in the given
71 71 folder.
72 72 """
73 73 # sys.path has the cwd as an empty string, but isdir/listdir need it as '.'
74 74 if path == '':
75 75 path = '.'
76 76
77 77 # A few local constants to be used in loops below
78 78 pjoin = os.path.join
79 79
80 80 if os.path.isdir(path):
81 81 # Build a list of all files in the directory and all files
82 82 # in its subdirectories. For performance reasons, do not
83 83 # recurse more than one level into subdirectories.
84 84 files: List[str] = []
85 85 for root, dirs, nondirs in os.walk(path, followlinks=True):
86 86 subdir = root[len(path)+1:]
87 87 if subdir:
88 88 files.extend(pjoin(subdir, f) for f in nondirs)
89 89 dirs[:] = [] # Do not recurse into additional subdirectories.
90 90 else:
91 91 files.extend(nondirs)
92 92
93 93 else:
94 94 try:
95 95 files = list(zipimporter(path)._files.keys()) # type: ignore
96 96 except Exception:
97 97 files = []
98 98
99 99 # Build a list of modules which match the import_re regex.
100 100 modules = []
101 101 for f in files:
102 102 m = import_re.match(f)
103 103 if m:
104 104 modules.append(m.group('name'))
105 105 return list(set(modules))
106 106
107 107
108 108 def get_root_modules():
109 109 """
110 110 Returns a list containing the names of all the modules available in the
111 111 folders of the pythonpath.
112 112
113 113 ip.db['rootmodules_cache'] maps sys.path entries to list of modules.
114 114 """
115 115 ip = get_ipython()
116 116 if ip is None:
117 117 # No global shell instance to store cached list of modules.
118 118 # Don't try to scan for modules every time.
119 119 return list(sys.builtin_module_names)
120 120
121 121 if getattr(ip.db, "_mock", False):
122 122 rootmodules_cache = {}
123 123 else:
124 124 rootmodules_cache = ip.db.get("rootmodules_cache", {})
125 125 rootmodules = list(sys.builtin_module_names)
126 126 start_time = time()
127 127 store = False
128 128 for path in sys.path:
129 129 try:
130 130 modules = rootmodules_cache[path]
131 131 except KeyError:
132 132 modules = module_list(path)
133 133 try:
134 134 modules.remove('__init__')
135 135 except ValueError:
136 136 pass
137 137 if path not in ('', '.'): # cwd modules should not be cached
138 138 rootmodules_cache[path] = modules
139 139 if time() - start_time > TIMEOUT_STORAGE and not store:
140 140 store = True
141 141 print("\nCaching the list of root modules, please wait!")
142 142 print("(This will only be done once - type '%rehashx' to "
143 143 "reset cache!)\n")
144 144 sys.stdout.flush()
145 145 if time() - start_time > TIMEOUT_GIVEUP:
146 146 print("This is taking too long, we give up.\n")
147 147 return []
148 148 rootmodules.extend(modules)
149 149 if store:
150 150 ip.db['rootmodules_cache'] = rootmodules_cache
151 151 rootmodules = list(set(rootmodules))
152 152 return rootmodules
153 153
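# Quick usage sketch (not part of this module): list importable top-level
# modules. Without a running IPython shell this falls back to the builtin
# module names only, as described in the docstring above.
from IPython.core.completerlib import get_root_modules

top_level = get_root_modules()
print(len(top_level), "root modules found")
print("sys" in top_level)   # builtin modules are always included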
154 154
155 155 def is_importable(module, attr: str, only_modules) -> bool:
156 156 if only_modules:
157 157 try:
158 158 mod = getattr(module, attr)
159 159 except ModuleNotFoundError:
160 160 # See gh-14434
161 161 return False
162 162 return inspect.ismodule(mod)
163 163 else:
164 164 return not(attr[:2] == '__' and attr[-2:] == '__')
165 165
166 166 def is_possible_submodule(module, attr):
167 167 try:
168 168 obj = getattr(module, attr)
169 169 except AttributeError:
170 # Is possilby an unimported submodule
170 # Is possibly an unimported submodule
171 171 return True
172 172 except TypeError:
173 173 # https://github.com/ipython/ipython/issues/9678
174 174 return False
175 175 return inspect.ismodule(obj)
176 176
177 177
178 178 def try_import(mod: str, only_modules=False) -> List[str]:
179 179 """
180 180 Try to import given module and return list of potential completions.
181 181 """
182 182 mod = mod.rstrip('.')
183 183 try:
184 184 m = import_module(mod)
185 185 except:
186 186 return []
187 187
188 188 m_is_init = '__init__' in (getattr(m, '__file__', '') or '')
189 189
190 190 completions = []
191 191 if (not hasattr(m, '__file__')) or (not only_modules) or m_is_init:
192 192 completions.extend( [attr for attr in dir(m) if
193 193 is_importable(m, attr, only_modules)])
194 194
195 195 m_all = getattr(m, "__all__", [])
196 196 if only_modules:
197 197 completions.extend(attr for attr in m_all if is_possible_submodule(m, attr))
198 198 else:
199 199 completions.extend(m_all)
200 200
201 201 if m_is_init:
202 202 file_ = m.__file__
203 203 file_path = os.path.dirname(file_) # type: ignore
204 204 if file_path is not None:
205 205 completions.extend(module_list(file_path))
206 206 completions_set = {c for c in completions if isinstance(c, str)}
207 207 completions_set.discard('__init__')
208 208 return list(completions_set)
209 209
210 210
211 211 #-----------------------------------------------------------------------------
212 212 # Completion-related functions.
213 213 #-----------------------------------------------------------------------------
214 214
215 215 def quick_completer(cmd, completions):
216 216 r""" Easily create a trivial completer for a command.
217 217
218 218 Takes either a list of completions, or all completions in a string (that
219 219 will be split on whitespace).
220 220
221 221 Example::
222 222
223 223 [d:\ipython]|1> import ipy_completers
224 224 [d:\ipython]|2> ipy_completers.quick_completer('foo', ['bar','baz'])
225 225 [d:\ipython]|3> foo b<TAB>
226 226 bar baz
227 227 [d:\ipython]|3> foo ba
228 228 """
229 229
230 230 if isinstance(completions, str):
231 231 completions = completions.split()
232 232
233 233 def do_complete(self, event):
234 234 return completions
235 235
236 236 get_ipython().set_hook('complete_command',do_complete, str_key = cmd)
237 237
238 238 def module_completion(line):
239 239 """
240 240 Returns a list containing the completion possibilities for an import line.
241 241
242 242 The line looks like this:
243 243 'import xml.d'
244 244 'from xml.dom import'
245 245 """
246 246
247 247 words = line.split(' ')
248 248 nwords = len(words)
249 249
250 250 # from whatever <tab> -> 'import '
251 251 if nwords == 3 and words[0] == 'from':
252 252 return ['import ']
253 253
254 254 # 'from xy<tab>' or 'import xy<tab>'
255 255 if nwords < 3 and (words[0] in {'%aimport', 'import', 'from'}) :
256 256 if nwords == 1:
257 257 return get_root_modules()
258 258 mod = words[1].split('.')
259 259 if len(mod) < 2:
260 260 return get_root_modules()
261 261 completion_list = try_import('.'.join(mod[:-1]), True)
262 262 return ['.'.join(mod[:-1] + [el]) for el in completion_list]
263 263
264 264 # 'from xyz import abc<tab>'
265 265 if nwords >= 3 and words[0] == 'from':
266 266 mod = words[1]
267 267 return try_import(mod)
268 268
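# Usage sketch for the import-line completion above (not part of this module).
from IPython.core.completerlib import module_completion

print(module_completion("import xml.d"))          # submodules of xml, e.g. 'xml.dom'
print(module_completion("from xml.dom import "))  # names importable from xml.dom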
269 269 #-----------------------------------------------------------------------------
270 270 # Completers
271 271 #-----------------------------------------------------------------------------
272 272 # These all have the func(self, event) signature to be used as custom
273 273 # completers
274 274
275 275 def module_completer(self,event):
276 276 """Give completions after user has typed 'import ...' or 'from ...'"""
277 277
278 278 # This works in all versions of python. While 2.5 has
279 279 # pkgutil.walk_packages(), that particular routine is fairly dangerous,
280 280 # since it imports *EVERYTHING* on sys.path. That is: a) very slow b) full
281 281 # of possibly problematic side effects.
282 282 # This searches the folders in sys.path for available modules.
283 283
284 284 return module_completion(event.line)
285 285
286 286 # FIXME: there's a lot of logic common to the run, cd and builtin file
287 287 # completers, that is currently reimplemented in each.
288 288
289 289 def magic_run_completer(self, event):
290 290 """Complete files that end in .py or .ipy or .ipynb for the %run command.
291 291 """
292 292 comps = arg_split(event.line, strict=False)
293 293 # relpath should be the current token that we need to complete.
294 294 if (len(comps) > 1) and (not event.line.endswith(' ')):
295 295 relpath = comps[-1].strip("'\"")
296 296 else:
297 297 relpath = ''
298 298
299 299 #print("\nev=", event) # dbg
300 300 #print("rp=", relpath) # dbg
301 301 #print('comps=', comps) # dbg
302 302
303 303 lglob = glob.glob
304 304 isdir = os.path.isdir
305 305 relpath, tilde_expand, tilde_val = expand_user(relpath)
306 306
307 307 # Find if the user has already typed the first filename, after which we
308 308 # should complete on all files, since after the first one other files may
309 309 # be arguments to the input script.
310 310
311 311 if any(magic_run_re.match(c) for c in comps):
312 312 matches = [f.replace('\\','/') + ('/' if isdir(f) else '')
313 313 for f in lglob(relpath+'*')]
314 314 else:
315 315 dirs = [f.replace('\\','/') + "/" for f in lglob(relpath+'*') if isdir(f)]
316 316 pys = [f.replace('\\','/')
317 317 for f in lglob(relpath+'*.py') + lglob(relpath+'*.ipy') +
318 318 lglob(relpath+'*.ipynb') + lglob(relpath + '*.pyw')]
319 319
320 320 matches = dirs + pys
321 321
322 322 #print('run comp:', dirs+pys) # dbg
323 323 return [compress_user(p, tilde_expand, tilde_val) for p in matches]
324 324
325 325
326 326 def cd_completer(self, event):
327 327 """Completer function for cd, which only returns directories."""
328 328 ip = get_ipython()
329 329 relpath = event.symbol
330 330
331 331 #print(event) # dbg
332 332 if event.line.endswith('-b') or ' -b ' in event.line:
333 333 # return only bookmark completions
334 334 bkms = self.db.get('bookmarks', None)
335 335 if bkms:
336 336 return bkms.keys()
337 337 else:
338 338 return []
339 339
340 340 if event.symbol == '-':
341 341 width_dh = str(len(str(len(ip.user_ns['_dh']) + 1)))
342 342 # jump in directory history by number
343 343 fmt = '-%0' + width_dh +'d [%s]'
344 344 ents = [ fmt % (i,s) for i,s in enumerate(ip.user_ns['_dh'])]
345 345 if len(ents) > 1:
346 346 return ents
347 347 return []
348 348
349 349 if event.symbol.startswith('--'):
350 350 return ["--" + os.path.basename(d) for d in ip.user_ns['_dh']]
351 351
352 352 # Expand ~ in path and normalize directory separators.
353 353 relpath, tilde_expand, tilde_val = expand_user(relpath)
354 354 relpath = relpath.replace('\\','/')
355 355
356 356 found = []
357 357 for d in [f.replace('\\','/') + '/' for f in glob.glob(relpath+'*')
358 358 if os.path.isdir(f)]:
359 359 if ' ' in d:
360 360 # we don't want to deal with any of that, complex code
361 361 # for this is elsewhere
362 362 raise TryNext
363 363
364 364 found.append(d)
365 365
366 366 if not found:
367 367 if os.path.isdir(relpath):
368 368 return [compress_user(relpath, tilde_expand, tilde_val)]
369 369
370 370 # if no completions so far, try bookmarks
371 371 bks = self.db.get('bookmarks',{})
372 372 bkmatches = [s for s in bks if s.startswith(event.symbol)]
373 373 if bkmatches:
374 374 return bkmatches
375 375
376 376 raise TryNext
377 377
378 378 return [compress_user(p, tilde_expand, tilde_val) for p in found]
379 379
380 380 def reset_completer(self, event):
381 381 "A completer for %reset magic"
382 382 return '-f -s in out array dhist'.split()
@@ -1,1136 +1,1136
1 1 """
2 2 Pdb debugger class.
3 3
4 4
5 5 This is an extension to PDB which adds a number of new features.
6 6 Note that there is also the `IPython.terminal.debugger` class which provides UI
7 7 improvements.
8 8
9 9 We also strongly recommend using this via the `ipdb` package, which provides
10 10 extra configuration options.
11 11
12 12 Among other things, this subclass of PDB:
13 13 - supports many IPython magics like pdef/psource
14 14 - hides frames in tracebacks based on `__tracebackhide__`
15 15 - allows skipping frames based on `__debuggerskip__`
16 16
17 17
18 18 Global Configuration
19 19 --------------------
20 20
21 21 The IPython debugger will by default read the global ``~/.pdbrc`` file.
22 That is to say you can list all comands supported by ipdb in your `~/.pdbrc`
22 That is to say you can list all commands supported by ipdb in your `~/.pdbrc`
23 23 configuration file, to globally configure pdb.
24 24
25 25 Example::
26 26
27 27 # ~/.pdbrc
28 28 skip_predicates debuggerskip false
29 29 skip_hidden false
30 30 context 25
31 31
32 32 Features
33 33 --------
34 34
35 35 The IPython debugger can hide and skip frames when printing or moving through
36 36 the stack. This can have a performance impact, so it can be configured.
37 37
38 38 The skipping and hiding of frames are configurable via the `skip_predicates`
39 39 command.
40 40
41 41 By default, frames from readonly files will be hidden, and frames containing
42 42 ``__tracebackhide__ = True`` will be hidden.
43 43
44 44 Frames containing ``__debuggerskip__`` will be stepped over, and frames whose
45 45 parent frame's value of ``__debuggerskip__`` is ``True`` will also be skipped.
46 46
47 47 >>> def helpers_helper():
48 48 ... pass
49 49 ...
50 50 ... def helper_1():
51 51 ... print("don't step in me")
52 52 ... helpers_helper() # will be stepped over unless breakpoint set.
53 53 ...
54 54 ...
55 55 ... def helper_2():
56 56 ... print("in me neither")
57 57 ...
58 58
59 59 One can define a decorator that wraps a function between the two helpers:
60 60
61 61 >>> def pdb_skipped_decorator(function):
62 62 ...
63 63 ...
64 64 ... def wrapped_fn(*args, **kwargs):
65 65 ... __debuggerskip__ = True
66 66 ... helper_1()
67 67 ... __debuggerskip__ = False
68 68 ... result = function(*args, **kwargs)
69 69 ... __debuggerskip__ = True
70 70 ... helper_2()
71 71 ... # setting __debuggerskip__ to False again is not necessary
72 72 ... return result
73 73 ...
74 74 ... return wrapped_fn
75 75
76 76 When decorating a function, ipdb will directly step into ``bar()`` by
77 77 default:
78 78
79 79 >>> @pdb_skipped_decorator
80 80 ... def bar(x, y):
81 81 ... return x * y
82 82
83 83
84 84 You can toggle the behavior with
85 85
86 86 ipdb> skip_predicates debuggerskip false
87 87
88 88 or configure it in your ``.pdbrc``
89 89
90 90
91 91
92 92 License
93 93 -------
94 94
95 95 Modified from the standard pdb.Pdb class to avoid including readline, so that
96 96 the command line completion of other programs which include this isn't
97 97 damaged.
98 98
99 99 In the future, this class will be expanded with improvements over the standard
100 100 pdb.
101 101
102 102 The original code in this file is mainly lifted out of cmd.py in Python 2.2,
103 103 with minor changes. Licensing should therefore be under the standard Python
104 104 terms. For details on the PSF (Python Software Foundation) standard license,
105 105 see:
106 106
107 107 https://docs.python.org/2/license.html
108 108
109 109
110 110 All the changes since then are under the same license as IPython.
111 111
112 112 """
113 113
114 114 #*****************************************************************************
115 115 #
116 116 # This file is licensed under the PSF license.
117 117 #
118 118 # Copyright (C) 2001 Python Software Foundation, www.python.org
119 119 # Copyright (C) 2005-2006 Fernando Perez. <fperez@colorado.edu>
120 120 #
121 121 #
122 122 #*****************************************************************************
123 123
124 124 from __future__ import annotations
125 125
126 126 import inspect
127 127 import linecache
128 128 import os
129 129 import re
130 130 import sys
131 131 from contextlib import contextmanager
132 132 from functools import lru_cache
133 133
134 134 from IPython import get_ipython
135 135 from IPython.core.excolors import exception_colors
136 136 from IPython.utils import PyColorize, coloransi, py3compat
137 137
138 138 from typing import TYPE_CHECKING
139 139
140 140 if TYPE_CHECKING:
141 141 # otherwise circular import
142 142 from IPython.core.interactiveshell import InteractiveShell
143 143
144 144 # skip module doctests
145 145 __skip_doctest__ = True
146 146
147 147 prompt = 'ipdb> '
148 148
149 149 # We have to check this directly from sys.argv, config struct not yet available
150 150 from pdb import Pdb as OldPdb
151 151
152 152 # Allow the set_trace code to operate outside of an ipython instance, even if
153 153 # it does so with some limitations. The rest of this support is implemented in
154 154 # the Tracer constructor.
155 155
156 156 DEBUGGERSKIP = "__debuggerskip__"
157 157
158 158
159 159 # this has been implemented in Pdb in Python 3.13 (https://github.com/python/cpython/pull/106676);
160 160 # on lower Python versions, we backported the feature.
161 161 CHAIN_EXCEPTIONS = sys.version_info < (3, 13)
162 162
163 163
164 164 def make_arrow(pad):
165 165 """generate the leading arrow in front of traceback or debugger"""
166 166 if pad >= 2:
167 167 return '-'*(pad-2) + '> '
168 168 elif pad == 1:
169 169 return '>'
170 170 return ''
171 171
172 172
173 173 def BdbQuit_excepthook(et, ev, tb, excepthook=None):
174 174 """Exception hook which handles `BdbQuit` exceptions.
175 175
176 176 All other exceptions are processed using the `excepthook`
177 177 parameter.
178 178 """
179 179 raise ValueError(
180 "`BdbQuit_excepthook` is deprecated since version 5.1. It is still arround only because it is still imported by ipdb.",
180 "`BdbQuit_excepthook` is deprecated since version 5.1. It is still around only because it is still imported by ipdb.",
181 181 )
182 182
183 183
184 184 RGX_EXTRA_INDENT = re.compile(r'(?<=\n)\s+')
185 185
186 186
187 187 def strip_indentation(multiline_string):
188 188 return RGX_EXTRA_INDENT.sub('', multiline_string)
189 189
190 190
191 191 def decorate_fn_with_doc(new_fn, old_fn, additional_text=""):
192 192 """Make new_fn have old_fn's doc string. This is particularly useful
193 193 for the ``do_...`` commands that hook into the help system.
194 194 Adapted from a comp.lang.python posting
195 195 by Duncan Booth."""
196 196 def wrapper(*args, **kw):
197 197 return new_fn(*args, **kw)
198 198 if old_fn.__doc__:
199 199 wrapper.__doc__ = strip_indentation(old_fn.__doc__) + additional_text
200 200 return wrapper
201 201
202 202
203 203 class Pdb(OldPdb):
204 204 """Modified Pdb class, does not load readline.
205 205
206 206 for a standalone version that uses prompt_toolkit, see
207 207 `IPython.terminal.debugger.TerminalPdb` and
208 208 `IPython.terminal.debugger.set_trace()`
209 209
210 210
211 211 This debugger can hide and skip frames that are tagged according to some predicates.
212 212 See the `skip_predicates` commands.
213 213
214 214 """
215 215
216 216 shell: InteractiveShell
217 217
218 218 if CHAIN_EXCEPTIONS:
219 219 MAX_CHAINED_EXCEPTION_DEPTH = 999
220 220
221 221 default_predicates = {
222 222 "tbhide": True,
223 223 "readonly": False,
224 224 "ipython_internal": True,
225 225 "debuggerskip": True,
226 226 }
227 227
228 228 def __init__(self, completekey=None, stdin=None, stdout=None, context=5, **kwargs):
229 229 """Create a new IPython debugger.
230 230
231 231 Parameters
232 232 ----------
233 233 completekey : default None
234 234 Passed to pdb.Pdb.
235 235 stdin : default None
236 236 Passed to pdb.Pdb.
237 237 stdout : default None
238 238 Passed to pdb.Pdb.
239 239 context : int
240 240 Number of lines of source code context to show when
241 241 displaying stacktrace information.
242 242 **kwargs
243 243 Passed to pdb.Pdb.
244 244
245 245 Notes
246 246 -----
247 247 The possibilities are python version dependent, see the python
248 248 docs for more info.
249 249 """
250 250
251 251 # Parent constructor:
252 252 try:
253 253 self.context = int(context)
254 254 if self.context <= 0:
255 255 raise ValueError("Context must be a positive integer")
256 256 except (TypeError, ValueError) as e:
257 257 raise ValueError("Context must be a positive integer") from e
258 258
259 259 # `kwargs` ensures full compatibility with stdlib's `pdb.Pdb`.
260 260 OldPdb.__init__(self, completekey, stdin, stdout, **kwargs)
261 261
262 262 # IPython changes...
263 263 self.shell = get_ipython()
264 264
265 265 if self.shell is None:
266 266 save_main = sys.modules['__main__']
267 267 # No IPython instance running, we must create one
268 268 from IPython.terminal.interactiveshell import \
269 269 TerminalInteractiveShell
270 270 self.shell = TerminalInteractiveShell.instance()
271 271 # needed by any code which calls __import__("__main__") after
272 272 # the debugger was entered. See also #9941.
273 273 sys.modules["__main__"] = save_main
274 274
275 275
276 276 color_scheme = self.shell.colors
277 277
278 278 self.aliases = {}
279 279
280 280 # Create color table: we copy the default one from the traceback
281 281 # module and add a few attributes needed for debugging
282 282 self.color_scheme_table = exception_colors()
283 283
284 284 # shorthands
285 285 C = coloransi.TermColors
286 286 cst = self.color_scheme_table
287 287
288 288
289 289 # Add a python parser so we can syntax highlight source while
290 290 # debugging.
291 291 self.parser = PyColorize.Parser(style=color_scheme)
292 292 self.set_colors(color_scheme)
293 293
294 294 # Set the prompt - the default prompt is '(Pdb)'
295 295 self.prompt = prompt
296 296 self.skip_hidden = True
297 297 self.report_skipped = True
298 298
299 299 # list of predicates we use to skip frames
300 300 self._predicates = self.default_predicates
301 301
302 302 if CHAIN_EXCEPTIONS:
303 303 self._chained_exceptions = tuple()
304 304 self._chained_exception_index = 0
305 305
306 306 #
307 307 def set_colors(self, scheme):
308 308 """Shorthand access to the color table scheme selector method."""
309 309 self.color_scheme_table.set_active_scheme(scheme)
310 310 self.parser.style = scheme
311 311
312 312 def set_trace(self, frame=None):
313 313 if frame is None:
314 314 frame = sys._getframe().f_back
315 315 self.initial_frame = frame
316 316 return super().set_trace(frame)
317 317
318 318 def _hidden_predicate(self, frame):
319 319 """
320 320 Given a frame, return whether it should be hidden or not by IPython.
321 321 """
322 322
323 323 if self._predicates["readonly"]:
324 324 fname = frame.f_code.co_filename
325 325 # we need to check for file existence; interactively defined
326 326 # functions would otherwise appear as read-only.
327 327 if os.path.isfile(fname) and not os.access(fname, os.W_OK):
328 328 return True
329 329
330 330 if self._predicates["tbhide"]:
331 331 if frame in (self.curframe, getattr(self, "initial_frame", None)):
332 332 return False
333 333 frame_locals = self._get_frame_locals(frame)
334 334 if "__tracebackhide__" not in frame_locals:
335 335 return False
336 336 return frame_locals["__tracebackhide__"]
337 337 return False
338 338
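# Illustrative sketch (not part of this class) of the __tracebackhide__
# behaviour checked above; the function names are hypothetical and nothing
# here runs on import.
def _glue():
    __tracebackhide__ = True   # this frame is hidden from `where`/`up`/`down`
    return 1 / 0               # the error itself is still reported normally

def compute():
    return _glue()

# After compute() raises ZeroDivisionError, `%debug` reports
# "[... skipping 1 hidden frame(s)]" in place of the _glue frame, unless
# `skip_hidden false` is issued at the ipdb prompt.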
339 339 def hidden_frames(self, stack):
340 340 """
341 341 Given an index in the stack return whether it should be skipped.
342 342
343 343 This is used in up/down and where to skip frames.
344 344 """
345 345 # The f_locals dictionary is updated from the actual frame
346 346 # locals whenever the .f_locals accessor is called, so we
347 347 # avoid calling it here to preserve self.curframe_locals.
348 348 # Furthermore, there is no good reason to hide the current frame.
349 349 ip_hide = [self._hidden_predicate(s[0]) for s in stack]
350 350 ip_start = [i for i, s in enumerate(ip_hide) if s == "__ipython_bottom__"]
351 351 if ip_start and self._predicates["ipython_internal"]:
352 352 ip_hide = [h if i > ip_start[0] else True for (i, h) in enumerate(ip_hide)]
353 353 return ip_hide
354 354
355 355 if CHAIN_EXCEPTIONS:
356 356
357 357 def _get_tb_and_exceptions(self, tb_or_exc):
358 358 """
359 359 Given a traceback or an exception, return a tuple of chained exceptions
360 360 and current traceback to inspect.
361 361 This will deal with selecting the right ``__cause__`` or ``__context__``
362 362 as well as handling cycles, and return a flattened list of exceptions we
363 363 can jump to with do_exceptions.
364 364 """
365 365 _exceptions = []
366 366 if isinstance(tb_or_exc, BaseException):
367 367 traceback, current = tb_or_exc.__traceback__, tb_or_exc
368 368
369 369 while current is not None:
370 370 if current in _exceptions:
371 371 break
372 372 _exceptions.append(current)
373 373 if current.__cause__ is not None:
374 374 current = current.__cause__
375 375 elif (
376 376 current.__context__ is not None
377 377 and not current.__suppress_context__
378 378 ):
379 379 current = current.__context__
380 380
381 381 if len(_exceptions) >= self.MAX_CHAINED_EXCEPTION_DEPTH:
382 382 self.message(
383 383 f"More than {self.MAX_CHAINED_EXCEPTION_DEPTH}"
384 384 " chained exceptions found, not all exceptions"
385 385 "will be browsable with `exceptions`."
386 386 )
387 387 break
388 388 else:
389 389 traceback = tb_or_exc
390 390 return tuple(reversed(_exceptions)), traceback
391 391
392 392 @contextmanager
393 393 def _hold_exceptions(self, exceptions):
394 394 """
394 394 Context manager to ensure proper cleanup of exception references.
396 396 When given a chained exception instead of a traceback,
397 397 pdb may hold references to many objects which may leak memory.
397 397 We use this context manager to make sure everything is properly cleaned up.
399 399 """
400 400 try:
401 401 self._chained_exceptions = exceptions
402 402 self._chained_exception_index = len(exceptions) - 1
403 403 yield
404 404 finally:
405 405 # we can't put those in forget as otherwise they would
406 406 # be cleared on exception change
407 407 self._chained_exceptions = tuple()
408 408 self._chained_exception_index = 0
409 409
410 410 def do_exceptions(self, arg):
411 411 """exceptions [number]
412 412 List or change current exception in an exception chain.
413 413 Without arguments, list all the exceptions in the exception
414 414 chain. Exceptions will be numbered, with the current exception indicated
415 415 with an arrow.
416 416 If given an integer as argument, switch to the exception at that index.
417 417 """
418 418 if not self._chained_exceptions:
419 419 self.message(
420 420 "Did not find chained exceptions. To move between"
421 421 " exceptions, pdb/post_mortem must be given an exception"
422 422 " object rather than a traceback."
423 423 )
424 424 return
425 425 if not arg:
426 426 for ix, exc in enumerate(self._chained_exceptions):
427 427 prompt = ">" if ix == self._chained_exception_index else " "
428 428 rep = repr(exc)
429 429 if len(rep) > 80:
430 430 rep = rep[:77] + "..."
431 431 indicator = (
432 432 " -"
433 433 if self._chained_exceptions[ix].__traceback__ is None
434 434 else f"{ix:>3}"
435 435 )
436 436 self.message(f"{prompt} {indicator} {rep}")
437 437 else:
438 438 try:
439 439 number = int(arg)
440 440 except ValueError:
441 441 self.error("Argument must be an integer")
442 442 return
443 443 if 0 <= number < len(self._chained_exceptions):
444 444 if self._chained_exceptions[number].__traceback__ is None:
445 445 self.error(
446 446 "This exception does not have a traceback, cannot jump to it"
447 447 )
448 448 return
449 449
450 450 self._chained_exception_index = number
451 451 self.setup(None, self._chained_exceptions[number].__traceback__)
452 452 self.print_stack_entry(self.stack[self.curindex])
453 453 else:
454 454 self.error("No exception with that number")
455 455
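# Illustrative sketch (not a method of this class): hand the debugger an
# exception *object*, rather than a traceback, so that the `exceptions`
# command above can navigate the chain. `_fail` is a hypothetical helper and
# the last line opens an interactive ipdb prompt.
from IPython.core.debugger import Pdb

def _fail():
    try:
        1 / 0
    except ZeroDivisionError as exc:
        raise RuntimeError("wrapper error") from exc

try:
    _fail()
except RuntimeError as err:
    Pdb().interaction(None, err)   # at the prompt: `exceptions`, then `exceptions 0`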
456 456 def interaction(self, frame, tb_or_exc):
457 457 try:
458 458 if CHAIN_EXCEPTIONS:
459 459 # this context manager is part of interaction in 3.13
460 460 _chained_exceptions, tb = self._get_tb_and_exceptions(tb_or_exc)
461 461 if isinstance(tb_or_exc, BaseException):
462 462 assert tb is not None, "main exception must have a traceback"
463 463 with self._hold_exceptions(_chained_exceptions):
464 464 OldPdb.interaction(self, frame, tb)
465 465 else:
466 466 OldPdb.interaction(self, frame, tb_or_exc)
467 467
468 468 except KeyboardInterrupt:
469 469 self.stdout.write("\n" + self.shell.get_exception_only())
470 470
471 471 def precmd(self, line):
472 472 """Perform useful escapes on the command before it is executed."""
473 473
474 474 if line.endswith("??"):
475 475 line = "pinfo2 " + line[:-2]
476 476 elif line.endswith("?"):
477 477 line = "pinfo " + line[:-1]
478 478
479 479 line = super().precmd(line)
480 480
481 481 return line
482 482
483 483 def new_do_quit(self, arg):
484 484 return OldPdb.do_quit(self, arg)
485 485
486 486 do_q = do_quit = decorate_fn_with_doc(new_do_quit, OldPdb.do_quit)
487 487
488 488 def print_stack_trace(self, context=None):
489 489 Colors = self.color_scheme_table.active_colors
490 490 ColorsNormal = Colors.Normal
491 491 if context is None:
492 492 context = self.context
493 493 try:
494 494 context = int(context)
495 495 if context <= 0:
496 496 raise ValueError("Context must be a positive integer")
497 497 except (TypeError, ValueError) as e:
498 498 raise ValueError("Context must be a positive integer") from e
499 499 try:
500 500 skipped = 0
501 501 for hidden, frame_lineno in zip(self.hidden_frames(self.stack), self.stack):
502 502 if hidden and self.skip_hidden:
503 503 skipped += 1
504 504 continue
505 505 if skipped:
506 506 print(
507 507 f"{Colors.excName} [... skipping {skipped} hidden frame(s)]{ColorsNormal}\n"
508 508 )
509 509 skipped = 0
510 510 self.print_stack_entry(frame_lineno, context=context)
511 511 if skipped:
512 512 print(
513 513 f"{Colors.excName} [... skipping {skipped} hidden frame(s)]{ColorsNormal}\n"
514 514 )
515 515 except KeyboardInterrupt:
516 516 pass
517 517
518 518 def print_stack_entry(self, frame_lineno, prompt_prefix='\n-> ',
519 519 context=None):
520 520 if context is None:
521 521 context = self.context
522 522 try:
523 523 context = int(context)
524 524 if context <= 0:
525 525 raise ValueError("Context must be a positive integer")
526 526 except (TypeError, ValueError) as e:
527 527 raise ValueError("Context must be a positive integer") from e
528 528 print(self.format_stack_entry(frame_lineno, '', context), file=self.stdout)
529 529
530 530 # vds: >>
531 531 frame, lineno = frame_lineno
532 532 filename = frame.f_code.co_filename
533 533 self.shell.hooks.synchronize_with_editor(filename, lineno, 0)
534 534 # vds: <<
535 535
536 536 def _get_frame_locals(self, frame):
537 537 """ "
538 538 Accessing f_local of current frame reset the namespace, so we want to avoid
539 539 that or the following can happen
540 540
541 541 ipdb> foo
542 542 "old"
543 543 ipdb> foo = "new"
544 544 ipdb> foo
545 545 "new"
546 546 ipdb> where
547 547 ipdb> foo
548 548 "old"
549 549
550 550 So if frame is self.curframe we instead return self.curframe_locals.
551 551
552 552 """
553 553 if frame is self.curframe:
554 554 return self.curframe_locals
555 555 else:
556 556 return frame.f_locals
557 557
558 558 def format_stack_entry(self, frame_lineno, lprefix=': ', context=None):
559 559 if context is None:
560 560 context = self.context
561 561 try:
562 562 context = int(context)
563 563 if context <= 0:
564 564 print("Context must be a positive integer", file=self.stdout)
565 565 except (TypeError, ValueError):
566 566 print("Context must be a positive integer", file=self.stdout)
567 567
568 568 import reprlib
569 569
570 570 ret = []
571 571
572 572 Colors = self.color_scheme_table.active_colors
573 573 ColorsNormal = Colors.Normal
574 574 tpl_link = "%s%%s%s" % (Colors.filenameEm, ColorsNormal)
575 575 tpl_call = "%s%%s%s%%s%s" % (Colors.vName, Colors.valEm, ColorsNormal)
576 576 tpl_line = "%%s%s%%s %s%%s" % (Colors.lineno, ColorsNormal)
577 577 tpl_line_em = "%%s%s%%s %s%%s%s" % (Colors.linenoEm, Colors.line, ColorsNormal)
578 578
579 579 frame, lineno = frame_lineno
580 580
581 581 return_value = ''
582 582 loc_frame = self._get_frame_locals(frame)
583 583 if "__return__" in loc_frame:
584 584 rv = loc_frame["__return__"]
585 585 # return_value += '->'
586 586 return_value += reprlib.repr(rv) + "\n"
587 587 ret.append(return_value)
588 588
589 589 #s = filename + '(' + `lineno` + ')'
590 590 filename = self.canonic(frame.f_code.co_filename)
591 591 link = tpl_link % py3compat.cast_unicode(filename)
592 592
593 593 if frame.f_code.co_name:
594 594 func = frame.f_code.co_name
595 595 else:
596 596 func = "<lambda>"
597 597
598 598 call = ""
599 599 if func != "?":
600 600 if "__args__" in loc_frame:
601 601 args = reprlib.repr(loc_frame["__args__"])
602 602 else:
603 603 args = '()'
604 604 call = tpl_call % (func, args)
605 605
606 606 # The level info should be generated in the same format pdb uses, to
607 607 # avoid breaking the pdbtrack functionality of python-mode in *emacs.
608 608 if frame is self.curframe:
609 609 ret.append('> ')
610 610 else:
611 611 ret.append(" ")
612 612 ret.append("%s(%s)%s\n" % (link, lineno, call))
613 613
614 614 start = lineno - 1 - context//2
615 615 lines = linecache.getlines(filename)
616 616 start = min(start, len(lines) - context)
617 617 start = max(start, 0)
618 618 lines = lines[start : start + context]
619 619
620 620 for i, line in enumerate(lines):
621 621 show_arrow = start + 1 + i == lineno
622 622 linetpl = (frame is self.curframe or show_arrow) and tpl_line_em or tpl_line
623 623 ret.append(
624 624 self.__format_line(
625 625 linetpl, filename, start + 1 + i, line, arrow=show_arrow
626 626 )
627 627 )
628 628 return "".join(ret)
629 629
630 630 def __format_line(self, tpl_line, filename, lineno, line, arrow=False):
631 631 bp_mark = ""
632 632 bp_mark_color = ""
633 633
634 634 new_line, err = self.parser.format2(line, 'str')
635 635 if not err:
636 636 line = new_line
637 637
638 638 bp = None
639 639 if lineno in self.get_file_breaks(filename):
640 640 bps = self.get_breaks(filename, lineno)
641 641 bp = bps[-1]
642 642
643 643 if bp:
644 644 Colors = self.color_scheme_table.active_colors
645 645 bp_mark = str(bp.number)
646 646 bp_mark_color = Colors.breakpoint_enabled
647 647 if not bp.enabled:
648 648 bp_mark_color = Colors.breakpoint_disabled
649 649
650 650 numbers_width = 7
651 651 if arrow:
652 652 # This is the line with the error
653 653 pad = numbers_width - len(str(lineno)) - len(bp_mark)
654 654 num = '%s%s' % (make_arrow(pad), str(lineno))
655 655 else:
656 656 num = '%*s' % (numbers_width - len(bp_mark), str(lineno))
657 657
658 658 return tpl_line % (bp_mark_color + bp_mark, num, line)
659 659
660 660 def print_list_lines(self, filename, first, last):
661 661 """The printing (as opposed to the parsing part of a 'list'
662 662 command."""
663 663 try:
664 664 Colors = self.color_scheme_table.active_colors
665 665 ColorsNormal = Colors.Normal
666 666 tpl_line = '%%s%s%%s %s%%s' % (Colors.lineno, ColorsNormal)
667 667 tpl_line_em = '%%s%s%%s %s%%s%s' % (Colors.linenoEm, Colors.line, ColorsNormal)
668 668 src = []
669 669 if filename == "<string>" and hasattr(self, "_exec_filename"):
670 670 filename = self._exec_filename
671 671
672 672 for lineno in range(first, last+1):
673 673 line = linecache.getline(filename, lineno)
674 674 if not line:
675 675 break
676 676
677 677 if lineno == self.curframe.f_lineno:
678 678 line = self.__format_line(
679 679 tpl_line_em, filename, lineno, line, arrow=True
680 680 )
681 681 else:
682 682 line = self.__format_line(
683 683 tpl_line, filename, lineno, line, arrow=False
684 684 )
685 685
686 686 src.append(line)
687 687 self.lineno = lineno
688 688
689 689 print(''.join(src), file=self.stdout)
690 690
691 691 except KeyboardInterrupt:
692 692 pass
693 693
694 694 def do_skip_predicates(self, args):
695 695 """
696 696 Turn on/off individual predicates as to whether a frame should be hidden/skipped.
697 697
698 698 The global option to skip (or not) hidden frames is set with skip_hidden
699 699
700 700 To change the value of a predicate
701 701
702 702 skip_predicates key [true|false]
703 703
704 704 Call without arguments to see the current values.
705 705
706 706 To permanently change the value of an option add the corresponding
707 707 command to your ``~/.pdbrc`` file. If you are programmatically using the
708 708 Pdb instance you can also change the ``default_predicates`` class
709 709 attribute.
710 710 """
711 711 if not args.strip():
712 712 print("current predicates:")
713 713 for p, v in self._predicates.items():
714 714 print(" ", p, ":", v)
715 715 return
716 716 type_value = args.strip().split(" ")
717 717 if len(type_value) != 2:
718 718 print(
719 719 f"Usage: skip_predicates <type> <value>, with <type> one of {set(self._predicates.keys())}"
720 720 )
721 721 return
722 722
723 723 type_, value = type_value
724 724 if type_ not in self._predicates:
725 725 print(f"{type_!r} not in {set(self._predicates.keys())}")
726 726 return
727 727 if value.lower() not in ("true", "yes", "1", "no", "false", "0"):
728 728 print(
729 729 f"{value!r} is invalid - use one of ('true', 'yes', '1', 'no', 'false', '0')"
730 730 )
731 731 return
732 732
733 733 self._predicates[type_] = value.lower() in ("true", "yes", "1")
734 734 if not any(self._predicates.values()):
735 735 print(
736 736 "Warning, all predicates set to False, skip_hidden may not have any effects."
737 737 )
738 738
739 739 def do_skip_hidden(self, arg):
740 740 """
741 741 Change whether or not we should skip frames with the
742 742 __tracebackhide__ attribute.
743 743 """
744 744 if not arg.strip():
745 745 print(
746 746 f"skip_hidden = {self.skip_hidden}, use 'yes','no', 'true', or 'false' to change."
747 747 )
748 748 elif arg.strip().lower() in ("true", "yes"):
749 749 self.skip_hidden = True
750 750 elif arg.strip().lower() in ("false", "no"):
751 751 self.skip_hidden = False
752 752 if not any(self._predicates.values()):
753 753 print(
754 754 "Warning, all predicates set to False, skip_hidden may not have any effects."
755 755 )
756 756
757 757 def do_list(self, arg):
758 758 """Print lines of code from the current stack frame
759 759 """
760 760 self.lastcmd = 'list'
761 761 last = None
762 762 if arg and arg != ".":
763 763 try:
764 764 x = eval(arg, {}, {})
765 765 if type(x) == type(()):
766 766 first, last = x
767 767 first = int(first)
768 768 last = int(last)
769 769 if last < first:
770 770 # Assume it's a count
771 771 last = first + last
772 772 else:
773 773 first = max(1, int(x) - 5)
774 774 except:
775 775 print('*** Error in argument:', repr(arg), file=self.stdout)
776 776 return
777 777 elif self.lineno is None or arg == ".":
778 778 first = max(1, self.curframe.f_lineno - 5)
779 779 else:
780 780 first = self.lineno + 1
781 781 if last is None:
782 782 last = first + 10
783 783 self.print_list_lines(self.curframe.f_code.co_filename, first, last)
784 784
785 785 # vds: >>
786 786 lineno = first
787 787 filename = self.curframe.f_code.co_filename
788 788 self.shell.hooks.synchronize_with_editor(filename, lineno, 0)
789 789 # vds: <<
790 790
791 791 do_l = do_list
792 792
793 793 def getsourcelines(self, obj):
794 794 lines, lineno = inspect.findsource(obj)
795 795 if inspect.isframe(obj) and obj.f_globals is self._get_frame_locals(obj):
796 796 # must be a module frame: do not try to cut a block out of it
797 797 return lines, 1
798 798 elif inspect.ismodule(obj):
799 799 return lines, 1
800 800 return inspect.getblock(lines[lineno:]), lineno+1
801 801
802 802 def do_longlist(self, arg):
803 803 """Print lines of code from the current stack frame.
804 804
805 805 Shows more lines than 'list' does.
806 806 """
807 807 self.lastcmd = 'longlist'
808 808 try:
809 809 lines, lineno = self.getsourcelines(self.curframe)
810 810 except OSError as err:
811 811 self.error(err)
812 812 return
813 813 last = lineno + len(lines)
814 814 self.print_list_lines(self.curframe.f_code.co_filename, lineno, last)
815 815 do_ll = do_longlist
816 816
817 817 def do_debug(self, arg):
818 818 """debug code
819 819 Enter a recursive debugger that steps through the code
820 820 argument (which is an arbitrary expression or statement to be
821 821 executed in the current environment).
822 822 """
823 823 trace_function = sys.gettrace()
824 824 sys.settrace(None)
825 825 globals = self.curframe.f_globals
826 826 locals = self.curframe_locals
827 827 p = self.__class__(completekey=self.completekey,
828 828 stdin=self.stdin, stdout=self.stdout)
829 829 p.use_rawinput = self.use_rawinput
830 830 p.prompt = "(%s) " % self.prompt.strip()
831 831 self.message("ENTERING RECURSIVE DEBUGGER")
832 832 sys.call_tracing(p.run, (arg, globals, locals))
833 833 self.message("LEAVING RECURSIVE DEBUGGER")
834 834 sys.settrace(trace_function)
835 835 self.lastcmd = p.lastcmd
836 836
837 837 def do_pdef(self, arg):
838 838 """Print the call signature for any callable object.
839 839
840 840 The debugger interface to %pdef"""
841 841 namespaces = [
842 842 ("Locals", self.curframe_locals),
843 843 ("Globals", self.curframe.f_globals),
844 844 ]
845 845 self.shell.find_line_magic("pdef")(arg, namespaces=namespaces)
846 846
847 847 def do_pdoc(self, arg):
848 848 """Print the docstring for an object.
849 849
850 850 The debugger interface to %pdoc."""
851 851 namespaces = [
852 852 ("Locals", self.curframe_locals),
853 853 ("Globals", self.curframe.f_globals),
854 854 ]
855 855 self.shell.find_line_magic("pdoc")(arg, namespaces=namespaces)
856 856
857 857 def do_pfile(self, arg):
858 858 """Print (or run through pager) the file where an object is defined.
859 859
860 860 The debugger interface to %pfile.
861 861 """
862 862 namespaces = [
863 863 ("Locals", self.curframe_locals),
864 864 ("Globals", self.curframe.f_globals),
865 865 ]
866 866 self.shell.find_line_magic("pfile")(arg, namespaces=namespaces)
867 867
868 868 def do_pinfo(self, arg):
869 869 """Provide detailed information about an object.
870 870
871 871 The debugger interface to %pinfo, i.e., obj?."""
872 872 namespaces = [
873 873 ("Locals", self.curframe_locals),
874 874 ("Globals", self.curframe.f_globals),
875 875 ]
876 876 self.shell.find_line_magic("pinfo")(arg, namespaces=namespaces)
877 877
878 878 def do_pinfo2(self, arg):
879 879 """Provide extra detailed information about an object.
880 880
881 881 The debugger interface to %pinfo2, i.e., obj??."""
882 882 namespaces = [
883 883 ("Locals", self.curframe_locals),
884 884 ("Globals", self.curframe.f_globals),
885 885 ]
886 886 self.shell.find_line_magic("pinfo2")(arg, namespaces=namespaces)
887 887
888 888 def do_psource(self, arg):
889 889 """Print (or run through pager) the source code for an object."""
890 890 namespaces = [
891 891 ("Locals", self.curframe_locals),
892 892 ("Globals", self.curframe.f_globals),
893 893 ]
894 894 self.shell.find_line_magic("psource")(arg, namespaces=namespaces)
895 895
896 896 def do_where(self, arg):
897 897 """w(here)
898 898 Print a stack trace, with the most recent frame at the bottom.
899 899 An arrow indicates the "current frame", which determines the
900 900 context of most commands. 'bt' is an alias for this command.
901 901
902 902 Takes a number as an (optional) argument: the number of context lines to
903 903 print."""
904 904 if arg:
905 905 try:
906 906 context = int(arg)
907 907 except ValueError as err:
908 908 self.error(err)
909 909 return
910 910 self.print_stack_trace(context)
911 911 else:
912 912 self.print_stack_trace()
913 913
914 914 do_w = do_where
915 915
916 916 def break_anywhere(self, frame):
917 917 """
918 918 _stop_in_decorator_internals is overly restrictive, as we may still want
919 919 to trace function calls, so we need to also update break_anywhere so
920 920 that if we don't `stop_here` because of debugger skip, we may still
921 921 stop at any point inside the function
922 922
923 923 """
924 924
925 925 sup = super().break_anywhere(frame)
926 926 if sup:
927 927 return sup
928 928 if self._predicates["debuggerskip"]:
929 929 if DEBUGGERSKIP in frame.f_code.co_varnames:
930 930 return True
931 931 if frame.f_back and self._get_frame_locals(frame.f_back).get(DEBUGGERSKIP):
932 932 return True
933 933 return False
934 934
935 935 def _is_in_decorator_internal_and_should_skip(self, frame):
936 936 """
937 937 Utility to tell us whether we are in a decorator internal and should skip.
938 938
939 939 """
940 940 # if we are disabled don't skip
941 941 if not self._predicates["debuggerskip"]:
942 942 return False
943 943
944 944 return self._cachable_skip(frame)
945 945
946 946 @lru_cache(1024)
947 947 def _cached_one_parent_frame_debuggerskip(self, frame):
948 948 """
949 949 Cache lookups of DEBUGGERSKIP on parent frames.
950 950
951 951 This should speed up walking through deep frames when one of the
952 952 outermost frames has a debugger skip.
953 953
954 954 This is likely to introduce false positives though.
955 955 """
956 956 while getattr(frame, "f_back", None):
957 957 frame = frame.f_back
958 958 if self._get_frame_locals(frame).get(DEBUGGERSKIP):
959 959 return True
960 960 return None
961 961
962 962 @lru_cache(1024)
963 963 def _cachable_skip(self, frame):
964 964 # if frame is tagged, skip by default.
965 965 if DEBUGGERSKIP in frame.f_code.co_varnames:
966 966 return True
967 967
968 968 # if one of the parent frames has the value set to True, skip as well.
969 969 if self._cached_one_parent_frame_debuggerskip(frame):
970 970 return True
971 971
972 972 return False
973 973
974 974 def stop_here(self, frame):
975 975 if self._is_in_decorator_internal_and_should_skip(frame) is True:
976 976 return False
977 977
978 978 hidden = False
979 979 if self.skip_hidden:
980 980 hidden = self._hidden_predicate(frame)
981 981 if hidden:
982 982 if self.report_skipped:
983 983 Colors = self.color_scheme_table.active_colors
984 984 ColorsNormal = Colors.Normal
985 985 print(
986 986 f"{Colors.excName} [... skipped 1 hidden frame]{ColorsNormal}\n"
987 987 )
988 988 return super().stop_here(frame)
989 989
990 990 def do_up(self, arg):
991 991 """u(p) [count]
992 992 Move the current frame count (default one) levels up in the
993 993 stack trace (to an older frame).
994 994
995 995 Will skip hidden frames.
996 996 """
997 997 # modified version of upstream that skips
998 998 # frames with __tracebackhide__
999 999 if self.curindex == 0:
1000 1000 self.error("Oldest frame")
1001 1001 return
1002 1002 try:
1003 1003 count = int(arg or 1)
1004 1004 except ValueError:
1005 1005 self.error("Invalid frame count (%s)" % arg)
1006 1006 return
1007 1007 skipped = 0
1008 1008 if count < 0:
1009 1009 _newframe = 0
1010 1010 else:
1011 1011 counter = 0
1012 1012 hidden_frames = self.hidden_frames(self.stack)
1013 1013 for i in range(self.curindex - 1, -1, -1):
1014 1014 if hidden_frames[i] and self.skip_hidden:
1015 1015 skipped += 1
1016 1016 continue
1017 1017 counter += 1
1018 1018 if counter >= count:
1019 1019 break
1020 1020 else:
1021 1021 # if no break occurred.
1022 1022 self.error(
1023 1023 "all frames above hidden, use `skip_hidden False` to get get into those."
1024 1024 )
1025 1025 return
1026 1026
1027 1027 Colors = self.color_scheme_table.active_colors
1028 1028 ColorsNormal = Colors.Normal
1029 1029 _newframe = i
1030 1030 self._select_frame(_newframe)
1031 1031 if skipped:
1032 1032 print(
1033 1033 f"{Colors.excName} [... skipped {skipped} hidden frame(s)]{ColorsNormal}\n"
1034 1034 )
1035 1035
1036 1036 def do_down(self, arg):
1037 1037 """d(own) [count]
1038 1038 Move the current frame count (default one) levels down in the
1039 1039 stack trace (to a newer frame).
1040 1040
1041 1041 Will skip hidden frames.
1042 1042 """
1043 1043 if self.curindex + 1 == len(self.stack):
1044 1044 self.error("Newest frame")
1045 1045 return
1046 1046 try:
1047 1047 count = int(arg or 1)
1048 1048 except ValueError:
1049 1049 self.error("Invalid frame count (%s)" % arg)
1050 1050 return
1051 1051 if count < 0:
1052 1052 _newframe = len(self.stack) - 1
1053 1053 else:
1054 1054 counter = 0
1055 1055 skipped = 0
1056 1056 hidden_frames = self.hidden_frames(self.stack)
1057 1057 for i in range(self.curindex + 1, len(self.stack)):
1058 1058 if hidden_frames[i] and self.skip_hidden:
1059 1059 skipped += 1
1060 1060 continue
1061 1061 counter += 1
1062 1062 if counter >= count:
1063 1063 break
1064 1064 else:
1065 1065 self.error(
1066 1066 "all frames below hidden, use `skip_hidden False` to get get into those."
1067 1067 )
1068 1068 return
1069 1069
1070 1070 Colors = self.color_scheme_table.active_colors
1071 1071 ColorsNormal = Colors.Normal
1072 1072 if skipped:
1073 1073 print(
1074 1074 f"{Colors.excName} [... skipped {skipped} hidden frame(s)]{ColorsNormal}\n"
1075 1075 )
1076 1076 _newframe = i
1077 1077
1078 1078 self._select_frame(_newframe)
1079 1079
1080 1080 do_d = do_down
1081 1081 do_u = do_up
1082 1082
1083 1083 def do_context(self, context):
1084 1084 """context number_of_lines
1085 1085 Set the number of lines of source code to show when displaying
1086 1086 stacktrace information.
1087 1087 """
1088 1088 try:
1089 1089 new_context = int(context)
1090 1090 if new_context <= 0:
1091 1091 raise ValueError()
1092 1092 self.context = new_context
1093 1093 except ValueError:
1094 1094 self.error(
1095 1095 f"The 'context' command requires a positive integer argument (current value {self.context})."
1096 1096 )
1097 1097
1098 1098
1099 1099 class InterruptiblePdb(Pdb):
1100 1100 """Version of debugger where KeyboardInterrupt exits the debugger altogether."""
1101 1101
1102 1102 def cmdloop(self, intro=None):
1103 1103 """Wrap cmdloop() such that KeyboardInterrupt stops the debugger."""
1104 1104 try:
1105 1105 return OldPdb.cmdloop(self, intro=intro)
1106 1106 except KeyboardInterrupt:
1107 1107 self.stop_here = lambda frame: False
1108 1108 self.do_quit("")
1109 1109 sys.settrace(None)
1110 1110 self.quitting = False
1111 1111 raise
1112 1112
1113 1113 def _cmdloop(self):
1114 1114 while True:
1115 1115 try:
1116 1116 # keyboard interrupts allow for an easy way to cancel
1117 1117 # the current command, so allow them during interactive input
1118 1118 self.allow_kbdint = True
1119 1119 self.cmdloop()
1120 1120 self.allow_kbdint = False
1121 1121 break
1122 1122 except KeyboardInterrupt:
1123 1123 self.message('--KeyboardInterrupt--')
1124 1124 raise
1125 1125
1126 1126
1127 1127 def set_trace(frame=None, header=None):
1128 1128 """
1129 1129 Start debugging from `frame`.
1130 1130
1131 1131 If frame is not specified, debugging starts from caller's frame.
1132 1132 """
1133 1133 pdb = Pdb()
1134 1134 if header is not None:
1135 1135 pdb.message(header)
1136 1136 pdb.set_trace(frame or sys._getframe().f_back)
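
# Editor's note: a minimal usage sketch, not part of the original file; the
# function below and its name are made up for illustration.

from IPython.core.debugger import set_trace

def _example_compute(x):
    set_trace()  # opens the IPython debugger (the Pdb subclass above) here
    return x * 2

# At the debugger prompt, `skip_hidden false` turns off the hidden-frame
# skipping implemented above, and `context 11` changes how many source lines
# are shown around the current position.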
@@ -1,898 +1,898
1 1 from inspect import isclass, signature, Signature
2 2 from typing import (
3 3 Annotated,
4 4 AnyStr,
5 5 Callable,
6 6 Dict,
7 7 Literal,
8 8 NamedTuple,
9 9 NewType,
10 10 Optional,
11 11 Protocol,
12 12 Set,
13 13 Sequence,
14 14 Tuple,
15 15 Type,
16 16 TypeGuard,
17 17 Union,
18 18 get_args,
19 19 get_origin,
20 20 is_typeddict,
21 21 )
22 22 import ast
23 23 import builtins
24 24 import collections
25 25 import operator
26 26 import sys
27 27 from functools import cached_property
28 28 from dataclasses import dataclass, field
29 29 from types import MethodDescriptorType, ModuleType
30 30
31 31 from IPython.utils.decorators import undoc
32 32
33 33
34 34 if sys.version_info < (3, 11):
35 35 from typing_extensions import Self, LiteralString
36 36 else:
37 37 from typing import Self, LiteralString
38 38
39 39 if sys.version_info < (3, 12):
40 40 from typing_extensions import TypeAliasType
41 41 else:
42 42 from typing import TypeAliasType
43 43
44 44
45 45 @undoc
46 46 class HasGetItem(Protocol):
47 47 def __getitem__(self, key) -> None:
48 48 ...
49 49
50 50
51 51 @undoc
52 52 class InstancesHaveGetItem(Protocol):
53 53 def __call__(self, *args, **kwargs) -> HasGetItem:
54 54 ...
55 55
56 56
57 57 @undoc
58 58 class HasGetAttr(Protocol):
59 59 def __getattr__(self, key) -> None:
60 60 ...
61 61
62 62
63 63 @undoc
64 64 class DoesNotHaveGetAttr(Protocol):
65 65 pass
66 66
67 67
68 68 # By default `__getattr__` is not explicitly implemented on most objects
69 69 MayHaveGetattr = Union[HasGetAttr, DoesNotHaveGetAttr]
70 70
71 71
72 72 def _unbind_method(func: Callable) -> Union[Callable, None]:
73 73 """Get unbound method for given bound method.
74 74
75 75 Returns None if cannot get unbound method, or method is already unbound.
76 76 """
77 77 owner = getattr(func, "__self__", None)
78 78 owner_class = type(owner)
79 79 name = getattr(func, "__name__", None)
80 80 instance_dict_overrides = getattr(owner, "__dict__", None)
81 81 if (
82 82 owner is not None
83 83 and name
84 84 and (
85 85 not instance_dict_overrides
86 86 or (instance_dict_overrides and name not in instance_dict_overrides)
87 87 )
88 88 ):
89 89 return getattr(owner_class, name)
90 90 return None
91 91
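# Editor's note: a small illustrative check, not part of the original file;
# `_ExampleClass` is made up for this sketch.
class _ExampleClass:
    def method(self):
        return 42

assert _unbind_method(_ExampleClass().method) is _ExampleClass.method  # bound -> unbound
assert _unbind_method(_ExampleClass.method) is None  # already unbound -> None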
92 92
93 93 @undoc
94 94 @dataclass
95 95 class EvaluationPolicy:
96 96 """Definition of evaluation policy."""
97 97
98 98 allow_locals_access: bool = False
99 99 allow_globals_access: bool = False
100 100 allow_item_access: bool = False
101 101 allow_attr_access: bool = False
102 102 allow_builtins_access: bool = False
103 103 allow_all_operations: bool = False
104 104 allow_any_calls: bool = False
105 105 allowed_calls: Set[Callable] = field(default_factory=set)
106 106
107 107 def can_get_item(self, value, item):
108 108 return self.allow_item_access
109 109
110 110 def can_get_attr(self, value, attr):
111 111 return self.allow_attr_access
112 112
113 113 def can_operate(self, dunders: Tuple[str, ...], a, b=None):
114 114 if self.allow_all_operations:
115 115 return True
116 116
117 117 def can_call(self, func):
118 118 if self.allow_any_calls:
119 119 return True
120 120
121 121 if func in self.allowed_calls:
122 122 return True
123 123
124 124 owner_method = _unbind_method(func)
125 125
126 126 if owner_method and owner_method in self.allowed_calls:
127 127 return True
128 128
129 129
130 130 def _get_external(module_name: str, access_path: Sequence[str]):
131 131 """Get value from external module given a dotted access path.
132 132
133 133 Raises:
134 134 * `KeyError` if module is removed or not found, and
135 * `AttributeError` if acess path does not match an exported object
135 * `AttributeError` if access path does not match an exported object
136 136 """
137 137 member_type = sys.modules[module_name]
138 138 for attr in access_path:
139 139 member_type = getattr(member_type, attr)
140 140 return member_type
141 141
142 142
143 143 def _has_original_dunder_external(
144 144 value,
145 145 module_name: str,
146 146 access_path: Sequence[str],
147 147 method_name: str,
148 148 ):
149 149 if module_name not in sys.modules:
150 150 # LBYL (look before you leap) as it is faster
151 151 return False
152 152 try:
153 153 member_type = _get_external(module_name, access_path)
154 154 value_type = type(value)
155 155 if type(value) == member_type:
156 156 return True
157 157 if method_name == "__getattribute__":
158 158 # we have to short-circuit here due to an unresolved issue in
159 159 # `isinstance` implementation: https://bugs.python.org/issue32683
160 160 return False
161 161 if isinstance(value, member_type):
162 162 method = getattr(value_type, method_name, None)
163 163 member_method = getattr(member_type, method_name, None)
164 164 if member_method == method:
165 165 return True
166 166 except (AttributeError, KeyError):
167 167 return False
168 168
169 169
170 170 def _has_original_dunder(
171 171 value, allowed_types, allowed_methods, allowed_external, method_name
172 172 ):
173 173 # note: Python ignores `__getattr__`/`__getitem__` on instances,
174 174 # we only need to check at class level
175 175 value_type = type(value)
176 176
177 177 # strict type check passes → no need to check method
178 178 if value_type in allowed_types:
179 179 return True
180 180
181 181 method = getattr(value_type, method_name, None)
182 182
183 183 if method is None:
184 184 return None
185 185
186 186 if method in allowed_methods:
187 187 return True
188 188
189 189 for module_name, *access_path in allowed_external:
190 190 if _has_original_dunder_external(value, module_name, access_path, method_name):
191 191 return True
192 192
193 193 return False
194 194
195 195
196 196 @undoc
197 197 @dataclass
198 198 class SelectivePolicy(EvaluationPolicy):
199 199 allowed_getitem: Set[InstancesHaveGetItem] = field(default_factory=set)
200 200 allowed_getitem_external: Set[Tuple[str, ...]] = field(default_factory=set)
201 201
202 202 allowed_getattr: Set[MayHaveGetattr] = field(default_factory=set)
203 203 allowed_getattr_external: Set[Tuple[str, ...]] = field(default_factory=set)
204 204
205 205 allowed_operations: Set = field(default_factory=set)
206 206 allowed_operations_external: Set[Tuple[str, ...]] = field(default_factory=set)
207 207
208 208 _operation_methods_cache: Dict[str, Set[Callable]] = field(
209 209 default_factory=dict, init=False
210 210 )
211 211
212 212 def can_get_attr(self, value, attr):
213 213 has_original_attribute = _has_original_dunder(
214 214 value,
215 215 allowed_types=self.allowed_getattr,
216 216 allowed_methods=self._getattribute_methods,
217 217 allowed_external=self.allowed_getattr_external,
218 218 method_name="__getattribute__",
219 219 )
220 220 has_original_attr = _has_original_dunder(
221 221 value,
222 222 allowed_types=self.allowed_getattr,
223 223 allowed_methods=self._getattr_methods,
224 224 allowed_external=self.allowed_getattr_external,
225 225 method_name="__getattr__",
226 226 )
227 227
228 228 accept = False
229 229
230 230 # Many objects do not have `__getattr__`, this is fine.
231 231 if has_original_attr is None and has_original_attribute:
232 232 accept = True
233 233 else:
234 234 # Accept objects without modifications to `__getattr__` and `__getattribute__`
235 235 accept = has_original_attr and has_original_attribute
236 236
237 237 if accept:
238 # We still need to check for overriden properties.
238 # We still need to check for overridden properties.
239 239
240 240 value_class = type(value)
241 241 if not hasattr(value_class, attr):
242 242 return True
243 243
244 244 class_attr_val = getattr(value_class, attr)
245 245 is_property = isinstance(class_attr_val, property)
246 246
247 247 if not is_property:
248 248 return True
249 249
250 250 # Properties in allowed types are ok (although we do not include any
251 251 # properties in our default allow list currently).
252 252 if type(value) in self.allowed_getattr:
253 253 return True # pragma: no cover
254 254
255 255 # Properties in subclasses of allowed types may be ok if not changed
256 256 for module_name, *access_path in self.allowed_getattr_external:
257 257 try:
258 258 external_class = _get_external(module_name, access_path)
259 259 external_class_attr_val = getattr(external_class, attr)
260 260 except (KeyError, AttributeError):
261 261 return False # pragma: no cover
262 262 return class_attr_val == external_class_attr_val
263 263
264 264 return False
265 265
266 266 def can_get_item(self, value, item):
267 267 """Allow accessing `__getiitem__` of allow-listed instances unless it was not modified."""
268 268 return _has_original_dunder(
269 269 value,
270 270 allowed_types=self.allowed_getitem,
271 271 allowed_methods=self._getitem_methods,
272 272 allowed_external=self.allowed_getitem_external,
273 273 method_name="__getitem__",
274 274 )
275 275
276 276 def can_operate(self, dunders: Tuple[str, ...], a, b=None):
277 277 objects = [a]
278 278 if b is not None:
279 279 objects.append(b)
280 280 return all(
281 281 [
282 282 _has_original_dunder(
283 283 obj,
284 284 allowed_types=self.allowed_operations,
285 285 allowed_methods=self._operator_dunder_methods(dunder),
286 286 allowed_external=self.allowed_operations_external,
287 287 method_name=dunder,
288 288 )
289 289 for dunder in dunders
290 290 for obj in objects
291 291 ]
292 292 )
293 293
294 294 def _operator_dunder_methods(self, dunder: str) -> Set[Callable]:
295 295 if dunder not in self._operation_methods_cache:
296 296 self._operation_methods_cache[dunder] = self._safe_get_methods(
297 297 self.allowed_operations, dunder
298 298 )
299 299 return self._operation_methods_cache[dunder]
300 300
301 301 @cached_property
302 302 def _getitem_methods(self) -> Set[Callable]:
303 303 return self._safe_get_methods(self.allowed_getitem, "__getitem__")
304 304
305 305 @cached_property
306 306 def _getattr_methods(self) -> Set[Callable]:
307 307 return self._safe_get_methods(self.allowed_getattr, "__getattr__")
308 308
309 309 @cached_property
310 310 def _getattribute_methods(self) -> Set[Callable]:
311 311 return self._safe_get_methods(self.allowed_getattr, "__getattribute__")
312 312
313 313 def _safe_get_methods(self, classes, name) -> Set[Callable]:
314 314 return {
315 315 method
316 316 for class_ in classes
317 317 for method in [getattr(class_, name, None)]
318 318 if method
319 319 }
320 320
321 321
322 322 class _DummyNamedTuple(NamedTuple):
323 323 """Used internally to retrieve methods of named tuple instance."""
324 324
325 325
326 326 class EvaluationContext(NamedTuple):
327 327 #: Local namespace
328 328 locals: dict
329 329 #: Global namespace
330 330 globals: dict
331 331 #: Evaluation policy identifier
332 332 evaluation: Literal[
333 333 "forbidden", "minimal", "limited", "unsafe", "dangerous"
334 334 ] = "forbidden"
335 #: Whether the evalution of code takes place inside of a subscript.
335 #: Whether the evaluation of code takes place inside of a subscript.
336 336 #: Useful for evaluating ``:-1, 'col'`` in ``df[:-1, 'col']``.
337 337 in_subscript: bool = False
338 338
339 339
340 340 class _IdentitySubscript:
341 341 """Returns the key itself when item is requested via subscript."""
342 342
343 343 def __getitem__(self, key):
344 344 return key
345 345
346 346
347 347 IDENTITY_SUBSCRIPT = _IdentitySubscript()
348 348 SUBSCRIPT_MARKER = "__SUBSCRIPT_SENTINEL__"
349 349 UNKNOWN_SIGNATURE = Signature()
350 350 NOT_EVALUATED = object()
351 351
352 352
353 353 class GuardRejection(Exception):
354 354 """Exception raised when guard rejects evaluation attempt."""
355 355
356 356 pass
357 357
358 358
359 359 def guarded_eval(code: str, context: EvaluationContext):
360 360 """Evaluate provided code in the evaluation context.
361 361
362 362 If evaluation policy given by context is set to ``forbidden``
363 363 no evaluation will be performed; if it is set to ``dangerous``
364 364 standard :func:`eval` will be used; finally, for any other
365 365 policy, :func:`eval_node` will be called on the parsed AST.
366 366 """
367 367 locals_ = context.locals
368 368
369 369 if context.evaluation == "forbidden":
370 370 raise GuardRejection("Forbidden mode")
371 371
372 372 # note: not using `ast.literal_eval` as it does not implement
373 373 # getitem at all, for example it fails on simple `[0][1]`
374 374
375 375 if context.in_subscript:
376 # syntatic sugar for ellipsis (:) is only available in susbcripts
376 # syntactic sugar for ellipsis (:) is only available in subscripts
377 377 # so we need to trick the ast parser into thinking that we have
378 378 # a subscript, but we need to be able to later recognise that we did
379 379 # it so we can ignore the actual __getitem__ operation
380 380 if not code:
381 381 return tuple()
382 382 locals_ = locals_.copy()
383 383 locals_[SUBSCRIPT_MARKER] = IDENTITY_SUBSCRIPT
384 384 code = SUBSCRIPT_MARKER + "[" + code + "]"
385 385 context = EvaluationContext(**{**context._asdict(), **{"locals": locals_}})
386 386
387 387 if context.evaluation == "dangerous":
388 388 return eval(code, context.globals, context.locals)
389 389
390 390 expression = ast.parse(code, mode="eval")
391 391
392 392 return eval_node(expression, context)
393 393
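# Editor's note: an illustrative sketch, not part of the original file; kept in
# comments because the EVALUATION_POLICIES registry used by guarded_eval is only
# defined near the bottom of this module. The variable names are made up.
#
#     from IPython.core.guarded_eval import EvaluationContext, guarded_eval
#
#     ctx = EvaluationContext(locals={"a": [1, 2, 3]}, globals={}, evaluation="limited")
#     guarded_eval("a[0] + 1", ctx)   # -> 2; list.__getitem__ and int.__add__ are allow-listed
#
#     minimal = EvaluationContext(locals={}, globals={}, evaluation="minimal")
#     guarded_eval("().__class__", minimal)   # raises GuardRejection (no attribute access)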
394 394
395 395 BINARY_OP_DUNDERS: Dict[Type[ast.operator], Tuple[str]] = {
396 396 ast.Add: ("__add__",),
397 397 ast.Sub: ("__sub__",),
398 398 ast.Mult: ("__mul__",),
399 399 ast.Div: ("__truediv__",),
400 400 ast.FloorDiv: ("__floordiv__",),
401 401 ast.Mod: ("__mod__",),
402 402 ast.Pow: ("__pow__",),
403 403 ast.LShift: ("__lshift__",),
404 404 ast.RShift: ("__rshift__",),
405 405 ast.BitOr: ("__or__",),
406 406 ast.BitXor: ("__xor__",),
407 407 ast.BitAnd: ("__and__",),
408 408 ast.MatMult: ("__matmul__",),
409 409 }
410 410
411 411 COMP_OP_DUNDERS: Dict[Type[ast.cmpop], Tuple[str, ...]] = {
412 412 ast.Eq: ("__eq__",),
413 413 ast.NotEq: ("__ne__", "__eq__"),
414 414 ast.Lt: ("__lt__", "__gt__"),
415 415 ast.LtE: ("__le__", "__ge__"),
416 416 ast.Gt: ("__gt__", "__lt__"),
417 417 ast.GtE: ("__ge__", "__le__"),
418 418 ast.In: ("__contains__",),
419 419 # Note: ast.Is, ast.IsNot, ast.NotIn are handled specially
420 420 }
421 421
422 422 UNARY_OP_DUNDERS: Dict[Type[ast.unaryop], Tuple[str, ...]] = {
423 423 ast.USub: ("__neg__",),
424 424 ast.UAdd: ("__pos__",),
425 425 # we have to check both __inv__ and __invert__!
426 426 ast.Invert: ("__invert__", "__inv__"),
427 427 ast.Not: ("__not__",),
428 428 }
429 429
430 430
431 431 class ImpersonatingDuck:
432 432 """A dummy class used to create objects of other classes without calling their ``__init__``"""
433 433
434 434 # no-op: override __class__ to impersonate
435 435
436 436
437 437 class _Duck:
438 438 """A dummy class used to create objects pretending to have given attributes"""
439 439
440 440 def __init__(self, attributes: Optional[dict] = None, items: Optional[dict] = None):
441 441 self.attributes = attributes or {}
442 442 self.items = items or {}
443 443
444 444 def __getattr__(self, attr: str):
445 445 return self.attributes[attr]
446 446
447 447 def __hasattr__(self, attr: str):
448 448 return attr in self.attributes
449 449
450 450 def __dir__(self):
451 451 return [*dir(super), *self.attributes]
452 452
453 453 def __getitem__(self, key: str):
454 454 return self.items[key]
455 455
456 456 def __hasitem__(self, key: str):
457 457 return self.items[key]
458 458
459 459 def _ipython_key_completions_(self):
460 460 return self.items.keys()
461 461
462 462
463 463 def _find_dunder(node_op, dunders) -> Union[Tuple[str, ...], None]:
464 464 dunder = None
465 465 for op, candidate_dunder in dunders.items():
466 466 if isinstance(node_op, op):
467 467 dunder = candidate_dunder
468 468 return dunder
469 469
470 470
471 471 def eval_node(node: Union[ast.AST, None], context: EvaluationContext):
472 472 """Evaluate AST node in provided context.
473 473
474 474 Applies evaluation restrictions defined in the context. Currently does not support evaluation of functions with keyword arguments.
475 475
476 476 Does not evaluate actions that always have side effects:
477 477
478 478 - class definitions (``class sth: ...``)
479 479 - function definitions (``def sth: ...``)
480 480 - variable assignments (``x = 1``)
481 481 - augmented assignments (``x += 1``)
482 482 - deletions (``del x``)
483 483
484 484 Does not evaluate operations which do not return values:
485 485
486 486 - assertions (``assert x``)
487 487 - pass (``pass``)
488 488 - imports (``import x``)
489 489 - control flow:
490 490
491 491 - conditionals (``if x:``) except for ternary IfExp (``a if x else b``)
492 492 - loops (``for`` and ``while``)
493 493 - exception handling
494 494
495 495 The purpose of this function is to guard against unwanted side-effects;
496 496 it does not give guarantees on protection from malicious code execution.
497 497 """
498 498 policy = EVALUATION_POLICIES[context.evaluation]
499 499 if node is None:
500 500 return None
501 501 if isinstance(node, ast.Expression):
502 502 return eval_node(node.body, context)
503 503 if isinstance(node, ast.BinOp):
504 504 left = eval_node(node.left, context)
505 505 right = eval_node(node.right, context)
506 506 dunders = _find_dunder(node.op, BINARY_OP_DUNDERS)
507 507 if dunders:
508 508 if policy.can_operate(dunders, left, right):
509 509 return getattr(left, dunders[0])(right)
510 510 else:
511 511 raise GuardRejection(
512 512 f"Operation (`{dunders}`) for",
513 513 type(left),
514 514 f"not allowed in {context.evaluation} mode",
515 515 )
516 516 if isinstance(node, ast.Compare):
517 517 left = eval_node(node.left, context)
518 518 all_true = True
519 519 negate = False
520 520 for op, right in zip(node.ops, node.comparators):
521 521 right = eval_node(right, context)
522 522 dunder = None
523 523 dunders = _find_dunder(op, COMP_OP_DUNDERS)
524 524 if not dunders:
525 525 if isinstance(op, ast.NotIn):
526 526 dunders = COMP_OP_DUNDERS[ast.In]
527 527 negate = True
528 528 if isinstance(op, ast.Is):
529 529 dunder = "is_"
530 530 if isinstance(op, ast.IsNot):
531 531 dunder = "is_"
532 532 negate = True
533 533 if not dunder and dunders:
534 534 dunder = dunders[0]
535 535 if dunder:
536 536 a, b = (right, left) if dunder == "__contains__" else (left, right)
537 537 if dunder == "is_" or dunders and policy.can_operate(dunders, a, b):
538 538 result = getattr(operator, dunder)(a, b)
539 539 if negate:
540 540 result = not result
541 541 if not result:
542 542 all_true = False
543 543 left = right
544 544 else:
545 545 raise GuardRejection(
546 546 f"Comparison (`{dunder}`) for",
547 547 type(left),
548 548 f"not allowed in {context.evaluation} mode",
549 549 )
550 550 else:
551 551 raise ValueError(
552 552 f"Comparison `{dunder}` not supported"
553 553 ) # pragma: no cover
554 554 return all_true
555 555 if isinstance(node, ast.Constant):
556 556 return node.value
557 557 if isinstance(node, ast.Tuple):
558 558 return tuple(eval_node(e, context) for e in node.elts)
559 559 if isinstance(node, ast.List):
560 560 return [eval_node(e, context) for e in node.elts]
561 561 if isinstance(node, ast.Set):
562 562 return {eval_node(e, context) for e in node.elts}
563 563 if isinstance(node, ast.Dict):
564 564 return dict(
565 565 zip(
566 566 [eval_node(k, context) for k in node.keys],
567 567 [eval_node(v, context) for v in node.values],
568 568 )
569 569 )
570 570 if isinstance(node, ast.Slice):
571 571 return slice(
572 572 eval_node(node.lower, context),
573 573 eval_node(node.upper, context),
574 574 eval_node(node.step, context),
575 575 )
576 576 if isinstance(node, ast.UnaryOp):
577 577 value = eval_node(node.operand, context)
578 578 dunders = _find_dunder(node.op, UNARY_OP_DUNDERS)
579 579 if dunders:
580 580 if policy.can_operate(dunders, value):
581 581 return getattr(value, dunders[0])()
582 582 else:
583 583 raise GuardRejection(
584 584 f"Operation (`{dunders}`) for",
585 585 type(value),
586 586 f"not allowed in {context.evaluation} mode",
587 587 )
588 588 if isinstance(node, ast.Subscript):
589 589 value = eval_node(node.value, context)
590 590 slice_ = eval_node(node.slice, context)
591 591 if policy.can_get_item(value, slice_):
592 592 return value[slice_]
593 593 raise GuardRejection(
594 594 "Subscript access (`__getitem__`) for",
595 595 type(value), # not joined to avoid calling `repr`
596 596 f" not allowed in {context.evaluation} mode",
597 597 )
598 598 if isinstance(node, ast.Name):
599 599 return _eval_node_name(node.id, context)
600 600 if isinstance(node, ast.Attribute):
601 601 value = eval_node(node.value, context)
602 602 if policy.can_get_attr(value, node.attr):
603 603 return getattr(value, node.attr)
604 604 raise GuardRejection(
605 605 "Attribute access (`__getattr__`) for",
606 606 type(value), # not joined to avoid calling `repr`
607 607 f"not allowed in {context.evaluation} mode",
608 608 )
609 609 if isinstance(node, ast.IfExp):
610 610 test = eval_node(node.test, context)
611 611 if test:
612 612 return eval_node(node.body, context)
613 613 else:
614 614 return eval_node(node.orelse, context)
615 615 if isinstance(node, ast.Call):
616 616 func = eval_node(node.func, context)
617 617 if policy.can_call(func) and not node.keywords:
618 618 args = [eval_node(arg, context) for arg in node.args]
619 619 return func(*args)
620 620 if isclass(func):
621 621 # this code path gets entered when calling class e.g. `MyClass()`
622 622 # or `my_instance.__class__()` - in both cases `func` is `MyClass`.
623 623 # Should return `MyClass` if `__new__` is not overridden,
624 624 # otherwise whatever `__new__` return type is.
625 625 overridden_return_type = _eval_return_type(func.__new__, node, context)
626 626 if overridden_return_type is not NOT_EVALUATED:
627 627 return overridden_return_type
628 628 return _create_duck_for_heap_type(func)
629 629 else:
630 630 return_type = _eval_return_type(func, node, context)
631 631 if return_type is not NOT_EVALUATED:
632 632 return return_type
633 633 raise GuardRejection(
634 634 "Call for",
635 635 func, # not joined to avoid calling `repr`
636 636 f"not allowed in {context.evaluation} mode",
637 637 )
638 638 raise ValueError("Unhandled node", ast.dump(node))
639 639
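# Editor's note: an illustrative sketch, not part of the original file, showing
# that eval_node operates on an already-parsed AST (which is how guarded_eval
# drives it); kept in comments since EVALUATION_POLICIES is defined further down.
#
#     import ast
#     ctx = EvaluationContext(locals={"d": {"key": 1}}, globals={}, evaluation="limited")
#     eval_node(ast.parse("d['key']", mode="eval"), ctx)   # -> 1 via the allow-listed dict.__getitem__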
640 640
641 641 def _eval_return_type(func: Callable, node: ast.Call, context: EvaluationContext):
642 642 """Evaluate return type of a given callable function.
643 643
644 644 Returns the built-in type, a duck or NOT_EVALUATED sentinel.
645 645 """
646 646 try:
647 647 sig = signature(func)
648 648 except ValueError:
649 649 sig = UNKNOWN_SIGNATURE
650 650 # if annotation was not stringized, or it was stringized
651 651 # but resolved by the signature call, we know the return type
652 652 not_empty = sig.return_annotation is not Signature.empty
653 653 if not_empty:
654 654 return _resolve_annotation(sig.return_annotation, sig, func, node, context)
655 655 return NOT_EVALUATED
656 656
657 657
658 658 def _resolve_annotation(
659 659 annotation,
660 660 sig: Signature,
661 661 func: Callable,
662 662 node: ast.Call,
663 663 context: EvaluationContext,
664 664 ):
665 665 """Resolve annotation created by user with `typing` module and custom objects."""
666 666 annotation = (
667 667 _eval_node_name(annotation, context)
668 668 if isinstance(annotation, str)
669 669 else annotation
670 670 )
671 671 origin = get_origin(annotation)
672 672 if annotation is Self and hasattr(func, "__self__"):
673 673 return func.__self__
674 674 elif origin is Literal:
675 675 type_args = get_args(annotation)
676 676 if len(type_args) == 1:
677 677 return type_args[0]
678 678 elif annotation is LiteralString:
679 679 return ""
680 680 elif annotation is AnyStr:
681 681 index = None
682 682 for i, (key, value) in enumerate(sig.parameters.items()):
683 683 if value.annotation is AnyStr:
684 684 index = i
685 685 break
686 686 if index is not None and index < len(node.args):
687 687 return eval_node(node.args[index], context)
688 688 elif origin is TypeGuard:
689 689 return bool()
690 690 elif origin is Union:
691 691 attributes = [
692 692 attr
693 693 for type_arg in get_args(annotation)
694 694 for attr in dir(_resolve_annotation(type_arg, sig, func, node, context))
695 695 ]
696 696 return _Duck(attributes=dict.fromkeys(attributes))
697 697 elif is_typeddict(annotation):
698 698 return _Duck(
699 699 attributes=dict.fromkeys(dir(dict())),
700 700 items={
701 701 k: _resolve_annotation(v, sig, func, node, context)
702 702 for k, v in annotation.__annotations__.items()
703 703 },
704 704 )
705 705 elif hasattr(annotation, "_is_protocol"):
706 706 return _Duck(attributes=dict.fromkeys(dir(annotation)))
707 707 elif origin is Annotated:
708 708 type_arg = get_args(annotation)[0]
709 709 return _resolve_annotation(type_arg, sig, func, node, context)
710 710 elif isinstance(annotation, NewType):
711 711 return _eval_or_create_duck(annotation.__supertype__, node, context)
712 712 elif isinstance(annotation, TypeAliasType):
713 713 return _eval_or_create_duck(annotation.__value__, node, context)
714 714 else:
715 715 return _eval_or_create_duck(annotation, node, context)
716 716
717 717
718 718 def _eval_node_name(node_id: str, context: EvaluationContext):
719 719 policy = EVALUATION_POLICIES[context.evaluation]
720 720 if policy.allow_locals_access and node_id in context.locals:
721 721 return context.locals[node_id]
722 722 if policy.allow_globals_access and node_id in context.globals:
723 723 return context.globals[node_id]
724 724 if policy.allow_builtins_access and hasattr(builtins, node_id):
725 725 # note: do not use __builtins__, it is an implementation detail of CPython
726 726 return getattr(builtins, node_id)
727 727 if not policy.allow_globals_access and not policy.allow_locals_access:
728 728 raise GuardRejection(
729 729 f"Namespace access not allowed in {context.evaluation} mode"
730 730 )
731 731 else:
732 732 raise NameError(f"{node_id} not found in locals, globals, nor builtins")
733 733
734 734
735 735 def _eval_or_create_duck(duck_type, node: ast.Call, context: EvaluationContext):
736 736 policy = EVALUATION_POLICIES[context.evaluation]
737 737 # if allow-listed builtin is on type annotation, instantiate it
738 738 if policy.can_call(duck_type) and not node.keywords:
739 739 args = [eval_node(arg, context) for arg in node.args]
740 740 return duck_type(*args)
741 741 # if custom class is in type annotation, mock it
742 742 return _create_duck_for_heap_type(duck_type)
743 743
744 744
745 745 def _create_duck_for_heap_type(duck_type):
746 746 """Create an imitation of an object of a given type (a duck).
747 747
748 748 Returns the duck or NOT_EVALUATED sentinel if duck could not be created.
749 749 """
750 750 duck = ImpersonatingDuck()
751 751 try:
752 752 # this only works for heap types, not builtins
753 753 duck.__class__ = duck_type
754 754 return duck
755 755 except TypeError:
756 756 pass
757 757 return NOT_EVALUATED
758 758
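# Editor's note: a small illustration, not part of the original file; the
# `_ExamplePayload` class is made up. For heap (user-defined) types the duck
# impersonates the class without running __init__; for builtins such as `int`
# the __class__ assignment fails and NOT_EVALUATED is returned instead.
class _ExamplePayload:
    def __init__(self):
        raise RuntimeError("never called by the duck")

assert isinstance(_create_duck_for_heap_type(_ExamplePayload), _ExamplePayload)
assert _create_duck_for_heap_type(int) is NOT_EVALUATED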
759 759
760 760 SUPPORTED_EXTERNAL_GETITEM = {
761 761 ("pandas", "core", "indexing", "_iLocIndexer"),
762 762 ("pandas", "core", "indexing", "_LocIndexer"),
763 763 ("pandas", "DataFrame"),
764 764 ("pandas", "Series"),
765 765 ("numpy", "ndarray"),
766 766 ("numpy", "void"),
767 767 }
768 768
769 769
770 770 BUILTIN_GETITEM: Set[InstancesHaveGetItem] = {
771 771 dict,
772 772 str, # type: ignore[arg-type]
773 773 bytes, # type: ignore[arg-type]
774 774 list,
775 775 tuple,
776 776 collections.defaultdict,
777 777 collections.deque,
778 778 collections.OrderedDict,
779 779 collections.ChainMap,
780 780 collections.UserDict,
781 781 collections.UserList,
782 782 collections.UserString, # type: ignore[arg-type]
783 783 _DummyNamedTuple,
784 784 _IdentitySubscript,
785 785 }
786 786
787 787
788 788 def _list_methods(cls, source=None):
789 789 """For use on immutable objects or with methods returning a copy"""
790 790 return [getattr(cls, k) for k in (source if source else dir(cls))]
791 791
792 792
793 793 dict_non_mutating_methods = ("copy", "keys", "values", "items")
794 794 list_non_mutating_methods = ("copy", "index", "count")
795 795 set_non_mutating_methods = set(dir(set)) & set(dir(frozenset))
796 796
797 797
798 798 dict_keys: Type[collections.abc.KeysView] = type({}.keys())
799 799
800 800 NUMERICS = {int, float, complex}
801 801
802 802 ALLOWED_CALLS = {
803 803 bytes,
804 804 *_list_methods(bytes),
805 805 dict,
806 806 *_list_methods(dict, dict_non_mutating_methods),
807 807 dict_keys.isdisjoint,
808 808 list,
809 809 *_list_methods(list, list_non_mutating_methods),
810 810 set,
811 811 *_list_methods(set, set_non_mutating_methods),
812 812 frozenset,
813 813 *_list_methods(frozenset),
814 814 range,
815 815 str,
816 816 *_list_methods(str),
817 817 tuple,
818 818 *_list_methods(tuple),
819 819 *NUMERICS,
820 820 *[method for numeric_cls in NUMERICS for method in _list_methods(numeric_cls)],
821 821 collections.deque,
822 822 *_list_methods(collections.deque, list_non_mutating_methods),
823 823 collections.defaultdict,
824 824 *_list_methods(collections.defaultdict, dict_non_mutating_methods),
825 825 collections.OrderedDict,
826 826 *_list_methods(collections.OrderedDict, dict_non_mutating_methods),
827 827 collections.UserDict,
828 828 *_list_methods(collections.UserDict, dict_non_mutating_methods),
829 829 collections.UserList,
830 830 *_list_methods(collections.UserList, list_non_mutating_methods),
831 831 collections.UserString,
832 832 *_list_methods(collections.UserString, dir(str)),
833 833 collections.Counter,
834 834 *_list_methods(collections.Counter, dict_non_mutating_methods),
835 835 collections.Counter.elements,
836 836 collections.Counter.most_common,
837 837 }
838 838
839 839 BUILTIN_GETATTR: Set[MayHaveGetattr] = {
840 840 *BUILTIN_GETITEM,
841 841 set,
842 842 frozenset,
843 843 object,
844 844 type, # `type` handles a lot of generic cases, e.g. numbers as in `int.real`.
845 845 *NUMERICS,
846 846 dict_keys,
847 847 MethodDescriptorType,
848 848 ModuleType,
849 849 }
850 850
851 851
852 852 BUILTIN_OPERATIONS = {*BUILTIN_GETATTR}
853 853
854 854 EVALUATION_POLICIES = {
855 855 "minimal": EvaluationPolicy(
856 856 allow_builtins_access=True,
857 857 allow_locals_access=False,
858 858 allow_globals_access=False,
859 859 allow_item_access=False,
860 860 allow_attr_access=False,
861 861 allowed_calls=set(),
862 862 allow_any_calls=False,
863 863 allow_all_operations=False,
864 864 ),
865 865 "limited": SelectivePolicy(
866 866 allowed_getitem=BUILTIN_GETITEM,
867 867 allowed_getitem_external=SUPPORTED_EXTERNAL_GETITEM,
868 868 allowed_getattr=BUILTIN_GETATTR,
869 869 allowed_getattr_external={
870 870 # pandas Series/Frame implements custom `__getattr__`
871 871 ("pandas", "DataFrame"),
872 872 ("pandas", "Series"),
873 873 },
874 874 allowed_operations=BUILTIN_OPERATIONS,
875 875 allow_builtins_access=True,
876 876 allow_locals_access=True,
877 877 allow_globals_access=True,
878 878 allowed_calls=ALLOWED_CALLS,
879 879 ),
880 880 "unsafe": EvaluationPolicy(
881 881 allow_builtins_access=True,
882 882 allow_locals_access=True,
883 883 allow_globals_access=True,
884 884 allow_attr_access=True,
885 885 allow_item_access=True,
886 886 allow_any_calls=True,
887 887 allow_all_operations=True,
888 888 ),
889 889 }
890 890
891 891
892 892 __all__ = [
893 893 "guarded_eval",
894 894 "eval_node",
895 895 "GuardRejection",
896 896 "EvaluationContext",
897 897 "_unbind_method",
898 898 ]
@@ -1,798 +1,798
1 1 """DEPRECATED: Input handling and transformation machinery.
2 2
3 3 This module was deprecated in IPython 7.0, in favour of inputtransformer2.
4 4
5 5 The first class in this module, :class:`InputSplitter`, is designed to tell when
6 6 input from a line-oriented frontend is complete and should be executed, and when
7 7 the user should be prompted for another line of code instead. The name 'input
8 8 splitter' is largely for historical reasons.
9 9
10 10 A companion, :class:`IPythonInputSplitter`, provides the same functionality but
11 11 with full support for the extended IPython syntax (magics, system calls, etc).
12 12 The code to actually do these transformations is in :mod:`IPython.core.inputtransformer`.
13 13 :class:`IPythonInputSplitter` feeds the raw code to the transformers in order
14 14 and stores the results.
15 15
16 16 For more details, see the class docstrings below.
17 17 """
18 18 from __future__ import annotations
19 19
20 20 from warnings import warn
21 21
22 22 warn('IPython.core.inputsplitter is deprecated since IPython 7 in favor of `IPython.core.inputtransformer2`',
23 23 DeprecationWarning)
24 24
25 25 # Copyright (c) IPython Development Team.
26 26 # Distributed under the terms of the Modified BSD License.
27 27 import ast
28 28 import codeop
29 29 import io
30 30 import re
31 31 import sys
32 32 import tokenize
33 33 import warnings
34 34
35 35 from typing import List, Tuple, Union, Optional, TYPE_CHECKING
36 36 from types import CodeType
37 37
38 38 from IPython.core.inputtransformer import (leading_indent,
39 39 classic_prompt,
40 40 ipy_prompt,
41 41 cellmagic,
42 42 assemble_logical_lines,
43 43 help_end,
44 44 escaped_commands,
45 45 assign_from_magic,
46 46 assign_from_system,
47 47 assemble_python_lines,
48 48 )
49 49 from IPython.utils import tokenutil
50 50
51 51 # These are available in this module for backwards compatibility.
52 52 from IPython.core.inputtransformer import (ESC_SHELL, ESC_SH_CAP, ESC_HELP,
53 53 ESC_HELP2, ESC_MAGIC, ESC_MAGIC2,
54 54 ESC_QUOTE, ESC_QUOTE2, ESC_PAREN, ESC_SEQUENCES)
55 55
56 56 if TYPE_CHECKING:
57 57 from typing_extensions import Self
58 58 #-----------------------------------------------------------------------------
59 59 # Utilities
60 60 #-----------------------------------------------------------------------------
61 61
62 62 # FIXME: These are general-purpose utilities that later can be moved to the
63 63 # general ward. Kept here for now because we're being very strict about test
64 64 # coverage with this code, and this lets us ensure that we keep 100% coverage
65 65 # while developing.
66 66
67 67 # compiled regexps for autoindent management
68 68 dedent_re = re.compile('|'.join([
69 69 r'^\s+raise(\s.*)?$', # raise statement (+ space + other stuff, maybe)
70 70 r'^\s+raise\([^\)]*\).*$', # wacky raise with immediate open paren
71 71 r'^\s+return(\s.*)?$', # normal return (+ space + other stuff, maybe)
72 72 r'^\s+return\([^\)]*\).*$', # wacky return with immediate open paren
73 73 r'^\s+pass\s*$', # pass (optionally followed by trailing spaces)
74 74 r'^\s+break\s*$', # break (optionally followed by trailing spaces)
75 75 r'^\s+continue\s*$', # continue (optionally followed by trailing spaces)
76 76 ]))
77 77 ini_spaces_re = re.compile(r'^([ \t\r\f\v]+)')
78 78
79 79 # regexp to match pure comment lines so we don't accidentally insert 'if 1:'
80 80 # before pure comments
81 81 comment_line_re = re.compile(r'^\s*\#')
82 82
83 83
84 84 def num_ini_spaces(s):
85 85 """Return the number of initial spaces in a string.
86 86
87 87 Note that tabs are counted as a single space. For now, we do *not* support
88 88 mixing of tabs and spaces in the user's input.
89 89
90 90 Parameters
91 91 ----------
92 92 s : string
93 93
94 94 Returns
95 95 -------
96 96 n : int
97 97 """
98 98 warnings.warn(
99 99 "`num_ini_spaces` is Pending Deprecation since IPython 8.17."
100 "It is considered fro removal in in future version. "
100 "It is considered for removal in in future version. "
101 101 "Please open an issue if you believe it should be kept.",
102 102 stacklevel=2,
103 103 category=PendingDeprecationWarning,
104 104 )
105 105 ini_spaces = ini_spaces_re.match(s)
106 106 if ini_spaces:
107 107 return ini_spaces.end()
108 108 else:
109 109 return 0
110 110
111 111 # Fake token types for partial_tokenize:
112 112 INCOMPLETE_STRING = tokenize.N_TOKENS
113 113 IN_MULTILINE_STATEMENT = tokenize.N_TOKENS + 1
114 114
115 115 # The 2 classes below have the same API as TokenInfo, but don't try to look up
116 116 # a token type name that they won't find.
117 117 class IncompleteString:
118 118 type = exact_type = INCOMPLETE_STRING
119 119 def __init__(self, s, start, end, line):
120 120 self.s = s
121 121 self.start = start
122 122 self.end = end
123 123 self.line = line
124 124
125 125 class InMultilineStatement:
126 126 type = exact_type = IN_MULTILINE_STATEMENT
127 127 def __init__(self, pos, line):
128 128 self.s = ''
129 129 self.start = self.end = pos
130 130 self.line = line
131 131
132 132 def partial_tokens(s):
133 133 """Iterate over tokens from a possibly-incomplete string of code.
134 134
135 135 This adds two special token types: INCOMPLETE_STRING and
136 136 IN_MULTILINE_STATEMENT. These can only occur as the last token yielded, and
137 137 represent the two main ways for code to be incomplete.
138 138 """
139 139 readline = io.StringIO(s).readline
140 140 token = tokenize.TokenInfo(tokenize.NEWLINE, '', (1, 0), (1, 0), '')
141 141 try:
142 142 for token in tokenutil.generate_tokens_catch_errors(readline):
143 143 yield token
144 144 except tokenize.TokenError as e:
145 145 # catch EOF error
146 146 lines = s.splitlines(keepends=True)
147 147 end = len(lines), len(lines[-1])
148 148 if 'multi-line string' in e.args[0]:
149 149 l, c = start = token.end
150 150 s = lines[l-1][c:] + ''.join(lines[l:])
151 151 yield IncompleteString(s, start, end, lines[-1])
152 152 elif 'multi-line statement' in e.args[0]:
153 153 yield InMultilineStatement(end, lines[-1])
154 154 else:
155 155 raise
156 156
157 157 def find_next_indent(code) -> int:
158 158 """Find the number of spaces for the next line of indentation"""
159 159 tokens = list(partial_tokens(code))
160 160 if tokens[-1].type == tokenize.ENDMARKER:
161 161 tokens.pop()
162 162 if not tokens:
163 163 return 0
164 164
165 165 while tokens[-1].type in {
166 166 tokenize.DEDENT,
167 167 tokenize.NEWLINE,
168 168 tokenize.COMMENT,
169 169 tokenize.ERRORTOKEN,
170 170 }:
171 171 tokens.pop()
172 172
173 173 # Starting in Python 3.12, the tokenize module adds implicit newlines at the end
174 174 # of input. We need to remove those if we're in a multiline statement
175 175 if tokens[-1].type == IN_MULTILINE_STATEMENT:
176 176 while tokens[-2].type in {tokenize.NL}:
177 177 tokens.pop(-2)
178 178
179 179
180 180 if tokens[-1].type == INCOMPLETE_STRING:
181 181 # Inside a multiline string
182 182 return 0
183 183
184 184 # Find the indents used before
185 185 prev_indents = [0]
186 186 def _add_indent(n):
187 187 if n != prev_indents[-1]:
188 188 prev_indents.append(n)
189 189
190 190 tokiter = iter(tokens)
191 191 for tok in tokiter:
192 192 if tok.type in {tokenize.INDENT, tokenize.DEDENT}:
193 193 _add_indent(tok.end[1])
194 194 elif (tok.type == tokenize.NL):
195 195 try:
196 196 _add_indent(next(tokiter).start[1])
197 197 except StopIteration:
198 198 break
199 199
200 200 last_indent = prev_indents.pop()
201 201
202 202 # If we've just opened a multiline statement (e.g. 'a = ['), indent more
203 203 if tokens[-1].type == IN_MULTILINE_STATEMENT:
204 204 if tokens[-2].exact_type in {tokenize.LPAR, tokenize.LSQB, tokenize.LBRACE}:
205 205 return last_indent + 4
206 206 return last_indent
207 207
208 208 if tokens[-1].exact_type == tokenize.COLON:
209 209 # Line ends with colon - indent
210 210 return last_indent + 4
211 211
212 212 if last_indent:
213 213 # Examine the last line for dedent cues - statements like return or
214 214 # raise which normally end a block of code.
215 215 last_line_starts = 0
216 216 for i, tok in enumerate(tokens):
217 217 if tok.type == tokenize.NEWLINE:
218 218 last_line_starts = i + 1
219 219
220 220 last_line_tokens = tokens[last_line_starts:]
221 221 names = [t.string for t in last_line_tokens if t.type == tokenize.NAME]
222 222 if names and names[0] in {'raise', 'return', 'pass', 'break', 'continue'}:
223 223 # Find the most recent indentation less than the current level
224 224 for indent in reversed(prev_indents):
225 225 if indent < last_indent:
226 226 return indent
227 227
228 228 return last_indent
229 229
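# Editor's note: illustrative expectations, not part of the original file; the
# values assume CPython's tokenize behaviour described in the comments above.
#
#     find_next_indent("if True:")            # -> 4 (line ends with a colon)
#     find_next_indent("if True:\n    pass")  # -> 0 (dedent cue after 'pass')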
230 230
231 231 def last_blank(src):
232 232 """Determine if the input source ends in a blank.
233 233
234 234 A blank is either a newline or a line consisting of whitespace.
235 235
236 236 Parameters
237 237 ----------
238 238 src : string
239 239 A single or multiline string.
240 240 """
241 241 if not src: return False
242 242 ll = src.splitlines()[-1]
243 243 return (ll == '') or ll.isspace()
244 244
245 245
246 246 last_two_blanks_re = re.compile(r'\n\s*\n\s*$', re.MULTILINE)
247 247 last_two_blanks_re2 = re.compile(r'.+\n\s*\n\s+$', re.MULTILINE)
248 248
249 249 def last_two_blanks(src):
250 250 """Determine if the input source ends in two blanks.
251 251
252 252 A blank is either a newline or a line consisting of whitespace.
253 253
254 254 Parameters
255 255 ----------
256 256 src : string
257 257 A single or multiline string.
258 258 """
259 259 if not src: return False
260 260 # The logic here is tricky: I couldn't get a regexp to work and pass all
261 261 # the tests, so I took a different approach: split the source by lines,
262 262 # grab the last two and prepend '###\n' as a stand-in for whatever was in
263 263 # the body before the last two lines. Then, with that structure, it's
264 264 # possible to analyze with two regexps. Not the most elegant solution, but
265 265 # it works. If anyone tries to change this logic, make sure to validate
266 266 # the whole test suite first!
267 267 new_src = '\n'.join(['###\n'] + src.splitlines()[-2:])
268 268 return (bool(last_two_blanks_re.match(new_src)) or
269 269 bool(last_two_blanks_re2.match(new_src)) )
270 270
271 271
272 272 def remove_comments(src):
273 273 """Remove all comments from input source.
274 274
275 275 Note: comments are NOT recognized inside of strings!
276 276
277 277 Parameters
278 278 ----------
279 279 src : string
280 280 A single or multiline input string.
281 281
282 282 Returns
283 283 -------
284 284 String with all Python comments removed.
285 285 """
286 286
287 287 return re.sub('#.*', '', src)
288 288
289 289
290 290 def get_input_encoding():
291 291 """Return the default standard input encoding.
292 292
293 293 If sys.stdin has no encoding, 'ascii' is returned."""
294 294 # There are strange environments for which sys.stdin.encoding is None. We
295 295 # ensure that a valid encoding is returned.
296 296 encoding = getattr(sys.stdin, 'encoding', None)
297 297 if encoding is None:
298 298 encoding = 'ascii'
299 299 return encoding
300 300
301 301 #-----------------------------------------------------------------------------
302 302 # Classes and functions for normal Python syntax handling
303 303 #-----------------------------------------------------------------------------
304 304
305 305 class InputSplitter(object):
306 306 r"""An object that can accumulate lines of Python source before execution.
307 307
308 308 This object is designed to be fed python source line-by-line, using
309 309 :meth:`push`. It will return on each push whether the currently pushed
310 310 code could be executed already. In addition, it provides a method called
311 311 :meth:`push_accepts_more` that can be used to query whether more input
312 312 can be pushed into a single interactive block.
313 313
314 314 This is a simple example of how an interactive terminal-based client can use
315 315 this tool::
316 316
317 317 isp = InputSplitter()
318 318 while isp.push_accepts_more():
319 319 indent = ' '*isp.indent_spaces
320 320 prompt = '>>> ' + indent
321 321 line = indent + raw_input(prompt)
322 322 isp.push(line)
323 323 print('Input source was:\n', isp.source_reset())
324 324 """
325 325 # A cache for storing the current indentation
326 326 # The first value stores the most recently processed source input
327 327 # The second value is the number of spaces for the current indentation
328 328 # If self.source matches the first value, the second value is a valid
329 329 # current indentation. Otherwise, the cache is invalid and the indentation
330 330 # must be recalculated.
331 331 _indent_spaces_cache: Union[Tuple[None, None], Tuple[str, int]] = None, None
332 332 # String, indicating the default input encoding. It is computed by default
333 333 # at initialization time via get_input_encoding(), but it can be reset by a
334 334 # client with specific knowledge of the encoding.
335 335 encoding = ''
336 336 # String where the current full source input is stored, properly encoded.
337 337 # Reading this attribute is the normal way of querying the currently pushed
338 338 # source code, that has been properly encoded.
339 339 source: str = ""
340 340 # Code object corresponding to the current source. It is automatically
341 341 # synced to the source, so it can be queried at any time to obtain the code
342 342 # object; it will be None if the source doesn't compile to valid Python.
343 343 code: Optional[CodeType] = None
344 344
345 345 # Private attributes
346 346
347 347 # List with lines of input accumulated so far
348 348 _buffer: List[str]
349 349 # Command compiler
350 350 _compile: codeop.CommandCompiler
351 351 # Boolean indicating whether the current block is complete
352 352 _is_complete: Optional[bool] = None
353 353 # Boolean indicating whether the current block has an unrecoverable syntax error
354 354 _is_invalid: bool = False
355 355
356 356 def __init__(self) -> None:
357 357 """Create a new InputSplitter instance."""
358 358 self._buffer = []
359 359 self._compile = codeop.CommandCompiler()
360 360 self.encoding = get_input_encoding()
361 361
362 362 def reset(self):
363 363 """Reset the input buffer and associated state."""
364 364 self._buffer[:] = []
365 365 self.source = ''
366 366 self.code = None
367 367 self._is_complete = False
368 368 self._is_invalid = False
369 369
370 370 def source_reset(self):
371 371 """Return the input source and perform a full reset.
372 372 """
373 373 out = self.source
374 374 self.reset()
375 375 return out
376 376
377 377 def check_complete(self, source):
378 378 """Return whether a block of code is ready to execute, or should be continued
379 379
380 380 This is a non-stateful API, and will reset the state of this InputSplitter.
381 381
382 382 Parameters
383 383 ----------
384 384 source : string
385 385 Python input code, which can be multiline.
386 386
387 387 Returns
388 388 -------
389 389 status : str
390 390 One of 'complete', 'incomplete', or 'invalid' if source is not a
391 391 prefix of valid code.
392 392 indent_spaces : int or None
393 393 The number of spaces by which to indent the next line of code. If
394 394 status is not 'incomplete', this is None.
395 395 """
396 396 self.reset()
397 397 try:
398 398 self.push(source)
399 399 except SyntaxError:
400 400 # Transformers in IPythonInputSplitter can raise SyntaxError,
401 401 # which push() will not catch.
402 402 return 'invalid', None
403 403 else:
404 404 if self._is_invalid:
405 405 return 'invalid', None
406 406 elif self.push_accepts_more():
407 407 return 'incomplete', self.get_indent_spaces()
408 408 else:
409 409 return 'complete', None
410 410 finally:
411 411 self.reset()
412 412
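# Editor's note: illustrative expected results, not part of the original file,
# mirroring the return contract documented above:
#
#     isp = InputSplitter()
#     isp.check_complete("a = 1")               # -> ('complete', None)
#     isp.check_complete("for i in range(3):")  # -> ('incomplete', 4)
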
413 413 def push(self, lines:str) -> bool:
414 414 """Push one or more lines of input.
415 415
416 416 This stores the given lines and returns a status code indicating
417 417 whether the code forms a complete Python block or not.
418 418
419 419 Any exceptions generated in compilation are swallowed, but if an
420 420 exception was produced, the method returns True.
421 421
422 422 Parameters
423 423 ----------
424 424 lines : string
425 425 One or more lines of Python input.
426 426
427 427 Returns
428 428 -------
429 429 is_complete : boolean
430 430 True if the current input source (the result of the current input
431 431 plus prior inputs) forms a complete Python execution block. Note that
432 432 this value is also stored as a private attribute (``_is_complete``), so it
433 433 can be queried at any time.
434 434 """
435 435 assert isinstance(lines, str)
436 436 self._store(lines)
437 437 source = self.source
438 438
439 439 # Before calling _compile(), reset the code object to None so that if an
440 440 # exception is raised in compilation, we don't mislead by having
441 441 # inconsistent code/source attributes.
442 442 self.code, self._is_complete = None, None
443 443 self._is_invalid = False
444 444
445 445 # Honor termination lines properly
446 446 if source.endswith('\\\n'):
447 447 return False
448 448
449 449 try:
450 450 with warnings.catch_warnings():
451 451 warnings.simplefilter('error', SyntaxWarning)
452 452 self.code = self._compile(source, symbol="exec")
453 453 # Invalid syntax can produce any of a number of different errors from
454 454 # inside the compiler, so we have to catch them all. Syntax errors
455 455 # immediately produce a 'ready' block, so the invalid Python can be
456 456 # sent to the kernel for evaluation with possible ipython
457 457 # special-syntax conversion.
458 458 except (SyntaxError, OverflowError, ValueError, TypeError,
459 459 MemoryError, SyntaxWarning):
460 460 self._is_complete = True
461 461 self._is_invalid = True
462 462 else:
463 463 # Compilation didn't produce any exceptions (though it may not have
464 464 # given a complete code object)
465 465 self._is_complete = self.code is not None
466 466
467 467 return self._is_complete
468 468
469 469 def push_accepts_more(self):
470 470 """Return whether a block of interactive input can accept more input.
471 471
472 472 This method is meant to be used by line-oriented frontends, who need to
473 473 guess whether a block is complete or not based solely on prior and
474 474 current input lines. The InputSplitter considers it has a complete
475 475 interactive block and will not accept more input when either:
476 476
477 477 * A SyntaxError is raised
478 478
479 479 * The code is complete and consists of a single line or a single
480 480 non-compound statement
481 481
482 482 * The code is complete and has a blank line at the end
483 483
484 484 If the current input produces a syntax error, this method immediately
485 485 returns False but does *not* raise the syntax error exception, as
486 486 typically clients will want to send invalid syntax to an execution
487 487 backend which might convert the invalid syntax into valid Python via
488 488 one of the dynamic IPython mechanisms.
489 489 """
490 490
491 491 # With incomplete input, unconditionally accept more
492 492 # A syntax error also sets _is_complete to True - see push()
493 493 if not self._is_complete:
494 494 #print("Not complete") # debug
495 495 return True
496 496
497 497 # The user can make any (complete) input execute by leaving a blank line
498 498 last_line = self.source.splitlines()[-1]
499 499 if (not last_line) or last_line.isspace():
500 500 #print("Blank line") # debug
501 501 return False
502 502
503 503 # If there's just a single line or AST node, and we're flush left, as is
504 504 # the case after a simple statement such as 'a=1', we want to execute it
505 505 # straight away.
506 506 if self.get_indent_spaces() == 0:
507 507 if len(self.source.splitlines()) <= 1:
508 508 return False
509 509
510 510 try:
511 511 code_ast = ast.parse("".join(self._buffer))
512 512 except Exception:
513 513 #print("Can't parse AST") # debug
514 514 return False
515 515 else:
516 516 if len(code_ast.body) == 1 and \
517 517 not hasattr(code_ast.body[0], 'body'):
518 518 #print("Simple statement") # debug
519 519 return False
520 520
521 521 # General fallback - accept more code
522 522 return True
523 523
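# Editor's note: a minimal sketch of the line-oriented frontend loop that
# push() and push_accepts_more() are designed for (illustrative only; the
# prompts and the final exec() are assumptions, not part of the original file):
#
#     isp = InputSplitter()
#     while isp.push_accepts_more():
#         line = input('... ' if isp.source else '>>> ')
#         isp.push(line)
#     exec(isp.source_reset())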
524 524 def get_indent_spaces(self) -> int:
525 525 sourcefor, n = self._indent_spaces_cache
526 526 if sourcefor == self.source:
527 527 assert n is not None
528 528 return n
529 529
530 530 # self.source always has a trailing newline
531 531 n = find_next_indent(self.source[:-1])
532 532 self._indent_spaces_cache = (self.source, n)
533 533 return n
534 534
535 535 # Backwards compatibility. I think all code that used .indent_spaces was
536 536 # inside IPython, but we can leave this here until IPython 7 in case any
537 537 # other modules are using it. -TK, November 2017
538 538 indent_spaces = property(get_indent_spaces)
539 539
540 540 def _store(self, lines, buffer=None, store='source'):
541 541 """Store one or more lines of input.
542 542
543 543 If input lines are not newline-terminated, a newline is automatically
544 544 appended."""
545 545
546 546 if buffer is None:
547 547 buffer = self._buffer
548 548
549 549 if lines.endswith('\n'):
550 550 buffer.append(lines)
551 551 else:
552 552 buffer.append(lines+'\n')
553 553 setattr(self, store, self._set_source(buffer))
554 554
555 555 def _set_source(self, buffer):
556 556 return u''.join(buffer)
557 557
558 558
559 559 class IPythonInputSplitter(InputSplitter):
560 560 """An input splitter that recognizes all of IPython's special syntax."""
561 561
562 562 # String with raw, untransformed input.
563 563 source_raw = ''
564 564
565 565 # Flag to track when a transformer has stored input that it hasn't given
566 566 # back yet.
567 567 transformer_accumulating = False
568 568
569 569 # Flag to track when assemble_python_lines has stored input that it hasn't
570 570 # given back yet.
571 571 within_python_line = False
572 572
573 573 # Private attributes
574 574
575 575 # List with lines of raw input accumulated so far.
576 576 _buffer_raw: List[str]
577 577
578 578 def __init__(self, line_input_checker=True, physical_line_transforms=None,
579 579 logical_line_transforms=None, python_line_transforms=None):
580 580 super(IPythonInputSplitter, self).__init__()
581 581 self._buffer_raw = []
582 582 self._validate = True
583 583
584 584 if physical_line_transforms is not None:
585 585 self.physical_line_transforms = physical_line_transforms
586 586 else:
587 587 self.physical_line_transforms = [
588 588 leading_indent(),
589 589 classic_prompt(),
590 590 ipy_prompt(),
591 591 cellmagic(end_on_blank_line=line_input_checker),
592 592 ]
593 593
594 594 self.assemble_logical_lines = assemble_logical_lines()
595 595 if logical_line_transforms is not None:
596 596 self.logical_line_transforms = logical_line_transforms
597 597 else:
598 598 self.logical_line_transforms = [
599 599 help_end(),
600 600 escaped_commands(),
601 601 assign_from_magic(),
602 602 assign_from_system(),
603 603 ]
604 604
605 605 self.assemble_python_lines = assemble_python_lines()
606 606 if python_line_transforms is not None:
607 607 self.python_line_transforms = python_line_transforms
608 608 else:
609 609 # We don't use any of these at present
610 610 self.python_line_transforms = []
611 611
612 612 @property
613 613 def transforms(self):
614 614 "Quick access to all transformers."
615 615 return self.physical_line_transforms + \
616 616 [self.assemble_logical_lines] + self.logical_line_transforms + \
617 617 [self.assemble_python_lines] + self.python_line_transforms
618 618
619 619 @property
620 620 def transforms_in_use(self):
621 621 """Transformers, excluding logical line transformers if we're in a
622 622 Python line."""
623 623 t = self.physical_line_transforms[:]
624 624 if not self.within_python_line:
625 625 t += [self.assemble_logical_lines] + self.logical_line_transforms
626 626 return t + [self.assemble_python_lines] + self.python_line_transforms
627 627
628 628 def reset(self):
629 629 """Reset the input buffer and associated state."""
630 630 super(IPythonInputSplitter, self).reset()
631 631 self._buffer_raw[:] = []
632 632 self.source_raw = ''
633 633 self.transformer_accumulating = False
634 634 self.within_python_line = False
635 635
636 636 for t in self.transforms:
637 637 try:
638 638 t.reset()
639 639 except SyntaxError:
640 640 # Nothing that calls reset() expects to handle transformer
641 641 # errors
642 642 pass
643 643
644 644 def flush_transformers(self: Self):
645 645 def _flush(transform, outs: List[str]):
646 646 """yield transformed lines
647 647
648 648 always strings, never None
649 649
650 650 transform: the current transform
651 651 outs: an iterable of previously transformed inputs.
652 652 Each may be multiline, which will be passed
653 653 one line at a time to transform.
654 654 """
655 655 for out in outs:
656 656 for line in out.splitlines():
657 657 # push one line at a time
658 658 tmp = transform.push(line)
659 659 if tmp is not None:
660 660 yield tmp
661 661
662 662 # reset the transform
663 663 tmp = transform.reset()
664 664 if tmp is not None:
665 665 yield tmp
666 666
667 667 out: List[str] = []
668 668 for t in self.transforms_in_use:
669 669 out = _flush(t, out)
670 670
671 671 out = list(out)
672 672 if out:
673 673 self._store('\n'.join(out))
674 674
675 675 def raw_reset(self):
676 676 """Return raw input only and perform a full reset.
677 677 """
678 678 out = self.source_raw
679 679 self.reset()
680 680 return out
681 681
682 682 def source_reset(self):
683 683 try:
684 684 self.flush_transformers()
685 685 return self.source
686 686 finally:
687 687 self.reset()
688 688
689 689 def push_accepts_more(self):
690 690 if self.transformer_accumulating:
691 691 return True
692 692 else:
693 693 return super(IPythonInputSplitter, self).push_accepts_more()
694 694
695 695 def transform_cell(self, cell):
696 696 """Process and translate a cell of input.
697 697 """
698 698 self.reset()
699 699 try:
700 700 self.push(cell)
701 701 self.flush_transformers()
702 702 return self.source
703 703 finally:
704 704 self.reset()
705 705
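# Editor's note: an illustrative sketch of the rewriting transform_cell()
# performs (outputs are indicative of the escaped-command transformers, not
# guaranteed verbatim):
#
#     isp = IPythonInputSplitter()
#     isp.transform_cell("!ls")
#     # -> "get_ipython().system('ls')\n"
#     isp.transform_cell("%time x = 1")
#     # -> "get_ipython().run_line_magic('time', 'x = 1')\n"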
706 706 def push(self, lines:str) -> bool:
707 707 """Push one or more lines of IPython input.
708 708
709 709 This stores the given lines and returns a status code indicating
710 710 whether the code forms a complete Python block or not, after processing
711 711 all input lines for special IPython syntax.
712 712
713 713 Any exceptions generated in compilation are swallowed, but if an
714 714 exception was produced, the method returns True.
715 715
716 716 Parameters
717 717 ----------
718 718 lines : string
719 719 One or more lines of Python input.
720 720
721 721 Returns
722 722 -------
723 723 is_complete : boolean
724 724 True if the current input source (the result of the current input
725 725 plus prior inputs) forms a complete Python execution block. Note that
726 726 this value is also stored as a private attribute (_is_complete), so it
727 727 can be queried at any time.
728 728 """
729 729 assert isinstance(lines, str)
730 730 # We must ensure all input is pure unicode
731 731 # ''.splitlines() --> [], but we need to push the empty line to transformers
732 732 lines_list = lines.splitlines()
733 733 if not lines_list:
734 734 lines_list = ['']
735 735
736 736 # Store raw source before applying any transformations to it. Note
737 737 # that this must be done *after* the reset() call that would otherwise
738 738 # flush the buffer.
739 739 self._store(lines, self._buffer_raw, 'source_raw')
740 740
741 741 transformed_lines_list = []
742 742 for line in lines_list:
743 743 transformed = self._transform_line(line)
744 744 if transformed is not None:
745 745 transformed_lines_list.append(transformed)
746 746
747 747 if transformed_lines_list:
748 748 transformed_lines = '\n'.join(transformed_lines_list)
749 749 return super(IPythonInputSplitter, self).push(transformed_lines)
750 750 else:
751 751 # Got nothing back from transformers - they must be waiting for
752 752 # more input.
753 753 return False
754 754
755 755 def _transform_line(self, line):
756 756 """Push a line of input code through the various transformers.
757 757
758 758 Returns any output from the transformers, or None if a transformer
759 759 is accumulating lines.
760 760
761 761 Sets self.transformer_accumulating as a side effect.
762 762 """
763 763 def _accumulating(dbg):
764 764 #print(dbg)
765 765 self.transformer_accumulating = True
766 766 return None
767 767
768 768 for transformer in self.physical_line_transforms:
769 769 line = transformer.push(line)
770 770 if line is None:
771 771 return _accumulating(transformer)
772 772
773 773 if not self.within_python_line:
774 774 line = self.assemble_logical_lines.push(line)
775 775 if line is None:
776 776 return _accumulating('acc logical line')
777 777
778 778 for transformer in self.logical_line_transforms:
779 779 line = transformer.push(line)
780 780 if line is None:
781 781 return _accumulating(transformer)
782 782
783 783 line = self.assemble_python_lines.push(line)
784 784 if line is None:
785 785 self.within_python_line = True
786 786 return _accumulating('acc python line')
787 787 else:
788 788 self.within_python_line = False
789 789
790 790 for transformer in self.python_line_transforms:
791 791 line = transformer.push(line)
792 792 if line is None:
793 793 return _accumulating(transformer)
794 794
795 795 #print("transformers clear") #debug
796 796 self.transformer_accumulating = False
797 797 return line
798 798
@@ -1,3987 +1,3987
1 1 # -*- coding: utf-8 -*-
2 2 """Main IPython class."""
3 3
4 4 #-----------------------------------------------------------------------------
5 5 # Copyright (C) 2001 Janko Hauser <jhauser@zscout.de>
6 6 # Copyright (C) 2001-2007 Fernando Perez. <fperez@colorado.edu>
7 7 # Copyright (C) 2008-2011 The IPython Development Team
8 8 #
9 9 # Distributed under the terms of the BSD License. The full license is in
10 10 # the file COPYING, distributed as part of this software.
11 11 #-----------------------------------------------------------------------------
12 12
13 13
14 14 import abc
15 15 import ast
16 16 import atexit
17 17 import bdb
18 18 import builtins as builtin_mod
19 19 import functools
20 20 import inspect
21 21 import os
22 22 import re
23 23 import runpy
24 24 import shutil
25 25 import subprocess
26 26 import sys
27 27 import tempfile
28 28 import traceback
29 29 import types
30 30 import warnings
31 31 from ast import stmt
32 32 from io import open as io_open
33 33 from logging import error
34 34 from pathlib import Path
35 35 from typing import Callable
36 36 from typing import List as ListType, Dict as DictType, Any as AnyType
37 37 from typing import Optional, Sequence, Tuple
38 38 from warnings import warn
39 39
40 40 try:
41 41 from pickleshare import PickleShareDB
42 42 except ModuleNotFoundError:
43 43
44 44 class PickleShareDB: # type: ignore [no-redef]
45 45 _mock = True
46 46
47 47 def __init__(self, path):
48 48 pass
49 49
50 50 def get(self, key, default=None):
51 51 warn(
52 52 f"This is now an optional IPython functionality, using {key} requires you to install the `pickleshare` library.",
53 53 stacklevel=2,
54 54 )
55 55 return default
56 56
57 57 def __getitem__(self, key):
58 58 warn(
59 59 f"This is now an optional IPython functionality, using {key} requires you to install the `pickleshare` library.",
60 60 stacklevel=2,
61 61 )
62 62 return None
63 63
64 64 def __setitem__(self, key, value):
65 65 warn(
66 66 f"This is now an optional IPython functionality, setting {key} requires you to install the `pickleshare` library.",
67 67 stacklevel=2,
68 68 )
69 69
70 70 def __delitem__(self, key):
71 71 warn(
72 72 f"This is now an optional IPython functionality, deleting {key} requires you to install the `pickleshare` library.",
73 73 stacklevel=2,
74 74 )
75 75
76 76
77 77 from tempfile import TemporaryDirectory
78 78 from traitlets import (
79 79 Any,
80 80 Bool,
81 81 CaselessStrEnum,
82 82 Dict,
83 83 Enum,
84 84 Instance,
85 85 Integer,
86 86 List,
87 87 Type,
88 88 Unicode,
89 89 default,
90 90 observe,
91 91 validate,
92 92 )
93 93 from traitlets.config.configurable import SingletonConfigurable
94 94 from traitlets.utils.importstring import import_item
95 95
96 96 import IPython.core.hooks
97 97 from IPython.core import magic, oinspect, page, prefilter, ultratb
98 98 from IPython.core.alias import Alias, AliasManager
99 99 from IPython.core.autocall import ExitAutocall
100 100 from IPython.core.builtin_trap import BuiltinTrap
101 101 from IPython.core.compilerop import CachingCompiler
102 102 from IPython.core.debugger import InterruptiblePdb
103 103 from IPython.core.display_trap import DisplayTrap
104 104 from IPython.core.displayhook import DisplayHook
105 105 from IPython.core.displaypub import DisplayPublisher
106 106 from IPython.core.error import InputRejected, UsageError
107 107 from IPython.core.events import EventManager, available_events
108 108 from IPython.core.extensions import ExtensionManager
109 109 from IPython.core.formatters import DisplayFormatter
110 110 from IPython.core.history import HistoryManager
111 111 from IPython.core.inputtransformer2 import ESC_MAGIC, ESC_MAGIC2
112 112 from IPython.core.logger import Logger
113 113 from IPython.core.macro import Macro
114 114 from IPython.core.payload import PayloadManager
115 115 from IPython.core.prefilter import PrefilterManager
116 116 from IPython.core.profiledir import ProfileDir
117 117 from IPython.core.usage import default_banner
118 118 from IPython.display import display
119 119 from IPython.paths import get_ipython_dir
120 120 from IPython.testing.skipdoctest import skip_doctest
121 121 from IPython.utils import PyColorize, io, openpy, py3compat
122 122 from IPython.utils.decorators import undoc
123 123 from IPython.utils.io import ask_yes_no
124 124 from IPython.utils.ipstruct import Struct
125 125 from IPython.utils.path import ensure_dir_exists, get_home_dir, get_py_filename
126 126 from IPython.utils.process import getoutput, system
127 127 from IPython.utils.strdispatch import StrDispatch
128 128 from IPython.utils.syspathcontext import prepended_to_syspath
129 129 from IPython.utils.text import DollarFormatter, LSString, SList, format_screen
130 130 from IPython.core.oinspect import OInfo
131 131
132 132
133 133 sphinxify: Optional[Callable]
134 134
135 135 try:
136 136 import docrepr.sphinxify as sphx
137 137
138 138 def sphinxify(oinfo):
139 139 wrapped_docstring = sphx.wrap_main_docstring(oinfo)
140 140
141 141 def sphinxify_docstring(docstring):
142 142 with TemporaryDirectory() as dirname:
143 143 return {
144 144 "text/html": sphx.sphinxify(wrapped_docstring, dirname),
145 145 "text/plain": docstring,
146 146 }
147 147
148 148 return sphinxify_docstring
149 149 except ImportError:
150 150 sphinxify = None
151 151
152 152 if sys.version_info[:2] < (3, 11):
153 153 from exceptiongroup import BaseExceptionGroup
154 154
155 155 class ProvisionalWarning(DeprecationWarning):
156 156 """
157 157 Warning class for unstable features
158 158 """
159 159 pass
160 160
161 161 from ast import Module
162 162
163 163 _assign_nodes = (ast.AugAssign, ast.AnnAssign, ast.Assign)
164 164 _single_targets_nodes = (ast.AugAssign, ast.AnnAssign)
165 165
166 166 #-----------------------------------------------------------------------------
167 167 # Await Helpers
168 168 #-----------------------------------------------------------------------------
169 169
170 170 # we still need to run things using the asyncio eventloop, but there is no
171 171 # async integration
172 172 from .async_helpers import (
173 173 _asyncio_runner,
174 174 _curio_runner,
175 175 _pseudo_sync_runner,
176 176 _should_be_async,
177 177 _trio_runner,
178 178 )
179 179
180 180 #-----------------------------------------------------------------------------
181 181 # Globals
182 182 #-----------------------------------------------------------------------------
183 183
184 184 # compiled regexps for autoindent management
185 185 dedent_re = re.compile(r'^\s+raise|^\s+return|^\s+pass')
186 186
187 187 #-----------------------------------------------------------------------------
188 188 # Utilities
189 189 #-----------------------------------------------------------------------------
190 190
191 191
192 192 def is_integer_string(s: str):
193 193 """
194 194 Variant of "str.isnumeric()" that allows negative values and other ints.
195 195 """
196 196 try:
197 197 int(s)
198 198 return True
199 199 except ValueError:
200 200 return False
201 201 raise ValueError("Unexpected error")
202 202
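# Editor's note, illustrative: is_integer_string("-3") -> True,
# is_integer_string("1_000") -> True, is_integer_string("3.5") -> False.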
203 203
204 204 @undoc
205 205 def softspace(file, newvalue):
206 206 """Copied from code.py, to remove the dependency"""
207 207
208 208 oldvalue = 0
209 209 try:
210 210 oldvalue = file.softspace
211 211 except AttributeError:
212 212 pass
213 213 try:
214 214 file.softspace = newvalue
215 215 except (AttributeError, TypeError):
216 216 # "attribute-less object" or "read-only attributes"
217 217 pass
218 218 return oldvalue
219 219
220 220 @undoc
221 221 def no_op(*a, **kw):
222 222 pass
223 223
224 224
225 225 class SpaceInInput(Exception): pass
226 226
227 227
228 228 class SeparateUnicode(Unicode):
229 229 r"""A Unicode subclass to validate separate_in, separate_out, etc.
230 230
231 231 This is a Unicode based trait that converts '0'->'' and ``'\\n'->'\n'``.
232 232 """
233 233
234 234 def validate(self, obj, value):
235 235 if value == '0': value = ''
236 236 value = value.replace('\\n','\n')
237 237 return super(SeparateUnicode, self).validate(obj, value)
238 238
239 239
240 240 @undoc
241 241 class DummyMod(object):
242 242 """A dummy module used for IPython's interactive module when
243 243 a namespace must be assigned to the module's __dict__."""
244 244 __spec__ = None
245 245
246 246
247 247 class ExecutionInfo(object):
248 248 """The arguments used for a call to :meth:`InteractiveShell.run_cell`
249 249
250 250 Stores information about what is going to happen.
251 251 """
252 252 raw_cell = None
253 253 store_history = False
254 254 silent = False
255 255 shell_futures = True
256 256 cell_id = None
257 257
258 258 def __init__(self, raw_cell, store_history, silent, shell_futures, cell_id):
259 259 self.raw_cell = raw_cell
260 260 self.store_history = store_history
261 261 self.silent = silent
262 262 self.shell_futures = shell_futures
263 263 self.cell_id = cell_id
264 264
265 265 def __repr__(self):
266 266 name = self.__class__.__qualname__
267 267 raw_cell = (
268 268 (self.raw_cell[:50] + "..") if len(self.raw_cell) > 50 else self.raw_cell
269 269 )
270 270 return (
271 271 '<%s object at %x, raw_cell="%s" store_history=%s silent=%s shell_futures=%s cell_id=%s>'
272 272 % (
273 273 name,
274 274 id(self),
275 275 raw_cell,
276 276 self.store_history,
277 277 self.silent,
278 278 self.shell_futures,
279 279 self.cell_id,
280 280 )
281 281 )
282 282
283 283
284 284 class ExecutionResult:
285 285 """The result of a call to :meth:`InteractiveShell.run_cell`
286 286
287 287 Stores information about what took place.
288 288 """
289 289
290 290 execution_count: Optional[int] = None
291 291 error_before_exec: Optional[bool] = None
292 292 error_in_exec: Optional[BaseException] = None
293 293 info = None
294 294 result = None
295 295
296 296 def __init__(self, info):
297 297 self.info = info
298 298
299 299 @property
300 300 def success(self):
301 301 return (self.error_before_exec is None) and (self.error_in_exec is None)
302 302
303 303 def raise_error(self):
304 304 """Reraises error if `success` is `False`, otherwise does nothing"""
305 305 if self.error_before_exec is not None:
306 306 raise self.error_before_exec
307 307 if self.error_in_exec is not None:
308 308 raise self.error_in_exec
309 309
310 310 def __repr__(self):
311 311 name = self.__class__.__qualname__
312 312 return '<%s object at %x, execution_count=%s error_before_exec=%s error_in_exec=%s info=%s result=%s>' %\
313 313 (name, id(self), self.execution_count, self.error_before_exec, self.error_in_exec, repr(self.info), repr(self.result))
314 314
315 315 @functools.wraps(io_open)
316 316 def _modified_open(file, *args, **kwargs):
317 317 if file in {0, 1, 2}:
318 318 raise ValueError(
319 319 f"IPython won't let you open fd={file} by default "
320 320 "as it is likely to crash IPython. If you know what you are doing, "
321 321 "you can use builtins' open."
322 322 )
323 323
324 324 return io_open(file, *args, **kwargs)
325 325
326 326 class InteractiveShell(SingletonConfigurable):
327 327 """An enhanced, interactive shell for Python."""
328 328
329 329 _instance = None
330 330
331 331 ast_transformers: List[ast.NodeTransformer] = List(
332 332 [],
333 333 help="""
334 334 A list of ast.NodeTransformer subclass instances, which will be applied
335 335 to user input before code is run.
336 336 """,
337 337 ).tag(config=True)
338 338
339 339 autocall = Enum((0,1,2), default_value=0, help=
340 340 """
341 341 Make IPython automatically call any callable object even if you didn't
342 342 type explicit parentheses. For example, 'str 43' becomes 'str(43)'
343 343 automatically. The value can be '0' to disable the feature, '1' for
344 344 'smart' autocall, where it is not applied if there are no more
345 345 arguments on the line, and '2' for 'full' autocall, where all callable
346 346 objects are automatically called (even if no arguments are present).
347 347 """
348 348 ).tag(config=True)
349 349
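# Editor's note, illustrative session matching the help text above (rewritten
# input and outputs omitted):
#
#     In [1]: %autocall 1
#     In [2]: str 43     # 'smart': rewritten to str(43) because an argument follows
#     In [3]: %autocall 2
#     In [4]: str        # 'full': bare callables are called too -> str()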
350 350 autoindent = Bool(True, help=
351 351 """
352 352 Autoindent IPython code entered interactively.
353 353 """
354 354 ).tag(config=True)
355 355
356 356 autoawait = Bool(True, help=
357 357 """
358 358 Automatically run await statement in the top level repl.
359 359 """
360 360 ).tag(config=True)
361 361
362 362 loop_runner_map ={
363 363 'asyncio':(_asyncio_runner, True),
364 364 'curio':(_curio_runner, True),
365 365 'trio':(_trio_runner, True),
366 366 'sync': (_pseudo_sync_runner, False)
367 367 }
368 368
369 369 loop_runner = Any(default_value="IPython.core.interactiveshell._asyncio_runner",
370 370 allow_none=True,
371 371 help="""Select the loop runner that will be used to execute top-level asynchronous code"""
372 372 ).tag(config=True)
373 373
374 374 @default('loop_runner')
375 375 def _default_loop_runner(self):
376 376 return import_item("IPython.core.interactiveshell._asyncio_runner")
377 377
378 378 @validate('loop_runner')
379 379 def _import_runner(self, proposal):
380 380 if isinstance(proposal.value, str):
381 381 if proposal.value in self.loop_runner_map:
382 382 runner, autoawait = self.loop_runner_map[proposal.value]
383 383 self.autoawait = autoawait
384 384 return runner
385 385 runner = import_item(proposal.value)
386 386 if not callable(runner):
387 387 raise ValueError('loop_runner must be callable')
388 388 return runner
389 389 if not callable(proposal.value):
390 390 raise ValueError('loop_runner must be callable')
391 391 return proposal.value
392 392
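# Editor's note: illustrative config usage matching the validator above; the
# dotted path "mypkg.my_runner" is a made-up example of an importable callable:
#
#     c.InteractiveShell.loop_runner = "trio"             # known name: also sets autoawait
#     c.InteractiveShell.loop_runner = "mypkg.my_runner"  # any importable callable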
393 393 automagic = Bool(True, help=
394 394 """
395 395 Enable magic commands to be called without the leading %.
396 396 """
397 397 ).tag(config=True)
398 398
399 399 banner1 = Unicode(default_banner,
400 400 help="""The part of the banner to be printed before the profile"""
401 401 ).tag(config=True)
402 402 banner2 = Unicode('',
403 403 help="""The part of the banner to be printed after the profile"""
404 404 ).tag(config=True)
405 405
406 406 cache_size = Integer(1000, help=
407 407 """
408 408 Set the size of the output cache. The default is 1000; you can
409 409 change it permanently in your config file. Setting it to 0 completely
410 410 disables the caching system, and the minimum value accepted is 3 (if
411 411 you provide a value less than 3, it is reset to 0 and a warning is
412 412 issued). This limit is defined because otherwise you'll spend more
413 413 time re-flushing a cache that is too small than working
414 414 """
415 415 ).tag(config=True)
416 416 color_info = Bool(True, help=
417 417 """
418 418 Use colors for displaying information about objects. Because this
419 419 information is passed through a pager (like 'less'), and some pagers
420 420 get confused with color codes, this capability can be turned off.
421 421 """
422 422 ).tag(config=True)
423 423 colors = CaselessStrEnum(('Neutral', 'NoColor','LightBG','Linux'),
424 424 default_value='Neutral',
425 425 help="Set the color scheme (NoColor, Neutral, Linux, or LightBG)."
426 426 ).tag(config=True)
427 427 debug = Bool(False).tag(config=True)
428 428 disable_failing_post_execute = Bool(False,
429 429 help="Don't call post-execute functions that have failed in the past."
430 430 ).tag(config=True)
431 431 display_formatter = Instance(DisplayFormatter, allow_none=True)
432 432 displayhook_class = Type(DisplayHook)
433 433 display_pub_class = Type(DisplayPublisher)
434 434 compiler_class = Type(CachingCompiler)
435 435 inspector_class = Type(
436 436 oinspect.Inspector, help="Class to use to instantiate the shell inspector"
437 437 ).tag(config=True)
438 438
439 439 sphinxify_docstring = Bool(False, help=
440 440 """
441 441 Enables rich html representation of docstrings. (This requires the
442 442 docrepr module).
443 443 """).tag(config=True)
444 444
445 445 @observe("sphinxify_docstring")
446 446 def _sphinxify_docstring_changed(self, change):
447 447 if change['new']:
448 448 warn("`sphinxify_docstring` is provisional since IPython 5.0 and might change in future versions." , ProvisionalWarning)
449 449
450 450 enable_html_pager = Bool(False, help=
451 451 """
452 452 (Provisional API) enables html representation in mime bundles sent
453 453 to pagers.
454 454 """).tag(config=True)
455 455
456 456 @observe("enable_html_pager")
457 457 def _enable_html_pager_changed(self, change):
458 458 if change['new']:
459 459 warn("`enable_html_pager` is provisional since IPython 5.0 and might change in future versions.", ProvisionalWarning)
460 460
461 461 data_pub_class = None
462 462
463 463 exit_now = Bool(False)
464 464 exiter = Instance(ExitAutocall)
465 465 @default('exiter')
466 466 def _exiter_default(self):
467 467 return ExitAutocall(self)
468 468 # Monotonically increasing execution counter
469 469 execution_count = Integer(1)
470 470 filename = Unicode("<ipython console>")
471 471 ipython_dir= Unicode('').tag(config=True) # Set to get_ipython_dir() in __init__
472 472
473 473 # Used to transform cells before running them, and check whether code is complete
474 474 input_transformer_manager = Instance('IPython.core.inputtransformer2.TransformerManager',
475 475 ())
476 476
477 477 @property
478 478 def input_transformers_cleanup(self):
479 479 return self.input_transformer_manager.cleanup_transforms
480 480
481 481 input_transformers_post: List = List(
482 482 [],
483 483 help="A list of string input transformers, to be applied after IPython's "
484 484 "own input transformations."
485 485 )
486 486
487 487 @property
488 488 def input_splitter(self):
489 489 """Make this available for backward compatibility (pre-7.0 release) with existing code.
490 490
491 491 For example, ipykernel currently uses
492 492 `shell.input_splitter.check_complete`
493 493 """
494 494 from warnings import warn
495 495 warn("`input_splitter` is deprecated since IPython 7.0, prefer `input_transformer_manager`.",
496 496 DeprecationWarning, stacklevel=2
497 497 )
498 498 return self.input_transformer_manager
499 499
500 500 logstart = Bool(False, help=
501 501 """
502 502 Start logging to the default log file in overwrite mode.
503 503 Use `logappend` to specify a log file to **append** logs to.
504 504 """
505 505 ).tag(config=True)
506 506 logfile = Unicode('', help=
507 507 """
508 508 The name of the logfile to use.
509 509 """
510 510 ).tag(config=True)
511 511 logappend = Unicode('', help=
512 512 """
513 513 Start logging to the given file in append mode.
514 514 Use `logfile` to specify a log file to **overwrite** logs to.
515 515 """
516 516 ).tag(config=True)
517 517 object_info_string_level = Enum((0,1,2), default_value=0,
518 518 ).tag(config=True)
519 519 pdb = Bool(False, help=
520 520 """
521 521 Automatically call the pdb debugger after every exception.
522 522 """
523 523 ).tag(config=True)
524 524 display_page = Bool(False,
525 525 help="""If True, anything that would be passed to the pager
526 526 will be displayed as regular output instead."""
527 527 ).tag(config=True)
528 528
529 529
530 530 show_rewritten_input = Bool(True,
531 531 help="Show rewritten input, e.g. for autocall."
532 532 ).tag(config=True)
533 533
534 534 quiet = Bool(False).tag(config=True)
535 535
536 536 history_length = Integer(10000,
537 537 help='Total length of command history'
538 538 ).tag(config=True)
539 539
540 540 history_load_length = Integer(1000, help=
541 541 """
542 542 The number of saved history entries to be loaded
543 543 into the history buffer at startup.
544 544 """
545 545 ).tag(config=True)
546 546
547 547 ast_node_interactivity = Enum(['all', 'last', 'last_expr', 'none', 'last_expr_or_assign'],
548 548 default_value='last_expr',
549 549 help="""
550 550 'all', 'last', 'last_expr', 'none' or 'last_expr_or_assign', specifying
551 551 which nodes should be run interactively (displaying output from expressions).
552 552 """
553 553 ).tag(config=True)
554 554
555 555 warn_venv = Bool(
556 556 True,
557 557 help="Warn if running in a virtual environment with no IPython installed (so IPython from the global environment is used).",
558 558 ).tag(config=True)
559 559
560 560 # TODO: this part of prompt management should be moved to the frontends.
561 561 # Use custom TraitTypes that convert '0'->'' and '\\n'->'\n'
562 562 separate_in = SeparateUnicode('\n').tag(config=True)
563 563 separate_out = SeparateUnicode('').tag(config=True)
564 564 separate_out2 = SeparateUnicode('').tag(config=True)
565 565 wildcards_case_sensitive = Bool(True).tag(config=True)
566 566 xmode = CaselessStrEnum(('Context', 'Plain', 'Verbose', 'Minimal'),
567 567 default_value='Context',
568 568 help="Switch modes for the IPython exception handlers."
569 569 ).tag(config=True)
570 570
571 571 # Subcomponents of InteractiveShell
572 572 alias_manager = Instance("IPython.core.alias.AliasManager", allow_none=True)
573 573 prefilter_manager = Instance(
574 574 "IPython.core.prefilter.PrefilterManager", allow_none=True
575 575 )
576 576 builtin_trap = Instance("IPython.core.builtin_trap.BuiltinTrap")
577 577 display_trap = Instance("IPython.core.display_trap.DisplayTrap")
578 578 extension_manager = Instance(
579 579 "IPython.core.extensions.ExtensionManager", allow_none=True
580 580 )
581 581 payload_manager = Instance("IPython.core.payload.PayloadManager", allow_none=True)
582 582 history_manager = Instance(
583 583 "IPython.core.history.HistoryAccessorBase", allow_none=True
584 584 )
585 585 magics_manager = Instance("IPython.core.magic.MagicsManager")
586 586
587 587 profile_dir = Instance('IPython.core.application.ProfileDir', allow_none=True)
588 588 @property
589 589 def profile(self):
590 590 if self.profile_dir is not None:
591 591 name = os.path.basename(self.profile_dir.location)
592 592 return name.replace('profile_','')
593 593
594 594
595 595 # Private interface
596 596 _post_execute = Dict()
597 597
598 598 # Tracks any GUI loop loaded for pylab
599 599 pylab_gui_select = None
600 600
601 601 last_execution_succeeded = Bool(True, help='Did the last executed command succeed')
602 602
603 603 last_execution_result = Instance('IPython.core.interactiveshell.ExecutionResult', help='Result of executing the last command', allow_none=True)
604 604
605 605 def __init__(self, ipython_dir=None, profile_dir=None,
606 606 user_module=None, user_ns=None,
607 607 custom_exceptions=((), None), **kwargs):
608 608 # This is where traits with a config_key argument are updated
609 609 # from the values on config.
610 610 super(InteractiveShell, self).__init__(**kwargs)
611 611 if 'PromptManager' in self.config:
612 612 warn('As of IPython 5.0 `PromptManager` config will have no effect'
613 613 ' and has been replaced by TerminalInteractiveShell.prompts_class')
614 614 self.configurables = [self]
615 615
616 616 # These are relatively independent and stateless
617 617 self.init_ipython_dir(ipython_dir)
618 618 self.init_profile_dir(profile_dir)
619 619 self.init_instance_attrs()
620 620 self.init_environment()
621 621
622 622 # Check if we're in a virtualenv, and set up sys.path.
623 623 self.init_virtualenv()
624 624
625 625 # Create namespaces (user_ns, user_global_ns, etc.)
626 626 self.init_create_namespaces(user_module, user_ns)
627 627 # This has to be done after init_create_namespaces because it uses
628 628 # something in self.user_ns, but before init_sys_modules, which
629 629 # is the first thing to modify sys.
630 630 # TODO: When we override sys.stdout and sys.stderr before this class
631 631 # is created, we are saving the overridden ones here. Not sure if this
632 632 # is what we want to do.
633 633 self.save_sys_module_state()
634 634 self.init_sys_modules()
635 635
636 636 # While we're trying to have each part of the code directly access what
637 637 # it needs without keeping redundant references to objects, we have too
638 638 # much legacy code that expects ip.db to exist.
639 639 self.db = PickleShareDB(os.path.join(self.profile_dir.location, 'db'))
640 640
641 641 self.init_history()
642 642 self.init_encoding()
643 643 self.init_prefilter()
644 644
645 645 self.init_syntax_highlighting()
646 646 self.init_hooks()
647 647 self.init_events()
648 648 self.init_pushd_popd_magic()
649 649 self.init_user_ns()
650 650 self.init_logger()
651 651 self.init_builtins()
652 652
653 653 # The following was in post_config_initialization
654 654 self.init_inspector()
655 655 self.raw_input_original = input
656 656 self.init_completer()
657 657 # TODO: init_io() needs to happen before init_traceback handlers
658 658 # because the traceback handlers hardcode the stdout/stderr streams.
659 659 # This logic is in debugger.Pdb and should eventually be changed.
660 660 self.init_io()
661 661 self.init_traceback_handlers(custom_exceptions)
662 662 self.init_prompts()
663 663 self.init_display_formatter()
664 664 self.init_display_pub()
665 665 self.init_data_pub()
666 666 self.init_displayhook()
667 667 self.init_magics()
668 668 self.init_alias()
669 669 self.init_logstart()
670 670 self.init_pdb()
671 671 self.init_extension_manager()
672 672 self.init_payload()
673 673 self.events.trigger('shell_initialized', self)
674 674 atexit.register(self.atexit_operations)
675 675
676 676 # The trio runner is used for running Trio in the foreground thread. It
677 677 # is different from `_trio_runner(async_fn)` in `async_helpers.py`
678 678 # which calls `trio.run()` for every cell. This runner runs all cells
679 679 # inside a single Trio event loop. If used, it is set from
680 680 # `ipykernel.kernelapp`.
681 681 self.trio_runner = None
682 682
683 683 def get_ipython(self):
684 684 """Return the currently running IPython instance."""
685 685 return self
686 686
687 687 #-------------------------------------------------------------------------
688 688 # Trait changed handlers
689 689 #-------------------------------------------------------------------------
690 690 @observe('ipython_dir')
691 691 def _ipython_dir_changed(self, change):
692 692 ensure_dir_exists(change['new'])
693 693
694 694 def set_autoindent(self,value=None):
695 695 """Set the autoindent flag.
696 696
697 697 If called with no arguments, it acts as a toggle."""
698 698 if value is None:
699 699 self.autoindent = not self.autoindent
700 700 else:
701 701 self.autoindent = value
702 702
703 703 def set_trio_runner(self, tr):
704 704 self.trio_runner = tr
705 705
706 706 #-------------------------------------------------------------------------
707 707 # init_* methods called by __init__
708 708 #-------------------------------------------------------------------------
709 709
710 710 def init_ipython_dir(self, ipython_dir):
711 711 if ipython_dir is not None:
712 712 self.ipython_dir = ipython_dir
713 713 return
714 714
715 715 self.ipython_dir = get_ipython_dir()
716 716
717 717 def init_profile_dir(self, profile_dir):
718 718 if profile_dir is not None:
719 719 self.profile_dir = profile_dir
720 720 return
721 721 self.profile_dir = ProfileDir.create_profile_dir_by_name(
722 722 self.ipython_dir, "default"
723 723 )
724 724
725 725 def init_instance_attrs(self):
726 726 self.more = False
727 727
728 728 # command compiler
729 729 self.compile = self.compiler_class()
730 730
731 731 # Make an empty namespace, which extension writers can rely on both
732 732 # existing and NEVER being used by ipython itself. This gives them a
733 733 # convenient location for storing additional information and state
734 734 # their extensions may require, without fear of collisions with other
735 735 # ipython names that may develop later.
736 736 self.meta = Struct()
737 737
738 738 # Temporary files used for various purposes. Deleted at exit.
739 739 # The files here are stored with Path from Pathlib
740 740 self.tempfiles = []
741 741 self.tempdirs = []
742 742
743 743 # keep track of where we started running (mainly for crash post-mortem)
744 744 # This is not being used anywhere currently.
745 745 self.starting_dir = os.getcwd()
746 746
747 747 # Indentation management
748 748 self.indent_current_nsp = 0
749 749
750 750 # Dict to track post-execution functions that have been registered
751 751 self._post_execute = {}
752 752
753 753 def init_environment(self):
754 754 """Any changes we need to make to the user's environment."""
755 755 pass
756 756
757 757 def init_encoding(self):
758 758 # Get system encoding at startup time. Certain terminals (like Emacs
759 759 # under Win32) have it set to None, and we need to have a known valid
760 760 # encoding to use in the raw_input() method
761 761 try:
762 762 self.stdin_encoding = sys.stdin.encoding or 'ascii'
763 763 except AttributeError:
764 764 self.stdin_encoding = 'ascii'
765 765
766 766
767 767 @observe('colors')
768 768 def init_syntax_highlighting(self, changes=None):
769 769 # Python source parser/formatter for syntax highlighting
770 770 pyformat = PyColorize.Parser(style=self.colors, parent=self).format
771 771 self.pycolorize = lambda src: pyformat(src,'str')
772 772
773 773 def refresh_style(self):
774 774 # No-op here, used in subclass
775 775 pass
776 776
777 777 def init_pushd_popd_magic(self):
778 778 # for pushd/popd management
779 779 self.home_dir = get_home_dir()
780 780
781 781 self.dir_stack = []
782 782
783 783 def init_logger(self):
784 784 self.logger = Logger(self.home_dir, logfname='ipython_log.py',
785 785 logmode='rotate')
786 786
787 787 def init_logstart(self):
788 788 """Initialize logging in case it was requested at the command line.
789 789 """
790 790 if self.logappend:
791 791 self.magic('logstart %s append' % self.logappend)
792 792 elif self.logfile:
793 793 self.magic('logstart %s' % self.logfile)
794 794 elif self.logstart:
795 795 self.magic('logstart')
796 796
797 797
798 798 def init_builtins(self):
799 799 # A single, static flag that we set to True. Its presence indicates
800 800 # that an IPython shell has been created, and we make no attempts at
801 801 # removing on exit or representing the existence of more than one
802 802 # IPython at a time.
803 803 builtin_mod.__dict__['__IPYTHON__'] = True
804 804 builtin_mod.__dict__['display'] = display
805 805
806 806 self.builtin_trap = BuiltinTrap(shell=self)
807 807
808 808 @observe('colors')
809 809 def init_inspector(self, changes=None):
810 810 # Object inspector
811 811 self.inspector = self.inspector_class(
812 812 oinspect.InspectColors,
813 813 PyColorize.ANSICodeColors,
814 814 self.colors,
815 815 self.object_info_string_level,
816 816 )
817 817
818 818 def init_io(self):
819 819 # implemented in subclasses, TerminalInteractiveShell does call
820 820 # colorama.init().
821 821 pass
822 822
823 823 def init_prompts(self):
824 824 # Set system prompts, so that scripts can decide if they are running
825 825 # interactively.
826 826 sys.ps1 = 'In : '
827 827 sys.ps2 = '...: '
828 828 sys.ps3 = 'Out: '
829 829
830 830 def init_display_formatter(self):
831 831 self.display_formatter = DisplayFormatter(parent=self)
832 832 self.configurables.append(self.display_formatter)
833 833
834 834 def init_display_pub(self):
835 835 self.display_pub = self.display_pub_class(parent=self, shell=self)
836 836 self.configurables.append(self.display_pub)
837 837
838 838 def init_data_pub(self):
839 839 if not self.data_pub_class:
840 840 self.data_pub = None
841 841 return
842 842 self.data_pub = self.data_pub_class(parent=self)
843 843 self.configurables.append(self.data_pub)
844 844
845 845 def init_displayhook(self):
846 846 # Initialize displayhook, set in/out prompts and printing system
847 847 self.displayhook = self.displayhook_class(
848 848 parent=self,
849 849 shell=self,
850 850 cache_size=self.cache_size,
851 851 )
852 852 self.configurables.append(self.displayhook)
853 853 # This is a context manager that installs/removes the displayhook at
854 854 # the appropriate time.
855 855 self.display_trap = DisplayTrap(hook=self.displayhook)
856 856
857 857 @staticmethod
858 858 def get_path_links(p: Path):
859 859 """Gets path links including all symlinks
860 860
861 861 Examples
862 862 --------
863 863 In [1]: from IPython.core.interactiveshell import InteractiveShell
864 864
865 865 In [2]: import sys, pathlib
866 866
867 867 In [3]: paths = InteractiveShell.get_path_links(pathlib.Path(sys.executable))
868 868
869 869 In [4]: len(paths) == len(set(paths))
870 870 Out[4]: True
871 871
872 872 In [5]: bool(paths)
873 873 Out[5]: True
874 874 """
875 875 paths = [p]
876 876 while p.is_symlink():
877 877 new_path = Path(os.readlink(p))
878 878 if not new_path.is_absolute():
879 879 new_path = p.parent / new_path
880 880 p = new_path
881 881 paths.append(p)
882 882 return paths
883 883
884 884 def init_virtualenv(self):
885 885 """Add the current virtualenv to sys.path so the user can import modules from it.
886 886 This isn't perfect: it doesn't use the Python interpreter with which the
887 887 virtualenv was built, and it ignores the --no-site-packages option. A
888 888 warning will appear suggesting the user installs IPython in the
889 889 virtualenv, but for many cases, it probably works well enough.
890 890
891 891 Adapted from code snippets online.
892 892
893 893 http://blog.ufsoft.org/2009/1/29/ipython-and-virtualenv
894 894 """
895 895 if 'VIRTUAL_ENV' not in os.environ:
896 896 # Not in a virtualenv
897 897 return
898 898 elif os.environ["VIRTUAL_ENV"] == "":
899 899 warn("Virtual env path set to '', please check if this is intended.")
900 900 return
901 901
902 902 p = Path(sys.executable)
903 903 p_venv = Path(os.environ["VIRTUAL_ENV"])
904 904
905 905 # fallback venv detection:
906 906 # stdlib venv may symlink sys.executable, so we can't use realpath.
907 907 # but others can symlink *to* the venv Python, so we can't just use sys.executable.
908 908 # So we just check every item in the symlink tree (generally <= 3)
909 909 paths = self.get_path_links(p)
910 910
911 911 # In Cygwin paths like "c:\..." and '\cygdrive\c\...' are possible
912 912 if len(p_venv.parts) > 2 and p_venv.parts[1] == "cygdrive":
913 913 drive_name = p_venv.parts[2]
914 914 p_venv = (drive_name + ":/") / Path(*p_venv.parts[3:])
915 915
916 916 if any(p_venv == p.parents[1] for p in paths):
917 917 # Our exe is inside or has access to the virtualenv, don't need to do anything.
918 918 return
919 919
920 920 if sys.platform == "win32":
921 921 virtual_env = str(Path(os.environ["VIRTUAL_ENV"], "Lib", "site-packages"))
922 922 else:
923 923 virtual_env_path = Path(
924 924 os.environ["VIRTUAL_ENV"], "lib", "python{}.{}", "site-packages"
925 925 )
926 926 p_ver = sys.version_info[:2]
927 927
928 928 # Predict version from py[thon]-x.x in the $VIRTUAL_ENV
929 929 re_m = re.search(r"\bpy(?:thon)?([23])\.(\d+)\b", os.environ["VIRTUAL_ENV"])
930 930 if re_m:
931 931 predicted_path = Path(str(virtual_env_path).format(*re_m.groups()))
932 932 if predicted_path.exists():
933 933 p_ver = re_m.groups()
934 934
935 935 virtual_env = str(virtual_env_path).format(*p_ver)
936 936 if self.warn_venv:
937 937 warn(
938 938 "Attempting to work in a virtualenv. If you encounter problems, "
939 939 "please install IPython inside the virtualenv."
940 940 )
941 941 import site
942 942 sys.path.insert(0, virtual_env)
943 943 site.addsitedir(virtual_env)
944 944
945 945 #-------------------------------------------------------------------------
946 946 # Things related to injections into the sys module
947 947 #-------------------------------------------------------------------------
948 948
949 949 def save_sys_module_state(self):
950 950 """Save the state of hooks in the sys module.
951 951
952 952 This has to be called after self.user_module is created.
953 953 """
954 954 self._orig_sys_module_state = {'stdin': sys.stdin,
955 955 'stdout': sys.stdout,
956 956 'stderr': sys.stderr,
957 957 'excepthook': sys.excepthook}
958 958 self._orig_sys_modules_main_name = self.user_module.__name__
959 959 self._orig_sys_modules_main_mod = sys.modules.get(self.user_module.__name__)
960 960
961 961 def restore_sys_module_state(self):
962 962 """Restore the state of the sys module."""
963 963 try:
964 964 for k, v in self._orig_sys_module_state.items():
965 965 setattr(sys, k, v)
966 966 except AttributeError:
967 967 pass
968 968 # Reset what was done in self.init_sys_modules
969 969 if self._orig_sys_modules_main_mod is not None:
970 970 sys.modules[self._orig_sys_modules_main_name] = self._orig_sys_modules_main_mod
971 971
972 972 #-------------------------------------------------------------------------
973 973 # Things related to the banner
974 974 #-------------------------------------------------------------------------
975 975
976 976 @property
977 977 def banner(self):
978 978 banner = self.banner1
979 979 if self.profile and self.profile != 'default':
980 980 banner += '\nIPython profile: %s\n' % self.profile
981 981 if self.banner2:
982 982 banner += '\n' + self.banner2
983 983 return banner
984 984
985 985 def show_banner(self, banner=None):
986 986 if banner is None:
987 987 banner = self.banner
988 988 sys.stdout.write(banner)
989 989
990 990 #-------------------------------------------------------------------------
991 991 # Things related to hooks
992 992 #-------------------------------------------------------------------------
993 993
994 994 def init_hooks(self):
995 995 # hooks holds pointers used for user-side customizations
996 996 self.hooks = Struct()
997 997
998 998 self.strdispatchers = {}
999 999
1000 1000 # Set all default hooks, defined in the IPython.hooks module.
1001 1001 hooks = IPython.core.hooks
1002 1002 for hook_name in hooks.__all__:
1003 1003 # default hooks have priority 100, i.e. low; user hooks should have
1004 1004 # 0-100 priority
1005 1005 self.set_hook(hook_name, getattr(hooks, hook_name), 100)
1006 1006
1007 1007 if self.display_page:
1008 1008 self.set_hook('show_in_pager', page.as_hook(page.display_page), 90)
1009 1009
1010 1010 def set_hook(self, name, hook, priority=50, str_key=None, re_key=None):
1011 1011 """set_hook(name,hook) -> sets an internal IPython hook.
1012 1012
1013 1013 IPython exposes some of its internal API as user-modifiable hooks. By
1014 1014 adding your function to one of these hooks, you can modify IPython's
1015 1015 behavior to call at runtime your own routines."""
1016 1016
1017 1017 # At some point in the future, this should validate the hook before it
1018 1018 # accepts it. Probably at least check that the hook takes the number
1019 1019 # of args it's supposed to.
1020 1020
1021 1021 f = types.MethodType(hook,self)
1022 1022
1023 1023 # check if the hook is for strdispatcher first
1024 1024 if str_key is not None:
1025 1025 sdp = self.strdispatchers.get(name, StrDispatch())
1026 1026 sdp.add_s(str_key, f, priority )
1027 1027 self.strdispatchers[name] = sdp
1028 1028 return
1029 1029 if re_key is not None:
1030 1030 sdp = self.strdispatchers.get(name, StrDispatch())
1031 1031 sdp.add_re(re.compile(re_key), f, priority )
1032 1032 self.strdispatchers[name] = sdp
1033 1033 return
1034 1034
1035 1035 dp = getattr(self.hooks, name, None)
1036 1036 if name not in IPython.core.hooks.__all__:
1037 1037 print("Warning! Hook '%s' is not one of %s" % \
1038 1038 (name, IPython.core.hooks.__all__ ))
1039 1039
1040 1040 if name in IPython.core.hooks.deprecated:
1041 1041 alternative = IPython.core.hooks.deprecated[name]
1042 1042 raise ValueError(
1043 1043 "Hook {} has been deprecated since IPython 5.0. Use {} instead.".format(
1044 1044 name, alternative
1045 1045 )
1046 1046 )
1047 1047
1048 1048 if not dp:
1049 1049 dp = IPython.core.hooks.CommandChainDispatcher()
1050 1050
1051 1051 try:
1052 1052 dp.add(f,priority)
1053 1053 except AttributeError:
1054 1054 # it was not commandchain, plain old func - replace
1055 1055 dp = f
1056 1056
1057 1057 setattr(self.hooks,name, dp)
1058 1058
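# Editor's note: a hedged sketch of registering a hook through set_hook().
# 'editor' is one of the hooks listed in IPython.core.hooks.__all__;
# my_editor is a made-up function name used only for illustration:
#
#     def my_editor(self, filename, linenum=None, wait=True):
#         ...   # launch the editor of your choice
#
#     get_ipython().set_hook('editor', my_editor)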
1059 1059 #-------------------------------------------------------------------------
1060 1060 # Things related to events
1061 1061 #-------------------------------------------------------------------------
1062 1062
1063 1063 def init_events(self):
1064 1064 self.events = EventManager(self, available_events)
1065 1065
1066 1066 self.events.register("pre_execute", self._clear_warning_registry)
1067 1067
1068 1068 def register_post_execute(self, func):
1069 1069 """DEPRECATED: Use ip.events.register('post_run_cell', func)
1070 1070
1071 1071 Register a function for calling after code execution.
1072 1072 """
1073 1073 raise ValueError(
1074 1074 "ip.register_post_execute is deprecated since IPython 1.0, use "
1075 1075 "ip.events.register('post_run_cell', func) instead."
1076 1076 )
1077 1077
1078 1078 def _clear_warning_registry(self):
1079 1079 # clear the warning registry, so that different code blocks with
1080 1080 # overlapping line number ranges don't cause spurious suppression of
1081 1081 # warnings (see gh-6611 for details)
1082 1082 if "__warningregistry__" in self.user_global_ns:
1083 1083 del self.user_global_ns["__warningregistry__"]
1084 1084
1085 1085 #-------------------------------------------------------------------------
1086 1086 # Things related to the "main" module
1087 1087 #-------------------------------------------------------------------------
1088 1088
1089 1089 def new_main_mod(self, filename, modname):
1090 1090 """Return a new 'main' module object for user code execution.
1091 1091
1092 1092 ``filename`` should be the path of the script which will be run in the
1093 1093 module. Requests with the same filename will get the same module, with
1094 1094 its namespace cleared.
1095 1095
1096 1096 ``modname`` should be the module name - normally either '__main__' or
1097 1097 the basename of the file without the extension.
1098 1098
1099 1099 When scripts are executed via %run, we must keep a reference to their
1100 1100 __main__ module around so that Python doesn't
1101 1101 clear it, rendering references to module globals useless.
1102 1102
1103 1103 This method keeps said reference in a private dict, keyed by the
1104 1104 absolute path of the script. This way, for multiple executions of the
1105 1105 same script we only keep one copy of the namespace (the last one),
1106 1106 thus preventing memory leaks from old references while allowing the
1107 1107 objects from the last execution to be accessible.
1108 1108 """
1109 1109 filename = os.path.abspath(filename)
1110 1110 try:
1111 1111 main_mod = self._main_mod_cache[filename]
1112 1112 except KeyError:
1113 1113 main_mod = self._main_mod_cache[filename] = types.ModuleType(
1114 1114 modname,
1115 1115 doc="Module created for script run in IPython")
1116 1116 else:
1117 1117 main_mod.__dict__.clear()
1118 1118 main_mod.__name__ = modname
1119 1119
1120 1120 main_mod.__file__ = filename
1121 1121 # It seems pydoc (and perhaps others) needs any module instance to
1122 1122 # implement a __nonzero__ method
1123 1123 main_mod.__nonzero__ = lambda : True
1124 1124
1125 1125 return main_mod
1126 1126
1127 1127 def clear_main_mod_cache(self):
1128 1128 """Clear the cache of main modules.
1129 1129
1130 1130 Mainly for use by utilities like %reset.
1131 1131
1132 1132 Examples
1133 1133 --------
1134 1134 In [15]: import IPython
1135 1135
1136 1136 In [16]: m = _ip.new_main_mod(IPython.__file__, 'IPython')
1137 1137
1138 1138 In [17]: len(_ip._main_mod_cache) > 0
1139 1139 Out[17]: True
1140 1140
1141 1141 In [18]: _ip.clear_main_mod_cache()
1142 1142
1143 1143 In [19]: len(_ip._main_mod_cache) == 0
1144 1144 Out[19]: True
1145 1145 """
1146 1146 self._main_mod_cache.clear()
1147 1147
1148 1148 #-------------------------------------------------------------------------
1149 1149 # Things related to debugging
1150 1150 #-------------------------------------------------------------------------
1151 1151
1152 1152 def init_pdb(self):
1153 1153 # Set calling of pdb on exceptions
1154 1154 # self.call_pdb is a property
1155 1155 self.call_pdb = self.pdb
1156 1156
1157 1157 def _get_call_pdb(self):
1158 1158 return self._call_pdb
1159 1159
1160 1160 def _set_call_pdb(self,val):
1161 1161
1162 1162 if val not in (0,1,False,True):
1163 1163 raise ValueError('new call_pdb value must be boolean')
1164 1164
1165 1165 # store value in instance
1166 1166 self._call_pdb = val
1167 1167
1168 1168 # notify the actual exception handlers
1169 1169 self.InteractiveTB.call_pdb = val
1170 1170
1171 1171 call_pdb = property(_get_call_pdb,_set_call_pdb,None,
1172 1172 'Control auto-activation of pdb at exceptions')
1173 1173
1174 1174 def debugger(self,force=False):
1175 1175 """Call the pdb debugger.
1176 1176
1177 1177 Keywords:
1178 1178
1179 1179 - force(False): by default, this routine checks the instance call_pdb
1180 1180 flag and does not actually invoke the debugger if the flag is false.
1181 1181 The 'force' option forces the debugger to activate even if the flag
1182 1182 is false.
1183 1183 """
1184 1184
1185 1185 if not (force or self.call_pdb):
1186 1186 return
1187 1187
1188 1188 if not hasattr(sys,'last_traceback'):
1189 1189 error('No traceback has been produced, nothing to debug.')
1190 1190 return
1191 1191
1192 1192 self.InteractiveTB.debugger(force=True)
1193 1193
1194 1194 #-------------------------------------------------------------------------
1195 1195 # Things related to IPython's various namespaces
1196 1196 #-------------------------------------------------------------------------
1197 1197 default_user_namespaces = True
1198 1198
1199 1199 def init_create_namespaces(self, user_module=None, user_ns=None):
1200 1200 # Create the namespace where the user will operate. user_ns is
1201 1201 # normally the only one used, and it is passed to the exec calls as
1202 1202 # the locals argument. But we do carry a user_global_ns namespace
1203 1203 # given as the exec 'globals' argument. This is useful in embedding
1204 1204 # situations where the ipython shell opens in a context where the
1205 1205 # distinction between locals and globals is meaningful. For
1206 1206 # non-embedded contexts, it is just the same object as the user_ns dict.
1207 1207
1208 1208 # FIXME. For some strange reason, __builtins__ is showing up at user
1209 1209 # level as a dict instead of a module. This is a manual fix, but I
1210 1210 # should really track down where the problem is coming from. Alex
1211 1211 # Schmolck reported this problem first.
1212 1212
1213 1213 # A useful post by Alex Martelli on this topic:
1214 1214 # Re: inconsistent value from __builtins__
1215 1215 # From: Alex Martelli <aleaxit@yahoo.com>
1216 1216 # Date: Friday, 01 October 2004, 04:45:34 PM
1217 1217 # Groups: comp.lang.python
1218 1218
1219 1219 # Michael Hohn <hohn@hooknose.lbl.gov> wrote:
1220 1220 # > >>> print type(builtin_check.get_global_binding('__builtins__'))
1221 1221 # > <type 'dict'>
1222 1222 # > >>> print type(__builtins__)
1223 1223 # > <type 'module'>
1224 1224 # > Is this difference in return value intentional?
1225 1225
1226 1226 # Well, it's documented that '__builtins__' can be either a dictionary
1227 1227 # or a module, and it's been that way for a long time. Whether it's
1228 1228 # intentional (or sensible), I don't know. In any case, the idea is
1229 1229 # that if you need to access the built-in namespace directly, you
1230 1230 # should start with "import __builtin__" (note, no 's') which will
1231 1231 # definitely give you a module. Yeah, it's somewhat confusing:-(.
1232 1232
1233 1233 # These routines return a properly built module and dict as needed by
1234 1234 # the rest of the code, and can also be used by extension writers to
1235 1235 # generate properly initialized namespaces.
1236 1236 if (user_ns is not None) or (user_module is not None):
1237 1237 self.default_user_namespaces = False
1238 1238 self.user_module, self.user_ns = self.prepare_user_module(user_module, user_ns)
1239 1239
1240 1240 # A record of hidden variables we have added to the user namespace, so
1241 1241 # we can list later only variables defined in actual interactive use.
1242 1242 self.user_ns_hidden = {}
1243 1243
1244 1244 # Now that FakeModule produces a real module, we've run into a nasty
1245 1245 # problem: after script execution (via %run), the module where the user
1246 1246 # code ran is deleted. Now that this object is a true module (needed
1247 1247 # so doctest and other tools work correctly), the Python module
1248 1248 # teardown mechanism runs over it, and sets to None every variable
1249 1249 # present in that module. Top-level references to objects from the
1250 1250 # script survive, because the user_ns is updated with them. However,
1251 1251 # calling functions defined in the script that use other things from
1252 1252 # the script will fail, because the function's closure had references
1253 1253 # to the original objects, which are now all None. So we must protect
1254 1254 # these modules from deletion by keeping a cache.
1255 1255 #
1256 1256 # To avoid keeping stale modules around (we only need the one from the
1257 1257 # last run), we use a dict keyed with the full path to the script, so
1258 1258 # only the last version of the module is held in the cache. Note,
1259 1259 # however, that we must cache the module *namespace contents* (their
1260 1260 # __dict__). Because if we try to cache the actual modules, old ones
1261 1261 # (uncached) could be destroyed while still holding references (such as
1262 1262 # those held by GUI objects that tend to be long-lived).
1263 1263 #
1264 1264 # The %reset command will flush this cache. See the cache_main_mod()
1265 1265 # and clear_main_mod_cache() methods for details on use.
1266 1266
1267 1267 # This is the cache used for 'main' namespaces
1268 1268 self._main_mod_cache = {}
1269 1269
1270 1270 # A table holding all the namespaces IPython deals with, so that
1271 1271 # introspection facilities can search easily.
1272 1272 self.ns_table = {'user_global':self.user_module.__dict__,
1273 1273 'user_local':self.user_ns,
1274 1274 'builtin':builtin_mod.__dict__
1275 1275 }
1276 1276
1277 1277 @property
1278 1278 def user_global_ns(self):
1279 1279 return self.user_module.__dict__
1280 1280
1281 1281 def prepare_user_module(self, user_module=None, user_ns=None):
1282 1282 """Prepare the module and namespace in which user code will be run.
1283 1283
1284 1284 When IPython is started normally, both parameters are None: a new module
1285 1285 is created automatically, and its __dict__ used as the namespace.
1286 1286
1287 1287 If only user_module is provided, its __dict__ is used as the namespace.
1288 1288 If only user_ns is provided, a dummy module is created, and user_ns
1289 1289 becomes the global namespace. If both are provided (as they may be
1290 1290 when embedding), user_ns is the local namespace, and user_module
1291 1291 provides the global namespace.
1292 1292
1293 1293 Parameters
1294 1294 ----------
1295 1295 user_module : module, optional
1296 1296 The current user module in which IPython is being run. If None,
1297 1297 a clean module will be created.
1298 1298 user_ns : dict, optional
1299 1299 A namespace in which to run interactive commands.
1300 1300
1301 1301 Returns
1302 1302 -------
1303 1303 A tuple of user_module and user_ns, each properly initialised.
1304 1304 """
1305 1305 if user_module is None and user_ns is not None:
1306 1306 user_ns.setdefault("__name__", "__main__")
1307 1307 user_module = DummyMod()
1308 1308 user_module.__dict__ = user_ns
1309 1309
1310 1310 if user_module is None:
1311 1311 user_module = types.ModuleType("__main__",
1312 1312 doc="Automatically created module for IPython interactive environment")
1313 1313
1314 1314 # We must ensure that __builtin__ (without the final 's') is always
1315 1315 # available and pointing to the __builtin__ *module*. For more details:
1316 1316 # http://mail.python.org/pipermail/python-dev/2001-April/014068.html
1317 1317 user_module.__dict__.setdefault('__builtin__', builtin_mod)
1318 1318 user_module.__dict__.setdefault('__builtins__', builtin_mod)
1319 1319
1320 1320 if user_ns is None:
1321 1321 user_ns = user_module.__dict__
1322 1322
1323 1323 return user_module, user_ns
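
# Illustrative sketch (not part of the original source): how embedding code
# might supply its own namespace. ``shell`` stands for an existing
# InteractiveShell instance and ``my_ns`` is a hypothetical dict.
#
#     my_ns = {'answer': 42}
#     module, ns = shell.prepare_user_module(user_ns=my_ns)
#     assert ns is my_ns and module.__dict__ is my_ns
#     assert '__builtin__' in ns   # builtins reference injected automatically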
1324 1324
1325 1325 def init_sys_modules(self):
1326 1326 # We need to insert into sys.modules something that looks like a
1327 1327 # module but which accesses the IPython namespace, for shelve and
1328 1328 # pickle to work interactively. Normally they rely on getting
1329 1329 # everything out of __main__, but for embedding purposes each IPython
1330 1330 # instance has its own private namespace, so we can't go shoving
1331 1331 # everything into __main__.
1332 1332
1333 1333 # note, however, that we should only do this for non-embedded
1334 1334 # ipythons, which really mimic the __main__.__dict__ with their own
1335 1335 # namespace. Embedded instances, on the other hand, should not do
1336 1336 # this because they need to manage the user local/global namespaces
1337 1337 # only, but they live within a 'normal' __main__ (meaning, they
1338 1338 # shouldn't overtake the execution environment of the script they're
1339 1339 # embedded in).
1340 1340
1341 1341 # This is overridden in the InteractiveShellEmbed subclass to a no-op.
1342 1342 main_name = self.user_module.__name__
1343 1343 sys.modules[main_name] = self.user_module
1344 1344
1345 1345 def init_user_ns(self):
1346 1346 """Initialize all user-visible namespaces to their minimum defaults.
1347 1347
1348 1348 Certain history lists are also initialized here, as they effectively
1349 1349 act as user namespaces.
1350 1350
1351 1351 Notes
1352 1352 -----
1353 1353 All data structures here are only filled in, they are NOT reset by this
1354 1354 method. If they were not empty before, data will simply be added to
1355 1355 them.
1356 1356 """
1357 1357 # This function works in two parts: first we put a few things in
1358 1358 # user_ns, and we sync those contents into user_ns_hidden so that these
1359 1359 # initial variables aren't shown by %who. After the sync, we add the
1360 1360 # rest of what we *do* want the user to see with %who even on a new
1361 1361 # session (probably nothing, so they really only see their own stuff)
1362 1362
1363 1363 # The user dict must *always* have a __builtin__ reference to the
1364 1364 # Python standard __builtin__ namespace, which must be imported.
1365 1365 # This is so that certain operations in prompt evaluation can be
1366 1366 # reliably executed with builtins. Note that we can NOT use
1367 1367 # __builtins__ (note the 's'), because that can either be a dict or a
1368 1368 # module, and can even mutate at runtime, depending on the context
1369 1369 # (Python makes no guarantees on it). In contrast, __builtin__ is
1370 1370 # always a module object, though it must be explicitly imported.
1371 1371
1372 1372 # For more details:
1373 1373 # http://mail.python.org/pipermail/python-dev/2001-April/014068.html
1374 1374 ns = {}
1375 1375
1376 1376 # make global variables for user access to the histories
1377 1377 ns['_ih'] = self.history_manager.input_hist_parsed
1378 1378 ns['_oh'] = self.history_manager.output_hist
1379 1379 ns['_dh'] = self.history_manager.dir_hist
1380 1380
1381 1381 # user aliases to input and output histories. These shouldn't show up
1382 1382 # in %who, as they can have very large reprs.
1383 1383 ns['In'] = self.history_manager.input_hist_parsed
1384 1384 ns['Out'] = self.history_manager.output_hist
1385 1385
1386 1386 # Store myself as the public api!!!
1387 1387 ns['get_ipython'] = self.get_ipython
1388 1388
1389 1389 ns['exit'] = self.exiter
1390 1390 ns['quit'] = self.exiter
1391 1391 ns["open"] = _modified_open
1392 1392
1393 1393 # Sync what we've added so far to user_ns_hidden so these aren't seen
1394 1394 # by %who
1395 1395 self.user_ns_hidden.update(ns)
1396 1396
1397 1397 # Anything put into ns now would show up in %who. Think twice before
1398 1398 # putting anything here, as we really want %who to show the user their
1399 1399 # stuff, not our variables.
1400 1400
1401 1401 # Finally, update the real user's namespace
1402 1402 self.user_ns.update(ns)
1403 1403
1404 1404 @property
1405 1405 def all_ns_refs(self):
1406 1406 """Get a list of references to all the namespace dictionaries in which
1407 1407 IPython might store a user-created object.
1408 1408
1409 1409 Note that this does not include the displayhook, which also caches
1410 1410 objects from the output."""
1411 1411 return [self.user_ns, self.user_global_ns, self.user_ns_hidden] + \
1412 1412 [m.__dict__ for m in self._main_mod_cache.values()]
1413 1413
1414 1414 def reset(self, new_session=True, aggressive=False):
1415 1415 """Clear all internal namespaces, and attempt to release references to
1416 1416 user objects.
1417 1417
1418 1418 If new_session is True, a new history session will be opened.
1419 1419 """
1420 1420 # Clear histories
1421 1421 assert self.history_manager is not None
1422 1422 self.history_manager.reset(new_session)
1423 1423 # Reset counter used to index all histories
1424 1424 if new_session:
1425 1425 self.execution_count = 1
1426 1426
1427 1427 # Reset last execution result
1428 1428 self.last_execution_succeeded = True
1429 1429 self.last_execution_result = None
1430 1430
1431 1431 # Flush cached output items
1432 1432 if self.displayhook.do_full_cache:
1433 1433 self.displayhook.flush()
1434 1434
1435 1435 # The main execution namespaces must be cleared very carefully,
1436 1436 # skipping the deletion of the builtin-related keys, because doing so
1437 1437 # would cause errors in many objects' __del__ methods.
1438 1438 if self.user_ns is not self.user_global_ns:
1439 1439 self.user_ns.clear()
1440 1440 ns = self.user_global_ns
1441 1441 drop_keys = set(ns.keys())
1442 1442 drop_keys.discard('__builtin__')
1443 1443 drop_keys.discard('__builtins__')
1444 1444 drop_keys.discard('__name__')
1445 1445 for k in drop_keys:
1446 1446 del ns[k]
1447 1447
1448 1448 self.user_ns_hidden.clear()
1449 1449
1450 1450 # Restore the user namespaces to minimal usability
1451 1451 self.init_user_ns()
1452 1452 if aggressive and not hasattr(self, "_sys_modules_keys"):
1453 1453 print("Cannot restore sys.modules, no snapshot")
1454 1454 elif aggressive:
1455 1455 print("culling sys module...")
1456 1456 current_keys = set(sys.modules.keys())
1457 1457 for k in current_keys - self._sys_modules_keys:
1458 1458 if k.startswith("multiprocessing"):
1459 1459 continue
1460 1460 del sys.modules[k]
1461 1461
1462 1462 # Restore the default and user aliases
1463 1463 self.alias_manager.clear_aliases()
1464 1464 self.alias_manager.init_aliases()
1465 1465
1466 1466 # Now define aliases that only make sense on the terminal, because they
1467 1467 # need direct access to the console in a way that we can't emulate in
1468 1468 # GUI or web frontend
1469 1469 if os.name == 'posix':
1470 1470 for cmd in ('clear', 'more', 'less', 'man'):
1471 1471 if cmd not in self.magics_manager.magics['line']:
1472 1472 self.alias_manager.soft_define_alias(cmd, cmd)
1473 1473
1474 1474 # Flush the private list of module references kept for script
1475 1475 # execution protection
1476 1476 self.clear_main_mod_cache()
1477 1477
1478 1478 def del_var(self, varname, by_name=False):
1479 1479 """Delete a variable from the various namespaces, so that, as
1480 1480 far as possible, we're not keeping any hidden references to it.
1481 1481
1482 1482 Parameters
1483 1483 ----------
1484 1484 varname : str
1485 1485 The name of the variable to delete.
1486 1486 by_name : bool
1487 1487 If True, delete variables with the given name in each
1488 1488 namespace. If False (default), find the variable in the user
1489 1489 namespace, and delete references to it.
1490 1490 """
1491 1491 if varname in ('__builtin__', '__builtins__'):
1492 1492 raise ValueError("Refusing to delete %s" % varname)
1493 1493
1494 1494 ns_refs = self.all_ns_refs
1495 1495
1496 1496 if by_name: # Delete by name
1497 1497 for ns in ns_refs:
1498 1498 try:
1499 1499 del ns[varname]
1500 1500 except KeyError:
1501 1501 pass
1502 1502 else: # Delete by object
1503 1503 try:
1504 1504 obj = self.user_ns[varname]
1505 1505 except KeyError as e:
1506 1506 raise NameError("name '%s' is not defined" % varname) from e
1507 1507 # Also check in output history
1508 1508 assert self.history_manager is not None
1509 1509 ns_refs.append(self.history_manager.output_hist)
1510 1510 for ns in ns_refs:
1511 1511 to_delete = [n for n, o in ns.items() if o is obj]
1512 1512 for name in to_delete:
1513 1513 del ns[name]
1514 1514
1515 1515 # Ensure it is removed from the last execution result
1516 1516 if self.last_execution_result.result is obj:
1517 1517 self.last_execution_result = None
1518 1518
1519 1519 # displayhook keeps extra references, but not in a dictionary
1520 1520 for name in ('_', '__', '___'):
1521 1521 if getattr(self.displayhook, name) is obj:
1522 1522 setattr(self.displayhook, name, None)
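
# Illustrative sketch (not part of the original source): deleting a user
# variable together with other references to the same object. ``shell``
# stands for an existing InteractiveShell instance.
#
#     shell.run_cell("data = [1, 2, 3]")
#     shell.del_var('data')                 # drops 'data' and aliases to that list
#     shell.del_var('tmp', by_name=True)    # drop any 'tmp' binding in every namespace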
1523 1523
1524 1524 def reset_selective(self, regex=None):
1525 1525 """Clear selective variables from internal namespaces based on a
1526 1526 specified regular expression.
1527 1527
1528 1528 Parameters
1529 1529 ----------
1530 1530 regex : string or compiled pattern, optional
1531 1531 A regular expression pattern that will be used in searching
1532 1532 variable names in the users namespaces.
1533 1533 """
1534 1534 if regex is not None:
1535 1535 try:
1536 1536 m = re.compile(regex)
1537 1537 except TypeError as e:
1538 1538 raise TypeError('regex must be a string or compiled pattern') from e
1539 1539 # Search for keys in each namespace that match the given regex
1540 1540 # If a match is found, delete the key/value pair.
1541 1541 for ns in self.all_ns_refs:
1542 1542 for var in list(ns): # copy keys: deleting while iterating a dict raises RuntimeError
1543 1543 if m.search(var):
1544 1544 del ns[var]
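
# Illustrative sketch (not part of the original source): clearing every user
# variable whose name starts with ``tmp_``. ``shell`` stands for an existing
# InteractiveShell instance.
#
#     shell.user_ns.update(tmp_a=1, tmp_b=2, keep=3)
#     shell.reset_selective(r'^tmp_')
#     assert 'keep' in shell.user_ns and 'tmp_a' not in shell.user_ns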
1545 1545
1546 1546 def push(self, variables, interactive=True):
1547 1547 """Inject a group of variables into the IPython user namespace.
1548 1548
1549 1549 Parameters
1550 1550 ----------
1551 1551 variables : dict, str or list/tuple of str
1552 1552 The variables to inject into the user's namespace. If a dict, a
1553 1553 simple update is done. If a str, the string is assumed to have
1554 1554 variable names separated by spaces. A list/tuple of str can also
1555 1555 be used to give the variable names. If just the variable names are
1556 1556 given (list/tuple/str), then the variable values are looked up in the
1557 1557 caller's frame.
1558 1558 interactive : bool
1559 1559 If True (default), the variables will be listed with the ``who``
1560 1560 magic.
1561 1561 """
1562 1562 vdict = None
1563 1563
1564 1564 # We need a dict of name/value pairs to do namespace updates.
1565 1565 if isinstance(variables, dict):
1566 1566 vdict = variables
1567 1567 elif isinstance(variables, (str, list, tuple)):
1568 1568 if isinstance(variables, str):
1569 1569 vlist = variables.split()
1570 1570 else:
1571 1571 vlist = variables
1572 1572 vdict = {}
1573 1573 cf = sys._getframe(1)
1574 1574 for name in vlist:
1575 1575 try:
1576 1576 vdict[name] = eval(name, cf.f_globals, cf.f_locals)
1577 1577 except:
1578 1578 print('Could not get variable %s from %s' %
1579 1579 (name,cf.f_code.co_name))
1580 1580 else:
1581 1581 raise ValueError('variables must be a dict/str/list/tuple')
1582 1582
1583 1583 # Propagate variables to user namespace
1584 1584 self.user_ns.update(vdict)
1585 1585
1586 1586 # And configure interactive visibility
1587 1587 user_ns_hidden = self.user_ns_hidden
1588 1588 if interactive:
1589 1589 for name in vdict:
1590 1590 user_ns_hidden.pop(name, None)
1591 1591 else:
1592 1592 user_ns_hidden.update(vdict)
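
# Illustrative sketch (not part of the original source): injecting variables
# into the interactive namespace, e.g. from embedding code. ``shell`` stands
# for an existing InteractiveShell instance.
#
#     x, y = 1, 2
#     shell.push({'x': x, 'y': y})            # dict form: explicit name/value pairs
#     shell.push('x y')                       # str form: names looked up in the caller's frame
#     shell.push(['x'], interactive=False)    # pushed but hidden from %who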
1593 1593
1594 1594 def drop_by_id(self, variables):
1595 1595 """Remove a dict of variables from the user namespace, if they are the
1596 1596 same as the values in the dictionary.
1597 1597
1598 1598 This is intended for use by extensions: variables that they've added can
1599 1599 be taken back out if they are unloaded, without removing any that the
1600 1600 user has overwritten.
1601 1601
1602 1602 Parameters
1603 1603 ----------
1604 1604 variables : dict
1605 1605 A dictionary mapping object names (as strings) to the objects.
1606 1606 """
1607 1607 for name, obj in variables.items():
1608 1608 if name in self.user_ns and self.user_ns[name] is obj:
1609 1609 del self.user_ns[name]
1610 1610 self.user_ns_hidden.pop(name, None)
1611 1611
1612 1612 #-------------------------------------------------------------------------
1613 1613 # Things related to object introspection
1614 1614 #-------------------------------------------------------------------------
1615 1615 @staticmethod
1616 1616 def _find_parts(oname: str) -> Tuple[bool, ListType[str]]:
1617 1617 """
1618 1618 Given an object name, return a list of parts of this object name.
1619 1619
1620 1620 Basically split on dots when using attribute access,
1621 1621 and extract the value when using square brackets.
1622 1622
1623 1623
1624 1624 For example foo.bar[3].baz[x] -> foo, bar, 3, baz, x
1625 1625
1626 1626
1627 1627 Returns
1628 1628 -------
1629 1629 parts_ok: bool
1630 wether we were properly able to parse parts.
1630 whether we were properly able to parse parts.
1631 1631 parts: list of str
1632 1632 extracted parts
1633 1633
1634 1634
1635 1635
1636 1636 """
1637 1637 raw_parts = oname.split(".")
1638 1638 parts = []
1639 1639 parts_ok = True
1640 1640 for p in raw_parts:
1641 1641 if p.endswith("]"):
1642 1642 var, *indices = p.split("[")
1643 1643 if not var.isidentifier():
1644 1644 parts_ok = False
1645 1645 break
1646 1646 parts.append(var)
1647 1647 for ind in indices:
1648 1648 if ind[-1] != "]" and not is_integer_string(ind[:-1]):
1649 1649 parts_ok = False
1650 1650 break
1651 1651 parts.append(ind[:-1])
1652 1652 continue
1653 1653
1654 1654 if not p.isidentifier():
1655 1655 parts_ok = False
1656 1656 parts.append(p)
1657 1657
1658 1658 return parts_ok, parts
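
# Illustrative sketch (not part of the original source): how this helper
# splits a dotted / indexed name into parts.
#
#     InteractiveShell._find_parts("foo.bar[3].baz")
#     # -> (True, ['foo', 'bar', '3', 'baz'])
#     InteractiveShell._find_parts("foo.2bad")
#     # -> (False, ['foo', '2bad'])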
1659 1659
1660 1660 def _ofind(
1661 1661 self, oname: str, namespaces: Optional[Sequence[Tuple[str, AnyType]]] = None
1662 1662 ) -> OInfo:
1663 1663 """Find an object in the available namespaces.
1664 1664
1665 1665
1666 1666 Returns
1667 1667 -------
1668 1668 OInfo with fields:
1669 1669 - ismagic
1670 1670 - isalias
1671 1671 - found
1672 1672 - obj
1673 1673 - namespace
1674 1674 - parent
1675 1675
1676 1676 Has special code to detect magic functions.
1677 1677 """
1678 1678 oname = oname.strip()
1679 1679 parts_ok, parts = self._find_parts(oname)
1680 1680
1681 1681 if (
1682 1682 not oname.startswith(ESC_MAGIC)
1683 1683 and not oname.startswith(ESC_MAGIC2)
1684 1684 and not parts_ok
1685 1685 ):
1686 1686 return OInfo(
1687 1687 ismagic=False,
1688 1688 isalias=False,
1689 1689 found=False,
1690 1690 obj=None,
1691 1691 namespace=None,
1692 1692 parent=None,
1693 1693 )
1694 1694
1695 1695 if namespaces is None:
1696 1696 # Namespaces to search in:
1697 1697 # Put them in a list. The order is important so that we
1698 1698 # find things in the same order that Python finds them.
1699 1699 namespaces = [ ('Interactive', self.user_ns),
1700 1700 ('Interactive (global)', self.user_global_ns),
1701 1701 ('Python builtin', builtin_mod.__dict__),
1702 1702 ]
1703 1703
1704 1704 ismagic = False
1705 1705 isalias = False
1706 1706 found = False
1707 1707 ospace = None
1708 1708 parent = None
1709 1709 obj = None
1710 1710
1711 1711
1712 1712 # Look for the given name by splitting it in parts. If the head is
1713 1713 # found, then we look for all the remaining parts as members, and only
1714 1714 # declare success if we can find them all.
1715 1715 oname_parts = parts
1716 1716 oname_head, oname_rest = oname_parts[0],oname_parts[1:]
1717 1717 for nsname,ns in namespaces:
1718 1718 try:
1719 1719 obj = ns[oname_head]
1720 1720 except KeyError:
1721 1721 continue
1722 1722 else:
1723 1723 for idx, part in enumerate(oname_rest):
1724 1724 try:
1725 1725 parent = obj
1726 1726 # The last part is looked up in a special way to avoid
1727 1727 # descriptor invocation as it may raise or have side
1728 1728 # effects.
1729 1729 if idx == len(oname_rest) - 1:
1730 1730 obj = self._getattr_property(obj, part)
1731 1731 else:
1732 1732 if is_integer_string(part):
1733 1733 obj = obj[int(part)]
1734 1734 else:
1735 1735 obj = getattr(obj, part)
1736 1736 except:
1737 1737 # Blanket except b/c some badly implemented objects
1738 1738 # allow __getattr__ to raise exceptions other than
1739 1739 # AttributeError, which then crashes IPython.
1740 1740 break
1741 1741 else:
1742 1742 # If we finish the for loop (no break), we got all members
1743 1743 found = True
1744 1744 ospace = nsname
1745 1745 break # namespace loop
1746 1746
1747 1747 # Try to see if it's magic
1748 1748 if not found:
1749 1749 obj = None
1750 1750 if oname.startswith(ESC_MAGIC2):
1751 1751 oname = oname.lstrip(ESC_MAGIC2)
1752 1752 obj = self.find_cell_magic(oname)
1753 1753 elif oname.startswith(ESC_MAGIC):
1754 1754 oname = oname.lstrip(ESC_MAGIC)
1755 1755 obj = self.find_line_magic(oname)
1756 1756 else:
1757 1757 # search without prefix, so run? will find %run?
1758 1758 obj = self.find_line_magic(oname)
1759 1759 if obj is None:
1760 1760 obj = self.find_cell_magic(oname)
1761 1761 if obj is not None:
1762 1762 found = True
1763 1763 ospace = 'IPython internal'
1764 1764 ismagic = True
1765 1765 isalias = isinstance(obj, Alias)
1766 1766
1767 1767 # Last try: special-case some literals like '', [], {}, etc:
1768 1768 if not found and oname_head in ["''",'""','[]','{}','()']:
1769 1769 obj = eval(oname_head)
1770 1770 found = True
1771 1771 ospace = 'Interactive'
1772 1772
1773 1773 return OInfo(
1774 1774 obj=obj,
1775 1775 found=found,
1776 1776 parent=parent,
1777 1777 ismagic=ismagic,
1778 1778 isalias=isalias,
1779 1779 namespace=ospace,
1780 1780 )
1781 1781
1782 1782 @staticmethod
1783 1783 def _getattr_property(obj, attrname):
1784 1784 """Property-aware getattr to use in object finding.
1785 1785
1786 1786 If attrname represents a property, return it unevaluated (in case it has
1787 1787 side effects or raises an error).
1788 1788
1789 1789 """
1790 1790 if not isinstance(obj, type):
1791 1791 try:
1792 1792 # `getattr(type(obj), attrname)` is not guaranteed to return
1793 1793 # `obj`, but does so for property:
1794 1794 #
1795 1795 # property.__get__(self, None, cls) -> self
1796 1796 #
1797 1797 # The universal alternative is to traverse the mro manually
1798 1798 # searching for attrname in class dicts.
1799 1799 if is_integer_string(attrname):
1800 1800 return obj[int(attrname)]
1801 1801 else:
1802 1802 attr = getattr(type(obj), attrname)
1803 1803 except AttributeError:
1804 1804 pass
1805 1805 else:
1806 1806 # This relies on the fact that data descriptors (with both
1807 1807 # __get__ & __set__ magic methods) take precedence over
1808 1808 # instance-level attributes:
1809 1809 #
1810 1810 # class A(object):
1811 1811 # @property
1812 1812 # def foobar(self): return 123
1813 1813 # a = A()
1814 1814 # a.__dict__['foobar'] = 345
1815 1815 # a.foobar # == 123
1816 1816 #
1817 1817 # So, a property may be returned right away.
1818 1818 if isinstance(attr, property):
1819 1819 return attr
1820 1820
1821 1821 # Nothing helped, fall back.
1822 1822 return getattr(obj, attrname)
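
# Illustrative sketch (not part of the original source): a property is
# returned unevaluated instead of being triggered on the instance. ``Box``
# is a hypothetical class.
#
#     class Box:
#         @property
#         def value(self):
#             raise RuntimeError("side effect!")
#
#     InteractiveShell._getattr_property(Box(), 'value')
#     # -> returns the property object itself, does not raise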
1823 1823
1824 1824 def _object_find(self, oname, namespaces=None) -> OInfo:
1825 1825 """Find an object and return a struct with info about it."""
1826 1826 return self._ofind(oname, namespaces)
1827 1827
1828 1828 def _inspect(self, meth, oname: str, namespaces=None, **kw):
1829 1829 """Generic interface to the inspector system.
1830 1830
1831 1831 This function is meant to be called by pdef, pdoc & friends.
1832 1832 """
1833 1833 info: OInfo = self._object_find(oname, namespaces)
1834 1834 if self.sphinxify_docstring:
1835 1835 if sphinxify is None:
1836 1836 raise ImportError("Module ``docrepr`` required but missing")
1837 1837 docformat = sphinxify(self.object_inspect(oname))
1838 1838 else:
1839 1839 docformat = None
1840 1840 if info.found or hasattr(info.parent, oinspect.HOOK_NAME):
1841 1841 pmethod = getattr(self.inspector, meth)
1842 1842 # TODO: only apply format_screen to the plain/text repr of the mime
1843 1843 # bundle.
1844 1844 formatter = format_screen if info.ismagic else docformat
1845 1845 if meth == 'pdoc':
1846 1846 pmethod(info.obj, oname, formatter)
1847 1847 elif meth == 'pinfo':
1848 1848 pmethod(
1849 1849 info.obj,
1850 1850 oname,
1851 1851 formatter,
1852 1852 info,
1853 1853 enable_html_pager=self.enable_html_pager,
1854 1854 **kw,
1855 1855 )
1856 1856 else:
1857 1857 pmethod(info.obj, oname)
1858 1858 else:
1859 1859 print('Object `%s` not found.' % oname)
1860 1860 return 'not found' # so callers can take other action
1861 1861
1862 1862 def object_inspect(self, oname, detail_level=0):
1863 1863 """Get object info about oname"""
1864 1864 with self.builtin_trap:
1865 1865 info = self._object_find(oname)
1866 1866 if info.found:
1867 1867 return self.inspector.info(info.obj, oname, info=info,
1868 1868 detail_level=detail_level
1869 1869 )
1870 1870 else:
1871 1871 return oinspect.object_info(name=oname, found=False)
1872 1872
1873 1873 def object_inspect_text(self, oname, detail_level=0):
1874 1874 """Get object info as formatted text"""
1875 1875 return self.object_inspect_mime(oname, detail_level)['text/plain']
1876 1876
1877 1877 def object_inspect_mime(self, oname, detail_level=0, omit_sections=()):
1878 1878 """Get object info as a mimebundle of formatted representations.
1879 1879
1880 1880 A mimebundle is a dictionary, keyed by mime-type.
1881 1881 It must always have the key `'text/plain'`.
1882 1882 """
1883 1883 with self.builtin_trap:
1884 1884 info = self._object_find(oname)
1885 1885 if info.found:
1886 1886 docformat = (
1887 1887 sphinxify(self.object_inspect(oname))
1888 1888 if self.sphinxify_docstring
1889 1889 else None
1890 1890 )
1891 1891 return self.inspector._get_info(
1892 1892 info.obj,
1893 1893 oname,
1894 1894 info=info,
1895 1895 detail_level=detail_level,
1896 1896 formatter=docformat,
1897 1897 omit_sections=omit_sections,
1898 1898 )
1899 1899 else:
1900 1900 raise KeyError(oname)
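
# Illustrative sketch (not part of the original source): fetching help on a
# name as a mimebundle. ``shell`` stands for an existing InteractiveShell
# instance.
#
#     shell.run_cell('import math')
#     bundle = shell.object_inspect_mime('math.cos')
#     print(bundle['text/plain'][:60])   # a 'text/plain' entry is always present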
1901 1901
1902 1902 #-------------------------------------------------------------------------
1903 1903 # Things related to history management
1904 1904 #-------------------------------------------------------------------------
1905 1905
1906 1906 def init_history(self):
1907 1907 """Sets up the command history, and starts regular autosaves."""
1908 1908 self.history_manager = HistoryManager(shell=self, parent=self)
1909 1909 self.configurables.append(self.history_manager)
1910 1910
1911 1911 #-------------------------------------------------------------------------
1912 1912 # Things related to exception handling and tracebacks (not debugging)
1913 1913 #-------------------------------------------------------------------------
1914 1914
1915 1915 debugger_cls = InterruptiblePdb
1916 1916
1917 1917 def init_traceback_handlers(self, custom_exceptions):
1918 1918 # Syntax error handler.
1919 1919 self.SyntaxTB = ultratb.SyntaxTB(color_scheme='NoColor', parent=self)
1920 1920
1921 1921 # The interactive one is initialized with an offset, meaning we always
1922 1922 # want to remove the topmost item in the traceback, which is our own
1923 1923 # internal code. Valid modes: ['Plain','Context','Verbose','Minimal']
1924 1924 self.InteractiveTB = ultratb.AutoFormattedTB(mode = 'Plain',
1925 1925 color_scheme='NoColor',
1926 1926 tb_offset = 1,
1927 1927 debugger_cls=self.debugger_cls, parent=self)
1928 1928
1929 1929 # The instance will store a pointer to the system-wide exception hook,
1930 1930 # so that runtime code (such as magics) can access it. This is because
1931 1931 # during the read-eval loop, it may get temporarily overwritten.
1932 1932 self.sys_excepthook = sys.excepthook
1933 1933
1934 1934 # and add any custom exception handlers the user may have specified
1935 1935 self.set_custom_exc(*custom_exceptions)
1936 1936
1937 1937 # Set the exception mode
1938 1938 self.InteractiveTB.set_mode(mode=self.xmode)
1939 1939
1940 1940 def set_custom_exc(self, exc_tuple, handler):
1941 1941 """set_custom_exc(exc_tuple, handler)
1942 1942
1943 1943 Set a custom exception handler, which will be called if any of the
1944 1944 exceptions in exc_tuple occur in the mainloop (specifically, in the
1945 1945 run_code() method).
1946 1946
1947 1947 Parameters
1948 1948 ----------
1949 1949 exc_tuple : tuple of exception classes
1950 1950 A *tuple* of exception classes, for which to call the defined
1951 1951 handler. It is very important that you use a tuple, and NOT A
1952 1952 LIST here, because of the way Python's except statement works. If
1953 1953 you only want to trap a single exception, use a singleton tuple::
1954 1954
1955 1955 exc_tuple == (MyCustomException,)
1956 1956
1957 1957 handler : callable
1958 1958 handler must have the following signature::
1959 1959
1960 1960 def my_handler(self, etype, value, tb, tb_offset=None):
1961 1961 ...
1962 1962 return structured_traceback
1963 1963
1964 1964 Your handler must return a structured traceback (a list of strings),
1965 1965 or None.
1966 1966
1967 1967 This will be made into an instance method (via types.MethodType)
1968 1968 of IPython itself, and it will be called if any of the exceptions
1969 1969 listed in the exc_tuple are caught. If the handler is None, an
1970 1970 internal basic one is used, which just prints basic info.
1971 1971
1972 1972 To protect IPython from crashes, if your handler ever raises an
1973 1973 exception or returns an invalid result, it will be immediately
1974 1974 disabled.
1975 1975
1976 1976 Notes
1977 1977 -----
1978 1978 WARNING: by putting in your own exception handler into IPython's main
1979 1979 execution loop, you run a very good chance of nasty crashes. This
1980 1980 facility should only be used if you really know what you are doing.
1981 1981 """
1982 1982
1983 1983 if not isinstance(exc_tuple, tuple):
1984 1984 raise TypeError("The custom exceptions must be given as a tuple.")
1985 1985
1986 1986 def dummy_handler(self, etype, value, tb, tb_offset=None):
1987 1987 print('*** Simple custom exception handler ***')
1988 1988 print('Exception type :', etype)
1989 1989 print('Exception value:', value)
1990 1990 print('Traceback :', tb)
1991 1991
1992 1992 def validate_stb(stb):
1993 1993 """validate structured traceback return type
1994 1994
1995 1995 return type of CustomTB *should* be a list of strings, but allow
1996 1996 single strings or None, which are harmless.
1997 1997
1998 1998 This function will *always* return a list of strings,
1999 1999 and will raise a TypeError if stb is inappropriate.
2000 2000 """
2001 2001 msg = "CustomTB must return list of strings, not %r" % stb
2002 2002 if stb is None:
2003 2003 return []
2004 2004 elif isinstance(stb, str):
2005 2005 return [stb]
2006 2006 elif not isinstance(stb, list):
2007 2007 raise TypeError(msg)
2008 2008 # it's a list
2009 2009 for line in stb:
2010 2010 # check every element
2011 2011 if not isinstance(line, str):
2012 2012 raise TypeError(msg)
2013 2013 return stb
2014 2014
2015 2015 if handler is None:
2016 2016 wrapped = dummy_handler
2017 2017 else:
2018 2018 def wrapped(self,etype,value,tb,tb_offset=None):
2019 2019 """wrap CustomTB handler, to protect IPython from user code
2020 2020
2021 2021 This makes it harder (but not impossible) for custom exception
2022 2022 handlers to crash IPython.
2023 2023 """
2024 2024 try:
2025 2025 stb = handler(self,etype,value,tb,tb_offset=tb_offset)
2026 2026 return validate_stb(stb)
2027 2027 except:
2028 2028 # clear custom handler immediately
2029 2029 self.set_custom_exc((), None)
2030 2030 print("Custom TB Handler failed, unregistering", file=sys.stderr)
2031 2031 # show the exception in handler first
2032 2032 stb = self.InteractiveTB.structured_traceback(*sys.exc_info())
2033 2033 print(self.InteractiveTB.stb2text(stb))
2034 2034 print("The original exception:")
2035 2035 stb = self.InteractiveTB.structured_traceback(
2036 2036 etype, value, tb, tb_offset=tb_offset
2037 2037 )
2038 2038 return stb
2039 2039
2040 2040 self.CustomTB = types.MethodType(wrapped,self)
2041 2041 self.custom_exceptions = exc_tuple
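
# Illustrative sketch (not part of the original source): routing a custom
# exception type through a user handler. ``shell`` stands for an existing
# InteractiveShell instance and ``MyError`` is a hypothetical exception.
#
#     class MyError(Exception):
#         pass
#
#     def my_handler(self, etype, value, tb, tb_offset=None):
#         print("caught:", value)
#         return self.InteractiveTB.structured_traceback(
#             etype, value, tb, tb_offset=tb_offset)
#
#     shell.set_custom_exc((MyError,), my_handler)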
2042 2042
2043 2043 def excepthook(self, etype, value, tb):
2044 2044 """One more defense for GUI apps that call sys.excepthook.
2045 2045
2046 2046 GUI frameworks like wxPython trap exceptions and call
2047 2047 sys.excepthook themselves. I guess this is a feature that
2048 2048 enables them to keep running after exceptions that would
2049 2049 otherwise kill their mainloop. This is a bother for IPython
2050 2050 which expects to catch all of the program exceptions with a try:
2051 2051 except: statement.
2052 2052
2053 2053 Normally, IPython sets sys.excepthook to a CrashHandler instance, so if
2054 2054 any app directly invokes sys.excepthook, it will look to the user like
2055 2055 IPython crashed. In order to work around this, we can disable the
2056 2056 CrashHandler and replace it with this excepthook instead, which prints a
2057 2057 regular traceback using our InteractiveTB. In this fashion, apps which
2058 2058 call sys.excepthook will generate a regular-looking exception from
2059 2059 IPython, and the CrashHandler will only be triggered by real IPython
2060 2060 crashes.
2061 2061
2062 2062 This hook should be used sparingly, only in places which are not likely
2063 2063 to be true IPython errors.
2064 2064 """
2065 2065 self.showtraceback((etype, value, tb), tb_offset=0)
2066 2066
2067 2067 def _get_exc_info(self, exc_tuple=None):
2068 2068 """get exc_info from a given tuple, sys.exc_info() or sys.last_type etc.
2069 2069
2070 2070 Ensures sys.last_type,value,traceback hold the exc_info we found,
2071 2071 from whichever source.
2072 2072
2073 2073 raises ValueError if none of these contain any information
2074 2074 """
2075 2075 if exc_tuple is None:
2076 2076 etype, value, tb = sys.exc_info()
2077 2077 else:
2078 2078 etype, value, tb = exc_tuple
2079 2079
2080 2080 if etype is None:
2081 2081 if hasattr(sys, 'last_type'):
2082 2082 etype, value, tb = sys.last_type, sys.last_value, \
2083 2083 sys.last_traceback
2084 2084
2085 2085 if etype is None:
2086 2086 raise ValueError("No exception to find")
2087 2087
2088 2088 # Now store the exception info in sys.last_type etc.
2089 2089 # WARNING: these variables are somewhat deprecated and not
2090 2090 # necessarily safe to use in a threaded environment, but tools
2091 2091 # like pdb depend on their existence, so let's set them. If we
2092 2092 # find problems in the field, we'll need to revisit their use.
2093 2093 sys.last_type = etype
2094 2094 sys.last_value = value
2095 2095 sys.last_traceback = tb
2096 2096
2097 2097 return etype, value, tb
2098 2098
2099 2099 def show_usage_error(self, exc):
2100 2100 """Show a short message for UsageErrors
2101 2101
2102 2102 These are special exceptions that shouldn't show a traceback.
2103 2103 """
2104 2104 print("UsageError: %s" % exc, file=sys.stderr)
2105 2105
2106 2106 def get_exception_only(self, exc_tuple=None):
2107 2107 """
2108 2108 Return as a string (ending with a newline) the exception that
2109 2109 just occurred, without any traceback.
2110 2110 """
2111 2111 etype, value, tb = self._get_exc_info(exc_tuple)
2112 2112 msg = traceback.format_exception_only(etype, value)
2113 2113 return ''.join(msg)
2114 2114
2115 2115 def showtraceback(self, exc_tuple=None, filename=None, tb_offset=None,
2116 2116 exception_only=False, running_compiled_code=False):
2117 2117 """Display the exception that just occurred.
2118 2118
2119 2119 If nothing is known about the exception, this is the method which
2120 2120 should be used throughout the code for presenting user tracebacks,
2121 2121 rather than directly invoking the InteractiveTB object.
2122 2122
2123 2123 A specific showsyntaxerror() also exists, but this method can take
2124 2124 care of calling it if needed, so unless you are explicitly catching a
2125 2125 SyntaxError exception, don't try to analyze the stack manually and
2126 2126 simply call this method."""
2127 2127
2128 2128 try:
2129 2129 try:
2130 2130 etype, value, tb = self._get_exc_info(exc_tuple)
2131 2131 except ValueError:
2132 2132 print('No traceback available to show.', file=sys.stderr)
2133 2133 return
2134 2134
2135 2135 if issubclass(etype, SyntaxError):
2136 2136 # Though this won't be called by syntax errors in the input
2137 2137 # line, there may be SyntaxError cases with imported code.
2138 2138 self.showsyntaxerror(filename, running_compiled_code)
2139 2139 elif etype is UsageError:
2140 2140 self.show_usage_error(value)
2141 2141 else:
2142 2142 if exception_only:
2143 2143 stb = ['An exception has occurred, use %tb to see '
2144 2144 'the full traceback.\n']
2145 2145 stb.extend(self.InteractiveTB.get_exception_only(etype,
2146 2146 value))
2147 2147 else:
2148 2148
2149 2149 def contains_exceptiongroup(val):
2150 2150 if val is None:
2151 2151 return False
2152 2152 return isinstance(
2153 2153 val, BaseExceptionGroup
2154 2154 ) or contains_exceptiongroup(val.__context__)
2155 2155
2156 2156 if contains_exceptiongroup(value):
2157 2157 # fall back to native exception formatting until ultratb
2158 2158 # supports exception groups
2159 2159 traceback.print_exc()
2160 2160 else:
2161 2161 try:
2162 2162 # Exception classes can customise their traceback - we
2163 2163 # use this in IPython.parallel for exceptions occurring
2164 2164 # in the engines. This should return a list of strings.
2165 2165 if hasattr(value, "_render_traceback_"):
2166 2166 stb = value._render_traceback_()
2167 2167 else:
2168 2168 stb = self.InteractiveTB.structured_traceback(
2169 2169 etype, value, tb, tb_offset=tb_offset
2170 2170 )
2171 2171
2172 2172 except Exception:
2173 2173 print(
2174 2174 "Unexpected exception formatting exception. Falling back to standard exception"
2175 2175 )
2176 2176 traceback.print_exc()
2177 2177 return None
2178 2178
2179 2179 self._showtraceback(etype, value, stb)
2180 2180 if self.call_pdb:
2181 2181 # drop into debugger
2182 2182 self.debugger(force=True)
2183 2183 return
2184 2184
2185 2185 # Actually show the traceback
2186 2186 self._showtraceback(etype, value, stb)
2187 2187
2188 2188 except KeyboardInterrupt:
2189 2189 print('\n' + self.get_exception_only(), file=sys.stderr)
2190 2190
2191 2191 def _showtraceback(self, etype, evalue, stb: str):
2192 2192 """Actually show a traceback.
2193 2193
2194 2194 Subclasses may override this method to put the traceback on a different
2195 2195 place, like a side channel.
2196 2196 """
2197 2197 val = self.InteractiveTB.stb2text(stb)
2198 2198 try:
2199 2199 print(val)
2200 2200 except UnicodeEncodeError:
2201 2201 print(val.encode("utf-8", "backslashreplace").decode())
2202 2202
2203 2203 def showsyntaxerror(self, filename=None, running_compiled_code=False):
2204 2204 """Display the syntax error that just occurred.
2205 2205
2206 2206 This doesn't display a stack trace because there isn't one.
2207 2207
2208 2208 If a filename is given, it is stuffed in the exception instead
2209 2209 of what was there before (because Python's parser always uses
2210 2210 "<string>" when reading from a string).
2211 2211
2212 2212 If the syntax error occurred when running compiled code (i.e. running_compiled_code=True),
2213 2213 a longer stack trace will be displayed.
2214 2214 """
2215 2215 etype, value, last_traceback = self._get_exc_info()
2216 2216
2217 2217 if filename and issubclass(etype, SyntaxError):
2218 2218 try:
2219 2219 value.filename = filename
2220 2220 except:
2221 2221 # Not the format we expect; leave it alone
2222 2222 pass
2223 2223
2224 2224 # If the error occurred when executing compiled code, we should provide full stacktrace.
2225 2225 elist = traceback.extract_tb(last_traceback) if running_compiled_code else []
2226 2226 stb = self.SyntaxTB.structured_traceback(etype, value, elist)
2227 2227 self._showtraceback(etype, value, stb)
2228 2228
2229 2229 # This is overridden in TerminalInteractiveShell to show a message about
2230 2230 # the %paste magic.
2231 2231 def showindentationerror(self):
2232 2232 """Called by _run_cell when there's an IndentationError in code entered
2233 2233 at the prompt.
2234 2234
2235 2235 This is overridden in TerminalInteractiveShell to show a message about
2236 2236 the %paste magic."""
2237 2237 self.showsyntaxerror()
2238 2238
2239 2239 @skip_doctest
2240 2240 def set_next_input(self, s, replace=False):
2241 2241 """ Sets the 'default' input string for the next command line.
2242 2242
2243 2243 Example::
2244 2244
2245 2245 In [1]: _ip.set_next_input("Hello World")
2246 2246 In [2]: Hello World_ # cursor is here
2247 2247 """
2248 2248 self.rl_next_input = s
2249 2249
2250 2250 def _indent_current_str(self):
2251 2251 """return the current level of indentation as a string"""
2252 2252 return self.input_splitter.get_indent_spaces() * ' '
2253 2253
2254 2254 #-------------------------------------------------------------------------
2255 2255 # Things related to text completion
2256 2256 #-------------------------------------------------------------------------
2257 2257
2258 2258 def init_completer(self):
2259 2259 """Initialize the completion machinery.
2260 2260
2261 2261 This creates completion machinery that can be used by client code,
2262 2262 either interactively in-process (typically triggered by the readline
2263 2263 library), programmatically (such as in test suites) or out-of-process
2264 2264 (typically over the network by remote frontends).
2265 2265 """
2266 2266 from IPython.core.completer import IPCompleter
2267 2267 from IPython.core.completerlib import (
2268 2268 cd_completer,
2269 2269 magic_run_completer,
2270 2270 module_completer,
2271 2271 reset_completer,
2272 2272 )
2273 2273
2274 2274 self.Completer = IPCompleter(shell=self,
2275 2275 namespace=self.user_ns,
2276 2276 global_namespace=self.user_global_ns,
2277 2277 parent=self,
2278 2278 )
2279 2279 self.configurables.append(self.Completer)
2280 2280
2281 2281 # Add custom completers to the basic ones built into IPCompleter
2282 2282 sdisp = self.strdispatchers.get('complete_command', StrDispatch())
2283 2283 self.strdispatchers['complete_command'] = sdisp
2284 2284 self.Completer.custom_completers = sdisp
2285 2285
2286 2286 self.set_hook('complete_command', module_completer, str_key = 'import')
2287 2287 self.set_hook('complete_command', module_completer, str_key = 'from')
2288 2288 self.set_hook('complete_command', module_completer, str_key = '%aimport')
2289 2289 self.set_hook('complete_command', magic_run_completer, str_key = '%run')
2290 2290 self.set_hook('complete_command', cd_completer, str_key = '%cd')
2291 2291 self.set_hook('complete_command', reset_completer, str_key = '%reset')
2292 2292
2293 2293 @skip_doctest
2294 2294 def complete(self, text, line=None, cursor_pos=None):
2295 2295 """Return the completed text and a list of completions.
2296 2296
2297 2297 Parameters
2298 2298 ----------
2299 2299 text : string
2300 2300 A string of text to be completed on. It can be given as empty and
2301 2301 a line/position pair given instead. In this case, the
2302 2302 completer itself will split the line like readline does.
2303 2303 line : string, optional
2304 2304 The complete line that text is part of.
2305 2305 cursor_pos : int, optional
2306 2306 The position of the cursor on the input line.
2307 2307
2308 2308 Returns
2309 2309 -------
2310 2310 text : string
2311 2311 The actual text that was completed.
2312 2312 matches : list
2313 2313 A sorted list with all possible completions.
2314 2314
2315 2315 Notes
2316 2316 -----
2317 2317 The optional arguments allow the completion to take more context into
2318 2318 account, and are part of the low-level completion API.
2319 2319
2320 2320 This is a wrapper around the completion mechanism, similar to what
2321 2321 readline does at the command line when the TAB key is hit. By
2322 2322 exposing it as a method, it can be used by other non-readline
2323 2323 environments (such as GUIs) for text completion.
2324 2324
2325 2325 Examples
2326 2326 --------
2327 2327 In [1]: x = 'hello'
2328 2328
2329 2329 In [2]: _ip.complete('x.l')
2330 2330 Out[2]: ('x.l', ['x.ljust', 'x.lower', 'x.lstrip'])
2331 2331 """
2332 2332
2333 2333 # Inject names into __builtin__ so we can complete on the added names.
2334 2334 with self.builtin_trap:
2335 2335 return self.Completer.complete(text, line, cursor_pos)
2336 2336
2337 2337 def set_custom_completer(self, completer, pos=0) -> None:
2338 2338 """Adds a new custom completer function.
2339 2339
2340 2340 The position argument (defaults to 0) is the index in the completers
2341 2341 list where you want the completer to be inserted.
2342 2342
2343 2343 `completer` should have the following signature::
2344 2344
2345 2345 def completion(self: Completer, text: string) -> List[str]:
2346 2346 raise NotImplementedError
2347 2347
2348 2348 It will be bound to the current Completer instance, be passed some text,
2349 2349 and return a list of current completions to suggest to the user.
2350 2350 """
2351 2351
2352 2352 newcomp = types.MethodType(completer, self.Completer)
2353 2353 self.Completer.custom_matchers.insert(pos,newcomp)
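
# Illustrative sketch (not part of the original source): registering a
# completer that suggests a fixed set of words. ``shell`` stands for an
# existing InteractiveShell instance; ``season_completer`` is hypothetical.
#
#     def season_completer(self, text):
#         return [w for w in ('spring', 'summer', 'autumn', 'winter')
#                 if w.startswith(text)]
#
#     shell.set_custom_completer(season_completer)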
2354 2354
2355 2355 def set_completer_frame(self, frame=None):
2356 2356 """Set the frame of the completer."""
2357 2357 if frame:
2358 2358 self.Completer.namespace = frame.f_locals
2359 2359 self.Completer.global_namespace = frame.f_globals
2360 2360 else:
2361 2361 self.Completer.namespace = self.user_ns
2362 2362 self.Completer.global_namespace = self.user_global_ns
2363 2363
2364 2364 #-------------------------------------------------------------------------
2365 2365 # Things related to magics
2366 2366 #-------------------------------------------------------------------------
2367 2367
2368 2368 def init_magics(self):
2369 2369 from IPython.core import magics as m
2370 2370 self.magics_manager = magic.MagicsManager(shell=self,
2371 2371 parent=self,
2372 2372 user_magics=m.UserMagics(self))
2373 2373 self.configurables.append(self.magics_manager)
2374 2374
2375 2375 # Expose as public API from the magics manager
2376 2376 self.register_magics = self.magics_manager.register
2377 2377
2378 2378 self.register_magics(m.AutoMagics, m.BasicMagics, m.CodeMagics,
2379 2379 m.ConfigMagics, m.DisplayMagics, m.ExecutionMagics,
2380 2380 m.ExtensionMagics, m.HistoryMagics, m.LoggingMagics,
2381 2381 m.NamespaceMagics, m.OSMagics, m.PackagingMagics,
2382 2382 m.PylabMagics, m.ScriptMagics,
2383 2383 )
2384 2384 self.register_magics(m.AsyncMagics)
2385 2385
2386 2386 # Register Magic Aliases
2387 2387 mman = self.magics_manager
2388 2388 # FIXME: magic aliases should be defined by the Magics classes
2389 2389 # or in MagicsManager, not here
2390 2390 mman.register_alias('ed', 'edit')
2391 2391 mman.register_alias('hist', 'history')
2392 2392 mman.register_alias('rep', 'recall')
2393 2393 mman.register_alias('SVG', 'svg', 'cell')
2394 2394 mman.register_alias('HTML', 'html', 'cell')
2395 2395 mman.register_alias('file', 'writefile', 'cell')
2396 2396
2397 2397 # FIXME: Move the color initialization to the DisplayHook, which
2398 2398 # should be split into a prompt manager and displayhook. We probably
2399 2399 # even need a centralized color management object.
2400 2400 self.run_line_magic('colors', self.colors)
2401 2401
2402 2402 # Defined here so that it's included in the documentation
2403 2403 @functools.wraps(magic.MagicsManager.register_function)
2404 2404 def register_magic_function(self, func, magic_kind='line', magic_name=None):
2405 2405 self.magics_manager.register_function(
2406 2406 func, magic_kind=magic_kind, magic_name=magic_name
2407 2407 )
2408 2408
2409 2409 def _find_with_lazy_load(self, /, type_, magic_name: str):
2410 2410 """
2411 2411 Try to find a magic potentially lazy-loading it.
2412 2412
2413 2413 Parameters
2414 2414 ----------
2415 2415
2416 2416 type_: "line"|"cell"
2417 2417 the type of magics we are trying to find/lazy load.
2418 2418 magic_name: str
2419 2419 The name of the magic we are trying to find/lazy load
2420 2420
2421 2421
2422 2422 Note that this may have side effects, since the magic's extension may be loaded.
2423 2423 """
2424 2424 finder = {"line": self.find_line_magic, "cell": self.find_cell_magic}[type_]
2425 2425 fn = finder(magic_name)
2426 2426 if fn is not None:
2427 2427 return fn
2428 2428 lazy = self.magics_manager.lazy_magics.get(magic_name)
2429 2429 if lazy is None:
2430 2430 return None
2431 2431
2432 2432 self.run_line_magic("load_ext", lazy)
2433 2433 res = finder(magic_name)
2434 2434 return res
2435 2435
2436 2436 def run_line_magic(self, magic_name: str, line: str, _stack_depth=1):
2437 2437 """Execute the given line magic.
2438 2438
2439 2439 Parameters
2440 2440 ----------
2441 2441 magic_name : str
2442 2442 Name of the desired magic function, without '%' prefix.
2443 2443 line : str
2444 2444 The rest of the input line as a single string.
2445 2445 _stack_depth : int
2446 2446 If run_line_magic() is called from magic() then _stack_depth=2.
2447 2447 This is added to ensure backward compatibility for use of 'get_ipython().magic()'
2448 2448 """
2449 2449 fn = self._find_with_lazy_load("line", magic_name)
2450 2450 if fn is None:
2451 2451 lazy = self.magics_manager.lazy_magics.get(magic_name)
2452 2452 if lazy:
2453 2453 self.run_line_magic("load_ext", lazy)
2454 2454 fn = self.find_line_magic(magic_name)
2455 2455 if fn is None:
2456 2456 cm = self.find_cell_magic(magic_name)
2457 2457 etpl = "Line magic function `%%%s` not found%s."
2458 2458 extra = '' if cm is None else (' (But cell magic `%%%%%s` exists, '
2459 2459 'did you mean that instead?)' % magic_name )
2460 2460 raise UsageError(etpl % (magic_name, extra))
2461 2461 else:
2462 2462 # Note: this is the distance in the stack to the user's frame.
2463 2463 # This will need to be updated if the internal calling logic gets
2464 2464 # refactored, or else we'll be expanding the wrong variables.
2465 2465
2466 2466 # Determine stack_depth depending on where run_line_magic() has been called
2467 2467 stack_depth = _stack_depth
2468 2468 if getattr(fn, magic.MAGIC_NO_VAR_EXPAND_ATTR, False):
2469 2469 # magic has opted out of var_expand
2470 2470 magic_arg_s = line
2471 2471 else:
2472 2472 magic_arg_s = self.var_expand(line, stack_depth)
2473 2473 # Put magic args in a list so we can call with f(*a) syntax
2474 2474 args = [magic_arg_s]
2475 2475 kwargs = {}
2476 2476 # Grab local namespace if we need it:
2477 2477 if getattr(fn, "needs_local_scope", False):
2478 2478 kwargs['local_ns'] = self.get_local_scope(stack_depth)
2479 2479 with self.builtin_trap:
2480 2480 result = fn(*args, **kwargs)
2481 2481
2482 2482 # The code below prevents the output from being displayed
2483 2483 # when using magics with decorator @output_can_be_silenced
2484 2484 # when the last Python token in the expression is a ';'.
2485 2485 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
2486 2486 if DisplayHook.semicolon_at_end_of_expression(magic_arg_s):
2487 2487 return None
2488 2488
2489 2489 return result
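
# Illustrative sketch (not part of the original source): calling line magics
# programmatically, the replacement for the deprecated ``magic()`` method.
# ``shell`` stands for an existing InteractiveShell instance.
#
#     shell.run_line_magic('pwd', '')
#     shell.run_line_magic('timeit', '-n 10 sum(range(100))')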
2490 2490
2491 2491 def get_local_scope(self, stack_depth):
2492 2492 """Get local scope at given stack depth.
2493 2493
2494 2494 Parameters
2495 2495 ----------
2496 2496 stack_depth : int
2497 2497 Depth relative to calling frame
2498 2498 """
2499 2499 return sys._getframe(stack_depth + 1).f_locals
2500 2500
2501 2501 def run_cell_magic(self, magic_name, line, cell):
2502 2502 """Execute the given cell magic.
2503 2503
2504 2504 Parameters
2505 2505 ----------
2506 2506 magic_name : str
2507 2507 Name of the desired magic function, without '%' prefix.
2508 2508 line : str
2509 2509 The rest of the first input line as a single string.
2510 2510 cell : str
2511 2511 The body of the cell as a (possibly multiline) string.
2512 2512 """
2513 2513 fn = self._find_with_lazy_load("cell", magic_name)
2514 2514 if fn is None:
2515 2515 lm = self.find_line_magic(magic_name)
2516 2516 etpl = "Cell magic `%%{0}` not found{1}."
2517 2517 extra = '' if lm is None else (' (But line magic `%{0}` exists, '
2518 2518 'did you mean that instead?)'.format(magic_name))
2519 2519 raise UsageError(etpl.format(magic_name, extra))
2520 2520 elif cell == '':
2521 2521 message = '%%{0} is a cell magic, but the cell body is empty.'.format(magic_name)
2522 2522 if self.find_line_magic(magic_name) is not None:
2523 2523 message += ' Did you mean the line magic %{0} (single %)?'.format(magic_name)
2524 2524 raise UsageError(message)
2525 2525 else:
2526 2526 # Note: this is the distance in the stack to the user's frame.
2527 2527 # This will need to be updated if the internal calling logic gets
2528 2528 # refactored, or else we'll be expanding the wrong variables.
2529 2529 stack_depth = 2
2530 2530 if getattr(fn, magic.MAGIC_NO_VAR_EXPAND_ATTR, False):
2531 2531 # magic has opted out of var_expand
2532 2532 magic_arg_s = line
2533 2533 else:
2534 2534 magic_arg_s = self.var_expand(line, stack_depth)
2535 2535 kwargs = {}
2536 2536 if getattr(fn, "needs_local_scope", False):
2537 2537 kwargs['local_ns'] = self.user_ns
2538 2538
2539 2539 with self.builtin_trap:
2540 2540 args = (magic_arg_s, cell)
2541 2541 result = fn(*args, **kwargs)
2542 2542
2543 2543 # The code below prevents the output from being displayed
2544 2544 # when using magics with decorator @output_can_be_silenced
2545 2545 # when the last Python token in the expression is a ';'.
2546 2546 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
2547 2547 if DisplayHook.semicolon_at_end_of_expression(cell):
2548 2548 return None
2549 2549
2550 2550 return result
2551 2551
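# Illustrative sketch (not part of the original file): how user code typically
# invokes the two entry points above. The magic names and arguments are just
# examples.
#
#     ip = get_ipython()
#     ip.run_line_magic("timeit", "sum(range(100))")
#     ip.run_cell_magic("timeit", "-n 10", "total = sum(range(100))")
#
# The magic name is passed without its % / %% prefix; the remaining arguments
# carry the rest of the line (and, for cell magics, the cell body).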
2552 2552 def find_line_magic(self, magic_name):
2553 2553 """Find and return a line magic by name.
2554 2554
2555 2555 Returns None if the magic isn't found."""
2556 2556 return self.magics_manager.magics['line'].get(magic_name)
2557 2557
2558 2558 def find_cell_magic(self, magic_name):
2559 2559 """Find and return a cell magic by name.
2560 2560
2561 2561 Returns None if the magic isn't found."""
2562 2562 return self.magics_manager.magics['cell'].get(magic_name)
2563 2563
2564 2564 def find_magic(self, magic_name, magic_kind='line'):
2565 2565 """Find and return a magic of the given type by name.
2566 2566
2567 2567 Returns None if the magic isn't found."""
2568 2568 return self.magics_manager.magics[magic_kind].get(magic_name)
2569 2569
2570 2570 def magic(self, arg_s):
2571 2571 """
2572 2572 DEPRECATED
2573 2573
2574 2574 Deprecated since IPython 0.13 (warning added in
2575 2575 8.1), use run_line_magic(magic_name, parameter_s).
2576 2576
2577 2577 Call a magic function by name.
2578 2578
2579 2579 Input: a string containing the name of the magic function to call and
2580 2580 any additional arguments to be passed to the magic.
2581 2581
2582 2582 magic('name -opt foo bar') is equivalent to typing at the ipython
2583 2583 prompt:
2584 2584
2585 2585 In[1]: %name -opt foo bar
2586 2586
2587 2587 To call a magic without arguments, simply use magic('name').
2588 2588
2589 2589 This provides a proper Python function to call IPython's magics in any
2590 2590 valid Python code you can type at the interpreter, including loops and
2591 2591 compound statements.
2592 2592 """
2593 2593 warnings.warn(
2594 2594 "`magic(...)` is deprecated since IPython 0.13 (warning added in "
2595 2595 "8.1), use run_line_magic(magic_name, parameter_s).",
2596 2596 DeprecationWarning,
2597 2597 stacklevel=2,
2598 2598 )
2599 2599 # TODO: should we issue a loud deprecation warning here?
2600 2600 magic_name, _, magic_arg_s = arg_s.partition(' ')
2601 2601 magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
2602 2602 return self.run_line_magic(magic_name, magic_arg_s, _stack_depth=2)
2603 2603
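# Illustrative migration sketch (equivalence follows from the partitioning
# logic just above): the deprecated call
#
#     get_ipython().magic("timeit x = 1")
#
# corresponds to
#
#     get_ipython().run_line_magic("timeit", "x = 1")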
2604 2604 #-------------------------------------------------------------------------
2605 2605 # Things related to macros
2606 2606 #-------------------------------------------------------------------------
2607 2607
2608 2608 def define_macro(self, name, themacro):
2609 2609 """Define a new macro
2610 2610
2611 2611 Parameters
2612 2612 ----------
2613 2613 name : str
2614 2614 The name of the macro.
2615 2615 themacro : str or Macro
2616 2616 The action to do upon invoking the macro. If a string, a new
2617 2617 Macro object is created by passing the string to it.
2618 2618 """
2619 2619
2620 2620 from IPython.core import macro
2621 2621
2622 2622 if isinstance(themacro, str):
2623 2623 themacro = macro.Macro(themacro)
2624 2624 if not isinstance(themacro, macro.Macro):
2625 2625 raise ValueError('A macro must be a string or a Macro instance.')
2626 2626 self.user_ns[name] = themacro
2627 2627
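# Illustrative sketch: defining a macro from a plain code string. Once stored
# in user_ns, entering the macro's name at the prompt replays its code.
#
#     ip = get_ipython()
#     ip.define_macro("greet", "print('hi')\nprint('there')\n")
#     ip.run_cell("greet")   # replays both print statements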
2628 2628 #-------------------------------------------------------------------------
2629 2629 # Things related to the running of system commands
2630 2630 #-------------------------------------------------------------------------
2631 2631
2632 2632 def system_piped(self, cmd):
2633 2633 """Call the given cmd in a subprocess, piping stdout/err
2634 2634
2635 2635 Parameters
2636 2636 ----------
2637 2637 cmd : str
2638 2638 Command to execute (cannot end in '&', as background processes are
2639 2639 not supported). Should not be a command that expects input
2640 2640 other than simple text.

2641 2641 """
2642 2642 if cmd.rstrip().endswith('&'):
2643 2643 # this is *far* from a rigorous test
2644 2644 # We do not support backgrounding processes because we either use
2645 2645 # pexpect or pipes to read from. Users can always just call
2646 2646 # os.system() or use ip.system=ip.system_raw
2647 2647 # if they really want a background process.
2648 2648 raise OSError("Background processes not supported.")
2649 2649
2650 2650 # we explicitly do NOT return the subprocess status code, because
2651 2651 # a non-None value would trigger :func:`sys.displayhook` calls.
2652 2652 # Instead, we store the exit_code in user_ns.
2653 2653 self.user_ns['_exit_code'] = system(self.var_expand(cmd, depth=1))
2654 2654
2655 2655 def system_raw(self, cmd):
2656 2656 """Call the given cmd in a subprocess using os.system on Windows or
2657 2657 subprocess.call using the system shell on other platforms.
2658 2658
2659 2659 Parameters
2660 2660 ----------
2661 2661 cmd : str
2662 2662 Command to execute.
2663 2663 """
2664 2664 cmd = self.var_expand(cmd, depth=1)
2665 2665 # warn if there is an IPython magic alternative.
2666 2666 if cmd == "":
2667 2667 main_cmd = ""
2668 2668 else:
2669 2669 main_cmd = cmd.split()[0]
2670 2670 has_magic_alternatives = ("pip", "conda", "cd")
2671 2671
2672 2672 if main_cmd in has_magic_alternatives:
2673 2673 warnings.warn(
2674 2674 (
2675 2675 "You executed the system command !{0} which may not work "
2676 2676 "as expected. Try the IPython magic %{0} instead."
2677 2677 ).format(main_cmd)
2678 2678 )
2679 2679
2680 2680 # protect os.system from UNC paths on Windows, which it can't handle:
2681 2681 if sys.platform == 'win32':
2682 2682 from IPython.utils._process_win32 import AvoidUNCPath
2683 2683 with AvoidUNCPath() as path:
2684 2684 if path is not None:
2685 2685 cmd = '"pushd %s &&"%s' % (path, cmd)
2686 2686 try:
2687 2687 ec = os.system(cmd)
2688 2688 except KeyboardInterrupt:
2689 2689 print('\n' + self.get_exception_only(), file=sys.stderr)
2690 2690 ec = -2
2691 2691 else:
2692 2692 # For posix the result of the subprocess.call() below is an exit
2693 2693 # code, which by convention is zero for success, positive for
2694 2694 # program failure. Exit codes above 128 are reserved for signals,
2695 2695 # and the formula for converting a signal to an exit code is usually
2696 2696 # signal_number+128. To more easily differentiate between exit
2697 2697 # codes and signals, ipython uses negative numbers. For instance
2698 2698 # since control-c is signal 2 but exit code 130, ipython's
2699 2699 # _exit_code variable will read -2. Note that some shells like
2700 2700 # csh and fish don't follow sh/bash conventions for exit codes.
2701 2701 executable = os.environ.get('SHELL', None)
2702 2702 try:
2703 2703 # Use env shell instead of default /bin/sh
2704 2704 ec = subprocess.call(cmd, shell=True, executable=executable)
2705 2705 except KeyboardInterrupt:
2706 2706 # intercept control-C; a long traceback is not useful here
2707 2707 print('\n' + self.get_exception_only(), file=sys.stderr)
2708 2708 ec = 130
2709 2709 if ec > 128:
2710 2710 ec = -(ec - 128)
2711 2711
2712 2712 # We explicitly do NOT return the subprocess status code, because
2713 2713 # a non-None value would trigger :func:`sys.displayhook` calls.
2714 2714 # Instead, we store the exit_code in user_ns. Note the semantics
2715 2715 # of _exit_code: for control-c, _exit_code == -signal.SIGINT,
2716 2716 # but raising SystemExit(_exit_code) will give status 254!
2717 2717 self.user_ns['_exit_code'] = ec
2718 2718
2719 2719 # use piped system by default, because it is better behaved
2720 2720 system = system_piped
2721 2721
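# Illustrative sketch of the _exit_code convention documented above, with
# ip = get_ipython() and a POSIX shell assumed:
#
#     ip.system_raw("exit 3")
#     ip.user_ns["_exit_code"]   # -> 3
#     # a command interrupted with Ctrl-C would be reported as -2, i.e. -(130 - 128)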
2722 2722 def getoutput(self, cmd, split=True, depth=0):
2723 2723 """Get output (possibly including stderr) from a subprocess.
2724 2724
2725 2725 Parameters
2726 2726 ----------
2727 2727 cmd : str
2728 2728 Command to execute (cannot end in '&', as background processes are
2729 2729 not supported).
2730 2730 split : bool, optional
2731 2731 If True, split the output into an IPython SList. Otherwise, an
2732 2732 IPython LSString is returned. These are objects similar to normal
2733 2733 lists and strings, with a few convenience attributes for easier
2734 2734 manipulation of line-based output. You can use '?' on them for
2735 2735 details.
2736 2736 depth : int, optional
2737 2737 How many frames above the caller are the local variables which should
2738 2738 be expanded in the command string? The default (0) assumes that the
2739 2739 expansion variables are in the stack frame calling this function.
2740 2740 """
2741 2741 if cmd.rstrip().endswith('&'):
2742 2742 # this is *far* from a rigorous test
2743 2743 raise OSError("Background processes not supported.")
2744 2744 out = getoutput(self.var_expand(cmd, depth=depth+1))
2745 2745 if split:
2746 2746 out = SList(out.splitlines())
2747 2747 else:
2748 2748 out = LSString(out)
2749 2749 return out
2750 2750
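# Illustrative sketch (with ip = get_ipython()): capturing command output with
# and without splitting.
#
#     files = ip.getoutput("ls")                 # SList, behaves like a list of lines
#     text = ip.getoutput("ls", split=False)     # LSString, a single string with helpers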
2751 2751 #-------------------------------------------------------------------------
2752 2752 # Things related to aliases
2753 2753 #-------------------------------------------------------------------------
2754 2754
2755 2755 def init_alias(self):
2756 2756 self.alias_manager = AliasManager(shell=self, parent=self)
2757 2757 self.configurables.append(self.alias_manager)
2758 2758
2759 2759 #-------------------------------------------------------------------------
2760 2760 # Things related to extensions
2761 2761 #-------------------------------------------------------------------------
2762 2762
2763 2763 def init_extension_manager(self):
2764 2764 self.extension_manager = ExtensionManager(shell=self, parent=self)
2765 2765 self.configurables.append(self.extension_manager)
2766 2766
2767 2767 #-------------------------------------------------------------------------
2768 2768 # Things related to payloads
2769 2769 #-------------------------------------------------------------------------
2770 2770
2771 2771 def init_payload(self):
2772 2772 self.payload_manager = PayloadManager(parent=self)
2773 2773 self.configurables.append(self.payload_manager)
2774 2774
2775 2775 #-------------------------------------------------------------------------
2776 2776 # Things related to the prefilter
2777 2777 #-------------------------------------------------------------------------
2778 2778
2779 2779 def init_prefilter(self):
2780 2780 self.prefilter_manager = PrefilterManager(shell=self, parent=self)
2781 2781 self.configurables.append(self.prefilter_manager)
2782 2782 # Ultimately this will be refactored in the new interpreter code, but
2783 2783 # for now, we should expose the main prefilter method (there's legacy
2784 2784 # code out there that may rely on this).
2785 2785 self.prefilter = self.prefilter_manager.prefilter_lines
2786 2786
2787 2787 def auto_rewrite_input(self, cmd):
2788 2788 """Print to the screen the rewritten form of the user's command.
2789 2789
2790 2790 This shows visual feedback by rewriting input lines that cause
2791 2791 automatic calling to kick in, like::
2792 2792
2793 2793 /f x
2794 2794
2795 2795 into::
2796 2796
2797 2797 ------> f(x)
2798 2798
2799 2799 after the user's input prompt. This helps the user understand that the
2800 2800 input line was transformed automatically by IPython.
2801 2801 """
2802 2802 if not self.show_rewritten_input:
2803 2803 return
2804 2804
2805 2805 # This is overridden in TerminalInteractiveShell to use fancy prompts
2806 2806 print("------> " + cmd)
2807 2807
2808 2808 #-------------------------------------------------------------------------
2809 2809 # Things related to extracting values/expressions from kernel and user_ns
2810 2810 #-------------------------------------------------------------------------
2811 2811
2812 2812 def _user_obj_error(self):
2813 2813 """return simple exception dict
2814 2814
2815 2815 for use in user_expressions
2816 2816 """
2817 2817
2818 2818 etype, evalue, tb = self._get_exc_info()
2819 2819 stb = self.InteractiveTB.get_exception_only(etype, evalue)
2820 2820
2821 2821 exc_info = {
2822 2822 "status": "error",
2823 2823 "traceback": stb,
2824 2824 "ename": etype.__name__,
2825 2825 "evalue": py3compat.safe_unicode(evalue),
2826 2826 }
2827 2827
2828 2828 return exc_info
2829 2829
2830 2830 def _format_user_obj(self, obj):
2831 2831 """format a user object to display dict
2832 2832
2833 2833 for use in user_expressions
2834 2834 """
2835 2835
2836 2836 data, md = self.display_formatter.format(obj)
2837 2837 value = {
2838 2838 'status' : 'ok',
2839 2839 'data' : data,
2840 2840 'metadata' : md,
2841 2841 }
2842 2842 return value
2843 2843
2844 2844 def user_expressions(self, expressions):
2845 2845 """Evaluate a dict of expressions in the user's namespace.
2846 2846
2847 2847 Parameters
2848 2848 ----------
2849 2849 expressions : dict
2850 2850 A dict with string keys and string values. The expression values
2851 2851 should be valid Python expressions, each of which will be evaluated
2852 2852 in the user namespace.
2853 2853
2854 2854 Returns
2855 2855 -------
2856 2856 A dict, keyed like the input expressions dict, with the rich mime-typed
2857 2857 display_data of each value.
2858 2858 """
2859 2859 out = {}
2860 2860 user_ns = self.user_ns
2861 2861 global_ns = self.user_global_ns
2862 2862
2863 2863 for key, expr in expressions.items():
2864 2864 try:
2865 2865 value = self._format_user_obj(eval(expr, global_ns, user_ns))
2866 2866 except:
2867 2867 value = self._user_obj_error()
2868 2868 out[key] = value
2869 2869 return out
2870 2870
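# Illustrative sketch of the user_expressions contract (keys are arbitrary
# labels chosen by the caller, typically a frontend; ip = get_ipython()):
#
#     ip.user_expressions({"answer": "6 * 7", "boom": "1/0"})
#     # -> {"answer": {"status": "ok", "data": {...}, "metadata": {...}},
#     #     "boom":   {"status": "error", "ename": "ZeroDivisionError", ...}}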
2871 2871 #-------------------------------------------------------------------------
2872 2872 # Things related to the running of code
2873 2873 #-------------------------------------------------------------------------
2874 2874
2875 2875 def ex(self, cmd):
2876 2876 """Execute a normal python statement in user namespace."""
2877 2877 with self.builtin_trap:
2878 2878 exec(cmd, self.user_global_ns, self.user_ns)
2879 2879
2880 2880 def ev(self, expr):
2881 2881 """Evaluate python expression expr in user namespace.
2882 2882
2883 2883 Returns the result of evaluation
2884 2884 """
2885 2885 with self.builtin_trap:
2886 2886 return eval(expr, self.user_global_ns, self.user_ns)
2887 2887
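# Illustrative sketch (with ip = get_ipython()): ex() executes statements and
# ev() evaluates expressions, both against the user namespace.
#
#     ip.ex("x = 41")
#     ip.ev("x + 1")    # -> 42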
2888 2888 def safe_execfile(self, fname, *where, exit_ignore=False, raise_exceptions=False, shell_futures=False):
2889 2889 """A safe version of the builtin execfile().
2890 2890
2891 2891 This version will never throw an exception, but instead print
2892 2892 helpful error messages to the screen. This only works on pure
2893 2893 Python files with the .py extension.
2894 2894
2895 2895 Parameters
2896 2896 ----------
2897 2897 fname : string
2898 2898 The name of the file to be executed.
2899 2899 *where : tuple
2900 2900 One or two namespaces, passed to execfile() as (globals,locals).
2901 2901 If only one is given, it is passed as both.
2902 2902 exit_ignore : bool (False)
2903 2903 If True, then silence SystemExit for non-zero status (it is always
2904 2904 silenced for zero status, as it is so common).
2905 2905 raise_exceptions : bool (False)
2906 2906 If True raise exceptions everywhere. Meant for testing.
2907 2907 shell_futures : bool (False)
2908 2908 If True, the code will share future statements with the interactive
2909 2909 shell. It will both be affected by previous __future__ imports, and
2910 2910 any __future__ imports in the code will affect the shell. If False,
2911 2911 __future__ imports are not shared in either direction.
2912 2912
2913 2913 """
2914 2914 fname = Path(fname).expanduser().resolve()
2915 2915
2916 2916 # Make sure we can open the file
2917 2917 try:
2918 2918 with fname.open("rb"):
2919 2919 pass
2920 2920 except:
2921 2921 warn('Could not open file <%s> for safe execution.' % fname)
2922 2922 return
2923 2923
2924 2924 # Find things also in current directory. This is needed to mimic the
2925 2925 # behavior of running a script from the system command line, where
2926 2926 # Python inserts the script's directory into sys.path
2927 2927 dname = str(fname.parent)
2928 2928
2929 2929 with prepended_to_syspath(dname), self.builtin_trap:
2930 2930 try:
2931 2931 glob, loc = (where + (None, ))[:2]
2932 2932 py3compat.execfile(
2933 2933 fname, glob, loc,
2934 2934 self.compile if shell_futures else None)
2935 2935 except SystemExit as status:
2936 2936 # If the call was made with 0 or None exit status (sys.exit(0)
2937 2937 # or sys.exit() ), don't bother showing a traceback, as both of
2938 2938 # these are considered normal by the OS:
2939 2939 # > python -c'import sys;sys.exit(0)'; echo $?
2940 2940 # 0
2941 2941 # > python -c'import sys;sys.exit()'; echo $?
2942 2942 # 0
2943 2943 # For other exit status, we show the exception unless
2944 2944 # explicitly silenced, but only in short form.
2945 2945 if status.code:
2946 2946 if raise_exceptions:
2947 2947 raise
2948 2948 if not exit_ignore:
2949 2949 self.showtraceback(exception_only=True)
2950 2950 except:
2951 2951 if raise_exceptions:
2952 2952 raise
2953 2953 # tb offset is 2 because we wrap execfile
2954 2954 self.showtraceback(tb_offset=2)
2955 2955
2956 2956 def safe_execfile_ipy(self, fname, shell_futures=False, raise_exceptions=False):
2957 2957 """Like safe_execfile, but for .ipy or .ipynb files with IPython syntax.
2958 2958
2959 2959 Parameters
2960 2960 ----------
2961 2961 fname : str
2962 2962 The name of the file to execute. The filename must have a
2963 2963 .ipy or .ipynb extension.
2964 2964 shell_futures : bool (False)
2965 2965 If True, the code will share future statements with the interactive
2966 2966 shell. It will both be affected by previous __future__ imports, and
2967 2967 any __future__ imports in the code will affect the shell. If False,
2968 2968 __future__ imports are not shared in either direction.
2969 2969 raise_exceptions : bool (False)
2970 2970 If True raise exceptions everywhere. Meant for testing.
2971 2971 """
2972 2972 fname = Path(fname).expanduser().resolve()
2973 2973
2974 2974 # Make sure we can open the file
2975 2975 try:
2976 2976 with fname.open("rb"):
2977 2977 pass
2978 2978 except:
2979 2979 warn('Could not open file <%s> for safe execution.' % fname)
2980 2980 return
2981 2981
2982 2982 # Find things also in current directory. This is needed to mimic the
2983 2983 # behavior of running a script from the system command line, where
2984 2984 # Python inserts the script's directory into sys.path
2985 2985 dname = str(fname.parent)
2986 2986
2987 2987 def get_cells():
2988 2988 """generator for sequence of code blocks to run"""
2989 2989 if fname.suffix == ".ipynb":
2990 2990 from nbformat import read
2991 2991 nb = read(fname, as_version=4)
2992 2992 if not nb.cells:
2993 2993 return
2994 2994 for cell in nb.cells:
2995 2995 if cell.cell_type == 'code':
2996 2996 yield cell.source
2997 2997 else:
2998 2998 yield fname.read_text(encoding="utf-8")
2999 2999
3000 3000 with prepended_to_syspath(dname):
3001 3001 try:
3002 3002 for cell in get_cells():
3003 3003 result = self.run_cell(cell, silent=True, shell_futures=shell_futures)
3004 3004 if raise_exceptions:
3005 3005 result.raise_error()
3006 3006 elif not result.success:
3007 3007 break
3008 3008 except:
3009 3009 if raise_exceptions:
3010 3010 raise
3011 3011 self.showtraceback()
3012 3012 warn('Unknown failure executing file: <%s>' % fname)
3013 3013
3014 3014 def safe_run_module(self, mod_name, where):
3015 3015 """A safe version of runpy.run_module().
3016 3016
3017 3017 This version will never throw an exception, but instead print
3018 3018 helpful error messages to the screen.
3019 3019
3020 3020 `SystemExit` exceptions with status code 0 or None are ignored.
3021 3021
3022 3022 Parameters
3023 3023 ----------
3024 3024 mod_name : string
3025 3025 The name of the module to be executed.
3026 3026 where : dict
3027 3027 The globals namespace.
3028 3028 """
3029 3029 try:
3030 3030 try:
3031 3031 where.update(
3032 3032 runpy.run_module(str(mod_name), run_name="__main__",
3033 3033 alter_sys=True)
3034 3034 )
3035 3035 except SystemExit as status:
3036 3036 if status.code:
3037 3037 raise
3038 3038 except:
3039 3039 self.showtraceback()
3040 3040 warn('Unknown failure executing module: <%s>' % mod_name)
3041 3041
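# Illustrative sketch (the module name is hypothetical; ip = get_ipython()):
# run a module as __main__ and capture its globals, which is what `%run -m`
# relies on.
#
#     ns = {}
#     ip.safe_run_module("mypkg.cli", ns)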
3042 3042 def run_cell(
3043 3043 self,
3044 3044 raw_cell,
3045 3045 store_history=False,
3046 3046 silent=False,
3047 3047 shell_futures=True,
3048 3048 cell_id=None,
3049 3049 ):
3050 3050 """Run a complete IPython cell.
3051 3051
3052 3052 Parameters
3053 3053 ----------
3054 3054 raw_cell : str
3055 3055 The code (including IPython code such as %magic functions) to run.
3056 3056 store_history : bool
3057 3057 If True, the raw and translated cell will be stored in IPython's
3058 3058 history. For user code calling back into IPython's machinery, this
3059 3059 should be set to False.
3060 3060 silent : bool
3061 3061 If True, avoid side-effects, such as implicit displayhooks
3062 3062 and logging. silent=True forces store_history=False.
3063 3063 shell_futures : bool
3064 3064 If True, the code will share future statements with the interactive
3065 3065 shell. It will both be affected by previous __future__ imports, and
3066 3066 any __future__ imports in the code will affect the shell. If False,
3067 3067 __future__ imports are not shared in either direction.
3068 3068
3069 3069 Returns
3070 3070 -------
3071 3071 result : :class:`ExecutionResult`
3072 3072 """
3073 3073 result = None
3074 3074 try:
3075 3075 result = self._run_cell(
3076 3076 raw_cell, store_history, silent, shell_futures, cell_id
3077 3077 )
3078 3078 finally:
3079 3079 self.events.trigger('post_execute')
3080 3080 if not silent:
3081 3081 self.events.trigger('post_run_cell', result)
3082 3082 return result
3083 3083
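# Illustrative sketch (with ip = get_ipython()): programmatic execution of a
# cell, mirroring what the terminal and kernel frontends do. The attribute
# values shown are approximate.
#
#     res = ip.run_cell("x = 1 + 1\nx", store_history=True)
#     res.success    # -> True
#     res.result     # -> 2 (value of the last displayed expression)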
3084 3084 def _run_cell(
3085 3085 self,
3086 3086 raw_cell: str,
3087 3087 store_history: bool,
3088 3088 silent: bool,
3089 3089 shell_futures: bool,
3090 3090 cell_id: str,
3091 3091 ) -> ExecutionResult:
3092 3092 """Internal method to run a complete IPython cell."""
3093 3093
3094 3094 # we need to avoid calling self.transform_cell multiple time on the same thing
3095 3095 # so we need to store some results:
3096 3096 preprocessing_exc_tuple = None
3097 3097 try:
3098 3098 transformed_cell = self.transform_cell(raw_cell)
3099 3099 except Exception:
3100 3100 transformed_cell = raw_cell
3101 3101 preprocessing_exc_tuple = sys.exc_info()
3102 3102
3103 3103 assert transformed_cell is not None
3104 3104 coro = self.run_cell_async(
3105 3105 raw_cell,
3106 3106 store_history=store_history,
3107 3107 silent=silent,
3108 3108 shell_futures=shell_futures,
3109 3109 transformed_cell=transformed_cell,
3110 3110 preprocessing_exc_tuple=preprocessing_exc_tuple,
3111 3111 cell_id=cell_id,
3112 3112 )
3113 3113
3114 3114 # run_cell_async is async, but may not actually need an eventloop.
3115 3115 # when this is the case, we want to run it using the pseudo_sync_runner
3116 3116 # so that code can invoke eventloops (for example via the %run and
3117 3117 # `%paste` magics).
3118 3118 if self.trio_runner:
3119 3119 runner = self.trio_runner
3120 3120 elif self.should_run_async(
3121 3121 raw_cell,
3122 3122 transformed_cell=transformed_cell,
3123 3123 preprocessing_exc_tuple=preprocessing_exc_tuple,
3124 3124 ):
3125 3125 runner = self.loop_runner
3126 3126 else:
3127 3127 runner = _pseudo_sync_runner
3128 3128
3129 3129 try:
3130 3130 result = runner(coro)
3131 3131 except BaseException as e:
3132 3132 info = ExecutionInfo(
3133 3133 raw_cell, store_history, silent, shell_futures, cell_id
3134 3134 )
3135 3135 result = ExecutionResult(info)
3136 3136 result.error_in_exec = e
3137 3137 self.showtraceback(running_compiled_code=True)
3138 3138 finally:
3139 3139 return result
3140 3140
3141 3141 def should_run_async(
3142 3142 self, raw_cell: str, *, transformed_cell=None, preprocessing_exc_tuple=None
3143 3143 ) -> bool:
3144 3144 """Return whether a cell should be run asynchronously via a coroutine runner
3145 3145
3146 3146 Parameters
3147 3147 ----------
3148 3148 raw_cell : str
3149 3149 The code to be executed
3150 3150
3151 3151 Returns
3152 3152 -------
3153 3153 result: bool
3154 3154 Whether the code needs to be run with a coroutine runner or not
3155 3155 .. versionadded:: 7.0
3156 3156 """
3157 3157 if not self.autoawait:
3158 3158 return False
3159 3159 if preprocessing_exc_tuple is not None:
3160 3160 return False
3161 3161 assert preprocessing_exc_tuple is None
3162 3162 if transformed_cell is None:
3163 3163 warnings.warn(
3164 3164 "`should_run_async` will not call `transform_cell`"
3165 3165 " automatically in the future. Please pass the result to"
3166 3166 " the `transformed_cell` argument and any exception that happens"
3167 3167 " during the"
3168 3168 " transform in `preprocessing_exc_tuple` in"
3169 3169 " IPython 7.17 and above.",
3170 3170 DeprecationWarning,
3171 3171 stacklevel=2,
3172 3172 )
3173 3173 try:
3174 3174 cell = self.transform_cell(raw_cell)
3175 3175 except Exception:
3176 3176 # any exception during transform will be raised
3177 3177 # prior to execution
3178 3178 return False
3179 3179 else:
3180 3180 cell = transformed_cell
3181 3181 return _should_be_async(cell)
3182 3182
3183 3183 async def run_cell_async(
3184 3184 self,
3185 3185 raw_cell: str,
3186 3186 store_history=False,
3187 3187 silent=False,
3188 3188 shell_futures=True,
3189 3189 *,
3190 3190 transformed_cell: Optional[str] = None,
3191 3191 preprocessing_exc_tuple: Optional[AnyType] = None,
3192 3192 cell_id=None,
3193 3193 ) -> ExecutionResult:
3194 3194 """Run a complete IPython cell asynchronously.
3195 3195
3196 3196 Parameters
3197 3197 ----------
3198 3198 raw_cell : str
3199 3199 The code (including IPython code such as %magic functions) to run.
3200 3200 store_history : bool
3201 3201 If True, the raw and translated cell will be stored in IPython's
3202 3202 history. For user code calling back into IPython's machinery, this
3203 3203 should be set to False.
3204 3204 silent : bool
3205 3205 If True, avoid side-effects, such as implicit displayhooks
3206 3206 and logging. silent=True forces store_history=False.
3207 3207 shell_futures : bool
3208 3208 If True, the code will share future statements with the interactive
3209 3209 shell. It will both be affected by previous __future__ imports, and
3210 3210 any __future__ imports in the code will affect the shell. If False,
3211 3211 __future__ imports are not shared in either direction.
3212 3212 transformed_cell: str
3213 3213 The cell after it has been passed through the input transformers.
3214 3214 preprocessing_exc_tuple:
3215 3215 Exception info tuple (as returned by sys.exc_info()) if the transformation failed.
3216 3216
3217 3217 Returns
3218 3218 -------
3219 3219 result : :class:`ExecutionResult`
3220 3220
3221 3221 .. versionadded:: 7.0
3222 3222 """
3223 3223 info = ExecutionInfo(raw_cell, store_history, silent, shell_futures, cell_id)
3224 3224 result = ExecutionResult(info)
3225 3225
3226 3226 if (not raw_cell) or raw_cell.isspace():
3227 3227 self.last_execution_succeeded = True
3228 3228 self.last_execution_result = result
3229 3229 return result
3230 3230
3231 3231 if silent:
3232 3232 store_history = False
3233 3233
3234 3234 if store_history:
3235 3235 result.execution_count = self.execution_count
3236 3236
3237 3237 def error_before_exec(value):
3238 3238 if store_history:
3239 3239 self.execution_count += 1
3240 3240 result.error_before_exec = value
3241 3241 self.last_execution_succeeded = False
3242 3242 self.last_execution_result = result
3243 3243 return result
3244 3244
3245 3245 self.events.trigger('pre_execute')
3246 3246 if not silent:
3247 3247 self.events.trigger('pre_run_cell', info)
3248 3248
3249 3249 if transformed_cell is None:
3250 3250 warnings.warn(
3251 3251 "`run_cell_async` will not call `transform_cell`"
3252 3252 " automatically in the future. Please pass the result to"
3253 3253 " the `transformed_cell` argument and any exception that happens"
3254 3254 " during the"
3255 3255 " transform in `preprocessing_exc_tuple` in"
3256 3256 " IPython 7.17 and above.",
3257 3257 DeprecationWarning,
3258 3258 stacklevel=2,
3259 3259 )
3260 3260 # If any of our input transformation (input_transformer_manager or
3261 3261 # prefilter_manager) raises an exception, we store it in this variable
3262 3262 # so that we can display the error after logging the input and storing
3263 3263 # it in the history.
3264 3264 try:
3265 3265 cell = self.transform_cell(raw_cell)
3266 3266 except Exception:
3267 3267 preprocessing_exc_tuple = sys.exc_info()
3268 3268 cell = raw_cell # cell has to exist so it can be stored/logged
3269 3269 else:
3270 3270 preprocessing_exc_tuple = None
3271 3271 else:
3272 3272 if preprocessing_exc_tuple is None:
3273 3273 cell = transformed_cell
3274 3274 else:
3275 3275 cell = raw_cell
3276 3276
3277 3277 # Do NOT store paste/cpaste magic history
3278 3278 if "get_ipython().run_line_magic(" in cell and "paste" in cell:
3279 3279 store_history = False
3280 3280
3281 3281 # Store raw and processed history
3282 3282 if store_history:
3283 3283 assert self.history_manager is not None
3284 3284 self.history_manager.store_inputs(self.execution_count, cell, raw_cell)
3285 3285 if not silent:
3286 3286 self.logger.log(cell, raw_cell)
3287 3287
3288 3288 # Display the exception if input processing failed.
3289 3289 if preprocessing_exc_tuple is not None:
3290 3290 self.showtraceback(preprocessing_exc_tuple)
3291 3291 if store_history:
3292 3292 self.execution_count += 1
3293 3293 return error_before_exec(preprocessing_exc_tuple[1])
3294 3294
3295 3295 # Our own compiler remembers the __future__ environment. If we want to
3296 3296 # run code with a separate __future__ environment, use the default
3297 3297 # compiler
3298 3298 compiler = self.compile if shell_futures else self.compiler_class()
3299 3299
3300 3300 with self.builtin_trap:
3301 3301 cell_name = compiler.cache(cell, self.execution_count, raw_code=raw_cell)
3302 3302
3303 3303 with self.display_trap:
3304 3304 # Compile to bytecode
3305 3305 try:
3306 3306 code_ast = compiler.ast_parse(cell, filename=cell_name)
3307 3307 except self.custom_exceptions as e:
3308 3308 etype, value, tb = sys.exc_info()
3309 3309 self.CustomTB(etype, value, tb)
3310 3310 return error_before_exec(e)
3311 3311 except IndentationError as e:
3312 3312 self.showindentationerror()
3313 3313 return error_before_exec(e)
3314 3314 except (OverflowError, SyntaxError, ValueError, TypeError,
3315 3315 MemoryError) as e:
3316 3316 self.showsyntaxerror()
3317 3317 return error_before_exec(e)
3318 3318
3319 3319 # Apply AST transformations
3320 3320 try:
3321 3321 code_ast = self.transform_ast(code_ast)
3322 3322 except InputRejected as e:
3323 3323 self.showtraceback()
3324 3324 return error_before_exec(e)
3325 3325
3326 3326 # Give the displayhook a reference to our ExecutionResult so it
3327 3327 # can fill in the output value.
3328 3328 self.displayhook.exec_result = result
3329 3329
3330 3330 # Execute the user code
3331 3331 interactivity = "none" if silent else self.ast_node_interactivity
3332 3332
3333 3333
3334 3334 has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
3335 3335 interactivity=interactivity, compiler=compiler, result=result)
3336 3336
3337 3337 self.last_execution_succeeded = not has_raised
3338 3338 self.last_execution_result = result
3339 3339
3340 3340 # Reset this so later displayed values do not modify the
3341 3341 # ExecutionResult
3342 3342 self.displayhook.exec_result = None
3343 3343
3344 3344 if store_history:
3345 3345 assert self.history_manager is not None
3346 3346 # Write output to the database. Does nothing unless
3347 3347 # history output logging is enabled.
3348 3348 self.history_manager.store_output(self.execution_count)
3349 3349 # Each cell is a *single* input, regardless of how many lines it has
3350 3350 self.execution_count += 1
3351 3351
3352 3352 return result
3353 3353
3354 3354 def transform_cell(self, raw_cell):
3355 3355 """Transform an input cell before parsing it.
3356 3356
3357 3357 Static transformations, implemented in IPython.core.inputtransformer2,
3358 3358 deal with things like ``%magic`` and ``!system`` commands.
3359 3359 These run on all input.
3360 3360 Dynamic transformations, for things like unescaped magics and the exit
3361 3361 autocall, depend on the state of the interpreter.
3362 3362 These only apply to single line inputs.
3363 3363
3364 3364 These string-based transformations are followed by AST transformations;
3365 3365 see :meth:`transform_ast`.
3366 3366 """
3367 3367 # Static input transformations
3368 3368 cell = self.input_transformer_manager.transform_cell(raw_cell)
3369 3369
3370 3370 if len(cell.splitlines()) == 1:
3371 3371 # Dynamic transformations - only applied for single line commands
3372 3372 with self.builtin_trap:
3373 3373 # use prefilter_lines to handle trailing newlines
3374 3374 # restore trailing newline for ast.parse
3375 3375 cell = self.prefilter_manager.prefilter_lines(cell) + '\n'
3376 3376
3377 3377 lines = cell.splitlines(keepends=True)
3378 3378 for transform in self.input_transformers_post:
3379 3379 lines = transform(lines)
3380 3380 cell = ''.join(lines)
3381 3381
3382 3382 return cell
3383 3383
3384 3384 def transform_ast(self, node):
3385 3385 """Apply the AST transformations from self.ast_transformers
3386 3386
3387 3387 Parameters
3388 3388 ----------
3389 3389 node : ast.Node
3390 3390 The root node to be transformed. Typically called with the ast.Module
3391 3391 produced by parsing user input.
3392 3392
3393 3393 Returns
3394 3394 -------
3395 3395 An ast.Node corresponding to the node it was called with. Note that it
3396 3396 may also modify the passed object, so don't rely on references to the
3397 3397 original AST.
3398 3398 """
3399 3399 for transformer in self.ast_transformers:
3400 3400 try:
3401 3401 node = transformer.visit(node)
3402 3402 except InputRejected:
3403 3403 # User-supplied AST transformers can reject an input by raising
3404 3404 # an InputRejected. Short-circuit in this case so that we
3405 3405 # don't unregister the transform.
3406 3406 raise
3407 3407 except Exception as e:
3408 3408 warn(
3409 3409 "AST transformer %r threw an error. It will be unregistered. %s"
3410 3410 % (transformer, e)
3411 3411 )
3412 3412 self.ast_transformers.remove(transformer)
3413 3413
3414 3414 if self.ast_transformers:
3415 3415 ast.fix_missing_locations(node)
3416 3416 return node
3417 3417
3418 3418 async def run_ast_nodes(
3419 3419 self,
3420 3420 nodelist: ListType[stmt],
3421 3421 cell_name: str,
3422 3422 interactivity="last_expr",
3423 3423 compiler=compile,
3424 3424 result=None,
3425 3425 ):
3426 3426 """Run a sequence of AST nodes. The execution mode depends on the
3427 3427 interactivity parameter.
3428 3428
3429 3429 Parameters
3430 3430 ----------
3431 3431 nodelist : list
3432 3432 A sequence of AST nodes to run.
3433 3433 cell_name : str
3434 3434 Will be passed to the compiler as the filename of the cell. Typically
3435 3435 the value returned by ip.compile.cache(cell).
3436 3436 interactivity : str
3437 3437 'all', 'last', 'last_expr' , 'last_expr_or_assign' or 'none',
3438 3438 specifying which nodes should be run interactively (displaying output
3439 3439 from expressions). 'last_expr' will run the last node interactively
3440 3440 only if it is an expression (i.e. expressions in loops or other blocks
3441 3441 are not displayed). 'last_expr_or_assign' will run the last expression
3442 3442 or the last assignment. Other values for this parameter will raise a
3443 3443 ValueError.
3444 3444
3445 3445 compiler : callable
3446 3446 A function with the same interface as the built-in compile(), to turn
3447 3447 the AST nodes into code objects. Default is the built-in compile().
3448 3448 result : ExecutionResult, optional
3449 3449 An object to store exceptions that occur during execution.
3450 3450
3451 3451 Returns
3452 3452 -------
3453 3453 True if an exception occurred while running code, False if it finished
3454 3454 running.
3455 3455 """
3456 3456 if not nodelist:
3457 3457 return
3458 3458
3459 3459
3460 3460 if interactivity == 'last_expr_or_assign':
3461 3461 if isinstance(nodelist[-1], _assign_nodes):
3462 3462 asg = nodelist[-1]
3463 3463 if isinstance(asg, ast.Assign) and len(asg.targets) == 1:
3464 3464 target = asg.targets[0]
3465 3465 elif isinstance(asg, _single_targets_nodes):
3466 3466 target = asg.target
3467 3467 else:
3468 3468 target = None
3469 3469 if isinstance(target, ast.Name):
3470 3470 nnode = ast.Expr(ast.Name(target.id, ast.Load()))
3471 3471 ast.fix_missing_locations(nnode)
3472 3472 nodelist.append(nnode)
3473 3473 interactivity = 'last_expr'
3474 3474
3475 3475 _async = False
3476 3476 if interactivity == 'last_expr':
3477 3477 if isinstance(nodelist[-1], ast.Expr):
3478 3478 interactivity = "last"
3479 3479 else:
3480 3480 interactivity = "none"
3481 3481
3482 3482 if interactivity == 'none':
3483 3483 to_run_exec, to_run_interactive = nodelist, []
3484 3484 elif interactivity == 'last':
3485 3485 to_run_exec, to_run_interactive = nodelist[:-1], nodelist[-1:]
3486 3486 elif interactivity == 'all':
3487 3487 to_run_exec, to_run_interactive = [], nodelist
3488 3488 else:
3489 3489 raise ValueError("Interactivity was %r" % interactivity)
3490 3490
3491 3491 try:
3492 3492
3493 3493 def compare(code):
3494 3494 is_async = inspect.CO_COROUTINE & code.co_flags == inspect.CO_COROUTINE
3495 3495 return is_async
3496 3496
3497 3497 # refactor that to just change the mod constructor.
3498 3498 to_run = []
3499 3499 for node in to_run_exec:
3500 3500 to_run.append((node, "exec"))
3501 3501
3502 3502 for node in to_run_interactive:
3503 3503 to_run.append((node, "single"))
3504 3504
3505 3505 for node, mode in to_run:
3506 3506 if mode == "exec":
3507 3507 mod = Module([node], [])
3508 3508 elif mode == "single":
3509 3509 mod = ast.Interactive([node]) # type: ignore
3510 3510 with compiler.extra_flags(
3511 3511 getattr(ast, "PyCF_ALLOW_TOP_LEVEL_AWAIT", 0x0)
3512 3512 if self.autoawait
3513 3513 else 0x0
3514 3514 ):
3515 3515 code = compiler(mod, cell_name, mode)
3516 3516 asy = compare(code)
3517 3517 if await self.run_code(code, result, async_=asy):
3518 3518 return True
3519 3519
3520 3520 # Flush softspace
3521 3521 if softspace(sys.stdout, 0):
3522 3522 print()
3523 3523
3524 3524 except:
3525 3525 # It's possible to have exceptions raised here, typically by
3526 3526 # compilation of odd code (such as a naked 'return' outside a
3527 3527 # function) that did parse but isn't valid. Typically the exception
3528 3528 # is a SyntaxError, but it's safest just to catch anything and show
3529 3529 # the user a traceback.
3530 3530
3531 3531 # We do only one try/except outside the loop to minimize the impact
3532 3532 # on runtime, and also because if any node in the node list is
3533 3533 # broken, we should stop execution completely.
3534 3534 if result:
3535 3535 result.error_before_exec = sys.exc_info()[1]
3536 3536 self.showtraceback()
3537 3537 return True
3538 3538
3539 3539 return False
3540 3540
3541 3541 async def run_code(self, code_obj, result=None, *, async_=False):
3542 3542 """Execute a code object.
3543 3543
3544 3544 When an exception occurs, self.showtraceback() is called to display a
3545 3545 traceback.
3546 3546
3547 3547 Parameters
3548 3548 ----------
3549 3549 code_obj : code object
3550 3550 A compiled code object, to be executed
3551 3551 result : ExecutionResult, optional
3552 3552 An object to store exceptions that occur during execution.
3553 3553 async_ : Bool (Experimental)
3554 3554 Attempt to run top-level asynchronous code in a default loop.
3555 3555
3556 3556 Returns
3557 3557 -------
3558 3558 False : successful execution.
3559 3559 True : an error occurred.
3560 3560 """
3561 3561 # special value to say that anything above is IPython and should be
3562 3562 # hidden.
3563 3563 __tracebackhide__ = "__ipython_bottom__"
3564 3564 # Set our own excepthook in case the user code tries to call it
3565 3565 # directly, so that the IPython crash handler doesn't get triggered
3566 3566 old_excepthook, sys.excepthook = sys.excepthook, self.excepthook
3567 3567
3568 3568 # we save the original sys.excepthook in the instance, in case config
3569 3569 # code (such as magics) needs access to it.
3570 3570 self.sys_excepthook = old_excepthook
3571 3571 outflag = True # happens in more places, so it's easier as default
3572 3572 try:
3573 3573 try:
3574 3574 if async_:
3575 3575 await eval(code_obj, self.user_global_ns, self.user_ns)
3576 3576 else:
3577 3577 exec(code_obj, self.user_global_ns, self.user_ns)
3578 3578 finally:
3579 3579 # Reset our crash handler in place
3580 3580 sys.excepthook = old_excepthook
3581 3581 except SystemExit as e:
3582 3582 if result is not None:
3583 3583 result.error_in_exec = e
3584 3584 self.showtraceback(exception_only=True)
3585 3585 warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1)
3586 3586 except bdb.BdbQuit:
3587 3587 etype, value, tb = sys.exc_info()
3588 3588 if result is not None:
3589 3589 result.error_in_exec = value
3590 3590 # the BdbQuit stops here
3591 3591 except self.custom_exceptions:
3592 3592 etype, value, tb = sys.exc_info()
3593 3593 if result is not None:
3594 3594 result.error_in_exec = value
3595 3595 self.CustomTB(etype, value, tb)
3596 3596 except:
3597 3597 if result is not None:
3598 3598 result.error_in_exec = sys.exc_info()[1]
3599 3599 self.showtraceback(running_compiled_code=True)
3600 3600 else:
3601 3601 outflag = False
3602 3602 return outflag
3603 3603
3604 3604 # For backwards compatibility
3605 3605 runcode = run_code
3606 3606
3607 3607 def check_complete(self, code: str) -> Tuple[str, str]:
3608 3608 """Return whether a block of code is ready to execute, or should be continued
3609 3609
3610 3610 Parameters
3611 3611 ----------
3612 3612 code : string
3613 3613 Python input code, which can be multiline.
3614 3614
3615 3615 Returns
3616 3616 -------
3617 3617 status : str
3618 3618 One of 'complete', 'incomplete', or 'invalid' if source is not a
3619 3619 prefix of valid code.
3620 3620 indent : str
3621 3621 When status is 'incomplete', this is some whitespace to insert on
3622 3622 the next line of the prompt.
3623 3623 """
3624 3624 status, nspaces = self.input_transformer_manager.check_complete(code)
3625 3625 return status, ' ' * (nspaces or 0)
3626 3626
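# Illustrative sketch (with ip = get_ipython()): approximate return values of
# check_complete().
#
#     ip.check_complete("x = 1")                 # -> ('complete', '')
#     ip.check_complete("for i in range(3):")    # -> ('incomplete', '    ')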
3627 3627 #-------------------------------------------------------------------------
3628 3628 # Things related to GUI support and pylab
3629 3629 #-------------------------------------------------------------------------
3630 3630
3631 3631 active_eventloop: Optional[str] = None
3632 3632
3633 3633 def enable_gui(self, gui=None):
3634 3634 raise NotImplementedError('Implement enable_gui in a subclass')
3635 3635
3636 3636 def enable_matplotlib(self, gui=None):
3637 3637 """Enable interactive matplotlib and inline figure support.
3638 3638
3639 3639 This takes the following steps:
3640 3640
3641 3641 1. select the appropriate eventloop and matplotlib backend
3642 3642 2. set up matplotlib for interactive use with that backend
3643 3643 3. configure formatters for inline figure display
3644 3644 4. enable the selected gui eventloop
3645 3645
3646 3646 Parameters
3647 3647 ----------
3648 3648 gui : optional, string
3649 3649 If given, dictates the choice of matplotlib GUI backend to use
3650 3650 (should be one of IPython's supported backends, 'qt', 'osx', 'tk',
3651 3651 'gtk', 'wx' or 'inline'), otherwise we use the default chosen by
3652 3652 matplotlib (as dictated by the matplotlib build-time options plus the
3653 3653 user's matplotlibrc configuration file). Note that not all backends
3654 3654 make sense in all contexts, for example a terminal ipython can't
3655 3655 display figures inline.
3656 3656 """
3657 3657 from .pylabtools import _matplotlib_manages_backends
3658 3658
3659 3659 if not _matplotlib_manages_backends() and gui in (None, "auto"):
3660 3660 # Early import of backend_inline required for its side effect of
3661 3661 # calling _enable_matplotlib_integration()
3662 3662 import matplotlib_inline.backend_inline
3663 3663
3664 3664 from IPython.core import pylabtools as pt
3665 3665 gui, backend = pt.find_gui_and_backend(gui, self.pylab_gui_select)
3666 3666
3667 3667 if gui != None:
3668 3668 # If we have our first gui selection, store it
3669 3669 if self.pylab_gui_select is None:
3670 3670 self.pylab_gui_select = gui
3671 3671 # Otherwise if they are different
3672 3672 elif gui != self.pylab_gui_select:
3673 3673 print('Warning: Cannot change to a different GUI toolkit: %s.'
3674 3674 ' Using %s instead.' % (gui, self.pylab_gui_select))
3675 3675 gui, backend = pt.find_gui_and_backend(self.pylab_gui_select)
3676 3676
3677 3677 pt.activate_matplotlib(backend)
3678 3678
3679 3679 from matplotlib_inline.backend_inline import configure_inline_support
3680 3680
3681 3681 configure_inline_support(self, backend)
3682 3682
3683 3683 # Now we must activate the gui pylab wants to use, and fix %run to take
3684 3684 # plot updates into account
3685 3685 self.enable_gui(gui)
3686 3686 self.magics_manager.registry['ExecutionMagics'].default_runner = \
3687 3687 pt.mpl_runner(self.safe_execfile)
3688 3688
3689 3689 return gui, backend
3690 3690
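# Illustrative sketch (with ip = get_ipython()): this is what the %matplotlib
# line magic ultimately calls. The backend name is an example; availability
# depends on the environment.
#
#     gui, backend = ip.enable_matplotlib("qt")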
3691 3691 def enable_pylab(self, gui=None, import_all=True, welcome_message=False):
3692 3692 """Activate pylab support at runtime.
3693 3693
3694 3694 This turns on support for matplotlib, preloads into the interactive
3695 3695 namespace all of numpy and pylab, and configures IPython to correctly
3696 3696 interact with the GUI event loop. The GUI backend to be used can be
3697 3697 optionally selected with the optional ``gui`` argument.
3698 3698
3699 3699 This method only adds preloading the namespace to InteractiveShell.enable_matplotlib.
3700 3700
3701 3701 Parameters
3702 3702 ----------
3703 3703 gui : optional, string
3704 3704 If given, dictates the choice of matplotlib GUI backend to use
3705 3705 (should be one of IPython's supported backends, 'qt', 'osx', 'tk',
3706 3706 'gtk', 'wx' or 'inline'), otherwise we use the default chosen by
3707 3707 matplotlib (as dictated by the matplotlib build-time options plus the
3708 3708 user's matplotlibrc configuration file). Note that not all backends
3709 3709 make sense in all contexts, for example a terminal ipython can't
3710 3710 display figures inline.
3711 3711 import_all : optional, bool, default: True
3712 3712 Whether to do `from numpy import *` and `from pylab import *`
3713 3713 in addition to module imports.
3714 3714 welcome_message : deprecated
3715 3715 This argument is ignored, no welcome message will be displayed.
3716 3716 """
3717 3717 from IPython.core.pylabtools import import_pylab
3718 3718
3719 3719 gui, backend = self.enable_matplotlib(gui)
3720 3720
3721 3721 # We want to prevent the loading of pylab to pollute the user's
3722 3722 # namespace as shown by the %who* magics, so we execute the activation
3723 3723 # code in an empty namespace, and we update *both* user_ns and
3724 3724 # user_ns_hidden with this information.
3725 3725 ns = {}
3726 3726 import_pylab(ns, import_all)
3727 3727 # warn about clobbered names
3728 3728 ignored = {"__builtins__"}
3729 3729 both = set(ns).intersection(self.user_ns).difference(ignored)
3730 3730 clobbered = [ name for name in both if self.user_ns[name] is not ns[name] ]
3731 3731 self.user_ns.update(ns)
3732 3732 self.user_ns_hidden.update(ns)
3733 3733 return gui, backend, clobbered
3734 3734
3735 3735 #-------------------------------------------------------------------------
3736 3736 # Utilities
3737 3737 #-------------------------------------------------------------------------
3738 3738
3739 3739 def var_expand(self, cmd, depth=0, formatter=DollarFormatter()):
3740 3740 """Expand python variables in a string.
3741 3741
3742 3742 The depth argument indicates how many frames above the caller should
3743 3743 be walked to look for the local namespace where to expand variables.
3744 3744
3745 3745 The global namespace for expansion is always the user's interactive
3746 3746 namespace.
3747 3747 """
3748 3748 ns = self.user_ns.copy()
3749 3749 try:
3750 3750 frame = sys._getframe(depth+1)
3751 3751 except ValueError:
3752 3752 # This is thrown if there aren't that many frames on the stack,
3753 3753 # e.g. if a script called run_line_magic() directly.
3754 3754 pass
3755 3755 else:
3756 3756 ns.update(frame.f_locals)
3757 3757
3758 3758 try:
3759 3759 # We have to use .vformat() here, because 'self' is a valid and common
3760 3760 # name, and expanding **ns for .format() would make it collide with
3761 3761 # the 'self' argument of the method.
3762 3762 cmd = formatter.vformat(cmd, args=[], kwargs=ns)
3763 3763 except Exception:
3764 3764 # if formatter couldn't format, just let it go untransformed
3765 3765 pass
3766 3766 return cmd
3767 3767
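# Illustrative sketch (with ip = get_ipython()): $name / {name} references are
# expanded from the caller's locals plus the user namespace.
#
#     path = "/tmp"
#     ip.var_expand("ls $path")    # -> "ls /tmp"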
3768 3768 def mktempfile(self, data=None, prefix='ipython_edit_'):
3769 3769 """Make a new tempfile and return its filename.
3770 3770
3771 3771 This makes a call to tempfile.mkstemp (created in a tempfile.mkdtemp),
3772 3772 but it registers the created filename internally so ipython cleans it up
3773 3773 at exit time.
3774 3774
3775 3775 Optional inputs:
3776 3776
3777 3777 - data(None): if data is given, it gets written out to the temp file
3778 3778 immediately, and the file is closed again."""
3779 3779
3780 3780 dir_path = Path(tempfile.mkdtemp(prefix=prefix))
3781 3781 self.tempdirs.append(dir_path)
3782 3782
3783 3783 handle, filename = tempfile.mkstemp(".py", prefix, dir=str(dir_path))
3784 3784 os.close(handle) # On Windows, there can only be one open handle on a file
3785 3785
3786 3786 file_path = Path(filename)
3787 3787 self.tempfiles.append(file_path)
3788 3788
3789 3789 if data:
3790 3790 file_path.write_text(data, encoding="utf-8")
3791 3791 return filename
3792 3792
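# Illustrative sketch (with ip = get_ipython()): the returned file lives in an
# IPython-managed temp directory and is cleaned up by atexit_operations().
#
#     fname = ip.mktempfile(data="print('scratch')\n")
#     ip.safe_execfile(fname, ip.user_ns)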
3793 3793 def ask_yes_no(self, prompt, default=None, interrupt=None):
3794 3794 if self.quiet:
3795 3795 return True
3796 3796 return ask_yes_no(prompt,default,interrupt)
3797 3797
3798 3798 def show_usage(self):
3799 3799 """Show a usage message"""
3800 3800 page.page(IPython.core.usage.interactive_usage)
3801 3801
3802 3802 def extract_input_lines(self, range_str, raw=False):
3803 3803 """Return as a string a set of input history slices.
3804 3804
3805 3805 Parameters
3806 3806 ----------
3807 3807 range_str : str
3808 3808 The set of slices is given as a string, like "~5/6-~4/2 4:8 9",
3809 3809 since this function is for use by magic functions which get their
3810 3810 arguments as strings. The number before the / is the session
3811 3811 number: ~n goes n back from the current session.
3812 3812
3813 3813 If empty string is given, returns history of current session
3814 3814 without the last input.
3815 3815
3816 3816 raw : bool, optional
3817 3817 By default, the processed input is used. If this is true, the raw
3818 3818 input history is used instead.
3819 3819
3820 3820 Notes
3821 3821 -----
3822 3822 Slices can be described with two notations:
3823 3823
3824 3824 * ``N:M`` -> standard python form, means including items N...(M-1).
3825 3825 * ``N-M`` -> include items N..M (closed endpoint).
3826 3826 """
3827 3827 lines = self.history_manager.get_range_by_str(range_str, raw=raw)
3828 3828 text = "\n".join(x for _, _, x in lines)
3829 3829
3830 3830 # Skip the last line, as it's probably the magic that called this
3831 3831 if not range_str:
3832 3832 if "\n" not in text:
3833 3833 text = ""
3834 3834 else:
3835 3835 text = text[: text.rfind("\n")]
3836 3836
3837 3837 return text
3838 3838
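# Illustrative sketch of the slice syntax accepted above (ip = get_ipython()):
#
#     ip.extract_input_lines("1-3")     # inputs 1 to 3 of the current session
#     ip.extract_input_lines("~1/7")    # input 7 of the previous session
#     ip.extract_input_lines("")        # current session, minus the last input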
3839 3839 def find_user_code(self, target, raw=True, py_only=False, skip_encoding_cookie=True, search_ns=False):
3840 3840 """Get a code string from history, file, url, or a string or macro.
3841 3841
3842 3842 This is mainly used by magic functions.
3843 3843
3844 3844 Parameters
3845 3845 ----------
3846 3846 target : str
3847 3847 A string specifying code to retrieve. This will be tried respectively
3848 3848 as: ranges of input history (see %history for syntax), url,
3849 3849 corresponding .py file, filename, or an expression evaluating to a
3850 3850 string or Macro in the user namespace.
3851 3851
3852 3852 If empty string is given, returns complete history of current
3853 3853 session, without the last line.
3854 3854
3855 3855 raw : bool
3856 3856 If true (default), retrieve raw history. Has no effect on the other
3857 3857 retrieval mechanisms.
3858 3858
3859 3859 py_only : bool (default False)
3860 3860 Only try to fetch python code, do not try alternative methods to decode file
3861 3861 if unicode fails.
3862 3862
3863 3863 Returns
3864 3864 -------
3865 3865 A string of code.
3866 3866 ValueError is raised if nothing is found, and TypeError if it evaluates
3867 3867 to an object of another type. In each case, .args[0] is a printable
3868 3868 message.
3869 3869 """
3870 3870 code = self.extract_input_lines(target, raw=raw) # Grab history
3871 3871 if code:
3872 3872 return code
3873 3873 try:
3874 3874 if target.startswith(('http://', 'https://')):
3875 3875 return openpy.read_py_url(target, skip_encoding_cookie=skip_encoding_cookie)
3876 3876 except UnicodeDecodeError as e:
3877 3877 if not py_only :
3878 3878 # Deferred import
3879 3879 from urllib.request import urlopen
3880 3880 response = urlopen(target)
3881 3881 return response.read().decode('latin1')
3882 3882 raise ValueError(("'%s' seems to be unreadable.") % target) from e
3883 3883
3884 3884 potential_target = [target]
3885 3885 try :
3886 3886 potential_target.insert(0,get_py_filename(target))
3887 3887 except IOError:
3888 3888 pass
3889 3889
3890 3890 for tgt in potential_target :
3891 3891 if os.path.isfile(tgt): # Read file
3892 3892 try :
3893 3893 return openpy.read_py_file(tgt, skip_encoding_cookie=skip_encoding_cookie)
3894 3894 except UnicodeDecodeError as e:
3895 3895 if not py_only :
3896 3896 with io_open(tgt,'r', encoding='latin1') as f :
3897 3897 return f.read()
3898 3898 raise ValueError(("'%s' seems to be unreadable.") % target) from e
3899 3899 elif os.path.isdir(os.path.expanduser(tgt)):
3900 3900 raise ValueError("'%s' is a directory, not a regular file." % target)
3901 3901
3902 3902 if search_ns:
3903 3903 # Inspect namespace to load object source
3904 3904 object_info = self.object_inspect(target, detail_level=1)
3905 3905 if object_info['found'] and object_info['source']:
3906 3906 return object_info['source']
3907 3907
3908 3908 try: # User namespace
3909 3909 codeobj = eval(target, self.user_ns)
3910 3910 except Exception as e:
3911 3911 raise ValueError(("'%s' was not found in history, as a file, url, "
3912 3912 "nor in the user namespace.") % target) from e
3913 3913
3914 3914 if isinstance(codeobj, str):
3915 3915 return codeobj
3916 3916 elif isinstance(codeobj, Macro):
3917 3917 return codeobj.value
3918 3918
3919 3919 raise TypeError("%s is neither a string nor a macro." % target,
3920 3920 codeobj)
3921 3921
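# Illustrative sketch of the lookup order described above (file and macro
# names are hypothetical; ip = get_ipython()):
#
#     ip.find_user_code("1-5")          # history range
#     ip.find_user_code("script.py")    # file on disk
#     ip.find_user_code("my_macro")     # string or Macro object in user_ns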
3922 3922 def _atexit_once(self):
3923 3923 """
3924 3924 At-exit operations that need to be called at most once.
3925 3925 A second call to this function on the same instance will do nothing.
3926 3926 """
3927 3927
3928 3928 if not getattr(self, "_atexit_once_called", False):
3929 3929 self._atexit_once_called = True
3930 3930 # Clear all user namespaces to release all references cleanly.
3931 3931 self.reset(new_session=False)
3932 3932 # Close the history session (this stores the end time and line count)
3933 3933 # this must be *before* the tempfile cleanup, in case of temporary
3934 3934 # history db
3935 3935 self.history_manager.end_session()
3936 3936 self.history_manager = None
3937 3937
3938 3938 #-------------------------------------------------------------------------
3939 3939 # Things related to IPython exiting
3940 3940 #-------------------------------------------------------------------------
3941 3941 def atexit_operations(self):
3942 3942 """This will be executed at the time of exit.
3943 3943
3944 3944 Cleanup operations and saving of persistent data that is done
3945 3945 unconditionally by IPython should be performed here.
3946 3946
3947 3947 For things that may depend on startup flags or platform specifics (such
3948 3948 as having readline or not), register a separate atexit function in the
3949 3949 code that has the appropriate information, rather than trying to
3950 3950         clutter this method.
3951 3951 """
3952 3952 self._atexit_once()
3953 3953
3954 3954 # Cleanup all tempfiles and folders left around
3955 3955         for tfile in list(self.tempfiles):  # iterate over a copy: the list is mutated below
3956 3956 try:
3957 3957 tfile.unlink()
3958 3958 self.tempfiles.remove(tfile)
3959 3959 except FileNotFoundError:
3960 3960 pass
3961 3961 del self.tempfiles
3962 3962 for tdir in self.tempdirs:
3963 3963 try:
3964 3964 shutil.rmtree(tdir)
3965 3965 self.tempdirs.remove(tdir)
3966 3966 except FileNotFoundError:
3967 3967 pass
3968 3968 del self.tempdirs
3969 3969
3970 3970 # Restore user's cursor
3971 3971 if hasattr(self, "editing_mode") and self.editing_mode == "vi":
3972 3972 sys.stdout.write("\x1b[0 q")
3973 3973 sys.stdout.flush()
3974 3974
3975 3975 def cleanup(self):
3976 3976 self.restore_sys_module_state()
3977 3977
3978 3978
3979 3979 # Overridden in terminal subclass to change prompts
3980 3980 def switch_doctest_mode(self, mode):
3981 3981 pass
3982 3982
3983 3983
3984 3984 class InteractiveShellABC(metaclass=abc.ABCMeta):
3985 3985 """An abstract base class for InteractiveShell."""
3986 3986
3987 3987 InteractiveShellABC.register(InteractiveShell)
@@ -1,330 +1,330
1 1 """
2 2 This module contains utility functions and classes to inject simple ast
3 3 transformations based on code strings into IPython. While this is already possible
4 4 with ast-transformers, it is not easy to directly manipulate the ast.
5 5
6 6
7 7 IPython has pre-code and post-code hooks, but they are run from within the IPython
8 machinery so may be inappropriate, for example for performance mesurement.
8 machinery so may be inappropriate, for example for performance measurement.
9 9
10 10 This module gives you tools to simplify this, and exposes 2 classes:
11 11
12 12 - `ReplaceCodeTransformer` which is a simple ast transformer based on code
13 13 template,
14 14
15 15 and for advanced cases:
16 16
17 17 - `Mangler` which is a simple ast transformer that mangles names in the ast.
18 18
19 19
20 20 Example: let's try to make a simple version of the ``timeit`` magic that runs a
21 21 code snippet 10 times and prints the average time taken.
22 22
23 23 Basically we want to run :
24 24
25 25 .. code-block:: python
26 26
27 27 from time import perf_counter
28 28 now = perf_counter()
29 29 for i in range(10):
30 30 __code__ # our code
31 31 print(f"Time taken: {(perf_counter() - now)/10}")
32 32 __ret__ # the result of the last statement
33 33
34 34 Where ``__code__`` is the code snippet we want to run, and ``__ret__`` is the
35 35 result, so that if we for example run `dataframe.head()` IPython still displays
36 36 the head of the dataframe instead of nothing.
37 37
38 38 Here is a complete example of a file `timit2.py` that defines such a magic:
39 39
40 40 .. code-block:: python
41 41
42 42 from IPython.core.magic import (
43 43 Magics,
44 44 magics_class,
45 45 line_cell_magic,
46 46 )
47 47 from IPython.core.magics.ast_mod import ReplaceCodeTransformer
48 48 from textwrap import dedent
49 49 import ast
50 50
51 51     template = dedent('''
52 52 from time import perf_counter
53 53 now = perf_counter()
54 54 for i in range(10):
55 55 __code__
56 56 print(f"Time taken: {(perf_counter() - now)/10}")
57 57 __ret__
58 58 '''
59 59 )
60 60
61 61
62 62 @magics_class
63 63 class AstM(Magics):
64 64 @line_cell_magic
65 65 def t2(self, line, cell):
66 66 transformer = ReplaceCodeTransformer.from_string(template)
67 67 transformer.debug = True
68 68 transformer.mangler.debug = True
69 69 new_code = transformer.visit(ast.parse(cell))
70 70 return exec(compile(new_code, "<ast>", "exec"))
71 71
72 72
73 73 def load_ipython_extension(ip):
74 74 ip.register_magics(AstM)
75 75
76 76
77 77
78 78 .. code-block:: python
79 79
80 80 In [1]: %load_ext timit2
81 81
82 82 In [2]: %%t2
83 83 ...: import time
84 84 ...: time.sleep(0.05)
85 85 ...:
86 86 ...:
87 87 Time taken: 0.05435649999999441
88 88
89 89
90 90 If you wish to run all the code entered in IPython through an ast transformer, you can
91 91 do so as well:
92 92
93 93 .. code-block:: python
94 94
95 95 In [1]: from IPython.core.magics.ast_mod import ReplaceCodeTransformer
96 96 ...:
97 97 ...: template = '''
98 98 ...: from time import perf_counter
99 99 ...: now = perf_counter()
100 100 ...: __code__
101 101 ...: print(f"Code ran in {perf_counter()-now}")
102 102 ...: __ret__'''
103 103 ...:
104 104 ...: get_ipython().ast_transformers.append(ReplaceCodeTransformer.from_string(template))
105 105
106 106 In [2]: 1+1
107 107 Code ran in 3.40410006174352e-05
108 108 Out[2]: 2
109 109
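The transformer stays registered for every subsequent cell. To pause it without
removing it from ``get_ipython().ast_transformers``, you can flip its ``enabled``
flag (a minimal sketch, fetching the transformer appended above from the end of
the list):

.. code-block:: python

    In [3]: transformer = get_ipython().ast_transformers[-1]
       ...: transformer.enabled = False

    In [4]: 1+1
    Out[4]: 2
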
110 110
111 111
112 112 Hygiene and Mangling
113 113 --------------------
114 114
115 115 The ast transformer above is not hygienic: it may not work if the user code uses
116 116 the same variable names as the ones used in the template.
117 117
118 118 To help with this, by default the `ReplaceCodeTransformer` will mangle all names
119 119 starting with 3 underscores. This is a simple heuristic that should work in most
120 120 cases, but can be cumbersome in some cases. We provide a `Mangler` class that can
121 121 be overridden to change the mangling heuristic, or you can simply use the `mangle_all`
122 122 utility function. It will _try_ to mangle all names (except `__ret__` and
123 123 `__code__`), but this includes builtins (``print``, ``range``, ``type``) and
124 124 replaces those with invalid identifiers by prepending ``mangle-``:
125 125 ``mangle-print``, ``mangle-range``, ``mangle-type`` etc. This is not a problem
126 126 as the Python AST currently accepts invalid identifiers, but this may not be the case
127 127 in the future.
128 128
129 129 You can set `ReplaceCodeTransformer.debug=True` and
130 130 `ReplaceCodeTransformer.mangler.debug=True` to see the code after mangling and
131 131 transforming:
132 132
133 133 .. code-block:: python
134 134
135 135
136 136 In [1]: from IPython.core.magics.ast_mod import ReplaceCodeTransformer, mangle_all
137 137 ...:
138 138 ...: template = '''
139 139 ...: from builtins import type, print
140 140 ...: from time import perf_counter
141 141 ...: now = perf_counter()
142 142 ...: __code__
143 143 ...: print(f"Code ran in {perf_counter()-now}")
144 144 ...: __ret__'''
145 145 ...:
146 146 ...: transformer = ReplaceCodeTransformer.from_string(template, mangling_predicate=mangle_all)
147 147
148 148
149 149 In [2]: transformer.debug = True
150 150 ...: transformer.mangler.debug = True
151 151 ...: get_ipython().ast_transformers.append(transformer)
152 152
153 153 In [3]: 1+1
154 154 Mangling Alias mangle-type
155 155 Mangling Alias mangle-print
156 156 Mangling Alias mangle-perf_counter
157 157 Mangling now
158 158 Mangling perf_counter
159 159 Not mangling __code__
160 160 Mangling print
161 161 Mangling perf_counter
162 162 Mangling now
163 163 Not mangling __ret__
164 164 ---- Transformed code ----
165 165 from builtins import type as mangle-type, print as mangle-print
166 166 from time import perf_counter as mangle-perf_counter
167 167 mangle-now = mangle-perf_counter()
168 168 ret-tmp = 1 + 1
169 169 mangle-print(f'Code ran in {mangle-perf_counter() - mangle-now}')
170 170 ret-tmp
171 171 ---- ---------------- ----
172 172 Code ran in 0.00013654199938173406
173 173 Out[3]: 2
174 174
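A custom predicate can be passed in the same way; for instance, to mangle only
names starting with ``_tmp_`` (a minimal sketch, the prefix is arbitrary):

.. code-block:: python

    In [4]: predicate = lambda name: name.startswith("_tmp_")
       ...: transformer = ReplaceCodeTransformer.from_string(
       ...:     template, mangling_predicate=predicate
       ...: )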
175 175
176 176 """
177 177
178 178 __skip_doctest__ = True
179 179
180 180
181 181 from ast import (
182 182 NodeTransformer,
183 183 Store,
184 184 Load,
185 185 Name,
186 186 Expr,
187 187 Assign,
188 188 Module,
189 189 Import,
190 190 ImportFrom,
191 191 )
192 192 import ast
193 193 import copy
194 194
195 195 from typing import Dict, Optional, Union
196 196
197 197
198 198 mangle_all = lambda name: False if name in ("__ret__", "__code__") else True
199 199
200 200
201 201 class Mangler(NodeTransformer):
202 202 """
203 203     Mangle given names in an ast tree to make sure they do not conflict with
204 204 user code.
205 205 """
206 206
207 207 enabled: bool = True
208 208 debug: bool = False
209 209
210 210 def log(self, *args, **kwargs):
211 211 if self.debug:
212 212 print(*args, **kwargs)
213 213
214 214 def __init__(self, predicate=None):
215 215 if predicate is None:
216 216 predicate = lambda name: name.startswith("___")
217 217 self.predicate = predicate
218 218
219 219 def visit_Name(self, node):
220 220 if self.predicate(node.id):
221 221 self.log("Mangling", node.id)
222 222 # Once in the ast we do not need
223 223 # names to be valid identifiers.
224 224 node.id = "mangle-" + node.id
225 225 else:
226 226 self.log("Not mangling", node.id)
227 227 return node
228 228
229 229 def visit_FunctionDef(self, node):
230 230 if self.predicate(node.name):
231 231 self.log("Mangling", node.name)
232 232 node.name = "mangle-" + node.name
233 233 else:
234 234 self.log("Not mangling", node.name)
235 235
236 236 for arg in node.args.args:
237 237 if self.predicate(arg.arg):
238 238 self.log("Mangling function arg", arg.arg)
239 239 arg.arg = "mangle-" + arg.arg
240 240 else:
241 241 self.log("Not mangling function arg", arg.arg)
242 242 return self.generic_visit(node)
243 243
244 244 def visit_ImportFrom(self, node: ImportFrom):
245 245 return self._visit_Import_and_ImportFrom(node)
246 246
247 247 def visit_Import(self, node: Import):
248 248 return self._visit_Import_and_ImportFrom(node)
249 249
250 250 def _visit_Import_and_ImportFrom(self, node: Union[Import, ImportFrom]):
251 251 for alias in node.names:
252 252 asname = alias.name if alias.asname is None else alias.asname
253 253 if self.predicate(asname):
254 254 new_name: str = "mangle-" + asname
255 255 self.log("Mangling Alias", new_name)
256 256 alias.asname = new_name
257 257 else:
258 258 self.log("Not mangling Alias", alias.asname)
259 259 return node
260 260
261 261
262 262 class ReplaceCodeTransformer(NodeTransformer):
263 263 enabled: bool = True
264 264 debug: bool = False
265 265 mangler: Mangler
266 266
267 267 def __init__(
268 268 self, template: Module, mapping: Optional[Dict] = None, mangling_predicate=None
269 269 ):
270 270 assert isinstance(mapping, (dict, type(None)))
271 271 assert isinstance(mangling_predicate, (type(None), type(lambda: None)))
272 272 assert isinstance(template, ast.Module)
273 273 self.template = template
274 274 self.mangler = Mangler(predicate=mangling_predicate)
275 275 if mapping is None:
276 276 mapping = {}
277 277 self.mapping = mapping
278 278
279 279 @classmethod
280 280 def from_string(
281 281 cls, template: str, mapping: Optional[Dict] = None, mangling_predicate=None
282 282 ):
283 283 return cls(
284 284 ast.parse(template), mapping=mapping, mangling_predicate=mangling_predicate
285 285 )
286 286
287 287 def visit_Module(self, code):
288 288 if not self.enabled:
289 289 return code
290 290 # if not isinstance(code, ast.Module):
291 291 # recursively called...
292 292 # return generic_visit(self, code)
293 293 last = code.body[-1]
294 294 if isinstance(last, Expr):
295 295 code.body.pop()
296 296 code.body.append(Assign([Name("ret-tmp", ctx=Store())], value=last.value))
297 297 ast.fix_missing_locations(code)
298 298 ret = Expr(value=Name("ret-tmp", ctx=Load()))
299 299 ret = ast.fix_missing_locations(ret)
300 300 self.mapping["__ret__"] = ret
301 301 else:
302 302 self.mapping["__ret__"] = ast.parse("None").body[0]
303 303 self.mapping["__code__"] = code.body
304 304 tpl = ast.fix_missing_locations(self.template)
305 305
306 306 tx = copy.deepcopy(tpl)
307 307 tx = self.mangler.visit(tx)
308 308 node = self.generic_visit(tx)
309 309 node_2 = ast.fix_missing_locations(node)
310 310 if self.debug:
311 311 print("---- Transformed code ----")
312 312 print(ast.unparse(node_2))
313 313 print("---- ---------------- ----")
314 314 return node_2
315 315
316 316 # this does not work as the name might be in a list and one might want to extend the list.
317 317 # def visit_Name(self, name):
318 318 # if name.id in self.mapping and name.id == "__ret__":
319 319 # print(name, "in mapping")
320 320 # if isinstance(name.ctx, ast.Store):
321 321 # return Name("tmp", ctx=Store())
322 322 # else:
323 323 # return copy.deepcopy(self.mapping[name.id])
324 324 # return name
325 325
326 326 def visit_Expr(self, expr):
327 327 if isinstance(expr.value, Name) and expr.value.id in self.mapping:
328 328 if self.mapping[expr.value.id] is not None:
329 329 return copy.deepcopy(self.mapping[expr.value.id])
330 330 return self.generic_visit(expr)
@@ -1,1283 +1,1283
1 1 """Tools for inspecting Python objects.
2 2
3 3 Uses syntax highlighting for presenting the various information elements.
4 4
5 5 Similar in spirit to the inspect module, but all calls take a name argument to
6 6 reference the name under which an object is being read.
7 7 """
8 8
9 9 # Copyright (c) IPython Development Team.
10 10 # Distributed under the terms of the Modified BSD License.
11 11
12 12 __all__ = ['Inspector','InspectColors']
13 13
14 14 # stdlib modules
15 15 from dataclasses import dataclass
16 16 from inspect import signature
17 17 from textwrap import dedent
18 18 import ast
19 19 import html
20 20 import inspect
21 21 import io as stdlib_io
22 22 import linecache
23 23 import os
24 24 import types
25 25 import warnings
26 26
27 27
28 28 from typing import (
29 29 cast,
30 30 Any,
31 31 Optional,
32 32 Dict,
33 33 Union,
34 34 List,
35 35 TypedDict,
36 36 TypeAlias,
37 37 Tuple,
38 38 )
39 39
40 40 import traitlets
41 41
42 42 # IPython's own
43 43 from IPython.core import page
44 44 from IPython.lib.pretty import pretty
45 45 from IPython.testing.skipdoctest import skip_doctest
46 46 from IPython.utils import PyColorize, openpy
47 47 from IPython.utils.dir2 import safe_hasattr
48 48 from IPython.utils.path import compress_user
49 49 from IPython.utils.text import indent
50 50 from IPython.utils.wildcard import list_namespace, typestr2type
51 51 from IPython.utils.coloransi import TermColors
52 52 from IPython.utils.colorable import Colorable
53 53 from IPython.utils.decorators import undoc
54 54
55 55 from pygments import highlight
56 56 from pygments.lexers import PythonLexer
57 57 from pygments.formatters import HtmlFormatter
58 58
59 59 HOOK_NAME = "__custom_documentations__"
60 60
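# Objects can opt in to per-attribute documentation by exposing a
# ``__custom_documentations__`` mapping (the name stored in ``HOOK_NAME``) of
# attribute name to docstring; ``Inspector.info`` consults it on the attribute's
# parent before the regular docstring lookup. A minimal sketch (class and
# attribute names are illustrative):
#
#     class Host:
#         __custom_documentations__ = {"x": "x is documented here"}
#         x = 1
#
#     host = Host()
#     # "host.x?" then reports "x is documented here" as the docstring.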
61 61
62 62 UnformattedBundle: TypeAlias = Dict[str, List[Tuple[str, str]]] # List of (title, body)
63 63 Bundle: TypeAlias = Dict[str, str]
64 64
65 65
66 66 @dataclass
67 67 class OInfo:
68 68 ismagic: bool
69 69 isalias: bool
70 70 found: bool
71 71 namespace: Optional[str]
72 72 parent: Any
73 73 obj: Any
74 74
75 75 def get(self, field):
76 76         """Get a field from the object for backward compatibility with versions before 8.12
77 77
78 78 see https://github.com/h5py/h5py/issues/2253
79 79 """
80 80 # We need to deprecate this at some point, but the warning will show in completion.
81 81 # Let's comment this for now and uncomment end of 2023 ish
82 82 # warnings.warn(
83 83 # f"OInfo dataclass with fields access since IPython 8.12 please use OInfo.{field} instead."
84 84 # "OInfo used to be a dict but a dataclass provide static fields verification with mypy."
85 85 # "This warning and backward compatibility `get()` method were added in 8.13.",
86 86 # DeprecationWarning,
87 87 # stacklevel=2,
88 88 # )
89 89 return getattr(self, field)
90 90
91 91
92 92 def pylight(code):
93 93 return highlight(code, PythonLexer(), HtmlFormatter(noclasses=True))
94 94
95 95 # builtin docstrings to ignore
96 96 _func_call_docstring = types.FunctionType.__call__.__doc__
97 97 _object_init_docstring = object.__init__.__doc__
98 98 _builtin_type_docstrings = {
99 99 inspect.getdoc(t) for t in (types.ModuleType, types.MethodType,
100 100 types.FunctionType, property)
101 101 }
102 102
103 103 _builtin_func_type = type(all)
104 104 _builtin_meth_type = type(str.upper) # Bound methods have the same type as builtin functions
105 105 #****************************************************************************
106 106 # Builtin color schemes
107 107
108 108 Colors = TermColors # just a shorthand
109 109
110 110 InspectColors = PyColorize.ANSICodeColors
111 111
112 112 #****************************************************************************
113 113 # Auxiliary functions and objects
114 114
115 115
116 116 class InfoDict(TypedDict):
117 117 type_name: Optional[str]
118 118 base_class: Optional[str]
119 119 string_form: Optional[str]
120 120 namespace: Optional[str]
121 121 length: Optional[str]
122 122 file: Optional[str]
123 123 definition: Optional[str]
124 124 docstring: Optional[str]
125 125 source: Optional[str]
126 126 init_definition: Optional[str]
127 127 class_docstring: Optional[str]
128 128 init_docstring: Optional[str]
129 129 call_def: Optional[str]
130 130 call_docstring: Optional[str]
131 131 subclasses: Optional[str]
132 132 # These won't be printed but will be used to determine how to
133 133 # format the object
134 134 ismagic: bool
135 135 isalias: bool
136 136 isclass: bool
137 137 found: bool
138 138 name: str
139 139
140 140
141 141 _info_fields = list(InfoDict.__annotations__.keys())
142 142
143 143
144 144 def __getattr__(name):
145 145 if name == "info_fields":
146 146 warnings.warn(
147 147             "IPython.core.oinspect's `info_fields` is considered for deprecation and may be removed in the future.",
148 148 DeprecationWarning,
149 149 stacklevel=2,
150 150 )
151 151 return _info_fields
152 152
153 153 raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
154 154
155 155
156 156 @dataclass
157 157 class InspectorHookData:
158 158 """Data passed to the mime hook"""
159 159
160 160 obj: Any
161 161 info: Optional[OInfo]
162 162 info_dict: InfoDict
163 163 detail_level: int
164 164 omit_sections: list[str]
165 165
166 166
167 167 @undoc
168 168 def object_info(
169 169 *,
170 170 name: str,
171 171 found: bool,
172 172 isclass: bool = False,
173 173 isalias: bool = False,
174 174 ismagic: bool = False,
175 175 **kw,
176 176 ) -> InfoDict:
177 177 """Make an object info dict with all fields present."""
178 178 infodict = kw
179 179     infodict.update({k: None for k in _info_fields if k not in infodict})
180 180 infodict["name"] = name # type: ignore
181 181 infodict["found"] = found # type: ignore
182 182 infodict["isclass"] = isclass # type: ignore
183 183 infodict["isalias"] = isalias # type: ignore
184 184 infodict["ismagic"] = ismagic # type: ignore
185 185
186 186 return InfoDict(**infodict) # type:ignore
187 187
188 188
189 189 def get_encoding(obj):
190 190 """Get encoding for python source file defining obj
191 191
192 192 Returns None if obj is not defined in a sourcefile.
193 193 """
194 194 ofile = find_file(obj)
195 195     # Only detect the encoding of text files that are actually on the
196 196     # filesystem; binary extension modules have no source encoding to
197 197     # detect.
198 198 if ofile is None:
199 199 return None
200 200 elif ofile.endswith(('.so', '.dll', '.pyd')):
201 201 return None
202 202 elif not os.path.isfile(ofile):
203 203 return None
204 204 else:
205 205 # Print only text files, not extension binaries. Note that
206 206 # getsourcelines returns lineno with 1-offset and page() uses
207 207 # 0-offset, so we must adjust.
208 208 with stdlib_io.open(ofile, 'rb') as buffer: # Tweaked to use io.open for Python 2
209 209 encoding, _lines = openpy.detect_encoding(buffer.readline)
210 210 return encoding
211 211
212 212
213 213 def getdoc(obj) -> Union[str, None]:
214 214 """Stable wrapper around inspect.getdoc.
215 215
216 216 This can't crash because of attribute problems.
217 217
218 218 It also attempts to call a getdoc() method on the given object. This
219 219 allows objects which provide their docstrings via non-standard mechanisms
220 220 (like Pyro proxies) to still be inspected by ipython's ? system.
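
    For instance (a minimal sketch), a proxy object can expose a dynamically
    computed docstring::

        class Proxy:
            def getdoc(self):
                return "documentation fetched from the remote object"

        getdoc(Proxy())   # -> "documentation fetched from the remote object"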
221 221 """
222 222 # Allow objects to offer customized documentation via a getdoc method:
223 223 try:
224 224 ds = obj.getdoc()
225 225 except Exception:
226 226 pass
227 227 else:
228 228 if isinstance(ds, str):
229 229 return inspect.cleandoc(ds)
230 230 docstr = inspect.getdoc(obj)
231 231 return docstr
232 232
233 233
234 234 def getsource(obj, oname='') -> Union[str,None]:
235 235 """Wrapper around inspect.getsource.
236 236
237 237 This can be modified by other projects to provide customized source
238 238 extraction.
239 239
240 240 Parameters
241 241 ----------
242 242 obj : object
243 243 an object whose source code we will attempt to extract
244 244 oname : str
245 245 (optional) a name under which the object is known
246 246
247 247 Returns
248 248 -------
249 249 src : unicode or None
250 250
251 251 """
252 252
253 253 if isinstance(obj, property):
254 254 sources = []
255 255 for attrname in ['fget', 'fset', 'fdel']:
256 256 fn = getattr(obj, attrname)
257 257 if fn is not None:
258 258 encoding = get_encoding(fn)
259 259 oname_prefix = ('%s.' % oname) if oname else ''
260 260 sources.append(''.join(('# ', oname_prefix, attrname)))
261 261 if inspect.isfunction(fn):
262 262 _src = getsource(fn)
263 263 if _src:
264 264 # assert _src is not None, "please mypy"
265 265 sources.append(dedent(_src))
266 266 else:
267 267 # Default str/repr only prints function name,
268 268 # pretty.pretty prints module name too.
269 269 sources.append(
270 270 '%s%s = %s\n' % (oname_prefix, attrname, pretty(fn))
271 271 )
272 272 if sources:
273 273 return '\n'.join(sources)
274 274 else:
275 275 return None
276 276
277 277 else:
278 278 # Get source for non-property objects.
279 279
280 280 obj = _get_wrapped(obj)
281 281
282 282 try:
283 283 src = inspect.getsource(obj)
284 284 except TypeError:
285 285 # The object itself provided no meaningful source, try looking for
286 286 # its class definition instead.
287 287 try:
288 288 src = inspect.getsource(obj.__class__)
289 289 except (OSError, TypeError):
290 290 return None
291 291 except OSError:
292 292 return None
293 293
294 294 return src
295 295
296 296
297 297 def is_simple_callable(obj):
298 298     """True if obj is a function, method, or builtin function/method."""
299 299 return (inspect.isfunction(obj) or inspect.ismethod(obj) or \
300 300 isinstance(obj, _builtin_func_type) or isinstance(obj, _builtin_meth_type))
301 301
302 302 @undoc
303 303 def getargspec(obj):
304 304 """Wrapper around :func:`inspect.getfullargspec`
305 305
306 306 In addition to functions and methods, this can also handle objects with a
307 307 ``__call__`` attribute.
308 308
309 309 DEPRECATED: Deprecated since 7.10. Do not use, will be removed.
310 310 """
311 311
312 312 warnings.warn('`getargspec` function is deprecated as of IPython 7.10'
313 313 'and will be removed in future versions.', DeprecationWarning, stacklevel=2)
314 314
315 315 if safe_hasattr(obj, '__call__') and not is_simple_callable(obj):
316 316 obj = obj.__call__
317 317
318 318 return inspect.getfullargspec(obj)
319 319
320 320 @undoc
321 321 def format_argspec(argspec):
322 322     """Format an argspec; convenience wrapper around inspect's formatargspec.
323 323
324 324 This takes a dict instead of ordered arguments and calls
325 325 inspect.format_argspec with the arguments in the necessary order.
326 326
327 327 DEPRECATED (since 7.10): Do not use; will be removed in future versions.
328 328 """
329 329
330 330 warnings.warn('`format_argspec` function is deprecated as of IPython 7.10'
331 331 'and will be removed in future versions.', DeprecationWarning, stacklevel=2)
332 332
333 333
334 334 return inspect.formatargspec(argspec['args'], argspec['varargs'],
335 335 argspec['varkw'], argspec['defaults'])
336 336
337 337 @undoc
338 338 def call_tip(oinfo, format_call=True):
339 339 """DEPRECATED since 6.0. Extract call tip data from an oinfo dict."""
340 340 warnings.warn(
341 341 "`call_tip` function is deprecated as of IPython 6.0"
342 342 "and will be removed in future versions.",
343 343 DeprecationWarning,
344 344 stacklevel=2,
345 345 )
346 346 # Get call definition
347 347 argspec = oinfo.get('argspec')
348 348 if argspec is None:
349 349 call_line = None
350 350 else:
351 351 # Callable objects will have 'self' as their first argument, prune
352 352 # it out if it's there for clarity (since users do *not* pass an
353 353 # extra first argument explicitly).
354 354 try:
355 355 has_self = argspec['args'][0] == 'self'
356 356 except (KeyError, IndexError):
357 357 pass
358 358 else:
359 359 if has_self:
360 360 argspec['args'] = argspec['args'][1:]
361 361
362 362 call_line = oinfo['name']+format_argspec(argspec)
363 363
364 364 # Now get docstring.
365 365 # The priority is: call docstring, constructor docstring, main one.
366 366 doc = oinfo.get('call_docstring')
367 367 if doc is None:
368 368 doc = oinfo.get('init_docstring')
369 369 if doc is None:
370 370 doc = oinfo.get('docstring','')
371 371
372 372 return call_line, doc
373 373
374 374
375 375 def _get_wrapped(obj):
376 376 """Get the original object if wrapped in one or more @decorators
377 377
378 378 Some objects automatically construct similar objects on any unrecognised
379 379 attribute access (e.g. unittest.mock.call). To protect against infinite loops,
380 380 this will arbitrarily cut off after 100 levels of obj.__wrapped__
381 381 attribute access. --TK, Jan 2016
382 382 """
383 383 orig_obj = obj
384 384 i = 0
385 385 while safe_hasattr(obj, '__wrapped__'):
386 386 obj = obj.__wrapped__
387 387 i += 1
388 388 if i > 100:
389 389 # __wrapped__ is probably a lie, so return the thing we started with
390 390 return orig_obj
391 391 return obj
392 392
393 393 def find_file(obj) -> Optional[str]:
394 394 """Find the absolute path to the file where an object was defined.
395 395
396 396 This is essentially a robust wrapper around `inspect.getabsfile`.
397 397
398 398 Returns None if no file can be found.
399 399
400 400 Parameters
401 401 ----------
402 402 obj : any Python object
403 403
404 404 Returns
405 405 -------
406 406 fname : str
407 407 The absolute path to the file where the object was defined.
408 408 """
409 409 obj = _get_wrapped(obj)
410 410
411 411 fname: Optional[str] = None
412 412 try:
413 413 fname = inspect.getabsfile(obj)
414 414 except TypeError:
415 415 # For an instance, the file that matters is where its class was
416 416 # declared.
417 417 try:
418 418 fname = inspect.getabsfile(obj.__class__)
419 419 except (OSError, TypeError):
420 420 # Can happen for builtins
421 421 pass
422 422 except OSError:
423 423 pass
424 424
425 425 return fname
426 426
427 427
428 428 def find_source_lines(obj):
429 429 """Find the line number in a file where an object was defined.
430 430
431 431 This is essentially a robust wrapper around `inspect.getsourcelines`.
432 432
433 433 Returns None if no file can be found.
434 434
435 435 Parameters
436 436 ----------
437 437 obj : any Python object
438 438
439 439 Returns
440 440 -------
441 441 lineno : int
442 442 The line number where the object definition starts.
443 443 """
444 444 obj = _get_wrapped(obj)
445 445
446 446 try:
447 447 lineno = inspect.getsourcelines(obj)[1]
448 448 except TypeError:
449 449 # For instances, try the class object like getsource() does
450 450 try:
451 451 lineno = inspect.getsourcelines(obj.__class__)[1]
452 452 except (OSError, TypeError):
453 453 return None
454 454 except OSError:
455 455 return None
456 456
457 457 return lineno
458 458
459 459 class Inspector(Colorable):
460 460
461 461 mime_hooks = traitlets.Dict(
462 462 config=True,
463 help="dictionary of mime to callable to add informations into help mimebundle dict",
463 help="dictionary of mime to callable to add information into help mimebundle dict",
464 464 ).tag(config=True)
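    # A hedged sketch of registering a hook at runtime (the mime type and the
    # returned payload are illustrative, and it assumes the running shell
    # exposes its inspector as ``get_ipython().inspector``): each hook receives
    # an ``InspectorHookData`` instance and returns either ``None`` or the
    # value to store under its mime key.
    #
    #     def name_hook(data):
    #         return {"name": data.info_dict["name"]}
    #
    #     get_ipython().inspector.mime_hooks["application/json"] = name_hook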
465 465
466 466 def __init__(
467 467 self,
468 468 color_table=InspectColors,
469 469 code_color_table=PyColorize.ANSICodeColors,
470 470 scheme=None,
471 471 str_detail_level=0,
472 472 parent=None,
473 473 config=None,
474 474 ):
475 475 super(Inspector, self).__init__(parent=parent, config=config)
476 476 self.color_table = color_table
477 477 self.parser = PyColorize.Parser(out='str', parent=self, style=scheme)
478 478 self.format = self.parser.format
479 479 self.str_detail_level = str_detail_level
480 480 self.set_active_scheme(scheme)
481 481
482 482 def _getdef(self,obj,oname='') -> Union[str,None]:
483 483 """Return the call signature for any callable object.
484 484
485 485 If any exception is generated, None is returned instead and the
486 486 exception is suppressed."""
487 487 if not callable(obj):
488 488 return None
489 489 try:
490 490 return _render_signature(signature(obj), oname)
491 491 except:
492 492 return None
493 493
494 494 def __head(self,h) -> str:
495 495 """Return a header string with proper colors."""
496 496 return '%s%s%s' % (self.color_table.active_colors.header,h,
497 497 self.color_table.active_colors.normal)
498 498
499 499 def set_active_scheme(self, scheme):
500 500 if scheme is not None:
501 501 self.color_table.set_active_scheme(scheme)
502 502 self.parser.color_table.set_active_scheme(scheme)
503 503
504 504 def noinfo(self, msg, oname):
505 505 """Generic message when no information is found."""
506 506 print('No %s found' % msg, end=' ')
507 507 if oname:
508 508 print('for %s' % oname)
509 509 else:
510 510 print()
511 511
512 512 def pdef(self, obj, oname=''):
513 513 """Print the call signature for any callable object.
514 514
515 515 If the object is a class, print the constructor information."""
516 516
517 517 if not callable(obj):
518 518 print('Object is not callable.')
519 519 return
520 520
521 521 header = ''
522 522
523 523 if inspect.isclass(obj):
524 524 header = self.__head('Class constructor information:\n')
525 525
526 526
527 527 output = self._getdef(obj,oname)
528 528 if output is None:
529 529 self.noinfo('definition header',oname)
530 530 else:
531 531 print(header,self.format(output), end=' ')
532 532
533 533 # In Python 3, all classes are new-style, so they all have __init__.
534 534 @skip_doctest
535 535 def pdoc(self, obj, oname='', formatter=None):
536 536 """Print the docstring for any object.
537 537
538 538 Optional:
539 539 -formatter: a function to run the docstring through for specially
540 540 formatted docstrings.
541 541
542 542 Examples
543 543 --------
544 544 In [1]: class NoInit:
545 545 ...: pass
546 546
547 547 In [2]: class NoDoc:
548 548 ...: def __init__(self):
549 549 ...: pass
550 550
551 551 In [3]: %pdoc NoDoc
552 552 No documentation found for NoDoc
553 553
554 554 In [4]: %pdoc NoInit
555 555 No documentation found for NoInit
556 556
557 557 In [5]: obj = NoInit()
558 558
559 559 In [6]: %pdoc obj
560 560 No documentation found for obj
561 561
562 562 In [5]: obj2 = NoDoc()
563 563
564 564 In [6]: %pdoc obj2
565 565 No documentation found for obj2
566 566 """
567 567
568 568 head = self.__head # For convenience
569 569 lines = []
570 570 ds = getdoc(obj)
571 571 if formatter:
572 572 ds = formatter(ds).get('plain/text', ds)
573 573 if ds:
574 574 lines.append(head("Class docstring:"))
575 575 lines.append(indent(ds))
576 576 if inspect.isclass(obj) and hasattr(obj, '__init__'):
577 577 init_ds = getdoc(obj.__init__)
578 578 if init_ds is not None:
579 579 lines.append(head("Init docstring:"))
580 580 lines.append(indent(init_ds))
581 581 elif hasattr(obj,'__call__'):
582 582 call_ds = getdoc(obj.__call__)
583 583 if call_ds:
584 584 lines.append(head("Call docstring:"))
585 585 lines.append(indent(call_ds))
586 586
587 587 if not lines:
588 588 self.noinfo('documentation',oname)
589 589 else:
590 590 page.page('\n'.join(lines))
591 591
592 592 def psource(self, obj, oname=''):
593 593 """Print the source code for an object."""
594 594
595 595 # Flush the source cache because inspect can return out-of-date source
596 596 linecache.checkcache()
597 597 try:
598 598 src = getsource(obj, oname=oname)
599 599 except Exception:
600 600 src = None
601 601
602 602 if src is None:
603 603 self.noinfo('source', oname)
604 604 else:
605 605 page.page(self.format(src))
606 606
607 607 def pfile(self, obj, oname=''):
608 608 """Show the whole file where an object was defined."""
609 609
610 610 lineno = find_source_lines(obj)
611 611 if lineno is None:
612 612 self.noinfo('file', oname)
613 613 return
614 614
615 615 ofile = find_file(obj)
616 616 # run contents of file through pager starting at line where the object
617 617 # is defined, as long as the file isn't binary and is actually on the
618 618 # filesystem.
619 619 if ofile is None:
620 620 print("Could not find file for object")
621 621 elif ofile.endswith((".so", ".dll", ".pyd")):
622 622 print("File %r is binary, not printing." % ofile)
623 623 elif not os.path.isfile(ofile):
624 624 print('File %r does not exist, not printing.' % ofile)
625 625 else:
626 626 # Print only text files, not extension binaries. Note that
627 627 # getsourcelines returns lineno with 1-offset and page() uses
628 628 # 0-offset, so we must adjust.
629 629 page.page(self.format(openpy.read_py_file(ofile, skip_encoding_cookie=False)), lineno - 1)
630 630
631 631
632 632 def _mime_format(self, text:str, formatter=None) -> dict:
633 633 """Return a mime bundle representation of the input text.
634 634
635 635 - if `formatter` is None, the returned mime bundle has
636 636           a ``text/plain`` field with the input text, and
637 637           a ``text/html`` field with a ``<pre>`` tag containing the input text.
638 638
639 639 - if ``formatter`` is not None, it must be a callable transforming the
640 640 input text into a mime bundle. Default values for ``text/plain`` and
641 641 ``text/html`` representations are the ones described above.
642 642
643 643 Note:
644 644
645 645 Formatters returning strings are supported but this behavior is deprecated.
646 646
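        For example (a minimal sketch), a formatter returning a partial bundle::

            def formatter(text):
                return {"text/html": f"<b>{text}</b>"}

        keeps the default ``text/plain`` entry and overrides ``text/html``.
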
647 647 """
648 648 defaults = {
649 649 "text/plain": text,
650 650 "text/html": f"<pre>{html.escape(text)}</pre>",
651 651 }
652 652
653 653 if formatter is None:
654 654 return defaults
655 655 else:
656 656 formatted = formatter(text)
657 657
658 658 if not isinstance(formatted, dict):
659 659 # Handle the deprecated behavior of a formatter returning
660 660 # a string instead of a mime bundle.
661 661 return {"text/plain": formatted, "text/html": f"<pre>{formatted}</pre>"}
662 662
663 663 else:
664 664 return dict(defaults, **formatted)
665 665
666 666 def format_mime(self, bundle: UnformattedBundle) -> Bundle:
667 667 """Format a mimebundle being created by _make_info_unformatted into a real mimebundle"""
668 668 # Format text/plain mimetype
669 669 assert isinstance(bundle["text/plain"], list)
670 670 for item in bundle["text/plain"]:
671 671 assert isinstance(item, tuple)
672 672
673 673 new_b: Bundle = {}
674 674 lines = []
675 675 _len = max(len(h) for h, _ in bundle["text/plain"])
676 676
677 677 for head, body in bundle["text/plain"]:
678 678 body = body.strip("\n")
679 679 delim = "\n" if "\n" in body else " "
680 680 lines.append(
681 681 f"{self.__head(head+':')}{(_len - len(head))*' '}{delim}{body}"
682 682 )
683 683
684 684 new_b["text/plain"] = "\n".join(lines)
685 685
686 686 if "text/html" in bundle:
687 687 assert isinstance(bundle["text/html"], list)
688 688 for item in bundle["text/html"]:
689 689 assert isinstance(item, tuple)
690 690 # Format the text/html mimetype
691 691 if isinstance(bundle["text/html"], (list, tuple)):
692 692 # bundle['text/html'] is a list of (head, formatted body) pairs
693 693 new_b["text/html"] = "\n".join(
694 694 (f"<h1>{head}</h1>\n{body}" for (head, body) in bundle["text/html"])
695 695 )
696 696
697 697 for k in bundle.keys():
698 698 if k in ("text/html", "text/plain"):
699 699 continue
700 700 else:
701 701 new_b[k] = bundle[k] # type:ignore
702 702 return new_b
703 703
704 704 def _append_info_field(
705 705 self,
706 706 bundle: UnformattedBundle,
707 707 title: str,
708 708 key: str,
709 709 info,
710 710 omit_sections: List[str],
711 711 formatter,
712 712 ):
713 713 """Append an info value to the unformatted mimebundle being constructed by _make_info_unformatted"""
714 714 if title in omit_sections or key in omit_sections:
715 715 return
716 716 field = info[key]
717 717 if field is not None:
718 718 formatted_field = self._mime_format(field, formatter)
719 719 bundle["text/plain"].append((title, formatted_field["text/plain"]))
720 720 bundle["text/html"].append((title, formatted_field["text/html"]))
721 721
722 722 def _make_info_unformatted(
723 723 self, obj, info, formatter, detail_level, omit_sections
724 724 ) -> UnformattedBundle:
725 725 """Assemble the mimebundle as unformatted lists of information"""
726 726 bundle: UnformattedBundle = {
727 727 "text/plain": [],
728 728 "text/html": [],
729 729 }
730 730
731 731 # A convenience function to simplify calls below
732 732 def append_field(
733 733 bundle: UnformattedBundle, title: str, key: str, formatter=None
734 734 ):
735 735 self._append_info_field(
736 736 bundle,
737 737 title=title,
738 738 key=key,
739 739 info=info,
740 740 omit_sections=omit_sections,
741 741 formatter=formatter,
742 742 )
743 743
744 744 def code_formatter(text) -> Bundle:
745 745 return {
746 746 'text/plain': self.format(text),
747 747 'text/html': pylight(text)
748 748 }
749 749
750 750 if info["isalias"]:
751 751 append_field(bundle, "Repr", "string_form")
752 752
753 753 elif info['ismagic']:
754 754 if detail_level > 0:
755 755 append_field(bundle, "Source", "source", code_formatter)
756 756 else:
757 757 append_field(bundle, "Docstring", "docstring", formatter)
758 758 append_field(bundle, "File", "file")
759 759
760 760 elif info['isclass'] or is_simple_callable(obj):
761 761 # Functions, methods, classes
762 762 append_field(bundle, "Signature", "definition", code_formatter)
763 763 append_field(bundle, "Init signature", "init_definition", code_formatter)
764 764 append_field(bundle, "Docstring", "docstring", formatter)
765 765 if detail_level > 0 and info["source"]:
766 766 append_field(bundle, "Source", "source", code_formatter)
767 767 else:
768 768 append_field(bundle, "Init docstring", "init_docstring", formatter)
769 769
770 770 append_field(bundle, "File", "file")
771 771 append_field(bundle, "Type", "type_name")
772 772 append_field(bundle, "Subclasses", "subclasses")
773 773
774 774 else:
775 775 # General Python objects
776 776 append_field(bundle, "Signature", "definition", code_formatter)
777 777 append_field(bundle, "Call signature", "call_def", code_formatter)
778 778 append_field(bundle, "Type", "type_name")
779 779 append_field(bundle, "String form", "string_form")
780 780
781 781 # Namespace
782 782 if info["namespace"] != "Interactive":
783 783 append_field(bundle, "Namespace", "namespace")
784 784
785 785 append_field(bundle, "Length", "length")
786 786 append_field(bundle, "File", "file")
787 787
788 788 # Source or docstring, depending on detail level and whether
789 789 # source found.
790 790 if detail_level > 0 and info["source"]:
791 791 append_field(bundle, "Source", "source", code_formatter)
792 792 else:
793 793 append_field(bundle, "Docstring", "docstring", formatter)
794 794
795 795 append_field(bundle, "Class docstring", "class_docstring", formatter)
796 796 append_field(bundle, "Init docstring", "init_docstring", formatter)
797 797 append_field(bundle, "Call docstring", "call_docstring", formatter)
798 798 return bundle
799 799
800 800
801 801 def _get_info(
802 802 self,
803 803 obj: Any,
804 804 oname: str = "",
805 805 formatter=None,
806 806 info: Optional[OInfo] = None,
807 807 detail_level: int = 0,
808 808 omit_sections: Union[List[str], Tuple[()]] = (),
809 809 ) -> Bundle:
810 810 """Retrieve an info dict and format it.
811 811
812 812 Parameters
813 813 ----------
814 814 obj : any
815 815 Object to inspect and return info from
816 816         oname : str (default: '')
817 817 Name of the variable pointing to `obj`.
818 818 formatter : callable
819 819 info
820 820 already computed information
821 821 detail_level : integer
822 822 Granularity of detail level, if set to 1, give more information.
823 823 omit_sections : list[str]
824 824 Titles or keys to omit from output (can be set, tuple, etc., anything supporting `in`)
825 825 """
826 826
827 827 info_dict = self.info(obj, oname=oname, info=info, detail_level=detail_level)
828 828 omit_sections = list(omit_sections)
829 829
830 830 bundle = self._make_info_unformatted(
831 831 obj,
832 832 info_dict,
833 833 formatter,
834 834 detail_level=detail_level,
835 835 omit_sections=omit_sections,
836 836 )
837 837 if self.mime_hooks:
838 838 hook_data = InspectorHookData(
839 839 obj=obj,
840 840 info=info,
841 841 info_dict=info_dict,
842 842 detail_level=detail_level,
843 843 omit_sections=omit_sections,
844 844 )
845 845 for key, hook in self.mime_hooks.items(): # type:ignore
846 846 required_parameters = [
847 847 parameter
848 848 for parameter in inspect.signature(hook).parameters.values()
849 849                     if parameter.default is inspect.Parameter.empty
850 850 ]
851 851 if len(required_parameters) == 1:
852 852 res = hook(hook_data)
853 853 else:
854 854 warnings.warn(
855 855 "MIME hook format changed in IPython 8.22; hooks should now accept"
856 856 " a single parameter (InspectorHookData); support for hooks requiring"
857 857 " two-parameters (obj and info) will be removed in a future version",
858 858 DeprecationWarning,
859 859 stacklevel=2,
860 860 )
861 861 res = hook(obj, info)
862 862 if res is not None:
863 863 bundle[key] = res
864 864 return self.format_mime(bundle)
865 865
866 866 def pinfo(
867 867 self,
868 868 obj,
869 869 oname="",
870 870 formatter=None,
871 871 info: Optional[OInfo] = None,
872 872 detail_level=0,
873 873 enable_html_pager=True,
874 874 omit_sections=(),
875 875 ):
876 876 """Show detailed information about an object.
877 877
878 878 Optional arguments:
879 879
880 880 - oname: name of the variable pointing to the object.
881 881
882 882 - formatter: callable (optional)
883 883 A special formatter for docstrings.
884 884
885 885 The formatter is a callable that takes a string as an input
886 886 and returns either a formatted string or a mime type bundle
887 887 in the form of a dictionary.
888 888
889 889               Note that support for a custom formatter returning a string
890 890               instead of a mime type bundle is deprecated.
891 891
892 892 - info: a structure with some information fields which may have been
893 893 precomputed already.
894 894
895 895 - detail_level: if set to 1, more information is given.
896 896
897 897 - omit_sections: set of section keys and titles to omit
898 898 """
899 899 assert info is not None
900 900 info_b: Bundle = self._get_info(
901 901 obj, oname, formatter, info, detail_level, omit_sections=omit_sections
902 902 )
903 903 if not enable_html_pager:
904 904 del info_b["text/html"]
905 905 page.page(info_b)
906 906
907 907 def _info(self, obj, oname="", info=None, detail_level=0):
908 908 """
909 909 Inspector.info() was likely improperly marked as deprecated
910 910 while only a parameter was deprecated. We "un-deprecate" it.
911 911 """
912 912
913 913 warnings.warn(
914 914 "The `Inspector.info()` method has been un-deprecated as of 8.0 "
915 915 "and the `formatter=` keyword removed. `Inspector._info` is now "
916 916 "an alias, and you can just call `.info()` directly.",
917 917 DeprecationWarning,
918 918 stacklevel=2,
919 919 )
920 920 return self.info(obj, oname=oname, info=info, detail_level=detail_level)
921 921
922 922 def info(self, obj, oname="", info=None, detail_level=0) -> InfoDict:
923 923 """Compute a dict with detailed information about an object.
924 924
925 925 Parameters
926 926 ----------
927 927 obj : any
928 928 An object to find information about
929 929 oname : str (default: '')
930 930 Name of the variable pointing to `obj`.
931 931 info : (default: None)
932 932 A struct (dict like with attr access) with some information fields
933 933 which may have been precomputed already.
934 934 detail_level : int (default:0)
935 935 If set to 1, more information is given.
936 936
937 937 Returns
938 938 -------
939 939 An object info dict with known fields from `info_fields` (see `InfoDict`).
940 940 """
941 941
942 942 if info is None:
943 943 ismagic = False
944 944 isalias = False
945 945 ospace = ''
946 946 else:
947 947 ismagic = info.ismagic
948 948 isalias = info.isalias
949 949 ospace = info.namespace
950 950
951 951 # Get docstring, special-casing aliases:
952 952 att_name = oname.split(".")[-1]
953 953 parents_docs = None
954 954 prelude = ""
955 955 if info and info.parent is not None and hasattr(info.parent, HOOK_NAME):
956 956 parents_docs_dict = getattr(info.parent, HOOK_NAME)
957 957 parents_docs = parents_docs_dict.get(att_name, None)
958 958 out: InfoDict = cast(
959 959 InfoDict,
960 960 {
961 961 **{field: None for field in _info_fields},
962 962 **{
963 963 "name": oname,
964 964 "found": True,
965 965 "isalias": isalias,
966 966 "ismagic": ismagic,
967 967 "subclasses": None,
968 968 },
969 969 },
970 970 )
971 971
972 972 if parents_docs:
973 973 ds = parents_docs
974 974 elif isalias:
975 975 if not callable(obj):
976 976 try:
977 977 ds = "Alias to the system command:\n %s" % obj[1]
978 978 except:
979 979 ds = "Alias: " + str(obj)
980 980 else:
981 981 ds = "Alias to " + str(obj)
982 982 if obj.__doc__:
983 983 ds += "\nDocstring:\n" + obj.__doc__
984 984 else:
985 985 ds_or_None = getdoc(obj)
986 986 if ds_or_None is None:
987 987 ds = '<no docstring>'
988 988 else:
989 989 ds = ds_or_None
990 990
991 991 ds = prelude + ds
992 992
993 993 # store output in a dict, we initialize it here and fill it as we go
994 994
995 995 string_max = 200 # max size of strings to show (snipped if longer)
996 996 shalf = int((string_max - 5) / 2)
997 997
998 998 if ismagic:
999 999 out['type_name'] = 'Magic function'
1000 1000 elif isalias:
1001 1001 out['type_name'] = 'System alias'
1002 1002 else:
1003 1003 out['type_name'] = type(obj).__name__
1004 1004
1005 1005 try:
1006 1006 bclass = obj.__class__
1007 1007 out['base_class'] = str(bclass)
1008 1008 except:
1009 1009 pass
1010 1010
1011 1011 # String form, but snip if too long in ? form (full in ??)
1012 1012 if detail_level >= self.str_detail_level:
1013 1013 try:
1014 1014 ostr = str(obj)
1015 1015 if not detail_level and len(ostr) > string_max:
1016 1016 ostr = ostr[:shalf] + ' <...> ' + ostr[-shalf:]
1017 1017 # TODO: `'string_form'.expandtabs()` seems wrong, but
1018 1018 # it was (nearly) like this since the first commit ever.
1019 1019 ostr = ("\n" + " " * len("string_form".expandtabs())).join(
1020 1020 q.strip() for q in ostr.split("\n")
1021 1021 )
1022 1022 out["string_form"] = ostr
1023 1023 except:
1024 1024 pass
1025 1025
1026 1026 if ospace:
1027 1027 out['namespace'] = ospace
1028 1028
1029 1029 # Length (for strings and lists)
1030 1030 try:
1031 1031 out['length'] = str(len(obj))
1032 1032 except Exception:
1033 1033 pass
1034 1034
1035 1035 # Filename where object was defined
1036 1036 binary_file = False
1037 1037 fname = find_file(obj)
1038 1038 if fname is None:
1039 1039 # if anything goes wrong, we don't want to show source, so it's as
1040 1040 # if the file was binary
1041 1041 binary_file = True
1042 1042 else:
1043 1043 if fname.endswith(('.so', '.dll', '.pyd')):
1044 1044 binary_file = True
1045 1045 elif fname.endswith('<string>'):
1046 1046 fname = 'Dynamically generated function. No source code available.'
1047 1047 out['file'] = compress_user(fname)
1048 1048
1049 1049 # Original source code for a callable, class or property.
1050 1050 if detail_level:
1051 1051 # Flush the source cache because inspect can return out-of-date
1052 1052 # source
1053 1053 linecache.checkcache()
1054 1054 try:
1055 1055 if isinstance(obj, property) or not binary_file:
1056 1056 src = getsource(obj, oname)
1057 1057 if src is not None:
1058 1058 src = src.rstrip()
1059 1059 out['source'] = src
1060 1060
1061 1061 except Exception:
1062 1062 pass
1063 1063
1064 1064 # Add docstring only if no source is to be shown (avoid repetitions).
1065 1065 if ds and not self._source_contains_docstring(out.get('source'), ds):
1066 1066 out['docstring'] = ds
1067 1067
1068 1068 # Constructor docstring for classes
1069 1069 if inspect.isclass(obj):
1070 1070 out['isclass'] = True
1071 1071
1072 1072 # get the init signature:
1073 1073 try:
1074 1074 init_def = self._getdef(obj, oname)
1075 1075 except AttributeError:
1076 1076 init_def = None
1077 1077
1078 1078 # get the __init__ docstring
1079 1079 try:
1080 1080 obj_init = obj.__init__
1081 1081 except AttributeError:
1082 1082 init_ds = None
1083 1083 else:
1084 1084 if init_def is None:
1085 1085 # Get signature from init if top-level sig failed.
1086 1086 # Can happen for built-in types (list, etc.).
1087 1087 try:
1088 1088 init_def = self._getdef(obj_init, oname)
1089 1089 except AttributeError:
1090 1090 pass
1091 1091 init_ds = getdoc(obj_init)
1092 1092 # Skip Python's auto-generated docstrings
1093 1093 if init_ds == _object_init_docstring:
1094 1094 init_ds = None
1095 1095
1096 1096 if init_def:
1097 1097 out['init_definition'] = init_def
1098 1098
1099 1099 if init_ds:
1100 1100 out['init_docstring'] = init_ds
1101 1101
1102 1102 names = [sub.__name__ for sub in type.__subclasses__(obj)]
1103 1103 if len(names) < 10:
1104 1104 all_names = ', '.join(names)
1105 1105 else:
1106 1106 all_names = ', '.join(names[:10]+['...'])
1107 1107 out['subclasses'] = all_names
1108 1108 # and class docstring for instances:
1109 1109 else:
1110 1110 # reconstruct the function definition and print it:
1111 1111 defln = self._getdef(obj, oname)
1112 1112 if defln:
1113 1113 out['definition'] = defln
1114 1114
1115 1115 # First, check whether the instance docstring is identical to the
1116 1116 # class one, and print it separately if they don't coincide. In
1117 1117 # most cases they will, but it's nice to print all the info for
1118 1118 # objects which use instance-customized docstrings.
1119 1119 if ds:
1120 1120 try:
1121 1121 cls = getattr(obj,'__class__')
1122 1122 except:
1123 1123 class_ds = None
1124 1124 else:
1125 1125 class_ds = getdoc(cls)
1126 1126 # Skip Python's auto-generated docstrings
1127 1127 if class_ds in _builtin_type_docstrings:
1128 1128 class_ds = None
1129 1129 if class_ds and ds != class_ds:
1130 1130 out['class_docstring'] = class_ds
1131 1131
1132 1132 # Next, try to show constructor docstrings
1133 1133 try:
1134 1134 init_ds = getdoc(obj.__init__)
1135 1135 # Skip Python's auto-generated docstrings
1136 1136 if init_ds == _object_init_docstring:
1137 1137 init_ds = None
1138 1138 except AttributeError:
1139 1139 init_ds = None
1140 1140 if init_ds:
1141 1141 out['init_docstring'] = init_ds
1142 1142
1143 1143 # Call form docstring for callable instances
1144 1144 if safe_hasattr(obj, '__call__') and not is_simple_callable(obj):
1145 1145 call_def = self._getdef(obj.__call__, oname)
1146 1146 if call_def and (call_def != out.get('definition')):
1147 1147 # it may never be the case that call def and definition differ,
1148 1148 # but don't include the same signature twice
1149 1149 out['call_def'] = call_def
1150 1150 call_ds = getdoc(obj.__call__)
1151 1151 # Skip Python's auto-generated docstrings
1152 1152 if call_ds == _func_call_docstring:
1153 1153 call_ds = None
1154 1154 if call_ds:
1155 1155 out['call_docstring'] = call_ds
1156 1156
1157 1157 return out
1158 1158
1159 1159 @staticmethod
1160 1160 def _source_contains_docstring(src, doc):
1161 1161 """
1162 1162 Check whether the source *src* contains the docstring *doc*.
1163 1163
1164 1164         This is a helper function to skip displaying the docstring if the
1165 1165 source already contains it, avoiding repetition of information.
1166 1166 """
1167 1167 try:
1168 1168 (def_node,) = ast.parse(dedent(src)).body
1169 1169 return ast.get_docstring(def_node) == doc # type: ignore[arg-type]
1170 1170 except Exception:
1171 1171 # The source can become invalid or even non-existent (because it
1172 1172             # is re-fetched from the source file) so the above code can fail in
1173 1173 # arbitrary ways.
1174 1174 return False
1175 1175
1176 1176 def psearch(self,pattern,ns_table,ns_search=[],
1177 1177 ignore_case=False,show_all=False, *, list_types=False):
1178 1178 """Search namespaces with wildcards for objects.
1179 1179
1180 1180 Arguments:
1181 1181
1182 1182 - pattern: string containing shell-like wildcards to use in namespace
1183 1183 searches and optionally a type specification to narrow the search to
1184 1184 objects of that type.
1185 1185
1186 1186 - ns_table: dict of name->namespaces for search.
1187 1187
1188 1188 Optional arguments:
1189 1189
1190 1190 - ns_search: list of namespace names to include in search.
1191 1191
1192 1192 - ignore_case(False): make the search case-insensitive.
1193 1193
1194 1194 - show_all(False): show all names, including those starting with
1195 1195 underscores.
1196 1196
1197 1197 - list_types(False): list all available object types for object matching.
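
        Example patterns (the optional type specification follows the filter,
        separated by whitespace)::

            a*              all visible names starting with 'a'
            a* function     only functions whose name starts with 'a'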
1198 1198 """
1199 1199 # print('ps pattern:<%r>' % pattern) # dbg
1200 1200
1201 1201 # defaults
1202 1202 type_pattern = 'all'
1203 1203 filter = ''
1204 1204
1205 1205 # list all object types
1206 1206 if list_types:
1207 1207 page.page('\n'.join(sorted(typestr2type)))
1208 1208 return
1209 1209
1210 1210 cmds = pattern.split()
1211 1211 len_cmds = len(cmds)
1212 1212 if len_cmds == 1:
1213 1213 # Only filter pattern given
1214 1214 filter = cmds[0]
1215 1215 elif len_cmds == 2:
1216 1216 # Both filter and type specified
1217 1217 filter,type_pattern = cmds
1218 1218 else:
1219 1219 raise ValueError('invalid argument string for psearch: <%s>' %
1220 1220 pattern)
1221 1221
1222 1222 # filter search namespaces
1223 1223 for name in ns_search:
1224 1224 if name not in ns_table:
1225 1225 raise ValueError('invalid namespace <%s>. Valid names: %s' %
1226 1226 (name,ns_table.keys()))
1227 1227
1228 1228 # print('type_pattern:',type_pattern) # dbg
1229 1229 search_result, namespaces_seen = set(), set()
1230 1230 for ns_name in ns_search:
1231 1231 ns = ns_table[ns_name]
1232 1232 # Normally, locals and globals are the same, so we just check one.
1233 1233 if id(ns) in namespaces_seen:
1234 1234 continue
1235 1235 namespaces_seen.add(id(ns))
1236 1236 tmp_res = list_namespace(ns, type_pattern, filter,
1237 1237 ignore_case=ignore_case, show_all=show_all)
1238 1238 search_result.update(tmp_res)
1239 1239
1240 1240 page.page('\n'.join(sorted(search_result)))
1241 1241
1242 1242
1243 1243 def _render_signature(obj_signature, obj_name) -> str:
1244 1244 """
1245 1245 This was mostly taken from inspect.Signature.__str__.
1246 1246 Look there for the comments.
1247 1247 The only change is to add linebreaks when this gets too long.
1248 1248 """
1249 1249 result = []
1250 1250 pos_only = False
1251 1251 kw_only = True
1252 1252 for param in obj_signature.parameters.values():
1253 1253 if param.kind == inspect.Parameter.POSITIONAL_ONLY:
1254 1254 pos_only = True
1255 1255 elif pos_only:
1256 1256 result.append('/')
1257 1257 pos_only = False
1258 1258
1259 1259 if param.kind == inspect.Parameter.VAR_POSITIONAL:
1260 1260 kw_only = False
1261 1261 elif param.kind == inspect.Parameter.KEYWORD_ONLY and kw_only:
1262 1262 result.append('*')
1263 1263 kw_only = False
1264 1264
1265 1265 result.append(str(param))
1266 1266
1267 1267 if pos_only:
1268 1268 result.append('/')
1269 1269
1270 1270 # add up name, parameters, braces (2), and commas
1271 1271 if len(obj_name) + sum(len(r) + 2 for r in result) > 75:
1272 1272 # This doesn’t fit behind “Signature: ” in an inspect window.
1273 1273 rendered = '{}(\n{})'.format(obj_name, ''.join(
1274 1274 ' {},\n'.format(r) for r in result)
1275 1275 )
1276 1276 else:
1277 1277 rendered = '{}({})'.format(obj_name, ', '.join(result))
1278 1278
1279 1279 if obj_signature.return_annotation is not inspect._empty:
1280 1280 anno = inspect.formatannotation(obj_signature.return_annotation)
1281 1281 rendered += ' -> {}'.format(anno)
1282 1282
1283 1283 return rendered
@@ -1,542 +1,542
1 1 # -*- coding: utf-8 -*-
2 2 """Pylab (matplotlib) support utilities."""
3 3
4 4 # Copyright (c) IPython Development Team.
5 5 # Distributed under the terms of the Modified BSD License.
6 6
7 7 from io import BytesIO
8 8 from binascii import b2a_base64
9 9 from functools import partial
10 10 import warnings
11 11
12 12 from IPython.core.display import _pngxy
13 13 from IPython.utils.decorators import flag_calls
14 14
15 15
16 16 # Matplotlib backend resolution functionality moved from IPython to Matplotlib
17 17 # in IPython 8.24 and Matplotlib 3.9.0. Need to keep `backends` and `backend2gui`
18 18 # here for earlier Matplotlib and for external backend libraries such as
19 19 # mplcairo that might rely upon it.
20 20 _deprecated_backends = {
21 21 "tk": "TkAgg",
22 22 "gtk": "GTKAgg",
23 23 "gtk3": "GTK3Agg",
24 24 "gtk4": "GTK4Agg",
25 25 "wx": "WXAgg",
26 26 "qt4": "Qt4Agg",
27 27 "qt5": "Qt5Agg",
28 28 "qt6": "QtAgg",
29 29 "qt": "QtAgg",
30 30 "osx": "MacOSX",
31 31 "nbagg": "nbAgg",
32 32 "webagg": "WebAgg",
33 33 "notebook": "nbAgg",
34 34 "agg": "agg",
35 35 "svg": "svg",
36 36 "pdf": "pdf",
37 37 "ps": "ps",
38 38 "inline": "module://matplotlib_inline.backend_inline",
39 39 "ipympl": "module://ipympl.backend_nbagg",
40 40 "widget": "module://ipympl.backend_nbagg",
41 41 }
42 42
43 43 # We also need a reverse backend2gui mapping that will properly choose which
44 44 # GUI support to activate based on the desired matplotlib backend. For the
45 45 # most part it's just a reverse of the above dict, but we also need to add a
46 46 # few others that map to the same GUI manually:
47 47 _deprecated_backend2gui = dict(
48 48 zip(_deprecated_backends.values(), _deprecated_backends.keys())
49 49 )
50 50 # In the reverse mapping, there are a few extra valid matplotlib backends that
51 51 # map to the same GUI support
52 52 _deprecated_backend2gui["GTK"] = _deprecated_backend2gui["GTKCairo"] = "gtk"
53 53 _deprecated_backend2gui["GTK3Cairo"] = "gtk3"
54 54 _deprecated_backend2gui["GTK4Cairo"] = "gtk4"
55 55 _deprecated_backend2gui["WX"] = "wx"
56 56 _deprecated_backend2gui["CocoaAgg"] = "osx"
57 57 # This mapping is not one-to-one: the new QtAgg Matplotlib backend supports
58 58 # either Qt5 or Qt6, while the IPython qt event loop supports Qt4, Qt5,
59 59 # and Qt6.
60 60 _deprecated_backend2gui["QtAgg"] = "qt"
61 61 _deprecated_backend2gui["Qt4Agg"] = "qt4"
62 62 _deprecated_backend2gui["Qt5Agg"] = "qt5"
63 63
64 64 # And some backends that don't need GUI integration
65 65 del _deprecated_backend2gui["nbAgg"]
66 66 del _deprecated_backend2gui["agg"]
67 67 del _deprecated_backend2gui["svg"]
68 68 del _deprecated_backend2gui["pdf"]
69 69 del _deprecated_backend2gui["ps"]
70 70 del _deprecated_backend2gui["module://matplotlib_inline.backend_inline"]
71 71 del _deprecated_backend2gui["module://ipympl.backend_nbagg"]
72 72
73 73
74 74 # Deprecated attributes backends and backend2gui mostly following PEP 562.
75 75 def __getattr__(name):
76 76 if name in ("backends", "backend2gui"):
77 77 warnings.warn(
78 78 f"{name} is deprecated since IPython 8.24, backends are managed "
79 79 "in matplotlib and can be externally registered.",
80 80 DeprecationWarning,
81 81 )
82 82 return globals()[f"_deprecated_{name}"]
83 83 raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
84 84
85 85
86 86 #-----------------------------------------------------------------------------
87 87 # Matplotlib utilities
88 88 #-----------------------------------------------------------------------------
89 89
90 90
91 91 def getfigs(*fig_nums):
92 92 """Get a list of matplotlib figures by figure numbers.
93 93
94 94 If no arguments are given, all available figures are returned. If the
95 95 argument list contains references to invalid figures, a warning is printed
96 96 but the function continues processing the remaining figures.
97 97
98 98 Parameters
99 99 ----------
100 100 fig_nums : tuple
101 101 A tuple of ints giving the figure numbers of the figures to return.
102 102 """
103 103 from matplotlib._pylab_helpers import Gcf
104 104 if not fig_nums:
105 105 fig_managers = Gcf.get_all_fig_managers()
106 106 return [fm.canvas.figure for fm in fig_managers]
107 107 else:
108 108 figs = []
109 109 for num in fig_nums:
110 110 f = Gcf.figs.get(num)
111 111 if f is None:
112 112 print('Warning: figure %s not available.' % num)
113 113 else:
114 114 figs.append(f.canvas.figure)
115 115 return figs
116 116
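For instance, a short sketch of retrieving figures by number (assumes matplotlib is installed; the agg backend keeps it headless):

import matplotlib
matplotlib.use("agg")
import matplotlib.pyplot as plt
from IPython.core.pylabtools import getfigs

plt.figure(1)
plt.figure(2)

all_figs = getfigs()     # every figure currently registered with pyplot
only_two = getfigs(2)    # just figure number 2
missing = getfigs(99)    # prints a warning and returns [] for the unknown number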
117 117
118 118 def figsize(sizex, sizey):
119 119 """Set the default figure size to be [sizex, sizey].
120 120
121 121 This is just an easy-to-remember convenience wrapper that sets::
122 122
123 123 matplotlib.rcParams['figure.figsize'] = [sizex, sizey]
124 124 """
125 125 import matplotlib
126 126 matplotlib.rcParams['figure.figsize'] = [sizex, sizey]
127 127
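As a quick example, the call below is equivalent to assigning the rcParam directly:

from IPython.core.pylabtools import figsize

figsize(10, 6)   # same as matplotlib.rcParams['figure.figsize'] = [10, 6]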
128 128
129 129 def print_figure(fig, fmt="png", bbox_inches="tight", base64=False, **kwargs):
130 130 """Print a figure to an image, and return the resulting file data
131 131
132 132 Returned data will be bytes unless ``fmt='svg'``,
133 133 in which case it will be unicode.
134 134
135 135 Any keyword args are passed to fig.canvas.print_figure,
136 136 such as ``quality`` or ``bbox_inches``.
137 137
138 138 If `base64` is True, return base64-encoded str instead of raw bytes
139 139 for binary-encoded image formats.
140 140
141 141 .. versionadded:: 7.29
142 142 base64 argument
143 143 """
144 144 # When there's an empty figure, we shouldn't return anything, otherwise we
145 145 # get big blank areas in the qt console.
146 146 if not fig.axes and not fig.lines:
147 147 return
148 148
149 149 dpi = fig.dpi
150 150 if fmt == 'retina':
151 151 dpi = dpi * 2
152 152 fmt = 'png'
153 153
154 154 # build keyword args
155 155 kw = {
156 156 "format":fmt,
157 157 "facecolor":fig.get_facecolor(),
158 158 "edgecolor":fig.get_edgecolor(),
159 159 "dpi":dpi,
160 160 "bbox_inches":bbox_inches,
161 161 }
162 162 # **kwargs get higher priority
163 163 kw.update(kwargs)
164 164
165 165 bytes_io = BytesIO()
166 166 if fig.canvas is None:
167 167 from matplotlib.backend_bases import FigureCanvasBase
168 168 FigureCanvasBase(fig)
169 169
170 170 fig.canvas.print_figure(bytes_io, **kw)
171 171 data = bytes_io.getvalue()
172 172 if fmt == 'svg':
173 173 data = data.decode('utf-8')
174 174 elif base64:
175 175 data = b2a_base64(data, newline=False).decode("ascii")
176 176 return data
177 177
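A minimal sketch of rendering a figure to raw bytes, a base64 string, and SVG text (assumes matplotlib is installed):

import matplotlib
matplotlib.use("agg")
import matplotlib.pyplot as plt
from IPython.core.pylabtools import print_figure

fig, ax = plt.subplots()
ax.plot([0, 1], [1, 0])

png_bytes = print_figure(fig, fmt="png")              # raw PNG bytes
png_b64 = print_figure(fig, fmt="png", base64=True)   # base64-encoded str
svg_text = print_figure(fig, fmt="svg")               # unicode str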
178 178 def retina_figure(fig, base64=False, **kwargs):
179 179 """format a figure as a pixel-doubled (retina) PNG
180 180
181 181 If `base64` is True, return base64-encoded str instead of raw bytes
182 182 for binary-encoded image formats.
183 183
184 184 .. versionadded:: 7.29
185 185 base64 argument
186 186 """
187 187 pngdata = print_figure(fig, fmt="retina", base64=False, **kwargs)
188 188 # Make sure that retina_figure acts just like print_figure and returns
189 189 # None when the figure is empty.
190 190 if pngdata is None:
191 191 return
192 192 w, h = _pngxy(pngdata)
193 193 metadata = {"width": w//2, "height":h//2}
194 194 if base64:
195 195 pngdata = b2a_base64(pngdata, newline=False).decode("ascii")
196 196 return pngdata, metadata
197 197
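The retina variant also returns halved pixel dimensions so frontends can display the double-resolution PNG at the intended size; a small sketch:

import matplotlib
matplotlib.use("agg")
import matplotlib.pyplot as plt
from IPython.core.pylabtools import retina_figure

fig, ax = plt.subplots()
ax.plot([0, 1], [1, 0])

pngdata, metadata = retina_figure(fig)
# metadata holds half of the rendered pixel size, e.g. {'width': ..., 'height': ...}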
198 198
199 199 # We need a little factory function here to create the closure where
200 200 # safe_execfile can live.
201 201 def mpl_runner(safe_execfile):
202 202 """Factory to return a matplotlib-enabled runner for %run.
203 203
204 204 Parameters
205 205 ----------
206 206 safe_execfile : function
207 207 This must be a function with the same interface as the
208 208 :meth:`safe_execfile` method of IPython.
209 209
210 210 Returns
211 211 -------
212 212 A function suitable for use as the ``runner`` argument of the %run magic
213 213 function.
214 214 """
215 215
216 216 def mpl_execfile(fname,*where,**kw):
217 217 """matplotlib-aware wrapper around safe_execfile.
218 218
219 219 Its interface is identical to that of the :func:`execfile` builtin.
220 220
221 221 This is ultimately a call to execfile(), but wrapped in safeties to
222 222 properly handle interactive rendering."""
223 223
224 224 import matplotlib
225 225 import matplotlib.pyplot as plt
226 226
227 227 # print('*** Matplotlib runner ***') # dbg
228 228 # turn off rendering until end of script
229 229 with matplotlib.rc_context({"interactive": False}):
230 230 safe_execfile(fname, *where, **kw)
231 231
232 232 if matplotlib.is_interactive():
233 233 plt.show()
234 234
235 235 # make rendering call now, if the user tried to do it
236 236 if plt.draw_if_interactive.called:
237 237 plt.draw()
238 238 plt.draw_if_interactive.called = False
239 239
240 240 # re-draw everything that is stale
241 241 try:
242 242 da = plt.draw_all
243 243 except AttributeError:
244 244 pass
245 245 else:
246 246 da()
247 247
248 248 return mpl_execfile
249 249
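A hedged sketch of wiring the factory to a live shell's safe_execfile; the script name is hypothetical:

from IPython import get_ipython
from IPython.core.pylabtools import mpl_runner

ip = get_ipython()
if ip is not None:
    runner = mpl_runner(ip.safe_execfile)
    # Would run the (hypothetical) script with rendering deferred to the end:
    # runner("my_plot_script.py")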
250 250
251 251 def _reshow_nbagg_figure(fig):
252 252 """reshow an nbagg figure"""
253 253 try:
254 254 reshow = fig.canvas.manager.reshow
255 255 except AttributeError as e:
256 256 raise NotImplementedError() from e
257 257 else:
258 258 reshow()
259 259
260 260
261 261 def select_figure_formats(shell, formats, **kwargs):
262 262 """Select figure formats for the inline backend.
263 263
264 264 Parameters
265 265 ----------
266 266 shell : InteractiveShell
267 267 The main IPython instance.
268 268 formats : str or set
269 269 One or a set of figure formats to enable: 'png', 'retina', 'jpeg', 'svg', 'pdf'.
270 270 **kwargs : any
271 271 Extra keyword arguments to be passed to fig.canvas.print_figure.
272 272 """
273 273 import matplotlib
274 274 from matplotlib.figure import Figure
275 275
276 276 svg_formatter = shell.display_formatter.formatters['image/svg+xml']
277 277 png_formatter = shell.display_formatter.formatters['image/png']
278 278 jpg_formatter = shell.display_formatter.formatters['image/jpeg']
279 279 pdf_formatter = shell.display_formatter.formatters['application/pdf']
280 280
281 281 if isinstance(formats, str):
282 282 formats = {formats}
283 283 # cast in case of list / tuple
284 284 formats = set(formats)
285 285
286 286 [ f.pop(Figure, None) for f in shell.display_formatter.formatters.values() ]
287 287 mplbackend = matplotlib.get_backend().lower()
288 288 if mplbackend in ("nbagg", "ipympl", "widget", "module://ipympl.backend_nbagg"):
289 289 formatter = shell.display_formatter.ipython_display_formatter
290 290 formatter.for_type(Figure, _reshow_nbagg_figure)
291 291
292 292 supported = {'png', 'png2x', 'retina', 'jpg', 'jpeg', 'svg', 'pdf'}
293 293 bad = formats.difference(supported)
294 294 if bad:
295 295 bs = "%s" % ','.join([repr(f) for f in bad])
296 296 gs = "%s" % ','.join([repr(f) for f in supported])
297 297 raise ValueError("supported formats are: %s not %s" % (gs, bs))
298 298
299 299 if "png" in formats:
300 300 png_formatter.for_type(
301 301 Figure, partial(print_figure, fmt="png", base64=True, **kwargs)
302 302 )
303 303 if "retina" in formats or "png2x" in formats:
304 304 png_formatter.for_type(Figure, partial(retina_figure, base64=True, **kwargs))
305 305 if "jpg" in formats or "jpeg" in formats:
306 306 jpg_formatter.for_type(
307 307 Figure, partial(print_figure, fmt="jpg", base64=True, **kwargs)
308 308 )
309 309 if "svg" in formats:
310 310 svg_formatter.for_type(Figure, partial(print_figure, fmt="svg", **kwargs))
311 311 if "pdf" in formats:
312 312 pdf_formatter.for_type(
313 313 Figure, partial(print_figure, fmt="pdf", base64=True, **kwargs)
314 314 )
315 315
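A minimal sketch of enabling two formats on the active shell (run inside IPython, with matplotlib installed):

from IPython import get_ipython
from IPython.core.pylabtools import select_figure_formats

ip = get_ipython()
if ip is not None:
    # Registers PNG and SVG formatters for matplotlib Figure objects;
    # extra keyword arguments are forwarded to fig.canvas.print_figure.
    select_figure_formats(ip, {"png", "svg"}, bbox_inches="tight")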
316 316 #-----------------------------------------------------------------------------
317 317 # Code for initializing matplotlib and importing pylab
318 318 #-----------------------------------------------------------------------------
319 319
320 320
321 321 def find_gui_and_backend(gui=None, gui_select=None):
322 322 """Given a gui string return the gui and mpl backend.
323 323
324 324 Parameters
325 325 ----------
326 326 gui : str
327 327 Can be one of ('tk','gtk','wx','qt','qt4','inline','agg').
328 328 gui_select : str
329 329 Can be one of ('tk','gtk','wx','qt','qt4','inline').
330 330 This is any gui already selected by the shell.
331 331
332 332 Returns
333 333 -------
334 334 A tuple of (gui, backend) where backend is one of ('TkAgg','GTKAgg',
335 335 'WXAgg','Qt4Agg','module://matplotlib_inline.backend_inline','agg').
336 336 """
337 337
338 338 import matplotlib
339 339
340 340 if _matplotlib_manages_backends():
341 341 backend_registry = matplotlib.backends.registry.backend_registry
342 342
343 343 # gui argument may be a gui event loop or may be a backend name.
344 344 if gui in ("auto", None):
345 345 backend = matplotlib.rcParamsOrig["backend"]
346 346 backend, gui = backend_registry.resolve_backend(backend)
347 347 else:
348 348 gui = _convert_gui_to_matplotlib(gui)
349 349 backend, gui = backend_registry.resolve_gui_or_backend(gui)
350 350
351 351 gui = _convert_gui_from_matplotlib(gui)
352 352 return gui, backend
353 353
354 354 # Fallback to previous behaviour (Matplotlib < 3.9)
355 355 mpl_version_info = getattr(matplotlib, "__version_info__", (0, 0))
356 356 has_unified_qt_backend = mpl_version_info >= (3, 5)
357 357
358 358 from IPython.core.pylabtools import backends
359 359
360 360 backends_ = dict(backends)
361 361 if not has_unified_qt_backend:
362 362 backends_["qt"] = "qt5agg"
363 363
364 364 if gui and gui != 'auto':
365 365 # select backend based on requested gui
366 366 backend = backends_[gui]
367 367 if gui == 'agg':
368 368 gui = None
369 369 else:
370 370 # We need to read the backend from the original data structure, *not*
371 371 # from mpl.rcParams, since a prior invocation of %matplotlib may have
372 372 # overwritten that.
373 373 # WARNING: this assumes matplotlib 1.1 or newer!!
374 374 backend = matplotlib.rcParamsOrig['backend']
375 375 # In this case, we need to find what the appropriate gui selection call
376 376 # should be for IPython, so we can activate inputhook accordingly
377 377 from IPython.core.pylabtools import backend2gui
378 378 gui = backend2gui.get(backend, None)
379 379
380 380 # If we have already had a gui active, we need it and inline are the
381 381 # ones allowed.
382 382 if gui_select and gui != gui_select:
383 383 gui = gui_select
384 384 backend = backends_[gui]
385 385
386 386 # Matplotlib versions predating _matplotlib_manages_backends() can return
387 387 # "inline" for no gui event loop rather than the None that IPython >= 8.24.0 expects.
388 388 if gui == "inline":
389 389 gui = None
390 390
391 391 return gui, backend
392 392
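A short sketch of the resolution behaviour; the exact backend strings returned depend on the installed Matplotlib version:

from IPython.core.pylabtools import find_gui_and_backend

gui, backend = find_gui_and_backend("qt")    # e.g. ("qt", "qtagg") with Matplotlib >= 3.9
gui, backend = find_gui_and_backend(None)    # resolved from the default rcParams backend
gui, backend = find_gui_and_backend("agg")   # non-GUI backend, gui comes back as None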
393 393
394 394 def activate_matplotlib(backend):
395 395 """Activate the given backend and set interactive to True."""
396 396
397 397 import matplotlib
398 398 matplotlib.interactive(True)
399 399
400 400 # Matplotlib had a bug where even switch_backend could not force
401 401 # the rcParam to update. This needs to be set *before* the module
402 402 # magic of switch_backend().
403 403 matplotlib.rcParams['backend'] = backend
404 404
405 405 # Due to circular imports, pyplot may be only partially initialised
406 406 # when this function runs.
407 407 # So avoid needing matplotlib attribute-lookup to access pyplot.
408 408 from matplotlib import pyplot as plt
409 409
410 410 plt.switch_backend(backend)
411 411
412 412 plt.show._needmain = False
413 413 # We need to detect at runtime whether show() is called by the user.
414 414 # For this, we wrap it into a decorator which adds a 'called' flag.
415 415 plt.draw_if_interactive = flag_calls(plt.draw_if_interactive)
416 416
417 417
418 418 def import_pylab(user_ns, import_all=True):
419 419 """Populate the namespace with pylab-related values.
420 420
421 421 Imports matplotlib, pylab, numpy, and everything from pylab and numpy.
422 422
423 423 Also imports a few names from IPython (figsize, display, getfigs)
424 424
425 425 """
426 426
427 427 # Importing numpy as np and pyplot as plt are conventions we're trying to
428 428 # standardize on. Making them available to users by default
429 429 # greatly helps with this.
430 430 s = ("import numpy\n"
431 431 "import matplotlib\n"
432 432 "from matplotlib import pylab, mlab, pyplot\n"
433 433 "np = numpy\n"
434 434 "plt = pyplot\n"
435 435 )
436 436 exec(s, user_ns)
437 437
438 438 if import_all:
439 439 s = ("from matplotlib.pylab import *\n"
440 440 "from numpy import *\n")
441 441 exec(s, user_ns)
442 442
443 443 # IPython symbols to add
444 444 user_ns['figsize'] = figsize
445 445 from IPython.display import display
446 446 # Add display and getfigs to the user's namespace
447 447 user_ns['display'] = display
448 448 user_ns['getfigs'] = getfigs
449 449
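A sketch of populating a plain dict namespace (assumes numpy and matplotlib are installed):

from IPython.core.pylabtools import import_pylab

ns = {}
import_pylab(ns, import_all=False)

ns["np"], ns["plt"]            # numpy and pyplot under their conventional names
ns["figsize"], ns["getfigs"]   # IPython helpers added alongside them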
450 450
451 451 def configure_inline_support(shell, backend):
452 452 """
453 453 .. deprecated:: 7.23
454 454
455 455 use `matplotlib_inline.backend_inline.configure_inline_support()`
456 456
457 457 Configure an IPython shell object for matplotlib use.
458 458
459 459 Parameters
460 460 ----------
461 461 shell : InteractiveShell instance
462 462 backend : matplotlib backend
463 463 """
464 464 warnings.warn(
465 465 "`configure_inline_support` is deprecated since IPython 7.23, directly "
466 466 "use `matplotlib_inline.backend_inline.configure_inline_support()`",
467 467 DeprecationWarning,
468 468 stacklevel=2,
469 469 )
470 470
471 471 from matplotlib_inline.backend_inline import (
472 472 configure_inline_support as configure_inline_support_orig,
473 473 )
474 474
475 475 configure_inline_support_orig(shell, backend)
476 476
477 477
478 478 # Determine if Matplotlib manages backends only if needed, and cache result.
479 479 # Do not read this directly, instead use _matplotlib_manages_backends().
480 480 _matplotlib_manages_backends_value: bool | None = None
481 481
482 482
483 483 def _matplotlib_manages_backends() -> bool:
484 484 """Return True if Matplotlib manages backends, False otherwise.
485 485
486 486 If it returns True, the caller can be sure that
487 487 matplotlib.backends.registry.backend_registry is available along with
488 488 member functions resolve_gui_or_backend, resolve_backend, list_all, and
489 489 list_gui_frameworks.
490 490
491 491 This function can be removed as it will always return True when Python
492 492 3.12, the latest version supported by Matplotlib < 3.9, reaches
493 493 end-of-life in late 2028.
494 494 """
495 495 global _matplotlib_manages_backends_value
496 496 if _matplotlib_manages_backends_value is None:
497 497 try:
498 498 from matplotlib.backends.registry import backend_registry
499 499
500 500 _matplotlib_manages_backends_value = hasattr(
501 501 backend_registry, "resolve_gui_or_backend"
502 502 )
503 503 except ImportError:
504 504 _matplotlib_manages_backends_value = False
505 505
506 506 return _matplotlib_manages_backends_value
507 507
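A sketch of branching on this check, mirroring how the helpers above use it:

from IPython.core.pylabtools import _matplotlib_manages_backends

if _matplotlib_manages_backends():
    # Matplotlib >= 3.9: the registry knows every backend and GUI framework.
    from matplotlib.backends.registry import backend_registry
    print(backend_registry.list_all())
else:
    # Older Matplotlib: fall back to the deprecated mapping kept in this module.
    from IPython.core import pylabtools
    print(sorted(pylabtools.backends))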
508 508
509 509 def _list_matplotlib_backends_and_gui_loops() -> list[str]:
510 510 """Return list of all Matplotlib backends and GUI event loops.
511 511
512 512 This is the list returned by
513 513 %matplotlib --list
514 514 """
515 515 if _matplotlib_manages_backends():
516 516 from matplotlib.backends.registry import backend_registry
517 517
518 518 ret = backend_registry.list_all() + [
519 519 _convert_gui_from_matplotlib(gui)
520 520 for gui in backend_registry.list_gui_frameworks()
521 521 ]
522 522 else:
523 523 from IPython.core import pylabtools
524 524
525 525 ret = list(pylabtools.backends.keys())
526 526
527 527 return sorted(["auto"] + ret)
528 528
529 529
530 530 # Matplotlib and IPython do not always use the same gui framework name.
531 # Always use the approprate one of these conversion functions when passing a
531 # Always use the appropriate one of these conversion functions when passing a
532 532 # gui framework name to/from Matplotlib.
533 533 def _convert_gui_to_matplotlib(gui: str | None) -> str | None:
534 534 if gui and gui.lower() == "osx":
535 535 return "macosx"
536 536 return gui
537 537
538 538
539 539 def _convert_gui_from_matplotlib(gui: str | None) -> str | None:
540 540 if gui and gui.lower() == "macosx":
541 541 return "osx"
542 542 return gui