Some minimal typing and removal of allow-none
Matthias Bussonnier
@@ -1,3346 +1,3346 b''
1 1 """Completion for IPython.
2 2
3 3 This module started as fork of the rlcompleter module in the Python standard
4 4 library. The original enhancements made to rlcompleter have been sent
5 5 upstream and were accepted as of Python 2.3,
6 6
7 7 This module now supports a wide variety of completion mechanisms, both for
8 8 normal classic Python code, as well as completers for IPython-specific
9 9 syntax like magics.
10 10
11 11 Latex and Unicode completion
12 12 ============================
13 13
14 14 IPython and compatible frontends can not only complete your code, but can also
15 15 help you input a wide range of characters. In particular, we allow you to insert
16 16 a unicode character using the tab completion mechanism.
17 17
18 18 Forward latex/unicode completion
19 19 --------------------------------
20 20
21 21 Forward completion allows you to easily type a unicode character using its latex
22 22 name, or its unicode long description. To do so, type a backslash followed by the
23 23 relevant name and press tab:
24 24
25 25
26 26 Using latex completion:
27 27
28 28 .. code::
29 29
30 30 \\alpha<tab>
31 31 Ξ±
32 32
33 33 or using unicode completion:
34 34
35 35
36 36 .. code::
37 37
38 38 \\GREEK SMALL LETTER ALPHA<tab>
39 39 Ξ±
40 40
41 41
42 42 Only valid Python identifiers will complete. Combining characters (like arrows or
43 43 dots) are also available; unlike latex, they need to be put after their
44 44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45 45
46 46 Some browsers are known to display combining characters incorrectly.
47 47
48 48 Backward latex completion
49 49 -------------------------
50 50
51 51 It is sometimes challenging to know how to type a character. If you are using
52 52 IPython, or any compatible frontend, you can prepend a backslash to the character
53 53 and press :kbd:`Tab` to expand it to its latex form.
54 54
55 55 .. code::
56 56
57 57 \\Ξ±<tab>
58 58 \\alpha
59 59
60 60
61 61 Both forward and backward completions can be deactivated by setting the
62 62 :std:configtrait:`Completer.backslash_combining_completions` option to
63 63 ``False``.
64 64
65 65
66 66 Experimental
67 67 ============
68 68
69 69 Starting with IPython 6.0, this module can make use of the Jedi library to
70 70 generate completions both by using static analysis of the code and by dynamically
71 71 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
72 72 library for Python. The APIs attached to this new mechanism are unstable and will
73 73 raise unless used in an :any:`provisionalcompleter` context manager.
74 74
75 75 You will find that the following are experimental:
76 76
77 77 - :any:`provisionalcompleter`
78 78 - :any:`IPCompleter.completions`
79 79 - :any:`Completion`
80 80 - :any:`rectify_completions`
81 81
82 82 .. note::
83 83
84 84 better name for :any:`rectify_completions` ?
85 85
86 86 We welcome any feedback on these new APIs, and we also encourage you to try this
87 87 module in debug mode (start IPython with ``--Completer.debug=True``) in order
88 88 to have extra logging information if :any:`jedi` is crashing, or if the current
89 89 IPython completer pending deprecations are returning results not yet handled
90 90 by :any:`jedi`.
91 91
92 92 Using Jedi for tab completion allows snippets like the following to work without
93 93 having to execute any code:
94 94
95 95 >>> myvar = ['hello', 42]
96 96 ... myvar[1].bi<tab>
97 97
98 98 Tab completion will be able to infer that ``myvar[1]`` is a real number while
99 99 executing almost no code, unlike the deprecated :any:`IPCompleter.greedy`
100 100 option.
101 101
102 102 Be sure to update :any:`jedi` to the latest stable version or to try the
103 103 current development version to get better completions.
104 104
105 105 Matchers
106 106 ========
107 107
108 108 All completion routines are implemented using the unified *Matchers* API.
109 109 The matchers API is provisional and subject to change without notice.
110 110
111 111 The built-in matchers include:
112 112
113 113 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
114 114 - :any:`IPCompleter.magic_matcher`: completions for magics,
115 115 - :any:`IPCompleter.unicode_name_matcher`,
116 116 :any:`IPCompleter.fwd_unicode_matcher`
117 117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
118 118 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
119 119 - :any:`IPCompleter.file_matcher`: paths to files and directories,
120 120 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
121 121 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
122 122 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
123 123 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
124 124 implementation in :any:`InteractiveShell` which uses IPython hooks system
125 125 (`complete_command`) with string dispatch (including regular expressions).
126 126 Unlike other matchers, ``custom_completer_matcher`` will not suppress
127 127 Jedi results to match behaviour in earlier IPython versions.
128 128
129 129 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
130 130
131 131 Matcher API
132 132 -----------
133 133
134 134 Simplifying some details, the ``Matcher`` interface can be described as
135 135
136 136 .. code-block::
137 137
138 138 MatcherAPIv1 = Callable[[str], list[str]]
139 139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
140 140
141 141 Matcher = MatcherAPIv1 | MatcherAPIv2
142 142
143 143 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
144 144 and remains supported as the simplest way of generating completions. This is also
145 145 currently the only API supported by the IPython hooks system `complete_command`.
146 146
147 147 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
148 148 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
149 149 and requires a literal ``2`` for v2 Matchers.
150 150
151 151 Once the API stabilises, future versions may relax the requirement of specifying
152 152 ``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore
153 153 please do not rely on the presence of ``matcher_api_version`` for any purpose.
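
As an illustration only (this is a sketch, not documented API usage), a v2
matcher could look roughly like the following, using the ``context_matcher``
decorator defined in this module; the ``shout_matcher`` name is made up:

.. code-block::

    @context_matcher()
    def shout_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # complete the hypothetical token "shout" with an exclamation mark
        completions = []
        if context.token and "shout".startswith(context.token):
            completions.append(SimpleCompletion(text="shout!", type="keyword"))
        return {"completions": completions}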
154 154
155 155 Suppression of competing matchers
156 156 ---------------------------------
157 157
158 158 By default, results from all matchers are combined, in the order determined by
159 159 their priority. Matchers can request to suppress results from subsequent
160 160 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
161 161
162 162 When multiple matchers simultaneously request suppression, the results from
163 163 the matcher with the higher priority will be returned.
164 164
165 165 Sometimes it is desirable to suppress most but not all other matchers;
166 166 this can be achieved by adding a set of identifiers of matchers which
167 167 should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key.
168 168
169 169 The suppression behaviour is user-configurable via
170 170 :std:configtrait:`IPCompleter.suppress_competing_matchers`.
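
As a rough sketch (the matcher identifier below is just an example), a v2
matcher result that suppresses every other matcher except the file matcher
could look like:

.. code-block::

    {
        "completions": [SimpleCompletion(text="example", type="keyword")],
        "suppress": True,
        "do_not_suppress": {"IPCompleter.file_matcher"},
    }
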
171 171 """
172 172
173 173
174 174 # Copyright (c) IPython Development Team.
175 175 # Distributed under the terms of the Modified BSD License.
176 176 #
177 177 # Some of this code originated from rlcompleter in the Python standard library
178 178 # Copyright (C) 2001 Python Software Foundation, www.python.org
179 179
180 180 from __future__ import annotations
181 181 import builtins as builtin_mod
182 182 import enum
183 183 import glob
184 184 import inspect
185 185 import itertools
186 186 import keyword
187 187 import os
188 188 import re
189 189 import string
190 190 import sys
191 191 import tokenize
192 192 import time
193 193 import unicodedata
194 194 import uuid
195 195 import warnings
196 196 from ast import literal_eval
197 197 from collections import defaultdict
198 198 from contextlib import contextmanager
199 199 from dataclasses import dataclass
200 200 from functools import cached_property, partial
201 201 from types import SimpleNamespace
202 202 from typing import (
203 203 Iterable,
204 204 Iterator,
205 205 List,
206 206 Tuple,
207 207 Union,
208 208 Any,
209 209 Sequence,
210 210 Dict,
211 211 Optional,
212 212 TYPE_CHECKING,
213 213 Set,
214 214 Sized,
215 215 TypeVar,
216 216 Literal,
217 217 )
218 218
219 219 from IPython.core.guarded_eval import guarded_eval, EvaluationContext
220 220 from IPython.core.error import TryNext
221 221 from IPython.core.inputtransformer2 import ESC_MAGIC
222 222 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
223 223 from IPython.core.oinspect import InspectColors
224 224 from IPython.testing.skipdoctest import skip_doctest
225 225 from IPython.utils import generics
226 226 from IPython.utils.decorators import sphinx_options
227 227 from IPython.utils.dir2 import dir2, get_real_method
228 228 from IPython.utils.docs import GENERATING_DOCUMENTATION
229 229 from IPython.utils.path import ensure_dir_exists
230 230 from IPython.utils.process import arg_split
231 231 from traitlets import (
232 232 Bool,
233 233 Enum,
234 234 Int,
235 235 List as ListTrait,
236 236 Unicode,
237 237 Dict as DictTrait,
238 238 Union as UnionTrait,
239 239 observe,
240 240 )
241 241 from traitlets.config.configurable import Configurable
242 242
243 243 import __main__
244 244
245 245 # skip module doctests
246 246 __skip_doctest__ = True
247 247
248 248
249 249 try:
250 250 import jedi
251 251 jedi.settings.case_insensitive_completion = False
252 252 import jedi.api.helpers
253 253 import jedi.api.classes
254 254 JEDI_INSTALLED = True
255 255 except ImportError:
256 256 JEDI_INSTALLED = False
257 257
258 258
259 259 if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
260 260 from typing import cast
261 261 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
262 262 else:
263 263 from typing import Generic
264 264
265 265 def cast(type_, obj):
266 266 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
267 267 return obj
268 268
269 269 # do not require on runtime
270 270 NotRequired = Tuple # requires Python >=3.11
271 271 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
272 272 Protocol = object # requires Python >=3.8
273 273 TypeAlias = Any # requires Python >=3.10
274 274 TypeGuard = Generic # requires Python >=3.10
275 275 if GENERATING_DOCUMENTATION:
276 276 from typing import TypedDict
277 277
278 278 # -----------------------------------------------------------------------------
279 279 # Globals
280 280 # -----------------------------------------------------------------------------
281 281
282 282 # Ranges where we have most of the valid unicode names. We could be finer
283 283 # grained, but is it worth it for performance? While unicode has characters in the
284 284 # range 0, 0x110000, we seem to have names for only about 10% of those (131808 as I
285 285 # write this). With the ranges below we cover them all, with a density of ~67%;
286 286 # the biggest next gap we could consider only adds about 1% density, and there are 600
287 287 # gaps that would need hard coding.
288 288 _UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]
289 289
290 290 # Public API
291 291 __all__ = ["Completer", "IPCompleter"]
292 292
293 293 if sys.platform == 'win32':
294 294 PROTECTABLES = ' '
295 295 else:
296 296 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
297 297
298 298 # Protect against returning an enormous number of completions which the frontend
299 299 # may have trouble processing.
300 300 MATCHES_LIMIT = 500
301 301
302 302 # Completion type reported when no type can be inferred.
303 303 _UNKNOWN_TYPE = "<unknown>"
304 304
305 305 # sentinel value to signal lack of a match
306 306 not_found = object()
307 307
308 308 class ProvisionalCompleterWarning(FutureWarning):
309 309 """
310 310 Exception raised by an experimental feature in this module.
311 311
312 312 Wrap code in :any:`provisionalcompleter` context manager if you
313 313 are certain you want to use an unstable feature.
314 314 """
315 315 pass
316 316
317 317 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
318 318
319 319
320 320 @skip_doctest
321 321 @contextmanager
322 322 def provisionalcompleter(action='ignore'):
323 323 """
324 324 This context manager has to be used in any place where unstable completer
325 325 behavior and API may be called.
326 326
327 327 >>> with provisionalcompleter():
328 328 ... completer.do_experimental_things() # works
329 329
330 330 >>> completer.do_experimental_things() # raises.
331 331
332 332 .. note::
333 333
334 334 Unstable
335 335
336 336 By using this context manager you agree that the API in use may change
337 337 without warning, and that you won't complain if it does so.
338 338
339 339 You also understand that, if the API is not to your liking, you should report
340 340 a bug to explain your use case upstream.
341 341
342 342 We'll be happy to get your feedback, feature requests, and improvements on
343 343 any of the unstable APIs!
344 344 """
345 345 with warnings.catch_warnings():
346 346 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
347 347 yield
348 348
349 349
350 350 def has_open_quotes(s):
351 351 """Return whether a string has open quotes.
352 352
353 353 This simply checks whether the number of quote characters of either type in
354 354 the string is odd.
355 355
356 356 Returns
357 357 -------
358 358 If there is an open quote, the quote character is returned. Else, return
359 359 False.
360 360 """
361 361 # We check " first, then ', so complex cases with nested quotes will get
362 362 # the " to take precedence.
363 363 if s.count('"') % 2:
364 364 return '"'
365 365 elif s.count("'") % 2:
366 366 return "'"
367 367 else:
368 368 return False
369 369
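# For instance (illustrative): has_open_quotes('print("hello') returns '"',
# while has_open_quotes('print("hello")') returns False.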
370 370
371 371 def protect_filename(s, protectables=PROTECTABLES):
372 372 """Escape a string to protect certain characters."""
373 373 if set(s) & set(protectables):
374 374 if sys.platform == "win32":
375 375 return '"' + s + '"'
376 376 else:
377 377 return "".join(("\\" + c if c in protectables else c) for c in s)
378 378 else:
379 379 return s
380 380
381 381
382 382 def expand_user(path:str) -> Tuple[str, bool, str]:
383 383 """Expand ``~``-style usernames in strings.
384 384
385 385 This is similar to :func:`os.path.expanduser`, but it computes and returns
386 386 extra information that will be useful if the input was being used in
387 387 computing completions, and you wish to return the completions with the
388 388 original '~' instead of its expanded value.
389 389
390 390 Parameters
391 391 ----------
392 392 path : str
393 393 String to be expanded. If no ~ is present, the output is the same as the
394 394 input.
395 395
396 396 Returns
397 397 -------
398 398 newpath : str
399 399 Result of ~ expansion in the input path.
400 400 tilde_expand : bool
401 401 Whether any expansion was performed or not.
402 402 tilde_val : str
403 403 The value that ~ was replaced with.
404 404 """
405 405 # Default values
406 406 tilde_expand = False
407 407 tilde_val = ''
408 408 newpath = path
409 409
410 410 if path.startswith('~'):
411 411 tilde_expand = True
412 412 rest = len(path)-1
413 413 newpath = os.path.expanduser(path)
414 414 if rest:
415 415 tilde_val = newpath[:-rest]
416 416 else:
417 417 tilde_val = newpath
418 418
419 419 return newpath, tilde_expand, tilde_val
420 420
421 421
422 422 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
423 423 """Does the opposite of expand_user, with its outputs.
424 424 """
425 425 if tilde_expand:
426 426 return path.replace(tilde_val, '~')
427 427 else:
428 428 return path
429 429
430 430
431 431 def completions_sorting_key(word):
432 432 """key for sorting completions
433 433
434 434 This does several things:
435 435
436 436 - Demote any completions starting with underscores to the end
437 437 - Insert any %magic and %%cellmagic completions in the alphabetical order
438 438 by their name
439 439 """
440 440 prio1, prio2 = 0, 0
441 441
442 442 if word.startswith('__'):
443 443 prio1 = 2
444 444 elif word.startswith('_'):
445 445 prio1 = 1
446 446
447 447 if word.endswith('='):
448 448 prio1 = -1
449 449
450 450 if word.startswith('%%'):
451 451 # If there's another % in there, this is something else, so leave it alone
452 452 if not "%" in word[2:]:
453 453 word = word[2:]
454 454 prio2 = 2
455 455 elif word.startswith('%'):
456 456 if not "%" in word[1:]:
457 457 word = word[1:]
458 458 prio2 = 1
459 459
460 460 return prio1, word, prio2
461 461
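# Illustrative use (not part of the public API): sorting with this key keeps
# magics in alphabetical order by name and demotes private names, e.g.
# sorted(['_priv', 'apple', '%magic'], key=completions_sorting_key)
# gives ['apple', '%magic', '_priv'].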
462 462
463 463 class _FakeJediCompletion:
464 464 """
465 465 This is a workaround to communicate to the UI that Jedi has crashed and to
466 466 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
467 467
468 468 Added in IPython 6.0, so it should likely be removed for 7.0.
469 469
470 470 """
471 471
472 472 def __init__(self, name):
473 473
474 474 self.name = name
475 475 self.complete = name
476 476 self.type = 'crashed'
477 477 self.name_with_symbols = name
478 478 self.signature = ""
479 479 self._origin = "fake"
480 480 self.text = "crashed"
481 481
482 482 def __repr__(self):
483 483 return '<Fake completion object jedi has crashed>'
484 484
485 485
486 486 _JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
487 487
488 488
489 489 class Completion:
490 490 """
491 491 Completion object used and returned by IPython completers.
492 492
493 493 .. warning::
494 494
495 495 Unstable
496 496
497 497 This function is unstable, API may change without warning.
498 498 It will also raise unless used in the proper context manager.
499 499 
500 500 This acts as a middle-ground :any:`Completion` object between the
501 501 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
502 502 object. While Jedi needs a lot of information about the evaluator and how the
503 503 code should be run/inspected, PromptToolkit (and other frontends) mostly
504 504 need user-facing information.
505 505 
506 506 - Which range should be replaced by what.
507 507 - Some metadata (like the completion type), or meta information to be displayed
508 508 to the user.
509 509 
510 510 For debugging purposes we can also store the origin of the completion (``jedi``,
511 511 ``IPython.python_matches``, ``IPython.magics_matches``...).
512 512 """
513 513
514 514 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
515 515
516 516 def __init__(
517 517 self,
518 518 start: int,
519 519 end: int,
520 520 text: str,
521 521 *,
522 522 type: Optional[str] = None,
523 523 _origin="",
524 524 signature="",
525 525 ) -> None:
526 526 warnings.warn(
527 527 "``Completion`` is a provisional API (as of IPython 6.0). "
528 528 "It may change without warnings. "
529 529 "Use in corresponding context manager.",
530 530 category=ProvisionalCompleterWarning,
531 531 stacklevel=2,
532 532 )
533 533
534 534 self.start = start
535 535 self.end = end
536 536 self.text = text
537 537 self.type = type
538 538 self.signature = signature
539 539 self._origin = _origin
540 540
541 541 def __repr__(self):
542 542 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
543 543 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
544 544
545 545 def __eq__(self, other) -> bool:
546 546 """
547 547 Equality and hash do not hash the type (as some completers may not be
548 548 able to infer the type), but are used to (partially) de-duplicate
549 549 completions.
550 550 
551 551 Completely de-duplicating completions is a bit trickier than just
552 552 comparing, as it depends on the surrounding text, which Completions are not
553 553 aware of.
554 554 """
555 555 return self.start == other.start and \
556 556 self.end == other.end and \
557 557 self.text == other.text
558 558
559 559 def __hash__(self):
560 560 return hash((self.start, self.end, self.text))
561 561
562 562
563 563 class SimpleCompletion:
564 564 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
565 565
566 566 .. warning::
567 567
568 568 Provisional
569 569
570 570 This class is used to describe the currently supported attributes of
571 571 simple completion items, and any additional implementation details
572 572 should not be relied on. Additional attributes may be included in
573 573 future versions, and the meaning of text disambiguated from the current
574 574 dual meaning of "text to insert" and "text to be used as a label".
575 575 """
576 576
577 577 __slots__ = ["text", "type"]
578 578
579 579 def __init__(self, text: str, *, type: Optional[str] = None):
580 580 self.text = text
581 581 self.type = type
582 582
583 583 def __repr__(self):
584 584 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
585 585
586 586
587 587 class _MatcherResultBase(TypedDict):
588 588 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
589 589
590 590 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
591 591 matched_fragment: NotRequired[str]
592 592
593 593 #: Whether to suppress results from all other matchers (True), some
594 594 #: matchers (set of identifiers) or none (False); default is False.
595 595 suppress: NotRequired[Union[bool, Set[str]]]
596 596
597 597 #: Identifiers of matchers which should NOT be suppressed when this matcher
598 598 #: requests to suppress all other matchers; defaults to an empty set.
599 599 do_not_suppress: NotRequired[Set[str]]
600 600
601 601 #: Are completions already ordered and should be left as-is? default is False.
602 602 ordered: NotRequired[bool]
603 603
604 604
605 605 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
606 606 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
607 607 """Result of new-style completion matcher."""
608 608
609 609 # note: TypedDict is added again to the inheritance chain
610 610 # in order to get __orig_bases__ for documentation
611 611
612 612 #: List of candidate completions
613 613 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
614 614
615 615
616 616 class _JediMatcherResult(_MatcherResultBase):
617 617 """Matching result returned by Jedi (will be processed differently)"""
618 618
619 619 #: list of candidate completions
620 620 completions: Iterator[_JediCompletionLike]
621 621
622 622
623 623 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
624 624 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
625 625
626 626
627 627 @dataclass
628 628 class CompletionContext:
629 629 """Completion context provided as an argument to matchers in the Matcher API v2."""
630 630
631 631 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
632 632 # which was not explicitly visible as an argument of the matcher, making any refactor
633 633 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
634 634 # from the completer, and make substituting them in sub-classes easier.
635 635
636 636 #: Relevant fragment of code directly preceding the cursor.
637 637 #: The extraction of the token is implemented via a splitter heuristic
638 638 #: (following readline behaviour for legacy reasons), which is user-configurable
639 639 #: (by switching the greedy mode).
640 640 token: str
641 641
642 642 #: The full available content of the editor or buffer
643 643 full_text: str
644 644
645 645 #: Cursor position in the line (the same for ``full_text`` and ``text``).
646 646 cursor_position: int
647 647
648 648 #: Cursor line in ``full_text``.
649 649 cursor_line: int
650 650
651 651 #: The maximum number of completions that will be used downstream.
652 652 #: Matchers can use this information to abort early.
653 653 #: The built-in Jedi matcher is currently excepted from this limit.
654 654 # If not given, return all possible completions.
655 655 limit: Optional[int]
656 656
657 657 @cached_property
658 658 def text_until_cursor(self) -> str:
659 659 return self.line_with_cursor[: self.cursor_position]
660 660
661 661 @cached_property
662 662 def line_with_cursor(self) -> str:
663 663 return self.full_text.split("\n")[self.cursor_line]
664 664
665 665
666 666 #: Matcher results for API v2.
667 667 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
668 668
669 669
670 670 class _MatcherAPIv1Base(Protocol):
671 671 def __call__(self, text: str) -> List[str]:
672 672 """Call signature."""
673 673 ...
674 674
675 675 #: Used to construct the default matcher identifier
676 676 __qualname__: str
677 677
678 678
679 679 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
680 680 #: API version
681 681 matcher_api_version: Optional[Literal[1]]
682 682
683 683 def __call__(self, text: str) -> List[str]:
684 684 """Call signature."""
685 685 ...
686 686
687 687
688 688 #: Protocol describing Matcher API v1.
689 689 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
690 690
691 691
692 692 class MatcherAPIv2(Protocol):
693 693 """Protocol describing Matcher API v2."""
694 694
695 695 #: API version
696 696 matcher_api_version: Literal[2] = 2
697 697
698 698 def __call__(self, context: CompletionContext) -> MatcherResult:
699 699 """Call signature."""
700 700 ...
701 701
702 702 #: Used to construct the default matcher identifier
703 703 __qualname__: str
704 704
705 705
706 706 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
707 707
708 708
709 709 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
710 710 api_version = _get_matcher_api_version(matcher)
711 711 return api_version == 1
712 712
713 713
714 714 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
715 715 api_version = _get_matcher_api_version(matcher)
716 716 return api_version == 2
717 717
718 718
719 719 def _is_sizable(value: Any) -> TypeGuard[Sized]:
720 720 """Determines whether objects is sizable"""
721 721 return hasattr(value, "__len__")
722 722
723 723
724 724 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
725 725 """Determines whether objects is sizable"""
726 726 return hasattr(value, "__next__")
727 727
728 728
729 729 def has_any_completions(result: MatcherResult) -> bool:
730 730 """Check if any result includes any completions."""
731 731 completions = result["completions"]
732 732 if _is_sizable(completions):
733 733 return len(completions) != 0
734 734 if _is_iterator(completions):
735 735 try:
736 736 old_iterator = completions
737 737 first = next(old_iterator)
738 738 result["completions"] = cast(
739 739 Iterator[SimpleCompletion],
740 740 itertools.chain([first], old_iterator),
741 741 )
742 742 return True
743 743 except StopIteration:
744 744 return False
745 745 raise ValueError(
746 746 "Completions returned by matcher need to be an Iterator or a Sizable"
747 747 )
748 748
749 749
750 750 def completion_matcher(
751 751 *,
752 752 priority: Optional[float] = None,
753 753 identifier: Optional[str] = None,
754 754 api_version: int = 1,
755 755 ):
756 756 """Adds attributes describing the matcher.
757 757
758 758 Parameters
759 759 ----------
760 760 priority : Optional[float]
761 761 The priority of the matcher, determines the order of execution of matchers.
762 762 Higher priority means that the matcher will be executed first. Defaults to 0.
763 763 identifier : Optional[str]
764 764 identifier of the matcher allowing users to modify the behaviour via traitlets,
765 765 and also used for debugging (will be passed as ``origin`` with the completions).
766 766 
767 767 Defaults to the matcher function's ``__qualname__`` (for example,
768 768 ``IPCompleter.file_matcher`` for the built-in matcher defined
769 769 as a ``file_matcher`` method of the ``IPCompleter`` class).
770 770 api_version : Optional[int]
771 771 version of the Matcher API used by this matcher.
772 772 Currently supported values are 1 and 2.
773 773 Defaults to 1.
774 774 """
775 775
776 776 def wrapper(func: Matcher):
777 777 func.matcher_priority = priority or 0 # type: ignore
778 778 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
779 779 func.matcher_api_version = api_version # type: ignore
780 780 if TYPE_CHECKING:
781 781 if api_version == 1:
782 782 func = cast(MatcherAPIv1, func)
783 783 elif api_version == 2:
784 784 func = cast(MatcherAPIv2, func)
785 785 return func
786 786
787 787 return wrapper
788 788
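# Example (a sketch, not a built-in matcher): registering a plain API v1
# matcher function; the ``greeting_matcher`` name is hypothetical.
#
#     @completion_matcher(identifier="greeting_matcher", priority=0.5)
#     def greeting_matcher(text: str) -> List[str]:
#         return [w for w in ("hello", "help") if w.startswith(text)]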
789 789
790 790 def _get_matcher_priority(matcher: Matcher):
791 791 return getattr(matcher, "matcher_priority", 0)
792 792
793 793
794 794 def _get_matcher_id(matcher: Matcher):
795 795 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
796 796
797 797
798 798 def _get_matcher_api_version(matcher):
799 799 return getattr(matcher, "matcher_api_version", 1)
800 800
801 801
802 802 context_matcher = partial(completion_matcher, api_version=2)
803 803
804 804
805 805 _IC = Iterable[Completion]
806 806
807 807
808 808 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
809 809 """
810 810 Deduplicate a set of completions.
811 811
812 812 .. warning::
813 813
814 814 Unstable
815 815
816 816 This function is unstable, API may change without warning.
817 817
818 818 Parameters
819 819 ----------
820 820 text : str
821 821 text that should be completed.
822 822 completions : Iterator[Completion]
823 823 iterator over the completions to deduplicate
824 824
825 825 Yields
826 826 ------
827 827 `Completions` objects
828 828 Completions coming from multiple sources may be different but end up having
829 829 the same effect when applied to ``text``. If this is the case, this will
830 830 consider completions as equal and only emit the first encountered.
831 831 Not folded into `completions()` yet, for debugging purposes and to detect when
832 832 the IPython completer returns things that Jedi does not, but it should be
833 833 at some point.
834 834 """
835 835 completions = list(completions)
836 836 if not completions:
837 837 return
838 838
839 839 new_start = min(c.start for c in completions)
840 840 new_end = max(c.end for c in completions)
841 841
842 842 seen = set()
843 843 for c in completions:
844 844 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
845 845 if new_text not in seen:
846 846 yield c
847 847 seen.add(new_text)
848 848
849 849
850 850 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
851 851 """
852 852 Rectify a set of completions to all have the same ``start`` and ``end``
853 853
854 854 .. warning::
855 855
856 856 Unstable
857 857
858 858 This function is unstable, API may change without warning.
859 859 It will also raise unless used in the proper context manager.
860 860
861 861 Parameters
862 862 ----------
863 863 text : str
864 864 text that should be completed.
865 865 completions : Iterator[Completion]
866 866 iterator over the completions to rectify
867 867 _debug : bool
868 868 Log failed completion
869 869
870 870 Notes
871 871 -----
872 872 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
873 873 the Jupyter Protocol requires them to. This will readjust
874 874 the completions to have the same ``start`` and ``end`` by padding both
875 875 extremities with surrounding text.
876 876
877 877 During stabilisation this should support a ``_debug`` option to log which
878 878 completions are returned by the IPython completer but not found in Jedi, in
879 879 order to make upstream bug reports.
880 880 """
881 881 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
882 882 "It may change without warnings. "
883 883 "Use in corresponding context manager.",
884 884 category=ProvisionalCompleterWarning, stacklevel=2)
885 885
886 886 completions = list(completions)
887 887 if not completions:
888 888 return
889 889 starts = (c.start for c in completions)
890 890 ends = (c.end for c in completions)
891 891
892 892 new_start = min(starts)
893 893 new_end = max(ends)
894 894
895 895 seen_jedi = set()
896 896 seen_python_matches = set()
897 897 for c in completions:
898 898 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
899 899 if c._origin == 'jedi':
900 900 seen_jedi.add(new_text)
901 901 elif c._origin == 'IPCompleter.python_matches':
902 902 seen_python_matches.add(new_text)
903 903 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
904 904 diff = seen_python_matches.difference(seen_jedi)
905 905 if diff and _debug:
906 906 print('IPython.python matches have extras:', diff)
907 907
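# Illustrative usage (assumes ``ip`` is an InteractiveShell instance):
#
#     with provisionalcompleter():
#         raw = ip.Completer.completions('myvar.bi', 8)
#         completions = list(rectify_completions('myvar.bi', raw))
#
# after which all Completion objects share the same ``start`` and ``end``.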
908 908
909 909 if sys.platform == 'win32':
910 910 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
911 911 else:
912 912 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
913 913
914 914 GREEDY_DELIMS = ' =\r\n'
915 915
916 916
917 917 class CompletionSplitter(object):
918 918 """An object to split an input line in a manner similar to readline.
919 919
920 920 By having our own implementation, we can expose readline-like completion in
921 921 a uniform manner to all frontends. This object only needs to be given the
922 922 line of text to be split and the cursor position on said line, and it
923 923 returns the 'word' to be completed at the cursor, after splitting the
924 924 entire line.
925 925
926 926 What characters are used as splitting delimiters can be controlled by
927 927 setting the ``delims`` attribute (this is a property that internally
928 928 automatically builds the necessary regular expression)"""
929 929
930 930 # Private interface
931 931
932 932 # A string of delimiter characters. The default value makes sense for
933 933 # IPython's most typical usage patterns.
934 934 _delims = DELIMS
935 935
936 936 # The expression (a normal string) to be compiled into a regular expression
937 937 # for actual splitting. We store it as an attribute mostly for ease of
938 938 # debugging, since this type of code can be so tricky to debug.
939 939 _delim_expr = None
940 940
941 941 # The regular expression that does the actual splitting
942 942 _delim_re = None
943 943
944 944 def __init__(self, delims=None):
945 945 delims = CompletionSplitter._delims if delims is None else delims
946 946 self.delims = delims
947 947
948 948 @property
949 949 def delims(self):
950 950 """Return the string of delimiter characters."""
951 951 return self._delims
952 952
953 953 @delims.setter
954 954 def delims(self, delims):
955 955 """Set the delimiters for line splitting."""
956 956 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
957 957 self._delim_re = re.compile(expr)
958 958 self._delims = delims
959 959 self._delim_expr = expr
960 960
961 961 def split_line(self, line, cursor_pos=None):
962 962 """Split a line of text with a cursor at the given position.
963 963 """
964 964 l = line if cursor_pos is None else line[:cursor_pos]
965 965 return self._delim_re.split(l)[-1]
966 966
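# For example (illustrative): CompletionSplitter().split_line('print(foo.ba')
# returns 'foo.ba', because '(' is a delimiter while '.' is not.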
967 967
968 968
969 969 class Completer(Configurable):
970 970
971 971 greedy = Bool(
972 972 False,
973 973 help="""Activate greedy completion.
974 974
975 975 .. deprecated:: 8.8
976 976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977 977
978 978 When enabled in IPython 8.8 or newer, changes configuration as follows:
979 979
980 980 - ``Completer.evaluation = 'unsafe'``
981 981 - ``Completer.auto_close_dict_keys = True``
982 982 """,
983 983 ).tag(config=True)
984 984
985 985 evaluation = Enum(
986 986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 987 default_value="limited",
988 988 help="""Policy for code evaluation under completion.
989 989
990 990 Successive options allow enabling more eager evaluation for better
991 991 completion suggestions, including for nested dictionaries, nested lists,
992 992 or even results of function calls.
993 993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995 995
996 996 Allowed values are:
997 997
998 998 - ``forbidden``: no evaluation of code is permitted,
999 999 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 1000 no item/attribute evaluation, no access to locals/globals,
1001 1001 no evaluation of any operations or comparisons.
1002 1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 1006 - ``unsafe``: evaluation of all methods and function calls but not of
1007 1007 syntax with side-effects like `del x`,
1008 1008 - ``dangerous``: completely arbitrary evaluation.
1009 1009 """,
1010 1010 ).tag(config=True)
1011 1011
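# For instance (illustrative), the policy can be tightened in a config file
# such as ipython_config.py with:
#
#     c.Completer.evaluation = 'minimal'
#
# or interactively with ``%config Completer.evaluation = 'minimal'``.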
1012 1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 1013 help="Experimental: Use Jedi to generate autocompletions. "
1014 1014 "Default to True if jedi is installed.").tag(config=True)
1015 1015
1016 1016 jedi_compute_type_timeout = Int(default_value=400,
1017 1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 1018 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1019 1019 performance by preventing jedi from building its cache.
1020 1020 """).tag(config=True)
1021 1021
1022 1022 debug = Bool(default_value=False,
1023 1023 help='Enable debug for the Completer. Mostly print extra '
1024 1024 'information for experimental jedi integration.')\
1025 1025 .tag(config=True)
1026 1026
1027 1027 backslash_combining_completions = Bool(True,
1028 1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 1029 "Includes completion of latex commands, unicode names, and expanding "
1030 1030 "unicode characters back to latex commands.").tag(config=True)
1031 1031
1032 1032 auto_close_dict_keys = Bool(
1033 1033 False,
1034 1034 help="""
1035 1035 Enable auto-closing dictionary keys.
1036 1036
1037 1037 When enabled, string keys will be suffixed with a final quote
1038 1038 (matching the opening quote), tuple keys will also receive a
1039 1039 separating comma if needed, and keys which are final will
1040 1040 receive a closing bracket (``]``).
1041 1041 """,
1042 1042 ).tag(config=True)
1043 1043
1044 1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 1045 """Create a new completer for the command line.
1046 1046
1047 1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048 1048
1049 1049 If unspecified, the default namespace where completions are performed
1050 1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 1051 given as dictionaries.
1052 1052
1053 1053 An optional second namespace can be given. This allows the completer
1054 1054 to handle cases where both the local and global scopes need to be
1055 1055 distinguished.
1056 1056 """
1057 1057
1058 1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 1059 # specific namespace or to use __main__.__dict__. This will allow us
1060 1060 # to bind to __main__.__dict__ at completion time, not now.
1061 1061 if namespace is None:
1062 1062 self.use_main_ns = True
1063 1063 else:
1064 1064 self.use_main_ns = False
1065 1065 self.namespace = namespace
1066 1066
1067 1067 # The global namespace, if given, can be bound directly
1068 1068 if global_namespace is None:
1069 1069 self.global_namespace = {}
1070 1070 else:
1071 1071 self.global_namespace = global_namespace
1072 1072
1073 1073 self.custom_matchers = []
1074 1074
1075 1075 super(Completer, self).__init__(**kwargs)
1076 1076
1077 1077 def complete(self, text, state):
1078 1078 """Return the next possible completion for 'text'.
1079 1079
1080 1080 This is called successively with state == 0, 1, 2, ... until it
1081 1081 returns None. The completion should begin with 'text'.
1082 1082
1083 1083 """
1084 1084 if self.use_main_ns:
1085 1085 self.namespace = __main__.__dict__
1086 1086
1087 1087 if state == 0:
1088 1088 if "." in text:
1089 1089 self.matches = self.attr_matches(text)
1090 1090 else:
1091 1091 self.matches = self.global_matches(text)
1092 1092 try:
1093 1093 return self.matches[state]
1094 1094 except IndexError:
1095 1095 return None
1096 1096
1097 1097 def global_matches(self, text):
1098 1098 """Compute matches when text is a simple name.
1099 1099
1100 1100 Return a list of all keywords, built-in functions and names currently
1101 1101 defined in self.namespace or self.global_namespace that match.
1102 1102
1103 1103 """
1104 1104 matches = []
1105 1105 match_append = matches.append
1106 1106 n = len(text)
1107 1107 for lst in [
1108 1108 keyword.kwlist,
1109 1109 builtin_mod.__dict__.keys(),
1110 1110 list(self.namespace.keys()),
1111 1111 list(self.global_namespace.keys()),
1112 1112 ]:
1113 1113 for word in lst:
1114 1114 if word[:n] == text and word != "__builtins__":
1115 1115 match_append(word)
1116 1116
1117 1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 1119 shortened = {
1120 1120 "_".join([sub[0] for sub in word.split("_")]): word
1121 1121 for word in lst
1122 1122 if snake_case_re.match(word)
1123 1123 }
1124 1124 for word in shortened.keys():
1125 1125 if word[:n] == text and word != "__builtins__":
1126 1126 match_append(shortened[word])
1127 1127 return matches
1128 1128
1129 1129 def attr_matches(self, text):
1130 1130 """Compute matches when text contains a dot.
1131 1131
1132 1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 1133 evaluatable in self.namespace or self.global_namespace, it will be
1134 1134 evaluated and its attributes (as revealed by dir()) are used as
1135 1135 possible completions. (For class instances, class members are
1136 1136 also considered.)
1137 1137
1138 1138 WARNING: this can still invoke arbitrary C code, if an object
1139 1139 with a __getattr__ hook is evaluated.
1140 1140
1141 1141 """
1142 1142 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1143 1143 if not m2:
1144 1144 return []
1145 1145 expr, attr = m2.group(1, 2)
1146 1146
1147 1147 obj = self._evaluate_expr(expr)
1148 1148
1149 1149 if obj is not_found:
1150 1150 return []
1151 1151
1152 1152 if self.limit_to__all__ and hasattr(obj, '__all__'):
1153 1153 words = get__all__entries(obj)
1154 1154 else:
1155 1155 words = dir2(obj)
1156 1156
1157 1157 try:
1158 1158 words = generics.complete_object(obj, words)
1159 1159 except TryNext:
1160 1160 pass
1161 1161 except AssertionError:
1162 1162 raise
1163 1163 except Exception:
1164 1164 # Silence errors from completion function
1165 1165 pass
1166 1166 # Build match list to return
1167 1167 n = len(attr)
1168 1168
1169 1169 # Note: ideally we would just return words here and the prefix
1170 1170 # reconciliator would know that we intend to append to rather than
1171 1171 # replace the input text; this requires refactoring to return range
1172 1172 # which ought to be replaced (as does jedi).
1173 1173 tokens = _parse_tokens(expr)
1174 1174 rev_tokens = reversed(tokens)
1175 1175 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1176 1176 name_turn = True
1177 1177
1178 1178 parts = []
1179 1179 for token in rev_tokens:
1180 1180 if token.type in skip_over:
1181 1181 continue
1182 1182 if token.type == tokenize.NAME and name_turn:
1183 1183 parts.append(token.string)
1184 1184 name_turn = False
1185 1185 elif token.type == tokenize.OP and token.string == "." and not name_turn:
1186 1186 parts.append(token.string)
1187 1187 name_turn = True
1188 1188 else:
1189 1189 # stop at the first token that is neither a name nor a dot
1190 1190 break
1191 1191
1192 1192 prefix_after_space = "".join(reversed(parts))
1193 1193
1194 1194 return ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr]
1195 1195
1196 1196 def _evaluate_expr(self, expr):
1197 1197 obj = not_found
1198 1198 done = False
1199 1199 while not done and expr:
1200 1200 try:
1201 1201 obj = guarded_eval(
1202 1202 expr,
1203 1203 EvaluationContext(
1204 1204 globals=self.global_namespace,
1205 1205 locals=self.namespace,
1206 1206 evaluation=self.evaluation,
1207 1207 ),
1208 1208 )
1209 1209 done = True
1210 1210 except Exception as e:
1211 1211 if self.debug:
1212 1212 print("Evaluation exception", e)
1213 1213 # trim the expression to remove any invalid prefix
1214 1214 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1215 1215 # where parenthesis is not closed.
1216 1216 # TODO: make this faster by reusing parts of the computation?
1217 1217 expr = expr[1:]
1218 1218 return obj
1219 1219
1220 1220 def get__all__entries(obj):
1221 1221 """returns the strings in the __all__ attribute"""
1222 1222 try:
1223 1223 words = getattr(obj, '__all__')
1224 1224 except:
1225 1225 return []
1226 1226
1227 1227 return [w for w in words if isinstance(w, str)]
1228 1228
1229 1229
1230 1230 class _DictKeyState(enum.Flag):
1231 1231 """Represent state of the key match in context of other possible matches.
1232 1232
1233 1233 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1234 1234 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1235 1235 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1236 1236 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | IN_TUPLE}` (both flags combined)
1237 1237 """
1238 1238
1239 1239 BASELINE = 0
1240 1240 END_OF_ITEM = enum.auto()
1241 1241 END_OF_TUPLE = enum.auto()
1242 1242 IN_TUPLE = enum.auto()
1243 1243
1244 1244
1245 1245 def _parse_tokens(c):
1246 1246 """Parse tokens even if there is an error."""
1247 1247 tokens = []
1248 1248 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1249 1249 while True:
1250 1250 try:
1251 1251 tokens.append(next(token_generator))
1252 1252 except tokenize.TokenError:
1253 1253 return tokens
1254 1254 except StopIteration:
1255 1255 return tokens
1256 1256
1257 1257
1258 1258 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1259 1259 """Match any valid Python numeric literal in a prefix of dictionary keys.
1260 1260
1261 1261 References:
1262 1262 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1263 1263 - https://docs.python.org/3/library/tokenize.html
1264 1264 """
1265 1265 if prefix[-1].isspace():
1266 1266 # if user typed a space we do not have anything to complete
1267 1267 # even if there was a valid number token before
1268 1268 return None
1269 1269 tokens = _parse_tokens(prefix)
1270 1270 rev_tokens = reversed(tokens)
1271 1271 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1272 1272 number = None
1273 1273 for token in rev_tokens:
1274 1274 if token.type in skip_over:
1275 1275 continue
1276 1276 if number is None:
1277 1277 if token.type == tokenize.NUMBER:
1278 1278 number = token.string
1279 1279 continue
1280 1280 else:
1281 1281 # we did not match a number
1282 1282 return None
1283 1283 if token.type == tokenize.OP:
1284 1284 if token.string == ",":
1285 1285 break
1286 1286 if token.string in {"+", "-"}:
1287 1287 number = token.string + number
1288 1288 else:
1289 1289 return None
1290 1290 return number
1291 1291
1292 1292
1293 1293 _INT_FORMATS = {
1294 1294 "0b": bin,
1295 1295 "0o": oct,
1296 1296 "0x": hex,
1297 1297 }
1298 1298
1299 1299
1300 1300 def match_dict_keys(
1301 1301 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1302 1302 prefix: str,
1303 1303 delims: str,
1304 1304 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1305 1305 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1306 1306 """Used by dict_key_matches, matching the prefix to a list of keys
1307 1307
1308 1308 Parameters
1309 1309 ----------
1310 1310 keys
1311 1311 list of keys in dictionary currently being completed.
1312 1312 prefix
1313 1313 Part of the text already typed by the user. E.g. `mydict[b'fo`
1314 1314 delims
1315 1315 String of delimiters to consider when finding the current key.
1316 1316 extra_prefix : optional
1317 1317 Part of the text already typed in multi-key index cases. E.g. for
1318 1318 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1319 1319
1320 1320 Returns
1321 1321 -------
1322 1322 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1323 1323 ``quote`` being the quote that needs to be used to close the current string,
1324 1324 ``token_start`` the position where the replacement should start occurring, and
1325 1325 ``matched`` a dictionary with replacement/completion texts as keys and values
1326 1326 indicating the state of each match (a `_DictKeyState`).
1327 1327 """
1328 1328 prefix_tuple = extra_prefix if extra_prefix else ()
1329 1329
1330 1330 prefix_tuple_size = sum(
1331 1331 [
1332 1332 # for pandas, do not count slices as taking space
1333 1333 not isinstance(k, slice)
1334 1334 for k in prefix_tuple
1335 1335 ]
1336 1336 )
1337 1337 text_serializable_types = (str, bytes, int, float, slice)
1338 1338
1339 1339 def filter_prefix_tuple(key):
1340 1340 # Reject too short keys
1341 1341 if len(key) <= prefix_tuple_size:
1342 1342 return False
1343 1343 # Reject keys which cannot be serialised to text
1344 1344 for k in key:
1345 1345 if not isinstance(k, text_serializable_types):
1346 1346 return False
1347 1347 # Reject keys that do not match the prefix
1348 1348 for k, pt in zip(key, prefix_tuple):
1349 1349 if k != pt and not isinstance(pt, slice):
1350 1350 return False
1351 1351 # All checks passed!
1352 1352 return True
1353 1353
1354 1354 filtered_key_is_final: Dict[
1355 1355 Union[str, bytes, int, float], _DictKeyState
1356 1356 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1357 1357
1358 1358 for k in keys:
1359 1359 # If at least one of the matches is not final, mark as undetermined.
1360 1360 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1361 1361 # `111` appears final on first match but is not final on the second.
1362 1362
1363 1363 if isinstance(k, tuple):
1364 1364 if filter_prefix_tuple(k):
1365 1365 key_fragment = k[prefix_tuple_size]
1366 1366 filtered_key_is_final[key_fragment] |= (
1367 1367 _DictKeyState.END_OF_TUPLE
1368 1368 if len(k) == prefix_tuple_size + 1
1369 1369 else _DictKeyState.IN_TUPLE
1370 1370 )
1371 1371 elif prefix_tuple_size > 0:
1372 1372 # we are completing a tuple but this key is not a tuple,
1373 1373 # so we should ignore it
1374 1374 pass
1375 1375 else:
1376 1376 if isinstance(k, text_serializable_types):
1377 1377 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1378 1378
1379 1379 filtered_keys = filtered_key_is_final.keys()
1380 1380
1381 1381 if not prefix:
1382 1382 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1383 1383
1384 1384 quote_match = re.search("(?:\"|')", prefix)
1385 1385 is_user_prefix_numeric = False
1386 1386
1387 1387 if quote_match:
1388 1388 quote = quote_match.group()
1389 1389 valid_prefix = prefix + quote
1390 1390 try:
1391 1391 prefix_str = literal_eval(valid_prefix)
1392 1392 except Exception:
1393 1393 return "", 0, {}
1394 1394 else:
1395 1395 # If it does not look like a string, let's assume
1396 1396 # we are dealing with a number or variable.
1397 1397 number_match = _match_number_in_dict_key_prefix(prefix)
1398 1398
1399 1399 # We do not want the key matcher to suggest variable names so we yield:
1400 1400 if number_match is None:
1401 1401 # The alternative would be to assume that the user forgot the quote
1402 1402 # and if the substring matches, suggest adding it at the start.
1403 1403 return "", 0, {}
1404 1404
1405 1405 prefix_str = number_match
1406 1406 is_user_prefix_numeric = True
1407 1407 quote = ""
1408 1408
1409 1409 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1410 1410 token_match = re.search(pattern, prefix, re.UNICODE)
1411 1411 assert token_match is not None # silence mypy
1412 1412 token_start = token_match.start()
1413 1413 token_prefix = token_match.group()
1414 1414
1415 1415 matched: Dict[str, _DictKeyState] = {}
1416 1416
1417 1417 str_key: Union[str, bytes]
1418 1418
1419 1419 for key in filtered_keys:
1420 1420 if isinstance(key, (int, float)):
1421 1421 # This key is a number, but the user-typed prefix is not numeric.
1422 1422 if not is_user_prefix_numeric:
1423 1423 continue
1424 1424 str_key = str(key)
1425 1425 if isinstance(key, int):
1426 1426 int_base = prefix_str[:2].lower()
1427 1427 # if user typed integer using binary/oct/hex notation:
1428 1428 if int_base in _INT_FORMATS:
1429 1429 int_format = _INT_FORMATS[int_base]
1430 1430 str_key = int_format(key)
1431 1431 else:
1432 1432 # This key is not a number, but the user-typed prefix is numeric.
1433 1433 if is_user_prefix_numeric:
1434 1434 continue
1435 1435 str_key = key
1436 1436 try:
1437 1437 if not str_key.startswith(prefix_str):
1438 1438 continue
1439 1439 except (AttributeError, TypeError, UnicodeError) as e:
1440 1440 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1441 1441 continue
1442 1442
1443 1443 # reformat remainder of key to begin with prefix
1444 1444 rem = str_key[len(prefix_str) :]
1445 1445 # force repr wrapped in '
1446 1446 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1447 1447 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1448 1448 if quote == '"':
1449 1449 # The entered prefix is quoted with ",
1450 1450 # but the match is quoted with '.
1451 1451 # A contained " hence needs escaping for comparison:
1452 1452 rem_repr = rem_repr.replace('"', '\\"')
1453 1453
1454 1454 # then reinsert prefix from start of token
1455 1455 match = "%s%s" % (token_prefix, rem_repr)
1456 1456
1457 1457 matched[match] = filtered_key_is_final[key]
1458 1458 return quote, token_start, matched
1459 1459
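# Rough sketch of a call (exact results depend on the key structure):
#
#     quote, start, matches = match_dict_keys(['foo', 'food'], "'fo", DELIMS)
#
# Here ``quote`` is "'", and ``matches`` maps candidate completion texts to
# ``_DictKeyState`` flags describing whether each key would be final.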
1460 1460
1461 1461 def cursor_to_position(text:str, line:int, column:int)->int:
1462 1462 """
1463 1463 Convert the (line,column) position of the cursor in text to an offset in a
1464 1464 string.
1465 1465
1466 1466 Parameters
1467 1467 ----------
1468 1468 text : str
1469 1469 The text in which to calculate the cursor offset
1470 1470 line : int
1471 1471 Line of the cursor; 0-indexed
1472 1472 column : int
1473 1473 Column of the cursor 0-indexed
1474 1474
1475 1475 Returns
1476 1476 -------
1477 1477 Position of the cursor in ``text``, 0-indexed.
1478 1478
1479 1479 See Also
1480 1480 --------
1481 1481 position_to_cursor : reciprocal of this function
1482 1482
1483 1483 """
1484 1484 lines = text.split('\n')
1485 1485 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1486 1486
1487 1487 return sum(len(l) + 1 for l in lines[:line]) + column
1488 1488
1489 1489 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1490 1490 """
1491 1491 Convert the position of the cursor in text (0 indexed) to a line
1492 1492 number (0-indexed) and a column number (0-indexed) pair
1493 1493
1494 1494 Position should be a valid position in ``text``.
1495 1495
1496 1496 Parameters
1497 1497 ----------
1498 1498 text : str
1499 1499 The text in which to calculate the cursor offset
1500 1500 offset : int
1501 1501 Position of the cursor in ``text``, 0-indexed.
1502 1502
1503 1503 Returns
1504 1504 -------
1505 1505 (line, column) : (int, int)
1506 1506 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1507 1507
1508 1508 See Also
1509 1509 --------
1510 1510 cursor_to_position : reciprocal of this function
1511 1511
1512 1512 """
1513 1513
1514 1514 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1515 1515
1516 1516 before = text[:offset]
1517 1517 blines = before.split('\n') # ! splitlines trims a trailing \n
1518 1518 line = before.count('\n')
1519 1519 col = len(blines[-1])
1520 1520 return line, col
1521 1521
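# Illustrative round-trip: with text = 'ab\ncd', cursor_to_position(text, 1, 1)
# returns 4 (the offset of 'd'), and position_to_cursor(text, 4) returns (1, 1).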
1522 1522
1523 1523 def _safe_isinstance(obj, module, class_name, *attrs):
1524 1524 """Checks if obj is an instance of module.class_name if loaded
1525 1525 """
1526 1526 if module in sys.modules:
1527 1527 m = sys.modules[module]
1528 1528 for attr in [class_name, *attrs]:
1529 1529 m = getattr(m, attr)
1530 1530 return isinstance(obj, m)
1531 1531
1532 1532
1533 1533 @context_matcher()
1534 1534 def back_unicode_name_matcher(context: CompletionContext):
1535 1535 """Match Unicode characters back to Unicode name
1536 1536
1537 1537 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1538 1538 """
1539 1539 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1540 1540 return _convert_matcher_v1_result_to_v2(
1541 1541 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1542 1542 )
1543 1543
1544 1544
1545 1545 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1546 1546 """Match Unicode characters back to Unicode name
1547 1547
1548 1548 This does ``β˜ƒ`` -> ``\\snowman``
1549 1549 
1550 1550 Note that the snowman is not a valid Python 3 combining character, but it will
1551 1551 still be expanded; the completion machinery will not recombine it back into the snowman character.
1552 1552 
1553 1553 Standard escape sequences such as ``\\n`` or ``\\b`` are not back-completed either.
1554 1554
1555 1555 .. deprecated:: 8.6
1556 1556 You can use :meth:`back_unicode_name_matcher` instead.
1557 1557
1558 1558 Returns
1559 1559 -------
1560 1560 
1561 1561 A tuple with two elements:
1562 1562 
1563 1563 - the Unicode character that was matched (preceded by a backslash), or an
1564 1564 empty string,
1565 1565 - a one-element sequence with the name of the matched Unicode character,
1566 1566 preceded by a backslash, or an empty sequence if there is no match.
1567 1567 """
1568 1568 if len(text)<2:
1569 1569 return '', ()
1570 1570 maybe_slash = text[-2]
1571 1571 if maybe_slash != '\\':
1572 1572 return '', ()
1573 1573
1574 1574 char = text[-1]
1575 1575 # no expand on quote for completion in strings.
1576 1576 # nor backcomplete standard ascii keys
1577 1577 if char in string.ascii_letters or char in ('"',"'"):
1578 1578 return '', ()
1579 1579 try :
1580 1580 unic = unicodedata.name(char)
1581 1581 return '\\'+char,('\\'+unic,)
1582 1582 except KeyError:
1583 1583 pass
1584 1584 return '', ()
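# Illustrative example for back_unicode_name_matches: given text ending in a
# backslash followed by β˜ƒ, it returns the matched fragment ("\" + "β˜ƒ") and a
# one-element sequence containing "\" + "SNOWMAN", the backslash-prefixed name.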
1585 1585
1586 1586
1587 1587 @context_matcher()
1588 1588 def back_latex_name_matcher(context: CompletionContext):
1589 1589 """Match latex characters back to unicode name
1590 1590
1591 1591 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1592 1592 """
1593 1593 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1594 1594 return _convert_matcher_v1_result_to_v2(
1595 1595 matches, type="latex", fragment=fragment, suppress_if_matches=True
1596 1596 )
1597 1597
1598 1598
1599 1599 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1600 1600 """Match latex characters back to unicode name
1601 1601
1602 1602 This does ``\\β„΅`` -> ``\\aleph``
1603 1603
1604 1604 .. deprecated:: 8.6
1605 1605 You can use :meth:`back_latex_name_matcher` instead.
1606 1606 """
1607 1607 if len(text)<2:
1608 1608 return '', ()
1609 1609 maybe_slash = text[-2]
1610 1610 if maybe_slash != '\\':
1611 1611 return '', ()
1612 1612
1613 1613
1614 1614 char = text[-1]
1615 1615 # no expand on quote for completion in strings.
1616 1616 # nor backcomplete standard ascii keys
1617 1617 if char in string.ascii_letters or char in ('"',"'"):
1618 1618 return '', ()
1619 1619 try :
1620 1620 latex = reverse_latex_symbol[char]
1621 1621 # the leading '\\' ensures the typed backslash is replaced as well
1622 1622 return '\\'+char,[latex]
1623 1623 except KeyError:
1624 1624 pass
1625 1625 return '', ()
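# Illustrative example for back_latex_name_matches: given text ending in a
# backslash followed by β„΅, it returns the matched fragment ("\" + "β„΅") and a
# one-element list with the latex command from reverse_latex_symbol (e.g. "\aleph").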
1626 1626
1627 1627
1628 1628 def _formatparamchildren(parameter) -> str:
1629 1629 """
1630 1630 Get parameter name and value from Jedi Private API
1631 1631
1632 1632 Jedi does not expose a simple way to get `param=value` from its API.
1633 1633
1634 1634 Parameters
1635 1635 ----------
1636 1636 parameter
1637 1637 Jedi's function `Param`
1638 1638
1639 1639 Returns
1640 1640 -------
1641 1641 A string like 'a', 'b=1', '*args', '**kwargs'
1642 1642
1643 1643 """
1644 1644 description = parameter.description
1645 1645 if not description.startswith('param '):
1646 1646 raise ValueError('Jedi function parameter description has changed format. '
1647 1647 'Expected "param ...", found %r.' % description)
1648 1648 return description[6:]
1649 1649
1650 1650 def _make_signature(completion)-> str:
1651 1651 """
1652 1652 Make the signature from a jedi completion
1653 1653
1654 1654 Parameters
1655 1655 ----------
1656 1656 completion : jedi.Completion
1657 1657 the Jedi completion object, expected to correspond to a callable
1658 1658
1659 1659 Returns
1660 1660 -------
1661 1661 a string consisting of the function signature, with the parentheses but
1662 1662 without the function name. Example:
1663 1663 `(a, *args, b=1, **kwargs)`
1664 1664
1665 1665 """
1666 1666
1667 1667 # it looks like this might work on jedi 0.17
1668 1668 if hasattr(completion, 'get_signatures'):
1669 1669 signatures = completion.get_signatures()
1670 1670 if not signatures:
1671 1671 return '(?)'
1672 1672
1673 1673 c0 = signatures[0]
1674 1674 return '('+c0.to_string().split('(', maxsplit=1)[1]
1675 1675
1676 1676 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1677 1677 for p in signature.defined_names()) if f])
1678 1678
1679 1679
1680 1680 _CompleteResult = Dict[str, MatcherResult]
1681 1681
1682 1682
1683 1683 DICT_MATCHER_REGEX = re.compile(
1684 1684 r"""(?x)
1685 1685 ( # match dict-referring - or any get item object - expression
1686 1686 .+
1687 1687 )
1688 1688 \[ # open bracket
1689 1689 \s* # and optional whitespace
1690 1690 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1691 1691 # and slices
1692 1692 ((?:(?:
1693 1693 (?: # closed string
1694 1694 [uUbB]? # string prefix (r not handled)
1695 1695 (?:
1696 1696 '(?:[^']|(?<!\\)\\')*'
1697 1697 |
1698 1698 "(?:[^"]|(?<!\\)\\")*"
1699 1699 )
1700 1700 )
1701 1701 |
1702 1702 # capture integers and slices
1703 1703 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1704 1704 |
1705 1705 # integer in bin/hex/oct notation
1706 1706 0[bBxXoO]_?(?:\w|\d)+
1707 1707 )
1708 1708 \s*,\s*
1709 1709 )*)
1710 1710 ((?:
1711 1711 (?: # unclosed string
1712 1712 [uUbB]? # string prefix (r not handled)
1713 1713 (?:
1714 1714 '(?:[^']|(?<!\\)\\')*
1715 1715 |
1716 1716 "(?:[^"]|(?<!\\)\\")*
1717 1717 )
1718 1718 )
1719 1719 |
1720 1720 # unfinished integer
1721 1721 (?:[-+]?\d+)
1722 1722 |
1723 1723 # integer in bin/hex/oct notation
1724 1724 0[bBxXoO]_?(?:\w|\d)+
1725 1725 )
1726 1726 )?
1727 1727 $
1728 1728 """
1729 1729 )
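# Illustrative only: for text_until_cursor == "d['a', 'b" the three groups above
# capture roughly ("d", "'a', ", "'b") -- the subscripted expression, the keys
# that are already closed, and the unfinished key prefix being completed.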
1730 1730
1731 1731
1732 1732 def _convert_matcher_v1_result_to_v2(
1733 1733 matches: Sequence[str],
1734 1734 type: str,
1735 1735 fragment: Optional[str] = None,
1736 1736 suppress_if_matches: bool = False,
1737 1737 ) -> SimpleMatcherResult:
1738 1738 """Utility to help with transition"""
1739 1739 result = {
1740 1740 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1741 1741 "suppress": (True if matches else False) if suppress_if_matches else False,
1742 1742 }
1743 1743 if fragment is not None:
1744 1744 result["matched_fragment"] = fragment
1745 1745 return cast(SimpleMatcherResult, result)
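# Illustrative call (hypothetical values): _convert_matcher_v1_result_to_v2(
#     ['%time', '%timeit'], type='magic', fragment='%ti')
# wraps each string in a SimpleCompletion and records the matched fragment so the
# new-style machinery knows which part of the input the completions replace.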
1746 1746
1747 1747
1748 1748 class IPCompleter(Completer):
1749 1749 """Extension of the completer class with IPython-specific features"""
1750 1750
1751 1751 @observe('greedy')
1752 1752 def _greedy_changed(self, change):
1753 1753 """update the splitter and readline delims when greedy is changed"""
1754 1754 if change["new"]:
1755 1755 self.evaluation = "unsafe"
1756 1756 self.auto_close_dict_keys = True
1757 1757 self.splitter.delims = GREEDY_DELIMS
1758 1758 else:
1759 1759 self.evaluation = "limited"
1760 1760 self.auto_close_dict_keys = False
1761 1761 self.splitter.delims = DELIMS
1762 1762
1763 1763 dict_keys_only = Bool(
1764 1764 False,
1765 1765 help="""
1766 1766 Whether to show dict key matches only.
1767 1767
1768 1768 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1769 1769 """,
1770 1770 )
1771 1771
1772 1772 suppress_competing_matchers = UnionTrait(
1773 1773 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1774 1774 default_value=None,
1775 1775 help="""
1776 1776 Whether to suppress completions from other *Matchers*.
1777 1777
1778 1778 When set to ``None`` (default) the matchers will attempt to auto-detect
1779 1779 whether suppression of other matchers is desirable. For example, at
1780 1780 the beginning of a line followed by `%` we expect a magic completion
1781 1781 to be the only applicable option, and after ``my_dict['`` we usually
1782 1782 expect a completion with an existing dictionary key.
1783 1783
1784 1784 If you want to disable this heuristic and see completions from all matchers,
1785 1785 set ``IPCompleter.suppress_competing_matchers = False``.
1786 1786 To disable the heuristic for specific matchers provide a dictionary mapping:
1787 1787 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1788 1788
1789 1789 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1790 1790 completions to the set of matchers with the highest priority;
1791 1791 this is equivalent to ``IPCompleter.merge_completions`` and
1792 1792 can be beneficial for performance, but will sometimes omit relevant
1793 1793 candidates from matchers further down the priority list.
1794 1794 """,
1795 1795 ).tag(config=True)
1796 1796
1797 1797 merge_completions = Bool(
1798 1798 True,
1799 1799 help="""Whether to merge completion results into a single list
1800 1800
1801 1801 If False, only the completion results from the first non-empty
1802 1802 completer will be returned.
1803 1803
1804 1804 As of version 8.6.0, setting the value to ``False`` is an alias for:
1805 1805 ``IPCompleter.suppress_competing_matchers = True``.
1806 1806 """,
1807 1807 ).tag(config=True)
1808 1808
1809 1809 disable_matchers = ListTrait(
1810 1810 Unicode(),
1811 1811 help="""List of matchers to disable.
1812 1812
1813 1813 The list should contain matcher identifiers (see :any:`completion_matcher`).
1814 1814 """,
1815 1815 ).tag(config=True)
1816 1816
1817 1817 omit__names = Enum(
1818 1818 (0, 1, 2),
1819 1819 default_value=2,
1820 1820 help="""Instruct the completer to omit private method names
1821 1821
1822 1822 Specifically, when completing on ``object.<tab>``.
1823 1823
1824 1824 When 2 [default]: all names that start with '_' will be excluded.
1825 1825
1826 1826 When 1: all 'magic' names (``__foo__``) will be excluded.
1827 1827
1828 1828 When 0: nothing will be excluded.
1829 1829 """
1830 1830 ).tag(config=True)
1831 1831 limit_to__all__ = Bool(False,
1832 1832 help="""
1833 1833 DEPRECATED as of version 5.0.
1834 1834
1835 1835 Instruct the completer to use __all__ for the completion
1836 1836
1837 1837 Specifically, when completing on ``object.<tab>``.
1838 1838
1839 1839 When True: only those names in obj.__all__ will be included.
1840 1840
1841 1841 When False [default]: the __all__ attribute is ignored
1842 1842 """,
1843 1843 ).tag(config=True)
1844 1844
1845 1845 profile_completions = Bool(
1846 1846 default_value=False,
1847 1847 help="If True, emit profiling data for completion subsystem using cProfile."
1848 1848 ).tag(config=True)
1849 1849
1850 1850 profiler_output_dir = Unicode(
1851 1851 default_value=".completion_profiles",
1852 1852 help="Template for path at which to output profile data for completions."
1853 1853 ).tag(config=True)
1854 1854
1855 1855 @observe('limit_to__all__')
1856 1856 def _limit_to_all_changed(self, change):
1857 1857 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1858 1858 'value has been deprecated since IPython 5.0; it will be made to have '
1859 1859 'no effect and then removed in a future version of IPython.',
1860 1860 UserWarning)
1861 1861
1862 1862 def __init__(
1863 1863 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1864 1864 ):
1865 1865 """IPCompleter() -> completer
1866 1866
1867 1867 Return a completer object.
1868 1868
1869 1869 Parameters
1870 1870 ----------
1871 1871 shell
1872 1872 a pointer to the ipython shell itself. This is needed
1873 1873 because this completer knows about magic functions, and those can
1874 1874 only be accessed via the ipython instance.
1875 1875 namespace : dict, optional
1876 1876 an optional dict where completions are performed.
1877 1877 global_namespace : dict, optional
1878 1878 secondary optional dict for completions, to
1879 1879 handle cases (such as IPython embedded inside functions) where
1880 1880 both Python scopes are visible.
1881 1881 config : Config
1882 1882 traitlet's config object
1883 1883 **kwargs
1884 1884 passed to super class unmodified.
1885 1885 """
1886 1886
1887 1887 self.magic_escape = ESC_MAGIC
1888 1888 self.splitter = CompletionSplitter()
1889 1889
1890 1890 # _greedy_changed() depends on splitter and readline being defined:
1891 1891 super().__init__(
1892 1892 namespace=namespace,
1893 1893 global_namespace=global_namespace,
1894 1894 config=config,
1895 1895 **kwargs,
1896 1896 )
1897 1897
1898 1898 # List where completion matches will be stored
1899 1899 self.matches = []
1900 1900 self.shell = shell
1901 1901 # Regexp to split filenames with spaces in them
1902 1902 self.space_name_re = re.compile(r'([^\\] )')
1903 1903 # Hold a local ref. to glob.glob for speed
1904 1904 self.glob = glob.glob
1905 1905
1906 1906 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1907 1907 # buffers, to avoid completion problems.
1908 1908 term = os.environ.get('TERM','xterm')
1909 1909 self.dumb_terminal = term in ['dumb','emacs']
1910 1910
1911 1911 # Special handling of backslashes needed in win32 platforms
1912 1912 if sys.platform == "win32":
1913 1913 self.clean_glob = self._clean_glob_win32
1914 1914 else:
1915 1915 self.clean_glob = self._clean_glob
1916 1916
1917 1917 #regexp to parse docstring for function signature
1918 1918 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1919 1919 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1920 1920 #use this if positional argument name is also needed
1921 1921 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1922 1922
1923 1923 self.magic_arg_matchers = [
1924 1924 self.magic_config_matcher,
1925 1925 self.magic_color_matcher,
1926 1926 ]
1927 1927
1928 1928 # This is set externally by InteractiveShell
1929 1929 self.custom_completers = None
1930 1930
1931 1931 # This is a list of names of unicode characters that can be completed
1932 1932 # into their corresponding unicode value. The list is large, so we
1933 1933 # lazily initialize it on first use. Consuming code should access this
1934 1934 # attribute through the `@unicode_names` property.
1935 1935 self._unicode_names = None
1936 1936
1937 1937 self._backslash_combining_matchers = [
1938 1938 self.latex_name_matcher,
1939 1939 self.unicode_name_matcher,
1940 1940 back_latex_name_matcher,
1941 1941 back_unicode_name_matcher,
1942 1942 self.fwd_unicode_matcher,
1943 1943 ]
1944 1944
1945 1945 if not self.backslash_combining_completions:
1946 1946 for matcher in self._backslash_combining_matchers:
1947 1947 self.disable_matchers.append(_get_matcher_id(matcher))
1948 1948
1949 1949 if not self.merge_completions:
1950 1950 self.suppress_competing_matchers = True
1951 1951
1952 1952 @property
1953 1953 def matchers(self) -> List[Matcher]:
1954 1954 """All active matcher routines for completion"""
1955 1955 if self.dict_keys_only:
1956 1956 return [self.dict_key_matcher]
1957 1957
1958 1958 if self.use_jedi:
1959 1959 return [
1960 1960 *self.custom_matchers,
1961 1961 *self._backslash_combining_matchers,
1962 1962 *self.magic_arg_matchers,
1963 1963 self.custom_completer_matcher,
1964 1964 self.magic_matcher,
1965 1965 self._jedi_matcher,
1966 1966 self.dict_key_matcher,
1967 1967 self.file_matcher,
1968 1968 ]
1969 1969 else:
1970 1970 return [
1971 1971 *self.custom_matchers,
1972 1972 *self._backslash_combining_matchers,
1973 1973 *self.magic_arg_matchers,
1974 1974 self.custom_completer_matcher,
1975 1975 self.dict_key_matcher,
1976 1976 # TODO: convert python_matches to v2 API
1977 1977 self.magic_matcher,
1978 1978 self.python_matches,
1979 1979 self.file_matcher,
1980 1980 self.python_func_kw_matcher,
1981 1981 ]
1982 1982
1983 1983 def all_completions(self, text:str) -> List[str]:
1984 1984 """
1985 1985 Wrapper around the completion methods for the benefit of emacs.
1986 1986 """
1987 1987 prefix = text.rpartition('.')[0]
1988 1988 with provisionalcompleter():
1989 1989 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1990 1990 for c in self.completions(text, len(text))]
1991 1991
1992 1992 return self.complete(text)[1]
1993 1993
1994 1994 def _clean_glob(self, text:str):
1995 1995 return self.glob("%s*" % text)
1996 1996
1997 1997 def _clean_glob_win32(self, text:str):
1998 1998 return [f.replace("\\","/")
1999 1999 for f in self.glob("%s*" % text)]
2000 2000
2001 2001 @context_matcher()
2002 2002 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2003 2003 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2004 2004 matches = self.file_matches(context.token)
2005 2005 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2006 2006 # starts with `/home/`, `C:\`, etc)
2007 2007 return _convert_matcher_v1_result_to_v2(matches, type="path")
2008 2008
2009 2009 def file_matches(self, text: str) -> List[str]:
2010 2010 """Match filenames, expanding ~USER type strings.
2011 2011
2012 2012 Most of the seemingly convoluted logic in this completer is an
2013 2013 attempt to handle filenames with spaces in them. And yet it's not
2014 2014 quite perfect, because Python's readline doesn't expose all of the
2015 2015 GNU readline details needed for this to be done correctly.
2016 2016
2017 2017 For a filename with a space in it, the printed completions will be
2018 2018 only the parts after what's already been typed (instead of the
2019 2019 full completions, as is normally done). I don't think with the
2020 2020 current (as of Python 2.3) Python readline it's possible to do
2021 2021 better.
2022 2022
2023 2023 .. deprecated:: 8.6
2024 2024 You can use :meth:`file_matcher` instead.
2025 2025 """
2026 2026
2027 2027 # chars that require escaping with backslash - i.e. chars
2028 2028 # that readline treats incorrectly as delimiters, but we
2029 2029 # don't want to treat as delimiters in filename matching
2030 2030 # when escaped with backslash
2031 2031 if text.startswith('!'):
2032 2032 text = text[1:]
2033 2033 text_prefix = u'!'
2034 2034 else:
2035 2035 text_prefix = u''
2036 2036
2037 2037 text_until_cursor = self.text_until_cursor
2038 2038 # track strings with open quotes
2039 2039 open_quotes = has_open_quotes(text_until_cursor)
2040 2040
2041 2041 if '(' in text_until_cursor or '[' in text_until_cursor:
2042 2042 lsplit = text
2043 2043 else:
2044 2044 try:
2045 2045 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2046 2046 lsplit = arg_split(text_until_cursor)[-1]
2047 2047 except ValueError:
2048 2048 # typically an unmatched ", or backslash without escaped char.
2049 2049 if open_quotes:
2050 2050 lsplit = text_until_cursor.split(open_quotes)[-1]
2051 2051 else:
2052 2052 return []
2053 2053 except IndexError:
2054 2054 # tab pressed on empty line
2055 2055 lsplit = ""
2056 2056
2057 2057 if not open_quotes and lsplit != protect_filename(lsplit):
2058 2058 # if protectables are found, do matching on the whole escaped name
2059 2059 has_protectables = True
2060 2060 text0,text = text,lsplit
2061 2061 else:
2062 2062 has_protectables = False
2063 2063 text = os.path.expanduser(text)
2064 2064
2065 2065 if text == "":
2066 2066 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2067 2067
2068 2068 # Compute the matches from the filesystem
2069 2069 if sys.platform == 'win32':
2070 2070 m0 = self.clean_glob(text)
2071 2071 else:
2072 2072 m0 = self.clean_glob(text.replace('\\', ''))
2073 2073
2074 2074 if has_protectables:
2075 2075 # If we had protectables, we need to revert our changes to the
2076 2076 # beginning of filename so that we don't double-write the part
2077 2077 # of the filename we have so far
2078 2078 len_lsplit = len(lsplit)
2079 2079 matches = [text_prefix + text0 +
2080 2080 protect_filename(f[len_lsplit:]) for f in m0]
2081 2081 else:
2082 2082 if open_quotes:
2083 2083 # if we have a string with an open quote, we don't need to
2084 2084 # protect the names beyond the quote (and we _shouldn't_, as
2085 2085 # it would cause bugs when the filesystem call is made).
2086 2086 matches = m0 if sys.platform == "win32" else\
2087 2087 [protect_filename(f, open_quotes) for f in m0]
2088 2088 else:
2089 2089 matches = [text_prefix +
2090 2090 protect_filename(f) for f in m0]
2091 2091
2092 2092 # Mark directories in input list by appending '/' to their names.
2093 2093 return [x+'/' if os.path.isdir(x) else x for x in matches]
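# Illustrative behaviour of file_matches (hypothetical working directory
# containing "main.py" and "my file.txt"): file_matches("m") would return
# something like ["main.py", "my\ file.txt"] -- spaces escaped via
# protect_filename, and directories returned with a trailing "/".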
2094 2094
2095 2095 @context_matcher()
2096 2096 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2097 2097 """Match magics."""
2098 2098 text = context.token
2099 2099 matches = self.magic_matches(text)
2100 2100 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2101 2101 is_magic_prefix = len(text) > 0 and text[0] == "%"
2102 2102 result["suppress"] = is_magic_prefix and bool(result["completions"])
2103 2103 return result
2104 2104
2105 2105 def magic_matches(self, text: str):
2106 2106 """Match magics.
2107 2107
2108 2108 .. deprecated:: 8.6
2109 2109 You can use :meth:`magic_matcher` instead.
2110 2110 """
2111 2111 # Get all shell magics now rather than statically, so magics loaded at
2112 2112 # runtime show up too.
2113 2113 lsm = self.shell.magics_manager.lsmagic()
2114 2114 line_magics = lsm['line']
2115 2115 cell_magics = lsm['cell']
2116 2116 pre = self.magic_escape
2117 2117 pre2 = pre+pre
2118 2118
2119 2119 explicit_magic = text.startswith(pre)
2120 2120
2121 2121 # Completion logic:
2122 2122 # - user gives %%: only do cell magics
2123 2123 # - user gives %: do both line and cell magics
2124 2124 # - no prefix: do both
2125 2125 # In other words, line magics are skipped if the user gives %% explicitly
2126 2126 #
2127 2127 # We also exclude magics that match any currently visible names:
2128 2128 # https://github.com/ipython/ipython/issues/4877, unless the user has
2129 2129 # typed a %:
2130 2130 # https://github.com/ipython/ipython/issues/10754
2131 2131 bare_text = text.lstrip(pre)
2132 2132 global_matches = self.global_matches(bare_text)
2133 2133 if not explicit_magic:
2134 2134 def matches(magic):
2135 2135 """
2136 2136 Filter magics, in particular remove magics that match
2137 2137 a name present in global namespace.
2138 2138 """
2139 2139 return ( magic.startswith(bare_text) and
2140 2140 magic not in global_matches )
2141 2141 else:
2142 2142 def matches(magic):
2143 2143 return magic.startswith(bare_text)
2144 2144
2145 2145 comp = [ pre2+m for m in cell_magics if matches(m)]
2146 2146 if not text.startswith(pre2):
2147 2147 comp += [ pre+m for m in line_magics if matches(m)]
2148 2148
2149 2149 return comp
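# Illustrative example (actual results depend on the magics registered in the
# running shell): magic_matches('%ti') would typically return something like
# ['%%time', '%%timeit', '%time', '%timeit'] -- cell magics first, then line magics.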
2150 2150
2151 2151 @context_matcher()
2152 2152 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2153 2153 """Match class names and attributes for %config magic."""
2154 2154 # NOTE: uses `line_buffer` equivalent for compatibility
2155 2155 matches = self.magic_config_matches(context.line_with_cursor)
2156 2156 return _convert_matcher_v1_result_to_v2(matches, type="param")
2157 2157
2158 2158 def magic_config_matches(self, text: str) -> List[str]:
2159 2159 """Match class names and attributes for %config magic.
2160 2160
2161 2161 .. deprecated:: 8.6
2162 2162 You can use :meth:`magic_config_matcher` instead.
2163 2163 """
2164 2164 texts = text.strip().split()
2165 2165
2166 2166 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2167 2167 # get all configuration classes
2168 2168 classes = sorted(set([ c for c in self.shell.configurables
2169 2169 if c.__class__.class_traits(config=True)
2170 2170 ]), key=lambda x: x.__class__.__name__)
2171 2171 classnames = [ c.__class__.__name__ for c in classes ]
2172 2172
2173 2173 # return all classnames if config or %config is given
2174 2174 if len(texts) == 1:
2175 2175 return classnames
2176 2176
2177 2177 # match classname
2178 2178 classname_texts = texts[1].split('.')
2179 2179 classname = classname_texts[0]
2180 2180 classname_matches = [ c for c in classnames
2181 2181 if c.startswith(classname) ]
2182 2182
2183 2183 # return matched classes or the matched class with attributes
2184 2184 if texts[1].find('.') < 0:
2185 2185 return classname_matches
2186 2186 elif len(classname_matches) == 1 and \
2187 2187 classname_matches[0] == classname:
2188 2188 cls = classes[classnames.index(classname)].__class__
2189 2189 help = cls.class_get_help()
2190 2190 # strip leading '--' from cl-args:
2191 2191 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2192 2192 return [ attr.split('=')[0]
2193 2193 for attr in help.strip().splitlines()
2194 2194 if attr.startswith(texts[1]) ]
2195 2195 return []
2196 2196
2197 2197 @context_matcher()
2198 2198 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2199 2199 """Match color schemes for %colors magic."""
2200 2200 # NOTE: uses `line_buffer` equivalent for compatibility
2201 2201 matches = self.magic_color_matches(context.line_with_cursor)
2202 2202 return _convert_matcher_v1_result_to_v2(matches, type="param")
2203 2203
2204 2204 def magic_color_matches(self, text: str) -> List[str]:
2205 2205 """Match color schemes for %colors magic.
2206 2206
2207 2207 .. deprecated:: 8.6
2208 2208 You can use :meth:`magic_color_matcher` instead.
2209 2209 """
2210 2210 texts = text.split()
2211 2211 if text.endswith(' '):
2212 2212 # .split() strips off the trailing whitespace. Add '' back
2213 2213 # so that: '%colors ' -> ['%colors', '']
2214 2214 texts.append('')
2215 2215
2216 2216 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2217 2217 prefix = texts[1]
2218 2218 return [ color for color in InspectColors.keys()
2219 2219 if color.startswith(prefix) ]
2220 2220 return []
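# Illustrative example: magic_color_matches('%colors Li') would typically return
# a list such as ['Linux', 'LightBG'], assuming those schemes exist in InspectColors.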
2221 2221
2222 2222 @context_matcher(identifier="IPCompleter.jedi_matcher")
2223 2223 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2224 2224 matches = self._jedi_matches(
2225 2225 cursor_column=context.cursor_position,
2226 2226 cursor_line=context.cursor_line,
2227 2227 text=context.full_text,
2228 2228 )
2229 2229 return {
2230 2230 "completions": matches,
2231 2231 # static analysis should not suppress other matchers
2232 2232 "suppress": False,
2233 2233 }
2234 2234
2235 2235 def _jedi_matches(
2236 2236 self, cursor_column: int, cursor_line: int, text: str
2237 2237 ) -> Iterator[_JediCompletionLike]:
2238 2238 """
2239 2239 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
2240 2240 cursor position.
2241 2241
2242 2242 Parameters
2243 2243 ----------
2244 2244 cursor_column : int
2245 2245 column position of the cursor in ``text``, 0-indexed.
2246 2246 cursor_line : int
2247 2247 line position of the cursor in ``text``, 0-indexed
2248 2248 text : str
2249 2249 text to complete
2250 2250
2251 2251 Notes
2252 2252 -----
2253 2253 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2254 2254 object containing a string with the Jedi debug information attached.
2255 2255
2256 2256 .. deprecated:: 8.6
2257 2257 You can use :meth:`_jedi_matcher` instead.
2258 2258 """
2259 2259 namespaces = [self.namespace]
2260 2260 if self.global_namespace is not None:
2261 2261 namespaces.append(self.global_namespace)
2262 2262
2263 2263 completion_filter = lambda x:x
2264 2264 offset = cursor_to_position(text, cursor_line, cursor_column)
2265 2265 # filter output if we are completing for object members
2266 2266 if offset:
2267 2267 pre = text[offset-1]
2268 2268 if pre == '.':
2269 2269 if self.omit__names == 2:
2270 2270 completion_filter = lambda c:not c.name.startswith('_')
2271 2271 elif self.omit__names == 1:
2272 2272 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2273 2273 elif self.omit__names == 0:
2274 2274 completion_filter = lambda x:x
2275 2275 else:
2276 2276 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2277 2277
2278 2278 interpreter = jedi.Interpreter(text[:offset], namespaces)
2279 2279 try_jedi = True
2280 2280
2281 2281 try:
2282 2282 # find the first token in the current tree -- if it is a ' or " then we are in a string
2283 2283 completing_string = False
2284 2284 try:
2285 2285 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2286 2286 except StopIteration:
2287 2287 pass
2288 2288 else:
2289 2289 # note the value may be ', ", or it may also be ''' or """, or
2290 2290 # in some cases, """what/you/typed..., but all of these are
2291 2291 # strings.
2292 2292 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2293 2293
2294 2294 # if we are in a string jedi is likely not the right candidate for
2295 2295 # now. Skip it.
2296 2296 try_jedi = not completing_string
2297 2297 except Exception as e:
2298 2298 # many things can go wrong; we are using a private API, so just don't crash.
2299 2299 if self.debug:
2300 2300 print("Error detecting if completing a non-finished string :", e, '|')
2301 2301
2302 2302 if not try_jedi:
2303 2303 return iter([])
2304 2304 try:
2305 2305 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2306 2306 except Exception as e:
2307 2307 if self.debug:
2308 2308 return iter(
2309 2309 [
2310 2310 _FakeJediCompletion(
2311 2311 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2312 2312 % (e)
2313 2313 )
2314 2314 ]
2315 2315 )
2316 2316 else:
2317 2317 return iter([])
2318 2318
2319 2319 @completion_matcher(api_version=1)
2320 2320 def python_matches(self, text: str) -> Iterable[str]:
2321 2321 """Match attributes or global python names"""
2322 2322 if "." in text:
2323 2323 try:
2324 2324 matches = self.attr_matches(text)
2325 2325 if text.endswith('.') and self.omit__names:
2326 2326 if self.omit__names == 1:
2327 2327 # true if txt is _not_ a __ name, false otherwise:
2328 2328 no__name = (lambda txt:
2329 2329 re.match(r'.*\.__.*?__',txt) is None)
2330 2330 else:
2331 2331 # true if txt is _not_ a _ name, false otherwise:
2332 2332 no__name = (lambda txt:
2333 2333 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2334 2334 matches = filter(no__name, matches)
2335 2335 except NameError:
2336 2336 # catches <undefined attributes>.<tab>
2337 2337 matches = []
2338 2338 else:
2339 2339 matches = self.global_matches(text)
2340 2340 return matches
2341 2341
2342 2342 def _default_arguments_from_docstring(self, doc):
2343 2343 """Parse the first line of docstring for call signature.
2344 2344
2345 2345 Docstring should be of the form 'min(iterable[, key=func])\n'.
2346 2346 It can also parse cython docstring of the form
2347 2347 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2348 2348 """
2349 2349 if doc is None:
2350 2350 return []
2351 2351
2352 2352 # care only about the first line
2353 2353 line = doc.lstrip().splitlines()[0]
2354 2354
2355 2355 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2356 2356 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2357 2357 sig = self.docstring_sig_re.search(line)
2358 2358 if sig is None:
2359 2359 return []
2360 2360 # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
2361 2361 sig = sig.groups()[0].split(',')
2362 2362 ret = []
2363 2363 for s in sig:
2364 2364 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2365 2365 ret += self.docstring_kwd_re.findall(s)
2366 2366 return ret
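# Illustrative example: for a docstring whose first line is
# "min(iterable[, key=func])", _default_arguments_from_docstring returns ['key'];
# only keyword-looking names (followed by '=') survive docstring_kwd_re.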
2367 2367
2368 2368 def _default_arguments(self, obj):
2369 2369 """Return the list of default arguments of obj if it is callable,
2370 2370 or empty list otherwise."""
2371 2371 call_obj = obj
2372 2372 ret = []
2373 2373 if inspect.isbuiltin(obj):
2374 2374 pass
2375 2375 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2376 2376 if inspect.isclass(obj):
2377 2377 #for cython embedsignature=True the constructor docstring
2378 2378 #belongs to the object itself not __init__
2379 2379 ret += self._default_arguments_from_docstring(
2380 2380 getattr(obj, '__doc__', ''))
2381 2381 # for classes, check for __init__,__new__
2382 2382 call_obj = (getattr(obj, '__init__', None) or
2383 2383 getattr(obj, '__new__', None))
2384 2384 # for all others, check if they are __call__able
2385 2385 elif hasattr(obj, '__call__'):
2386 2386 call_obj = obj.__call__
2387 2387 ret += self._default_arguments_from_docstring(
2388 2388 getattr(call_obj, '__doc__', ''))
2389 2389
2390 2390 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2391 2391 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2392 2392
2393 2393 try:
2394 2394 sig = inspect.signature(obj)
2395 2395 ret.extend(k for k, v in sig.parameters.items() if
2396 2396 v.kind in _keeps)
2397 2397 except ValueError:
2398 2398 pass
2399 2399
2400 2400 return list(set(ret))
2401 2401
2402 2402 @context_matcher()
2403 2403 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2404 2404 """Match named parameters (kwargs) of the last open function."""
2405 2405 matches = self.python_func_kw_matches(context.token)
2406 2406 return _convert_matcher_v1_result_to_v2(matches, type="param")
2407 2407
2408 2408 def python_func_kw_matches(self, text):
2409 2409 """Match named parameters (kwargs) of the last open function.
2410 2410
2411 2411 .. deprecated:: 8.6
2412 2412 You can use :meth:`python_func_kw_matcher` instead.
2413 2413 """
2414 2414
2415 2415 if "." in text: # a parameter cannot be dotted
2416 2416 return []
2417 2417 try: regexp = self.__funcParamsRegex
2418 2418 except AttributeError:
2419 2419 regexp = self.__funcParamsRegex = re.compile(r'''
2420 2420 '.*?(?<!\\)' | # single quoted strings or
2421 2421 ".*?(?<!\\)" | # double quoted strings or
2422 2422 \w+ | # identifier
2423 2423 \S # other characters
2424 2424 ''', re.VERBOSE | re.DOTALL)
2425 2425 # 1. find the nearest identifier that comes before an unclosed
2426 2426 # parenthesis before the cursor
2427 2427 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2428 2428 tokens = regexp.findall(self.text_until_cursor)
2429 2429 iterTokens = reversed(tokens); openPar = 0
2430 2430
2431 2431 for token in iterTokens:
2432 2432 if token == ')':
2433 2433 openPar -= 1
2434 2434 elif token == '(':
2435 2435 openPar += 1
2436 2436 if openPar > 0:
2437 2437 # found the last unclosed parenthesis
2438 2438 break
2439 2439 else:
2440 2440 return []
2441 2441 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2442 2442 ids = []
2443 2443 isId = re.compile(r'\w+$').match
2444 2444
2445 2445 while True:
2446 2446 try:
2447 2447 ids.append(next(iterTokens))
2448 2448 if not isId(ids[-1]):
2449 2449 ids.pop(); break
2450 2450 if not next(iterTokens) == '.':
2451 2451 break
2452 2452 except StopIteration:
2453 2453 break
2454 2454
2455 2455 # Find all named arguments already assigned to, as to avoid suggesting
2456 2456 # them again
2457 2457 usedNamedArgs = set()
2458 2458 par_level = -1
2459 2459 for token, next_token in zip(tokens, tokens[1:]):
2460 2460 if token == '(':
2461 2461 par_level += 1
2462 2462 elif token == ')':
2463 2463 par_level -= 1
2464 2464
2465 2465 if par_level != 0:
2466 2466 continue
2467 2467
2468 2468 if next_token != '=':
2469 2469 continue
2470 2470
2471 2471 usedNamedArgs.add(token)
2472 2472
2473 2473 argMatches = []
2474 2474 try:
2475 2475 callableObj = '.'.join(ids[::-1])
2476 2476 namedArgs = self._default_arguments(eval(callableObj,
2477 2477 self.namespace))
2478 2478
2479 2479 # Remove used named arguments from the list, no need to show twice
2480 2480 for namedArg in set(namedArgs) - usedNamedArgs:
2481 2481 if namedArg.startswith(text):
2482 2482 argMatches.append("%s=" %namedArg)
2483 2483 except:
2484 2484 pass
2485 2485
2486 2486 return argMatches
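# Illustrative example (hypothetical function): with ``def foo(x, padding=None)``
# defined in the user namespace and the line "foo(1, pa" being completed,
# python_func_kw_matches('pa') returns ['padding=']; keyword arguments already
# used on the call line are filtered out via usedNamedArgs.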
2487 2487
2488 2488 @staticmethod
2489 2489 def _get_keys(obj: Any) -> List[Any]:
2490 2490 # Objects can define their own completions by defining an
2491 2491 # _ipy_key_completions_() method.
2492 2492 method = get_real_method(obj, '_ipython_key_completions_')
2493 2493 if method is not None:
2494 2494 return method()
2495 2495
2496 2496 # Special case some common in-memory dict-like types
2497 2497 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2498 2498 try:
2499 2499 return list(obj.keys())
2500 2500 except Exception:
2501 2501 return []
2502 2502 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2503 2503 try:
2504 2504 return list(obj.obj.keys())
2505 2505 except Exception:
2506 2506 return []
2507 2507 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2508 2508 _safe_isinstance(obj, 'numpy', 'void'):
2509 2509 return obj.dtype.names or []
2510 2510 return []
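# Objects can opt into dict-style key completion by implementing the protocol
# checked above, e.g. (hypothetical class, shown for illustration only):
#
#     class Bag:
#         def _ipython_key_completions_(self):
#             return ["alpha", "beta"]
#
# ``bag["al`` followed by <tab> would then offer "alpha" through dict_key_matcher.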
2511 2511
2512 2512 @context_matcher()
2513 2513 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2514 2514 """Match string keys in a dictionary, after e.g. ``foo[``."""
2515 2515 matches = self.dict_key_matches(context.token)
2516 2516 return _convert_matcher_v1_result_to_v2(
2517 2517 matches, type="dict key", suppress_if_matches=True
2518 2518 )
2519 2519
2520 2520 def dict_key_matches(self, text: str) -> List[str]:
2521 2521 """Match string keys in a dictionary, after e.g. ``foo[``.
2522 2522
2523 2523 .. deprecated:: 8.6
2524 2524 You can use :meth:`dict_key_matcher` instead.
2525 2525 """
2526 2526
2527 2527 # Short-circuit on closed dictionary (regular expression would
2528 2528 # not match anyway, but would take quite a while).
2529 2529 if self.text_until_cursor.strip().endswith("]"):
2530 2530 return []
2531 2531
2532 2532 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2533 2533
2534 2534 if match is None:
2535 2535 return []
2536 2536
2537 2537 expr, prior_tuple_keys, key_prefix = match.groups()
2538 2538
2539 2539 obj = self._evaluate_expr(expr)
2540 2540
2541 2541 if obj is not_found:
2542 2542 return []
2543 2543
2544 2544 keys = self._get_keys(obj)
2545 2545 if not keys:
2546 2546 return keys
2547 2547
2548 2548 tuple_prefix = guarded_eval(
2549 2549 prior_tuple_keys,
2550 2550 EvaluationContext(
2551 2551 globals=self.global_namespace,
2552 2552 locals=self.namespace,
2553 evaluation=self.evaluation,
2553 evaluation=self.evaluation, # type: ignore
2554 2554 in_subscript=True,
2555 2555 ),
2556 2556 )
2557 2557
2558 2558 closing_quote, token_offset, matches = match_dict_keys(
2559 2559 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2560 2560 )
2561 2561 if not matches:
2562 2562 return []
2563 2563
2564 2564 # get the cursor position of
2565 2565 # - the text being completed
2566 2566 # - the start of the key text
2567 2567 # - the start of the completion
2568 2568 text_start = len(self.text_until_cursor) - len(text)
2569 2569 if key_prefix:
2570 2570 key_start = match.start(3)
2571 2571 completion_start = key_start + token_offset
2572 2572 else:
2573 2573 key_start = completion_start = match.end()
2574 2574
2575 2575 # grab the leading prefix, to make sure all completions start with `text`
2576 2576 if text_start > key_start:
2577 2577 leading = ''
2578 2578 else:
2579 2579 leading = text[text_start:completion_start]
2580 2580
2581 2581 # append closing quote and bracket as appropriate
2582 2582 # this is *not* appropriate if the opening quote or bracket is outside
2583 2583 # the text given to this method, e.g. `d["""a\nt
2584 2584 can_close_quote = False
2585 2585 can_close_bracket = False
2586 2586
2587 2587 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2588 2588
2589 2589 if continuation.startswith(closing_quote):
2590 2590 # do not close if already closed, e.g. `d['a<tab>'`
2591 2591 continuation = continuation[len(closing_quote) :]
2592 2592 else:
2593 2593 can_close_quote = True
2594 2594
2595 2595 continuation = continuation.strip()
2596 2596
2597 2597 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2598 2598 # handling it is out of scope, so let's avoid appending suffixes.
2599 2599 has_known_tuple_handling = isinstance(obj, dict)
2600 2600
2601 2601 can_close_bracket = (
2602 2602 not continuation.startswith("]") and self.auto_close_dict_keys
2603 2603 )
2604 2604 can_close_tuple_item = (
2605 2605 not continuation.startswith(",")
2606 2606 and has_known_tuple_handling
2607 2607 and self.auto_close_dict_keys
2608 2608 )
2609 2609 can_close_quote = can_close_quote and self.auto_close_dict_keys
2610 2610
2611 2611 # fast path when no suffix (closing quote or bracket) may be appended
2612 2612 if not can_close_quote and not can_close_bracket and closing_quote:
2613 2613 return [leading + k for k in matches]
2614 2614
2615 2615 results = []
2616 2616
2617 2617 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2618 2618
2619 2619 for k, state_flag in matches.items():
2620 2620 result = leading + k
2621 2621 if can_close_quote and closing_quote:
2622 2622 result += closing_quote
2623 2623
2624 2624 if state_flag == end_of_tuple_or_item:
2625 2625 # We do not know which suffix to add,
2626 2626 # e.g. both tuple item and string
2627 2627 # match this item.
2628 2628 pass
2629 2629
2630 2630 if state_flag in end_of_tuple_or_item and can_close_bracket:
2631 2631 result += "]"
2632 2632 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2633 2633 result += ", "
2634 2634 results.append(result)
2635 2635 return results
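# Rough walk-through (illustrative, exact strings depend on configuration):
# with ``d = {'hello': 1}`` and the user typing ``d['he``, dict_key_matches
# completes the key to ``hello`` followed by the closing quote and, when
# auto_close_dict_keys is enabled, the closing bracket as well.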
2636 2636
2637 2637 @context_matcher()
2638 2638 def unicode_name_matcher(self, context: CompletionContext):
2639 2639 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2640 2640 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2641 2641 return _convert_matcher_v1_result_to_v2(
2642 2642 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2643 2643 )
2644 2644
2645 2645 @staticmethod
2646 2646 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2647 2647 """Match Latex-like syntax for unicode characters base
2648 2648 on the name of the character.
2649 2649
2650 2650 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2651 2651
2652 2652 Works only on valid python 3 identifier, or on combining characters that
2653 2653 will combine to form a valid identifier.
2654 2654 """
2655 2655 slashpos = text.rfind('\\')
2656 2656 if slashpos > -1:
2657 2657 s = text[slashpos+1:]
2658 2658 try :
2659 2659 unic = unicodedata.lookup(s)
2660 2660 # allow combining chars
2661 2661 if ('a'+unic).isidentifier():
2662 2662 return '\\'+s,[unic]
2663 2663 except KeyError:
2664 2664 pass
2665 2665 return '', []
2666 2666
2667 2667 @context_matcher()
2668 2668 def latex_name_matcher(self, context: CompletionContext):
2669 2669 """Match Latex syntax for unicode characters.
2670 2670
2671 2671 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2672 2672 """
2673 2673 fragment, matches = self.latex_matches(context.text_until_cursor)
2674 2674 return _convert_matcher_v1_result_to_v2(
2675 2675 matches, type="latex", fragment=fragment, suppress_if_matches=True
2676 2676 )
2677 2677
2678 2678 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2679 2679 """Match Latex syntax for unicode characters.
2680 2680
2681 2681 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2682 2682
2683 2683 .. deprecated:: 8.6
2684 2684 You can use :meth:`latex_name_matcher` instead.
2685 2685 """
2686 2686 slashpos = text.rfind('\\')
2687 2687 if slashpos > -1:
2688 2688 s = text[slashpos:]
2689 2689 if s in latex_symbols:
2690 2690 # Try to complete a full latex symbol to unicode
2691 2691 # \\alpha -> Ξ±
2692 2692 return s, [latex_symbols[s]]
2693 2693 else:
2694 2694 # If a user has partially typed a latex symbol, give them
2695 2695 # a full list of options \al -> [\aleph, \alpha]
2696 2696 matches = [k for k in latex_symbols if k.startswith(s)]
2697 2697 if matches:
2698 2698 return s, matches
2699 2699 return '', ()
2700 2700
2701 2701 @context_matcher()
2702 2702 def custom_completer_matcher(self, context):
2703 2703 """Dispatch custom completer.
2704 2704
2705 2705 If a match is found, suppresses all other matchers except for Jedi.
2706 2706 """
2707 2707 matches = self.dispatch_custom_completer(context.token) or []
2708 2708 result = _convert_matcher_v1_result_to_v2(
2709 2709 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2710 2710 )
2711 2711 result["ordered"] = True
2712 2712 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2713 2713 return result
2714 2714
2715 2715 def dispatch_custom_completer(self, text):
2716 2716 """
2717 2717 .. deprecated:: 8.6
2718 2718 You can use :meth:`custom_completer_matcher` instead.
2719 2719 """
2720 2720 if not self.custom_completers:
2721 2721 return
2722 2722
2723 2723 line = self.line_buffer
2724 2724 if not line.strip():
2725 2725 return None
2726 2726
2727 2727 # Create a little structure to pass all the relevant information about
2728 2728 # the current completion to any custom completer.
2729 2729 event = SimpleNamespace()
2730 2730 event.line = line
2731 2731 event.symbol = text
2732 2732 cmd = line.split(None,1)[0]
2733 2733 event.command = cmd
2734 2734 event.text_until_cursor = self.text_until_cursor
2735 2735
2736 2736 # for foo etc, try also to find completer for %foo
2737 2737 if not cmd.startswith(self.magic_escape):
2738 2738 try_magic = self.custom_completers.s_matches(
2739 2739 self.magic_escape + cmd)
2740 2740 else:
2741 2741 try_magic = []
2742 2742
2743 2743 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2744 2744 try_magic,
2745 2745 self.custom_completers.flat_matches(self.text_until_cursor)):
2746 2746 try:
2747 2747 res = c(event)
2748 2748 if res:
2749 2749 # first, try case sensitive match
2750 2750 withcase = [r for r in res if r.startswith(text)]
2751 2751 if withcase:
2752 2752 return withcase
2753 2753 # if none, then case insensitive ones are ok too
2754 2754 text_low = text.lower()
2755 2755 return [r for r in res if r.lower().startswith(text_low)]
2756 2756 except TryNext:
2757 2757 pass
2758 2758 except KeyboardInterrupt:
2759 2759 """
2760 2760 If a custom completer takes too long,
2761 2761 let the keyboard interrupt abort it and return nothing.
2762 2762 """
2763 2763 break
2764 2764
2765 2765 return None
2766 2766
2767 2767 def completions(self, text: str, offset: int)->Iterator[Completion]:
2768 2768 """
2769 2769 Returns an iterator over the possible completions
2770 2770
2771 2771 .. warning::
2772 2772
2773 2773 Unstable
2774 2774
2775 2775 This function is unstable, API may change without warning.
2776 2776 It will also raise unless used in the proper context manager.
2777 2777
2778 2778 Parameters
2779 2779 ----------
2780 2780 text : str
2781 2781 Full text of the current input, multi line string.
2782 2782 offset : int
2783 2783 Integer representing the position of the cursor in ``text``. Offset
2784 2784 is 0-based indexed.
2785 2785
2786 2786 Yields
2787 2787 ------
2788 2788 Completion
2789 2789
2790 2790 Notes
2791 2791 -----
2792 2792 The cursor in a text can either be seen as being "in between"
2793 2793 characters or "on" a character, depending on the interface visible to
2794 2794 the user. For consistency, the cursor being "in between" characters X
2795 2795 and Y is equivalent to the cursor being "on" character Y, that is to say
2796 2796 the character the cursor is on is considered as being after the cursor.
2797 2797
2798 2798 Combining characters may span more than one position in the
2799 2799 text.
2800 2800
2801 2801 .. note::
2802 2802
2803 2803 If ``IPCompleter.debug`` is :any:`True` will yield a ``--jedi/ipython--``
2804 2804 fake Completion token to distinguish completion returned by Jedi
2805 2805 and usual IPython completion.
2806 2806
2807 2807 .. note::
2808 2808
2809 2809 Completions are not completely deduplicated yet. If identical
2810 2810 completions are coming from different sources this function does not
2811 2811 ensure that each completion object will only be present once.
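
        Examples
        --------
        Rough usage sketch (``ip`` below stands for the running InteractiveShell
        instance; unstable API, shown for illustration only)::

            with provisionalcompleter():
                completions = list(ip.Completer.completions("myvar.bi", 8))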
2812 2812 """
2813 2813 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2814 2814 "It may change without warnings. "
2815 2815 "Use in corresponding context manager.",
2816 2816 category=ProvisionalCompleterWarning, stacklevel=2)
2817 2817
2818 2818 seen = set()
2819 2819 profiler:Optional[cProfile.Profile]
2820 2820 try:
2821 2821 if self.profile_completions:
2822 2822 import cProfile
2823 2823 profiler = cProfile.Profile()
2824 2824 profiler.enable()
2825 2825 else:
2826 2826 profiler = None
2827 2827
2828 2828 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2829 2829 if c and (c in seen):
2830 2830 continue
2831 2831 yield c
2832 2832 seen.add(c)
2833 2833 except KeyboardInterrupt:
2834 2834 """if completions take too long and users send keyboard interrupt,
2835 2835 do not crash and return ASAP. """
2836 2836 pass
2837 2837 finally:
2838 2838 if profiler is not None:
2839 2839 profiler.disable()
2840 2840 ensure_dir_exists(self.profiler_output_dir)
2841 2841 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2842 2842 print("Writing profiler output to", output_path)
2843 2843 profiler.dump_stats(output_path)
2844 2844
2845 2845 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2846 2846 """
2847 2847 Core completion routine. Same signature as :any:`completions`, with the
2848 2848 extra ``_timeout`` parameter (in seconds).
2849 2849
2850 2850 Computing jedi's completion ``.type`` can be quite expensive (it is a
2851 2851 lazy property) and can require some warm-up, more warm up than just
2852 2852 computing the ``name`` of a completion. The warm-up can be :
2853 2853
2854 2854 - Long warm-up the first time a module is encountered after
2855 2855 install/update: actually build parse/inference tree.
2856 2856
2857 2857 - first time the module is encountered in a session: load tree from
2858 2858 disk.
2859 2859
2860 2860 We don't want to block completions for tens of seconds so we give the
2861 2861 completer a "budget" of ``_timeout`` seconds per invocation to compute
2862 2862 completion types; the completions that have not yet been computed will
2863 2863 be marked as "unknown" and will have a chance to be computed in the next
2864 2864 round, as things get cached.
2865 2865
2866 2866 Keep in mind that Jedi is not the only thing processing the completion, so
2867 2867 keep the timeout short-ish: if we take more than 0.3 seconds, we still
2868 2868 have lots of processing to do.
2869 2869
2870 2870 """
2871 2871 deadline = time.monotonic() + _timeout
2872 2872
2873 2873 before = full_text[:offset]
2874 2874 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2875 2875
2876 2876 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2877 2877
2878 2878 def is_non_jedi_result(
2879 2879 result: MatcherResult, identifier: str
2880 2880 ) -> TypeGuard[SimpleMatcherResult]:
2881 2881 return identifier != jedi_matcher_id
2882 2882
2883 2883 results = self._complete(
2884 2884 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2885 2885 )
2886 2886
2887 2887 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2888 2888 identifier: result
2889 2889 for identifier, result in results.items()
2890 2890 if is_non_jedi_result(result, identifier)
2891 2891 }
2892 2892
2893 2893 jedi_matches = (
2894 2894 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2895 2895 if jedi_matcher_id in results
2896 2896 else ()
2897 2897 )
2898 2898
2899 2899 iter_jm = iter(jedi_matches)
2900 2900 if _timeout:
2901 2901 for jm in iter_jm:
2902 2902 try:
2903 2903 type_ = jm.type
2904 2904 except Exception:
2905 2905 if self.debug:
2906 2906 print("Error in Jedi getting type of ", jm)
2907 2907 type_ = None
2908 2908 delta = len(jm.name_with_symbols) - len(jm.complete)
2909 2909 if type_ == 'function':
2910 2910 signature = _make_signature(jm)
2911 2911 else:
2912 2912 signature = ''
2913 2913 yield Completion(start=offset - delta,
2914 2914 end=offset,
2915 2915 text=jm.name_with_symbols,
2916 2916 type=type_,
2917 2917 signature=signature,
2918 2918 _origin='jedi')
2919 2919
2920 2920 if time.monotonic() > deadline:
2921 2921 break
2922 2922
2923 2923 for jm in iter_jm:
2924 2924 delta = len(jm.name_with_symbols) - len(jm.complete)
2925 2925 yield Completion(
2926 2926 start=offset - delta,
2927 2927 end=offset,
2928 2928 text=jm.name_with_symbols,
2929 2929 type=_UNKNOWN_TYPE, # don't compute type for speed
2930 2930 _origin="jedi",
2931 2931 signature="",
2932 2932 )
2933 2933
2934 2934 # TODO:
2935 2935 # Suppress this, right now just for debug.
2936 2936 if jedi_matches and non_jedi_results and self.debug:
2937 2937 some_start_offset = before.rfind(
2938 2938 next(iter(non_jedi_results.values()))["matched_fragment"]
2939 2939 )
2940 2940 yield Completion(
2941 2941 start=some_start_offset,
2942 2942 end=offset,
2943 2943 text="--jedi/ipython--",
2944 2944 _origin="debug",
2945 2945 type="none",
2946 2946 signature="",
2947 2947 )
2948 2948
2949 2949 ordered: List[Completion] = []
2950 2950 sortable: List[Completion] = []
2951 2951
2952 2952 for origin, result in non_jedi_results.items():
2953 2953 matched_text = result["matched_fragment"]
2954 2954 start_offset = before.rfind(matched_text)
2955 2955 is_ordered = result.get("ordered", False)
2956 2956 container = ordered if is_ordered else sortable
2957 2957
2958 2958 # I'm unsure if this is always true, so let's assert and see if it
2959 2959 # crashes
2960 2960 assert before.endswith(matched_text)
2961 2961
2962 2962 for simple_completion in result["completions"]:
2963 2963 completion = Completion(
2964 2964 start=start_offset,
2965 2965 end=offset,
2966 2966 text=simple_completion.text,
2967 2967 _origin=origin,
2968 2968 signature="",
2969 2969 type=simple_completion.type or _UNKNOWN_TYPE,
2970 2970 )
2971 2971 container.append(completion)
2972 2972
2973 2973 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2974 2974 :MATCHES_LIMIT
2975 2975 ]
2976 2976
2977 2977 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2978 2978 """Find completions for the given text and line context.
2979 2979
2980 2980 Note that both the text and the line_buffer are optional, but at least
2981 2981 one of them must be given.
2982 2982
2983 2983 Parameters
2984 2984 ----------
2985 2985 text : string, optional
2986 2986 Text to perform the completion on. If not given, the line buffer
2987 2987 is split using the instance's CompletionSplitter object.
2988 2988 line_buffer : string, optional
2989 2989 If not given, the completer attempts to obtain the current line
2990 2990 buffer via readline. This keyword allows clients which are
2991 2991 requesting for text completions in non-readline contexts to inform
2992 2992 the completer of the entire text.
2993 2993 cursor_pos : int, optional
2994 2994 Index of the cursor in the full line buffer. Should be provided by
2995 2995 remote frontends where kernel has no access to frontend state.
2996 2996
2997 2997 Returns
2998 2998 -------
2999 2999 Tuple of two items:
3000 3000 text : str
3001 3001 Text that was actually used in the completion.
3002 3002 matches : list
3003 3003 A list of completion matches.
3004 3004
3005 3005 Notes
3006 3006 -----
3007 3007 This API is likely to be deprecated and replaced by
3008 3008 :any:`IPCompleter.completions` in the future.
3009 3009
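        Examples
        --------
        Rough sketch (``ip`` stands for the running InteractiveShell instance;
        the exact matches depend on the user namespace)::

            text, matches = ip.Completer.complete(line_buffer="import o", cursor_pos=8)
            # text == "o"; matches might include "os", "operator", ...
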
3010 3010 """
3011 3011 warnings.warn('`Completer.complete` is pending deprecation since '
3012 3012 'IPython 6.0 and will be replaced by `Completer.completions`.',
3013 3013 PendingDeprecationWarning)
3014 3014 # potential todo: fold the third, throwaway argument of _complete
3015 3015 # into the first two.
3016 3016 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3017 3017 # TODO: should we deprecate now, or does it stay?
3018 3018
3019 3019 results = self._complete(
3020 3020 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3021 3021 )
3022 3022
3023 3023 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3024 3024
3025 3025 return self._arrange_and_extract(
3026 3026 results,
3027 3027 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3028 3028 skip_matchers={jedi_matcher_id},
3029 3029 # this API does not support different start/end positions (fragments of token).
3030 3030 abort_if_offset_changes=True,
3031 3031 )
3032 3032
3033 3033 def _arrange_and_extract(
3034 3034 self,
3035 3035 results: Dict[str, MatcherResult],
3036 3036 skip_matchers: Set[str],
3037 3037 abort_if_offset_changes: bool,
3038 3038 ):
3039 3039 sortable: List[AnyMatcherCompletion] = []
3040 3040 ordered: List[AnyMatcherCompletion] = []
3041 3041 most_recent_fragment = None
3042 3042 for identifier, result in results.items():
3043 3043 if identifier in skip_matchers:
3044 3044 continue
3045 3045 if not result["completions"]:
3046 3046 continue
3047 3047 if not most_recent_fragment:
3048 3048 most_recent_fragment = result["matched_fragment"]
3049 3049 if (
3050 3050 abort_if_offset_changes
3051 3051 and result["matched_fragment"] != most_recent_fragment
3052 3052 ):
3053 3053 break
3054 3054 if result.get("ordered", False):
3055 3055 ordered.extend(result["completions"])
3056 3056 else:
3057 3057 sortable.extend(result["completions"])
3058 3058
3059 3059 if not most_recent_fragment:
3060 3060 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3061 3061
3062 3062 return most_recent_fragment, [
3063 3063 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3064 3064 ]
3065 3065
3066 3066 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3067 3067 full_text=None) -> _CompleteResult:
3068 3068 """
3069 3069 Like complete, but can also return raw Jedi completions as well as the
3070 3070 origin of the completion text. This could (and should) be made much
3071 3071 cleaner, but that will be simpler once we drop the old (and stateful)
3072 3072 :any:`complete` API.
3073 3073
3074 3074 With the current provisional API, ``cursor_pos`` acts (depending on the
3075 3075 caller) either as the offset in ``text`` or ``line_buffer``, or as the
3076 3076 ``column`` when passing multiline strings; this could/should be renamed,
3077 3077 but would add extra noise.
3078 3078
3079 3079 Parameters
3080 3080 ----------
3081 3081 cursor_line
3082 3082 Index of the line the cursor is on. 0 indexed.
3083 3083 cursor_pos
3084 3084 Position of the cursor in the current line/line_buffer/text. 0
3085 3085 indexed.
3086 3086 line_buffer : optional, str
3087 3087 The current line the cursor is in; this exists mostly for the legacy
3088 3088 reason that readline could only give us the single current line.
3089 3089 Prefer `full_text`.
3090 3090 text : str
3091 3091 The current "token" the cursor is in, also mostly for historical
3092 3092 reasons, as the completer would trigger only after the current line
3093 3093 was parsed.
3094 3094 full_text : str
3095 3095 Full text of the current cell.
3096 3096
3097 3097 Returns
3098 3098 -------
3099 3099 An ordered dictionary where keys are identifiers of completion
3100 3100 matchers and values are ``MatcherResult``s.
3101 3101 """
3102 3102
3103 3103 # if the cursor position isn't given, the only sane assumption we can
3104 3104 # make is that it's at the end of the line (the common case)
3105 3105 if cursor_pos is None:
3106 3106 cursor_pos = len(line_buffer) if text is None else len(text)
3107 3107
3108 3108 if self.use_main_ns:
3109 3109 self.namespace = __main__.__dict__
3110 3110
3111 3111 # if text is either None or an empty string, rely on the line buffer
3112 3112 if (not line_buffer) and full_text:
3113 3113 line_buffer = full_text.split('\n')[cursor_line]
3114 3114 if not text: # issue #11508: check line_buffer before calling split_line
3115 3115 text = (
3116 3116 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3117 3117 )
3118 3118
3119 3119 # If no line buffer is given, assume the input text is all there was
3120 3120 if line_buffer is None:
3121 3121 line_buffer = text
3122 3122
3123 3123 # deprecated - do not use `line_buffer` in new code.
3124 3124 self.line_buffer = line_buffer
3125 3125 self.text_until_cursor = self.line_buffer[:cursor_pos]
3126 3126
3127 3127 if not full_text:
3128 3128 full_text = line_buffer
3129 3129
3130 3130 context = CompletionContext(
3131 3131 full_text=full_text,
3132 3132 cursor_position=cursor_pos,
3133 3133 cursor_line=cursor_line,
3134 3134 token=text,
3135 3135 limit=MATCHES_LIMIT,
3136 3136 )
3137 3137
3138 3138 # Start with a clean slate of completions
3139 3139 results: Dict[str, MatcherResult] = {}
3140 3140
3141 3141 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3142 3142
3143 3143 suppressed_matchers: Set[str] = set()
3144 3144
3145 3145 matchers = {
3146 3146 _get_matcher_id(matcher): matcher
3147 3147 for matcher in sorted(
3148 3148 self.matchers, key=_get_matcher_priority, reverse=True
3149 3149 )
3150 3150 }
3151 3151
3152 3152 for matcher_id, matcher in matchers.items():
3153 3153 matcher_id = _get_matcher_id(matcher)
3154 3154
3155 3155 if matcher_id in self.disable_matchers:
3156 3156 continue
3157 3157
3158 3158 if matcher_id in results:
3159 3159 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3160 3160
3161 3161 if matcher_id in suppressed_matchers:
3162 3162 continue
3163 3163
3164 3164 result: MatcherResult
3165 3165 try:
3166 3166 if _is_matcher_v1(matcher):
3167 3167 result = _convert_matcher_v1_result_to_v2(
3168 3168 matcher(text), type=_UNKNOWN_TYPE
3169 3169 )
3170 3170 elif _is_matcher_v2(matcher):
3171 3171 result = matcher(context)
3172 3172 else:
3173 3173 api_version = _get_matcher_api_version(matcher)
3174 3174 raise ValueError(f"Unsupported API version {api_version}")
3175 3175 except:
3176 3176 # Show the ugly traceback if the matcher causes an
3177 3177 # exception, but do NOT crash the kernel!
3178 3178 sys.excepthook(*sys.exc_info())
3179 3179 continue
3180 3180
3181 3181 # set default value for matched fragment if suffix was not selected.
3182 3182 result["matched_fragment"] = result.get("matched_fragment", context.token)
3183 3183
3184 3184 if not suppressed_matchers:
3185 3185 suppression_recommended: Union[bool, Set[str]] = result.get(
3186 3186 "suppress", False
3187 3187 )
3188 3188
3189 3189 suppression_config = (
3190 3190 self.suppress_competing_matchers.get(matcher_id, None)
3191 3191 if isinstance(self.suppress_competing_matchers, dict)
3192 3192 else self.suppress_competing_matchers
3193 3193 )
3194 3194 should_suppress = (
3195 3195 (suppression_config is True)
3196 3196 or (suppression_recommended and (suppression_config is not False))
3197 3197 ) and has_any_completions(result)
3198 3198
3199 3199 if should_suppress:
3200 3200 suppression_exceptions: Set[str] = result.get(
3201 3201 "do_not_suppress", set()
3202 3202 )
3203 3203 if isinstance(suppression_recommended, Iterable):
3204 3204 to_suppress = set(suppression_recommended)
3205 3205 else:
3206 3206 to_suppress = set(matchers)
3207 3207 suppressed_matchers = to_suppress - suppression_exceptions
3208 3208
3209 3209 new_results = {}
3210 3210 for previous_matcher_id, previous_result in results.items():
3211 3211 if previous_matcher_id not in suppressed_matchers:
3212 3212 new_results[previous_matcher_id] = previous_result
3213 3213 results = new_results
3214 3214
3215 3215 results[matcher_id] = result
3216 3216
3217 3217 _, matches = self._arrange_and_extract(
3218 3218 results,
3219 3219 # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission?
3220 3220 # If it was an omission, we can remove the filtering step; otherwise remove this comment.
3221 3221 skip_matchers={jedi_matcher_id},
3222 3222 abort_if_offset_changes=False,
3223 3223 )
3224 3224
3225 3225 # populate legacy stateful API
3226 3226 self.matches = matches
3227 3227
3228 3228 return results
3229 3229
3230 3230 @staticmethod
3231 3231 def _deduplicate(
3232 3232 matches: Sequence[AnyCompletion],
3233 3233 ) -> Iterable[AnyCompletion]:
3234 3234 filtered_matches: Dict[str, AnyCompletion] = {}
3235 3235 for match in matches:
3236 3236 text = match.text
3237 3237 if (
3238 3238 text not in filtered_matches
3239 3239 or filtered_matches[text].type == _UNKNOWN_TYPE
3240 3240 ):
3241 3241 filtered_matches[text] = match
3242 3242
3243 3243 return filtered_matches.values()
3244 3244
3245 3245 @staticmethod
3246 3246 def _sort(matches: Sequence[AnyCompletion]):
3247 3247 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3248 3248
3249 3249 @context_matcher()
3250 3250 def fwd_unicode_matcher(self, context: CompletionContext):
3251 3251 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3252 3252 # TODO: use `context.limit` to terminate early once we matched the maximum
3253 3253 # number that will be used downstream; can be added as an optional to
3254 3254 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3255 3255 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3256 3256 return _convert_matcher_v1_result_to_v2(
3257 3257 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3258 3258 )
3259 3259
3260 3260 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3261 3261 """
3262 3262 Forward match a string starting with a backslash with a list of
3263 3263 potential Unicode completions.
3264 3264
3265 3265 Will compute list of Unicode character names on first call and cache it.
3266 3266
3267 3267 .. deprecated:: 8.6
3268 3268 You can use :meth:`fwd_unicode_matcher` instead.
3269 3269
3270 3270 Returns
3271 3271 -------
3272 3272 A tuple with:
3273 3273 - matched text (empty if no matches)
3274 3274 - list of potential completions (empty tuple if no matches)
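
3274 Examples
3274 --------
3274 Illustrative only; the exact candidate list depends on the Unicode data
3274 shipped with the running Python (``ip = get_ipython()`` is assumed):

3274 >>> fragment, names = ip.Completer.fwd_unicode_match("\\GREEK SMALL LETTER ALP")
3274 >>> fragment
3274 'GREEK SMALL LETTER ALP'
3274 >>> "GREEK SMALL LETTER ALPHA" in names
3274 True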
3275 3275 """
3276 3276 # TODO: self.unicode_names is a list of ~100k elements that we traverse on every call.
3277 3277 # We could do a faster match using a Trie.
3278 3278
3279 3279 # Using pygtrie, the following seems to work:
3280 3280
3281 3281 # s = PrefixSet()
3282 3282
3283 3283 # for c in range(0,0x10FFFF + 1):
3284 3284 # try:
3285 3285 # s.add(unicodedata.name(chr(c)))
3286 3286 # except ValueError:
3287 3287 # pass
3288 3288 # [''.join(k) for k in s.iter(prefix)]
3289 3289
3290 3290 # But need to be timed and adds an extra dependency.
3291 3291
3292 3292 slashpos = text.rfind('\\')
3293 3293 # if text starts with slash
3294 3294 if slashpos > -1:
3295 3295 # PERF: It's important that we don't access self._unicode_names
3296 3296 # until we're inside this if-block. _unicode_names is lazily
3297 3297 # initialized, and it takes a user-noticeable amount of time to
3298 3298 # initialize it, so we don't want to initialize it unless we're
3299 3299 # actually going to use it.
3300 3300 s = text[slashpos + 1 :]
3301 3301 sup = s.upper()
3302 3302 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3303 3303 if candidates:
3304 3304 return s, candidates
3305 3305 candidates = [x for x in self.unicode_names if sup in x]
3306 3306 if candidates:
3307 3307 return s, candidates
3308 3308 splitsup = sup.split(" ")
3309 3309 candidates = [
3310 3310 x for x in self.unicode_names if all(u in x for u in splitsup)
3311 3311 ]
3312 3312 if candidates:
3313 3313 return s, candidates
3314 3314
3315 3315 return "", ()
3316 3316
3317 3317 # if text does not start with slash
3318 3318 else:
3319 3319 return '', ()
3320 3320
3321 3321 @property
3322 3322 def unicode_names(self) -> List[str]:
3323 3323 """List of names of unicode code points that can be completed.
3324 3324
3325 3325 The list is lazily initialized on first access.
3326 3326 """
3327 3327 if self._unicode_names is None:
3328 3328 # Compute lazily from the predefined ranges instead of scanning
3329 3329 # every code point up to 0x10FFFF, which is noticeably slow.
3334 3334 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3335 3335
3336 3336 return self._unicode_names
3337 3337
3338 3338 def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
3339 3339 names = []
3340 3340 for start, stop in ranges:
3341 3341 for c in range(start, stop):
3342 3342 try:
3343 3343 names.append(unicodedata.name(chr(c)))
3344 3344 except ValueError:
3345 3345 pass
3346 3346 return names
@@ -1,1031 +1,1031 b''
1 1 # -*- coding: utf-8 -*-
2 2 """Display formatters.
3 3
4 4 Inheritance diagram:
5 5
6 6 .. inheritance-diagram:: IPython.core.formatters
7 7 :parts: 3
8 8 """
9 9
10 10 # Copyright (c) IPython Development Team.
11 11 # Distributed under the terms of the Modified BSD License.
12 12
13 13 import abc
14 14 import sys
15 15 import traceback
16 16 import warnings
17 17 from io import StringIO
18 18
19 19 from decorator import decorator
20 20
21 21 from traitlets.config.configurable import Configurable
22 22 from .getipython import get_ipython
23 23 from ..utils.sentinel import Sentinel
24 24 from ..utils.dir2 import get_real_method
25 25 from ..lib import pretty
26 26 from traitlets import (
27 27 Bool, Dict, Integer, Unicode, CUnicode, ObjectName, List,
28 28 ForwardDeclaredInstance,
29 29 default, observe,
30 30 )
31 31
32 32 from typing import Any
33 33
34 34
35 35 class DisplayFormatter(Configurable):
36 36
37 37 active_types = List(Unicode(),
38 38 help="""List of currently active mime-types to display.
39 39 You can use this to set a white-list for formats to display.
40 40
41 41 Most users will not need to change this value.
42 42 """).tag(config=True)
43 43
44 44 @default('active_types')
45 45 def _active_types_default(self):
46 46 return self.format_types
47 47
48 48 @observe('active_types')
49 49 def _active_types_changed(self, change):
50 50 for key, formatter in self.formatters.items():
51 51 if key in change['new']:
52 52 formatter.enabled = True
53 53 else:
54 54 formatter.enabled = False
55 55
56 ipython_display_formatter = ForwardDeclaredInstance("FormatterABC")
56 ipython_display_formatter = ForwardDeclaredInstance("FormatterABC") # type: ignore
57 57
58 58 @default("ipython_display_formatter")
59 59 def _default_formatter(self):
60 60 return IPythonDisplayFormatter(parent=self)
61 61
62 mimebundle_formatter = ForwardDeclaredInstance("FormatterABC")
62 mimebundle_formatter = ForwardDeclaredInstance("FormatterABC") # type: ignore
63 63
64 64 @default("mimebundle_formatter")
65 65 def _default_mime_formatter(self):
66 66 return MimeBundleFormatter(parent=self)
67 67
68 68 # A dict of formatters whose keys are format types (MIME types) and whose
69 69 # values are instances of BaseFormatter subclasses.
70 70 formatters = Dict()
71 71
72 72 @default("formatters")
73 73 def _formatters_default(self):
74 74 """Activate the default formatters."""
75 75 formatter_classes = [
76 76 PlainTextFormatter,
77 77 HTMLFormatter,
78 78 MarkdownFormatter,
79 79 SVGFormatter,
80 80 PNGFormatter,
81 81 PDFFormatter,
82 82 JPEGFormatter,
83 83 LatexFormatter,
84 84 JSONFormatter,
85 85 JavascriptFormatter
86 86 ]
87 87 d = {}
88 88 for cls in formatter_classes:
89 89 f = cls(parent=self)
90 90 d[f.format_type] = f
91 91 return d
92 92
93 93 def format(self, obj, include=None, exclude=None):
94 94 """Return a format data dict for an object.
95 95
96 96 By default all format types will be computed.
97 97
98 98 The following MIME types are usually implemented:
99 99
100 100 * text/plain
101 101 * text/html
102 102 * text/markdown
103 103 * text/latex
104 104 * application/json
105 105 * application/javascript
106 106 * application/pdf
107 107 * image/png
108 108 * image/jpeg
109 109 * image/svg+xml
110 110
111 111 Parameters
112 112 ----------
113 113 obj : object
114 114 The Python object whose format data will be computed.
115 115 include : list, tuple or set; optional
116 116 A list of format type strings (MIME types) to include in the
117 117 format data dict. If this is set *only* the format types included
118 118 in this list will be computed.
119 119 exclude : list, tuple or set; optional
120 120 A list of format type string (MIME types) to exclude in the format
121 121 data dict. If this is set all format types will be computed,
122 122 except for those included in this argument.
123 123 Mimetypes present in exclude will take precedence over the ones in include.
124 124
125 125 Returns
126 126 -------
127 127 (format_dict, metadata_dict) : tuple of two dicts
128 128 format_dict is a dictionary of key/value pairs, one for each format that was
129 129 generated for the object. The keys are the format types, which
130 130 will usually be MIME type strings, and the values are JSON'able
131 131 data structures containing the raw data for the representation in
132 132 that format.
133 133
134 134 metadata_dict is a dictionary of metadata about each mime-type output.
135 135 Its keys will be a strict subset of the keys in format_dict.
136 136
137 137 Notes
138 138 -----
139 139 If an object implements `_repr_mimebundle_` as well as various
140 140 `_repr_*_` methods, the data returned by `_repr_mimebundle_` will take
141 141 precedence and the corresponding `_repr_*_` for this mimetype will
142 142 not be called.
143 143
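143 Examples
143 --------
143 A hedged sketch (assuming an interactive IPython session; the exact set
143 of mime-types depends on the formatters that are enabled):

143 >>> fmt = get_ipython().display_formatter
143 >>> data, metadata = fmt.format(5)
143 >>> data['text/plain']
143 '5'
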
144 144 """
145 145 format_dict = {}
146 146 md_dict = {}
147 147
148 148 if self.ipython_display_formatter(obj):
149 149 # object handled itself, don't proceed
150 150 return {}, {}
151 151
152 152 format_dict, md_dict = self.mimebundle_formatter(obj, include=include, exclude=exclude)
153 153
154 154 if format_dict or md_dict:
155 155 if include:
156 156 format_dict = {k:v for k,v in format_dict.items() if k in include}
157 157 md_dict = {k:v for k,v in md_dict.items() if k in include}
158 158 if exclude:
159 159 format_dict = {k:v for k,v in format_dict.items() if k not in exclude}
160 160 md_dict = {k:v for k,v in md_dict.items() if k not in exclude}
161 161
162 162 for format_type, formatter in self.formatters.items():
163 163 if format_type in format_dict:
164 164 # already got it from mimebundle, maybe don't render again.
165 165 # exception: manually registered per-mime renderer
166 166 # check priority:
167 167 # 1. user-registered per-mime formatter
168 168 # 2. mime-bundle (user-registered or repr method)
169 169 # 3. default per-mime formatter (e.g. repr method)
170 170 try:
171 171 formatter.lookup(obj)
172 172 except KeyError:
173 173 # no special formatter, use mime-bundle-provided value
174 174 continue
175 175 if include and format_type not in include:
176 176 continue
177 177 if exclude and format_type in exclude:
178 178 continue
179 179
180 180 md = None
181 181 try:
182 182 data = formatter(obj)
183 183 except:
184 184 # FIXME: log the exception
185 185 raise
186 186
187 187 # formatters can return raw data or (data, metadata)
188 188 if isinstance(data, tuple) and len(data) == 2:
189 189 data, md = data
190 190
191 191 if data is not None:
192 192 format_dict[format_type] = data
193 193 if md is not None:
194 194 md_dict[format_type] = md
195 195 return format_dict, md_dict
196 196
197 197 @property
198 198 def format_types(self):
199 199 """Return the format types (MIME types) of the active formatters."""
200 200 return list(self.formatters.keys())
201 201
202 202
203 203 #-----------------------------------------------------------------------------
204 204 # Formatters for specific format types (text, html, svg, etc.)
205 205 #-----------------------------------------------------------------------------
206 206
207 207
208 208 def _safe_repr(obj):
209 209 """Try to return a repr of an object
210 210
211 211 always returns a string, at least.
212 212 """
213 213 try:
214 214 return repr(obj)
215 215 except Exception as e:
216 216 return "un-repr-able object (%r)" % e
217 217
218 218
219 219 class FormatterWarning(UserWarning):
220 220 """Warning class for errors in formatters"""
221 221
222 222 @decorator
223 223 def catch_format_error(method, self, *args, **kwargs):
224 224 """show traceback on failed format call"""
225 225 try:
226 226 r = method(self, *args, **kwargs)
227 227 except NotImplementedError:
228 228 # don't warn on NotImplementedErrors
229 229 return self._check_return(None, args[0])
230 230 except Exception:
231 231 exc_info = sys.exc_info()
232 232 ip = get_ipython()
233 233 if ip is not None:
234 234 ip.showtraceback(exc_info)
235 235 else:
236 236 traceback.print_exception(*exc_info)
237 237 return self._check_return(None, args[0])
238 238 return self._check_return(r, args[0])
239 239
240 240
241 241 class FormatterABC(metaclass=abc.ABCMeta):
242 242 """ Abstract base class for Formatters.
243 243
244 244 A formatter is a callable class that is responsible for computing the
245 245 raw format data for a particular format type (MIME type). For example,
246 246 an HTML formatter would have a format type of `text/html` and would return
247 247 the HTML representation of the object when called.
248 248 """
249 249
250 250 # The format type of the data returned, usually a MIME type.
251 251 format_type = 'text/plain'
252 252
253 253 # Is the formatter enabled...
254 254 enabled = True
255 255
256 256 @abc.abstractmethod
257 257 def __call__(self, obj):
258 258 """Return a JSON'able representation of the object.
259 259
260 260 If the object cannot be formatted by this formatter,
261 261 warn and return None.
262 262 """
263 263 return repr(obj)
264 264
265 265
266 266 def _mod_name_key(typ):
267 267 """Return a (__module__, __name__) tuple for a type.
268 268
269 269 Used as key in Formatter.deferred_printers.
270 270 """
271 271 module = getattr(typ, '__module__', None)
272 272 name = getattr(typ, '__name__', None)
273 273 return (module, name)
274 274
275 275
276 276 def _get_type(obj):
277 277 """Return the type of an instance (old and new-style)"""
278 278 return getattr(obj, '__class__', None) or type(obj)
279 279
280 280
281 281 _raise_key_error = Sentinel('_raise_key_error', __name__,
282 282 """
283 283 Special value to raise a KeyError
284 284
285 285 Raise KeyError in `BaseFormatter.pop` if passed as the default value to `pop`
286 286 """)
287 287
288 288
289 289 class BaseFormatter(Configurable):
290 290 """A base formatter class that is configurable.
291 291
292 292 This formatter should usually be used as the base class of all formatters.
293 293 It is a traited :class:`Configurable` class and includes an extensible
294 294 API for users to determine how their objects are formatted. The following
295 295 logic is used to find a function to format a given object.
296 296
297 297 1. The object is introspected to see if it has a method with the name
298 298 :attr:`print_method`. If it does, that object is passed to that method
299 299 for formatting.
300 300 2. If no print method is found, three internal dictionaries are consulted
301 301 to find a print method: :attr:`singleton_printers`, :attr:`type_printers`
302 302 and :attr:`deferred_printers`.
303 303
304 304 Users should use these dictionaries to register functions that will be
305 305 used to compute the format data for their objects (if those objects don't
306 306 have the special print methods). The easiest way of using these
307 307 dictionaries is through the :meth:`for_type` and :meth:`for_type_by_name`
308 308 methods.
309 309
310 310 If no function/callable is found to compute the format data, ``None`` is
311 311 returned and this format type is not used.
312 312 """
313 313
314 314 format_type = Unicode("text/plain")
315 315 _return_type: Any = str
316 316
317 317 enabled = Bool(True).tag(config=True)
318 318
319 319 print_method = ObjectName('__repr__')
320 320
321 321 # The singleton printers.
322 322 # Maps the IDs of the builtin singleton objects to the format functions.
323 323 singleton_printers = Dict().tag(config=True)
324 324
325 325 # The type-specific printers.
326 326 # Map type objects to the format functions.
327 327 type_printers = Dict().tag(config=True)
328 328
329 329 # The deferred-import type-specific printers.
330 330 # Map (modulename, classname) pairs to the format functions.
331 331 deferred_printers = Dict().tag(config=True)
332 332
333 333 @catch_format_error
334 334 def __call__(self, obj):
335 335 """Compute the format for an object."""
336 336 if self.enabled:
337 337 # lookup registered printer
338 338 try:
339 339 printer = self.lookup(obj)
340 340 except KeyError:
341 341 pass
342 342 else:
343 343 return printer(obj)
344 344 # Finally look for special method names
345 345 method = get_real_method(obj, self.print_method)
346 346 if method is not None:
347 347 return method()
348 348 return None
349 349 else:
350 350 return None
351 351
352 352 def __contains__(self, typ):
353 353 """map in to lookup_by_type"""
354 354 try:
355 355 self.lookup_by_type(typ)
356 356 except KeyError:
357 357 return False
358 358 else:
359 359 return True
360 360
361 361 def _check_return(self, r, obj):
362 362 """Check that a return value is appropriate
363 363
364 364 Return the value if so, None otherwise, warning if invalid.
365 365 """
366 366 if r is None or isinstance(r, self._return_type) or \
367 367 (isinstance(r, tuple) and r and isinstance(r[0], self._return_type)):
368 368 return r
369 369 else:
370 370 warnings.warn(
371 371 "%s formatter returned invalid type %s (expected %s) for object: %s" % \
372 372 (self.format_type, type(r), self._return_type, _safe_repr(obj)),
373 373 FormatterWarning
374 374 )
375 375
376 376 def lookup(self, obj):
377 377 """Look up the formatter for a given instance.
378 378
379 379 Parameters
380 380 ----------
381 381 obj : object instance
382 382
383 383 Returns
384 384 -------
385 385 f : callable
386 386 The registered formatting callable for the type.
387 387
388 388 Raises
389 389 ------
390 390 KeyError if the type has not been registered.
391 391 """
392 392 # look for singleton first
393 393 obj_id = id(obj)
394 394 if obj_id in self.singleton_printers:
395 395 return self.singleton_printers[obj_id]
396 396 # then lookup by type
397 397 return self.lookup_by_type(_get_type(obj))
398 398
399 399 def lookup_by_type(self, typ):
400 400 """Look up the registered formatter for a type.
401 401
402 402 Parameters
403 403 ----------
404 404 typ : type or '__module__.__name__' string for a type
405 405
406 406 Returns
407 407 -------
408 408 f : callable
409 409 The registered formatting callable for the type.
410 410
411 411 Raises
412 412 ------
413 413 KeyError if the type has not been registered.
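
413 Examples
413 --------
413 A hedged sketch; ``int`` has a pretty-printer registered by default, so
413 this lookup succeeds rather than raising KeyError:

413 >>> plain = get_ipython().display_formatter.formatters['text/plain']
413 >>> printer = plain.lookup_by_type(int)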
414 414 """
415 415 if isinstance(typ, str):
416 416 typ_key = tuple(typ.rsplit('.',1))
417 417 if typ_key not in self.deferred_printers:
418 418 # We may have it cached in the type map. We will have to
419 419 # iterate over all of the types to check.
420 420 for cls in self.type_printers:
421 421 if _mod_name_key(cls) == typ_key:
422 422 return self.type_printers[cls]
423 423 else:
424 424 return self.deferred_printers[typ_key]
425 425 else:
426 426 for cls in pretty._get_mro(typ):
427 427 if cls in self.type_printers or self._in_deferred_types(cls):
428 428 return self.type_printers[cls]
429 429
430 430 # If we have reached here, the lookup failed.
431 431 raise KeyError("No registered printer for {0!r}".format(typ))
432 432
433 433 def for_type(self, typ, func=None):
434 434 """Add a format function for a given type.
435 435
436 436 Parameters
437 437 ----------
438 438 typ : type or '__module__.__name__' string for a type
439 439 The class of the object that will be formatted using `func`.
440 440
441 441 func : callable
442 442 A callable for computing the format data.
443 443 `func` will be called with the object to be formatted,
444 444 and will return the raw data in this formatter's format.
445 445 Subclasses may use a different call signature for the
446 446 `func` argument.
447 447
448 448 If `func` is None or not specified, there will be no change,
449 449 only returning the current value.
450 450
451 451 Returns
452 452 -------
453 453 oldfunc : callable
454 454 The currently registered callable.
455 455 If you are registering a new formatter,
456 456 this will be the previous value (to enable restoring later).
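
456 Examples
456 --------
456 An illustrative registration of an HTML printer for a hypothetical
456 ``Vector`` class (``Vector`` is an assumption, not part of IPython):

456 >>> html = get_ipython().display_formatter.formatters['text/html']
456 >>> old = html.for_type(Vector, lambda obj: "<b>Vector</b>(%r, %r)" % (obj.x, obj.y))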
457 457 """
458 458 # if string given, interpret as 'pkg.module.class_name'
459 459 if isinstance(typ, str):
460 460 type_module, type_name = typ.rsplit('.', 1)
461 461 return self.for_type_by_name(type_module, type_name, func)
462 462
463 463 try:
464 464 oldfunc = self.lookup_by_type(typ)
465 465 except KeyError:
466 466 oldfunc = None
467 467
468 468 if func is not None:
469 469 self.type_printers[typ] = func
470 470
471 471 return oldfunc
472 472
473 473 def for_type_by_name(self, type_module, type_name, func=None):
474 474 """Add a format function for a type specified by the full dotted
475 475 module and name of the type, rather than the type of the object.
476 476
477 477 Parameters
478 478 ----------
479 479 type_module : str
480 480 The full dotted name of the module the type is defined in, like
481 481 ``numpy``.
482 482
483 483 type_name : str
484 484 The name of the type (the class name), like ``dtype``
485 485
486 486 func : callable
487 487 A callable for computing the format data.
488 488 `func` will be called with the object to be formatted,
489 489 and will return the raw data in this formatter's format.
490 490 Subclasses may use a different call signature for the
491 491 `func` argument.
492 492
493 493 If `func` is None or unspecified, there will be no change,
494 494 only returning the current value.
495 495
496 496 Returns
497 497 -------
498 498 oldfunc : callable
499 499 The currently registered callable.
500 500 If you are registering a new formatter,
501 501 this will be the previous value (to enable restoring later).
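
501 Examples
501 --------
501 An illustrative deferred registration; the printer is only attached once
501 the named type is actually imported (``dtype_html`` is an assumed callable):

501 >>> html = get_ipython().display_formatter.formatters['text/html']
501 >>> old = html.for_type_by_name('numpy', 'dtype', dtype_html)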
502 502 """
503 503 key = (type_module, type_name)
504 504
505 505 try:
506 506 oldfunc = self.lookup_by_type("%s.%s" % key)
507 507 except KeyError:
508 508 oldfunc = None
509 509
510 510 if func is not None:
511 511 self.deferred_printers[key] = func
512 512 return oldfunc
513 513
514 514 def pop(self, typ, default=_raise_key_error):
515 515 """Pop a formatter for the given type.
516 516
517 517 Parameters
518 518 ----------
519 519 typ : type or '__module__.__name__' string for a type
520 520 default : object
521 521 value to be returned if no formatter is registered for typ.
522 522
523 523 Returns
524 524 -------
525 525 obj : object
526 526 The last registered object for the type.
527 527
528 528 Raises
529 529 ------
530 530 KeyError if the type is not registered and default is not specified.
531 531 """
532 532
533 533 if isinstance(typ, str):
534 534 typ_key = tuple(typ.rsplit('.',1))
535 535 if typ_key not in self.deferred_printers:
536 536 # We may have it cached in the type map. We will have to
537 537 # iterate over all of the types to check.
538 538 for cls in self.type_printers:
539 539 if _mod_name_key(cls) == typ_key:
540 540 old = self.type_printers.pop(cls)
541 541 break
542 542 else:
543 543 old = default
544 544 else:
545 545 old = self.deferred_printers.pop(typ_key)
546 546 else:
547 547 if typ in self.type_printers:
548 548 old = self.type_printers.pop(typ)
549 549 else:
550 550 old = self.deferred_printers.pop(_mod_name_key(typ), default)
551 551 if old is _raise_key_error:
552 552 raise KeyError("No registered value for {0!r}".format(typ))
553 553 return old
554 554
555 555 def _in_deferred_types(self, cls):
556 556 """
557 557 Check if the given class is specified in the deferred type registry.
558 558
559 559 Successful matches will be moved to the regular type registry for future use.
560 560 """
561 561 mod = getattr(cls, '__module__', None)
562 562 name = getattr(cls, '__name__', None)
563 563 key = (mod, name)
564 564 if key in self.deferred_printers:
565 565 # Move the printer over to the regular registry.
566 566 printer = self.deferred_printers.pop(key)
567 567 self.type_printers[cls] = printer
568 568 return True
569 569 return False
570 570
571 571
572 572 class PlainTextFormatter(BaseFormatter):
573 573 """The default pretty-printer.
574 574
575 575 This uses :mod:`IPython.lib.pretty` to compute the format data of
576 576 the object. If the object cannot be pretty printed, :func:`repr` is used.
577 577 See the documentation of :mod:`IPython.lib.pretty` for details on
578 578 how to write pretty printers. Here is a simple example::
579 579
580 580 def dtype_pprinter(obj, p, cycle):
581 581 if cycle:
582 582 return p.text('dtype(...)')
583 583 if hasattr(obj, 'fields'):
584 584 if obj.fields is None:
585 585 p.text(repr(obj))
586 586 else:
587 587 p.begin_group(7, 'dtype([')
588 588 for i, field in enumerate(obj.descr):
589 589 if i > 0:
590 590 p.text(',')
591 591 p.breakable()
592 592 p.pretty(field)
593 593 p.end_group(7, '])')
594 594 """
595 595
596 596 # The format type of data returned.
597 597 format_type = Unicode('text/plain')
598 598
599 599 # This subclass ignores this attribute as it always needs to return
600 600 # something.
601 601 enabled = Bool(True).tag(config=False)
602 602
603 603 max_seq_length = Integer(pretty.MAX_SEQ_LENGTH,
604 604 help="""Truncate large collections (lists, dicts, tuples, sets) to this size.
605 605
606 606 Set to 0 to disable truncation.
607 607 """
608 608 ).tag(config=True)
609 609
610 610 # Look for a _repr_pretty_ method to use for pretty printing.
611 611 print_method = ObjectName('_repr_pretty_')
612 612
613 613 # Whether to pretty-print or not.
614 614 pprint = Bool(True).tag(config=True)
615 615
616 616 # Whether to be verbose or not.
617 617 verbose = Bool(False).tag(config=True)
618 618
619 619 # The maximum width.
620 620 max_width = Integer(79).tag(config=True)
621 621
622 622 # The newline character.
623 623 newline = Unicode('\n').tag(config=True)
624 624
625 625 # format-string for pprinting floats
626 626 float_format = Unicode('%r')
627 627 # setter for float precision, either int or direct format-string
628 628 float_precision = CUnicode('').tag(config=True)
629 629
630 630 @observe('float_precision')
631 631 def _float_precision_changed(self, change):
632 632 """float_precision changed, set float_format accordingly.
633 633
634 634 float_precision can be set by int or str.
635 635 This will set float_format, after interpreting input.
636 636 If numpy has been imported, numpy print precision will also be set.
637 637
638 638 An integer `n` sets the format to '%.nf'; otherwise, the format string is used directly.
639 639
640 640 An empty string returns to defaults (repr for float, 8 for numpy).
641 641
642 642 This parameter can be set via the '%precision' magic.
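
642 Examples
642 --------
642 Illustrative values (an int and an explicit format string), assuming an
642 interactive IPython session:

642 >>> plain = get_ipython().display_formatter.formatters['text/plain']
642 >>> plain.float_precision = 3
642 >>> plain.float_format
642 '%.3f'
642 >>> plain.float_precision = '%e'
642 >>> plain.float_format
642 '%e'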
643 643 """
644 644 new = change['new']
645 645 if '%' in new:
646 646 # got explicit format string
647 647 fmt = new
648 648 try:
649 649 fmt%3.14159
650 650 except Exception as e:
651 651 raise ValueError("Precision must be int or format string, not %r"%new) from e
652 652 elif new:
653 653 # otherwise, should be an int
654 654 try:
655 655 i = int(new)
656 656 assert i >= 0
657 657 except ValueError as e:
658 658 raise ValueError("Precision must be int or format string, not %r"%new) from e
659 659 except AssertionError as e:
660 660 raise ValueError("int precision must be non-negative, not %r"%i) from e
661 661
662 662 fmt = '%%.%if'%i
663 663 if 'numpy' in sys.modules:
664 664 # set numpy precision if it has been imported
665 665 import numpy
666 666 numpy.set_printoptions(precision=i)
667 667 else:
668 668 # default back to repr
669 669 fmt = '%r'
670 670 if 'numpy' in sys.modules:
671 671 import numpy
672 672 # numpy default is 8
673 673 numpy.set_printoptions(precision=8)
674 674 self.float_format = fmt
675 675
676 676 # Use the default pretty printers from IPython.lib.pretty.
677 677 @default('singleton_printers')
678 678 def _singleton_printers_default(self):
679 679 return pretty._singleton_pprinters.copy()
680 680
681 681 @default('type_printers')
682 682 def _type_printers_default(self):
683 683 d = pretty._type_pprinters.copy()
684 684 d[float] = lambda obj,p,cycle: p.text(self.float_format%obj)
685 685 # if NumPy is used, set precision for its float64 type
686 686 if "numpy" in sys.modules:
687 687 import numpy
688 688
689 689 d[numpy.float64] = lambda obj, p, cycle: p.text(self.float_format % obj)
690 690 return d
691 691
692 692 @default('deferred_printers')
693 693 def _deferred_printers_default(self):
694 694 return pretty._deferred_type_pprinters.copy()
695 695
696 696 #### FormatterABC interface ####
697 697
698 698 @catch_format_error
699 699 def __call__(self, obj):
700 700 """Compute the pretty representation of the object."""
701 701 if not self.pprint:
702 702 return repr(obj)
703 703 else:
704 704 stream = StringIO()
705 705 printer = pretty.RepresentationPrinter(stream, self.verbose,
706 706 self.max_width, self.newline,
707 707 max_seq_length=self.max_seq_length,
708 708 singleton_pprinters=self.singleton_printers,
709 709 type_pprinters=self.type_printers,
710 710 deferred_pprinters=self.deferred_printers)
711 711 printer.pretty(obj)
712 712 printer.flush()
713 713 return stream.getvalue()
714 714
715 715
716 716 class HTMLFormatter(BaseFormatter):
717 717 """An HTML formatter.
718 718
719 719 To define the callables that compute the HTML representation of your
720 720 objects, define a :meth:`_repr_html_` method or use the :meth:`for_type`
721 721 or :meth:`for_type_by_name` methods to register functions that handle
722 722 this.
723 723
724 724 The return value of this formatter should be a valid HTML snippet that
725 725 could be injected into an existing DOM. It should *not* include the
726 726 ``<html>`` or ``<body>`` tags.
727 727 """
728 728 format_type = Unicode('text/html')
729 729
730 730 print_method = ObjectName('_repr_html_')
731 731
732 732
733 733 class MarkdownFormatter(BaseFormatter):
734 734 """A Markdown formatter.
735 735
736 736 To define the callables that compute the Markdown representation of your
737 737 objects, define a :meth:`_repr_markdown_` method or use the :meth:`for_type`
738 738 or :meth:`for_type_by_name` methods to register functions that handle
739 739 this.
740 740
741 741 The return value of this formatter should be a valid Markdown.
742 742 """
743 743 format_type = Unicode('text/markdown')
744 744
745 745 print_method = ObjectName('_repr_markdown_')
746 746
747 747 class SVGFormatter(BaseFormatter):
748 748 """An SVG formatter.
749 749
750 750 To define the callables that compute the SVG representation of your
751 751 objects, define a :meth:`_repr_svg_` method or use the :meth:`for_type`
752 752 or :meth:`for_type_by_name` methods to register functions that handle
753 753 this.
754 754
755 755 The return value of this formatter should be valid SVG enclosed in
756 756 ``<svg>`` tags, that could be injected into an existing DOM. It should
757 757 *not* include the ``<html>`` or ``<body>`` tags.
758 758 """
759 759 format_type = Unicode('image/svg+xml')
760 760
761 761 print_method = ObjectName('_repr_svg_')
762 762
763 763
764 764 class PNGFormatter(BaseFormatter):
765 765 """A PNG formatter.
766 766
767 767 To define the callables that compute the PNG representation of your
768 768 objects, define a :meth:`_repr_png_` method or use the :meth:`for_type`
769 769 or :meth:`for_type_by_name` methods to register functions that handle
770 770 this.
771 771
772 772 The return value of this formatter should be raw PNG data, *not*
773 773 base64 encoded.
774 774 """
775 775 format_type = Unicode('image/png')
776 776
777 777 print_method = ObjectName('_repr_png_')
778 778
779 779 _return_type = (bytes, str)
780 780
781 781
782 782 class JPEGFormatter(BaseFormatter):
783 783 """A JPEG formatter.
784 784
785 785 To define the callables that compute the JPEG representation of your
786 786 objects, define a :meth:`_repr_jpeg_` method or use the :meth:`for_type`
787 787 or :meth:`for_type_by_name` methods to register functions that handle
788 788 this.
789 789
790 790 The return value of this formatter should be raw JPEG data, *not*
791 791 base64 encoded.
792 792 """
793 793 format_type = Unicode('image/jpeg')
794 794
795 795 print_method = ObjectName('_repr_jpeg_')
796 796
797 797 _return_type = (bytes, str)
798 798
799 799
800 800 class LatexFormatter(BaseFormatter):
801 801 """A LaTeX formatter.
802 802
803 803 To define the callables that compute the LaTeX representation of your
804 804 objects, define a :meth:`_repr_latex_` method or use the :meth:`for_type`
805 805 or :meth:`for_type_by_name` methods to register functions that handle
806 806 this.
807 807
808 808 The return value of this formatter should be a valid LaTeX equation,
809 809 enclosed in either ``$``, ``$$`` or another LaTeX equation
810 810 environment.
811 811 """
812 812 format_type = Unicode('text/latex')
813 813
814 814 print_method = ObjectName('_repr_latex_')
815 815
816 816
817 817 class JSONFormatter(BaseFormatter):
818 818 """A JSON string formatter.
819 819
820 820 To define the callables that compute the JSONable representation of
821 821 your objects, define a :meth:`_repr_json_` method or use the :meth:`for_type`
822 822 or :meth:`for_type_by_name` methods to register functions that handle
823 823 this.
824 824
825 825 The return value of this formatter should be a JSONable list or dict.
826 826 JSON scalars (None, number, string) are not allowed, only dict or list containers.
827 827 """
828 828 format_type = Unicode('application/json')
829 829 _return_type = (list, dict)
830 830
831 831 print_method = ObjectName('_repr_json_')
832 832
833 833 def _check_return(self, r, obj):
834 834 """Check that a return value is appropriate
835 835
836 836 Return the value if so, None otherwise, warning if invalid.
837 837 """
838 838 if r is None:
839 839 return
840 840 md = None
841 841 if isinstance(r, tuple):
842 842 # unpack data, metadata tuple for type checking on first element
843 843 r, md = r
844 844
845 845 assert not isinstance(
846 846 r, str
847 847 ), "JSON-as-string has been deprecated since IPython < 3"
848 848
849 849 if md is not None:
850 850 # put the tuple back together
851 851 r = (r, md)
852 852 return super(JSONFormatter, self)._check_return(r, obj)
853 853
854 854
855 855 class JavascriptFormatter(BaseFormatter):
856 856 """A Javascript formatter.
857 857
858 858 To define the callables that compute the Javascript representation of
859 859 your objects, define a :meth:`_repr_javascript_` method or use the
860 860 :meth:`for_type` or :meth:`for_type_by_name` methods to register functions
861 861 that handle this.
862 862
863 863 The return value of this formatter should be valid Javascript code and
864 864 should *not* be enclosed in ``<script>`` tags.
865 865 """
866 866 format_type = Unicode('application/javascript')
867 867
868 868 print_method = ObjectName('_repr_javascript_')
869 869
870 870
871 871 class PDFFormatter(BaseFormatter):
872 872 """A PDF formatter.
873 873
874 874 To define the callables that compute the PDF representation of your
875 875 objects, define a :meth:`_repr_pdf_` method or use the :meth:`for_type`
876 876 or :meth:`for_type_by_name` methods to register functions that handle
877 877 this.
878 878
879 879 The return value of this formatter should be raw PDF data, *not*
880 880 base64 encoded.
881 881 """
882 882 format_type = Unicode('application/pdf')
883 883
884 884 print_method = ObjectName('_repr_pdf_')
885 885
886 886 _return_type = (bytes, str)
887 887
888 888 class IPythonDisplayFormatter(BaseFormatter):
889 889 """An escape-hatch Formatter for objects that know how to display themselves.
890 890
891 891 To define the callables that compute the representation of your
892 892 objects, define a :meth:`_ipython_display_` method or use the :meth:`for_type`
893 893 or :meth:`for_type_by_name` methods to register functions that handle
894 894 this. Unlike mime-type displays, this method should not return anything,
895 895 instead calling any appropriate display methods itself.
896 896
897 897 This display formatter has highest priority.
898 898 If it fires, no other display formatter will be called.
899 899
900 900 Prior to IPython 6.1, `_ipython_display_` was the only way to display custom mime-types
901 901 without registering a new Formatter.
902 902
903 903 IPython 6.1 introduces `_repr_mimebundle_` for displaying custom mime-types,
904 904 so `_ipython_display_` should only be used for objects that require unusual
905 905 display patterns, such as multiple display calls.
906 906 """
907 907 print_method = ObjectName('_ipython_display_')
908 908 _return_type = (type(None), bool)
909 909
910 910 @catch_format_error
911 911 def __call__(self, obj):
912 912 """Compute the format for an object."""
913 913 if self.enabled:
914 914 # lookup registered printer
915 915 try:
916 916 printer = self.lookup(obj)
917 917 except KeyError:
918 918 pass
919 919 else:
920 920 printer(obj)
921 921 return True
922 922 # Finally look for special method names
923 923 method = get_real_method(obj, self.print_method)
924 924 if method is not None:
925 925 method()
926 926 return True
927 927
928 928
929 929 class MimeBundleFormatter(BaseFormatter):
930 930 """A Formatter for arbitrary mime-types.
931 931
932 932 Unlike other `_repr_<mimetype>_` methods,
933 933 `_repr_mimebundle_` should return mime-bundle data,
934 934 either the mime-keyed `data` dictionary or the tuple `(data, metadata)`.
935 935 Any mime-type is valid.
936 936
937 937 To define the callables that compute the mime-bundle representation of your
938 938 objects, define a :meth:`_repr_mimebundle_` method or use the :meth:`for_type`
939 939 or :meth:`for_type_by_name` methods to register functions that handle
940 940 this.
941 941
942 942 .. versionadded:: 6.1
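
942 Examples
942 --------
942 A minimal sketch of an object participating in this protocol (the class
942 name and payload are assumptions):

942 >>> class MyObject:
942 ...     def _repr_mimebundle_(self, include=None, exclude=None):
942 ...         return {"text/plain": "MyObject()",
942 ...                 "application/json": {"kind": "my-object"}}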
943 943 """
944 944 print_method = ObjectName('_repr_mimebundle_')
945 945 _return_type = dict
946 946
947 947 def _check_return(self, r, obj):
948 948 r = super(MimeBundleFormatter, self)._check_return(r, obj)
949 949 # always return (data, metadata):
950 950 if r is None:
951 951 return {}, {}
952 952 if not isinstance(r, tuple):
953 953 return r, {}
954 954 return r
955 955
956 956 @catch_format_error
957 957 def __call__(self, obj, include=None, exclude=None):
958 958 """Compute the format for an object.
959 959
960 960 Identical to parent's method but we pass extra parameters to the method.
961 961
962 962 Unlike other `_repr_*_` methods, `_repr_mimebundle_` should allow extra kwargs, in
963 963 particular `include` and `exclude`.
964 964 """
965 965 if self.enabled:
966 966 # lookup registered printer
967 967 try:
968 968 printer = self.lookup(obj)
969 969 except KeyError:
970 970 pass
971 971 else:
972 972 return printer(obj)
973 973 # Finally look for special method names
974 974 method = get_real_method(obj, self.print_method)
975 975
976 976 if method is not None:
977 977 return method(include=include, exclude=exclude)
978 978 return None
979 979 else:
980 980 return None
981 981
982 982
983 983 FormatterABC.register(BaseFormatter)
984 984 FormatterABC.register(PlainTextFormatter)
985 985 FormatterABC.register(HTMLFormatter)
986 986 FormatterABC.register(MarkdownFormatter)
987 987 FormatterABC.register(SVGFormatter)
988 988 FormatterABC.register(PNGFormatter)
989 989 FormatterABC.register(PDFFormatter)
990 990 FormatterABC.register(JPEGFormatter)
991 991 FormatterABC.register(LatexFormatter)
992 992 FormatterABC.register(JSONFormatter)
993 993 FormatterABC.register(JavascriptFormatter)
994 994 FormatterABC.register(IPythonDisplayFormatter)
995 995 FormatterABC.register(MimeBundleFormatter)
996 996
997 997
998 998 def format_display_data(obj, include=None, exclude=None):
999 999 """Return a format data dict for an object.
1000 1000
1001 1001 By default all format types will be computed.
1002 1002
1003 1003 Parameters
1004 1004 ----------
1005 1005 obj : object
1006 1006 The Python object whose format data will be computed.
1007 1007 include : list or tuple, optional
1008 1008 A list of format type strings (MIME types) to include in the
1009 1009 format data dict. If this is set *only* the format types included
1010 1010 in this list will be computed.
1011 1011 exclude : list or tuple, optional
1012 1012 A list of format type strings (MIME types) to exclude in the format
1013 1013 data dict. If this is set all format types will be computed,
1014 1014 except for those included in this argument.
1015 1015
1016 1016 Returns
1017 1017 -------
1018 1018 format_dict : dict
1019 1019 A dictionary of key/value pairs, one for each format that was
1020 1020 generated for the object. The keys are the format types, which
1021 1021 will usually be MIME type strings, and the values are JSON'able
1022 1022 data structures containing the raw data for the representation in
1023 1023 that format.
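
1023 Examples
1023 --------
1023 A hedged sketch (note that this will create an InteractiveShell instance
1023 if one does not already exist):

1023 >>> data, metadata = format_display_data(5, include=["text/plain"])
1023 >>> data
1023 {'text/plain': '5'}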
1024 1024 """
1025 1025 from .interactiveshell import InteractiveShell
1026 1026
1027 1027 return InteractiveShell.instance().display_formatter.format(
1028 1028 obj,
1029 1029 include,
1030 1030 exclude
1031 1031 )
@@ -1,978 +1,979 b''
1 1 """ History related magics and functionality """
2 2
3 3 # Copyright (c) IPython Development Team.
4 4 # Distributed under the terms of the Modified BSD License.
5 5
6 6
7 7 import atexit
8 8 import datetime
9 9 from pathlib import Path
10 10 import re
11 11 import sqlite3
12 12 import threading
13 13
14 14 from traitlets.config.configurable import LoggingConfigurable
15 15 from decorator import decorator
16 16 from IPython.utils.decorators import undoc
17 17 from IPython.paths import locate_profile
18 18 from traitlets import (
19 19 Any,
20 20 Bool,
21 21 Dict,
22 22 Instance,
23 23 Integer,
24 24 List,
25 25 Unicode,
26 26 Union,
27 27 TraitError,
28 28 default,
29 29 observe,
30 30 )
31 31
32 32 #-----------------------------------------------------------------------------
33 33 # Classes and functions
34 34 #-----------------------------------------------------------------------------
35 35
36 36 @undoc
37 37 class DummyDB(object):
38 38 """Dummy DB that will act as a black hole for history.
39 39
40 40 Only used in the absence of sqlite"""
41 41 def execute(self, *args, **kwargs):
42 42 return []
43 43
44 44 def commit(self, *args, **kwargs):
45 45 pass
46 46
47 47 def __enter__(self, *args, **kwargs):
48 48 pass
49 49
50 50 def __exit__(self, *args, **kwargs):
51 51 pass
52 52
53 53
54 54 @decorator
55 55 def only_when_enabled(f, self, *a, **kw):
56 56 """Decorator: return an empty list in the absence of sqlite."""
57 57 if not self.enabled:
58 58 return []
59 59 else:
60 60 return f(self, *a, **kw)
61 61
62 62
63 63 # use 16kB as threshold for whether a corrupt history db should be saved
64 64 # that should be at least 100 entries or so
65 65 _SAVE_DB_SIZE = 16384
66 66
67 67 @decorator
68 68 def catch_corrupt_db(f, self, *a, **kw):
69 69 """A decorator which wraps HistoryAccessor method calls to catch errors from
70 70 a corrupt SQLite database, move the old database out of the way, and create
71 71 a new one.
72 72
73 73 We avoid clobbering larger databases because this may be triggered due to filesystem issues,
74 74 not just a corrupt file.
75 75 """
76 76 try:
77 77 return f(self, *a, **kw)
78 78 except (sqlite3.DatabaseError, sqlite3.OperationalError) as e:
79 79 self._corrupt_db_counter += 1
80 80 self.log.error("Failed to open SQLite history %s (%s).", self.hist_file, e)
81 81 if self.hist_file != ':memory:':
82 82 if self._corrupt_db_counter > self._corrupt_db_limit:
83 83 self.hist_file = ':memory:'
84 84 self.log.error("Failed to load history too many times, history will not be saved.")
85 85 elif self.hist_file.is_file():
86 86 # move the file out of the way
87 87 base = str(self.hist_file.parent / self.hist_file.stem)
88 88 ext = self.hist_file.suffix
89 89 size = self.hist_file.stat().st_size
90 90 if size >= _SAVE_DB_SIZE:
91 91 # if there's significant content, avoid clobbering
92 92 now = datetime.datetime.now().isoformat().replace(':', '.')
93 93 newpath = base + '-corrupt-' + now + ext
94 94 # don't clobber previous corrupt backups
95 95 for i in range(100):
96 96 if not Path(newpath).exists():
97 97 break
98 98 else:
99 99 newpath = base + '-corrupt-' + now + (u'-%i' % i) + ext
100 100 else:
101 101 # not much content, possibly empty; don't worry about clobbering
102 102 # maybe we should just delete it?
103 103 newpath = base + '-corrupt' + ext
104 104 self.hist_file.rename(newpath)
105 105 self.log.error("History file was moved to %s and a new file created.", newpath)
106 106 self.init_db()
107 107 return []
108 108 else:
109 109 # Failed with :memory:, something serious is wrong
110 110 raise
111 111
112 112
113 113 class HistoryAccessorBase(LoggingConfigurable):
114 114 """An abstract class for History Accessors """
115 115
116 116 def get_tail(self, n=10, raw=True, output=False, include_latest=False):
117 117 raise NotImplementedError
118 118
119 119 def search(self, pattern="*", raw=True, search_raw=True,
120 120 output=False, n=None, unique=False):
121 121 raise NotImplementedError
122 122
123 123 def get_range(self, session, start=1, stop=None, raw=True,output=False):
124 124 raise NotImplementedError
125 125
126 126 def get_range_by_str(self, rangestr, raw=True, output=False):
127 127 raise NotImplementedError
128 128
129 129
130 130 class HistoryAccessor(HistoryAccessorBase):
131 131 """Access the history database without adding to it.
132 132
133 133 This is intended for use by standalone history tools. IPython shells use
134 134 HistoryManager, below, which is a subclass of this."""
135 135
136 136 # counter for init_db retries, so we don't keep trying over and over
137 137 _corrupt_db_counter = 0
138 138 # after two failures, fallback on :memory:
139 139 _corrupt_db_limit = 2
140 140
141 141 # String holding the path to the history file
142 142 hist_file = Union(
143 143 [Instance(Path), Unicode()],
144 144 help="""Path to file to use for SQLite history database.
145 145
146 146 By default, IPython will put the history database in the IPython
147 147 profile directory. If you would rather share one history among
148 148 profiles, you can set this value in each, so that they are consistent.
149 149
150 150 Due to an issue with fcntl, SQLite is known to misbehave on some NFS
151 151 mounts. If you see IPython hanging, try setting this to something on a
152 152 local disk, e.g::
153 153
154 154 ipython --HistoryManager.hist_file=/tmp/ipython_hist.sqlite
155 155
156 156 you can also use the specific value `:memory:` (including the colon
157 157 at both ends but not the backticks), to avoid creating a history file.
158 158
159 159 """,
160 160 ).tag(config=True)
161 161
162 162 enabled = Bool(True,
163 163 help="""enable the SQLite history
164 164
165 165 set enabled=False to disable the SQLite history,
166 166 in which case there will be no stored history, no SQLite connection,
167 167 and no background saving thread. This may be necessary in some
168 168 threaded environments where IPython is embedded.
169 169 """,
170 170 ).tag(config=True)
171 171
172 172 connection_options = Dict(
173 173 help="""Options for configuring the SQLite connection
174 174
175 175 These options are passed as keyword args to sqlite3.connect
176 176 when establishing database connections.
177 177 """
178 178 ).tag(config=True)
179 179
180 180 @default("connection_options")
181 181 def _default_connection_options(self):
182 182 return dict(check_same_thread=False)
183 183
184 184 # The SQLite database
185 185 db = Any()
186 186 @observe('db')
187 187 def _db_changed(self, change):
188 188 """validate the db, since it can be an Instance of two different types"""
189 189 new = change['new']
190 190 connection_types = (DummyDB, sqlite3.Connection)
191 191 if not isinstance(new, connection_types):
192 192 msg = "%s.db must be sqlite3 Connection or DummyDB, not %r" % \
193 193 (self.__class__.__name__, new)
194 194 raise TraitError(msg)
195 195
196 196 def __init__(self, profile="default", hist_file="", **traits):
197 197 """Create a new history accessor.
198 198
199 199 Parameters
200 200 ----------
201 201 profile : str
202 202 The name of the profile from which to open history.
203 203 hist_file : str
204 204 Path to an SQLite history database stored by IPython. If specified,
205 205 hist_file overrides profile.
206 206 config : :class:`~traitlets.config.loader.Config`
207 207 Config object. hist_file can also be set through this.
208 208 """
209 209 super(HistoryAccessor, self).__init__(**traits)
210 210 # defer setting hist_file from kwarg until after init,
211 211 # otherwise the default kwarg value would clobber any value
212 212 # set by config
213 213 if hist_file:
214 214 self.hist_file = hist_file
215 215
216 216 try:
217 217 self.hist_file
218 218 except TraitError:
219 219 # No one has set the hist_file, yet.
220 220 self.hist_file = self._get_hist_file_name(profile)
221 221
222 222 self.init_db()
223 223
224 224 def _get_hist_file_name(self, profile='default'):
225 225 """Find the history file for the given profile name.
226 226
227 227 This is overridden by the HistoryManager subclass, to use the shell's
228 228 active profile.
229 229
230 230 Parameters
231 231 ----------
232 232 profile : str
233 233 The name of a profile which has a history file.
234 234 """
235 235 return Path(locate_profile(profile)) / "history.sqlite"
236 236
237 237 @catch_corrupt_db
238 238 def init_db(self):
239 239 """Connect to the database, and create tables if necessary."""
240 240 if not self.enabled:
241 241 self.db = DummyDB()
242 242 return
243 243
244 244 # use detect_types so that timestamps return datetime objects
245 245 kwargs = dict(detect_types=sqlite3.PARSE_DECLTYPES|sqlite3.PARSE_COLNAMES)
246 246 kwargs.update(self.connection_options)
247 247 self.db = sqlite3.connect(str(self.hist_file), **kwargs)
248 248 with self.db:
249 249 self.db.execute(
250 250 """CREATE TABLE IF NOT EXISTS sessions (session integer
251 251 primary key autoincrement, start timestamp,
252 252 end timestamp, num_cmds integer, remark text)"""
253 253 )
254 254 self.db.execute(
255 255 """CREATE TABLE IF NOT EXISTS history
256 256 (session integer, line integer, source text, source_raw text,
257 257 PRIMARY KEY (session, line))"""
258 258 )
259 259 # Output history is optional, but ensure the table's there so it can be
260 260 # enabled later.
261 261 self.db.execute(
262 262 """CREATE TABLE IF NOT EXISTS output_history
263 263 (session integer, line integer, output text,
264 264 PRIMARY KEY (session, line))"""
265 265 )
266 266 # success! reset corrupt db count
267 267 self._corrupt_db_counter = 0
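# A rough sketch of inspecting the schema created above with the sqlite3
# module directly; the database path is illustrative.
#
#     import sqlite3
#     con = sqlite3.connect("history.sqlite")
#     for row in con.execute("SELECT session, start, num_cmds FROM sessions"):
#         print(row)
#     con.close()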
268 268
269 269 def writeout_cache(self):
270 270 """Overridden by HistoryManager to dump the cache before certain
271 271 database lookups."""
272 272 pass
273 273
274 274 ## -------------------------------
275 275 ## Methods for retrieving history:
276 276 ## -------------------------------
277 277 def _run_sql(self, sql, params, raw=True, output=False, latest=False):
278 278 """Prepares and runs an SQL query for the history database.
279 279
280 280 Parameters
281 281 ----------
282 282 sql : str
283 283 Any filtering expressions to go after SELECT ... FROM ...
284 284 params : tuple
285 285 Parameters passed to the SQL query (to replace "?")
286 286 raw, output : bool
287 287 See :meth:`get_range`
288 288 latest : bool
289 289 Select rows with max (session, line)
290 290
291 291 Returns
292 292 -------
293 293 Tuples as :meth:`get_range`
294 294 """
295 295 toget = 'source_raw' if raw else 'source'
296 296 sqlfrom = "history"
297 297 if output:
298 298 sqlfrom = "history LEFT JOIN output_history USING (session, line)"
299 299 toget = "history.%s, output_history.output" % toget
300 300 if latest:
301 301 toget += ", MAX(session * 128 * 1024 + line)"
302 302 this_query = "SELECT session, line, %s FROM %s " % (toget, sqlfrom) + sql
303 303 cur = self.db.execute(this_query, params)
304 304 if latest:
305 305 cur = (row[:-1] for row in cur)
306 306 if output: # Regroup into 3-tuples, and parse JSON
307 307 return ((ses, lin, (inp, out)) for ses, lin, inp, out in cur)
308 308 return cur
309 309
310 310 @only_when_enabled
311 311 @catch_corrupt_db
312 312 def get_session_info(self, session):
313 313 """Get info about a session.
314 314
315 315 Parameters
316 316 ----------
317 317 session : int
318 318 Session number to retrieve.
319 319
320 320 Returns
321 321 -------
322 322 session_id : int
323 323 Session ID number
324 324 start : datetime
325 325 Timestamp for the start of the session.
326 326 end : datetime
327 327 Timestamp for the end of the session, or None if IPython crashed.
328 328 num_cmds : int
329 329 Number of commands run, or None if IPython crashed.
330 330 remark : unicode
331 331 A manually set description.
332 332 """
333 333 query = "SELECT * from sessions where session == ?"
334 334 return self.db.execute(query, (session,)).fetchone()
335 335
336 336 @catch_corrupt_db
337 337 def get_last_session_id(self):
338 338 """Get the last session ID currently in the database.
339 339
340 340 Within IPython, this should be the same as the value stored in
341 341 :attr:`HistoryManager.session_number`.
342 342 """
343 343 for record in self.get_tail(n=1, include_latest=True):
344 344 return record[0]
345 345
346 346 @catch_corrupt_db
347 347 def get_tail(self, n=10, raw=True, output=False, include_latest=False):
348 348 """Get the last n lines from the history database.
349 349
350 350 Parameters
351 351 ----------
352 352 n : int
353 353 The number of lines to get
354 354 raw, output : bool
355 355 See :meth:`get_range`
356 356 include_latest : bool
357 357 If False (default), n+1 lines are fetched, and the latest one
358 358 is discarded. This is intended for use when the function is called
359 359 from a user command, so that the command's own input is not returned.
360 360
361 361 Returns
362 362 -------
363 363 Tuples as :meth:`get_range`
364 364 """
365 365 self.writeout_cache()
366 366 if not include_latest:
367 367 n += 1
368 368 cur = self._run_sql(
369 369 "ORDER BY session DESC, line DESC LIMIT ?", (n,), raw=raw, output=output
370 370 )
371 371 if not include_latest:
372 372 return reversed(list(cur)[1:])
373 373 return reversed(list(cur))
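# A brief usage sketch, assuming an existing accessor instance ``ha``
# (illustrative name):
#
#     for session, line, source in ha.get_tail(5):
#         print(f"{session}#{line}: {source}")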
374 374
375 375 @catch_corrupt_db
376 376 def search(self, pattern="*", raw=True, search_raw=True,
377 377 output=False, n=None, unique=False):
378 378 """Search the database using unix glob-style matching (wildcards
379 379 * and ?).
380 380
381 381 Parameters
382 382 ----------
383 383 pattern : str
384 384 The wildcarded pattern to match when searching
385 385 search_raw : bool
386 386 If True, search the raw input; otherwise, search the parsed input
387 387 raw, output : bool
388 388 See :meth:`get_range`
389 389 n : None or int
390 390 If an integer is given, it defines the limit of
391 391 returned entries.
392 392 unique : bool
393 393 When it is true, return only unique entries.
394 394
395 395 Returns
396 396 -------
397 397 Tuples as :meth:`get_range`
398 398 """
399 399 tosearch = "source_raw" if search_raw else "source"
400 400 if output:
401 401 tosearch = "history." + tosearch
402 402 self.writeout_cache()
403 403 sqlform = "WHERE %s GLOB ?" % tosearch
404 404 params = (pattern,)
405 405 if unique:
406 406 sqlform += ' GROUP BY {0}'.format(tosearch)
407 407 if n is not None:
408 408 sqlform += " ORDER BY session DESC, line DESC LIMIT ?"
409 409 params += (n,)
410 410 elif unique:
411 411 sqlform += " ORDER BY session, line"
412 412 cur = self._run_sql(sqlform, params, raw=raw, output=output, latest=unique)
413 413 if n is not None:
414 414 return reversed(list(cur))
415 415 return cur
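# A brief sketch of glob-style searching, assuming an accessor instance ``ha``;
# the pattern is illustrative.
#
#     for session, line, source in ha.search("*plot*", n=10, unique=True):
#         print(session, line, source)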
416 416
417 417 @catch_corrupt_db
418 418 def get_range(self, session, start=1, stop=None, raw=True,output=False):
419 419 """Retrieve input by session.
420 420
421 421 Parameters
422 422 ----------
423 423 session : int
424 424 Session number to retrieve.
425 425 start : int
426 426 First line to retrieve.
427 427 stop : int
428 428 End of line range (excluded from output itself). If None, retrieve
429 429 to the end of the session.
430 430 raw : bool
431 431 If True, return untranslated input
432 432 output : bool
433 433 If True, attempt to include output. This will be 'real' Python
434 434 objects for the current session, or text reprs from previous
435 435 sessions if db_log_output was enabled at the time. Where no output
436 436 is found, None is used.
437 437
438 438 Returns
439 439 -------
440 440 entries
441 441 An iterator over the desired lines. Each line is a 3-tuple, either
442 442 (session, line, input) if output is False, or
443 443 (session, line, (input, output)) if output is True.
444 444 """
445 445 if stop:
446 446 lineclause = "line >= ? AND line < ?"
447 447 params = (session, start, stop)
448 448 else:
449 449 lineclause = "line>=?"
450 450 params = (session, start)
451 451
452 452 return self._run_sql("WHERE session==? AND %s" % lineclause,
453 453 params, raw=raw, output=output)
454 454
455 455 def get_range_by_str(self, rangestr, raw=True, output=False):
456 456 """Get lines of history from a string of ranges, as used by magic
457 457 commands %hist, %save, %macro, etc.
458 458
459 459 Parameters
460 460 ----------
461 461 rangestr : str
462 462 A string specifying ranges, e.g. "5 ~2/1-4". If an empty string is
463 463 given, this will return everything from the current session's history.
464 464
465 465 See the documentation of :func:`%history` for the full details.
466 466
467 467 raw, output : bool
468 468 As :meth:`get_range`
469 469
470 470 Returns
471 471 -------
472 472 Tuples as :meth:`get_range`
473 473 """
474 474 for sess, s, e in extract_hist_ranges(rangestr):
475 475 for line in self.get_range(sess, s, e, raw=raw, output=output):
476 476 yield line
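# A sketch of the range-string syntax accepted here, assuming an accessor
# instance ``ha`` (illustrative name):
#
#     list(ha.get_range_by_str("~1/1-3"))  # lines 1-3 of the previous session
#     list(ha.get_range_by_str("5 8-10"))  # line 5 and lines 8-10, current session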
477 477
478 478
479 479 class HistoryManager(HistoryAccessor):
480 480 """A class to organize all history-related functionality in one place.
481 481 """
482 482 # Public interface
483 483
484 484 # An instance of the IPython shell we are attached to
485 485 shell = Instance('IPython.core.interactiveshell.InteractiveShellABC',
486 486 allow_none=True)
487 487 # Lists to hold processed and raw history. These start with a blank entry
488 488 # so that we can index them starting from 1
489 489 input_hist_parsed = List([""])
490 490 input_hist_raw = List([""])
491 491 # A list of directories visited during session
492 dir_hist = List()
493 @default('dir_hist')
492 dir_hist: List = List()
493
494 @default("dir_hist")
494 495 def _dir_hist_default(self):
495 496 try:
496 497 return [Path.cwd()]
497 498 except OSError:
498 499 return []
499 500
500 501 # A dict of output history, keyed with ints from the shell's
501 502 # execution count.
502 503 output_hist = Dict()
503 504 # The text/plain repr of outputs.
504 505 output_hist_reprs = Dict()
505 506
506 507 # The number of the current session in the history database
507 508 session_number = Integer()
508 509
509 510 db_log_output = Bool(False,
510 511 help="Should the history database include output? (default: no)"
511 512 ).tag(config=True)
512 513 db_cache_size = Integer(0,
513 514 help="Write to database every x commands (higher values save disk access & power).\n"
514 515 "Values of 1 or less effectively disable caching."
515 516 ).tag(config=True)
516 517 # The input and output caches
517 db_input_cache = List()
518 db_output_cache = List()
518 db_input_cache: List = List()
519 db_output_cache: List = List()
519 520
520 521 # History saving in separate thread
521 522 save_thread = Instance('IPython.core.history.HistorySavingThread',
522 523 allow_none=True)
523 524 save_flag = Instance(threading.Event, allow_none=True)
524 525
525 526 # Private interface
526 527 # Variables used to store the three last inputs from the user. On each new
527 528 # history update, we populate the user's namespace with these, shifted as
528 529 # necessary.
529 _i00 = Unicode(u'')
530 _i = Unicode(u'')
531 _ii = Unicode(u'')
532 _iii = Unicode(u'')
530 _i00 = Unicode("")
531 _i = Unicode("")
532 _ii = Unicode("")
533 _iii = Unicode("")
533 534
534 535 # A regex matching all forms of the exit command, so that we don't store
535 536 # them in the history (it's annoying to rewind the first entry and land on
536 537 # an exit call).
537 538 _exit_re = re.compile(r"(exit|quit)(\s*\(.*\))?$")
538 539
539 540 def __init__(self, shell=None, config=None, **traits):
540 541 """Create a new history manager associated with a shell instance.
541 542 """
542 543 super(HistoryManager, self).__init__(shell=shell, config=config,
543 544 **traits)
544 545 self.save_flag = threading.Event()
545 546 self.db_input_cache_lock = threading.Lock()
546 547 self.db_output_cache_lock = threading.Lock()
547 548
548 549 try:
549 550 self.new_session()
550 551 except sqlite3.OperationalError:
551 552 self.log.error("Failed to create history session in %s. History will not be saved.",
552 553 self.hist_file, exc_info=True)
553 554 self.hist_file = ':memory:'
554 555
555 556 if self.enabled and self.hist_file != ':memory:':
556 557 self.save_thread = HistorySavingThread(self)
557 558 self.save_thread.start()
558 559
559 560 def _get_hist_file_name(self, profile=None):
560 561 """Get default history file name based on the Shell's profile.
561 562
562 563 The profile parameter is ignored, but must exist for compatibility with
563 564 the parent class."""
564 565 profile_dir = self.shell.profile_dir.location
565 566 return Path(profile_dir) / "history.sqlite"
566 567
567 568 @only_when_enabled
568 569 def new_session(self, conn=None):
569 570 """Get a new session number."""
570 571 if conn is None:
571 572 conn = self.db
572 573
573 574 with conn:
574 575 cur = conn.execute(
575 576 """INSERT INTO sessions VALUES (NULL, ?, NULL,
576 577 NULL, '') """,
577 578 (datetime.datetime.now().isoformat(" "),),
578 579 )
579 580 self.session_number = cur.lastrowid
580 581
581 582 def end_session(self):
582 583 """Close the database session, filling in the end time and line count."""
583 584 self.writeout_cache()
584 585 with self.db:
585 586 self.db.execute(
586 587 """UPDATE sessions SET end=?, num_cmds=? WHERE
587 588 session==?""",
588 589 (
589 590 datetime.datetime.now().isoformat(" "),
590 591 len(self.input_hist_parsed) - 1,
591 592 self.session_number,
592 593 ),
593 594 )
594 595 self.session_number = 0
595 596
596 597 def name_session(self, name):
597 598 """Give the current session a name in the history database."""
598 599 with self.db:
599 600 self.db.execute("UPDATE sessions SET remark=? WHERE session==?",
600 601 (name, self.session_number))
601 602
602 603 def reset(self, new_session=True):
603 604 """Clear the session history, releasing all object references, and
604 605 optionally open a new session."""
605 606 self.output_hist.clear()
606 607 # The directory history can't be completely empty
607 608 self.dir_hist[:] = [Path.cwd()]
608 609
609 610 if new_session:
610 611 if self.session_number:
611 612 self.end_session()
612 613 self.input_hist_parsed[:] = [""]
613 614 self.input_hist_raw[:] = [""]
614 615 self.new_session()
615 616
616 617 # ------------------------------
617 618 # Methods for retrieving history
618 619 # ------------------------------
619 620 def get_session_info(self, session=0):
620 621 """Get info about a session.
621 622
622 623 Parameters
623 624 ----------
624 625 session : int
625 626 Session number to retrieve. The current session is 0, and negative
626 627 numbers count back from current session, so -1 is the previous session.
627 628
628 629 Returns
629 630 -------
630 631 session_id : int
631 632 Session ID number
632 633 start : datetime
633 634 Timestamp for the start of the session.
634 635 end : datetime
635 636 Timestamp for the end of the session, or None if IPython crashed.
636 637 num_cmds : int
637 638 Number of commands run, or None if IPython crashed.
638 639 remark : unicode
639 640 A manually set description.
640 641 """
641 642 if session <= 0:
642 643 session += self.session_number
643 644
644 645 return super(HistoryManager, self).get_session_info(session=session)
645 646
646 647 @catch_corrupt_db
647 648 def get_tail(self, n=10, raw=True, output=False, include_latest=False):
648 649 """Get the last n lines from the history database.
649 650
650 651 Most recent entry last.
651 652
652 653 Entries will be reordered so that, where possible, the most recent
653 654 ones come from the current session.
654 655
655 656 Parameters
656 657 ----------
657 658 n : int
658 659 The number of lines to get
659 660 raw, output : bool
660 661 See :meth:`get_range`
661 662 include_latest : bool
662 663 If False (default), n+1 lines are fetched, and the latest one
663 664 is discarded. This is intended for use when the function is called
664 665 from a user command, so that the command's own input is not returned.
665 666
666 667 Returns
667 668 -------
668 669 Tuples as :meth:`get_range`
669 670 """
670 671 self.writeout_cache()
671 672 if not include_latest:
672 673 n += 1
673 674 # cursor/line/entry
674 675 this_cur = list(
675 676 self._run_sql(
676 677 "WHERE session == ? ORDER BY line DESC LIMIT ? ",
677 678 (self.session_number, n),
678 679 raw=raw,
679 680 output=output,
680 681 )
681 682 )
682 683 other_cur = list(
683 684 self._run_sql(
684 685 "WHERE session != ? ORDER BY session DESC, line DESC LIMIT ?",
685 686 (self.session_number, n),
686 687 raw=raw,
687 688 output=output,
688 689 )
689 690 )
690 691
691 692 everything = this_cur + other_cur
692 693
693 694 everything = everything[:n]
694 695
695 696 if not include_latest:
696 697 return list(everything)[:0:-1]
697 698 return list(everything)[::-1]
698 699
699 700 def _get_range_session(self, start=1, stop=None, raw=True, output=False):
700 701 """Get input and output history from the current session. Called by
701 702 get_range, and takes similar parameters."""
702 703 input_hist = self.input_hist_raw if raw else self.input_hist_parsed
703 704
704 705 n = len(input_hist)
705 706 if start < 0:
706 707 start += n
707 708 if not stop or (stop > n):
708 709 stop = n
709 710 elif stop < 0:
710 711 stop += n
711 712
712 713 for i in range(start, stop):
713 714 if output:
714 715 line = (input_hist[i], self.output_hist_reprs.get(i))
715 716 else:
716 717 line = input_hist[i]
717 718 yield (0, i, line)
718 719
719 720 def get_range(self, session=0, start=1, stop=None, raw=True,output=False):
720 721 """Retrieve input by session.
721 722
722 723 Parameters
723 724 ----------
724 725 session : int
725 726 Session number to retrieve. The current session is 0, and negative
726 727 numbers count back from current session, so -1 is previous session.
727 728 start : int
728 729 First line to retrieve.
729 730 stop : int
730 731 End of line range (excluded from output itself). If None, retrieve
731 732 to the end of the session.
732 733 raw : bool
733 734 If True, return untranslated input
734 735 output : bool
735 736 If True, attempt to include output. This will be 'real' Python
736 737 objects for the current session, or text reprs from previous
737 738 sessions if db_log_output was enabled at the time. Where no output
738 739 is found, None is used.
739 740
740 741 Returns
741 742 -------
742 743 entries
743 744 An iterator over the desired lines. Each line is a 3-tuple, either
744 745 (session, line, input) if output is False, or
745 746 (session, line, (input, output)) if output is True.
746 747 """
747 748 if session <= 0:
748 749 session += self.session_number
749 750 if session==self.session_number: # Current session
750 751 return self._get_range_session(start, stop, raw, output)
751 752 return super(HistoryManager, self).get_range(session, start, stop, raw,
752 753 output)
753 754
754 755 ## ----------------------------
755 756 ## Methods for storing history:
756 757 ## ----------------------------
757 758 def store_inputs(self, line_num, source, source_raw=None):
758 759 """Store source and raw input in history and create input cache
759 760 variables ``_i*``.
760 761
761 762 Parameters
762 763 ----------
763 764 line_num : int
764 765 The prompt number of this input.
765 766 source : str
766 767 Python input.
767 768 source_raw : str, optional
768 769 If given, this is the raw input without any IPython transformations
769 770 applied to it. If not given, ``source`` is used.
770 771 """
771 772 if source_raw is None:
772 773 source_raw = source
773 774 source = source.rstrip('\n')
774 775 source_raw = source_raw.rstrip('\n')
775 776
776 777 # do not store exit/quit commands
777 778 if self._exit_re.match(source_raw.strip()):
778 779 return
779 780
780 781 self.input_hist_parsed.append(source)
781 782 self.input_hist_raw.append(source_raw)
782 783
783 784 with self.db_input_cache_lock:
784 785 self.db_input_cache.append((line_num, source, source_raw))
785 786 # Trigger to flush cache and write to DB.
786 787 if len(self.db_input_cache) >= self.db_cache_size:
787 788 self.save_flag.set()
788 789
789 790 # update the auto _i variables
790 791 self._iii = self._ii
791 792 self._ii = self._i
792 793 self._i = self._i00
793 794 self._i00 = source_raw
794 795
795 796 # hackish access to user namespace to create _i1,_i2... dynamically
796 797 new_i = '_i%s' % line_num
797 798 to_main = {'_i': self._i,
798 799 '_ii': self._ii,
799 800 '_iii': self._iii,
800 801 new_i : self._i00 }
801 802
802 803 if self.shell is not None:
803 804 self.shell.push(to_main, interactive=False)
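# A short interactive sketch of the variables pushed above; prompt numbers
# are illustrative.
#
#     In [2]: 1 + 1
#     In [3]: _i     # source of the previous input, here '1 + 1'
#     In [4]: _i2    # the same input, bound to its prompt number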
804 805
805 806 def store_output(self, line_num):
806 807 """If database output logging is enabled, this saves all the
807 808 outputs from the indicated prompt number to the database. It's
808 809 called by run_cell after code has been executed.
809 810
810 811 Parameters
811 812 ----------
812 813 line_num : int
813 814 The line number from which to save outputs
814 815 """
815 816 if (not self.db_log_output) or (line_num not in self.output_hist_reprs):
816 817 return
817 818 output = self.output_hist_reprs[line_num]
818 819
819 820 with self.db_output_cache_lock:
820 821 self.db_output_cache.append((line_num, output))
821 822 if self.db_cache_size <= 1:
822 823 self.save_flag.set()
823 824
824 825 def _writeout_input_cache(self, conn):
825 826 with conn:
826 827 for line in self.db_input_cache:
827 828 conn.execute("INSERT INTO history VALUES (?, ?, ?, ?)",
828 829 (self.session_number,)+line)
829 830
830 831 def _writeout_output_cache(self, conn):
831 832 with conn:
832 833 for line in self.db_output_cache:
833 834 conn.execute("INSERT INTO output_history VALUES (?, ?, ?)",
834 835 (self.session_number,)+line)
835 836
836 837 @only_when_enabled
837 838 def writeout_cache(self, conn=None):
838 839 """Write any entries in the cache to the database."""
839 840 if conn is None:
840 841 conn = self.db
841 842
842 843 with self.db_input_cache_lock:
843 844 try:
844 845 self._writeout_input_cache(conn)
845 846 except sqlite3.IntegrityError:
846 847 self.new_session(conn)
847 848 print("ERROR! Session/line number was not unique in",
848 849 "database. History logging moved to new session",
849 850 self.session_number)
850 851 try:
851 852 # Try writing to the new session. If this fails, don't
852 853 # recurse
853 854 self._writeout_input_cache(conn)
854 855 except sqlite3.IntegrityError:
855 856 pass
856 857 finally:
857 858 self.db_input_cache = []
858 859
859 860 with self.db_output_cache_lock:
860 861 try:
861 862 self._writeout_output_cache(conn)
862 863 except sqlite3.IntegrityError:
863 864 print("!! Session/line number for output was not unique",
864 865 "in database. Output will not be stored.")
865 866 finally:
866 867 self.db_output_cache = []
867 868
868 869
869 870 class HistorySavingThread(threading.Thread):
870 871 """This thread takes care of writing history to the database, so that
871 872 the UI isn't held up while that happens.
872 873
873 874 It waits for the HistoryManager's save_flag to be set, then writes out
874 875 the history cache. The main thread is responsible for setting the flag when
875 876 the cache size reaches a defined threshold."""
876 877 daemon = True
877 878 stop_now = False
878 879 enabled = True
879 880 def __init__(self, history_manager):
880 881 super(HistorySavingThread, self).__init__(name="IPythonHistorySavingThread")
881 882 self.history_manager = history_manager
882 883 self.enabled = history_manager.enabled
883 884 atexit.register(self.stop)
884 885
885 886 @only_when_enabled
886 887 def run(self):
887 888 # We need a separate db connection per thread:
888 889 try:
889 890 self.db = sqlite3.connect(
890 891 str(self.history_manager.hist_file),
891 892 **self.history_manager.connection_options,
892 893 )
893 894 while True:
894 895 self.history_manager.save_flag.wait()
895 896 if self.stop_now:
896 897 self.db.close()
897 898 return
898 899 self.history_manager.save_flag.clear()
899 900 self.history_manager.writeout_cache(self.db)
900 901 except Exception as e:
901 902 print(("The history saving thread hit an unexpected error (%s). "
902 903 "History will not be written to the database.") % repr(e))
903 904
904 905 def stop(self):
905 906 """This can be called from the main thread to safely stop this thread.
906 907
907 908 Note that it does not attempt to write out remaining history before
908 909 exiting. That should be done by calling the HistoryManager's
909 910 end_session method."""
910 911 self.stop_now = True
911 912 self.history_manager.save_flag.set()
912 913 self.join()
913 914
914 915
915 916 # To match, e.g. ~5/8-~2/3
916 917 range_re = re.compile(r"""
917 918 ((?P<startsess>~?\d+)/)?
918 919 (?P<start>\d+)?
919 920 ((?P<sep>[\-:])
920 921 ((?P<endsess>~?\d+)/)?
921 922 (?P<end>\d+))?
922 923 $""", re.VERBOSE)
923 924
924 925
925 926 def extract_hist_ranges(ranges_str):
926 927 """Turn a string of history ranges into 3-tuples of (session, start, stop).
927 928
928 929 An empty string results in `[(0, 1, None)]`, i.e. "everything from the
929 930 current session".
930 931
931 932 Examples
932 933 --------
933 934 >>> list(extract_hist_ranges("~8/5-~7/4 2"))
934 935 [(-8, 5, None), (-7, 1, 5), (0, 2, 3)]
935 936 """
936 937 if ranges_str == "":
937 938 yield (0, 1, None) # Everything from current session
938 939 return
939 940
940 941 for range_str in ranges_str.split():
941 942 rmatch = range_re.match(range_str)
942 943 if not rmatch:
943 944 continue
944 945 start = rmatch.group("start")
945 946 if start:
946 947 start = int(start)
947 948 end = rmatch.group("end")
948 949 # If no end specified, get (a, a + 1)
949 950 end = int(end) if end else start + 1
950 951 else: # start not specified
951 952 if not rmatch.group('startsess'): # no startsess
952 953 continue
953 954 start = 1
954 955 end = None # provide the entire session hist
955 956
956 957 if rmatch.group("sep") == "-": # 1-3 == 1:4 --> [1, 2, 3]
957 958 end += 1
958 959 startsess = rmatch.group("startsess") or "0"
959 960 endsess = rmatch.group("endsess") or startsess
960 961 startsess = int(startsess.replace("~","-"))
961 962 endsess = int(endsess.replace("~","-"))
962 963 assert endsess >= startsess, "end session must not be earlier than start session"
963 964
964 965 if endsess == startsess:
965 966 yield (startsess, start, end)
966 967 continue
967 968 # Multiple sessions in one range:
968 969 yield (startsess, start, None)
969 970 for sess in range(startsess+1, endsess):
970 971 yield (sess, 1, None)
971 972 yield (endsess, 1, end)
972 973
973 974
974 975 def _format_lineno(session, line):
975 976 """Helper function to format line numbers properly."""
976 977 if session == 0:
977 978 return str(line)
978 979 return "%s#%s" % (session, line)
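# For illustration:
#
#     _format_lineno(0, 12)  # -> '12'   (current session)
#     _format_lineno(3, 12)  # -> '3#12' (line 12 of session 3)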
@@ -1,3955 +1,3963 b''
1 1 # -*- coding: utf-8 -*-
2 2 """Main IPython class."""
3 3
4 4 #-----------------------------------------------------------------------------
5 5 # Copyright (C) 2001 Janko Hauser <jhauser@zscout.de>
6 6 # Copyright (C) 2001-2007 Fernando Perez. <fperez@colorado.edu>
7 7 # Copyright (C) 2008-2011 The IPython Development Team
8 8 #
9 9 # Distributed under the terms of the BSD License. The full license is in
10 10 # the file COPYING, distributed as part of this software.
11 11 #-----------------------------------------------------------------------------
12 12
13 13
14 14 import abc
15 15 import ast
16 16 import atexit
17 17 import bdb
18 18 import builtins as builtin_mod
19 19 import functools
20 20 import inspect
21 21 import os
22 22 import re
23 23 import runpy
24 24 import shutil
25 25 import subprocess
26 26 import sys
27 27 import tempfile
28 28 import traceback
29 29 import types
30 30 import warnings
31 31 from ast import stmt
32 32 from io import open as io_open
33 33 from logging import error
34 34 from pathlib import Path
35 35 from typing import Callable
36 36 from typing import List as ListType, Dict as DictType, Any as AnyType
37 37 from typing import Optional, Sequence, Tuple
38 38 from warnings import warn
39 39
40 40 try:
41 41 from pickleshare import PickleShareDB
42 42 except ModuleNotFoundError:
43 43
44 44 class PickleShareDB: # type: ignore [no-redef]
45 45 _mock = True
46 46
47 47 def __init__(self, path):
48 48 pass
49 49
50 50 def get(self, key, default):
51 51 warn(
52 52 f"using {key} requires you to install the `pickleshare` library.",
53 53 stacklevel=2,
54 54 )
55 55 return default
56 56
57 57 def __setitem__(self, key, value):
58 58 warn(
59 59 f"using {key} requires you to install the `pickleshare` library.",
60 60 stacklevel=2,
61 61 )
62 62
63 63
64 64 from tempfile import TemporaryDirectory
65 65 from traitlets import (
66 66 Any,
67 67 Bool,
68 68 CaselessStrEnum,
69 69 Dict,
70 70 Enum,
71 71 Instance,
72 72 Integer,
73 73 List,
74 74 Type,
75 75 Unicode,
76 76 default,
77 77 observe,
78 78 validate,
79 79 )
80 80 from traitlets.config.configurable import SingletonConfigurable
81 81 from traitlets.utils.importstring import import_item
82 82
83 83 import IPython.core.hooks
84 84 from IPython.core import magic, oinspect, page, prefilter, ultratb
85 85 from IPython.core.alias import Alias, AliasManager
86 86 from IPython.core.autocall import ExitAutocall
87 87 from IPython.core.builtin_trap import BuiltinTrap
88 88 from IPython.core.compilerop import CachingCompiler
89 89 from IPython.core.debugger import InterruptiblePdb
90 90 from IPython.core.display_trap import DisplayTrap
91 91 from IPython.core.displayhook import DisplayHook
92 92 from IPython.core.displaypub import DisplayPublisher
93 93 from IPython.core.error import InputRejected, UsageError
94 94 from IPython.core.events import EventManager, available_events
95 95 from IPython.core.extensions import ExtensionManager
96 96 from IPython.core.formatters import DisplayFormatter
97 97 from IPython.core.history import HistoryManager
98 98 from IPython.core.inputtransformer2 import ESC_MAGIC, ESC_MAGIC2
99 99 from IPython.core.logger import Logger
100 100 from IPython.core.macro import Macro
101 101 from IPython.core.payload import PayloadManager
102 102 from IPython.core.prefilter import PrefilterManager
103 103 from IPython.core.profiledir import ProfileDir
104 104 from IPython.core.usage import default_banner
105 105 from IPython.display import display
106 106 from IPython.paths import get_ipython_dir
107 107 from IPython.testing.skipdoctest import skip_doctest
108 108 from IPython.utils import PyColorize, io, openpy, py3compat
109 109 from IPython.utils.decorators import undoc
110 110 from IPython.utils.io import ask_yes_no
111 111 from IPython.utils.ipstruct import Struct
112 112 from IPython.utils.path import ensure_dir_exists, get_home_dir, get_py_filename
113 113 from IPython.utils.process import getoutput, system
114 114 from IPython.utils.strdispatch import StrDispatch
115 115 from IPython.utils.syspathcontext import prepended_to_syspath
116 116 from IPython.utils.text import DollarFormatter, LSString, SList, format_screen
117 117 from IPython.core.oinspect import OInfo
118 118
119 119
120 120 sphinxify: Optional[Callable]
121 121
122 122 try:
123 123 import docrepr.sphinxify as sphx
124 124
125 125 def sphinxify(oinfo):
126 126 wrapped_docstring = sphx.wrap_main_docstring(oinfo)
127 127
128 128 def sphinxify_docstring(docstring):
129 129 with TemporaryDirectory() as dirname:
130 130 return {
131 131 "text/html": sphx.sphinxify(wrapped_docstring, dirname),
132 132 "text/plain": docstring,
133 133 }
134 134
135 135 return sphinxify_docstring
136 136 except ImportError:
137 137 sphinxify = None
138 138
139 139 if sys.version_info[:2] < (3, 11):
140 140 from exceptiongroup import BaseExceptionGroup
141 141
142 142 class ProvisionalWarning(DeprecationWarning):
143 143 """
144 144 Warning class for unstable features
145 145 """
146 146 pass
147 147
148 148 from ast import Module
149 149
150 150 _assign_nodes = (ast.AugAssign, ast.AnnAssign, ast.Assign)
151 151 _single_targets_nodes = (ast.AugAssign, ast.AnnAssign)
152 152
153 153 #-----------------------------------------------------------------------------
154 154 # Await Helpers
155 155 #-----------------------------------------------------------------------------
156 156
157 157 # we still need to run things using the asyncio eventloop, but there is no
158 158 # async integration
159 159 from .async_helpers import (
160 160 _asyncio_runner,
161 161 _curio_runner,
162 162 _pseudo_sync_runner,
163 163 _should_be_async,
164 164 _trio_runner,
165 165 )
166 166
167 167 #-----------------------------------------------------------------------------
168 168 # Globals
169 169 #-----------------------------------------------------------------------------
170 170
171 171 # compiled regexps for autoindent management
172 172 dedent_re = re.compile(r'^\s+raise|^\s+return|^\s+pass')
173 173
174 174 #-----------------------------------------------------------------------------
175 175 # Utilities
176 176 #-----------------------------------------------------------------------------
177 177
178 178
179 179 def is_integer_string(s: str):
180 180 """
181 181 Variant of "str.isnumeric()" that allows negative values and other ints.
182 182 """
183 183 try:
184 184 int(s)
185 185 return True
186 186 except ValueError:
187 187 return False
188 188 raise ValueError("Unexpected error")
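# For illustration:
#
#     is_integer_string("-3")   # -> True
#     is_integer_string("3.5")  # -> False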
189 189
190 190
191 191 @undoc
192 192 def softspace(file, newvalue):
193 193 """Copied from code.py, to remove the dependency"""
194 194
195 195 oldvalue = 0
196 196 try:
197 197 oldvalue = file.softspace
198 198 except AttributeError:
199 199 pass
200 200 try:
201 201 file.softspace = newvalue
202 202 except (AttributeError, TypeError):
203 203 # "attribute-less object" or "read-only attributes"
204 204 pass
205 205 return oldvalue
206 206
207 207 @undoc
208 208 def no_op(*a, **kw):
209 209 pass
210 210
211 211
212 212 class SpaceInInput(Exception): pass
213 213
214 214
215 215 class SeparateUnicode(Unicode):
216 216 r"""A Unicode subclass to validate separate_in, separate_out, etc.
217 217
218 218 This is a Unicode based trait that converts '0'->'' and ``'\\n'->'\n'``.
219 219 """
220 220
221 221 def validate(self, obj, value):
222 222 if value == '0': value = ''
223 223 value = value.replace('\\n','\n')
224 224 return super(SeparateUnicode, self).validate(obj, value)
225 225
226 226
227 227 @undoc
228 228 class DummyMod(object):
229 229 """A dummy module used for IPython's interactive module when
230 230 a namespace must be assigned to the module's __dict__."""
231 231 __spec__ = None
232 232
233 233
234 234 class ExecutionInfo(object):
235 235 """The arguments used for a call to :meth:`InteractiveShell.run_cell`
236 236
237 237 Stores information about what is going to happen.
238 238 """
239 239 raw_cell = None
240 240 store_history = False
241 241 silent = False
242 242 shell_futures = True
243 243 cell_id = None
244 244
245 245 def __init__(self, raw_cell, store_history, silent, shell_futures, cell_id):
246 246 self.raw_cell = raw_cell
247 247 self.store_history = store_history
248 248 self.silent = silent
249 249 self.shell_futures = shell_futures
250 250 self.cell_id = cell_id
251 251
252 252 def __repr__(self):
253 253 name = self.__class__.__qualname__
254 254 raw_cell = (
255 255 (self.raw_cell[:50] + "..") if len(self.raw_cell) > 50 else self.raw_cell
256 256 )
257 257 return (
258 258 '<%s object at %x, raw_cell="%s" store_history=%s silent=%s shell_futures=%s cell_id=%s>'
259 259 % (
260 260 name,
261 261 id(self),
262 262 raw_cell,
263 263 self.store_history,
264 264 self.silent,
265 265 self.shell_futures,
266 266 self.cell_id,
267 267 )
268 268 )
269 269
270 270
271 271 class ExecutionResult(object):
272 272 """The result of a call to :meth:`InteractiveShell.run_cell`
273 273
274 274 Stores information about what took place.
275 275 """
276 276 execution_count = None
277 277 error_before_exec = None
278 278 error_in_exec: Optional[BaseException] = None
279 279 info = None
280 280 result = None
281 281
282 282 def __init__(self, info):
283 283 self.info = info
284 284
285 285 @property
286 286 def success(self):
287 287 return (self.error_before_exec is None) and (self.error_in_exec is None)
288 288
289 289 def raise_error(self):
290 290 """Reraises error if `success` is `False`, otherwise does nothing"""
291 291 if self.error_before_exec is not None:
292 292 raise self.error_before_exec
293 293 if self.error_in_exec is not None:
294 294 raise self.error_in_exec
295 295
296 296 def __repr__(self):
297 297 name = self.__class__.__qualname__
298 298 return '<%s object at %x, execution_count=%s error_before_exec=%s error_in_exec=%s info=%s result=%s>' %\
299 299 (name, id(self), self.execution_count, self.error_before_exec, self.error_in_exec, repr(self.info), repr(self.result))
300 300
301 301 @functools.wraps(io_open)
302 302 def _modified_open(file, *args, **kwargs):
303 303 if file in {0, 1, 2}:
304 304 raise ValueError(
305 305 f"IPython won't let you open fd={file} by default "
306 306 "as it is likely to crash IPython. If you know what you are doing, "
307 307 "you can use builtins' open."
308 308 )
309 309
310 310 return io_open(file, *args, **kwargs)
311 311
312 312 class InteractiveShell(SingletonConfigurable):
313 313 """An enhanced, interactive shell for Python."""
314 314
315 315 _instance = None
316 316
317 ast_transformers = List([], help=
318 """
317 ast_transformers: List[ast.NodeTransformer] = List(
318 [],
319 help="""
319 320 A list of ast.NodeTransformer subclass instances, which will be applied
320 321 to user input before code is run.
321 """
322 """,
322 323 ).tag(config=True)
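# A minimal sketch of a transformer that could be appended to this list; the
# class name is illustrative and ``get_ipython()`` is the usual accessor.
#
#     import ast
#
#     class NegateIntegers(ast.NodeTransformer):
#         """Replace every integer literal with its negation."""
#
#         def visit_Constant(self, node):
#             if isinstance(node.value, int) and not isinstance(node.value, bool):
#                 return ast.copy_location(ast.Constant(value=-node.value), node)
#             return node
#
#     get_ipython().ast_transformers.append(NegateIntegers())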
323 324
324 325 autocall = Enum((0,1,2), default_value=0, help=
325 326 """
326 327 Make IPython automatically call any callable object even if you didn't
327 328 type explicit parentheses. For example, 'str 43' becomes 'str(43)'
328 329 automatically. The value can be '0' to disable the feature, '1' for
329 330 'smart' autocall, where it is not applied if there are no more
330 331 arguments on the line, and '2' for 'full' autocall, where all callable
331 332 objects are automatically called (even if no arguments are present).
332 333 """
333 334 ).tag(config=True)
334 335
335 336 autoindent = Bool(True, help=
336 337 """
337 338 Autoindent IPython code entered interactively.
338 339 """
339 340 ).tag(config=True)
340 341
341 342 autoawait = Bool(True, help=
342 343 """
343 344 Automatically run await statement in the top level repl.
344 345 """
345 346 ).tag(config=True)
346 347
347 348 loop_runner_map ={
348 349 'asyncio':(_asyncio_runner, True),
349 350 'curio':(_curio_runner, True),
350 351 'trio':(_trio_runner, True),
351 352 'sync': (_pseudo_sync_runner, False)
352 353 }
353 354
354 355 loop_runner = Any(default_value="IPython.core.interactiveshell._asyncio_runner",
355 356 allow_none=True,
356 357 help="""Select the loop runner that will be used to execute top-level asynchronous code"""
357 358 ).tag(config=True)
358 359
359 360 @default('loop_runner')
360 361 def _default_loop_runner(self):
361 362 return import_item("IPython.core.interactiveshell._asyncio_runner")
362 363
363 364 @validate('loop_runner')
364 365 def _import_runner(self, proposal):
365 366 if isinstance(proposal.value, str):
366 367 if proposal.value in self.loop_runner_map:
367 368 runner, autoawait = self.loop_runner_map[proposal.value]
368 369 self.autoawait = autoawait
369 370 return runner
370 371 runner = import_item(proposal.value)
371 372 if not callable(runner):
372 373 raise ValueError('loop_runner must be callable')
373 374 return runner
374 375 if not callable(proposal.value):
375 376 raise ValueError('loop_runner must be callable')
376 377 return proposal.value
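# A sketch of selecting a runner by one of the shorthand names in
# loop_runner_map, e.g. from ipython_config.py; the choice of 'trio' is
# illustrative.
#
#     c.InteractiveShell.loop_runner = "trio"   # also enables autoawait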
377 378
378 379 automagic = Bool(True, help=
379 380 """
380 381 Enable magic commands to be called without the leading %.
381 382 """
382 383 ).tag(config=True)
383 384
384 385 banner1 = Unicode(default_banner,
385 386 help="""The part of the banner to be printed before the profile"""
386 387 ).tag(config=True)
387 388 banner2 = Unicode('',
388 389 help="""The part of the banner to be printed after the profile"""
389 390 ).tag(config=True)
390 391
391 392 cache_size = Integer(1000, help=
392 393 """
393 394 Set the size of the output cache. The default is 1000; you can
394 395 change it permanently in your config file. Setting it to 0 completely
395 396 disables the caching system, and the minimum accepted value is 3 (if
396 397 you provide a value less than 3, it is reset to 0 and a warning is
397 398 issued). This lower limit exists because otherwise you would spend
398 399 more time re-flushing a cache that is too small than doing real work.
399 400 """
400 401 ).tag(config=True)
401 402 color_info = Bool(True, help=
402 403 """
403 404 Use colors for displaying information about objects. Because this
404 405 information is passed through a pager (like 'less'), and some pagers
405 406 get confused with color codes, this capability can be turned off.
406 407 """
407 408 ).tag(config=True)
408 409 colors = CaselessStrEnum(('Neutral', 'NoColor','LightBG','Linux'),
409 410 default_value='Neutral',
410 411 help="Set the color scheme (NoColor, Neutral, Linux, or LightBG)."
411 412 ).tag(config=True)
412 413 debug = Bool(False).tag(config=True)
413 414 disable_failing_post_execute = Bool(False,
414 415 help="Don't call post-execute functions that have failed in the past."
415 416 ).tag(config=True)
416 417 display_formatter = Instance(DisplayFormatter, allow_none=True)
417 418 displayhook_class = Type(DisplayHook)
418 419 display_pub_class = Type(DisplayPublisher)
419 420 compiler_class = Type(CachingCompiler)
420 421 inspector_class = Type(
421 422 oinspect.Inspector, help="Class to use to instantiate the shell inspector"
422 423 ).tag(config=True)
423 424
424 425 sphinxify_docstring = Bool(False, help=
425 426 """
426 427 Enables rich html representation of docstrings. (This requires the
427 428 docrepr module).
428 429 """).tag(config=True)
429 430
430 431 @observe("sphinxify_docstring")
431 432 def _sphinxify_docstring_changed(self, change):
432 433 if change['new']:
433 434 warn("`sphinxify_docstring` is provisional since IPython 5.0 and might change in future versions." , ProvisionalWarning)
434 435
435 436 enable_html_pager = Bool(False, help=
436 437 """
437 438 (Provisional API) enables html representation in mime bundles sent
438 439 to pagers.
439 440 """).tag(config=True)
440 441
441 442 @observe("enable_html_pager")
442 443 def _enable_html_pager_changed(self, change):
443 444 if change['new']:
444 445 warn("`enable_html_pager` is provisional since IPython 5.0 and might change in future versions.", ProvisionalWarning)
445 446
446 447 data_pub_class = None
447 448
448 449 exit_now = Bool(False)
449 450 exiter = Instance(ExitAutocall)
450 451 @default('exiter')
451 452 def _exiter_default(self):
452 453 return ExitAutocall(self)
453 454 # Monotonically increasing execution counter
454 455 execution_count = Integer(1)
455 456 filename = Unicode("<ipython console>")
456 457 ipython_dir= Unicode('').tag(config=True) # Set to get_ipython_dir() in __init__
457 458
458 459 # Used to transform cells before running them, and check whether code is complete
459 460 input_transformer_manager = Instance('IPython.core.inputtransformer2.TransformerManager',
460 461 ())
461 462
462 463 @property
463 464 def input_transformers_cleanup(self):
464 465 return self.input_transformer_manager.cleanup_transforms
465 466
466 467 input_transformers_post = List([],
467 468 help="A list of string input transformers, to be applied after IPython's "
468 469 "own input transformations."
469 470 )
470 471
471 472 @property
472 473 def input_splitter(self):
473 474 """Make this available for backward compatibility (pre-7.0 release) with existing code.
474 475
475 476 For example, ipykernel currently uses
476 477 `shell.input_splitter.check_complete`
477 478 """
478 479 from warnings import warn
479 480 warn("`input_splitter` is deprecated since IPython 7.0, prefer `input_transformer_manager`.",
480 481 DeprecationWarning, stacklevel=2
481 482 )
482 483 return self.input_transformer_manager
483 484
484 485 logstart = Bool(False, help=
485 486 """
486 487 Start logging to the default log file in overwrite mode.
487 488 Use `logappend` to specify a log file to **append** logs to.
488 489 """
489 490 ).tag(config=True)
490 491 logfile = Unicode('', help=
491 492 """
492 493 The name of the logfile to use.
493 494 """
494 495 ).tag(config=True)
495 496 logappend = Unicode('', help=
496 497 """
497 498 Start logging to the given file in append mode.
498 499 Use `logfile` to specify a log file to **overwrite** logs to.
499 500 """
500 501 ).tag(config=True)
501 502 object_info_string_level = Enum((0,1,2), default_value=0,
502 503 ).tag(config=True)
503 504 pdb = Bool(False, help=
504 505 """
505 506 Automatically call the pdb debugger after every exception.
506 507 """
507 508 ).tag(config=True)
508 509 display_page = Bool(False,
509 510 help="""If True, anything that would be passed to the pager
510 511 will be displayed as regular output instead."""
511 512 ).tag(config=True)
512 513
513 514
514 515 show_rewritten_input = Bool(True,
515 516 help="Show rewritten input, e.g. for autocall."
516 517 ).tag(config=True)
517 518
518 519 quiet = Bool(False).tag(config=True)
519 520
520 521 history_length = Integer(10000,
521 522 help='Total length of command history'
522 523 ).tag(config=True)
523 524
524 525 history_load_length = Integer(1000, help=
525 526 """
526 527 The number of saved history entries to be loaded
527 528 into the history buffer at startup.
528 529 """
529 530 ).tag(config=True)
530 531
531 532 ast_node_interactivity = Enum(['all', 'last', 'last_expr', 'none', 'last_expr_or_assign'],
532 533 default_value='last_expr',
533 534 help="""
534 535 'all', 'last', 'last_expr', 'none' or 'last_expr_or_assign', specifying
535 536 which nodes should be run interactively (displaying output from expressions).
536 537 """
537 538 ).tag(config=True)
538 539
539 540 warn_venv = Bool(
540 541 True,
541 542 help="Warn if running in a virtual environment with no IPython installed (so IPython from the global environment is used).",
542 543 ).tag(config=True)
543 544
544 545 # TODO: this part of prompt management should be moved to the frontends.
545 546 # Use custom TraitTypes that convert '0'->'' and '\\n'->'\n'
546 547 separate_in = SeparateUnicode('\n').tag(config=True)
547 548 separate_out = SeparateUnicode('').tag(config=True)
548 549 separate_out2 = SeparateUnicode('').tag(config=True)
549 550 wildcards_case_sensitive = Bool(True).tag(config=True)
550 551 xmode = CaselessStrEnum(('Context', 'Plain', 'Verbose', 'Minimal'),
551 552 default_value='Context',
552 553 help="Switch modes for the IPython exception handlers."
553 554 ).tag(config=True)
554 555
555 556 # Subcomponents of InteractiveShell
556 alias_manager = Instance('IPython.core.alias.AliasManager', allow_none=True)
557 prefilter_manager = Instance('IPython.core.prefilter.PrefilterManager', allow_none=True)
558 builtin_trap = Instance('IPython.core.builtin_trap.BuiltinTrap', allow_none=True)
559 display_trap = Instance('IPython.core.display_trap.DisplayTrap', allow_none=True)
560 extension_manager = Instance('IPython.core.extensions.ExtensionManager', allow_none=True)
561 payload_manager = Instance('IPython.core.payload.PayloadManager', allow_none=True)
562 history_manager = Instance('IPython.core.history.HistoryAccessorBase', allow_none=True)
563 magics_manager = Instance('IPython.core.magic.MagicsManager', allow_none=True)
557 alias_manager = Instance("IPython.core.alias.AliasManager", allow_none=True)
558 prefilter_manager = Instance(
559 "IPython.core.prefilter.PrefilterManager", allow_none=True
560 )
561 builtin_trap = Instance("IPython.core.builtin_trap.BuiltinTrap")
562 display_trap = Instance("IPython.core.display_trap.DisplayTrap")
563 extension_manager = Instance(
564 "IPython.core.extensions.ExtensionManager", allow_none=True
565 )
566 payload_manager = Instance("IPython.core.payload.PayloadManager", allow_none=True)
567 history_manager = Instance(
568 "IPython.core.history.HistoryAccessorBase", allow_none=True
569 )
570 magics_manager = Instance("IPython.core.magic.MagicsManager")
564 571
565 572 profile_dir = Instance('IPython.core.application.ProfileDir', allow_none=True)
566 573 @property
567 574 def profile(self):
568 575 if self.profile_dir is not None:
569 576 name = os.path.basename(self.profile_dir.location)
570 577 return name.replace('profile_','')
571 578
572 579
573 580 # Private interface
574 581 _post_execute = Dict()
575 582
576 583 # Tracks any GUI loop loaded for pylab
577 584 pylab_gui_select = None
578 585
579 586 last_execution_succeeded = Bool(True, help='Whether the last executed command succeeded')
580 587
581 588 last_execution_result = Instance('IPython.core.interactiveshell.ExecutionResult', help='Result of executing the last command', allow_none=True)
582 589
583 590 def __init__(self, ipython_dir=None, profile_dir=None,
584 591 user_module=None, user_ns=None,
585 592 custom_exceptions=((), None), **kwargs):
586 593 # This is where traits with a config_key argument are updated
587 594 # from the values on config.
588 595 super(InteractiveShell, self).__init__(**kwargs)
589 596 if 'PromptManager' in self.config:
590 597 warn('As of IPython 5.0 `PromptManager` config will have no effect'
591 598 ' and has been replaced by TerminalInteractiveShell.prompts_class')
592 599 self.configurables = [self]
593 600
594 601 # These are relatively independent and stateless
595 602 self.init_ipython_dir(ipython_dir)
596 603 self.init_profile_dir(profile_dir)
597 604 self.init_instance_attrs()
598 605 self.init_environment()
599 606
600 607 # Check if we're in a virtualenv, and set up sys.path.
601 608 self.init_virtualenv()
602 609
603 610 # Create namespaces (user_ns, user_global_ns, etc.)
604 611 self.init_create_namespaces(user_module, user_ns)
605 612 # This has to be done after init_create_namespaces because it uses
606 613 # something in self.user_ns, but before init_sys_modules, which
607 614 # is the first thing to modify sys.
608 615 # TODO: When we override sys.stdout and sys.stderr before this class
609 616 # is created, we are saving the overridden ones here. Not sure if this
610 617 # is what we want to do.
611 618 self.save_sys_module_state()
612 619 self.init_sys_modules()
613 620
614 621 # While we're trying to have each part of the code directly access what
615 622 # it needs without keeping redundant references to objects, we have too
616 623 # much legacy code that expects ip.db to exist.
617 624 self.db = PickleShareDB(os.path.join(self.profile_dir.location, 'db'))
618 625
619 626 self.init_history()
620 627 self.init_encoding()
621 628 self.init_prefilter()
622 629
623 630 self.init_syntax_highlighting()
624 631 self.init_hooks()
625 632 self.init_events()
626 633 self.init_pushd_popd_magic()
627 634 self.init_user_ns()
628 635 self.init_logger()
629 636 self.init_builtins()
630 637
631 638 # The following was in post_config_initialization
632 639 self.init_inspector()
633 640 self.raw_input_original = input
634 641 self.init_completer()
635 642 # TODO: init_io() needs to happen before init_traceback handlers
636 643 # because the traceback handlers hardcode the stdout/stderr streams.
637 644 # This logic is in debugger.Pdb and should eventually be changed.
638 645 self.init_io()
639 646 self.init_traceback_handlers(custom_exceptions)
640 647 self.init_prompts()
641 648 self.init_display_formatter()
642 649 self.init_display_pub()
643 650 self.init_data_pub()
644 651 self.init_displayhook()
645 652 self.init_magics()
646 653 self.init_alias()
647 654 self.init_logstart()
648 655 self.init_pdb()
649 656 self.init_extension_manager()
650 657 self.init_payload()
651 658 self.events.trigger('shell_initialized', self)
652 659 atexit.register(self.atexit_operations)
653 660
654 661 # The trio runner is used for running Trio in the foreground thread. It
655 662 # is different from `_trio_runner(async_fn)` in `async_helpers.py`
656 663 # which calls `trio.run()` for every cell. This runner runs all cells
657 664 # inside a single Trio event loop. If used, it is set from
658 665 # `ipykernel.kernelapp`.
659 666 self.trio_runner = None
660 667
661 668 def get_ipython(self):
662 669 """Return the currently running IPython instance."""
663 670 return self
664 671
665 672 #-------------------------------------------------------------------------
666 673 # Trait changed handlers
667 674 #-------------------------------------------------------------------------
668 675 @observe('ipython_dir')
669 676 def _ipython_dir_changed(self, change):
670 677 ensure_dir_exists(change['new'])
671 678
672 679 def set_autoindent(self,value=None):
673 680 """Set the autoindent flag.
674 681
675 682 If called with no arguments, it acts as a toggle."""
676 683 if value is None:
677 684 self.autoindent = not self.autoindent
678 685 else:
679 686 self.autoindent = value
680 687
681 688 def set_trio_runner(self, tr):
682 689 self.trio_runner = tr
683 690
684 691 #-------------------------------------------------------------------------
685 692 # init_* methods called by __init__
686 693 #-------------------------------------------------------------------------
687 694
688 695 def init_ipython_dir(self, ipython_dir):
689 696 if ipython_dir is not None:
690 697 self.ipython_dir = ipython_dir
691 698 return
692 699
693 700 self.ipython_dir = get_ipython_dir()
694 701
695 702 def init_profile_dir(self, profile_dir):
696 703 if profile_dir is not None:
697 704 self.profile_dir = profile_dir
698 705 return
699 706 self.profile_dir = ProfileDir.create_profile_dir_by_name(
700 707 self.ipython_dir, "default"
701 708 )
702 709
703 710 def init_instance_attrs(self):
704 711 self.more = False
705 712
706 713 # command compiler
707 714 self.compile = self.compiler_class()
708 715
709 716 # Make an empty namespace, which extension writers can rely on to both
710 717 # exist and NEVER be used by ipython itself. This gives them a
711 718 # convenient location for storing additional information and state
712 719 # their extensions may require, without fear of collisions with other
713 720 # ipython names that may develop later.
714 721 self.meta = Struct()
715 722
716 723 # Temporary files used for various purposes. Deleted at exit.
717 724 # The files here are stored with Path from Pathlib
718 725 self.tempfiles = []
719 726 self.tempdirs = []
720 727
721 728 # keep track of where we started running (mainly for crash post-mortem)
722 729 # This is not being used anywhere currently.
723 730 self.starting_dir = os.getcwd()
724 731
725 732 # Indentation management
726 733 self.indent_current_nsp = 0
727 734
728 735 # Dict to track post-execution functions that have been registered
729 736 self._post_execute = {}
730 737
731 738 def init_environment(self):
732 739 """Any changes we need to make to the user's environment."""
733 740 pass
734 741
735 742 def init_encoding(self):
736 743 # Get system encoding at startup time. Certain terminals (like Emacs
737 744 # under Win32) have it set to None, and we need to have a known valid
738 745 # encoding to use in the raw_input() method.
739 746 try:
740 747 self.stdin_encoding = sys.stdin.encoding or 'ascii'
741 748 except AttributeError:
742 749 self.stdin_encoding = 'ascii'
743 750
744 751
745 752 @observe('colors')
746 753 def init_syntax_highlighting(self, changes=None):
747 754 # Python source parser/formatter for syntax highlighting
748 755 pyformat = PyColorize.Parser(style=self.colors, parent=self).format
749 756 self.pycolorize = lambda src: pyformat(src,'str')
750 757
751 758 def refresh_style(self):
752 759 # No-op here, used in subclass
753 760 pass
754 761
755 762 def init_pushd_popd_magic(self):
756 763 # for pushd/popd management
757 764 self.home_dir = get_home_dir()
758 765
759 766 self.dir_stack = []
760 767
761 768 def init_logger(self):
762 769 self.logger = Logger(self.home_dir, logfname='ipython_log.py',
763 770 logmode='rotate')
764 771
765 772 def init_logstart(self):
766 773 """Initialize logging in case it was requested at the command line.
767 774 """
768 775 if self.logappend:
769 776 self.magic('logstart %s append' % self.logappend)
770 777 elif self.logfile:
771 778 self.magic('logstart %s' % self.logfile)
772 779 elif self.logstart:
773 780 self.magic('logstart')
774 781
775 782
776 783 def init_builtins(self):
777 784 # A single, static flag that we set to True. Its presence indicates
778 785 # that an IPython shell has been created, and we make no attempts at
779 786 # removing on exit or representing the existence of more than one
780 787 # IPython at a time.
781 788 builtin_mod.__dict__['__IPYTHON__'] = True
782 789 builtin_mod.__dict__['display'] = display
783 790
784 791 self.builtin_trap = BuiltinTrap(shell=self)
785 792
786 793 @observe('colors')
787 794 def init_inspector(self, changes=None):
788 795 # Object inspector
789 796 self.inspector = self.inspector_class(
790 797 oinspect.InspectColors,
791 798 PyColorize.ANSICodeColors,
792 799 self.colors,
793 800 self.object_info_string_level,
794 801 )
795 802
796 803 def init_io(self):
797 804 # implemented in subclasses, TerminalInteractiveShell does call
798 805 # colorama.init().
799 806 pass
800 807
801 808 def init_prompts(self):
802 809 # Set system prompts, so that scripts can decide if they are running
803 810 # interactively.
804 811 sys.ps1 = 'In : '
805 812 sys.ps2 = '...: '
806 813 sys.ps3 = 'Out: '
807 814
808 815 def init_display_formatter(self):
809 816 self.display_formatter = DisplayFormatter(parent=self)
810 817 self.configurables.append(self.display_formatter)
811 818
812 819 def init_display_pub(self):
813 820 self.display_pub = self.display_pub_class(parent=self, shell=self)
814 821 self.configurables.append(self.display_pub)
815 822
816 823 def init_data_pub(self):
817 824 if not self.data_pub_class:
818 825 self.data_pub = None
819 826 return
820 827 self.data_pub = self.data_pub_class(parent=self)
821 828 self.configurables.append(self.data_pub)
822 829
823 830 def init_displayhook(self):
824 831 # Initialize displayhook, set in/out prompts and printing system
825 832 self.displayhook = self.displayhook_class(
826 833 parent=self,
827 834 shell=self,
828 835 cache_size=self.cache_size,
829 836 )
830 837 self.configurables.append(self.displayhook)
831 838 # This is a context manager that installs/removes the displayhook at
832 839 # the appropriate time.
833 840 self.display_trap = DisplayTrap(hook=self.displayhook)
834 841
835 842 @staticmethod
836 843 def get_path_links(p: Path):
837 844 """Gets path links including all symlinks
838 845
839 846 Examples
840 847 --------
841 848 In [1]: from IPython.core.interactiveshell import InteractiveShell
842 849
843 850 In [2]: import sys, pathlib
844 851
845 852 In [3]: paths = InteractiveShell.get_path_links(pathlib.Path(sys.executable))
846 853
847 854 In [4]: len(paths) == len(set(paths))
848 855 Out[4]: True
849 856
850 857 In [5]: bool(paths)
851 858 Out[5]: True
852 859 """
853 860 paths = [p]
854 861 while p.is_symlink():
855 862 new_path = Path(os.readlink(p))
856 863 if not new_path.is_absolute():
857 864 new_path = p.parent / new_path
858 865 p = new_path
859 866 paths.append(p)
860 867 return paths
861 868
862 869 def init_virtualenv(self):
863 870 """Add the current virtualenv to sys.path so the user can import modules from it.
864 871 This isn't perfect: it doesn't use the Python interpreter with which the
865 872 virtualenv was built, and it ignores the --no-site-packages option. A
866 873 warning will appear suggesting the user installs IPython in the
867 874 virtualenv, but for many cases, it probably works well enough.
868 875
869 876 Adapted from code snippets online.
870 877
871 878 http://blog.ufsoft.org/2009/1/29/ipython-and-virtualenv
872 879 """
873 880 if 'VIRTUAL_ENV' not in os.environ:
874 881 # Not in a virtualenv
875 882 return
876 883 elif os.environ["VIRTUAL_ENV"] == "":
877 884 warn("Virtual env path set to '', please check if this is intended.")
878 885 return
879 886
880 887 p = Path(sys.executable)
881 888 p_venv = Path(os.environ["VIRTUAL_ENV"])
882 889
883 890 # fallback venv detection:
884 891 # stdlib venv may symlink sys.executable, so we can't use realpath.
885 892 # but others can symlink *to* the venv Python, so we can't just use sys.executable.
886 893 # So we just check every item in the symlink tree (generally <= 3)
887 894 paths = self.get_path_links(p)
888 895
889 896 # In Cygwin paths like "c:\..." and '\cygdrive\c\...' are possible
890 897 if p_venv.parts[1] == "cygdrive":
891 898 drive_name = p_venv.parts[2]
892 899 p_venv = (drive_name + ":/") / Path(*p_venv.parts[3:])
893 900
894 901 if any(p_venv == p.parents[1] for p in paths):
895 902 # Our exe is inside or has access to the virtualenv, don't need to do anything.
896 903 return
897 904
898 905 if sys.platform == "win32":
899 906 virtual_env = str(Path(os.environ["VIRTUAL_ENV"], "Lib", "site-packages"))
900 907 else:
901 908 virtual_env_path = Path(
902 909 os.environ["VIRTUAL_ENV"], "lib", "python{}.{}", "site-packages"
903 910 )
904 911 p_ver = sys.version_info[:2]
905 912
906 913 # Predict version from py[thon]-x.x in the $VIRTUAL_ENV
907 914 re_m = re.search(r"\bpy(?:thon)?([23])\.(\d+)\b", os.environ["VIRTUAL_ENV"])
908 915 if re_m:
909 916 predicted_path = Path(str(virtual_env_path).format(*re_m.groups()))
910 917 if predicted_path.exists():
911 918 p_ver = re_m.groups()
912 919
913 920 virtual_env = str(virtual_env_path).format(*p_ver)
914 921 if self.warn_venv:
915 922 warn(
916 923 "Attempting to work in a virtualenv. If you encounter problems, "
917 924 "please install IPython inside the virtualenv."
918 925 )
919 926 import site
920 927 sys.path.insert(0, virtual_env)
921 928 site.addsitedir(virtual_env)
922 929
923 930 #-------------------------------------------------------------------------
924 931 # Things related to injections into the sys module
925 932 #-------------------------------------------------------------------------
926 933
927 934 def save_sys_module_state(self):
928 935 """Save the state of hooks in the sys module.
929 936
930 937 This has to be called after self.user_module is created.
931 938 """
932 939 self._orig_sys_module_state = {'stdin': sys.stdin,
933 940 'stdout': sys.stdout,
934 941 'stderr': sys.stderr,
935 942 'excepthook': sys.excepthook}
936 943 self._orig_sys_modules_main_name = self.user_module.__name__
937 944 self._orig_sys_modules_main_mod = sys.modules.get(self.user_module.__name__)
938 945
939 946 def restore_sys_module_state(self):
940 947 """Restore the state of the sys module."""
941 948 try:
942 949 for k, v in self._orig_sys_module_state.items():
943 950 setattr(sys, k, v)
944 951 except AttributeError:
945 952 pass
946 953 # Reset what was done in self.init_sys_modules
947 954 if self._orig_sys_modules_main_mod is not None:
948 955 sys.modules[self._orig_sys_modules_main_name] = self._orig_sys_modules_main_mod
949 956
950 957 #-------------------------------------------------------------------------
951 958 # Things related to the banner
952 959 #-------------------------------------------------------------------------
953 960
954 961 @property
955 962 def banner(self):
956 963 banner = self.banner1
957 964 if self.profile and self.profile != 'default':
958 965 banner += '\nIPython profile: %s\n' % self.profile
959 966 if self.banner2:
960 967 banner += '\n' + self.banner2
961 968 return banner
962 969
963 970 def show_banner(self, banner=None):
964 971 if banner is None:
965 972 banner = self.banner
966 973 sys.stdout.write(banner)
967 974
968 975 #-------------------------------------------------------------------------
969 976 # Things related to hooks
970 977 #-------------------------------------------------------------------------
971 978
972 979 def init_hooks(self):
973 980 # hooks holds pointers used for user-side customizations
974 981 self.hooks = Struct()
975 982
976 983 self.strdispatchers = {}
977 984
978 985 # Set all default hooks, defined in the IPython.hooks module.
979 986 hooks = IPython.core.hooks
980 987 for hook_name in hooks.__all__:
981 988 # default hooks have priority 100, i.e. low; user hooks should have
982 989 # 0-100 priority
983 990 self.set_hook(hook_name, getattr(hooks, hook_name), 100)
984 991
985 992 if self.display_page:
986 993 self.set_hook('show_in_pager', page.as_hook(page.display_page), 90)
987 994
988 995 def set_hook(self, name, hook, priority=50, str_key=None, re_key=None):
989 996 """set_hook(name,hook) -> sets an internal IPython hook.
990 997
991 998 IPython exposes some of its internal API as user-modifiable hooks. By
992 999 adding your function to one of these hooks, you can modify IPython's
993 1000 behavior to call your own routines at runtime."""
994 1001
995 1002 # At some point in the future, this should validate the hook before it
996 1003 # accepts it. Probably at least check that the hook takes the number
997 1004 # of args it's supposed to.
998 1005
999 1006 f = types.MethodType(hook,self)
1000 1007
1001 1008 # check if the hook is for strdispatcher first
1002 1009 if str_key is not None:
1003 1010 sdp = self.strdispatchers.get(name, StrDispatch())
1004 1011 sdp.add_s(str_key, f, priority )
1005 1012 self.strdispatchers[name] = sdp
1006 1013 return
1007 1014 if re_key is not None:
1008 1015 sdp = self.strdispatchers.get(name, StrDispatch())
1009 1016 sdp.add_re(re.compile(re_key), f, priority )
1010 1017 self.strdispatchers[name] = sdp
1011 1018 return
1012 1019
1013 1020 dp = getattr(self.hooks, name, None)
1014 1021 if name not in IPython.core.hooks.__all__:
1015 1022 print("Warning! Hook '%s' is not one of %s" % \
1016 1023 (name, IPython.core.hooks.__all__ ))
1017 1024
1018 1025 if name in IPython.core.hooks.deprecated:
1019 1026 alternative = IPython.core.hooks.deprecated[name]
1020 1027 raise ValueError(
1021 1028 "Hook {} has been deprecated since IPython 5.0. Use {} instead.".format(
1022 1029 name, alternative
1023 1030 )
1024 1031 )
1025 1032
1026 1033 if not dp:
1027 1034 dp = IPython.core.hooks.CommandChainDispatcher()
1028 1035
1029 1036 try:
1030 1037 dp.add(f,priority)
1031 1038 except AttributeError:
1032 1039 # it was not commandchain, plain old func - replace
1033 1040 dp = f
1034 1041
1035 1042 setattr(self.hooks,name, dp)
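# Illustrative sketch of registering a user hook with set_hook(). The
# ``vim_editor`` callable is hypothetical; hooks are bound to the shell with
# types.MethodType, so they receive the shell instance as ``self`` (see
# IPython.core.hooks for the signature each named hook expects).
#
#     import subprocess
#
#     def vim_editor(self, filename, linenum=None, wait=True):
#         subprocess.call(["vim", "+%d" % (linenum or 1), filename])
#
#     get_ipython().set_hook("editor", vim_editor)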
1036 1043
1037 1044 #-------------------------------------------------------------------------
1038 1045 # Things related to events
1039 1046 #-------------------------------------------------------------------------
1040 1047
1041 1048 def init_events(self):
1042 1049 self.events = EventManager(self, available_events)
1043 1050
1044 1051 self.events.register("pre_execute", self._clear_warning_registry)
1045 1052
1046 1053 def register_post_execute(self, func):
1047 1054 """DEPRECATED: Use ip.events.register('post_run_cell', func)
1048 1055
1049 1056 Register a function for calling after code execution.
1050 1057 """
1051 1058 raise ValueError(
1052 1059 "ip.register_post_execute is deprecated since IPython 1.0, use "
1053 1060 "ip.events.register('post_run_cell', func) instead."
1054 1061 )
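# Illustrative sketch of the replacement API mentioned above. The
# ``report_success`` callback is hypothetical; current IPython passes an
# ExecutionResult object to ``post_run_cell`` callbacks.
#
#     def report_success(result):
#         print("cell finished, success =", result.success)
#
#     get_ipython().events.register("post_run_cell", report_success)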
1055 1062
1056 1063 def _clear_warning_registry(self):
1057 1064 # clear the warning registry, so that different code blocks with
1058 1065 # overlapping line number ranges don't cause spurious suppression of
1059 1066 # warnings (see gh-6611 for details)
1060 1067 if "__warningregistry__" in self.user_global_ns:
1061 1068 del self.user_global_ns["__warningregistry__"]
1062 1069
1063 1070 #-------------------------------------------------------------------------
1064 1071 # Things related to the "main" module
1065 1072 #-------------------------------------------------------------------------
1066 1073
1067 1074 def new_main_mod(self, filename, modname):
1068 1075 """Return a new 'main' module object for user code execution.
1069 1076
1070 1077 ``filename`` should be the path of the script which will be run in the
1071 1078 module. Requests with the same filename will get the same module, with
1072 1079 its namespace cleared.
1073 1080
1074 1081 ``modname`` should be the module name - normally either '__main__' or
1075 1082 the basename of the file without the extension.
1076 1083
1077 1084 When scripts are executed via %run, we must keep a reference to their
1078 1085 __main__ module around so that Python doesn't
1079 1086 clear it, rendering references to module globals useless.
1080 1087
1081 1088 This method keeps said reference in a private dict, keyed by the
1082 1089 absolute path of the script. This way, for multiple executions of the
1083 1090 same script we only keep one copy of the namespace (the last one),
1084 1091 thus preventing memory leaks from old references while allowing the
1085 1092 objects from the last execution to be accessible.
1086 1093 """
1087 1094 filename = os.path.abspath(filename)
1088 1095 try:
1089 1096 main_mod = self._main_mod_cache[filename]
1090 1097 except KeyError:
1091 1098 main_mod = self._main_mod_cache[filename] = types.ModuleType(
1092 1099 modname,
1093 1100 doc="Module created for script run in IPython")
1094 1101 else:
1095 1102 main_mod.__dict__.clear()
1096 1103 main_mod.__name__ = modname
1097 1104
1098 1105 main_mod.__file__ = filename
1099 1106 # It seems pydoc (and perhaps others) needs any module instance to
1100 1107 # implement a __nonzero__ method
1101 1108 main_mod.__nonzero__ = lambda : True
1102 1109
1103 1110 return main_mod
1104 1111
1105 1112 def clear_main_mod_cache(self):
1106 1113 """Clear the cache of main modules.
1107 1114
1108 1115 Mainly for use by utilities like %reset.
1109 1116
1110 1117 Examples
1111 1118 --------
1112 1119 In [15]: import IPython
1113 1120
1114 1121 In [16]: m = _ip.new_main_mod(IPython.__file__, 'IPython')
1115 1122
1116 1123 In [17]: len(_ip._main_mod_cache) > 0
1117 1124 Out[17]: True
1118 1125
1119 1126 In [18]: _ip.clear_main_mod_cache()
1120 1127
1121 1128 In [19]: len(_ip._main_mod_cache) == 0
1122 1129 Out[19]: True
1123 1130 """
1124 1131 self._main_mod_cache.clear()
1125 1132
1126 1133 #-------------------------------------------------------------------------
1127 1134 # Things related to debugging
1128 1135 #-------------------------------------------------------------------------
1129 1136
1130 1137 def init_pdb(self):
1131 1138 # Set calling of pdb on exceptions
1132 1139 # self.call_pdb is a property
1133 1140 self.call_pdb = self.pdb
1134 1141
1135 1142 def _get_call_pdb(self):
1136 1143 return self._call_pdb
1137 1144
1138 1145 def _set_call_pdb(self,val):
1139 1146
1140 1147 if val not in (0,1,False,True):
1141 1148 raise ValueError('new call_pdb value must be boolean')
1142 1149
1143 1150 # store value in instance
1144 1151 self._call_pdb = val
1145 1152
1146 1153 # notify the actual exception handlers
1147 1154 self.InteractiveTB.call_pdb = val
1148 1155
1149 1156 call_pdb = property(_get_call_pdb,_set_call_pdb,None,
1150 1157 'Control auto-activation of pdb at exceptions')
1151 1158
1152 1159 def debugger(self,force=False):
1153 1160 """Call the pdb debugger.
1154 1161
1155 1162 Keywords:
1156 1163
1157 1164 - force(False): by default, this routine checks the instance call_pdb
1158 1165 flag and does not actually invoke the debugger if the flag is false.
1159 1166 The 'force' option forces the debugger to activate even if the flag
1160 1167 is false.
1161 1168 """
1162 1169
1163 1170 if not (force or self.call_pdb):
1164 1171 return
1165 1172
1166 1173 if not hasattr(sys,'last_traceback'):
1167 1174 error('No traceback has been produced, nothing to debug.')
1168 1175 return
1169 1176
1170 1177 self.InteractiveTB.debugger(force=True)
1171 1178
1172 1179 #-------------------------------------------------------------------------
1173 1180 # Things related to IPython's various namespaces
1174 1181 #-------------------------------------------------------------------------
1175 1182 default_user_namespaces = True
1176 1183
1177 1184 def init_create_namespaces(self, user_module=None, user_ns=None):
1178 1185 # Create the namespace where the user will operate. user_ns is
1179 1186 # normally the only one used, and it is passed to the exec calls as
1180 1187 # the locals argument. But we do carry a user_global_ns namespace
1181 1188 # given as the exec 'globals' argument. This is useful in embedding
1182 1189 # situations where the ipython shell opens in a context where the
1183 1190 # distinction between locals and globals is meaningful. For
1184 1191 # non-embedded contexts, it is just the same object as the user_ns dict.
1185 1192
1186 1193 # FIXME. For some strange reason, __builtins__ is showing up at user
1187 1194 # level as a dict instead of a module. This is a manual fix, but I
1188 1195 # should really track down where the problem is coming from. Alex
1189 1196 # Schmolck reported this problem first.
1190 1197
1191 1198 # A useful post by Alex Martelli on this topic:
1192 1199 # Re: inconsistent value from __builtins__
1193 1200 # Von: Alex Martelli <aleaxit@yahoo.com>
1194 1201 # Datum: Freitag 01 Oktober 2004 04:45:34 nachmittags/abends
1195 1202 # Gruppen: comp.lang.python
1196 1203
1197 1204 # Michael Hohn <hohn@hooknose.lbl.gov> wrote:
1198 1205 # > >>> print type(builtin_check.get_global_binding('__builtins__'))
1199 1206 # > <type 'dict'>
1200 1207 # > >>> print type(__builtins__)
1201 1208 # > <type 'module'>
1202 1209 # > Is this difference in return value intentional?
1203 1210
1204 1211 # Well, it's documented that '__builtins__' can be either a dictionary
1205 1212 # or a module, and it's been that way for a long time. Whether it's
1206 1213 # intentional (or sensible), I don't know. In any case, the idea is
1207 1214 # that if you need to access the built-in namespace directly, you
1208 1215 # should start with "import __builtin__" (note, no 's') which will
1209 1216 # definitely give you a module. Yeah, it's somewhat confusing:-(.
1210 1217
1211 1218 # These routines return a properly built module and dict as needed by
1212 1219 # the rest of the code, and can also be used by extension writers to
1213 1220 # generate properly initialized namespaces.
1214 1221 if (user_ns is not None) or (user_module is not None):
1215 1222 self.default_user_namespaces = False
1216 1223 self.user_module, self.user_ns = self.prepare_user_module(user_module, user_ns)
1217 1224
1218 1225 # A record of hidden variables we have added to the user namespace, so
1219 1226 # we can list later only variables defined in actual interactive use.
1220 1227 self.user_ns_hidden = {}
1221 1228
1222 1229 # Now that FakeModule produces a real module, we've run into a nasty
1223 1230 # problem: after script execution (via %run), the module where the user
1224 1231 # code ran is deleted. Now that this object is a true module (needed
1225 1232 # so doctest and other tools work correctly), the Python module
1226 1233 # teardown mechanism runs over it, and sets to None every variable
1227 1234 # present in that module. Top-level references to objects from the
1228 1235 # script survive, because the user_ns is updated with them. However,
1229 1236 # calling functions defined in the script that use other things from
1230 1237 # the script will fail, because the function's closure had references
1231 1238 # to the original objects, which are now all None. So we must protect
1232 1239 # these modules from deletion by keeping a cache.
1233 1240 #
1234 1241 # To avoid keeping stale modules around (we only need the one from the
1235 1242 # last run), we use a dict keyed with the full path to the script, so
1236 1243 # only the last version of the module is held in the cache. Note,
1237 1244 # however, that we must cache the module *namespace contents* (their
1238 1245 # __dict__). Because if we try to cache the actual modules, old ones
1239 1246 # (uncached) could be destroyed while still holding references (such as
1240 1247 # those held by GUI objects that tend to be long-lived).
1241 1248 #
1242 1249 # The %reset command will flush this cache. See the cache_main_mod()
1243 1250 # and clear_main_mod_cache() methods for details on use.
1244 1251
1245 1252 # This is the cache used for 'main' namespaces
1246 1253 self._main_mod_cache = {}
1247 1254
1248 1255 # A table holding all the namespaces IPython deals with, so that
1249 1256 # introspection facilities can search easily.
1250 1257 self.ns_table = {'user_global':self.user_module.__dict__,
1251 1258 'user_local':self.user_ns,
1252 1259 'builtin':builtin_mod.__dict__
1253 1260 }
1254 1261
1255 1262 @property
1256 1263 def user_global_ns(self):
1257 1264 return self.user_module.__dict__
1258 1265
1259 1266 def prepare_user_module(self, user_module=None, user_ns=None):
1260 1267 """Prepare the module and namespace in which user code will be run.
1261 1268
1262 1269 When IPython is started normally, both parameters are None: a new module
1263 1270 is created automatically, and its __dict__ used as the namespace.
1264 1271
1265 1272 If only user_module is provided, its __dict__ is used as the namespace.
1266 1273 If only user_ns is provided, a dummy module is created, and user_ns
1267 1274 becomes the global namespace. If both are provided (as they may be
1268 1275 when embedding), user_ns is the local namespace, and user_module
1269 1276 provides the global namespace.
1270 1277
1271 1278 Parameters
1272 1279 ----------
1273 1280 user_module : module, optional
1274 1281 The current user module in which IPython is being run. If None,
1275 1282 a clean module will be created.
1276 1283 user_ns : dict, optional
1277 1284 A namespace in which to run interactive commands.
1278 1285
1279 1286 Returns
1280 1287 -------
1281 1288 A tuple of user_module and user_ns, each properly initialised.
1282 1289 """
1283 1290 if user_module is None and user_ns is not None:
1284 1291 user_ns.setdefault("__name__", "__main__")
1285 1292 user_module = DummyMod()
1286 1293 user_module.__dict__ = user_ns
1287 1294
1288 1295 if user_module is None:
1289 1296 user_module = types.ModuleType("__main__",
1290 1297 doc="Automatically created module for IPython interactive environment")
1291 1298
1292 1299 # We must ensure that __builtin__ (without the final 's') is always
1293 1300 # available and pointing to the __builtin__ *module*. For more details:
1294 1301 # http://mail.python.org/pipermail/python-dev/2001-April/014068.html
1295 1302 user_module.__dict__.setdefault('__builtin__', builtin_mod)
1296 1303 user_module.__dict__.setdefault('__builtins__', builtin_mod)
1297 1304
1298 1305 if user_ns is None:
1299 1306 user_ns = user_module.__dict__
1300 1307
1301 1308 return user_module, user_ns
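# Illustrative sketch of the call patterns described in the docstring above
# (the names ``shell`` and ``my_ns`` are hypothetical):
#
#     shell = InteractiveShell.instance()
#     mod, ns = shell.prepare_user_module()        # fresh module
#     assert ns is mod.__dict__
#
#     my_ns = {"x": 1}                             # embedding-style: caller dict
#     mod2, ns2 = shell.prepare_user_module(user_ns=my_ns)
#     assert ns2 is my_ns and "__builtin__" in ns2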
1302 1309
1303 1310 def init_sys_modules(self):
1304 1311 # We need to insert into sys.modules something that looks like a
1305 1312 # module but which accesses the IPython namespace, for shelve and
1306 1313 # pickle to work interactively. Normally they rely on getting
1307 1314 # everything out of __main__, but for embedding purposes each IPython
1308 1315 # instance has its own private namespace, so we can't go shoving
1309 1316 # everything into __main__.
1310 1317
1311 1318 # note, however, that we should only do this for non-embedded
1312 1319 # ipythons, which really mimic the __main__.__dict__ with their own
1313 1320 # namespace. Embedded instances, on the other hand, should not do
1314 1321 # this because they need to manage the user local/global namespaces
1315 1322 # only, but they live within a 'normal' __main__ (meaning, they
1316 1323 # shouldn't overtake the execution environment of the script they're
1317 1324 # embedded in).
1318 1325
1319 1326 # This is overridden in the InteractiveShellEmbed subclass to a no-op.
1320 1327 main_name = self.user_module.__name__
1321 1328 sys.modules[main_name] = self.user_module
1322 1329
1323 1330 def init_user_ns(self):
1324 1331 """Initialize all user-visible namespaces to their minimum defaults.
1325 1332
1326 1333 Certain history lists are also initialized here, as they effectively
1327 1334 act as user namespaces.
1328 1335
1329 1336 Notes
1330 1337 -----
1331 1338 All data structures here are only filled in; they are NOT reset by this
1332 1339 method. If they were not empty before, data will simply be added to
1333 1340 them.
1334 1341 """
1335 1342 # This function works in two parts: first we put a few things in
1336 1343 # user_ns, and we sync those contents into user_ns_hidden so that these
1337 1344 # initial variables aren't shown by %who. After the sync, we add the
1338 1345 # rest of what we *do* want the user to see with %who even on a new
1339 1346 # session (probably nothing, so they really only see their own stuff)
1340 1347
1341 1348 # The user dict must *always* have a __builtin__ reference to the
1342 1349 # Python standard __builtin__ namespace, which must be imported.
1343 1350 # This is so that certain operations in prompt evaluation can be
1344 1351 # reliably executed with builtins. Note that we can NOT use
1345 1352 # __builtins__ (note the 's'), because that can either be a dict or a
1346 1353 # module, and can even mutate at runtime, depending on the context
1347 1354 # (Python makes no guarantees on it). In contrast, __builtin__ is
1348 1355 # always a module object, though it must be explicitly imported.
1349 1356
1350 1357 # For more details:
1351 1358 # http://mail.python.org/pipermail/python-dev/2001-April/014068.html
1352 1359 ns = {}
1353 1360
1354 1361 # make global variables for user access to the histories
1355 1362 ns['_ih'] = self.history_manager.input_hist_parsed
1356 1363 ns['_oh'] = self.history_manager.output_hist
1357 1364 ns['_dh'] = self.history_manager.dir_hist
1358 1365
1359 1366 # user aliases to input and output histories. These shouldn't show up
1360 1367 # in %who, as they can have very large reprs.
1361 1368 ns['In'] = self.history_manager.input_hist_parsed
1362 1369 ns['Out'] = self.history_manager.output_hist
1363 1370
1364 1371 # Store myself as the public api!!!
1365 1372 ns['get_ipython'] = self.get_ipython
1366 1373
1367 1374 ns['exit'] = self.exiter
1368 1375 ns['quit'] = self.exiter
1369 1376 ns["open"] = _modified_open
1370 1377
1371 1378 # Sync what we've added so far to user_ns_hidden so these aren't seen
1372 1379 # by %who
1373 1380 self.user_ns_hidden.update(ns)
1374 1381
1375 1382 # Anything put into ns now would show up in %who. Think twice before
1376 1383 # putting anything here, as we really want %who to show the user their
1377 1384 # stuff, not our variables.
1378 1385
1379 1386 # Finally, update the real user's namespace
1380 1387 self.user_ns.update(ns)
1381 1388
1382 1389 @property
1383 1390 def all_ns_refs(self):
1384 1391 """Get a list of references to all the namespace dictionaries in which
1385 1392 IPython might store a user-created object.
1386 1393
1387 1394 Note that this does not include the displayhook, which also caches
1388 1395 objects from the output."""
1389 1396 return [self.user_ns, self.user_global_ns, self.user_ns_hidden] + \
1390 1397 [m.__dict__ for m in self._main_mod_cache.values()]
1391 1398
1392 1399 def reset(self, new_session=True, aggressive=False):
1393 1400 """Clear all internal namespaces, and attempt to release references to
1394 1401 user objects.
1395 1402
1396 1403 If new_session is True, a new history session will be opened.
1397 1404 """
1398 1405 # Clear histories
1406 assert self.history_manager is not None
1399 1407 self.history_manager.reset(new_session)
1400 1408 # Reset counter used to index all histories
1401 1409 if new_session:
1402 1410 self.execution_count = 1
1403 1411
1404 1412 # Reset last execution result
1405 1413 self.last_execution_succeeded = True
1406 1414 self.last_execution_result = None
1407 1415
1408 1416 # Flush cached output items
1409 1417 if self.displayhook.do_full_cache:
1410 1418 self.displayhook.flush()
1411 1419
1412 1420 # The main execution namespaces must be cleared very carefully,
1413 1421 # skipping the deletion of the builtin-related keys, because doing so
1414 1422 # would cause errors in many objects' __del__ methods.
1415 1423 if self.user_ns is not self.user_global_ns:
1416 1424 self.user_ns.clear()
1417 1425 ns = self.user_global_ns
1418 1426 drop_keys = set(ns.keys())
1419 1427 drop_keys.discard('__builtin__')
1420 1428 drop_keys.discard('__builtins__')
1421 1429 drop_keys.discard('__name__')
1422 1430 for k in drop_keys:
1423 1431 del ns[k]
1424 1432
1425 1433 self.user_ns_hidden.clear()
1426 1434
1427 1435 # Restore the user namespaces to minimal usability
1428 1436 self.init_user_ns()
1429 1437 if aggressive and not hasattr(self, "_sys_modules_keys"):
1430 1438 print("Cannot restore sys.module, no snapshot")
1431 1439 elif aggressive:
1432 1440 print("culling sys module...")
1433 1441 current_keys = set(sys.modules.keys())
1434 1442 for k in current_keys - self._sys_modules_keys:
1435 1443 if k.startswith("multiprocessing"):
1436 1444 continue
1437 1445 del sys.modules[k]
1438 1446
1439 1447 # Restore the default and user aliases
1440 1448 self.alias_manager.clear_aliases()
1441 1449 self.alias_manager.init_aliases()
1442 1450
1443 1451 # Now define aliases that only make sense on the terminal, because they
1444 1452 # need direct access to the console in a way that we can't emulate in
1445 1453 # GUI or web frontend
1446 1454 if os.name == 'posix':
1447 1455 for cmd in ('clear', 'more', 'less', 'man'):
1448 1456 if cmd not in self.magics_manager.magics['line']:
1449 1457 self.alias_manager.soft_define_alias(cmd, cmd)
1450 1458
1451 1459 # Flush the private list of module references kept for script
1452 1460 # execution protection
1453 1461 self.clear_main_mod_cache()
1454 1462
1455 1463 def del_var(self, varname, by_name=False):
1456 1464 """Delete a variable from the various namespaces, so that, as
1457 1465 far as possible, we're not keeping any hidden references to it.
1458 1466
1459 1467 Parameters
1460 1468 ----------
1461 1469 varname : str
1462 1470 The name of the variable to delete.
1463 1471 by_name : bool
1464 1472 If True, delete variables with the given name in each
1465 1473 namespace. If False (default), find the variable in the user
1466 1474 namespace, and delete references to it.
1467 1475 """
1468 1476 if varname in ('__builtin__', '__builtins__'):
1469 1477 raise ValueError("Refusing to delete %s" % varname)
1470 1478
1471 1479 ns_refs = self.all_ns_refs
1472 1480
1473 1481 if by_name: # Delete by name
1474 1482 for ns in ns_refs:
1475 1483 try:
1476 1484 del ns[varname]
1477 1485 except KeyError:
1478 1486 pass
1479 1487 else: # Delete by object
1480 1488 try:
1481 1489 obj = self.user_ns[varname]
1482 1490 except KeyError as e:
1483 1491 raise NameError("name '%s' is not defined" % varname) from e
1484 1492 # Also check in output history
1493 assert self.history_manager is not None
1485 1494 ns_refs.append(self.history_manager.output_hist)
1486 1495 for ns in ns_refs:
1487 1496 to_delete = [n for n, o in ns.items() if o is obj]
1488 1497 for name in to_delete:
1489 1498 del ns[name]
1490 1499
1491 1500 # Ensure it is removed from the last execution result
1492 1501 if self.last_execution_result.result is obj:
1493 1502 self.last_execution_result = None
1494 1503
1495 1504 # displayhook keeps extra references, but not in a dictionary
1496 1505 for name in ('_', '__', '___'):
1497 1506 if getattr(self.displayhook, name) is obj:
1498 1507 setattr(self.displayhook, name, None)
1499 1508
1500 1509 def reset_selective(self, regex=None):
1501 1510 """Clear selective variables from internal namespaces based on a
1502 1511 specified regular expression.
1503 1512
1504 1513 Parameters
1505 1514 ----------
1506 1515 regex : string or compiled pattern, optional
1507 1516 A regular expression pattern that will be used in searching
1508 1517 variable names in the user's namespaces.
1509 1518 """
1510 1519 if regex is not None:
1511 1520 try:
1512 1521 m = re.compile(regex)
1513 1522 except TypeError as e:
1514 1523 raise TypeError('regex must be a string or compiled pattern') from e
1515 1524 # Search for keys in each namespace that match the given regex
1516 1525 # If a match is found, delete the key/value pair.
1517 1526 for ns in self.all_ns_refs:
1518 1527 for var in list(ns):
1519 1528 if m.search(var):
1520 1529 del ns[var]
1521 1530
1522 1531 def push(self, variables, interactive=True):
1523 1532 """Inject a group of variables into the IPython user namespace.
1524 1533
1525 1534 Parameters
1526 1535 ----------
1527 1536 variables : dict, str or list/tuple of str
1528 1537 The variables to inject into the user's namespace. If a dict, a
1529 1538 simple update is done. If a str, the string is assumed to have
1530 1539 variable names separated by spaces. A list/tuple of str can also
1531 1540 be used to give the variable names. If just the variable names are
1532 1541 given (list/tuple/str) then the variable values are looked up in the
1533 1542 caller's frame.
1534 1543 interactive : bool
1535 1544 If True (default), the variables will be listed with the ``who``
1536 1545 magic.
1537 1546 """
1538 1547 vdict = None
1539 1548
1540 1549 # We need a dict of name/value pairs to do namespace updates.
1541 1550 if isinstance(variables, dict):
1542 1551 vdict = variables
1543 1552 elif isinstance(variables, (str, list, tuple)):
1544 1553 if isinstance(variables, str):
1545 1554 vlist = variables.split()
1546 1555 else:
1547 1556 vlist = variables
1548 1557 vdict = {}
1549 1558 cf = sys._getframe(1)
1550 1559 for name in vlist:
1551 1560 try:
1552 1561 vdict[name] = eval(name, cf.f_globals, cf.f_locals)
1553 1562 except:
1554 1563 print('Could not get variable %s from %s' %
1555 1564 (name,cf.f_code.co_name))
1556 1565 else:
1557 1566 raise ValueError('variables must be a dict/str/list/tuple')
1558 1567
1559 1568 # Propagate variables to user namespace
1560 1569 self.user_ns.update(vdict)
1561 1570
1562 1571 # And configure interactive visibility
1563 1572 user_ns_hidden = self.user_ns_hidden
1564 1573 if interactive:
1565 1574 for name in vdict:
1566 1575 user_ns_hidden.pop(name, None)
1567 1576 else:
1568 1577 user_ns_hidden.update(vdict)
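# Illustrative sketch of the two accepted argument forms (names hypothetical):
#
#     a, b = 1, 2
#     ip = get_ipython()
#     ip.push({"a": a})                  # dict: explicit name/value pairs
#     ip.push("b", interactive=False)    # str: value looked up in the caller's
#                                        # frame, and hidden from %who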
1569 1578
1570 1579 def drop_by_id(self, variables):
1571 1580 """Remove a dict of variables from the user namespace, if they are the
1572 1581 same as the values in the dictionary.
1573 1582
1574 1583 This is intended for use by extensions: variables that they've added can
1575 1584 be taken back out if they are unloaded, without removing any that the
1576 1585 user has overwritten.
1577 1586
1578 1587 Parameters
1579 1588 ----------
1580 1589 variables : dict
1581 1590 A dictionary mapping object names (as strings) to the objects.
1582 1591 """
1583 1592 for name, obj in variables.items():
1584 1593 if name in self.user_ns and self.user_ns[name] is obj:
1585 1594 del self.user_ns[name]
1586 1595 self.user_ns_hidden.pop(name, None)
1587 1596
1588 1597 #-------------------------------------------------------------------------
1589 1598 # Things related to object introspection
1590 1599 #-------------------------------------------------------------------------
1591 1600 @staticmethod
1592 1601 def _find_parts(oname: str) -> Tuple[bool, ListType[str]]:
1593 1602 """
1594 1603 Given an object name, return a list of parts of this object name.
1595 1604
1596 1605 Basically split on dots when using attribute access,
1597 1606 and extract the value when using square brackets.
1598 1607
1599 1608
1600 1609 For example foo.bar[3].baz[x] -> foo, bar, 3, baz, x
1601 1610
1602 1611
1603 1612 Returns
1604 1613 -------
1605 1614 parts_ok: bool
1606 1615 whether we were properly able to parse parts.
1607 1616 parts: list of str
1608 1617 extracted parts
1609 1618
1610 1619
1611 1620
1612 1621 """
1613 1622 raw_parts = oname.split(".")
1614 1623 parts = []
1615 1624 parts_ok = True
1616 1625 for p in raw_parts:
1617 1626 if p.endswith("]"):
1618 1627 var, *indices = p.split("[")
1619 1628 if not var.isidentifier():
1620 1629 parts_ok = False
1621 1630 break
1622 1631 parts.append(var)
1623 1632 for ind in indices:
1624 1633 if ind[-1] != "]" and not is_integer_string(ind[:-1]):
1625 1634 parts_ok = False
1626 1635 break
1627 1636 parts.append(ind[:-1])
1628 1637 continue
1629 1638
1630 1639 if not p.isidentifier():
1631 1640 parts_ok = False
1632 1641 parts.append(p)
1633 1642
1634 1643 return parts_ok, parts
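# Illustrative examples of _find_parts(), following the parsing logic above:
#
#     InteractiveShell._find_parts("foo.bar[3].baz[x]")
#     # -> (True, ['foo', 'bar', '3', 'baz', 'x'])
#     InteractiveShell._find_parts("foo.3bad")
#     # -> (False, ['foo', '3bad'])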
1635 1644
1636 1645 def _ofind(
1637 1646 self, oname: str, namespaces: Optional[Sequence[Tuple[str, AnyType]]] = None
1638 1647 ) -> OInfo:
1639 1648 """Find an object in the available namespaces.
1640 1649
1641 1650
1642 1651 Returns
1643 1652 -------
1644 1653 OInfo with fields:
1645 1654 - ismagic
1646 1655 - isalias
1647 1656 - found
1648 1657 - obj
1649 1658 - namespace
1650 1659 - parent
1651 1660
1652 1661 Has special code to detect magic functions.
1653 1662 """
1654 1663 oname = oname.strip()
1655 1664 parts_ok, parts = self._find_parts(oname)
1656 1665
1657 1666 if (
1658 1667 not oname.startswith(ESC_MAGIC)
1659 1668 and not oname.startswith(ESC_MAGIC2)
1660 1669 and not parts_ok
1661 1670 ):
1662 1671 return OInfo(
1663 1672 ismagic=False,
1664 1673 isalias=False,
1665 1674 found=False,
1666 1675 obj=None,
1667 1676 namespace=None,
1668 1677 parent=None,
1669 1678 )
1670 1679
1671 1680 if namespaces is None:
1672 1681 # Namespaces to search in:
1673 1682 # Put them in a list. The order is important so that we
1674 1683 # find things in the same order that Python finds them.
1675 1684 namespaces = [ ('Interactive', self.user_ns),
1676 1685 ('Interactive (global)', self.user_global_ns),
1677 1686 ('Python builtin', builtin_mod.__dict__),
1678 1687 ]
1679 1688
1680 1689 ismagic = False
1681 1690 isalias = False
1682 1691 found = False
1683 1692 ospace = None
1684 1693 parent = None
1685 1694 obj = None
1686 1695
1687 1696
1688 1697 # Look for the given name by splitting it in parts. If the head is
1689 1698 # found, then we look for all the remaining parts as members, and only
1690 1699 # declare success if we can find them all.
1691 1700 oname_parts = parts
1692 1701 oname_head, oname_rest = oname_parts[0],oname_parts[1:]
1693 1702 for nsname,ns in namespaces:
1694 1703 try:
1695 1704 obj = ns[oname_head]
1696 1705 except KeyError:
1697 1706 continue
1698 1707 else:
1699 1708 for idx, part in enumerate(oname_rest):
1700 1709 try:
1701 1710 parent = obj
1702 1711 # The last part is looked up in a special way to avoid
1703 1712 # descriptor invocation as it may raise or have side
1704 1713 # effects.
1705 1714 if idx == len(oname_rest) - 1:
1706 1715 obj = self._getattr_property(obj, part)
1707 1716 else:
1708 1717 if is_integer_string(part):
1709 1718 obj = obj[int(part)]
1710 1719 else:
1711 1720 obj = getattr(obj, part)
1712 1721 except:
1713 1722 # Blanket except b/c some badly implemented objects
1714 1723 # allow __getattr__ to raise exceptions other than
1715 1724 # AttributeError, which then crashes IPython.
1716 1725 break
1717 1726 else:
1718 1727 # If we finish the for loop (no break), we got all members
1719 1728 found = True
1720 1729 ospace = nsname
1721 1730 break # namespace loop
1722 1731
1723 1732 # Try to see if it's magic
1724 1733 if not found:
1725 1734 obj = None
1726 1735 if oname.startswith(ESC_MAGIC2):
1727 1736 oname = oname.lstrip(ESC_MAGIC2)
1728 1737 obj = self.find_cell_magic(oname)
1729 1738 elif oname.startswith(ESC_MAGIC):
1730 1739 oname = oname.lstrip(ESC_MAGIC)
1731 1740 obj = self.find_line_magic(oname)
1732 1741 else:
1733 1742 # search without prefix, so run? will find %run?
1734 1743 obj = self.find_line_magic(oname)
1735 1744 if obj is None:
1736 1745 obj = self.find_cell_magic(oname)
1737 1746 if obj is not None:
1738 1747 found = True
1739 1748 ospace = 'IPython internal'
1740 1749 ismagic = True
1741 1750 isalias = isinstance(obj, Alias)
1742 1751
1743 1752 # Last try: special-case some literals like '', [], {}, etc:
1744 1753 if not found and oname_head in ["''",'""','[]','{}','()']:
1745 1754 obj = eval(oname_head)
1746 1755 found = True
1747 1756 ospace = 'Interactive'
1748 1757
1749 1758 return OInfo(
1750 1759 obj=obj,
1751 1760 found=found,
1752 1761 parent=parent,
1753 1762 ismagic=ismagic,
1754 1763 isalias=isalias,
1755 1764 namespace=ospace,
1756 1765 )
1757 1766
1758 1767 @staticmethod
1759 1768 def _getattr_property(obj, attrname):
1760 1769 """Property-aware getattr to use in object finding.
1761 1770
1762 1771 If attrname represents a property, return it unevaluated (in case it has
1763 1772 side effects or raises an error).
1764 1773
1765 1774 """
1766 1775 if not isinstance(obj, type):
1767 1776 try:
1768 1777 # `getattr(type(obj), attrname)` is not guaranteed to return
1769 1778 # `obj`, but does so for property:
1770 1779 #
1771 1780 # property.__get__(self, None, cls) -> self
1772 1781 #
1773 1782 # The universal alternative is to traverse the mro manually
1774 1783 # searching for attrname in class dicts.
1775 1784 if is_integer_string(attrname):
1776 1785 return obj[int(attrname)]
1777 1786 else:
1778 1787 attr = getattr(type(obj), attrname)
1779 1788 except AttributeError:
1780 1789 pass
1781 1790 else:
1782 1791 # This relies on the fact that data descriptors (with both
1783 1792 # __get__ & __set__ magic methods) take precedence over
1784 1793 # instance-level attributes:
1785 1794 #
1786 1795 # class A(object):
1787 1796 # @property
1788 1797 # def foobar(self): return 123
1789 1798 # a = A()
1790 1799 # a.__dict__['foobar'] = 345
1791 1800 # a.foobar # == 123
1792 1801 #
1793 1802 # So, a property may be returned right away.
1794 1803 if isinstance(attr, property):
1795 1804 return attr
1796 1805
1797 1806 # Nothing helped, fall back.
1798 1807 return getattr(obj, attrname)
1799 1808
1800 1809 def _object_find(self, oname, namespaces=None) -> OInfo:
1801 1810 """Find an object and return a struct with info about it."""
1802 1811 return self._ofind(oname, namespaces)
1803 1812
1804 def _inspect(self, meth, oname, namespaces=None, **kw):
1813 def _inspect(self, meth, oname: str, namespaces=None, **kw):
1805 1814 """Generic interface to the inspector system.
1806 1815
1807 1816 This function is meant to be called by pdef, pdoc & friends.
1808 1817 """
1809 1818 info: OInfo = self._object_find(oname, namespaces)
1810 1819 if self.sphinxify_docstring:
1811 1820 if sphinxify is None:
1812 1821 raise ImportError("Module ``docrepr`` required but missing")
1813 1822 docformat = sphinxify(self.object_inspect(oname))
1814 1823 else:
1815 1824 docformat = None
1816 1825 if info.found or hasattr(info.parent, oinspect.HOOK_NAME):
1817 1826 pmethod = getattr(self.inspector, meth)
1818 1827 # TODO: only apply format_screen to the plain/text repr of the mime
1819 1828 # bundle.
1820 1829 formatter = format_screen if info.ismagic else docformat
1821 1830 if meth == 'pdoc':
1822 1831 pmethod(info.obj, oname, formatter)
1823 1832 elif meth == 'pinfo':
1824 1833 pmethod(
1825 1834 info.obj,
1826 1835 oname,
1827 1836 formatter,
1828 1837 info,
1829 1838 enable_html_pager=self.enable_html_pager,
1830 1839 **kw,
1831 1840 )
1832 1841 else:
1833 1842 pmethod(info.obj, oname)
1834 1843 else:
1835 1844 print('Object `%s` not found.' % oname)
1836 1845 return 'not found' # so callers can take other action
1837 1846
1838 1847 def object_inspect(self, oname, detail_level=0):
1839 1848 """Get object info about oname"""
1840 1849 with self.builtin_trap:
1841 1850 info = self._object_find(oname)
1842 1851 if info.found:
1843 1852 return self.inspector.info(info.obj, oname, info=info,
1844 1853 detail_level=detail_level
1845 1854 )
1846 1855 else:
1847 1856 return oinspect.object_info(name=oname, found=False)
1848 1857
1849 1858 def object_inspect_text(self, oname, detail_level=0):
1850 1859 """Get object info as formatted text"""
1851 1860 return self.object_inspect_mime(oname, detail_level)['text/plain']
1852 1861
1853 1862 def object_inspect_mime(self, oname, detail_level=0, omit_sections=()):
1854 1863 """Get object info as a mimebundle of formatted representations.
1855 1864
1856 1865 A mimebundle is a dictionary, keyed by mime-type.
1857 1866 It must always have the key `'text/plain'`.
1858 1867 """
1859 1868 with self.builtin_trap:
1860 1869 info = self._object_find(oname)
1861 1870 if info.found:
1862 1871 docformat = (
1863 1872 sphinxify(self.object_inspect(oname))
1864 1873 if self.sphinxify_docstring
1865 1874 else None
1866 1875 )
1867 1876 return self.inspector._get_info(
1868 1877 info.obj,
1869 1878 oname,
1870 1879 info=info,
1871 1880 detail_level=detail_level,
1872 1881 formatter=docformat,
1873 1882 omit_sections=omit_sections,
1874 1883 )
1875 1884 else:
1876 1885 raise KeyError(oname)
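# Illustrative sketch: per the docstring above, only the 'text/plain' key is
# guaranteed to be present in the returned bundle.
#
#     bundle = get_ipython().object_inspect_mime("len")
#     print(bundle["text/plain"])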
1877 1886
1878 1887 #-------------------------------------------------------------------------
1879 1888 # Things related to history management
1880 1889 #-------------------------------------------------------------------------
1881 1890
1882 1891 def init_history(self):
1883 1892 """Sets up the command history, and starts regular autosaves."""
1884 1893 self.history_manager = HistoryManager(shell=self, parent=self)
1885 1894 self.configurables.append(self.history_manager)
1886 1895
1887 1896 #-------------------------------------------------------------------------
1888 1897 # Things related to exception handling and tracebacks (not debugging)
1889 1898 #-------------------------------------------------------------------------
1890 1899
1891 1900 debugger_cls = InterruptiblePdb
1892 1901
1893 1902 def init_traceback_handlers(self, custom_exceptions):
1894 1903 # Syntax error handler.
1895 1904 self.SyntaxTB = ultratb.SyntaxTB(color_scheme='NoColor', parent=self)
1896 1905
1897 1906 # The interactive one is initialized with an offset, meaning we always
1898 1907 # want to remove the topmost item in the traceback, which is our own
1899 1908 # internal code. Valid modes: ['Plain','Context','Verbose','Minimal']
1900 1909 self.InteractiveTB = ultratb.AutoFormattedTB(mode = 'Plain',
1901 1910 color_scheme='NoColor',
1902 1911 tb_offset = 1,
1903 1912 debugger_cls=self.debugger_cls, parent=self)
1904 1913
1905 1914 # The instance will store a pointer to the system-wide exception hook,
1906 1915 # so that runtime code (such as magics) can access it. This is because
1907 1916 # during the read-eval loop, it may get temporarily overwritten.
1908 1917 self.sys_excepthook = sys.excepthook
1909 1918
1910 1919 # and add any custom exception handlers the user may have specified
1911 1920 self.set_custom_exc(*custom_exceptions)
1912 1921
1913 1922 # Set the exception mode
1914 1923 self.InteractiveTB.set_mode(mode=self.xmode)
1915 1924
1916 1925 def set_custom_exc(self, exc_tuple, handler):
1917 1926 """set_custom_exc(exc_tuple, handler)
1918 1927
1919 1928 Set a custom exception handler, which will be called if any of the
1920 1929 exceptions in exc_tuple occur in the mainloop (specifically, in the
1921 1930 run_code() method).
1922 1931
1923 1932 Parameters
1924 1933 ----------
1925 1934 exc_tuple : tuple of exception classes
1926 1935 A *tuple* of exception classes, for which to call the defined
1927 1936 handler. It is very important that you use a tuple, and NOT A
1928 1937 LIST here, because of the way Python's except statement works. If
1929 1938 you only want to trap a single exception, use a singleton tuple::
1930 1939
1931 1940 exc_tuple == (MyCustomException,)
1932 1941
1933 1942 handler : callable
1934 1943 handler must have the following signature::
1935 1944
1936 1945 def my_handler(self, etype, value, tb, tb_offset=None):
1937 1946 ...
1938 1947 return structured_traceback
1939 1948
1940 1949 Your handler must return a structured traceback (a list of strings),
1941 1950 or None.
1942 1951
1943 1952 This will be made into an instance method (via types.MethodType)
1944 1953 of IPython itself, and it will be called if any of the exceptions
1945 1954 listed in the exc_tuple are caught. If the handler is None, an
1946 1955 internal basic one is used, which just prints basic info.
1947 1956
1948 1957 To protect IPython from crashes, if your handler ever raises an
1949 1958 exception or returns an invalid result, it will be immediately
1950 1959 disabled.
1951 1960
1952 1961 Notes
1953 1962 -----
1954 1963 WARNING: by putting in your own exception handler into IPython's main
1955 1964 execution loop, you run a very good chance of nasty crashes. This
1956 1965 facility should only be used if you really know what you are doing.
1957 1966 """
1958 1967
1959 1968 if not isinstance(exc_tuple, tuple):
1960 1969 raise TypeError("The custom exceptions must be given as a tuple.")
1961 1970
1962 1971 def dummy_handler(self, etype, value, tb, tb_offset=None):
1963 1972 print('*** Simple custom exception handler ***')
1964 1973 print('Exception type :', etype)
1965 1974 print('Exception value:', value)
1966 1975 print('Traceback :', tb)
1967 1976
1968 1977 def validate_stb(stb):
1969 1978 """validate structured traceback return type
1970 1979
1971 1980 return type of CustomTB *should* be a list of strings, but allow
1972 1981 single strings or None, which are harmless.
1973 1982
1974 1983 This function will *always* return a list of strings,
1975 1984 and will raise a TypeError if stb is inappropriate.
1976 1985 """
1977 1986 msg = "CustomTB must return list of strings, not %r" % stb
1978 1987 if stb is None:
1979 1988 return []
1980 1989 elif isinstance(stb, str):
1981 1990 return [stb]
1982 1991 elif not isinstance(stb, list):
1983 1992 raise TypeError(msg)
1984 1993 # it's a list
1985 1994 for line in stb:
1986 1995 # check every element
1987 1996 if not isinstance(line, str):
1988 1997 raise TypeError(msg)
1989 1998 return stb
1990 1999
1991 2000 if handler is None:
1992 2001 wrapped = dummy_handler
1993 2002 else:
1994 2003 def wrapped(self,etype,value,tb,tb_offset=None):
1995 2004 """wrap CustomTB handler, to protect IPython from user code
1996 2005
1997 2006 This makes it harder (but not impossible) for custom exception
1998 2007 handlers to crash IPython.
1999 2008 """
2000 2009 try:
2001 2010 stb = handler(self,etype,value,tb,tb_offset=tb_offset)
2002 2011 return validate_stb(stb)
2003 2012 except:
2004 2013 # clear custom handler immediately
2005 2014 self.set_custom_exc((), None)
2006 2015 print("Custom TB Handler failed, unregistering", file=sys.stderr)
2007 2016 # show the exception in handler first
2008 2017 stb = self.InteractiveTB.structured_traceback(*sys.exc_info())
2009 2018 print(self.InteractiveTB.stb2text(stb))
2010 2019 print("The original exception:")
2011 2020 stb = self.InteractiveTB.structured_traceback(
2012 2021 (etype,value,tb), tb_offset=tb_offset
2013 2022 )
2014 2023 return stb
2015 2024
2016 2025 self.CustomTB = types.MethodType(wrapped,self)
2017 2026 self.custom_exceptions = exc_tuple
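# Illustrative sketch of registering a handler with the signature required
# above (``MyError`` and ``quiet_handler`` are hypothetical):
#
#     class MyError(Exception):
#         pass
#
#     def quiet_handler(self, etype, value, tb, tb_offset=None):
#         return ["Suppressed %s: %s" % (etype.__name__, value)]
#
#     get_ipython().set_custom_exc((MyError,), quiet_handler)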
2018 2027
2019 2028 def excepthook(self, etype, value, tb):
2020 2029 """One more defense for GUI apps that call sys.excepthook.
2021 2030
2022 2031 GUI frameworks like wxPython trap exceptions and call
2023 2032 sys.excepthook themselves. I guess this is a feature that
2024 2033 enables them to keep running after exceptions that would
2025 2034 otherwise kill their mainloop. This is a bother for IPython
2026 2035 which expects to catch all of the program exceptions with a try:
2027 2036 except: statement.
2028 2037
2029 2038 Normally, IPython sets sys.excepthook to a CrashHandler instance, so if
2030 2039 any app directly invokes sys.excepthook, it will look to the user like
2031 2040 IPython crashed. In order to work around this, we can disable the
2032 2041 CrashHandler and replace it with this excepthook instead, which prints a
2033 2042 regular traceback using our InteractiveTB. In this fashion, apps which
2034 2043 call sys.excepthook will generate a regular-looking exception from
2035 2044 IPython, and the CrashHandler will only be triggered by real IPython
2036 2045 crashes.
2037 2046
2038 2047 This hook should be used sparingly, only in places which are not likely
2039 2048 to be true IPython errors.
2040 2049 """
2041 2050 self.showtraceback((etype, value, tb), tb_offset=0)
2042 2051
2043 2052 def _get_exc_info(self, exc_tuple=None):
2044 2053 """get exc_info from a given tuple, sys.exc_info() or sys.last_type etc.
2045 2054
2046 2055 Ensures sys.last_type,value,traceback hold the exc_info we found,
2047 2056 from whichever source.
2048 2057
2049 2058 raises ValueError if none of these contain any information
2050 2059 """
2051 2060 if exc_tuple is None:
2052 2061 etype, value, tb = sys.exc_info()
2053 2062 else:
2054 2063 etype, value, tb = exc_tuple
2055 2064
2056 2065 if etype is None:
2057 2066 if hasattr(sys, 'last_type'):
2058 2067 etype, value, tb = sys.last_type, sys.last_value, \
2059 2068 sys.last_traceback
2060 2069
2061 2070 if etype is None:
2062 2071 raise ValueError("No exception to find")
2063 2072
2064 2073 # Now store the exception info in sys.last_type etc.
2065 2074 # WARNING: these variables are somewhat deprecated and not
2066 2075 # necessarily safe to use in a threaded environment, but tools
2067 2076 # like pdb depend on their existence, so let's set them. If we
2068 2077 # find problems in the field, we'll need to revisit their use.
2069 2078 sys.last_type = etype
2070 2079 sys.last_value = value
2071 2080 sys.last_traceback = tb
2072 2081
2073 2082 return etype, value, tb
2074 2083
2075 2084 def show_usage_error(self, exc):
2076 2085 """Show a short message for UsageErrors
2077 2086
2078 2087 These are special exceptions that shouldn't show a traceback.
2079 2088 """
2080 2089 print("UsageError: %s" % exc, file=sys.stderr)
2081 2090
2082 2091 def get_exception_only(self, exc_tuple=None):
2083 2092 """
2084 2093 Return as a string (ending with a newline) the exception that
2085 2094 just occurred, without any traceback.
2086 2095 """
2087 2096 etype, value, tb = self._get_exc_info(exc_tuple)
2088 2097 msg = traceback.format_exception_only(etype, value)
2089 2098 return ''.join(msg)
2090 2099
2091 2100 def showtraceback(self, exc_tuple=None, filename=None, tb_offset=None,
2092 2101 exception_only=False, running_compiled_code=False):
2093 2102 """Display the exception that just occurred.
2094 2103
2095 2104 If nothing is known about the exception, this is the method which
2096 2105 should be used throughout the code for presenting user tracebacks,
2097 2106 rather than directly invoking the InteractiveTB object.
2098 2107
2099 2108 A specific showsyntaxerror() also exists, but this method can take
2100 2109 care of calling it if needed, so unless you are explicitly catching a
2101 2110 SyntaxError exception, don't try to analyze the stack manually and
2102 2111 simply call this method."""
2103 2112
2104 2113 try:
2105 2114 try:
2106 2115 etype, value, tb = self._get_exc_info(exc_tuple)
2107 2116 except ValueError:
2108 2117 print('No traceback available to show.', file=sys.stderr)
2109 2118 return
2110 2119
2111 2120 if issubclass(etype, SyntaxError):
2112 2121 # Though this won't be called by syntax errors in the input
2113 2122 # line, there may be SyntaxError cases with imported code.
2114 2123 self.showsyntaxerror(filename, running_compiled_code)
2115 2124 elif etype is UsageError:
2116 2125 self.show_usage_error(value)
2117 2126 else:
2118 2127 if exception_only:
2119 2128 stb = ['An exception has occurred, use %tb to see '
2120 2129 'the full traceback.\n']
2121 2130 stb.extend(self.InteractiveTB.get_exception_only(etype,
2122 2131 value))
2123 2132 else:
2124 2133
2125 2134 def contains_exceptiongroup(val):
2126 2135 if val is None:
2127 2136 return False
2128 2137 return isinstance(
2129 2138 val, BaseExceptionGroup
2130 2139 ) or contains_exceptiongroup(val.__context__)
2131 2140
2132 2141 if contains_exceptiongroup(value):
2133 2142 # fall back to native exception formatting until ultratb
2134 2143 # supports exception groups
2135 2144 traceback.print_exc()
2136 2145 else:
2137 2146 try:
2138 2147 # Exception classes can customise their traceback - we
2139 2148 # use this in IPython.parallel for exceptions occurring
2140 2149 # in the engines. This should return a list of strings.
2141 2150 if hasattr(value, "_render_traceback_"):
2142 2151 stb = value._render_traceback_()
2143 2152 else:
2144 2153 stb = self.InteractiveTB.structured_traceback(
2145 2154 etype, value, tb, tb_offset=tb_offset
2146 2155 )
2147 2156
2148 2157 except Exception:
2149 2158 print(
2150 2159 "Unexpected exception formatting exception. Falling back to standard exception"
2151 2160 )
2152 2161 traceback.print_exc()
2153 2162 return None
2154 2163
2155 2164 self._showtraceback(etype, value, stb)
2156 2165 if self.call_pdb:
2157 2166 # drop into debugger
2158 2167 self.debugger(force=True)
2159 2168 return
2160 2169
2161 2170 # Actually show the traceback
2162 2171 self._showtraceback(etype, value, stb)
2163 2172
2164 2173 except KeyboardInterrupt:
2165 2174 print('\n' + self.get_exception_only(), file=sys.stderr)
2166 2175
2167 2176 def _showtraceback(self, etype, evalue, stb: str):
2168 2177 """Actually show a traceback.
2169 2178
2170 2179 Subclasses may override this method to put the traceback on a different
2171 2180 place, like a side channel.
2172 2181 """
2173 2182 val = self.InteractiveTB.stb2text(stb)
2174 2183 try:
2175 2184 print(val)
2176 2185 except UnicodeEncodeError:
2177 2186 print(val.encode("utf-8", "backslashreplace").decode())
2178 2187
2179 2188 def showsyntaxerror(self, filename=None, running_compiled_code=False):
2180 2189 """Display the syntax error that just occurred.
2181 2190
2182 2191 This doesn't display a stack trace because there isn't one.
2183 2192
2184 2193 If a filename is given, it is stuffed in the exception instead
2185 2194 of what was there before (because Python's parser always uses
2186 2195 "<string>" when reading from a string).
2187 2196
2188 2197 If the syntax error occurred when running compiled code (i.e. running_compiled_code=True),
2189 2198 a longer stack trace will be displayed.
2190 2199 """
2191 2200 etype, value, last_traceback = self._get_exc_info()
2192 2201
2193 2202 if filename and issubclass(etype, SyntaxError):
2194 2203 try:
2195 2204 value.filename = filename
2196 2205 except:
2197 2206 # Not the format we expect; leave it alone
2198 2207 pass
2199 2208
2200 2209 # If the error occurred when executing compiled code, we should provide full stacktrace.
2201 2210 elist = traceback.extract_tb(last_traceback) if running_compiled_code else []
2202 2211 stb = self.SyntaxTB.structured_traceback(etype, value, elist)
2203 2212 self._showtraceback(etype, value, stb)
2204 2213
2205 2214 # This is overridden in TerminalInteractiveShell to show a message about
2206 2215 # the %paste magic.
2207 2216 def showindentationerror(self):
2208 2217 """Called by _run_cell when there's an IndentationError in code entered
2209 2218 at the prompt.
2210 2219
2211 2220 This is overridden in TerminalInteractiveShell to show a message about
2212 2221 the %paste magic."""
2213 2222 self.showsyntaxerror()
2214 2223
2215 2224 @skip_doctest
2216 2225 def set_next_input(self, s, replace=False):
2217 2226 """ Sets the 'default' input string for the next command line.
2218 2227
2219 2228 Example::
2220 2229
2221 2230 In [1]: _ip.set_next_input("Hello World")
2222 2231 In [2]: Hello World_ # cursor is here
2223 2232 """
2224 2233 self.rl_next_input = s
2225 2234
2226 2235 def _indent_current_str(self):
2227 2236 """return the current level of indentation as a string"""
2228 2237 return self.input_splitter.get_indent_spaces() * ' '
2229 2238
2230 2239 #-------------------------------------------------------------------------
2231 2240 # Things related to text completion
2232 2241 #-------------------------------------------------------------------------
2233 2242
2234 2243 def init_completer(self):
2235 2244 """Initialize the completion machinery.
2236 2245
2237 2246 This creates completion machinery that can be used by client code,
2238 2247 either interactively in-process (typically triggered by the readline
2239 2248 library), programmatically (such as in test suites) or out-of-process
2240 2249 (typically over the network by remote frontends).
2241 2250 """
2242 2251 from IPython.core.completer import IPCompleter
2243 2252 from IPython.core.completerlib import (
2244 2253 cd_completer,
2245 2254 magic_run_completer,
2246 2255 module_completer,
2247 2256 reset_completer,
2248 2257 )
2249 2258
2250 2259 self.Completer = IPCompleter(shell=self,
2251 2260 namespace=self.user_ns,
2252 2261 global_namespace=self.user_global_ns,
2253 2262 parent=self,
2254 2263 )
2255 2264 self.configurables.append(self.Completer)
2256 2265
2257 2266 # Add custom completers to the basic ones built into IPCompleter
2258 2267 sdisp = self.strdispatchers.get('complete_command', StrDispatch())
2259 2268 self.strdispatchers['complete_command'] = sdisp
2260 2269 self.Completer.custom_completers = sdisp
2261 2270
2262 2271 self.set_hook('complete_command', module_completer, str_key = 'import')
2263 2272 self.set_hook('complete_command', module_completer, str_key = 'from')
2264 2273 self.set_hook('complete_command', module_completer, str_key = '%aimport')
2265 2274 self.set_hook('complete_command', magic_run_completer, str_key = '%run')
2266 2275 self.set_hook('complete_command', cd_completer, str_key = '%cd')
2267 2276 self.set_hook('complete_command', reset_completer, str_key = '%reset')
2268 2277
2269 2278 @skip_doctest
2270 2279 def complete(self, text, line=None, cursor_pos=None):
2271 2280 """Return the completed text and a list of completions.
2272 2281
2273 2282 Parameters
2274 2283 ----------
2275 2284 text : string
2276 2285 A string of text to be completed on. It can be given as empty and
2277 2286 instead a line/position pair are given. In this case, the
2278 2287 completer itself will split the line like readline does.
2279 2288 line : string, optional
2280 2289 The complete line that text is part of.
2281 2290 cursor_pos : int, optional
2282 2291 The position of the cursor on the input line.
2283 2292
2284 2293 Returns
2285 2294 -------
2286 2295 text : string
2287 2296 The actual text that was completed.
2288 2297 matches : list
2289 2298 A sorted list with all possible completions.
2290 2299
2291 2300 Notes
2292 2301 -----
2293 2302 The optional arguments allow the completion to take more context into
2294 2303 account, and are part of the low-level completion API.
2295 2304
2296 2305 This is a wrapper around the completion mechanism, similar to what
2297 2306 readline does at the command line when the TAB key is hit. By
2298 2307 exposing it as a method, it can be used by other non-readline
2299 2308 environments (such as GUIs) for text completion.
2300 2309
2301 2310 Examples
2302 2311 --------
2303 2312 In [1]: x = 'hello'
2304 2313
2305 2314 In [2]: _ip.complete('x.l')
2306 2315 Out[2]: ('x.l', ['x.ljust', 'x.lower', 'x.lstrip'])
2307 2316 """
2308 2317
2309 2318 # Inject names into __builtin__ so we can complete on the added names.
2310 2319 with self.builtin_trap:
2311 2320 return self.Completer.complete(text, line, cursor_pos)
2312 2321
2313 2322 def set_custom_completer(self, completer, pos=0) -> None:
2314 2323 """Adds a new custom completer function.
2315 2324
2316 2325 The position argument (defaults to 0) is the index in the completers
2317 2326 list where you want the completer to be inserted.
2318 2327
2319 2328 `completer` should have the following signature::
2320 2329
2321 2330 def completion(self: Completer, text: str) -> List[str]:
2322 2331 raise NotImplementedError
2323 2332
2324 2333 It will be bound to the current Completer instance; it receives the current
2325 2334 text and should return a list of completion strings to suggest to the user.
2326 2335 """
2327 2336
2328 2337 newcomp = types.MethodType(completer, self.Completer)
2329 2338 self.Completer.custom_matchers.insert(pos,newcomp)
2330 2339
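# A minimal usage sketch (illustrative only, not from the original source): registering
# a custom completer with the signature documented above. It assumes a live IPython
# session; the matcher name and word list below are made up for the example.
from IPython import get_ipython

def fruit_completer(self, text):
    # `self` is the Completer instance this function gets bound to.
    return [w for w in ("apple", "apricot", "avocado") if w.startswith(text)]

get_ipython().set_custom_completer(fruit_completer)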
2331 2340 def set_completer_frame(self, frame=None):
2332 2341 """Set the frame of the completer."""
2333 2342 if frame:
2334 2343 self.Completer.namespace = frame.f_locals
2335 2344 self.Completer.global_namespace = frame.f_globals
2336 2345 else:
2337 2346 self.Completer.namespace = self.user_ns
2338 2347 self.Completer.global_namespace = self.user_global_ns
2339 2348
2340 2349 #-------------------------------------------------------------------------
2341 2350 # Things related to magics
2342 2351 #-------------------------------------------------------------------------
2343 2352
2344 2353 def init_magics(self):
2345 2354 from IPython.core import magics as m
2346 2355 self.magics_manager = magic.MagicsManager(shell=self,
2347 2356 parent=self,
2348 2357 user_magics=m.UserMagics(self))
2349 2358 self.configurables.append(self.magics_manager)
2350 2359
2351 2360 # Expose as public API from the magics manager
2352 2361 self.register_magics = self.magics_manager.register
2353 2362
2354 2363 self.register_magics(m.AutoMagics, m.BasicMagics, m.CodeMagics,
2355 2364 m.ConfigMagics, m.DisplayMagics, m.ExecutionMagics,
2356 2365 m.ExtensionMagics, m.HistoryMagics, m.LoggingMagics,
2357 2366 m.NamespaceMagics, m.OSMagics, m.PackagingMagics,
2358 2367 m.PylabMagics, m.ScriptMagics,
2359 2368 )
2360 2369 self.register_magics(m.AsyncMagics)
2361 2370
2362 2371 # Register Magic Aliases
2363 2372 mman = self.magics_manager
2364 2373 # FIXME: magic aliases should be defined by the Magics classes
2365 2374 # or in MagicsManager, not here
2366 2375 mman.register_alias('ed', 'edit')
2367 2376 mman.register_alias('hist', 'history')
2368 2377 mman.register_alias('rep', 'recall')
2369 2378 mman.register_alias('SVG', 'svg', 'cell')
2370 2379 mman.register_alias('HTML', 'html', 'cell')
2371 2380 mman.register_alias('file', 'writefile', 'cell')
2372 2381
2373 2382 # FIXME: Move the color initialization to the DisplayHook, which
2374 2383 # should be split into a prompt manager and displayhook. We probably
2375 2384 # even need a centralized color management object.
2376 2385 self.run_line_magic('colors', self.colors)
2377 2386
2378 2387 # Defined here so that it's included in the documentation
2379 2388 @functools.wraps(magic.MagicsManager.register_function)
2380 2389 def register_magic_function(self, func, magic_kind='line', magic_name=None):
2381 2390 self.magics_manager.register_function(
2382 2391 func, magic_kind=magic_kind, magic_name=magic_name
2383 2392 )
2384 2393
2385 2394 def _find_with_lazy_load(self, /, type_, magic_name: str):
2386 2395 """
2387 2396 Try to find a magic potentially lazy-loading it.
2388 2397
2389 2398 Parameters
2390 2399 ----------
2391 2400
2392 2401 type_: "line"|"cell"
2393 2402 the type of magics we are trying to find/lazy load.
2394 2403 magic_name: str
2395 2404 The name of the magic we are trying to find/lazy load
2396 2405
2397 2406
2398 2407 Note that this may have side effects (for example, loading an extension).
2399 2408 """
2400 2409 finder = {"line": self.find_line_magic, "cell": self.find_cell_magic}[type_]
2401 2410 fn = finder(magic_name)
2402 2411 if fn is not None:
2403 2412 return fn
2404 2413 lazy = self.magics_manager.lazy_magics.get(magic_name)
2405 2414 if lazy is None:
2406 2415 return None
2407 2416
2408 2417 self.run_line_magic("load_ext", lazy)
2409 2418 res = finder(magic_name)
2410 2419 return res
2411 2420
2412 def run_line_magic(self, magic_name: str, line, _stack_depth=1):
2421 def run_line_magic(self, magic_name: str, line: str, _stack_depth=1):
2413 2422 """Execute the given line magic.
2414 2423
2415 2424 Parameters
2416 2425 ----------
2417 2426 magic_name : str
2418 2427 Name of the desired magic function, without '%' prefix.
2419 2428 line : str
2420 2429 The rest of the input line as a single string.
2421 2430 _stack_depth : int
2422 2431 If run_line_magic() is called from magic() then _stack_depth=2.
2423 2432 This is added to ensure backward compatibility for use of 'get_ipython().magic()'
2424 2433 """
2425 2434 fn = self._find_with_lazy_load("line", magic_name)
2426 2435 if fn is None:
2427 2436 lazy = self.magics_manager.lazy_magics.get(magic_name)
2428 2437 if lazy:
2429 2438 self.run_line_magic("load_ext", lazy)
2430 2439 fn = self.find_line_magic(magic_name)
2431 2440 if fn is None:
2432 2441 cm = self.find_cell_magic(magic_name)
2433 2442 etpl = "Line magic function `%%%s` not found%s."
2434 2443 extra = '' if cm is None else (' (But cell magic `%%%%%s` exists, '
2435 2444 'did you mean that instead?)' % magic_name )
2436 2445 raise UsageError(etpl % (magic_name, extra))
2437 2446 else:
2438 2447 # Note: this is the distance in the stack to the user's frame.
2439 2448 # This will need to be updated if the internal calling logic gets
2440 2449 # refactored, or else we'll be expanding the wrong variables.
2441 2450
2442 2451 # Determine stack_depth depending on where run_line_magic() has been called
2443 2452 stack_depth = _stack_depth
2444 2453 if getattr(fn, magic.MAGIC_NO_VAR_EXPAND_ATTR, False):
2445 2454 # magic has opted out of var_expand
2446 2455 magic_arg_s = line
2447 2456 else:
2448 2457 magic_arg_s = self.var_expand(line, stack_depth)
2449 2458 # Put magic args in a list so we can call with f(*a) syntax
2450 2459 args = [magic_arg_s]
2451 2460 kwargs = {}
2452 2461 # Grab local namespace if we need it:
2453 2462 if getattr(fn, "needs_local_scope", False):
2454 2463 kwargs['local_ns'] = self.get_local_scope(stack_depth)
2455 2464 with self.builtin_trap:
2456 2465 result = fn(*args, **kwargs)
2457 2466
2458 2467 # The code below prevents the output from being displayed
2459 2468 # when using magics with decorator @output_can_be_silenced
2460 2469 # when the last Python token in the expression is a ';'.
2461 2470 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
2462 2471 if DisplayHook.semicolon_at_end_of_expression(magic_arg_s):
2463 2472 return None
2464 2473
2465 2474 return result
2466 2475
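# A minimal usage sketch (illustrative only, not from the original source): calling line
# magics programmatically through run_line_magic, the documented replacement for the
# deprecated magic() helper. It assumes a live IPython session.
from IPython import get_ipython

ip = get_ipython()
ip.run_line_magic("timeit", "sum(range(100))")  # same as typing: %timeit sum(range(100))
ip.run_line_magic("pwd", "")                    # magics taking no arguments get an empty line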
2467 2476 def get_local_scope(self, stack_depth):
2468 2477 """Get local scope at given stack depth.
2469 2478
2470 2479 Parameters
2471 2480 ----------
2472 2481 stack_depth : int
2473 2482 Depth relative to calling frame
2474 2483 """
2475 2484 return sys._getframe(stack_depth + 1).f_locals
2476 2485
2477 2486 def run_cell_magic(self, magic_name, line, cell):
2478 2487 """Execute the given cell magic.
2479 2488
2480 2489 Parameters
2481 2490 ----------
2482 2491 magic_name : str
2483 2492 Name of the desired magic function, without '%' prefix.
2484 2493 line : str
2485 2494 The rest of the first input line as a single string.
2486 2495 cell : str
2487 2496 The body of the cell as a (possibly multiline) string.
2488 2497 """
2489 2498 fn = self._find_with_lazy_load("cell", magic_name)
2490 2499 if fn is None:
2491 2500 lm = self.find_line_magic(magic_name)
2492 2501 etpl = "Cell magic `%%{0}` not found{1}."
2493 2502 extra = '' if lm is None else (' (But line magic `%{0}` exists, '
2494 2503 'did you mean that instead?)'.format(magic_name))
2495 2504 raise UsageError(etpl.format(magic_name, extra))
2496 2505 elif cell == '':
2497 2506 message = '%%{0} is a cell magic, but the cell body is empty.'.format(magic_name)
2498 2507 if self.find_line_magic(magic_name) is not None:
2499 2508 message += ' Did you mean the line magic %{0} (single %)?'.format(magic_name)
2500 2509 raise UsageError(message)
2501 2510 else:
2502 2511 # Note: this is the distance in the stack to the user's frame.
2503 2512 # This will need to be updated if the internal calling logic gets
2504 2513 # refactored, or else we'll be expanding the wrong variables.
2505 2514 stack_depth = 2
2506 2515 if getattr(fn, magic.MAGIC_NO_VAR_EXPAND_ATTR, False):
2507 2516 # magic has opted out of var_expand
2508 2517 magic_arg_s = line
2509 2518 else:
2510 2519 magic_arg_s = self.var_expand(line, stack_depth)
2511 2520 kwargs = {}
2512 2521 if getattr(fn, "needs_local_scope", False):
2513 2522 kwargs['local_ns'] = self.user_ns
2514 2523
2515 2524 with self.builtin_trap:
2516 2525 args = (magic_arg_s, cell)
2517 2526 result = fn(*args, **kwargs)
2518 2527
2519 2528 # The code below prevents the output from being displayed
2520 2529 # when using magics with decorator @output_can_be_silenced
2521 2530 # when the last Python token in the expression is a ';'.
2522 2531 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
2523 2532 if DisplayHook.semicolon_at_end_of_expression(cell):
2524 2533 return None
2525 2534
2526 2535 return result
2527 2536
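# A minimal usage sketch (illustrative only, not from the original source): invoking a
# cell magic programmatically; the arguments are the magic name, the options line, and
# the cell body. It assumes a live IPython session.
from IPython import get_ipython

ip = get_ipython()
ip.run_cell_magic("timeit", "-n 10", "total = 0\nfor i in range(1000):\n    total += i")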
2528 2537 def find_line_magic(self, magic_name):
2529 2538 """Find and return a line magic by name.
2530 2539
2531 2540 Returns None if the magic isn't found."""
2532 2541 return self.magics_manager.magics['line'].get(magic_name)
2533 2542
2534 2543 def find_cell_magic(self, magic_name):
2535 2544 """Find and return a cell magic by name.
2536 2545
2537 2546 Returns None if the magic isn't found."""
2538 2547 return self.magics_manager.magics['cell'].get(magic_name)
2539 2548
2540 2549 def find_magic(self, magic_name, magic_kind='line'):
2541 2550 """Find and return a magic of the given type by name.
2542 2551
2543 2552 Returns None if the magic isn't found."""
2544 2553 return self.magics_manager.magics[magic_kind].get(magic_name)
2545 2554
2546 2555 def magic(self, arg_s):
2547 2556 """
2548 2557 DEPRECATED
2549 2558
2550 2559 Deprecated since IPython 0.13 (warning added in
2551 2560 8.1), use run_line_magic(magic_name, parameter_s).
2552 2561
2553 2562 Call a magic function by name.
2554 2563
2555 2564 Input: a string containing the name of the magic function to call and
2556 2565 any additional arguments to be passed to the magic.
2557 2566
2558 2567 magic('name -opt foo bar') is equivalent to typing at the ipython
2559 2568 prompt:
2560 2569
2561 2570 In[1]: %name -opt foo bar
2562 2571
2563 2572 To call a magic without arguments, simply use magic('name').
2564 2573
2565 2574 This provides a proper Python function to call IPython's magics in any
2566 2575 valid Python code you can type at the interpreter, including loops and
2567 2576 compound statements.
2568 2577 """
2569 2578 warnings.warn(
2570 2579 "`magic(...)` is deprecated since IPython 0.13 (warning added in "
2571 2580 "8.1), use run_line_magic(magic_name, parameter_s).",
2572 2581 DeprecationWarning,
2573 2582 stacklevel=2,
2574 2583 )
2575 2584 # TODO: should we issue a loud deprecation warning here?
2576 2585 magic_name, _, magic_arg_s = arg_s.partition(' ')
2577 2586 magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
2578 2587 return self.run_line_magic(magic_name, magic_arg_s, _stack_depth=2)
2579 2588
2580 2589 #-------------------------------------------------------------------------
2581 2590 # Things related to macros
2582 2591 #-------------------------------------------------------------------------
2583 2592
2584 2593 def define_macro(self, name, themacro):
2585 2594 """Define a new macro
2586 2595
2587 2596 Parameters
2588 2597 ----------
2589 2598 name : str
2590 2599 The name of the macro.
2591 2600 themacro : str or Macro
2592 2601 The action to do upon invoking the macro. If a string, a new
2593 2602 Macro object is created by passing the string to it.
2594 2603 """
2595 2604
2596 2605 from IPython.core import macro
2597 2606
2598 2607 if isinstance(themacro, str):
2599 2608 themacro = macro.Macro(themacro)
2600 2609 if not isinstance(themacro, macro.Macro):
2601 2610 raise ValueError('A macro must be a string or a Macro instance.')
2602 2611 self.user_ns[name] = themacro
2603 2612
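# A minimal usage sketch (illustrative only, not from the original source): defining a
# macro from a plain string; the name is installed in user_ns and replays the stored
# lines when entered as input. The macro body here is a made-up example.
from IPython import get_ipython

ip = get_ipython()
ip.define_macro("greet", "name = 'world'\nprint('hello', name)\n")
# Typing `greet` at the prompt now re-runs the two stored lines.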
2604 2613 #-------------------------------------------------------------------------
2605 2614 # Things related to the running of system commands
2606 2615 #-------------------------------------------------------------------------
2607 2616
2608 2617 def system_piped(self, cmd):
2609 2618 """Call the given cmd in a subprocess, piping stdout/err
2610 2619
2611 2620 Parameters
2612 2621 ----------
2613 2622 cmd : str
2614 2623 Command to execute (cannot end in '&', as background processes are
2615 2624 not supported). It should not be a command that expects input
2616 2625 other than simple text.
2617 2626 """
2618 2627 if cmd.rstrip().endswith('&'):
2619 2628 # this is *far* from a rigorous test
2620 2629 # We do not support backgrounding processes because we either use
2621 2630 # pexpect or pipes to read from. Users can always just call
2622 2631 # os.system() or use ip.system=ip.system_raw
2623 2632 # if they really want a background process.
2624 2633 raise OSError("Background processes not supported.")
2625 2634
2626 2635 # we explicitly do NOT return the subprocess status code, because
2627 2636 # a non-None value would trigger :func:`sys.displayhook` calls.
2628 2637 # Instead, we store the exit_code in user_ns.
2629 2638 self.user_ns['_exit_code'] = system(self.var_expand(cmd, depth=1))
2630 2639
2631 2640 def system_raw(self, cmd):
2632 2641 """Call the given cmd in a subprocess using os.system on Windows or
2633 2642 subprocess.call using the system shell on other platforms.
2634 2643
2635 2644 Parameters
2636 2645 ----------
2637 2646 cmd : str
2638 2647 Command to execute.
2639 2648 """
2640 2649 cmd = self.var_expand(cmd, depth=1)
2641 2650 # warn if there is an IPython magic alternative.
2642 2651 if cmd == "":
2643 2652 main_cmd = ""
2644 2653 else:
2645 2654 main_cmd = cmd.split()[0]
2646 2655 has_magic_alternatives = ("pip", "conda", "cd")
2647 2656
2648 2657 if main_cmd in has_magic_alternatives:
2649 2658 warnings.warn(
2650 2659 (
2651 2660 "You executed the system command !{0} which may not work "
2652 2661 "as expected. Try the IPython magic %{0} instead."
2653 2662 ).format(main_cmd)
2654 2663 )
2655 2664
2656 2665 # protect os.system from UNC paths on Windows, which it can't handle:
2657 2666 if sys.platform == 'win32':
2658 2667 from IPython.utils._process_win32 import AvoidUNCPath
2659 2668 with AvoidUNCPath() as path:
2660 2669 if path is not None:
2661 2670 cmd = '"pushd %s &&"%s' % (path, cmd)
2662 2671 try:
2663 2672 ec = os.system(cmd)
2664 2673 except KeyboardInterrupt:
2665 2674 print('\n' + self.get_exception_only(), file=sys.stderr)
2666 2675 ec = -2
2667 2676 else:
2668 2677 # For posix the result of the subprocess.call() below is an exit
2669 2678 # code, which by convention is zero for success, positive for
2670 2679 # program failure. Exit codes above 128 are reserved for signals,
2671 2680 # and the formula for converting a signal to an exit code is usually
2672 2681 # signal_number+128. To more easily differentiate between exit
2673 2682 # codes and signals, ipython uses negative numbers. For instance
2674 2683 # since control-c is signal 2 but exit code 130, ipython's
2675 2684 # _exit_code variable will read -2. Note that some shells like
2676 2685 # csh and fish don't follow sh/bash conventions for exit codes.
2677 2686 executable = os.environ.get('SHELL', None)
2678 2687 try:
2679 2688 # Use env shell instead of default /bin/sh
2680 2689 ec = subprocess.call(cmd, shell=True, executable=executable)
2681 2690 except KeyboardInterrupt:
2682 2691 # intercept control-C; a long traceback is not useful here
2683 2692 print('\n' + self.get_exception_only(), file=sys.stderr)
2684 2693 ec = 130
2685 2694 if ec > 128:
2686 2695 ec = -(ec - 128)
2687 2696
2688 2697 # We explicitly do NOT return the subprocess status code, because
2689 2698 # a non-None value would trigger :func:`sys.displayhook` calls.
2690 2699 # Instead, we store the exit_code in user_ns. Note the semantics
2691 2700 # of _exit_code: for control-c, _exit_code == -signal.SIGINT,
2692 2701 # but raising SystemExit(_exit_code) will give status 254!
2693 2702 self.user_ns['_exit_code'] = ec
2694 2703
2695 2704 # use piped system by default, because it is better behaved
2696 2705 system = system_piped
2697 2706
2698 2707 def getoutput(self, cmd, split=True, depth=0):
2699 2708 """Get output (possibly including stderr) from a subprocess.
2700 2709
2701 2710 Parameters
2702 2711 ----------
2703 2712 cmd : str
2704 2713 Command to execute (cannot end in '&', as background processes are
2705 2714 not supported).
2706 2715 split : bool, optional
2707 2716 If True, split the output into an IPython SList. Otherwise, an
2708 2717 IPython LSString is returned. These are objects similar to normal
2709 2718 lists and strings, with a few convenience attributes for easier
2710 2719 manipulation of line-based output. You can use '?' on them for
2711 2720 details.
2712 2721 depth : int, optional
2713 2722 How many frames above the caller are the local variables which should
2714 2723 be expanded in the command string? The default (0) assumes that the
2715 2724 expansion variables are in the stack frame calling this function.
2716 2725 """
2717 2726 if cmd.rstrip().endswith('&'):
2718 2727 # this is *far* from a rigorous test
2719 2728 raise OSError("Background processes not supported.")
2720 2729 out = getoutput(self.var_expand(cmd, depth=depth+1))
2721 2730 if split:
2722 2731 out = SList(out.splitlines())
2723 2732 else:
2724 2733 out = LSString(out)
2725 2734 return out
2726 2735
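# A minimal usage sketch (illustrative only, not from the original source): capturing
# command output as an SList of lines versus a single LSString. The echo pipeline is
# only an example and depends on the platform shell.
from IPython import get_ipython

ip = get_ipython()
lines = ip.getoutput("echo one && echo two")               # SList, list-like access per line
text = ip.getoutput("echo one && echo two", split=False)   # LSString, one block of text
print(lines.n)   # both types expose the newline-joined text through the .n attribute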
2727 2736 #-------------------------------------------------------------------------
2728 2737 # Things related to aliases
2729 2738 #-------------------------------------------------------------------------
2730 2739
2731 2740 def init_alias(self):
2732 2741 self.alias_manager = AliasManager(shell=self, parent=self)
2733 2742 self.configurables.append(self.alias_manager)
2734 2743
2735 2744 #-------------------------------------------------------------------------
2736 2745 # Things related to extensions
2737 2746 #-------------------------------------------------------------------------
2738 2747
2739 2748 def init_extension_manager(self):
2740 2749 self.extension_manager = ExtensionManager(shell=self, parent=self)
2741 2750 self.configurables.append(self.extension_manager)
2742 2751
2743 2752 #-------------------------------------------------------------------------
2744 2753 # Things related to payloads
2745 2754 #-------------------------------------------------------------------------
2746 2755
2747 2756 def init_payload(self):
2748 2757 self.payload_manager = PayloadManager(parent=self)
2749 2758 self.configurables.append(self.payload_manager)
2750 2759
2751 2760 #-------------------------------------------------------------------------
2752 2761 # Things related to the prefilter
2753 2762 #-------------------------------------------------------------------------
2754 2763
2755 2764 def init_prefilter(self):
2756 2765 self.prefilter_manager = PrefilterManager(shell=self, parent=self)
2757 2766 self.configurables.append(self.prefilter_manager)
2758 2767 # Ultimately this will be refactored in the new interpreter code, but
2759 2768 # for now, we should expose the main prefilter method (there's legacy
2760 2769 # code out there that may rely on this).
2761 2770 self.prefilter = self.prefilter_manager.prefilter_lines
2762 2771
2763 2772 def auto_rewrite_input(self, cmd):
2764 2773 """Print to the screen the rewritten form of the user's command.
2765 2774
2766 2775 This shows visual feedback by rewriting input lines that cause
2767 2776 automatic calling to kick in, like::
2768 2777
2769 2778 /f x
2770 2779
2771 2780 into::
2772 2781
2773 2782 ------> f(x)
2774 2783
2775 2784 after the user's input prompt. This helps the user understand that the
2776 2785 input line was transformed automatically by IPython.
2777 2786 """
2778 2787 if not self.show_rewritten_input:
2779 2788 return
2780 2789
2781 2790 # This is overridden in TerminalInteractiveShell to use fancy prompts
2782 2791 print("------> " + cmd)
2783 2792
2784 2793 #-------------------------------------------------------------------------
2785 2794 # Things related to extracting values/expressions from kernel and user_ns
2786 2795 #-------------------------------------------------------------------------
2787 2796
2788 2797 def _user_obj_error(self):
2789 2798 """return simple exception dict
2790 2799
2791 2800 for use in user_expressions
2792 2801 """
2793 2802
2794 2803 etype, evalue, tb = self._get_exc_info()
2795 2804 stb = self.InteractiveTB.get_exception_only(etype, evalue)
2796 2805
2797 2806 exc_info = {
2798 2807 "status": "error",
2799 2808 "traceback": stb,
2800 2809 "ename": etype.__name__,
2801 2810 "evalue": py3compat.safe_unicode(evalue),
2802 2811 }
2803 2812
2804 2813 return exc_info
2805 2814
2806 2815 def _format_user_obj(self, obj):
2807 2816 """format a user object to display dict
2808 2817
2809 2818 for use in user_expressions
2810 2819 """
2811 2820
2812 2821 data, md = self.display_formatter.format(obj)
2813 2822 value = {
2814 2823 'status' : 'ok',
2815 2824 'data' : data,
2816 2825 'metadata' : md,
2817 2826 }
2818 2827 return value
2819 2828
2820 2829 def user_expressions(self, expressions):
2821 2830 """Evaluate a dict of expressions in the user's namespace.
2822 2831
2823 2832 Parameters
2824 2833 ----------
2825 2834 expressions : dict
2826 2835 A dict with string keys and string values. The expression values
2827 2836 should be valid Python expressions, each of which will be evaluated
2828 2837 in the user namespace.
2829 2838
2830 2839 Returns
2831 2840 -------
2832 2841 A dict, keyed like the input expressions dict, with the rich mime-typed
2833 2842 display_data of each value.
2834 2843 """
2835 2844 out = {}
2836 2845 user_ns = self.user_ns
2837 2846 global_ns = self.user_global_ns
2838 2847
2839 2848 for key, expr in expressions.items():
2840 2849 try:
2841 2850 value = self._format_user_obj(eval(expr, global_ns, user_ns))
2842 2851 except:
2843 2852 value = self._user_obj_error()
2844 2853 out[key] = value
2845 2854 return out
2846 2855
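# A minimal usage sketch (illustrative only, not from the original source): evaluating a
# dict of expressions in the user namespace, the way a remote frontend would. The keys
# are arbitrary labels chosen by the caller.
from IPython import get_ipython

ip = get_ipython()
ip.user_ns["x"] = 41
out = ip.user_expressions({"answer": "x + 1", "broken": "undefined_name"})
# out["answer"]["status"] == "ok" with rich display data,
# out["broken"]["status"] == "error" with the formatted exception.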
2847 2856 #-------------------------------------------------------------------------
2848 2857 # Things related to the running of code
2849 2858 #-------------------------------------------------------------------------
2850 2859
2851 2860 def ex(self, cmd):
2852 2861 """Execute a normal python statement in user namespace."""
2853 2862 with self.builtin_trap:
2854 2863 exec(cmd, self.user_global_ns, self.user_ns)
2855 2864
2856 2865 def ev(self, expr):
2857 2866 """Evaluate python expression expr in user namespace.
2858 2867
2859 2868 Returns the result of evaluation
2860 2869 """
2861 2870 with self.builtin_trap:
2862 2871 return eval(expr, self.user_global_ns, self.user_ns)
2863 2872
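# A minimal usage sketch (illustrative only, not from the original source): ex() executes
# a statement and ev() evaluates an expression, both in the user namespace.
from IPython import get_ipython

ip = get_ipython()
ip.ex("counter = 10")          # statement, no return value
print(ip.ev("counter * 2"))    # expression, prints 20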
2864 2873 def safe_execfile(self, fname, *where, exit_ignore=False, raise_exceptions=False, shell_futures=False):
2865 2874 """A safe version of the builtin execfile().
2866 2875
2867 2876 This version will never throw an exception, but instead print
2868 2877 helpful error messages to the screen. This only works on pure
2869 2878 Python files with the .py extension.
2870 2879
2871 2880 Parameters
2872 2881 ----------
2873 2882 fname : string
2874 2883 The name of the file to be executed.
2875 2884 *where : tuple
2876 2885 One or two namespaces, passed to execfile() as (globals,locals).
2877 2886 If only one is given, it is passed as both.
2878 2887 exit_ignore : bool (False)
2879 2888 If True, then silence SystemExit for non-zero status (it is always
2880 2889 silenced for zero status, as it is so common).
2881 2890 raise_exceptions : bool (False)
2882 2891 If True raise exceptions everywhere. Meant for testing.
2883 2892 shell_futures : bool (False)
2884 2893 If True, the code will share future statements with the interactive
2885 2894 shell. It will both be affected by previous __future__ imports, and
2886 2895 any __future__ imports in the code will affect the shell. If False,
2887 2896 __future__ imports are not shared in either direction.
2888 2897
2889 2898 """
2890 2899 fname = Path(fname).expanduser().resolve()
2891 2900
2892 2901 # Make sure we can open the file
2893 2902 try:
2894 2903 with fname.open("rb"):
2895 2904 pass
2896 2905 except:
2897 2906 warn('Could not open file <%s> for safe execution.' % fname)
2898 2907 return
2899 2908
2900 2909 # Find things also in current directory. This is needed to mimic the
2901 2910 # behavior of running a script from the system command line, where
2902 2911 # Python inserts the script's directory into sys.path
2903 2912 dname = str(fname.parent)
2904 2913
2905 2914 with prepended_to_syspath(dname), self.builtin_trap:
2906 2915 try:
2907 2916 glob, loc = (where + (None, ))[:2]
2908 2917 py3compat.execfile(
2909 2918 fname, glob, loc,
2910 2919 self.compile if shell_futures else None)
2911 2920 except SystemExit as status:
2912 2921 # If the call was made with 0 or None exit status (sys.exit(0)
2913 2922 # or sys.exit() ), don't bother showing a traceback, as both of
2914 2923 # these are considered normal by the OS:
2915 2924 # > python -c'import sys;sys.exit(0)'; echo $?
2916 2925 # 0
2917 2926 # > python -c'import sys;sys.exit()'; echo $?
2918 2927 # 0
2919 2928 # For other exit status, we show the exception unless
2920 2929 # explicitly silenced, but only in short form.
2921 2930 if status.code:
2922 2931 if raise_exceptions:
2923 2932 raise
2924 2933 if not exit_ignore:
2925 2934 self.showtraceback(exception_only=True)
2926 2935 except:
2927 2936 if raise_exceptions:
2928 2937 raise
2929 2938 # tb offset is 2 because we wrap execfile
2930 2939 self.showtraceback(tb_offset=2)
2931 2940
2932 2941 def safe_execfile_ipy(self, fname, shell_futures=False, raise_exceptions=False):
2933 2942 """Like safe_execfile, but for .ipy or .ipynb files with IPython syntax.
2934 2943
2935 2944 Parameters
2936 2945 ----------
2937 2946 fname : str
2938 2947 The name of the file to execute. The filename must have a
2939 2948 .ipy or .ipynb extension.
2940 2949 shell_futures : bool (False)
2941 2950 If True, the code will share future statements with the interactive
2942 2951 shell. It will both be affected by previous __future__ imports, and
2943 2952 any __future__ imports in the code will affect the shell. If False,
2944 2953 __future__ imports are not shared in either direction.
2945 2954 raise_exceptions : bool (False)
2946 2955 If True raise exceptions everywhere. Meant for testing.
2947 2956 """
2948 2957 fname = Path(fname).expanduser().resolve()
2949 2958
2950 2959 # Make sure we can open the file
2951 2960 try:
2952 2961 with fname.open("rb"):
2953 2962 pass
2954 2963 except:
2955 2964 warn('Could not open file <%s> for safe execution.' % fname)
2956 2965 return
2957 2966
2958 2967 # Find things also in current directory. This is needed to mimic the
2959 2968 # behavior of running a script from the system command line, where
2960 2969 # Python inserts the script's directory into sys.path
2961 2970 dname = str(fname.parent)
2962 2971
2963 2972 def get_cells():
2964 2973 """generator for sequence of code blocks to run"""
2965 2974 if fname.suffix == ".ipynb":
2966 2975 from nbformat import read
2967 2976 nb = read(fname, as_version=4)
2968 2977 if not nb.cells:
2969 2978 return
2970 2979 for cell in nb.cells:
2971 2980 if cell.cell_type == 'code':
2972 2981 yield cell.source
2973 2982 else:
2974 2983 yield fname.read_text(encoding="utf-8")
2975 2984
2976 2985 with prepended_to_syspath(dname):
2977 2986 try:
2978 2987 for cell in get_cells():
2979 2988 result = self.run_cell(cell, silent=True, shell_futures=shell_futures)
2980 2989 if raise_exceptions:
2981 2990 result.raise_error()
2982 2991 elif not result.success:
2983 2992 break
2984 2993 except:
2985 2994 if raise_exceptions:
2986 2995 raise
2987 2996 self.showtraceback()
2988 2997 warn('Unknown failure executing file: <%s>' % fname)
2989 2998
2990 2999 def safe_run_module(self, mod_name, where):
2991 3000 """A safe version of runpy.run_module().
2992 3001
2993 3002 This version will never throw an exception, but instead print
2994 3003 helpful error messages to the screen.
2995 3004
2996 3005 `SystemExit` exceptions with status code 0 or None are ignored.
2997 3006
2998 3007 Parameters
2999 3008 ----------
3000 3009 mod_name : string
3001 3010 The name of the module to be executed.
3002 3011 where : dict
3003 3012 The globals namespace.
3004 3013 """
3005 3014 try:
3006 3015 try:
3007 3016 where.update(
3008 3017 runpy.run_module(str(mod_name), run_name="__main__",
3009 3018 alter_sys=True)
3010 3019 )
3011 3020 except SystemExit as status:
3012 3021 if status.code:
3013 3022 raise
3014 3023 except:
3015 3024 self.showtraceback()
3016 3025 warn('Unknown failure executing module: <%s>' % mod_name)
3017 3026
3018 3027 def run_cell(
3019 3028 self,
3020 3029 raw_cell,
3021 3030 store_history=False,
3022 3031 silent=False,
3023 3032 shell_futures=True,
3024 3033 cell_id=None,
3025 3034 ):
3026 3035 """Run a complete IPython cell.
3027 3036
3028 3037 Parameters
3029 3038 ----------
3030 3039 raw_cell : str
3031 3040 The code (including IPython code such as %magic functions) to run.
3032 3041 store_history : bool
3033 3042 If True, the raw and translated cell will be stored in IPython's
3034 3043 history. For user code calling back into IPython's machinery, this
3035 3044 should be set to False.
3036 3045 silent : bool
3037 3046 If True, avoid side-effects, such as implicit displayhooks
3038 3047 and logging. silent=True forces store_history=False.
3039 3048 shell_futures : bool
3040 3049 If True, the code will share future statements with the interactive
3041 3050 shell. It will both be affected by previous __future__ imports, and
3042 3051 any __future__ imports in the code will affect the shell. If False,
3043 3052 __future__ imports are not shared in either direction.
3044 3053
3045 3054 Returns
3046 3055 -------
3047 3056 result : :class:`ExecutionResult`
3048 3057 """
3049 3058 result = None
3050 3059 try:
3051 3060 result = self._run_cell(
3052 3061 raw_cell, store_history, silent, shell_futures, cell_id
3053 3062 )
3054 3063 finally:
3055 3064 self.events.trigger('post_execute')
3056 3065 if not silent:
3057 3066 self.events.trigger('post_run_cell', result)
3058 3067 return result
3059 3068
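# A minimal usage sketch (illustrative only, not from the original source): running a
# full cell (magics included) and inspecting the ExecutionResult it returns.
from IPython import get_ipython

ip = get_ipython()
res = ip.run_cell("a = 1 + 1\na", store_history=True)
print(res.success, res.result)   # True 2 when execution succeeded
res.raise_error()                # re-raises any stored exception, no-op on success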
3060 3069 def _run_cell(
3061 3070 self,
3062 3071 raw_cell: str,
3063 3072 store_history: bool,
3064 3073 silent: bool,
3065 3074 shell_futures: bool,
3066 3075 cell_id: str,
3067 3076 ) -> ExecutionResult:
3068 3077 """Internal method to run a complete IPython cell."""
3069 3078
3070 3079 # we need to avoid calling self.transform_cell multiple times on the same input
3071 3080 # so we need to store some results:
3072 3081 preprocessing_exc_tuple = None
3073 3082 try:
3074 3083 transformed_cell = self.transform_cell(raw_cell)
3075 3084 except Exception:
3076 3085 transformed_cell = raw_cell
3077 3086 preprocessing_exc_tuple = sys.exc_info()
3078 3087
3079 3088 assert transformed_cell is not None
3080 3089 coro = self.run_cell_async(
3081 3090 raw_cell,
3082 3091 store_history=store_history,
3083 3092 silent=silent,
3084 3093 shell_futures=shell_futures,
3085 3094 transformed_cell=transformed_cell,
3086 3095 preprocessing_exc_tuple=preprocessing_exc_tuple,
3087 3096 cell_id=cell_id,
3088 3097 )
3089 3098
3090 3099 # run_cell_async is async, but may not actually need an eventloop.
3091 3100 # when this is the case, we want to run it using the pseudo_sync_runner
3092 3101 # so that code can invoke eventloops (for example via the %run and
3093 3102 # `%paste` magics).
3094 3103 if self.trio_runner:
3095 3104 runner = self.trio_runner
3096 3105 elif self.should_run_async(
3097 3106 raw_cell,
3098 3107 transformed_cell=transformed_cell,
3099 3108 preprocessing_exc_tuple=preprocessing_exc_tuple,
3100 3109 ):
3101 3110 runner = self.loop_runner
3102 3111 else:
3103 3112 runner = _pseudo_sync_runner
3104 3113
3105 3114 try:
3106 3115 result = runner(coro)
3107 3116 except BaseException as e:
3108 3117 info = ExecutionInfo(
3109 3118 raw_cell, store_history, silent, shell_futures, cell_id
3110 3119 )
3111 3120 result = ExecutionResult(info)
3112 3121 result.error_in_exec = e
3113 3122 self.showtraceback(running_compiled_code=True)
3114 3123 finally:
3115 3124 return result
3116 3125
3117 3126 def should_run_async(
3118 3127 self, raw_cell: str, *, transformed_cell=None, preprocessing_exc_tuple=None
3119 3128 ) -> bool:
3120 3129 """Return whether a cell should be run asynchronously via a coroutine runner
3121 3130
3122 3131 Parameters
3123 3132 ----------
3124 3133 raw_cell : str
3125 3134 The code to be executed
3126 3135
3127 3136 Returns
3128 3137 -------
3129 3138 result: bool
3130 3139 Whether the code needs to be run with a coroutine runner or not
3131 3140 .. versionadded:: 7.0
3132 3141 """
3133 3142 if not self.autoawait:
3134 3143 return False
3135 3144 if preprocessing_exc_tuple is not None:
3136 3145 return False
3137 3146 assert preprocessing_exc_tuple is None
3138 3147 if transformed_cell is None:
3139 3148 warnings.warn(
3140 3149 "`should_run_async` will not call `transform_cell`"
3141 3150 " automatically in the future. Please pass the result to"
3142 3151 " `transformed_cell` argument and any exception that happen"
3143 3152 " during the"
3144 3153 "transform in `preprocessing_exc_tuple` in"
3145 3154 " IPython 7.17 and above.",
3146 3155 DeprecationWarning,
3147 3156 stacklevel=2,
3148 3157 )
3149 3158 try:
3150 3159 cell = self.transform_cell(raw_cell)
3151 3160 except Exception:
3152 3161 # any exception during transform will be raised
3153 3162 # prior to execution
3154 3163 return False
3155 3164 else:
3156 3165 cell = transformed_cell
3157 3166 return _should_be_async(cell)
3158 3167
3159 3168 async def run_cell_async(
3160 3169 self,
3161 3170 raw_cell: str,
3162 3171 store_history=False,
3163 3172 silent=False,
3164 3173 shell_futures=True,
3165 3174 *,
3166 3175 transformed_cell: Optional[str] = None,
3167 3176 preprocessing_exc_tuple: Optional[AnyType] = None,
3168 3177 cell_id=None,
3169 3178 ) -> ExecutionResult:
3170 3179 """Run a complete IPython cell asynchronously.
3171 3180
3172 3181 Parameters
3173 3182 ----------
3174 3183 raw_cell : str
3175 3184 The code (including IPython code such as %magic functions) to run.
3176 3185 store_history : bool
3177 3186 If True, the raw and translated cell will be stored in IPython's
3178 3187 history. For user code calling back into IPython's machinery, this
3179 3188 should be set to False.
3180 3189 silent : bool
3181 3190 If True, avoid side-effects, such as implicit displayhooks
3182 3191 and logging. silent=True forces store_history=False.
3183 3192 shell_futures : bool
3184 3193 If True, the code will share future statements with the interactive
3185 3194 shell. It will both be affected by previous __future__ imports, and
3186 3195 any __future__ imports in the code will affect the shell. If False,
3187 3196 __future__ imports are not shared in either direction.
3188 3197 transformed_cell: str
3189 3198 The cell after it has been passed through the input transformers.
3190 3199 preprocessing_exc_tuple:
3191 3200 Exception info tuple (as returned by sys.exc_info()) if the transformation failed.
3192 3201
3193 3202 Returns
3194 3203 -------
3195 3204 result : :class:`ExecutionResult`
3196 3205
3197 3206 .. versionadded:: 7.0
3198 3207 """
3199 3208 info = ExecutionInfo(raw_cell, store_history, silent, shell_futures, cell_id)
3200 3209 result = ExecutionResult(info)
3201 3210
3202 3211 if (not raw_cell) or raw_cell.isspace():
3203 3212 self.last_execution_succeeded = True
3204 3213 self.last_execution_result = result
3205 3214 return result
3206 3215
3207 3216 if silent:
3208 3217 store_history = False
3209 3218
3210 3219 if store_history:
3211 3220 result.execution_count = self.execution_count
3212 3221
3213 3222 def error_before_exec(value):
3214 3223 if store_history:
3215 3224 self.execution_count += 1
3216 3225 result.error_before_exec = value
3217 3226 self.last_execution_succeeded = False
3218 3227 self.last_execution_result = result
3219 3228 return result
3220 3229
3221 3230 self.events.trigger('pre_execute')
3222 3231 if not silent:
3223 3232 self.events.trigger('pre_run_cell', info)
3224 3233
3225 3234 if transformed_cell is None:
3226 3235 warnings.warn(
3227 3236 "`run_cell_async` will not call `transform_cell`"
3228 3237 " automatically in the future. Please pass the result to"
3229 3238 " `transformed_cell` argument and any exception that happen"
3230 3239 " during the"
3231 3240 "transform in `preprocessing_exc_tuple` in"
3232 3241 " IPython 7.17 and above.",
3233 3242 DeprecationWarning,
3234 3243 stacklevel=2,
3235 3244 )
3236 3245 # If any of our input transformation (input_transformer_manager or
3237 3246 # prefilter_manager) raises an exception, we store it in this variable
3238 3247 # so that we can display the error after logging the input and storing
3239 3248 # it in the history.
3240 3249 try:
3241 3250 cell = self.transform_cell(raw_cell)
3242 3251 except Exception:
3243 3252 preprocessing_exc_tuple = sys.exc_info()
3244 3253 cell = raw_cell # cell has to exist so it can be stored/logged
3245 3254 else:
3246 3255 preprocessing_exc_tuple = None
3247 3256 else:
3248 3257 if preprocessing_exc_tuple is None:
3249 3258 cell = transformed_cell
3250 3259 else:
3251 3260 cell = raw_cell
3252 3261
3253 3262 # Do NOT store paste/cpaste magic history
3254 3263 if "get_ipython().run_line_magic(" in cell and "paste" in cell:
3255 3264 store_history = False
3256 3265
3257 3266 # Store raw and processed history
3258 3267 if store_history:
3268 assert self.history_manager is not None
3259 3269 self.history_manager.store_inputs(self.execution_count, cell, raw_cell)
3260 3270 if not silent:
3261 3271 self.logger.log(cell, raw_cell)
3262 3272
3263 3273 # Display the exception if input processing failed.
3264 3274 if preprocessing_exc_tuple is not None:
3265 3275 self.showtraceback(preprocessing_exc_tuple)
3266 3276 if store_history:
3267 3277 self.execution_count += 1
3268 3278 return error_before_exec(preprocessing_exc_tuple[1])
3269 3279
3270 3280 # Our own compiler remembers the __future__ environment. If we want to
3271 3281 # run code with a separate __future__ environment, use the default
3272 3282 # compiler
3273 3283 compiler = self.compile if shell_futures else self.compiler_class()
3274 3284
3275 _run_async = False
3276
3277 3285 with self.builtin_trap:
3278 3286 cell_name = compiler.cache(cell, self.execution_count, raw_code=raw_cell)
3279 3287
3280 3288 with self.display_trap:
3281 3289 # Compile to bytecode
3282 3290 try:
3283 3291 code_ast = compiler.ast_parse(cell, filename=cell_name)
3284 3292 except self.custom_exceptions as e:
3285 3293 etype, value, tb = sys.exc_info()
3286 3294 self.CustomTB(etype, value, tb)
3287 3295 return error_before_exec(e)
3288 3296 except IndentationError as e:
3289 3297 self.showindentationerror()
3290 3298 return error_before_exec(e)
3291 3299 except (OverflowError, SyntaxError, ValueError, TypeError,
3292 3300 MemoryError) as e:
3293 3301 self.showsyntaxerror()
3294 3302 return error_before_exec(e)
3295 3303
3296 3304 # Apply AST transformations
3297 3305 try:
3298 3306 code_ast = self.transform_ast(code_ast)
3299 3307 except InputRejected as e:
3300 3308 self.showtraceback()
3301 3309 return error_before_exec(e)
3302 3310
3303 3311 # Give the displayhook a reference to our ExecutionResult so it
3304 3312 # can fill in the output value.
3305 3313 self.displayhook.exec_result = result
3306 3314
3307 3315 # Execute the user code
3308 3316 interactivity = "none" if silent else self.ast_node_interactivity
3309 3317
3310 3318
3311 3319 has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
3312 3320 interactivity=interactivity, compiler=compiler, result=result)
3313 3321
3314 3322 self.last_execution_succeeded = not has_raised
3315 3323 self.last_execution_result = result
3316 3324
3317 3325 # Reset this so later displayed values do not modify the
3318 3326 # ExecutionResult
3319 3327 self.displayhook.exec_result = None
3320 3328
3321 3329 if store_history:
3322 3330 # Write output to the database. Does nothing unless
3323 3331 # history output logging is enabled.
3324 3332 self.history_manager.store_output(self.execution_count)
3325 3333 # Each cell is a *single* input, regardless of how many lines it has
3326 3334 self.execution_count += 1
3327 3335
3328 3336 return result
3329 3337
3330 3338 def transform_cell(self, raw_cell):
3331 3339 """Transform an input cell before parsing it.
3332 3340
3333 3341 Static transformations, implemented in IPython.core.inputtransformer2,
3334 3342 deal with things like ``%magic`` and ``!system`` commands.
3335 3343 These run on all input.
3336 3344 Dynamic transformations, for things like unescaped magics and the exit
3337 3345 autocall, depend on the state of the interpreter.
3338 3346 These only apply to single line inputs.
3339 3347
3340 3348 These string-based transformations are followed by AST transformations;
3341 3349 see :meth:`transform_ast`.
3342 3350 """
3343 3351 # Static input transformations
3344 3352 cell = self.input_transformer_manager.transform_cell(raw_cell)
3345 3353
3346 3354 if len(cell.splitlines()) == 1:
3347 3355 # Dynamic transformations - only applied for single line commands
3348 3356 with self.builtin_trap:
3349 3357 # use prefilter_lines to handle trailing newlines
3350 3358 # restore trailing newline for ast.parse
3351 3359 cell = self.prefilter_manager.prefilter_lines(cell) + '\n'
3352 3360
3353 3361 lines = cell.splitlines(keepends=True)
3354 3362 for transform in self.input_transformers_post:
3355 3363 lines = transform(lines)
3356 3364 cell = ''.join(lines)
3357 3365
3358 3366 return cell
3359 3367
3360 3368 def transform_ast(self, node):
3361 3369 """Apply the AST transformations from self.ast_transformers
3362 3370
3363 3371 Parameters
3364 3372 ----------
3365 3373 node : ast.AST
3366 3374 The root node to be transformed. Typically called with the ast.Module
3367 3375 produced by parsing user input.
3368 3376
3369 3377 Returns
3370 3378 -------
3371 3379 An ast.AST corresponding to the node it was called with. Note that it
3372 3380 may also modify the passed object, so don't rely on references to the
3373 3381 original AST.
3374 3382 """
3375 3383 for transformer in self.ast_transformers:
3376 3384 try:
3377 3385 node = transformer.visit(node)
3378 3386 except InputRejected:
3379 3387 # User-supplied AST transformers can reject an input by raising
3380 3388 # an InputRejected. Short-circuit in this case so that we
3381 3389 # don't unregister the transform.
3382 3390 raise
3383 3391 except Exception as e:
3384 3392 warn(
3385 3393 "AST transformer %r threw an error. It will be unregistered. %s"
3386 3394 % (transformer, e)
3387 3395 )
3388 3396 self.ast_transformers.remove(transformer)
3389 3397
3390 3398 if self.ast_transformers:
3391 3399 ast.fix_missing_locations(node)
3392 3400 return node
3393 3401
3394 3402 async def run_ast_nodes(
3395 3403 self,
3396 3404 nodelist: ListType[stmt],
3397 3405 cell_name: str,
3398 3406 interactivity="last_expr",
3399 3407 compiler=compile,
3400 3408 result=None,
3401 3409 ):
3402 3410 """Run a sequence of AST nodes. The execution mode depends on the
3403 3411 interactivity parameter.
3404 3412
3405 3413 Parameters
3406 3414 ----------
3407 3415 nodelist : list
3408 3416 A sequence of AST nodes to run.
3409 3417 cell_name : str
3410 3418 Will be passed to the compiler as the filename of the cell. Typically
3411 3419 the value returned by ip.compile.cache(cell).
3412 3420 interactivity : str
3413 3421 'all', 'last', 'last_expr' , 'last_expr_or_assign' or 'none',
3414 3422 specifying which nodes should be run interactively (displaying output
3415 3423 from expressions). 'last_expr' will run the last node interactively
3416 3424 only if it is an expression (i.e. expressions in loops or other blocks
3417 3425 are not displayed). 'last_expr_or_assign' will run the last expression
3418 3426 or the last assignment. Other values for this parameter will raise a
3419 3427 ValueError.
3420 3428
3421 3429 compiler : callable
3422 3430 A function with the same interface as the built-in compile(), to turn
3423 3431 the AST nodes into code objects. Default is the built-in compile().
3424 3432 result : ExecutionResult, optional
3425 3433 An object to store exceptions that occur during execution.
3426 3434
3427 3435 Returns
3428 3436 -------
3429 3437 True if an exception occurred while running code, False if it finished
3430 3438 running.
3431 3439 """
3432 3440 if not nodelist:
3433 3441 return
3434 3442
3435 3443
3436 3444 if interactivity == 'last_expr_or_assign':
3437 3445 if isinstance(nodelist[-1], _assign_nodes):
3438 3446 asg = nodelist[-1]
3439 3447 if isinstance(asg, ast.Assign) and len(asg.targets) == 1:
3440 3448 target = asg.targets[0]
3441 3449 elif isinstance(asg, _single_targets_nodes):
3442 3450 target = asg.target
3443 3451 else:
3444 3452 target = None
3445 3453 if isinstance(target, ast.Name):
3446 3454 nnode = ast.Expr(ast.Name(target.id, ast.Load()))
3447 3455 ast.fix_missing_locations(nnode)
3448 3456 nodelist.append(nnode)
3449 3457 interactivity = 'last_expr'
3450 3458
3451 3459 _async = False
3452 3460 if interactivity == 'last_expr':
3453 3461 if isinstance(nodelist[-1], ast.Expr):
3454 3462 interactivity = "last"
3455 3463 else:
3456 3464 interactivity = "none"
3457 3465
3458 3466 if interactivity == 'none':
3459 3467 to_run_exec, to_run_interactive = nodelist, []
3460 3468 elif interactivity == 'last':
3461 3469 to_run_exec, to_run_interactive = nodelist[:-1], nodelist[-1:]
3462 3470 elif interactivity == 'all':
3463 3471 to_run_exec, to_run_interactive = [], nodelist
3464 3472 else:
3465 3473 raise ValueError("Interactivity was %r" % interactivity)
3466 3474
3467 3475 try:
3468 3476
3469 3477 def compare(code):
3470 3478 is_async = inspect.CO_COROUTINE & code.co_flags == inspect.CO_COROUTINE
3471 3479 return is_async
3472 3480
3473 3481 # refactor that to just change the mod constructor.
3474 3482 to_run = []
3475 3483 for node in to_run_exec:
3476 3484 to_run.append((node, "exec"))
3477 3485
3478 3486 for node in to_run_interactive:
3479 3487 to_run.append((node, "single"))
3480 3488
3481 3489 for node, mode in to_run:
3482 3490 if mode == "exec":
3483 3491 mod = Module([node], [])
3484 3492 elif mode == "single":
3485 3493 mod = ast.Interactive([node]) # type: ignore
3486 3494 with compiler.extra_flags(
3487 3495 getattr(ast, "PyCF_ALLOW_TOP_LEVEL_AWAIT", 0x0)
3488 3496 if self.autoawait
3489 3497 else 0x0
3490 3498 ):
3491 3499 code = compiler(mod, cell_name, mode)
3492 3500 asy = compare(code)
3493 3501 if await self.run_code(code, result, async_=asy):
3494 3502 return True
3495 3503
3496 3504 # Flush softspace
3497 3505 if softspace(sys.stdout, 0):
3498 3506 print()
3499 3507
3500 3508 except:
3501 3509 # It's possible to have exceptions raised here, typically by
3502 3510 # compilation of odd code (such as a naked 'return' outside a
3503 3511 # function) that did parse but isn't valid. Typically the exception
3504 3512 # is a SyntaxError, but it's safest just to catch anything and show
3505 3513 # the user a traceback.
3506 3514
3507 3515 # We do only one try/except outside the loop to minimize the impact
3508 3516 # on runtime, and also because if any node in the node list is
3509 3517 # broken, we should stop execution completely.
3510 3518 if result:
3511 3519 result.error_before_exec = sys.exc_info()[1]
3512 3520 self.showtraceback()
3513 3521 return True
3514 3522
3515 3523 return False
3516 3524
3517 3525 async def run_code(self, code_obj, result=None, *, async_=False):
3518 3526 """Execute a code object.
3519 3527
3520 3528 When an exception occurs, self.showtraceback() is called to display a
3521 3529 traceback.
3522 3530
3523 3531 Parameters
3524 3532 ----------
3525 3533 code_obj : code object
3526 3534 A compiled code object, to be executed
3527 3535 result : ExecutionResult, optional
3528 3536 An object to store exceptions that occur during execution.
3529 3537 async_ : bool (Experimental)
3530 3538 Attempt to run top-level asynchronous code in a default loop.
3531 3539
3532 3540 Returns
3533 3541 -------
3534 3542 False : successful execution.
3535 3543 True : an error occurred.
3536 3544 """
3537 3545 # special value to say that anything above is IPython and should be
3538 3546 # hidden.
3539 3547 __tracebackhide__ = "__ipython_bottom__"
3540 3548 # Set our own excepthook in case the user code tries to call it
3541 3549 # directly, so that the IPython crash handler doesn't get triggered
3542 3550 old_excepthook, sys.excepthook = sys.excepthook, self.excepthook
3543 3551
3544 3552 # we save the original sys.excepthook in the instance, in case config
3545 3553 # code (such as magics) needs access to it.
3546 3554 self.sys_excepthook = old_excepthook
3547 3555 outflag = True # happens in more places, so it's easier as default
3548 3556 try:
3549 3557 try:
3550 3558 if async_:
3551 3559 await eval(code_obj, self.user_global_ns, self.user_ns)
3552 3560 else:
3553 3561 exec(code_obj, self.user_global_ns, self.user_ns)
3554 3562 finally:
3555 3563 # Reset our crash handler in place
3556 3564 sys.excepthook = old_excepthook
3557 3565 except SystemExit as e:
3558 3566 if result is not None:
3559 3567 result.error_in_exec = e
3560 3568 self.showtraceback(exception_only=True)
3561 3569 warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1)
3562 3570 except bdb.BdbQuit:
3563 3571 etype, value, tb = sys.exc_info()
3564 3572 if result is not None:
3565 3573 result.error_in_exec = value
3566 3574 # the BdbQuit stops here
3567 3575 except self.custom_exceptions:
3568 3576 etype, value, tb = sys.exc_info()
3569 3577 if result is not None:
3570 3578 result.error_in_exec = value
3571 3579 self.CustomTB(etype, value, tb)
3572 3580 except:
3573 3581 if result is not None:
3574 3582 result.error_in_exec = sys.exc_info()[1]
3575 3583 self.showtraceback(running_compiled_code=True)
3576 3584 else:
3577 3585 outflag = False
3578 3586 return outflag
3579 3587
3580 3588 # For backwards compatibility
3581 3589 runcode = run_code
3582 3590
3583 3591 def check_complete(self, code: str) -> Tuple[str, str]:
3584 3592 """Return whether a block of code is ready to execute, or should be continued
3585 3593
3586 3594 Parameters
3587 3595 ----------
3588 3596 code : string
3589 3597 Python input code, which can be multiline.
3590 3598
3591 3599 Returns
3592 3600 -------
3593 3601 status : str
3594 3602 One of 'complete', 'incomplete', or 'invalid' if source is not a
3595 3603 prefix of valid code.
3596 3604 indent : str
3597 3605 When status is 'incomplete', this is some whitespace to insert on
3598 3606 the next line of the prompt.
3599 3607 """
3600 3608 status, nspaces = self.input_transformer_manager.check_complete(code)
3601 3609 return status, ' ' * (nspaces or 0)
3602 3610
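# A minimal usage sketch (illustrative only, not from the original source): asking the
# shell whether buffered input is ready to run, as a frontend prompt would.
from IPython import get_ipython

ip = get_ipython()
print(ip.check_complete("1 + 1"))                # ('complete', '')
print(ip.check_complete("for i in range(3):"))   # ('incomplete', <suggested indent>)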
3603 3611 #-------------------------------------------------------------------------
3604 3612 # Things related to GUI support and pylab
3605 3613 #-------------------------------------------------------------------------
3606 3614
3607 3615 active_eventloop: Optional[str] = None
3608 3616
3609 3617 def enable_gui(self, gui=None):
3610 3618 raise NotImplementedError('Implement enable_gui in a subclass')
3611 3619
3612 3620 def enable_matplotlib(self, gui=None):
3613 3621 """Enable interactive matplotlib and inline figure support.
3614 3622
3615 3623 This takes the following steps:
3616 3624
3617 3625 1. select the appropriate eventloop and matplotlib backend
3618 3626 2. set up matplotlib for interactive use with that backend
3619 3627 3. configure formatters for inline figure display
3620 3628 4. enable the selected gui eventloop
3621 3629
3622 3630 Parameters
3623 3631 ----------
3624 3632 gui : optional, string
3625 3633 If given, dictates the choice of matplotlib GUI backend to use
3626 3634 (should be one of IPython's supported backends, 'qt', 'osx', 'tk',
3627 3635 'gtk', 'wx' or 'inline'), otherwise we use the default chosen by
3628 3636 matplotlib (as dictated by the matplotlib build-time options plus the
3629 3637 user's matplotlibrc configuration file). Note that not all backends
3630 3638 make sense in all contexts, for example a terminal ipython can't
3631 3639 display figures inline.
3632 3640 """
3633 3641 from matplotlib_inline.backend_inline import configure_inline_support
3634 3642
3635 3643 from IPython.core import pylabtools as pt
3636 3644 gui, backend = pt.find_gui_and_backend(gui, self.pylab_gui_select)
3637 3645
3638 3646 if gui != 'inline':
3639 3647 # If we have our first gui selection, store it
3640 3648 if self.pylab_gui_select is None:
3641 3649 self.pylab_gui_select = gui
3642 3650 # Otherwise if they are different
3643 3651 elif gui != self.pylab_gui_select:
3644 3652 print('Warning: Cannot change to a different GUI toolkit: %s.'
3645 3653 ' Using %s instead.' % (gui, self.pylab_gui_select))
3646 3654 gui, backend = pt.find_gui_and_backend(self.pylab_gui_select)
3647 3655
3648 3656 pt.activate_matplotlib(backend)
3649 3657 configure_inline_support(self, backend)
3650 3658
3651 3659 # Now we must activate the gui pylab wants to use, and fix %run to take
3652 3660 # plot updates into account
3653 3661 self.enable_gui(gui)
3654 3662 self.magics_manager.registry['ExecutionMagics'].default_runner = \
3655 3663 pt.mpl_runner(self.safe_execfile)
3656 3664
3657 3665 return gui, backend
3658 3666
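A minimal sketch of calling this from a running shell (assumes matplotlib and a Qt binding are installed; the exact backend string varies with the matplotlib version):

.. code-block:: python

    ip = get_ipython()
    gui, backend = ip.enable_matplotlib("qt")   # e.g. ('qt', 'qtagg')
    # a later call with a different gui keeps the first selection and prints a warning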
3659 3667 def enable_pylab(self, gui=None, import_all=True, welcome_message=False):
3660 3668 """Activate pylab support at runtime.
3661 3669
3662 3670 This turns on support for matplotlib, preloads into the interactive
3663 3671 namespace all of numpy and pylab, and configures IPython to correctly
3664 3672 interact with the GUI event loop. The GUI backend to be used can be
3665 3673 optionally selected with the optional ``gui`` argument.
3666 3674
3667 3675 This method only adds namespace preloading on top of InteractiveShell.enable_matplotlib.
3668 3676
3669 3677 Parameters
3670 3678 ----------
3671 3679 gui : optional, string
3672 3680 If given, dictates the choice of matplotlib GUI backend to use
3673 3681 (should be one of IPython's supported backends, 'qt', 'osx', 'tk',
3674 3682 'gtk', 'wx' or 'inline'), otherwise we use the default chosen by
3675 3683 matplotlib (as dictated by the matplotlib build-time options plus the
3676 3684 user's matplotlibrc configuration file). Note that not all backends
3677 3685 make sense in all contexts, for example a terminal ipython can't
3678 3686 display figures inline.
3679 3687 import_all : optional, bool, default: True
3680 3688 Whether to do `from numpy import *` and `from pylab import *`
3681 3689 in addition to module imports.
3682 3690 welcome_message : deprecated
3683 3691 This argument is ignored, no welcome message will be displayed.
3684 3692 """
3685 3693 from IPython.core.pylabtools import import_pylab
3686 3694
3687 3695 gui, backend = self.enable_matplotlib(gui)
3688 3696
3689 3697 # We want to prevent the loading of pylab from polluting the user's
3690 3698 # namespace as shown by the %who* magics, so we execute the activation
3691 3699 # code in an empty namespace, and we update *both* user_ns and
3692 3700 # user_ns_hidden with this information.
3693 3701 ns = {}
3694 3702 import_pylab(ns, import_all)
3695 3703 # warn about clobbered names
3696 3704 ignored = {"__builtins__"}
3697 3705 both = set(ns).intersection(self.user_ns).difference(ignored)
3698 3706 clobbered = [ name for name in both if self.user_ns[name] is not ns[name] ]
3699 3707 self.user_ns.update(ns)
3700 3708 self.user_ns_hidden.update(ns)
3701 3709 return gui, backend, clobbered
3702 3710
3703 3711 #-------------------------------------------------------------------------
3704 3712 # Utilities
3705 3713 #-------------------------------------------------------------------------
3706 3714
3707 3715 def var_expand(self, cmd, depth=0, formatter=DollarFormatter()):
3708 3716 """Expand python variables in a string.
3709 3717
3710 3718 The depth argument indicates how many frames above the caller should
3711 3719 be walked to look for the local namespace in which to expand variables.
3712 3720
3713 3721 The global namespace for expansion is always the user's interactive
3714 3722 namespace.
3715 3723 """
3716 3724 ns = self.user_ns.copy()
3717 3725 try:
3718 3726 frame = sys._getframe(depth+1)
3719 3727 except ValueError:
3720 3728 # This is thrown if there aren't that many frames on the stack,
3721 3729 # e.g. if a script called run_line_magic() directly.
3722 3730 pass
3723 3731 else:
3724 3732 ns.update(frame.f_locals)
3725 3733
3726 3734 try:
3727 3735 # We have to use .vformat() here, because 'self' is a valid and common
3728 3736 # name, and expanding **ns for .format() would make it collide with
3729 3737 # the 'self' argument of the method.
3730 3738 cmd = formatter.vformat(cmd, args=[], kwargs=ns)
3731 3739 except Exception:
3732 3740 # if formatter couldn't format, just let it go untransformed
3733 3741 pass
3734 3742 return cmd
3735 3743
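For illustration, a sketch of the expansion this performs; the ``$name`` and ``{expression}`` forms come from :class:`DollarFormatter`, and the variable used here is hypothetical:

.. code-block:: python

    ip = get_ipython()
    ip.user_ns["n"] = 3
    ip.var_expand("echo $n")        # -> 'echo 3'
    ip.var_expand("echo {n + 1}")   # -> 'echo 4'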
3736 3744 def mktempfile(self, data=None, prefix='ipython_edit_'):
3737 3745 """Make a new tempfile and return its filename.
3738 3746
3739 3747 This makes a call to tempfile.mkstemp (in a directory created with tempfile.mkdtemp),
3740 3748 but it registers the created filename internally so IPython cleans it up
3741 3749 at exit time.
3742 3750
3743 3751 Optional inputs:
3744 3752
3745 3753 - data(None): if data is given, it gets written out to the temp file
3746 3754 immediately, and the file is closed again."""
3747 3755
3748 3756 dir_path = Path(tempfile.mkdtemp(prefix=prefix))
3749 3757 self.tempdirs.append(dir_path)
3750 3758
3751 3759 handle, filename = tempfile.mkstemp(".py", prefix, dir=str(dir_path))
3752 3760 os.close(handle) # On Windows, there can only be one open handle on a file
3753 3761
3754 3762 file_path = Path(filename)
3755 3763 self.tempfiles.append(file_path)
3756 3764
3757 3765 if data:
3758 3766 file_path.write_text(data, encoding="utf-8")
3759 3767 return filename
3760 3768
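A small usage sketch (the file content is an arbitrary example); the returned path lives in a temporary directory that IPython removes at exit:

.. code-block:: python

    ip = get_ipython()
    fname = ip.mktempfile(data="print('scratch file')\n")
    # fname is a .py file registered in ip.tempfiles / ip.tempdirs,
    # so atexit_operations() will clean it up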
3761 3769 def ask_yes_no(self, prompt, default=None, interrupt=None):
3762 3770 if self.quiet:
3763 3771 return True
3764 3772 return ask_yes_no(prompt,default,interrupt)
3765 3773
3766 3774 def show_usage(self):
3767 3775 """Show a usage message"""
3768 3776 page.page(IPython.core.usage.interactive_usage)
3769 3777
3770 3778 def extract_input_lines(self, range_str, raw=False):
3771 3779 """Return as a string a set of input history slices.
3772 3780
3773 3781 Parameters
3774 3782 ----------
3775 3783 range_str : str
3776 3784 The set of slices is given as a string, like "~5/6-~4/2 4:8 9",
3777 3785 since this function is for use by magic functions which get their
3778 3786 arguments as strings. The number before the / is the session
3779 3787 number: ~n goes n back from the current session.
3780 3788
3781 3789 If empty string is given, returns history of current session
3782 3790 without the last input.
3783 3791
3784 3792 raw : bool, optional
3785 3793 By default, the processed input is used. If this is true, the raw
3786 3794 input history is used instead.
3787 3795
3788 3796 Notes
3789 3797 -----
3790 3798 Slices can be described with two notations:
3791 3799
3792 3800 * ``N:M`` -> standard python form, means including items N...(M-1).
3793 3801 * ``N-M`` -> include items N..M (closed endpoint).
3794 3802 """
3795 3803 lines = self.history_manager.get_range_by_str(range_str, raw=raw)
3796 3804 text = "\n".join(x for _, _, x in lines)
3797 3805
3798 3806 # Skip the last line, as it's probably the magic that called this
3799 3807 if not range_str:
3800 3808 if "\n" not in text:
3801 3809 text = ""
3802 3810 else:
3803 3811 text = text[: text.rfind("\n")]
3804 3812
3805 3813 return text
3806 3814
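A sketch of the range syntax in practice (the session contents are hypothetical):

.. code-block:: python

    ip = get_ipython()
    ip.extract_input_lines("2-4")    # inputs 2, 3 and 4 of this session (closed range)
    ip.extract_input_lines("5:8")    # inputs 5, 6 and 7 (python-style, end excluded)
    ip.extract_input_lines("~1/3")   # input 3 of the previous session
    ip.extract_input_lines("")       # everything typed so far, minus the last input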
3807 3815 def find_user_code(self, target, raw=True, py_only=False, skip_encoding_cookie=True, search_ns=False):
3808 3816 """Get a code string from history, file, url, or a string or macro.
3809 3817
3810 3818 This is mainly used by magic functions.
3811 3819
3812 3820 Parameters
3813 3821 ----------
3814 3822 target : str
3815 3823 A string specifying code to retrieve. This will be tried respectively
3816 3824 as: ranges of input history (see %history for syntax), url,
3817 3825 corresponding .py file, filename, or an expression evaluating to a
3818 3826 string or Macro in the user namespace.
3819 3827
3820 3828 If empty string is given, returns complete history of current
3821 3829 session, without the last line.
3822 3830
3823 3831 raw : bool
3824 3832 If true (default), retrieve raw history. Has no effect on the other
3825 3833 retrieval mechanisms.
3826 3834
3827 3835 py_only : bool (default False)
3828 3836 Only try to fetch python code, do not try alternative methods to decode file
3829 3837 if unicode fails.
3830 3838
3831 3839 Returns
3832 3840 -------
3833 3841 A string of code.
3834 3842 ValueError is raised if nothing is found, and TypeError if it evaluates
3835 3843 to an object of another type. In each case, .args[0] is a printable
3836 3844 message.
3837 3845 """
3838 3846 code = self.extract_input_lines(target, raw=raw) # Grab history
3839 3847 if code:
3840 3848 return code
3841 3849 try:
3842 3850 if target.startswith(('http://', 'https://')):
3843 3851 return openpy.read_py_url(target, skip_encoding_cookie=skip_encoding_cookie)
3844 3852 except UnicodeDecodeError as e:
3845 3853 if not py_only :
3846 3854 # Deferred import
3847 3855 from urllib.request import urlopen
3848 3856 response = urlopen(target)
3849 3857 return response.read().decode('latin1')
3850 3858 raise ValueError("'%s' seems to be unreadable." % target) from e
3851 3859
3852 3860 potential_target = [target]
3853 3861 try :
3854 3862 potential_target.insert(0,get_py_filename(target))
3855 3863 except IOError:
3856 3864 pass
3857 3865
3858 3866 for tgt in potential_target :
3859 3867 if os.path.isfile(tgt): # Read file
3860 3868 try :
3861 3869 return openpy.read_py_file(tgt, skip_encoding_cookie=skip_encoding_cookie)
3862 3870 except UnicodeDecodeError as e:
3863 3871 if not py_only :
3864 3872 with io_open(tgt,'r', encoding='latin1') as f :
3865 3873 return f.read()
3866 3874 raise ValueError("'%s' seems to be unreadable." % target) from e
3867 3875 elif os.path.isdir(os.path.expanduser(tgt)):
3868 3876 raise ValueError("'%s' is a directory, not a regular file." % target)
3869 3877
3870 3878 if search_ns:
3871 3879 # Inspect namespace to load object source
3872 3880 object_info = self.object_inspect(target, detail_level=1)
3873 3881 if object_info['found'] and object_info['source']:
3874 3882 return object_info['source']
3875 3883
3876 3884 try: # User namespace
3877 3885 codeobj = eval(target, self.user_ns)
3878 3886 except Exception as e:
3879 3887 raise ValueError(("'%s' was not found in history, as a file, url, "
3880 3888 "nor in the user namespace.") % target) from e
3881 3889
3882 3890 if isinstance(codeobj, str):
3883 3891 return codeobj
3884 3892 elif isinstance(codeobj, Macro):
3885 3893 return codeobj.value
3886 3894
3887 3895 raise TypeError("%s is neither a string nor a macro." % target,
3888 3896 codeobj)
3889 3897
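A sketch of the lookup order described above (the file name and macro name are hypothetical):

.. code-block:: python

    ip = get_ipython()
    ip.find_user_code("1-3")            # history ranges are tried first
    ip.find_user_code("./scratch.py")   # then URLs and files on disk
    ip.find_user_code("my_macro")       # then strings or Macros in the user namespace
    # anything else raises ValueError (not found) or TypeError (wrong type)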
3890 3898 def _atexit_once(self):
3891 3899 """
3892 3900 At-exit operations that need to be called at most once.
3893 3901 A second call to this function on the same instance will do nothing.
3894 3902 """
3895 3903
3896 3904 if not getattr(self, "_atexit_once_called", False):
3897 3905 self._atexit_once_called = True
3898 3906 # Clear all user namespaces to release all references cleanly.
3899 3907 self.reset(new_session=False)
3900 3908 # Close the history session (this stores the end time and line count)
3901 3909 # this must be *before* the tempfile cleanup, in case of temporary
3902 3910 # history db
3903 3911 self.history_manager.end_session()
3904 3912 self.history_manager = None
3905 3913
3906 3914 #-------------------------------------------------------------------------
3907 3915 # Things related to IPython exiting
3908 3916 #-------------------------------------------------------------------------
3909 3917 def atexit_operations(self):
3910 3918 """This will be executed at the time of exit.
3911 3919
3912 3920 Cleanup operations and saving of persistent data that is done
3913 3921 unconditionally by IPython should be performed here.
3914 3922
3915 3923 For things that may depend on startup flags or platform specifics (such
3916 3924 as having readline or not), register a separate atexit function in the
3917 3925 code that has the appropriate information, rather than trying to
3918 3926 clutter this method.
3919 3927 """
3920 3928 self._atexit_once()
3921 3929
3922 3930 # Cleanup all tempfiles and folders left around
3923 3931 for tfile in self.tempfiles:
3924 3932 try:
3925 3933 tfile.unlink()
3926 3934 self.tempfiles.remove(tfile)
3927 3935 except FileNotFoundError:
3928 3936 pass
3929 3937 del self.tempfiles
3930 3938 for tdir in self.tempdirs:
3931 3939 try:
3932 3940 shutil.rmtree(tdir)
3933 3941 self.tempdirs.remove(tdir)
3934 3942 except FileNotFoundError:
3935 3943 pass
3936 3944 del self.tempdirs
3937 3945
3938 3946 # Restore user's cursor
3939 3947 if hasattr(self, "editing_mode") and self.editing_mode == "vi":
3940 3948 sys.stdout.write("\x1b[0 q")
3941 3949 sys.stdout.flush()
3942 3950
3943 3951 def cleanup(self):
3944 3952 self.restore_sys_module_state()
3945 3953
3946 3954
3947 3955 # Overridden in terminal subclass to change prompts
3948 3956 def switch_doctest_mode(self, mode):
3949 3957 pass
3950 3958
3951 3959
3952 3960 class InteractiveShellABC(metaclass=abc.ABCMeta):
3953 3961 """An abstract base class for InteractiveShell."""
3954 3962
3955 3963 InteractiveShellABC.register(InteractiveShell)
@@ -1,320 +1,330 b''
1 1 """
2 2 This module contains utility functions and classes to inject simple ast
3 3 transformations, based on code strings, into IPython. While this is already possible
4 4 with ast transformers, it is not easy to directly manipulate the ast.
5 5
6 6
7 7 IPython has pre-code and post-code hooks, but they are run from within the IPython
8 8 machinery, so they may be inappropriate, for example for performance measurement.
9 9
10 10 This module gives you tools to simplify this, and exposes 2 classes:
11 11
12 12 - `ReplaceCodeTransformer` which is a simple ast transformer based on a code
13 13 template,
14 14
15 15 and for advanced cases:
16 16
17 17 - `Mangler` which is a simple ast transformer that mangles names in the ast.
18 18
19 19
20 20 As an example, let's make a simple version of the ``timeit`` magic that runs a
21 21 code snippet 10 times and prints the average time taken.
22 22
23 23 Basically we want to run:
24 24
25 25 .. code-block:: python
26 26
27 27 from time import perf_counter
28 28 now = perf_counter()
29 29 for i in range(10):
30 30 __code__ # our code
31 31 print(f"Time taken: {(perf_counter() - now)/10}")
32 32 __ret__ # the result of the last statement
33 33
34 34 Where ``__code__`` is the code snippet we want to run, and ``__ret__`` is the
35 35 result, so that if we for example run `dataframe.head()` IPython still displays
36 36 the head of the dataframe instead of nothing.
37 37
38 38 Here is a complete example of a file `timit2.py` that defines such a magic:
39 39
40 40 .. code-block:: python
41 41
42 42 from IPython.core.magic import (
43 43 Magics,
44 44 magics_class,
45 45 line_cell_magic,
46 46 )
47 47 from IPython.core.magics.ast_mod import ReplaceCodeTransformer
48 48 from textwrap import dedent
49 49 import ast
50 50
51 51 template = dedent('''
52 52 from time import perf_counter
53 53 now = perf_counter()
54 54 for i in range(10):
55 55 __code__
56 56 print(f"Time taken: {(perf_counter() - now)/10}")
57 57 __ret__
58 58 '''
59 59 )
60 60
61 61
62 62 @magics_class
63 63 class AstM(Magics):
64 64 @line_cell_magic
65 65 def t2(self, line, cell):
66 66 transformer = ReplaceCodeTransformer.from_string(template)
67 67 transformer.debug = True
68 68 transformer.mangler.debug = True
69 69 new_code = transformer.visit(ast.parse(cell))
70 70 return exec(compile(new_code, "<ast>", "exec"))
71 71
72 72
73 73 def load_ipython_extension(ip):
74 74 ip.register_magics(AstM)
75 75
76 76
77 77
78 78 .. code-block:: python
79 79
80 80 In [1]: %load_ext timit2
81 81
82 82 In [2]: %%t2
83 83 ...: import time
84 84 ...: time.sleep(0.05)
85 85 ...:
86 86 ...:
87 87 Time taken: 0.05435649999999441
88 88
89 89
90 90 If you wish to run all the code entered in IPython through an ast transformer, you can
91 91 do so as well:
92 92
93 93 .. code-block:: python
94 94
95 95 In [1]: from IPython.core.magics.ast_mod import ReplaceCodeTransformer
96 96 ...:
97 97 ...: template = '''
98 98 ...: from time import perf_counter
99 99 ...: now = perf_counter()
100 100 ...: __code__
101 101 ...: print(f"Code ran in {perf_counter()-now}")
102 102 ...: __ret__'''
103 103 ...:
104 104 ...: get_ipython().ast_transformers.append(ReplaceCodeTransformer.from_string(template))
105 105
106 106 In [2]: 1+1
107 107 Code ran in 3.40410006174352e-05
108 108 Out[2]: 2
109 109
110 110
111 111
112 112 Hygiene and Mangling
113 113 --------------------
114 114
115 115 The ast transformer above is not hygienic: it may not work if the user code uses
116 116 the same variable names as the ones used in the template.
117 117
118 118 To help with this, by default the `ReplaceCodeTransformer` will mangle all names
119 119 starting with 3 underscores. This is a simple heuristic that should work in most
120 120 cases, but can be cumbersome in some. We provide a `Mangler` class that can
121 121 be overridden to change the mangling heuristic, or you can simply use the `mangle_all`
122 122 utility function. It will _try_ to mangle all names (except `__ret__` and
123 123 `__code__`), but this includes builtins (``print``, ``range``, ``type``) and
124 124 replaces those with invalid identifiers by prepending ``mangle-``:
125 125 ``mangle-print``, ``mangle-range``, ``mangle-type``, etc. This is not a problem
126 126 as the Python AST currently supports invalid identifiers, but it may not be the case
127 127 in the future.
128 128
129 129 You can set `ReplaceCodeTransformer.debug=True` and
130 130 `ReplaceCodeTransformer.mangler.debug=True` to see the code after mangling and
131 131 transforming:
132 132
133 133 .. code-block:: python
134 134
135 135
136 136 In [1]: from IPython.core.magics.ast_mod import ReplaceCodeTransformer, mangle_all
137 137 ...:
138 138 ...: template = '''
139 139 ...: from builtins import type, print
140 140 ...: from time import perf_counter
141 141 ...: now = perf_counter()
142 142 ...: __code__
143 143 ...: print(f"Code ran in {perf_counter()-now}")
144 144 ...: __ret__'''
145 145 ...:
146 146 ...: transformer = ReplaceCodeTransformer.from_string(template, mangling_predicate=mangle_all)
147 147
148 148
149 149 In [2]: transformer.debug = True
150 150 ...: transformer.mangler.debug = True
151 151 ...: get_ipython().ast_transformers.append(transformer)
152 152
153 153 In [3]: 1+1
154 154 Mangling Alias mangle-type
155 155 Mangling Alias mangle-print
156 156 Mangling Alias mangle-perf_counter
157 157 Mangling now
158 158 Mangling perf_counter
159 159 Not mangling __code__
160 160 Mangling print
161 161 Mangling perf_counter
162 162 Mangling now
163 163 Not mangling __ret__
164 164 ---- Transformed code ----
165 165 from builtins import type as mangle-type, print as mangle-print
166 166 from time import perf_counter as mangle-perf_counter
167 167 mangle-now = mangle-perf_counter()
168 168 ret-tmp = 1 + 1
169 169 mangle-print(f'Code ran in {mangle-perf_counter() - mangle-now}')
170 170 ret-tmp
171 171 ---- ---------------- ----
172 172 Code ran in 0.00013654199938173406
173 173 Out[3]: 2
174 174
175 175
176 176 """
177 177
178 178 __skip_doctest__ = True
179 179
180 180
181 from ast import NodeTransformer, Store, Load, Name, Expr, Assign, Module, Import, ImportFrom
181 from ast import (
182 NodeTransformer,
183 Store,
184 Load,
185 Name,
186 Expr,
187 Assign,
188 Module,
189 Import,
190 ImportFrom,
191 )
182 192 import ast
183 193 import copy
184 194
185 195 from typing import Dict, Optional, Union
186 196
187 197
188 198 mangle_all = lambda name: False if name in ("__ret__", "__code__") else True
189 199
190 200
191 201 class Mangler(NodeTransformer):
192 202 """
193 203 Mangle given names in an ast tree to make sure they do not conflict with
194 204 user code.
195 205 """
196 206
197 207 enabled: bool = True
198 208 debug: bool = False
199 209
200 210 def log(self, *args, **kwargs):
201 211 if self.debug:
202 212 print(*args, **kwargs)
203 213
204 214 def __init__(self, predicate=None):
205 215 if predicate is None:
206 216 predicate = lambda name: name.startswith("___")
207 217 self.predicate = predicate
208 218
209 219 def visit_Name(self, node):
210 220 if self.predicate(node.id):
211 221 self.log("Mangling", node.id)
212 222 # Once in the ast we do not need
213 223 # names to be valid identifiers.
214 224 node.id = "mangle-" + node.id
215 225 else:
216 226 self.log("Not mangling", node.id)
217 227 return node
218 228
219 229 def visit_FunctionDef(self, node):
220 230 if self.predicate(node.name):
221 231 self.log("Mangling", node.name)
222 232 node.name = "mangle-" + node.name
223 233 else:
224 234 self.log("Not mangling", node.name)
225 235
226 236 for arg in node.args.args:
227 237 if self.predicate(arg.arg):
228 238 self.log("Mangling function arg", arg.arg)
229 239 arg.arg = "mangle-" + arg.arg
230 240 else:
231 241 self.log("Not mangling function arg", arg.arg)
232 242 return self.generic_visit(node)
233 243
234 def visit_ImportFrom(self, node:ImportFrom):
244 def visit_ImportFrom(self, node: ImportFrom):
235 245 return self._visit_Import_and_ImportFrom(node)
236 246
237 def visit_Import(self, node:Import):
247 def visit_Import(self, node: Import):
238 248 return self._visit_Import_and_ImportFrom(node)
239 249
240 def _visit_Import_and_ImportFrom(self, node:Union[Import, ImportFrom]):
250 def _visit_Import_and_ImportFrom(self, node: Union[Import, ImportFrom]):
241 251 for alias in node.names:
242 252 asname = alias.name if alias.asname is None else alias.asname
243 253 if self.predicate(asname):
244 254 new_name: str = "mangle-" + asname
245 255 self.log("Mangling Alias", new_name)
246 256 alias.asname = new_name
247 257 else:
248 258 self.log("Not mangling Alias", alias.asname)
249 259 return node
250 260
251 261
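For reference, a minimal sketch of using `Mangler` on its own with the default predicate (names starting with three underscores get mangled):

.. code-block:: python

    import ast

    tree = ast.parse("___tmp = ___tmp + regular")
    Mangler().visit(tree)               # mangles matching names in place
    print(ast.unparse(tree))
    # -> mangle-___tmp = mangle-___tmp + regular   (intentionally not a valid identifier)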
252 262 class ReplaceCodeTransformer(NodeTransformer):
253 263 enabled: bool = True
254 264 debug: bool = False
255 265 mangler: Mangler
256 266
257 267 def __init__(
258 268 self, template: Module, mapping: Optional[Dict] = None, mangling_predicate=None
259 269 ):
260 270 assert isinstance(mapping, (dict, type(None)))
261 271 assert isinstance(mangling_predicate, (type(None), type(lambda: None)))
262 272 assert isinstance(template, ast.Module)
263 273 self.template = template
264 274 self.mangler = Mangler(predicate=mangling_predicate)
265 275 if mapping is None:
266 276 mapping = {}
267 277 self.mapping = mapping
268 278
269 279 @classmethod
270 280 def from_string(
271 281 cls, template: str, mapping: Optional[Dict] = None, mangling_predicate=None
272 282 ):
273 283 return cls(
274 284 ast.parse(template), mapping=mapping, mangling_predicate=mangling_predicate
275 285 )
276 286
277 287 def visit_Module(self, code):
278 288 if not self.enabled:
279 289 return code
280 290 # if not isinstance(code, ast.Module):
281 291 # recursively called...
282 292 # return generic_visit(self, code)
283 293 last = code.body[-1]
284 294 if isinstance(last, Expr):
285 295 code.body.pop()
286 296 code.body.append(Assign([Name("ret-tmp", ctx=Store())], value=last.value))
287 297 ast.fix_missing_locations(code)
288 298 ret = Expr(value=Name("ret-tmp", ctx=Load()))
289 299 ret = ast.fix_missing_locations(ret)
290 300 self.mapping["__ret__"] = ret
291 301 else:
292 302 self.mapping["__ret__"] = ast.parse("None").body[0]
293 303 self.mapping["__code__"] = code.body
294 304 tpl = ast.fix_missing_locations(self.template)
295 305
296 306 tx = copy.deepcopy(tpl)
297 307 tx = self.mangler.visit(tx)
298 308 node = self.generic_visit(tx)
299 309 node_2 = ast.fix_missing_locations(node)
300 310 if self.debug:
301 311 print("---- Transformed code ----")
302 312 print(ast.unparse(node_2))
303 313 print("---- ---------------- ----")
304 314 return node_2
305 315
306 316 # this does not work as the name might be in a list and one might want to extend the list.
307 317 # def visit_Name(self, name):
308 318 # if name.id in self.mapping and name.id == "__ret__":
309 319 # print(name, "in mapping")
310 320 # if isinstance(name.ctx, ast.Store):
311 321 # return Name("tmp", ctx=Store())
312 322 # else:
313 323 # return copy.deepcopy(self.mapping[name.id])
314 324 # return name
315 325
316 326 def visit_Expr(self, expr):
317 327 if isinstance(expr.value, Name) and expr.value.id in self.mapping:
318 328 if self.mapping[expr.value.id] is not None:
319 329 return copy.deepcopy(self.mapping[expr.value.id])
320 330 return self.generic_visit(expr)
@@ -1,371 +1,372 b''
1 1 """Magic functions for running cells in various scripts."""
2 2
3 3 # Copyright (c) IPython Development Team.
4 4 # Distributed under the terms of the Modified BSD License.
5 5
6 6 import asyncio
7 7 import asyncio.exceptions
8 8 import atexit
9 9 import errno
10 10 import os
11 11 import signal
12 12 import sys
13 13 import time
14 14 from subprocess import CalledProcessError
15 15 from threading import Thread
16 16
17 17 from traitlets import Any, Dict, List, default
18 18
19 19 from IPython.core import magic_arguments
20 20 from IPython.core.async_helpers import _AsyncIOProxy
21 21 from IPython.core.magic import Magics, cell_magic, line_magic, magics_class
22 22 from IPython.utils.process import arg_split
23 23
24 24 #-----------------------------------------------------------------------------
25 25 # Magic implementation classes
26 26 #-----------------------------------------------------------------------------
27 27
28 28 def script_args(f):
29 29 """single decorator for adding script args"""
30 30 args = [
31 31 magic_arguments.argument(
32 32 '--out', type=str,
33 33 help="""The variable in which to store stdout from the script.
34 34 If the script is backgrounded, this will be the stdout *pipe*,
35 35 instead of the stdout text itself, and will not be auto closed.
36 36 """
37 37 ),
38 38 magic_arguments.argument(
39 39 '--err', type=str,
40 40 help="""The variable in which to store stderr from the script.
41 41 If the script is backgrounded, this will be the stderr *pipe*,
42 42 instead of the stderr text itself and will not be autoclosed.
43 43 """
44 44 ),
45 45 magic_arguments.argument(
46 46 '--bg', action="store_true",
47 47 help="""Whether to run the script in the background.
48 48 If given, the only way to see the output of the command is
49 49 with --out/err.
50 50 """
51 51 ),
52 52 magic_arguments.argument(
53 53 '--proc', type=str,
54 54 help="""The variable in which to store Popen instance.
55 55 This is used only when --bg option is given.
56 56 """
57 57 ),
58 58 magic_arguments.argument(
59 59 '--no-raise-error', action="store_false", dest='raise_error',
60 60 help="""Whether you should raise an error message in addition to
61 61 a stream on stderr if you get a nonzero exit code.
62 62 """,
63 63 ),
64 64 ]
65 65 for arg in args:
66 66 f = arg(f)
67 67 return f
68 68
69 69
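For context, a sketch of how these arguments look at the prompt (the captured strings shown are approximate); with ``--bg`` the variables would hold the raw pipes instead:

.. code-block:: python

    In [1]: %%script bash --out stdout_text --err stderr_text
       ...: echo "hello"
       ...: echo "oops" >&2

    In [2]: stdout_text, stderr_text
    Out[2]: ('hello\n', 'oops\n')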
70 70 @magics_class
71 71 class ScriptMagics(Magics):
72 72 """Magics for talking to scripts
73 73
74 74 This defines a base `%%script` cell magic for running a cell
75 75 with a program in a subprocess, and registers a few top-level
76 76 magics that call %%script with common interpreters.
77 77 """
78 78
79 79 event_loop = Any(
80 80 help="""
81 81 The event loop on which to run subprocesses
82 82
83 83 Not the main event loop,
84 84 because we want to be able to make blocking calls
85 85 and have certain requirements we don't want to impose on the main loop.
86 86 """
87 87 )
88 88
89 script_magics = List(
89 script_magics: List = List(
90 90 help="""Extra script cell magics to define
91 91
92 92 This generates simple wrappers of `%%script foo` as `%%foo`.
93 93
94 94 If you want to add script magics that aren't on your path,
95 95 specify them in script_paths
96 96 """,
97 97 ).tag(config=True)
98
98 99 @default('script_magics')
99 100 def _script_magics_default(self):
100 101 """default to a common list of programs"""
101 102
102 103 defaults = [
103 104 'sh',
104 105 'bash',
105 106 'perl',
106 107 'ruby',
107 108 'python',
108 109 'python2',
109 110 'python3',
110 111 'pypy',
111 112 ]
112 113 if os.name == 'nt':
113 114 defaults.extend([
114 115 'cmd',
115 116 ])
116 117
117 118 return defaults
118 119
119 120 script_paths = Dict(
120 121 help="""Dict mapping short 'ruby' names to full paths, such as '/opt/secret/bin/ruby'
121 122
122 123 Only necessary for items in script_magics where the default path will not
123 124 find the right interpreter.
124 125 """
125 126 ).tag(config=True)
126 127
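As a configuration sketch (the ``node`` entry is a hypothetical example), these traits are typically set in ``ipython_config.py``; note that assigning ``script_magics`` in config overrides the default list here rather than extending it:

.. code-block:: python

    c = get_config()
    c.ScriptMagics.script_magics = ["node"]                        # defines a %%node cell magic
    c.ScriptMagics.script_paths = {"node": "/usr/local/bin/node"}  # where to find it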
127 128 def __init__(self, shell=None):
128 129 super(ScriptMagics, self).__init__(shell=shell)
129 130 self._generate_script_magics()
130 131 self.bg_processes = []
131 132 atexit.register(self.kill_bg_processes)
132 133
133 134 def __del__(self):
134 135 self.kill_bg_processes()
135 136
136 137 def _generate_script_magics(self):
137 138 cell_magics = self.magics['cell']
138 139 for name in self.script_magics:
139 140 cell_magics[name] = self._make_script_magic(name)
140 141
141 142 def _make_script_magic(self, name):
142 143 """make a named magic, that calls %%script with a particular program"""
143 144 # expand to explicit path if necessary:
144 145 script = self.script_paths.get(name, name)
145 146
146 147 @magic_arguments.magic_arguments()
147 148 @script_args
148 149 def named_script_magic(line, cell):
149 150 # if line, add it as cl-flags
150 151 if line:
151 152 line = "%s %s" % (script, line)
152 153 else:
153 154 line = script
154 155 return self.shebang(line, cell)
155 156
156 157 # write a basic docstring:
157 158 named_script_magic.__doc__ = \
158 159 """%%{name} script magic
159 160
160 161 Run cells with {script} in a subprocess.
161 162
162 163 This is a shortcut for `%%script {script}`
163 164 """.format(**locals())
164 165
165 166 return named_script_magic
166 167
167 168 @magic_arguments.magic_arguments()
168 169 @script_args
169 170 @cell_magic("script")
170 171 def shebang(self, line, cell):
171 172 """Run a cell via a shell command
172 173
173 174 The `%%script` line is like the #! line of a script,
174 175 specifying a program (bash, perl, ruby, etc.) with which to run.
175 176
176 177 The rest of the cell is run by that program.
177 178
178 179 Examples
179 180 --------
180 181 ::
181 182
182 183 In [1]: %%script bash
183 184 ...: for i in 1 2 3; do
184 185 ...: echo $i
185 186 ...: done
186 187 1
187 188 2
188 189 3
189 190 """
190 191
191 192 # Create the event loop in which to run script magics
192 193 # this operates on a background thread
193 194 if self.event_loop is None:
194 195 if sys.platform == "win32":
195 196 # don't override the current policy,
196 197 # just create an event loop
197 198 event_loop = asyncio.WindowsProactorEventLoopPolicy().new_event_loop()
198 199 else:
199 200 event_loop = asyncio.new_event_loop()
200 201 self.event_loop = event_loop
201 202
202 203 # start the loop in a background thread
203 204 asyncio_thread = Thread(target=event_loop.run_forever, daemon=True)
204 205 asyncio_thread.start()
205 206 else:
206 207 event_loop = self.event_loop
207 208
208 209 def in_thread(coro):
209 210 """Call a coroutine on the asyncio thread"""
210 211 return asyncio.run_coroutine_threadsafe(coro, event_loop).result()
211 212
212 213 async def _readchunk(stream):
213 214 try:
214 215 return await stream.readuntil(b"\n")
215 216 except asyncio.exceptions.IncompleteReadError as e:
216 217 return e.partial
217 218 except asyncio.exceptions.LimitOverrunError as e:
218 219 return await stream.read(e.consumed)
219 220
220 221 async def _handle_stream(stream, stream_arg, file_object):
221 222 while True:
222 223 chunk = (await _readchunk(stream)).decode("utf8", errors="replace")
223 224 if not chunk:
224 225 break
225 226 if stream_arg:
226 227 self.shell.user_ns[stream_arg] = chunk
227 228 else:
228 229 file_object.write(chunk)
229 230 file_object.flush()
230 231
231 232 async def _stream_communicate(process, cell):
232 233 process.stdin.write(cell)
233 234 process.stdin.close()
234 235 stdout_task = asyncio.create_task(
235 236 _handle_stream(process.stdout, args.out, sys.stdout)
236 237 )
237 238 stderr_task = asyncio.create_task(
238 239 _handle_stream(process.stderr, args.err, sys.stderr)
239 240 )
240 241 await asyncio.wait([stdout_task, stderr_task])
241 242 await process.wait()
242 243
243 244 argv = arg_split(line, posix=not sys.platform.startswith("win"))
244 245 args, cmd = self.shebang.parser.parse_known_args(argv)
245 246
246 247 try:
247 248 p = in_thread(
248 249 asyncio.create_subprocess_exec(
249 250 *cmd,
250 251 stdout=asyncio.subprocess.PIPE,
251 252 stderr=asyncio.subprocess.PIPE,
252 253 stdin=asyncio.subprocess.PIPE,
253 254 )
254 255 )
255 256 except OSError as e:
256 257 if e.errno == errno.ENOENT:
257 258 print("Couldn't find program: %r" % cmd[0])
258 259 return
259 260 else:
260 261 raise
261 262
262 263 if not cell.endswith('\n'):
263 264 cell += '\n'
264 265 cell = cell.encode('utf8', 'replace')
265 266 if args.bg:
266 267 self.bg_processes.append(p)
267 268 self._gc_bg_processes()
268 269 to_close = []
269 270 if args.out:
270 271 self.shell.user_ns[args.out] = _AsyncIOProxy(p.stdout, event_loop)
271 272 else:
272 273 to_close.append(p.stdout)
273 274 if args.err:
274 275 self.shell.user_ns[args.err] = _AsyncIOProxy(p.stderr, event_loop)
275 276 else:
276 277 to_close.append(p.stderr)
277 278 event_loop.call_soon_threadsafe(
278 279 lambda: asyncio.Task(self._run_script(p, cell, to_close))
279 280 )
280 281 if args.proc:
281 282 proc_proxy = _AsyncIOProxy(p, event_loop)
282 283 proc_proxy.stdout = _AsyncIOProxy(p.stdout, event_loop)
283 284 proc_proxy.stderr = _AsyncIOProxy(p.stderr, event_loop)
284 285 self.shell.user_ns[args.proc] = proc_proxy
285 286 return
286 287
287 288 try:
288 289 in_thread(_stream_communicate(p, cell))
289 290 except KeyboardInterrupt:
290 291 try:
291 292 p.send_signal(signal.SIGINT)
292 293 in_thread(asyncio.wait_for(p.wait(), timeout=0.1))
293 294 if p.returncode is not None:
294 295 print("Process is interrupted.")
295 296 return
296 297 p.terminate()
297 298 in_thread(asyncio.wait_for(p.wait(), timeout=0.1))
298 299 if p.returncode is not None:
299 300 print("Process is terminated.")
300 301 return
301 302 p.kill()
302 303 print("Process is killed.")
303 304 except OSError:
304 305 pass
305 306 except Exception as e:
306 307 print("Error while terminating subprocess (pid=%i): %s" % (p.pid, e))
307 308 return
308 309
309 310 if args.raise_error and p.returncode != 0:
310 311 # If we get here and p.returncode is still None, we must have
311 312 # killed it but not yet seen its return code. We don't wait for it,
312 313 # in case it's stuck in uninterruptible sleep. -9 = SIGKILL
313 314 rc = p.returncode or -9
314 315 raise CalledProcessError(rc, cell)
315 316
316 317 shebang.__skip_doctest__ = os.name != "posix"
317 318
318 319 async def _run_script(self, p, cell, to_close):
319 320 """callback for running the script in the background"""
320 321
321 322 p.stdin.write(cell)
322 323 await p.stdin.drain()
323 324 p.stdin.close()
324 325 await p.stdin.wait_closed()
325 326 await p.wait()
326 327 # asyncio read pipes have no close
327 328 # but we should drain the data anyway
328 329 for s in to_close:
329 330 await s.read()
330 331 self._gc_bg_processes()
331 332
332 333 @line_magic("killbgscripts")
333 334 def killbgscripts(self, _nouse_=''):
334 335 """Kill all BG processes started by %%script and its family."""
335 336 self.kill_bg_processes()
336 337 print("All background processes were killed.")
337 338
338 339 def kill_bg_processes(self):
339 340 """Kill all BG processes which are still running."""
340 341 if not self.bg_processes:
341 342 return
342 343 for p in self.bg_processes:
343 344 if p.returncode is None:
344 345 try:
345 346 p.send_signal(signal.SIGINT)
346 347 except:
347 348 pass
348 349 time.sleep(0.1)
349 350 self._gc_bg_processes()
350 351 if not self.bg_processes:
351 352 return
352 353 for p in self.bg_processes:
353 354 if p.returncode is None:
354 355 try:
355 356 p.terminate()
356 357 except:
357 358 pass
358 359 time.sleep(0.1)
359 360 self._gc_bg_processes()
360 361 if not self.bg_processes:
361 362 return
362 363 for p in self.bg_processes:
363 364 if p.returncode is None:
364 365 try:
365 366 p.kill()
366 367 except:
367 368 pass
368 369 self._gc_bg_processes()
369 370
370 371 def _gc_bg_processes(self):
371 372 self.bg_processes = [p for p in self.bg_processes if p.returncode is None]
@@ -1,700 +1,703 b''
1 1 # encoding: utf-8
2 2 """
3 3 Prefiltering components.
4 4
5 5 Prefilters transform user input before it is exec'd by Python. These
6 6 transforms are used to implement additional syntax such as !ls and %magic.
7 7 """
8 8
9 9 # Copyright (c) IPython Development Team.
10 10 # Distributed under the terms of the Modified BSD License.
11 11
12 12 from keyword import iskeyword
13 13 import re
14 14
15 15 from .autocall import IPyAutocall
16 16 from traitlets.config.configurable import Configurable
17 17 from .inputtransformer2 import (
18 18 ESC_MAGIC,
19 19 ESC_QUOTE,
20 20 ESC_QUOTE2,
21 21 ESC_PAREN,
22 22 )
23 23 from .macro import Macro
24 24 from .splitinput import LineInfo
25 25
26 26 from traitlets import (
27 27 List, Integer, Unicode, Bool, Instance, CRegExp
28 28 )
29 29
30 30 #-----------------------------------------------------------------------------
31 31 # Global utilities, errors and constants
32 32 #-----------------------------------------------------------------------------
33 33
34 34
35 35 class PrefilterError(Exception):
36 36 pass
37 37
38 38
39 39 # RegExp to identify potential function names
40 40 re_fun_name = re.compile(r'[^\W\d]([\w.]*) *$')
41 41
42 42 # RegExp to exclude strings with this start from autocalling. In
43 43 # particular, all binary operators should be excluded, so that if foo is
44 44 # callable, foo OP bar doesn't become foo(OP bar), which is invalid. The
45 45 # characters '!=()' don't need to be checked for, as the checkPythonChars
46 46 # routine explicitly does so, to catch direct calls and rebindings of
47 47 # existing names.
48 48
49 49 # Warning: the '-' HAS TO BE AT THE END of the first group, otherwise
50 50 # it affects the rest of the group in square brackets.
51 51 re_exclude_auto = re.compile(r'^[,&^\|\*/\+-]'
52 52 r'|^is |^not |^in |^and |^or ')
53 53
54 54 # try to catch also methods for stuff in lists/tuples/dicts: off
55 55 # (experimental). For this to work, the line_split regexp would need
56 56 # to be modified so it wouldn't break things at '['. That line is
57 57 # nasty enough that I shouldn't change it until I can test it _well_.
58 58 #self.re_fun_name = re.compile (r'[a-zA-Z_]([a-zA-Z0-9_.\[\]]*) ?$')
59 59
60 60
61 61 # Handler Check Utilities
62 62 def is_shadowed(identifier, ip):
63 63 """Is the given identifier defined in one of the namespaces which shadow
64 64 the alias and magic namespaces? Note that an identifier is different
65 65 from ifun, because it cannot contain a '.' character.
66 66 # This is much safer than calling ofind, which can change state
67 67 return (identifier in ip.user_ns \
68 68 or identifier in ip.user_global_ns \
69 69 or identifier in ip.ns_table['builtin']\
70 70 or iskeyword(identifier))
71 71
72 72
73 73 #-----------------------------------------------------------------------------
74 74 # Main Prefilter manager
75 75 #-----------------------------------------------------------------------------
76 76
77 77
78 78 class PrefilterManager(Configurable):
79 79 """Main prefilter component.
80 80
81 81 The IPython prefilter is run on all user input before it is executed. The
82 82 prefilter consumes lines of input and produces transformed lines of
83 83 input.
84 84
85 85 The implementation consists of two phases:
86 86
87 87 1. Transformers
88 88 2. Checkers and handlers
89 89
90 90 Over time, we plan on deprecating the checkers and handlers and doing
91 91 everything in the transformers.
92 92
93 93 The transformers are instances of :class:`PrefilterTransformer` and have
94 94 a single method :meth:`transform` that takes a line and returns a
95 95 transformed line. The transformation can be accomplished using any
96 96 tool, but our current ones use regular expressions for speed.
97 97
98 98 After all the transformers have been run, the line is fed to the checkers,
99 99 which are instances of :class:`PrefilterChecker`. The line is passed to
100 100 the :meth:`check` method, which either returns `None` or a
101 101 :class:`PrefilterHandler` instance. If `None` is returned, the other
102 102 checkers are tried. If a :class:`PrefilterHandler` instance is returned,
103 103 the line is passed to the :meth:`handle` method of the returned
104 104 handler and no further checkers are tried.
105 105
106 106 Both transformers and checkers have a `priority` attribute, that determines
107 107 the order in which they are called. Smaller priorities are tried first.
108 108
109 109 Both transformers and checkers also have an `enabled` attribute, which is
110 110 a boolean that determines if the instance is used.
111 111
112 112 Users or developers can change the priority or enabled attribute of
113 113 transformers or checkers, but they must call the :meth:`sort_checkers`
114 114 or :meth:`sort_transformers` method after changing the priority.
115 115 """
116 116
117 117 multi_line_specials = Bool(True).tag(config=True)
118 118 shell = Instance('IPython.core.interactiveshell.InteractiveShellABC', allow_none=True)
119 119
120 120 def __init__(self, shell=None, **kwargs):
121 121 super(PrefilterManager, self).__init__(shell=shell, **kwargs)
122 122 self.shell = shell
123 123 self._transformers = []
124 124 self.init_handlers()
125 125 self.init_checkers()
126 126
127 127 #-------------------------------------------------------------------------
128 128 # API for managing transformers
129 129 #-------------------------------------------------------------------------
130 130
131 131 def sort_transformers(self):
132 132 """Sort the transformers by priority.
133 133
134 134 This must be called after the priority of a transformer is changed.
135 135 The :meth:`register_transformer` method calls this automatically.
136 136 """
137 137 self._transformers.sort(key=lambda x: x.priority)
138 138
139 139 @property
140 140 def transformers(self):
141 141 """Return a list of checkers, sorted by priority."""
142 142 return self._transformers
143 143
144 144 def register_transformer(self, transformer):
145 145 """Register a transformer instance."""
146 146 if transformer not in self._transformers:
147 147 self._transformers.append(transformer)
148 148 self.sort_transformers()
149 149
150 150 def unregister_transformer(self, transformer):
151 151 """Unregister a transformer instance."""
152 152 if transformer in self._transformers:
153 153 self._transformers.remove(transformer)
154 154
155 155 #-------------------------------------------------------------------------
156 156 # API for managing checkers
157 157 #-------------------------------------------------------------------------
158 158
159 159 def init_checkers(self):
160 160 """Create the default checkers."""
161 161 self._checkers = []
162 162 for checker in _default_checkers:
163 163 checker(
164 164 shell=self.shell, prefilter_manager=self, parent=self
165 165 )
166 166
167 167 def sort_checkers(self):
168 168 """Sort the checkers by priority.
169 169
170 170 This must be called after the priority of a checker is changed.
171 171 The :meth:`register_checker` method calls this automatically.
172 172 """
173 173 self._checkers.sort(key=lambda x: x.priority)
174 174
175 175 @property
176 176 def checkers(self):
177 177 """Return a list of checkers, sorted by priority."""
178 178 return self._checkers
179 179
180 180 def register_checker(self, checker):
181 181 """Register a checker instance."""
182 182 if checker not in self._checkers:
183 183 self._checkers.append(checker)
184 184 self.sort_checkers()
185 185
186 186 def unregister_checker(self, checker):
187 187 """Unregister a checker instance."""
188 188 if checker in self._checkers:
189 189 self._checkers.remove(checker)
190 190
191 191 #-------------------------------------------------------------------------
192 192 # API for managing handlers
193 193 #-------------------------------------------------------------------------
194 194
195 195 def init_handlers(self):
196 196 """Create the default handlers."""
197 197 self._handlers = {}
198 198 self._esc_handlers = {}
199 199 for handler in _default_handlers:
200 200 handler(
201 201 shell=self.shell, prefilter_manager=self, parent=self
202 202 )
203 203
204 204 @property
205 205 def handlers(self):
206 206 """Return a dict of all the handlers."""
207 207 return self._handlers
208 208
209 209 def register_handler(self, name, handler, esc_strings):
210 210 """Register a handler instance by name with esc_strings."""
211 211 self._handlers[name] = handler
212 212 for esc_str in esc_strings:
213 213 self._esc_handlers[esc_str] = handler
214 214
215 215 def unregister_handler(self, name, handler, esc_strings):
216 216 """Unregister a handler instance by name with esc_strings."""
217 217 try:
218 218 del self._handlers[name]
219 219 except KeyError:
220 220 pass
221 221 for esc_str in esc_strings:
222 222 h = self._esc_handlers.get(esc_str)
223 223 if h is handler:
224 224 del self._esc_handlers[esc_str]
225 225
226 226 def get_handler_by_name(self, name):
227 227 """Get a handler by its name."""
228 228 return self._handlers.get(name)
229 229
230 230 def get_handler_by_esc(self, esc_str):
231 231 """Get a handler by its escape string."""
232 232 return self._esc_handlers.get(esc_str)
233 233
234 234 #-------------------------------------------------------------------------
235 235 # Main prefiltering API
236 236 #-------------------------------------------------------------------------
237 237
238 238 def prefilter_line_info(self, line_info):
239 239 """Prefilter a line that has been converted to a LineInfo object.
240 240
241 241 This implements the checker/handler part of the prefilter pipe.
242 242 """
243 243 # print "prefilter_line_info: ", line_info
244 244 handler = self.find_handler(line_info)
245 245 return handler.handle(line_info)
246 246
247 247 def find_handler(self, line_info):
248 248 """Find a handler for the line_info by trying checkers."""
249 249 for checker in self.checkers:
250 250 if checker.enabled:
251 251 handler = checker.check(line_info)
252 252 if handler:
253 253 return handler
254 254 return self.get_handler_by_name('normal')
255 255
256 256 def transform_line(self, line, continue_prompt):
257 257 """Calls the enabled transformers in order of increasing priority."""
258 258 for transformer in self.transformers:
259 259 if transformer.enabled:
260 260 line = transformer.transform(line, continue_prompt)
261 261 return line
262 262
263 263 def prefilter_line(self, line, continue_prompt=False):
264 264 """Prefilter a single input line as text.
265 265
266 266 This method prefilters a single line of text by calling the
267 267 transformers and then the checkers/handlers.
268 268 """
269 269
270 270 # print "prefilter_line: ", line, continue_prompt
271 271 # All handlers *must* return a value, even if it's blank ('').
272 272
273 273 # save the line away in case we crash, so the post-mortem handler can
274 274 # record it
275 275 self.shell._last_input_line = line
276 276
277 277 if not line:
278 278 # Return immediately on purely empty lines, so that if the user
279 279 # previously typed some whitespace that started a continuation
280 280 prompt, they can break out of that loop with just an empty line.
281 281 # This is how the default python prompt works.
282 282 return ''
283 283
284 284 # At this point, we invoke our transformers.
285 285 if not continue_prompt or (continue_prompt and self.multi_line_specials):
286 286 line = self.transform_line(line, continue_prompt)
287 287
288 288 # Now we compute line_info for the checkers and handlers
289 289 line_info = LineInfo(line, continue_prompt)
290 290
291 291 # the input history needs to track even empty lines
292 292 stripped = line.strip()
293 293
294 294 normal_handler = self.get_handler_by_name('normal')
295 295 if not stripped:
296 296 return normal_handler.handle(line_info)
297 297
298 298 # special handlers are only allowed for single line statements
299 299 if continue_prompt and not self.multi_line_specials:
300 300 return normal_handler.handle(line_info)
301 301
302 302 prefiltered = self.prefilter_line_info(line_info)
303 303 # print "prefiltered line: %r" % prefiltered
304 304 return prefiltered
305 305
306 306 def prefilter_lines(self, lines, continue_prompt=False):
307 307 """Prefilter multiple input lines of text.
308 308
309 309 This is the main entry point for prefiltering multiple lines of
310 310 input. This simply calls :meth:`prefilter_line` for each line of
311 311 input.
312 312
313 313 This covers cases where there are multiple lines in the user entry,
314 314 which is the case when the user goes back to a multiline history
315 315 entry and presses enter.
316 316 """
317 317 llines = lines.rstrip('\n').split('\n')
318 318 # We can get multiple lines in one shot, where multiline input 'blends'
319 319 # into one line, in cases like recalling from the readline history
320 320 # buffer. We need to make sure that in such cases, we correctly
321 321 # communicate downstream which line is first and which are continuation
322 322 # ones.
323 323 if len(llines) > 1:
324 324 out = '\n'.join([self.prefilter_line(line, lnum>0)
325 325 for lnum, line in enumerate(llines) ])
326 326 else:
327 327 out = self.prefilter_line(llines[0], continue_prompt)
328 328
329 329 return out
330 330
331 331 #-----------------------------------------------------------------------------
332 332 # Prefilter transformers
333 333 #-----------------------------------------------------------------------------
334 334
335 335
336 336 class PrefilterTransformer(Configurable):
337 337 """Transform a line of user input."""
338 338
339 339 priority = Integer(100).tag(config=True)
340 340 # Transformers don't currently use shell or prefilter_manager, but as we
341 341 # move away from checkers and handlers, they will need them.
342 342 shell = Instance('IPython.core.interactiveshell.InteractiveShellABC', allow_none=True)
343 343 prefilter_manager = Instance('IPython.core.prefilter.PrefilterManager', allow_none=True)
344 344 enabled = Bool(True).tag(config=True)
345 345
346 346 def __init__(self, shell=None, prefilter_manager=None, **kwargs):
347 347 super(PrefilterTransformer, self).__init__(
348 348 shell=shell, prefilter_manager=prefilter_manager, **kwargs
349 349 )
350 350 self.prefilter_manager.register_transformer(self)
351 351
352 352 def transform(self, line, continue_prompt):
353 353 """Transform a line, returning the new one."""
354 354 return None
355 355
356 356 def __repr__(self):
357 357 return "<%s(priority=%r, enabled=%r)>" % (
358 358 self.__class__.__name__, self.priority, self.enabled)
359 359
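As an illustration of the API described above, a hypothetical transformer subclass; instantiating it registers it with the manager, and its ``transform`` must return the (possibly rewritten) line:

.. code-block:: python

    from IPython.core.prefilter import PrefilterTransformer

    class ShoutTransformer(PrefilterTransformer):
        """Hypothetical example: upper-case any line ending in '!!'."""

        def transform(self, line, continue_prompt):
            if line.endswith("!!"):
                return line[:-2].upper()
            return line

    ip = get_ipython()
    # Instantiation auto-registers via register_transformer()
    ShoutTransformer(shell=ip, prefilter_manager=ip.prefilter_manager, parent=ip)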
360 360
361 361 #-----------------------------------------------------------------------------
362 362 # Prefilter checkers
363 363 #-----------------------------------------------------------------------------
364 364
365 365
366 366 class PrefilterChecker(Configurable):
367 367 """Inspect an input line and return a handler for that line."""
368 368
369 369 priority = Integer(100).tag(config=True)
370 370 shell = Instance('IPython.core.interactiveshell.InteractiveShellABC', allow_none=True)
371 371 prefilter_manager = Instance('IPython.core.prefilter.PrefilterManager', allow_none=True)
372 372 enabled = Bool(True).tag(config=True)
373 373
374 374 def __init__(self, shell=None, prefilter_manager=None, **kwargs):
375 375 super(PrefilterChecker, self).__init__(
376 376 shell=shell, prefilter_manager=prefilter_manager, **kwargs
377 377 )
378 378 self.prefilter_manager.register_checker(self)
379 379
380 380 def check(self, line_info):
381 381 """Inspect line_info and return a handler instance or None."""
382 382 return None
383 383
384 384 def __repr__(self):
385 385 return "<%s(priority=%r, enabled=%r)>" % (
386 386 self.__class__.__name__, self.priority, self.enabled)
387 387
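And a matching sketch for the checker side (everything here is hypothetical): a checker returns either ``None`` or a registered handler, and lower priorities are consulted first:

.. code-block:: python

    from traitlets import Integer
    from IPython.core.prefilter import PrefilterChecker

    class HashBangChecker(PrefilterChecker):
        """Hypothetical: route lines starting with '#!' to the 'normal' handler."""

        priority = Integer(50).tag(config=True)

        def check(self, line_info):
            if line_info.line.startswith("#!"):
                return self.prefilter_manager.get_handler_by_name("normal")
            return None

    ip = get_ipython()
    # Instantiation auto-registers via register_checker()
    HashBangChecker(shell=ip, prefilter_manager=ip.prefilter_manager, parent=ip)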
388 388
389 389 class EmacsChecker(PrefilterChecker):
390 390
391 391 priority = Integer(100).tag(config=True)
392 392 enabled = Bool(False).tag(config=True)
393 393
394 394 def check(self, line_info):
395 395 "Emacs ipython-mode tags certain input lines."
396 396 if line_info.line.endswith('# PYTHON-MODE'):
397 397 return self.prefilter_manager.get_handler_by_name('emacs')
398 398 else:
399 399 return None
400 400
401 401
402 402 class MacroChecker(PrefilterChecker):
403 403
404 404 priority = Integer(250).tag(config=True)
405 405
406 406 def check(self, line_info):
407 407 obj = self.shell.user_ns.get(line_info.ifun)
408 408 if isinstance(obj, Macro):
409 409 return self.prefilter_manager.get_handler_by_name('macro')
410 410 else:
411 411 return None
412 412
413 413
414 414 class IPyAutocallChecker(PrefilterChecker):
415 415
416 416 priority = Integer(300).tag(config=True)
417 417
418 418 def check(self, line_info):
419 419 "Instances of IPyAutocall in user_ns get autocalled immediately"
420 420 obj = self.shell.user_ns.get(line_info.ifun, None)
421 421 if isinstance(obj, IPyAutocall):
422 422 obj.set_ip(self.shell)
423 423 return self.prefilter_manager.get_handler_by_name('auto')
424 424 else:
425 425 return None
426 426
427 427
428 428 class AssignmentChecker(PrefilterChecker):
429 429
430 430 priority = Integer(600).tag(config=True)
431 431
432 432 def check(self, line_info):
433 433 """Check to see if user is assigning to a var for the first time, in
434 434 which case we want to avoid any sort of automagic / autocall games.
435 435
436 436 This allows users to assign true python variables to names that would
437 437 otherwise be aliases or magics (the magic/alias systems always take a back seat to true
438 438 python code). E.g. ls='hi', or ls,that=1,2"""
439 439 if line_info.the_rest:
440 440 if line_info.the_rest[0] in '=,':
441 441 return self.prefilter_manager.get_handler_by_name('normal')
442 442 else:
443 443 return None
444 444
445 445
446 446 class AutoMagicChecker(PrefilterChecker):
447 447
448 448 priority = Integer(700).tag(config=True)
449 449
450 450 def check(self, line_info):
451 451 """If the ifun is magic, and automagic is on, run it. Note: normal,
452 452 non-auto magic would already have been triggered via '%' in
453 453 check_esc_chars. This just checks for automagic. Also, before
454 454 triggering the magic handler, make sure that there is nothing in the
455 455 user namespace which could shadow it."""
456 456 if not self.shell.automagic or not self.shell.find_magic(line_info.ifun):
457 457 return None
458 458
459 459 # We have a likely magic method. Make sure we should actually call it.
460 460 if line_info.continue_prompt and not self.prefilter_manager.multi_line_specials:
461 461 return None
462 462
463 463 head = line_info.ifun.split('.',1)[0]
464 464 if is_shadowed(head, self.shell):
465 465 return None
466 466
467 467 return self.prefilter_manager.get_handler_by_name('magic')
468 468
469 469
470 470 class PythonOpsChecker(PrefilterChecker):
471 471
472 472 priority = Integer(900).tag(config=True)
473 473
474 474 def check(self, line_info):
475 475 """If the 'rest' of the line begins with a function call or pretty much
476 476 any python operator, we should simply execute the line (regardless of
477 477 whether or not there's a possible autocall expansion). This avoids
478 478 spurious (and very confusing) getattr() accesses.
479 479 if line_info.the_rest and line_info.the_rest[0] in '!=()<>,+*/%^&|':
480 480 return self.prefilter_manager.get_handler_by_name('normal')
481 481 else:
482 482 return None
483 483
484 484
485 485 class AutocallChecker(PrefilterChecker):
486 486
487 487 priority = Integer(1000).tag(config=True)
488 488
489 489 function_name_regexp = CRegExp(re_fun_name,
490 490 help="RegExp to identify potential function names."
491 491 ).tag(config=True)
492 492 exclude_regexp = CRegExp(re_exclude_auto,
493 493 help="RegExp to exclude strings with this start from autocalling."
494 494 ).tag(config=True)
495 495
496 496 def check(self, line_info):
497 497 "Check if the initial word/function is callable and autocall is on."
498 498 if not self.shell.autocall:
499 499 return None
500 500
501 501 oinfo = line_info.ofind(self.shell) # This can mutate state via getattr
502 502 if not oinfo.found:
503 503 return None
504 504
505 505 ignored_funs = ['b', 'f', 'r', 'u', 'br', 'rb', 'fr', 'rf']
506 506 ifun = line_info.ifun
507 507 line = line_info.line
508 508 if ifun.lower() in ignored_funs and (line.startswith(ifun + "'") or line.startswith(ifun + '"')):
509 509 return None
510 510
511 511 if (
512 512 callable(oinfo.obj)
513 513 and (not self.exclude_regexp.match(line_info.the_rest))
514 514 and self.function_name_regexp.match(line_info.ifun)
515 515 ):
516 516 return self.prefilter_manager.get_handler_by_name("auto")
517 517 else:
518 518 return None
519 519
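The ``ignored_funs`` guard above keeps string-prefix literals from being mistaken for calls; a hedged illustration:

.. code::

    # even if the user has bound a callable to the name 'f', a line such as
    f"{value}"
    # is a formatted string literal, not f("{value}"), so the checker
    # declines to autocall it.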
520 520
521 521 #-----------------------------------------------------------------------------
522 522 # Prefilter handlers
523 523 #-----------------------------------------------------------------------------
524 524
525 525
526 526 class PrefilterHandler(Configurable):
527
528 handler_name = Unicode('normal')
529 esc_strings = List([])
530 shell = Instance('IPython.core.interactiveshell.InteractiveShellABC', allow_none=True)
531 prefilter_manager = Instance('IPython.core.prefilter.PrefilterManager', allow_none=True)
527 handler_name = Unicode("normal")
528 esc_strings: List = List([])
529 shell = Instance(
530 "IPython.core.interactiveshell.InteractiveShellABC", allow_none=True
531 )
532 prefilter_manager = Instance(
533 "IPython.core.prefilter.PrefilterManager", allow_none=True
534 )
532 535
533 536 def __init__(self, shell=None, prefilter_manager=None, **kwargs):
534 537 super(PrefilterHandler, self).__init__(
535 538 shell=shell, prefilter_manager=prefilter_manager, **kwargs
536 539 )
537 540 self.prefilter_manager.register_handler(
538 541 self.handler_name,
539 542 self,
540 543 self.esc_strings
541 544 )
542 545
543 546 def handle(self, line_info):
544 547 """Handle normal input lines. Use as a template for handlers."""
545 548 # print("normal: ", line_info)
546 549
547 550 # With autoindent on, we need some way to exit the input loop, and I
548 551 # don't want to force the user to have to backspace all the way to
549 552 # clear the line. The rule will be in this case, that either two
550 553 # lines of pure whitespace in a row, or a line of pure whitespace but
551 554 # of a size different to the indent level, will exit the input loop.
552 555 line = line_info.line
553 556 continue_prompt = line_info.continue_prompt
554 557
555 558 if (continue_prompt and
556 559 self.shell.autoindent and
557 560 line.isspace() and
558 561 0 < abs(len(line) - self.shell.indent_current_nsp) <= 2):
559 562 line = ''
560 563
561 564 return line
562 565
563 566 def __str__(self):
564 567 return "<%s(name=%s)>" % (self.__class__.__name__, self.handler_name)
565 568
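The autoindent exit rule in ``handle`` above can be paraphrased in isolation; a minimal sketch assuming an indent level of 4 spaces:

.. code::

    indent = 4
    for line in ("    ", "      ", "  "):
        cleared = line.isspace() and 0 < abs(len(line) - indent) <= 2
        print(repr(line), "->", "cleared" if cleared else "kept")
    # '    '   -> kept     (length matches the indent, difference 0)
    # '      ' -> cleared  (difference 2)
    # '  '     -> cleared  (difference 2)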
566 569
567 570 class MacroHandler(PrefilterHandler):
568 571 handler_name = Unicode("macro")
569 572
570 573 def handle(self, line_info):
571 574 obj = self.shell.user_ns.get(line_info.ifun)
572 575 pre_space = line_info.pre_whitespace
573 576 line_sep = "\n" + pre_space
574 577 return pre_space + line_sep.join(obj.value.splitlines())
575 578
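The re-indentation performed above can be seen in isolation; ``pre_space`` and ``body`` below are invented values standing in for ``line_info.pre_whitespace`` and ``obj.value``:

.. code::

    pre_space = "    "
    body = "a = 1\nprint(a)"
    line_sep = "\n" + pre_space
    print(pre_space + line_sep.join(body.splitlines()))
    #     a = 1
    #     print(a)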
576 579
577 580 class MagicHandler(PrefilterHandler):
578 581
579 582 handler_name = Unicode('magic')
580 583 esc_strings = List([ESC_MAGIC])
581 584
582 585 def handle(self, line_info):
583 586 """Execute magic functions."""
584 587 ifun = line_info.ifun
585 588 the_rest = line_info.the_rest
586 589 #Prepare arguments for get_ipython().run_line_magic(magic_name, magic_args)
587 590 t_arg_s = ifun + " " + the_rest
588 591 t_magic_name, _, t_magic_arg_s = t_arg_s.partition(' ')
589 592 t_magic_name = t_magic_name.lstrip(ESC_MAGIC)
590 593 cmd = '%sget_ipython().run_line_magic(%r, %r)' % (line_info.pre_whitespace, t_magic_name, t_magic_arg_s)
591 594 return cmd
592 595
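Put differently, the handler turns a magic invocation into an explicit ``run_line_magic`` call; a sketch of the transformation (the magic name and argument are illustrative):

.. code::

    # input line:
    %timeit range(1000)

    # is rewritten into:
    get_ipython().run_line_magic('timeit', 'range(1000)')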
593 596
594 597 class AutoHandler(PrefilterHandler):
595 598
596 599 handler_name = Unicode('auto')
597 600 esc_strings = List([ESC_PAREN, ESC_QUOTE, ESC_QUOTE2])
598 601
599 602 def handle(self, line_info):
600 603 """Handle lines which can be auto-executed, quoting if requested."""
601 604 line = line_info.line
602 605 ifun = line_info.ifun
603 606 the_rest = line_info.the_rest
604 607 esc = line_info.esc
605 608 continue_prompt = line_info.continue_prompt
606 609 obj = line_info.ofind(self.shell).obj
607 610
608 611 # This should only be active for single-line input!
609 612 if continue_prompt:
610 613 return line
611 614
612 615 force_auto = isinstance(obj, IPyAutocall)
613 616
614 617 # User objects sometimes raise exceptions on attribute access other
615 618 # than AttributeError (we've seen it in the past), so it's safest to be
616 619 # ultra-conservative here and catch all.
617 620 try:
618 621 auto_rewrite = obj.rewrite
619 622 except Exception:
620 623 auto_rewrite = True
621 624
622 625 if esc == ESC_QUOTE:
623 626 # Auto-quote splitting on whitespace
624 627 newcmd = '%s("%s")' % (ifun,'", "'.join(the_rest.split()) )
625 628 elif esc == ESC_QUOTE2:
626 629 # Auto-quote whole string
627 630 newcmd = '%s("%s")' % (ifun,the_rest)
628 631 elif esc == ESC_PAREN:
629 632 newcmd = '%s(%s)' % (ifun,",".join(the_rest.split()))
630 633 else:
631 634 # Auto-paren.
632 635 if force_auto:
633 636 # Don't rewrite if it is already a call.
634 637 do_rewrite = not the_rest.startswith('(')
635 638 else:
636 639 if not the_rest:
637 640 # We only apply it to argument-less calls if the autocall
638 641 # parameter is set to 2.
639 642 do_rewrite = (self.shell.autocall >= 2)
640 643 elif the_rest.startswith('[') and hasattr(obj, '__getitem__'):
641 644 # Don't autocall in this case: item access for an object
642 645 # which is BOTH callable and implements __getitem__.
643 646 do_rewrite = False
644 647 else:
645 648 do_rewrite = True
646 649
647 650 # Figure out the rewritten command
648 651 if do_rewrite:
649 652 if the_rest.endswith(';'):
650 653 newcmd = '%s(%s);' % (ifun.rstrip(),the_rest[:-1])
651 654 else:
652 655 newcmd = '%s(%s)' % (ifun.rstrip(), the_rest)
653 656 else:
654 657 normal_handler = self.prefilter_manager.get_handler_by_name('normal')
655 658 return normal_handler.handle(line_info)
656 659
657 660 # Display the rewritten call
658 661 if auto_rewrite:
659 662 self.shell.auto_rewrite_input(newcmd)
660 663
661 664 return newcmd
662 665
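To make the branches above concrete, here is a hedged summary of the rewrites, assuming the usual IPython escape characters ``,`` (auto-quote on whitespace), ``;`` (auto-quote the whole remainder) and ``/`` (explicit auto-paren):

.. code::

    ,my_func a b     # becomes  my_func("a", "b")
    ;my_func a b     # becomes  my_func("a b")
    /my_func a b     # becomes  my_func(a,b)
    my_func 1, 2     # becomes  my_func(1, 2)   (plain autocall, when enabled)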
663 666
664 667 class EmacsHandler(PrefilterHandler):
665 668
666 669 handler_name = Unicode('emacs')
667 670 esc_strings = List([])
668 671
669 672 def handle(self, line_info):
670 673 """Handle input lines marked by python-mode."""
671 674
672 675 # Currently, nothing is done. Later more functionality can be added
673 676 # here if needed.
674 677
675 678 # The input cache shouldn't be updated
676 679 return line_info.line
677 680
678 681
679 682 #-----------------------------------------------------------------------------
680 683 # Defaults
681 684 #-----------------------------------------------------------------------------
682 685
683 686
684 687 _default_checkers = [
685 688 EmacsChecker,
686 689 MacroChecker,
687 690 IPyAutocallChecker,
688 691 AssignmentChecker,
689 692 AutoMagicChecker,
690 693 PythonOpsChecker,
691 694 AutocallChecker
692 695 ]
693 696
694 697 _default_handlers = [
695 698 PrefilterHandler,
696 699 MacroHandler,
697 700 MagicHandler,
698 701 AutoHandler,
699 702 EmacsHandler
700 703 ]