update darker/black
M Bussonnier
@@ -1,40 +1,40 @@
 # This workflow will install Python dependencies, run tests and lint with a variety of Python versions
 # For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
 
 name: Python package
 
 permissions:
   contents: read
 
 on:
   push:
     branches: [ main, 7.x ]
   pull_request:
     branches: [ main, 7.x ]
 
 jobs:
   formatting:
 
     runs-on: ubuntu-latest
     timeout-minutes: 5
     steps:
       - uses: actions/checkout@v4
         with:
           fetch-depth: 0
       - name: Set up Python
         uses: actions/setup-python@v5
         with:
           python-version: 3.x
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip
           # when changing the versions please update CONTRIBUTING.md too
-          pip install --only-binary ':all:' darker==1.5.1 black==22.10.0
+          pip install --only-binary ':all:' darker==2.1.1 black==24.10.0
       - name: Lint with darker
         run: |
           darker -r 60625f241f298b5039cb2debc365db38aa7bb522 --check --diff . || (
             echo "Changes need auto-formatting. Run:"
             echo "    darker -r 60625f241f298b5039cb2debc365db38aa7bb522 ."
             echo "then commit and push changes to fix."
             exit 1
           )
@@ -1,3389 +1,3389 @@
"""Completion for IPython.

This module started as a fork of the rlcompleter module in the Python standard
library. The original enhancements made to rlcompleter have been sent
upstream and were accepted as of Python 2.3.

This module now supports a wide variety of completion mechanisms, both for
normal classic Python code and for IPython-specific syntax such as magics.

Latex and Unicode completion
============================

IPython and compatible frontends not only can complete your code, but can help
you to input a wide range of characters. In particular we allow you to insert
a unicode character using the tab completion mechanism.

Forward latex/unicode completion
--------------------------------

Forward completion allows you to easily type a unicode character using its latex
name, or unicode long description. To do so type a backslash followed by the
relevant name and press tab:


Using latex completion:

.. code::

    \\alpha<tab>
    α

or using unicode completion:


.. code::

    \\GREEK SMALL LETTER ALPHA<tab>
    α


Only valid Python identifiers will complete. Combining characters (like arrows or
dots) are also available; unlike latex they need to be put after their
counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.

Some browsers are known to display combining characters incorrectly.

Backward latex completion
-------------------------

It is sometimes challenging to know how to type a character; if you are using
IPython, or any compatible frontend, you can prepend a backslash to the character
and press :kbd:`Tab` to expand it to its latex form.

.. code::

    \\α<tab>
    \\alpha


Both forward and backward completions can be deactivated by setting the
:std:configtrait:`Completer.backslash_combining_completions` option to
``False``.


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions both using static analysis of the code, and dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and will
raise unless used in a :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if the current
IPython completer pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is an integer without
executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
option.

Be sure to update :any:`jedi` to the latest stable version or to try the
current development version to get better completions.

Matchers
========

All completion routines are implemented using a unified *Matchers* API.
The matchers API is provisional and subject to change without notice.

The built-in matchers include:

- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
- :any:`IPCompleter.magic_matcher`: completions for magics,
- :any:`IPCompleter.unicode_name_matcher`,
  :any:`IPCompleter.fwd_unicode_matcher`
  and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
- :any:`IPCompleter.file_matcher`: paths to files and directories,
- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
  implementation in :any:`InteractiveShell` which uses the IPython hooks system
  (`complete_command`) with string dispatch (including regular expressions).
  Differently to other matchers, ``custom_completer_matcher`` will not suppress
  Jedi results to match behaviour in earlier IPython versions.

Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.

Matcher API
-----------

Simplifying some details, the ``Matcher`` interface can be described as

.. code-block::

    MatcherAPIv1 = Callable[[str], list[str]]
    MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]

    Matcher = MatcherAPIv1 | MatcherAPIv2

The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
and remains supported as the simplest way of generating completions. This is also
currently the only API supported by the IPython hooks system `complete_command`.

To distinguish between matcher versions, the ``matcher_api_version`` attribute is
used. More precisely, the API allows ``matcher_api_version`` to be omitted for v1
Matchers, and requires a literal ``2`` for v2 Matchers.

Once the API stabilises future versions may relax the requirement for specifying
``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore
please do not rely on the presence of ``matcher_api_version`` for any purposes.

Suppression of competing matchers
---------------------------------

By default results from all matchers are combined, in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results from
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a set of identifiers of matchers which
should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.

The suppression behaviour is user-configurable via
:std:configtrait:`IPCompleter.suppress_competing_matchers`.
"""
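The v1/v2 distinction described above can be illustrated with a small, self-contained sketch. The ``Ctx`` class and ``color_matcher_*`` functions below are hypothetical stand-ins invented for illustration; in IPython the real ``CompletionContext`` and ``SimpleMatcherResult`` types from this module would be used instead:

```python
from dataclasses import dataclass
from typing import Any, Dict, List

# Hypothetical stand-in for this module's CompletionContext (illustration only).
@dataclass
class Ctx:
    token: str

def color_matcher_v1(text: str) -> List[str]:
    # v1: a plain callable from text to a list of strings;
    # matcher_api_version may be omitted entirely.
    return [c for c in ("red", "green", "blue") if c.startswith(text)]

def color_matcher_v2(context: Ctx) -> Dict[str, Any]:
    # v2: receives a context object and returns a MatcherResult-like dict.
    return {
        "completions": color_matcher_v1(context.token),
        "suppress": False,  # let lower-priority matchers contribute too
    }

color_matcher_v2.matcher_api_version = 2  # the literal ``2`` required for v2

print(color_matcher_v1("gr"))                      # ['green']
print(color_matcher_v2(Ctx("b"))["completions"])   # ['blue']
```

A real v2 matcher would also honour ``context.limit`` and could set ``suppress`` or ``do_not_suppress`` as described in the suppression section above.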


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org

from __future__ import annotations
import builtins as builtin_mod
import enum
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import tokenize
import time
import unicodedata
import uuid
import warnings
from ast import literal_eval
from collections import defaultdict
from contextlib import contextmanager
from dataclasses import dataclass
from functools import cached_property, partial
from types import SimpleNamespace
from typing import (
    Iterable,
    Iterator,
    List,
    Tuple,
    Union,
    Any,
    Sequence,
    Dict,
    Optional,
    TYPE_CHECKING,
    Set,
    Sized,
    TypeVar,
    Literal,
)

from IPython.core.guarded_eval import guarded_eval, EvaluationContext
from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
from IPython.utils.decorators import sphinx_options
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.docs import GENERATING_DOCUMENTATION
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import (
    Bool,
    Enum,
    Int,
    List as ListTrait,
    Unicode,
    Dict as DictTrait,
    Union as UnionTrait,
    observe,
)
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
__skip_doctest__ = True

try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False


if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
    from typing import cast
    from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
else:
    from typing import Generic

    def cast(type_, obj):
        """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
        return obj

    # do not require at runtime
    NotRequired = Tuple  # requires Python >=3.11
    TypedDict = Dict  # by extension of `NotRequired` requires 3.11 too
    Protocol = object  # requires Python >=3.8
    TypeAlias = Any  # requires Python >=3.10
    TypeGuard = Generic  # requires Python >=3.10
if GENERATING_DOCUMENTATION:
    from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
# -----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be more
# fine-grained but is it worth it for performance? While unicode has characters
# in the range 0, 0x110000, we only seem to have names for about 10% of those
# (131808 as I write this). With the ranges below we cover them all, with a
# density of ~67%; the biggest next gap we consider would only add about 1%
# density and there are 600 gaps that would need hard coding.
_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

# sentinel value to signal lack of a match
not_found = object()

class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in a :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)

@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things() # works

    >>> completer.do_experimental_things() # raises.

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should report
        a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements on
        any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False

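The parity-counting behaviour of ``has_open_quotes`` is easiest to see on a few inputs. A minimal runnable demo, with the function body copied from above so the snippet stands alone:

```python
def has_open_quotes(s):
    # Copied from the module above: returns the open quote char, or False.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    return False

# Double quotes are checked first, so they take precedence in mixed cases.
print(has_open_quotes('say "hi'))   # '"'  (one unmatched double quote)
print(has_open_quotes("it's"))      # "'"  (one unmatched single quote)
print(has_open_quotes('"done"'))    # False (all quotes paired)
```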

def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s


def expand_user(path: str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val


def compress_user(path: str, tilde_expand: bool, tilde_val: str) -> str:
    """Does the opposite of expand_user, with its outputs."""
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path

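The ``expand_user``/``compress_user`` pair is designed as a round trip: expand ``~`` to match real paths on disk, then compress matches back so completions are shown with the ``~`` the user typed. A runnable sketch with both functions copied from above (``HOME`` is pinned to a hypothetical value purely so the demo is deterministic; real code would not touch it):

```python
import os

def expand_user(path):
    # Copied from the module above: expand ~, remembering what was substituted.
    tilde_expand = False
    tilde_val = ''
    newpath = path
    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath
    return newpath, tilde_expand, tilde_val

def compress_user(path, tilde_expand, tilde_val):
    # The inverse: restore '~' in completion results shown to the user.
    return path.replace(tilde_val, '~') if tilde_expand else path

os.environ["HOME"] = "/home/alice"  # pinned only to make the demo deterministic
newpath, expanded, val = expand_user("~/notebooks")
# newpath == "/home/alice/notebooks", val == "/home/alice"
print(compress_user(newpath + "/demo.ipynb", expanded, val))  # ~/notebooks/demo.ipynb
```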

def completions_sorting_key(word):
    """key for sorting completions

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in the alphabetical order
      by their name
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if not "%" in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if not "%" in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2

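The effect of this key is that magics sort alphabetically by name (ignoring the ``%``) among regular completions, while underscore and dunder names sink to the end. A runnable demo with the key copied from above:

```python
def completions_sorting_key(word):
    # Copied from the module above: demote _/__ names, merge %magics alphabetically.
    prio1, prio2 = 0, 0
    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1
    if word.endswith('='):
        prio1 = -1
    if word.startswith('%%'):
        if not "%" in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if not "%" in word[1:]:
            word = word[1:]
            prio2 = 1
    return prio1, word, prio2

words = ["_private", "%timeit", "apple", "__dunder", "zebra"]
print(sorted(words, key=completions_sorting_key))
# ['apple', '%timeit', 'zebra', '_private', '__dunder']
```

Note that the key returns a tuple, so the sort is lexicographic: the underscore priority dominates, then the (magic-stripped) word, then the magic priority as a tie-breaker. ``sorted`` does not mutate ``words``: the stripped ``word`` only lives inside the key.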

class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0 so should likely be removed for 7.0

    """

    def __init__(self, name):

        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ""
        self._origin = "fake"
        self.text = "crashed"

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]


class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in a proper context manager.

    This acts as a middle ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how the
    code should be run/inspected, Prompt Toolkit (and other frontends) mostly
    need user-facing information:

    - Which range should be replaced by what.
    - Some metadata (like the completion type), or meta information to be
      displayed to the user.

    For debugging purposes we can also store the origin of the completion (``jedi``,
    ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(
        self,
        start: int,
        end: int,
        text: str,
        *,
        type: Optional[str] = None,
        _origin="",
        signature="",
    ) -> None:
        warnings.warn(
            "``Completion`` is a provisional API (as of IPython 6.0). "
            "It may change without warnings. "
            "Use in corresponding context manager.",
            category=ProvisionalCompleterWarning,
            stacklevel=2,
        )

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
            (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other) -> bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer the type), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing, as it depends on surrounding text, which Completions are not
        aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))


class SimpleCompletion:
    """Completion item to be included in the dictionary returned by new-style Matcher (API v2).

    .. warning::

        Provisional

        This class is used to describe the currently supported attributes of
        simple completion items, and any additional implementation details
        should not be relied on. Additional attributes may be included in
        future versions, and the meaning of text disambiguated from the current
        dual meaning of "text to insert" and "text to be used as a label".
    """

    __slots__ = ["text", "type"]

    def __init__(self, text: str, *, type: Optional[str] = None):
        self.text = text
        self.type = type

    def __repr__(self):
        return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"


class _MatcherResultBase(TypedDict):
    """Definition of dictionary to be returned by new-style Matcher (API v2)."""

    #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
    matched_fragment: NotRequired[str]

    #: Whether to suppress results from all other matchers (True), some
    #: matchers (set of identifiers) or none (False); default is False.
    suppress: NotRequired[Union[bool, Set[str]]]

    #: Identifiers of matchers which should NOT be suppressed when this matcher
    #: requests to suppress all other matchers; defaults to an empty set.
    do_not_suppress: NotRequired[Set[str]]

    #: Are completions already ordered and should be left as-is? default is False.
    ordered: NotRequired[bool]


@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
class SimpleMatcherResult(_MatcherResultBase, TypedDict):
    """Result of new-style completion matcher."""

    # note: TypedDict is added again to the inheritance chain
    # in order to get __orig_bases__ for documentation

    #: List of candidate completions
    completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]


class _JediMatcherResult(_MatcherResultBase):
    """Matching result returned by Jedi (will be processed differently)"""

    #: list of candidate completions
    completions: Iterator[_JediCompletionLike]


AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)


@dataclass
class CompletionContext:
    """Completion context provided as an argument to matchers in the Matcher API v2."""

    # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
    # which was not explicitly visible as an argument of the matcher, making any refactor
    # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
    # from the completer, and make substituting them in sub-classes easier.

    #: Relevant fragment of code directly preceding the cursor.
    #: The extraction of token is implemented via splitter heuristic
    #: (following readline behaviour for legacy reasons), which is user configurable
    #: (by switching the greedy mode).
    token: str

    #: The full available content of the editor or buffer
    full_text: str

    #: Cursor position in the line (the same for ``full_text`` and ``text``).
    cursor_position: int

    #: Cursor line in ``full_text``.
    cursor_line: int

    #: The maximum number of completions that will be used downstream.
    #: Matchers can use this information to abort early.
    #: The built-in Jedi matcher is currently excepted from this limit.
    #: If not given, return all possible completions.
    limit: Optional[int]

    @cached_property
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

    @cached_property
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]


#: Matcher results for API v2.
MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]


class _MatcherAPIv1Base(Protocol):
    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
    #: API version
    matcher_api_version: Optional[Literal[1]]

    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...


#: Protocol describing Matcher API v1.
MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]


class MatcherAPIv2(Protocol):
    """Protocol describing Matcher API v2."""

    #: API version
    matcher_api_version: Literal[2] = 2

    def __call__(self, context: CompletionContext) -> MatcherResult:
        """Call signature."""
        ...

    #: Used to construct the default matcher identifier
    __qualname__: str


Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]


def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 1


def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
    api_version = _get_matcher_api_version(matcher)
    return api_version == 2


def _is_sizable(value: Any) -> TypeGuard[Sized]:
    """Determines whether the object is sizable"""
    return hasattr(value, "__len__")


def _is_iterator(value: Any) -> TypeGuard[Iterator]:
    """Determines whether the object is an iterator"""
    return hasattr(value, "__next__")


def has_any_completions(result: MatcherResult) -> bool:
    """Check if any result includes any completions."""
    completions = result["completions"]
    if _is_sizable(completions):
        return len(completions) != 0
    if _is_iterator(completions):
        try:
            old_iterator = completions
            first = next(old_iterator)
            result["completions"] = cast(
                Iterator[SimpleCompletion],
                itertools.chain([first], old_iterator),
            )
            return True
        except StopIteration:
            return False
    raise ValueError(
        "Completions returned by matcher need to be an Iterator or a Sizable"
    )

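The subtle part of ``has_any_completions`` is the iterator branch: peeking at a lazy iterator consumes its first item, so the function re-chains that item back onto the result to avoid losing it. A simplified, self-contained copy of that logic demonstrates the behaviour:

```python
import itertools

def has_any_completions(result):
    # Simplified copy of the logic above: peek without losing the first item.
    completions = result["completions"]
    if hasattr(completions, "__len__"):        # sizable: just check the length
        return len(completions) != 0
    if hasattr(completions, "__next__"):       # iterator: peek, then re-chain
        try:
            first = next(completions)
            result["completions"] = itertools.chain([first], completions)
            return True
        except StopIteration:
            return False
    raise ValueError("Completions must be an Iterator or Sizable")

r = {"completions": iter(["alpha"])}
print(has_any_completions(r))          # True: one item was consumed to peek...
print(list(r["completions"]))          # ['alpha']: ...but it was chained back
```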

def completion_matcher(
    *,
    priority: Optional[float] = None,
    identifier: Optional[str] = None,
    api_version: int = 1,
):
    """Adds attributes describing the matcher.

    Parameters
    ----------
    priority : Optional[float]
        The priority of the matcher, determines the order of execution of matchers.
        Higher priority means that the matcher will be executed first. Defaults to 0.
    identifier : Optional[str]
        identifier of the matcher allowing users to modify the behaviour via traitlets,
        and also used for debugging (will be passed as ``origin`` with the completions).

        Defaults to the matcher function's ``__qualname__`` (for example,
        ``IPCompleter.file_matcher`` for the built-in matcher defined
        as a ``file_matcher`` method of the ``IPCompleter`` class).
    api_version : Optional[int]
        version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.
    """

    def wrapper(func: Matcher):
        func.matcher_priority = priority or 0  # type: ignore
        func.matcher_identifier = identifier or func.__qualname__  # type: ignore
        func.matcher_api_version = api_version  # type: ignore
        if TYPE_CHECKING:
            if api_version == 1:
                func = cast(MatcherAPIv1, func)
            elif api_version == 2:
                func = cast(MatcherAPIv2, func)
        return func

    return wrapper


def _get_matcher_priority(matcher: Matcher):
    return getattr(matcher, "matcher_priority", 0)


def _get_matcher_id(matcher: Matcher):
    return getattr(matcher, "matcher_identifier", matcher.__qualname__)


def _get_matcher_api_version(matcher):
    return getattr(matcher, "matcher_api_version", 1)


context_matcher = partial(completion_matcher, api_version=2)


805 805 _IC = Iterable[Completion]
806 806
807 807
808 808 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
809 809 """
810 810 Deduplicate a set of completions.
811 811
812 812 .. warning::
813 813
814 814 Unstable
815 815
816 816 This function is unstable, API may change without warning.
817 817
818 818 Parameters
819 819 ----------
820 820 text : str
821 821 text that should be completed.
822 822 completions : Iterator[Completion]
823 823 iterator over the completions to deduplicate
824 824
825 825 Yields
826 826 ------
827 827 `Completions` objects
828 828 Completions coming from multiple sources, may be different but end up having
829 829 the same effect when applied to ``text``. If this is the case, this will
830 830 consider completions as equal and only emit the first encountered.
831 831 Not folded in `completions()` yet for debugging purpose, and to detect when
832 832 the IPython completer does return things that Jedi does not, but should be
833 833 at some point.
834 834 """
835 835 completions = list(completions)
836 836 if not completions:
837 837 return
838 838
839 839 new_start = min(c.start for c in completions)
840 840 new_end = max(c.end for c in completions)
841 841
842 842 seen = set()
843 843 for c in completions:
844 844 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
845 845 if new_text not in seen:
846 846 yield c
847 847 seen.add(new_text)
848 848
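Both `_deduplicate_completions` above and `rectify_completions` below rely on the same trick: pad every completion with the surrounding text so that all candidates cover one common ``[new_start, new_end)`` range and become directly comparable. A minimal self-contained sketch of that normalization, using plain ``(start, end, replacement)`` tuples in place of `Completion` objects (an illustration-only simplification):

```python
def normalize(text, completions):
    # completions are (start, end, replacement) tuples standing in
    # for Completion objects; pad each out to the widest common range
    new_start = min(c[0] for c in completions)
    new_end = max(c[1] for c in completions)
    seen, out = set(), []
    for start, end, repl in completions:
        new_text = text[new_start:start] + repl + text[end:new_end]
        if new_text not in seen:  # deduplicate by final effect on `text`
            seen.add(new_text)
            out.append((new_start, new_end, new_text))
    return out

# 'foo.ba' with two candidates that differ only in the range they cover
result = normalize("foo.ba", [(4, 6, "bar"), (0, 6, "foo.bar")])
print(result)  # [(0, 6, 'foo.bar')]
```

Once normalized this way, completions from Jedi and from the IPython matchers can be compared and deduplicated by string equality alone.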
849 849
850 850 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
851 851 """
852 852 Rectify a set of completions to all have the same ``start`` and ``end``
853 853
854 854 .. warning::
855 855
856 856 Unstable
857 857
858 858 This function is unstable, API may change without warning.
859 859 It will also raise unless used in a proper context manager.
860 860
861 861 Parameters
862 862 ----------
863 863 text : str
864 864 text that should be completed.
865 865 completions : Iterator[Completion]
866 866 iterator over the completions to rectify
867 867 _debug : bool
868 868 Log failed completion
869 869
870 870 Notes
871 871 -----
872 872 :any:`jedi.api.classes.Completion` s returned by Jedi may not all have the
873 873 same start and end, though the Jupyter Protocol requires them to. This will readjust
874 874 the completion to have the same ``start`` and ``end`` by padding both
875 875 extremities with surrounding text.
876 876
877 877 During stabilisation this should support a ``_debug`` option to log which
878 878 completions are returned by the IPython completer but not found by Jedi, in
879 879 order to make upstream bug reports.
880 880 """
881 881 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
882 882 "It may change without warnings. "
883 883 "Use in corresponding context manager.",
884 884 category=ProvisionalCompleterWarning, stacklevel=2)
885 885
886 886 completions = list(completions)
887 887 if not completions:
888 888 return
889 889 starts = (c.start for c in completions)
890 890 ends = (c.end for c in completions)
891 891
892 892 new_start = min(starts)
893 893 new_end = max(ends)
894 894
895 895 seen_jedi = set()
896 896 seen_python_matches = set()
897 897 for c in completions:
898 898 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
899 899 if c._origin == 'jedi':
900 900 seen_jedi.add(new_text)
901 901 elif c._origin == "IPCompleter.python_matcher":
902 902 seen_python_matches.add(new_text)
903 903 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
904 904 diff = seen_python_matches.difference(seen_jedi)
905 905 if diff and _debug:
906 906 print('IPython.python matches have extras:', diff)
907 907
908 908
909 909 if sys.platform == 'win32':
910 910 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
911 911 else:
912 912 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
913 913
914 914 GREEDY_DELIMS = ' =\r\n'
915 915
916 916
917 917 class CompletionSplitter(object):
918 918 """An object to split an input line in a manner similar to readline.
919 919
920 920 By having our own implementation, we can expose readline-like completion in
921 921 a uniform manner to all frontends. This object only needs to be given the
922 922 line of text to be split and the cursor position on said line, and it
923 923 returns the 'word' to be completed on at the cursor after splitting the
924 924 entire line.
925 925
926 926 What characters are used as splitting delimiters can be controlled by
927 927 setting the ``delims`` attribute (this is a property that internally
928 928 automatically builds the necessary regular expression)"""
929 929
930 930 # Private interface
931 931
932 932 # A string of delimiter characters. The default value makes sense for
933 933 # IPython's most typical usage patterns.
934 934 _delims = DELIMS
935 935
936 936 # The expression (a normal string) to be compiled into a regular expression
937 937 # for actual splitting. We store it as an attribute mostly for ease of
938 938 # debugging, since this type of code can be so tricky to debug.
939 939 _delim_expr = None
940 940
941 941 # The regular expression that does the actual splitting
942 942 _delim_re = None
943 943
944 944 def __init__(self, delims=None):
945 945 delims = CompletionSplitter._delims if delims is None else delims
946 946 self.delims = delims
947 947
948 948 @property
949 949 def delims(self):
950 950 """Return the string of delimiter characters."""
951 951 return self._delims
952 952
953 953 @delims.setter
954 954 def delims(self, delims):
955 955 """Set the delimiters for line splitting."""
956 956 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
957 957 self._delim_re = re.compile(expr)
958 958 self._delims = delims
959 959 self._delim_expr = expr
960 960
961 961 def split_line(self, line, cursor_pos=None):
962 962 """Split a line of text with a cursor at the given position.
963 963 """
964 964 l = line if cursor_pos is None else line[:cursor_pos]
965 965 return self._delim_re.split(l)[-1]
966 966
967 967
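The splitting logic above boils down to a character-class regex built from the delimiter string; the word to complete is simply the last field of the split. A self-contained sketch mirroring `CompletionSplitter` with the non-Windows ``DELIMS``:

```python
import re

delims = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'  # non-Windows DELIMS
delim_re = re.compile('[' + ''.join('\\' + c for c in delims) + ']')

def split_line(line, cursor_pos=None):
    # keep only the text up to the cursor, then take the last fragment
    l = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(l)[-1]

print(split_line("print(foo.ba"))  # -> foo.ba
print(split_line("a = b.c", 5))    # -> b
```

Note that ``.`` is deliberately not a delimiter, so dotted attribute paths survive the split and can be handed to `attr_matches`.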
968 968
969 969 class Completer(Configurable):
970 970
971 971 greedy = Bool(
972 972 False,
973 973 help="""Activate greedy completion.
974 974
975 975 .. deprecated:: 8.8
976 976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977 977
978 978 When enabled in IPython 8.8 or newer, changes configuration as follows:
979 979
980 980 - ``Completer.evaluation = 'unsafe'``
981 981 - ``Completer.auto_close_dict_keys = True``
982 982 """,
983 983 ).tag(config=True)
984 984
985 985 evaluation = Enum(
986 986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 987 default_value="limited",
988 988 help="""Policy for code evaluation under completion.
989 989
990 990 Successive options enable increasingly eager evaluation for better
991 991 completion suggestions, including for nested dictionaries, nested lists,
992 992 or even results of function calls.
993 993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995 995
996 996 Allowed values are:
997 997
998 998 - ``forbidden``: no evaluation of code is permitted,
999 999 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 1000 no item/attribute evaluation, no access to locals/globals,
1001 1001 no evaluation of any operations or comparisons.
1002 1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 1006 - ``unsafe``: evaluation of all methods and function calls but not of
1007 1007 syntax with side-effects like `del x`,
1008 1008 - ``dangerous``: completely arbitrary evaluation.
1009 1009 """,
1010 1010 ).tag(config=True)
1011 1011
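In a configuration file this policy is set like any other trait; for example (the file name and values below follow the usual ``ipython_config.py`` conventions and are shown for illustration only):

```python
# ipython_config.py -- restrict completion-time evaluation
c.Completer.evaluation = "minimal"
c.Completer.auto_close_dict_keys = True
```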
1012 1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 1013 help="Experimental: Use Jedi to generate autocompletions. "
1014 1014 "Defaults to True if jedi is installed.").tag(config=True)
1015 1015
1016 1016 jedi_compute_type_timeout = Int(default_value=400,
1017 1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 1018 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1019 1019 performance by preventing Jedi from building its cache.
1020 1020 """).tag(config=True)
1021 1021
1022 1022 debug = Bool(default_value=False,
1023 1023 help='Enable debug for the Completer. Mostly print extra '
1024 1024 'information for experimental jedi integration.')\
1025 1025 .tag(config=True)
1026 1026
1027 1027 backslash_combining_completions = Bool(True,
1028 1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 1029 "Includes completion of latex commands, unicode names, and expanding "
1030 1030 "unicode characters back to latex commands.").tag(config=True)
1031 1031
1032 1032 auto_close_dict_keys = Bool(
1033 1033 False,
1034 1034 help="""
1035 1035 Enable auto-closing dictionary keys.
1036 1036
1037 1037 When enabled, string keys will be suffixed with a final quote
1038 1038 (matching the opening quote), tuple keys will also receive a
1039 1039 separating comma if needed, and keys which are final will
1040 1040 receive a closing bracket (``]``).
1041 1041 """,
1042 1042 ).tag(config=True)
1043 1043
1044 1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 1045 """Create a new completer for the command line.
1046 1046
1047 1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048 1048
1049 1049 If unspecified, the default namespace where completions are performed
1050 1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 1051 given as dictionaries.
1052 1052
1053 1053 An optional second namespace can be given. This allows the completer
1054 1054 to handle cases where both the local and global scopes need to be
1055 1055 distinguished.
1056 1056 """
1057 1057
1058 1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 1059 # specific namespace or to use __main__.__dict__. This will allow us
1060 1060 # to bind to __main__.__dict__ at completion time, not now.
1061 1061 if namespace is None:
1062 1062 self.use_main_ns = True
1063 1063 else:
1064 1064 self.use_main_ns = False
1065 1065 self.namespace = namespace
1066 1066
1067 1067 # The global namespace, if given, can be bound directly
1068 1068 if global_namespace is None:
1069 1069 self.global_namespace = {}
1070 1070 else:
1071 1071 self.global_namespace = global_namespace
1072 1072
1073 1073 self.custom_matchers = []
1074 1074
1075 1075 super(Completer, self).__init__(**kwargs)
1076 1076
1077 1077 def complete(self, text, state):
1078 1078 """Return the next possible completion for 'text'.
1079 1079
1080 1080 This is called successively with state == 0, 1, 2, ... until it
1081 1081 returns None. The completion should begin with 'text'.
1082 1082
1083 1083 """
1084 1084 if self.use_main_ns:
1085 1085 self.namespace = __main__.__dict__
1086 1086
1087 1087 if state == 0:
1088 1088 if "." in text:
1089 1089 self.matches = self.attr_matches(text)
1090 1090 else:
1091 1091 self.matches = self.global_matches(text)
1092 1092 try:
1093 1093 return self.matches[state]
1094 1094 except IndexError:
1095 1095 return None
1096 1096
1097 1097 def global_matches(self, text):
1098 1098 """Compute matches when text is a simple name.
1099 1099
1100 1100 Return a list of all keywords, built-in functions and names currently
1101 1101 defined in self.namespace or self.global_namespace that match.
1102 1102
1103 1103 """
1104 1104 matches = []
1105 1105 match_append = matches.append
1106 1106 n = len(text)
1107 1107 for lst in [
1108 1108 keyword.kwlist,
1109 1109 builtin_mod.__dict__.keys(),
1110 1110 list(self.namespace.keys()),
1111 1111 list(self.global_namespace.keys()),
1112 1112 ]:
1113 1113 for word in lst:
1114 1114 if word[:n] == text and word != "__builtins__":
1115 1115 match_append(word)
1116 1116
1117 1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 1119 shortened = {
1120 1120 "_".join([sub[0] for sub in word.split("_")]): word
1121 1121 for word in lst
1122 1122 if snake_case_re.match(word)
1123 1123 }
1124 1124 for word in shortened.keys():
1125 1125 if word[:n] == text and word != "__builtins__":
1126 1126 match_append(shortened[word])
1127 1127 return matches
1128 1128
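The second loop above implements abbreviation matching for snake_case names: typing the first letter of each segment, joined by underscores, matches the full name. A self-contained sketch of the shortening table it builds (the ``names`` list is made up for illustration):

```python
import re

snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
names = ["data_frame", "value", "read_csv_file"]

# map each abbreviation ("d_f") back to the full name ("data_frame")
shortened = {
    "_".join(sub[0] for sub in word.split("_")): word
    for word in names
    if snake_case_re.match(word)
}
print(shortened)  # {'d_f': 'data_frame', 'r_c_f': 'read_csv_file'}
```

Names without an underscore (``value`` here) fail the regex and are skipped, so plain identifiers never get spurious abbreviations.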
1129 1129 def attr_matches(self, text):
1130 1130 """Compute matches when text contains a dot.
1131 1131
1132 1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 1133 evaluatable in self.namespace or self.global_namespace, it will be
1134 1134 evaluated and its attributes (as revealed by dir()) are used as
1135 1135 possible completions. (For class instances, class members are
1136 1136 also considered.)
1137 1137
1138 1138 WARNING: this can still invoke arbitrary C code, if an object
1139 1139 with a __getattr__ hook is evaluated.
1140 1140
1141 1141 """
1142 1142 return self._attr_matches(text)[0]
1143 1143
1144 1144 def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
1145 1145 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1146 1146 if not m2:
1147 1147 return [], ""
1148 1148 expr, attr = m2.group(1, 2)
1149 1149
1150 1150 obj = self._evaluate_expr(expr)
1151 1151
1152 1152 if obj is not_found:
1153 1153 return [], ""
1154 1154
1155 1155 if self.limit_to__all__ and hasattr(obj, '__all__'):
1156 1156 words = get__all__entries(obj)
1157 1157 else:
1158 1158 words = dir2(obj)
1159 1159
1160 1160 try:
1161 1161 words = generics.complete_object(obj, words)
1162 1162 except TryNext:
1163 1163 pass
1164 1164 except AssertionError:
1165 1165 raise
1166 1166 except Exception:
1167 1167 # Silence errors from completion function
1168 1168 pass
1169 1169 # Build match list to return
1170 1170 n = len(attr)
1171 1171
1172 1172 # Note: ideally we would just return words here and the prefix
1173 1173 # reconciliator would know that we intend to append to rather than
1174 1174 # replace the input text; this requires refactoring to return range
1175 1175 # which ought to be replaced (as does jedi).
1176 1176 if include_prefix:
1177 1177 tokens = _parse_tokens(expr)
1178 1178 rev_tokens = reversed(tokens)
1179 1179 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1180 1180 name_turn = True
1181 1181
1182 1182 parts = []
1183 1183 for token in rev_tokens:
1184 1184 if token.type in skip_over:
1185 1185 continue
1186 1186 if token.type == tokenize.NAME and name_turn:
1187 1187 parts.append(token.string)
1188 1188 name_turn = False
1189 1189 elif (
1190 1190 token.type == tokenize.OP and token.string == "." and not name_turn
1191 1191 ):
1192 1192 parts.append(token.string)
1193 1193 name_turn = True
1194 1194 else:
1195 1195 # short-circuit if not empty nor name token
1196 1196 break
1197 1197
1198 1198 prefix_after_space = "".join(reversed(parts))
1199 1199 else:
1200 1200 prefix_after_space = ""
1201 1201
1202 1202 return (
1203 1203 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1204 1204 "." + attr,
1205 1205 )
1206 1206
1207 1207 def _evaluate_expr(self, expr):
1208 1208 obj = not_found
1209 1209 done = False
1210 1210 while not done and expr:
1211 1211 try:
1212 1212 obj = guarded_eval(
1213 1213 expr,
1214 1214 EvaluationContext(
1215 1215 globals=self.global_namespace,
1216 1216 locals=self.namespace,
1217 1217 evaluation=self.evaluation,
1218 1218 ),
1219 1219 )
1220 1220 done = True
1221 1221 except Exception as e:
1222 1222 if self.debug:
1223 1223 print("Evaluation exception", e)
1224 1224 # trim the expression to remove any invalid prefix
1225 1225 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1226 1226 # where parenthesis is not closed.
1227 1227 # TODO: make this faster by reusing parts of the computation?
1228 1228 expr = expr[1:]
1229 1229 return obj
1230 1230
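`_evaluate_expr` recovers from syntactically invalid prefixes by dropping leading characters one at a time until the remainder evaluates. A self-contained sketch of that loop, using plain `eval` where the real code uses `guarded_eval` (so this sketch has none of the safety guarantees of the evaluation policy):

```python
def trim_until_eval(expr, namespace):
    # drop invalid leading characters (e.g. an unclosed parenthesis)
    # until the remaining expression evaluates
    while expr:
        try:
            return eval(expr, {}, namespace), expr
        except Exception:
            expr = expr[1:]
    return None, ""

# user typed `(d[` so the completer sees the prefix `(d`
value, used = trim_until_eval("(d", {"d": {1: 2}})
print(value, used)  # {1: 2} d
```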
1231 1231 def get__all__entries(obj):
1232 1232 """returns the strings in the __all__ attribute"""
1233 1233 try:
1234 1234 words = getattr(obj, '__all__')
1235 1235 except:
1236 1236 return []
1237 1237
1238 1238 return [w for w in words if isinstance(w, str)]
1239 1239
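For instance, on a module that declares ``__all__``, only the string entries survive the filter (the module here is synthetic, built just for illustration):

```python
import types

demo = types.ModuleType("demo")
demo.__all__ = ["foo", "bar", 42]  # a stray non-string entry

# same filter as get__all__entries
words = [w for w in getattr(demo, "__all__", []) if isinstance(w, str)]
print(words)  # ['foo', 'bar']
```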
1240 1240
1241 1241 class _DictKeyState(enum.Flag):
1242 1242 """Represent state of the key match in context of other possible matches.
1243 1243
1244 1244 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1245 1245 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1246 1246 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1247 1247 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | IN_TUPLE}`
1248 1248 """
1249 1249
1250 1250 BASELINE = 0
1251 1251 END_OF_ITEM = enum.auto()
1252 1252 END_OF_TUPLE = enum.auto()
1253 1253 IN_TUPLE = enum.auto()
1254 1254
1255 1255
1256 1256 def _parse_tokens(c):
1257 1257 """Parse tokens even if there is an error."""
1258 1258 tokens = []
1259 1259 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1260 1260 while True:
1261 1261 try:
1262 1262 tokens.append(next(token_generator))
1263 1263 except tokenize.TokenError:
1264 1264 return tokens
1265 1265 except StopIteration:
1266 1266 return tokens
1267 1267
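A plain ``list(tokenize.generate_tokens(...))`` would raise on the incomplete input users type mid-completion; swallowing ``TokenError`` keeps the tokens produced before the failure. A self-contained sketch:

```python
import tokenize

def parse_tokens(code):
    # collect tokens until the generator is exhausted or errors out
    tokens = []
    gen = tokenize.generate_tokens(iter(code.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(gen))
        except (tokenize.TokenError, StopIteration):
            return tokens

# unclosed bracket: tokenization fails at EOF, but the tokens seen
# before the failure are still available for completion analysis
tokens = parse_tokens("d[('a',")
print([t.string for t in tokens if t.string])
```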
1268 1268
1269 1269 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1270 1270 """Match any valid Python numeric literal in a prefix of dictionary keys.
1271 1271
1272 1272 References:
1273 1273 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1274 1274 - https://docs.python.org/3/library/tokenize.html
1275 1275 """
1276 1276 if prefix[-1].isspace():
1277 1277 # if user typed a space we do not have anything to complete
1278 1278 # even if there was a valid number token before
1279 1279 return None
1280 1280 tokens = _parse_tokens(prefix)
1281 1281 rev_tokens = reversed(tokens)
1282 1282 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1283 1283 number = None
1284 1284 for token in rev_tokens:
1285 1285 if token.type in skip_over:
1286 1286 continue
1287 1287 if number is None:
1288 1288 if token.type == tokenize.NUMBER:
1289 1289 number = token.string
1290 1290 continue
1291 1291 else:
1292 1292 # we did not match a number
1293 1293 return None
1294 1294 if token.type == tokenize.OP:
1295 1295 if token.string == ",":
1296 1296 break
1297 1297 if token.string in {"+", "-"}:
1298 1298 number = token.string + number
1299 1299 else:
1300 1300 return None
1301 1301 return number
1302 1302
1303 1303
1304 1304 _INT_FORMATS = {
1305 1305 "0b": bin,
1306 1306 "0o": oct,
1307 1307 "0x": hex,
1308 1308 }
1309 1309
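These formatters let an integer key typed in binary/octal/hex notation be completed in the same notation; the formatter is picked from the first two characters of the typed prefix:

```python
_INT_FORMATS = {"0b": bin, "0o": oct, "0x": hex}

prefix = "0xF"  # user started typing a hex key
int_format = _INT_FORMATS.get(prefix[:2].lower())
print(int_format(255))  # 0xff
```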
1310 1310
1311 1311 def match_dict_keys(
1312 1312 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1313 1313 prefix: str,
1314 1314 delims: str,
1315 1315 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1316 1316 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1317 1317 """Used by dict_key_matches, matching the prefix to a list of keys
1318 1318
1319 1319 Parameters
1320 1320 ----------
1321 1321 keys
1322 1322 list of keys in dictionary currently being completed.
1323 1323 prefix
1324 1324 Part of the text already typed by the user. E.g. `mydict[b'fo`
1325 1325 delims
1326 1326 String of delimiters to consider when finding the current key.
1327 1327 extra_prefix : optional
1328 1328 Part of the text already typed in multi-key index cases. E.g. for
1329 1329 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1330 1330
1331 1331 Returns
1332 1332 -------
1333 1333 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1334 1334 ``quote`` being the quote that needs to be used to close the current string,
1335 1335 ``token_start`` the position where the replacement should start occurring,
1336 1336 and ``matched`` a dictionary mapping replacement/completion strings to
1337 1337 their match state.
1338 1338 """
1339 1339 prefix_tuple = extra_prefix if extra_prefix else ()
1340 1340
1341 1341 prefix_tuple_size = sum(
1342 1342 [
1343 1343 # for pandas, do not count slices as taking space
1344 1344 not isinstance(k, slice)
1345 1345 for k in prefix_tuple
1346 1346 ]
1347 1347 )
1348 1348 text_serializable_types = (str, bytes, int, float, slice)
1349 1349
1350 1350 def filter_prefix_tuple(key):
1351 1351 # Reject too short keys
1352 1352 if len(key) <= prefix_tuple_size:
1353 1353 return False
1354 1354 # Reject keys which cannot be serialised to text
1355 1355 for k in key:
1356 1356 if not isinstance(k, text_serializable_types):
1357 1357 return False
1358 1358 # Reject keys that do not match the prefix
1359 1359 for k, pt in zip(key, prefix_tuple):
1360 1360 if k != pt and not isinstance(pt, slice):
1361 1361 return False
1362 1362 # All checks passed!
1363 1363 return True
1364 1364
1365 filtered_key_is_final: Dict[
1366 Union[str, bytes, int, float], _DictKeyState
1367 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1365 filtered_key_is_final: Dict[Union[str, bytes, int, float], _DictKeyState] = (
1366 defaultdict(lambda: _DictKeyState.BASELINE)
1367 )
1368 1368
1369 1369 for k in keys:
1370 1370 # If at least one of the matches is not final, mark as undetermined.
1371 1371 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1372 1372 # `111` appears final on first match but is not final on the second.
1373 1373
1374 1374 if isinstance(k, tuple):
1375 1375 if filter_prefix_tuple(k):
1376 1376 key_fragment = k[prefix_tuple_size]
1377 1377 filtered_key_is_final[key_fragment] |= (
1378 1378 _DictKeyState.END_OF_TUPLE
1379 1379 if len(k) == prefix_tuple_size + 1
1380 1380 else _DictKeyState.IN_TUPLE
1381 1381 )
1382 1382 elif prefix_tuple_size > 0:
1383 1383 # we are completing a tuple but this key is not a tuple,
1384 1384 # so we should ignore it
1385 1385 pass
1386 1386 else:
1387 1387 if isinstance(k, text_serializable_types):
1388 1388 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1389 1389
1390 1390 filtered_keys = filtered_key_is_final.keys()
1391 1391
1392 1392 if not prefix:
1393 1393 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1394 1394
1395 1395 quote_match = re.search("(?:\"|')", prefix)
1396 1396 is_user_prefix_numeric = False
1397 1397
1398 1398 if quote_match:
1399 1399 quote = quote_match.group()
1400 1400 valid_prefix = prefix + quote
1401 1401 try:
1402 1402 prefix_str = literal_eval(valid_prefix)
1403 1403 except Exception:
1404 1404 return "", 0, {}
1405 1405 else:
1406 1406 # If it does not look like a string, let's assume
1407 1407 # we are dealing with a number or variable.
1408 1408 number_match = _match_number_in_dict_key_prefix(prefix)
1409 1409
1410 1410 # We do not want the key matcher to suggest variable names so we return early:
1411 1411 if number_match is None:
1412 1412 # The alternative would be to assume that user forgot the quote
1413 1413 # and if the substring matches, suggest adding it at the start.
1414 1414 return "", 0, {}
1415 1415
1416 1416 prefix_str = number_match
1417 1417 is_user_prefix_numeric = True
1418 1418 quote = ""
1419 1419
1420 1420 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1421 1421 token_match = re.search(pattern, prefix, re.UNICODE)
1422 1422 assert token_match is not None # silence mypy
1423 1423 token_start = token_match.start()
1424 1424 token_prefix = token_match.group()
1425 1425
1426 1426 matched: Dict[str, _DictKeyState] = {}
1427 1427
1428 1428 str_key: Union[str, bytes]
1429 1429
1430 1430 for key in filtered_keys:
1431 1431 if isinstance(key, (int, float)):
1432 1432 # User typed a number but this key is not a number.
1433 1433 if not is_user_prefix_numeric:
1434 1434 continue
1435 1435 str_key = str(key)
1436 1436 if isinstance(key, int):
1437 1437 int_base = prefix_str[:2].lower()
1438 1438 # if user typed integer using binary/oct/hex notation:
1439 1439 if int_base in _INT_FORMATS:
1440 1440 int_format = _INT_FORMATS[int_base]
1441 1441 str_key = int_format(key)
1442 1442 else:
1443 1443 # User typed a string but this key is a number.
1444 1444 if is_user_prefix_numeric:
1445 1445 continue
1446 1446 str_key = key
1447 1447 try:
1448 1448 if not str_key.startswith(prefix_str):
1449 1449 continue
1450 1450 except (AttributeError, TypeError, UnicodeError) as e:
1451 1451 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1452 1452 continue
1453 1453
1454 1454 # reformat remainder of key to begin with prefix
1455 1455 rem = str_key[len(prefix_str) :]
1456 1456 # force repr wrapped in '
1457 1457 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1458 1458 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1459 1459 if quote == '"':
1460 1460 # The entered prefix is quoted with ",
1461 1461 # but the match is quoted with '.
1462 1462 # A contained " hence needs escaping for comparison:
1463 1463 rem_repr = rem_repr.replace('"', '\\"')
1464 1464
1465 1465 # then reinsert prefix from start of token
1466 1466 match = "%s%s" % (token_prefix, rem_repr)
1467 1467
1468 1468 matched[match] = filtered_key_is_final[key]
1469 1469 return quote, token_start, matched
1470 1470
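The quote-handling part of `match_dict_keys` can be illustrated in isolation: close the typed prefix with its own quote, `literal_eval` it to recover the real key prefix, then filter the keys (the key list below is made up for illustration):

```python
import re
from ast import literal_eval

prefix = "b'fo"                                # as in: mydict[b'fo
quote = re.search("(?:\"|')", prefix).group()  # -> "'"
prefix_val = literal_eval(prefix + quote)      # -> b'fo'

keys = [b"foo", b"food", "foo"]
# keep only keys of the same text type that start with the prefix
matches = [k for k in keys
           if isinstance(k, type(prefix_val)) and k.startswith(prefix_val)]
print(matches)  # [b'foo', b'food']
```

Comparing only same-typed keys sidesteps the ``TypeError`` from ``b'a'.startswith('a')`` that the real implementation also guards against.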
1471 1471
1472 1472 def cursor_to_position(text:str, line:int, column:int)->int:
1473 1473 """
1474 1474 Convert the (line,column) position of the cursor in text to an offset in a
1475 1475 string.
1476 1476
1477 1477 Parameters
1478 1478 ----------
1479 1479 text : str
1480 1480 The text in which to calculate the cursor offset
1481 1481 line : int
1482 1482 Line of the cursor; 0-indexed
1483 1483 column : int
1484 1484 Column of the cursor 0-indexed
1485 1485
1486 1486 Returns
1487 1487 -------
1488 1488 Position of the cursor in ``text``, 0-indexed.
1489 1489
1490 1490 See Also
1491 1491 --------
1492 1492 position_to_cursor : reciprocal of this function
1493 1493
1494 1494 """
1495 1495 lines = text.split('\n')
1496 1496 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1497 1497
1498 1498 return sum(len(l) + 1 for l in lines[:line]) + column
1499 1499
1500 1500 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1501 1501 """
1502 1502 Convert the position of the cursor in text (0 indexed) to a line
1503 1503 number(0-indexed) and a column number (0-indexed) pair
1504 1504
1505 1505 Position should be a valid position in ``text``.
1506 1506
1507 1507 Parameters
1508 1508 ----------
1509 1509 text : str
1510 1510 The text in which to calculate the cursor offset
1511 1511 offset : int
1512 1512 Position of the cursor in ``text``, 0-indexed.
1513 1513
1514 1514 Returns
1515 1515 -------
1516 1516 (line, column) : (int, int)
1517 1517 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1518 1518
1519 1519 See Also
1520 1520 --------
1521 1521 cursor_to_position : reciprocal of this function
1522 1522
1523 1523 """
1524 1524
1525 1525 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1526 1526
1527 1527 before = text[:offset]
1528 1528 blines = before.split('\n') # ! splitlines trims trailing \n
1529 1529 line = before.count('\n')
1530 1530 col = len(blines[-1])
1531 1531 return line, col
1532 1532
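The two functions are inverses of each other; a compact self-contained round-trip check of the same arithmetic:

```python
def cursor_to_position(text, line, column):
    lines = text.split("\n")
    # each earlier line contributes its length plus the newline
    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text, offset):
    before = text[:offset]
    return before.count("\n"), len(before.split("\n")[-1])

text = "ab\ncd\nef"
assert cursor_to_position(text, 1, 1) == 4    # points at 'd'
assert position_to_cursor(text, 4) == (1, 1)  # round-trips
```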
1533 1533
1534 1534 def _safe_isinstance(obj, module, class_name, *attrs):
1535 1535 """Checks if obj is an instance of module.class_name if loaded
1536 1536 """
1537 1537 if module in sys.modules:
1538 1538 m = sys.modules[module]
1539 1539 for attr in [class_name, *attrs]:
1540 1540 m = getattr(m, attr)
1541 1541 return isinstance(obj, m)
1542 1542
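The point of the check above is to test a type without importing its module: it only consults ``sys.modules`` and implicitly returns ``None`` (falsy) when the module was never loaded. A self-contained sketch:

```python
import sys

def safe_isinstance(obj, module, class_name, *attrs):
    # never trigger an import: only consult already-loaded modules
    if module in sys.modules:
        m = sys.modules[module]
        for attr in [class_name, *attrs]:
            m = getattr(m, attr)
        return isinstance(obj, m)

assert safe_isinstance([], "builtins", "list") is True
assert safe_isinstance([], "never_imported_module", "Thing") is None
```

This keeps completion cheap: checking for, say, a pandas type never pays the cost of importing pandas.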
1543 1543
1544 1544 @context_matcher()
1545 1545 def back_unicode_name_matcher(context: CompletionContext):
1546 1546 """Match Unicode characters back to Unicode name
1547 1547
1548 1548 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API.
1549 1549 """
1550 1550 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1551 1551 return _convert_matcher_v1_result_to_v2(
1552 1552 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1553 1553 )
1554 1554
1555 1555
1556 1556 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1557 1557 """Match Unicode characters back to Unicode name
1558 1558
1559 1559 This does ``β˜ƒ`` -> ``\\snowman``
1560 1560
1561 1561 Note that snowman is not a valid python3 combining character but will be expanded,
1562 1562 though it will not be recombined back to the snowman character by the completion machinery.
1563 1563
1564 1564 Nor will this back-complete standard sequences like \\n, \\b ...
1565 1565
1566 1566 .. deprecated:: 8.6
1567 1567 You can use :meth:`back_unicode_name_matcher` instead.
1568 1568
1569 1569 Returns
1570 1570 -------
1571 1571
1572 1572 Return a tuple with two elements:
1573 1573
1574 1574 - The Unicode character that was matched (preceded with a backslash), or
1575 1575 empty string,
1576 1576 - a sequence (of 1), the name of the matched Unicode character, preceded
1577 1577 by a backslash, or empty if no match.
1578 1578 """
1579 1579 if len(text)<2:
1580 1580 return '', ()
1581 1581 maybe_slash = text[-2]
1582 1582 if maybe_slash != '\\':
1583 1583 return '', ()
1584 1584
1585 1585 char = text[-1]
1586 1586 # no expand on quote for completion in strings.
1587 1587 # nor backcomplete standard ascii keys
1588 1588 if char in string.ascii_letters or char in ('"',"'"):
1589 1589 return '', ()
1590 1590 try :
1591 1591 unic = unicodedata.name(char)
1592 1592 return '\\'+char,('\\'+unic,)
1593 1593 except KeyError:
1594 1594 pass
1595 1595 return '', ()
1596 1596
1597 1597
1598 1598 @context_matcher()
1599 1599 def back_latex_name_matcher(context: CompletionContext):
1600 1600 """Match latex characters back to unicode name
1601 1601
1602 1602 Same as :any:`back_latex_name_matches`, but adopted to new Matcher API.
1603 1603 """
1604 1604 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1605 1605 return _convert_matcher_v1_result_to_v2(
1606 1606 matches, type="latex", fragment=fragment, suppress_if_matches=True
1607 1607 )
1608 1608
1609 1609
1610 1610 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1611 1611 """Match latex characters back to unicode name
1612 1612
1613 1613 This does ``\\β„΅`` -> ``\\aleph``
1614 1614
1615 1615 .. deprecated:: 8.6
1616 1616 You can use :meth:`back_latex_name_matcher` instead.
1617 1617 """
1618 1618 if len(text)<2:
1619 1619 return '', ()
1620 1620 maybe_slash = text[-2]
1621 1621 if maybe_slash != '\\':
1622 1622 return '', ()
1623 1623
1624 1624
1625 1625 char = text[-1]
1626 1626 # no expand on quote for completion in strings.
1627 1627 # nor backcomplete standard ascii keys
1628 1628 if char in string.ascii_letters or char in ('"',"'"):
1629 1629 return '', ()
1630 1630 try :
1631 1631 latex = reverse_latex_symbol[char]
1632 1632 # '\\' replace the \ as well
1633 1633 return '\\'+char,[latex]
1634 1634 except KeyError:
1635 1635 pass
1636 1636 return '', ()
1637 1637
1638 1638
1639 1639 def _formatparamchildren(parameter) -> str:
1640 1640 """
1641 1641 Get parameter name and value from Jedi Private API
1642 1642
1643 1643 Jedi does not expose a simple way to get `param=value` from its API.
1644 1644
1645 1645 Parameters
1646 1646 ----------
1647 1647 parameter
1648 1648 Jedi's function `Param`
1649 1649
1650 1650 Returns
1651 1651 -------
1652 1652 A string like 'a', 'b=1', '*args', '**kwargs'
1653 1653
1654 1654 """
1655 1655 description = parameter.description
1656 1656 if not description.startswith('param '):
1657 1657 raise ValueError('Jedi function parameter description has changed format. '
1658 1658 'Expected "param ...", found %r.' % description)
1659 1659 return description[6:]
1660 1660
1661 1661 def _make_signature(completion)-> str:
1662 1662 """
1663 1663 Make the signature from a jedi completion
1664 1664
1665 1665 Parameters
1666 1666 ----------
1667 1667 completion : jedi.Completion
1668 1668 a Jedi completion object, which may or may not be of a function type
1669 1669
1670 1670 Returns
1671 1671 -------
1672 1672 a string consisting of the function signature, with the parenthesis but
1673 1673 without the function name. example:
1674 1674 `(a, *args, b=1, **kwargs)`
1675 1675
1676 1676 """
1677 1677
1678 1678 # it looks like this might work on jedi 0.17
1679 1679 if hasattr(completion, 'get_signatures'):
1680 1680 signatures = completion.get_signatures()
1681 1681 if not signatures:
1682 1682 return '(?)'
1683 1683
1684 1684 c0 = signatures[0]
1685 1685 return '('+c0.to_string().split('(', maxsplit=1)[1]
1686 1686
1687 1687 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1688 1688 for p in signature.defined_names()) if f])
1689 1689
1690 1690
1691 1691 _CompleteResult = Dict[str, MatcherResult]
1692 1692
1693 1693
1694 1694 DICT_MATCHER_REGEX = re.compile(
1695 1695 r"""(?x)
1696 1696 ( # match dict-referring - or any get item object - expression
1697 1697 .+
1698 1698 )
1699 1699 \[ # open bracket
1700 1700 \s* # and optional whitespace
1701 1701 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1702 1702 # and slices
1703 1703 ((?:(?:
1704 1704 (?: # closed string
1705 1705 [uUbB]? # string prefix (r not handled)
1706 1706 (?:
1707 1707 '(?:[^']|(?<!\\)\\')*'
1708 1708 |
1709 1709 "(?:[^"]|(?<!\\)\\")*"
1710 1710 )
1711 1711 )
1712 1712 |
1713 1713 # capture integers and slices
1714 1714 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1715 1715 |
1716 1716 # integer in bin/hex/oct notation
1717 1717 0[bBxXoO]_?(?:\w|\d)+
1718 1718 )
1719 1719 \s*,\s*
1720 1720 )*)
1721 1721 ((?:
1722 1722 (?: # unclosed string
1723 1723 [uUbB]? # string prefix (r not handled)
1724 1724 (?:
1725 1725 '(?:[^']|(?<!\\)\\')*
1726 1726 |
1727 1727 "(?:[^"]|(?<!\\)\\")*
1728 1728 )
1729 1729 )
1730 1730 |
1731 1731 # unfinished integer
1732 1732 (?:[-+]?\d+)
1733 1733 |
1734 1734 # integer in bin/hex/oct notation
1735 1735 0[bBxXoO]_?(?:\w|\d)+
1736 1736 )
1737 1737 )?
1738 1738 $
1739 1739 """
1740 1740 )
1741 1741
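A much-reduced sketch of what this pattern captures (single-quoted string keys only; the real ``DICT_MATCHER_REGEX`` also handles double quotes, string prefixes, integers, and slices):

```python
import re

# Simplified stand-in for DICT_MATCHER_REGEX: group 1 is the subscripted
# expression, group 2 the fully-typed keys, group 3 the unfinished fragment.
SIMPLE_DICT_KEY = re.compile(r"(.+)\[\s*((?:'[^']*'\s*,\s*)*)('[^']*)?$")

m = SIMPLE_DICT_KEY.search("data['a', 'b")
assert m.group(1) == "data"      # expression before the bracket
assert m.group(2) == "'a', "     # already-closed keys
assert m.group(3) == "'b"        # unfinished key being completed
```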
1742 1742
1743 1743 def _convert_matcher_v1_result_to_v2(
1744 1744 matches: Sequence[str],
1745 1745 type: str,
1746 1746 fragment: Optional[str] = None,
1747 1747 suppress_if_matches: bool = False,
1748 1748 ) -> SimpleMatcherResult:
1749 1749 """Utility to help with transition"""
1750 1750 result = {
1751 1751 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1752 1752 "suppress": bool(matches) if suppress_if_matches else False,
1753 1753 }
1754 1754 if fragment is not None:
1755 1755 result["matched_fragment"] = fragment
1756 1756 return cast(SimpleMatcherResult, result)
1757 1757
1758 1758
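The conversion can be mimicked with plain dicts (a sketch only; the real function builds ``SimpleCompletion`` objects and returns a typed ``SimpleMatcherResult``):

```python
# Plain-dict mimic of _convert_matcher_v1_result_to_v2: wrap bare strings
# into typed completion records; suppress other matchers only when matches
# exist and the caller asked for it. Illustrative, not IPython API.
def convert_v1_to_v2(matches, type, fragment=None, suppress_if_matches=False):
    result = {
        "completions": [{"text": m, "type": type} for m in matches],
        "suppress": bool(matches) if suppress_if_matches else False,
    }
    if fragment is not None:
        result["matched_fragment"] = fragment
    return result

r = convert_v1_to_v2(["%time", "%timeit"], type="magic", suppress_if_matches=True)
assert r["suppress"] is True
assert r["completions"][0] == {"text": "%time", "type": "magic"}
```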
1759 1759 class IPCompleter(Completer):
1760 1760 """Extension of the completer class with IPython-specific features"""
1761 1761
1762 1762 @observe('greedy')
1763 1763 def _greedy_changed(self, change):
1764 1764 """update the splitter and readline delims when greedy is changed"""
1765 1765 if change["new"]:
1766 1766 self.evaluation = "unsafe"
1767 1767 self.auto_close_dict_keys = True
1768 1768 self.splitter.delims = GREEDY_DELIMS
1769 1769 else:
1770 1770 self.evaluation = "limited"
1771 1771 self.auto_close_dict_keys = False
1772 1772 self.splitter.delims = DELIMS
1773 1773
1774 1774 dict_keys_only = Bool(
1775 1775 False,
1776 1776 help="""
1777 1777 Whether to show dict key matches only.
1778 1778
1779 1779 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1780 1780 """,
1781 1781 )
1782 1782
1783 1783 suppress_competing_matchers = UnionTrait(
1784 1784 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1785 1785 default_value=None,
1786 1786 help="""
1787 1787 Whether to suppress completions from other *Matchers*.
1788 1788
1789 1789 When set to ``None`` (default) the matchers will attempt to auto-detect
1790 1790 whether suppression of other matchers is desirable. For example, when
1791 1791 a line begins with ``%`` we expect a magic completion to be the only
1792 1792 applicable option, and after ``my_dict['`` we usually expect a
1793 1793 completion with an existing dictionary key.
1794 1794
1795 1795 If you want to disable this heuristic and see completions from all matchers,
1796 1796 set ``IPCompleter.suppress_competing_matchers = False``.
1797 1797 To disable the heuristic for specific matchers provide a dictionary mapping:
1798 1798 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1799 1799
1800 1800 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1801 1801 completions to the set of matchers with the highest priority;
1802 1802 this is equivalent to ``IPCompleter.merge_completions`` and
1803 1803 can be beneficial for performance, but will sometimes omit relevant
1804 1804 candidates from matchers further down the priority list.
1805 1805 """,
1806 1806 ).tag(config=True)
1807 1807
1808 1808 merge_completions = Bool(
1809 1809 True,
1810 1810 help="""Whether to merge completion results into a single list
1811 1811
1812 1812 If False, only the completion results from the first non-empty
1813 1813 completer will be returned.
1814 1814
1815 1815 As of version 8.6.0, setting the value to ``False`` is an alias for:
1816 1816 ``IPCompleter.suppress_competing_matchers = True``.
1817 1817 """,
1818 1818 ).tag(config=True)
1819 1819
1820 1820 disable_matchers = ListTrait(
1821 1821 Unicode(),
1822 1822 help="""List of matchers to disable.
1823 1823
1824 1824 The list should contain matcher identifiers (see :any:`completion_matcher`).
1825 1825 """,
1826 1826 ).tag(config=True)
1827 1827
1828 1828 omit__names = Enum(
1829 1829 (0, 1, 2),
1830 1830 default_value=2,
1831 1831 help="""Instruct the completer to omit private method names
1832 1832
1833 1833 Specifically, when completing on ``object.<tab>``.
1834 1834
1835 1835 When 2 [default]: all names that start with '_' will be excluded.
1836 1836
1837 1837 When 1: all 'magic' names (``__foo__``) will be excluded.
1838 1838
1839 1839 When 0: nothing will be excluded.
1840 1840 """
1841 1841 ).tag(config=True)
1842 1842 limit_to__all__ = Bool(False,
1843 1843 help="""
1844 1844 DEPRECATED as of version 5.0.
1845 1845
1846 1846 Instruct the completer to use __all__ for the completion
1847 1847
1848 1848 Specifically, when completing on ``object.<tab>``.
1849 1849
1850 1850 When True: only those names in obj.__all__ will be included.
1851 1851
1852 1852 When False [default]: the __all__ attribute is ignored
1853 1853 """,
1854 1854 ).tag(config=True)
1855 1855
1856 1856 profile_completions = Bool(
1857 1857 default_value=False,
1858 1858 help="If True, emit profiling data for completion subsystem using cProfile."
1859 1859 ).tag(config=True)
1860 1860
1861 1861 profiler_output_dir = Unicode(
1862 1862 default_value=".completion_profiles",
1863 1863 help="Template for path at which to output profile data for completions."
1864 1864 ).tag(config=True)
1865 1865
1866 1866 @observe('limit_to__all__')
1867 1867 def _limit_to_all_changed(self, change):
1868 1868 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1869 1869 'value has been deprecated since IPython 5.0; it will have no '
1870 1870 'effect and will be removed in a future version of IPython.',
1871 1871 UserWarning)
1872 1872
1873 1873 def __init__(
1874 1874 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1875 1875 ):
1876 1876 """IPCompleter() -> completer
1877 1877
1878 1878 Return a completer object.
1879 1879
1880 1880 Parameters
1881 1881 ----------
1882 1882 shell
1883 1883 a pointer to the ipython shell itself. This is needed
1884 1884 because this completer knows about magic functions, and those can
1885 1885 only be accessed via the ipython instance.
1886 1886 namespace : dict, optional
1887 1887 an optional dict where completions are performed.
1888 1888 global_namespace : dict, optional
1889 1889 secondary optional dict for completions, to
1890 1890 handle cases (such as IPython embedded inside functions) where
1891 1891 both Python scopes are visible.
1892 1892 config : Config
1893 1893 traitlets Config object
1894 1894 **kwargs
1895 1895 passed to super class unmodified.
1896 1896 """
1897 1897
1898 1898 self.magic_escape = ESC_MAGIC
1899 1899 self.splitter = CompletionSplitter()
1900 1900
1901 1901 # _greedy_changed() depends on splitter and readline being defined:
1902 1902 super().__init__(
1903 1903 namespace=namespace,
1904 1904 global_namespace=global_namespace,
1905 1905 config=config,
1906 1906 **kwargs,
1907 1907 )
1908 1908
1909 1909 # List where completion matches will be stored
1910 1910 self.matches = []
1911 1911 self.shell = shell
1912 1912 # Regexp to split filenames with spaces in them
1913 1913 self.space_name_re = re.compile(r'([^\\] )')
1914 1914 # Hold a local ref. to glob.glob for speed
1915 1915 self.glob = glob.glob
1916 1916
1917 1917 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1918 1918 # buffers, to avoid completion problems.
1919 1919 term = os.environ.get('TERM','xterm')
1920 1920 self.dumb_terminal = term in ['dumb','emacs']
1921 1921
1922 1922 # Special handling of backslashes needed in win32 platforms
1923 1923 if sys.platform == "win32":
1924 1924 self.clean_glob = self._clean_glob_win32
1925 1925 else:
1926 1926 self.clean_glob = self._clean_glob
1927 1927
1928 1928 #regexp to parse docstring for function signature
1929 1929 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1930 1930 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1931 1931 #use this if positional argument name is also needed
1932 1932 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1933 1933
1934 1934 self.magic_arg_matchers = [
1935 1935 self.magic_config_matcher,
1936 1936 self.magic_color_matcher,
1937 1937 ]
1938 1938
1939 1939 # This is set externally by InteractiveShell
1940 1940 self.custom_completers = None
1941 1941
1942 1942 # This is a list of names of unicode characters that can be completed
1943 1943 # into their corresponding unicode value. The list is large, so we
1944 1944 # lazily initialize it on first use. Consuming code should access this
1945 1945 # attribute through the `unicode_names` property.
1946 1946 self._unicode_names = None
1947 1947
1948 1948 self._backslash_combining_matchers = [
1949 1949 self.latex_name_matcher,
1950 1950 self.unicode_name_matcher,
1951 1951 back_latex_name_matcher,
1952 1952 back_unicode_name_matcher,
1953 1953 self.fwd_unicode_matcher,
1954 1954 ]
1955 1955
1956 1956 if not self.backslash_combining_completions:
1957 1957 for matcher in self._backslash_combining_matchers:
1958 1958 self.disable_matchers.append(_get_matcher_id(matcher))
1959 1959
1960 1960 if not self.merge_completions:
1961 1961 self.suppress_competing_matchers = True
1962 1962
1963 1963 @property
1964 1964 def matchers(self) -> List[Matcher]:
1965 1965 """All active matcher routines for completion"""
1966 1966 if self.dict_keys_only:
1967 1967 return [self.dict_key_matcher]
1968 1968
1969 1969 if self.use_jedi:
1970 1970 return [
1971 1971 *self.custom_matchers,
1972 1972 *self._backslash_combining_matchers,
1973 1973 *self.magic_arg_matchers,
1974 1974 self.custom_completer_matcher,
1975 1975 self.magic_matcher,
1976 1976 self._jedi_matcher,
1977 1977 self.dict_key_matcher,
1978 1978 self.file_matcher,
1979 1979 ]
1980 1980 else:
1981 1981 return [
1982 1982 *self.custom_matchers,
1983 1983 *self._backslash_combining_matchers,
1984 1984 *self.magic_arg_matchers,
1985 1985 self.custom_completer_matcher,
1986 1986 self.dict_key_matcher,
1987 1987 self.magic_matcher,
1988 1988 self.python_matcher,
1989 1989 self.file_matcher,
1990 1990 self.python_func_kw_matcher,
1991 1991 ]
1992 1992
1993 1993 def all_completions(self, text:str) -> List[str]:
1994 1994 """
1995 1995 Wrapper around the completion methods for the benefit of emacs.
1996 1996 """
1997 1997 prefix = text.rpartition('.')[0]
1998 1998 with provisionalcompleter():
1999 1999 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
2000 2000 for c in self.completions(text, len(text))]
2003 2003
2004 2004 def _clean_glob(self, text:str):
2005 2005 return self.glob("%s*" % text)
2006 2006
2007 2007 def _clean_glob_win32(self, text:str):
2008 2008 return [f.replace("\\","/")
2009 2009 for f in self.glob("%s*" % text)]
2010 2010
2011 2011 @context_matcher()
2012 2012 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2013 2013 """Same as :any:`file_matches`, but adapted to the new Matcher API."""
2014 2014 matches = self.file_matches(context.token)
2015 2015 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2016 2016 # starts with `/home/`, `C:\`, etc)
2017 2017 return _convert_matcher_v1_result_to_v2(matches, type="path")
2018 2018
2019 2019 def file_matches(self, text: str) -> List[str]:
2020 2020 """Match filenames, expanding ~USER type strings.
2021 2021
2022 2022 Most of the seemingly convoluted logic in this completer is an
2023 2023 attempt to handle filenames with spaces in them. And yet it's not
2024 2024 quite perfect, because Python's readline doesn't expose all of the
2025 2025 GNU readline details needed for this to be done correctly.
2026 2026
2027 2027 For a filename with a space in it, the printed completions will be
2028 2028 only the parts after what's already been typed (instead of the
2029 2029 full completions, as is normally done). I don't think with the
2030 2030 current (as of Python 2.3) Python readline it's possible to do
2031 2031 better.
2032 2032
2033 2033 .. deprecated:: 8.6
2034 2034 You can use :meth:`file_matcher` instead.
2035 2035 """
2036 2036
2037 2037 # chars that require escaping with backslash - i.e. chars
2038 2038 # that readline treats incorrectly as delimiters, but we
2039 2039 # don't want to treat as delimiters in filename matching
2040 2040 # when escaped with backslash
2041 2041 if text.startswith('!'):
2042 2042 text = text[1:]
2043 2043 text_prefix = u'!'
2044 2044 else:
2045 2045 text_prefix = u''
2046 2046
2047 2047 text_until_cursor = self.text_until_cursor
2048 2048 # track strings with open quotes
2049 2049 open_quotes = has_open_quotes(text_until_cursor)
2050 2050
2051 2051 if '(' in text_until_cursor or '[' in text_until_cursor:
2052 2052 lsplit = text
2053 2053 else:
2054 2054 try:
2055 2055 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2056 2056 lsplit = arg_split(text_until_cursor)[-1]
2057 2057 except ValueError:
2058 2058 # typically an unmatched ", or backslash without escaped char.
2059 2059 if open_quotes:
2060 2060 lsplit = text_until_cursor.split(open_quotes)[-1]
2061 2061 else:
2062 2062 return []
2063 2063 except IndexError:
2064 2064 # tab pressed on empty line
2065 2065 lsplit = ""
2066 2066
2067 2067 if not open_quotes and lsplit != protect_filename(lsplit):
2068 2068 # if protectables are found, do matching on the whole escaped name
2069 2069 has_protectables = True
2070 2070 text0,text = text,lsplit
2071 2071 else:
2072 2072 has_protectables = False
2073 2073 text = os.path.expanduser(text)
2074 2074
2075 2075 if text == "":
2076 2076 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2077 2077
2078 2078 # Compute the matches from the filesystem
2079 2079 if sys.platform == 'win32':
2080 2080 m0 = self.clean_glob(text)
2081 2081 else:
2082 2082 m0 = self.clean_glob(text.replace('\\', ''))
2083 2083
2084 2084 if has_protectables:
2085 2085 # If we had protectables, we need to revert our changes to the
2086 2086 # beginning of filename so that we don't double-write the part
2087 2087 # of the filename we have so far
2088 2088 len_lsplit = len(lsplit)
2089 2089 matches = [text_prefix + text0 +
2090 2090 protect_filename(f[len_lsplit:]) for f in m0]
2091 2091 else:
2092 2092 if open_quotes:
2093 2093 # if we have a string with an open quote, we don't need to
2094 2094 # protect the names beyond the quote (and we _shouldn't_, as
2095 2095 # it would cause bugs when the filesystem call is made).
2096 2096 matches = m0 if sys.platform == "win32" else\
2097 2097 [protect_filename(f, open_quotes) for f in m0]
2098 2098 else:
2099 2099 matches = [text_prefix +
2100 2100 protect_filename(f) for f in m0]
2101 2101
2102 2102 # Mark directories in input list by appending '/' to their names.
2103 2103 return [x+'/' if os.path.isdir(x) else x for x in matches]
2104 2104
2105 2105 @context_matcher()
2106 2106 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2107 2107 """Match magics."""
2108 2108 text = context.token
2109 2109 matches = self.magic_matches(text)
2110 2110 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2111 2111 is_magic_prefix = len(text) > 0 and text[0] == "%"
2112 2112 result["suppress"] = is_magic_prefix and bool(result["completions"])
2113 2113 return result
2114 2114
2115 2115 def magic_matches(self, text: str):
2116 2116 """Match magics.
2117 2117
2118 2118 .. deprecated:: 8.6
2119 2119 You can use :meth:`magic_matcher` instead.
2120 2120 """
2121 2121 # Get all shell magics now rather than statically, so magics loaded at
2122 2122 # runtime show up too.
2123 2123 lsm = self.shell.magics_manager.lsmagic()
2124 2124 line_magics = lsm['line']
2125 2125 cell_magics = lsm['cell']
2126 2126 pre = self.magic_escape
2127 2127 pre2 = pre+pre
2128 2128
2129 2129 explicit_magic = text.startswith(pre)
2130 2130
2131 2131 # Completion logic:
2132 2132 # - user gives %%: only do cell magics
2133 2133 # - user gives %: do both line and cell magics
2134 2134 # - no prefix: do both
2135 2135 # In other words, line magics are skipped if the user gives %% explicitly
2136 2136 #
2137 2137 # We also exclude magics that match any currently visible names:
2138 2138 # https://github.com/ipython/ipython/issues/4877, unless the user has
2139 2139 # typed a %:
2140 2140 # https://github.com/ipython/ipython/issues/10754
2141 2141 bare_text = text.lstrip(pre)
2142 2142 global_matches = self.global_matches(bare_text)
2143 2143 if not explicit_magic:
2144 2144 def matches(magic):
2145 2145 """
2146 2146 Filter magics, in particular remove magics that match
2147 2147 a name present in global namespace.
2148 2148 """
2149 2149 return ( magic.startswith(bare_text) and
2150 2150 magic not in global_matches )
2151 2151 else:
2152 2152 def matches(magic):
2153 2153 return magic.startswith(bare_text)
2154 2154
2155 2155 comp = [ pre2+m for m in cell_magics if matches(m)]
2156 2156 if not text.startswith(pre2):
2157 2157 comp += [ pre+m for m in line_magics if matches(m)]
2158 2158
2159 2159 return comp
2160 2160
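The prefix rules described in the comments above can be sketched with toy magic lists (``magic_candidates`` is a hypothetical helper, not IPython API):

```python
# Toy sketch of the completion logic documented above:
# '%%' restricts candidates to cell magics; '%' (or no prefix) offers both.
def magic_candidates(text, line_magics, cell_magics, pre='%'):
    pre2 = pre + pre
    bare = text.lstrip(pre)
    comp = [pre2 + m for m in cell_magics if m.startswith(bare)]
    if not text.startswith(pre2):
        comp += [pre + m for m in line_magics if m.startswith(bare)]
    return comp

assert magic_candidates('%%ti', ['time', 'timeit'], ['timeit']) == ['%%timeit']
assert magic_candidates('%ti', ['time', 'timeit'], ['timeit']) == [
    '%%timeit', '%time', '%timeit']
```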
2161 2161 @context_matcher()
2162 2162 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2163 2163 """Match class names and attributes for %config magic."""
2164 2164 # NOTE: uses `line_buffer` equivalent for compatibility
2165 2165 matches = self.magic_config_matches(context.line_with_cursor)
2166 2166 return _convert_matcher_v1_result_to_v2(matches, type="param")
2167 2167
2168 2168 def magic_config_matches(self, text: str) -> List[str]:
2169 2169 """Match class names and attributes for %config magic.
2170 2170
2171 2171 .. deprecated:: 8.6
2172 2172 You can use :meth:`magic_config_matcher` instead.
2173 2173 """
2174 2174 texts = text.strip().split()
2175 2175
2176 2176 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2177 2177 # get all configuration classes
2178 2178 classes = sorted(set([ c for c in self.shell.configurables
2179 2179 if c.__class__.class_traits(config=True)
2180 2180 ]), key=lambda x: x.__class__.__name__)
2181 2181 classnames = [ c.__class__.__name__ for c in classes ]
2182 2182
2183 2183 # return all classnames if config or %config is given
2184 2184 if len(texts) == 1:
2185 2185 return classnames
2186 2186
2187 2187 # match classname
2188 2188 classname_texts = texts[1].split('.')
2189 2189 classname = classname_texts[0]
2190 2190 classname_matches = [ c for c in classnames
2191 2191 if c.startswith(classname) ]
2192 2192
2193 2193 # return matched classes or the matched class with attributes
2194 2194 if texts[1].find('.') < 0:
2195 2195 return classname_matches
2196 2196 elif len(classname_matches) == 1 and \
2197 2197 classname_matches[0] == classname:
2198 2198 cls = classes[classnames.index(classname)].__class__
2199 2199 help = cls.class_get_help()
2200 2200 # strip leading '--' from cl-args:
2201 2201 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2202 2202 return [ attr.split('=')[0]
2203 2203 for attr in help.strip().splitlines()
2204 2204 if attr.startswith(texts[1]) ]
2205 2205 return []
2206 2206
2207 2207 @context_matcher()
2208 2208 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2209 2209 """Match color schemes for %colors magic."""
2210 2210 # NOTE: uses `line_buffer` equivalent for compatibility
2211 2211 matches = self.magic_color_matches(context.line_with_cursor)
2212 2212 return _convert_matcher_v1_result_to_v2(matches, type="param")
2213 2213
2214 2214 def magic_color_matches(self, text: str) -> List[str]:
2215 2215 """Match color schemes for %colors magic.
2216 2216
2217 2217 .. deprecated:: 8.6
2218 2218 You can use :meth:`magic_color_matcher` instead.
2219 2219 """
2220 2220 texts = text.split()
2221 2221 if text.endswith(' '):
2222 2222 # .split() strips off the trailing whitespace. Add '' back
2223 2223 # so that: '%colors ' -> ['%colors', '']
2224 2224 texts.append('')
2225 2225
2226 2226 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2227 2227 prefix = texts[1]
2228 2228 return [ color for color in InspectColors.keys()
2229 2229 if color.startswith(prefix) ]
2230 2230 return []
2231 2231
2232 2232 @context_matcher(identifier="IPCompleter.jedi_matcher")
2233 2233 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2234 2234 matches = self._jedi_matches(
2235 2235 cursor_column=context.cursor_position,
2236 2236 cursor_line=context.cursor_line,
2237 2237 text=context.full_text,
2238 2238 )
2239 2239 return {
2240 2240 "completions": matches,
2241 2241 # static analysis should not suppress other matchers
2242 2242 "suppress": False,
2243 2243 }
2244 2244
2245 2245 def _jedi_matches(
2246 2246 self, cursor_column: int, cursor_line: int, text: str
2247 2247 ) -> Iterator[_JediCompletionLike]:
2248 2248 """
2249 2249 Return a list of :any:`jedi.api.Completion`\\s from a ``text`` and
2250 2250 cursor position.
2251 2251
2252 2252 Parameters
2253 2253 ----------
2254 2254 cursor_column : int
2255 2255 column position of the cursor in ``text``, 0-indexed.
2256 2256 cursor_line : int
2257 2257 line position of the cursor in ``text``, 0-indexed
2258 2258 text : str
2259 2259 text to complete
2260 2260
2261 2261 Notes
2262 2262 -----
2263 2263 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2264 2264 object containing a string with the Jedi debug information attached.
2265 2265
2266 2266 .. deprecated:: 8.6
2267 2267 You can use :meth:`_jedi_matcher` instead.
2268 2268 """
2269 2269 namespaces = [self.namespace]
2270 2270 if self.global_namespace is not None:
2271 2271 namespaces.append(self.global_namespace)
2272 2272
2273 2273 completion_filter = lambda x:x
2274 2274 offset = cursor_to_position(text, cursor_line, cursor_column)
2275 2275 # filter output if we are completing for object members
2276 2276 if offset:
2277 2277 pre = text[offset-1]
2278 2278 if pre == '.':
2279 2279 if self.omit__names == 2:
2280 2280 completion_filter = lambda c:not c.name.startswith('_')
2281 2281 elif self.omit__names == 1:
2282 2282 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2283 2283 elif self.omit__names == 0:
2284 2284 completion_filter = lambda x:x
2285 2285 else:
2286 2286 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2287 2287
2288 2288 interpreter = jedi.Interpreter(text[:offset], namespaces)
2289 2289 try_jedi = True
2290 2290
2291 2291 try:
2292 2292 # find the first token in the current tree -- if it is a ' or " then we are in a string
2293 2293 completing_string = False
2294 2294 try:
2295 2295 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2296 2296 except StopIteration:
2297 2297 pass
2298 2298 else:
2299 2299 # note the value may be ', ", or it may also be ''' or """, or
2300 2300 # in some cases, """what/you/typed..., but all of these are
2301 2301 # strings.
2302 2302 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2303 2303
2304 2304 # if we are in a string jedi is likely not the right candidate for
2305 2305 # now. Skip it.
2306 2306 try_jedi = not completing_string
2307 2307 except Exception as e:
2308 2308 # many things can go wrong; we are using a private API, just don't crash.
2309 2309 if self.debug:
2310 2310 print("Error detecting if completing a non-finished string:", e, '|')
2311 2311
2312 2312 if not try_jedi:
2313 2313 return iter([])
2314 2314 try:
2315 2315 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2316 2316 except Exception as e:
2317 2317 if self.debug:
2318 2318 return iter(
2319 2319 [
2320 2320 _FakeJediCompletion(
2321 2321 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2322 2322 % (e)
2323 2323 )
2324 2324 ]
2325 2325 )
2326 2326 else:
2327 2327 return iter([])
2328 2328
2329 2329 @context_matcher()
2330 2330 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2331 2331 """Match attributes or global python names"""
2332 2332 text = context.line_with_cursor
2333 2333 if "." in text:
2334 2334 try:
2335 2335 matches, fragment = self._attr_matches(text, include_prefix=False)
2336 2336 if text.endswith(".") and self.omit__names:
2337 2337 if self.omit__names == 1:
2338 2338 # true if txt is _not_ a __ name, false otherwise:
2339 2339 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2340 2340 else:
2341 2341 # true if txt is _not_ a _ name, false otherwise:
2342 2342 no__name = (
2343 2343 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2344 2344 is None
2345 2345 )
2346 2346 matches = filter(no__name, matches)
2347 2347 return _convert_matcher_v1_result_to_v2(
2348 2348 matches, type="attribute", fragment=fragment
2349 2349 )
2350 2350 except NameError:
2351 2351 # catches <undefined attributes>.<tab>
2352 2352 matches = []
2353 2353 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2354 2354 else:
2355 2355 matches = self.global_matches(context.token)
2356 2356 # TODO: maybe distinguish between functions, modules and just "variables"
2357 2357 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2358 2358
2359 2359 @completion_matcher(api_version=1)
2360 2360 def python_matches(self, text: str) -> Iterable[str]:
2361 2361 """Match attributes or global python names.
2362 2362
2363 2363 .. deprecated:: 8.27
2364 2364 You can use :meth:`python_matcher` instead."""
2365 2365 if "." in text:
2366 2366 try:
2367 2367 matches = self.attr_matches(text)
2368 2368 if text.endswith('.') and self.omit__names:
2369 2369 if self.omit__names == 1:
2370 2370 # true if txt is _not_ a __ name, false otherwise:
2371 2371 no__name = (lambda txt:
2372 2372 re.match(r'.*\.__.*?__',txt) is None)
2373 2373 else:
2374 2374 # true if txt is _not_ a _ name, false otherwise:
2375 2375 no__name = (lambda txt:
2376 2376 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2377 2377 matches = filter(no__name, matches)
2378 2378 except NameError:
2379 2379 # catches <undefined attributes>.<tab>
2380 2380 matches = []
2381 2381 else:
2382 2382 matches = self.global_matches(text)
2383 2383 return matches
2384 2384
2385 2385 def _default_arguments_from_docstring(self, doc):
2386 2386 """Parse the first line of docstring for call signature.
2387 2387
2388 2388 Docstring should be of the form 'min(iterable[, key=func])\n'.
2389 2389 It can also parse Cython docstrings of the form
2390 2390 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2391 2391 """
2392 2392 if doc is None:
2393 2393 return []
2394 2394
2395 2395 # care only about the first line
2396 2396 line = doc.lstrip().splitlines()[0]
2397 2397
2398 2398 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2399 2399 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2400 2400 sig = self.docstring_sig_re.search(line)
2401 2401 if sig is None:
2402 2402 return []
2403 2403 # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
2404 2404 sig = sig.groups()[0].split(',')
2405 2405 ret = []
2406 2406 for s in sig:
2407 2407 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2408 2408 ret += self.docstring_kwd_re.findall(s)
2409 2409 return ret
2410 2410
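Applying the two regexes compiled in ``__init__`` to the ``min(iterable[, key=func])`` example from the docstring above:

```python
import re

# The signature and keyword regexes from __init__, applied to the
# 'min(iterable[, key=func])' example documented above.
docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')

line = 'min(iterable[, key=func])'
sig = docstring_sig_re.search(line).groups()[0]   # 'iterable[, key=func]'
names = [n for part in sig.split(',') for n in docstring_kwd_re.findall(part)]
assert names == ['key']  # only keyword (name=value) parameters are kept
```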
2411 2411 def _default_arguments(self, obj):
2412 2412 """Return the list of default arguments of obj if it is callable,
2413 2413 or empty list otherwise."""
2414 2414 call_obj = obj
2415 2415 ret = []
2416 2416 if inspect.isbuiltin(obj):
2417 2417 pass
2418 2418 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2419 2419 if inspect.isclass(obj):
2420 2420 #for cython embedsignature=True the constructor docstring
2421 2421 #belongs to the object itself not __init__
2422 2422 ret += self._default_arguments_from_docstring(
2423 2423 getattr(obj, '__doc__', ''))
2424 2424 # for classes, check for __init__,__new__
2425 2425 call_obj = (getattr(obj, '__init__', None) or
2426 2426 getattr(obj, '__new__', None))
2427 2427 # for all others, check if they are __call__able
2428 2428 elif hasattr(obj, '__call__'):
2429 2429 call_obj = obj.__call__
2430 2430 ret += self._default_arguments_from_docstring(
2431 2431 getattr(call_obj, '__doc__', ''))
2432 2432
2433 2433 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2434 2434 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2435 2435
2436 2436 try:
2437 2437 sig = inspect.signature(obj)
2438 2438 ret.extend(k for k, v in sig.parameters.items() if
2439 2439 v.kind in _keeps)
2440 2440 except ValueError:
2441 2441 pass
2442 2442
2443 2443 return list(set(ret))
2444 2444
2445 2445 @context_matcher()
2446 2446 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2447 2447 """Match named parameters (kwargs) of the last open function."""
2448 2448 matches = self.python_func_kw_matches(context.token)
2449 2449 return _convert_matcher_v1_result_to_v2(matches, type="param")
2450 2450
2451 2451 def python_func_kw_matches(self, text):
2452 2452 """Match named parameters (kwargs) of the last open function.
2453 2453
2454 2454 .. deprecated:: 8.6
2455 2455 You can use :meth:`python_func_kw_matcher` instead.
2456 2456 """
2457 2457
2458 2458 if "." in text: # a parameter cannot be dotted
2459 2459 return []
2460 2460 try: regexp = self.__funcParamsRegex
2461 2461 except AttributeError:
2462 2462 regexp = self.__funcParamsRegex = re.compile(r'''
2463 2463 '.*?(?<!\\)' | # single quoted strings or
2464 2464 ".*?(?<!\\)" | # double quoted strings or
2465 2465 \w+ | # identifier
2466 2466 \S # other characters
2467 2467 ''', re.VERBOSE | re.DOTALL)
2468 2468 # 1. find the nearest identifier that comes before an unclosed
2469 2469 # parenthesis before the cursor
2470 2470 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2471 2471 tokens = regexp.findall(self.text_until_cursor)
2472 2472 iterTokens = reversed(tokens); openPar = 0
2473 2473
2474 2474 for token in iterTokens:
2475 2475 if token == ')':
2476 2476 openPar -= 1
2477 2477 elif token == '(':
2478 2478 openPar += 1
2479 2479 if openPar > 0:
2480 2480 # found the last unclosed parenthesis
2481 2481 break
2482 2482 else:
2483 2483 return []
2484 2484 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2485 2485 ids = []
2486 2486 isId = re.compile(r'\w+$').match
2487 2487
2488 2488 while True:
2489 2489 try:
2490 2490 ids.append(next(iterTokens))
2491 2491 if not isId(ids[-1]):
2492 2492 ids.pop(); break
2493 2493 if not next(iterTokens) == '.':
2494 2494 break
2495 2495 except StopIteration:
2496 2496 break
2497 2497
2498 2498 # Find all named arguments already assigned to, so as to avoid suggesting
2499 2499 # them again
2500 2500 usedNamedArgs = set()
2501 2501 par_level = -1
2502 2502 for token, next_token in zip(tokens, tokens[1:]):
2503 2503 if token == '(':
2504 2504 par_level += 1
2505 2505 elif token == ')':
2506 2506 par_level -= 1
2507 2507
2508 2508 if par_level != 0:
2509 2509 continue
2510 2510
2511 2511 if next_token != '=':
2512 2512 continue
2513 2513
2514 2514 usedNamedArgs.add(token)
2515 2515
2516 2516 argMatches = []
2517 2517 try:
2518 2518 callableObj = '.'.join(ids[::-1])
2519 2519 namedArgs = self._default_arguments(eval(callableObj,
2520 2520 self.namespace))
2521 2521
2522 2522 # Remove used named arguments from the list, no need to show twice
2523 2523 for namedArg in set(namedArgs) - usedNamedArgs:
2524 2524 if namedArg.startswith(text):
2525 2525 argMatches.append("%s=" %namedArg)
2526 2526 except:
2527 2527 pass
2528 2528
2529 2529 return argMatches
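The keyword-argument discovery that `_default_arguments` performs for the code above can be approximated with `inspect.signature`. A minimal sketch, assuming nothing beyond the standard library (`default_arguments` and `demo` are illustrative helpers, not part of IPython):

```python
import inspect

def default_arguments(callable_obj):
    # Rough stand-in for the `_default_arguments` lookup used above:
    # collect parameter names that may be passed as keyword arguments.
    try:
        sig = inspect.signature(callable_obj)
    except (TypeError, ValueError):
        return []
    return [
        name
        for name, p in sig.parameters.items()
        if p.kind in (p.POSITIONAL_OR_KEYWORD, p.KEYWORD_ONLY)
    ]

def demo(x, y=1, *, flag=False):  # hypothetical callable
    pass

print(default_arguments(demo))  # → ['x', 'y', 'flag']
```

Names already used at the call site (the `usedNamedArgs` set above) would then be subtracted from this list before suggesting `name=` completions.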
2530 2530
2531 2531 @staticmethod
2532 2532 def _get_keys(obj: Any) -> List[Any]:
2533 2533 # Objects can define their own completions by defining an
2534 2534         # _ipython_key_completions_() method.
2535 2535 method = get_real_method(obj, '_ipython_key_completions_')
2536 2536 if method is not None:
2537 2537 return method()
2538 2538
2539 2539 # Special case some common in-memory dict-like types
2540 2540 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2541 2541 try:
2542 2542 return list(obj.keys())
2543 2543 except Exception:
2544 2544 return []
2545 2545 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2546 2546 try:
2547 2547 return list(obj.obj.keys())
2548 2548 except Exception:
2549 2549 return []
2550 2550 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2551 2551 _safe_isinstance(obj, 'numpy', 'void'):
2552 2552 return obj.dtype.names or []
2553 2553 return []
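The `_ipython_key_completions_` protocol checked first in `_get_keys` lets arbitrary objects supply their own key completions. A minimal sketch of the protocol and the dict fallback (`Catalog` and `get_keys` are hypothetical illustrations, not IPython code):

```python
# Any object can expose key completions for `obj[<tab>` by defining
# an `_ipython_key_completions_` method, as `_get_keys` checks above.
class Catalog:
    def __init__(self):
        self._data = {"alpha": 1, "beta": 2}

    def _ipython_key_completions_(self):
        # Return candidate keys for subscript completion.
        return list(self._data)

    def __getitem__(self, key):
        return self._data[key]


def get_keys(obj):
    # Simplified stand-in for the `_get_keys` logic above.
    method = getattr(obj, "_ipython_key_completions_", None)
    if method is not None:
        return method()
    if isinstance(obj, dict):
        return list(obj.keys())
    return []


print(get_keys(Catalog()))  # → ['alpha', 'beta'] via the protocol method
print(get_keys({"x": 0}))   # → ['x'] via the dict fallback
```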
2554 2554
2555 2555 @context_matcher()
2556 2556 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2557 2557 """Match string keys in a dictionary, after e.g. ``foo[``."""
2558 2558 matches = self.dict_key_matches(context.token)
2559 2559 return _convert_matcher_v1_result_to_v2(
2560 2560 matches, type="dict key", suppress_if_matches=True
2561 2561 )
2562 2562
2563 2563 def dict_key_matches(self, text: str) -> List[str]:
2564 2564 """Match string keys in a dictionary, after e.g. ``foo[``.
2565 2565
2566 2566 .. deprecated:: 8.6
2567 2567 You can use :meth:`dict_key_matcher` instead.
2568 2568 """
2569 2569
2570 2570 # Short-circuit on closed dictionary (regular expression would
2571 2571 # not match anyway, but would take quite a while).
2572 2572 if self.text_until_cursor.strip().endswith("]"):
2573 2573 return []
2574 2574
2575 2575 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2576 2576
2577 2577 if match is None:
2578 2578 return []
2579 2579
2580 2580 expr, prior_tuple_keys, key_prefix = match.groups()
2581 2581
2582 2582 obj = self._evaluate_expr(expr)
2583 2583
2584 2584 if obj is not_found:
2585 2585 return []
2586 2586
2587 2587 keys = self._get_keys(obj)
2588 2588 if not keys:
2589 2589 return keys
2590 2590
2591 2591 tuple_prefix = guarded_eval(
2592 2592 prior_tuple_keys,
2593 2593 EvaluationContext(
2594 2594 globals=self.global_namespace,
2595 2595 locals=self.namespace,
2596 2596 evaluation=self.evaluation, # type: ignore
2597 2597 in_subscript=True,
2598 2598 ),
2599 2599 )
2600 2600
2601 2601 closing_quote, token_offset, matches = match_dict_keys(
2602 2602 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2603 2603 )
2604 2604 if not matches:
2605 2605 return []
2606 2606
2607 2607 # get the cursor position of
2608 2608 # - the text being completed
2609 2609 # - the start of the key text
2610 2610 # - the start of the completion
2611 2611 text_start = len(self.text_until_cursor) - len(text)
2612 2612 if key_prefix:
2613 2613 key_start = match.start(3)
2614 2614 completion_start = key_start + token_offset
2615 2615 else:
2616 2616 key_start = completion_start = match.end()
2617 2617
2618 2618 # grab the leading prefix, to make sure all completions start with `text`
2619 2619 if text_start > key_start:
2620 2620 leading = ''
2621 2621 else:
2622 2622 leading = text[text_start:completion_start]
2623 2623
2624 2624 # append closing quote and bracket as appropriate
2625 2625 # this is *not* appropriate if the opening quote or bracket is outside
2626 2626 # the text given to this method, e.g. `d["""a\nt
2627 2627 can_close_quote = False
2628 2628 can_close_bracket = False
2629 2629
2630 2630 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2631 2631
2632 2632 if continuation.startswith(closing_quote):
2633 2633 # do not close if already closed, e.g. `d['a<tab>'`
2634 2634 continuation = continuation[len(closing_quote) :]
2635 2635 else:
2636 2636 can_close_quote = True
2637 2637
2638 2638 continuation = continuation.strip()
2639 2639
2640 2640 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2641 2641 # handling it is out of scope, so let's avoid appending suffixes.
2642 2642 has_known_tuple_handling = isinstance(obj, dict)
2643 2643
2644 2644 can_close_bracket = (
2645 2645 not continuation.startswith("]") and self.auto_close_dict_keys
2646 2646 )
2647 2647 can_close_tuple_item = (
2648 2648 not continuation.startswith(",")
2649 2649 and has_known_tuple_handling
2650 2650 and self.auto_close_dict_keys
2651 2651 )
2652 2652 can_close_quote = can_close_quote and self.auto_close_dict_keys
2653 2653
2654 2654         # fast path if closing quote should be appended but no suffix is allowed
2655 2655 if not can_close_quote and not can_close_bracket and closing_quote:
2656 2656 return [leading + k for k in matches]
2657 2657
2658 2658 results = []
2659 2659
2660 2660 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2661 2661
2662 2662 for k, state_flag in matches.items():
2663 2663 result = leading + k
2664 2664 if can_close_quote and closing_quote:
2665 2665 result += closing_quote
2666 2666
2667 2667 if state_flag == end_of_tuple_or_item:
2668 2668 # We do not know which suffix to add,
2669 2669 # e.g. both tuple item and string
2670 2670 # match this item.
2671 2671 pass
2672 2672
2673 2673 if state_flag in end_of_tuple_or_item and can_close_bracket:
2674 2674 result += "]"
2675 2675 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2676 2676 result += ", "
2677 2677 results.append(result)
2678 2678 return results
2679 2679
2680 2680 @context_matcher()
2681 2681 def unicode_name_matcher(self, context: CompletionContext):
2682 2682 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2683 2683 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2684 2684 return _convert_matcher_v1_result_to_v2(
2685 2685 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2686 2686 )
2687 2687
2688 2688 @staticmethod
2689 2689 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2690 2690         """Match Latex-like syntax for unicode characters based
2691 2691         on the name of the character.
2692 2692
2693 2693 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2694 2694
2695 2695         Works only on valid python 3 identifiers, or on combining characters that
2696 2696         will combine to form a valid identifier.
2697 2697 """
2698 2698 slashpos = text.rfind('\\')
2699 2699 if slashpos > -1:
2700 2700 s = text[slashpos+1:]
2701 2701 try :
2702 2702 unic = unicodedata.lookup(s)
2703 2703 # allow combining chars
2704 2704 if ('a'+unic).isidentifier():
2705 2705 return '\\'+s,[unic]
2706 2706 except KeyError:
2707 2707 pass
2708 2708 return '', []
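The lookup in `unicode_name_matches` above can be exercised standalone; a sketch of the same logic around `unicodedata.lookup`, usable outside the completer class:

```python
import unicodedata

def unicode_name_matches(text):
    # Simplified stand-in for the method above: complete
    # "\GREEK SMALL LETTER ETA" to the character it names.
    slashpos = text.rfind("\\")
    if slashpos > -1:
        s = text[slashpos + 1:]
        try:
            unic = unicodedata.lookup(s)
            # Allow combining characters: test identifier validity
            # with a leading letter prepended.
            if ("a" + unic).isidentifier():
                return "\\" + s, [unic]
        except KeyError:
            pass
    return "", []

print(unicode_name_matches("\\GREEK SMALL LETTER ETA"))
```

The returned fragment (the backslash plus the typed name) tells the frontend how much text to replace with the match.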
2709 2709
2710 2710 @context_matcher()
2711 2711 def latex_name_matcher(self, context: CompletionContext):
2712 2712 """Match Latex syntax for unicode characters.
2713 2713
2714 2714 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2715 2715 """
2716 2716 fragment, matches = self.latex_matches(context.text_until_cursor)
2717 2717 return _convert_matcher_v1_result_to_v2(
2718 2718 matches, type="latex", fragment=fragment, suppress_if_matches=True
2719 2719 )
2720 2720
2721 2721 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2722 2722 """Match Latex syntax for unicode characters.
2723 2723
2724 2724 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2725 2725
2726 2726 .. deprecated:: 8.6
2727 2727 You can use :meth:`latex_name_matcher` instead.
2728 2728 """
2729 2729 slashpos = text.rfind('\\')
2730 2730 if slashpos > -1:
2731 2731 s = text[slashpos:]
2732 2732 if s in latex_symbols:
2733 2733 # Try to complete a full latex symbol to unicode
2734 2734 # \\alpha -> Ξ±
2735 2735 return s, [latex_symbols[s]]
2736 2736 else:
2737 2737 # If a user has partially typed a latex symbol, give them
2738 2738 # a full list of options \al -> [\aleph, \alpha]
2739 2739 matches = [k for k in latex_symbols if k.startswith(s)]
2740 2740 if matches:
2741 2741 return s, matches
2742 2742 return '', ()
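The two-stage behaviour of `latex_matches` above (exact symbol to character, otherwise prefix to list of symbol names) can be sketched with a tiny stand-in table; the real `latex_symbols` maps hundreds of names:

```python
# Tiny stand-in for IPython's `latex_symbols` table (illustrative only).
latex_symbols = {"\\alpha": "\u03b1", "\\aleph": "\u2135"}

def latex_matches(text):
    # Mirror of the logic above: full symbol → unicode character,
    # partial symbol → list of candidate symbol names.
    slashpos = text.rfind("\\")
    if slashpos > -1:
        s = text[slashpos:]
        if s in latex_symbols:
            return s, [latex_symbols[s]]
        matches = [k for k in latex_symbols if k.startswith(s)]
        if matches:
            return s, matches
    return "", ()

print(latex_matches("\\alpha"))  # → ('\\alpha', ['α'])
print(latex_matches("\\al"))     # → ('\\al', ['\\alpha', '\\aleph'])
```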
2743 2743
2744 2744 @context_matcher()
2745 2745 def custom_completer_matcher(self, context):
2746 2746 """Dispatch custom completer.
2747 2747
2748 2748 If a match is found, suppresses all other matchers except for Jedi.
2749 2749 """
2750 2750 matches = self.dispatch_custom_completer(context.token) or []
2751 2751 result = _convert_matcher_v1_result_to_v2(
2752 2752 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2753 2753 )
2754 2754 result["ordered"] = True
2755 2755 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2756 2756 return result
2757 2757
2758 2758 def dispatch_custom_completer(self, text):
2759 2759 """
2760 2760 .. deprecated:: 8.6
2761 2761 You can use :meth:`custom_completer_matcher` instead.
2762 2762 """
2763 2763 if not self.custom_completers:
2764 2764 return
2765 2765
2766 2766 line = self.line_buffer
2767 2767 if not line.strip():
2768 2768 return None
2769 2769
2770 2770 # Create a little structure to pass all the relevant information about
2771 2771 # the current completion to any custom completer.
2772 2772 event = SimpleNamespace()
2773 2773 event.line = line
2774 2774 event.symbol = text
2775 2775 cmd = line.split(None,1)[0]
2776 2776 event.command = cmd
2777 2777 event.text_until_cursor = self.text_until_cursor
2778 2778
2779 2779 # for foo etc, try also to find completer for %foo
2780 2780 if not cmd.startswith(self.magic_escape):
2781 2781 try_magic = self.custom_completers.s_matches(
2782 2782 self.magic_escape + cmd)
2783 2783 else:
2784 2784 try_magic = []
2785 2785
2786 2786 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2787 2787 try_magic,
2788 2788 self.custom_completers.flat_matches(self.text_until_cursor)):
2789 2789 try:
2790 2790 res = c(event)
2791 2791 if res:
2792 2792 # first, try case sensitive match
2793 2793 withcase = [r for r in res if r.startswith(text)]
2794 2794 if withcase:
2795 2795 return withcase
2796 2796 # if none, then case insensitive ones are ok too
2797 2797 text_low = text.lower()
2798 2798 return [r for r in res if r.lower().startswith(text_low)]
2799 2799 except TryNext:
2800 2800 pass
2801 2801 except KeyboardInterrupt:
2802 2802 """
2803 2803                 If a custom completer takes too long,
2804 2804                 let the keyboard interrupt abort it and return nothing.
2805 2805 """
2806 2806 break
2807 2807
2808 2808 return None
2809 2809
2810 2810 def completions(self, text: str, offset: int)->Iterator[Completion]:
2811 2811 """
2812 2812 Returns an iterator over the possible completions
2813 2813
2814 2814 .. warning::
2815 2815
2816 2816 Unstable
2817 2817
2818 2818 This function is unstable, API may change without warning.
2819 2819             It will also raise unless used in the proper context manager.
2820 2820
2821 2821 Parameters
2822 2822 ----------
2823 2823 text : str
2824 2824 Full text of the current input, multi line string.
2825 2825 offset : int
2826 2826 Integer representing the position of the cursor in ``text``. Offset
2827 2827 is 0-based indexed.
2828 2828
2829 2829 Yields
2830 2830 ------
2831 2831 Completion
2832 2832
2833 2833 Notes
2834 2834 -----
2835 2835         The cursor in a text can either be seen as being "in between"
2836 2836         characters or "on" a character, depending on the interface visible to
2837 2837         the user. For consistency, the cursor being "in between" characters X
2838 2838         and Y is equivalent to the cursor being "on" character Y; that is to say,
2839 2839         the character the cursor is on is considered as being after the cursor.
2840 2840
2841 2841         Combining characters may span more than one position in the
2842 2842 text.
2843 2843
2844 2844 .. note::
2845 2845
2846 2846             If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2847 2847             fake Completion token to distinguish completions returned by Jedi
2848 2848             from usual IPython completion.
2849 2849
2850 2850 .. note::
2851 2851
2852 2852 Completions are not completely deduplicated yet. If identical
2853 2853 completions are coming from different sources this function does not
2854 2854 ensure that each completion object will only be present once.
2855 2855 """
2856 2856 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2857 2857 "It may change without warnings. "
2858 2858 "Use in corresponding context manager.",
2859 2859 category=ProvisionalCompleterWarning, stacklevel=2)
2860 2860
2861 2861 seen = set()
2862 2862 profiler:Optional[cProfile.Profile]
2863 2863 try:
2864 2864 if self.profile_completions:
2865 2865 import cProfile
2866 2866 profiler = cProfile.Profile()
2867 2867 profiler.enable()
2868 2868 else:
2869 2869 profiler = None
2870 2870
2871 2871 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2872 2872 if c and (c in seen):
2873 2873 continue
2874 2874 yield c
2875 2875 seen.add(c)
2876 2876 except KeyboardInterrupt:
2877 2877             """if completions take too long and the user sends a keyboard interrupt,
2878 2878 do not crash and return ASAP. """
2879 2879 pass
2880 2880 finally:
2881 2881 if profiler is not None:
2882 2882 profiler.disable()
2883 2883 ensure_dir_exists(self.profiler_output_dir)
2884 2884 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2885 2885 print("Writing profiler output to", output_path)
2886 2886 profiler.dump_stats(output_path)
2887 2887
2888 2888 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2889 2889 """
2890 2890         Core completion module. Same signature as :any:`completions`, with the
2891 2891 extra `timeout` parameter (in seconds).
2892 2892
2893 2893 Computing jedi's completion ``.type`` can be quite expensive (it is a
2894 2894         lazy property) and can require some warm-up, more warm-up than just
2895 2895         computing the ``name`` of a completion. The warm-up can be:
2896 2896
2897 2897 - Long warm-up the first time a module is encountered after
2898 2898 install/update: actually build parse/inference tree.
2899 2899
2900 2900 - first time the module is encountered in a session: load tree from
2901 2901 disk.
2902 2902
2903 2903 We don't want to block completions for tens of seconds so we give the
2904 2904 completer a "budget" of ``_timeout`` seconds per invocation to compute
2905 2905         completion types; the completions that have not yet been computed will
2906 2906         be marked as "unknown" and will have a chance to be computed next round
2907 2907         as things get cached.
2908 2908
2909 2909         Keep in mind that Jedi is not the only thing processing the completion, so
2910 2910         keep the timeout short-ish: if we take more than 0.3 seconds we still
2911 2911         have lots of processing to do.
2912 2912
2913 2913 """
2914 2914 deadline = time.monotonic() + _timeout
2915 2915
2916 2916 before = full_text[:offset]
2917 2917 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2918 2918
2919 2919 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2920 2920
2921 2921 def is_non_jedi_result(
2922 2922 result: MatcherResult, identifier: str
2923 2923 ) -> TypeGuard[SimpleMatcherResult]:
2924 2924 return identifier != jedi_matcher_id
2925 2925
2926 2926 results = self._complete(
2927 2927 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2928 2928 )
2929 2929
2930 2930 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2931 2931 identifier: result
2932 2932 for identifier, result in results.items()
2933 2933 if is_non_jedi_result(result, identifier)
2934 2934 }
2935 2935
2936 2936 jedi_matches = (
2937 2937 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2938 2938 if jedi_matcher_id in results
2939 2939 else ()
2940 2940 )
2941 2941
2942 2942 iter_jm = iter(jedi_matches)
2943 2943 if _timeout:
2944 2944 for jm in iter_jm:
2945 2945 try:
2946 2946 type_ = jm.type
2947 2947 except Exception:
2948 2948 if self.debug:
2949 2949 print("Error in Jedi getting type of ", jm)
2950 2950 type_ = None
2951 2951 delta = len(jm.name_with_symbols) - len(jm.complete)
2952 2952 if type_ == 'function':
2953 2953 signature = _make_signature(jm)
2954 2954 else:
2955 2955 signature = ''
2956 2956 yield Completion(start=offset - delta,
2957 2957 end=offset,
2958 2958 text=jm.name_with_symbols,
2959 2959 type=type_,
2960 2960 signature=signature,
2961 2961 _origin='jedi')
2962 2962
2963 2963 if time.monotonic() > deadline:
2964 2964 break
2965 2965
2966 2966 for jm in iter_jm:
2967 2967 delta = len(jm.name_with_symbols) - len(jm.complete)
2968 2968 yield Completion(
2969 2969 start=offset - delta,
2970 2970 end=offset,
2971 2971 text=jm.name_with_symbols,
2972 2972 type=_UNKNOWN_TYPE, # don't compute type for speed
2973 2973 _origin="jedi",
2974 2974 signature="",
2975 2975 )
2976 2976
2977 2977 # TODO:
2978 2978 # Suppress this, right now just for debug.
2979 2979 if jedi_matches and non_jedi_results and self.debug:
2980 2980 some_start_offset = before.rfind(
2981 2981 next(iter(non_jedi_results.values()))["matched_fragment"]
2982 2982 )
2983 2983 yield Completion(
2984 2984 start=some_start_offset,
2985 2985 end=offset,
2986 2986 text="--jedi/ipython--",
2987 2987 _origin="debug",
2988 2988 type="none",
2989 2989 signature="",
2990 2990 )
2991 2991
2992 2992 ordered: List[Completion] = []
2993 2993 sortable: List[Completion] = []
2994 2994
2995 2995 for origin, result in non_jedi_results.items():
2996 2996 matched_text = result["matched_fragment"]
2997 2997 start_offset = before.rfind(matched_text)
2998 2998 is_ordered = result.get("ordered", False)
2999 2999 container = ordered if is_ordered else sortable
3000 3000
3001 3001 # I'm unsure if this is always true, so let's assert and see if it
3002 3002             # crashes
3003 3003 assert before.endswith(matched_text)
3004 3004
3005 3005 for simple_completion in result["completions"]:
3006 3006 completion = Completion(
3007 3007 start=start_offset,
3008 3008 end=offset,
3009 3009 text=simple_completion.text,
3010 3010 _origin=origin,
3011 3011 signature="",
3012 3012 type=simple_completion.type or _UNKNOWN_TYPE,
3013 3013 )
3014 3014 container.append(completion)
3015 3015
3016 3016 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
3017 3017 :MATCHES_LIMIT
3018 3018 ]
3019 3019
3020 3020 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
3021 3021 """Find completions for the given text and line context.
3022 3022
3023 3023 Note that both the text and the line_buffer are optional, but at least
3024 3024 one of them must be given.
3025 3025
3026 3026 Parameters
3027 3027 ----------
3028 3028 text : string, optional
3029 3029 Text to perform the completion on. If not given, the line buffer
3030 3030 is split using the instance's CompletionSplitter object.
3031 3031 line_buffer : string, optional
3032 3032 If not given, the completer attempts to obtain the current line
3033 3033 buffer via readline. This keyword allows clients which are
3034 3034             requesting text completions in non-readline contexts to inform
3035 3035 the completer of the entire text.
3036 3036 cursor_pos : int, optional
3037 3037 Index of the cursor in the full line buffer. Should be provided by
3038 3038             remote frontends where the kernel has no access to frontend state.
3039 3039
3040 3040 Returns
3041 3041 -------
3042 3042 Tuple of two items:
3043 3043 text : str
3044 3044 Text that was actually used in the completion.
3045 3045 matches : list
3046 3046 A list of completion matches.
3047 3047
3048 3048 Notes
3049 3049 -----
3050 3050 This API is likely to be deprecated and replaced by
3051 3051 :any:`IPCompleter.completions` in the future.
3052 3052
3053 3053 """
3054 3054 warnings.warn('`Completer.complete` is pending deprecation since '
3055 3055 'IPython 6.0 and will be replaced by `Completer.completions`.',
3056 3056 PendingDeprecationWarning)
3057 3057         # potential todo, FOLD the 3rd throw-away argument of _complete
3058 3058         # into the first 2 ones.
3059 3059 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3060 3060 # TODO: should we deprecate now, or does it stay?
3061 3061
3062 3062 results = self._complete(
3063 3063 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3064 3064 )
3065 3065
3066 3066 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3067 3067
3068 3068 return self._arrange_and_extract(
3069 3069 results,
3070 3070 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3071 3071 skip_matchers={jedi_matcher_id},
3072 3072 # this API does not support different start/end positions (fragments of token).
3073 3073 abort_if_offset_changes=True,
3074 3074 )
3075 3075
3076 3076 def _arrange_and_extract(
3077 3077 self,
3078 3078 results: Dict[str, MatcherResult],
3079 3079 skip_matchers: Set[str],
3080 3080 abort_if_offset_changes: bool,
3081 3081 ):
3082 3082 sortable: List[AnyMatcherCompletion] = []
3083 3083 ordered: List[AnyMatcherCompletion] = []
3084 3084 most_recent_fragment = None
3085 3085 for identifier, result in results.items():
3086 3086 if identifier in skip_matchers:
3087 3087 continue
3088 3088 if not result["completions"]:
3089 3089 continue
3090 3090 if not most_recent_fragment:
3091 3091 most_recent_fragment = result["matched_fragment"]
3092 3092 if (
3093 3093 abort_if_offset_changes
3094 3094 and result["matched_fragment"] != most_recent_fragment
3095 3095 ):
3096 3096 break
3097 3097 if result.get("ordered", False):
3098 3098 ordered.extend(result["completions"])
3099 3099 else:
3100 3100 sortable.extend(result["completions"])
3101 3101
3102 3102 if not most_recent_fragment:
3103 3103 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3104 3104
3105 3105 return most_recent_fragment, [
3106 3106 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3107 3107 ]
3108 3108
3109 3109 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3110 3110 full_text=None) -> _CompleteResult:
3111 3111 """
3112 3112         Like complete but can also return raw Jedi completions, as well as the
3113 3113 origin of the completion text. This could (and should) be made much
3114 3114 cleaner but that will be simpler once we drop the old (and stateful)
3115 3115 :any:`complete` API.
3116 3116
3117 3117         With the current provisional API, cursor_pos acts (depending on the
3118 3118         caller) either as the offset in the ``text`` or ``line_buffer``, or as the
3119 3119         ``column`` when passing multiline strings. This could/should be renamed,
3120 3120         but would add extra noise.
3121 3121
3122 3122 Parameters
3123 3123 ----------
3124 3124 cursor_line
3125 3125 Index of the line the cursor is on. 0 indexed.
3126 3126 cursor_pos
3127 3127 Position of the cursor in the current line/line_buffer/text. 0
3128 3128 indexed.
3129 3129 line_buffer : optional, str
3130 3130             The current line the cursor is in; this is mostly due to the legacy
3131 3131             reason that readline could only give us the single current line.
3132 3132 Prefer `full_text`.
3133 3133 text : str
3134 3134 The current "token" the cursor is in, mostly also for historical
3135 3135             reasons, as the completer would trigger only after the current line
3136 3136 was parsed.
3137 3137 full_text : str
3138 3138 Full text of the current cell.
3139 3139
3140 3140 Returns
3141 3141 -------
3142 3142 An ordered dictionary where keys are identifiers of completion
3143 3143 matchers and values are ``MatcherResult``s.
3144 3144 """
3145 3145
3146 3146 # if the cursor position isn't given, the only sane assumption we can
3147 3147 # make is that it's at the end of the line (the common case)
3148 3148 if cursor_pos is None:
3149 3149 cursor_pos = len(line_buffer) if text is None else len(text)
3150 3150
3151 3151 if self.use_main_ns:
3152 3152 self.namespace = __main__.__dict__
3153 3153
3154 3154 # if text is either None or an empty string, rely on the line buffer
3155 3155 if (not line_buffer) and full_text:
3156 3156 line_buffer = full_text.split('\n')[cursor_line]
3157 3157 if not text: # issue #11508: check line_buffer before calling split_line
3158 3158 text = (
3159 3159 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3160 3160 )
3161 3161
3162 3162 # If no line buffer is given, assume the input text is all there was
3163 3163 if line_buffer is None:
3164 3164 line_buffer = text
3165 3165
3166 3166 # deprecated - do not use `line_buffer` in new code.
3167 3167 self.line_buffer = line_buffer
3168 3168 self.text_until_cursor = self.line_buffer[:cursor_pos]
3169 3169
3170 3170 if not full_text:
3171 3171 full_text = line_buffer
3172 3172
3173 3173 context = CompletionContext(
3174 3174 full_text=full_text,
3175 3175 cursor_position=cursor_pos,
3176 3176 cursor_line=cursor_line,
3177 3177 token=text,
3178 3178 limit=MATCHES_LIMIT,
3179 3179 )
3180 3180
3181 3181 # Start with a clean slate of completions
3182 3182 results: Dict[str, MatcherResult] = {}
3183 3183
3184 3184 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3185 3185
3186 3186 suppressed_matchers: Set[str] = set()
3187 3187
3188 3188 matchers = {
3189 3189 _get_matcher_id(matcher): matcher
3190 3190 for matcher in sorted(
3191 3191 self.matchers, key=_get_matcher_priority, reverse=True
3192 3192 )
3193 3193 }
3194 3194
3195 3195 for matcher_id, matcher in matchers.items():
3196 3196 matcher_id = _get_matcher_id(matcher)
3197 3197
3198 3198 if matcher_id in self.disable_matchers:
3199 3199 continue
3200 3200
3201 3201 if matcher_id in results:
3202 3202 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3203 3203
3204 3204 if matcher_id in suppressed_matchers:
3205 3205 continue
3206 3206
3207 3207 result: MatcherResult
3208 3208 try:
3209 3209 if _is_matcher_v1(matcher):
3210 3210 result = _convert_matcher_v1_result_to_v2(
3211 3211 matcher(text), type=_UNKNOWN_TYPE
3212 3212 )
3213 3213 elif _is_matcher_v2(matcher):
3214 3214 result = matcher(context)
3215 3215 else:
3216 3216 api_version = _get_matcher_api_version(matcher)
3217 3217 raise ValueError(f"Unsupported API version {api_version}")
3218 3218 except:
3219 3219 # Show the ugly traceback if the matcher causes an
3220 3220 # exception, but do NOT crash the kernel!
3221 3221 sys.excepthook(*sys.exc_info())
3222 3222 continue
3223 3223
3224 3224 # set default value for matched fragment if suffix was not selected.
3225 3225 result["matched_fragment"] = result.get("matched_fragment", context.token)
3226 3226
3227 3227 if not suppressed_matchers:
3228 3228 suppression_recommended: Union[bool, Set[str]] = result.get(
3229 3229 "suppress", False
3230 3230 )
3231 3231
3232 3232 suppression_config = (
3233 3233 self.suppress_competing_matchers.get(matcher_id, None)
3234 3234 if isinstance(self.suppress_competing_matchers, dict)
3235 3235 else self.suppress_competing_matchers
3236 3236 )
3237 3237 should_suppress = (
3238 3238 (suppression_config is True)
3239 3239 or (suppression_recommended and (suppression_config is not False))
3240 3240 ) and has_any_completions(result)
3241 3241
3242 3242 if should_suppress:
3243 3243 suppression_exceptions: Set[str] = result.get(
3244 3244 "do_not_suppress", set()
3245 3245 )
3246 3246 if isinstance(suppression_recommended, Iterable):
3247 3247 to_suppress = set(suppression_recommended)
3248 3248 else:
3249 3249 to_suppress = set(matchers)
3250 3250 suppressed_matchers = to_suppress - suppression_exceptions
3251 3251
3252 3252 new_results = {}
3253 3253 for previous_matcher_id, previous_result in results.items():
3254 3254 if previous_matcher_id not in suppressed_matchers:
3255 3255 new_results[previous_matcher_id] = previous_result
3256 3256 results = new_results
3257 3257
3258 3258 results[matcher_id] = result
3259 3259
3260 3260 _, matches = self._arrange_and_extract(
3261 3261 results,
3262 3262             # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3263 3263 # if it was omission, we can remove the filtering step, otherwise remove this comment.
3264 3264 skip_matchers={jedi_matcher_id},
3265 3265 abort_if_offset_changes=False,
3266 3266 )
3267 3267
3268 3268 # populate legacy stateful API
3269 3269 self.matches = matches
3270 3270
3271 3271 return results
3272 3272
3273 3273 @staticmethod
3274 3274 def _deduplicate(
3275 3275 matches: Sequence[AnyCompletion],
3276 3276 ) -> Iterable[AnyCompletion]:
3277 3277 filtered_matches: Dict[str, AnyCompletion] = {}
3278 3278 for match in matches:
3279 3279 text = match.text
3280 3280 if (
3281 3281 text not in filtered_matches
3282 3282 or filtered_matches[text].type == _UNKNOWN_TYPE
3283 3283 ):
3284 3284 filtered_matches[text] = match
3285 3285
3286 3286 return filtered_matches.values()
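The rule implemented by `_deduplicate` above (one completion per text, with a typed completion replacing an earlier untyped one) can be shown in isolation; a minimal sketch with a stand-in `Completion` dataclass:

```python
from dataclasses import dataclass

UNKNOWN = "<unknown>"  # stand-in for IPython's _UNKNOWN_TYPE sentinel

@dataclass
class Completion:  # simplified stand-in for the real Completion object
    text: str
    type: str

def deduplicate(matches):
    # Keep one completion per text; a typed completion replaces an
    # earlier untyped one, mirroring `_deduplicate` above.
    filtered = {}
    for m in matches:
        if m.text not in filtered or filtered[m.text].type == UNKNOWN:
            filtered[m.text] = m
    return list(filtered.values())

ms = [
    Completion("foo", UNKNOWN),
    Completion("foo", "function"),  # replaces the untyped "foo"
    Completion("bar", "module"),
]
print([(m.text, m.type) for m in deduplicate(ms)])
# → [('foo', 'function'), ('bar', 'module')]
```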
3287 3287
3288 3288 @staticmethod
3289 3289 def _sort(matches: Sequence[AnyCompletion]):
3290 3290 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3291 3291
3292 3292 @context_matcher()
3293 3293 def fwd_unicode_matcher(self, context: CompletionContext):
3294 3294 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3295 3295 # TODO: use `context.limit` to terminate early once we matched the maximum
3296 3296 # number that will be used downstream; can be added as an optional to
3297 3297 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3298 3298 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3299 3299 return _convert_matcher_v1_result_to_v2(
3300 3300 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3301 3301 )
3302 3302
3303 3303 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3304 3304 """
3305 3305 Forward match a string starting with a backslash with a list of
3306 3306 potential Unicode completions.
3307 3307
3308 3308 Will compute list of Unicode character names on first call and cache it.
3309 3309
3310 3310 .. deprecated:: 8.6
3311 3311 You can use :meth:`fwd_unicode_matcher` instead.
3312 3312
3313 3313 Returns
3314 3314 -------
3315 3315         A tuple with:
3316 3316         - matched text (empty if no matches)
3317 3317         - list of potential completions (empty tuple if none)
3318 3318 """
3319 3319 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3320 3320 # We could do a faster match using a Trie.
3321 3321
3322 3322         # Using pygtrie the following seems to work:
3323 3323
3324 3324 # s = PrefixSet()
3325 3325
3326 3326 # for c in range(0,0x10FFFF + 1):
3327 3327 # try:
3328 3328 # s.add(unicodedata.name(chr(c)))
3329 3329 # except ValueError:
3330 3330 # pass
3331 3331 # [''.join(k) for k in s.iter(prefix)]
3332 3332
3333 3333         # But this needs to be timed and adds an extra dependency.
3334 3334
3335 3335 slashpos = text.rfind('\\')
3336 3336 # if text starts with slash
3337 3337 if slashpos > -1:
3338 3338 # PERF: It's important that we don't access self._unicode_names
3339 3339 # until we're inside this if-block. _unicode_names is lazily
3340 3340 # initialized, and it takes a user-noticeable amount of time to
3341 3341 # initialize it, so we don't want to initialize it unless we're
3342 3342 # actually going to use it.
3343 3343 s = text[slashpos + 1 :]
3344 3344 sup = s.upper()
3345 3345 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3346 3346 if candidates:
3347 3347 return s, candidates
3348 3348 candidates = [x for x in self.unicode_names if sup in x]
3349 3349 if candidates:
3350 3350 return s, candidates
3351 3351 splitsup = sup.split(" ")
3352 3352 candidates = [
3353 3353 x for x in self.unicode_names if all(u in x for u in splitsup)
3354 3354 ]
3355 3355 if candidates:
3356 3356 return s, candidates
3357 3357
3358 3358 return "", ()
3359 3359
3360 3360 # if text does not start with slash
3361 3361 else:
3362 3362 return '', ()
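The three-stage search in `fwd_unicode_match` above (prefix match, then substring match, then all-words match) can be sketched against a tiny stand-in name list instead of the ~100k real unicode names:

```python
# Tiny stand-in for the lazily computed `unicode_names` list.
NAMES = [
    "GREEK SMALL LETTER ALPHA",
    "GREEK SMALL LETTER BETA",
    "LATIN SMALL LETTER A",
]

def fwd_unicode_match(text, names):
    # Mirror of the three-stage search above: prefix match first,
    # then substring match, then match on all whitespace-separated words.
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", ()
    s = text[slashpos + 1:]
    sup = s.upper()
    for predicate in (
        lambda x: x.startswith(sup),
        lambda x: sup in x,
        lambda x: all(word in x for word in sup.split(" ")),
    ):
        candidates = [x for x in names if predicate(x)]
        if candidates:
            return s, candidates
    return "", ()

print(fwd_unicode_match("\\GREEK SM", NAMES))     # prefix stage matches two names
print(fwd_unicode_match("\\SMALL ALPHA", NAMES))  # all-words stage matches one
```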
3363 3363
3364 3364 @property
3365 3365 def unicode_names(self) -> List[str]:
3366 3366 """List of names of unicode code points that can be completed.
3367 3367
3368 3368 The list is lazily initialized on first access.
3369 3369 """
3370 3370 if self._unicode_names is None:
3377 3377 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3378 3378
3379 3379 return self._unicode_names
3380 3380
3381 3381 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3382 3382 names = []
3383 3383 for start,stop in ranges:
3384 3384 for c in range(start, stop) :
3385 3385 try:
3386 3386 names.append(unicodedata.name(chr(c)))
3387 3387 except ValueError:
3388 3388 pass
3389 3389 return names
@@ -1,898 +1,895
1 1 from inspect import isclass, signature, Signature
2 2 from typing import (
3 3 Annotated,
4 4 AnyStr,
5 5 Callable,
6 6 Dict,
7 7 Literal,
8 8 NamedTuple,
9 9 NewType,
10 10 Optional,
11 11 Protocol,
12 12 Set,
13 13 Sequence,
14 14 Tuple,
15 15 Type,
16 16 TypeGuard,
17 17 Union,
18 18 get_args,
19 19 get_origin,
20 20 is_typeddict,
21 21 )
22 22 import ast
23 23 import builtins
24 24 import collections
25 25 import operator
26 26 import sys
27 27 from functools import cached_property
28 28 from dataclasses import dataclass, field
29 29 from types import MethodDescriptorType, ModuleType
30 30
31 31 from IPython.utils.decorators import undoc
32 32
33 33
34 34 if sys.version_info < (3, 11):
35 35 from typing_extensions import Self, LiteralString
36 36 else:
37 37 from typing import Self, LiteralString
38 38
39 39 if sys.version_info < (3, 12):
40 40 from typing_extensions import TypeAliasType
41 41 else:
42 42 from typing import TypeAliasType
43 43
44 44
45 45 @undoc
46 46 class HasGetItem(Protocol):
47 def __getitem__(self, key) -> None:
48 ...
47 def __getitem__(self, key) -> None: ...
49 48
50 49
51 50 @undoc
52 51 class InstancesHaveGetItem(Protocol):
53 def __call__(self, *args, **kwargs) -> HasGetItem:
54 ...
52 def __call__(self, *args, **kwargs) -> HasGetItem: ...
55 53
56 54
57 55 @undoc
58 56 class HasGetAttr(Protocol):
59 def __getattr__(self, key) -> None:
60 ...
57 def __getattr__(self, key) -> None: ...
61 58
62 59
63 60 @undoc
64 61 class DoesNotHaveGetAttr(Protocol):
65 62 pass
66 63
67 64
68 65 # By default `__getattr__` is not explicitly implemented on most objects
69 66 MayHaveGetattr = Union[HasGetAttr, DoesNotHaveGetAttr]
70 67
71 68
72 69 def _unbind_method(func: Callable) -> Union[Callable, None]:
73 70 """Get unbound method for given bound method.
74 71
75 72 Returns None if the unbound method cannot be retrieved, or if the method is already unbound.
76 73 """
77 74 owner = getattr(func, "__self__", None)
78 75 owner_class = type(owner)
79 76 name = getattr(func, "__name__", None)
80 77 instance_dict_overrides = getattr(owner, "__dict__", None)
81 78 if (
82 79 owner is not None
83 80 and name
84 81 and (
85 82 not instance_dict_overrides
86 83 or (instance_dict_overrides and name not in instance_dict_overrides)
87 84 )
88 85 ):
89 86 return getattr(owner_class, name)
90 87 return None
91 88
92 89
93 90 @undoc
94 91 @dataclass
95 92 class EvaluationPolicy:
96 93 """Definition of evaluation policy."""
97 94
98 95 allow_locals_access: bool = False
99 96 allow_globals_access: bool = False
100 97 allow_item_access: bool = False
101 98 allow_attr_access: bool = False
102 99 allow_builtins_access: bool = False
103 100 allow_all_operations: bool = False
104 101 allow_any_calls: bool = False
105 102 allowed_calls: Set[Callable] = field(default_factory=set)
106 103
107 104 def can_get_item(self, value, item):
108 105 return self.allow_item_access
109 106
110 107 def can_get_attr(self, value, attr):
111 108 return self.allow_attr_access
112 109
113 110 def can_operate(self, dunders: Tuple[str, ...], a, b=None):
114 111 if self.allow_all_operations:
115 112 return True
116 113
117 114 def can_call(self, func):
118 115 if self.allow_any_calls:
119 116 return True
120 117
121 118 if func in self.allowed_calls:
122 119 return True
123 120
124 121 owner_method = _unbind_method(func)
125 122
126 123 if owner_method and owner_method in self.allowed_calls:
127 124 return True
128 125
129 126
130 127 def _get_external(module_name: str, access_path: Sequence[str]):
131 128 """Get value from external module given a dotted access path.
132 129
133 130 Raises:
134 131 * `KeyError` if the module was removed or is not found, and
135 132 * `AttributeError` if access path does not match an exported object
136 133 """
137 134 member_type = sys.modules[module_name]
138 135 for attr in access_path:
139 136 member_type = getattr(member_type, attr)
140 137 return member_type
141 138
142 139
143 140 def _has_original_dunder_external(
144 141 value,
145 142 module_name: str,
146 143 access_path: Sequence[str],
147 144 method_name: str,
148 145 ):
149 146 if module_name not in sys.modules:
151 148 # LBYL (look before you leap), as it is faster
151 148 return False
152 149 try:
153 150 member_type = _get_external(module_name, access_path)
154 151 value_type = type(value)
155 152 if value_type == member_type:
156 153 return True
157 154 if method_name == "__getattribute__":
158 155 # we have to short-circuit here due to an unresolved issue in
159 156 # `isinstance` implementation: https://bugs.python.org/issue32683
160 157 return False
161 158 if isinstance(value, member_type):
162 159 method = getattr(value_type, method_name, None)
163 160 member_method = getattr(member_type, method_name, None)
164 161 if member_method == method:
165 162 return True
166 163 except (AttributeError, KeyError):
167 164 return False
168 165
169 166
170 167 def _has_original_dunder(
171 168 value, allowed_types, allowed_methods, allowed_external, method_name
172 169 ):
173 170 # note: Python ignores `__getattr__`/`__getitem__` defined on instances;
174 171 # we only need to check at the class level
175 172 value_type = type(value)
176 173
177 174 # strict type check passes β†’ no need to check method
178 175 if value_type in allowed_types:
179 176 return True
180 177
181 178 method = getattr(value_type, method_name, None)
182 179
183 180 if method is None:
184 181 return None
185 182
186 183 if method in allowed_methods:
187 184 return True
188 185
189 186 for module_name, *access_path in allowed_external:
190 187 if _has_original_dunder_external(value, module_name, access_path, method_name):
191 188 return True
192 189
193 190 return False
194 191
195 192
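The core idea of `_has_original_dunder` — trust a class only when its dunder is one we know was not overridden — reduces to a class-level lookup against an allow-list. `has_original_getitem` below is a hypothetical simplification, not the real signature:

```python
# Allow-list of dunder implementations known to be the originals.
allowed_methods = {dict.__getitem__}

def has_original_getitem(value) -> bool:
    # Python ignores instance-level dunders, so the class-level lookup suffices.
    return getattr(type(value), "__getitem__", None) in allowed_methods

class PlainDict(dict):
    pass  # inherits dict.__getitem__ unchanged -> trusted

class SneakyDict(dict):
    def __getitem__(self, key):
        return "surprise"  # overridden -> must not be trusted
```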
196 193 @undoc
197 194 @dataclass
198 195 class SelectivePolicy(EvaluationPolicy):
199 196 allowed_getitem: Set[InstancesHaveGetItem] = field(default_factory=set)
200 197 allowed_getitem_external: Set[Tuple[str, ...]] = field(default_factory=set)
201 198
202 199 allowed_getattr: Set[MayHaveGetattr] = field(default_factory=set)
203 200 allowed_getattr_external: Set[Tuple[str, ...]] = field(default_factory=set)
204 201
205 202 allowed_operations: Set = field(default_factory=set)
206 203 allowed_operations_external: Set[Tuple[str, ...]] = field(default_factory=set)
207 204
208 205 _operation_methods_cache: Dict[str, Set[Callable]] = field(
209 206 default_factory=dict, init=False
210 207 )
211 208
212 209 def can_get_attr(self, value, attr):
213 210 has_original_attribute = _has_original_dunder(
214 211 value,
215 212 allowed_types=self.allowed_getattr,
216 213 allowed_methods=self._getattribute_methods,
217 214 allowed_external=self.allowed_getattr_external,
218 215 method_name="__getattribute__",
219 216 )
220 217 has_original_attr = _has_original_dunder(
221 218 value,
222 219 allowed_types=self.allowed_getattr,
223 220 allowed_methods=self._getattr_methods,
224 221 allowed_external=self.allowed_getattr_external,
225 222 method_name="__getattr__",
226 223 )
227 224
228 225 accept = False
229 226
230 227 # Many objects do not have `__getattr__`, this is fine.
231 228 if has_original_attr is None and has_original_attribute:
232 229 accept = True
233 230 else:
234 231 # Accept objects without modifications to `__getattr__` and `__getattribute__`
235 232 accept = has_original_attr and has_original_attribute
236 233
237 234 if accept:
238 235 # We still need to check for overridden properties.
239 236
240 237 value_class = type(value)
241 238 if not hasattr(value_class, attr):
242 239 return True
243 240
244 241 class_attr_val = getattr(value_class, attr)
245 242 is_property = isinstance(class_attr_val, property)
246 243
247 244 if not is_property:
248 245 return True
249 246
250 247 # Properties in allowed types are ok (although we do not include any
251 248 # properties in our default allow list currently).
252 249 if type(value) in self.allowed_getattr:
253 250 return True # pragma: no cover
254 251
255 252 # Properties in subclasses of allowed types may be ok if not changed
256 253 for module_name, *access_path in self.allowed_getattr_external:
257 254 try:
258 255 external_class = _get_external(module_name, access_path)
259 256 external_class_attr_val = getattr(external_class, attr)
260 257 except (KeyError, AttributeError):
261 258 return False # pragma: no cover
262 259 return class_attr_val == external_class_attr_val
263 260
264 261 return False
265 262
266 263 def can_get_item(self, value, item):
267 264 """Allow accessing `__getitem__` of allow-listed instances, as long as it was not modified."""
268 265 return _has_original_dunder(
269 266 value,
270 267 allowed_types=self.allowed_getitem,
271 268 allowed_methods=self._getitem_methods,
272 269 allowed_external=self.allowed_getitem_external,
273 270 method_name="__getitem__",
274 271 )
275 272
276 273 def can_operate(self, dunders: Tuple[str, ...], a, b=None):
277 274 objects = [a]
278 275 if b is not None:
279 276 objects.append(b)
280 277 return all(
281 278 [
282 279 _has_original_dunder(
283 280 obj,
284 281 allowed_types=self.allowed_operations,
285 282 allowed_methods=self._operator_dunder_methods(dunder),
286 283 allowed_external=self.allowed_operations_external,
287 284 method_name=dunder,
288 285 )
289 286 for dunder in dunders
290 287 for obj in objects
291 288 ]
292 289 )
293 290
294 291 def _operator_dunder_methods(self, dunder: str) -> Set[Callable]:
295 292 if dunder not in self._operation_methods_cache:
296 293 self._operation_methods_cache[dunder] = self._safe_get_methods(
297 294 self.allowed_operations, dunder
298 295 )
299 296 return self._operation_methods_cache[dunder]
300 297
301 298 @cached_property
302 299 def _getitem_methods(self) -> Set[Callable]:
303 300 return self._safe_get_methods(self.allowed_getitem, "__getitem__")
304 301
305 302 @cached_property
306 303 def _getattr_methods(self) -> Set[Callable]:
307 304 return self._safe_get_methods(self.allowed_getattr, "__getattr__")
308 305
309 306 @cached_property
310 307 def _getattribute_methods(self) -> Set[Callable]:
311 308 return self._safe_get_methods(self.allowed_getattr, "__getattribute__")
312 309
313 310 def _safe_get_methods(self, classes, name) -> Set[Callable]:
314 311 return {
315 312 method
316 313 for class_ in classes
317 314 for method in [getattr(class_, name, None)]
318 315 if method
319 316 }
320 317
321 318
322 319 class _DummyNamedTuple(NamedTuple):
323 320 """Used internally to retrieve methods of a named tuple instance."""
324 321
325 322
326 323 class EvaluationContext(NamedTuple):
327 324 #: Local namespace
328 325 locals: dict
329 326 #: Global namespace
330 327 globals: dict
331 328 #: Evaluation policy identifier
332 evaluation: Literal[
333 "forbidden", "minimal", "limited", "unsafe", "dangerous"
334 ] = "forbidden"
329 evaluation: Literal["forbidden", "minimal", "limited", "unsafe", "dangerous"] = (
330 "forbidden"
331 )
335 332 #: Whether the evaluation of code takes place inside of a subscript.
336 333 #: Useful for evaluating ``:-1, 'col'`` in ``df[:-1, 'col']``.
337 334 in_subscript: bool = False
338 335
339 336
340 337 class _IdentitySubscript:
341 338 """Returns the key itself when item is requested via subscript."""
342 339
343 340 def __getitem__(self, key):
344 341 return key
345 342
346 343
347 344 IDENTITY_SUBSCRIPT = _IdentitySubscript()
348 345 SUBSCRIPT_MARKER = "__SUBSCRIPT_SENTINEL__"
349 346 UNKNOWN_SIGNATURE = Signature()
350 347 NOT_EVALUATED = object()
351 348
352 349
353 350 class GuardRejection(Exception):
354 351 """Exception raised when guard rejects evaluation attempt."""
355 352
356 353 pass
357 354
358 355
359 356 def guarded_eval(code: str, context: EvaluationContext):
360 357 """Evaluate provided code in the evaluation context.
361 358
362 359 If evaluation policy given by context is set to ``forbidden``
363 360 no evaluation will be performed; if it is set to ``dangerous``
364 361 standard :func:`eval` will be used; finally, for any other policy,
365 362 :func:`eval_node` will be called on the parsed AST.
366 363 """
367 364 locals_ = context.locals
368 365
369 366 if context.evaluation == "forbidden":
370 367 raise GuardRejection("Forbidden mode")
371 368
372 369 # note: not using `ast.literal_eval` as it does not implement
373 370 # getitem at all, for example it fails on simple `[0][1]`
374 371
375 372 if context.in_subscript:
376 373 # syntactic sugar for ellipsis (:) is only available in subscripts
377 374 # so we need to trick the ast parser into thinking that we have
378 375 # a subscript, but we need to be able to later recognise that we did
379 376 # it so we can ignore the actual __getitem__ operation
380 377 if not code:
381 378 return tuple()
382 379 locals_ = locals_.copy()
383 380 locals_[SUBSCRIPT_MARKER] = IDENTITY_SUBSCRIPT
384 381 code = SUBSCRIPT_MARKER + "[" + code + "]"
385 382 context = EvaluationContext(**{**context._asdict(), **{"locals": locals_}})
386 383
387 384 if context.evaluation == "dangerous":
388 385 return eval(code, context.globals, context.locals)
389 386
390 387 expression = ast.parse(code, mode="eval")
391 388
392 389 return eval_node(expression, context)
393 390
394 391
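The subscript trick above exists because slice syntax such as `:-1` only parses inside `[...]`; wrapping the fragment in a sentinel subscript whose `__getitem__` returns its key recovers the key object. In isolation (plain `eval` is used here purely for demonstration; the real code routes through `eval_node`):

```python
class _IdentitySubscript:
    """Returns the key itself when an item is requested via subscript."""
    def __getitem__(self, key):
        return key

SUBSCRIPT_MARKER = "__SUBSCRIPT_SENTINEL__"

code = ":-1, 'col'"  # a fragment as typed inside df[:-1, 'col']
wrapped = SUBSCRIPT_MARKER + "[" + code + "]"
key = eval(wrapped, {SUBSCRIPT_MARKER: _IdentitySubscript()})
# key is the tuple (slice(None, -1, None), 'col')
```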
395 392 BINARY_OP_DUNDERS: Dict[Type[ast.operator], Tuple[str]] = {
396 393 ast.Add: ("__add__",),
397 394 ast.Sub: ("__sub__",),
398 395 ast.Mult: ("__mul__",),
399 396 ast.Div: ("__truediv__",),
400 397 ast.FloorDiv: ("__floordiv__",),
401 398 ast.Mod: ("__mod__",),
402 399 ast.Pow: ("__pow__",),
403 400 ast.LShift: ("__lshift__",),
404 401 ast.RShift: ("__rshift__",),
405 402 ast.BitOr: ("__or__",),
406 403 ast.BitXor: ("__xor__",),
407 404 ast.BitAnd: ("__and__",),
408 405 ast.MatMult: ("__matmul__",),
409 406 }
410 407
411 408 COMP_OP_DUNDERS: Dict[Type[ast.cmpop], Tuple[str, ...]] = {
412 409 ast.Eq: ("__eq__",),
413 410 ast.NotEq: ("__ne__", "__eq__"),
414 411 ast.Lt: ("__lt__", "__gt__"),
415 412 ast.LtE: ("__le__", "__ge__"),
416 413 ast.Gt: ("__gt__", "__lt__"),
417 414 ast.GtE: ("__ge__", "__le__"),
418 415 ast.In: ("__contains__",),
419 416 # Note: ast.Is, ast.IsNot, ast.NotIn are handled specially
420 417 }
421 418
422 419 UNARY_OP_DUNDERS: Dict[Type[ast.unaryop], Tuple[str, ...]] = {
423 420 ast.USub: ("__neg__",),
424 421 ast.UAdd: ("__pos__",),
425 422 # we have to check both __inv__ and __invert__!
426 423 ast.Invert: ("__invert__", "__inv__"),
427 424 ast.Not: ("__not__",),
428 425 }
429 426
430 427
431 428 class ImpersonatingDuck:
432 429 """A dummy class used to create objects of other classes without calling their ``__init__``"""
433 430
434 431 # no-op: override __class__ to impersonate
435 432
436 433
437 434 class _Duck:
438 435 """A dummy class used to create objects pretending to have given attributes"""
439 436
440 437 def __init__(self, attributes: Optional[dict] = None, items: Optional[dict] = None):
441 438 self.attributes = attributes or {}
442 439 self.items = items or {}
443 440
444 441 def __getattr__(self, attr: str):
445 442 return self.attributes[attr]
446 443
447 444 def __hasattr__(self, attr: str):
448 445 return attr in self.attributes
449 446
450 447 def __dir__(self):
451 448 return [*dir(super), *self.attributes]
452 449
453 450 def __getitem__(self, key: str):
454 451 return self.items[key]
455 452
456 453 def __hasitem__(self, key: str):
457 454 return key in self.items
458 455
459 456 def _ipython_key_completions_(self):
460 457 return self.items.keys()
461 458
462 459
463 460 def _find_dunder(node_op, dunders) -> Union[Tuple[str, ...], None]:
464 461 dunder = None
465 462 for op, candidate_dunder in dunders.items():
466 463 if isinstance(node_op, op):
467 464 dunder = candidate_dunder
468 465 return dunder
469 466
470 467
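`_find_dunder` plus the tables above drive operator dispatch in `eval_node`: look up the dunder names for the AST operator type, then call the first one on the operand. A trimmed-down sketch with a two-entry table:

```python
import ast

# Trimmed-down version of the BINARY_OP_DUNDERS table.
BINARY_OP_DUNDERS = {ast.Add: ("__add__",), ast.Sub: ("__sub__",)}

def _find_dunder(node_op, dunders):
    for op, candidate_dunder in dunders.items():
        if isinstance(node_op, op):
            return candidate_dunder
    return None

node = ast.parse("1 + 2", mode="eval").body  # an ast.BinOp
dunders = _find_dunder(node.op, BINARY_OP_DUNDERS)
result = getattr(1, dunders[0])(2)  # dispatch the way eval_node does
```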
471 468 def eval_node(node: Union[ast.AST, None], context: EvaluationContext):
472 469 """Evaluate AST node in provided context.
473 470
474 471 Applies evaluation restrictions defined in the context. Currently does not support evaluation of functions with keyword arguments.
475 472
476 473 Does not evaluate actions that always have side effects:
477 474
478 475 - class definitions (``class sth: ...``)
479 476 - function definitions (``def sth: ...``)
480 477 - variable assignments (``x = 1``)
481 478 - augmented assignments (``x += 1``)
482 479 - deletions (``del x``)
483 480
484 481 Does not evaluate operations which do not return values:
485 482
486 483 - assertions (``assert x``)
487 484 - pass (``pass``)
488 485 - imports (``import x``)
489 486 - control flow:
490 487
491 488 - conditionals (``if x:``) except for ternary IfExp (``a if x else b``)
492 489 - loops (``for`` and ``while``)
493 490 - exception handling
494 491
495 492 The purpose of this function is to guard against unwanted side-effects;
496 493 it does not give guarantees on protection from malicious code execution.
497 494 """
498 495 policy = EVALUATION_POLICIES[context.evaluation]
499 496 if node is None:
500 497 return None
501 498 if isinstance(node, ast.Expression):
502 499 return eval_node(node.body, context)
503 500 if isinstance(node, ast.BinOp):
504 501 left = eval_node(node.left, context)
505 502 right = eval_node(node.right, context)
506 503 dunders = _find_dunder(node.op, BINARY_OP_DUNDERS)
507 504 if dunders:
508 505 if policy.can_operate(dunders, left, right):
509 506 return getattr(left, dunders[0])(right)
510 507 else:
511 508 raise GuardRejection(
512 509 f"Operation (`{dunders}`) for",
513 510 type(left),
514 511 f"not allowed in {context.evaluation} mode",
515 512 )
516 513 if isinstance(node, ast.Compare):
517 514 left = eval_node(node.left, context)
518 515 all_true = True
519 516 negate = False
520 517 for op, right in zip(node.ops, node.comparators):
521 518 right = eval_node(right, context)
522 519 dunder = None
523 520 dunders = _find_dunder(op, COMP_OP_DUNDERS)
524 521 if not dunders:
525 522 if isinstance(op, ast.NotIn):
526 523 dunders = COMP_OP_DUNDERS[ast.In]
527 524 negate = True
528 525 if isinstance(op, ast.Is):
529 526 dunder = "is_"
530 527 if isinstance(op, ast.IsNot):
531 528 dunder = "is_"
532 529 negate = True
533 530 if not dunder and dunders:
534 531 dunder = dunders[0]
535 532 if dunder:
536 533 a, b = (right, left) if dunder == "__contains__" else (left, right)
537 534 if dunder == "is_" or dunders and policy.can_operate(dunders, a, b):
538 535 result = getattr(operator, dunder)(a, b)
539 536 if negate:
540 537 result = not result
541 538 if not result:
542 539 all_true = False
543 540 left = right
544 541 else:
545 542 raise GuardRejection(
546 543 f"Comparison (`{dunder}`) for",
547 544 type(left),
548 545 f"not allowed in {context.evaluation} mode",
549 546 )
550 547 else:
551 548 raise ValueError(
552 549 f"Comparison `{dunder}` not supported"
553 550 ) # pragma: no cover
554 551 return all_true
555 552 if isinstance(node, ast.Constant):
556 553 return node.value
557 554 if isinstance(node, ast.Tuple):
558 555 return tuple(eval_node(e, context) for e in node.elts)
559 556 if isinstance(node, ast.List):
560 557 return [eval_node(e, context) for e in node.elts]
561 558 if isinstance(node, ast.Set):
562 559 return {eval_node(e, context) for e in node.elts}
563 560 if isinstance(node, ast.Dict):
564 561 return dict(
565 562 zip(
566 563 [eval_node(k, context) for k in node.keys],
567 564 [eval_node(v, context) for v in node.values],
568 565 )
569 566 )
570 567 if isinstance(node, ast.Slice):
571 568 return slice(
572 569 eval_node(node.lower, context),
573 570 eval_node(node.upper, context),
574 571 eval_node(node.step, context),
575 572 )
576 573 if isinstance(node, ast.UnaryOp):
577 574 value = eval_node(node.operand, context)
578 575 dunders = _find_dunder(node.op, UNARY_OP_DUNDERS)
579 576 if dunders:
580 577 if policy.can_operate(dunders, value):
581 578 return getattr(value, dunders[0])()
582 579 else:
583 580 raise GuardRejection(
584 581 f"Operation (`{dunders}`) for",
585 582 type(value),
586 583 f"not allowed in {context.evaluation} mode",
587 584 )
588 585 if isinstance(node, ast.Subscript):
589 586 value = eval_node(node.value, context)
590 587 slice_ = eval_node(node.slice, context)
591 588 if policy.can_get_item(value, slice_):
592 589 return value[slice_]
593 590 raise GuardRejection(
594 591 "Subscript access (`__getitem__`) for",
595 592 type(value), # not joined to avoid calling `repr`
596 593 f" not allowed in {context.evaluation} mode",
597 594 )
598 595 if isinstance(node, ast.Name):
599 596 return _eval_node_name(node.id, context)
600 597 if isinstance(node, ast.Attribute):
601 598 value = eval_node(node.value, context)
602 599 if policy.can_get_attr(value, node.attr):
603 600 return getattr(value, node.attr)
604 601 raise GuardRejection(
605 602 "Attribute access (`__getattr__`) for",
606 603 type(value), # not joined to avoid calling `repr`
607 604 f"not allowed in {context.evaluation} mode",
608 605 )
609 606 if isinstance(node, ast.IfExp):
610 607 test = eval_node(node.test, context)
611 608 if test:
612 609 return eval_node(node.body, context)
613 610 else:
614 611 return eval_node(node.orelse, context)
615 612 if isinstance(node, ast.Call):
616 613 func = eval_node(node.func, context)
617 614 if policy.can_call(func) and not node.keywords:
618 615 args = [eval_node(arg, context) for arg in node.args]
619 616 return func(*args)
620 617 if isclass(func):
621 618 # this code path gets entered when calling class e.g. `MyClass()`
622 619 # or `my_instance.__class__()` - in both cases `func` is `MyClass`.
623 620 # Should return an instance of `MyClass` if `__new__` is not overridden,
624 621 # otherwise whatever `__new__`'s return type is.
625 622 overridden_return_type = _eval_return_type(func.__new__, node, context)
626 623 if overridden_return_type is not NOT_EVALUATED:
627 624 return overridden_return_type
628 625 return _create_duck_for_heap_type(func)
629 626 else:
630 627 return_type = _eval_return_type(func, node, context)
631 628 if return_type is not NOT_EVALUATED:
632 629 return return_type
633 630 raise GuardRejection(
634 631 "Call for",
635 632 func, # not joined to avoid calling `repr`
636 633 f"not allowed in {context.evaluation} mode",
637 634 )
638 635 raise ValueError("Unhandled node", ast.dump(node))
639 636
640 637
641 638 def _eval_return_type(func: Callable, node: ast.Call, context: EvaluationContext):
642 639 """Evaluate return type of a given callable function.
643 640
644 641 Returns the built-in type, a duck or NOT_EVALUATED sentinel.
645 642 """
646 643 try:
647 644 sig = signature(func)
648 645 except ValueError:
649 646 sig = UNKNOWN_SIGNATURE
650 647 # if the annotation was not stringized, or it was stringized
651 648 # but resolved by the signature call, we know the return type
652 649 not_empty = sig.return_annotation is not Signature.empty
653 650 if not_empty:
654 651 return _resolve_annotation(sig.return_annotation, sig, func, node, context)
655 652 return NOT_EVALUATED
656 653
657 654
658 655 def _resolve_annotation(
659 656 annotation,
660 657 sig: Signature,
661 658 func: Callable,
662 659 node: ast.Call,
663 660 context: EvaluationContext,
664 661 ):
665 662 """Resolve annotation created by user with `typing` module and custom objects."""
666 663 annotation = (
667 664 _eval_node_name(annotation, context)
668 665 if isinstance(annotation, str)
669 666 else annotation
670 667 )
671 668 origin = get_origin(annotation)
672 669 if annotation is Self and hasattr(func, "__self__"):
673 670 return func.__self__
674 671 elif origin is Literal:
675 672 type_args = get_args(annotation)
676 673 if len(type_args) == 1:
677 674 return type_args[0]
678 675 elif annotation is LiteralString:
679 676 return ""
680 677 elif annotation is AnyStr:
681 678 index = None
682 679 for i, (key, value) in enumerate(sig.parameters.items()):
683 680 if value.annotation is AnyStr:
684 681 index = i
685 682 break
686 683 if index is not None and index < len(node.args):
687 684 return eval_node(node.args[index], context)
688 685 elif origin is TypeGuard:
689 686 return bool()
690 687 elif origin is Union:
691 688 attributes = [
692 689 attr
693 690 for type_arg in get_args(annotation)
694 691 for attr in dir(_resolve_annotation(type_arg, sig, func, node, context))
695 692 ]
696 693 return _Duck(attributes=dict.fromkeys(attributes))
697 694 elif is_typeddict(annotation):
698 695 return _Duck(
699 696 attributes=dict.fromkeys(dir(dict())),
700 697 items={
701 698 k: _resolve_annotation(v, sig, func, node, context)
702 699 for k, v in annotation.__annotations__.items()
703 700 },
704 701 )
705 702 elif hasattr(annotation, "_is_protocol"):
706 703 return _Duck(attributes=dict.fromkeys(dir(annotation)))
707 704 elif origin is Annotated:
708 705 type_arg = get_args(annotation)[0]
709 706 return _resolve_annotation(type_arg, sig, func, node, context)
710 707 elif isinstance(annotation, NewType):
711 708 return _eval_or_create_duck(annotation.__supertype__, node, context)
712 709 elif isinstance(annotation, TypeAliasType):
713 710 return _eval_or_create_duck(annotation.__value__, node, context)
714 711 else:
715 712 return _eval_or_create_duck(annotation, node, context)
716 713
717 714
718 715 def _eval_node_name(node_id: str, context: EvaluationContext):
719 716 policy = EVALUATION_POLICIES[context.evaluation]
720 717 if policy.allow_locals_access and node_id in context.locals:
721 718 return context.locals[node_id]
722 719 if policy.allow_globals_access and node_id in context.globals:
723 720 return context.globals[node_id]
724 721 if policy.allow_builtins_access and hasattr(builtins, node_id):
726 723 # note: do not use __builtins__, it is an implementation detail of CPython
726 723 return getattr(builtins, node_id)
727 724 if not policy.allow_globals_access and not policy.allow_locals_access:
728 725 raise GuardRejection(
729 726 f"Namespace access not allowed in {context.evaluation} mode"
730 727 )
731 728 else:
732 729 raise NameError(f"{node_id} not found in locals, globals, nor builtins")
733 730
734 731
735 732 def _eval_or_create_duck(duck_type, node: ast.Call, context: EvaluationContext):
736 733 policy = EVALUATION_POLICIES[context.evaluation]
737 734 # if allow-listed builtin is on type annotation, instantiate it
738 735 if policy.can_call(duck_type) and not node.keywords:
739 736 args = [eval_node(arg, context) for arg in node.args]
740 737 return duck_type(*args)
741 738 # if custom class is in type annotation, mock it
742 739 return _create_duck_for_heap_type(duck_type)
743 740
744 741
745 742 def _create_duck_for_heap_type(duck_type):
746 743 """Create an imitation of an object of a given type (a duck).
747 744
748 745 Returns the duck or NOT_EVALUATED sentinel if duck could not be created.
749 746 """
750 747 duck = ImpersonatingDuck()
751 748 try:
752 749 # this only works for heap types, not builtins
753 750 duck.__class__ = duck_type
754 751 return duck
755 752 except TypeError:
756 753 pass
757 754 return NOT_EVALUATED
758 755
759 756
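`_create_duck_for_heap_type` relies on the fact that `__class__` can be reassigned between compatible heap types, producing an object that passes `isinstance` checks and resolves methods normally without ever running `__init__`. The classes below are illustrative only:

```python
class ImpersonatingDuck:
    """Empty stand-in whose __class__ will be swapped."""

class Expensive:
    def __init__(self):
        raise RuntimeError("constructor side effects we must not trigger")
    def describe(self):
        return "quack"

duck = ImpersonatingDuck()
duck.__class__ = Expensive  # works for heap types; Expensive.__init__ never ran

# Built-in types reject the swap, which is the NOT_EVALUATED path in the helper:
try:
    ImpersonatingDuck().__class__ = int
except TypeError:
    pass
```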
760 757 SUPPORTED_EXTERNAL_GETITEM = {
761 758 ("pandas", "core", "indexing", "_iLocIndexer"),
762 759 ("pandas", "core", "indexing", "_LocIndexer"),
763 760 ("pandas", "DataFrame"),
764 761 ("pandas", "Series"),
765 762 ("numpy", "ndarray"),
766 763 ("numpy", "void"),
767 764 }
768 765
769 766
770 767 BUILTIN_GETITEM: Set[InstancesHaveGetItem] = {
771 768 dict,
772 769 str, # type: ignore[arg-type]
773 770 bytes, # type: ignore[arg-type]
774 771 list,
775 772 tuple,
776 773 collections.defaultdict,
777 774 collections.deque,
778 775 collections.OrderedDict,
779 776 collections.ChainMap,
780 777 collections.UserDict,
781 778 collections.UserList,
782 779 collections.UserString, # type: ignore[arg-type]
783 780 _DummyNamedTuple,
784 781 _IdentitySubscript,
785 782 }
786 783
787 784
788 785 def _list_methods(cls, source=None):
789 786 """For use on immutable objects or with methods returning a copy"""
790 787 return [getattr(cls, k) for k in (source if source else dir(cls))]
791 788
792 789
793 790 dict_non_mutating_methods = ("copy", "keys", "values", "items")
794 791 list_non_mutating_methods = ("copy", "index", "count")
795 792 set_non_mutating_methods = set(dir(set)) & set(dir(frozenset))
796 793
797 794
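The `set_non_mutating_methods` line works because `frozenset` exposes only the read-only half of `set`'s API, so intersecting the two `dir()` results drops every mutator; combined with `_list_methods` (copied here so the snippet stands alone) it yields the method objects used for the allow-list:

```python
def _list_methods(cls, source=None):
    """Collect method objects by name, as in the allow-list construction above."""
    return [getattr(cls, k) for k in (source if source else dir(cls))]

set_non_mutating_methods = set(dir(set)) & set(dir(frozenset))
safe_set_methods = _list_methods(set, set_non_mutating_methods)
# "union", "issubset", etc. survive; "add", "remove", "clear" do not.
```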
798 795 dict_keys: Type[collections.abc.KeysView] = type({}.keys())
799 796
800 797 NUMERICS = {int, float, complex}
801 798
802 799 ALLOWED_CALLS = {
803 800 bytes,
804 801 *_list_methods(bytes),
805 802 dict,
806 803 *_list_methods(dict, dict_non_mutating_methods),
807 804 dict_keys.isdisjoint,
808 805 list,
809 806 *_list_methods(list, list_non_mutating_methods),
810 807 set,
811 808 *_list_methods(set, set_non_mutating_methods),
812 809 frozenset,
813 810 *_list_methods(frozenset),
814 811 range,
815 812 str,
816 813 *_list_methods(str),
817 814 tuple,
818 815 *_list_methods(tuple),
819 816 *NUMERICS,
820 817 *[method for numeric_cls in NUMERICS for method in _list_methods(numeric_cls)],
821 818 collections.deque,
822 819 *_list_methods(collections.deque, list_non_mutating_methods),
823 820 collections.defaultdict,
824 821 *_list_methods(collections.defaultdict, dict_non_mutating_methods),
825 822 collections.OrderedDict,
826 823 *_list_methods(collections.OrderedDict, dict_non_mutating_methods),
827 824 collections.UserDict,
828 825 *_list_methods(collections.UserDict, dict_non_mutating_methods),
829 826 collections.UserList,
830 827 *_list_methods(collections.UserList, list_non_mutating_methods),
831 828 collections.UserString,
832 829 *_list_methods(collections.UserString, dir(str)),
833 830 collections.Counter,
834 831 *_list_methods(collections.Counter, dict_non_mutating_methods),
835 832 collections.Counter.elements,
836 833 collections.Counter.most_common,
837 834 }
838 835
839 836 BUILTIN_GETATTR: Set[MayHaveGetattr] = {
840 837 *BUILTIN_GETITEM,
841 838 set,
842 839 frozenset,
843 840 object,
844 841 type, # `type` handles a lot of generic cases, e.g. numbers as in `int.real`.
845 842 *NUMERICS,
846 843 dict_keys,
847 844 MethodDescriptorType,
848 845 ModuleType,
849 846 }
850 847
851 848
852 849 BUILTIN_OPERATIONS = {*BUILTIN_GETATTR}
853 850
854 851 EVALUATION_POLICIES = {
855 852 "minimal": EvaluationPolicy(
856 853 allow_builtins_access=True,
857 854 allow_locals_access=False,
858 855 allow_globals_access=False,
859 856 allow_item_access=False,
860 857 allow_attr_access=False,
861 858 allowed_calls=set(),
862 859 allow_any_calls=False,
863 860 allow_all_operations=False,
864 861 ),
865 862 "limited": SelectivePolicy(
866 863 allowed_getitem=BUILTIN_GETITEM,
867 864 allowed_getitem_external=SUPPORTED_EXTERNAL_GETITEM,
868 865 allowed_getattr=BUILTIN_GETATTR,
869 866 allowed_getattr_external={
870 867 # pandas Series/Frame implements custom `__getattr__`
871 868 ("pandas", "DataFrame"),
872 869 ("pandas", "Series"),
873 870 },
874 871 allowed_operations=BUILTIN_OPERATIONS,
875 872 allow_builtins_access=True,
876 873 allow_locals_access=True,
877 874 allow_globals_access=True,
878 875 allowed_calls=ALLOWED_CALLS,
879 876 ),
880 877 "unsafe": EvaluationPolicy(
881 878 allow_builtins_access=True,
882 879 allow_locals_access=True,
883 880 allow_globals_access=True,
884 881 allow_attr_access=True,
885 882 allow_item_access=True,
886 883 allow_any_calls=True,
887 884 allow_all_operations=True,
888 885 ),
889 886 }
890 887
891 888
892 889 __all__ = [
893 890 "guarded_eval",
894 891 "eval_node",
895 892 "GuardRejection",
896 893 "EvaluationContext",
897 894 "_unbind_method",
898 895 ]
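The `EVALUATION_POLICIES` above gate evaluation behind allow-lists of types and operations. A minimal standalone sketch of that idea, using hypothetical names (`ALLOWED_GETATTR`, `safe_getattr` are illustrative, not the IPython API):

```python
# Allow-list based attribute access: only explicitly whitelisted builtin
# types may have their attributes read. Names here are hypothetical.
ALLOWED_GETATTR = {int, float, complex, str}

def safe_getattr(obj, name):
    """Permit attribute access only on whitelisted builtin types."""
    if type(obj) not in ALLOWED_GETATTR:
        raise PermissionError(
            f"attribute access on {type(obj).__name__} is not allowed"
        )
    return getattr(obj, name)

assert safe_getattr(3, "real") == 3           # int is whitelisted
assert safe_getattr("ab", "upper")() == "AB"  # bound-method lookup works
```

The real policy objects are far richer (separate controls for item access, calls, and external packages such as pandas), but the principle is the same: deny by default, allow by explicit enumeration.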
@@ -1,798 +1,799
1 1 """DEPRECATED: Input handling and transformation machinery.
2 2
3 3 This module was deprecated in IPython 7.0, in favour of inputtransformer2.
4 4
5 5 The first class in this module, :class:`InputSplitter`, is designed to tell when
6 6 input from a line-oriented frontend is complete and should be executed, and when
7 7 the user should be prompted for another line of code instead. The name 'input
8 8 splitter' is largely for historical reasons.
9 9
10 10 A companion, :class:`IPythonInputSplitter`, provides the same functionality but
11 11 with full support for the extended IPython syntax (magics, system calls, etc).
12 12 The code to actually do these transformations is in :mod:`IPython.core.inputtransformer`.
13 13 :class:`IPythonInputSplitter` feeds the raw code to the transformers in order
14 14 and stores the results.
15 15
16 16 For more details, see the class docstrings below.
17 17 """
18
18 19 from __future__ import annotations
19 20
20 21 from warnings import warn
21 22
22 23 warn('IPython.core.inputsplitter is deprecated since IPython 7 in favor of `IPython.core.inputtransformer2`',
23 24 DeprecationWarning)
24 25
25 26 # Copyright (c) IPython Development Team.
26 27 # Distributed under the terms of the Modified BSD License.
27 28 import ast
28 29 import codeop
29 30 import io
30 31 import re
31 32 import sys
32 33 import tokenize
33 34 import warnings
34 35
35 36 from typing import List, Tuple, Union, Optional, TYPE_CHECKING
36 37 from types import CodeType
37 38
38 39 from IPython.core.inputtransformer import (leading_indent,
39 40 classic_prompt,
40 41 ipy_prompt,
41 42 cellmagic,
42 43 assemble_logical_lines,
43 44 help_end,
44 45 escaped_commands,
45 46 assign_from_magic,
46 47 assign_from_system,
47 48 assemble_python_lines,
48 49 )
49 50 from IPython.utils import tokenutil
50 51
51 52 # These are available in this module for backwards compatibility.
52 53 from IPython.core.inputtransformer import (ESC_SHELL, ESC_SH_CAP, ESC_HELP,
53 54 ESC_HELP2, ESC_MAGIC, ESC_MAGIC2,
54 55 ESC_QUOTE, ESC_QUOTE2, ESC_PAREN, ESC_SEQUENCES)
55 56
56 57 if TYPE_CHECKING:
57 58 from typing_extensions import Self
58 59 #-----------------------------------------------------------------------------
59 60 # Utilities
60 61 #-----------------------------------------------------------------------------
61 62
62 63 # FIXME: These are general-purpose utilities that later can be moved to a
63 64 # general utilities module. Kept here for now because we're being very strict about test
64 65 # coverage with this code, and this lets us ensure that we keep 100% coverage
65 66 # while developing.
66 67
67 68 # compiled regexps for autoindent management
68 69 dedent_re = re.compile('|'.join([
69 70 r'^\s+raise(\s.*)?$', # raise statement (+ space + other stuff, maybe)
70 71 r'^\s+raise\([^\)]*\).*$', # wacky raise with immediate open paren
71 72 r'^\s+return(\s.*)?$', # normal return (+ space + other stuff, maybe)
72 73 r'^\s+return\([^\)]*\).*$', # wacky return with immediate open paren
73 74 r'^\s+pass\s*$', # pass (optionally followed by trailing spaces)
74 75 r'^\s+break\s*$', # break (optionally followed by trailing spaces)
75 76 r'^\s+continue\s*$', # continue (optionally followed by trailing spaces)
76 77 ]))
77 78 ini_spaces_re = re.compile(r'^([ \t\r\f\v]+)')
78 79
79 80 # regexp to match pure comment lines so we don't accidentally insert 'if 1:'
80 81 # before pure comments
81 82 comment_line_re = re.compile(r'^\s*\#')
82 83
83 84
84 85 def num_ini_spaces(s):
85 86 """Return the number of initial spaces in a string.
86 87
87 88 Note that tabs are counted as a single space. For now, we do *not* support
88 89 mixing of tabs and spaces in the user's input.
89 90
90 91 Parameters
91 92 ----------
92 93 s : string
93 94
94 95 Returns
95 96 -------
96 97 n : int
97 98 """
98 99 warnings.warn(
99 100 "`num_ini_spaces` is Pending Deprecation since IPython 8.17. "
100 101 "It is considered for removal in a future version. "
101 102 "Please open an issue if you believe it should be kept.",
102 103 stacklevel=2,
103 104 category=PendingDeprecationWarning,
104 105 )
105 106 ini_spaces = ini_spaces_re.match(s)
106 107 if ini_spaces:
107 108 return ini_spaces.end()
108 109 else:
109 110 return 0
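A standalone sketch of the same logic, showing the documented behaviour that a tab counts as a single space:

```python
import re

# same pattern as ini_spaces_re above: leading blanks, excluding newline
ini_spaces_re = re.compile(r'^([ \t\r\f\v]+)')

def num_ini_spaces(s):
    m = ini_spaces_re.match(s)
    return m.end() if m else 0

assert num_ini_spaces('    x = 1') == 4  # four literal spaces
assert num_ini_spaces('\tx = 1') == 1    # a tab counts as one "space"
assert num_ini_spaces('x = 1') == 0
```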
110 111
111 112 # Fake token types for partial_tokenize:
112 113 INCOMPLETE_STRING = tokenize.N_TOKENS
113 114 IN_MULTILINE_STATEMENT = tokenize.N_TOKENS + 1
114 115
115 116 # The 2 classes below have the same API as TokenInfo, but don't try to look up
116 117 # a token type name that they won't find.
117 118 class IncompleteString:
118 119 type = exact_type = INCOMPLETE_STRING
119 120 def __init__(self, s, start, end, line):
120 121 self.s = s
121 122 self.start = start
122 123 self.end = end
123 124 self.line = line
124 125
125 126 class InMultilineStatement:
126 127 type = exact_type = IN_MULTILINE_STATEMENT
127 128 def __init__(self, pos, line):
128 129 self.s = ''
129 130 self.start = self.end = pos
130 131 self.line = line
131 132
132 133 def partial_tokens(s):
133 134 """Iterate over tokens from a possibly-incomplete string of code.
134 135
135 136 This adds two special token types: INCOMPLETE_STRING and
136 137 IN_MULTILINE_STATEMENT. These can only occur as the last token yielded, and
137 138 represent the two main ways for code to be incomplete.
138 139 """
139 140 readline = io.StringIO(s).readline
140 141 token = tokenize.TokenInfo(tokenize.NEWLINE, '', (1, 0), (1, 0), '')
141 142 try:
142 143 for token in tokenutil.generate_tokens_catch_errors(readline):
143 144 yield token
144 145 except tokenize.TokenError as e:
145 146 # catch EOF error
146 147 lines = s.splitlines(keepends=True)
147 148 end = len(lines), len(lines[-1])
148 149 if 'multi-line string' in e.args[0]:
149 150 l, c = start = token.end
150 151 s = lines[l-1][c:] + ''.join(lines[l:])
151 152 yield IncompleteString(s, start, end, lines[-1])
152 153 elif 'multi-line statement' in e.args[0]:
153 154 yield InMultilineStatement(end, lines[-1])
154 155 else:
155 156 raise
156 157
157 158 def find_next_indent(code) -> int:
158 159 """Find the number of spaces for the next line of indentation"""
159 160 tokens = list(partial_tokens(code))
160 161 if tokens[-1].type == tokenize.ENDMARKER:
161 162 tokens.pop()
162 163 if not tokens:
163 164 return 0
164 165
165 166 while tokens[-1].type in {
166 167 tokenize.DEDENT,
167 168 tokenize.NEWLINE,
168 169 tokenize.COMMENT,
169 170 tokenize.ERRORTOKEN,
170 171 }:
171 172 tokens.pop()
172 173
173 174 # Starting in Python 3.12, the tokenize module adds implicit newlines at the end
174 175 # of input. We need to remove those if we're in a multiline statement
175 176 if tokens[-1].type == IN_MULTILINE_STATEMENT:
176 177 while tokens[-2].type == tokenize.NL:
177 178 tokens.pop(-2)
178 179
179 180
180 181 if tokens[-1].type == INCOMPLETE_STRING:
181 182 # Inside a multiline string
182 183 return 0
183 184
184 185 # Find the indents used before
185 186 prev_indents = [0]
186 187 def _add_indent(n):
187 188 if n != prev_indents[-1]:
188 189 prev_indents.append(n)
189 190
190 191 tokiter = iter(tokens)
191 192 for tok in tokiter:
192 193 if tok.type in {tokenize.INDENT, tokenize.DEDENT}:
193 194 _add_indent(tok.end[1])
194 195 elif (tok.type == tokenize.NL):
195 196 try:
196 197 _add_indent(next(tokiter).start[1])
197 198 except StopIteration:
198 199 break
199 200
200 201 last_indent = prev_indents.pop()
201 202
202 203 # If we've just opened a multiline statement (e.g. 'a = ['), indent more
203 204 if tokens[-1].type == IN_MULTILINE_STATEMENT:
204 205 if tokens[-2].exact_type in {tokenize.LPAR, tokenize.LSQB, tokenize.LBRACE}:
205 206 return last_indent + 4
206 207 return last_indent
207 208
208 209 if tokens[-1].exact_type == tokenize.COLON:
209 210 # Line ends with colon - indent
210 211 return last_indent + 4
211 212
212 213 if last_indent:
213 214 # Examine the last line for dedent cues - statements like return or
214 215 # raise which normally end a block of code.
215 216 last_line_starts = 0
216 217 for i, tok in enumerate(tokens):
217 218 if tok.type == tokenize.NEWLINE:
218 219 last_line_starts = i + 1
219 220
220 221 last_line_tokens = tokens[last_line_starts:]
221 222 names = [t.string for t in last_line_tokens if t.type == tokenize.NAME]
222 223 if names and names[0] in {'raise', 'return', 'pass', 'break', 'continue'}:
223 224 # Find the most recent indentation less than the current level
224 225 for indent in reversed(prev_indents):
225 226 if indent < last_indent:
226 227 return indent
227 228
228 229 return last_indent
229 230
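The rules above can be summarised with a deliberately simplified single-line heuristic (this is a sketch for illustration, not the token-based implementation above, which also tracks prior indent levels and open brackets):

```python
DEDENT_KEYWORDS = {'return', 'raise', 'pass', 'break', 'continue'}

def next_indent_simple(line):
    """Crude one-line approximation of find_next_indent."""
    indent = len(line) - len(line.lstrip())
    stripped = line.strip()
    if stripped.endswith(':'):
        return indent + 4          # a trailing colon opens a block
    first = stripped.split()[0] if stripped else ''
    if first in DEDENT_KEYWORDS:
        return max(indent - 4, 0)  # these statements usually end a block
    return indent

assert next_indent_simple('def f(x):') == 4
assert next_indent_simple('        return x') == 4
assert next_indent_simple('    x = 1') == 4
```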
230 231
231 232 def last_blank(src):
232 233 """Determine if the input source ends in a blank.
233 234
234 235 A blank is either a newline or a line consisting of whitespace.
235 236
236 237 Parameters
237 238 ----------
238 239 src : string
239 240 A single or multiline string.
240 241 """
241 242 if not src: return False
242 243 ll = src.splitlines()[-1]
243 244 return (ll == '') or ll.isspace()
244 245
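Because `splitlines()` drops a single trailing newline, `last_blank` only fires once there is an actual empty (or whitespace-only) final line. Reproducing the function verbatim for a standalone check:

```python
def last_blank(src):
    if not src:
        return False
    ll = src.splitlines()[-1]
    return (ll == '') or ll.isspace()

assert not last_blank('a = 1')     # no trailing blank
assert not last_blank('a = 1\n')   # a bare newline leaves no blank line
assert last_blank('a = 1\n\n')     # trailing empty line
assert last_blank('a = 1\n   ')    # trailing whitespace-only line
```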
245 246
246 247 last_two_blanks_re = re.compile(r'\n\s*\n\s*$', re.MULTILINE)
247 248 last_two_blanks_re2 = re.compile(r'.+\n\s*\n\s+$', re.MULTILINE)
248 249
249 250 def last_two_blanks(src):
250 251 """Determine if the input source ends in two blanks.
251 252
252 253 A blank is either a newline or a line consisting of whitespace.
253 254
254 255 Parameters
255 256 ----------
256 257 src : string
257 258 A single or multiline string.
258 259 """
259 260 if not src: return False
260 261 # The logic here is tricky: I couldn't get a regexp to work and pass all
261 262 # the tests, so I took a different approach: split the source by lines,
262 263 # grab the last two and prepend '###\n' as a stand-in for whatever was in
263 264 # the body before the last two lines. Then, with that structure, it's
264 265 # possible to analyze with two regexps. Not the most elegant solution, but
265 266 # it works. If anyone tries to change this logic, make sure to validate
266 267 # the whole test suite first!
267 268 new_src = '\n'.join(['###\n'] + src.splitlines()[-2:])
268 269 return (bool(last_two_blanks_re.match(new_src)) or
269 270 bool(last_two_blanks_re2.match(new_src)) )
270 271
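The `###\n` stand-in trick above is subtle enough that a standalone reproduction with concrete inputs is worth having:

```python
import re

last_two_blanks_re = re.compile(r'\n\s*\n\s*$', re.MULTILINE)
last_two_blanks_re2 = re.compile(r'.+\n\s*\n\s+$', re.MULTILINE)

def last_two_blanks(src):
    if not src:
        return False
    # prepend a stand-in body so the two regexps see a consistent structure
    new_src = '\n'.join(['###\n'] + src.splitlines()[-2:])
    return (bool(last_two_blanks_re.match(new_src)) or
            bool(last_two_blanks_re2.match(new_src)))

assert last_two_blanks('x = 1\n\n\n')    # two trailing blank lines
assert not last_two_blanks('x = 1\n\n')  # only one trailing blank line
assert not last_two_blanks('x = 1')
```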
271 272
272 273 def remove_comments(src):
273 274 """Remove all comments from input source.
274 275
275 276 Note: comments are NOT recognized inside of strings!
276 277
277 278 Parameters
278 279 ----------
279 280 src : string
280 281 A single or multiline input string.
281 282
282 283 Returns
283 284 -------
284 285 String with all Python comments removed.
285 286 """
286 287
287 288 return re.sub('#.*', '', src)
288 289
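The docstring's caveat is easy to miss: because the substitution is a plain regex, a `#` inside a string literal is stripped as well. A quick demonstration:

```python
import re

def remove_comments(src):
    return re.sub('#.*', '', src)

assert remove_comments('x = 1  # set x') == 'x = 1  '
# the documented caveat: hashes inside strings are stripped too
assert remove_comments("tag = '#python'") == "tag = '"
```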
289 290
290 291 def get_input_encoding():
291 292 """Return the default standard input encoding.
292 293
293 294 If sys.stdin has no encoding, 'ascii' is returned."""
294 295 # There are strange environments for which sys.stdin.encoding is None. We
295 296 # ensure that a valid encoding is returned.
296 297 encoding = getattr(sys.stdin, 'encoding', None)
297 298 if encoding is None:
298 299 encoding = 'ascii'
299 300 return encoding
300 301
301 302 #-----------------------------------------------------------------------------
302 303 # Classes and functions for normal Python syntax handling
303 304 #-----------------------------------------------------------------------------
304 305
305 306 class InputSplitter(object):
306 307 r"""An object that can accumulate lines of Python source before execution.
307 308
308 309 This object is designed to be fed python source line-by-line, using
309 310 :meth:`push`. It will return on each push whether the currently pushed
310 311 code could be executed already. In addition, it provides a method called
311 312 :meth:`push_accepts_more` that can be used to query whether more input
312 313 can be pushed into a single interactive block.
313 314
314 315 This is a simple example of how an interactive terminal-based client can use
315 316 this tool::
316 317
317 318 isp = InputSplitter()
318 319 while isp.push_accepts_more():
319 320 indent = ' '*isp.indent_spaces
320 321 prompt = '>>> ' + indent
321 322 line = indent + raw_input(prompt)
322 323 isp.push(line)
323 324 print('Input source was:\n', isp.source_reset())
324 325 """
325 326 # A cache for storing the current indentation
326 327 # The first value stores the most recently processed source input
327 328 # The second value is the number of spaces for the current indentation
328 329 # If self.source matches the first value, the second value is a valid
329 330 # current indentation. Otherwise, the cache is invalid and the indentation
330 331 # must be recalculated.
331 332 _indent_spaces_cache: Union[Tuple[None, None], Tuple[str, int]] = None, None
332 333 # String, indicating the default input encoding. It is computed by default
333 334 # at initialization time via get_input_encoding(), but it can be reset by a
334 335 # client with specific knowledge of the encoding.
335 336 encoding = ''
336 337 # String where the current full source input is stored, properly encoded.
337 338 # Reading this attribute is the normal way of querying the currently pushed
338 339 # source code, that has been properly encoded.
339 340 source: str = ""
340 341 # Code object corresponding to the current source. It is automatically
341 342 # synced to the source, so it can be queried at any time to obtain the code
342 343 # object; it will be None if the source doesn't compile to valid Python.
343 344 code: Optional[CodeType] = None
344 345
345 346 # Private attributes
346 347
347 348 # List with lines of input accumulated so far
348 349 _buffer: List[str]
349 350 # Command compiler
350 351 _compile: codeop.CommandCompiler
351 352 # Boolean indicating whether the current block is complete
352 353 _is_complete: Optional[bool] = None
353 354 # Boolean indicating whether the current block has an unrecoverable syntax error
354 355 _is_invalid: bool = False
355 356
356 357 def __init__(self) -> None:
357 358 """Create a new InputSplitter instance."""
358 359 self._buffer = []
359 360 self._compile = codeop.CommandCompiler()
360 361 self.encoding = get_input_encoding()
361 362
362 363 def reset(self):
363 364 """Reset the input buffer and associated state."""
364 365 self._buffer[:] = []
365 366 self.source = ''
366 367 self.code = None
367 368 self._is_complete = False
368 369 self._is_invalid = False
369 370
370 371 def source_reset(self):
371 372 """Return the input source and perform a full reset.
372 373 """
373 374 out = self.source
374 375 self.reset()
375 376 return out
376 377
377 378 def check_complete(self, source):
378 379 """Return whether a block of code is ready to execute, or should be continued
379 380
380 381 This is a non-stateful API, and will reset the state of this InputSplitter.
381 382
382 383 Parameters
383 384 ----------
384 385 source : string
385 386 Python input code, which can be multiline.
386 387
387 388 Returns
388 389 -------
389 390 status : str
390 391 One of 'complete', 'incomplete', or 'invalid' if source is not a
391 392 prefix of valid code.
392 393 indent_spaces : int or None
393 394 The number of spaces by which to indent the next line of code. If
394 395 status is not 'incomplete', this is None.
395 396 """
396 397 self.reset()
397 398 try:
398 399 self.push(source)
399 400 except SyntaxError:
400 401 # Transformers in IPythonInputSplitter can raise SyntaxError,
401 402 # which push() will not catch.
402 403 return 'invalid', None
403 404 else:
404 405 if self._is_invalid:
405 406 return 'invalid', None
406 407 elif self.push_accepts_more():
407 408 return 'incomplete', self.get_indent_spaces()
408 409 else:
409 410 return 'complete', None
410 411 finally:
411 412 self.reset()
412 413
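The stdlib `codeop` module, which backs `self._compile`, can approximate the three-way status on its own. This sketch skips everything IPython-specific (transformers, the indentation hint) and is only a rough stand-in for `check_complete`:

```python
import codeop

def check_complete(source):
    """Rough approximation of InputSplitter.check_complete via codeop."""
    try:
        code = codeop.compile_command(source, symbol='exec')
    except SyntaxError:
        return 'invalid'
    return 'complete' if code is not None else 'incomplete'

assert check_complete('a = 1') == 'complete'
assert check_complete('def f(x):') == 'incomplete'
assert check_complete('a = !') == 'invalid'
```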
413 414 def push(self, lines:str) -> bool:
414 415 """Push one or more lines of input.
415 416
416 417 This stores the given lines and returns a status code indicating
417 418 whether the code forms a complete Python block or not.
418 419
419 420 Any exceptions generated in compilation are swallowed, but if an
420 421 exception was produced, the method returns True.
421 422
422 423 Parameters
423 424 ----------
424 425 lines : string
425 426 One or more lines of Python input.
426 427
427 428 Returns
428 429 -------
429 430 is_complete : boolean
430 431 True if the current input source (the result of the current input
431 432 plus prior inputs) forms a complete Python execution block. Note that
432 433 this value is also stored as a private attribute (``_is_complete``), so it
433 434 can be queried at any time.
434 435 """
435 436 assert isinstance(lines, str)
436 437 self._store(lines)
437 438 source = self.source
438 439
439 440 # Before calling _compile(), reset the code object to None so that if an
440 441 # exception is raised in compilation, we don't mislead by having
441 442 # inconsistent code/source attributes.
442 443 self.code, self._is_complete = None, None
443 444 self._is_invalid = False
444 445
445 446 # Honor termination lines properly
446 447 if source.endswith('\\\n'):
447 448 return False
448 449
449 450 try:
450 451 with warnings.catch_warnings():
451 452 warnings.simplefilter('error', SyntaxWarning)
452 453 self.code = self._compile(source, symbol="exec")
453 454 # Invalid syntax can produce any of a number of different errors from
454 455 # inside the compiler, so we have to catch them all. Syntax errors
455 456 # immediately produce a 'ready' block, so the invalid Python can be
456 457 # sent to the kernel for evaluation with possible ipython
457 458 # special-syntax conversion.
458 459 except (SyntaxError, OverflowError, ValueError, TypeError,
459 460 MemoryError, SyntaxWarning):
460 461 self._is_complete = True
461 462 self._is_invalid = True
462 463 else:
463 464 # Compilation didn't produce any exceptions (though it may not have
464 465 # given a complete code object)
465 466 self._is_complete = self.code is not None
466 467
467 468 return self._is_complete
468 469
469 470 def push_accepts_more(self):
470 471 """Return whether a block of interactive input can accept more input.
471 472
472 473 This method is meant to be used by line-oriented frontends, who need to
473 474 guess whether a block is complete or not based solely on prior and
474 475 current input lines. The InputSplitter considers it has a complete
475 476 interactive block and will not accept more input when either:
476 477
477 478 * A SyntaxError is raised
478 479
479 480 * The code is complete and consists of a single line or a single
480 481 non-compound statement
481 482
482 483 * The code is complete and has a blank line at the end
483 484
484 485 If the current input produces a syntax error, this method immediately
485 486 returns False but does *not* raise the syntax error exception, as
486 487 typically clients will want to send invalid syntax to an execution
487 488 backend which might convert the invalid syntax into valid Python via
488 489 one of the dynamic IPython mechanisms.
489 490 """
490 491
491 492 # With incomplete input, unconditionally accept more
492 493 # A syntax error also sets _is_complete to True - see push()
493 494 if not self._is_complete:
494 495 #print("Not complete") # debug
495 496 return True
496 497
497 498 # The user can make any (complete) input execute by leaving a blank line
498 499 last_line = self.source.splitlines()[-1]
499 500 if (not last_line) or last_line.isspace():
500 501 #print("Blank line") # debug
501 502 return False
502 503
503 504 # If there's just a single line or AST node, and we're flush left, as is
504 505 # the case after a simple statement such as 'a=1', we want to execute it
505 506 # straight away.
506 507 if self.get_indent_spaces() == 0:
507 508 if len(self.source.splitlines()) <= 1:
508 509 return False
509 510
510 511 try:
511 512 code_ast = ast.parse("".join(self._buffer))
512 513 except Exception:
513 514 #print("Can't parse AST") # debug
514 515 return False
515 516 else:
516 517 if len(code_ast.body) == 1 and \
517 518 not hasattr(code_ast.body[0], 'body'):
518 519 #print("Simple statement") # debug
519 520 return False
520 521
521 522 # General fallback - accept more code
522 523 return True
523 524
524 525 def get_indent_spaces(self) -> int:
525 526 sourcefor, n = self._indent_spaces_cache
526 527 if sourcefor == self.source:
527 528 assert n is not None
528 529 return n
529 530
530 531 # self.source always has a trailing newline
531 532 n = find_next_indent(self.source[:-1])
532 533 self._indent_spaces_cache = (self.source, n)
533 534 return n
534 535
535 536 # Backwards compatibility. I think all code that used .indent_spaces was
536 537 # inside IPython, but we can leave this here until IPython 7 in case any
537 538 # other modules are using it. -TK, November 2017
538 539 indent_spaces = property(get_indent_spaces)
539 540
540 541 def _store(self, lines, buffer=None, store='source'):
541 542 """Store one or more lines of input.
542 543
543 544 If input lines are not newline-terminated, a newline is automatically
544 545 appended."""
545 546
546 547 if buffer is None:
547 548 buffer = self._buffer
548 549
549 550 if lines.endswith('\n'):
550 551 buffer.append(lines)
551 552 else:
552 553 buffer.append(lines+'\n')
553 554 setattr(self, store, self._set_source(buffer))
554 555
555 556 def _set_source(self, buffer):
556 557 return u''.join(buffer)
557 558
558 559
559 560 class IPythonInputSplitter(InputSplitter):
560 561 """An input splitter that recognizes all of IPython's special syntax."""
561 562
562 563 # String with raw, untransformed input.
563 564 source_raw = ''
564 565
565 566 # Flag to track when a transformer has stored input that it hasn't given
566 567 # back yet.
567 568 transformer_accumulating = False
568 569
569 570 # Flag to track when assemble_python_lines has stored input that it hasn't
570 571 # given back yet.
571 572 within_python_line = False
572 573
573 574 # Private attributes
574 575
575 576 # List with lines of raw input accumulated so far.
576 577 _buffer_raw: List[str]
577 578
578 579 def __init__(self, line_input_checker=True, physical_line_transforms=None,
579 580 logical_line_transforms=None, python_line_transforms=None):
580 581 super(IPythonInputSplitter, self).__init__()
581 582 self._buffer_raw = []
582 583 self._validate = True
583 584
584 585 if physical_line_transforms is not None:
585 586 self.physical_line_transforms = physical_line_transforms
586 587 else:
587 588 self.physical_line_transforms = [
588 589 leading_indent(),
589 590 classic_prompt(),
590 591 ipy_prompt(),
591 592 cellmagic(end_on_blank_line=line_input_checker),
592 593 ]
593 594
594 595 self.assemble_logical_lines = assemble_logical_lines()
595 596 if logical_line_transforms is not None:
596 597 self.logical_line_transforms = logical_line_transforms
597 598 else:
598 599 self.logical_line_transforms = [
599 600 help_end(),
600 601 escaped_commands(),
601 602 assign_from_magic(),
602 603 assign_from_system(),
603 604 ]
604 605
605 606 self.assemble_python_lines = assemble_python_lines()
606 607 if python_line_transforms is not None:
607 608 self.python_line_transforms = python_line_transforms
608 609 else:
609 610 # We don't use any of these at present
610 611 self.python_line_transforms = []
611 612
612 613 @property
613 614 def transforms(self):
614 615 "Quick access to all transformers."
615 616 return self.physical_line_transforms + \
616 617 [self.assemble_logical_lines] + self.logical_line_transforms + \
617 618 [self.assemble_python_lines] + self.python_line_transforms
618 619
619 620 @property
620 621 def transforms_in_use(self):
621 622 """Transformers, excluding logical line transformers if we're in a
622 623 Python line."""
623 624 t = self.physical_line_transforms[:]
624 625 if not self.within_python_line:
625 626 t += [self.assemble_logical_lines] + self.logical_line_transforms
626 627 return t + [self.assemble_python_lines] + self.python_line_transforms
627 628
628 629 def reset(self):
629 630 """Reset the input buffer and associated state."""
630 631 super(IPythonInputSplitter, self).reset()
631 632 self._buffer_raw[:] = []
632 633 self.source_raw = ''
633 634 self.transformer_accumulating = False
634 635 self.within_python_line = False
635 636
636 637 for t in self.transforms:
637 638 try:
638 639 t.reset()
639 640 except SyntaxError:
640 641 # Nothing that calls reset() expects to handle transformer
641 642 # errors
642 643 pass
643 644
644 645 def flush_transformers(self: Self):
645 646 def _flush(transform, outs: List[str]):
646 647 """yield transformed lines
647 648
648 649 always strings, never None
649 650
650 651 transform: the current transform
651 652 outs: an iterable of previously transformed inputs.
652 653 Each may be multiline, which will be passed
653 654 one line at a time to transform.
654 655 """
655 656 for out in outs:
656 657 for line in out.splitlines():
657 658 # push one line at a time
658 659 tmp = transform.push(line)
659 660 if tmp is not None:
660 661 yield tmp
661 662
662 663 # reset the transform
663 664 tmp = transform.reset()
664 665 if tmp is not None:
665 666 yield tmp
666 667
667 668 out: List[str] = []
668 669 for t in self.transforms_in_use:
669 670 out = _flush(t, out)
670 671
671 672 out = list(out)
672 673 if out:
673 674 self._store('\n'.join(out))
674 675
675 676 def raw_reset(self):
676 677 """Return raw input only and perform a full reset.
677 678 """
678 679 out = self.source_raw
679 680 self.reset()
680 681 return out
681 682
682 683 def source_reset(self):
683 684 try:
684 685 self.flush_transformers()
685 686 return self.source
686 687 finally:
687 688 self.reset()
688 689
689 690 def push_accepts_more(self):
690 691 if self.transformer_accumulating:
691 692 return True
692 693 else:
693 694 return super(IPythonInputSplitter, self).push_accepts_more()
694 695
695 696 def transform_cell(self, cell):
696 697 """Process and translate a cell of input.
697 698 """
698 699 self.reset()
699 700 try:
700 701 self.push(cell)
701 702 self.flush_transformers()
702 703 return self.source
703 704 finally:
704 705 self.reset()
705 706
706 707 def push(self, lines:str) -> bool:
707 708 """Push one or more lines of IPython input.
708 709
709 710 This stores the given lines and returns a status code indicating
710 711 whether the code forms a complete Python block or not, after processing
711 712 all input lines for special IPython syntax.
712 713
713 714 Any exceptions generated in compilation are swallowed, but if an
714 715 exception was produced, the method returns True.
715 716
716 717 Parameters
717 718 ----------
718 719 lines : string
719 720 One or more lines of Python input.
720 721
721 722 Returns
722 723 -------
723 724 is_complete : boolean
724 725 True if the current input source (the result of the current input
725 726 plus prior inputs) forms a complete Python execution block. Note that
726 727 this value is also stored as a private attribute (_is_complete), so it
727 728 can be queried at any time.
728 729 """
729 730 assert isinstance(lines, str)
730 731 # We must ensure all input is pure unicode
731 732 # ''.splitlines() --> [], but we need to push the empty line to transformers
732 733 lines_list = lines.splitlines()
733 734 if not lines_list:
734 735 lines_list = ['']
735 736
736 737 # Store raw source before applying any transformations to it. Note
737 738 # that this must be done *after* the reset() call that would otherwise
738 739 # flush the buffer.
739 740 self._store(lines, self._buffer_raw, 'source_raw')
740 741
741 742 transformed_lines_list = []
742 743 for line in lines_list:
743 744 transformed = self._transform_line(line)
744 745 if transformed is not None:
745 746 transformed_lines_list.append(transformed)
746 747
747 748 if transformed_lines_list:
748 749 transformed_lines = '\n'.join(transformed_lines_list)
749 750 return super(IPythonInputSplitter, self).push(transformed_lines)
750 751 else:
751 752 # Got nothing back from transformers - they must be waiting for
752 753 # more input.
753 754 return False
754 755
755 756 def _transform_line(self, line):
756 757 """Push a line of input code through the various transformers.
757 758
758 759 Returns any output from the transformers, or None if a transformer
759 760 is accumulating lines.
760 761
761 762 Sets self.transformer_accumulating as a side effect.
762 763 """
763 764 def _accumulating(dbg):
764 765 #print(dbg)
765 766 self.transformer_accumulating = True
766 767 return None
767 768
768 769 for transformer in self.physical_line_transforms:
769 770 line = transformer.push(line)
770 771 if line is None:
771 772 return _accumulating(transformer)
772 773
773 774 if not self.within_python_line:
774 775 line = self.assemble_logical_lines.push(line)
775 776 if line is None:
776 777 return _accumulating('acc logical line')
777 778
778 779 for transformer in self.logical_line_transforms:
779 780 line = transformer.push(line)
780 781 if line is None:
781 782 return _accumulating(transformer)
782 783
783 784 line = self.assemble_python_lines.push(line)
784 785 if line is None:
785 786 self.within_python_line = True
786 787 return _accumulating('acc python line')
787 788 else:
788 789 self.within_python_line = False
789 790
790 791 for transformer in self.python_line_transforms:
791 792 line = transformer.push(line)
792 793 if line is None:
793 794 return _accumulating(transformer)
794 795
795 796 #print("transformers clear") #debug
796 797 self.transformer_accumulating = False
797 798 return line
798 799
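The chaining in `_transform_line` can be sketched with toy transformers. These classes are illustrative stand-ins, not IPython's actual transformer implementations:

```python
class StripClassicPrompt:
    """Drop a leading '>>> ' classic-prompt marker."""
    def push(self, line):
        return line[4:] if line.startswith('>>> ') else line

class EscapedMagic:
    """Rewrite a '%magic args' line into a hypothetical function call."""
    def push(self, line):
        if line.startswith('%'):
            name, _, rest = line[1:].partition(' ')
            return f"run_line_magic({name!r}, {rest!r})"
        return line

def transform_line(line, transformers):
    for t in transformers:
        line = t.push(line)
        if line is None:   # a transformer is accumulating input
            return None
    return line

out = transform_line('>>> %time x = 1', [StripClassicPrompt(), EscapedMagic()])
assert out == "run_line_magic('time', 'x = 1')"
```

As in the real pipeline, the first transformer to return `None` short-circuits the chain, signalling that it is holding the line until more input arrives.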
@@ -1,546 +1,545
1 1 """Tests for the Formatters."""
2 2
3 3 from math import pi
4 4
5 5 try:
6 6 import numpy
7 7 except ImportError:
8 8 numpy = None
9 9 import pytest
10 10
11 11 from IPython import get_ipython
12 12 from traitlets.config import Config
13 13 from IPython.core.formatters import (
14 14 PlainTextFormatter, HTMLFormatter, PDFFormatter, _mod_name_key,
15 15 DisplayFormatter, JSONFormatter,
16 16 )
17 17 from IPython.utils.io import capture_output
18 18
19 19 class A(object):
20 20 def __repr__(self):
21 21 return 'A()'
22 22
23 23 class B(A):
24 24 def __repr__(self):
25 25 return 'B()'
26 26
27 27 class C:
28 28 pass
29 29
30 30 class BadRepr(object):
31 31 def __repr__(self):
32 32 raise ValueError("bad repr")
33 33
34 34 class BadPretty(object):
35 35 _repr_pretty_ = None
36 36
37 37 class GoodPretty(object):
38 38 def _repr_pretty_(self, pp, cycle):
39 39 pp.text('foo')
40 40
41 41 def __repr__(self):
42 42 return 'GoodPretty()'
43 43
44 44 def foo_printer(obj, pp, cycle):
45 45 pp.text('foo')
46 46
47 47 def test_pretty():
48 48 f = PlainTextFormatter()
49 49 f.for_type(A, foo_printer)
50 50 assert f(A()) == "foo"
51 51 assert f(B()) == "B()"
52 52 assert f(GoodPretty()) == "foo"
53 53 # Just don't raise an exception for the following:
54 54 f(BadPretty())
55 55
56 56 f.pprint = False
57 57 assert f(A()) == "A()"
58 58 assert f(B()) == "B()"
59 59 assert f(GoodPretty()) == "GoodPretty()"
60 60
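The `_repr_pretty_(self, pp, cycle)` protocol exercised by `GoodPretty` can be shown in isolation. `FakePrinter` below is a minimal stand-in for the printer object (only `text()` is modelled; the real pretty-printer offers more methods):

```python
class FakePrinter:
    """Minimal stand-in for the pretty-printer passed as `pp`."""
    def __init__(self):
        self.parts = []
    def text(self, s):
        self.parts.append(s)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def _repr_pretty_(self, pp, cycle):
        # cycle=True means the object is being printed inside itself
        pp.text('Point(...)' if cycle else f'Point({self.x}, {self.y})')

pp = FakePrinter()
Point(1, 2)._repr_pretty_(pp, cycle=False)
assert ''.join(pp.parts) == 'Point(1, 2)'
```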
61 61
62 62 def test_deferred():
63 63 f = PlainTextFormatter()
64 64
65 65 def test_precision():
66 66 """test various values for float_precision."""
67 67 f = PlainTextFormatter()
68 68 assert f(pi) == repr(pi)
69 69 f.float_precision = 0
70 70 if numpy:
71 71 po = numpy.get_printoptions()
72 72 assert po["precision"] == 0
73 73 assert f(pi) == "3"
74 74 f.float_precision = 2
75 75 if numpy:
76 76 po = numpy.get_printoptions()
77 77 assert po["precision"] == 2
78 78 assert f(pi) == "3.14"
79 79 f.float_precision = "%g"
80 80 if numpy:
81 81 po = numpy.get_printoptions()
82 82 assert po["precision"] == 2
83 83 assert f(pi) == "3.14159"
84 84 f.float_precision = "%e"
85 85 assert f(pi) == "3.141593e+00"
86 86 f.float_precision = ""
87 87 if numpy:
88 88 po = numpy.get_printoptions()
89 89 assert po["precision"] == 8
90 90 assert f(pi) == repr(pi)
91 91
92 92
93 93 def test_bad_precision():
94 94 """test various invalid values for float_precision."""
95 95 f = PlainTextFormatter()
96 96 def set_fp(p):
97 97 f.float_precision = p
98 98
99 99 pytest.raises(ValueError, set_fp, "%")
100 100 pytest.raises(ValueError, set_fp, "%.3f%i")
101 101 pytest.raises(ValueError, set_fp, "foo")
102 102 pytest.raises(ValueError, set_fp, -1)
103 103
104 104 def test_for_type():
105 105 f = PlainTextFormatter()
106 106
107 107 # initial return, None
108 108 assert f.for_type(C, foo_printer) is None
109 109 # no func queries
110 110 assert f.for_type(C) is foo_printer
111 111 # shouldn't change anything
112 112 assert f.for_type(C) is foo_printer
113 113 # None should do the same
114 114 assert f.for_type(C, None) is foo_printer
115 115 assert f.for_type(C, None) is foo_printer
116 116
117 117 def test_for_type_string():
118 118 f = PlainTextFormatter()
119 119
120 120 type_str = '%s.%s' % (C.__module__, 'C')
121 121
122 122 # initial return, None
123 123 assert f.for_type(type_str, foo_printer) is None
124 124 # no func queries
125 125 assert f.for_type(type_str) is foo_printer
126 126 assert _mod_name_key(C) in f.deferred_printers
127 127 assert f.for_type(C) is foo_printer
128 128 assert _mod_name_key(C) not in f.deferred_printers
129 129 assert C in f.type_printers
130 130
131 131 def test_for_type_by_name():
132 132 f = PlainTextFormatter()
133 133
134 134 mod = C.__module__
135 135
136 136 # initial return, None
137 137 assert f.for_type_by_name(mod, "C", foo_printer) is None
138 138 # no func queries
139 139 assert f.for_type_by_name(mod, "C") is foo_printer
140 140 # shouldn't change anything
141 141 assert f.for_type_by_name(mod, "C") is foo_printer
142 142 # None should do the same
143 143 assert f.for_type_by_name(mod, "C", None) is foo_printer
144 144 assert f.for_type_by_name(mod, "C", None) is foo_printer
145 145
146 146
147 147 def test_lookup():
148 148 f = PlainTextFormatter()
149 149
150 150 f.for_type(C, foo_printer)
151 151 assert f.lookup(C()) is foo_printer
152 152 with pytest.raises(KeyError):
153 153 f.lookup(A())
154 154
155 155 def test_lookup_string():
156 156 f = PlainTextFormatter()
157 157 type_str = '%s.%s' % (C.__module__, 'C')
158 158
159 159 f.for_type(type_str, foo_printer)
160 160 assert f.lookup(C()) is foo_printer
161 161 # should move from deferred to imported dict
162 162 assert _mod_name_key(C) not in f.deferred_printers
163 163 assert C in f.type_printers
164 164
165 165 def test_lookup_by_type():
166 166 f = PlainTextFormatter()
167 167 f.for_type(C, foo_printer)
168 168 assert f.lookup_by_type(C) is foo_printer
169 169 with pytest.raises(KeyError):
170 170 f.lookup_by_type(A)
171 171
172 172 def test_lookup_by_type_string():
173 173 f = PlainTextFormatter()
174 174 type_str = '%s.%s' % (C.__module__, 'C')
175 175 f.for_type(type_str, foo_printer)
176 176
177 177 # verify insertion
178 178 assert _mod_name_key(C) in f.deferred_printers
179 179 assert C not in f.type_printers
180 180
181 181 assert f.lookup_by_type(type_str) is foo_printer
182 182 # lookup by string doesn't cause import
183 183 assert _mod_name_key(C) in f.deferred_printers
184 184 assert C not in f.type_printers
185 185
186 186 assert f.lookup_by_type(C) is foo_printer
187 187 # should move from deferred to imported dict
188 188 assert _mod_name_key(C) not in f.deferred_printers
189 189 assert C in f.type_printers
190 190
191 191 def test_in_formatter():
192 192 f = PlainTextFormatter()
193 193 f.for_type(C, foo_printer)
194 194 type_str = '%s.%s' % (C.__module__, 'C')
195 195 assert C in f
196 196 assert type_str in f
197 197
198 198 def test_string_in_formatter():
199 199 f = PlainTextFormatter()
200 200 type_str = '%s.%s' % (C.__module__, 'C')
201 201 f.for_type(type_str, foo_printer)
202 202 assert type_str in f
203 203 assert C in f
204 204
205 205 def test_pop():
206 206 f = PlainTextFormatter()
207 207 f.for_type(C, foo_printer)
208 208 assert f.lookup_by_type(C) is foo_printer
209 209 assert f.pop(C, None) is foo_printer
210 210 f.for_type(C, foo_printer)
211 211 assert f.pop(C) is foo_printer
212 212 with pytest.raises(KeyError):
213 213 f.lookup_by_type(C)
214 214 with pytest.raises(KeyError):
215 215 f.pop(C)
216 216 with pytest.raises(KeyError):
217 217 f.pop(A)
218 218 assert f.pop(A, None) is None
219 219
220 220 def test_pop_string():
221 221 f = PlainTextFormatter()
222 222 type_str = '%s.%s' % (C.__module__, 'C')
223 223
224 224 with pytest.raises(KeyError):
225 225 f.pop(type_str)
226 226
227 227 f.for_type(type_str, foo_printer)
228 228 f.pop(type_str)
229 229 with pytest.raises(KeyError):
230 230 f.lookup_by_type(C)
231 231 with pytest.raises(KeyError):
232 232 f.pop(type_str)
233 233
234 234 f.for_type(C, foo_printer)
235 235 assert f.pop(type_str, None) is foo_printer
236 236 with pytest.raises(KeyError):
237 237 f.lookup_by_type(C)
238 238 with pytest.raises(KeyError):
239 239 f.pop(type_str)
240 240 assert f.pop(type_str, None) is None
241 241
242 242
243 243 def test_error_method():
244 244 f = HTMLFormatter()
245 245 class BadHTML(object):
246 246 def _repr_html_(self):
247 247 raise ValueError("Bad HTML")
248 248 bad = BadHTML()
249 249 with capture_output() as captured:
250 250 result = f(bad)
251 251 assert result is None
252 252 assert "Traceback" in captured.stdout
253 253 assert "Bad HTML" in captured.stdout
254 254 assert "_repr_html_" in captured.stdout
255 255
256 256 def test_nowarn_notimplemented():
257 257 f = HTMLFormatter()
258 258 class HTMLNotImplemented(object):
259 259 def _repr_html_(self):
260 260 raise NotImplementedError
261 261 h = HTMLNotImplemented()
262 262 with capture_output() as captured:
263 263 result = f(h)
264 264 assert result is None
265 265 assert "" == captured.stderr
266 266 assert "" == captured.stdout
267 267
268 268
269 269 def test_warn_error_for_type():
270 270 f = HTMLFormatter()
271 271 f.for_type(int, lambda i: name_error)
272 272 with capture_output() as captured:
273 273 result = f(5)
274 274 assert result is None
275 275 assert "Traceback" in captured.stdout
276 276 assert "NameError" in captured.stdout
277 277 assert "name_error" in captured.stdout
278 278
279 279 def test_error_pretty_method():
280 280 f = PlainTextFormatter()
281 281 class BadPretty(object):
282 282 def _repr_pretty_(self):
283 283 return "hello"
284 284 bad = BadPretty()
285 285 with capture_output() as captured:
286 286 result = f(bad)
287 287 assert result is None
288 288 assert "Traceback" in captured.stdout
289 289 assert "_repr_pretty_" in captured.stdout
290 290 assert "given" in captured.stdout
291 291 assert "argument" in captured.stdout
292 292
293 293
294 294 def test_bad_repr_traceback():
295 295 f = PlainTextFormatter()
296 296 bad = BadRepr()
297 297 with capture_output() as captured:
298 298 result = f(bad)
299 299 # catches error, returns None
300 300 assert result is None
301 301 assert "Traceback" in captured.stdout
302 302 assert "__repr__" in captured.stdout
303 303 assert "ValueError" in captured.stdout
304 304
305 305
306 306 class MakePDF(object):
307 307 def _repr_pdf_(self):
308 308 return 'PDF'
309 309
310 310 def test_pdf_formatter():
311 311 pdf = MakePDF()
312 312 f = PDFFormatter()
313 313 assert f(pdf) == "PDF"
314 314
315 315
316 316 def test_print_method_bound():
317 317 f = HTMLFormatter()
318 318 class MyHTML(object):
319 319 def _repr_html_(self):
320 320 return "hello"
321 321 with capture_output() as captured:
322 322 result = f(MyHTML)
323 323 assert result is None
324 324 assert "FormatterWarning" not in captured.stderr
325 325
326 326 with capture_output() as captured:
327 327 result = f(MyHTML())
328 328 assert result == "hello"
329 329 assert captured.stderr == ""
330 330
331 331
332 332 def test_print_method_weird():
333 333
334 334 class TextMagicHat(object):
335 335 def __getattr__(self, key):
336 336 return key
337 337
338 338 f = HTMLFormatter()
339 339
340 340 text_hat = TextMagicHat()
341 341 assert text_hat._repr_html_ == "_repr_html_"
342 342 with capture_output() as captured:
343 343 result = f(text_hat)
344 344
345 345 assert result is None
346 346 assert "FormatterWarning" not in captured.stderr
347 347
348 348 class CallableMagicHat(object):
349 349 def __getattr__(self, key):
350 350 return lambda : key
351 351
352 352 call_hat = CallableMagicHat()
353 353 with capture_output() as captured:
354 354 result = f(call_hat)
355 355
356 356 assert result is None
357 357
358 358 class BadReprArgs(object):
359 359 def _repr_html_(self, extra, args):
360 360 return "html"
361 361
362 362 bad = BadReprArgs()
363 363 with capture_output() as captured:
364 364 result = f(bad)
365 365
366 366 assert result is None
367 367 assert "FormatterWarning" not in captured.stderr
368 368
369 369
370 370 def test_format_config():
371 371 """config objects don't pretend to support fancy reprs with lazy attrs"""
372 372 f = HTMLFormatter()
373 373 cfg = Config()
374 374 with capture_output() as captured:
375 375 result = f(cfg)
376 376 assert result is None
377 377 assert captured.stderr == ""
378 378
379 379 with capture_output() as captured:
380 380 result = f(Config)
381 381 assert result is None
382 382 assert captured.stderr == ""
383 383
384 384
385 385 def test_pretty_max_seq_length():
386 386 f = PlainTextFormatter(max_seq_length=1)
387 387 lis = list(range(3))
388 388 text = f(lis)
389 389 assert text == "[0, ...]"
390 390 f.max_seq_length = 0
391 391 text = f(lis)
392 392 assert text == "[0, 1, 2]"
393 393 text = f(list(range(1024)))
394 394 lines = text.splitlines()
395 395 assert len(lines) == 1024
396 396
397 397
398 398 def test_ipython_display_formatter():
399 399 """Objects with _ipython_display_ defined bypass other formatters"""
400 400 f = get_ipython().display_formatter
401 401 catcher = []
402 402 class SelfDisplaying(object):
403 403 def _ipython_display_(self):
404 404 catcher.append(self)
405 405
406 406 class NotSelfDisplaying(object):
407 407 def __repr__(self):
408 408 return "NotSelfDisplaying"
409 409
410 410 def _ipython_display_(self):
411 411 raise NotImplementedError
412 412
413 413 save_enabled = f.ipython_display_formatter.enabled
414 414 f.ipython_display_formatter.enabled = True
415 415
416 416 yes = SelfDisplaying()
417 417 no = NotSelfDisplaying()
418 418
419 419 d, md = f.format(no)
420 420 assert d == {"text/plain": repr(no)}
421 421 assert md == {}
422 422 assert catcher == []
423 423
424 424 d, md = f.format(yes)
425 425 assert d == {}
426 426 assert md == {}
427 427 assert catcher == [yes]
428 428
429 429 f.ipython_display_formatter.enabled = save_enabled
430 430
431 431
432 432 def test_repr_mime():
433 433 class HasReprMime(object):
434 434 def _repr_mimebundle_(self, include=None, exclude=None):
435 435 return {
436 436 'application/json+test.v2': {
437 437 'x': 'y'
438 438 },
439 439 'plain/text' : '<HasReprMime>',
440 440 'image/png' : 'i-overwrite'
441 441 }
442 442
443 443 def _repr_png_(self):
444 444 return 'should-be-overwritten'
445 445 def _repr_html_(self):
446 446 return '<b>hi!</b>'
447 447
448 448 f = get_ipython().display_formatter
449 449 html_f = f.formatters['text/html']
450 450 save_enabled = html_f.enabled
451 451 html_f.enabled = True
452 452 obj = HasReprMime()
453 453 d, md = f.format(obj)
454 454 html_f.enabled = save_enabled
455 455
456 456 assert sorted(d) == [
457 457 "application/json+test.v2",
458 458 "image/png",
459 459 "plain/text",
460 460 "text/html",
461 461 "text/plain",
462 462 ]
463 463 assert md == {}
464 464
465 465 d, md = f.format(obj, include={"image/png"})
466 466 assert list(d.keys()) == [
467 467 "image/png"
468 468 ], "Include should filter out even things from repr_mimebundle"
469 469
470 471 assert d["image/png"] == "i-overwrite", "_repr_mimebundle_ takes precedence"
471 471
472 472
473 473 def test_pass_correct_include_exclude():
474 474 class Tester(object):
475 475
476 476 def __init__(self, include=None, exclude=None):
477 477 self.include = include
478 478 self.exclude = exclude
479 479
480 480 def _repr_mimebundle_(self, include, exclude, **kwargs):
481 481 if include and (include != self.include):
482 482 raise ValueError('include got modified: display() may be broken.')
483 483 if exclude and (exclude != self.exclude):
484 484 raise ValueError('exclude got modified: display() may be broken.')
485 485
486 486 return None
487 487
488 488 include = {'a', 'b', 'c'}
489 489 exclude = {'c', 'e' , 'f'}
490 490
491 491 f = get_ipython().display_formatter
492 492 f.format(Tester(include=include, exclude=exclude), include=include, exclude=exclude)
493 493 f.format(Tester(exclude=exclude), exclude=exclude)
494 494 f.format(Tester(include=include), include=include)
495 495
496 496
497 497 def test_repr_mime_meta():
498 498 class HasReprMimeMeta(object):
499 499 def _repr_mimebundle_(self, include=None, exclude=None):
500 500 data = {
501 501 'image/png': 'base64-image-data',
502 502 }
503 503 metadata = {
504 504 'image/png': {
505 505 'width': 5,
506 506 'height': 10,
507 507 }
508 508 }
509 509 return (data, metadata)
510 510
511 511 f = get_ipython().display_formatter
512 512 obj = HasReprMimeMeta()
513 513 d, md = f.format(obj)
514 514 assert sorted(d) == ["image/png", "text/plain"]
515 515 assert md == {
516 516 "image/png": {
517 517 "width": 5,
518 518 "height": 10,
519 519 }
520 520 }
521 521
522 522
523 523 def test_repr_mime_failure():
524 524 class BadReprMime(object):
525 525 def _repr_mimebundle_(self, include=None, exclude=None):
526 526 raise RuntimeError
527 527
528 528 f = get_ipython().display_formatter
529 529 obj = BadReprMime()
530 530 d, md = f.format(obj)
531 531 assert "text/plain" in d
532 532
533 533
534 534 def test_custom_repr_namedtuple_partialmethod():
535 535 from functools import partialmethod
536 536 from typing import NamedTuple
537 537
538 class Foo(NamedTuple):
539 ...
538 class Foo(NamedTuple): ...
540 539
541 540 Foo.__repr__ = partialmethod(lambda obj: "Hello World")
542 541 foo = Foo()
543 542
544 543 f = PlainTextFormatter()
545 544 assert f.pprint
546 545 assert f(foo) == "Hello World"
@@ -1,447 +1,448
1 1 """Tests for the token-based transformers in IPython.core.inputtransformer2
2 2
3 3 Line-based transformers are the simpler ones; token-based transformers are
4 4 more complex. See test_inputtransformer2_line for tests for line-based
5 5 transformations.
6 6 """
7
7 8 import platform
8 9 import string
9 10 import sys
10 11 from textwrap import dedent
11 12
12 13 import pytest
13 14
14 15 from IPython.core import inputtransformer2 as ipt2
15 16 from IPython.core.inputtransformer2 import _find_assign_op, make_tokens_by_line
16 17
17 18 MULTILINE_MAGIC = (
18 19 """\
19 20 a = f()
20 21 %foo \\
21 22 bar
22 23 g()
23 24 """.splitlines(
24 25 keepends=True
25 26 ),
26 27 (2, 0),
27 28 """\
28 29 a = f()
29 30 get_ipython().run_line_magic('foo', ' bar')
30 31 g()
31 32 """.splitlines(
32 33 keepends=True
33 34 ),
34 35 )
35 36
36 37 INDENTED_MAGIC = (
37 38 """\
38 39 for a in range(5):
39 40 %ls
40 41 """.splitlines(
41 42 keepends=True
42 43 ),
43 44 (2, 4),
44 45 """\
45 46 for a in range(5):
46 47 get_ipython().run_line_magic('ls', '')
47 48 """.splitlines(
48 49 keepends=True
49 50 ),
50 51 )
51 52
52 53 CRLF_MAGIC = (
53 54 ["a = f()\n", "%ls\r\n", "g()\n"],
54 55 (2, 0),
55 56 ["a = f()\n", "get_ipython().run_line_magic('ls', '')\n", "g()\n"],
56 57 )
57 58
58 59 MULTILINE_MAGIC_ASSIGN = (
59 60 """\
60 61 a = f()
61 62 b = %foo \\
62 63 bar
63 64 g()
64 65 """.splitlines(
65 66 keepends=True
66 67 ),
67 68 (2, 4),
68 69 """\
69 70 a = f()
70 71 b = get_ipython().run_line_magic('foo', ' bar')
71 72 g()
72 73 """.splitlines(
73 74 keepends=True
74 75 ),
75 76 )
76 77
77 78 MULTILINE_SYSTEM_ASSIGN = ("""\
78 79 a = f()
79 80 b = !foo \\
80 81 bar
81 82 g()
82 83 """.splitlines(keepends=True), (2, 4), """\
83 84 a = f()
84 85 b = get_ipython().getoutput('foo bar')
85 86 g()
86 87 """.splitlines(keepends=True))
87 88
88 89 #####
89 90
90 91 MULTILINE_SYSTEM_ASSIGN_AFTER_DEDENT = (
91 92 """\
92 93 def test():
93 94 for i in range(1):
94 95 print(i)
95 96 res =! ls
96 97 """.splitlines(
97 98 keepends=True
98 99 ),
99 100 (4, 7),
100 101 """\
101 102 def test():
102 103 for i in range(1):
103 104 print(i)
104 105 res =get_ipython().getoutput(\' ls\')
105 106 """.splitlines(
106 107 keepends=True
107 108 ),
108 109 )
109 110
110 111 ######
111 112
112 113 AUTOCALL_QUOTE = ([",f 1 2 3\n"], (1, 0), ['f("1", "2", "3")\n'])
113 114
114 115 AUTOCALL_QUOTE2 = ([";f 1 2 3\n"], (1, 0), ['f("1 2 3")\n'])
115 116
116 117 AUTOCALL_PAREN = (["/f 1 2 3\n"], (1, 0), ["f(1, 2, 3)\n"])
117 118
118 119 SIMPLE_HELP = (["foo?\n"], (1, 0), ["get_ipython().run_line_magic('pinfo', 'foo')\n"])
119 120
120 121 DETAILED_HELP = (
121 122 ["foo??\n"],
122 123 (1, 0),
123 124 ["get_ipython().run_line_magic('pinfo2', 'foo')\n"],
124 125 )
125 126
126 127 MAGIC_HELP = (["%foo?\n"], (1, 0), ["get_ipython().run_line_magic('pinfo', '%foo')\n"])
127 128
128 129 HELP_IN_EXPR = (
129 130 ["a = b + c?\n"],
130 131 (1, 0),
131 132 ["get_ipython().run_line_magic('pinfo', 'c')\n"],
132 133 )
133 134
134 135 HELP_CONTINUED_LINE = (
135 136 """\
136 137 a = \\
137 138 zip?
138 139 """.splitlines(
139 140 keepends=True
140 141 ),
141 142 (1, 0),
142 143 [r"get_ipython().run_line_magic('pinfo', 'zip')" + "\n"],
143 144 )
144 145
145 146 HELP_MULTILINE = (
146 147 """\
147 148 (a,
148 149 b) = zip?
149 150 """.splitlines(
150 151 keepends=True
151 152 ),
152 153 (1, 0),
153 154 [r"get_ipython().run_line_magic('pinfo', 'zip')" + "\n"],
154 155 )
155 156
156 157 HELP_UNICODE = (
157 158 ["Ο€.foo?\n"],
158 159 (1, 0),
159 160 ["get_ipython().run_line_magic('pinfo', 'Ο€.foo')\n"],
160 161 )
161 162
162 163
163 164 def null_cleanup_transformer(lines):
164 165 """
165 166 A cleanup transform that returns an empty list.
166 167 """
167 168 return []
168 169
169 170
170 171 def test_check_make_token_by_line_never_ends_empty():
171 172 """
172 173 Check that no sequence of single or double characters leads to an empty list of tokens
173 174 """
174 175 from string import printable
175 176
176 177 for c in printable:
177 178 assert make_tokens_by_line(c)[-1] != []
178 179 for k in printable:
179 180 assert make_tokens_by_line(c + k)[-1] != []
180 181
181 182
182 183 def check_find(transformer, case, match=True):
183 184 sample, expected_start, _ = case
184 185 tbl = make_tokens_by_line(sample)
185 186 res = transformer.find(tbl)
186 187 if match:
187 188 # start_line is stored 0-indexed, expected values are 1-indexed
188 189 assert (res.start_line + 1, res.start_col) == expected_start
189 190 return res
190 191 else:
191 192 assert res is None
192 193
193 194
194 195 def check_transform(transformer_cls, case):
195 196 lines, start, expected = case
196 197 transformer = transformer_cls(start)
197 198 assert transformer.transform(lines) == expected
198 199
199 200
200 201 def test_continued_line():
201 202 lines = MULTILINE_MAGIC_ASSIGN[0]
202 203 assert ipt2.find_end_of_continued_line(lines, 1) == 2
203 204
204 205 assert ipt2.assemble_continued_line(lines, (1, 5), 2) == "foo bar"
205 206
206 207
207 208 def test_find_assign_magic():
208 209 check_find(ipt2.MagicAssign, MULTILINE_MAGIC_ASSIGN)
209 210 check_find(ipt2.MagicAssign, MULTILINE_SYSTEM_ASSIGN, match=False)
210 211 check_find(ipt2.MagicAssign, MULTILINE_SYSTEM_ASSIGN_AFTER_DEDENT, match=False)
211 212
212 213
213 214 def test_transform_assign_magic():
214 215 check_transform(ipt2.MagicAssign, MULTILINE_MAGIC_ASSIGN)
215 216
216 217
217 218 def test_find_assign_system():
218 219 check_find(ipt2.SystemAssign, MULTILINE_SYSTEM_ASSIGN)
219 220 check_find(ipt2.SystemAssign, MULTILINE_SYSTEM_ASSIGN_AFTER_DEDENT)
220 221 check_find(ipt2.SystemAssign, (["a = !ls\n"], (1, 5), None))
221 222 check_find(ipt2.SystemAssign, (["a=!ls\n"], (1, 2), None))
222 223 check_find(ipt2.SystemAssign, MULTILINE_MAGIC_ASSIGN, match=False)
223 224
224 225
225 226 def test_transform_assign_system():
226 227 check_transform(ipt2.SystemAssign, MULTILINE_SYSTEM_ASSIGN)
227 228 check_transform(ipt2.SystemAssign, MULTILINE_SYSTEM_ASSIGN_AFTER_DEDENT)
228 229
229 230
230 231 def test_find_magic_escape():
231 232 check_find(ipt2.EscapedCommand, MULTILINE_MAGIC)
232 233 check_find(ipt2.EscapedCommand, INDENTED_MAGIC)
233 234 check_find(ipt2.EscapedCommand, MULTILINE_MAGIC_ASSIGN, match=False)
234 235
235 236
236 237 def test_transform_magic_escape():
237 238 check_transform(ipt2.EscapedCommand, MULTILINE_MAGIC)
238 239 check_transform(ipt2.EscapedCommand, INDENTED_MAGIC)
239 240 check_transform(ipt2.EscapedCommand, CRLF_MAGIC)
240 241
241 242
242 243 def test_find_autocalls():
243 244 for case in [AUTOCALL_QUOTE, AUTOCALL_QUOTE2, AUTOCALL_PAREN]:
244 245 print("Testing %r" % case[0])
245 246 check_find(ipt2.EscapedCommand, case)
246 247
247 248
248 249 def test_transform_autocall():
249 250 for case in [AUTOCALL_QUOTE, AUTOCALL_QUOTE2, AUTOCALL_PAREN]:
250 251 print("Testing %r" % case[0])
251 252 check_transform(ipt2.EscapedCommand, case)
252 253
253 254
254 255 def test_find_help():
255 256 for case in [SIMPLE_HELP, DETAILED_HELP, MAGIC_HELP, HELP_IN_EXPR]:
256 257 check_find(ipt2.HelpEnd, case)
257 258
258 259 tf = check_find(ipt2.HelpEnd, HELP_CONTINUED_LINE)
259 260 assert tf.q_line == 1
260 261 assert tf.q_col == 3
261 262
262 263 tf = check_find(ipt2.HelpEnd, HELP_MULTILINE)
263 264 assert tf.q_line == 1
264 265 assert tf.q_col == 8
265 266
266 267 # ? in a comment does not trigger help
267 268 check_find(ipt2.HelpEnd, (["foo # bar?\n"], None, None), match=False)
268 269 # Nor in a string
269 270 check_find(ipt2.HelpEnd, (["foo = '''bar?\n"], None, None), match=False)
270 271
271 272
272 273 def test_transform_help():
273 274 tf = ipt2.HelpEnd((1, 0), (1, 9))
274 275 assert tf.transform(HELP_IN_EXPR[0]) == HELP_IN_EXPR[2]
275 276
276 277 tf = ipt2.HelpEnd((1, 0), (2, 3))
277 278 assert tf.transform(HELP_CONTINUED_LINE[0]) == HELP_CONTINUED_LINE[2]
278 279
279 280 tf = ipt2.HelpEnd((1, 0), (2, 8))
280 281 assert tf.transform(HELP_MULTILINE[0]) == HELP_MULTILINE[2]
281 282
282 283 tf = ipt2.HelpEnd((1, 0), (1, 0))
283 284 assert tf.transform(HELP_UNICODE[0]) == HELP_UNICODE[2]
284 285
285 286
286 287 def test_find_assign_op_dedent():
287 288 """
288 289 be careful that empty tokens like dedent are not counted as parens
289 290 """
290 291
291 292 class Tk:
292 293 def __init__(self, s):
293 294 self.string = s
294 295
295 296 assert _find_assign_op([Tk(s) for s in ("", "a", "=", "b")]) == 2
296 297 assert (
297 298 _find_assign_op([Tk(s) for s in ("", "(", "a", "=", "b", ")", "=", "5")]) == 6
298 299 )
299 300
300 301
301 302 extra_closing_paren_param = (
302 303 pytest.param("(\n))", "invalid", None)
303 304 if sys.version_info >= (3, 12)
304 305 else pytest.param("(\n))", "incomplete", 0)
305 306 )
306 307 examples = [
307 308 pytest.param("a = 1", "complete", None),
308 309 pytest.param("for a in range(5):", "incomplete", 4),
309 310 pytest.param("for a in range(5):\n if a > 0:", "incomplete", 8),
310 311 pytest.param("raise = 2", "invalid", None),
311 312 pytest.param("a = [1,\n2,", "incomplete", 0),
312 313 extra_closing_paren_param,
313 314 pytest.param("\\\r\n", "incomplete", 0),
314 315 pytest.param("a = '''\n hi", "incomplete", 3),
315 316 pytest.param("def a():\n x=1\n global x", "invalid", None),
316 317 pytest.param(
317 318 "a \\ ",
318 319 "invalid",
319 320 None,
320 321 marks=pytest.mark.xfail(
321 322 reason="Bug in python 3.9.8 – bpo 45738",
322 323 condition=sys.version_info in [(3, 11, 0, "alpha", 2)],
323 324 raises=SystemError,
324 325 strict=True,
325 326 ),
326 327 ), # Nothing allowed after backslash,
327 328 pytest.param("1\\\n+2", "complete", None),
328 329 ]
329 330
330 331
331 332 @pytest.mark.parametrize("code, expected, number", examples)
332 333 def test_check_complete_param(code, expected, number):
333 334 cc = ipt2.TransformerManager().check_complete
334 335 assert cc(code) == (expected, number)
335 336
336 337
337 338 @pytest.mark.xfail(platform.python_implementation() == "PyPy", reason="fail on pypy")
338 339 @pytest.mark.xfail(
339 340 reason="Bug in python 3.9.8 – bpo 45738",
340 341 condition=sys.version_info in [(3, 11, 0, "alpha", 2)],
341 342 raises=SystemError,
342 343 strict=True,
343 344 )
344 345 def test_check_complete():
345 346 cc = ipt2.TransformerManager().check_complete
346 347
347 348 example = dedent(
348 349 """
349 350 if True:
350 351 a=1"""
351 352 )
352 353
353 354 assert cc(example) == ("incomplete", 4)
354 355 assert cc(example + "\n") == ("complete", None)
355 356 assert cc(example + "\n ") == ("complete", None)
356 357
357 358 # no need to loop on all the letters/numbers.
358 359 short = "12abAB" + string.printable[62:]
359 360 for c in short:
360 361 # test does not raise:
361 362 cc(c)
362 363 for k in short:
363 364 cc(c + k)
364 365
365 366 assert cc("def f():\n x=0\n \\\n ") == ("incomplete", 2)
366 367
367 368
368 369 @pytest.mark.xfail(platform.python_implementation() == "PyPy", reason="fail on pypy")
369 370 @pytest.mark.parametrize(
370 371 "value, expected",
371 372 [
372 373 ('''def foo():\n """''', ("incomplete", 4)),
373 374 ("""async with example:\n pass""", ("incomplete", 4)),
374 375 ("""async with example:\n pass\n """, ("complete", None)),
375 376 ],
376 377 )
377 378 def test_check_complete_II(value, expected):
378 379 """
379 380 Test that multi-line strings are properly handled.
380 381 
381 382 Separate test function for convenience.
382 383
383 384 """
384 385 cc = ipt2.TransformerManager().check_complete
385 386 assert cc(value) == expected
386 387
387 388
388 389 @pytest.mark.parametrize(
389 390 "value, expected",
390 391 [
391 392 (")", ("invalid", None)),
392 393 ("]", ("invalid", None)),
393 394 ("}", ("invalid", None)),
394 395 (")(", ("invalid", None)),
395 396 ("][", ("invalid", None)),
396 397 ("}{", ("invalid", None)),
397 398 ("]()(", ("invalid", None)),
398 399 ("())(", ("invalid", None)),
399 400 (")[](", ("invalid", None)),
400 401 ("()](", ("invalid", None)),
401 402 ],
402 403 )
403 404 def test_check_complete_invalidates_sunken_brackets(value, expected):
404 405 """
405 406 Test that a single line with more closing brackets than opening ones is
406 407 interpreted as invalid.
407 408 """
408 409 cc = ipt2.TransformerManager().check_complete
409 410 assert cc(value) == expected
410 411
411 412
412 413 def test_null_cleanup_transformer():
413 414 manager = ipt2.TransformerManager()
414 415 manager.cleanup_transforms.insert(0, null_cleanup_transformer)
415 416 assert manager.transform_cell("") == ""
416 417
417 418
418 419 def test_side_effects_I():
419 420 count = 0
420 421
421 422 def counter(lines):
422 423 nonlocal count
423 424 count += 1
424 425 return lines
425 426
426 427 counter.has_side_effects = True
427 428
428 429 manager = ipt2.TransformerManager()
429 430 manager.cleanup_transforms.insert(0, counter)
430 431 assert manager.check_complete("a=1\n") == ("complete", None)
431 432 assert count == 0
432 433
433 434
434 435 def test_side_effects_II():
435 436 count = 0
436 437
437 438 def counter(lines):
438 439 nonlocal count
439 440 count += 1
440 441 return lines
441 442
442 443 counter.has_side_effects = True
443 444
444 445 manager = ipt2.TransformerManager()
445 446 manager.line_transforms.insert(0, counter)
446 447 assert manager.check_complete("b=1\n") == ("complete", None)
447 448 assert count == 0
@@ -1,200 +1,202
1 1 import errno
2 2 import os
3 3 import shutil
4 4 import tempfile
5 5 import warnings
6 6 from unittest.mock import patch
7 7
8 8 from tempfile import TemporaryDirectory
9 9 from testpath import assert_isdir, assert_isfile, modified_env
10 10
11 11 from IPython import paths
12 12 from IPython.testing.decorators import skip_win32
13 13
14 14 TMP_TEST_DIR = os.path.realpath(tempfile.mkdtemp())
15 15 HOME_TEST_DIR = os.path.join(TMP_TEST_DIR, "home_test_dir")
16 16 XDG_TEST_DIR = os.path.join(HOME_TEST_DIR, "xdg_test_dir")
17 17 XDG_CACHE_DIR = os.path.join(HOME_TEST_DIR, "xdg_cache_dir")
18 18 IP_TEST_DIR = os.path.join(HOME_TEST_DIR,'.ipython')
19 19
20 20 def setup_module():
21 21 """Setup testenvironment for the module:
22 22
23 23 - Adds dummy home dir tree
24 24 """
25 25 # Do not mask exceptions here. In particular, catching WindowsError is a
26 26 # problem because that exception is only defined on Windows...
27 27 os.makedirs(IP_TEST_DIR)
28 28 os.makedirs(os.path.join(XDG_TEST_DIR, 'ipython'))
29 29 os.makedirs(os.path.join(XDG_CACHE_DIR, 'ipython'))
30 30
31 31
32 32 def teardown_module():
33 33 """Teardown testenvironment for the module:
34 34
35 35 - Remove dummy home dir tree
36 36 """
37 37 # Note: we remove the parent test dir, which is the root of all test
38 38 # subdirs we may have created. Use shutil instead of os.removedirs, so
39 39 # that non-empty directories are all recursively removed.
40 40 shutil.rmtree(TMP_TEST_DIR)
41 41
42 42 def patch_get_home_dir(dirpath):
43 43 return patch.object(paths, 'get_home_dir', return_value=dirpath)
44 44
45 45
46 46 def test_get_ipython_dir_1():
47 47 """test_get_ipython_dir_1, Testcase to see if we can call get_ipython_dir without Exceptions."""
48 48 env_ipdir = os.path.join("someplace", ".ipython")
49 49 with patch.object(paths, '_writable_dir', return_value=True), \
50 50 modified_env({'IPYTHONDIR': env_ipdir}):
51 51 ipdir = paths.get_ipython_dir()
52 52
53 53 assert ipdir == env_ipdir
54 54
55 55 def test_get_ipython_dir_2():
56 56 """test_get_ipython_dir_2, Testcase to see if we can call get_ipython_dir without Exceptions."""
57 57 with patch_get_home_dir('someplace'), \
58 58 patch.object(paths, 'get_xdg_dir', return_value=None), \
59 59 patch.object(paths, '_writable_dir', return_value=True), \
60 60 patch('os.name', "posix"), \
61 61 modified_env({'IPYTHON_DIR': None,
62 62 'IPYTHONDIR': None,
63 63 'XDG_CONFIG_HOME': None
64 64 }):
65 65 ipdir = paths.get_ipython_dir()
66 66
67 67 assert ipdir == os.path.join("someplace", ".ipython")
68 68
69 69 def test_get_ipython_dir_3():
70 70 """test_get_ipython_dir_3, use XDG if defined and exists, and .ipython doesn't exist."""
71 71 tmphome = TemporaryDirectory()
72 72 try:
73 73 with patch_get_home_dir(tmphome.name), \
74 74 patch('os.name', 'posix'), \
75 75 modified_env({
76 76 'IPYTHON_DIR': None,
77 77 'IPYTHONDIR': None,
78 78 'XDG_CONFIG_HOME': XDG_TEST_DIR,
79 79 }), warnings.catch_warnings(record=True) as w:
80 80 ipdir = paths.get_ipython_dir()
81 81
82 82 assert ipdir == os.path.join(tmphome.name, XDG_TEST_DIR, "ipython")
83 83 assert len(w) == 0
84 84 finally:
85 85 tmphome.cleanup()
86 86
87 87 def test_get_ipython_dir_4():
88 88 """test_get_ipython_dir_4, warn if XDG and home both exist."""
89 89 with patch_get_home_dir(HOME_TEST_DIR), \
90 90 patch('os.name', 'posix'):
91 91 try:
92 92 os.mkdir(os.path.join(XDG_TEST_DIR, 'ipython'))
93 93 except OSError as e:
94 94 if e.errno != errno.EEXIST:
95 95 raise
96 96
97 97
98 98 with modified_env({
99 99 'IPYTHON_DIR': None,
100 100 'IPYTHONDIR': None,
101 101 'XDG_CONFIG_HOME': XDG_TEST_DIR,
102 102 }), warnings.catch_warnings(record=True) as w:
103 103 ipdir = paths.get_ipython_dir()
104 104
105 105 assert len(w) == 1
106 106 assert "Ignoring" in str(w[0])
107 107
108 108
109 109 def test_get_ipython_dir_5():
110 110 """test_get_ipython_dir_5, use .ipython if exists and XDG defined, but doesn't exist."""
111 111 with patch_get_home_dir(HOME_TEST_DIR), \
112 112 patch('os.name', 'posix'):
113 113 try:
114 114 os.rmdir(os.path.join(XDG_TEST_DIR, 'ipython'))
115 115 except OSError as e:
116 116 if e.errno != errno.ENOENT:
117 117 raise
118 118
119 119 with modified_env({
120 120 'IPYTHON_DIR': None,
121 121 'IPYTHONDIR': None,
122 122 'XDG_CONFIG_HOME': XDG_TEST_DIR,
123 123 }):
124 124 ipdir = paths.get_ipython_dir()
125 125
126 126 assert ipdir == IP_TEST_DIR
127 127
128 128 def test_get_ipython_dir_6():
129 129 """test_get_ipython_dir_6, use home over XDG if defined and neither exist."""
130 130 xdg = os.path.join(HOME_TEST_DIR, 'somexdg')
131 131 os.mkdir(xdg)
132 132 shutil.rmtree(os.path.join(HOME_TEST_DIR, '.ipython'))
133 133 print(paths._writable_dir)
134 134 with patch_get_home_dir(HOME_TEST_DIR), \
135 135 patch.object(paths, 'get_xdg_dir', return_value=xdg), \
136 136 patch('os.name', 'posix'), \
137 137 modified_env({
138 138 'IPYTHON_DIR': None,
139 139 'IPYTHONDIR': None,
140 140 'XDG_CONFIG_HOME': None,
141 141 }), warnings.catch_warnings(record=True) as w:
142 142 ipdir = paths.get_ipython_dir()
143 143
144 144 assert ipdir == os.path.join(HOME_TEST_DIR, ".ipython")
145 145 assert len(w) == 0
146 146
147 147 def test_get_ipython_dir_7():
148 148 """test_get_ipython_dir_7, test home directory expansion on IPYTHONDIR"""
149 149 home_dir = os.path.normpath(os.path.expanduser('~'))
150 150 with modified_env({'IPYTHONDIR': os.path.join('~', 'somewhere')}), \
151 151 patch.object(paths, '_writable_dir', return_value=True):
152 152 ipdir = paths.get_ipython_dir()
153 153 assert ipdir == os.path.join(home_dir, "somewhere")
154 154
155 155
156 156 @skip_win32
157 157 def test_get_ipython_dir_8():
158 158 """test_get_ipython_dir_8, test / home directory"""
159 159 if not os.access("/", os.W_OK):
160 160 # test only when HOME directory actually writable
161 161 return
162 162
163 with patch.object(paths, "_writable_dir", lambda path: bool(path)), patch.object(
164 paths, "get_xdg_dir", return_value=None
165 ), modified_env(
166 {
167 "IPYTHON_DIR": None,
168 "IPYTHONDIR": None,
169 "HOME": "/",
170 }
163 with (
164 patch.object(paths, "_writable_dir", lambda path: bool(path)),
165 patch.object(paths, "get_xdg_dir", return_value=None),
166 modified_env(
167 {
168 "IPYTHON_DIR": None,
169 "IPYTHONDIR": None,
170 "HOME": "/",
171 }
172 ),
171 173 ):
172 174 assert paths.get_ipython_dir() == "/.ipython"
173 175
174 176
175 177 def test_get_ipython_cache_dir():
176 178 with modified_env({'HOME': HOME_TEST_DIR}):
177 179 if os.name == "posix":
178 180 # test default
179 181 os.makedirs(os.path.join(HOME_TEST_DIR, ".cache"))
180 182 with modified_env({'XDG_CACHE_HOME': None}):
181 183 ipdir = paths.get_ipython_cache_dir()
182 184 assert os.path.join(HOME_TEST_DIR, ".cache", "ipython") == ipdir
183 185 assert_isdir(ipdir)
184 186
185 187 # test env override
186 188 with modified_env({"XDG_CACHE_HOME": XDG_CACHE_DIR}):
187 189 ipdir = paths.get_ipython_cache_dir()
188 190 assert_isdir(ipdir)
189 191 assert ipdir == os.path.join(XDG_CACHE_DIR, "ipython")
190 192 else:
191 193 assert paths.get_ipython_cache_dir() == paths.get_ipython_dir()
192 194
193 195 def test_get_ipython_package_dir():
194 196 ipdir = paths.get_ipython_package_dir()
195 197 assert_isdir(ipdir)
196 198
197 199
198 200 def test_get_ipython_module_path():
199 201 ipapp_path = paths.get_ipython_module_path('IPython.terminal.ipapp')
200 202 assert_isfile(ipapp_path)
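The tests above lean on `modified_env` to swap environment variables in and out around each assertion. A minimal sketch of such a helper (a hypothetical stand-in, not IPython's actual implementation) could look like:

```python
import os
from contextlib import contextmanager

@contextmanager
def modified_env_sketch(changes):
    """Temporarily apply environment overrides; a value of None removes the key.

    On exit the original state is restored, including deleting keys that
    did not exist before entry.
    """
    saved = {k: os.environ.get(k) for k in changes}
    try:
        for k, v in changes.items():
            if v is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = v
        yield
    finally:
        for k, old in saved.items():
            if old is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = old
```

This mirrors how the tests pass `None` to mean "unset this variable" rather than "set it to the empty string".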
@@ -1,422 +1,423
1 1 """
2 2 This module contains factory functions that attempt
3 3 to return Qt submodules from the various python Qt bindings.
4 4
5 5 It also protects against double-importing Qt with different
6 6 bindings, which is unstable and likely to crash
7 7
8 8 This is used primarily by qt and qt_for_kernel, and shouldn't
9 9 be accessed directly from the outside
10 10 """
11
11 12 import importlib.abc
12 13 import sys
13 14 import os
14 15 import types
15 16 from functools import partial, lru_cache
16 17 import operator
17 18
18 19 # ### Available APIs.
19 20 # Qt6
20 21 QT_API_PYQT6 = "pyqt6"
21 22 QT_API_PYSIDE6 = "pyside6"
22 23
23 24 # Qt5
24 25 QT_API_PYQT5 = 'pyqt5'
25 26 QT_API_PYSIDE2 = 'pyside2'
26 27
27 28 # Qt4
28 29 # NOTE: Here for legacy matplotlib compatibility, but not really supported on the IPython side.
29 30 QT_API_PYQT = "pyqt" # Force version 2
30 31 QT_API_PYQTv1 = "pyqtv1" # Force version 2
31 32 QT_API_PYSIDE = "pyside"
32 33
33 34 QT_API_PYQT_DEFAULT = "pyqtdefault" # use system default for version 1 vs. 2
34 35
35 36 api_to_module = {
36 37 # Qt6
37 38 QT_API_PYQT6: "PyQt6",
38 39 QT_API_PYSIDE6: "PySide6",
39 40 # Qt5
40 41 QT_API_PYQT5: "PyQt5",
41 42 QT_API_PYSIDE2: "PySide2",
42 43 # Qt4
43 44 QT_API_PYSIDE: "PySide",
44 45 QT_API_PYQT: "PyQt4",
45 46 QT_API_PYQTv1: "PyQt4",
46 47 # default
47 48 QT_API_PYQT_DEFAULT: "PyQt6",
48 49 }
49 50
50 51
51 52 class ImportDenier(importlib.abc.MetaPathFinder):
52 53 """Import Hook that will guard against bad Qt imports
53 54 once IPython commits to a specific binding
54 55 """
55 56
56 57 def __init__(self):
57 58 self.__forbidden = set()
58 59
59 60 def forbid(self, module_name):
60 61 sys.modules.pop(module_name, None)
61 62 self.__forbidden.add(module_name)
62 63
63 64 def find_spec(self, fullname, path, target=None):
64 65 if path:
65 66 return
66 67 if fullname in self.__forbidden:
67 68 raise ImportError(
68 69 """
69 70 Importing %s disabled by IPython, which has
70 71 already imported an incompatible Qt binding: %s
71 72 """
72 73 % (fullname, loaded_api())
73 74 )
74 75
75 76
76 77 ID = ImportDenier()
77 78 sys.meta_path.insert(0, ID)
78 79
79 80
80 81 def commit_api(api):
81 82 """Commit to a particular API, and trigger ImportErrors on subsequent
82 83 dangerous imports"""
83 84 modules = set(api_to_module.values())
84 85
85 86 modules.remove(api_to_module[api])
86 87 for mod in modules:
87 88 ID.forbid(mod)
88 89
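The `ImportDenier`/`commit_api` pair above is an instance of a general technique: a `MetaPathFinder` placed at the front of `sys.meta_path` that raises on deny-listed top-level imports. A self-contained sketch (the `DemoDenier` name and the choice of `json` as the forbidden module are illustrative only):

```python
import importlib.abc
import sys

class DemoDenier(importlib.abc.MetaPathFinder):
    """Refuse top-level imports of modules on a deny list."""

    def __init__(self):
        self._forbidden = set()

    def forbid(self, name):
        # Drop any cached copy so the next import goes through the finders.
        sys.modules.pop(name, None)
        self._forbidden.add(name)

    def find_spec(self, fullname, path, target=None):
        if path:  # only guard top-level imports, not submodules
            return None
        if fullname in self._forbidden:
            raise ImportError(f"import of {fullname} disabled for this session")
        return None

denier = DemoDenier()
sys.meta_path.insert(0, denier)
denier.forbid("json")
```

After this runs, `import json` raises `ImportError` until `denier` is removed from `sys.meta_path`.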
89 90
90 91 def loaded_api():
91 92 """Return which API is loaded, if any
92 93
93 94 If this returns anything besides None,
94 95 importing any other Qt binding is unsafe.
95 96
96 97 Returns
97 98 -------
98 99 None, 'pyside6', 'pyqt6', 'pyside2', 'pyside', 'pyqt', 'pyqt5', 'pyqtv1'
99 100 """
100 101 if sys.modules.get("PyQt6.QtCore"):
101 102 return QT_API_PYQT6
102 103 elif sys.modules.get("PySide6.QtCore"):
103 104 return QT_API_PYSIDE6
104 105 elif sys.modules.get("PyQt5.QtCore"):
105 106 return QT_API_PYQT5
106 107 elif sys.modules.get("PySide2.QtCore"):
107 108 return QT_API_PYSIDE2
108 109 elif sys.modules.get("PyQt4.QtCore"):
109 110 if qtapi_version() == 2:
110 111 return QT_API_PYQT
111 112 else:
112 113 return QT_API_PYQTv1
113 114 elif sys.modules.get("PySide.QtCore"):
114 115 return QT_API_PYSIDE
115 116
116 117 return None
117 118
118 119
119 120 def has_binding(api):
120 121 """Safely check for PyQt4/5/6, PySide, PySide2 or PySide6, without importing submodules
121 122
122 123 Parameters
123 124 ----------
124 125 api : str [ 'pyqtv1' | 'pyqt' | 'pyqt5' | 'pyqt6' | 'pyside' | 'pyside2' | 'pyside6' | 'pyqtdefault']
125 126 Which module to check for
126 127
127 128 Returns
128 129 -------
129 130 True if the relevant module appears to be importable
130 131 """
131 132 module_name = api_to_module[api]
132 133 from importlib.util import find_spec
133 134
134 135 required = ['QtCore', 'QtGui', 'QtSvg']
135 136 if api in (QT_API_PYQT5, QT_API_PYSIDE2, QT_API_PYQT6, QT_API_PYSIDE6):
136 137 # QT5 requires QtWidgets too
137 138 required.append('QtWidgets')
138 139
139 140 for submod in required:
140 141 try:
141 142 spec = find_spec('%s.%s' % (module_name, submod))
142 143 except ImportError:
143 144 # Package (e.g. PyQt5) not found
144 145 return False
145 146 else:
146 147 if spec is None:
147 148 # Submodule (e.g. PyQt5.QtCore) not found
148 149 return False
149 150
150 151 if api == QT_API_PYSIDE:
151 152 # We can also safely check PySide version
152 153 import PySide
153 154
154 155 return PySide.__version_info__ >= (1, 0, 3)
155 156
156 157 return True
157 158
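`has_binding` avoids importing any Qt code by probing module specs instead. The same pattern works for any package; a stdlib-only sketch (the function name is hypothetical):

```python
from importlib.util import find_spec

def submodules_available(package, submodules):
    """Report whether every required submodule of a package can be located,
    without actually importing any of them."""
    for sub in submodules:
        try:
            spec = find_spec(f"{package}.{sub}")
        except ImportError:
            # The parent package itself is missing.
            return False
        if spec is None:
            # The parent exists but the submodule is missing.
            return False
    return True
```

Note the two distinct failure modes, matching the `except`/`else` split in `has_binding`: a missing package raises, a missing submodule yields `None`.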
158 159
159 160 def qtapi_version():
160 161 """Return which QString API has been set, if any
161 162
162 163 Returns
163 164 -------
164 165 The QString API version (1 or 2), or None if not set
165 166 """
166 167 try:
167 168 import sip
168 169 except ImportError:
169 170 # as of PyQt5 5.11, sip is no longer available as a top-level
170 171 # module and needs to be imported from the PyQt5 namespace
171 172 try:
172 173 from PyQt5 import sip
173 174 except ImportError:
174 175 return
175 176 try:
176 177 return sip.getapi('QString')
177 178 except ValueError:
178 179 return
179 180
180 181
181 182 def can_import(api):
182 183 """Safely query whether an API is importable, without importing it"""
183 184 if not has_binding(api):
184 185 return False
185 186
186 187 current = loaded_api()
187 188 if api == QT_API_PYQT_DEFAULT:
188 189 return current in [QT_API_PYQT6, None]
189 190 else:
190 191 return current in [api, None]
191 192
192 193
193 194 def import_pyqt4(version=2):
194 195 """
195 196 Import PyQt4
196 197
197 198 Parameters
198 199 ----------
199 200 version : 1, 2, or None
200 201 Which QString/QVariant API to use. Set to None to use the system
201 202 default
202 203 ImportErrors raised within this function are non-recoverable
203 204 """
204 205 # The new-style string API (version=2) automatically
205 206 # converts QStrings to Unicode Python strings. Also, automatically unpacks
206 207 # QVariants to their underlying objects.
207 208 import sip
208 209
209 210 if version is not None:
210 211 sip.setapi('QString', version)
211 212 sip.setapi('QVariant', version)
212 213
213 214 from PyQt4 import QtGui, QtCore, QtSvg
214 215
215 216 if QtCore.PYQT_VERSION < 0x040700:
216 217 raise ImportError("IPython requires PyQt4 >= 4.7, found %s" %
217 218 QtCore.PYQT_VERSION_STR)
218 219
219 220 # Alias PyQt-specific functions for PySide compatibility.
220 221 QtCore.Signal = QtCore.pyqtSignal
221 222 QtCore.Slot = QtCore.pyqtSlot
222 223
223 224 # query for the API version (in case version == None)
224 225 version = sip.getapi('QString')
225 226 api = QT_API_PYQTv1 if version == 1 else QT_API_PYQT
226 227 return QtCore, QtGui, QtSvg, api
227 228
228 229
229 230 def import_pyqt5():
230 231 """
231 232 Import PyQt5
232 233
233 234 ImportErrors raised within this function are non-recoverable
234 235 """
235 236
236 237 from PyQt5 import QtCore, QtSvg, QtWidgets, QtGui
237 238
238 239 # Alias PyQt-specific functions for PySide compatibility.
239 240 QtCore.Signal = QtCore.pyqtSignal
240 241 QtCore.Slot = QtCore.pyqtSlot
241 242
242 243 # Join QtGui and QtWidgets for Qt4 compatibility.
243 244 QtGuiCompat = types.ModuleType('QtGuiCompat')
244 245 QtGuiCompat.__dict__.update(QtGui.__dict__)
245 246 QtGuiCompat.__dict__.update(QtWidgets.__dict__)
246 247
247 248 api = QT_API_PYQT5
248 249 return QtCore, QtGuiCompat, QtSvg, api
249 250
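The `QtGuiCompat` construction merges two module namespaces into one synthetic module so legacy Qt4-style code finds everything in one place. A stdlib-only sketch of the same trick, using `math` and `cmath` purely for illustration:

```python
import types
import math
import cmath

# Build a synthetic module whose namespace is the union of two real modules.
# On name collisions, the later update wins.
merged = types.ModuleType("math_compat")
merged.__dict__.update(math.__dict__)
merged.__dict__.update(cmath.__dict__)
```

Callers can then use `merged.pi`, `merged.phase`, etc., without caring which source module defines each name.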
250 251
251 252 def import_pyqt6():
252 253 """
253 254 Import PyQt6
254 255
255 256 ImportErrors raised within this function are non-recoverable
256 257 """
257 258
258 259 from PyQt6 import QtCore, QtSvg, QtWidgets, QtGui
259 260
260 261 # Alias PyQt-specific functions for PySide compatibility.
261 262 QtCore.Signal = QtCore.pyqtSignal
262 263 QtCore.Slot = QtCore.pyqtSlot
263 264
264 265 # Join QtGui and QtWidgets for Qt4 compatibility.
265 266 QtGuiCompat = types.ModuleType("QtGuiCompat")
266 267 QtGuiCompat.__dict__.update(QtGui.__dict__)
267 268 QtGuiCompat.__dict__.update(QtWidgets.__dict__)
268 269
269 270 api = QT_API_PYQT6
270 271 return QtCore, QtGuiCompat, QtSvg, api
271 272
272 273
273 274 def import_pyside():
274 275 """
275 276 Import PySide
276 277
277 278 ImportErrors raised within this function are non-recoverable
278 279 """
279 280 from PySide import QtGui, QtCore, QtSvg
280 281 return QtCore, QtGui, QtSvg, QT_API_PYSIDE
281 282
282 283 def import_pyside2():
283 284 """
284 285 Import PySide2
285 286
286 287 ImportErrors raised within this function are non-recoverable
287 288 """
288 289 from PySide2 import QtGui, QtCore, QtSvg, QtWidgets, QtPrintSupport
289 290
290 291 # Join QtGui and QtWidgets for Qt4 compatibility.
291 292 QtGuiCompat = types.ModuleType('QtGuiCompat')
292 293 QtGuiCompat.__dict__.update(QtGui.__dict__)
293 294 QtGuiCompat.__dict__.update(QtWidgets.__dict__)
294 295 QtGuiCompat.__dict__.update(QtPrintSupport.__dict__)
295 296
296 297 return QtCore, QtGuiCompat, QtSvg, QT_API_PYSIDE2
297 298
298 299
299 300 def import_pyside6():
300 301 """
301 302 Import PySide6
302 303
303 304 ImportErrors raised within this function are non-recoverable
304 305 """
305 306
306 307 def get_attrs(module):
307 308 return {
308 309 name: getattr(module, name)
309 310 for name in dir(module)
310 311 if not name.startswith("_")
311 312 }
312 313
313 314 from PySide6 import QtGui, QtCore, QtSvg, QtWidgets, QtPrintSupport
314 315
315 316 # Join QtGui and QtWidgets for Qt4 compatibility.
316 317 QtGuiCompat = types.ModuleType("QtGuiCompat")
317 318 QtGuiCompat.__dict__.update(QtGui.__dict__)
318 319 if QtCore.__version_info__ < (6, 7):
319 320 QtGuiCompat.__dict__.update(QtWidgets.__dict__)
320 321 QtGuiCompat.__dict__.update(QtPrintSupport.__dict__)
321 322 else:
322 323 QtGuiCompat.__dict__.update(get_attrs(QtWidgets))
323 324 QtGuiCompat.__dict__.update(get_attrs(QtPrintSupport))
324 325
325 326 return QtCore, QtGuiCompat, QtSvg, QT_API_PYSIDE6
326 327
327 328
328 329 def load_qt(api_options):
329 330 """
330 331 Attempt to import Qt, given a preference list
331 332 of permissible bindings
332 333
333 334 It is safe to call this function multiple times.
334 335
335 336 Parameters
336 337 ----------
337 338 api_options : List of strings
338 339 The order of APIs to try. Valid items are 'pyside', 'pyside2',
339 340 'pyside6', 'pyqt', 'pyqt5', 'pyqt6', 'pyqtv1' and 'pyqtdefault'
340 341
341 342 Returns
342 343 -------
343 344 A tuple of QtCore, QtGui, QtSvg, QT_API
344 345 The first three are the Qt modules. The last is the
345 346 string indicating which module was loaded.
346 347
347 348 Raises
348 349 ------
349 350 ImportError, if it isn't possible to import any requested
350 351 bindings (either because they aren't installed, or because
351 352 an incompatible library has already been installed)
352 353 """
353 354 loaders = {
354 355 # Qt6
355 356 QT_API_PYQT6: import_pyqt6,
356 357 QT_API_PYSIDE6: import_pyside6,
357 358 # Qt5
358 359 QT_API_PYQT5: import_pyqt5,
359 360 QT_API_PYSIDE2: import_pyside2,
360 361 # Qt4
361 362 QT_API_PYSIDE: import_pyside,
362 363 QT_API_PYQT: import_pyqt4,
363 364 QT_API_PYQTv1: partial(import_pyqt4, version=1),
364 365 # default
365 366 QT_API_PYQT_DEFAULT: import_pyqt6,
366 367 }
367 368
368 369 for api in api_options:
369 370
370 371 if api not in loaders:
371 372 raise RuntimeError(
372 373 "Invalid Qt API %r, valid values are: %s" %
373 374 (api, ", ".join(["%r" % k for k in loaders.keys()])))
374 375
375 376 if not can_import(api):
376 377 continue
377 378
378 379 #cannot safely recover from an ImportError during this
379 380 result = loaders[api]()
380 381 api = result[-1] # changed if api = QT_API_PYQT_DEFAULT
381 382 commit_api(api)
382 383 return result
383 384 else:
384 385 # Clear the environment variable since it doesn't work.
385 386 if "QT_API" in os.environ:
386 387 del os.environ["QT_API"]
387 388
388 389 raise ImportError(
389 390 """
390 391 Could not load requested Qt binding. Please ensure that
391 392 PyQt4 >= 4.7, PyQt5, PyQt6, PySide >= 1.0.3, PySide2, or
392 393 PySide6 is available, and only one is imported per session.
393 394
394 395 Currently-imported Qt library: %r
395 396 PyQt5 available (requires QtCore, QtGui, QtSvg, QtWidgets): %s
396 397 PyQt6 available (requires QtCore, QtGui, QtSvg, QtWidgets): %s
397 398 PySide2 installed: %s
398 399 PySide6 installed: %s
399 400 Tried to load: %r
400 401 """
401 402 % (
402 403 loaded_api(),
403 404 has_binding(QT_API_PYQT5),
404 405 has_binding(QT_API_PYQT6),
405 406 has_binding(QT_API_PYSIDE2),
406 407 has_binding(QT_API_PYSIDE6),
407 408 api_options,
408 409 )
409 410 )
410 411
411 412
412 413 def enum_factory(QT_API, QtCore):
413 414 """Construct an enum helper to account for PyQt5 <-> PyQt6 changes."""
414 415
415 416 @lru_cache(None)
416 417 def _enum(name):
417 418 # foo.bar.Enum.Entry (PyQt6) <=> foo.bar.Entry (non-PyQt6).
418 419 return operator.attrgetter(
419 420 name if QT_API == QT_API_PYQT6 else name.rpartition(".")[0]
420 421 )(sys.modules[QtCore.__package__])
421 422
422 423 return _enum
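`enum_factory` resolves dotted enum paths lazily with `operator.attrgetter` and caches each lookup with `lru_cache`. A stdlib sketch of the same lookup pattern (using `os` as an arbitrary root module for illustration):

```python
import operator
import os
from functools import lru_cache

@lru_cache(None)
def resolve(name, root=os):
    """Resolve a dotted attribute path against a root module, caching results."""
    return operator.attrgetter(name)(root)
```

As in `enum_factory`, the expensive attribute walk happens once per distinct name; repeated calls hit the cache.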
@@ -1,101 +1,102
1 1 """ Utilities for accessing the platform's clipboard.
2 2 """
3
3 4 import os
4 5 import subprocess
5 6
6 7 from IPython.core.error import TryNext
7 8 import IPython.utils.py3compat as py3compat
8 9
9 10
10 11 class ClipboardEmpty(ValueError):
11 12 pass
12 13
13 14
14 15 def win32_clipboard_get():
15 16 """ Get the current clipboard's text on Windows.
16 17
17 18 Requires Mark Hammond's pywin32 extensions.
18 19 """
19 20 try:
20 21 import win32clipboard
21 22 except ImportError as e:
22 23 raise TryNext("Getting text from the clipboard requires the pywin32 "
23 24 "extensions: http://sourceforge.net/projects/pywin32/") from e
24 25 win32clipboard.OpenClipboard()
25 26 try:
26 27 text = win32clipboard.GetClipboardData(win32clipboard.CF_UNICODETEXT)
27 28 except (TypeError, win32clipboard.error):
28 29 try:
29 30 text = win32clipboard.GetClipboardData(win32clipboard.CF_TEXT)
30 31 text = py3compat.cast_unicode(text, py3compat.DEFAULT_ENCODING)
31 32 except (TypeError, win32clipboard.error) as e:
32 33 raise ClipboardEmpty from e
33 34 finally:
34 35 win32clipboard.CloseClipboard()
35 36 return text
36 37
37 38
38 39 def osx_clipboard_get() -> str:
39 40 """ Get the clipboard's text on OS X.
40 41 """
41 42 p = subprocess.Popen(['pbpaste', '-Prefer', 'ascii'],
42 43 stdout=subprocess.PIPE)
43 44 bytes_, stderr = p.communicate()
44 45 # Text comes in with old Mac \r line endings. Change them to \n.
45 46 bytes_ = bytes_.replace(b'\r', b'\n')
46 47 text = py3compat.decode(bytes_)
47 48 return text
48 49
49 50
50 51 def tkinter_clipboard_get():
51 52 """ Get the clipboard's text using Tkinter.
52 53
53 54 This is the default on systems that are not Windows or OS X. It may
54 55 interfere with other UI toolkits and should be replaced with an
55 56 implementation that uses that toolkit.
56 57 """
57 58 try:
58 59 from tkinter import Tk, TclError
59 60 except ImportError as e:
60 61 raise TryNext("Getting text from the clipboard on this platform requires tkinter.") from e
61 62
62 63 root = Tk()
63 64 root.withdraw()
64 65 try:
65 66 text = root.clipboard_get()
66 67 except TclError as e:
67 68 raise ClipboardEmpty from e
68 69 finally:
69 70 root.destroy()
70 71 text = py3compat.cast_unicode(text, py3compat.DEFAULT_ENCODING)
71 72 return text
72 73
73 74
74 75 def wayland_clipboard_get():
75 76 """Get the clipboard's text under Wayland using wl-paste command.
76 77
77 78 This requires Wayland and wl-clipboard installed and running.
78 79 """
79 80 if os.environ.get("XDG_SESSION_TYPE") != "wayland":
80 81 raise TryNext("wayland is not detected")
81 82
82 83 try:
83 84 with subprocess.Popen(["wl-paste"], stdout=subprocess.PIPE) as p:
84 85 raw, err = p.communicate()
85 86 if p.wait():
86 87 raise TryNext(err)
87 88 except FileNotFoundError as e:
88 89 raise TryNext(
89 90 "Getting text from the clipboard under Wayland requires the wl-clipboard "
90 91 "extension: https://github.com/bugaevc/wl-clipboard"
91 92 ) from e
92 93
93 94 if not raw:
94 95 raise ClipboardEmpty
95 96
96 97 try:
97 98 text = py3compat.decode(raw)
98 99 except UnicodeDecodeError as e:
99 100 raise ClipboardEmpty from e
100 101
101 102 return text
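Each backend above raises `TryNext` to hand off to the next candidate. A sketch of the dispatch loop that consumes such a chain (`TryNextSketch` is a stand-in for `IPython.core.error.TryNext`, and the function name is illustrative):

```python
class TryNextSketch(Exception):
    """Stand-in for TryNext: 'not applicable on this platform, try the next.'"""

def first_working(getters):
    """Try each clipboard backend in order until one succeeds."""
    for get in getters:
        try:
            return get()
        except TryNextSketch:
            continue
    raise RuntimeError("no clipboard backend available")
```

This is why each platform-specific getter only needs to decide "mine" or "not mine"; ordering and fallback live in one place.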
@@ -1,1021 +1,1023
1 1 """IPython terminal interface using prompt_toolkit"""
2 2
3 3 import os
4 4 import sys
5 5 import inspect
6 6 from warnings import warn
7 7 from typing import Union as UnionType, Optional
8 8
9 9 from IPython.core.async_helpers import get_asyncio_loop
10 10 from IPython.core.interactiveshell import InteractiveShell, InteractiveShellABC
11 11 from IPython.utils.py3compat import input
12 12 from IPython.utils.terminal import toggle_set_term_title, set_term_title, restore_term_title
13 13 from IPython.utils.process import abbrev_cwd
14 14 from traitlets import (
15 15 Bool,
16 16 Unicode,
17 17 Dict,
18 18 Integer,
19 19 List,
20 20 observe,
21 21 Instance,
22 22 Type,
23 23 default,
24 24 Enum,
25 25 Union,
26 26 Any,
27 27 validate,
28 28 Float,
29 29 )
30 30
31 31 from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
32 32 from prompt_toolkit.enums import DEFAULT_BUFFER, EditingMode
33 33 from prompt_toolkit.filters import HasFocus, Condition, IsDone
34 34 from prompt_toolkit.formatted_text import PygmentsTokens
35 35 from prompt_toolkit.history import History
36 36 from prompt_toolkit.layout.processors import ConditionalProcessor, HighlightMatchingBracketProcessor
37 37 from prompt_toolkit.output import ColorDepth
38 38 from prompt_toolkit.patch_stdout import patch_stdout
39 39 from prompt_toolkit.shortcuts import PromptSession, CompleteStyle, print_formatted_text
40 40 from prompt_toolkit.styles import DynamicStyle, merge_styles
41 41 from prompt_toolkit.styles.pygments import style_from_pygments_cls, style_from_pygments_dict
42 42 from prompt_toolkit import __version__ as ptk_version
43 43
44 44 from pygments.styles import get_style_by_name
45 45 from pygments.style import Style
46 46 from pygments.token import Token
47 47
48 48 from .debugger import TerminalPdb, Pdb
49 49 from .magics import TerminalMagics
50 50 from .pt_inputhooks import get_inputhook_name_and_func
51 51 from .prompts import Prompts, ClassicPrompts, RichPromptDisplayHook
52 52 from .ptutils import IPythonPTCompleter, IPythonPTLexer
53 53 from .shortcuts import (
54 54 KEY_BINDINGS,
55 55 create_ipython_shortcuts,
56 56 create_identifier,
57 57 RuntimeBinding,
58 58 add_binding,
59 59 )
60 60 from .shortcuts.filters import KEYBINDING_FILTERS, filter_from_string
61 61 from .shortcuts.auto_suggest import (
62 62 NavigableAutoSuggestFromHistory,
63 63 AppendAutoSuggestionInAnyLine,
64 64 )
65 65
66 66 PTK3 = ptk_version.startswith('3.')
67 67
68 68
69 69 class _NoStyle(Style):
70 70 pass
71 71
72 72
73 73 _style_overrides_light_bg = {
74 74 Token.Prompt: '#ansibrightblue',
75 75 Token.PromptNum: '#ansiblue bold',
76 76 Token.OutPrompt: '#ansibrightred',
77 77 Token.OutPromptNum: '#ansired bold',
78 78 }
79 79
80 80 _style_overrides_linux = {
81 81 Token.Prompt: '#ansibrightgreen',
82 82 Token.PromptNum: '#ansigreen bold',
83 83 Token.OutPrompt: '#ansibrightred',
84 84 Token.OutPromptNum: '#ansired bold',
85 85 }
86 86
87 87
88 88 def _backward_compat_continuation_prompt_tokens(method, width: int, *, lineno: int):
89 89 """
90 90 SageMath uses custom prompts and we broke them in 8.19.
91 91 """
92 92 sig = inspect.signature(method)
93 93 if "lineno" in inspect.signature(method).parameters or any(
94 94 [p.kind == p.VAR_KEYWORD for p in sig.parameters.values()]
95 95 ):
96 96 return method(width, lineno=lineno)
97 97 else:
98 98 return method(width)
99 99
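The shim above inspects the callback's signature before deciding whether to pass `lineno`, so older single-argument callbacks keep working. A generic sketch of that dispatch (names are illustrative):

```python
import inspect

def call_with_optional_kwarg(func, width, *, lineno):
    """Pass `lineno` only when the callable declares it (or accepts **kwargs),
    so legacy signatures keep working unchanged."""
    sig = inspect.signature(func)
    accepts_lineno = "lineno" in sig.parameters or any(
        p.kind == p.VAR_KEYWORD for p in sig.parameters.values()
    )
    if accepts_lineno:
        return func(width, lineno=lineno)
    return func(width)
```

The `VAR_KEYWORD` check matters: a callback written as `def f(width, **kw)` never lists `lineno` by name but can still receive it.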
100 100
101 101 def get_default_editor():
102 102 try:
103 103 return os.environ['EDITOR']
104 104 except KeyError:
105 105 pass
106 106 except UnicodeError:
107 107 warn("$EDITOR environment variable is not pure ASCII. Using platform "
108 108 "default editor.")
109 109
110 110 if os.name == 'posix':
111 111 return 'vi' # the only one guaranteed to be there!
112 112 else:
113 113 return "notepad" # same in Windows!
114 114
115 115
116 116 # conservatively check for tty
117 117 # overridden streams can result in things like:
118 118 # - sys.stdin = None
119 119 # - no isatty method
120 120 for _name in ('stdin', 'stdout', 'stderr'):
121 121 _stream = getattr(sys, _name)
122 122 try:
123 123 if not _stream or not hasattr(_stream, "isatty") or not _stream.isatty():
124 124 _is_tty = False
125 125 break
126 126 except ValueError:
127 127 # stream is closed
128 128 _is_tty = False
129 129 break
130 130 else:
131 131 _is_tty = True
132 132
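The tty check above uses Python's `for`/`else`: `_is_tty` only ends up `True` when no stream trips the `break`. The same logic as a standalone function (a sketch; the name is illustrative):

```python
import io

def streams_are_ttys(streams):
    """Return True only if every stream exists, has isatty(), and reports
    being a tty; a closed stream (ValueError) counts as not a tty."""
    for stream in streams:
        try:
            if not stream or not hasattr(stream, "isatty") or not stream.isatty():
                return False
        except ValueError:
            # stream is closed
            return False
    return True
```

Overridden streams are the motivating case: `sys.stdin` may be `None`, or a file-like object with no `isatty` method at all.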
133 133
134 134 _use_simple_prompt = ('IPY_TEST_SIMPLE_PROMPT' in os.environ) or (not _is_tty)
135 135
136 136 def black_reformat_handler(text_before_cursor):
137 137 """
138 138 We do not need to protect against errors here:
139 139 that is taken care of at a higher level, where any reformatting error is ignored.
140 140 Indeed, we may call reformatting on incomplete code.
141 141 """
142 142 import black
143 143
144 144 formatted_text = black.format_str(text_before_cursor, mode=black.FileMode())
145 145 if not text_before_cursor.endswith("\n") and formatted_text.endswith("\n"):
146 146 formatted_text = formatted_text[:-1]
147 147 return formatted_text
148 148
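Both reformat handlers share one post-processing rule: black and yapf always emit a trailing newline, which must be stripped when the user's buffer did not end with one (otherwise the cursor jumps to a new line mid-edit). That rule in isolation, as a sketch:

```python
def preserve_trailing_newline(original, formatted):
    """Drop the newline a formatter appended when the user's input
    did not end with one; otherwise leave the output untouched."""
    if not original.endswith("\n") and formatted.endswith("\n"):
        return formatted[:-1]
    return formatted
```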
149 149
150 150 def yapf_reformat_handler(text_before_cursor):
151 151 from yapf.yapflib import file_resources
152 152 from yapf.yapflib import yapf_api
153 153
154 154 style_config = file_resources.GetDefaultStyleForDir(os.getcwd())
155 155 formatted_text, was_formatted = yapf_api.FormatCode(
156 156 text_before_cursor, style_config=style_config
157 157 )
158 158 if was_formatted:
159 159 if not text_before_cursor.endswith("\n") and formatted_text.endswith("\n"):
160 160 formatted_text = formatted_text[:-1]
161 161 return formatted_text
162 162 else:
163 163 return text_before_cursor
164 164
165 165
166 166 class PtkHistoryAdapter(History):
167 167 """
168 168 prompt_toolkit has its own way of handling history, where it assumes it can
169 169 push/pull from history.
170 170
171 171 """
172 172
173 173 def __init__(self, shell):
174 174 super().__init__()
175 175 self.shell = shell
176 176 self._refresh()
177 177
178 178 def append_string(self, string):
179 179 # we rely on sql for that.
180 180 self._loaded = False
181 181 self._refresh()
182 182
183 183 def _refresh(self):
184 184 if not self._loaded:
185 185 self._loaded_strings = list(self.load_history_strings())
186 186
187 187 def load_history_strings(self):
188 188 last_cell = ""
189 189 res = []
190 190 for __, ___, cell in self.shell.history_manager.get_tail(
191 191 self.shell.history_load_length, include_latest=True
192 192 ):
193 193 # Ignore blank lines and consecutive duplicates
194 194 cell = cell.rstrip()
195 195 if cell and (cell != last_cell):
196 196 res.append(cell)
197 197 last_cell = cell
198 198 yield from res[::-1]
199 199
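`load_history_strings` filters out blank cells and consecutive duplicates, then yields the survivors newest-first. The same logic on a plain list, as a sketch (the function name is illustrative):

```python
def newest_first_unique(cells):
    """Drop blank cells and consecutive duplicates (after rstrip),
    then return the survivors newest-first."""
    last = ""
    kept = []
    for cell in cells:
        cell = cell.rstrip()
        if cell and cell != last:
            kept.append(cell)
            last = cell
    return kept[::-1]
```

Only *consecutive* duplicates are dropped; re-running an old command after something else still produces a fresh history entry.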
200 200 def store_string(self, string: str) -> None:
201 201 pass
202 202
203 203 class TerminalInteractiveShell(InteractiveShell):
204 204 mime_renderers = Dict().tag(config=True)
205 205
206 206 space_for_menu = Integer(6, help='Number of lines at the bottom of the screen '
207 207 'to reserve for the tab completion menu, '
208 208 'search history, ...etc. The height of '
209 209 'these menus will be at most this value. '
210 210 'Increase it if you prefer long and skinny '
211 211 'menus, decrease for short and wide.'
212 212 ).tag(config=True)
213 213
214 214 pt_app: UnionType[PromptSession, None] = None
215 215 auto_suggest: UnionType[
216 216 AutoSuggestFromHistory, NavigableAutoSuggestFromHistory, None
217 217 ] = None
218 218 debugger_history = None
219 219
220 220 debugger_history_file = Unicode(
221 221 "~/.pdbhistory", help="File in which to store and read history"
222 222 ).tag(config=True)
223 223
224 224 simple_prompt = Bool(_use_simple_prompt,
225 225 help="""Use `raw_input` for the REPL, without completion and prompt colors.
226 226
227 227 Useful when controlling IPython as a subprocess, and piping
228 228 STDIN/OUT/ERR. Known usage are: IPython's own testing machinery,
229 229 and emacs' inferior-python subprocess (assuming you have set
230 230 `python-shell-interpreter` to "ipython") available through the
231 231 built-in `M-x run-python` and third party packages such as elpy.
232 232
233 233 This mode defaults to `True` if the `IPY_TEST_SIMPLE_PROMPT`
234 234 environment variable is set, or if the current terminal is not a tty.
235 235 Thus the default value reported in --help-all, or config, will often
236 236 be incorrect.
237 237 """,
238 238 ).tag(config=True)
239 239
240 240 @property
241 241 def debugger_cls(self):
242 242 return Pdb if self.simple_prompt else TerminalPdb
243 243
244 244 confirm_exit = Bool(True,
245 245 help="""
246 246 Set to confirm when you try to exit IPython with an EOF (Control-D
247 247 in Unix, Control-Z/Enter in Windows). By typing 'exit' or 'quit',
248 248 you can force a direct exit without any confirmation.""",
249 249 ).tag(config=True)
250 250
251 251 editing_mode = Unicode('emacs',
252 252 help="Shortcut style to use at the prompt. 'vi' or 'emacs'.",
253 253 ).tag(config=True)
254 254
255 255 emacs_bindings_in_vi_insert_mode = Bool(
256 256 True,
257 257 help="Add shortcuts from 'emacs' insert mode to 'vi' insert mode.",
258 258 ).tag(config=True)
259 259
260 260 modal_cursor = Bool(
261 261 True,
262 262 help="""
263 263 Cursor shape changes depending on vi mode: beam in vi insert mode,
264 264 block in nav mode, underscore in replace mode.""",
265 265 ).tag(config=True)
266 266
267 267 ttimeoutlen = Float(
268 268 0.01,
269 269 help="""The time in milliseconds to wait for a key code
270 270 to complete.""",
271 271 ).tag(config=True)
272 272
273 273 timeoutlen = Float(
274 274 0.5,
275 275 help="""The time in milliseconds to wait for a mapped key
276 276 sequence to complete.""",
277 277 ).tag(config=True)
278 278
279 279 autoformatter = Unicode(
280 280 None,
281 281 help="Autoformatter to reformat Terminal code. Can be `'black'`, `'yapf'` or `None`",
282 282 allow_none=True
283 283 ).tag(config=True)
284 284
285 285 auto_match = Bool(
286 286 False,
287 287 help="""
288 288 Automatically add/delete closing bracket or quote when opening bracket or quote is entered/deleted.
289 289 Brackets: (), [], {}
290 290 Quotes: '', \"\"
291 291 """,
292 292 ).tag(config=True)
293 293
294 294 mouse_support = Bool(False,
295 295 help="Enable mouse support in the prompt\n(Note: prevents selecting text with the mouse)"
296 296 ).tag(config=True)
297 297
298 298 # We don't load the list of styles for the help string, because loading
299 299 # Pygments plugins takes time and can cause unexpected errors.
300 300 highlighting_style = Union([Unicode('legacy'), Type(klass=Style)],
301 301 help="""The name or class of a Pygments style to use for syntax
302 302 highlighting. To see available styles, run `pygmentize -L styles`."""
303 303 ).tag(config=True)
304 304
305 305 @validate('editing_mode')
306 306 def _validate_editing_mode(self, proposal):
307 307 if proposal['value'].lower() == 'vim':
308 308 proposal['value']= 'vi'
309 309 elif proposal['value'].lower() == 'default':
310 310 proposal['value']= 'emacs'
311 311
312 312 if hasattr(EditingMode, proposal['value'].upper()):
313 313 return proposal['value'].lower()
314 314
315 315 return self.editing_mode
316 316
317 317 @observe('editing_mode')
318 318 def _editing_mode(self, change):
319 319 if self.pt_app:
320 320 self.pt_app.editing_mode = getattr(EditingMode, change.new.upper())
321 321
322 322 def _set_formatter(self, formatter):
323 323 if formatter is None:
324 324 self.reformat_handler = lambda x:x
325 325 elif formatter == 'black':
326 326 self.reformat_handler = black_reformat_handler
327 327 elif formatter == "yapf":
328 328 self.reformat_handler = yapf_reformat_handler
329 329 else:
330 330 raise ValueError(f"Invalid formatter {formatter!r}; valid values are None, 'black' or 'yapf'")
331 331
332 332 @observe("autoformatter")
333 333 def _autoformatter_changed(self, change):
334 334 formatter = change.new
335 335 self._set_formatter(formatter)
336 336
337 337 @observe('highlighting_style')
338 338 @observe('colors')
339 339 def _highlighting_style_changed(self, change):
340 340 self.refresh_style()
341 341
342 342 def refresh_style(self):
343 343 self._style = self._make_style_from_name_or_cls(self.highlighting_style)
344 344
345 345 highlighting_style_overrides = Dict(
346 346 help="Override highlighting format for specific tokens"
347 347 ).tag(config=True)
348 348
349 349 true_color = Bool(False,
350 350 help="""Use 24bit colors instead of 256 colors in prompt highlighting.
351 351 If your terminal supports true color, the following command should
352 352 print ``TRUECOLOR`` in orange::
353 353
354 354 printf \"\\x1b[38;2;255;100;0mTRUECOLOR\\x1b[0m\\n\"
355 355 """,
356 356 ).tag(config=True)
357 357
358 358 editor = Unicode(get_default_editor(),
359 359 help="Set the editor used by IPython (default to $EDITOR/vi/notepad)."
360 360 ).tag(config=True)
361 361
362 362 prompts_class = Type(Prompts, help='Class used to generate Prompt token for prompt_toolkit').tag(config=True)
363 363
364 364 prompts = Instance(Prompts)
365 365
366 366 @default('prompts')
367 367 def _prompts_default(self):
368 368 return self.prompts_class(self)
369 369
370 370 # @observe('prompts')
371 371 # def _(self, change):
372 372 # self._update_layout()
373 373
374 374 @default('displayhook_class')
375 375 def _displayhook_class_default(self):
376 376 return RichPromptDisplayHook
377 377
378 378 term_title = Bool(True,
379 379 help="Automatically set the terminal title"
380 380 ).tag(config=True)
381 381
382 382 term_title_format = Unicode("IPython: {cwd}",
383 383 help="Customize the terminal title format. This is a Python format string. " +
384 384 "Available substitutions are: {cwd}."
385 385 ).tag(config=True)
386 386
387 387 display_completions = Enum(('column', 'multicolumn','readlinelike'),
388 388 help= ( "Options for displaying tab completions, 'column', 'multicolumn', and "
389 389 "'readlinelike'. These options are for `prompt_toolkit`, see "
390 390 "`prompt_toolkit` documentation for more information."
391 391 ),
392 392 default_value='multicolumn').tag(config=True)
393 393
394 394 highlight_matching_brackets = Bool(True,
395 395 help="Highlight matching brackets.",
396 396 ).tag(config=True)
397 397
398 398 extra_open_editor_shortcuts = Bool(False,
399 399 help="Enable vi (v) or Emacs (C-X C-E) shortcuts to open an external editor. "
400 400 "This is in addition to the F2 binding, which is always enabled."
401 401 ).tag(config=True)
402 402
403 403 handle_return = Any(None,
404 404 help="Provide an alternative handler to be called when the user presses "
405 405 "Return. This is an advanced option intended for debugging, which "
406 406 "may be changed or removed in later releases."
407 407 ).tag(config=True)
408 408
409 409 enable_history_search = Bool(True,
410 410 help="Enable or disable the prompt_toolkit history search"
411 411 ).tag(config=True)
412 412
413 413 autosuggestions_provider = Unicode(
414 414 "NavigableAutoSuggestFromHistory",
415 415 help="Specifies from which source automatic suggestions are provided. "
416 416 "Can be set to ``'NavigableAutoSuggestFromHistory'`` (:kbd:`up` and "
417 417 ":kbd:`down` swap suggestions), ``'AutoSuggestFromHistory'``, "
418 418 " or ``None`` to disable automatic suggestions. "
419 419 "Default is ``'NavigableAutoSuggestFromHistory'``.",
420 420 allow_none=True,
421 421 ).tag(config=True)
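# For example, a user could pick the simpler provider in ipython_config.py
# (the value names are those documented in the help string above):
#
#     c.TerminalInteractiveShell.autosuggestions_provider = "AutoSuggestFromHistory"
#
# or set it to None to disable automatic suggestions entirely.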
422 422
423 423 def _set_autosuggestions(self, provider):
424 424 # disconnect old handler
425 425 if self.auto_suggest and isinstance(
426 426 self.auto_suggest, NavigableAutoSuggestFromHistory
427 427 ):
428 428 self.auto_suggest.disconnect()
429 429 if provider is None:
430 430 self.auto_suggest = None
431 431 elif provider == "AutoSuggestFromHistory":
432 432 self.auto_suggest = AutoSuggestFromHistory()
433 433 elif provider == "NavigableAutoSuggestFromHistory":
434 434 self.auto_suggest = NavigableAutoSuggestFromHistory()
435 435 else:
436 436 raise ValueError("No valid provider.")
437 437 if self.pt_app:
438 438 self.pt_app.auto_suggest = self.auto_suggest
439 439
440 440 @observe("autosuggestions_provider")
441 441 def _autosuggestions_provider_changed(self, change):
442 442 provider = change.new
443 443 self._set_autosuggestions(provider)
444 444
445 445 shortcuts = List(
446 446 trait=Dict(
447 447 key_trait=Enum(
448 448 [
449 449 "command",
450 450 "match_keys",
451 451 "match_filter",
452 452 "new_keys",
453 453 "new_filter",
454 454 "create",
455 455 ]
456 456 ),
457 457 per_key_traits={
458 458 "command": Unicode(),
459 459 "match_keys": List(Unicode()),
460 460 "match_filter": Unicode(),
461 461 "new_keys": List(Unicode()),
462 462 "new_filter": Unicode(),
463 463 "create": Bool(False),
464 464 },
465 465 ),
466 466 help="""Add, disable or modify shortcuts.
467 467
468 468 Each entry on the list should be a dictionary with ``command`` key
469 469 identifying the target function executed by the shortcut and at least
470 470 one of the following:
471 471
472 472 - ``match_keys``: list of keys used to match an existing shortcut,
473 473 - ``match_filter``: shortcut filter used to match an existing shortcut,
474 474 - ``new_keys``: list of keys to set,
475 475 - ``new_filter``: a new shortcut filter to set
476 476
477 477 The filters have to be composed of pre-defined verbs and joined by one
478 478 of the following conjunctions: ``&`` (and), ``|`` (or), ``~`` (not).
479 479 The pre-defined verbs are:
480 480
481 481 {}
482 482
483 483
484 484 To disable a shortcut, set ``new_keys`` to an empty list.
485 485 To add a new shortcut, include the key ``create`` with value ``True``.
486 486
487 487 When modifying/disabling shortcuts, ``match_keys``/``match_filter`` can
488 488 be omitted if the provided specification uniquely identifies a shortcut
489 489 to be modified/disabled. When modifying a shortcut ``new_filter`` or
490 490 ``new_keys`` can be omitted which will result in reuse of the existing
491 491 filter/keys.
492 492
493 493 Only shortcuts defined in IPython (and not default prompt-toolkit
494 494 shortcuts) can be modified or disabled. The full list of shortcuts,
495 495 command identifiers and filters is available under
496 496 :ref:`terminal-shortcuts-list`.
497 497 """.format(
498 498 "\n ".join([f"- `{k}`" for k in KEYBINDING_FILTERS])
499 499 ),
500 500 ).tag(config=True)
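# Example configuration (the command identifier and keys here are
# hypothetical; valid identifiers are derived from KEY_BINDINGS via
# create_identifier), e.g. in ipython_config.py:
#
#     c.TerminalInteractiveShell.shortcuts = [
#         {"command": "open_input_in_editor", "new_keys": ["c-o"]},
#     ]
#
# Setting "new_keys" to [] instead would disable the matched shortcut.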
501 501
502 502 @observe("shortcuts")
503 503 def _shortcuts_changed(self, change):
504 504 if self.pt_app:
505 505 self.pt_app.key_bindings = self._merge_shortcuts(user_shortcuts=change.new)
506 506
507 507 def _merge_shortcuts(self, user_shortcuts):
508 508 # rebuild the bindings list from scratch
509 509 key_bindings = create_ipython_shortcuts(self)
510 510
511 511 # for now we only allow adding shortcuts for commands which are already
512 512 # registered; this is a security precaution.
513 513 known_commands = {
514 514 create_identifier(binding.command): binding.command
515 515 for binding in KEY_BINDINGS
516 516 }
517 517 shortcuts_to_skip = []
518 518 shortcuts_to_add = []
519 519
520 520 for shortcut in user_shortcuts:
521 521 command_id = shortcut["command"]
522 522 if command_id not in known_commands:
523 523 allowed_commands = "\n - ".join(known_commands)
524 524 raise ValueError(
525 525 f"{command_id} is not a known shortcut command."
526 526 f" Allowed commands are: \n - {allowed_commands}"
527 527 )
528 528 old_keys = shortcut.get("match_keys", None)
529 529 old_filter = (
530 530 filter_from_string(shortcut["match_filter"])
531 531 if "match_filter" in shortcut
532 532 else None
533 533 )
534 534 matching = [
535 535 binding
536 536 for binding in KEY_BINDINGS
537 537 if (
538 538 (old_filter is None or binding.filter == old_filter)
539 539 and (old_keys is None or [k for k in binding.keys] == old_keys)
540 540 and create_identifier(binding.command) == command_id
541 541 )
542 542 ]
543 543
544 544 new_keys = shortcut.get("new_keys", None)
545 545 new_filter = shortcut.get("new_filter", None)
546 546
547 547 command = known_commands[command_id]
548 548
549 549 creating_new = shortcut.get("create", False)
550 550 modifying_existing = not creating_new and (
551 551 new_keys is not None or new_filter
552 552 )
553 553
554 554 if creating_new and new_keys == []:
555 555 raise ValueError("Cannot add a shortcut without keys")
556 556
557 557 if modifying_existing:
558 558 specification = {
559 559 key: shortcut[key]
560 560 for key in ["command", "filter"]
561 561 if key in shortcut
562 562 }
563 563 if len(matching) == 0:
564 564 raise ValueError(
565 565 f"No shortcuts matching {specification} found in {KEY_BINDINGS}"
566 566 )
567 567 elif len(matching) > 1:
568 568 raise ValueError(
569 569 f"Multiple shortcuts matching {specification} found,"
570 570 f" please add keys/filter to select one of: {matching}"
571 571 )
572 572
573 573 matched = matching[0]
574 574 old_filter = matched.filter
575 575 old_keys = list(matched.keys)
576 576 shortcuts_to_skip.append(
577 577 RuntimeBinding(
578 578 command,
579 579 keys=old_keys,
580 580 filter=old_filter,
581 581 )
582 582 )
583 583
584 584 if new_keys != []:
585 585 shortcuts_to_add.append(
586 586 RuntimeBinding(
587 587 command,
588 588 keys=new_keys or old_keys,
589 filter=filter_from_string(new_filter)
590 if new_filter is not None
591 else (
592 old_filter
593 if old_filter is not None
594 else filter_from_string("always")
589 filter=(
590 filter_from_string(new_filter)
591 if new_filter is not None
592 else (
593 old_filter
594 if old_filter is not None
595 else filter_from_string("always")
596 )
595 597 ),
596 598 )
597 599 )
598 600
599 601 # rebuild the bindings list from scratch
600 602 key_bindings = create_ipython_shortcuts(self, skip=shortcuts_to_skip)
601 603 for binding in shortcuts_to_add:
602 604 add_binding(key_bindings, binding)
603 605
604 606 return key_bindings
605 607
606 608 prompt_includes_vi_mode = Bool(True,
607 609 help="Display the current vi mode (when using vi editing mode)."
608 610 ).tag(config=True)
609 611
610 612 prompt_line_number_format = Unicode(
611 613 "",
612 614 help="The format for line numbering. The template is passed `line` (int, 1-based),"
613 615 " the current line number, and `rel_line`, the relative line number."
614 616 " For example, to display both you can use the template string:"
615 617 " c.TerminalInteractiveShell.prompt_line_number_format='{line: 4d}/{rel_line:+03d} | '"
616 618 " This displays the current line number with leading space and a width of at least 4"
617 619 " characters, as well as the relative line number, zero-padded and always with a + or - sign."
618 620 " Note that when using Emacs mode the prompt of the first line may not update.",
619 621 ).tag(config=True)
620 622
621 623 @observe('term_title')
622 624 def init_term_title(self, change=None):
623 625 # Enable or disable the terminal title.
624 626 if self.term_title and _is_tty:
625 627 toggle_set_term_title(True)
626 628 set_term_title(self.term_title_format.format(cwd=abbrev_cwd()))
627 629 else:
628 630 toggle_set_term_title(False)
629 631
630 632 def restore_term_title(self):
631 633 if self.term_title and _is_tty:
632 634 restore_term_title()
633 635
634 636 def init_display_formatter(self):
635 637 super(TerminalInteractiveShell, self).init_display_formatter()
636 638 # terminal only supports plain text
637 639 self.display_formatter.active_types = ["text/plain"]
638 640
639 641 def init_prompt_toolkit_cli(self):
640 642 if self.simple_prompt:
641 643 # Fall back to plain non-interactive output for tests.
642 644 # This is very limited.
643 645 def prompt():
644 646 prompt_text = "".join(x[1] for x in self.prompts.in_prompt_tokens())
645 647 lines = [input(prompt_text)]
646 648 prompt_continuation = "".join(x[1] for x in self.prompts.continuation_prompt_tokens())
647 649 while self.check_complete('\n'.join(lines))[0] == 'incomplete':
648 650 lines.append( input(prompt_continuation) )
649 651 return '\n'.join(lines)
650 652 self.prompt_for_code = prompt
651 653 return
652 654
653 655 # Set up keyboard shortcuts
654 656 key_bindings = self._merge_shortcuts(user_shortcuts=self.shortcuts)
655 657
656 658 # Pre-populate history from IPython's history database
657 659 history = PtkHistoryAdapter(self)
658 660
659 661 self._style = self._make_style_from_name_or_cls(self.highlighting_style)
660 662 self.style = DynamicStyle(lambda: self._style)
661 663
662 664 editing_mode = getattr(EditingMode, self.editing_mode.upper())
663 665
664 666 self._use_asyncio_inputhook = False
665 667 self.pt_app = PromptSession(
666 668 auto_suggest=self.auto_suggest,
667 669 editing_mode=editing_mode,
668 670 key_bindings=key_bindings,
669 671 history=history,
670 672 completer=IPythonPTCompleter(shell=self),
671 673 enable_history_search=self.enable_history_search,
672 674 style=self.style,
673 675 include_default_pygments_style=False,
674 676 mouse_support=self.mouse_support,
675 677 enable_open_in_editor=self.extra_open_editor_shortcuts,
676 678 color_depth=self.color_depth,
677 679 tempfile_suffix=".py",
678 680 **self._extra_prompt_options(),
679 681 )
680 682 if isinstance(self.auto_suggest, NavigableAutoSuggestFromHistory):
681 683 self.auto_suggest.connect(self.pt_app)
682 684
683 685 def _make_style_from_name_or_cls(self, name_or_cls):
684 686 """
685 687 Small wrapper that makes an IPython-compatible style from a style name.
686 688
687 689 We need it to add styling for the prompt, etc.
688 690 """
689 691 style_overrides = {}
690 692 if name_or_cls == 'legacy':
691 693 legacy = self.colors.lower()
692 694 if legacy == 'linux':
693 695 style_cls = get_style_by_name('monokai')
694 696 style_overrides = _style_overrides_linux
695 697 elif legacy == 'lightbg':
696 698 style_overrides = _style_overrides_light_bg
697 699 style_cls = get_style_by_name('pastie')
698 700 elif legacy == 'neutral':
699 701 # The default theme needs to be visible on both a dark background
700 702 # and a light background, because we can't tell what the terminal
701 703 # looks like. These tweaks to the default theme help with that.
702 704 style_cls = get_style_by_name('default')
703 705 style_overrides.update({
704 706 Token.Number: '#ansigreen',
705 707 Token.Operator: 'noinherit',
706 708 Token.String: '#ansiyellow',
707 709 Token.Name.Function: '#ansiblue',
708 710 Token.Name.Class: 'bold #ansiblue',
709 711 Token.Name.Namespace: 'bold #ansiblue',
710 712 Token.Name.Variable.Magic: '#ansiblue',
711 713 Token.Prompt: '#ansigreen',
712 714 Token.PromptNum: '#ansibrightgreen bold',
713 715 Token.OutPrompt: '#ansired',
714 716 Token.OutPromptNum: '#ansibrightred bold',
715 717 })
716 718
717 719 # Hack: Due to limited color support on the Windows console
718 720 # the prompt colors will be wrong without this
719 721 if os.name == 'nt':
720 722 style_overrides.update({
721 723 Token.Prompt: '#ansidarkgreen',
722 724 Token.PromptNum: '#ansigreen bold',
723 725 Token.OutPrompt: '#ansidarkred',
724 726 Token.OutPromptNum: '#ansired bold',
725 727 })
726 728 elif legacy == 'nocolor':
727 729 style_cls = _NoStyle
728 730 style_overrides = {}
729 731 else:
730 732 raise ValueError('Got unknown colors: ', legacy)
731 733 else:
732 734 if isinstance(name_or_cls, str):
733 735 style_cls = get_style_by_name(name_or_cls)
734 736 else:
735 737 style_cls = name_or_cls
736 738 style_overrides = {
737 739 Token.Prompt: '#ansigreen',
738 740 Token.PromptNum: '#ansibrightgreen bold',
739 741 Token.OutPrompt: '#ansired',
740 742 Token.OutPromptNum: '#ansibrightred bold',
741 743 }
742 744 style_overrides.update(self.highlighting_style_overrides)
743 745 style = merge_styles([
744 746 style_from_pygments_cls(style_cls),
745 747 style_from_pygments_dict(style_overrides),
746 748 ])
747 749
748 750 return style
749 751
750 752 @property
751 753 def pt_complete_style(self):
752 754 return {
753 755 'multicolumn': CompleteStyle.MULTI_COLUMN,
754 756 'column': CompleteStyle.COLUMN,
755 757 'readlinelike': CompleteStyle.READLINE_LIKE,
756 758 }[self.display_completions]
757 759
758 760 @property
759 761 def color_depth(self):
760 762 return (ColorDepth.TRUE_COLOR if self.true_color else None)
761 763
762 764 def _extra_prompt_options(self):
763 765 """
764 766 Return the current layout options for the current TerminalInteractiveShell.
765 767 """
766 768 def get_message():
767 769 return PygmentsTokens(self.prompts.in_prompt_tokens())
768 770
769 771 if self.editing_mode == "emacs" and self.prompt_line_number_format == "":
770 772 # With Emacs mode the prompt is (usually) static, so we call the
771 773 # function only once. With vi mode it can toggle between [ins] and
772 774 # [nor], so we can't precompute.
773 775 # Here we favor the default keybinding mode, which almost everybody
774 776 # uses, to decrease CPU usage.
775 777 # If we have issues with users with custom prompts, we can see how to
776 778 # work around this.
777 779 get_message = get_message()
778 780
779 781 options = {
780 782 "complete_in_thread": False,
781 783 "lexer": IPythonPTLexer(),
782 784 "reserve_space_for_menu": self.space_for_menu,
783 785 "message": get_message,
784 786 "prompt_continuation": (
785 787 lambda width, lineno, is_soft_wrap: PygmentsTokens(
786 788 _backward_compat_continuation_prompt_tokens(
787 789 self.prompts.continuation_prompt_tokens, width, lineno=lineno
788 790 )
789 791 )
790 792 ),
791 793 "multiline": True,
792 794 "complete_style": self.pt_complete_style,
793 795 "input_processors": [
794 796 # Highlight matching brackets, but only when this setting is
795 797 # enabled, and only when the DEFAULT_BUFFER has the focus.
796 798 ConditionalProcessor(
797 799 processor=HighlightMatchingBracketProcessor(chars="[](){}"),
798 800 filter=HasFocus(DEFAULT_BUFFER)
799 801 & ~IsDone()
800 802 & Condition(lambda: self.highlight_matching_brackets),
801 803 ),
802 804 # Show auto-suggestion in lines other than the last line.
803 805 ConditionalProcessor(
804 806 processor=AppendAutoSuggestionInAnyLine(),
805 807 filter=HasFocus(DEFAULT_BUFFER)
806 808 & ~IsDone()
807 809 & Condition(
808 810 lambda: isinstance(
809 811 self.auto_suggest, NavigableAutoSuggestFromHistory
810 812 )
811 813 ),
812 814 ),
813 815 ],
814 816 }
815 817 if not PTK3:
816 818 options['inputhook'] = self.inputhook
817 819
818 820 return options
819 821
820 822 def prompt_for_code(self):
821 823 if self.rl_next_input:
822 824 default = self.rl_next_input
823 825 self.rl_next_input = None
824 826 else:
825 827 default = ''
826 828
827 829 # In order to make sure that asyncio code written in the
828 830 # interactive shell doesn't interfere with the prompt, we run the
829 831 # prompt in a different event loop.
830 832 # If we don't do this, people could spawn a coroutine with a
831 833 # while True inside, which would freeze the prompt.
832 834
833 835 with patch_stdout(raw=True):
834 836 if self._use_asyncio_inputhook:
835 837 # When we integrate the asyncio event loop, run the UI in the
836 838 # same event loop as the rest of the code. don't use an actual
837 839 # input hook. (Asyncio is not made for nesting event loops.)
838 840 asyncio_loop = get_asyncio_loop()
839 841 text = asyncio_loop.run_until_complete(
840 842 self.pt_app.prompt_async(
841 843 default=default, **self._extra_prompt_options()
842 844 )
843 845 )
844 846 else:
845 847 text = self.pt_app.prompt(
846 848 default=default,
847 849 inputhook=self._inputhook,
848 850 **self._extra_prompt_options(),
849 851 )
850 852
851 853 return text
852 854
853 855 def enable_win_unicode_console(self):
854 856 # Since IPython 7.10 doesn't support Python < 3.6, and per PEP 528 Python uses the Unicode APIs for the Windows
855 857 # console by default, WUC shouldn't be needed.
856 858 warn("`enable_win_unicode_console` is deprecated since IPython 7.10, does not do anything and will be removed in the future",
857 859 DeprecationWarning,
858 860 stacklevel=2)
859 861
860 862 def init_io(self):
861 863 if sys.platform not in {'win32', 'cli'}:
862 864 return
863 865
864 866 import colorama
865 867 colorama.init()
866 868
867 869 def init_magics(self):
868 870 super(TerminalInteractiveShell, self).init_magics()
869 871 self.register_magics(TerminalMagics)
870 872
871 873 def init_alias(self):
872 874 # The parent class defines aliases that can be safely used with any
873 875 # frontend.
874 876 super(TerminalInteractiveShell, self).init_alias()
875 877
876 878 # Now define aliases that only make sense on the terminal, because they
877 879 # need direct access to the console in a way that we can't emulate in
878 880 # GUI or web frontend
879 881 if os.name == 'posix':
880 882 for cmd in ('clear', 'more', 'less', 'man'):
881 883 self.alias_manager.soft_define_alias(cmd, cmd)
882 884
883 885 def __init__(self, *args, **kwargs) -> None:
884 886 super(TerminalInteractiveShell, self).__init__(*args, **kwargs)
885 887 self._set_autosuggestions(self.autosuggestions_provider)
886 888 self.init_prompt_toolkit_cli()
887 889 self.init_term_title()
888 890 self.keep_running = True
889 891 self._set_formatter(self.autoformatter)
890 892
891 893 def ask_exit(self):
892 894 self.keep_running = False
893 895
894 896 rl_next_input = None
895 897
896 898 def interact(self):
897 899 self.keep_running = True
898 900 while self.keep_running:
899 901 print(self.separate_in, end='')
900 902
901 903 try:
902 904 code = self.prompt_for_code()
903 905 except EOFError:
904 906 if (not self.confirm_exit) \
905 907 or self.ask_yes_no('Do you really want to exit ([y]/n)?','y','n'):
906 908 self.ask_exit()
907 909
908 910 else:
909 911 if code:
910 912 self.run_cell(code, store_history=True)
911 913
912 914 def mainloop(self):
913 915 # An extra layer of protection in case someone mashing Ctrl-C breaks
914 916 # out of our internal code.
915 917 while True:
916 918 try:
917 919 self.interact()
918 920 break
919 921 except KeyboardInterrupt as e:
920 922 print("\n%s escaped interact()\n" % type(e).__name__)
921 923 finally:
922 924 # An interrupt during the eventloop will mess up the
923 925 # internal state of the prompt_toolkit library.
924 926 # Stopping the eventloop fixes this, see
925 927 # https://github.com/ipython/ipython/pull/9867
926 928 if hasattr(self, '_eventloop'):
927 929 self._eventloop.stop()
928 930
929 931 self.restore_term_title()
930 932
931 933 # Try to call some at-exit operations optimistically, as some things can't
932 934 # be done during interpreter shutdown. This is technically inaccurate, as
933 935 # it makes mainloop not re-callable, but that should be a rare if not
934 936 # nonexistent use case.
935 937
936 938 self._atexit_once()
937 939
938 940 _inputhook = None
939 941 def inputhook(self, context):
940 942 if self._inputhook is not None:
941 943 self._inputhook(context)
942 944
943 945 active_eventloop: Optional[str] = None
944 946
945 947 def enable_gui(self, gui: Optional[str] = None) -> None:
946 948 if gui:
947 949 from ..core.pylabtools import _convert_gui_from_matplotlib
948 950
949 951 gui = _convert_gui_from_matplotlib(gui)
950 952
951 953 if self.simple_prompt is True and gui is not None:
952 954 print(
953 955 f'Cannot install event loop hook for "{gui}" when running with `--simple-prompt`.'
954 956 )
955 957 print(
956 958 "NOTE: Tk is supported natively; use Tk apps and Tk backends with `--simple-prompt`."
957 959 )
958 960 return
959 961
960 962 if self._inputhook is None and gui is None:
961 963 print("No event loop hook running.")
962 964 return
963 965
964 966 if self._inputhook is not None and gui is not None:
965 967 newev, newinhook = get_inputhook_name_and_func(gui)
966 968 if self._inputhook == newinhook:
967 969 # same inputhook, do nothing
968 970 self.log.info(
969 971 f"Shell is already running the {self.active_eventloop} eventloop. Doing nothing"
970 972 )
971 973 return
972 974 self.log.warning(
973 975 f"Shell is already running a different gui event loop for {self.active_eventloop}. "
974 976 "Call with no arguments to disable the current loop."
975 977 )
976 978 return
977 979 if self._inputhook is not None and gui is None:
978 980 self.active_eventloop = self._inputhook = None
979 981
980 982 if gui and (gui not in {None, "webagg"}):
981 983 # This hook runs with each cycle of the `prompt_toolkit`'s event loop.
982 984 self.active_eventloop, self._inputhook = get_inputhook_name_and_func(gui)
983 985 else:
984 986 self.active_eventloop = self._inputhook = None
985 987
986 988 self._use_asyncio_inputhook = gui == "asyncio"
987 989
988 990 # Run !system commands directly, not through pipes, so terminal programs
989 991 # work correctly.
990 992 system = InteractiveShell.system_raw
991 993
992 994 def auto_rewrite_input(self, cmd):
993 995 """Overridden from the parent class to use fancy rewriting prompt"""
994 996 if not self.show_rewritten_input:
995 997 return
996 998
997 999 tokens = self.prompts.rewrite_prompt_tokens()
998 1000 if self.pt_app:
999 1001 print_formatted_text(PygmentsTokens(tokens), end='',
1000 1002 style=self.pt_app.app.style)
1001 1003 print(cmd)
1002 1004 else:
1003 1005 prompt = ''.join(s for t, s in tokens)
1004 1006 print(prompt, cmd, sep='')
1005 1007
1006 1008 _prompts_before = None
1007 1009 def switch_doctest_mode(self, mode):
1008 1010 """Switch prompts to classic for %doctest_mode"""
1009 1011 if mode:
1010 1012 self._prompts_before = self.prompts
1011 1013 self.prompts = ClassicPrompts(self)
1012 1014 elif self._prompts_before:
1013 1015 self.prompts = self._prompts_before
1014 1016 self._prompts_before = None
1015 1017 # self._update_layout()
1016 1018
1017 1019
1018 1020 InteractiveShellABC.register(TerminalInteractiveShell)
1019 1021
1020 1022 if __name__ == '__main__':
1021 1023 TerminalInteractiveShell.instance().interact()
@@ -1,104 +1,105
1 1 """
2 2 Utility functions for keybindings with prompt_toolkit.
3 3
4 4 These will be bound to specific key presses and filters,
5 5 such as whether we are in edit mode and whether the completer is open.
6 6 """
7
7 8 import re
8 9 from prompt_toolkit.key_binding import KeyPressEvent
9 10
10 11
11 12 def parenthesis(event: KeyPressEvent):
12 13 """Auto-close parenthesis"""
13 14 event.current_buffer.insert_text("()")
14 15 event.current_buffer.cursor_left()
15 16
16 17
17 18 def brackets(event: KeyPressEvent):
18 19 """Auto-close brackets"""
19 20 event.current_buffer.insert_text("[]")
20 21 event.current_buffer.cursor_left()
21 22
22 23
23 24 def braces(event: KeyPressEvent):
24 25 """Auto-close braces"""
25 26 event.current_buffer.insert_text("{}")
26 27 event.current_buffer.cursor_left()
27 28
28 29
29 30 def double_quote(event: KeyPressEvent):
30 31 """Auto-close double quotes"""
31 32 event.current_buffer.insert_text('""')
32 33 event.current_buffer.cursor_left()
33 34
34 35
35 36 def single_quote(event: KeyPressEvent):
36 37 """Auto-close single quotes"""
37 38 event.current_buffer.insert_text("''")
38 39 event.current_buffer.cursor_left()
39 40
40 41
41 42 def docstring_double_quotes(event: KeyPressEvent):
42 43 """Auto-close docstring (double quotes)"""
43 44 event.current_buffer.insert_text('""""')
44 45 event.current_buffer.cursor_left(3)
45 46
46 47
47 48 def docstring_single_quotes(event: KeyPressEvent):
48 49 """Auto-close docstring (single quotes)"""
49 50 event.current_buffer.insert_text("''''")
50 51 event.current_buffer.cursor_left(3)
51 52
52 53
53 54 def raw_string_parenthesis(event: KeyPressEvent):
54 55 """Auto-close parenthesis in raw strings"""
55 56 matches = re.match(
56 57 r".*(r|R)[\"'](-*)",
57 58 event.current_buffer.document.current_line_before_cursor,
58 59 )
59 60 dashes = matches.group(2) if matches else ""
60 61 event.current_buffer.insert_text("()" + dashes)
61 62 event.current_buffer.cursor_left(len(dashes) + 1)
62 63
63 64
64 65 def raw_string_bracket(event: KeyPressEvent):
65 66 """Auto-close bracket in raw strings"""
66 67 matches = re.match(
67 68 r".*(r|R)[\"'](-*)",
68 69 event.current_buffer.document.current_line_before_cursor,
69 70 )
70 71 dashes = matches.group(2) if matches else ""
71 72 event.current_buffer.insert_text("[]" + dashes)
72 73 event.current_buffer.cursor_left(len(dashes) + 1)
73 74
74 75
75 76 def raw_string_braces(event: KeyPressEvent):
76 77 """Auto-close braces in raw strings"""
77 78 matches = re.match(
78 79 r".*(r|R)[\"'](-*)",
79 80 event.current_buffer.document.current_line_before_cursor,
80 81 )
81 82 dashes = matches.group(2) if matches else ""
82 83 event.current_buffer.insert_text("{}" + dashes)
83 84 event.current_buffer.cursor_left(len(dashes) + 1)
84 85
85 86
86 87 def skip_over(event: KeyPressEvent):
87 88 """Skip over automatically added parenthesis/quote.
88 89
89 90 (rather than adding another parenthesis/quote)"""
90 91 event.current_buffer.cursor_right()
91 92
92 93
93 94 def delete_pair(event: KeyPressEvent):
94 95 """Delete auto-closed parenthesis"""
95 96 event.current_buffer.delete()
96 97 event.current_buffer.delete_before_cursor()
97 98
98 99
99 100 auto_match_parens = {"(": parenthesis, "[": brackets, "{": braces}
100 101 auto_match_parens_raw_string = {
101 102 "(": raw_string_parenthesis,
102 103 "[": raw_string_bracket,
103 104 "{": raw_string_braces,
104 105 }
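# A minimal sketch (assuming prompt_toolkit is available) of how one of these
# handlers could be wired to a key binding; IPython itself registers them
# through its own shortcut machinery rather than like this:
#
#     from prompt_toolkit.key_binding import KeyBindings
#
#     kb = KeyBindings()
#     kb.add("(")(parenthesis)  # typing "(" inserts "()" and moves the cursor left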