fix type_extensions
M Bussonnier -
@@ -1,3377 +1,3379 @@
1 1 """Completion for IPython.
2 2
3 3 This module started as fork of the rlcompleter module in the Python standard
4 4 library. The original enhancements made to rlcompleter have been sent
5 5 upstream and were accepted as of Python 2.3,
6 6
7 7 This module now supports a wide variety of completion mechanisms, both
8 8 for normal classic Python code and completers for IPython-specific
9 9 syntax like magics.
10 10
11 11 Latex and Unicode completion
12 12 ============================
13 13
14 14 IPython and compatible frontends can not only complete your code, but can help
15 15 you to input a wide range of characters. In particular, we allow you to insert
16 16 a unicode character using the tab completion mechanism.
17 17
18 18 Forward latex/unicode completion
19 19 --------------------------------
20 20
21 21 Forward completion allows you to easily type a unicode character using its latex
22 22 name, or its unicode long description. To do so, type a backslash followed by the
23 23 relevant name and press tab:
24 24
25 25
26 26 Using latex completion:
27 27
28 28 .. code::
29 29
30 30 \\alpha<tab>
31 31 Ξ±
32 32
33 33 or using unicode completion:
34 34
35 35
36 36 .. code::
37 37
38 38 \\GREEK SMALL LETTER ALPHA<tab>
39 39 Ξ±
40 40
41 41
42 42 Only valid Python identifiers will complete. Combining characters (like arrows or
43 43 dots) are also available; unlike latex, they need to be put after their
44 44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45 45
46 46 Some browsers are known to display combining characters incorrectly.
47 47
48 48 Backward latex completion
49 49 -------------------------
50 50
51 51 It is sometimes challenging to know how to type a character. If you are using
52 52 IPython, or any compatible frontend, you can prepend a backslash to the character
53 53 and press :kbd:`Tab` to expand it to its latex form.
54 54
55 55 .. code::
56 56
57 57 \\Ξ±<tab>
58 58 \\alpha
59 59
60 60
61 61 Both forward and backward completions can be deactivated by setting the
62 62 :std:configtrait:`Completer.backslash_combining_completions` option to
63 63 ``False``.
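The two directions can be pictured as lookups in a symbol table and its reverse. A minimal stdlib-only sketch with a toy one-entry table (the real tables live in ``IPython.core.latex_symbols``):

```python
# Toy sketch of the two lookup directions: forward (latex name -> character)
# and backward (character -> latex name). The single entry is illustrative.
latex_symbols = {"\\alpha": "α"}
reverse_latex_symbol = {v: k for k, v in latex_symbols.items()}

print(latex_symbols["\\alpha"])   # forward completion: \alpha<tab> -> α
print(reverse_latex_symbol["α"])  # backward completion: \α<tab> -> \alpha
```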
64 64
65 65
66 66 Experimental
67 67 ============
68 68
69 69 Starting with IPython 6.0, this module can make use of the Jedi library to
70 70 generate completions both using static analysis of the code, and by dynamically
71 71 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
72 72 library for Python. The APIs attached to this new mechanism are unstable and will
73 73 raise unless used in a :any:`provisionalcompleter` context manager.
74 74
75 75 You will find that the following are experimental:
76 76
77 77 - :any:`provisionalcompleter`
78 78 - :any:`IPCompleter.completions`
79 79 - :any:`Completion`
80 80 - :any:`rectify_completions`
81 81
82 82 .. note::
83 83
84 84 better name for :any:`rectify_completions` ?
85 85
86 86 We welcome any feedback on these new APIs, and we also encourage you to try this
87 87 module in debug mode (start IPython with ``--Completer.debug=True``) in order
88 88 to have extra logging information if :any:`jedi` is crashing, or if the current
89 89 IPython completer's pending deprecations are returning results not yet handled
90 90 by :any:`jedi`.
91 91
92 92 Using Jedi for tab completion allows snippets like the following to work without
93 93 having to execute any code:
94 94
95 95 >>> myvar = ['hello', 42]
96 96 ... myvar[1].bi<tab>
97 97
98 98 Tab completion will be able to infer that ``myvar[1]`` is an integer without
99 99 executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
100 100 option.
101 101
102 102 Be sure to update :any:`jedi` to the latest stable version or to try the
103 103 current development version to get better completions.
104 104
105 105 Matchers
106 106 ========
107 107
108 108 All completion routines are implemented using the unified *Matchers* API.
109 109 The matchers API is provisional and subject to change without notice.
110 110
111 111 The built-in matchers include:
112 112
113 113 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
114 114 - :any:`IPCompleter.magic_matcher`: completions for magics,
115 115 - :any:`IPCompleter.unicode_name_matcher`,
116 116 :any:`IPCompleter.fwd_unicode_matcher`
117 117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
118 118 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
119 119 - :any:`IPCompleter.file_matcher`: paths to files and directories,
120 120 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
121 121 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
122 122 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
123 123 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
124 124 implementation in :any:`InteractiveShell` which uses IPython hooks system
125 125 (`complete_command`) with string dispatch (including regular expressions).
126 126 Unlike other matchers, ``custom_completer_matcher`` will not suppress
127 127 Jedi results, to match behaviour in earlier IPython versions.
128 128
129 129 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
130 130
131 131 Matcher API
132 132 -----------
133 133
134 134 Simplifying some details, the ``Matcher`` interface can be described as
135 135
136 136 .. code-block::
137 137
138 138 MatcherAPIv1 = Callable[[str], list[str]]
139 139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
140 140
141 141 Matcher = MatcherAPIv1 | MatcherAPIv2
142 142
143 143 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
144 144 and remains supported as the simplest way of generating completions. This is also
145 145 currently the only API supported by the IPython hooks system `complete_command`.
146 146
147 147 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
148 148 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
149 149 and requires a literal ``2`` for v2 Matchers.
150 150
151 151 Once the API stabilises future versions may relax the requirement for specifying
152 152 ``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore
153 153 please do not rely on the presence of ``matcher_api_version`` for any purposes.
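Concretely, the two call shapes can be sketched as plain functions. This is an illustrative sketch only, not the module's real classes; the completion context and result are mimicked with plain dicts:

```python
# v1-style matcher: takes the token string, returns a list of strings.
def legacy_matcher(text):
    return [w for w in ("alpha", "alias") if w.startswith(text)]

# v2-style matcher: takes a completion context, returns a result dict,
# and must advertise its API version with a literal 2.
def modern_matcher(context):
    token = context["token"]  # stand-in for CompletionContext.token
    return {
        "completions": [
            {"text": w, "type": "keyword"}
            for w in ("alpha", "alias")
            if w.startswith(token)
        ],
    }
modern_matcher.matcher_api_version = 2

print(legacy_matcher("al"))
print(modern_matcher({"token": "alp"}))
```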
154 154
155 155 Suppression of competing matchers
156 156 ---------------------------------
157 157
158 158 By default results from all matchers are combined, in the order determined by
159 159 their priority. Matchers can request to suppress results from subsequent
160 160 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
161 161
162 162 When multiple matchers simultaneously request suppression, the results from
163 163 the matcher with the higher priority will be returned.
164 164
165 165 Sometimes it is desirable to suppress most but not all other matchers;
166 166 this can be achieved by adding a set of identifiers of matchers which
167 167 should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.
168 168
169 169 The suppression behaviour is user-configurable via
170 170 :std:configtrait:`IPCompleter.suppress_competing_matchers`.
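As a sketch, a v2 result that suppresses all competing matchers except one spared matcher could carry the fields described above (the dict shape follows ``MatcherResult``; the spared identifier is only an example):

```python
# Illustrative MatcherResult-shaped dict: suppress everything else, but
# spare one matcher via do_not_suppress.
result = {
    "completions": [{"text": "token", "type": "keyword"}],
    "suppress": True,
    "do_not_suppress": {"IPCompleter.magic_matcher"},
}
print(result["suppress"], sorted(result["do_not_suppress"]))
```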
171 171 """
172 172
173 173
174 174 # Copyright (c) IPython Development Team.
175 175 # Distributed under the terms of the Modified BSD License.
176 176 #
177 177 # Some of this code originated from rlcompleter in the Python standard library
178 178 # Copyright (C) 2001 Python Software Foundation, www.python.org
179 179
180 180 from __future__ import annotations
181 181 import builtins as builtin_mod
182 182 import enum
183 183 import glob
184 184 import inspect
185 185 import itertools
186 186 import keyword
187 187 import os
188 188 import re
189 189 import string
190 190 import sys
191 191 import tokenize
192 192 import time
193 193 import unicodedata
194 194 import uuid
195 195 import warnings
196 196 from ast import literal_eval
197 197 from collections import defaultdict
198 198 from contextlib import contextmanager
199 199 from dataclasses import dataclass
200 200 from functools import cached_property, partial
201 201 from types import SimpleNamespace
202 202 from typing import (
203 203 Iterable,
204 204 Iterator,
205 205 List,
206 206 Tuple,
207 207 Union,
208 208 Any,
209 209 Sequence,
210 210 Dict,
211 211 Optional,
212 212 TYPE_CHECKING,
213 213 Set,
214 214 Sized,
215 215 TypeVar,
216 216 Literal,
217 217 )
218 218
219 219 from IPython.core.guarded_eval import guarded_eval, EvaluationContext
220 220 from IPython.core.error import TryNext
221 221 from IPython.core.inputtransformer2 import ESC_MAGIC
222 222 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
223 223 from IPython.core.oinspect import InspectColors
224 224 from IPython.testing.skipdoctest import skip_doctest
225 225 from IPython.utils import generics
226 226 from IPython.utils.decorators import sphinx_options
227 227 from IPython.utils.dir2 import dir2, get_real_method
228 228 from IPython.utils.docs import GENERATING_DOCUMENTATION
229 229 from IPython.utils.path import ensure_dir_exists
230 230 from IPython.utils.process import arg_split
231 231 from traitlets import (
232 232 Bool,
233 233 Enum,
234 234 Int,
235 235 List as ListTrait,
236 236 Unicode,
237 237 Dict as DictTrait,
238 238 Union as UnionTrait,
239 239 observe,
240 240 )
241 241 from traitlets.config.configurable import Configurable
242 242
243 243 import __main__
244 244
245 245 from typing import cast
246 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
246
247 if sys.version_info < (3, 12):
248 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
249 else:
250 from typing import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
247 251
248 252
249 253 # skip module docstests
250 254 __skip_doctest__ = True
251 255
252 256
253 257 try:
254 258 import jedi
255 259 jedi.settings.case_insensitive_completion = False
256 260 import jedi.api.helpers
257 261 import jedi.api.classes
258 262 JEDI_INSTALLED = True
259 263 except ImportError:
260 264 JEDI_INSTALLED = False
261 265
262 266
263 if GENERATING_DOCUMENTATION:
264 from typing import TypedDict
265 267
266 268 # -----------------------------------------------------------------------------
267 269 # Globals
268 270 #-----------------------------------------------------------------------------
269 271
270 272 # Ranges where we have most of the valid unicode names. We could be more finely
271 273 # grained, but is it worth it for performance? While unicode has characters in the
272 274 # range 0, 0x110000, we seem to have names for only about 10% of those (131808 as I
273 275 # write this). With the ranges below we cover them all, with a density of ~67%;
274 276 # the biggest next gap we could consider only adds about 1% density, and there are 600
275 277 # gaps that would need hard coding.
276 278 _UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]
277 279
278 280 # Public API
279 281 __all__ = ["Completer", "IPCompleter"]
280 282
281 283 if sys.platform == 'win32':
282 284 PROTECTABLES = ' '
283 285 else:
284 286 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
285 287
286 288 # Protect against returning an enormous number of completions which the frontend
287 289 # may have trouble processing.
288 290 MATCHES_LIMIT = 500
289 291
290 292 # Completion type reported when no type can be inferred.
291 293 _UNKNOWN_TYPE = "<unknown>"
292 294
293 295 # sentinel value to signal lack of a match
294 296 not_found = object()
295 297
296 298 class ProvisionalCompleterWarning(FutureWarning):
297 299 """
298 300 Exception raised by an experimental feature in this module.
299 301
300 302 Wrap code in :any:`provisionalcompleter` context manager if you
301 303 are certain you want to use an unstable feature.
302 304 """
303 305 pass
304 306
305 307 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
306 308
307 309
308 310 @skip_doctest
309 311 @contextmanager
310 312 def provisionalcompleter(action='ignore'):
311 313 """
312 314 This context manager has to be used in any place where unstable completer
313 315 behavior and API may be called.
314 316
315 317 >>> with provisionalcompleter():
316 318 ... completer.do_experimental_things() # works
317 319
318 320 >>> completer.do_experimental_things() # raises.
319 321
320 322 .. note::
321 323
322 324 Unstable
323 325
324 326 By using this context manager you agree that the API in use may change
325 327 without warning, and that you won't complain if it does so.
326 328
327 329 You also understand that, if the API is not to your liking, you should report
328 330 a bug to explain your use case upstream.
329 331
330 332 We'll be happy to get your feedback, feature requests, and improvements on
331 333 any of the unstable APIs!
332 334 """
333 335 with warnings.catch_warnings():
334 336 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
335 337 yield
336 338
337 339
338 340 def has_open_quotes(s):
339 341 """Return whether a string has open quotes.
340 342
341 343 This simply counts whether the number of quote characters of either type in
342 344 the string is odd.
343 345
344 346 Returns
345 347 -------
346 348 If there is an open quote, the quote character is returned. Else, return
347 349 False.
348 350 """
349 351 # We check " first, then ', so complex cases with nested quotes will get
350 352 # the " to take precedence.
351 353 if s.count('"') % 2:
352 354 return '"'
353 355 elif s.count("'") % 2:
354 356 return "'"
355 357 else:
356 358 return False
357 359
358 360
359 361 def protect_filename(s, protectables=PROTECTABLES):
360 362 """Escape a string to protect certain characters."""
361 363 if set(s) & set(protectables):
362 364 if sys.platform == "win32":
363 365 return '"' + s + '"'
364 366 else:
365 367 return "".join(("\\" + c if c in protectables else c) for c in s)
366 368 else:
367 369 return s
368 370
369 371
370 372 def expand_user(path:str) -> Tuple[str, bool, str]:
371 373 """Expand ``~``-style usernames in strings.
372 374
373 375 This is similar to :func:`os.path.expanduser`, but it computes and returns
374 376 extra information that will be useful if the input was being used in
375 377 computing completions, and you wish to return the completions with the
376 378 original '~' instead of its expanded value.
377 379
378 380 Parameters
379 381 ----------
380 382 path : str
381 383 String to be expanded. If no ~ is present, the output is the same as the
382 384 input.
383 385
384 386 Returns
385 387 -------
386 388 newpath : str
387 389 Result of ~ expansion in the input path.
388 390 tilde_expand : bool
389 391 Whether any expansion was performed or not.
390 392 tilde_val : str
391 393 The value that ~ was replaced with.
392 394 """
393 395 # Default values
394 396 tilde_expand = False
395 397 tilde_val = ''
396 398 newpath = path
397 399
398 400 if path.startswith('~'):
399 401 tilde_expand = True
400 402 rest = len(path)-1
401 403 newpath = os.path.expanduser(path)
402 404 if rest:
403 405 tilde_val = newpath[:-rest]
404 406 else:
405 407 tilde_val = newpath
406 408
407 409 return newpath, tilde_expand, tilde_val
408 410
409 411
410 412 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
411 413 """Does the opposite of expand_user, with its outputs.
412 414 """
413 415 if tilde_expand:
414 416 return path.replace(tilde_val, '~')
415 417 else:
416 418 return path
417 419
418 420
419 421 def completions_sorting_key(word):
420 422 """key for sorting completions
421 423
422 424 This does several things:
423 425
424 426 - Demote any completions starting with underscores to the end
425 427 - Insert any %magic and %%cellmagic completions in the alphabetical order
426 428 by their name
427 429 """
428 430 prio1, prio2 = 0, 0
429 431
430 432 if word.startswith('__'):
431 433 prio1 = 2
432 434 elif word.startswith('_'):
433 435 prio1 = 1
434 436
435 437 if word.endswith('='):
436 438 prio1 = -1
437 439
438 440 if word.startswith('%%'):
439 441 # If there's another % in there, this is something else, so leave it alone
440 442 if not "%" in word[2:]:
441 443 word = word[2:]
442 444 prio2 = 2
443 445 elif word.startswith('%'):
444 446 if not "%" in word[1:]:
445 447 word = word[1:]
446 448 prio2 = 1
447 449
448 450 return prio1, word, prio2
449 451
450 452
451 453 class _FakeJediCompletion:
452 454 """
453 455 This is a workaround to communicate to the UI that Jedi has crashed and to
454 456 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
455 457
456 458 Added in IPython 6.0 so should likely be removed for 7.0
457 459
458 460 """
459 461
460 462 def __init__(self, name):
461 463
462 464 self.name = name
463 465 self.complete = name
464 466 self.type = 'crashed'
465 467 self.name_with_symbols = name
466 468 self.signature = ""
467 469 self._origin = "fake"
468 470 self.text = "crashed"
469 471
470 472 def __repr__(self):
471 473 return '<Fake completion object jedi has crashed>'
472 474
473 475
474 476 _JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
475 477
476 478
477 479 class Completion:
478 480 """
479 481 Completion object used and returned by IPython completers.
480 482
481 483 .. warning::
482 484
483 485 Unstable
484 486
485 487 This function is unstable, the API may change without warning.
486 488 It will also raise unless used in the proper context manager.
487 489
488 490 This acts as a middle-ground :any:`Completion` object between the
489 491 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
490 492 object. While Jedi needs a lot of information about the evaluator and how the
491 493 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
492 494 need user-facing information:
493 495 
494 496 - Which range should be replaced by what.
495 497 - Some metadata (like completion type), or meta-information to be displayed to
496 498 the user.
497 499
498 500 For debugging purposes we can also store the origin of the completion (``jedi``,
499 501 ``IPython.python_matches``, ``IPython.magics_matches``...).
500 502 """
501 503
502 504 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
503 505
504 506 def __init__(
505 507 self,
506 508 start: int,
507 509 end: int,
508 510 text: str,
509 511 *,
510 512 type: Optional[str] = None,
511 513 _origin="",
512 514 signature="",
513 515 ) -> None:
514 516 warnings.warn(
515 517 "``Completion`` is a provisional API (as of IPython 6.0). "
516 518 "It may change without warnings. "
517 519 "Use in corresponding context manager.",
518 520 category=ProvisionalCompleterWarning,
519 521 stacklevel=2,
520 522 )
521 523
522 524 self.start = start
523 525 self.end = end
524 526 self.text = text
525 527 self.type = type
526 528 self.signature = signature
527 529 self._origin = _origin
528 530
529 531 def __repr__(self):
530 532 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
531 533 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
532 534
533 535 def __eq__(self, other) -> bool:
534 536 """
535 537 Equality and hash do not hash the type (as some completers may not be
536 538 able to infer the type), but are used to (partially) de-duplicate
537 539 completions.
538 540 
539 541 Completely de-duplicating completions is a bit trickier than just
540 542 comparing, as it depends on the surrounding text, which Completions are not
541 543 aware of.
542 544 """
543 545 return self.start == other.start and \
544 546 self.end == other.end and \
545 547 self.text == other.text
546 548
547 549 def __hash__(self):
548 550 return hash((self.start, self.end, self.text))
549 551
550 552
551 553 class SimpleCompletion:
552 554 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
553 555
554 556 .. warning::
555 557
556 558 Provisional
557 559
558 560 This class is used to describe the currently supported attributes of
559 561 simple completion items, and any additional implementation details
560 562 should not be relied on. Additional attributes may be included in
561 563 future versions, and the meaning of text disambiguated from the current
562 564 dual meaning of "text to insert" and "text to use as a label".
563 565 """
564 566
565 567 __slots__ = ["text", "type"]
566 568
567 569 def __init__(self, text: str, *, type: Optional[str] = None):
568 570 self.text = text
569 571 self.type = type
570 572
571 573 def __repr__(self):
572 574 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
573 575
574 576
575 577 class _MatcherResultBase(TypedDict):
576 578 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
577 579
578 580 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
579 581 matched_fragment: NotRequired[str]
580 582
581 583 #: Whether to suppress results from all other matchers (True), some
582 584 #: matchers (set of identifiers) or none (False); default is False.
583 585 suppress: NotRequired[Union[bool, Set[str]]]
584 586
585 587 #: Identifiers of matchers which should NOT be suppressed when this matcher
586 588 #: requests to suppress all other matchers; defaults to an empty set.
587 589 do_not_suppress: NotRequired[Set[str]]
588 590
589 591 #: Are completions already ordered and should be left as-is? default is False.
590 592 ordered: NotRequired[bool]
591 593
592 594
593 595 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
594 596 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
595 597 """Result of new-style completion matcher."""
596 598
597 599 # note: TypedDict is added again to the inheritance chain
598 600 # in order to get __orig_bases__ for documentation
599 601
600 602 #: List of candidate completions
601 603 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
602 604
603 605
604 606 class _JediMatcherResult(_MatcherResultBase):
605 607 """Matching result returned by Jedi (will be processed differently)"""
606 608
607 609 #: list of candidate completions
608 610 completions: Iterator[_JediCompletionLike]
609 611
610 612
611 613 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
612 614 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
613 615
614 616
615 617 @dataclass
616 618 class CompletionContext:
617 619 """Completion context provided as an argument to matchers in the Matcher API v2."""
618 620
619 621 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
620 622 # which was not explicitly visible as an argument of the matcher, making any refactor
621 623 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
622 624 # from the completer, and make substituting them in sub-classes easier.
623 625
624 626 #: Relevant fragment of code directly preceding the cursor.
625 627 #: The extraction of token is implemented via splitter heuristic
626 628 #: (following readline behaviour for legacy reasons), which is user configurable
627 629 #: (by switching the greedy mode).
628 630 token: str
629 631
630 632 #: The full available content of the editor or buffer
631 633 full_text: str
632 634
633 635 #: Cursor position in the line (the same for ``full_text`` and ``text``).
634 636 cursor_position: int
635 637
636 638 #: Cursor line in ``full_text``.
637 639 cursor_line: int
638 640
639 641 #: The maximum number of completions that will be used downstream.
640 642 #: Matchers can use this information to abort early.
641 643 #: The built-in Jedi matcher is currently exempt from this limit.
642 644 # If not given, return all possible completions.
643 645 limit: Optional[int]
644 646
645 647 @cached_property
646 648 def text_until_cursor(self) -> str:
647 649 return self.line_with_cursor[: self.cursor_position]
648 650
649 651 @cached_property
650 652 def line_with_cursor(self) -> str:
651 653 return self.full_text.split("\n")[self.cursor_line]
652 654
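The two cached properties derive from the primary fields; a standalone sketch of the same slicing for a small two-line buffer (values chosen for illustration, with ``cursor_position`` counting columns within the cursor line):

```python
# Sketch of CompletionContext's derived fields for a small buffer.
full_text = "a = 1\nprint(a"
cursor_line = 1          # second line, 0-indexed
cursor_position = 7      # column within that line

line_with_cursor = full_text.split("\n")[cursor_line]
text_until_cursor = line_with_cursor[:cursor_position]

print(line_with_cursor)   # print(a
print(text_until_cursor)  # print(a
```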
653 655
654 656 #: Matcher results for API v2.
655 657 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
656 658
657 659
658 660 class _MatcherAPIv1Base(Protocol):
659 661 def __call__(self, text: str) -> List[str]:
660 662 """Call signature."""
661 663 ...
662 664
663 665 #: Used to construct the default matcher identifier
664 666 __qualname__: str
665 667
666 668
667 669 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
668 670 #: API version
669 671 matcher_api_version: Optional[Literal[1]]
670 672
671 673 def __call__(self, text: str) -> List[str]:
672 674 """Call signature."""
673 675 ...
674 676
675 677
676 678 #: Protocol describing Matcher API v1.
677 679 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
678 680
679 681
680 682 class MatcherAPIv2(Protocol):
681 683 """Protocol describing Matcher API v2."""
682 684
683 685 #: API version
684 686 matcher_api_version: Literal[2] = 2
685 687
686 688 def __call__(self, context: CompletionContext) -> MatcherResult:
687 689 """Call signature."""
688 690 ...
689 691
690 692 #: Used to construct the default matcher identifier
691 693 __qualname__: str
692 694
693 695
694 696 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
695 697
696 698
697 699 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
698 700 api_version = _get_matcher_api_version(matcher)
699 701 return api_version == 1
700 702
701 703
702 704 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
703 705 api_version = _get_matcher_api_version(matcher)
704 706 return api_version == 2
705 707
706 708
707 709 def _is_sizable(value: Any) -> TypeGuard[Sized]:
708 710 """Determines whether objects is sizable"""
709 711 return hasattr(value, "__len__")
710 712
711 713
712 714 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
713 715 """Determines whether objects is sizable"""
714 716 return hasattr(value, "__next__")
715 717
716 718
717 719 def has_any_completions(result: MatcherResult) -> bool:
718 720 """Check if any result includes any completions."""
719 721 completions = result["completions"]
720 722 if _is_sizable(completions):
721 723 return len(completions) != 0
722 724 if _is_iterator(completions):
723 725 try:
724 726 old_iterator = completions
725 727 first = next(old_iterator)
726 728 result["completions"] = cast(
727 729 Iterator[SimpleCompletion],
728 730 itertools.chain([first], old_iterator),
729 731 )
730 732 return True
731 733 except StopIteration:
732 734 return False
733 735 raise ValueError(
734 736 "Completions returned by matcher need to be an Iterator or a Sizable"
735 737 )
736 738
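The iterator branch above peeks at one element to test non-emptiness, then stitches it back with :any:`itertools.chain` so no completion is lost. A stdlib-only sketch of the trick:

```python
import itertools

# Peek at an iterator to test non-emptiness, then rebuild an equivalent
# iterator that still yields every element.
def peek_nonempty(iterator):
    try:
        first = next(iterator)
    except StopIteration:
        return False, iter(())
    return True, itertools.chain([first], iterator)

ok, rebuilt = peek_nonempty(iter(["a", "b"]))
print(ok, list(rebuilt))           # True ['a', 'b']
print(peek_nonempty(iter([]))[0])  # False
```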
737 739
738 740 def completion_matcher(
739 741 *,
740 742 priority: Optional[float] = None,
741 743 identifier: Optional[str] = None,
742 744 api_version: int = 1,
743 745 ):
744 746 """Adds attributes describing the matcher.
745 747
746 748 Parameters
747 749 ----------
748 750 priority : Optional[float]
749 751 The priority of the matcher, determines the order of execution of matchers.
750 752 Higher priority means that the matcher will be executed first. Defaults to 0.
751 753 identifier : Optional[str]
752 754 identifier of the matcher allowing users to modify the behaviour via traitlets,
753 755 and also used for debugging (will be passed as ``origin`` with the completions).
754 756
755 757 Defaults to matcher function's ``__qualname__`` (for example,
756 758 ``IPCompleter.file_matcher`` for the built-in matcher defined
757 759 as a ``file_matcher`` method of the ``IPCompleter`` class).
758 760 api_version: Optional[int]
759 761 version of the Matcher API used by this matcher.
760 762 Currently supported values are 1 and 2.
761 763 Defaults to 1.
762 764 """
763 765
764 766 def wrapper(func: Matcher):
765 767 func.matcher_priority = priority or 0 # type: ignore
766 768 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
767 769 func.matcher_api_version = api_version # type: ignore
768 770 if TYPE_CHECKING:
769 771 if api_version == 1:
770 772 func = cast(MatcherAPIv1, func)
771 773 elif api_version == 2:
772 774 func = cast(MatcherAPIv2, func)
773 775 return func
774 776
775 777 return wrapper
776 778
777 779
778 780 def _get_matcher_priority(matcher: Matcher):
779 781 return getattr(matcher, "matcher_priority", 0)
780 782
781 783
782 784 def _get_matcher_id(matcher: Matcher):
783 785 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
784 786
785 787
786 788 def _get_matcher_api_version(matcher):
787 789 return getattr(matcher, "matcher_api_version", 1)
788 790
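The helpers above implement a defaulting-``getattr`` dispatch: a missing ``matcher_api_version`` attribute means v1, while v2 matchers carry a literal ``2``. A minimal sketch:

```python
# Sketch of the attribute-based version dispatch used by the helpers above.
def get_api_version(matcher):
    return getattr(matcher, "matcher_api_version", 1)

def legacy(text):            # v1: no version attribute
    return []

def modern(context):         # v2: advertises its version
    return {"completions": []}
modern.matcher_api_version = 2

print(get_api_version(legacy), get_api_version(modern))  # 1 2
```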
789 791
790 792 context_matcher = partial(completion_matcher, api_version=2)
791 793
792 794
793 795 _IC = Iterable[Completion]
794 796
795 797
796 798 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
797 799 """
798 800 Deduplicate a set of completions.
799 801
800 802 .. warning::
801 803
802 804 Unstable
803 805
804 806 This function is unstable, API may change without warning.
805 807
806 808 Parameters
807 809 ----------
808 810 text : str
809 811 text that should be completed.
810 812 completions : Iterator[Completion]
811 813 iterator over the completions to deduplicate
812 814
813 815 Yields
814 816 ------
815 817 `Completions` objects
816 818 Completions coming from multiple sources, may be different but end up having
817 819 the same effect when applied to ``text``. If this is the case, this will
818 820 consider completions as equal and only emit the first encountered.
819 821 Not folded into `completions()` yet for debugging purposes, and to detect when
820 822 the IPython completer does return things that Jedi does not, but should be
821 823 at some point.
822 824 """
823 825 completions = list(completions)
824 826 if not completions:
825 827 return
826 828
827 829 new_start = min(c.start for c in completions)
828 830 new_end = max(c.end for c in completions)
829 831
830 832 seen = set()
831 833 for c in completions:
832 834 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
833 835 if new_text not in seen:
834 836 yield c
835 837 seen.add(new_text)
836 838
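The equality rule above can be seen by applying each candidate to the text: two completions with different spans are duplicates when the resulting string is the same. A sketch using ``(start, end, replacement)`` tuples in place of :any:`Completion` objects:

```python
# Dedup sketch: apply each completion over a common (start, end) window;
# candidates producing the same final text are duplicates.
text = "ab"
completions = [(0, 2, "abc"), (1, 2, "bc")]  # (start, end, replacement)

new_start = min(s for s, _, _ in completions)
new_end = max(e for _, e, _ in completions)

seen, unique = set(), []
for s, e, t in completions:
    applied = text[new_start:s] + t + text[e:new_end]
    if applied not in seen:
        seen.add(applied)
        unique.append((s, e, t))

print(unique)  # [(0, 2, 'abc')] -- the second candidate had the same effect
```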
837 839
838 840 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
839 841 """
840 842 Rectify a set of completions to all have the same ``start`` and ``end``
841 843
842 844 .. warning::
843 845
844 846 Unstable
845 847
846 848 This function is unstable, the API may change without warning.
847 849 It will also raise unless used in the proper context manager.
848 850
849 851 Parameters
850 852 ----------
851 853 text : str
852 854 text that should be completed.
853 855 completions : Iterator[Completion]
854 856 iterator over the completions to rectify
855 857 _debug : bool
856 858 Log failed completion
857 859
858 860 Notes
859 861 -----
860 862 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
861 863 the Jupyter Protocol requires them to. This will readjust
862 864 the completion to have the same ``start`` and ``end`` by padding both
863 865 extremities with surrounding text.
864 866
865 867 During stabilisation this should support a ``_debug`` option to log which
866 868 completions are returned by the IPython completer and not found in Jedi, in
867 869 order to make upstream bug reports.
868 870 """
869 871 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
870 872 "It may change without warnings. "
871 873 "Use in corresponding context manager.",
872 874 category=ProvisionalCompleterWarning, stacklevel=2)
873 875
874 876 completions = list(completions)
875 877 if not completions:
876 878 return
877 879 starts = (c.start for c in completions)
878 880 ends = (c.end for c in completions)
879 881
880 882 new_start = min(starts)
881 883 new_end = max(ends)
882 884
883 885 seen_jedi = set()
884 886 seen_python_matches = set()
885 887 for c in completions:
886 888 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
887 889 if c._origin == 'jedi':
888 890 seen_jedi.add(new_text)
889 891 elif c._origin == "IPCompleter.python_matcher":
890 892 seen_python_matches.add(new_text)
891 893 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
892 894 diff = seen_python_matches.difference(seen_jedi)
893 895 if diff and _debug:
894 896 print('IPython.python matches have extras:', diff)
895 897
896 898
897 899 if sys.platform == 'win32':
898 900 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
899 901 else:
900 902 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
901 903
902 904 GREEDY_DELIMS = ' =\r\n'
903 905
904 906
905 907 class CompletionSplitter(object):
906 908 """An object to split an input line in a manner similar to readline.
907 909
908 910 By having our own implementation, we can expose readline-like completion in
909 911 a uniform manner to all frontends. This object only needs to be given the
910 912 line of text to be split and the cursor position on said line, and it
911 913 returns the 'word' to be completed on at the cursor after splitting the
912 914 entire line.
913 915
914 916 What characters are used as splitting delimiters can be controlled by
915 917 setting the ``delims`` attribute (this is a property that internally
916 918 automatically builds the necessary regular expression)"""
917 919
918 920 # Private interface
919 921
920 922 # A string of delimiter characters. The default value makes sense for
921 923 # IPython's most typical usage patterns.
922 924 _delims = DELIMS
923 925
924 926 # The expression (a normal string) to be compiled into a regular expression
925 927 # for actual splitting. We store it as an attribute mostly for ease of
926 928 # debugging, since this type of code can be so tricky to debug.
927 929 _delim_expr = None
928 930
929 931 # The regular expression that does the actual splitting
930 932 _delim_re = None
931 933
932 934 def __init__(self, delims=None):
933 935 delims = CompletionSplitter._delims if delims is None else delims
934 936 self.delims = delims
935 937
936 938 @property
937 939 def delims(self):
938 940 """Return the string of delimiter characters."""
939 941 return self._delims
940 942
941 943 @delims.setter
942 944 def delims(self, delims):
943 945 """Set the delimiters for line splitting."""
944 946 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
945 947 self._delim_re = re.compile(expr)
946 948 self._delims = delims
947 949 self._delim_expr = expr
948 950
949 951 def split_line(self, line, cursor_pos=None):
950 952 """Split a line of text with a cursor at the given position.
951 953 """
952 954 l = line if cursor_pos is None else line[:cursor_pos]
953 955 return self._delim_re.split(l)[-1]
954 956
955 957
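As a rough illustration of the readline-style splitting described above, here is a minimal, self-contained sketch (``MiniSplitter`` is an illustrative name, not part of IPython's API):

```python
import re

# Delimiters as used on non-Windows platforms (copied from DELIMS above).
DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'

class MiniSplitter:
    """Minimal sketch of CompletionSplitter.split_line."""

    def __init__(self, delims=DELIMS):
        # Build a character class matching any single delimiter.
        expr = '[' + ''.join('\\' + c for c in delims) + ']'
        self._delim_re = re.compile(expr)

    def split_line(self, line, cursor_pos=None):
        # Split the text left of the cursor and keep the last fragment:
        # that is the 'word' completion should operate on.
        part = line if cursor_pos is None else line[:cursor_pos]
        return self._delim_re.split(part)[-1]

s = MiniSplitter()
print(s.split_line("print(foo.ba"))       # -> foo.ba
print(s.split_line("a = b + cd.ef", 10))  # cursor after 'cd' -> cd
```

Note that ``.`` is deliberately not a delimiter, so attribute chains like ``foo.ba`` survive as a single word.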
956 958
957 959 class Completer(Configurable):
958 960
959 961 greedy = Bool(
960 962 False,
961 963 help="""Activate greedy completion.
962 964
963 965 .. deprecated:: 8.8
964 966 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
965 967
966 968 When enabled in IPython 8.8 or newer, changes configuration as follows:
967 969
968 970 - ``Completer.evaluation = 'unsafe'``
969 971 - ``Completer.auto_close_dict_keys = True``
970 972 """,
971 973 ).tag(config=True)
972 974
973 975 evaluation = Enum(
974 976 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
975 977 default_value="limited",
976 978 help="""Policy for code evaluation under completion.
977 979
978 980 Successive options enable progressively more eager evaluation for better
979 981 completion suggestions, including for nested dictionaries, nested lists,
980 982 or even results of function calls.
981 983 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
982 984 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
983 985
984 986 Allowed values are:
985 987
986 988 - ``forbidden``: no evaluation of code is permitted,
987 989 - ``minimal``: evaluation of literals and access to built-in namespace;
988 990 no item/attribute evaluation, no access to locals/globals,
989 991 no evaluation of any operations or comparisons.
990 992 - ``limited``: access to all namespaces, evaluation of hard-coded methods
991 993 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
992 994 :any:`object.__getitem__`) on allow-listed objects (for example:
993 995 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
994 996 - ``unsafe``: evaluation of all methods and function calls but not of
995 997 syntax with side-effects like `del x`,
996 998 - ``dangerous``: completely arbitrary evaluation.
997 999 """,
998 1000 ).tag(config=True)
999 1001
1000 1002 use_jedi = Bool(default_value=JEDI_INSTALLED,
1001 1003 help="Experimental: Use Jedi to generate autocompletions. "
1002 1004 "Defaults to True if Jedi is installed.").tag(config=True)
1003 1005
1004 1006 jedi_compute_type_timeout = Int(default_value=400,
1005 1007 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1006 1008 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1007 1009 performance by preventing Jedi from building its cache.
1008 1010 """).tag(config=True)
1009 1011
1010 1012 debug = Bool(default_value=False,
1011 1013 help='Enable debug for the Completer. Mostly print extra '
1012 1014 'information for experimental jedi integration.')\
1013 1015 .tag(config=True)
1014 1016
1015 1017 backslash_combining_completions = Bool(True,
1016 1018 help="Enable unicode completions, e.g. \\alpha<tab> . "
1017 1019 "Includes completion of latex commands, unicode names, and expanding "
1018 1020 "unicode characters back to latex commands.").tag(config=True)
1019 1021
1020 1022 auto_close_dict_keys = Bool(
1021 1023 False,
1022 1024 help="""
1023 1025 Enable auto-closing dictionary keys.
1024 1026
1025 1027 When enabled, string keys will be suffixed with a final quote
1026 1028 (matching the opening quote), tuple keys will also receive a
1027 1029 separating comma if needed, and keys which are final will
1028 1030 receive a closing bracket (``]``).
1029 1031 """,
1030 1032 ).tag(config=True)
1031 1033
1032 1034 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1033 1035 """Create a new completer for the command line.
1034 1036
1035 1037 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1036 1038
1037 1039 If unspecified, the default namespace where completions are performed
1038 1040 is __main__ (technically, __main__.__dict__). Namespaces should be
1039 1041 given as dictionaries.
1040 1042
1041 1043 An optional second namespace can be given. This allows the completer
1042 1044 to handle cases where both the local and global scopes need to be
1043 1045 distinguished.
1044 1046 """
1045 1047
1046 1048 # Don't bind to namespace quite yet, but flag whether the user wants a
1047 1049 # specific namespace or to use __main__.__dict__. This will allow us
1048 1050 # to bind to __main__.__dict__ at completion time, not now.
1049 1051 if namespace is None:
1050 1052 self.use_main_ns = True
1051 1053 else:
1052 1054 self.use_main_ns = False
1053 1055 self.namespace = namespace
1054 1056
1055 1057 # The global namespace, if given, can be bound directly
1056 1058 if global_namespace is None:
1057 1059 self.global_namespace = {}
1058 1060 else:
1059 1061 self.global_namespace = global_namespace
1060 1062
1061 1063 self.custom_matchers = []
1062 1064
1063 1065 super(Completer, self).__init__(**kwargs)
1064 1066
1065 1067 def complete(self, text, state):
1066 1068 """Return the next possible completion for 'text'.
1067 1069
1068 1070 This is called successively with state == 0, 1, 2, ... until it
1069 1071 returns None. The completion should begin with 'text'.
1070 1072
1071 1073 """
1072 1074 if self.use_main_ns:
1073 1075 self.namespace = __main__.__dict__
1074 1076
1075 1077 if state == 0:
1076 1078 if "." in text:
1077 1079 self.matches = self.attr_matches(text)
1078 1080 else:
1079 1081 self.matches = self.global_matches(text)
1080 1082 try:
1081 1083 return self.matches[state]
1082 1084 except IndexError:
1083 1085 return None
1084 1086
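The ``state`` protocol above mirrors readline: the method is called repeatedly with increasing ``state`` until it returns ``None``. A standalone model of that contract (``TinyCompleter`` is hypothetical, used only to illustrate the protocol):

```python
class TinyCompleter:
    """Minimal model of the readline state protocol used by Completer.complete."""

    def __init__(self, words):
        self.words = words
        self.matches = []

    def complete(self, text, state):
        if state == 0:
            # Matches are computed once, on the first call ...
            self.matches = [w for w in self.words if w.startswith(text)]
        try:
            # ... then handed back one at a time for state = 0, 1, 2, ...
            return self.matches[state]
        except IndexError:
            return None

tc = TinyCompleter(["print", "property", "pow"])
print([tc.complete("pr", s) for s in range(4)])  # ['print', 'property', None, None]
```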
1085 1087 def global_matches(self, text):
1086 1088 """Compute matches when text is a simple name.
1087 1089
1088 1090 Return a list of all keywords, built-in functions and names currently
1089 1091 defined in self.namespace or self.global_namespace that match.
1090 1092
1091 1093 """
1092 1094 matches = []
1093 1095 match_append = matches.append
1094 1096 n = len(text)
1095 1097 for lst in [
1096 1098 keyword.kwlist,
1097 1099 builtin_mod.__dict__.keys(),
1098 1100 list(self.namespace.keys()),
1099 1101 list(self.global_namespace.keys()),
1100 1102 ]:
1101 1103 for word in lst:
1102 1104 if word[:n] == text and word != "__builtins__":
1103 1105 match_append(word)
1104 1106
1105 1107 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1106 1108 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1107 1109 shortened = {
1108 1110 "_".join([sub[0] for sub in word.split("_")]): word
1109 1111 for word in lst
1110 1112 if snake_case_re.match(word)
1111 1113 }
1112 1114 for word in shortened.keys():
1113 1115 if word[:n] == text and word != "__builtins__":
1114 1116 match_append(shortened[word])
1115 1117 return matches
1116 1118
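The second loop in ``global_matches`` implements completion of snake_case names from their initials (typing ``f_b`` can complete to ``foo_bar``). A self-contained sketch of that idea (``abbrev_matches`` is an illustrative name):

```python
import re

# At least two non-underscore runs joined by underscores, e.g. foo_bar.
snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

def abbrev_matches(text, names):
    # Map each snake_case name to its initials: foo_bar -> f_b.
    shortened = {
        "_".join(part[0] for part in word.split("_")): word
        for word in names
        if snake_case_re.match(word)
    }
    n = len(text)
    return [full for abbr, full in shortened.items() if abbr[:n] == text]

print(abbrev_matches("f_b", ["foo_bar", "fizz", "foo_baz_qux"]))
```

``fizz`` is excluded up front because it contains no underscore; ``foo_baz_qux`` matches because its abbreviation ``f_b_q`` starts with ``f_b``.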
1117 1119 def attr_matches(self, text):
1118 1120 """Compute matches when text contains a dot.
1119 1121
1120 1122 Assuming the text is of the form NAME.NAME....[NAME], and is
1121 1123 evaluatable in self.namespace or self.global_namespace, it will be
1122 1124 evaluated and its attributes (as revealed by dir()) are used as
1123 1125 possible completions. (For class instances, class members are
1124 1126 also considered.)
1125 1127
1126 1128 WARNING: this can still invoke arbitrary C code, if an object
1127 1129 with a __getattr__ hook is evaluated.
1128 1130
1129 1131 """
1130 1132 return self._attr_matches(text)[0]
1131 1133
1132 1134 def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
1133 1135 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1134 1136 if not m2:
1135 1137 return [], ""
1136 1138 expr, attr = m2.group(1, 2)
1137 1139
1138 1140 obj = self._evaluate_expr(expr)
1139 1141
1140 1142 if obj is not_found:
1141 1143 return [], ""
1142 1144
1143 1145 if self.limit_to__all__ and hasattr(obj, '__all__'):
1144 1146 words = get__all__entries(obj)
1145 1147 else:
1146 1148 words = dir2(obj)
1147 1149
1148 1150 try:
1149 1151 words = generics.complete_object(obj, words)
1150 1152 except TryNext:
1151 1153 pass
1152 1154 except AssertionError:
1153 1155 raise
1154 1156 except Exception:
1155 1157 # Silence errors from completion function
1156 1158 pass
1157 1159 # Build match list to return
1158 1160 n = len(attr)
1159 1161
1160 1162 # Note: ideally we would just return words here and the prefix
1161 1163 # reconciliator would know that we intend to append to rather than
1162 1164 # replace the input text; this requires refactoring to return range
1163 1165 # which ought to be replaced (as does jedi).
1164 1166 if include_prefix:
1165 1167 tokens = _parse_tokens(expr)
1166 1168 rev_tokens = reversed(tokens)
1167 1169 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1168 1170 name_turn = True
1169 1171
1170 1172 parts = []
1171 1173 for token in rev_tokens:
1172 1174 if token.type in skip_over:
1173 1175 continue
1174 1176 if token.type == tokenize.NAME and name_turn:
1175 1177 parts.append(token.string)
1176 1178 name_turn = False
1177 1179 elif (
1178 1180 token.type == tokenize.OP and token.string == "." and not name_turn
1179 1181 ):
1180 1182 parts.append(token.string)
1181 1183 name_turn = True
1182 1184 else:
1183 1185 # short-circuit if not empty nor name token
1184 1186 break
1185 1187
1186 1188 prefix_after_space = "".join(reversed(parts))
1187 1189 else:
1188 1190 prefix_after_space = ""
1189 1191
1190 1192 return (
1191 1193 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1192 1194 "." + attr,
1193 1195 )
1194 1196
1195 1197 def _evaluate_expr(self, expr):
1196 1198 obj = not_found
1197 1199 done = False
1198 1200 while not done and expr:
1199 1201 try:
1200 1202 obj = guarded_eval(
1201 1203 expr,
1202 1204 EvaluationContext(
1203 1205 globals=self.global_namespace,
1204 1206 locals=self.namespace,
1205 1207 evaluation=self.evaluation,
1206 1208 ),
1207 1209 )
1208 1210 done = True
1209 1211 except Exception as e:
1210 1212 if self.debug:
1211 1213 print("Evaluation exception", e)
1212 1214 # trim the expression to remove any invalid prefix
1213 1215 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1214 1216 # where parenthesis is not closed.
1215 1217 # TODO: make this faster by reusing parts of the computation?
1216 1218 expr = expr[1:]
1217 1219 return obj
1218 1220
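The trimming loop in ``_evaluate_expr`` can be pictured with plain ``eval`` standing in for ``guarded_eval`` (a simplification: the real code also threads namespaces and the evaluation policy through ``EvaluationContext``):

```python
def evaluate_with_trim(expr, namespace):
    # Drop one leading character at a time until the expression parses,
    # e.g. '(d' (unclosed parenthesis) eventually evaluates as 'd'.
    while expr:
        try:
            return eval(expr, {"__builtins__": {}}, namespace)
        except Exception:
            expr = expr[1:]
    return None

print(evaluate_with_trim("(d", {"d": {"a": 1}}))  # -> {'a': 1}
```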
1219 1221 def get__all__entries(obj):
1220 1222 """returns the strings in the __all__ attribute"""
1221 1223 try:
1222 1224 words = getattr(obj, '__all__')
1223 1225 except Exception:
1224 1226 return []
1225 1227
1226 1228 return [w for w in words if isinstance(w, str)]
1227 1229
1228 1230
1229 1231 class _DictKeyState(enum.Flag):
1230 1232 """Represent state of the key match in context of other possible matches.
1231 1233
1232 1234 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1233 1235 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1234 1236 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1235 1237 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | IN_TUPLE}`
1236 1238 """
1237 1239
1238 1240 BASELINE = 0
1239 1241 END_OF_ITEM = enum.auto()
1240 1242 END_OF_TUPLE = enum.auto()
1241 1243 IN_TUPLE = enum.auto()
1242 1244
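Since ``_DictKeyState`` is an ``enum.Flag``, states contributed by several candidate keys combine with ``|=``. A quick, runnable demonstration of the ``d4`` case (the class is re-declared here so the example is self-contained):

```python
import enum

class DictKeyState(enum.Flag):
    BASELINE = 0
    END_OF_ITEM = enum.auto()
    END_OF_TUPLE = enum.auto()
    IN_TUPLE = enum.auto()

# d4 = {'a': 1, ('a', 'b'): 2}: the fragment 'a' is both a complete
# item on its own and the first element of a longer tuple key.
state = DictKeyState.BASELINE
state |= DictKeyState.END_OF_ITEM   # from the plain key 'a'
state |= DictKeyState.IN_TUPLE      # from the tuple key ('a', 'b')

print(DictKeyState.END_OF_ITEM in state)   # True
print(DictKeyState.END_OF_TUPLE in state)  # False
```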
1243 1245
1244 1246 def _parse_tokens(c):
1245 1247 """Parse tokens even if there is an error."""
1246 1248 tokens = []
1247 1249 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1248 1250 while True:
1249 1251 try:
1250 1252 tokens.append(next(token_generator))
1251 1253 except tokenize.TokenError:
1252 1254 return tokens
1253 1255 except StopIteration:
1254 1256 return tokens
1255 1257
1256 1258
1257 1259 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1258 1260 """Match any valid Python numeric literal in a prefix of dictionary keys.
1259 1261
1260 1262 References:
1261 1263 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1262 1264 - https://docs.python.org/3/library/tokenize.html
1263 1265 """
1264 1266 if prefix[-1].isspace():
1265 1267 # if user typed a space we do not have anything to complete
1266 1268 # even if there was a valid number token before
1267 1269 return None
1268 1270 tokens = _parse_tokens(prefix)
1269 1271 rev_tokens = reversed(tokens)
1270 1272 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1271 1273 number = None
1272 1274 for token in rev_tokens:
1273 1275 if token.type in skip_over:
1274 1276 continue
1275 1277 if number is None:
1276 1278 if token.type == tokenize.NUMBER:
1277 1279 number = token.string
1278 1280 continue
1279 1281 else:
1280 1282 # we did not match a number
1281 1283 return None
1282 1284 if token.type == tokenize.OP:
1283 1285 if token.string == ",":
1284 1286 break
1285 1287 if token.string in {"+", "-"}:
1286 1288 number = token.string + number
1287 1289 else:
1288 1290 return None
1289 1291 return number
1290 1292
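A condensed, runnable sketch of the two helpers above (``parse_tokens`` / ``match_trailing_number`` are illustrative names): tokenize the prefix, walk the tokens backwards, and accept a trailing number with an optional sign:

```python
import tokenize

def parse_tokens(code):
    # Collect tokens, swallowing errors from incomplete input.
    tokens = []
    gen = tokenize.generate_tokens(iter(code.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(gen))
        except (tokenize.TokenError, StopIteration):
            return tokens

def match_trailing_number(prefix):
    if prefix[-1].isspace():
        return None
    number = None
    for tok in reversed(parse_tokens(prefix)):
        if tok.type in {tokenize.ENDMARKER, tokenize.NEWLINE}:
            continue
        if number is None:
            if tok.type != tokenize.NUMBER:
                return None
            number = tok.string
            continue
        if tok.type == tokenize.OP:
            if tok.string == ",":
                break
            if tok.string in {"+", "-"}:
                number = tok.string + number
            else:
                return None
    return number

print(match_trailing_number("-12"))  # -12
print(match_trailing_number("abc"))  # None
```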
1291 1293
1292 1294 _INT_FORMATS = {
1293 1295 "0b": bin,
1294 1296 "0o": oct,
1295 1297 "0x": hex,
1296 1298 }
1297 1299
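These formatters let the completer render integer keys in whatever radix the user started typing. Roughly (``render_int_key`` is an illustrative name):

```python
_INT_FORMATS = {"0b": bin, "0o": oct, "0x": hex}

def render_int_key(key, typed_prefix):
    # If the user typed 0b/0o/0x, render the key in that notation;
    # otherwise fall back to the decimal representation.
    base = typed_prefix[:2].lower()
    return _INT_FORMATS.get(base, str)(key)

print(render_int_key(255, "0x"))  # 0xff
print(render_int_key(255, "2"))   # 255
```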
1298 1300
1299 1301 def match_dict_keys(
1300 1302 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1301 1303 prefix: str,
1302 1304 delims: str,
1303 1305 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1304 1306 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1305 1307 """Used by dict_key_matches, matching the prefix to a list of keys
1306 1308
1307 1309 Parameters
1308 1310 ----------
1309 1311 keys
1310 1312 list of keys in dictionary currently being completed.
1311 1313 prefix
1312 1314 Part of the text already typed by the user. E.g. `mydict[b'fo`
1313 1315 delims
1314 1316 String of delimiters to consider when finding the current key.
1315 1317 extra_prefix : optional
1316 1318 Part of the text already typed in multi-key index cases. E.g. for
1317 1319 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1318 1320
1319 1321 Returns
1320 1322 -------
1321 1323 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1322 1324 ``quote`` being the quote that needs to be used to close the current string,
1323 1325 ``token_start`` the position where the replacement should start occurring,
1324 1326 and ``matched`` a dictionary with replacement/completion strings as keys and
1325 1327 values indicating the state of each match.
1326 1328 """
1327 1329 prefix_tuple = extra_prefix if extra_prefix else ()
1328 1330
1329 1331 prefix_tuple_size = sum(
1330 1332 [
1331 1333 # for pandas, do not count slices as taking space
1332 1334 not isinstance(k, slice)
1333 1335 for k in prefix_tuple
1334 1336 ]
1335 1337 )
1336 1338 text_serializable_types = (str, bytes, int, float, slice)
1337 1339
1338 1340 def filter_prefix_tuple(key):
1339 1341 # Reject too short keys
1340 1342 if len(key) <= prefix_tuple_size:
1341 1343 return False
1342 1344 # Reject keys which cannot be serialised to text
1343 1345 for k in key:
1344 1346 if not isinstance(k, text_serializable_types):
1345 1347 return False
1346 1348 # Reject keys that do not match the prefix
1347 1349 for k, pt in zip(key, prefix_tuple):
1348 1350 if k != pt and not isinstance(pt, slice):
1349 1351 return False
1350 1352 # All checks passed!
1351 1353 return True
1352 1354
1353 1355 filtered_key_is_final: Dict[Union[str, bytes, int, float], _DictKeyState] = (
1354 1356 defaultdict(lambda: _DictKeyState.BASELINE)
1355 1357 )
1356 1358
1357 1359 for k in keys:
1358 1360 # If at least one of the matches is not final, mark as undetermined.
1359 1361 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1360 1362 # `111` appears final on first match but is not final on the second.
1361 1363
1362 1364 if isinstance(k, tuple):
1363 1365 if filter_prefix_tuple(k):
1364 1366 key_fragment = k[prefix_tuple_size]
1365 1367 filtered_key_is_final[key_fragment] |= (
1366 1368 _DictKeyState.END_OF_TUPLE
1367 1369 if len(k) == prefix_tuple_size + 1
1368 1370 else _DictKeyState.IN_TUPLE
1369 1371 )
1370 1372 elif prefix_tuple_size > 0:
1371 1373 # we are completing a tuple but this key is not a tuple,
1372 1374 # so we should ignore it
1373 1375 pass
1374 1376 else:
1375 1377 if isinstance(k, text_serializable_types):
1376 1378 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1377 1379
1378 1380 filtered_keys = filtered_key_is_final.keys()
1379 1381
1380 1382 if not prefix:
1381 1383 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1382 1384
1383 1385 quote_match = re.search("(?:\"|')", prefix)
1384 1386 is_user_prefix_numeric = False
1385 1387
1386 1388 if quote_match:
1387 1389 quote = quote_match.group()
1388 1390 valid_prefix = prefix + quote
1389 1391 try:
1390 1392 prefix_str = literal_eval(valid_prefix)
1391 1393 except Exception:
1392 1394 return "", 0, {}
1393 1395 else:
1394 1396 # If it does not look like a string, let's assume
1395 1397 # we are dealing with a number or variable.
1396 1398 number_match = _match_number_in_dict_key_prefix(prefix)
1397 1399
1398 1400 # We do not want the key matcher to suggest variable names so we yield:
1399 1401 if number_match is None:
1400 1402 # The alternative would be to assume that the user forgot the quote
1401 1403 # and if the substring matches, suggest adding it at the start.
1402 1404 return "", 0, {}
1403 1405
1404 1406 prefix_str = number_match
1405 1407 is_user_prefix_numeric = True
1406 1408 quote = ""
1407 1409
1408 1410 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1409 1411 token_match = re.search(pattern, prefix, re.UNICODE)
1410 1412 assert token_match is not None # silence mypy
1411 1413 token_start = token_match.start()
1412 1414 token_prefix = token_match.group()
1413 1415
1414 1416 matched: Dict[str, _DictKeyState] = {}
1415 1417
1416 1418 str_key: Union[str, bytes]
1417 1419
1418 1420 for key in filtered_keys:
1419 1421 if isinstance(key, (int, float)):
1420 1422 # User typed a number but this key is not a number.
1421 1423 if not is_user_prefix_numeric:
1422 1424 continue
1423 1425 str_key = str(key)
1424 1426 if isinstance(key, int):
1425 1427 int_base = prefix_str[:2].lower()
1426 1428 # if user typed integer using binary/oct/hex notation:
1427 1429 if int_base in _INT_FORMATS:
1428 1430 int_format = _INT_FORMATS[int_base]
1429 1431 str_key = int_format(key)
1430 1432 else:
1431 1433 # User typed a string but this key is a number.
1432 1434 if is_user_prefix_numeric:
1433 1435 continue
1434 1436 str_key = key
1435 1437 try:
1436 1438 if not str_key.startswith(prefix_str):
1437 1439 continue
1438 1440 except (AttributeError, TypeError, UnicodeError) as e:
1439 1441 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1440 1442 continue
1441 1443
1442 1444 # reformat remainder of key to begin with prefix
1443 1445 rem = str_key[len(prefix_str) :]
1444 1446 # force repr wrapped in '
1445 1447 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1446 1448 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1447 1449 if quote == '"':
1448 1450 # The entered prefix is quoted with ",
1449 1451 # but the match is quoted with '.
1450 1452 # A contained " hence needs escaping for comparison:
1451 1453 rem_repr = rem_repr.replace('"', '\\"')
1452 1454
1453 1455 # then reinsert prefix from start of token
1454 1456 match = "%s%s" % (token_prefix, rem_repr)
1455 1457
1456 1458 matched[match] = filtered_key_is_final[key]
1457 1459 return quote, token_start, matched
1458 1460
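Much of the machinery above deals with preserving the quote style the user opened with. A heavily simplified, string-keys-only sketch of the core idea (``simple_key_matches`` is hypothetical and skips tuple keys, numbers, and quote escaping):

```python
import re
from ast import literal_eval

def simple_key_matches(keys, prefix):
    # Find the quote the user opened with, close it, and literal_eval
    # the result to recover the typed fragment.
    quote_match = re.search("[\"']", prefix)
    if not quote_match:
        return []
    quote = quote_match.group()
    try:
        typed = literal_eval(prefix + quote)
    except Exception:
        return []
    # Offer every string key extending the typed fragment.
    return [k for k in keys if isinstance(k, str) and k.startswith(typed)]

print(simple_key_matches(["foo", "foobar", "bar"], "'fo"))  # ['foo', 'foobar']
```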
1459 1461
1460 1462 def cursor_to_position(text:str, line:int, column:int)->int:
1461 1463 """
1462 1464 Convert the (line,column) position of the cursor in text to an offset in a
1463 1465 string.
1464 1466
1465 1467 Parameters
1466 1468 ----------
1467 1469 text : str
1468 1470 The text in which to calculate the cursor offset
1469 1471 line : int
1470 1472 Line of the cursor; 0-indexed
1471 1473 column : int
1472 1474 Column of the cursor 0-indexed
1473 1475
1474 1476 Returns
1475 1477 -------
1476 1478 Position of the cursor in ``text``, 0-indexed.
1477 1479
1478 1480 See Also
1479 1481 --------
1480 1482 position_to_cursor : reciprocal of this function
1481 1483
1482 1484 """
1483 1485 lines = text.split('\n')
1484 1486 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1485 1487
1486 1488 return sum(len(l) + 1 for l in lines[:line]) + column
1487 1489
1488 1490 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1489 1491 """
1490 1492 Convert the position of the cursor in text (0 indexed) to a line
1491 1493 number (0-indexed) and a column number (0-indexed) pair
1492 1494
1493 1495 Position should be a valid position in ``text``.
1494 1496
1495 1497 Parameters
1496 1498 ----------
1497 1499 text : str
1498 1500 The text in which to calculate the cursor offset
1499 1501 offset : int
1500 1502 Position of the cursor in ``text``, 0-indexed.
1501 1503
1502 1504 Returns
1503 1505 -------
1504 1506 (line, column) : (int, int)
1505 1507 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1506 1508
1507 1509 See Also
1508 1510 --------
1509 1511 cursor_to_position : reciprocal of this function
1510 1512
1511 1513 """
1512 1514
1513 1515 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1514 1516
1515 1517 before = text[:offset]
1516 1518 blines = before.split('\n') # ! splitlines trims trailing \n
1517 1519 line = before.count('\n')
1518 1520 col = len(blines[-1])
1519 1521 return line, col
1520 1522
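The two coordinate helpers above are inverses of each other; a self-contained round-trip (bodies condensed from the functions above):

```python
from typing import Tuple

def cursor_to_position(text: str, line: int, column: int) -> int:
    lines = text.split('\n')
    # Each line before the cursor contributes its length plus one '\n'.
    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text: str, offset: int) -> Tuple[int, int]:
    before = text[:offset]
    return before.count('\n'), len(before.split('\n')[-1])

text = "ab\ncde\nf"
offset = cursor_to_position(text, 1, 2)  # points at 'e'
print(offset)                            # 5
print(position_to_cursor(text, offset))  # (1, 2)
```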
1521 1523
1522 1524 def _safe_isinstance(obj, module, class_name, *attrs):
1523 1525 """Checks if obj is an instance of module.class_name if loaded
1524 1526 """
1525 1527 if module in sys.modules:
1526 1528 m = sys.modules[module]
1527 1529 for attr in [class_name, *attrs]:
1528 1530 m = getattr(m, attr)
1529 1531 return isinstance(obj, m)
1530 1532
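``_safe_isinstance`` only resolves the class when its module is already imported, so completion never triggers an import as a side effect. A runnable copy of the logic (``collections.abc.Mapping`` is chosen here purely for illustration):

```python
import sys
import collections.abc  # imported up front so the check below resolves

def safe_isinstance(obj, module, class_name, *attrs):
    # Resolve module.class_name (and nested attrs) only if the module
    # is already in sys.modules; otherwise implicitly return None.
    if module in sys.modules:
        m = sys.modules[module]
        for attr in [class_name, *attrs]:
            m = getattr(m, attr)
        return isinstance(obj, m)

print(safe_isinstance({}, "collections", "abc", "Mapping"))  # True
```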
1531 1533
1532 1534 @context_matcher()
1533 1535 def back_unicode_name_matcher(context: CompletionContext):
1534 1536 """Match Unicode characters back to Unicode name
1535 1537
1536 1538 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1537 1539 """
1538 1540 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1539 1541 return _convert_matcher_v1_result_to_v2(
1540 1542 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1541 1543 )
1542 1544
1543 1545
1544 1546 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1545 1547 """Match Unicode characters back to Unicode name
1546 1548
1547 1549 This does ``β˜ƒ`` -> ``\\snowman``
1548 1550
1549 1551 Note that snowman is not a valid Python 3 identifier character but will still be expanded;
1550 1552 it will not, however, be recombined back into the snowman character by the completion machinery.
1551 1553
1552 1554 Nor will this back-complete standard escape sequences like \\n, \\b ...
1553 1555
1554 1556 .. deprecated:: 8.6
1555 1557 You can use :meth:`back_unicode_name_matcher` instead.
1556 1558
1557 1559 Returns
1558 1560 =======
1559 1561
1560 1562 Return a tuple with two elements:
1561 1563
1562 1564 - The Unicode character that was matched (preceded with a backslash), or
1563 1565 empty string,
1564 1566 - a sequence (of length 1) containing the name of the matched Unicode
1565 1567 character, preceded by a backslash, or empty if no match.
1566 1568 """
1567 1569 if len(text)<2:
1568 1570 return '', ()
1569 1571 maybe_slash = text[-2]
1570 1572 if maybe_slash != '\\':
1571 1573 return '', ()
1572 1574
1573 1575 char = text[-1]
1574 1576 # no expand on quote for completion in strings.
1575 1577 # nor backcomplete standard ascii keys
1576 1578 if char in string.ascii_letters or char in ('"',"'"):
1577 1579 return '', ()
1578 1580 try :
1579 1581 unic = unicodedata.name(char)
1580 1582 return '\\'+char,('\\'+unic,)
1581 1583 except ValueError: # unicodedata.name raises ValueError for unnamed characters
1582 1584 pass
1583 1585 return '', ()
1584 1586
1585 1587
1586 1588 @context_matcher()
1587 1589 def back_latex_name_matcher(context: CompletionContext):
1588 1590 """Match latex characters back to unicode name
1589 1591
1590 1592 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1591 1593 """
1592 1594 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1593 1595 return _convert_matcher_v1_result_to_v2(
1594 1596 matches, type="latex", fragment=fragment, suppress_if_matches=True
1595 1597 )
1596 1598
1597 1599
1598 1600 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1599 1601 """Match latex characters back to unicode name
1600 1602
1601 1603 This does ``\\β„΅`` -> ``\\aleph``
1602 1604
1603 1605 .. deprecated:: 8.6
1604 1606 You can use :meth:`back_latex_name_matcher` instead.
1605 1607 """
1606 1608 if len(text)<2:
1607 1609 return '', ()
1608 1610 maybe_slash = text[-2]
1609 1611 if maybe_slash != '\\':
1610 1612 return '', ()
1611 1613
1612 1614
1613 1615 char = text[-1]
1614 1616 # no expand on quote for completion in strings.
1615 1617 # nor backcomplete standard ascii keys
1616 1618 if char in string.ascii_letters or char in ('"',"'"):
1617 1619 return '', ()
1618 1620 try :
1619 1621 latex = reverse_latex_symbol[char]
1620 1622 # the '\\' replaces the backslash as well
1621 1623 return '\\'+char,[latex]
1622 1624 except KeyError:
1623 1625 pass
1624 1626 return '', ()
1625 1627
1626 1628
1627 1629 def _formatparamchildren(parameter) -> str:
1628 1630 """
1629 1631 Get parameter name and value from Jedi Private API
1630 1632
1631 1633 Jedi does not expose a simple way to get `param=value` from its API.
1632 1634
1633 1635 Parameters
1634 1636 ----------
1635 1637 parameter
1636 1638 Jedi's function `Param`
1637 1639
1638 1640 Returns
1639 1641 -------
1640 1642 A string like 'a', 'b=1', '*args', '**kwargs'
1641 1643
1642 1644 """
1643 1645 description = parameter.description
1644 1646 if not description.startswith('param '):
1645 1647 raise ValueError('Jedi function parameter description has changed format. '
1646 1648 'Expected "param ...", found %r.' % description)
1647 1649 return description[6:]
1648 1650
1649 1651 def _make_signature(completion)-> str:
1650 1652 """
1651 1653 Make the signature from a jedi completion
1652 1654
1653 1655 Parameters
1654 1656 ----------
1655 1657 completion : jedi.Completion
1656 1658 a Jedi completion object; it need not complete to a function type
1657 1659
1658 1660 Returns
1659 1661 -------
1660 1662 a string consisting of the function signature, with the parentheses but
1661 1663 without the function name, for example:
1662 1664 `(a, *args, b=1, **kwargs)`
1663 1665
1664 1666 """
1665 1667
1666 1668 # it looks like this might work on jedi 0.17
1667 1669 if hasattr(completion, 'get_signatures'):
1668 1670 signatures = completion.get_signatures()
1669 1671 if not signatures:
1670 1672 return '(?)'
1671 1673
1672 1674 c0 = completion.get_signatures()[0]
1673 1675 return '('+c0.to_string().split('(', maxsplit=1)[1]
1674 1676
1675 1677 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1676 1678 for p in signature.defined_names()) if f])
1677 1679
1678 1680
1679 1681 _CompleteResult = Dict[str, MatcherResult]
1680 1682
1681 1683
1682 1684 DICT_MATCHER_REGEX = re.compile(
1683 1685 r"""(?x)
1684 1686 ( # match dict-referring - or any get item object - expression
1685 1687 .+
1686 1688 )
1687 1689 \[ # open bracket
1688 1690 \s* # and optional whitespace
1689 1691 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1690 1692 # and slices
1691 1693 ((?:(?:
1692 1694 (?: # closed string
1693 1695 [uUbB]? # string prefix (r not handled)
1694 1696 (?:
1695 1697 '(?:[^']|(?<!\\)\\')*'
1696 1698 |
1697 1699 "(?:[^"]|(?<!\\)\\")*"
1698 1700 )
1699 1701 )
1700 1702 |
1701 1703 # capture integers and slices
1702 1704 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1703 1705 |
1704 1706 # integer in bin/hex/oct notation
1705 1707 0[bBxXoO]_?(?:\w|\d)+
1706 1708 )
1707 1709 \s*,\s*
1708 1710 )*)
1709 1711 ((?:
1710 1712 (?: # unclosed string
1711 1713 [uUbB]? # string prefix (r not handled)
1712 1714 (?:
1713 1715 '(?:[^']|(?<!\\)\\')*
1714 1716 |
1715 1717 "(?:[^"]|(?<!\\)\\")*
1716 1718 )
1717 1719 )
1718 1720 |
1719 1721 # unfinished integer
1720 1722 (?:[-+]?\d+)
1721 1723 |
1722 1724 # integer in bin/hex/oct notation
1723 1725 0[bBxXoO]_?(?:\w|\d)+
1724 1726 )
1725 1727 )?
1726 1728 $
1727 1729 """
1728 1730 )
1729 1731
1730 1732
1731 1733 def _convert_matcher_v1_result_to_v2(
1732 1734 matches: Sequence[str],
1733 1735 type: str,
1734 1736 fragment: Optional[str] = None,
1735 1737 suppress_if_matches: bool = False,
1736 1738 ) -> SimpleMatcherResult:
1737 1739 """Utility to help with transition"""
1738 1740 result = {
1739 1741 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1740 1742 "suppress": (True if matches else False) if suppress_if_matches else False,
1741 1743 }
1742 1744 if fragment is not None:
1743 1745 result["matched_fragment"] = fragment
1744 1746 return cast(SimpleMatcherResult, result)
1745 1747
1746 1748
1747 1749 class IPCompleter(Completer):
1748 1750 """Extension of the completer class with IPython-specific features"""
1749 1751
1750 1752 @observe('greedy')
1751 1753 def _greedy_changed(self, change):
1752 1754 """update the splitter and readline delims when greedy is changed"""
1753 1755 if change["new"]:
1754 1756 self.evaluation = "unsafe"
1755 1757 self.auto_close_dict_keys = True
1756 1758 self.splitter.delims = GREEDY_DELIMS
1757 1759 else:
1758 1760 self.evaluation = "limited"
1759 1761 self.auto_close_dict_keys = False
1760 1762 self.splitter.delims = DELIMS
1761 1763
1762 1764 dict_keys_only = Bool(
1763 1765 False,
1764 1766 help="""
1765 1767 Whether to show dict key matches only.
1766 1768
1767 1769 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1768 1770 """,
1769 1771 )
1770 1772
1771 1773 suppress_competing_matchers = UnionTrait(
1772 1774 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1773 1775 default_value=None,
1774 1776 help="""
1775 1777 Whether to suppress completions from other *Matchers*.
1776 1778
1777 1779 When set to ``None`` (default) the matchers will attempt to auto-detect
1778 1780 whether suppression of other matchers is desirable. For example, at
1779 1781 the beginning of a line followed by `%` we expect a magic completion
1780 1782 to be the only applicable option, and after ``my_dict['`` we usually
1781 1783 expect a completion with an existing dictionary key.
1782 1784
1783 1785 If you want to disable this heuristic and see completions from all matchers,
1784 1786 set ``IPCompleter.suppress_competing_matchers = False``.
1785 1787 To disable the heuristic for specific matchers provide a dictionary mapping:
1786 1788 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1787 1789
1788 1790 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1789 1791 completions to the set of matchers with the highest priority;
1790 1792 this is equivalent to ``IPCompleter.merge_completions = False`` and
1791 1793 can be beneficial for performance, but will sometimes omit relevant
1792 1794 candidates from matchers further down the priority list.
1793 1795 """,
1794 1796 ).tag(config=True)
1795 1797
1796 1798 merge_completions = Bool(
1797 1799 True,
1798 1800 help="""Whether to merge completion results into a single list
1799 1801
1800 1802 If False, only the completion results from the first non-empty
1801 1803 completer will be returned.
1802 1804
1803 1805 As of version 8.6.0, setting the value to ``False`` is an alias for:
1804 1806 ``IPCompleter.suppress_competing_matchers = True``.
1805 1807 """,
1806 1808 ).tag(config=True)
1807 1809
1808 1810 disable_matchers = ListTrait(
1809 1811 Unicode(),
1810 1812 help="""List of matchers to disable.
1811 1813
1812 1814 The list should contain matcher identifiers (see :any:`completion_matcher`).
1813 1815 """,
1814 1816 ).tag(config=True)
1815 1817
1816 1818 omit__names = Enum(
1817 1819 (0, 1, 2),
1818 1820 default_value=2,
1819 1821 help="""Instruct the completer to omit private method names
1820 1822
1821 1823 Specifically, when completing on ``object.<tab>``.
1822 1824
1823 1825 When 2 [default]: all names that start with '_' will be excluded.
1824 1826
1825 1827 When 1: all 'magic' names (``__foo__``) will be excluded.
1826 1828
1827 1829 When 0: nothing will be excluded.
1828 1830 """
1829 1831 ).tag(config=True)
1830 1832 limit_to__all__ = Bool(False,
1831 1833 help="""
1832 1834 DEPRECATED as of version 5.0.
1833 1835
1834 1836 Instruct the completer to use __all__ for the completion
1835 1837
1836 1838 Specifically, when completing on ``object.<tab>``.
1837 1839
1838 1840 When True: only those names in obj.__all__ will be included.
1839 1841
1840 1842 When False [default]: the __all__ attribute is ignored
1841 1843 """,
1842 1844 ).tag(config=True)
1843 1845
1844 1846 profile_completions = Bool(
1845 1847 default_value=False,
1846 1848 help="If True, emit profiling data for the completion subsystem using cProfile."
1847 1849 ).tag(config=True)
1848 1850
1849 1851 profiler_output_dir = Unicode(
1850 1852 default_value=".completion_profiles",
1851 1853 help="Template for path at which to output profile data for completions."
1852 1854 ).tag(config=True)
1853 1855
1854 1856 @observe('limit_to__all__')
1855 1857 def _limit_to_all_changed(self, change):
1856 1858 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1857 1859 'value has been deprecated since IPython 5.0, will be made to have '
1858 1860 'no effect and then removed in a future version of IPython.',
1859 1861 UserWarning)
1860 1862
1861 1863 def __init__(
1862 1864 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1863 1865 ):
1864 1866 """IPCompleter() -> completer
1865 1867
1866 1868 Return a completer object.
1867 1869
1868 1870 Parameters
1869 1871 ----------
1870 1872 shell
1871 1873 a pointer to the ipython shell itself. This is needed
1872 1874 because this completer knows about magic functions, and those can
1873 1875 only be accessed via the ipython instance.
1874 1876 namespace : dict, optional
1875 1877 an optional dict where completions are performed.
1876 1878 global_namespace : dict, optional
1877 1879 secondary optional dict for completions, to
1878 1880 handle cases (such as IPython embedded inside functions) where
1879 1881 both Python scopes are visible.
1880 1882 config : Config
1881 1883 traitlet's config object
1882 1884 **kwargs
1883 1885 passed to super class unmodified.
1884 1886 """
1885 1887
1886 1888 self.magic_escape = ESC_MAGIC
1887 1889 self.splitter = CompletionSplitter()
1888 1890
1889 1891 # _greedy_changed() depends on splitter and readline being defined:
1890 1892 super().__init__(
1891 1893 namespace=namespace,
1892 1894 global_namespace=global_namespace,
1893 1895 config=config,
1894 1896 **kwargs,
1895 1897 )
1896 1898
1897 1899 # List where completion matches will be stored
1898 1900 self.matches = []
1899 1901 self.shell = shell
1900 1902 # Regexp to split filenames with spaces in them
1901 1903 self.space_name_re = re.compile(r'([^\\] )')
1902 1904 # Hold a local ref. to glob.glob for speed
1903 1905 self.glob = glob.glob
1904 1906
1905 1907 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1906 1908 # buffers, to avoid completion problems.
1907 1909 term = os.environ.get('TERM','xterm')
1908 1910 self.dumb_terminal = term in ['dumb','emacs']
1909 1911
1910 1912 # Special handling of backslashes needed in win32 platforms
1911 1913 if sys.platform == "win32":
1912 1914 self.clean_glob = self._clean_glob_win32
1913 1915 else:
1914 1916 self.clean_glob = self._clean_glob
1915 1917
1916 1918 #regexp to parse docstring for function signature
1917 1919 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1918 1920 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1919 1921 #use this if positional argument name is also needed
1920 1922 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1921 1923
1922 1924 self.magic_arg_matchers = [
1923 1925 self.magic_config_matcher,
1924 1926 self.magic_color_matcher,
1925 1927 ]
1926 1928
1927 1929 # This is set externally by InteractiveShell
1928 1930 self.custom_completers = None
1929 1931
1930 1932 # This is a list of names of unicode characters that can be completed
1931 1933 # into their corresponding unicode value. The list is large, so we
1932 1934 # lazily initialize it on first use. Consuming code should access this
1933 1935 # attribute through the `unicode_names` property.
1934 1936 self._unicode_names = None
1935 1937
1936 1938 self._backslash_combining_matchers = [
1937 1939 self.latex_name_matcher,
1938 1940 self.unicode_name_matcher,
1939 1941 back_latex_name_matcher,
1940 1942 back_unicode_name_matcher,
1941 1943 self.fwd_unicode_matcher,
1942 1944 ]
1943 1945
1944 1946 if not self.backslash_combining_completions:
1945 1947 for matcher in self._backslash_combining_matchers:
1946 1948 self.disable_matchers.append(_get_matcher_id(matcher))
1947 1949
1948 1950 if not self.merge_completions:
1949 1951 self.suppress_competing_matchers = True
1950 1952
1951 1953 @property
1952 1954 def matchers(self) -> List[Matcher]:
1953 1955 """All active matcher routines for completion"""
1954 1956 if self.dict_keys_only:
1955 1957 return [self.dict_key_matcher]
1956 1958
1957 1959 if self.use_jedi:
1958 1960 return [
1959 1961 *self.custom_matchers,
1960 1962 *self._backslash_combining_matchers,
1961 1963 *self.magic_arg_matchers,
1962 1964 self.custom_completer_matcher,
1963 1965 self.magic_matcher,
1964 1966 self._jedi_matcher,
1965 1967 self.dict_key_matcher,
1966 1968 self.file_matcher,
1967 1969 ]
1968 1970 else:
1969 1971 return [
1970 1972 *self.custom_matchers,
1971 1973 *self._backslash_combining_matchers,
1972 1974 *self.magic_arg_matchers,
1973 1975 self.custom_completer_matcher,
1974 1976 self.dict_key_matcher,
1975 1977 self.magic_matcher,
1976 1978 self.python_matcher,
1977 1979 self.file_matcher,
1978 1980 self.python_func_kw_matcher,
1979 1981 ]
1980 1982
1981 1983 def all_completions(self, text:str) -> List[str]:
1982 1984 """
1983 1985 Wrapper around the completion methods for the benefit of emacs.
1984 1986 """
1985 1987 prefix = text.rpartition('.')[0]
1986 1988 with provisionalcompleter():
1987 1989 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1988 1990 for c in self.completions(text, len(text))]
1989 1991
1991 1993
1992 1994 def _clean_glob(self, text:str):
1993 1995 return self.glob("%s*" % text)
1994 1996
1995 1997 def _clean_glob_win32(self, text:str):
1996 1998 return [f.replace("\\","/")
1997 1999 for f in self.glob("%s*" % text)]
1998 2000
1999 2001 @context_matcher()
2000 2002 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2001 2003 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2002 2004 matches = self.file_matches(context.token)
2003 2005 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2004 2006 # starts with `/home/`, `C:\`, etc)
2005 2007 return _convert_matcher_v1_result_to_v2(matches, type="path")
2006 2008
2007 2009 def file_matches(self, text: str) -> List[str]:
2008 2010 """Match filenames, expanding ~USER type strings.
2009 2011
2010 2012 Most of the seemingly convoluted logic in this completer is an
2011 2013 attempt to handle filenames with spaces in them. And yet it's not
2012 2014 quite perfect, because Python's readline doesn't expose all of the
2013 2015 GNU readline details needed for this to be done correctly.
2014 2016
2015 2017 For a filename with a space in it, the printed completions will be
2016 2018 only the parts after what's already been typed (instead of the
2017 2019 full completions, as is normally done). I don't think with the
2018 2020 current (as of Python 2.3) Python readline it's possible to do
2019 2021 better.
2020 2022
2021 2023 .. deprecated:: 8.6
2022 2024 You can use :meth:`file_matcher` instead.
2023 2025 """
2024 2026
2025 2027 # chars that require escaping with backslash - i.e. chars
2026 2028 # that readline treats incorrectly as delimiters, but we
2027 2029 # don't want to treat as delimiters in filename matching
2028 2030 # when escaped with backslash
2029 2031 if text.startswith('!'):
2030 2032 text = text[1:]
2031 2033 text_prefix = u'!'
2032 2034 else:
2033 2035 text_prefix = u''
2034 2036
2035 2037 text_until_cursor = self.text_until_cursor
2036 2038 # track strings with open quotes
2037 2039 open_quotes = has_open_quotes(text_until_cursor)
2038 2040
2039 2041 if '(' in text_until_cursor or '[' in text_until_cursor:
2040 2042 lsplit = text
2041 2043 else:
2042 2044 try:
2043 2045 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2044 2046 lsplit = arg_split(text_until_cursor)[-1]
2045 2047 except ValueError:
2046 2048 # typically an unmatched ", or backslash without escaped char.
2047 2049 if open_quotes:
2048 2050 lsplit = text_until_cursor.split(open_quotes)[-1]
2049 2051 else:
2050 2052 return []
2051 2053 except IndexError:
2052 2054 # tab pressed on empty line
2053 2055 lsplit = ""
2054 2056
2055 2057 if not open_quotes and lsplit != protect_filename(lsplit):
2056 2058 # if protectables are found, do matching on the whole escaped name
2057 2059 has_protectables = True
2058 2060 text0,text = text,lsplit
2059 2061 else:
2060 2062 has_protectables = False
2061 2063 text = os.path.expanduser(text)
2062 2064
2063 2065 if text == "":
2064 2066 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2065 2067
2066 2068 # Compute the matches from the filesystem
2067 2069 if sys.platform == 'win32':
2068 2070 m0 = self.clean_glob(text)
2069 2071 else:
2070 2072 m0 = self.clean_glob(text.replace('\\', ''))
2071 2073
2072 2074 if has_protectables:
2073 2075 # If we had protectables, we need to revert our changes to the
2074 2076 # beginning of filename so that we don't double-write the part
2075 2077 # of the filename we have so far
2076 2078 len_lsplit = len(lsplit)
2077 2079 matches = [text_prefix + text0 +
2078 2080 protect_filename(f[len_lsplit:]) for f in m0]
2079 2081 else:
2080 2082 if open_quotes:
2081 2083 # if we have a string with an open quote, we don't need to
2082 2084 # protect the names beyond the quote (and we _shouldn't_, as
2083 2085 # it would cause bugs when the filesystem call is made).
2084 2086 matches = m0 if sys.platform == "win32" else\
2085 2087 [protect_filename(f, open_quotes) for f in m0]
2086 2088 else:
2087 2089 matches = [text_prefix +
2088 2090 protect_filename(f) for f in m0]
2089 2091
2090 2092 # Mark directories in input list by appending '/' to their names.
2091 2093 return [x+'/' if os.path.isdir(x) else x for x in matches]
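Two of the small operations `file_matches` leans on can be sketched in isolation. A minimal, hypothetical version of `protect_filename` (the real one lives elsewhere in this module and handles more protectable characters) plus the trailing directory-marking step:

```python
import os

PROTECTABLES = ' '  # assumption: only spaces; the real set is larger

def protect_filename(s: str) -> str:
    """Escape protectable characters with a backslash."""
    return "".join('\\' + c if c in PROTECTABLES else c for c in s)

def mark_dirs(matches):
    """Append '/' to directory names, as done at the end of file_matches."""
    return [x + '/' if os.path.isdir(x) else x for x in matches]
```

The `lsplit != protect_filename(lsplit)` check above is then simply asking "does this fragment contain anything that would need escaping?".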
2092 2094
2093 2095 @context_matcher()
2094 2096 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2095 2097 """Match magics."""
2096 2098 text = context.token
2097 2099 matches = self.magic_matches(text)
2098 2100 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2099 2101 is_magic_prefix = len(text) > 0 and text[0] == "%"
2100 2102 result["suppress"] = is_magic_prefix and bool(result["completions"])
2101 2103 return result
2102 2104
2103 2105 def magic_matches(self, text: str):
2104 2106 """Match magics.
2105 2107
2106 2108 .. deprecated:: 8.6
2107 2109 You can use :meth:`magic_matcher` instead.
2108 2110 """
2109 2111 # Get all shell magics now rather than statically, so magics loaded at
2110 2112 # runtime show up too.
2111 2113 lsm = self.shell.magics_manager.lsmagic()
2112 2114 line_magics = lsm['line']
2113 2115 cell_magics = lsm['cell']
2114 2116 pre = self.magic_escape
2115 2117 pre2 = pre+pre
2116 2118
2117 2119 explicit_magic = text.startswith(pre)
2118 2120
2119 2121 # Completion logic:
2120 2122 # - user gives %%: only do cell magics
2121 2123 # - user gives %: do both line and cell magics
2122 2124 # - no prefix: do both
2123 2125 # In other words, line magics are skipped if the user gives %% explicitly
2124 2126 #
2125 2127 # We also exclude magics that match any currently visible names:
2126 2128 # https://github.com/ipython/ipython/issues/4877, unless the user has
2127 2129 # typed a %:
2128 2130 # https://github.com/ipython/ipython/issues/10754
2129 2131 bare_text = text.lstrip(pre)
2130 2132 global_matches = self.global_matches(bare_text)
2131 2133 if not explicit_magic:
2132 2134 def matches(magic):
2133 2135 """
2134 2136 Filter magics, in particular remove magics that match
2135 2137 a name present in the global namespace.
2136 2138 """
2137 2139 return ( magic.startswith(bare_text) and
2138 2140 magic not in global_matches )
2139 2141 else:
2140 2142 def matches(magic):
2141 2143 return magic.startswith(bare_text)
2142 2144
2143 2145 comp = [ pre2+m for m in cell_magics if matches(m)]
2144 2146 if not text.startswith(pre2):
2145 2147 comp += [ pre+m for m in line_magics if matches(m)]
2146 2148
2147 2149 return comp
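The `%`/`%%` prefix logic above condenses into a standalone sketch (hypothetical helper, taking the magic name sets and visible global names as plain arguments):

```python
def match_magics(text, line_magics, cell_magics, global_names):
    pre, pre2 = "%", "%%"
    bare = text.lstrip(pre)
    explicit = text.startswith(pre)

    def ok(name):
        # without an explicit %, skip magics shadowed by visible names
        return name.startswith(bare) and (explicit or name not in global_names)

    comp = [pre2 + m for m in cell_magics if ok(m)]
    if not text.startswith(pre2):
        # line magics are skipped only when %% was given explicitly
        comp += [pre + m for m in line_magics if ok(m)]
    return comp
```

For example, `match_magics("ti", ["time", "timeit"], [], {"time"})` omits `%time` because a global named `time` shadows it, while `%ti` would offer both.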
2148 2150
2149 2151 @context_matcher()
2150 2152 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2151 2153 """Match class names and attributes for %config magic."""
2152 2154 # NOTE: uses `line_buffer` equivalent for compatibility
2153 2155 matches = self.magic_config_matches(context.line_with_cursor)
2154 2156 return _convert_matcher_v1_result_to_v2(matches, type="param")
2155 2157
2156 2158 def magic_config_matches(self, text: str) -> List[str]:
2157 2159 """Match class names and attributes for %config magic.
2158 2160
2159 2161 .. deprecated:: 8.6
2160 2162 You can use :meth:`magic_config_matcher` instead.
2161 2163 """
2162 2164 texts = text.strip().split()
2163 2165
2164 2166 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2165 2167 # get all configuration classes
2166 2168 classes = sorted(set([ c for c in self.shell.configurables
2167 2169 if c.__class__.class_traits(config=True)
2168 2170 ]), key=lambda x: x.__class__.__name__)
2169 2171 classnames = [ c.__class__.__name__ for c in classes ]
2170 2172
2171 2173 # return all classnames if config or %config is given
2172 2174 if len(texts) == 1:
2173 2175 return classnames
2174 2176
2175 2177 # match classname
2176 2178 classname_texts = texts[1].split('.')
2177 2179 classname = classname_texts[0]
2178 2180 classname_matches = [ c for c in classnames
2179 2181 if c.startswith(classname) ]
2180 2182
2181 2183 # return matched classes or the matched class with attributes
2182 2184 if texts[1].find('.') < 0:
2183 2185 return classname_matches
2184 2186 elif len(classname_matches) == 1 and \
2185 2187 classname_matches[0] == classname:
2186 2188 cls = classes[classnames.index(classname)].__class__
2187 2189 help = cls.class_get_help()
2188 2190 # strip leading '--' from cl-args:
2189 2191 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2190 2192 return [ attr.split('=')[0]
2191 2193 for attr in help.strip().splitlines()
2192 2194 if attr.startswith(texts[1]) ]
2193 2195 return []
2194 2196
2195 2197 @context_matcher()
2196 2198 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2197 2199 """Match color schemes for %colors magic."""
2198 2200 # NOTE: uses `line_buffer` equivalent for compatibility
2199 2201 matches = self.magic_color_matches(context.line_with_cursor)
2200 2202 return _convert_matcher_v1_result_to_v2(matches, type="param")
2201 2203
2202 2204 def magic_color_matches(self, text: str) -> List[str]:
2203 2205 """Match color schemes for %colors magic.
2204 2206
2205 2207 .. deprecated:: 8.6
2206 2208 You can use :meth:`magic_color_matcher` instead.
2207 2209 """
2208 2210 texts = text.split()
2209 2211 if text.endswith(' '):
2210 2212 # .split() strips off the trailing whitespace. Add '' back
2211 2213 # so that: '%colors ' -> ['%colors', '']
2212 2214 texts.append('')
2213 2215
2214 2216 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2215 2217 prefix = texts[1]
2216 2218 return [ color for color in InspectColors.keys()
2217 2219 if color.startswith(prefix) ]
2218 2220 return []
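The `%colors` argument matching reduces to a small function; a sketch with the scheme names passed in explicitly (hypothetical `color_matches` helper):

```python
def color_matches(text, schemes):
    texts = text.split()
    if text.endswith(' '):
        # .split() strips trailing whitespace; restore the empty token
        # so '%colors ' completes against every scheme
        texts.append('')
    if len(texts) == 2 and texts[0] in ('colors', '%colors'):
        return [c for c in schemes if c.startswith(texts[1])]
    return []
```

The empty-token trick is the reason a bare `%colors ` (with trailing space) lists all schemes instead of nothing.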
2219 2221
2220 2222 @context_matcher(identifier="IPCompleter.jedi_matcher")
2221 2223 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2222 2224 matches = self._jedi_matches(
2223 2225 cursor_column=context.cursor_position,
2224 2226 cursor_line=context.cursor_line,
2225 2227 text=context.full_text,
2226 2228 )
2227 2229 return {
2228 2230 "completions": matches,
2229 2231 # static analysis should not suppress other matchers
2230 2232 "suppress": False,
2231 2233 }
2232 2234
2233 2235 def _jedi_matches(
2234 2236 self, cursor_column: int, cursor_line: int, text: str
2235 2237 ) -> Iterator[_JediCompletionLike]:
2236 2238 """
2237 2239 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
2238 2240 cursor position.
2239 2241
2240 2242 Parameters
2241 2243 ----------
2242 2244 cursor_column : int
2243 2245 column position of the cursor in ``text``, 0-indexed.
2244 2246 cursor_line : int
2245 2247 line position of the cursor in ``text``, 0-indexed
2246 2248 text : str
2247 2249 text to complete
2248 2250
2249 2251 Notes
2250 2252 -----
2251 2253 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2252 2254 object containing a string with the Jedi debug information attached.
2253 2255
2254 2256 .. deprecated:: 8.6
2255 2257 You can use :meth:`_jedi_matcher` instead.
2256 2258 """
2257 2259 namespaces = [self.namespace]
2258 2260 if self.global_namespace is not None:
2259 2261 namespaces.append(self.global_namespace)
2260 2262
2261 2263 completion_filter = lambda x:x
2262 2264 offset = cursor_to_position(text, cursor_line, cursor_column)
2263 2265 # filter output if we are completing for object members
2264 2266 if offset:
2265 2267 pre = text[offset-1]
2266 2268 if pre == '.':
2267 2269 if self.omit__names == 2:
2268 2270 completion_filter = lambda c:not c.name.startswith('_')
2269 2271 elif self.omit__names == 1:
2270 2272 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2271 2273 elif self.omit__names == 0:
2272 2274 completion_filter = lambda x:x
2273 2275 else:
2274 2276 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2275 2277
2276 2278 interpreter = jedi.Interpreter(text[:offset], namespaces)
2277 2279 try_jedi = True
2278 2280
2279 2281 try:
2280 2282 # find the first token in the current tree -- if it is a ' or " then we are in a string
2281 2283 completing_string = False
2282 2284 try:
2283 2285 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2284 2286 except StopIteration:
2285 2287 pass
2286 2288 else:
2287 2289 # note the value may be ', ", or it may also be ''' or """, or
2288 2290 # in some cases, """what/you/typed..., but all of these are
2289 2291 # strings.
2290 2292 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2291 2293
2292 2294 # if we are in a string jedi is likely not the right candidate for
2293 2295 # now. Skip it.
2294 2296 try_jedi = not completing_string
2295 2297 except Exception as e:
2296 2298 # many things can go wrong; we are using a private API, just don't crash.
2297 2299 if self.debug:
2298 2300 print("Error detecting if completing a non-finished string:", e, '|')
2299 2301
2300 2302 if not try_jedi:
2301 2303 return iter([])
2302 2304 try:
2303 2305 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2304 2306 except Exception as e:
2305 2307 if self.debug:
2306 2308 return iter(
2307 2309 [
2308 2310 _FakeJediCompletion(
2309 2311 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2310 2312 % (e)
2311 2313 )
2312 2314 ]
2313 2315 )
2314 2316 else:
2315 2317 return iter([])
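`_jedi_matches` first converts the 0-indexed (line, column) pair into a flat character offset via `cursor_to_position`; a sketch of that conversion, assuming lines are separated by `'\n'`:

```python
def cursor_to_position(text: str, line: int, column: int) -> int:
    """Flat offset of a 0-indexed (line, column) cursor in text."""
    lines = text.split('\n')
    # each preceding line contributes its length plus the newline
    return sum(len(l) + 1 for l in lines[:line]) + column

# cursor just after 'os.pa' on the second line
offset = cursor_to_position("import os\nos.pa", 1, 5)
```

The offset is what makes `text[offset-1]` checks (such as the `'.'` attribute-completion test above) possible.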
2316 2318
2317 2319 @context_matcher()
2318 2320 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2319 2321 """Match attributes or global python names"""
2320 2322 text = context.line_with_cursor
2321 2323 if "." in text:
2322 2324 try:
2323 2325 matches, fragment = self._attr_matches(text, include_prefix=False)
2324 2326 if text.endswith(".") and self.omit__names:
2325 2327 if self.omit__names == 1:
2326 2328 # true if txt is _not_ a __ name, false otherwise:
2327 2329 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2328 2330 else:
2329 2331 # true if txt is _not_ a _ name, false otherwise:
2330 2332 no__name = (
2331 2333 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2332 2334 is None
2333 2335 )
2334 2336 matches = filter(no__name, matches)
2335 2337 return _convert_matcher_v1_result_to_v2(
2336 2338 matches, type="attribute", fragment=fragment
2337 2339 )
2338 2340 except NameError:
2339 2341 # catches <undefined attributes>.<tab>
2340 2342 matches = []
2341 2343 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2342 2344 else:
2343 2345 matches = self.global_matches(context.token)
2344 2346 # TODO: maybe distinguish between functions, modules and just "variables"
2345 2347 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2346 2348
2347 2349 @completion_matcher(api_version=1)
2348 2350 def python_matches(self, text: str) -> Iterable[str]:
2349 2351 """Match attributes or global python names.
2350 2352
2351 2353 .. deprecated:: 8.27
2352 2354 You can use :meth:`python_matcher` instead."""
2353 2355 if "." in text:
2354 2356 try:
2355 2357 matches = self.attr_matches(text)
2356 2358 if text.endswith('.') and self.omit__names:
2357 2359 if self.omit__names == 1:
2358 2360 # true if txt is _not_ a __ name, false otherwise:
2359 2361 no__name = (lambda txt:
2360 2362 re.match(r'.*\.__.*?__',txt) is None)
2361 2363 else:
2362 2364 # true if txt is _not_ a _ name, false otherwise:
2363 2365 no__name = (lambda txt:
2364 2366 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2365 2367 matches = filter(no__name, matches)
2366 2368 except NameError:
2367 2369 # catches <undefined attributes>.<tab>
2368 2370 matches = []
2369 2371 else:
2370 2372 matches = self.global_matches(text)
2371 2373 return matches
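The two `no__name` filters above differ only in which underscore names they reject; a standalone sketch (hypothetical `name_filter` helper):

```python
import re

def name_filter(omit_level: int):
    if omit_level == 1:
        # reject only dunder names like obj.__init__
        return lambda txt: re.match(r'.*\.__.*?__', txt) is None
    # omit_level == 2: reject anything starting with a single underscore
    return lambda txt: re.match(r'\._.*?', txt[txt.rindex('.'):]) is None

names = ["obj.x", "obj._private", "obj.__init__"]
```

Filtering `names` with level 2 keeps only `obj.x`, while level 1 additionally keeps `obj._private`, matching the `omit__names` trait's documented behaviour.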
2372 2374
2373 2375 def _default_arguments_from_docstring(self, doc):
2374 2376 """Parse the first line of docstring for call signature.
2375 2377
2376 2378 Docstring should be of the form 'min(iterable[, key=func])\n'.
2377 2379 It can also parse cython docstring of the form
2378 2380 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2379 2381 """
2380 2382 if doc is None:
2381 2383 return []
2382 2384
2383 2385 # care only about the first line
2384 2386 line = doc.lstrip().splitlines()[0]
2385 2387
2386 2388 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2387 2389 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2388 2390 sig = self.docstring_sig_re.search(line)
2389 2391 if sig is None:
2390 2392 return []
2391 2393 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2392 2394 sig = sig.groups()[0].split(',')
2393 2395 ret = []
2394 2396 for s in sig:
2395 2397 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2396 2398 ret += self.docstring_kwd_re.findall(s)
2397 2399 return ret
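The docstring parsing can be exercised standalone with the same two regexes (copied from `__init__` above):

```python
import re

docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')

def kwargs_from_docstring(doc):
    if doc is None:
        return []
    # only the first line carries the signature
    line = doc.lstrip().splitlines()[0]
    sig = docstring_sig_re.search(line)
    if sig is None:
        return []
    ret = []
    # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
    for s in sig.groups()[0].split(','):
        ret += docstring_kwd_re.findall(s)
    return ret
```

Note how positional-only parameters (`iterable`, `self`) are dropped because the keyword regex requires an `=` sign, exactly as the commented-out alternative in `__init__` hints.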
2398 2400
2399 2401 def _default_arguments(self, obj):
2400 2402 """Return the list of default arguments of obj if it is callable,
2401 2403 or empty list otherwise."""
2402 2404 call_obj = obj
2403 2405 ret = []
2404 2406 if inspect.isbuiltin(obj):
2405 2407 pass
2406 2408 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2407 2409 if inspect.isclass(obj):
2408 2410 # for cython embedsignature=True the constructor docstring
2409 2411 # belongs to the object itself, not __init__
2410 2412 ret += self._default_arguments_from_docstring(
2411 2413 getattr(obj, '__doc__', ''))
2412 2414 # for classes, check for __init__,__new__
2413 2415 call_obj = (getattr(obj, '__init__', None) or
2414 2416 getattr(obj, '__new__', None))
2415 2417 # for all others, check if they are __call__able
2416 2418 elif hasattr(obj, '__call__'):
2417 2419 call_obj = obj.__call__
2418 2420 ret += self._default_arguments_from_docstring(
2419 2421 getattr(call_obj, '__doc__', ''))
2420 2422
2421 2423 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2422 2424 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2423 2425
2424 2426 try:
2425 2427 sig = inspect.signature(obj)
2426 2428 ret.extend(k for k, v in sig.parameters.items() if
2427 2429 v.kind in _keeps)
2428 2430 except ValueError:
2429 2431 pass
2430 2432
2431 2433 return list(set(ret))
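The `inspect.signature` part of the lookup above can be sketched on its own (hypothetical `default_arguments` helper; the real method also falls back to docstring parsing for builtins and cython objects):

```python
import inspect

def default_arguments(obj):
    keeps = (inspect.Parameter.KEYWORD_ONLY,
             inspect.Parameter.POSITIONAL_OR_KEYWORD)
    try:
        sig = inspect.signature(obj)
    except (ValueError, TypeError):
        # some builtins expose no signature
        return []
    return [name for name, p in sig.parameters.items() if p.kind in keeps]

def example(a, b=1, *args, c=2, **kwargs):
    pass
```

`*args` and `**kwargs` are excluded because their kinds (`VAR_POSITIONAL`, `VAR_KEYWORD`) are not in `keeps`; they are not completable as `name=` arguments.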
2432 2434
2433 2435 @context_matcher()
2434 2436 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2435 2437 """Match named parameters (kwargs) of the last open function."""
2436 2438 matches = self.python_func_kw_matches(context.token)
2437 2439 return _convert_matcher_v1_result_to_v2(matches, type="param")
2438 2440
2439 2441 def python_func_kw_matches(self, text):
2440 2442 """Match named parameters (kwargs) of the last open function.
2441 2443
2442 2444 .. deprecated:: 8.6
2443 2445 You can use :meth:`python_func_kw_matcher` instead.
2444 2446 """
2445 2447
2446 2448 if "." in text: # a parameter cannot be dotted
2447 2449 return []
2448 2450 try: regexp = self.__funcParamsRegex
2449 2451 except AttributeError:
2450 2452 regexp = self.__funcParamsRegex = re.compile(r'''
2451 2453 '.*?(?<!\\)' | # single quoted strings or
2452 2454 ".*?(?<!\\)" | # double quoted strings or
2453 2455 \w+ | # identifier
2454 2456 \S # other characters
2455 2457 ''', re.VERBOSE | re.DOTALL)
2456 2458 # 1. find the nearest identifier that comes before an unclosed
2457 2459 # parenthesis before the cursor
2458 2460 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2459 2461 tokens = regexp.findall(self.text_until_cursor)
2460 2462 iterTokens = reversed(tokens); openPar = 0
2461 2463
2462 2464 for token in iterTokens:
2463 2465 if token == ')':
2464 2466 openPar -= 1
2465 2467 elif token == '(':
2466 2468 openPar += 1
2467 2469 if openPar > 0:
2468 2470 # found the last unclosed parenthesis
2469 2471 break
2470 2472 else:
2471 2473 return []
2472 2474 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2473 2475 ids = []
2474 2476 isId = re.compile(r'\w+$').match
2475 2477
2476 2478 while True:
2477 2479 try:
2478 2480 ids.append(next(iterTokens))
2479 2481 if not isId(ids[-1]):
2480 2482 ids.pop(); break
2481 2483 if not next(iterTokens) == '.':
2482 2484 break
2483 2485 except StopIteration:
2484 2486 break
2485 2487
2486 2488 # Find all named arguments already assigned to, so as to avoid suggesting
2487 2489 # them again
2488 2490 usedNamedArgs = set()
2489 2491 par_level = -1
2490 2492 for token, next_token in zip(tokens, tokens[1:]):
2491 2493 if token == '(':
2492 2494 par_level += 1
2493 2495 elif token == ')':
2494 2496 par_level -= 1
2495 2497
2496 2498 if par_level != 0:
2497 2499 continue
2498 2500
2499 2501 if next_token != '=':
2500 2502 continue
2501 2503
2502 2504 usedNamedArgs.add(token)
2503 2505
2504 2506 argMatches = []
2505 2507 try:
2506 2508 callableObj = '.'.join(ids[::-1])
2507 2509 namedArgs = self._default_arguments(eval(callableObj,
2508 2510 self.namespace))
2509 2511
2510 2512 # Remove used named arguments from the list, no need to show twice
2511 2513 for namedArg in set(namedArgs) - usedNamedArgs:
2512 2514 if namedArg.startswith(text):
2513 2515 argMatches.append("%s=" %namedArg)
2514 2516 except:
2515 2517 pass
2516 2518
2517 2519 return argMatches
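Steps 1 and 2 of the method above (find the last unclosed parenthesis, then collect the dotted name before it) can be sketched as one standalone function using the same tokenizer regex:

```python
import re

TOKEN_RE = re.compile(r'''
    '.*?(?<!\\)' |   # single quoted strings
    ".*?(?<!\\)" |   # double quoted strings
    \w+ |            # identifiers
    \S               # any other non-space character
''', re.VERBOSE | re.DOTALL)

def last_open_call(text_until_cursor):
    """Name of the function owning the last unclosed '(' before the cursor."""
    tokens = TOKEN_RE.findall(text_until_cursor)
    it = reversed(tokens)
    open_par = 0
    for tok in it:
        if tok == ')':
            open_par -= 1
        elif tok == '(':
            open_par += 1
            if open_par > 0:
                break
    else:
        # every parenthesis is balanced: nothing to complete kwargs for
        return None
    # collect the dotted name right before the parenthesis
    is_id = re.compile(r'\w+$').match
    ids = []
    while True:
        try:
            ids.append(next(it))
            if not is_id(ids[-1]):
                ids.pop()
                break
            if next(it) != '.':
                break
        except StopIteration:
            break
    return '.'.join(reversed(ids)) or None
```

For `"foo (1+bar(x), pa"` this yields `"foo"`: the `bar(x)` call is closed, so its parenthesis cancels out during the reverse scan.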
2518 2520
2519 2521 @staticmethod
2520 2522 def _get_keys(obj: Any) -> List[Any]:
2521 2523 # Objects can define their own completions by defining an
2522 2524 # _ipython_key_completions_() method.
2523 2525 method = get_real_method(obj, '_ipython_key_completions_')
2524 2526 if method is not None:
2525 2527 return method()
2526 2528
2527 2529 # Special case some common in-memory dict-like types
2528 2530 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2529 2531 try:
2530 2532 return list(obj.keys())
2531 2533 except Exception:
2532 2534 return []
2533 2535 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2534 2536 try:
2535 2537 return list(obj.obj.keys())
2536 2538 except Exception:
2537 2539 return []
2538 2540 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2539 2541 _safe_isinstance(obj, 'numpy', 'void'):
2540 2542 return obj.dtype.names or []
2541 2543 return []
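The key-completion protocol checked first by `_get_keys` can be demonstrated with a small class (hypothetical `Config`/`get_keys` names; the real method also special-cases pandas and numpy objects):

```python
class Config:
    """Any mapping-like object may advertise its completable keys by
    defining an _ipython_key_completions_() method."""
    def __init__(self, data):
        self._data = data
    def __getitem__(self, key):
        return self._data[key]
    def _ipython_key_completions_(self):
        return list(self._data)

def get_keys(obj):
    method = getattr(obj, '_ipython_key_completions_', None)
    if callable(method):
        return method()
    if isinstance(obj, dict):
        try:
            return list(obj.keys())
        except Exception:
            return []
    return []
```

With this in place, typing `cfg["<tab>` in IPython offers the keys returned by the object's own hook before any fallback heuristics run.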
2542 2544
2543 2545 @context_matcher()
2544 2546 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2545 2547 """Match string keys in a dictionary, after e.g. ``foo[``."""
2546 2548 matches = self.dict_key_matches(context.token)
2547 2549 return _convert_matcher_v1_result_to_v2(
2548 2550 matches, type="dict key", suppress_if_matches=True
2549 2551 )
2550 2552
2551 2553 def dict_key_matches(self, text: str) -> List[str]:
2552 2554 """Match string keys in a dictionary, after e.g. ``foo[``.
2553 2555
2554 2556 .. deprecated:: 8.6
2555 2557 You can use :meth:`dict_key_matcher` instead.
2556 2558 """
2557 2559
2558 2560 # Short-circuit on closed dictionary (regular expression would
2559 2561 # not match anyway, but would take quite a while).
2560 2562 if self.text_until_cursor.strip().endswith("]"):
2561 2563 return []
2562 2564
2563 2565 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2564 2566
2565 2567 if match is None:
2566 2568 return []
2567 2569
2568 2570 expr, prior_tuple_keys, key_prefix = match.groups()
2569 2571
2570 2572 obj = self._evaluate_expr(expr)
2571 2573
2572 2574 if obj is not_found:
2573 2575 return []
2574 2576
2575 2577 keys = self._get_keys(obj)
2576 2578 if not keys:
2577 2579 return keys
2578 2580
2579 2581 tuple_prefix = guarded_eval(
2580 2582 prior_tuple_keys,
2581 2583 EvaluationContext(
2582 2584 globals=self.global_namespace,
2583 2585 locals=self.namespace,
2584 2586 evaluation=self.evaluation, # type: ignore
2585 2587 in_subscript=True,
2586 2588 ),
2587 2589 )
2588 2590
2589 2591 closing_quote, token_offset, matches = match_dict_keys(
2590 2592 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2591 2593 )
2592 2594 if not matches:
2593 2595 return []
2594 2596
2595 2597 # get the cursor position of
2596 2598 # - the text being completed
2597 2599 # - the start of the key text
2598 2600 # - the start of the completion
2599 2601 text_start = len(self.text_until_cursor) - len(text)
2600 2602 if key_prefix:
2601 2603 key_start = match.start(3)
2602 2604 completion_start = key_start + token_offset
2603 2605 else:
2604 2606 key_start = completion_start = match.end()
2605 2607
2606 2608 # grab the leading prefix, to make sure all completions start with `text`
2607 2609 if text_start > key_start:
2608 2610 leading = ''
2609 2611 else:
2610 2612 leading = text[text_start:completion_start]
2611 2613
2612 2614 # append closing quote and bracket as appropriate
2613 2615 # this is *not* appropriate if the opening quote or bracket is outside
2614 2616 # the text given to this method, e.g. `d["""a\nt
2615 2617 can_close_quote = False
2616 2618 can_close_bracket = False
2617 2619
2618 2620 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2619 2621
2620 2622 if continuation.startswith(closing_quote):
2621 2623 # do not close if already closed, e.g. `d['a<tab>'`
2622 2624 continuation = continuation[len(closing_quote) :]
2623 2625 else:
2624 2626 can_close_quote = True
2625 2627
2626 2628 continuation = continuation.strip()
2627 2629
2628 2630 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2629 2631 # handling it is out of scope, so let's avoid appending suffixes.
2630 2632 has_known_tuple_handling = isinstance(obj, dict)
2631 2633
2632 2634 can_close_bracket = (
2633 2635 not continuation.startswith("]") and self.auto_close_dict_keys
2634 2636 )
2635 2637 can_close_tuple_item = (
2636 2638 not continuation.startswith(",")
2637 2639 and has_known_tuple_handling
2638 2640 and self.auto_close_dict_keys
2639 2641 )
2640 2642 can_close_quote = can_close_quote and self.auto_close_dict_keys
2641 2643
2642 2644 # fast path if closing quote should be appended but no suffix is allowed
2643 2645 if not can_close_quote and not can_close_bracket and closing_quote:
2644 2646 return [leading + k for k in matches]
2645 2647
2646 2648 results = []
2647 2649
2648 2650 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2649 2651
2650 2652 for k, state_flag in matches.items():
2651 2653 result = leading + k
2652 2654 if can_close_quote and closing_quote:
2653 2655 result += closing_quote
2654 2656
2655 2657 if state_flag == end_of_tuple_or_item:
2656 2658 # We do not know which suffix to add,
2657 2659 # e.g. both tuple item and string
2658 2660 # match this item.
2659 2661 pass
2660 2662
2661 2663 if state_flag in end_of_tuple_or_item and can_close_bracket:
2662 2664 result += "]"
2663 2665 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2664 2666 result += ", "
2665 2667 results.append(result)
2666 2668 return results
2667 2669
2668 2670 @context_matcher()
2669 2671 def unicode_name_matcher(self, context: CompletionContext):
2670 2672 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2671 2673 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2672 2674 return _convert_matcher_v1_result_to_v2(
2673 2675 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2674 2676 )
2675 2677
2676 2678 @staticmethod
2677 2679 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2678 2680 """Match Latex-like syntax for unicode characters based
2679 2681 on the name of the character.
2680 2682
2681 2683 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2682 2684
2683 2685 Works only on valid python 3 identifiers, or on combining characters that
2684 2686 will combine to form a valid identifier.
2685 2687 """
2686 2688 slashpos = text.rfind('\\')
2687 2689 if slashpos > -1:
2688 2690 s = text[slashpos+1:]
2689 2691 try:
2690 2692 unic = unicodedata.lookup(s)
2691 2693 # allow combining chars
2692 2694 if ('a'+unic).isidentifier():
2693 2695 return '\\'+s,[unic]
2694 2696 except KeyError:
2695 2697 pass
2696 2698 return '', []
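The lookup performed by `unicode_name_matches` can be reproduced directly with the standard library; this sketch mirrors the core of the method above rather than calling IPython's API:

```python
import unicodedata

# The text after the last backslash is treated as a Unicode character
# name; `unicodedata.lookup` resolves it, and the identifier check
# mirrors the "allow combining chars" test in the method above.
def lookup_unicode_name(text):
    slashpos = text.rfind("\\")
    if slashpos > -1:
        s = text[slashpos + 1:]
        try:
            unic = unicodedata.lookup(s)
            if ("a" + unic).isidentifier():
                return "\\" + s, [unic]
        except KeyError:
            pass
    return "", []

print(lookup_unicode_name("\\GREEK SMALL LETTER ETA"))
# -> ('\\GREEK SMALL LETTER ETA', ['η'])
```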
2697 2699
2698 2700 @context_matcher()
2699 2701 def latex_name_matcher(self, context: CompletionContext):
2700 2702 """Match Latex syntax for unicode characters.
2701 2703
2702 2704 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2703 2705 """
2704 2706 fragment, matches = self.latex_matches(context.text_until_cursor)
2705 2707 return _convert_matcher_v1_result_to_v2(
2706 2708 matches, type="latex", fragment=fragment, suppress_if_matches=True
2707 2709 )
2708 2710
2709 2711 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2710 2712 """Match Latex syntax for unicode characters.
2711 2713
2712 2714 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2713 2715
2714 2716 .. deprecated:: 8.6
2715 2717 You can use :meth:`latex_name_matcher` instead.
2716 2718 """
2717 2719 slashpos = text.rfind('\\')
2718 2720 if slashpos > -1:
2719 2721 s = text[slashpos:]
2720 2722 if s in latex_symbols:
2721 2723 # Try to complete a full latex symbol to unicode
2722 2724 # \\alpha -> Ξ±
2723 2725 return s, [latex_symbols[s]]
2724 2726 else:
2725 2727 # If a user has partially typed a latex symbol, give them
2726 2728 # a full list of options \al -> [\aleph, \alpha]
2727 2729 matches = [k for k in latex_symbols if k.startswith(s)]
2728 2730 if matches:
2729 2731 return s, matches
2730 2732 return '', ()
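The two-way behaviour of `latex_matches` can be sketched with a tiny stand-in symbol table; the real table lives in `IPython.core.latex_symbols` and is much larger:

```python
# Hypothetical three-entry stand-in for IPython's `latex_symbols` table.
latex_symbols = {"\\alpha": "α", "\\aleph": "ℵ", "\\beta": "β"}

def latex_matches(text):
    slashpos = text.rfind("\\")
    if slashpos > -1:
        s = text[slashpos:]
        if s in latex_symbols:
            # Full symbol name -> the character itself: \alpha -> α
            return s, [latex_symbols[s]]
        # Partial name -> candidate symbol names: \al -> [\aleph, \alpha]
        matches = [k for k in latex_symbols if k.startswith(s)]
        if matches:
            return s, matches
    return "", ()

print(latex_matches("\\alpha"))          # ('\\alpha', ['α'])
print(sorted(latex_matches("\\al")[1]))  # ['\\aleph', '\\alpha']
```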
2731 2733
2732 2734 @context_matcher()
2733 2735 def custom_completer_matcher(self, context):
2734 2736 """Dispatch custom completer.
2735 2737
2736 2738 If a match is found, suppresses all other matchers except for Jedi.
2737 2739 """
2738 2740 matches = self.dispatch_custom_completer(context.token) or []
2739 2741 result = _convert_matcher_v1_result_to_v2(
2740 2742 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2741 2743 )
2742 2744 result["ordered"] = True
2743 2745 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2744 2746 return result
2745 2747
2746 2748 def dispatch_custom_completer(self, text):
2747 2749 """
2748 2750 .. deprecated:: 8.6
2749 2751 You can use :meth:`custom_completer_matcher` instead.
2750 2752 """
2751 2753 if not self.custom_completers:
2752 2754 return
2753 2755
2754 2756 line = self.line_buffer
2755 2757 if not line.strip():
2756 2758 return None
2757 2759
2758 2760 # Create a little structure to pass all the relevant information about
2759 2761 # the current completion to any custom completer.
2760 2762 event = SimpleNamespace()
2761 2763 event.line = line
2762 2764 event.symbol = text
2763 2765 cmd = line.split(None,1)[0]
2764 2766 event.command = cmd
2765 2767 event.text_until_cursor = self.text_until_cursor
2766 2768
2767 2769 # for foo etc, try also to find completer for %foo
2768 2770 if not cmd.startswith(self.magic_escape):
2769 2771 try_magic = self.custom_completers.s_matches(
2770 2772 self.magic_escape + cmd)
2771 2773 else:
2772 2774 try_magic = []
2773 2775
2774 2776 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2775 2777 try_magic,
2776 2778 self.custom_completers.flat_matches(self.text_until_cursor)):
2777 2779 try:
2778 2780 res = c(event)
2779 2781 if res:
2780 2782 # first, try case sensitive match
2781 2783 withcase = [r for r in res if r.startswith(text)]
2782 2784 if withcase:
2783 2785 return withcase
2784 2786 # if none, then case insensitive ones are ok too
2785 2787 text_low = text.lower()
2786 2788 return [r for r in res if r.lower().startswith(text_low)]
2787 2789 except TryNext:
2788 2790 pass
2789 2791 except KeyboardInterrupt:
2790 2792 """
2791 2793 If a custom completer takes too long,
2792 2794 let the keyboard interrupt abort it and return nothing.
2793 2795 """
2794 2796 break
2795 2797
2796 2798 return None
2797 2799
2798 2800 def completions(self, text: str, offset: int)->Iterator[Completion]:
2799 2801 """
2800 2802 Returns an iterator over the possible completions
2801 2803
2802 2804 .. warning::
2803 2805
2804 2806 Unstable
2805 2807
2806 2808 This function is unstable, API may change without warning.
2807 2809 It will also raise unless used in the proper context manager.
2808 2810
2809 2811 Parameters
2810 2812 ----------
2811 2813 text : str
2812 2814 Full text of the current input, multi line string.
2813 2815 offset : int
2814 2816 Integer representing the position of the cursor in ``text``. Offset
2815 2817 is 0-based indexed.
2816 2818
2817 2819 Yields
2818 2820 ------
2819 2821 Completion
2820 2822
2821 2823 Notes
2822 2824 -----
2823 2825 The cursor on a text can either be seen as being "in between"
2824 2826 characters or "On" a character depending on the interface visible to
2825 2827 the user. For consistency the cursor being on "in between" characters X
2826 2828 and Y is equivalent to the cursor being "on" character Y, that is to say
2827 2829 the character the cursor is on is considered as being after the cursor.
2828 2830
2829 2831 Combining characters may span more than one position in the
2830 2832 text.
2831 2833
2832 2834 .. note::
2833 2835
2834 2836 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2835 2837 fake Completion token to distinguish completion returned by Jedi
2836 2838 and usual IPython completion.
2837 2839
2838 2840 .. note::
2839 2841
2840 2842 Completions are not completely deduplicated yet. If identical
2841 2843 completions are coming from different sources this function does not
2842 2844 ensure that each completion object will only be present once.
2843 2845 """
2844 2846 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2845 2847 "It may change without warnings. "
2846 2848 "Use in corresponding context manager.",
2847 2849 category=ProvisionalCompleterWarning, stacklevel=2)
2848 2850
2849 2851 seen = set()
2850 2852 profiler:Optional[cProfile.Profile]
2851 2853 try:
2852 2854 if self.profile_completions:
2853 2855 import cProfile
2854 2856 profiler = cProfile.Profile()
2855 2857 profiler.enable()
2856 2858 else:
2857 2859 profiler = None
2858 2860
2859 2861 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2860 2862 if c and (c in seen):
2861 2863 continue
2862 2864 yield c
2863 2865 seen.add(c)
2864 2866 except KeyboardInterrupt:
2865 2867 """if completions take too long and users send keyboard interrupt,
2866 2868 do not crash and return ASAP. """
2867 2869 pass
2868 2870 finally:
2869 2871 if profiler is not None:
2870 2872 profiler.disable()
2871 2873 ensure_dir_exists(self.profiler_output_dir)
2872 2874 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2873 2875 print("Writing profiler output to", output_path)
2874 2876 profiler.dump_stats(output_path)
2875 2877
2876 2878 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2877 2879 """
2878 2880 Core completion module. Same signature as :any:`completions`, with the
2879 2881 extra ``_timeout`` parameter (in seconds).
2880 2882
2881 2883 Computing jedi's completion ``.type`` can be quite expensive (it is a
2882 2884 lazy property) and can require some warm-up, more warm up than just
2883 2885 computing the ``name`` of a completion. The warm-up can be:
2884 2886
2885 2887 - Long warm-up the first time a module is encountered after
2886 2888 install/update: actually build parse/inference tree.
2887 2889
2888 2890 - first time the module is encountered in a session: load tree from
2889 2891 disk.
2890 2892
2891 2893 We don't want to block completions for tens of seconds so we give the
2892 2894 completer a "budget" of ``_timeout`` seconds per invocation to compute
2893 2895 completion types; the completions that have not yet been computed will
2894 2896 be marked as "unknown" and will have a chance to be computed next round
2895 2897 as things get cached.
2896 2898
2897 2899 Keep in mind that Jedi is not the only thing treating the completions, so
2898 2900 keep the timeout short-ish: if we take more than 0.3 seconds we still
2899 2901 have lots of processing to do.
2900 2902
2901 2903 """
2902 2904 deadline = time.monotonic() + _timeout
2903 2905
2904 2906 before = full_text[:offset]
2905 2907 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2906 2908
2907 2909 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2908 2910
2909 2911 def is_non_jedi_result(
2910 2912 result: MatcherResult, identifier: str
2911 2913 ) -> TypeGuard[SimpleMatcherResult]:
2912 2914 return identifier != jedi_matcher_id
2913 2915
2914 2916 results = self._complete(
2915 2917 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2916 2918 )
2917 2919
2918 2920 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2919 2921 identifier: result
2920 2922 for identifier, result in results.items()
2921 2923 if is_non_jedi_result(result, identifier)
2922 2924 }
2923 2925
2924 2926 jedi_matches = (
2925 2927 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2926 2928 if jedi_matcher_id in results
2927 2929 else ()
2928 2930 )
2929 2931
2930 2932 iter_jm = iter(jedi_matches)
2931 2933 if _timeout:
2932 2934 for jm in iter_jm:
2933 2935 try:
2934 2936 type_ = jm.type
2935 2937 except Exception:
2936 2938 if self.debug:
2937 2939 print("Error in Jedi getting type of ", jm)
2938 2940 type_ = None
2939 2941 delta = len(jm.name_with_symbols) - len(jm.complete)
2940 2942 if type_ == 'function':
2941 2943 signature = _make_signature(jm)
2942 2944 else:
2943 2945 signature = ''
2944 2946 yield Completion(start=offset - delta,
2945 2947 end=offset,
2946 2948 text=jm.name_with_symbols,
2947 2949 type=type_,
2948 2950 signature=signature,
2949 2951 _origin='jedi')
2950 2952
2951 2953 if time.monotonic() > deadline:
2952 2954 break
2953 2955
2954 2956 for jm in iter_jm:
2955 2957 delta = len(jm.name_with_symbols) - len(jm.complete)
2956 2958 yield Completion(
2957 2959 start=offset - delta,
2958 2960 end=offset,
2959 2961 text=jm.name_with_symbols,
2960 2962 type=_UNKNOWN_TYPE, # don't compute type for speed
2961 2963 _origin="jedi",
2962 2964 signature="",
2963 2965 )
2964 2966
2965 2967 # TODO:
2966 2968 # Suppress this, right now just for debug.
2967 2969 if jedi_matches and non_jedi_results and self.debug:
2968 2970 some_start_offset = before.rfind(
2969 2971 next(iter(non_jedi_results.values()))["matched_fragment"]
2970 2972 )
2971 2973 yield Completion(
2972 2974 start=some_start_offset,
2973 2975 end=offset,
2974 2976 text="--jedi/ipython--",
2975 2977 _origin="debug",
2976 2978 type="none",
2977 2979 signature="",
2978 2980 )
2979 2981
2980 2982 ordered: List[Completion] = []
2981 2983 sortable: List[Completion] = []
2982 2984
2983 2985 for origin, result in non_jedi_results.items():
2984 2986 matched_text = result["matched_fragment"]
2985 2987 start_offset = before.rfind(matched_text)
2986 2988 is_ordered = result.get("ordered", False)
2987 2989 container = ordered if is_ordered else sortable
2988 2990
2989 2991 # I'm unsure if this is always true, so let's assert and see if it
2990 2992 # crashes
2991 2993 assert before.endswith(matched_text)
2992 2994
2993 2995 for simple_completion in result["completions"]:
2994 2996 completion = Completion(
2995 2997 start=start_offset,
2996 2998 end=offset,
2997 2999 text=simple_completion.text,
2998 3000 _origin=origin,
2999 3001 signature="",
3000 3002 type=simple_completion.type or _UNKNOWN_TYPE,
3001 3003 )
3002 3004 container.append(completion)
3003 3005
3004 3006 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
3005 3007 :MATCHES_LIMIT
3006 3008 ]
3007 3009
3008 3010 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
3009 3011 """Find completions for the given text and line context.
3010 3012
3011 3013 Note that both the text and the line_buffer are optional, but at least
3012 3014 one of them must be given.
3013 3015
3014 3016 Parameters
3015 3017 ----------
3016 3018 text : string, optional
3017 3019 Text to perform the completion on. If not given, the line buffer
3018 3020 is split using the instance's CompletionSplitter object.
3019 3021 line_buffer : string, optional
3020 3022 If not given, the completer attempts to obtain the current line
3021 3023 buffer via readline. This keyword allows clients which are
3022 3024 requesting text completions in non-readline contexts to inform
3023 3025 the completer of the entire text.
3024 3026 cursor_pos : int, optional
3025 3027 Index of the cursor in the full line buffer. Should be provided by
3026 3028 remote frontends where the kernel has no access to frontend state.
3027 3029
3028 3030 Returns
3029 3031 -------
3030 3032 Tuple of two items:
3031 3033 text : str
3032 3034 Text that was actually used in the completion.
3033 3035 matches : list
3034 3036 A list of completion matches.
3035 3037
3036 3038 Notes
3037 3039 -----
3038 3040 This API is likely to be deprecated and replaced by
3039 3041 :any:`IPCompleter.completions` in the future.
3040 3042
3041 3043 """
3042 3044 warnings.warn('`Completer.complete` is pending deprecation since '
3043 3045 'IPython 6.0 and will be replaced by `Completer.completions`.',
3044 3046 PendingDeprecationWarning)
3045 3047 # potential todo, FOLD the 3rd throw-away argument of _complete
3046 3048 # into the first 2.
3047 3049 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3048 3050 # TODO: should we deprecate now, or does it stay?
3049 3051
3050 3052 results = self._complete(
3051 3053 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3052 3054 )
3053 3055
3054 3056 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3055 3057
3056 3058 return self._arrange_and_extract(
3057 3059 results,
3058 3060 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3059 3061 skip_matchers={jedi_matcher_id},
3060 3062 # this API does not support different start/end positions (fragments of token).
3061 3063 abort_if_offset_changes=True,
3062 3064 )
3063 3065
3064 3066 def _arrange_and_extract(
3065 3067 self,
3066 3068 results: Dict[str, MatcherResult],
3067 3069 skip_matchers: Set[str],
3068 3070 abort_if_offset_changes: bool,
3069 3071 ):
3070 3072 sortable: List[AnyMatcherCompletion] = []
3071 3073 ordered: List[AnyMatcherCompletion] = []
3072 3074 most_recent_fragment = None
3073 3075 for identifier, result in results.items():
3074 3076 if identifier in skip_matchers:
3075 3077 continue
3076 3078 if not result["completions"]:
3077 3079 continue
3078 3080 if not most_recent_fragment:
3079 3081 most_recent_fragment = result["matched_fragment"]
3080 3082 if (
3081 3083 abort_if_offset_changes
3082 3084 and result["matched_fragment"] != most_recent_fragment
3083 3085 ):
3084 3086 break
3085 3087 if result.get("ordered", False):
3086 3088 ordered.extend(result["completions"])
3087 3089 else:
3088 3090 sortable.extend(result["completions"])
3089 3091
3090 3092 if not most_recent_fragment:
3091 3093 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3092 3094
3093 3095 return most_recent_fragment, [
3094 3096 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3095 3097 ]
3096 3098
3097 3099 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3098 3100 full_text=None) -> _CompleteResult:
3099 3101 """
3100 3102 Like complete but can also return raw jedi completions as well as the
3101 3103 origin of the completion text. This could (and should) be made much
3102 3104 cleaner but that will be simpler once we drop the old (and stateful)
3103 3105 :any:`complete` API.
3104 3106
3105 3107 With the current provisional API, cursor_pos acts both (depending on the
3106 3108 caller) as the offset in the ``text`` or ``line_buffer``, or as the
3107 3109 ``column`` when passing multiline strings. This could/should be renamed
3108 3110 but that would add extra noise.
3109 3111
3110 3112 Parameters
3111 3113 ----------
3112 3114 cursor_line
3113 3115 Index of the line the cursor is on. 0 indexed.
3114 3116 cursor_pos
3115 3117 Position of the cursor in the current line/line_buffer/text. 0
3116 3118 indexed.
3117 3119 line_buffer : optional, str
3118 3120 The current line the cursor is in; this is mostly due to the legacy
3119 3121 reason that readline could only give us the single current line.
3120 3122 Prefer `full_text`.
3121 3123 text : str
3122 3124 The current "token" the cursor is in, mostly also for historical
3123 3125 reasons, as the completer would trigger only after the current line
3124 3126 was parsed.
3125 3127 full_text : str
3126 3128 Full text of the current cell.
3127 3129
3128 3130 Returns
3129 3131 -------
3130 3132 An ordered dictionary where keys are identifiers of completion
3131 3133 matchers and values are ``MatcherResult``s.
3132 3134 """
3133 3135
3134 3136 # if the cursor position isn't given, the only sane assumption we can
3135 3137 # make is that it's at the end of the line (the common case)
3136 3138 if cursor_pos is None:
3137 3139 cursor_pos = len(line_buffer) if text is None else len(text)
3138 3140
3139 3141 if self.use_main_ns:
3140 3142 self.namespace = __main__.__dict__
3141 3143
3142 3144 # if text is either None or an empty string, rely on the line buffer
3143 3145 if (not line_buffer) and full_text:
3144 3146 line_buffer = full_text.split('\n')[cursor_line]
3145 3147 if not text: # issue #11508: check line_buffer before calling split_line
3146 3148 text = (
3147 3149 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3148 3150 )
3149 3151
3150 3152 # If no line buffer is given, assume the input text is all there was
3151 3153 if line_buffer is None:
3152 3154 line_buffer = text
3153 3155
3154 3156 # deprecated - do not use `line_buffer` in new code.
3155 3157 self.line_buffer = line_buffer
3156 3158 self.text_until_cursor = self.line_buffer[:cursor_pos]
3157 3159
3158 3160 if not full_text:
3159 3161 full_text = line_buffer
3160 3162
3161 3163 context = CompletionContext(
3162 3164 full_text=full_text,
3163 3165 cursor_position=cursor_pos,
3164 3166 cursor_line=cursor_line,
3165 3167 token=text,
3166 3168 limit=MATCHES_LIMIT,
3167 3169 )
3168 3170
3169 3171 # Start with a clean slate of completions
3170 3172 results: Dict[str, MatcherResult] = {}
3171 3173
3172 3174 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3173 3175
3174 3176 suppressed_matchers: Set[str] = set()
3175 3177
3176 3178 matchers = {
3177 3179 _get_matcher_id(matcher): matcher
3178 3180 for matcher in sorted(
3179 3181 self.matchers, key=_get_matcher_priority, reverse=True
3180 3182 )
3181 3183 }
3182 3184
3183 3185 for matcher_id, matcher in matchers.items():
3184 3186 matcher_id = _get_matcher_id(matcher)
3185 3187
3186 3188 if matcher_id in self.disable_matchers:
3187 3189 continue
3188 3190
3189 3191 if matcher_id in results:
3190 3192 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3191 3193
3192 3194 if matcher_id in suppressed_matchers:
3193 3195 continue
3194 3196
3195 3197 result: MatcherResult
3196 3198 try:
3197 3199 if _is_matcher_v1(matcher):
3198 3200 result = _convert_matcher_v1_result_to_v2(
3199 3201 matcher(text), type=_UNKNOWN_TYPE
3200 3202 )
3201 3203 elif _is_matcher_v2(matcher):
3202 3204 result = matcher(context)
3203 3205 else:
3204 3206 api_version = _get_matcher_api_version(matcher)
3205 3207 raise ValueError(f"Unsupported API version {api_version}")
3206 3208 except:
3207 3209 # Show the ugly traceback if the matcher causes an
3208 3210 # exception, but do NOT crash the kernel!
3209 3211 sys.excepthook(*sys.exc_info())
3210 3212 continue
3211 3213
3212 3214 # set default value for matched fragment if suffix was not selected.
3213 3215 result["matched_fragment"] = result.get("matched_fragment", context.token)
3214 3216
3215 3217 if not suppressed_matchers:
3216 3218 suppression_recommended: Union[bool, Set[str]] = result.get(
3217 3219 "suppress", False
3218 3220 )
3219 3221
3220 3222 suppression_config = (
3221 3223 self.suppress_competing_matchers.get(matcher_id, None)
3222 3224 if isinstance(self.suppress_competing_matchers, dict)
3223 3225 else self.suppress_competing_matchers
3224 3226 )
3225 3227 should_suppress = (
3226 3228 (suppression_config is True)
3227 3229 or (suppression_recommended and (suppression_config is not False))
3228 3230 ) and has_any_completions(result)
3229 3231
3230 3232 if should_suppress:
3231 3233 suppression_exceptions: Set[str] = result.get(
3232 3234 "do_not_suppress", set()
3233 3235 )
3234 3236 if isinstance(suppression_recommended, Iterable):
3235 3237 to_suppress = set(suppression_recommended)
3236 3238 else:
3237 3239 to_suppress = set(matchers)
3238 3240 suppressed_matchers = to_suppress - suppression_exceptions
3239 3241
3240 3242 new_results = {}
3241 3243 for previous_matcher_id, previous_result in results.items():
3242 3244 if previous_matcher_id not in suppressed_matchers:
3243 3245 new_results[previous_matcher_id] = previous_result
3244 3246 results = new_results
3245 3247
3246 3248 results[matcher_id] = result
3247 3249
3248 3250 _, matches = self._arrange_and_extract(
3249 3251 results,
3250 3252 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3251 3253 # if it was omission, we can remove the filtering step, otherwise remove this comment.
3252 3254 skip_matchers={jedi_matcher_id},
3253 3255 abort_if_offset_changes=False,
3254 3256 )
3255 3257
3256 3258 # populate legacy stateful API
3257 3259 self.matches = matches
3258 3260
3259 3261 return results
3260 3262
3261 3263 @staticmethod
3262 3264 def _deduplicate(
3263 3265 matches: Sequence[AnyCompletion],
3264 3266 ) -> Iterable[AnyCompletion]:
3265 3267 filtered_matches: Dict[str, AnyCompletion] = {}
3266 3268 for match in matches:
3267 3269 text = match.text
3268 3270 if (
3269 3271 text not in filtered_matches
3270 3272 or filtered_matches[text].type == _UNKNOWN_TYPE
3271 3273 ):
3272 3274 filtered_matches[text] = match
3273 3275
3274 3276 return filtered_matches.values()
3275 3277
3276 3278 @staticmethod
3277 3279 def _sort(matches: Sequence[AnyCompletion]):
3278 3280 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3279 3281
3280 3282 @context_matcher()
3281 3283 def fwd_unicode_matcher(self, context: CompletionContext):
3282 3284 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3283 3285 # TODO: use `context.limit` to terminate early once we matched the maximum
3284 3286 # number that will be used downstream; can be added as an optional to
3285 3287 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3286 3288 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3287 3289 return _convert_matcher_v1_result_to_v2(
3288 3290 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3289 3291 )
3290 3292
3291 3293 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3292 3294 """
3293 3295 Forward match a string starting with a backslash with a list of
3294 3296 potential Unicode completions.
3295 3297
3296 3298 Will compute list of Unicode character names on first call and cache it.
3297 3299
3298 3300 .. deprecated:: 8.6
3299 3301 You can use :meth:`fwd_unicode_matcher` instead.
3300 3302
3301 3303 Returns
3302 3304 -------
3303 3305 A tuple with:
3304 3306 - matched text (empty if no matches)
3305 3307 - list of potential completions (empty tuple otherwise)
3306 3308 """
3307 3309 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3308 3310 # We could do a faster match using a Trie.
3309 3311
3310 3312 # Using pygtrie the following seem to work:
3311 3313
3312 3314 # s = PrefixSet()
3313 3315
3314 3316 # for c in range(0,0x10FFFF + 1):
3315 3317 # try:
3316 3318 # s.add(unicodedata.name(chr(c)))
3317 3319 # except ValueError:
3318 3320 # pass
3319 3321 # [''.join(k) for k in s.iter(prefix)]
3320 3322
3321 3323 # But need to be timed and adds an extra dependency.
3322 3324
3323 3325 slashpos = text.rfind('\\')
3324 3326 # if text starts with slash
3325 3327 if slashpos > -1:
3326 3328 # PERF: It's important that we don't access self._unicode_names
3327 3329 # until we're inside this if-block. _unicode_names is lazily
3328 3330 # initialized, and it takes a user-noticeable amount of time to
3329 3331 # initialize it, so we don't want to initialize it unless we're
3330 3332 # actually going to use it.
3331 3333 s = text[slashpos + 1 :]
3332 3334 sup = s.upper()
3333 3335 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3334 3336 if candidates:
3335 3337 return s, candidates
3336 3338 candidates = [x for x in self.unicode_names if sup in x]
3337 3339 if candidates:
3338 3340 return s, candidates
3339 3341 splitsup = sup.split(" ")
3340 3342 candidates = [
3341 3343 x for x in self.unicode_names if all(u in x for u in splitsup)
3342 3344 ]
3343 3345 if candidates:
3344 3346 return s, candidates
3345 3347
3346 3348 return "", ()
3347 3349
3348 3350 # if text does not start with slash
3349 3351 else:
3350 3352 return '', ()
3351 3353
3352 3354 @property
3353 3355 def unicode_names(self) -> List[str]:
3354 3356 """List of names of unicode code points that can be completed.
3355 3357
3356 3358 The list is lazily initialized on first access.
3357 3359 """
3358 3360 if self._unicode_names is None:
3365 3367 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3366 3368
3367 3369 return self._unicode_names
3368 3370
3369 3371 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3370 3372 names = []
3371 3373 for start,stop in ranges:
3372 3374 for c in range(start, stop) :
3373 3375 try:
3374 3376 names.append(unicodedata.name(chr(c)))
3375 3377 except ValueError:
3376 3378 pass
3377 3379 return names
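`_unicode_name_compute` above can be exercised on a small range; unassigned or unnamed code points raise `ValueError` and are simply skipped:

```python
import unicodedata

# Same shape as `_unicode_name_compute` above, run over a tiny range:
# collect the names of assigned code points, skipping unnamed ones.
def unicode_name_compute(ranges):
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:  # unassigned/unnamed code point
                pass
    return names

print(unicode_name_compute([(0x41, 0x44)]))
# -> ['LATIN CAPITAL LETTER A', 'LATIN CAPITAL LETTER B', 'LATIN CAPITAL LETTER C']
```

Precomputed `_UNICODE_RANGES` keeps the real call from scanning the entire 0x10FFFF code-point space.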
@@ -1,395 +1,394
1 1 [build-system]
2 2 requires = ["setuptools>=61.2"]
3 3 # We need access to the 'setupbase' module at build time.
4 4 # Hence we declare a custom build backend.
5 5 build-backend = "_build_meta" # just re-exports setuptools.build_meta definitions
6 6 backend-path = ["."]
7 7
8 8 [project]
9 9 name = "ipython"
10 10 description = "IPython: Productive Interactive Computing"
11 11 keywords = ["Interactive", "Interpreter", "Shell", "Embedding"]
12 12 classifiers = [
13 13 "Framework :: IPython",
14 14 "Framework :: Jupyter",
15 15 "Intended Audience :: Developers",
16 16 "Intended Audience :: Science/Research",
17 17 "License :: OSI Approved :: BSD License",
18 18 "Programming Language :: Python",
19 19 "Programming Language :: Python :: 3",
20 20 "Programming Language :: Python :: 3 :: Only",
21 21 "Topic :: System :: Shells",
22 22 ]
23 23 requires-python = ">=3.11"
24 24 dependencies = [
25 25 'colorama; sys_platform == "win32"',
26 26 "decorator",
27 27 "jedi>=0.16",
28 28 "matplotlib-inline",
29 29 'pexpect>4.3; sys_platform != "win32" and sys_platform != "emscripten"',
30 30 "prompt_toolkit>=3.0.41,<3.1.0",
31 31 "pygments>=2.4.0",
32 32 "stack_data",
33 33 "traitlets>=5.13.0",
34 34 "typing_extensions>=4.6; python_version<'3.12'",
35 35 ]
36 36 dynamic = ["authors", "license", "version"]
37 37
38 38 [project.entry-points."pygments.lexers"]
39 39 ipythonconsole = "IPython.lib.lexers:IPythonConsoleLexer"
40 40 ipython = "IPython.lib.lexers:IPythonLexer"
41 41 ipython3 = "IPython.lib.lexers:IPython3Lexer"
42 42
43 43 [project.scripts]
44 44 ipython = "IPython:start_ipython"
45 45 ipython3 = "IPython:start_ipython"
46 46
47 47 [project.readme]
48 48 file = "long_description.rst"
49 49 content-type = "text/x-rst"
50 50
51 51 [project.urls]
52 52 Homepage = "https://ipython.org"
53 53 Documentation = "https://ipython.readthedocs.io/"
54 54 Funding = "https://numfocus.org/"
55 55 Source = "https://github.com/ipython/ipython"
56 56 Tracker = "https://github.com/ipython/ipython/issues"
57 57
58 58 [project.optional-dependencies]
59 59 black = [
60 60 "black",
61 61 ]
62 62 doc = [
63 63 "docrepr",
64 64 "exceptiongroup",
65 65 "intersphinx_registry",
66 66 "ipykernel",
67 67 "ipython[test]",
68 68 "matplotlib",
69 69 "setuptools>=18.5",
70 70 "sphinx-rtd-theme",
71 71 "sphinx>=1.3",
72 72 "sphinxcontrib-jquery",
73 "typing_extensions",
74 73 ]
75 74 kernel = [
76 75 "ipykernel",
77 76 ]
78 77 nbconvert = [
79 78 "nbconvert",
80 79 ]
81 80 nbformat = [
82 81 "nbformat",
83 82 ]
84 83 notebook = [
85 84 "ipywidgets",
86 85 "notebook",
87 86 ]
88 87 parallel = [
89 88 "ipyparallel",
90 89 ]
91 90 qtconsole = [
92 91 "qtconsole",
93 92 ]
94 93 terminal = []
95 94 test = [
96 95 "pytest",
97 96 "pytest-asyncio<0.22",
98 97 "testpath",
99 98 "pickleshare",
100 99 "packaging",
101 100 ]
102 101 test_extra = [
103 102 "ipython[test]",
104 103 "curio",
105 104 "matplotlib!=3.2.0",
106 105 "nbformat",
107 106 "numpy>=1.23",
108 107 "pandas",
109 108 "trio",
110 109 ]
111 110 matplotlib = [
112 111 "matplotlib"
113 112 ]
114 113 all = [
115 114 "ipython[black,doc,kernel,nbconvert,nbformat,notebook,parallel,qtconsole,matplotlib]",
116 115 "ipython[test,test_extra]",
117 116 ]
118 117
119 118 [tool.mypy]
120 119 python_version = "3.10"
121 120 ignore_missing_imports = true
122 121 follow_imports = 'silent'
123 122 exclude = [
124 123 'test_\.+\.py',
125 124 'IPython.utils.tests.test_wildcard',
126 125 'testing',
127 126 'tests',
128 127 'PyColorize.py',
129 128 '_process_win32_controller.py',
130 129 'IPython/core/application.py',
131 130 'IPython/core/profileapp.py',
132 131 'IPython/lib/deepreload.py',
133 132 'IPython/sphinxext/ipython_directive.py',
134 133 'IPython/terminal/ipapp.py',
135 134 'IPython/utils/_process_win32.py',
136 135 'IPython/utils/path.py',
137 136 ]
138 137 # check_untyped_defs = true
139 138 # disallow_untyped_calls = true
140 139 # disallow_untyped_decorators = true
141 140 # ignore_errors = false
142 141 # ignore_missing_imports = false
143 142 disallow_incomplete_defs = true
144 143 disallow_untyped_defs = true
145 144 warn_redundant_casts = true
146 145
147 146 [[tool.mypy.overrides]]
148 147 module = [
149 148 "IPython.core.crashhandler",
150 149 ]
151 150 check_untyped_defs = true
152 151 disallow_incomplete_defs = true
153 152 disallow_untyped_calls = true
154 153 disallow_untyped_decorators = true
155 154 disallow_untyped_defs = true
156 155 ignore_errors = false
157 156 ignore_missing_imports = false
158 157
159 158 [[tool.mypy.overrides]]
160 159 module = [
161 160 "IPython.utils.text",
162 161 ]
163 162 disallow_untyped_defs = true
164 163 check_untyped_defs = false
165 164 disallow_untyped_decorators = true
166 165
167 166 [[tool.mypy.overrides]]
168 167 module = [
169 168 ]
170 169 disallow_untyped_defs = false
171 170 ignore_errors = true
172 171 ignore_missing_imports = true
173 172 disallow_untyped_calls = false
174 173 disallow_incomplete_defs = false
175 174 check_untyped_defs = false
176 175 disallow_untyped_decorators = false
177 176
178 177
179 178 # global ignore errors
180 179 [[tool.mypy.overrides]]
181 180 module = [
182 181 "IPython",
183 182 "IPython.conftest",
184 183 "IPython.core.alias",
185 184 "IPython.core.async_helpers",
186 185 "IPython.core.autocall",
187 186 "IPython.core.builtin_trap",
188 187 "IPython.core.compilerop",
189 188 "IPython.core.completer",
190 189 "IPython.core.completerlib",
191 190 "IPython.core.debugger",
192 191 "IPython.core.display",
193 192 "IPython.core.display_functions",
194 193 "IPython.core.display_trap",
195 194 "IPython.core.displayhook",
196 195 "IPython.core.displaypub",
197 196 "IPython.core.events",
198 197 "IPython.core.excolors",
199 198 "IPython.core.extensions",
200 199 "IPython.core.formatters",
201 200 "IPython.core.getipython",
202 201 "IPython.core.guarded_eval",
203 202 "IPython.core.history",
204 203 "IPython.core.historyapp",
205 204 "IPython.core.hooks",
206 205 "IPython.core.inputsplitter",
207 206 "IPython.core.inputtransformer",
208 207 "IPython.core.inputtransformer2",
209 208 "IPython.core.interactiveshell",
210 209 "IPython.core.logger",
211 210 "IPython.core.macro",
212 211 "IPython.core.magic",
213 212 "IPython.core.magic_arguments",
214 213 "IPython.core.magics.ast_mod",
215 214 "IPython.core.magics.auto",
216 215 "IPython.core.magics.basic",
217 216 "IPython.core.magics.code",
218 217 "IPython.core.magics.config",
219 218 "IPython.core.magics.display",
220 219 "IPython.core.magics.execution",
221 220 "IPython.core.magics.extension",
222 221 "IPython.core.magics.history",
223 222 "IPython.core.magics.logging",
224 223 "IPython.core.magics.namespace",
225 224 "IPython.core.magics.osm",
226 225 "IPython.core.magics.packaging",
227 226 "IPython.core.magics.pylab",
228 227 "IPython.core.magics.script",
229 228 "IPython.core.oinspect",
230 229 "IPython.core.page",
231 230 "IPython.core.payload",
232 231 "IPython.core.payloadpage",
233 232 "IPython.core.prefilter",
234 233 "IPython.core.profiledir",
235 234 "IPython.core.prompts",
236 235 "IPython.core.pylabtools",
237 236 "IPython.core.shellapp",
238 237 "IPython.core.splitinput",
239 238 "IPython.core.ultratb",
240 239 "IPython.extensions.autoreload",
241 240 "IPython.extensions.storemagic",
242 241 "IPython.external.qt_for_kernel",
243 242 "IPython.external.qt_loaders",
244 243 "IPython.lib.backgroundjobs",
245 244 "IPython.lib.clipboard",
246 245 "IPython.lib.demo",
247 246 "IPython.lib.display",
248 247 "IPython.lib.editorhooks",
249 248 "IPython.lib.guisupport",
250 249 "IPython.lib.latextools",
251 250 "IPython.lib.lexers",
252 251 "IPython.lib.pretty",
253 252 "IPython.paths",
254 253 "IPython.sphinxext.ipython_console_highlighting",
255 254 "IPython.terminal.debugger",
256 255 "IPython.terminal.embed",
257 256 "IPython.terminal.interactiveshell",
258 257 "IPython.terminal.magics",
259 258 "IPython.terminal.prompts",
260 259 "IPython.terminal.pt_inputhooks",
261 260 "IPython.terminal.pt_inputhooks.asyncio",
262 261 "IPython.terminal.pt_inputhooks.glut",
263 262 "IPython.terminal.pt_inputhooks.gtk",
264 263 "IPython.terminal.pt_inputhooks.gtk3",
265 264 "IPython.terminal.pt_inputhooks.gtk4",
266 265 "IPython.terminal.pt_inputhooks.osx",
267 266 "IPython.terminal.pt_inputhooks.pyglet",
268 267 "IPython.terminal.pt_inputhooks.qt",
269 268 "IPython.terminal.pt_inputhooks.tk",
270 269 "IPython.terminal.pt_inputhooks.wx",
271 270 "IPython.terminal.ptutils",
272 271 "IPython.terminal.shortcuts",
273 272 "IPython.terminal.shortcuts.auto_match",
274 273 "IPython.terminal.shortcuts.auto_suggest",
275 274 "IPython.terminal.shortcuts.filters",
276 275 "IPython.utils._process_cli",
277 276 "IPython.utils._process_common",
278 277 "IPython.utils._process_emscripten",
279 278 "IPython.utils._process_posix",
280 279 "IPython.utils.capture",
281 280 "IPython.utils.coloransi",
282 281 "IPython.utils.contexts",
283 282 "IPython.utils.data",
284 283 "IPython.utils.decorators",
285 284 "IPython.utils.dir2",
286 285 "IPython.utils.encoding",
287 286 "IPython.utils.frame",
288 287 "IPython.utils.generics",
289 288 "IPython.utils.importstring",
290 289 "IPython.utils.io",
291 290 "IPython.utils.ipstruct",
292 291 "IPython.utils.module_paths",
293 292 "IPython.utils.openpy",
294 293 "IPython.utils.process",
295 294 "IPython.utils.py3compat",
296 295 "IPython.utils.sentinel",
297 296 "IPython.utils.shimmodule",
298 297 "IPython.utils.strdispatch",
299 298 "IPython.utils.sysinfo",
300 299 "IPython.utils.syspathcontext",
301 300 "IPython.utils.tempdir",
302 301 "IPython.utils.terminal",
303 302 "IPython.utils.timing",
304 303 "IPython.utils.tokenutil",
305 304 "IPython.utils.tz",
306 305 "IPython.utils.ulinecache",
307 306 "IPython.utils.version",
308 307 "IPython.utils.wildcard",
309 308
310 309 ]
311 310 disallow_untyped_defs = false
312 311 ignore_errors = true
313 312 ignore_missing_imports = true
314 313 disallow_untyped_calls = false
315 314 disallow_incomplete_defs = false
316 315 check_untyped_defs = false
317 316 disallow_untyped_decorators = false
318 317
319 318 [tool.pytest.ini_options]
320 319 addopts = [
321 320 "--durations=10",
322 321 "-pIPython.testing.plugin.pytest_ipdoctest",
323 322 "--ipdoctest-modules",
324 323 "--ignore=docs",
325 324 "--ignore=examples",
326 325 "--ignore=htmlcov",
327 326 "--ignore=ipython_kernel",
328 327 "--ignore=ipython_parallel",
329 328 "--ignore=results",
330 329 "--ignore=tmp",
331 330 "--ignore=tools",
332 331 "--ignore=traitlets",
333 332 "--ignore=IPython/core/tests/daft_extension",
334 333 "--ignore=IPython/sphinxext",
335 334 "--ignore=IPython/terminal/pt_inputhooks",
336 335 "--ignore=IPython/__main__.py",
337 336 "--ignore=IPython/external/qt_for_kernel.py",
338 337 "--ignore=IPython/html/widgets/widget_link.py",
339 338 "--ignore=IPython/html/widgets/widget_output.py",
340 339 "--ignore=IPython/terminal/console.py",
341 340 "--ignore=IPython/utils/_process_cli.py",
342 341 "--ignore=IPython/utils/_process_posix.py",
343 342 "--ignore=IPython/utils/_process_win32.py",
344 343 "--ignore=IPython/utils/_process_win32_controller.py",
345 344 "--ignore=IPython/utils/daemonize.py",
346 345 "--ignore=IPython/utils/eventful.py",
347 346 "--ignore=IPython/kernel",
348 347 "--ignore=IPython/consoleapp.py",
349 348 "--ignore=IPython/core/inputsplitter.py",
350 349 "--ignore=IPython/lib/kernel.py",
351 350 "--ignore=IPython/utils/jsonutil.py",
352 351 "--ignore=IPython/utils/localinterfaces.py",
353 352 "--ignore=IPython/utils/log.py",
354 353 "--ignore=IPython/utils/signatures.py",
355 354 "--ignore=IPython/utils/traitlets.py",
356 355 "--ignore=IPython/utils/version.py"
357 356 ]
358 357 doctest_optionflags = [
359 358 "NORMALIZE_WHITESPACE",
360 359 "ELLIPSIS"
361 360 ]
362 361 ipdoctest_optionflags = [
363 362 "NORMALIZE_WHITESPACE",
364 363 "ELLIPSIS"
365 364 ]
366 365 asyncio_mode = "strict"
367 366
368 367 [tool.pyright]
369 368 pythonPlatform="All"
370 369
371 370 [tool.setuptools]
372 371 zip-safe = false
373 372 platforms = ["Linux", "Mac OSX", "Windows"]
374 373 license-files = ["LICENSE"]
375 374 include-package-data = false
376 375
377 376 [tool.setuptools.packages.find]
378 377 exclude = ["setupext"]
379 378 namespaces = false
380 379
381 380 [tool.setuptools.package-data]
382 381 "IPython" = ["py.typed"]
383 382 "IPython.core" = ["profile/README*"]
384 383 "IPython.core.tests" = ["*.png", "*.jpg", "daft_extension/*.py"]
385 384 "IPython.lib.tests" = ["*.wav"]
386 385 "IPython.testing.plugin" = ["*.txt"]
387 386
388 387 [tool.setuptools.dynamic]
389 388 version = {attr = "IPython.core.release.__version__"}
390 389
391 390 [tool.coverage.run]
392 391 omit = [
393 392 # omit everything in /tmp as we run tempfile
394 393 "/tmp/*",
395 394 ]