types hints
M Bussonnier -
@@ -1,3420 +1,3421
1 1 """Completion for IPython.
2 2
3 3 This module started as fork of the rlcompleter module in the Python standard
4 4 library. The original enhancements made to rlcompleter have been sent
5 5 upstream and were accepted as of Python 2.3,
6 6
7 7 This module now supports a wide variety of completion mechanisms, both
8 8 for normal classic Python code, as well as completers for IPython-specific
9 9 syntax like magics.
10 10
11 11 Latex and Unicode completion
12 12 ============================
13 13
14 14 IPython and compatible frontends can not only complete your code, but also
15 15 help you input a wide range of characters. In particular, we allow you to
16 16 insert a unicode character using the tab completion mechanism.
17 17
18 18 Forward latex/unicode completion
19 19 --------------------------------
20 20
21 21 Forward completion allows you to easily type a unicode character using its
22 22 latex name or unicode long description. To do so, type a backslash followed
23 23 by the relevant name and press tab:
24 24
25 25
26 26 Using latex completion:
27 27
28 28 .. code::
29 29
30 30 \\alpha<tab>
31 31 Ξ±
32 32
33 33 or using unicode completion:
34 34
35 35
36 36 .. code::
37 37
38 38 \\GREEK SMALL LETTER ALPHA<tab>
39 39 Ξ±
40 40
41 41
42 42 Only valid Python identifiers will complete. Combining characters (like arrows
43 43 or dots) are also available; unlike latex, they need to be put after their
44 44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45 45
46 46 Some browsers are known to display combining characters incorrectly.
47 47
48 48 Backward latex completion
49 49 -------------------------
50 50
51 51 It is sometimes challenging to know how to type a character. If you are using
52 52 IPython or any compatible frontend, you can prepend a backslash to the character
53 53 and press :kbd:`Tab` to expand it to its latex form.
54 54
55 55 .. code::
56 56
57 57 \\Ξ±<tab>
58 58 \\alpha
59 59
60 60
61 61 Both forward and backward completions can be deactivated by setting the
62 62 :std:configtrait:`Completer.backslash_combining_completions` option to
63 63 ``False``.
64 64
65 65
66 66 Experimental
67 67 ============
68 68
69 69 Starting with IPython 6.0, this module can make use of the Jedi library to
70 70 generate completions both using static analysis of the code, and by dynamically
71 71 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
72 72 library for Python. The APIs attached to this new mechanism are unstable and will
73 73 raise unless used in a :any:`provisionalcompleter` context manager.
74 74
75 75 You will find that the following are experimental:
76 76
77 77 - :any:`provisionalcompleter`
78 78 - :any:`IPCompleter.completions`
79 79 - :any:`Completion`
80 80 - :any:`rectify_completions`
81 81
82 82 .. note::
83 83
84 84 better name for :any:`rectify_completions` ?
85 85
86 86 We welcome any feedback on these new APIs, and we also encourage you to try this
87 87 module in debug mode (start IPython with ``--Completer.debug=True``) in order
88 88 to have extra logging information if :any:`jedi` is crashing, or if the current
89 89 IPython completer's pending deprecations are returning results not yet handled
90 90 by :any:`jedi`.
91 91
92 92 Using Jedi for tab completion allows snippets like the following to work without
93 93 having to execute any code:
94 94
95 95 >>> myvar = ['hello', 42]
96 96 ... myvar[1].bi<tab>
97 97
98 98 Tab completion will be able to infer that ``myvar[1]`` is an integer without
99 99 executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
100 100 option.
101 101
102 102 Be sure to update :any:`jedi` to the latest stable version or to try the
103 103 current development version to get better completions.
104 104
105 105 Matchers
106 106 ========
107 107
108 108 All completion routines are implemented using a unified *Matchers* API.
109 109 The matchers API is provisional and subject to change without notice.
110 110
111 111 The built-in matchers include:
112 112
113 113 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
114 114 - :any:`IPCompleter.magic_matcher`: completions for magics,
115 115 - :any:`IPCompleter.unicode_name_matcher`,
116 116 :any:`IPCompleter.fwd_unicode_matcher`
117 117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
118 118 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
119 119 - :any:`IPCompleter.file_matcher`: paths to files and directories,
120 120 - :any:`IPCompleter.python_func_kw_matcher`: function keywords,
121 121 - :any:`IPCompleter.python_matches`: globals and attributes (v1 API),
122 122 - ``IPCompleter.jedi_matcher``: static analysis with Jedi,
123 123 - :any:`IPCompleter.custom_completer_matcher`: pluggable completer with a default
124 124 implementation in :any:`InteractiveShell` which uses IPython hooks system
125 125 (`complete_command`) with string dispatch (including regular expressions).
126 126 Unlike other matchers, ``custom_completer_matcher`` will not suppress
127 127 Jedi results, to match behaviour in earlier IPython versions.
128 128
129 129 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
130 130
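As a minimal sketch (the ``emoji_matcher`` name and its symbol table are invented for illustration; only ``IPCompleter.custom_matchers`` comes from the text above), a v1-style custom matcher is just a callable from the current token to a list of strings:

```python
# Hypothetical v1 matcher: a plain callable taking the current token and
# returning a list of completion strings.
def emoji_matcher(text):
    symbols = {":check": "\u2713", ":cross": "\u2717"}
    return [v for k, v in symbols.items() if k.startswith(text)]

# Registration on a live shell would look like (not run here):
#     get_ipython().Completer.custom_matchers.append(emoji_matcher)
print(emoji_matcher(":ch"))  # → ['✓']
```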
131 131 Matcher API
132 132 -----------
133 133
134 134 Simplifying some details, the ``Matcher`` interface can be described as
135 135
136 136 .. code-block::
137 137
138 138 MatcherAPIv1 = Callable[[str], list[str]]
139 139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
140 140
141 141 Matcher = MatcherAPIv1 | MatcherAPIv2
142 142
143 143 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
144 144 and remains supported as the simplest way of generating completions. This is also
145 145 currently the only API supported by the IPython hooks system `complete_command`.
146 146
147 147 The ``matcher_api_version`` attribute is used to distinguish between matcher versions.
148 148 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
149 149 and requires a literal ``2`` for v2 Matchers.
150 150
151 151 Once the API stabilises, future versions may relax the requirement for specifying
152 152 ``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore,
153 153 please do not rely on the presence of ``matcher_api_version`` for any purpose.
154 154
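A rough sketch of the two shapes, using plain functions and dicts as stand-ins for the real ``SimpleCompletion``/``CompletionContext`` classes defined later in this module:

```python
# API v1: token in, list of strings out; no version attribute required.
def v1_matcher(text):
    return [text + "_option"]

# API v2: CompletionContext in, MatcherResult-style dict out; must declare
# its API version with a literal 2.
def v2_matcher(context):
    return {"completions": [{"text": context.token + "_option"}]}

v2_matcher.matcher_api_version = 2

# Version detection mirrors the rules above: v1 may omit the attribute.
print(getattr(v1_matcher, "matcher_api_version", 1))  # → 1
print(v2_matcher.matcher_api_version)                 # → 2
```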
155 155 Suppression of competing matchers
156 156 ---------------------------------
157 157
158 158 By default results from all matchers are combined, in the order determined by
159 159 their priority. Matchers can request to suppress results from subsequent
160 160 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
161 161
162 162 When multiple matchers simultaneously request suppression, the results from
163 163 the matcher with the highest priority will be returned.
164 164
165 165 Sometimes it is desirable to suppress most but not all other matchers;
166 166 this can be achieved by adding a set of identifiers of matchers which
167 167 should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key.
168 168
169 169 The suppression behaviour is user-configurable via
170 170 :std:configtrait:`IPCompleter.suppress_competing_matchers`.
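Sketched with a plain dict (the matcher identifier string is illustrative), a matcher that wants to win over everything except one named matcher could return:

```python
# MatcherResult-style dict requesting suppression of other matchers.
result = {
    "completions": [{"text": "only_me"}],
    # Suppress results from all other matchers...
    "suppress": True,
    # ...except matchers whose identifiers are listed here:
    "do_not_suppress": {"IPCompleter.jedi_matcher"},
}
print(result["suppress"])  # → True
```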
171 171 """
172 172
173 173
174 174 # Copyright (c) IPython Development Team.
175 175 # Distributed under the terms of the Modified BSD License.
176 176 #
177 177 # Some of this code originated from rlcompleter in the Python standard library
178 178 # Copyright (C) 2001 Python Software Foundation, www.python.org
179 179
180 180 from __future__ import annotations
181 181 import builtins as builtin_mod
182 182 import enum
183 183 import glob
184 184 import inspect
185 185 import itertools
186 186 import keyword
187 187 import ast
188 188 import os
189 189 import re
190 190 import string
191 191 import sys
192 192 import tokenize
193 193 import time
194 194 import unicodedata
195 195 import uuid
196 196 import warnings
197 197 from ast import literal_eval
198 198 from collections import defaultdict
199 199 from contextlib import contextmanager
200 200 from dataclasses import dataclass
201 201 from functools import cached_property, partial
202 202 from types import SimpleNamespace
203 203 from typing import (
204 Callable,
204 204 Iterable,
205 205 Iterator,
206 206 List,
207 207 Tuple,
208 208 Union,
209 209 Any,
210 210 Sequence,
211 211 Dict,
212 212 Optional,
213 213 TYPE_CHECKING,
214 214 Set,
215 215 Sized,
216 216 TypeVar,
217 217 Literal,
218 218 )
219 219
220 220 from IPython.core.guarded_eval import guarded_eval, EvaluationContext
221 221 from IPython.core.error import TryNext
222 222 from IPython.core.inputtransformer2 import ESC_MAGIC
223 223 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
224 224 from IPython.core.oinspect import InspectColors
225 225 from IPython.testing.skipdoctest import skip_doctest
226 226 from IPython.utils import generics
227 227 from IPython.utils.decorators import sphinx_options
228 228 from IPython.utils.dir2 import dir2, get_real_method
229 229 from IPython.utils.docs import GENERATING_DOCUMENTATION
230 230 from IPython.utils.path import ensure_dir_exists
231 231 from IPython.utils.process import arg_split
232 232 from traitlets import (
233 233 Bool,
234 234 Enum,
235 235 Int,
236 236 List as ListTrait,
237 237 Unicode,
238 238 Dict as DictTrait,
239 239 Union as UnionTrait,
240 240 observe,
241 241 )
242 242 from traitlets.config.configurable import Configurable
243 243
244 244 import __main__
245 245
246 246 # skip module doctests
247 247 __skip_doctest__ = True
248 248
249 249
250 250 try:
251 251 import jedi
252 252 jedi.settings.case_insensitive_completion = False
253 253 import jedi.api.helpers
254 254 import jedi.api.classes
255 255 JEDI_INSTALLED = True
256 256 except ImportError:
257 257 JEDI_INSTALLED = False
258 258
259 259
260 260 if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
261 261 from typing import cast
262 262 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
263 263 else:
264 264 from typing import Generic
265 265
266 266 def cast(type_, obj):
267 267 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
268 268 return obj
269 269
270 270 # do not require on runtime
271 271 NotRequired = Tuple # requires Python >=3.11
272 272 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
273 273 Protocol = object # requires Python >=3.8
274 274 TypeAlias = Any # requires Python >=3.10
275 275 TypeGuard = Generic # requires Python >=3.10
276 276 if GENERATING_DOCUMENTATION:
277 277 from typing import TypedDict
278 278
279 279 # -----------------------------------------------------------------------------
280 280 # Globals
281 281 # -----------------------------------------------------------------------------
282 282
283 283 # Ranges where we have most of the valid unicode names. We could be finer
284 284 # grained, but is it worth it for performance? While unicode has characters in
285 285 # the range 0..0x110000, we seem to have names for only about 10% of those
286 286 # (131808 as I write this). With the ranges below we cover them all, with a
287 287 # density of ~67%; the biggest next gap we could add only brings about 1% more
288 288 # density, and there are 600 gaps that would need hard coding.
289 289 _UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]
290 290
291 291 # Public API
292 292 __all__ = ["Completer", "IPCompleter"]
293 293
294 294 if sys.platform == 'win32':
295 295 PROTECTABLES = ' '
296 296 else:
297 297 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
298 298
299 299 # Protect against returning an enormous number of completions which the frontend
300 300 # may have trouble processing.
301 301 MATCHES_LIMIT = 500
302 302
303 303 # Completion type reported when no type can be inferred.
304 304 _UNKNOWN_TYPE = "<unknown>"
305 305
306 306 # sentinel value to signal lack of a match
307 307 not_found = object()
308 308
309 309 class ProvisionalCompleterWarning(FutureWarning):
310 310 """
311 311 Exception raised by an experimental feature in this module.
312 312
313 313 Wrap code in :any:`provisionalcompleter` context manager if you
314 314 are certain you want to use an unstable feature.
315 315 """
316 316 pass
317 317
318 318 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
319 319
320 320
321 321 @skip_doctest
322 322 @contextmanager
323 323 def provisionalcompleter(action='ignore'):
324 324 """
325 325 This context manager has to be used in any place where unstable completer
326 326 behaviour and APIs may be used.
327 327
328 328 >>> with provisionalcompleter():
329 329 ... completer.do_experimental_things() # works
330 330
331 331 >>> completer.do_experimental_things() # raises.
332 332
333 333 .. note::
334 334
335 335 Unstable
336 336
337 337 By using this context manager you agree that the APIs in use may change
338 338 without warning, and that you won't complain if they do so.
339 339
340 340 You also understand that, if the API is not to your liking, you should report
341 341 a bug to explain your use case upstream.
342 342
343 343 We'll be happy to get your feedback, feature requests, and improvements on
344 344 any of the unstable APIs!
345 345 """
346 346 with warnings.catch_warnings():
347 347 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
348 348 yield
349 349
350 350
351 def has_open_quotes(s):
351 def has_open_quotes(s: str) -> Union[str, bool]:
352 352 """Return whether a string has open quotes.
353 353
354 354 This simply counts whether the number of quote characters of either type in
355 355 the string is odd.
356 356
357 357 Returns
358 358 -------
359 359 If there is an open quote, the quote character is returned. Else, return
360 360 False.
361 361 """
362 362 # We check " first, then ', so complex cases with nested quotes will get
363 363 # the " to take precedence.
364 364 if s.count('"') % 2:
365 365 return '"'
366 366 elif s.count("'") % 2:
367 367 return "'"
368 368 else:
369 369 return False
370 370
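For instance (restating the function above so the snippet is self-contained), the double quote wins when both counts are odd:

```python
def has_open_quotes(s):
    # " is checked before ', so it takes precedence in nested cases
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    return False

print(has_open_quotes('print("it'))   # → '"'
print(has_open_quotes("it's open"))   # → "'"
print(has_open_quotes('"closed"'))    # → False
```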
371 371
372 def protect_filename(s, protectables=PROTECTABLES):
372 def protect_filename(s: str, protectables: str = PROTECTABLES) -> str:
373 373 """Escape a string to protect certain characters."""
374 374 if set(s) & set(protectables):
375 375 if sys.platform == "win32":
376 376 return '"' + s + '"'
377 377 else:
378 378 return "".join(("\\" + c if c in protectables else c) for c in s)
379 379 else:
380 380 return s
381 381
382 382
383 383 def expand_user(path:str) -> Tuple[str, bool, str]:
384 384 """Expand ``~``-style usernames in strings.
385 385
386 386 This is similar to :func:`os.path.expanduser`, but it computes and returns
387 387 extra information that will be useful if the input was being used in
388 388 computing completions, and you wish to return the completions with the
389 389 original '~' instead of its expanded value.
390 390
391 391 Parameters
392 392 ----------
393 393 path : str
394 394 String to be expanded. If no ~ is present, the output is the same as the
395 395 input.
396 396
397 397 Returns
398 398 -------
399 399 newpath : str
400 400 Result of ~ expansion in the input path.
401 401 tilde_expand : bool
402 402 Whether any expansion was performed or not.
403 403 tilde_val : str
404 404 The value that ~ was replaced with.
405 405 """
406 406 # Default values
407 407 tilde_expand = False
408 408 tilde_val = ''
409 409 newpath = path
410 410
411 411 if path.startswith('~'):
412 412 tilde_expand = True
413 413 rest = len(path)-1
414 414 newpath = os.path.expanduser(path)
415 415 if rest:
416 416 tilde_val = newpath[:-rest]
417 417 else:
418 418 tilde_val = newpath
419 419
420 420 return newpath, tilde_expand, tilde_val
421 421
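A self-contained sketch of the round trip with ``compress_user`` below (the function body is restated here so the snippet runs on its own):

```python
import os

def expand_user(path):
    # restated from above: expand "~" and remember what it was replaced with
    tilde_expand = False
    tilde_val = ""
    newpath = path
    if path.startswith("~"):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        tilde_val = newpath[:-rest] if rest else newpath
    return newpath, tilde_expand, tilde_val

newpath, expanded, val = expand_user("~/notebooks")
# compress_user (below) undoes the expansion for display purposes:
display_path = newpath.replace(val, "~") if expanded else newpath
print(display_path)
```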
422 422
423 423 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
424 424 """Does the opposite of expand_user, with its outputs.
425 425 """
426 426 if tilde_expand:
427 427 return path.replace(tilde_val, '~')
428 428 else:
429 429 return path
430 430
431 431
432 432 def completions_sorting_key(word):
433 433 """key for sorting completions
434 434
435 435 This does several things:
436 436
437 437 - Demote any completions starting with underscores to the end
438 438 - Insert any %magic and %%cellmagic completions in the alphabetical order
439 439 by their name
440 440 """
441 441 prio1, prio2 = 0, 0
442 442
443 443 if word.startswith('__'):
444 444 prio1 = 2
445 445 elif word.startswith('_'):
446 446 prio1 = 1
447 447
448 448 if word.endswith('='):
449 449 prio1 = -1
450 450
451 451 if word.startswith('%%'):
452 452 # If there's another % in there, this is something else, so leave it alone
453 453 if "%" not in word[2:]:
454 454 word = word[2:]
455 455 prio2 = 2
456 456 elif word.startswith('%'):
457 457 if "%" not in word[1:]:
458 458 word = word[1:]
459 459 prio2 = 1
460 460
461 461 return prio1, word, prio2
462 462
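Restating the key above, sorting a mixed list shows magics folded in alphabetically by name and underscore-prefixed names demoted to the end:

```python
def completions_sorting_key(word):
    # restated from above
    prio1, prio2 = 0, 0
    if word.startswith("__"):
        prio1 = 2
    elif word.startswith("_"):
        prio1 = 1
    if word.endswith("="):
        prio1 = -1
    if word.startswith("%%"):
        if "%" not in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith("%"):
        if "%" not in word[1:]:
            word = word[1:]
            prio2 = 1
    return prio1, word, prio2

words = ["_private", "%%timeit", "alpha", "__dunder", "%time"]
print(sorted(words, key=completions_sorting_key))
# → ['alpha', '%time', '%%timeit', '_private', '__dunder']
```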
463 463
464 464 class _FakeJediCompletion:
465 465 """
466 466 This is a workaround to communicate to the UI that Jedi has crashed and to
467 467 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
468 468
469 469 Added in IPython 6.0 so should likely be removed for 7.0
470 470
471 471 """
472 472
473 473 def __init__(self, name):
474 474
475 475 self.name = name
476 476 self.complete = name
477 477 self.type = 'crashed'
478 478 self.name_with_symbols = name
479 479 self.signature = ""
480 480 self._origin = "fake"
481 481 self.text = "crashed"
482 482
483 483 def __repr__(self):
484 484 return '<Fake completion object jedi has crashed>'
485 485
486 486
487 487 _JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
488 488
489 489
490 490 class Completion:
491 491 """
492 492 Completion object used and returned by IPython completers.
493 493
494 494 .. warning::
495 495
496 496 Unstable
497 497
498 498 This class is unstable, the API may change without warning.
499 499 It will also raise unless used in the proper context manager.
500 500
501 501 This acts as a middle-ground :any:`Completion` object between the
502 502 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
503 503 object. While Jedi needs a lot of information about the evaluator and how the
504 504 code should be run/inspected, PromptToolkit (and other frontends) mostly
505 505 need user-facing information:
506 506
507 507 - Which range should be replaced by what.
508 508 - Some metadata (like the completion type), or meta information to be
509 509 displayed to the user.
510 510
511 511 For debugging purposes we can also store the origin of the completion (``jedi``,
512 512 ``IPython.python_matches``, ``IPython.magics_matches``...).
513 513 """
514 514
515 515 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
516 516
517 517 def __init__(
518 518 self,
519 519 start: int,
520 520 end: int,
521 521 text: str,
522 522 *,
523 523 type: Optional[str] = None,
524 524 _origin="",
525 525 signature="",
526 526 ) -> None:
527 527 warnings.warn(
528 528 "``Completion`` is a provisional API (as of IPython 6.0). "
529 529 "It may change without warnings. "
530 530 "Use in corresponding context manager.",
531 531 category=ProvisionalCompleterWarning,
532 532 stacklevel=2,
533 533 )
534 534
535 535 self.start = start
536 536 self.end = end
537 537 self.text = text
538 538 self.type = type
539 539 self.signature = signature
540 540 self._origin = _origin
541 541
542 542 def __repr__(self):
543 543 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
544 544 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
545 545
546 546 def __eq__(self, other) -> bool:
547 547 """
548 548 Equality and hash do not include the type (as some completers may not be
549 549 able to infer the type), but are used to (partially) de-duplicate
550 550 completions.
551 551
552 552 Completely de-duplicating completions is a bit trickier than just
553 553 comparing, as it depends on the surrounding text, of which Completions are
554 554 not aware.
555 555 """
556 556 return self.start == other.start and \
557 557 self.end == other.end and \
558 558 self.text == other.text
559 559
560 560 def __hash__(self):
561 561 return hash((self.start, self.end, self.text))
562 562
563 563
564 564 class SimpleCompletion:
565 565 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
566 566
567 567 .. warning::
568 568
569 569 Provisional
570 570
571 571 This class is used to describe the currently supported attributes of
572 572 simple completion items, and any additional implementation details
573 573 should not be relied on. Additional attributes may be included in
574 574 future versions, and the meaning of ``text`` may be disambiguated from its
575 575 current dual meaning of "text to insert" and "text to use as a label".
576 576 """
577 577
578 578 __slots__ = ["text", "type"]
579 579
580 580 def __init__(self, text: str, *, type: Optional[str] = None):
581 581 self.text = text
582 582 self.type = type
583 583
584 584 def __repr__(self):
585 585 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
586 586
587 587
588 588 class _MatcherResultBase(TypedDict):
589 589 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
590 590
591 591 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
592 592 matched_fragment: NotRequired[str]
593 593
594 594 #: Whether to suppress results from all other matchers (True), some
595 595 #: matchers (set of identifiers) or none (False); default is False.
596 596 suppress: NotRequired[Union[bool, Set[str]]]
597 597
598 598 #: Identifiers of matchers which should NOT be suppressed when this matcher
599 599 #: requests to suppress all other matchers; defaults to an empty set.
600 600 do_not_suppress: NotRequired[Set[str]]
601 601
602 602 #: Are completions already ordered and should be left as-is? default is False.
603 603 ordered: NotRequired[bool]
604 604
605 605
606 606 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
607 607 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
608 608 """Result of new-style completion matcher."""
609 609
610 610 # note: TypedDict is added again to the inheritance chain
611 611 # in order to get __orig_bases__ for documentation
612 612
613 613 #: List of candidate completions
614 614 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
615 615
616 616
617 617 class _JediMatcherResult(_MatcherResultBase):
618 618 """Matching result returned by Jedi (will be processed differently)"""
619 619
620 620 #: list of candidate completions
621 621 completions: Iterator[_JediCompletionLike]
622 622
623 623
624 624 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
625 625 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
626 626
627 627
628 628 @dataclass
629 629 class CompletionContext:
630 630 """Completion context provided as an argument to matchers in the Matcher API v2."""
631 631
632 632 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
633 633 # which was not explicitly visible as an argument of the matcher, making any refactor
634 634 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
635 635 # from the completer, and make substituting them in sub-classes easier.
636 636
637 637 #: Relevant fragment of code directly preceding the cursor.
638 638 #: The extraction of token is implemented via splitter heuristic
639 639 #: (following readline behaviour for legacy reasons), which is user configurable
640 640 #: (by switching the greedy mode).
641 641 token: str
642 642
643 643 #: The full available content of the editor or buffer
644 644 full_text: str
645 645
646 646 #: Cursor position in the line (the same for ``full_text`` and ``text``).
647 647 cursor_position: int
648 648
649 649 #: Cursor line in ``full_text``.
650 650 cursor_line: int
651 651
652 652 #: The maximum number of completions that will be used downstream.
653 653 #: Matchers can use this information to abort early.
654 654 #: The built-in Jedi matcher is currently exempt from this limit.
655 655 # If not given, return all possible completions.
656 656 limit: Optional[int]
657 657
658 658 @cached_property
659 659 def text_until_cursor(self) -> str:
660 660 return self.line_with_cursor[: self.cursor_position]
661 661
662 662 @cached_property
663 663 def line_with_cursor(self) -> str:
664 664 return self.full_text.split("\n")[self.cursor_line]
665 665
666 666
667 667 #: Matcher results for API v2.
668 668 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
669 669
670 670
671 671 class _MatcherAPIv1Base(Protocol):
672 672 def __call__(self, text: str) -> List[str]:
673 673 """Call signature."""
674 674 ...
675 675
676 676 #: Used to construct the default matcher identifier
677 677 __qualname__: str
678 678
679 679
680 680 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
681 681 #: API version
682 682 matcher_api_version: Optional[Literal[1]]
683 683
684 684 def __call__(self, text: str) -> List[str]:
685 685 """Call signature."""
686 686 ...
687 687
688 688
689 689 #: Protocol describing Matcher API v1.
690 690 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
691 691
692 692
693 693 class MatcherAPIv2(Protocol):
694 694 """Protocol describing Matcher API v2."""
695 695
696 696 #: API version
697 697 matcher_api_version: Literal[2] = 2
698 698
699 699 def __call__(self, context: CompletionContext) -> MatcherResult:
700 700 """Call signature."""
701 701 ...
702 702
703 703 #: Used to construct the default matcher identifier
704 704 __qualname__: str
705 705
706 706
707 707 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
708 708
709 709
710 710 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
711 711 api_version = _get_matcher_api_version(matcher)
712 712 return api_version == 1
713 713
714 714
715 715 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
716 716 api_version = _get_matcher_api_version(matcher)
717 717 return api_version == 2
718 718
719 719
720 720 def _is_sizable(value: Any) -> TypeGuard[Sized]:
721 721 """Determines whether objects is sizable"""
722 722 return hasattr(value, "__len__")
723 723
724 724
725 725 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
726 726 """Determines whether objects is sizable"""
727 727 return hasattr(value, "__next__")
728 728
729 729
730 730 def has_any_completions(result: MatcherResult) -> bool:
731 731 """Check if any result includes any completions."""
732 732 completions = result["completions"]
733 733 if _is_sizable(completions):
734 734 return len(completions) != 0
735 735 if _is_iterator(completions):
736 736 try:
737 737 old_iterator = completions
738 738 first = next(old_iterator)
739 739 result["completions"] = cast(
740 740 Iterator[SimpleCompletion],
741 741 itertools.chain([first], old_iterator),
742 742 )
743 743 return True
744 744 except StopIteration:
745 745 return False
746 746 raise ValueError(
747 747 "Completions returned by matcher need to be an Iterator or a Sizable"
748 748 )
749 749
750 750
751 751 def completion_matcher(
752 752 *,
753 753 priority: Optional[float] = None,
754 754 identifier: Optional[str] = None,
755 755 api_version: int = 1,
756 ):
756 ) -> Callable[[Matcher], Matcher]:
757 757 """Adds attributes describing the matcher.
758 758
759 759 Parameters
760 760 ----------
761 761 priority : Optional[float]
762 762 The priority of the matcher, determines the order of execution of matchers.
763 763 Higher priority means that the matcher will be executed first. Defaults to 0.
764 764 identifier : Optional[str]
765 765 identifier of the matcher allowing users to modify the behaviour via traitlets,
766 766 and also used for debugging (will be passed as ``origin`` with the completions).
767 767
768 768 Defaults to matcher function's ``__qualname__`` (for example,
769 769 ``IPCompleter.file_matcher`` for the built-in matcher defined
770 770 as a ``file_matcher`` method of the ``IPCompleter`` class).
771 771 api_version: Optional[int]
772 772 version of the Matcher API used by this matcher.
773 773 Currently supported values are 1 and 2.
774 774 Defaults to 1.
775 775 """
776 776
777 777 def wrapper(func: Matcher):
778 778 func.matcher_priority = priority or 0 # type: ignore
779 779 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
780 780 func.matcher_api_version = api_version # type: ignore
781 781 if TYPE_CHECKING:
782 782 if api_version == 1:
783 783 func = cast(MatcherAPIv1, func)
784 784 elif api_version == 2:
785 785 func = cast(MatcherAPIv2, func)
786 786 return func
787 787
788 788 return wrapper
789 789
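Usage might look like the following sketch (the decorator is restated so the snippet is self-contained; the identifier and priority values are invented for illustration):

```python
# Restated from above: attach priority/identifier/api-version attributes.
def completion_matcher(*, priority=None, identifier=None, api_version=1):
    def wrapper(func):
        func.matcher_priority = priority or 0
        func.matcher_identifier = identifier or func.__qualname__
        func.matcher_api_version = api_version
        return func
    return wrapper

@completion_matcher(identifier="my_ext.color_matcher", priority=10)
def color_matcher(text):
    return [c for c in ("red", "green", "blue") if c.startswith(text)]

print(color_matcher.matcher_identifier)  # → my_ext.color_matcher
print(color_matcher("gr"))               # → ['green']
```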
790 790
791 791 def _get_matcher_priority(matcher: Matcher):
792 792 return getattr(matcher, "matcher_priority", 0)
793 793
794 794
795 795 def _get_matcher_id(matcher: Matcher):
796 796 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
797 797
798 798
799 799 def _get_matcher_api_version(matcher):
800 800 return getattr(matcher, "matcher_api_version", 1)
801 801
802 802
803 803 context_matcher = partial(completion_matcher, api_version=2)
804 804
805 805
806 806 _IC = Iterable[Completion]
807 807
808 808
809 809 def _deduplicate_completions(text: str, completions: _IC) -> _IC:
810 810 """
811 811 Deduplicate a set of completions.
812 812
813 813 .. warning::
814 814
815 815 Unstable
816 816
817 817 This function is unstable, API may change without warning.
818 818
819 819 Parameters
820 820 ----------
821 821 text : str
822 822 text that should be completed.
823 823 completions : Iterator[Completion]
824 824 iterator over the completions to deduplicate
825 825
826 826 Yields
827 827 ------
828 828 `Completions` objects
829 829 Completions coming from multiple sources may be different but end up having
830 830 the same effect when applied to ``text``. If this is the case, this will
831 831 consider the completions equal and only emit the first encountered.
832 832 Not folded into `completions()` yet, for debugging purposes and to detect when
833 833 the IPython completer returns things that Jedi does not, but it should be
834 834 at some point.
835 835 """
836 836 completions = list(completions)
837 837 if not completions:
838 838 return
839 839
840 840 new_start = min(c.start for c in completions)
841 841 new_end = max(c.end for c in completions)
842 842
843 843 seen = set()
844 844 for c in completions:
845 845 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
846 846 if new_text not in seen:
847 847 yield c
848 848 seen.add(new_text)
849 849
850 850
851 851 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
852 852 """
853 853 Rectify a set of completions to all have the same ``start`` and ``end``
854 854
855 855 .. warning::
856 856
857 857 Unstable
858 858
859 859 This function is unstable, the API may change without warning.
860 860 It will also raise unless used in the proper context manager.
861 861
862 862 Parameters
863 863 ----------
864 864 text : str
865 865 text that should be completed.
866 866 completions : Iterator[Completion]
867 867 iterator over the completions to rectify
868 868 _debug : bool
869 869 Log failed completion
870 870
871 871 Notes
872 872 -----
873 873 :any:`jedi.api.classes.Completion` objects returned by Jedi may not have the same start and end, though
874 874 the Jupyter Protocol requires them to. This will readjust
875 875 the completions to have the same ``start`` and ``end`` by padding both
876 876 extremities with the surrounding text.
877 877
878 878 During stabilisation this should support a ``_debug`` option to log which
879 879 completions are returned by the IPython completer but not found in Jedi, in
880 880 order to make upstream bug reports.
881 881 """
882 882 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
883 883 "It may change without warnings. "
884 884 "Use in corresponding context manager.",
885 885 category=ProvisionalCompleterWarning, stacklevel=2)
886 886
887 887 completions = list(completions)
888 888 if not completions:
889 889 return
890 890 starts = (c.start for c in completions)
891 891 ends = (c.end for c in completions)
892 892
893 893 new_start = min(starts)
894 894 new_end = max(ends)
895 895
896 896 seen_jedi = set()
897 897 seen_python_matches = set()
898 898 for c in completions:
899 899 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
900 900 if c._origin == 'jedi':
901 901 seen_jedi.add(new_text)
902 902 elif c._origin == "IPCompleter.python_matcher":
903 903 seen_python_matches.add(new_text)
904 904 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
905 905 diff = seen_python_matches.difference(seen_jedi)
906 906 if diff and _debug:
907 907 print('IPython.python matches have extras:', diff)
908 908
909 909
910 910 if sys.platform == 'win32':
911 911 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
912 912 else:
913 913 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
914 914
915 915 GREEDY_DELIMS = ' =\r\n'
916 916
917 917
918 918 class CompletionSplitter(object):
919 919 """An object to split an input line in a manner similar to readline.
920 920
921 921 By having our own implementation, we can expose readline-like completion in
922 922 a uniform manner to all frontends. This object only needs to be given the
923 923 line of text to be split and the cursor position on said line, and it
924 924 returns the 'word' to be completed on at the cursor after splitting the
925 925 entire line.
926 926
927 927 What characters are used as splitting delimiters can be controlled by
928 928 setting the ``delims`` attribute (this is a property that internally
929 929 automatically builds the necessary regular expression)"""
930 930
931 931 # Private interface
932 932
933 933 # A string of delimiter characters. The default value makes sense for
934 934 # IPython's most typical usage patterns.
935 935 _delims = DELIMS
936 936
937 937 # The expression (a normal string) to be compiled into a regular expression
938 938 # for actual splitting. We store it as an attribute mostly for ease of
939 939 # debugging, since this type of code can be so tricky to debug.
940 940 _delim_expr = None
941 941
942 942 # The regular expression that does the actual splitting
943 943 _delim_re = None
944 944
945 945 def __init__(self, delims=None):
946 946 delims = CompletionSplitter._delims if delims is None else delims
947 947 self.delims = delims
948 948
949 949 @property
950 950 def delims(self):
951 951 """Return the string of delimiter characters."""
952 952 return self._delims
953 953
954 954 @delims.setter
955 955 def delims(self, delims):
956 956 """Set the delimiters for line splitting."""
957 957 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
958 958 self._delim_re = re.compile(expr)
959 959 self._delims = delims
960 960 self._delim_expr = expr
961 961
962 962 def split_line(self, line, cursor_pos=None):
963 963 """Split a line of text with a cursor at the given position.
964 964 """
965 965 cut_line = line if cursor_pos is None else line[:cursor_pos]
966 966 return self._delim_re.split(cut_line)[-1]
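As an illustration of the splitting above, here is a minimal standalone sketch, duplicating the non-Windows ``DELIMS`` and the regular expression that the ``delims`` setter builds:

```python
import re

# Same delimiter set as the non-Windows DELIMS defined below
DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'

# The delims setter builds a character class escaping every delimiter
delim_re = re.compile('[' + ''.join('\\' + c for c in DELIMS) + ']')

def split_line(line, cursor_pos=None):
    # Keep only the text left of the cursor, then take the last
    # delimiter-separated chunk: that is the word being completed.
    cut_line = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(cut_line)[-1]

print(split_line('a = some_mod.some_fu'))  # some_mod.some_fu
print(split_line('print(foobar', 10))      # foob
```

Note that ``.`` is deliberately not a delimiter, so attribute chains stay in one "word".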
967 967
968 968
969 969
970 970 class Completer(Configurable):
971 971
972 972 greedy = Bool(
973 973 False,
974 974 help="""Activate greedy completion.
975 975
976 976 .. deprecated:: 8.8
977 977 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
978 978
979 979 When enabled in IPython 8.8 or newer, changes configuration as follows:
980 980
981 981 - ``Completer.evaluation = 'unsafe'``
982 982 - ``Completer.auto_close_dict_keys = True``
983 983 """,
984 984 ).tag(config=True)
985 985
986 986 evaluation = Enum(
987 987 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
988 988 default_value="limited",
989 989 help="""Policy for code evaluation under completion.
990 990
991 991 Successive options enable more eager evaluation for better
992 992 completion suggestions, including for nested dictionaries, nested lists,
993 993 or even results of function calls.
994 994 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
995 995 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
996 996
997 997 Allowed values are:
998 998
999 999 - ``forbidden``: no evaluation of code is permitted,
1000 1000 - ``minimal``: evaluation of literals and access to built-in namespace;
1001 1001 no item/attribute evaluation, no access to locals/globals,
1002 1002 no evaluation of any operations or comparisons.
1003 1003 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1004 1004 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1005 1005 :any:`object.__getitem__`) on allow-listed objects (for example:
1006 1006 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1007 1007 - ``unsafe``: evaluation of all methods and function calls but not of
1008 1008 syntax with side-effects like `del x`,
1009 1009 - ``dangerous``: completely arbitrary evaluation.
1010 1010 """,
1011 1011 ).tag(config=True)
1012 1012
1013 1013 use_jedi = Bool(default_value=JEDI_INSTALLED,
1014 1014 help="Experimental: Use Jedi to generate autocompletions. "
1015 1015 "Defaults to True if jedi is installed.").tag(config=True)
1016 1016
1017 1017 jedi_compute_type_timeout = Int(default_value=400,
1018 1018 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1019 1019 Set to 0 to stop computing types. Non-zero values lower than 100ms may hurt
1020 1020 performance by preventing jedi from building its cache.
1021 1021 """).tag(config=True)
1022 1022
1023 1023 debug = Bool(default_value=False,
1024 1024 help='Enable debug for the Completer. Mostly print extra '
1025 1025 'information for experimental jedi integration.')\
1026 1026 .tag(config=True)
1027 1027
1028 1028 backslash_combining_completions = Bool(True,
1029 1029 help="Enable unicode completions, e.g. \\alpha<tab> . "
1030 1030 "Includes completion of latex commands, unicode names, and expanding "
1031 1031 "unicode characters back to latex commands.").tag(config=True)
1032 1032
1033 1033 auto_close_dict_keys = Bool(
1034 1034 False,
1035 1035 help="""
1036 1036 Enable auto-closing dictionary keys.
1037 1037
1038 1038 When enabled, string keys will be suffixed with a final quote
1039 1039 (matching the opening quote), tuple keys will also receive a
1040 1040 separating comma if needed, and keys which are final will
1041 1041 receive a closing bracket (``]``).
1042 1042 """,
1043 1043 ).tag(config=True)
1044 1044
1045 1045 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1046 1046 """Create a new completer for the command line.
1047 1047
1048 1048 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1049 1049
1050 1050 If unspecified, the default namespace where completions are performed
1051 1051 is __main__ (technically, __main__.__dict__). Namespaces should be
1052 1052 given as dictionaries.
1053 1053
1054 1054 An optional second namespace can be given. This allows the completer
1055 1055 to handle cases where both the local and global scopes need to be
1056 1056 distinguished.
1057 1057 """
1058 1058
1059 1059 # Don't bind to namespace quite yet, but flag whether the user wants a
1060 1060 # specific namespace or to use __main__.__dict__. This will allow us
1061 1061 # to bind to __main__.__dict__ at completion time, not now.
1062 1062 if namespace is None:
1063 1063 self.use_main_ns = True
1064 1064 else:
1065 1065 self.use_main_ns = False
1066 1066 self.namespace = namespace
1067 1067
1068 1068 # The global namespace, if given, can be bound directly
1069 1069 if global_namespace is None:
1070 1070 self.global_namespace = {}
1071 1071 else:
1072 1072 self.global_namespace = global_namespace
1073 1073
1074 1074 self.custom_matchers = []
1075 1075
1076 1076 super(Completer, self).__init__(**kwargs)
1077 1077
1078 1078 def complete(self, text, state):
1079 1079 """Return the next possible completion for 'text'.
1080 1080
1081 1081 This is called successively with state == 0, 1, 2, ... until it
1082 1082 returns None. The completion should begin with 'text'.
1083 1083
1084 1084 """
1085 1085 if self.use_main_ns:
1086 1086 self.namespace = __main__.__dict__
1087 1087
1088 1088 if state == 0:
1089 1089 if "." in text:
1090 1090 self.matches = self.attr_matches(text)
1091 1091 else:
1092 1092 self.matches = self.global_matches(text)
1093 1093 try:
1094 1094 return self.matches[state]
1095 1095 except IndexError:
1096 1096 return None
1097 1097
1098 1098 def global_matches(self, text):
1099 1099 """Compute matches when text is a simple name.
1100 1100
1101 1101 Return a list of all keywords, built-in functions and names currently
1102 1102 defined in self.namespace or self.global_namespace that match.
1103 1103
1104 1104 """
1105 1105 matches = []
1106 1106 match_append = matches.append
1107 1107 n = len(text)
1108 1108 for lst in [
1109 1109 keyword.kwlist,
1110 1110 builtin_mod.__dict__.keys(),
1111 1111 list(self.namespace.keys()),
1112 1112 list(self.global_namespace.keys()),
1113 1113 ]:
1114 1114 for word in lst:
1115 1115 if word[:n] == text and word != "__builtins__":
1116 1116 match_append(word)
1117 1117
1118 1118 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1119 1119 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1120 1120 shortened = {
1121 1121 "_".join([sub[0] for sub in word.split("_")]): word
1122 1122 for word in lst
1123 1123 if snake_case_re.match(word)
1124 1124 }
1125 1125 for word in shortened.keys():
1126 1126 if word[:n] == text and word != "__builtins__":
1127 1127 match_append(shortened[word])
1128 1128 return matches
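The snake-case abbreviation matching in the second loop above can be sketched in isolation; the names below are hypothetical namespace entries used only for illustration:

```python
import re

# Same pattern as above: snake_case words with at least two segments
snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

# Hypothetical namespace entries, for illustration only
names = ["assert_frame_equal", "apply_along_axis", "plain"]

# Map each abbreviation (first letter of every segment) to the full name
shortened = {
    "_".join(sub[0] for sub in word.split("_")): word
    for word in names
    if snake_case_re.match(word)
}

text = "a_f"
matches = [shortened[word] for word in shortened if word.startswith(text)]
print(matches)  # ['assert_frame_equal']
```

Typing ``a_f<tab>`` thus expands to the full snake_case name whose segments start with those letters.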
1129 1129
1130 1130 def attr_matches(self, text):
1131 1131 """Compute matches when text contains a dot.
1132 1132
1133 1133 Assuming the text is of the form NAME.NAME....[NAME], and is
1134 1134 evaluatable in self.namespace or self.global_namespace, it will be
1135 1135 evaluated and its attributes (as revealed by dir()) are used as
1136 1136 possible completions. (For class instances, class members are
1137 1137 also considered.)
1138 1138
1139 1139 WARNING: this can still invoke arbitrary C code, if an object
1140 1140 with a __getattr__ hook is evaluated.
1141 1141
1142 1142 """
1143 1143 return self._attr_matches(text)[0]
1144 1144
1145 1145 # we use simple attribute matching with normal identifiers.
1146 1146 _ATTR_MATCH_RE = re.compile(r"(.+)\.(\w*)$")
1147 1147
1148 def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
1149
1148 def _attr_matches(
1149 self, text: str, include_prefix: bool = True
1150 ) -> Tuple[Sequence[str], str]:
1150 1151 m2 = self._ATTR_MATCH_RE.match(self.line_buffer)
1151 1152 if not m2:
1152 1153 return [], ""
1153 1154 expr, attr = m2.group(1, 2)
1154 1155
1155 1156 obj = self._evaluate_expr(expr)
1156 1157
1157 1158 if obj is not_found:
1158 1159 return [], ""
1159 1160
1160 1161 if self.limit_to__all__ and hasattr(obj, '__all__'):
1161 1162 words = get__all__entries(obj)
1162 1163 else:
1163 1164 words = dir2(obj)
1164 1165
1165 1166 try:
1166 1167 words = generics.complete_object(obj, words)
1167 1168 except TryNext:
1168 1169 pass
1169 1170 except AssertionError:
1170 1171 raise
1171 1172 except Exception:
1172 1173 # Silence errors from completion function
1173 1174 pass
1174 1175 # Build match list to return
1175 1176 n = len(attr)
1176 1177
1177 1178 # Note: ideally we would just return words here and the prefix
1178 1179 # reconciliator would know that we intend to append to rather than
1179 1180 # replace the input text; this requires refactoring to return range
1180 1181 # which ought to be replaced (as does jedi).
1181 1182 if include_prefix:
1182 1183 tokens = _parse_tokens(expr)
1183 1184 rev_tokens = reversed(tokens)
1184 1185 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1185 1186 name_turn = True
1186 1187
1187 1188 parts = []
1188 1189 for token in rev_tokens:
1189 1190 if token.type in skip_over:
1190 1191 continue
1191 1192 if token.type == tokenize.NAME and name_turn:
1192 1193 parts.append(token.string)
1193 1194 name_turn = False
1194 1195 elif (
1195 1196 token.type == tokenize.OP and token.string == "." and not name_turn
1196 1197 ):
1197 1198 parts.append(token.string)
1198 1199 name_turn = True
1199 1200 else:
1200 1201 # short-circuit if not empty nor name token
1201 1202 break
1202 1203
1203 1204 prefix_after_space = "".join(reversed(parts))
1204 1205 else:
1205 1206 prefix_after_space = ""
1206 1207
1207 1208 return (
1208 1209 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1209 1210 "." + attr,
1210 1211 )
1211 1212
1212 1213 def _trim_expr(self, code: str) -> str:
1213 1214 """
1214 1215 Trim the code until it is a valid expression and not a tuple;
1215 1216
1216 1217 return the trimmed expression for guarded_eval.
1217 1218 """
1218 1219 while code:
1219 1220 code = code[1:]
1220 1221 try:
1221 1222 res = ast.parse(code)
1222 1223 except SyntaxError:
1223 1224 continue
1224 1225
1225 1226 assert res is not None
1226 1227 if len(res.body) != 1:
1227 1228 continue
1228 1229 expr = res.body[0].value
1229 1230 if isinstance(expr, ast.Tuple) and not code[-1] == ")":
1230 1231 # we skip implicit tuple, like when trimming `fun(a,b`<completion>
1231 1232 # as `a,b` would be a tuple, and we actually expect to get only `b`
1232 1233 continue
1233 1234 return code
1234 1235 return ""
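The trimming loop can be exercised standalone; this is a sketch mirroring the method above as a free function:

```python
import ast

def trim_expr(code: str) -> str:
    # Drop characters from the left until the remainder parses as a single
    # expression that is not an implicit tuple.
    while code:
        code = code[1:]
        try:
            res = ast.parse(code)
        except SyntaxError:
            continue
        if len(res.body) != 1:
            continue
        expr = res.body[0].value
        if isinstance(expr, ast.Tuple) and not code[-1] == ")":
            # `fun(a, b` leaves `a, b`, an implicit tuple; keep trimming
            # until only `b` remains
            continue
        return code
    return ""

print(trim_expr("fun(a, b"))  # b
```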
1235 1236
1236 1237 def _evaluate_expr(self, expr):
1237 1238 obj = not_found
1238 1239 done = False
1239 1240 while not done and expr:
1240 1241 try:
1241 1242 obj = guarded_eval(
1242 1243 expr,
1243 1244 EvaluationContext(
1244 1245 globals=self.global_namespace,
1245 1246 locals=self.namespace,
1246 1247 evaluation=self.evaluation,
1247 1248 ),
1248 1249 )
1249 1250 done = True
1250 1251 except Exception as e:
1251 1252 if self.debug:
1252 1253 print("Evaluation exception", e)
1253 1254 # trim the expression to remove any invalid prefix
1254 1255 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1255 1256 # where parenthesis is not closed.
1256 1257 # TODO: make this faster by reusing parts of the computation?
1257 1258 expr = self._trim_expr(expr)
1258 1259 return obj
1259 1260
1260 1261 def get__all__entries(obj):
1261 1262 """returns the strings in the __all__ attribute"""
1262 1263 try:
1263 1264 words = getattr(obj, '__all__')
1264 1265 except Exception:
1265 1266 return []
1266 1267
1267 1268 return [w for w in words if isinstance(w, str)]
1268 1269
1269 1270
1270 1271 class _DictKeyState(enum.Flag):
1271 1272 """Represent state of the key match in context of other possible matches.
1272 1273
1273 1274 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1274 1275 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1275 1276 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1276 1277 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | IN_TUPLE}`
1277 1278 """
1278 1279
1279 1280 BASELINE = 0
1280 1281 END_OF_ITEM = enum.auto()
1281 1282 END_OF_TUPLE = enum.auto()
1282 1283 IN_TUPLE = enum.auto()
1283 1284
1284 1285
1285 1286 def _parse_tokens(c):
1286 1287 """Parse tokens even if there is an error."""
1287 1288 tokens = []
1288 1289 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1289 1290 while True:
1290 1291 try:
1291 1292 tokens.append(next(token_generator))
1292 1293 except tokenize.TokenError:
1293 1294 return tokens
1294 1295 except StopIteration:
1295 1296 return tokens
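For instance, tokenizing an unfinished subscript still yields the leading tokens; a minimal sketch of the helper above:

```python
import tokenize

def parse_tokens(c):
    # tokenize raises TokenError at end of input on incomplete code (for
    # example an unclosed bracket); collect whatever tokens were produced
    # before that point.
    tokens = []
    token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(token_generator))
        except (tokenize.TokenError, StopIteration):
            return tokens

tokens = parse_tokens("d[1 +")
print([t.string for t in tokens if t.type == tokenize.NUMBER])  # ['1']
```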
1296 1297
1297 1298
1298 1299 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1299 1300 """Match any valid Python numeric literal in a prefix of dictionary keys.
1300 1301
1301 1302 References:
1302 1303 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1303 1304 - https://docs.python.org/3/library/tokenize.html
1304 1305 """
1305 1306 if prefix[-1].isspace():
1306 1307 # if user typed a space we do not have anything to complete
1307 1308 # even if there was a valid number token before
1308 1309 return None
1309 1310 tokens = _parse_tokens(prefix)
1310 1311 rev_tokens = reversed(tokens)
1311 1312 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1312 1313 number = None
1313 1314 for token in rev_tokens:
1314 1315 if token.type in skip_over:
1315 1316 continue
1316 1317 if number is None:
1317 1318 if token.type == tokenize.NUMBER:
1318 1319 number = token.string
1319 1320 continue
1320 1321 else:
1321 1322 # we did not match a number
1322 1323 return None
1323 1324 if token.type == tokenize.OP:
1324 1325 if token.string == ",":
1325 1326 break
1326 1327 if token.string in {"+", "-"}:
1327 1328 number = token.string + number
1328 1329 else:
1329 1330 return None
1330 1331 return number
1331 1332
1332 1333
1333 1334 _INT_FORMATS = {
1334 1335 "0b": bin,
1335 1336 "0o": oct,
1336 1337 "0x": hex,
1337 1338 }
1338 1339
1339 1340
1340 1341 def match_dict_keys(
1341 1342 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1342 1343 prefix: str,
1343 1344 delims: str,
1344 1345 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1345 1346 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1346 1347 """Used by dict_key_matches, matching the prefix to a list of keys
1347 1348
1348 1349 Parameters
1349 1350 ----------
1350 1351 keys
1351 1352 list of keys in dictionary currently being completed.
1352 1353 prefix
1353 1354 Part of the text already typed by the user. E.g. `mydict[b'fo`
1354 1355 delims
1355 1356 String of delimiters to consider when finding the current key.
1356 1357 extra_prefix : optional
1357 1358 Part of the text already typed in multi-key index cases. E.g. for
1358 1359 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1359 1360
1360 1361 Returns
1361 1362 -------
1362 1363 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1363 1364 ``quote`` being the quote that needs to be used to close the current string,
1364 1365 ``token_start`` the position where the replacement should start occurring,
1365 1366 and ``matched`` a dictionary whose keys are the replacement/completion
1366 1367 strings and whose values indicate the state of each match.
1367 1368 """
1368 1369 prefix_tuple = extra_prefix if extra_prefix else ()
1369 1370
1370 1371 prefix_tuple_size = sum(
1371 1372 [
1372 1373 # for pandas, do not count slices as taking space
1373 1374 not isinstance(k, slice)
1374 1375 for k in prefix_tuple
1375 1376 ]
1376 1377 )
1377 1378 text_serializable_types = (str, bytes, int, float, slice)
1378 1379
1379 1380 def filter_prefix_tuple(key):
1380 1381 # Reject too short keys
1381 1382 if len(key) <= prefix_tuple_size:
1382 1383 return False
1383 1384 # Reject keys which cannot be serialised to text
1384 1385 for k in key:
1385 1386 if not isinstance(k, text_serializable_types):
1386 1387 return False
1387 1388 # Reject keys that do not match the prefix
1388 1389 for k, pt in zip(key, prefix_tuple):
1389 1390 if k != pt and not isinstance(pt, slice):
1390 1391 return False
1391 1392 # All checks passed!
1392 1393 return True
1393 1394
1394 1395 filtered_key_is_final: Dict[Union[str, bytes, int, float], _DictKeyState] = (
1395 1396 defaultdict(lambda: _DictKeyState.BASELINE)
1396 1397 )
1397 1398
1398 1399 for k in keys:
1399 1400 # If at least one of the matches is not final, mark as undetermined.
1400 1401 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1401 1402 # `111` appears final on first match but is not final on the second.
1402 1403
1403 1404 if isinstance(k, tuple):
1404 1405 if filter_prefix_tuple(k):
1405 1406 key_fragment = k[prefix_tuple_size]
1406 1407 filtered_key_is_final[key_fragment] |= (
1407 1408 _DictKeyState.END_OF_TUPLE
1408 1409 if len(k) == prefix_tuple_size + 1
1409 1410 else _DictKeyState.IN_TUPLE
1410 1411 )
1411 1412 elif prefix_tuple_size > 0:
1412 1413 # we are completing a tuple but this key is not a tuple,
1413 1414 # so we should ignore it
1414 1415 pass
1415 1416 else:
1416 1417 if isinstance(k, text_serializable_types):
1417 1418 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1418 1419
1419 1420 filtered_keys = filtered_key_is_final.keys()
1420 1421
1421 1422 if not prefix:
1422 1423 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1423 1424
1424 1425 quote_match = re.search("(?:\"|')", prefix)
1425 1426 is_user_prefix_numeric = False
1426 1427
1427 1428 if quote_match:
1428 1429 quote = quote_match.group()
1429 1430 valid_prefix = prefix + quote
1430 1431 try:
1431 1432 prefix_str = literal_eval(valid_prefix)
1432 1433 except Exception:
1433 1434 return "", 0, {}
1434 1435 else:
1435 1436 # If it does not look like a string, let's assume
1436 1437 # we are dealing with a number or variable.
1437 1438 number_match = _match_number_in_dict_key_prefix(prefix)
1438 1439
1439 1440 # We do not want the key matcher to suggest variable names so we yield:
1440 1441 if number_match is None:
1441 1442 # The alternative would be to assume that user forgort the quote
1442 1443 # and if the substring matches, suggest adding it at the start.
1443 1444 return "", 0, {}
1444 1445
1445 1446 prefix_str = number_match
1446 1447 is_user_prefix_numeric = True
1447 1448 quote = ""
1448 1449
1449 1450 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1450 1451 token_match = re.search(pattern, prefix, re.UNICODE)
1451 1452 assert token_match is not None # silence mypy
1452 1453 token_start = token_match.start()
1453 1454 token_prefix = token_match.group()
1454 1455
1455 1456 matched: Dict[str, _DictKeyState] = {}
1456 1457
1457 1458 str_key: Union[str, bytes]
1458 1459
1459 1460 for key in filtered_keys:
1460 1461 if isinstance(key, (int, float)):
1461 1462 # This key is a number but the user did not type a number.
1462 1463 if not is_user_prefix_numeric:
1463 1464 continue
1464 1465 str_key = str(key)
1465 1466 if isinstance(key, int):
1466 1467 int_base = prefix_str[:2].lower()
1467 1468 # if user typed integer using binary/oct/hex notation:
1468 1469 if int_base in _INT_FORMATS:
1469 1470 int_format = _INT_FORMATS[int_base]
1470 1471 str_key = int_format(key)
1471 1472 else:
1472 1473 # User typed a number but this key is a string.
1473 1474 if is_user_prefix_numeric:
1474 1475 continue
1475 1476 str_key = key
1476 1477 try:
1477 1478 if not str_key.startswith(prefix_str):
1478 1479 continue
1479 1480 except (AttributeError, TypeError, UnicodeError):
1480 1481 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1481 1482 continue
1482 1483
1483 1484 # reformat remainder of key to begin with prefix
1484 1485 rem = str_key[len(prefix_str) :]
1485 1486 # force repr wrapped in '
1486 1487 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1487 1488 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1488 1489 if quote == '"':
1489 1490 # The entered prefix is quoted with ",
1490 1491 # but the match is quoted with '.
1491 1492 # A contained " hence needs escaping for comparison:
1492 1493 rem_repr = rem_repr.replace('"', '\\"')
1493 1494
1494 1495 # then reinsert prefix from start of token
1495 1496 match = "%s%s" % (token_prefix, rem_repr)
1496 1497
1497 1498 matched[match] = filtered_key_is_final[key]
1498 1499 return quote, token_start, matched
1499 1500
1500 1501
1501 1502 def cursor_to_position(text:str, line:int, column:int)->int:
1502 1503 """
1503 1504 Convert the (line,column) position of the cursor in text to an offset in a
1504 1505 string.
1505 1506
1506 1507 Parameters
1507 1508 ----------
1508 1509 text : str
1509 1510 The text in which to calculate the cursor offset
1510 1511 line : int
1511 1512 Line of the cursor; 0-indexed
1512 1513 column : int
1513 1514 Column of the cursor 0-indexed
1514 1515
1515 1516 Returns
1516 1517 -------
1517 1518 Position of the cursor in ``text``, 0-indexed.
1518 1519
1519 1520 See Also
1520 1521 --------
1521 1522 position_to_cursor : reciprocal of this function
1522 1523
1523 1524 """
1524 1525 lines = text.split('\n')
1525 1526 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1526 1527
1527 1528 return sum(len(line) + 1 for line in lines[:line]) + column
1528 1529
1529 1530 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1530 1531 """
1531 1532 Convert the position of the cursor in text (0 indexed) to a line
1532 1533 number(0-indexed) and a column number (0-indexed) pair
1533 1534
1534 1535 Position should be a valid position in ``text``.
1535 1536
1536 1537 Parameters
1537 1538 ----------
1538 1539 text : str
1539 1540 The text in which to calculate the cursor offset
1540 1541 offset : int
1541 1542 Position of the cursor in ``text``, 0-indexed.
1542 1543
1543 1544 Returns
1544 1545 -------
1545 1546 (line, column) : (int, int)
1546 1547 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1547 1548
1548 1549 See Also
1549 1550 --------
1550 1551 cursor_to_position : reciprocal of this function
1551 1552
1552 1553 """
1553 1554
1554 1555 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1555 1556
1556 1557 before = text[:offset]
1557 1558 blines = before.split('\n') # ! splitlines trims trailing \n
1558 1559 line = before.count('\n')
1559 1560 col = len(blines[-1])
1560 1561 return line, col
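The two conversions above are inverses of each other; a standalone sketch of the arithmetic:

```python
def cursor_to_position(text, line, column):
    # Sum the lengths of all full lines before the cursor line
    # (+1 for each newline), then add the column offset.
    lines = text.split('\n')
    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text, offset):
    # Count newlines before the offset for the line, and measure the
    # length of the last partial line for the column.
    before = text[:offset]
    return before.count('\n'), len(before.split('\n')[-1])

text = "ab\ncd\nef"
pos = cursor_to_position(text, 1, 1)
print(pos)                            # 4 -- points at 'd'
print(position_to_cursor(text, pos))  # (1, 1)
```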
1561 1562
1562 1563
1563 1564 def _safe_isinstance(obj, module, class_name, *attrs):
1564 1565 """Checks if obj is an instance of module.class_name if loaded
1565 1566 """Check whether obj is an instance of module.class_name, if that module is loaded
1566 1567 if module in sys.modules:
1567 1568 m = sys.modules[module]
1568 1569 for attr in [class_name, *attrs]:
1569 1570 m = getattr(m, attr)
1570 1571 return isinstance(obj, m)
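A quick sketch of the same idea: the check is skipped entirely (returning ``None``) when the module was never imported, which avoids importing heavy optional dependencies just to test a type:

```python
import sys

def safe_isinstance(obj, module, class_name, *attrs):
    # Only resolve the class when the module is already imported;
    # returns None (falsy) otherwise.
    if module in sys.modules:
        m = sys.modules[module]
        for attr in [class_name, *attrs]:
            m = getattr(m, attr)
        return isinstance(obj, m)

print(safe_isinstance({}, "builtins", "dict"))        # True
print(safe_isinstance({}, "never_imported_mod", "X")) # None
```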
1571 1572
1572 1573
1573 1574 @context_matcher()
1574 1575 def back_unicode_name_matcher(context: CompletionContext):
1575 1576 """Match Unicode characters back to Unicode name
1576 1577
1577 1578 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API.
1578 1579 """
1579 1580 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1580 1581 return _convert_matcher_v1_result_to_v2(
1581 1582 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1582 1583 )
1583 1584
1584 1585
1585 1586 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1586 1587 """Match Unicode characters back to Unicode name
1587 1588
1588 1589 This does ``β˜ƒ`` -> ``\\snowman``
1589 1590
1590 1591 Note that snowman is not a valid python3 combining character but will be expanded,
1591 1592 though the completion machinery will not recombine it back to the snowman character.
1592 1593
1593 1594 This will also not back-complete standard sequences like \\n, \\b, ...
1594 1595
1595 1596 .. deprecated:: 8.6
1596 1597 You can use :meth:`back_unicode_name_matcher` instead.
1597 1598
1598 1599 Returns
1599 1600 =======
1600 1601 -------
1601 1602 Return a tuple with two elements:
1602 1603
1603 1604 - The Unicode character that was matched (preceded by a backslash), or an
1604 1605 empty string,
1605 1606 - a sequence (of length 1) with the name of the matched Unicode character,
1606 1607 preceded by a backslash, or empty if no match.
1607 1608 """
1608 1609 if len(text)<2:
1609 1610 return '', ()
1610 1611 maybe_slash = text[-2]
1611 1612 if maybe_slash != '\\':
1612 1613 return '', ()
1613 1614
1614 1615 char = text[-1]
1615 1616 # no expand on quote for completion in strings.
1616 1617 # nor backcomplete standard ascii keys
1617 1618 if char in string.ascii_letters or char in ('"',"'"):
1618 1619 return '', ()
1619 1620 try :
1620 1621 unic = unicodedata.name(char)
1621 1622 return '\\'+char,('\\'+unic,)
1622 1623 except (KeyError, ValueError): # unicodedata.name raises ValueError for unnamed characters
1623 1624 pass
1624 1625 return '', ()
1625 1626
1626 1627
1627 1628 @context_matcher()
1628 1629 def back_latex_name_matcher(context: CompletionContext):
1629 1630 """Match latex characters back to unicode name
1630 1631
1631 1632 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1632 1633 """
1633 1634 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1634 1635 return _convert_matcher_v1_result_to_v2(
1635 1636 matches, type="latex", fragment=fragment, suppress_if_matches=True
1636 1637 )
1637 1638
1638 1639
1639 1640 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1640 1641 """Match latex characters back to unicode name
1641 1642
1642 1643 This does ``\\β„΅`` -> ``\\aleph``
1643 1644
1644 1645 .. deprecated:: 8.6
1645 1646 You can use :meth:`back_latex_name_matcher` instead.
1646 1647 """
1647 1648 if len(text)<2:
1648 1649 return '', ()
1649 1650 maybe_slash = text[-2]
1650 1651 if maybe_slash != '\\':
1651 1652 return '', ()
1652 1653
1653 1654
1654 1655 char = text[-1]
1655 1656 # no expand on quote for completion in strings.
1656 1657 # nor backcomplete standard ascii keys
1657 1658 if char in string.ascii_letters or char in ('"',"'"):
1658 1659 return '', ()
1659 1660 try :
1660 1661 latex = reverse_latex_symbol[char]
1661 1662 # '\\' replace the \ as well
1662 1663 return '\\'+char,[latex]
1663 1664 except KeyError:
1664 1665 pass
1665 1666 return '', ()
1666 1667
1667 1668
1668 1669 def _formatparamchildren(parameter) -> str:
1669 1670 """
1670 1671 Get parameter name and value from Jedi Private API
1671 1672
1672 1673 Jedi does not expose a simple way to get `param=value` from its API.
1673 1674
1674 1675 Parameters
1675 1676 ----------
1676 1677 parameter
1677 1678 Jedi's function `Param`
1678 1679
1679 1680 Returns
1680 1681 -------
1681 1682 A string like 'a', 'b=1', '*args', '**kwargs'
1682 1683
1683 1684 """
1684 1685 description = parameter.description
1685 1686 if not description.startswith('param '):
1686 1687 raise ValueError('Jedi function parameter description has changed format. '
1687 1688 'Expected "param ...", found %r.' % description)
1688 1689 return description[6:]
1689 1690
1690 1691 def _make_signature(completion)-> str:
1691 1692 """
1692 1693 Make the signature from a jedi completion
1693 1694
1694 1695 Parameters
1695 1696 ----------
1696 1697 completion : jedi.Completion
1697 1698 a Jedi completion object; it may or may not correspond to a function
1698 1699
1699 1700 Returns
1700 1701 -------
1701 1702 a string consisting of the function signature, with the parentheses but
1702 1703 without the function name, for example:
1703 1704 `(a, *args, b=1, **kwargs)`
1704 1705
1705 1706 """
1706 1707
1707 1708 # it looks like this might work on jedi 0.17
1708 1709 if hasattr(completion, 'get_signatures'):
1709 1710 signatures = completion.get_signatures()
1710 1711 if not signatures:
1711 1712 return '(?)'
1712 1713
1713 1714 c0 = signatures[0]
1714 1715 return '('+c0.to_string().split('(', maxsplit=1)[1]
1715 1716
1716 1717 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1717 1718 for p in signature.defined_names()) if f])
1718 1719
1719 1720
1720 1721 _CompleteResult = Dict[str, MatcherResult]
1721 1722
1722 1723
1723 1724 DICT_MATCHER_REGEX = re.compile(
1724 1725 r"""(?x)
1725 1726 ( # match dict-referring - or any get item object - expression
1726 1727 .+
1727 1728 )
1728 1729 \[ # open bracket
1729 1730 \s* # and optional whitespace
1730 1731 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1731 1732 # and slices
1732 1733 ((?:(?:
1733 1734 (?: # closed string
1734 1735 [uUbB]? # string prefix (r not handled)
1735 1736 (?:
1736 1737 '(?:[^']|(?<!\\)\\')*'
1737 1738 |
1738 1739 "(?:[^"]|(?<!\\)\\")*"
1739 1740 )
1740 1741 )
1741 1742 |
1742 1743 # capture integers and slices
1743 1744 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1744 1745 |
1745 1746 # integer in bin/hex/oct notation
1746 1747 0[bBxXoO]_?(?:\w|\d)+
1747 1748 )
1748 1749 \s*,\s*
1749 1750 )*)
1750 1751 ((?:
1751 1752 (?: # unclosed string
1752 1753 [uUbB]? # string prefix (r not handled)
1753 1754 (?:
1754 1755 '(?:[^']|(?<!\\)\\')*
1755 1756 |
1756 1757 "(?:[^"]|(?<!\\)\\")*
1757 1758 )
1758 1759 )
1759 1760 |
1760 1761 # unfinished integer
1761 1762 (?:[-+]?\d+)
1762 1763 |
1763 1764 # integer in bin/hex/oct notation
1764 1765 0[bBxXoO]_?(?:\w|\d)+
1765 1766 )
1766 1767 )?
1767 1768 $
1768 1769 """
1769 1770 )
1770 1771
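``DICT_MATCHER_REGEX`` anchors at the end of the text before the cursor and captures three things: the subscripted expression, any fully-typed keys, and the unfinished key prefix. A deliberately simplified toy version (single-quoted string keys only, not the full pattern above) shows the capture-group structure:

```python
import re

# A much-simplified sketch of the dict-key matching idea: capture the
# subscripted expression, any completed keys, and the unfinished key
# prefix at the cursor.  (Toy pattern, not the full DICT_MATCHER_REGEX.)
SIMPLE_DICT_KEY = re.compile(
    r"""
    (.+)                      # expression being subscripted
    \[\s*
    ((?:'[^']*'\s*,\s*)*)     # zero or more completed single-quoted keys
    ('[^']*)?                 # an optional unfinished quoted key
    $
    """,
    re.VERBOSE,
)

m = SIMPLE_DICT_KEY.search("my_dict['ab")
expr, done_keys, key_prefix = m.groups()
print(expr, key_prefix)  # my_dict 'ab
```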
1771 1772
1772 1773 def _convert_matcher_v1_result_to_v2(
1773 1774 matches: Sequence[str],
1774 1775 type: str,
1775 1776 fragment: Optional[str] = None,
1776 1777 suppress_if_matches: bool = False,
1777 1778 ) -> SimpleMatcherResult:
1778 1779 """Utility to help with transition"""
1779 1780 result = {
1780 1781 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1781 1782 "suppress": (True if matches else False) if suppress_if_matches else False,
1782 1783 }
1783 1784 if fragment is not None:
1784 1785 result["matched_fragment"] = fragment
1785 1786 return cast(SimpleMatcherResult, result)
1786 1787
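The conversion helper above adapts v1 matchers (which return a bare list of strings) to the v2 ``MatcherResult`` shape. A hypothetical standalone sketch using plain dicts in place of ``SimpleCompletion`` (the real class carries more metadata) illustrates the resulting structure:

```python
# Hypothetical standalone sketch of _convert_matcher_v1_result_to_v2:
# a v1 matcher returns a bare list of strings, while the v2 API wants a
# dict of typed completion records (plain dicts here stand in for
# SimpleCompletion objects).
def convert_v1_to_v2(matches, type, fragment=None, suppress_if_matches=False):
    result = {
        "completions": [{"text": m, "type": type} for m in matches],
        "suppress": bool(matches) if suppress_if_matches else False,
    }
    if fragment is not None:
        result["matched_fragment"] = fragment
    return result

r = convert_v1_to_v2(["%time", "%timeit"], type="magic", suppress_if_matches=True)
print(r["suppress"])  # True
```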
1787 1788
1788 1789 class IPCompleter(Completer):
1789 1790 """Extension of the completer class with IPython-specific features"""
1790 1791
1791 1792 @observe('greedy')
1792 1793 def _greedy_changed(self, change):
1793 1794 """update the splitter and readline delims when greedy is changed"""
1794 1795 if change["new"]:
1795 1796 self.evaluation = "unsafe"
1796 1797 self.auto_close_dict_keys = True
1797 1798 self.splitter.delims = GREEDY_DELIMS
1798 1799 else:
1799 1800 self.evaluation = "limited"
1800 1801 self.auto_close_dict_keys = False
1801 1802 self.splitter.delims = DELIMS
1802 1803
1803 1804 dict_keys_only = Bool(
1804 1805 False,
1805 1806 help="""
1806 1807 Whether to show dict key matches only.
1807 1808
1808 1809 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1809 1810 """,
1810 1811 )
1811 1812
1812 1813 suppress_competing_matchers = UnionTrait(
1813 1814 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1814 1815 default_value=None,
1815 1816 help="""
1816 1817 Whether to suppress completions from other *Matchers*.
1817 1818
1818 1819 When set to ``None`` (default) the matchers will attempt to auto-detect
1819 1820 whether suppression of other matchers is desirable. For example, at
1820 1821 the beginning of a line followed by `%` we expect a magic completion
1821 1822 to be the only applicable option, and after ``my_dict['`` we usually
1822 1823 expect a completion with an existing dictionary key.
1823 1824
1824 1825 If you want to disable this heuristic and see completions from all matchers,
1825 1826 set ``IPCompleter.suppress_competing_matchers = False``.
1826 1827 To disable the heuristic for specific matchers provide a dictionary mapping:
1827 1828 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1828 1829
1829 1830 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1830 1831 completions to the set of matchers with the highest priority;
1831 1832 this is equivalent to ``IPCompleter.merge_completions`` and
1832 1833 can be beneficial for performance, but will sometimes omit relevant
1833 1834 candidates from matchers further down the priority list.
1834 1835 """,
1835 1836 ).tag(config=True)
1836 1837
1837 1838 merge_completions = Bool(
1838 1839 True,
1839 1840 help="""Whether to merge completion results into a single list
1840 1841
1841 1842 If False, only the completion results from the first non-empty
1842 1843 completer will be returned.
1843 1844
1844 1845 As of version 8.6.0, setting the value to ``False`` is an alias for:
1845 1846 ``IPCompleter.suppress_competing_matchers = True``.
1846 1847 """,
1847 1848 ).tag(config=True)
1848 1849
1849 1850 disable_matchers = ListTrait(
1850 1851 Unicode(),
1851 1852 help="""List of matchers to disable.
1852 1853
1853 1854 The list should contain matcher identifiers (see :any:`completion_matcher`).
1854 1855 """,
1855 1856 ).tag(config=True)
1856 1857
1857 1858 omit__names = Enum(
1858 1859 (0, 1, 2),
1859 1860 default_value=2,
1860 1861 help="""Instruct the completer to omit private method names
1861 1862
1862 1863 Specifically, when completing on ``object.<tab>``.
1863 1864
1864 1865 When 2 [default]: all names that start with '_' will be excluded.
1865 1866
1866 1867 When 1: all 'magic' names (``__foo__``) will be excluded.
1867 1868
1868 1869 When 0: nothing will be excluded.
1869 1870 """
1870 1871 ).tag(config=True)
1871 1872 limit_to__all__ = Bool(False,
1872 1873 help="""
1873 1874 DEPRECATED as of version 5.0.
1874 1875
1875 1876 Instruct the completer to use __all__ for the completion
1876 1877
1877 1878 Specifically, when completing on ``object.<tab>``.
1878 1879
1879 1880 When True: only those names in obj.__all__ will be included.
1880 1881
1881 1882 When False [default]: the __all__ attribute is ignored
1882 1883 """,
1883 1884 ).tag(config=True)
1884 1885
1885 1886 profile_completions = Bool(
1886 1887 default_value=False,
1887 1888 help="If True, emit profiling data for completion subsystem using cProfile."
1888 1889 ).tag(config=True)
1889 1890
1890 1891 profiler_output_dir = Unicode(
1891 1892 default_value=".completion_profiles",
1892 1893 help="Template for path at which to output profile data for completions."
1893 1894 ).tag(config=True)
1894 1895
1895 1896 @observe('limit_to__all__')
1896 1897 def _limit_to_all_changed(self, change):
1897 1898 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1898 1899 'value has been deprecated since IPython 5.0, will be made to have '
1899 1900 'no effect and then removed in a future version of IPython.',
1900 1901 UserWarning)
1901 1902
1902 1903 def __init__(
1903 1904 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1904 1905 ):
1905 1906 """IPCompleter() -> completer
1906 1907
1907 1908 Return a completer object.
1908 1909
1909 1910 Parameters
1910 1911 ----------
1911 1912 shell
1912 1913 a pointer to the ipython shell itself. This is needed
1913 1914 because this completer knows about magic functions, and those can
1914 1915 only be accessed via the ipython instance.
1915 1916 namespace : dict, optional
1916 1917 an optional dict where completions are performed.
1917 1918 global_namespace : dict, optional
1918 1919 secondary optional dict for completions, to
1919 1920 handle cases (such as IPython embedded inside functions) where
1920 1921 both Python scopes are visible.
1921 1922 config : Config
1922 1923 traitlet's config object
1923 1924 **kwargs
1924 1925 passed to super class unmodified.
1925 1926 """
1926 1927
1927 1928 self.magic_escape = ESC_MAGIC
1928 1929 self.splitter = CompletionSplitter()
1929 1930
1930 1931 # _greedy_changed() depends on splitter and readline being defined:
1931 1932 super().__init__(
1932 1933 namespace=namespace,
1933 1934 global_namespace=global_namespace,
1934 1935 config=config,
1935 1936 **kwargs,
1936 1937 )
1937 1938
1938 1939 # List where completion matches will be stored
1939 1940 self.matches = []
1940 1941 self.shell = shell
1941 1942 # Regexp to split filenames with spaces in them
1942 1943 self.space_name_re = re.compile(r'([^\\] )')
1943 1944 # Hold a local ref. to glob.glob for speed
1944 1945 self.glob = glob.glob
1945 1946
1946 1947 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1947 1948 # buffers, to avoid completion problems.
1948 1949 term = os.environ.get('TERM','xterm')
1949 1950 self.dumb_terminal = term in ['dumb','emacs']
1950 1951
1951 1952 # Special handling of backslashes needed in win32 platforms
1952 1953 if sys.platform == "win32":
1953 1954 self.clean_glob = self._clean_glob_win32
1954 1955 else:
1955 1956 self.clean_glob = self._clean_glob
1956 1957
1957 1958 #regexp to parse docstring for function signature
1958 1959 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1959 1960 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1960 1961 #use this if positional argument name is also needed
1961 1962 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1962 1963
1963 1964 self.magic_arg_matchers = [
1964 1965 self.magic_config_matcher,
1965 1966 self.magic_color_matcher,
1966 1967 ]
1967 1968
1968 1969 # This is set externally by InteractiveShell
1969 1970 self.custom_completers = None
1970 1971
1971 1972 # This is a list of names of unicode characters that can be completed
1972 1973 # into their corresponding unicode value. The list is large, so we
1973 1974 # lazily initialize it on first use. Consuming code should access this
1974 1975 # attribute through the `@unicode_names` property.
1975 1976 self._unicode_names = None
1976 1977
1977 1978 self._backslash_combining_matchers = [
1978 1979 self.latex_name_matcher,
1979 1980 self.unicode_name_matcher,
1980 1981 back_latex_name_matcher,
1981 1982 back_unicode_name_matcher,
1982 1983 self.fwd_unicode_matcher,
1983 1984 ]
1984 1985
1985 1986 if not self.backslash_combining_completions:
1986 1987 for matcher in self._backslash_combining_matchers:
1987 1988 self.disable_matchers.append(_get_matcher_id(matcher))
1988 1989
1989 1990 if not self.merge_completions:
1990 1991 self.suppress_competing_matchers = True
1991 1992
1992 1993 @property
1993 1994 def matchers(self) -> List[Matcher]:
1994 1995 """All active matcher routines for completion"""
1995 1996 if self.dict_keys_only:
1996 1997 return [self.dict_key_matcher]
1997 1998
1998 1999 if self.use_jedi:
1999 2000 return [
2000 2001 *self.custom_matchers,
2001 2002 *self._backslash_combining_matchers,
2002 2003 *self.magic_arg_matchers,
2003 2004 self.custom_completer_matcher,
2004 2005 self.magic_matcher,
2005 2006 self._jedi_matcher,
2006 2007 self.dict_key_matcher,
2007 2008 self.file_matcher,
2008 2009 ]
2009 2010 else:
2010 2011 return [
2011 2012 *self.custom_matchers,
2012 2013 *self._backslash_combining_matchers,
2013 2014 *self.magic_arg_matchers,
2014 2015 self.custom_completer_matcher,
2015 2016 self.dict_key_matcher,
2016 2017 self.magic_matcher,
2017 2018 self.python_matcher,
2018 2019 self.file_matcher,
2019 2020 self.python_func_kw_matcher,
2020 2021 ]
2021 2022
2022 2023 def all_completions(self, text:str) -> List[str]:
2023 2024 """
2024 2025 Wrapper around the completion methods for the benefit of emacs.
2025 2026 """
2026 2027 prefix = text.rpartition('.')[0]
2027 2028 with provisionalcompleter():
2028 2029 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
2029 2030 for c in self.completions(text, len(text))]
2030 2031
2032 2033
2033 2034 def _clean_glob(self, text:str):
2034 2035 return self.glob("%s*" % text)
2035 2036
2036 2037 def _clean_glob_win32(self, text:str):
2037 2038 return [f.replace("\\","/")
2038 2039 for f in self.glob("%s*" % text)]
2039 2040
2040 2041 @context_matcher()
2041 2042 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2042 2043 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2043 2044 matches = self.file_matches(context.token)
2044 2045 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2045 2046 # starts with `/home/`, `C:\`, etc)
2046 2047 return _convert_matcher_v1_result_to_v2(matches, type="path")
2047 2048
2048 2049 def file_matches(self, text: str) -> List[str]:
2049 2050 """Match filenames, expanding ~USER type strings.
2050 2051
2051 2052 Most of the seemingly convoluted logic in this completer is an
2052 2053 attempt to handle filenames with spaces in them. And yet it's not
2053 2054 quite perfect, because Python's readline doesn't expose all of the
2054 2055 GNU readline details needed for this to be done correctly.
2055 2056
2056 2057 For a filename with a space in it, the printed completions will be
2057 2058 only the parts after what's already been typed (instead of the
2058 2059 full completions, as is normally done). I don't think with the
2059 2060 current (as of Python 2.3) Python readline it's possible to do
2060 2061 better.
2061 2062
2062 2063 .. deprecated:: 8.6
2063 2064 You can use :meth:`file_matcher` instead.
2064 2065 """
2065 2066
2066 2067 # chars that require escaping with backslash - i.e. chars
2067 2068 # that readline treats incorrectly as delimiters, but we
2068 2069 # don't want to treat as delimiters in filename matching
2069 2070 # when escaped with backslash
2070 2071 if text.startswith('!'):
2071 2072 text = text[1:]
2072 2073 text_prefix = u'!'
2073 2074 else:
2074 2075 text_prefix = u''
2075 2076
2076 2077 text_until_cursor = self.text_until_cursor
2077 2078 # track strings with open quotes
2078 2079 open_quotes = has_open_quotes(text_until_cursor)
2079 2080
2080 2081 if '(' in text_until_cursor or '[' in text_until_cursor:
2081 2082 lsplit = text
2082 2083 else:
2083 2084 try:
2084 2085 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2085 2086 lsplit = arg_split(text_until_cursor)[-1]
2086 2087 except ValueError:
2087 2088 # typically an unmatched ", or backslash without escaped char.
2088 2089 if open_quotes:
2089 2090 lsplit = text_until_cursor.split(open_quotes)[-1]
2090 2091 else:
2091 2092 return []
2092 2093 except IndexError:
2093 2094 # tab pressed on empty line
2094 2095 lsplit = ""
2095 2096
2096 2097 if not open_quotes and lsplit != protect_filename(lsplit):
2097 2098 # if protectables are found, do matching on the whole escaped name
2098 2099 has_protectables = True
2099 2100 text0,text = text,lsplit
2100 2101 else:
2101 2102 has_protectables = False
2102 2103 text = os.path.expanduser(text)
2103 2104
2104 2105 if text == "":
2105 2106 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2106 2107
2107 2108 # Compute the matches from the filesystem
2108 2109 if sys.platform == 'win32':
2109 2110 m0 = self.clean_glob(text)
2110 2111 else:
2111 2112 m0 = self.clean_glob(text.replace('\\', ''))
2112 2113
2113 2114 if has_protectables:
2114 2115 # If we had protectables, we need to revert our changes to the
2115 2116 # beginning of filename so that we don't double-write the part
2116 2117 # of the filename we have so far
2117 2118 len_lsplit = len(lsplit)
2118 2119 matches = [text_prefix + text0 +
2119 2120 protect_filename(f[len_lsplit:]) for f in m0]
2120 2121 else:
2121 2122 if open_quotes:
2122 2123 # if we have a string with an open quote, we don't need to
2123 2124 # protect the names beyond the quote (and we _shouldn't_, as
2124 2125 # it would cause bugs when the filesystem call is made).
2125 2126 matches = m0 if sys.platform == "win32" else\
2126 2127 [protect_filename(f, open_quotes) for f in m0]
2127 2128 else:
2128 2129 matches = [text_prefix +
2129 2130 protect_filename(f) for f in m0]
2130 2131
2131 2132 # Mark directories in input list by appending '/' to their names.
2132 2133 return [x+'/' if os.path.isdir(x) else x for x in matches]
2133 2134
2134 2135 @context_matcher()
2135 2136 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2136 2137 """Match magics."""
2137 2138 text = context.token
2138 2139 matches = self.magic_matches(text)
2139 2140 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2140 2141 is_magic_prefix = len(text) > 0 and text[0] == "%"
2141 2142 result["suppress"] = is_magic_prefix and bool(result["completions"])
2142 2143 return result
2143 2144
2144 2145 def magic_matches(self, text: str) -> List[str]:
2145 2146 """Match magics.
2146 2147
2147 2148 .. deprecated:: 8.6
2148 2149 You can use :meth:`magic_matcher` instead.
2149 2150 """
2150 2151 # Get all shell magics now rather than statically, so magics loaded at
2151 2152 # runtime show up too.
2152 2153 lsm = self.shell.magics_manager.lsmagic()
2153 2154 line_magics = lsm['line']
2154 2155 cell_magics = lsm['cell']
2155 2156 pre = self.magic_escape
2156 2157 pre2 = pre+pre
2157 2158
2158 2159 explicit_magic = text.startswith(pre)
2159 2160
2160 2161 # Completion logic:
2161 2162 # - user gives %%: only do cell magics
2162 2163 # - user gives %: do both line and cell magics
2163 2164 # - no prefix: do both
2164 2165 # In other words, line magics are skipped if the user gives %% explicitly
2165 2166 #
2166 2167 # We also exclude magics that match any currently visible names:
2167 2168 # https://github.com/ipython/ipython/issues/4877, unless the user has
2168 2169 # typed a %:
2169 2170 # https://github.com/ipython/ipython/issues/10754
2170 2171 bare_text = text.lstrip(pre)
2171 2172 global_matches = self.global_matches(bare_text)
2172 2173 if not explicit_magic:
2173 2174 def matches(magic):
2174 2175 """
2175 2176 Filter magics, in particular remove magics that match
2176 2177 a name present in global namespace.
2177 2178 """
2178 2179 return ( magic.startswith(bare_text) and
2179 2180 magic not in global_matches )
2180 2181 else:
2181 2182 def matches(magic):
2182 2183 return magic.startswith(bare_text)
2183 2184
2184 2185 comp = [ pre2+m for m in cell_magics if matches(m)]
2185 2186 if not text.startswith(pre2):
2186 2187 comp += [ pre+m for m in line_magics if matches(m)]
2187 2188
2188 2189 return comp
2189 2190
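The prefix logic documented in ``magic_matches`` can be condensed into a toy sketch: ``%%`` completes cell magics only, while ``%`` (or no prefix) completes both kinds (the global-namespace exclusion is omitted here for brevity):

```python
# Toy sketch of the magic-prefix completion logic: "%%" restricts the
# candidates to cell magics, "%" or a bare name offers both kinds.
def toy_magic_matches(text, line_magics, cell_magics):
    pre, pre2 = "%", "%%"
    bare = text.lstrip(pre)
    comp = [pre2 + m for m in cell_magics if m.startswith(bare)]
    if not text.startswith(pre2):
        comp += [pre + m for m in line_magics if m.startswith(bare)]
    return comp

print(toy_magic_matches("%%ti", ["time", "timeit"], ["timeit"]))  # ['%%timeit']
```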
2190 2191 @context_matcher()
2191 2192 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2192 2193 """Match class names and attributes for %config magic."""
2193 2194 # NOTE: uses `line_buffer` equivalent for compatibility
2194 2195 matches = self.magic_config_matches(context.line_with_cursor)
2195 2196 return _convert_matcher_v1_result_to_v2(matches, type="param")
2196 2197
2197 2198 def magic_config_matches(self, text: str) -> List[str]:
2198 2199 """Match class names and attributes for %config magic.
2199 2200
2200 2201 .. deprecated:: 8.6
2201 2202 You can use :meth:`magic_config_matcher` instead.
2202 2203 """
2203 2204 texts = text.strip().split()
2204 2205
2205 2206 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2206 2207 # get all configuration classes
2207 2208 classes = sorted(set([ c for c in self.shell.configurables
2208 2209 if c.__class__.class_traits(config=True)
2209 2210 ]), key=lambda x: x.__class__.__name__)
2210 2211 classnames = [ c.__class__.__name__ for c in classes ]
2211 2212
2212 2213 # return all classnames if config or %config is given
2213 2214 if len(texts) == 1:
2214 2215 return classnames
2215 2216
2216 2217 # match classname
2217 2218 classname_texts = texts[1].split('.')
2218 2219 classname = classname_texts[0]
2219 2220 classname_matches = [ c for c in classnames
2220 2221 if c.startswith(classname) ]
2221 2222
2222 2223 # return matched classes or the matched class with attributes
2223 2224 if texts[1].find('.') < 0:
2224 2225 return classname_matches
2225 2226 elif len(classname_matches) == 1 and \
2226 2227 classname_matches[0] == classname:
2227 2228 cls = classes[classnames.index(classname)].__class__
2228 2229 help = cls.class_get_help()
2229 2230 # strip leading '--' from cl-args:
2230 2231 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2231 2232 return [ attr.split('=')[0]
2232 2233 for attr in help.strip().splitlines()
2233 2234 if attr.startswith(texts[1]) ]
2234 2235 return []
2235 2236
2236 2237 @context_matcher()
2237 2238 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2238 2239 """Match color schemes for %colors magic."""
2239 2240 # NOTE: uses `line_buffer` equivalent for compatibility
2240 2241 matches = self.magic_color_matches(context.line_with_cursor)
2241 2242 return _convert_matcher_v1_result_to_v2(matches, type="param")
2242 2243
2243 2244 def magic_color_matches(self, text: str) -> List[str]:
2244 2245 """Match color schemes for %colors magic.
2245 2246
2246 2247 .. deprecated:: 8.6
2247 2248 You can use :meth:`magic_color_matcher` instead.
2248 2249 """
2249 2250 texts = text.split()
2250 2251 if text.endswith(' '):
2251 2252 # .split() strips off the trailing whitespace. Add '' back
2252 2253 # so that: '%colors ' -> ['%colors', '']
2253 2254 texts.append('')
2254 2255
2255 2256 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2256 2257 prefix = texts[1]
2257 2258 return [ color for color in InspectColors.keys()
2258 2259 if color.startswith(prefix) ]
2259 2260 return []
2260 2261
2261 2262 @context_matcher(identifier="IPCompleter.jedi_matcher")
2262 2263 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2263 2264 matches = self._jedi_matches(
2264 2265 cursor_column=context.cursor_position,
2265 2266 cursor_line=context.cursor_line,
2266 2267 text=context.full_text,
2267 2268 )
2268 2269 return {
2269 2270 "completions": matches,
2270 2271 # static analysis should not suppress other matchers
2271 2272 "suppress": False,
2272 2273 }
2273 2274
2274 2275 def _jedi_matches(
2275 2276 self, cursor_column: int, cursor_line: int, text: str
2276 2277 ) -> Iterator[_JediCompletionLike]:
2277 2278 """
2278 2279 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
2279 2280 cursor position.
2280 2281
2281 2282 Parameters
2282 2283 ----------
2283 2284 cursor_column : int
2284 2285 column position of the cursor in ``text``, 0-indexed.
2285 2286 cursor_line : int
2286 2287 line position of the cursor in ``text``, 0-indexed
2287 2288 text : str
2288 2289 text to complete
2289 2290
2290 2291 Notes
2291 2292 -----
2292 2293 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2293 2294 object containing a string with the Jedi debug information attached.
2294 2295
2295 2296 .. deprecated:: 8.6
2296 2297 You can use :meth:`_jedi_matcher` instead.
2297 2298 """
2298 2299 namespaces = [self.namespace]
2299 2300 if self.global_namespace is not None:
2300 2301 namespaces.append(self.global_namespace)
2301 2302
2302 2303 completion_filter = lambda x:x
2303 2304 offset = cursor_to_position(text, cursor_line, cursor_column)
2304 2305 # filter output if we are completing for object members
2305 2306 if offset:
2306 2307 pre = text[offset-1]
2307 2308 if pre == '.':
2308 2309 if self.omit__names == 2:
2309 2310 completion_filter = lambda c:not c.name.startswith('_')
2310 2311 elif self.omit__names == 1:
2311 2312 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2312 2313 elif self.omit__names == 0:
2313 2314 completion_filter = lambda x:x
2314 2315 else:
2315 2316 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2316 2317
2317 2318 interpreter = jedi.Interpreter(text[:offset], namespaces)
2318 2319 try_jedi = True
2319 2320
2320 2321 try:
2321 2322 # find the first token in the current tree -- if it is a ' or " then we are in a string
2322 2323 completing_string = False
2323 2324 try:
2324 2325 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2325 2326 except StopIteration:
2326 2327 pass
2327 2328 else:
2328 2329 # note the value may be ', ", or it may also be ''' or """, or
2329 2330 # in some cases, """what/you/typed..., but all of these are
2330 2331 # strings.
2331 2332 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2332 2333
2333 2334 # if we are in a string jedi is likely not the right candidate for
2334 2335 # now. Skip it.
2335 2336 try_jedi = not completing_string
2336 2337 except Exception as e:
2337 2338 # many things can go wrong; we are using a private API, just don't crash.
2338 2339 if self.debug:
2339 2340 print("Error detecting if completing a non-finished string:", e)
2340 2341
2341 2342 if not try_jedi:
2342 2343 return iter([])
2343 2344 try:
2344 2345 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2345 2346 except Exception as e:
2346 2347 if self.debug:
2347 2348 return iter(
2348 2349 [
2349 2350 _FakeJediCompletion(
2350 2351 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2351 2352 % (e)
2352 2353 )
2353 2354 ]
2354 2355 )
2355 2356 else:
2356 2357 return iter([])
2357 2358
2358 2359 @context_matcher()
2359 2360 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2360 2361 """Match attributes or global python names"""
2361 2362 text = context.line_with_cursor
2362 2363 if "." in text:
2363 2364 try:
2364 2365 matches, fragment = self._attr_matches(text, include_prefix=False)
2365 2366 if text.endswith(".") and self.omit__names:
2366 2367 if self.omit__names == 1:
2367 2368 # true if txt is _not_ a __ name, false otherwise:
2368 2369 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2369 2370 else:
2370 2371 # true if txt is _not_ a _ name, false otherwise:
2371 2372 no__name = (
2372 2373 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2373 2374 is None
2374 2375 )
2375 2376 matches = filter(no__name, matches)
2376 2377 return _convert_matcher_v1_result_to_v2(
2377 2378 matches, type="attribute", fragment=fragment
2378 2379 )
2379 2380 except NameError:
2380 2381 # catches <undefined attributes>.<tab>
2381 2382 matches = []
2382 2383 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2383 2384 else:
2384 2385 matches = self.global_matches(context.token)
2385 2386 # TODO: maybe distinguish between functions, modules and just "variables"
2386 2387 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2387 2388
2388 2389 @completion_matcher(api_version=1)
2389 2390 def python_matches(self, text: str) -> Iterable[str]:
2390 2391 """Match attributes or global python names.
2391 2392
2392 2393 .. deprecated:: 8.27
2393 2394 You can use :meth:`python_matcher` instead."""
2394 2395 if "." in text:
2395 2396 try:
2396 2397 matches = self.attr_matches(text)
2397 2398 if text.endswith('.') and self.omit__names:
2398 2399 if self.omit__names == 1:
2399 2400 # true if txt is _not_ a __ name, false otherwise:
2400 2401 no__name = (lambda txt:
2401 2402 re.match(r'.*\.__.*?__',txt) is None)
2402 2403 else:
2403 2404 # true if txt is _not_ a _ name, false otherwise:
2404 2405 no__name = (lambda txt:
2405 2406 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2406 2407 matches = filter(no__name, matches)
2407 2408 except NameError:
2408 2409 # catches <undefined attributes>.<tab>
2409 2410 matches = []
2410 2411 else:
2411 2412 matches = self.global_matches(text)
2412 2413 return matches
2413 2414
2414 2415 def _default_arguments_from_docstring(self, doc):
2415 2416 """Parse the first line of docstring for call signature.
2416 2417
2417 2418 Docstring should be of the form 'min(iterable[, key=func])\n'.
2418 2419 It can also parse cython docstring of the form
2419 2420 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2420 2421 """
2421 2422 if doc is None:
2422 2423 return []
2423 2424
2424 2425 # care only about the first line
2425 2426 line = doc.lstrip().splitlines()[0]
2426 2427
2427 2428 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2428 2429 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2429 2430 sig = self.docstring_sig_re.search(line)
2430 2431 if sig is None:
2431 2432 return []
2432 2433 # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
2433 2434 sig = sig.groups()[0].split(',')
2434 2435 ret = []
2435 2436 for s in sig:
2436 2437 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2437 2438 ret += self.docstring_kwd_re.findall(s)
2438 2439 return ret
2439 2440
2440 2441 def _default_arguments(self, obj):
2441 2442 """Return the list of default arguments of obj if it is callable,
2442 2443 or empty list otherwise."""
2443 2444 call_obj = obj
2444 2445 ret = []
2445 2446 if inspect.isbuiltin(obj):
2446 2447 pass
2447 2448 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2448 2449 if inspect.isclass(obj):
2449 2450 #for cython embedsignature=True the constructor docstring
2450 2451 #belongs to the object itself not __init__
2451 2452 ret += self._default_arguments_from_docstring(
2452 2453 getattr(obj, '__doc__', ''))
2453 2454 # for classes, check for __init__,__new__
2454 2455 call_obj = (getattr(obj, '__init__', None) or
2455 2456 getattr(obj, '__new__', None))
2456 2457 # for all others, check if they are __call__able
2457 2458 elif hasattr(obj, '__call__'):
2458 2459 call_obj = obj.__call__
2459 2460 ret += self._default_arguments_from_docstring(
2460 2461 getattr(call_obj, '__doc__', ''))
2461 2462
2462 2463 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2463 2464 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2464 2465
2465 2466 try:
2466 2467 sig = inspect.signature(obj)
2467 2468 ret.extend(k for k, v in sig.parameters.items() if
2468 2469 v.kind in _keeps)
2469 2470 except ValueError:
2470 2471 pass
2471 2472
2472 2473 return list(set(ret))
2473 2474
2474 2475 @context_matcher()
2475 2476 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2476 2477 """Match named parameters (kwargs) of the last open function."""
2477 2478 matches = self.python_func_kw_matches(context.token)
2478 2479 return _convert_matcher_v1_result_to_v2(matches, type="param")
2479 2480
2480 2481 def python_func_kw_matches(self, text):
2481 2482 """Match named parameters (kwargs) of the last open function.
2482 2483
2483 2484 .. deprecated:: 8.6
2484 2485 You can use :meth:`python_func_kw_matcher` instead.
2485 2486 """
2486 2487
2487 2488 if "." in text: # a parameter cannot be dotted
2488 2489 return []
2489 2490 try: regexp = self.__funcParamsRegex
2490 2491 except AttributeError:
2491 2492 regexp = self.__funcParamsRegex = re.compile(r'''
2492 2493 '.*?(?<!\\)' | # single quoted strings or
2493 2494 ".*?(?<!\\)" | # double quoted strings or
2494 2495 \w+ | # identifier
2495 2496 \S # other characters
2496 2497 ''', re.VERBOSE | re.DOTALL)
2497 2498 # 1. find the nearest identifier that comes before an unclosed
2498 2499 # parenthesis before the cursor
2499 2500 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2500 2501 tokens = regexp.findall(self.text_until_cursor)
2501 2502 iterTokens = reversed(tokens)
2502 2503 openPar = 0
2503 2504
2504 2505 for token in iterTokens:
2505 2506 if token == ')':
2506 2507 openPar -= 1
2507 2508 elif token == '(':
2508 2509 openPar += 1
2509 2510 if openPar > 0:
2510 2511 # found the last unclosed parenthesis
2511 2512 break
2512 2513 else:
2513 2514 return []
2514 2515 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2515 2516 ids = []
2516 2517 isId = re.compile(r'\w+$').match
2517 2518
2518 2519 while True:
2519 2520 try:
2520 2521 ids.append(next(iterTokens))
2521 2522 if not isId(ids[-1]):
2522 2523 ids.pop()
2523 2524 break
2524 2525 if not next(iterTokens) == '.':
2525 2526 break
2526 2527 except StopIteration:
2527 2528 break
2528 2529
2529 2530 # Find all named arguments already assigned to, as to avoid suggesting
2530 2531 # them again
2531 2532 usedNamedArgs = set()
2532 2533 par_level = -1
2533 2534 for token, next_token in zip(tokens, tokens[1:]):
2534 2535 if token == '(':
2535 2536 par_level += 1
2536 2537 elif token == ')':
2537 2538 par_level -= 1
2538 2539
2539 2540 if par_level != 0:
2540 2541 continue
2541 2542
2542 2543 if next_token != '=':
2543 2544 continue
2544 2545
2545 2546 usedNamedArgs.add(token)
2546 2547
2547 2548 argMatches = []
2548 2549 try:
2549 2550 callableObj = '.'.join(ids[::-1])
2550 2551 namedArgs = self._default_arguments(eval(callableObj,
2551 2552 self.namespace))
2552 2553
2553 2554 # Remove used named arguments from the list, no need to show twice
2554 2555 for namedArg in set(namedArgs) - usedNamedArgs:
2555 2556 if namedArg.startswith(text):
2556 2557 argMatches.append("%s=" %namedArg)
2557 2558 except:
2558 2559 pass
2559 2560
2560 2561 return argMatches
2561 2562
2562 2563 @staticmethod
2563 2564 def _get_keys(obj: Any) -> List[Any]:
2564 2565 # Objects can define their own completions by defining an
2565 2566 # _ipython_key_completions_() method.
2566 2567 method = get_real_method(obj, '_ipython_key_completions_')
2567 2568 if method is not None:
2568 2569 return method()
2569 2570
2570 2571 # Special case some common in-memory dict-like types
2571 2572 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2572 2573 try:
2573 2574 return list(obj.keys())
2574 2575 except Exception:
2575 2576 return []
2576 2577 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2577 2578 try:
2578 2579 return list(obj.obj.keys())
2579 2580 except Exception:
2580 2581 return []
2581 2582 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2582 2583 _safe_isinstance(obj, 'numpy', 'void'):
2583 2584 return obj.dtype.names or []
2584 2585 return []
2585 2586
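`_ipython_key_completions_` is the hook checked first by `_get_keys` above, so any object can advertise its valid keys to the completer. A minimal sketch of an object opting in; the `Config` class itself is hypothetical:

```python
class Config:
    """Hypothetical dict-like object advertising its keys to the completer."""

    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        # IPython calls this hook when completing ``cfg["<tab>``
        return list(self._data)

cfg = Config({"host": "localhost", "port": 8080})
```

In an IPython session, typing `cfg["<tab>` would then offer `host` and `port`.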
2586 2587 @context_matcher()
2587 2588 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2588 2589 """Match string keys in a dictionary, after e.g. ``foo[``."""
2589 2590 matches = self.dict_key_matches(context.token)
2590 2591 return _convert_matcher_v1_result_to_v2(
2591 2592 matches, type="dict key", suppress_if_matches=True
2592 2593 )
2593 2594
2594 2595 def dict_key_matches(self, text: str) -> List[str]:
2595 2596 """Match string keys in a dictionary, after e.g. ``foo[``.
2596 2597
2597 2598 .. deprecated:: 8.6
2598 2599 You can use :meth:`dict_key_matcher` instead.
2599 2600 """
2600 2601
2601 2602 # Short-circuit on closed dictionary (regular expression would
2602 2603 # not match anyway, but would take quite a while).
2603 2604 if self.text_until_cursor.strip().endswith("]"):
2604 2605 return []
2605 2606
2606 2607 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2607 2608
2608 2609 if match is None:
2609 2610 return []
2610 2611
2611 2612 expr, prior_tuple_keys, key_prefix = match.groups()
2612 2613
2613 2614 obj = self._evaluate_expr(expr)
2614 2615
2615 2616 if obj is not_found:
2616 2617 return []
2617 2618
2618 2619 keys = self._get_keys(obj)
2619 2620 if not keys:
2620 2621 return keys
2621 2622
2622 2623 tuple_prefix = guarded_eval(
2623 2624 prior_tuple_keys,
2624 2625 EvaluationContext(
2625 2626 globals=self.global_namespace,
2626 2627 locals=self.namespace,
2627 2628 evaluation=self.evaluation, # type: ignore
2628 2629 in_subscript=True,
2629 2630 ),
2630 2631 )
2631 2632
2632 2633 closing_quote, token_offset, matches = match_dict_keys(
2633 2634 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2634 2635 )
2635 2636 if not matches:
2636 2637 return []
2637 2638
2638 2639 # get the cursor position of
2639 2640 # - the text being completed
2640 2641 # - the start of the key text
2641 2642 # - the start of the completion
2642 2643 text_start = len(self.text_until_cursor) - len(text)
2643 2644 if key_prefix:
2644 2645 key_start = match.start(3)
2645 2646 completion_start = key_start + token_offset
2646 2647 else:
2647 2648 key_start = completion_start = match.end()
2648 2649
2649 2650 # grab the leading prefix, to make sure all completions start with `text`
2650 2651 if text_start > key_start:
2651 2652 leading = ''
2652 2653 else:
2653 2654 leading = text[text_start:completion_start]
2654 2655
2655 2656 # append closing quote and bracket as appropriate
2656 2657 # this is *not* appropriate if the opening quote or bracket is outside
2657 2658 # the text given to this method, e.g. `d["""a\nt
2658 2659 can_close_quote = False
2659 2660 can_close_bracket = False
2660 2661
2661 2662 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2662 2663
2663 2664 if continuation.startswith(closing_quote):
2664 2665 # do not close if already closed, e.g. `d['a<tab>'`
2665 2666 continuation = continuation[len(closing_quote) :]
2666 2667 else:
2667 2668 can_close_quote = True
2668 2669
2669 2670 continuation = continuation.strip()
2670 2671
2671 2672 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2672 2673 # handling it is out of scope, so let's avoid appending suffixes.
2673 2674 has_known_tuple_handling = isinstance(obj, dict)
2674 2675
2675 2676 can_close_bracket = (
2676 2677 not continuation.startswith("]") and self.auto_close_dict_keys
2677 2678 )
2678 2679 can_close_tuple_item = (
2679 2680 not continuation.startswith(",")
2680 2681 and has_known_tuple_handling
2681 2682 and self.auto_close_dict_keys
2682 2683 )
2683 2684 can_close_quote = can_close_quote and self.auto_close_dict_keys
2684 2685
2685 2686 # fast path if closing quote should be appended but no suffix is allowed
2686 2687 if not can_close_quote and not can_close_bracket and closing_quote:
2687 2688 return [leading + k for k in matches]
2688 2689
2689 2690 results = []
2690 2691
2691 2692 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2692 2693
2693 2694 for k, state_flag in matches.items():
2694 2695 result = leading + k
2695 2696 if can_close_quote and closing_quote:
2696 2697 result += closing_quote
2697 2698
2698 2699 if state_flag == end_of_tuple_or_item:
2699 2700 # We do not know which suffix to add,
2700 2701 # e.g. both tuple item and string
2701 2702 # match this item.
2702 2703 pass
2703 2704
2704 2705 if state_flag in end_of_tuple_or_item and can_close_bracket:
2705 2706 result += "]"
2706 2707 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2707 2708 result += ", "
2708 2709 results.append(result)
2709 2710 return results
2710 2711
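At its core, the matching performed by `match_dict_keys` (defined elsewhere in this module) reduces to filtering the object's keys by the typed prefix. A toy sketch of that step, not the real implementation:

```python
def complete_dict_keys(d, prefix):
    # Keep only string keys that start with what the user has typed so far,
    # sorted the way a completion menu would present them.
    return sorted(k for k in d if isinstance(k, str) and k.startswith(prefix))

data = {"alpha": 1, "alamo": 2, "beta": 3, 42: 4}
```

The real `dict_key_matches` additionally handles quoting, tuple keys, and the closing-bracket suffixes discussed above.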
2711 2712 @context_matcher()
2712 2713 def unicode_name_matcher(self, context: CompletionContext):
2713 2714 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2714 2715 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2715 2716 return _convert_matcher_v1_result_to_v2(
2716 2717 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2717 2718 )
2718 2719
2719 2720 @staticmethod
2720 2721 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2721 2722 """Match Latex-like syntax for unicode characters based
2722 2723 on the name of the character.
2723 2724
2724 2725 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2725 2726
2726 2727 Works only on valid python 3 identifiers, or on combining characters that
2727 2728 will combine to form a valid identifier.
2728 2729 """
2729 2730 slashpos = text.rfind('\\')
2730 2731 if slashpos > -1:
2731 2732 s = text[slashpos+1:]
2732 2733 try:
2733 2734 unic = unicodedata.lookup(s)
2734 2735 # allow combining chars
2735 2736 if ('a'+unic).isidentifier():
2736 2737 return '\\'+s,[unic]
2737 2738 except KeyError:
2738 2739 pass
2739 2740 return '', []
2740 2741
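The lookup above can be restated standalone; this is a minimal sketch of the same logic (the name `lookup_unicode_name` is hypothetical, not IPython's API):

```python
import unicodedata

def lookup_unicode_name(text):
    # Resolve a trailing ``\NAME`` fragment to its character via unicodedata.
    slashpos = text.rfind("\\")
    if slashpos > -1:
        name = text[slashpos + 1 :]
        try:
            char = unicodedata.lookup(name)
            # only accept characters that can appear in identifiers,
            # including combining characters (hence the 'a' prefix trick)
            if ("a" + char).isidentifier():
                return "\\" + name, [char]
        except KeyError:
            pass
    return "", []
```

As in the method above, an unknown name or a non-identifier character yields no match rather than an error.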
2741 2742 @context_matcher()
2742 2743 def latex_name_matcher(self, context: CompletionContext):
2743 2744 """Match Latex syntax for unicode characters.
2744 2745
2745 2746 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2746 2747 """
2747 2748 fragment, matches = self.latex_matches(context.text_until_cursor)
2748 2749 return _convert_matcher_v1_result_to_v2(
2749 2750 matches, type="latex", fragment=fragment, suppress_if_matches=True
2750 2751 )
2751 2752
2752 2753 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2753 2754 """Match Latex syntax for unicode characters.
2754 2755
2755 2756 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2756 2757
2757 2758 .. deprecated:: 8.6
2758 2759 You can use :meth:`latex_name_matcher` instead.
2759 2760 """
2760 2761 slashpos = text.rfind('\\')
2761 2762 if slashpos > -1:
2762 2763 s = text[slashpos:]
2763 2764 if s in latex_symbols:
2764 2765 # Try to complete a full latex symbol to unicode
2765 2766 # \\alpha -> Ξ±
2766 2767 return s, [latex_symbols[s]]
2767 2768 else:
2768 2769 # If a user has partially typed a latex symbol, give them
2769 2770 # a full list of options \al -> [\aleph, \alpha]
2770 2771 matches = [k for k in latex_symbols if k.startswith(s)]
2771 2772 if matches:
2772 2773 return s, matches
2773 2774 return '', ()
2774 2775
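The two behaviours described in the docstring (full-symbol expansion and prefix listing) can be sketched with a miniature symbol table standing in for `latex_symbols`; both the table and the function name here are hypothetical:

```python
# hypothetical miniature table standing in for IPython's latex_symbols
LATEX_SYMBOLS = {"\\alpha": "\u03b1", "\\aleph": "\u2135", "\\beta": "\u03b2"}

def latex_matches_sketch(text):
    slashpos = text.rfind("\\")
    if slashpos > -1:
        s = text[slashpos:]
        if s in LATEX_SYMBOLS:
            # full symbol typed: expand straight to the unicode character
            return s, [LATEX_SYMBOLS[s]]
        # partial symbol: list every symbol sharing the prefix
        matches = [k for k in LATEX_SYMBOLS if k.startswith(s)]
        if matches:
            return s, matches
    return "", ()
```

So `\alpha<tab>` inserts the character directly, while `\al<tab>` offers the candidate symbols to choose from.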
2775 2776 @context_matcher()
2776 2777 def custom_completer_matcher(self, context):
2777 2778 """Dispatch custom completer.
2778 2779
2779 2780 If a match is found, suppresses all other matchers except for Jedi.
2780 2781 """
2781 2782 matches = self.dispatch_custom_completer(context.token) or []
2782 2783 result = _convert_matcher_v1_result_to_v2(
2783 2784 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2784 2785 )
2785 2786 result["ordered"] = True
2786 2787 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2787 2788 return result
2788 2789
2789 2790 def dispatch_custom_completer(self, text):
2790 2791 """
2791 2792 .. deprecated:: 8.6
2792 2793 You can use :meth:`custom_completer_matcher` instead.
2793 2794 """
2794 2795 if not self.custom_completers:
2795 2796 return
2796 2797
2797 2798 line = self.line_buffer
2798 2799 if not line.strip():
2799 2800 return None
2800 2801
2801 2802 # Create a little structure to pass all the relevant information about
2802 2803 # the current completion to any custom completer.
2803 2804 event = SimpleNamespace()
2804 2805 event.line = line
2805 2806 event.symbol = text
2806 2807 cmd = line.split(None,1)[0]
2807 2808 event.command = cmd
2808 2809 event.text_until_cursor = self.text_until_cursor
2809 2810
2810 2811 # for foo etc, try also to find completer for %foo
2811 2812 if not cmd.startswith(self.magic_escape):
2812 2813 try_magic = self.custom_completers.s_matches(
2813 2814 self.magic_escape + cmd)
2814 2815 else:
2815 2816 try_magic = []
2816 2817
2817 2818 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2818 2819 try_magic,
2819 2820 self.custom_completers.flat_matches(self.text_until_cursor)):
2820 2821 try:
2821 2822 res = c(event)
2822 2823 if res:
2823 2824 # first, try case sensitive match
2824 2825 withcase = [r for r in res if r.startswith(text)]
2825 2826 if withcase:
2826 2827 return withcase
2827 2828 # if none, then case insensitive ones are ok too
2828 2829 text_low = text.lower()
2829 2830 return [r for r in res if r.lower().startswith(text_low)]
2830 2831 except TryNext:
2831 2832 pass
2832 2833 except KeyboardInterrupt:
2833 2834 """
2834 2835 If a custom completer takes too long,
2835 2836 let the keyboard interrupt abort it and return nothing.
2836 2837 """
2837 2838 break
2838 2839
2839 2840 return None
2840 2841
2841 2842 def completions(self, text: str, offset: int) -> Iterator[Completion]:
2842 2843 """
2843 2844 Returns an iterator over the possible completions
2844 2845
2845 2846 .. warning::
2846 2847
2847 2848 Unstable
2848 2849
2849 2850 This function is unstable, API may change without warning.
2850 2851 It will also raise unless used in a proper context manager.
2851 2852
2852 2853 Parameters
2853 2854 ----------
2854 2855 text : str
2855 2856 Full text of the current input, multi line string.
2856 2857 offset : int
2857 2858 Integer representing the position of the cursor in ``text``. Offset
2858 2859 is 0-based indexed.
2859 2860
2860 2861 Yields
2861 2862 ------
2862 2863 Completion
2863 2864
2864 2865 Notes
2865 2866 -----
2866 2867 The cursor on a text can either be seen as being "in between"
2867 2868 characters or "On" a character depending on the interface visible to
2868 2869 the user. For consistency, the cursor being "in between" characters X
2869 2870 and Y is equivalent to the cursor being "on" character Y, that is to say
2870 2871 the character the cursor is on is considered as being after the cursor.
2871 2872
2872 2873 Combining characters may span more than one position in the
2873 2874 text.
2874 2875
2875 2876 .. note::
2876 2877
2877 2878 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2878 2879 fake Completion token to distinguish completion returned by Jedi
2879 2880 and usual IPython completion.
2880 2881
2881 2882 .. note::
2882 2883
2883 2884 Completions are not completely deduplicated yet. If identical
2884 2885 completions are coming from different sources this function does not
2885 2886 ensure that each completion object will only be present once.
2886 2887 """
2887 2888 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2888 2889 "It may change without warnings. "
2889 2890 "Use in corresponding context manager.",
2890 2891 category=ProvisionalCompleterWarning, stacklevel=2)
2891 2892
2892 2893 seen = set()
2893 2894 profiler:Optional[cProfile.Profile]
2894 2895 try:
2895 2896 if self.profile_completions:
2896 2897 import cProfile
2897 2898 profiler = cProfile.Profile()
2898 2899 profiler.enable()
2899 2900 else:
2900 2901 profiler = None
2901 2902
2902 2903 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2903 2904 if c and (c in seen):
2904 2905 continue
2905 2906 yield c
2906 2907 seen.add(c)
2907 2908 except KeyboardInterrupt:
2908 2909 """If completions take too long and the user sends a keyboard interrupt,
2909 2910 do not crash and return ASAP. """
2910 2911 pass
2911 2912 finally:
2912 2913 if profiler is not None:
2913 2914 profiler.disable()
2914 2915 ensure_dir_exists(self.profiler_output_dir)
2915 2916 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2916 2917 print("Writing profiler output to", output_path)
2917 2918 profiler.dump_stats(output_path)
2918 2919
2919 2920 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2920 2921 """
2921 2922 Core completion method. Same signature as :any:`completions`, with the
2922 2923 extra `timeout` parameter (in seconds).
2923 2924
2924 2925 Computing jedi's completion ``.type`` can be quite expensive (it is a
2925 2926 lazy property) and can require some warm-up, more warm up than just
2926 2927 computing the ``name`` of a completion. The warm-up can be:
2927 2928
2928 2929 - Long warm-up the first time a module is encountered after
2929 2930 install/update: actually build parse/inference tree.
2930 2931
2931 2932 - first time the module is encountered in a session: load tree from
2932 2933 disk.
2933 2934
2934 2935 We don't want to block completions for tens of seconds so we give the
2935 2936 completer a "budget" of ``_timeout`` seconds per invocation to compute
2936 2937 completion types; the completions that have not yet been computed will
2937 2938 be marked as "unknown" and will have a chance to be computed next round
2938 2939 as things get cached.
2939 2940
2940 2941 Keep in mind that Jedi is not the only thing processing the completion, so
2941 2942 keep the timeout short-ish: if we take more than 0.3 seconds we still
2942 2943 have lots of processing to do.
2943 2944
2944 2945 """
2945 2946 deadline = time.monotonic() + _timeout
2946 2947
2947 2948 before = full_text[:offset]
2948 2949 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2949 2950
2950 2951 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2951 2952
2952 2953 def is_non_jedi_result(
2953 2954 result: MatcherResult, identifier: str
2954 2955 ) -> TypeGuard[SimpleMatcherResult]:
2955 2956 return identifier != jedi_matcher_id
2956 2957
2957 2958 results = self._complete(
2958 2959 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2959 2960 )
2960 2961
2961 2962 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2962 2963 identifier: result
2963 2964 for identifier, result in results.items()
2964 2965 if is_non_jedi_result(result, identifier)
2965 2966 }
2966 2967
2967 2968 jedi_matches = (
2968 2969 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2969 2970 if jedi_matcher_id in results
2970 2971 else ()
2971 2972 )
2972 2973
2973 2974 iter_jm = iter(jedi_matches)
2974 2975 if _timeout:
2975 2976 for jm in iter_jm:
2976 2977 try:
2977 2978 type_ = jm.type
2978 2979 except Exception:
2979 2980 if self.debug:
2980 2981 print("Error in Jedi getting type of ", jm)
2981 2982 type_ = None
2982 2983 delta = len(jm.name_with_symbols) - len(jm.complete)
2983 2984 if type_ == 'function':
2984 2985 signature = _make_signature(jm)
2985 2986 else:
2986 2987 signature = ''
2987 2988 yield Completion(start=offset - delta,
2988 2989 end=offset,
2989 2990 text=jm.name_with_symbols,
2990 2991 type=type_,
2991 2992 signature=signature,
2992 2993 _origin='jedi')
2993 2994
2994 2995 if time.monotonic() > deadline:
2995 2996 break
2996 2997
2997 2998 for jm in iter_jm:
2998 2999 delta = len(jm.name_with_symbols) - len(jm.complete)
2999 3000 yield Completion(
3000 3001 start=offset - delta,
3001 3002 end=offset,
3002 3003 text=jm.name_with_symbols,
3003 3004 type=_UNKNOWN_TYPE, # don't compute type for speed
3004 3005 _origin="jedi",
3005 3006 signature="",
3006 3007 )
3007 3008
3008 3009 # TODO:
3009 3010 # Suppress this, right now just for debug.
3010 3011 if jedi_matches and non_jedi_results and self.debug:
3011 3012 some_start_offset = before.rfind(
3012 3013 next(iter(non_jedi_results.values()))["matched_fragment"]
3013 3014 )
3014 3015 yield Completion(
3015 3016 start=some_start_offset,
3016 3017 end=offset,
3017 3018 text="--jedi/ipython--",
3018 3019 _origin="debug",
3019 3020 type="none",
3020 3021 signature="",
3021 3022 )
3022 3023
3023 3024 ordered: List[Completion] = []
3024 3025 sortable: List[Completion] = []
3025 3026
3026 3027 for origin, result in non_jedi_results.items():
3027 3028 matched_text = result["matched_fragment"]
3028 3029 start_offset = before.rfind(matched_text)
3029 3030 is_ordered = result.get("ordered", False)
3030 3031 container = ordered if is_ordered else sortable
3031 3032
3032 3033 # I'm unsure if this is always true, so let's assert and see if it
3033 3034 # crashes
3034 3035 assert before.endswith(matched_text)
3035 3036
3036 3037 for simple_completion in result["completions"]:
3037 3038 completion = Completion(
3038 3039 start=start_offset,
3039 3040 end=offset,
3040 3041 text=simple_completion.text,
3041 3042 _origin=origin,
3042 3043 signature="",
3043 3044 type=simple_completion.type or _UNKNOWN_TYPE,
3044 3045 )
3045 3046 container.append(completion)
3046 3047
3047 3048 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
3048 3049 :MATCHES_LIMIT
3049 3050 ]
3050 3051
3051 3052 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
3052 3053 """Find completions for the given text and line context.
3053 3054
3054 3055 Note that both the text and the line_buffer are optional, but at least
3055 3056 one of them must be given.
3056 3057
3057 3058 Parameters
3058 3059 ----------
3059 3060 text : string, optional
3060 3061 Text to perform the completion on. If not given, the line buffer
3061 3062 is split using the instance's CompletionSplitter object.
3062 3063 line_buffer : string, optional
3063 3064 If not given, the completer attempts to obtain the current line
3064 3065 buffer via readline. This keyword allows clients which are
3065 3066 requesting for text completions in non-readline contexts to inform
3066 3067 the completer of the entire text.
3067 3068 cursor_pos : int, optional
3068 3069 Index of the cursor in the full line buffer. Should be provided by
3069 3070 remote frontends where kernel has no access to frontend state.
3070 3071
3071 3072 Returns
3072 3073 -------
3073 3074 Tuple of two items:
3074 3075 text : str
3075 3076 Text that was actually used in the completion.
3076 3077 matches : list
3077 3078 A list of completion matches.
3078 3079
3079 3080 Notes
3080 3081 -----
3081 3082 This API is likely to be deprecated and replaced by
3082 3083 :any:`IPCompleter.completions` in the future.
3083 3084
3084 3085 """
3085 3086 warnings.warn('`Completer.complete` is pending deprecation since '
3086 3087 'IPython 6.0 and will be replaced by `Completer.completions`.',
3087 3088 PendingDeprecationWarning)
3088 3089 # potential todo, FOLD the 3rd throw away argument of _complete
3089 3090 # into the first 2 one.
3090 3091 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3091 3092 # TODO: should we deprecate now, or does it stay?
3092 3093
3093 3094 results = self._complete(
3094 3095 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3095 3096 )
3096 3097
3097 3098 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3098 3099
3099 3100 return self._arrange_and_extract(
3100 3101 results,
3101 3102 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3102 3103 skip_matchers={jedi_matcher_id},
3103 3104 # this API does not support different start/end positions (fragments of token).
3104 3105 abort_if_offset_changes=True,
3105 3106 )
3106 3107
3107 3108 def _arrange_and_extract(
3108 3109 self,
3109 3110 results: Dict[str, MatcherResult],
3110 3111 skip_matchers: Set[str],
3111 3112 abort_if_offset_changes: bool,
3112 3113 ):
3113 3114 sortable: List[AnyMatcherCompletion] = []
3114 3115 ordered: List[AnyMatcherCompletion] = []
3115 3116 most_recent_fragment = None
3116 3117 for identifier, result in results.items():
3117 3118 if identifier in skip_matchers:
3118 3119 continue
3119 3120 if not result["completions"]:
3120 3121 continue
3121 3122 if not most_recent_fragment:
3122 3123 most_recent_fragment = result["matched_fragment"]
3123 3124 if (
3124 3125 abort_if_offset_changes
3125 3126 and result["matched_fragment"] != most_recent_fragment
3126 3127 ):
3127 3128 break
3128 3129 if result.get("ordered", False):
3129 3130 ordered.extend(result["completions"])
3130 3131 else:
3131 3132 sortable.extend(result["completions"])
3132 3133
3133 3134 if not most_recent_fragment:
3134 3135 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3135 3136
3136 3137 return most_recent_fragment, [
3137 3138 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3138 3139 ]
3139 3140
3140 3141 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3141 3142 full_text=None) -> _CompleteResult:
3142 3143 """
3143 3144 Like complete but can also return raw jedi completions as well as the
3144 3145 origin of the completion text. This could (and should) be made much
3145 3146 cleaner but that will be simpler once we drop the old (and stateful)
3146 3147 :any:`complete` API.
3147 3148
3148 3149 With the current provisional API, cursor_pos acts both (depending on the
3149 3150 caller) as the offset in the ``text`` or ``line_buffer``, and as the
3150 3151 ``column`` when passing multiline strings; this could/should be renamed,
3151 3152 but that would add extra noise.
3152 3153
3153 3154 Parameters
3154 3155 ----------
3155 3156 cursor_line
3156 3157 Index of the line the cursor is on. 0 indexed.
3157 3158 cursor_pos
3158 3159 Position of the cursor in the current line/line_buffer/text. 0
3159 3160 indexed.
3160 3161 line_buffer : optional, str
3161 3162 The current line the cursor is in; this is mostly due to the legacy
3162 3163 reason that readline could only give us the single current line.
3163 3164 Prefer `full_text`.
3164 3165 text : str
3165 3166 The current "token" the cursor is in, mostly also for historical
3166 3167 reasons, as the completer would trigger only after the current line
3167 3168 was parsed.
3168 3169 full_text : str
3169 3170 Full text of the current cell.
3170 3171
3171 3172 Returns
3172 3173 -------
3173 3174 An ordered dictionary where keys are identifiers of completion
3174 3175 matchers and values are ``MatcherResult``s.
3175 3176 """
3176 3177
3177 3178 # if the cursor position isn't given, the only sane assumption we can
3178 3179 # make is that it's at the end of the line (the common case)
3179 3180 if cursor_pos is None:
3180 3181 cursor_pos = len(line_buffer) if text is None else len(text)
3181 3182
3182 3183 if self.use_main_ns:
3183 3184 self.namespace = __main__.__dict__
3184 3185
3185 3186 # if text is either None or an empty string, rely on the line buffer
3186 3187 if (not line_buffer) and full_text:
3187 3188 line_buffer = full_text.split('\n')[cursor_line]
3188 3189 if not text: # issue #11508: check line_buffer before calling split_line
3189 3190 text = (
3190 3191 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3191 3192 )
3192 3193
3193 3194 # If no line buffer is given, assume the input text is all there was
3194 3195 if line_buffer is None:
3195 3196 line_buffer = text
3196 3197
3197 3198 # deprecated - do not use `line_buffer` in new code.
3198 3199 self.line_buffer = line_buffer
3199 3200 self.text_until_cursor = self.line_buffer[:cursor_pos]
3200 3201
3201 3202 if not full_text:
3202 3203 full_text = line_buffer
3203 3204
3204 3205 context = CompletionContext(
3205 3206 full_text=full_text,
3206 3207 cursor_position=cursor_pos,
3207 3208 cursor_line=cursor_line,
3208 3209 token=text,
3209 3210 limit=MATCHES_LIMIT,
3210 3211 )
3211 3212
3212 3213 # Start with a clean slate of completions
3213 3214 results: Dict[str, MatcherResult] = {}
3214 3215
3215 3216 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3216 3217
3217 3218 suppressed_matchers: Set[str] = set()
3218 3219
3219 3220 matchers = {
3220 3221 _get_matcher_id(matcher): matcher
3221 3222 for matcher in sorted(
3222 3223 self.matchers, key=_get_matcher_priority, reverse=True
3223 3224 )
3224 3225 }
3225 3226
3226 3227 for matcher_id, matcher in matchers.items():
3227 3228 matcher_id = _get_matcher_id(matcher)
3228 3229
3229 3230 if matcher_id in self.disable_matchers:
3230 3231 continue
3231 3232
3232 3233 if matcher_id in results:
3233 3234 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3234 3235
3235 3236 if matcher_id in suppressed_matchers:
3236 3237 continue
3237 3238
3238 3239 result: MatcherResult
3239 3240 try:
3240 3241 if _is_matcher_v1(matcher):
3241 3242 result = _convert_matcher_v1_result_to_v2(
3242 3243 matcher(text), type=_UNKNOWN_TYPE
3243 3244 )
3244 3245 elif _is_matcher_v2(matcher):
3245 3246 result = matcher(context)
3246 3247 else:
3247 3248 api_version = _get_matcher_api_version(matcher)
3248 3249 raise ValueError(f"Unsupported API version {api_version}")
3249 3250 except BaseException:
3250 3251 # Show the ugly traceback if the matcher causes an
3251 3252 # exception, but do NOT crash the kernel!
3252 3253 sys.excepthook(*sys.exc_info())
3253 3254 continue
3254 3255
3255 3256 # set default value for matched fragment if suffix was not selected.
3256 3257 result["matched_fragment"] = result.get("matched_fragment", context.token)
3257 3258
3258 3259 if not suppressed_matchers:
3259 3260 suppression_recommended: Union[bool, Set[str]] = result.get(
3260 3261 "suppress", False
3261 3262 )
3262 3263
3263 3264 suppression_config = (
3264 3265 self.suppress_competing_matchers.get(matcher_id, None)
3265 3266 if isinstance(self.suppress_competing_matchers, dict)
3266 3267 else self.suppress_competing_matchers
3267 3268 )
3268 3269 should_suppress = (
3269 3270 (suppression_config is True)
3270 3271 or (suppression_recommended and (suppression_config is not False))
3271 3272 ) and has_any_completions(result)
3272 3273
3273 3274 if should_suppress:
3274 3275 suppression_exceptions: Set[str] = result.get(
3275 3276 "do_not_suppress", set()
3276 3277 )
3277 3278 if isinstance(suppression_recommended, Iterable):
3278 3279 to_suppress = set(suppression_recommended)
3279 3280 else:
3280 3281 to_suppress = set(matchers)
3281 3282 suppressed_matchers = to_suppress - suppression_exceptions
3282 3283
3283 3284 new_results = {}
3284 3285 for previous_matcher_id, previous_result in results.items():
3285 3286 if previous_matcher_id not in suppressed_matchers:
3286 3287 new_results[previous_matcher_id] = previous_result
3287 3288 results = new_results
3288 3289
3289 3290 results[matcher_id] = result
3290 3291
3291 3292 _, matches = self._arrange_and_extract(
3292 3293 results,
3293 3294 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3294 3295 # if it was omission, we can remove the filtering step, otherwise remove this comment.
3295 3296 skip_matchers={jedi_matcher_id},
3296 3297 abort_if_offset_changes=False,
3297 3298 )
3298 3299
3299 3300 # populate legacy stateful API
3300 3301 self.matches = matches
3301 3302
3302 3303 return results
3303 3304
3304 3305 @staticmethod
3305 3306 def _deduplicate(
3306 3307 matches: Sequence[AnyCompletion],
3307 3308 ) -> Iterable[AnyCompletion]:
3308 3309 filtered_matches: Dict[str, AnyCompletion] = {}
3309 3310 for match in matches:
3310 3311 text = match.text
3311 3312 if (
3312 3313 text not in filtered_matches
3313 3314 or filtered_matches[text].type == _UNKNOWN_TYPE
3314 3315 ):
3315 3316 filtered_matches[text] = match
3316 3317
3317 3318 return filtered_matches.values()
3318 3319
3319 3320 @staticmethod
3320 3321 def _sort(matches: Sequence[AnyCompletion]):
3321 3322 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3322 3323
3323 3324 @context_matcher()
3324 3325 def fwd_unicode_matcher(self, context: CompletionContext):
3325 3326 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3326 3327 # TODO: use `context.limit` to terminate early once we matched the maximum
3327 3328 # number that will be used downstream; can be added as an optional to
3328 3329 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3329 3330 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3330 3331 return _convert_matcher_v1_result_to_v2(
3331 3332 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3332 3333 )
3333 3334
3334 3335 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3335 3336 """
3336 3337 Forward match a string starting with a backslash against a list of
3337 3338 potential Unicode completions.
3338 3339
3339 3340 Will compute list of Unicode character names on first call and cache it.
3340 3341
3341 3342 .. deprecated:: 8.6
3342 3343 You can use :meth:`fwd_unicode_matcher` instead.
3343 3344
3344 3345 Returns
3345 3346 -------
3346 3347 A tuple with:
3347 3348 - matched text (empty if no matches)
3348 3349 - list of potential completions (empty tuple otherwise)
3349 3350 """
3350 3351 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3351 3352 # We could do a faster match using a Trie.
3352 3353
3353 3354 # Using pygtrie the following seem to work:
3354 3355
3355 3356 # s = PrefixSet()
3356 3357
3357 3358 # for c in range(0,0x10FFFF + 1):
3358 3359 # try:
3359 3360 # s.add(unicodedata.name(chr(c)))
3360 3361 # except ValueError:
3361 3362 # pass
3362 3363 # [''.join(k) for k in s.iter(prefix)]
3363 3364
3364 3365 # But need to be timed and adds an extra dependency.
3365 3366
3366 3367 slashpos = text.rfind('\\')
3367 3368 # if text starts with slash
3368 3369 if slashpos > -1:
3369 3370 # PERF: It's important that we don't access self._unicode_names
3370 3371 # until we're inside this if-block. _unicode_names is lazily
3371 3372 # initialized, and it takes a user-noticeable amount of time to
3372 3373 # initialize it, so we don't want to initialize it unless we're
3373 3374 # actually going to use it.
3374 3375 s = text[slashpos + 1 :]
3375 3376 sup = s.upper()
3376 3377 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3377 3378 if candidates:
3378 3379 return s, candidates
3379 3380 candidates = [x for x in self.unicode_names if sup in x]
3380 3381 if candidates:
3381 3382 return s, candidates
3382 3383 splitsup = sup.split(" ")
3383 3384 candidates = [
3384 3385 x for x in self.unicode_names if all(u in x for u in splitsup)
3385 3386 ]
3386 3387 if candidates:
3387 3388 return s, candidates
3388 3389
3389 3390 return "", ()
3390 3391
3391 3392 # if text does not start with slash
3392 3393 else:
3393 3394 return '', ()
3394 3395
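The method above tries three progressively looser strategies: prefix match, substring match, then all-words match. That cascade can be sketched standalone over a tiny candidate list (the `NAMES` list and `fwd_match` name are hypothetical, standing in for the full unicode name table):

```python
# a small candidate list standing in for the ~100k unicode names
NAMES = [
    "GREEK SMALL LETTER ALPHA",
    "GREEK SMALL LETTER BETA",
    "LATIN SMALL LETTER A",
]

def fwd_match(fragment, names=NAMES):
    sup = fragment.upper()
    for predicate in (
        lambda x: x.startswith(sup),                    # 1. prefix match
        lambda x: sup in x,                             # 2. substring match
        lambda x: all(w in x for w in sup.split(" ")),  # 3. every word present
    ):
        candidates = [x for x in names if predicate(x)]
        if candidates:
            return fragment, candidates
    return "", ()
```

The earlier, stricter strategies win when they produce anything, which keeps the most relevant candidates at the front of the cascade.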
3395 3396 @property
3396 3397 def unicode_names(self) -> List[str]:
3397 3398 """List of names of unicode code points that can be completed.
3398 3399
3399 3400 The list is lazily initialized on first access.
3400 3401 """
3401 3402 if self._unicode_names is None:
3402 3403 # delegate to _unicode_name_compute, which only scans assigned ranges
3403 3404 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3409 3410
3410 3411 return self._unicode_names
3411 3412
3412 3413 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3413 3414 names = []
3414 3415 for start,stop in ranges:
3415 3416 for c in range(start, stop) :
3416 3417 try:
3417 3418 names.append(unicodedata.name(chr(c)))
3418 3419 except ValueError:
3419 3420 pass
3420 3421 return names
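The loop in `_unicode_name_compute` can be exercised on a tiny range; the sketch below restates it standalone (the name `unicode_names_in` is hypothetical):

```python
import unicodedata

def unicode_names_in(ranges):
    # Collect the official name of every assigned code point in the ranges.
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:
                # unassigned code points have no name; skip them
                pass
    return names

ascii_names = unicode_names_in([(0x41, 0x44)])  # code points for A, B, C
```

Restricting the scan to known-assigned ranges is what makes the cached property above cheaper than walking all of `range(0, 0x10FFFF + 1)`.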