Misc release process update...
Matthias Bussonnier
@@ -1,3322 +1,3322 b''
1 1 """Completion for IPython.
2 2
3 3 This module started as fork of the rlcompleter module in the Python standard
4 4 library. The original enhancements made to rlcompleter have been sent
5 5 upstream and were accepted as of Python 2.3,
6 6
7 7 This module now supports a wide variety of completion mechanisms, both for
8 8 normal classic Python code and for IPython-specific
9 9 syntax such as magics.
10 10
11 11 Latex and Unicode completion
12 12 ============================
13 13
14 14 IPython and compatible frontends can not only complete your code, but also help
15 15 you input a wide range of characters. In particular, we allow you to insert
16 16 a unicode character using the tab completion mechanism.
17 17
18 18 Forward latex/unicode completion
19 19 --------------------------------
20 20
21 21 Forward completion allows you to easily type a unicode character using its latex
22 22 name or unicode long description. To do so, type a backslash followed by the
23 23 relevant name and press tab:
24 24
25 25
26 26 Using latex completion:
27 27
28 28 .. code::
29 29
30 30 \\alpha<tab>
31 31 Ξ±
32 32
33 33 or using unicode completion:
34 34
35 35
36 36 .. code::
37 37
38 38 \\GREEK SMALL LETTER ALPHA<tab>
39 39 Ξ±
40 40
41 41
42 42 Only valid Python identifiers will complete. Combining characters (like arrows or
43 43 dots) are also available; unlike latex, they need to be put after their
44 44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45 45
46 46 Some browsers are known to display combining characters incorrectly.
47 47
48 48 Backward latex completion
49 49 -------------------------
50 50
51 51 It is sometimes challenging to know how to type a character. If you are using
52 52 IPython, or any compatible frontend, you can prepend a backslash to the character
53 53 and press :kbd:`Tab` to expand it to its latex form.
54 54
55 55 .. code::
56 56
57 57 \\Ξ±<tab>
58 58 \\alpha
59 59
60 60
61 61 Both forward and backward completions can be deactivated by setting the
62 62 :std:configtrait:`Completer.backslash_combining_completions` option to
63 63 ``False``.
64 64
65 65
66 66 Experimental
67 67 ============
68 68
69 69 Starting with IPython 6.0, this module can make use of the Jedi library to
70 70 generate completions both using static analysis of the code and by dynamically
71 71 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
72 72 library for Python. The APIs attached to this new mechanism are unstable and will
73 73 raise unless used in a :any:`provisionalcompleter` context manager.
74 74
75 75 You will find that the following are experimental:
76 76
77 77 - :any:`provisionalcompleter`
78 78 - :any:`IPCompleter.completions`
79 79 - :any:`Completion`
80 80 - :any:`rectify_completions`
81 81
82 82 .. note::
83 83
84 84 better name for :any:`rectify_completions` ?
85 85
86 86 We welcome any feedback on these new APIs, and we also encourage you to try this
87 87 module in debug mode (start IPython with ``--Completer.debug=True``) in order
88 88 to have extra logging information if :any:`jedi` is crashing, or if the current
89 89 IPython completer's pending deprecations are returning results not yet handled
90 90 by :any:`jedi`.
91 91
92 92 Using Jedi for tab completion allows snippets like the following to work without
93 93 having to execute any code:
94 94
95 95 >>> myvar = ['hello', 42]
96 96 ... myvar[1].bi<tab>
97 97
98 98 Tab completion will be able to infer that ``myvar[1]`` is a real number almost
99 99 without executing any code, unlike the deprecated :any:`IPCompleter.greedy`
100 100 option.
101 101
102 102 Be sure to update :any:`jedi` to the latest stable version or to try the
103 103 current development version to get better completions.
104 104
105 105 Matchers
106 106 ========
107 107
108 108 All completion routines are implemented using the unified *Matchers* API.
109 109 The matchers API is provisional and subject to change without notice.
110 110
111 111 The built-in matchers include:
112 112
113 113 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
114 114 - :any:`IPCompleter.magic_matcher`: completions for magics,
115 115 - :any:`IPCompleter.unicode_name_matcher`,
116 116 :any:`IPCompleter.fwd_unicode_matcher`
117 117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
118 118 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
119 119 - :any:`IPCompleter.file_matcher`: paths to files and directories,
120 120 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
121 121 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
122 122 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
123 123 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
124 124 implementation in :any:`InteractiveShell` which uses IPython hooks system
125 125 (`complete_command`) with string dispatch (including regular expressions).
126 126 Unlike other matchers, ``custom_completer_matcher`` will not suppress
127 127 Jedi results, to match behaviour in earlier IPython versions.
128 128
129 129 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list, as in the example below.
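
For example, a matcher using the v1 API (a function from the typed text to a
list of completion strings) could be registered like this. This is a minimal
sketch; the matcher itself and the use of ``get_ipython()`` to reach the
completer instance are illustrative:

.. code-block:: python

    from IPython.core.completer import completion_matcher

    @completion_matcher(identifier="fruit_matcher")
    def fruit_matcher(text):
        # Offer a fixed set of words starting with the current token.
        return [w for w in ("apple", "apricot", "banana") if w.startswith(text)]

    get_ipython().Completer.custom_matchers.append(fruit_matcher)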
130 130
131 131 Matcher API
132 132 -----------
133 133
134 134 Simplifying some details, the ``Matcher`` interface can be described as
135 135
136 136 .. code-block::
137 137
138 138 MatcherAPIv1 = Callable[[str], list[str]]
139 139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
140 140
141 141 Matcher = MatcherAPIv1 | MatcherAPIv2
142 142
143 143 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
144 144 and remains supported as the simplest way of generating completions. This is also
145 145 currently the only API supported by the IPython hooks system `complete_command`.
146 146
147 147 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
148 148 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
149 149 and requires a literal ``2`` for v2 Matchers.
150 150
151 151 Once the API stabilises, future versions may relax the requirement for specifying
152 152 ``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore,
153 153 please do not rely on the presence of ``matcher_api_version`` for any purpose.
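
As an illustration, a v2 matcher receives a :any:`CompletionContext` and returns
a dictionary conforming to :any:`SimpleMatcherResult`. A minimal sketch (the
matcher name and the returned completion are purely illustrative):

.. code-block:: python

    from IPython.core.completer import (
        CompletionContext,
        SimpleCompletion,
        SimpleMatcherResult,
        context_matcher,
    )

    @context_matcher()
    def shout_matcher(context: CompletionContext) -> SimpleMatcherResult:
        token = context.token
        return {
            "completions": [SimpleCompletion(text=token.upper(), type="shout")],
            "suppress": False,
        }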
154 154
155 155 Suppression of competing matchers
156 156 ---------------------------------
157 157
158 158 By default results from all matchers are combined, in the order determined by
159 159 their priority. Matchers can request to suppress results from subsequent
160 160 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
161 161
162 162 When multiple matchers simultaneously request suppression, the results from
163 163 the matcher with the higher priority will be returned.
164 164
165 165 Sometimes it is desirable to suppress most but not all other matchers;
166 166 this can be achieved by adding a list of identifiers of matchers which
167 167 should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.
168 168
169 169 The suppression behaviour is user-configurable via
170 170 :std:configtrait:`IPCompleter.suppress_competing_matchers`.
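
For instance, a matcher that wants to suppress all other matchers except the
file matcher could return a result shaped like this (a sketch; the completion
text is illustrative):

.. code-block:: python

    {
        "completions": [SimpleCompletion("example")],
        "suppress": True,
        "do_not_suppress": {"IPCompleter.file_matcher"},
    }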
171 171 """
172 172
173 173
174 174 # Copyright (c) IPython Development Team.
175 175 # Distributed under the terms of the Modified BSD License.
176 176 #
177 177 # Some of this code originated from rlcompleter in the Python standard library
178 178 # Copyright (C) 2001 Python Software Foundation, www.python.org
179 179
180 180 from __future__ import annotations
181 181 import builtins as builtin_mod
182 182 import enum
183 183 import glob
184 184 import inspect
185 185 import itertools
186 186 import keyword
187 187 import os
188 188 import re
189 189 import string
190 190 import sys
191 191 import tokenize
192 192 import time
193 193 import unicodedata
194 194 import uuid
195 195 import warnings
196 196 from ast import literal_eval
197 197 from collections import defaultdict
198 198 from contextlib import contextmanager
199 199 from dataclasses import dataclass
200 200 from functools import cached_property, partial
201 201 from types import SimpleNamespace
202 202 from typing import (
203 203 Iterable,
204 204 Iterator,
205 205 List,
206 206 Tuple,
207 207 Union,
208 208 Any,
209 209 Sequence,
210 210 Dict,
211 211 Optional,
212 212 TYPE_CHECKING,
213 213 Set,
214 214 Sized,
215 215 TypeVar,
216 216 Literal,
217 217 )
218 218
219 219 from IPython.core.guarded_eval import guarded_eval, EvaluationContext
220 220 from IPython.core.error import TryNext
221 221 from IPython.core.inputtransformer2 import ESC_MAGIC
222 222 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
223 223 from IPython.core.oinspect import InspectColors
224 224 from IPython.testing.skipdoctest import skip_doctest
225 225 from IPython.utils import generics
226 226 from IPython.utils.decorators import sphinx_options
227 227 from IPython.utils.dir2 import dir2, get_real_method
228 228 from IPython.utils.docs import GENERATING_DOCUMENTATION
229 229 from IPython.utils.path import ensure_dir_exists
230 230 from IPython.utils.process import arg_split
231 231 from traitlets import (
232 232 Bool,
233 233 Enum,
234 234 Int,
235 235 List as ListTrait,
236 236 Unicode,
237 237 Dict as DictTrait,
238 238 Union as UnionTrait,
239 239 observe,
240 240 )
241 241 from traitlets.config.configurable import Configurable
242 242
243 243 import __main__
244 244
245 245 # skip module doctests
246 246 __skip_doctest__ = True
247 247
248 248
249 249 try:
250 250 import jedi
251 251 jedi.settings.case_insensitive_completion = False
252 252 import jedi.api.helpers
253 253 import jedi.api.classes
254 254 JEDI_INSTALLED = True
255 255 except ImportError:
256 256 JEDI_INSTALLED = False
257 257
258 258
259 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
259 if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
260 260 from typing import cast
261 261 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
262 262 else:
263 263 from typing import Generic
264 264
265 265 def cast(type_, obj):
266 266 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
267 267 return obj
268 268
269 269 # do not require on runtime
270 270 NotRequired = Tuple # requires Python >=3.11
271 271 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
272 272 Protocol = object # requires Python >=3.8
273 273 TypeAlias = Any # requires Python >=3.10
274 274 TypeGuard = Generic # requires Python >=3.10
275 275 if GENERATING_DOCUMENTATION:
276 276 from typing import TypedDict
277 277
278 278 # -----------------------------------------------------------------------------
279 279 # Globals
280 280 #-----------------------------------------------------------------------------
281 281
282 282 # Ranges where we have most of the valid unicode names. We could be more finely
283 283 # grained, but is it worth it for performance? While unicode has characters in the
284 284 # range 0-0x110000, we seem to have names for only about 10% of those (131808 as I
285 285 # write this). With the ranges below we cover them all, with a density of ~67%;
286 286 # the biggest next gap we could consider only adds about 1% density and there are 600
287 287 # gaps that would need hard coding.
288 288 _UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]
289 289
290 290 # Public API
291 291 __all__ = ["Completer", "IPCompleter"]
292 292
293 293 if sys.platform == 'win32':
294 294 PROTECTABLES = ' '
295 295 else:
296 296 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
297 297
298 298 # Protect against returning an enormous number of completions which the frontend
299 299 # may have trouble processing.
300 300 MATCHES_LIMIT = 500
301 301
302 302 # Completion type reported when no type can be inferred.
303 303 _UNKNOWN_TYPE = "<unknown>"
304 304
305 305 # sentinel value to signal lack of a match
306 306 not_found = object()
307 307
308 308 class ProvisionalCompleterWarning(FutureWarning):
309 309 """
310 310 Exception raised by an experimental feature in this module.
311 311
312 312 Wrap code in :any:`provisionalcompleter` context manager if you
313 313 are certain you want to use an unstable feature.
314 314 """
315 315 pass
316 316
317 317 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
318 318
319 319
320 320 @skip_doctest
321 321 @contextmanager
322 322 def provisionalcompleter(action='ignore'):
323 323 """
324 324 This context manager has to be used in any place where unstable completer
325 325 behavior or APIs may be called.
326 326
327 327 >>> with provisionalcompleter():
328 328 ... completer.do_experimental_things() # works
329 329
330 330 >>> completer.do_experimental_things() # raises.
331 331
332 332 .. note::
333 333
334 334 Unstable
335 335
336 336 By using this context manager you agree that the API in use may change
337 337 without warning, and that you won't complain if it does so.
338 338
339 339 You also understand that, if the API is not to your liking, you should report
340 340 a bug to explain your use case upstream.
341 341
342 342 We'll be happy to get your feedback, feature requests, and improvements on
343 343 any of the unstable APIs!
344 344 """
345 345 with warnings.catch_warnings():
346 346 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
347 347 yield
348 348
349 349
350 350 def has_open_quotes(s):
351 351 """Return whether a string has open quotes.
352 352
353 353 This simply counts whether the number of quote characters of either type in
354 354 the string is odd.
355 355
356 356 Returns
357 357 -------
358 358 If there is an open quote, the quote character is returned. Else, return
359 359 False.
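
Examples
--------
The return value is the still-open quote character, or ``False``:

>>> has_open_quotes('hello "world')
'"'
>>> has_open_quotes("it's")
"'"
>>> has_open_quotes('no open quotes')
False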
360 360 """
361 361 # We check " first, then ', so complex cases with nested quotes will get
362 362 # the " to take precedence.
363 363 if s.count('"') % 2:
364 364 return '"'
365 365 elif s.count("'") % 2:
366 366 return "'"
367 367 else:
368 368 return False
369 369
370 370
371 371 def protect_filename(s, protectables=PROTECTABLES):
372 372 """Escape a string to protect certain characters."""
373 373 if set(s) & set(protectables):
374 374 if sys.platform == "win32":
375 375 return '"' + s + '"'
376 376 else:
377 377 return "".join(("\\" + c if c in protectables else c) for c in s)
378 378 else:
379 379 return s
380 380
381 381
382 382 def expand_user(path:str) -> Tuple[str, bool, str]:
383 383 """Expand ``~``-style usernames in strings.
384 384
385 385 This is similar to :func:`os.path.expanduser`, but it computes and returns
386 386 extra information that will be useful if the input was being used in
387 387 computing completions, and you wish to return the completions with the
388 388 original '~' instead of its expanded value.
389 389
390 390 Parameters
391 391 ----------
392 392 path : str
393 393 String to be expanded. If no ~ is present, the output is the same as the
394 394 input.
395 395
396 396 Returns
397 397 -------
398 398 newpath : str
399 399 Result of ~ expansion in the input path.
400 400 tilde_expand : bool
401 401 Whether any expansion was performed or not.
402 402 tilde_val : str
403 403 The value that ~ was replaced with.
404 404 """
405 405 # Default values
406 406 tilde_expand = False
407 407 tilde_val = ''
408 408 newpath = path
409 409
410 410 if path.startswith('~'):
411 411 tilde_expand = True
412 412 rest = len(path)-1
413 413 newpath = os.path.expanduser(path)
414 414 if rest:
415 415 tilde_val = newpath[:-rest]
416 416 else:
417 417 tilde_val = newpath
418 418
419 419 return newpath, tilde_expand, tilde_val
420 420
421 421
422 422 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
423 423 """Does the opposite of expand_user, with its outputs.
424 424 """
425 425 if tilde_expand:
426 426 return path.replace(tilde_val, '~')
427 427 else:
428 428 return path
429 429
430 430
431 431 def completions_sorting_key(word):
432 432 """key for sorting completions
433 433
434 434 This does several things:
435 435
436 436 - Demote any completions starting with underscores to the end
437 437 - Insert any %magic and %%cellmagic completions in alphabetical order
438 438 by their name
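
For example, magic escapes are stripped for ordering and dunder names are demoted::

    >>> completions_sorting_key('%%timeit')
    (0, 'timeit', 2)
    >>> completions_sorting_key('__init__')
    (2, '__init__', 0)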
439 439 """
440 440 prio1, prio2 = 0, 0
441 441
442 442 if word.startswith('__'):
443 443 prio1 = 2
444 444 elif word.startswith('_'):
445 445 prio1 = 1
446 446
447 447 if word.endswith('='):
448 448 prio1 = -1
449 449
450 450 if word.startswith('%%'):
451 451 # If there's another % in there, this is something else, so leave it alone
452 452 if not "%" in word[2:]:
453 453 word = word[2:]
454 454 prio2 = 2
455 455 elif word.startswith('%'):
456 456 if not "%" in word[1:]:
457 457 word = word[1:]
458 458 prio2 = 1
459 459
460 460 return prio1, word, prio2
461 461
462 462
463 463 class _FakeJediCompletion:
464 464 """
465 465 This is a workaround to communicate to the UI that Jedi has crashed and to
466 466 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
467 467
468 468 Added in IPython 6.0 so should likely be removed for 7.0
469 469
470 470 """
471 471
472 472 def __init__(self, name):
473 473
474 474 self.name = name
475 475 self.complete = name
476 476 self.type = 'crashed'
477 477 self.name_with_symbols = name
478 478 self.signature = ""
479 479 self._origin = "fake"
480 480 self.text = "crashed"
481 481
482 482 def __repr__(self):
483 483 return '<Fake completion object jedi has crashed>'
484 484
485 485
486 486 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
487 487
488 488
489 489 class Completion:
490 490 """
491 491 Completion object used and returned by IPython completers.
492 492
493 493 .. warning::
494 494
495 495 Unstable
496 496
497 497 This function is unstable, API may change without warning.
498 498 It will also raise unless used in the proper context manager.
499 499
500 500 This acts as a middle-ground :any:`Completion` object between the
501 501 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
502 502 object. While Jedi needs a lot of information about the evaluator and how the
503 503 code should be run/inspected, PromptToolkit (and other frontends) mostly
504 504 need user-facing information.
505 505
506 506 - Which range should be replaced by what.
507 507 - Some metadata (like completion type), or meta information to be displayed to
508 508 the user.
509 509
510 510 For debugging purposes we can also store the origin of the completion (``jedi``,
511 511 ``IPython.python_matches``, ``IPython.magics_matches``...).
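
A minimal construction example; since the API is provisional it must run inside
the :any:`provisionalcompleter` context manager::

    with provisionalcompleter():
        c = Completion(start=0, end=3, text='print', type='function')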
512 512 """
513 513
514 514 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
515 515
516 516 def __init__(
517 517 self,
518 518 start: int,
519 519 end: int,
520 520 text: str,
521 521 *,
522 522 type: Optional[str] = None,
523 523 _origin="",
524 524 signature="",
525 525 ) -> None:
526 526 warnings.warn(
527 527 "``Completion`` is a provisional API (as of IPython 6.0). "
528 528 "It may change without warnings. "
529 529 "Use in corresponding context manager.",
530 530 category=ProvisionalCompleterWarning,
531 531 stacklevel=2,
532 532 )
533 533
534 534 self.start = start
535 535 self.end = end
536 536 self.text = text
537 537 self.type = type
538 538 self.signature = signature
539 539 self._origin = _origin
540 540
541 541 def __repr__(self):
542 542 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
543 543 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
544 544
545 545 def __eq__(self, other) -> bool:
546 546 """
547 547 Equality and hash do not hash the type (as some completers may not be
548 548 able to infer the type), but are used to (partially) de-duplicate
549 549 completions.
550 550
551 551 Completely de-duplicating completions is a bit trickier than just
552 552 comparing, as it depends on surrounding text, which Completions are not
553 553 aware of.
554 554 """
555 555 return self.start == other.start and \
556 556 self.end == other.end and \
557 557 self.text == other.text
558 558
559 559 def __hash__(self):
560 560 return hash((self.start, self.end, self.text))
561 561
562 562
563 563 class SimpleCompletion:
564 564 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
565 565
566 566 .. warning::
567 567
568 568 Provisional
569 569
570 570 This class is used to describe the currently supported attributes of
571 571 simple completion items, and any additional implementation details
572 572 should not be relied on. Additional attributes may be included in
573 573 future versions, and the meaning of text disambiguated from the current
574 574 dual meaning of "text to insert" and "text to use as a label".
575 575 """
576 576
577 577 __slots__ = ["text", "type"]
578 578
579 579 def __init__(self, text: str, *, type: Optional[str] = None):
580 580 self.text = text
581 581 self.type = type
582 582
583 583 def __repr__(self):
584 584 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
585 585
586 586
587 587 class _MatcherResultBase(TypedDict):
588 588 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
589 589
590 590 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
591 591 matched_fragment: NotRequired[str]
592 592
593 593 #: Whether to suppress results from all other matchers (True), some
594 594 #: matchers (set of identifiers) or none (False); default is False.
595 595 suppress: NotRequired[Union[bool, Set[str]]]
596 596
597 597 #: Identifiers of matchers which should NOT be suppressed when this matcher
598 598 #: requests to suppress all other matchers; defaults to an empty set.
599 599 do_not_suppress: NotRequired[Set[str]]
600 600
601 601 #: Are completions already ordered and should be left as-is? default is False.
602 602 ordered: NotRequired[bool]
603 603
604 604
605 605 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
606 606 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
607 607 """Result of new-style completion matcher."""
608 608
609 609 # note: TypedDict is added again to the inheritance chain
610 610 # in order to get __orig_bases__ for documentation
611 611
612 612 #: List of candidate completions
613 613 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
614 614
615 615
616 616 class _JediMatcherResult(_MatcherResultBase):
617 617 """Matching result returned by Jedi (will be processed differently)"""
618 618
619 619 #: list of candidate completions
620 620 completions: Iterator[_JediCompletionLike]
621 621
622 622
623 623 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
624 624 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
625 625
626 626
627 627 @dataclass
628 628 class CompletionContext:
629 629 """Completion context provided as an argument to matchers in the Matcher API v2."""
630 630
631 631 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
632 632 # which was not explicitly visible as an argument of the matcher, making any refactor
633 633 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
634 634 # from the completer, and make substituting them in sub-classes easier.
635 635
636 636 #: Relevant fragment of code directly preceding the cursor.
637 637 #: The extraction of token is implemented via splitter heuristic
638 638 #: (following readline behaviour for legacy reasons), which is user configurable
639 639 #: (by switching the greedy mode).
640 640 token: str
641 641
642 642 #: The full available content of the editor or buffer
643 643 full_text: str
644 644
645 645 #: Cursor position in the line (the same for ``full_text`` and ``text``).
646 646 cursor_position: int
647 647
648 648 #: Cursor line in ``full_text``.
649 649 cursor_line: int
650 650
651 651 #: The maximum number of completions that will be used downstream.
652 652 #: Matchers can use this information to abort early.
653 653 #: The built-in Jedi matcher is currently excepted from this limit.
654 654 # If not given, return all possible completions.
655 655 limit: Optional[int]
656 656
657 657 @cached_property
658 658 def text_until_cursor(self) -> str:
659 659 return self.line_with_cursor[: self.cursor_position]
660 660
661 661 @cached_property
662 662 def line_with_cursor(self) -> str:
663 663 return self.full_text.split("\n")[self.cursor_line]
664 664
665 665
666 666 #: Matcher results for API v2.
667 667 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
668 668
669 669
670 670 class _MatcherAPIv1Base(Protocol):
671 671 def __call__(self, text: str) -> List[str]:
672 672 """Call signature."""
673 673 ...
674 674
675 675 #: Used to construct the default matcher identifier
676 676 __qualname__: str
677 677
678 678
679 679 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
680 680 #: API version
681 681 matcher_api_version: Optional[Literal[1]]
682 682
683 683 def __call__(self, text: str) -> List[str]:
684 684 """Call signature."""
685 685 ...
686 686
687 687
688 688 #: Protocol describing Matcher API v1.
689 689 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
690 690
691 691
692 692 class MatcherAPIv2(Protocol):
693 693 """Protocol describing Matcher API v2."""
694 694
695 695 #: API version
696 696 matcher_api_version: Literal[2] = 2
697 697
698 698 def __call__(self, context: CompletionContext) -> MatcherResult:
699 699 """Call signature."""
700 700 ...
701 701
702 702 #: Used to construct the default matcher identifier
703 703 __qualname__: str
704 704
705 705
706 706 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
707 707
708 708
709 709 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
710 710 api_version = _get_matcher_api_version(matcher)
711 711 return api_version == 1
712 712
713 713
714 714 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
715 715 api_version = _get_matcher_api_version(matcher)
716 716 return api_version == 2
717 717
718 718
719 719 def _is_sizable(value: Any) -> TypeGuard[Sized]:
720 720 """Determines whether objects is sizable"""
721 721 return hasattr(value, "__len__")
722 722
723 723
724 724 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
725 725 """Determines whether objects is sizable"""
726 726 return hasattr(value, "__next__")
727 727
728 728
729 729 def has_any_completions(result: MatcherResult) -> bool:
730 730 """Check if any result includes any completions."""
731 731 completions = result["completions"]
732 732 if _is_sizable(completions):
733 733 return len(completions) != 0
734 734 if _is_iterator(completions):
735 735 try:
736 736 old_iterator = completions
737 737 first = next(old_iterator)
738 738 result["completions"] = cast(
739 739 Iterator[SimpleCompletion],
740 740 itertools.chain([first], old_iterator),
741 741 )
742 742 return True
743 743 except StopIteration:
744 744 return False
745 745 raise ValueError(
746 746 "Completions returned by matcher need to be an Iterator or a Sizable"
747 747 )
748 748
749 749
750 750 def completion_matcher(
751 751 *,
752 752 priority: Optional[float] = None,
753 753 identifier: Optional[str] = None,
754 754 api_version: int = 1,
755 755 ):
756 756 """Adds attributes describing the matcher.
757 757
758 758 Parameters
759 759 ----------
760 760 priority : Optional[float]
761 761 The priority of the matcher, which determines the order of execution of matchers.
762 762 Higher priority means that the matcher will be executed first. Defaults to 0.
763 763 identifier : Optional[str]
764 764 identifier of the matcher allowing users to modify the behaviour via traitlets,
765 765 and also used for debugging (will be passed as ``origin`` with the completions).
766 766
767 767 Defaults to matcher function's ``__qualname__`` (for example,
768 768 ``IPCompleter.file_matcher`` for the built-in matcher defined
769 769 as a ``file_matcher`` method of the ``IPCompleter`` class).
770 770 api_version : Optional[int]
771 771 version of the Matcher API used by this matcher.
772 772 Currently supported values are 1 and 2.
773 773 Defaults to 1.
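
Examples
--------
A hypothetical matcher registered with an explicit identifier and priority::

    @completion_matcher(identifier="my_matcher", priority=50)
    def my_matcher(text):
        return [w for w in ("foo", "foobar") if w.startswith(text)]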
774 774 """
775 775
776 776 def wrapper(func: Matcher):
777 777 func.matcher_priority = priority or 0 # type: ignore
778 778 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
779 779 func.matcher_api_version = api_version # type: ignore
780 780 if TYPE_CHECKING:
781 781 if api_version == 1:
782 782 func = cast(MatcherAPIv1, func)
783 783 elif api_version == 2:
784 784 func = cast(MatcherAPIv2, func)
785 785 return func
786 786
787 787 return wrapper
788 788
789 789
790 790 def _get_matcher_priority(matcher: Matcher):
791 791 return getattr(matcher, "matcher_priority", 0)
792 792
793 793
794 794 def _get_matcher_id(matcher: Matcher):
795 795 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
796 796
797 797
798 798 def _get_matcher_api_version(matcher):
799 799 return getattr(matcher, "matcher_api_version", 1)
800 800
801 801
802 802 context_matcher = partial(completion_matcher, api_version=2)
803 803
804 804
805 805 _IC = Iterable[Completion]
806 806
807 807
808 808 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
809 809 """
810 810 Deduplicate a set of completions.
811 811
812 812 .. warning::
813 813
814 814 Unstable
815 815
816 816 This function is unstable, API may change without warning.
817 817
818 818 Parameters
819 819 ----------
820 820 text : str
821 821 text that should be completed.
822 822 completions : Iterator[Completion]
823 823 iterator over the completions to deduplicate
824 824
825 825 Yields
826 826 ------
827 827 `Completions` objects
828 828 Completions coming from multiple sources may be different but end up having
829 829 the same effect when applied to ``text``. If this is the case, this will
830 830 consider completions as equal and only emit the first encountered.
831 831 Not folded into `completions()` yet for debugging purposes, and to detect when
832 832 the IPython completer does return things that Jedi does not, but it should be
833 833 folded in at some point.
834 834 """
835 835 completions = list(completions)
836 836 if not completions:
837 837 return
838 838
839 839 new_start = min(c.start for c in completions)
840 840 new_end = max(c.end for c in completions)
841 841
842 842 seen = set()
843 843 for c in completions:
844 844 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
845 845 if new_text not in seen:
846 846 yield c
847 847 seen.add(new_text)
848 848
849 849
850 850 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
851 851 """
852 852 Rectify a set of completions to all have the same ``start`` and ``end``
853 853
854 854 .. warning::
855 855
856 856 Unstable
857 857
858 858 This function is unstable, API may change without warning.
859 859 It will also raise unless used in the proper context manager.
860 860
861 861 Parameters
862 862 ----------
863 863 text : str
864 864 text that should be completed.
865 865 completions : Iterator[Completion]
866 866 iterator over the completions to rectify
867 867 _debug : bool
868 868 Log failed completion
869 869
870 870 Notes
871 871 -----
872 872 :any:`jedi.api.classes.Completion` objects returned by Jedi may not have the same start and end, though
873 873 the Jupyter Protocol requires them to. This will readjust
874 874 the completions to have the same ``start`` and ``end`` by padding both
875 875 extremities with surrounding text.
876 876
877 877 During stabilisation this should support a ``_debug`` option to log which
878 878 completions are returned by the IPython completer and not found in Jedi, in
879 879 order to make upstream bug reports.
880 880 """
881 881 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
882 882 "It may change without warnings. "
883 883 "Use in corresponding context manager.",
884 884 category=ProvisionalCompleterWarning, stacklevel=2)
885 885
886 886 completions = list(completions)
887 887 if not completions:
888 888 return
889 889 starts = (c.start for c in completions)
890 890 ends = (c.end for c in completions)
891 891
892 892 new_start = min(starts)
893 893 new_end = max(ends)
894 894
895 895 seen_jedi = set()
896 896 seen_python_matches = set()
897 897 for c in completions:
898 898 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
899 899 if c._origin == 'jedi':
900 900 seen_jedi.add(new_text)
901 901 elif c._origin == 'IPCompleter.python_matches':
902 902 seen_python_matches.add(new_text)
903 903 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
904 904 diff = seen_python_matches.difference(seen_jedi)
905 905 if diff and _debug:
906 906 print('IPython.python matches have extras:', diff)
907 907
908 908
909 909 if sys.platform == 'win32':
910 910 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
911 911 else:
912 912 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
913 913
914 914 GREEDY_DELIMS = ' =\r\n'
915 915
916 916
917 917 class CompletionSplitter(object):
918 918 """An object to split an input line in a manner similar to readline.
919 919
920 920 By having our own implementation, we can expose readline-like completion in
921 921 a uniform manner to all frontends. This object only needs to be given the
922 922 line of text to be split and the cursor position on said line, and it
923 923 returns the 'word' to be completed on at the cursor after splitting the
924 924 entire line.
925 925
926 926 What characters are used as splitting delimiters can be controlled by
927 927 setting the ``delims`` attribute (this is a property that internally
928 928 automatically builds the necessary regular expression)"""
929 929
930 930 # Private interface
931 931
932 932 # A string of delimiter characters. The default value makes sense for
933 933 # IPython's most typical usage patterns.
934 934 _delims = DELIMS
935 935
936 936 # The expression (a normal string) to be compiled into a regular expression
937 937 # for actual splitting. We store it as an attribute mostly for ease of
938 938 # debugging, since this type of code can be so tricky to debug.
939 939 _delim_expr = None
940 940
941 941 # The regular expression that does the actual splitting
942 942 _delim_re = None
943 943
944 944 def __init__(self, delims=None):
945 945 delims = CompletionSplitter._delims if delims is None else delims
946 946 self.delims = delims
947 947
948 948 @property
949 949 def delims(self):
950 950 """Return the string of delimiter characters."""
951 951 return self._delims
952 952
953 953 @delims.setter
954 954 def delims(self, delims):
955 955 """Set the delimiters for line splitting."""
956 956 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
957 957 self._delim_re = re.compile(expr)
958 958 self._delims = delims
959 959 self._delim_expr = expr
960 960
961 961 def split_line(self, line, cursor_pos=None):
962 962 """Split a line of text with a cursor at the given position.
963 963 """
964 964 l = line if cursor_pos is None else line[:cursor_pos]
965 965 return self._delim_re.split(l)[-1]
966 966
967 967
968 968
969 969 class Completer(Configurable):
970 970
971 971 greedy = Bool(
972 972 False,
973 973 help="""Activate greedy completion.
974 974
975 975 .. deprecated:: 8.8
976 976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977 977
978 978 When enabled in IPython 8.8 or newer, changes configuration as follows:
979 979
980 980 - ``Completer.evaluation = 'unsafe'``
981 981 - ``Completer.auto_close_dict_keys = True``
982 982 """,
983 983 ).tag(config=True)
984 984
985 985 evaluation = Enum(
986 986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 987 default_value="limited",
988 988 help="""Policy for code evaluation under completion.
989 989
990 990 Successive options allow enabling more eager evaluation for better
991 991 completion suggestions, including for nested dictionaries, nested lists,
992 992 or even results of function calls.
993 993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995 995
996 996 Allowed values are:
997 997
998 998 - ``forbidden``: no evaluation of code is permitted,
999 999 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 1000 no item/attribute evaluation, no access to locals/globals,
1001 1001 no evaluation of any operations or comparisons.
1002 1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 1006 - ``unsafe``: evaluation of all methods and function calls but not of
1007 1007 syntax with side-effects like `del x`,
1008 1008 - ``dangerous``: completely arbitrary evaluation.
1009 1009 """,
1010 1010 ).tag(config=True)
1011 1011
1012 1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 1013 help="Experimental: Use Jedi to generate autocompletions. "
1014 1014 "Default to True if jedi is installed.").tag(config=True)
1015 1015
1016 1016 jedi_compute_type_timeout = Int(default_value=400,
1017 1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 1018 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
1019 1019 performance by preventing jedi from building its cache.
1020 1020 """).tag(config=True)
1021 1021
1022 1022 debug = Bool(default_value=False,
1023 1023 help='Enable debug for the Completer. Mostly print extra '
1024 1024 'information for experimental jedi integration.')\
1025 1025 .tag(config=True)
1026 1026
1027 1027 backslash_combining_completions = Bool(True,
1028 1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 1029 "Includes completion of latex commands, unicode names, and expanding "
1030 1030 "unicode characters back to latex commands.").tag(config=True)
1031 1031
1032 1032 auto_close_dict_keys = Bool(
1033 1033 False,
1034 1034 help="""
1035 1035 Enable auto-closing dictionary keys.
1036 1036
1037 1037 When enabled string keys will be suffixed with a final quote
1038 1038 (matching the opening quote), tuple keys will also receive a
1039 1039 separating comma if needed, and keys which are final will
1040 1040 receive a closing bracket (``]``).
1041 1041 """,
1042 1042 ).tag(config=True)
1043 1043
1044 1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 1045 """Create a new completer for the command line.
1046 1046
1047 1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048 1048
1049 1049 If unspecified, the default namespace where completions are performed
1050 1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 1051 given as dictionaries.
1052 1052
1053 1053 An optional second namespace can be given. This allows the completer
1054 1054 to handle cases where both the local and global scopes need to be
1055 1055 distinguished.
1056 1056 """
1057 1057
1058 1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 1059 # specific namespace or to use __main__.__dict__. This will allow us
1060 1060 # to bind to __main__.__dict__ at completion time, not now.
1061 1061 if namespace is None:
1062 1062 self.use_main_ns = True
1063 1063 else:
1064 1064 self.use_main_ns = False
1065 1065 self.namespace = namespace
1066 1066
1067 1067 # The global namespace, if given, can be bound directly
1068 1068 if global_namespace is None:
1069 1069 self.global_namespace = {}
1070 1070 else:
1071 1071 self.global_namespace = global_namespace
1072 1072
1073 1073 self.custom_matchers = []
1074 1074
1075 1075 super(Completer, self).__init__(**kwargs)
1076 1076
1077 1077 def complete(self, text, state):
1078 1078 """Return the next possible completion for 'text'.
1079 1079
1080 1080 This is called successively with state == 0, 1, 2, ... until it
1081 1081 returns None. The completion should begin with 'text'.
1082 1082
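For example, with an explicit namespace::

    >>> completer = Completer(namespace={'myvar': 1})
    >>> completer.complete('myv', 0)
    'myvar'
    >>> completer.complete('myv', 1) is None
    True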
1083 1083 """
1084 1084 if self.use_main_ns:
1085 1085 self.namespace = __main__.__dict__
1086 1086
1087 1087 if state == 0:
1088 1088 if "." in text:
1089 1089 self.matches = self.attr_matches(text)
1090 1090 else:
1091 1091 self.matches = self.global_matches(text)
1092 1092 try:
1093 1093 return self.matches[state]
1094 1094 except IndexError:
1095 1095 return None
1096 1096
1097 1097 def global_matches(self, text):
1098 1098 """Compute matches when text is a simple name.
1099 1099
1100 1100 Return a list of all keywords, built-in functions and names currently
1101 1101 defined in self.namespace or self.global_namespace that match.
1102 1102
1103 1103 """
1104 1104 matches = []
1105 1105 match_append = matches.append
1106 1106 n = len(text)
1107 1107 for lst in [
1108 1108 keyword.kwlist,
1109 1109 builtin_mod.__dict__.keys(),
1110 1110 list(self.namespace.keys()),
1111 1111 list(self.global_namespace.keys()),
1112 1112 ]:
1113 1113 for word in lst:
1114 1114 if word[:n] == text and word != "__builtins__":
1115 1115 match_append(word)
1116 1116
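# Also match abbreviations of snake_case names built from the first letter of
# each segment, e.g. typing ``d_f`` can complete to ``data_frame``.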
1117 1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 1119 shortened = {
1120 1120 "_".join([sub[0] for sub in word.split("_")]): word
1121 1121 for word in lst
1122 1122 if snake_case_re.match(word)
1123 1123 }
1124 1124 for word in shortened.keys():
1125 1125 if word[:n] == text and word != "__builtins__":
1126 1126 match_append(shortened[word])
1127 1127 return matches
1128 1128
1129 1129 def attr_matches(self, text):
1130 1130 """Compute matches when text contains a dot.
1131 1131
1132 1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 1133 evaluatable in self.namespace or self.global_namespace, it will be
1134 1134 evaluated and its attributes (as revealed by dir()) are used as
1135 1135 possible completions. (For class instances, class members are
1136 1136 also considered.)
1137 1137
1138 1138 WARNING: this can still invoke arbitrary C code, if an object
1139 1139 with a __getattr__ hook is evaluated.
1140 1140
1141 1141 """
1142 1142 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1143 1143 if not m2:
1144 1144 return []
1145 1145 expr, attr = m2.group(1, 2)
1146 1146
1147 1147 obj = self._evaluate_expr(expr)
1148 1148
1149 1149 if obj is not_found:
1150 1150 return []
1151 1151
1152 1152 if self.limit_to__all__ and hasattr(obj, '__all__'):
1153 1153 words = get__all__entries(obj)
1154 1154 else:
1155 1155 words = dir2(obj)
1156 1156
1157 1157 try:
1158 1158 words = generics.complete_object(obj, words)
1159 1159 except TryNext:
1160 1160 pass
1161 1161 except AssertionError:
1162 1162 raise
1163 1163 except Exception:
1164 1164 # Silence errors from completion function
1165 1165 #raise # dbg
1166 1166 pass
1167 1167 # Build match list to return
1168 1168 n = len(attr)
1169 1169 return ["%s.%s" % (expr, w) for w in words if w[:n] == attr]
1170 1170
1171 1171 def _evaluate_expr(self, expr):
1172 1172 obj = not_found
1173 1173 done = False
1174 1174 while not done and expr:
1175 1175 try:
1176 1176 obj = guarded_eval(
1177 1177 expr,
1178 1178 EvaluationContext(
1179 1179 globals=self.global_namespace,
1180 1180 locals=self.namespace,
1181 1181 evaluation=self.evaluation,
1182 1182 ),
1183 1183 )
1184 1184 done = True
1185 1185 except Exception as e:
1186 1186 if self.debug:
1187 1187 print("Evaluation exception", e)
1188 1188 # trim the expression to remove any invalid prefix
1189 1189 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1190 1190 # where parenthesis is not closed.
1191 1191 # TODO: make this faster by reusing parts of the computation?
1192 1192 expr = expr[1:]
1193 1193 return obj
1194 1194
1195 1195 def get__all__entries(obj):
1196 1196 """returns the strings in the __all__ attribute"""
1197 1197 try:
1198 1198 words = getattr(obj, '__all__')
1199 1199 except:
1200 1200 return []
1201 1201
1202 1202 return [w for w in words if isinstance(w, str)]
1203 1203
1204 1204
1205 1205 class _DictKeyState(enum.Flag):
1206 1206 """Represent state of the key match in context of other possible matches.
1207 1207
1208 1208 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1209 1209 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1210 1210 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1211 1211 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | IN_TUPLE}`
1212 1212 """
1213 1213
1214 1214 BASELINE = 0
1215 1215 END_OF_ITEM = enum.auto()
1216 1216 END_OF_TUPLE = enum.auto()
1217 1217 IN_TUPLE = enum.auto()
1218 1218
1219 1219
1220 1220 def _parse_tokens(c):
1221 1221 """Parse tokens even if there is an error."""
1222 1222 tokens = []
1223 1223 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1224 1224 while True:
1225 1225 try:
1226 1226 tokens.append(next(token_generator))
1227 1227 except tokenize.TokenError:
1228 1228 return tokens
1229 1229 except StopIteration:
1230 1230 return tokens
1231 1231
1232 1232
1233 1233 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1234 1234 """Match any valid Python numeric literal in a prefix of dictionary keys.
1235 1235
1236 1236 References:
1237 1237 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1238 1238 - https://docs.python.org/3/library/tokenize.html
1239 1239 """
1240 1240 if prefix[-1].isspace():
1241 1241 # if user typed a space we do not have anything to complete
1242 1242 # even if there was a valid number token before
1243 1243 return None
1244 1244 tokens = _parse_tokens(prefix)
1245 1245 rev_tokens = reversed(tokens)
1246 1246 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1247 1247 number = None
1248 1248 for token in rev_tokens:
1249 1249 if token.type in skip_over:
1250 1250 continue
1251 1251 if number is None:
1252 1252 if token.type == tokenize.NUMBER:
1253 1253 number = token.string
1254 1254 continue
1255 1255 else:
1256 1256 # we did not match a number
1257 1257 return None
1258 1258 if token.type == tokenize.OP:
1259 1259 if token.string == ",":
1260 1260 break
1261 1261 if token.string in {"+", "-"}:
1262 1262 number = token.string + number
1263 1263 else:
1264 1264 return None
1265 1265 return number
1266 1266
1267 1267
1268 1268 _INT_FORMATS = {
1269 1269 "0b": bin,
1270 1270 "0o": oct,
1271 1271 "0x": hex,
1272 1272 }
1273 1273
1274 1274
1275 1275 def match_dict_keys(
1276 1276 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1277 1277 prefix: str,
1278 1278 delims: str,
1279 1279 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1280 1280 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1281 1281 """Used by dict_key_matches, matching the prefix to a list of keys
1282 1282
1283 1283 Parameters
1284 1284 ----------
1285 1285 keys
1286 1286 list of keys in dictionary currently being completed.
1287 1287 prefix
1288 1288 Part of the text already typed by the user. E.g. `mydict[b'fo`
1289 1289 delims
1290 1290 String of delimiters to consider when finding the current key.
1291 1291 extra_prefix : optional
1292 1292 Part of the text already typed in multi-key index cases. E.g. for
1293 1293 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1294 1294
1295 1295 Returns
1296 1296 -------
1297 1297 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1298 1298 ``quote`` being the quote that needs to be used to close the current string,
1299 1299 ``token_start`` the position where the replacement should start occurring, and
1300 1300 ``matched`` a dictionary with completion replacements as keys and values
1301 1301 indicating the state of each key match.
1302 1302 """
1303 1303 prefix_tuple = extra_prefix if extra_prefix else ()
1304 1304
1305 1305 prefix_tuple_size = sum(
1306 1306 [
1307 1307 # for pandas, do not count slices as taking space
1308 1308 not isinstance(k, slice)
1309 1309 for k in prefix_tuple
1310 1310 ]
1311 1311 )
1312 1312 text_serializable_types = (str, bytes, int, float, slice)
1313 1313
1314 1314 def filter_prefix_tuple(key):
1315 1315 # Reject too short keys
1316 1316 if len(key) <= prefix_tuple_size:
1317 1317 return False
1318 1318 # Reject keys which cannot be serialised to text
1319 1319 for k in key:
1320 1320 if not isinstance(k, text_serializable_types):
1321 1321 return False
1322 1322 # Reject keys that do not match the prefix
1323 1323 for k, pt in zip(key, prefix_tuple):
1324 1324 if k != pt and not isinstance(pt, slice):
1325 1325 return False
1326 1326 # All checks passed!
1327 1327 return True
1328 1328
1329 1329 filtered_key_is_final: Dict[
1330 1330 Union[str, bytes, int, float], _DictKeyState
1331 1331 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1332 1332
1333 1333 for k in keys:
1334 1334 # If at least one of the matches is not final, mark as undetermined.
1335 1335 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1336 1336 # `111` appears final on first match but is not final on the second.
1337 1337
1338 1338 if isinstance(k, tuple):
1339 1339 if filter_prefix_tuple(k):
1340 1340 key_fragment = k[prefix_tuple_size]
1341 1341 filtered_key_is_final[key_fragment] |= (
1342 1342 _DictKeyState.END_OF_TUPLE
1343 1343 if len(k) == prefix_tuple_size + 1
1344 1344 else _DictKeyState.IN_TUPLE
1345 1345 )
1346 1346 elif prefix_tuple_size > 0:
1347 1347 # we are completing a tuple but this key is not a tuple,
1348 1348 # so we should ignore it
1349 1349 pass
1350 1350 else:
1351 1351 if isinstance(k, text_serializable_types):
1352 1352 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1353 1353
1354 1354 filtered_keys = filtered_key_is_final.keys()
1355 1355
1356 1356 if not prefix:
1357 1357 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1358 1358
1359 1359 quote_match = re.search("(?:\"|')", prefix)
1360 1360 is_user_prefix_numeric = False
1361 1361
1362 1362 if quote_match:
1363 1363 quote = quote_match.group()
1364 1364 valid_prefix = prefix + quote
1365 1365 try:
1366 1366 prefix_str = literal_eval(valid_prefix)
1367 1367 except Exception:
1368 1368 return "", 0, {}
1369 1369 else:
1370 1370 # If it does not look like a string, let's assume
1371 1371 # we are dealing with a number or variable.
1372 1372 number_match = _match_number_in_dict_key_prefix(prefix)
1373 1373
1374 1374 # We do not want the key matcher to suggest variable names, so we return early:
1375 1375 if number_match is None:
1376 1376 # The alternative would be to assume that the user forgot the quote
1377 1377 # and if the substring matches, suggest adding it at the start.
1378 1378 return "", 0, {}
1379 1379
1380 1380 prefix_str = number_match
1381 1381 is_user_prefix_numeric = True
1382 1382 quote = ""
1383 1383
1384 1384 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1385 1385 token_match = re.search(pattern, prefix, re.UNICODE)
1386 1386 assert token_match is not None # silence mypy
1387 1387 token_start = token_match.start()
1388 1388 token_prefix = token_match.group()
1389 1389
1390 1390 matched: Dict[str, _DictKeyState] = {}
1391 1391
1392 1392 str_key: Union[str, bytes]
1393 1393
1394 1394 for key in filtered_keys:
1395 1395 if isinstance(key, (int, float)):
1396 1396 # User typed a string but this key is a number.
1397 1397 if not is_user_prefix_numeric:
1398 1398 continue
1399 1399 str_key = str(key)
1400 1400 if isinstance(key, int):
1401 1401 int_base = prefix_str[:2].lower()
1402 1402 # if user typed integer using binary/oct/hex notation:
1403 1403 if int_base in _INT_FORMATS:
1404 1404 int_format = _INT_FORMATS[int_base]
1405 1405 str_key = int_format(key)
1406 1406 else:
1407 1407 # User typed a number but this key is not a number.
1408 1408 if is_user_prefix_numeric:
1409 1409 continue
1410 1410 str_key = key
1411 1411 try:
1412 1412 if not str_key.startswith(prefix_str):
1413 1413 continue
1414 1414 except (AttributeError, TypeError, UnicodeError) as e:
1415 1415 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1416 1416 continue
1417 1417
1418 1418 # reformat remainder of key to begin with prefix
1419 1419 rem = str_key[len(prefix_str) :]
1420 1420 # force repr wrapped in '
1421 1421 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1422 1422 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1423 1423 if quote == '"':
1424 1424 # The entered prefix is quoted with ",
1425 1425 # but the match is quoted with '.
1426 1426 # A contained " hence needs escaping for comparison:
1427 1427 rem_repr = rem_repr.replace('"', '\\"')
1428 1428
1429 1429 # then reinsert prefix from start of token
1430 1430 match = "%s%s" % (token_prefix, rem_repr)
1431 1431
1432 1432 matched[match] = filtered_key_is_final[key]
1433 1433 return quote, token_start, matched
1434 1434
1435 1435
1436 1436 def cursor_to_position(text:str, line:int, column:int)->int:
1437 1437 """
1438 1438 Convert the (line,column) position of the cursor in text to an offset in a
1439 1439 string.
1440 1440
1441 1441 Parameters
1442 1442 ----------
1443 1443 text : str
1444 1444 The text in which to calculate the cursor offset
1445 1445 line : int
1446 1446 Line of the cursor; 0-indexed
1447 1447 column : int
1448 1448 Column of the cursor; 0-indexed
1449 1449
1450 1450 Returns
1451 1451 -------
1452 1452 Position of the cursor in ``text``, 0-indexed.
1453 1453
1454 1454 See Also
1455 1455 --------
1456 1456 position_to_cursor : reciprocal of this function
1457 1457
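Examples
--------
With ``line=1, column=1`` pointing at ``'d'``:

>>> cursor_to_position("ab\\ncd", 1, 1)
4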
1458 1458 """
1459 1459 lines = text.split('\n')
1460 1460 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1461 1461
1462 1462 return sum(len(l) + 1 for l in lines[:line]) + column
1463 1463
1464 1464 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1465 1465 """
1466 1466 Convert the position of the cursor in text (0 indexed) to a line
1467 1467 number (0-indexed) and a column number (0-indexed) pair
1468 1468
1469 1469 Position should be a valid position in ``text``.
1470 1470
1471 1471 Parameters
1472 1472 ----------
1473 1473 text : str
1474 1474 The text in which to calculate the cursor offset
1475 1475 offset : int
1476 1476 Position of the cursor in ``text``, 0-indexed.
1477 1477
1478 1478 Returns
1479 1479 -------
1480 1480 (line, column) : (int, int)
1481 1481 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1482 1482
1483 1483 See Also
1484 1484 --------
1485 1485 cursor_to_position : reciprocal of this function
1486 1486
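Examples
--------
Offset 4 points at ``'d'`` in the text below:

>>> position_to_cursor("ab\\ncd", 4)
(1, 1)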
1487 1487 """
1488 1488
1489 1489 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1490 1490
1491 1491 before = text[:offset]
1492 1492 blines = before.split('\n') # ! splitlines trims trailing \n
1493 1493 line = before.count('\n')
1494 1494 col = len(blines[-1])
1495 1495 return line, col
1496 1496
1497 1497
1498 1498 def _safe_isinstance(obj, module, class_name, *attrs):
1499 1499 """Checks if obj is an instance of module.class_name if loaded
1500 1500 """
1501 1501 if module in sys.modules:
1502 1502 m = sys.modules[module]
1503 1503 for attr in [class_name, *attrs]:
1504 1504 m = getattr(m, attr)
1505 1505 return isinstance(obj, m)
1506 1506
1507 1507
1508 1508 @context_matcher()
1509 1509 def back_unicode_name_matcher(context: CompletionContext):
1510 1510 """Match Unicode characters back to Unicode name
1511 1511
1512 1512 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API.
1513 1513 """
1514 1514 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1515 1515 return _convert_matcher_v1_result_to_v2(
1516 1516 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1517 1517 )
1518 1518
1519 1519
1520 1520 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1521 1521 """Match Unicode characters back to Unicode name
1522 1522
1523 1523 This does ``β˜ƒ`` -> ``\\snowman``
1524 1524
1525 1525 Note that snowman is not a valid python3 combining character but will be expanded.
1526 1526 Though it will not be recombined back to the snowman character by the completion machinery.
1527 1527
1528 1528 Neither will this back-complete standard sequences like \\n, \\b ...
1529 1529
1530 1530 .. deprecated:: 8.6
1531 1531 You can use :meth:`back_unicode_name_matcher` instead.
1532 1532
1533 1533 Returns
1534 1534 -------
1535 1535
1536 1536 A tuple with two elements:
1537 1537
1538 1538 - The Unicode character that was matched (preceded by a backslash), or an
1539 1539 empty string,
1540 1540 - a sequence (of length 1) with the name of the matched Unicode character,
1541 1541 preceded by a backslash, or empty if there is no match.
1542 1542 """
1543 1543 if len(text)<2:
1544 1544 return '', ()
1545 1545 maybe_slash = text[-2]
1546 1546 if maybe_slash != '\\':
1547 1547 return '', ()
1548 1548
1549 1549 char = text[-1]
1550 1550 # no expand on quote for completion in strings.
1551 1551 # nor backcomplete standard ascii keys
1552 1552 if char in string.ascii_letters or char in ('"',"'"):
1553 1553 return '', ()
1554 1554 try :
1555 1555 unic = unicodedata.name(char)
1556 1556 return '\\'+char,('\\'+unic,)
1557 1557 except KeyError:
1558 1558 pass
1559 1559 return '', ()
1560 1560
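# Hedged usage sketch (illustrative): returns the matched fragment and its
# Unicode-name expansion.
#
# >>> back_unicode_name_matches("\\β˜ƒ")
# ('\\β˜ƒ', ('\\SNOWMAN',))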
1561 1561
1562 1562 @context_matcher()
1563 1563 def back_latex_name_matcher(context: CompletionContext):
1564 1564 """Match latex characters back to unicode name
1565 1565
1566 1566 Same as :any:`back_latex_name_matches`, but adapted to the new Matcher API.
1567 1567 """
1568 1568 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1569 1569 return _convert_matcher_v1_result_to_v2(
1570 1570 matches, type="latex", fragment=fragment, suppress_if_matches=True
1571 1571 )
1572 1572
1573 1573
1574 1574 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1575 1575 """Match latex characters back to unicode name
1576 1576
1577 1577 This does ``\\β„΅`` -> ``\\aleph``
1578 1578
1579 1579 .. deprecated:: 8.6
1580 1580 You can use :meth:`back_latex_name_matcher` instead.
1581 1581 """
1582 1582 if len(text)<2:
1583 1583 return '', ()
1584 1584 maybe_slash = text[-2]
1585 1585 if maybe_slash != '\\':
1586 1586 return '', ()
1587 1587
1588 1588
1589 1589 char = text[-1]
1590 1590 # no expand on quote for completion in strings.
1591 1591 # nor backcomplete standard ascii keys
1592 1592 if char in string.ascii_letters or char in ('"',"'"):
1593 1593 return '', ()
1594 1594 try :
1595 1595 latex = reverse_latex_symbol[char]
1596 1596 # prefix with '\\' so that the backslash is replaced as well
1597 1597 return '\\'+char,[latex]
1598 1598 except KeyError:
1599 1599 pass
1600 1600 return '', ()
1601 1601
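# Hedged usage sketch (illustrative; assumes the latex symbol table contains
# the usual Greek letters):
#
# >>> back_latex_name_matches("\\Ξ±")
# ('\\Ξ±', ['\\alpha'])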
1602 1602
1603 1603 def _formatparamchildren(parameter) -> str:
1604 1604 """
1605 1605 Get parameter name and value from Jedi Private API
1606 1606
1607 1607 Jedi does not expose a simple way to get `param=value` from its API.
1608 1608
1609 1609 Parameters
1610 1610 ----------
1611 1611 parameter
1612 1612 Jedi's function `Param`
1613 1613
1614 1614 Returns
1615 1615 -------
1616 1616 A string like 'a', 'b=1', '*args', '**kwargs'
1617 1617
1618 1618 """
1619 1619 description = parameter.description
1620 1620 if not description.startswith('param '):
1621 1621 raise ValueError('Jedi function parameter description has changed format. '
1622 1622 'Expected "param ...", found %r.' % description)
1623 1623 return description[6:]
1624 1624
1625 1625 def _make_signature(completion)-> str:
1626 1626 """
1627 1627 Make the signature from a jedi completion
1628 1628
1629 1629 Parameters
1630 1630 ----------
1631 1631 completion : jedi.Completion
1632 1632 the Jedi completion object to build a signature for
1633 1633
1634 1634 Returns
1635 1635 -------
1636 1636 a string consisting of the function signature, with the parentheses but
1637 1637 without the function name, for example:
1638 1638 `(a, *args, b=1, **kwargs)`
1639 1639
1640 1640 """
1641 1641
1642 1642 # it looks like this might work on jedi 0.17
1643 1643 if hasattr(completion, 'get_signatures'):
1644 1644 signatures = completion.get_signatures()
1645 1645 if not signatures:
1646 1646 return '(?)'
1647 1647
1648 1648 c0 = completion.get_signatures()[0]
1649 1649 return '('+c0.to_string().split('(', maxsplit=1)[1]
1650 1650
1651 1651 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1652 1652 for p in signature.defined_names()) if f])
1653 1653
1654 1654
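# Hedged usage sketch (illustrative; assumes ``jedi`` is importable, and the
# exact signature string depends on the Python/jedi version):
#
# >>> import jedi
# >>> comp = jedi.Interpreter("sorted", [{}]).complete()[0]
# >>> _make_signature(comp)                       # doctest: +SKIP
# '(iterable, /, *, key=None, reverse=False)'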
1655 1655 _CompleteResult = Dict[str, MatcherResult]
1656 1656
1657 1657
1658 1658 DICT_MATCHER_REGEX = re.compile(
1659 1659 r"""(?x)
1660 1660 ( # match dict-referring - or any get item object - expression
1661 1661 .+
1662 1662 )
1663 1663 \[ # open bracket
1664 1664 \s* # and optional whitespace
1665 1665 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1666 1666 # and slices
1667 1667 ((?:(?:
1668 1668 (?: # closed string
1669 1669 [uUbB]? # string prefix (r not handled)
1670 1670 (?:
1671 1671 '(?:[^']|(?<!\\)\\')*'
1672 1672 |
1673 1673 "(?:[^"]|(?<!\\)\\")*"
1674 1674 )
1675 1675 )
1676 1676 |
1677 1677 # capture integers and slices
1678 1678 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1679 1679 |
1680 1680 # integer in bin/hex/oct notation
1681 1681 0[bBxXoO]_?(?:\w|\d)+
1682 1682 )
1683 1683 \s*,\s*
1684 1684 )*)
1685 1685 ((?:
1686 1686 (?: # unclosed string
1687 1687 [uUbB]? # string prefix (r not handled)
1688 1688 (?:
1689 1689 '(?:[^']|(?<!\\)\\')*
1690 1690 |
1691 1691 "(?:[^"]|(?<!\\)\\")*
1692 1692 )
1693 1693 )
1694 1694 |
1695 1695 # unfinished integer
1696 1696 (?:[-+]?\d+)
1697 1697 |
1698 1698 # integer in bin/hex/oct notation
1699 1699 0[bBxXoO]_?(?:\w|\d)+
1700 1700 )
1701 1701 )?
1702 1702 $
1703 1703 """
1704 1704 )
1705 1705
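# Hedged sketch of the capture groups (illustrative): for a buffer ending in an
# open subscript, the three groups are the object expression, the already
# closed keys of a tuple key (if any), and the unfinished key prefix.
#
# >>> DICT_MATCHER_REGEX.search("d['a', 'b").groups()
# ('d', "'a', ", "'b")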
1706 1706
1707 1707 def _convert_matcher_v1_result_to_v2(
1708 1708 matches: Sequence[str],
1709 1709 type: str,
1710 1710 fragment: Optional[str] = None,
1711 1711 suppress_if_matches: bool = False,
1712 1712 ) -> SimpleMatcherResult:
1713 1713 """Utility to help with transition"""
1714 1714 result = {
1715 1715 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1716 1716 "suppress": (True if matches else False) if suppress_if_matches else False,
1717 1717 }
1718 1718 if fragment is not None:
1719 1719 result["matched_fragment"] = fragment
1720 1720 return cast(SimpleMatcherResult, result)
1721 1721
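# Hedged usage sketch (illustrative) of the v2 result shape produced above:
#
# >>> res = _convert_matcher_v1_result_to_v2(['%time'], type='magic', fragment='%ti')
# >>> [c.text for c in res['completions']], res['suppress'], res['matched_fragment']
# (['%time'], False, '%ti')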
1722 1722
1723 1723 class IPCompleter(Completer):
1724 1724 """Extension of the completer class with IPython-specific features"""
1725 1725
1726 1726 @observe('greedy')
1727 1727 def _greedy_changed(self, change):
1728 1728 """update the splitter and readline delims when greedy is changed"""
1729 1729 if change["new"]:
1730 1730 self.evaluation = "unsafe"
1731 1731 self.auto_close_dict_keys = True
1732 1732 self.splitter.delims = GREEDY_DELIMS
1733 1733 else:
1734 1734 self.evaluation = "limited"
1735 1735 self.auto_close_dict_keys = False
1736 1736 self.splitter.delims = DELIMS
1737 1737
1738 1738 dict_keys_only = Bool(
1739 1739 False,
1740 1740 help="""
1741 1741 Whether to show dict key matches only.
1742 1742
1743 1743 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1744 1744 """,
1745 1745 )
1746 1746
1747 1747 suppress_competing_matchers = UnionTrait(
1748 1748 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1749 1749 default_value=None,
1750 1750 help="""
1751 1751 Whether to suppress completions from other *Matchers*.
1752 1752
1753 1753 When set to ``None`` (default) the matchers will attempt to auto-detect
1754 1754 whether suppression of other matchers is desirable. For example, when
1755 1755 the line starts with `%` we expect a magic completion
1756 1756 to be the only applicable option, and after ``my_dict['`` we usually
1757 1757 expect a completion with an existing dictionary key.
1758 1758
1759 1759 If you want to disable this heuristic and see completions from all matchers,
1760 1760 set ``IPCompleter.suppress_competing_matchers = False``.
1761 1761 To disable the heuristic for specific matchers provide a dictionary mapping:
1762 1762 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1763 1763
1764 1764 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1765 1765 completions to the set of matchers with the highest priority;
1766 1766 this is equivalent to setting ``IPCompleter.merge_completions = False`` and
1767 1767 can be beneficial for performance, but will sometimes omit relevant
1768 1768 candidates from matchers further down the priority list.
1769 1769 """,
1770 1770 ).tag(config=True)
1771 1771
1772 1772 merge_completions = Bool(
1773 1773 True,
1774 1774 help="""Whether to merge completion results into a single list
1775 1775
1776 1776 If False, only the completion results from the first non-empty
1777 1777 completer will be returned.
1778 1778
1779 1779 As of version 8.6.0, setting the value to ``False`` is an alias for:
1780 1780 ``IPCompleter.suppress_competing_matchers = True``.
1781 1781 """,
1782 1782 ).tag(config=True)
1783 1783
1784 1784 disable_matchers = ListTrait(
1785 1785 Unicode(),
1786 1786 help="""List of matchers to disable.
1787 1787
1788 1788 The list should contain matcher identifiers (see :any:`completion_matcher`).
1789 1789 """,
1790 1790 ).tag(config=True)
1791 1791
1792 1792 omit__names = Enum(
1793 1793 (0, 1, 2),
1794 1794 default_value=2,
1795 1795 help="""Instruct the completer to omit private method names
1796 1796
1797 1797 Specifically, when completing on ``object.<tab>``.
1798 1798
1799 1799 When 2 [default]: all names that start with '_' will be excluded.
1800 1800
1801 1801 When 1: all 'magic' names (``__foo__``) will be excluded.
1802 1802
1803 1803 When 0: nothing will be excluded.
1804 1804 """
1805 1805 ).tag(config=True)
1806 1806 limit_to__all__ = Bool(False,
1807 1807 help="""
1808 1808 DEPRECATED as of version 5.0.
1809 1809
1810 1810 Instruct the completer to use __all__ for the completion
1811 1811
1812 1812 Specifically, when completing on ``object.<tab>``.
1813 1813
1814 1814 When True: only those names in obj.__all__ will be included.
1815 1815
1816 1816 When False [default]: the __all__ attribute is ignored
1817 1817 """,
1818 1818 ).tag(config=True)
1819 1819
1820 1820 profile_completions = Bool(
1821 1821 default_value=False,
1822 1822 help="If True, emit profiling data for completion subsystem using cProfile."
1823 1823 ).tag(config=True)
1824 1824
1825 1825 profiler_output_dir = Unicode(
1826 1826 default_value=".completion_profiles",
1827 1827 help="Template for path at which to output profile data for completions."
1828 1828 ).tag(config=True)
1829 1829
1830 1830 @observe('limit_to__all__')
1831 1831 def _limit_to_all_changed(self, change):
1832 1832 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1833 1833 'value has been deprecated since IPython 5.0, will be made to have '
1834 1834 'no effect and then removed in a future version of IPython.',
1835 1835 UserWarning)
1836 1836
1837 1837 def __init__(
1838 1838 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1839 1839 ):
1840 1840 """IPCompleter() -> completer
1841 1841
1842 1842 Return a completer object.
1843 1843
1844 1844 Parameters
1845 1845 ----------
1846 1846 shell
1847 1847 a pointer to the ipython shell itself. This is needed
1848 1848 because this completer knows about magic functions, and those can
1849 1849 only be accessed via the ipython instance.
1850 1850 namespace : dict, optional
1851 1851 an optional dict where completions are performed.
1852 1852 global_namespace : dict, optional
1853 1853 secondary optional dict for completions, to
1854 1854 handle cases (such as IPython embedded inside functions) where
1855 1855 both Python scopes are visible.
1856 1856 config : Config
1857 1857 traitlets config object
1858 1858 **kwargs
1859 1859 passed to super class unmodified.
1860 1860 """
1861 1861
1862 1862 self.magic_escape = ESC_MAGIC
1863 1863 self.splitter = CompletionSplitter()
1864 1864
1865 1865 # _greedy_changed() depends on splitter and readline being defined:
1866 1866 super().__init__(
1867 1867 namespace=namespace,
1868 1868 global_namespace=global_namespace,
1869 1869 config=config,
1870 1870 **kwargs,
1871 1871 )
1872 1872
1873 1873 # List where completion matches will be stored
1874 1874 self.matches = []
1875 1875 self.shell = shell
1876 1876 # Regexp to split filenames with spaces in them
1877 1877 self.space_name_re = re.compile(r'([^\\] )')
1878 1878 # Hold a local ref. to glob.glob for speed
1879 1879 self.glob = glob.glob
1880 1880
1881 1881 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1882 1882 # buffers, to avoid completion problems.
1883 1883 term = os.environ.get('TERM','xterm')
1884 1884 self.dumb_terminal = term in ['dumb','emacs']
1885 1885
1886 1886 # Special handling of backslashes needed in win32 platforms
1887 1887 if sys.platform == "win32":
1888 1888 self.clean_glob = self._clean_glob_win32
1889 1889 else:
1890 1890 self.clean_glob = self._clean_glob
1891 1891
1892 1892 #regexp to parse docstring for function signature
1893 1893 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1894 1894 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1895 1895 #use this if positional argument name is also needed
1896 1896 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1897 1897
1898 1898 self.magic_arg_matchers = [
1899 1899 self.magic_config_matcher,
1900 1900 self.magic_color_matcher,
1901 1901 ]
1902 1902
1903 1903 # This is set externally by InteractiveShell
1904 1904 self.custom_completers = None
1905 1905
1906 1906 # This is a list of names of unicode characters that can be completed
1907 1907 # into their corresponding unicode value. The list is large, so we
1908 1908 # lazily initialize it on first use. Consuming code should access this
1909 1909 # attribute through the `@unicode_names` property.
1910 1910 self._unicode_names = None
1911 1911
1912 1912 self._backslash_combining_matchers = [
1913 1913 self.latex_name_matcher,
1914 1914 self.unicode_name_matcher,
1915 1915 back_latex_name_matcher,
1916 1916 back_unicode_name_matcher,
1917 1917 self.fwd_unicode_matcher,
1918 1918 ]
1919 1919
1920 1920 if not self.backslash_combining_completions:
1921 1921 for matcher in self._backslash_combining_matchers:
1922 1922 self.disable_matchers.append(_get_matcher_id(matcher))
1923 1923
1924 1924 if not self.merge_completions:
1925 1925 self.suppress_competing_matchers = True
1926 1926
1927 1927 @property
1928 1928 def matchers(self) -> List[Matcher]:
1929 1929 """All active matcher routines for completion"""
1930 1930 if self.dict_keys_only:
1931 1931 return [self.dict_key_matcher]
1932 1932
1933 1933 if self.use_jedi:
1934 1934 return [
1935 1935 *self.custom_matchers,
1936 1936 *self._backslash_combining_matchers,
1937 1937 *self.magic_arg_matchers,
1938 1938 self.custom_completer_matcher,
1939 1939 self.magic_matcher,
1940 1940 self._jedi_matcher,
1941 1941 self.dict_key_matcher,
1942 1942 self.file_matcher,
1943 1943 ]
1944 1944 else:
1945 1945 return [
1946 1946 *self.custom_matchers,
1947 1947 *self._backslash_combining_matchers,
1948 1948 *self.magic_arg_matchers,
1949 1949 self.custom_completer_matcher,
1950 1950 self.dict_key_matcher,
1951 1951 # TODO: convert python_matches to v2 API
1952 1952 self.magic_matcher,
1953 1953 self.python_matches,
1954 1954 self.file_matcher,
1955 1955 self.python_func_kw_matcher,
1956 1956 ]
1957 1957
1958 1958 def all_completions(self, text:str) -> List[str]:
1959 1959 """
1960 1960 Wrapper around the completion methods for the benefit of emacs.
1961 1961 """
1962 1962 prefix = text.rpartition('.')[0]
1963 1963 with provisionalcompleter():
1964 1964 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1965 1965 for c in self.completions(text, len(text))]
1966 1966
1967 1967 return self.complete(text)[1]
1968 1968
1969 1969 def _clean_glob(self, text:str):
1970 1970 return self.glob("%s*" % text)
1971 1971
1972 1972 def _clean_glob_win32(self, text:str):
1973 1973 return [f.replace("\\","/")
1974 1974 for f in self.glob("%s*" % text)]
1975 1975
1976 1976 @context_matcher()
1977 1977 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1978 1978 """Same as :any:`file_matches`, but adopted to new Matcher API."""
1979 1979 matches = self.file_matches(context.token)
1980 1980 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1981 1981 # starts with `/home/`, `C:\`, etc)
1982 1982 return _convert_matcher_v1_result_to_v2(matches, type="path")
1983 1983
1984 1984 def file_matches(self, text: str) -> List[str]:
1985 1985 """Match filenames, expanding ~USER type strings.
1986 1986
1987 1987 Most of the seemingly convoluted logic in this completer is an
1988 1988 attempt to handle filenames with spaces in them. And yet it's not
1989 1989 quite perfect, because Python's readline doesn't expose all of the
1990 1990 GNU readline details needed for this to be done correctly.
1991 1991
1992 1992 For a filename with a space in it, the printed completions will be
1993 1993 only the parts after what's already been typed (instead of the
1994 1994 full completions, as is normally done). I don't think with the
1995 1995 current (as of Python 2.3) Python readline it's possible to do
1996 1996 better.
1997 1997
1998 1998 .. deprecated:: 8.6
1999 1999 You can use :meth:`file_matcher` instead.
2000 2000 """
2001 2001
2002 2002 # chars that require escaping with backslash - i.e. chars
2003 2003 # that readline treats incorrectly as delimiters, but we
2004 2004 # don't want to treat as delimiters in filename matching
2005 2005 # when escaped with backslash
2006 2006 if text.startswith('!'):
2007 2007 text = text[1:]
2008 2008 text_prefix = u'!'
2009 2009 else:
2010 2010 text_prefix = u''
2011 2011
2012 2012 text_until_cursor = self.text_until_cursor
2013 2013 # track strings with open quotes
2014 2014 open_quotes = has_open_quotes(text_until_cursor)
2015 2015
2016 2016 if '(' in text_until_cursor or '[' in text_until_cursor:
2017 2017 lsplit = text
2018 2018 else:
2019 2019 try:
2020 2020 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2021 2021 lsplit = arg_split(text_until_cursor)[-1]
2022 2022 except ValueError:
2023 2023 # typically an unmatched ", or backslash without escaped char.
2024 2024 if open_quotes:
2025 2025 lsplit = text_until_cursor.split(open_quotes)[-1]
2026 2026 else:
2027 2027 return []
2028 2028 except IndexError:
2029 2029 # tab pressed on empty line
2030 2030 lsplit = ""
2031 2031
2032 2032 if not open_quotes and lsplit != protect_filename(lsplit):
2033 2033 # if protectables are found, do matching on the whole escaped name
2034 2034 has_protectables = True
2035 2035 text0,text = text,lsplit
2036 2036 else:
2037 2037 has_protectables = False
2038 2038 text = os.path.expanduser(text)
2039 2039
2040 2040 if text == "":
2041 2041 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2042 2042
2043 2043 # Compute the matches from the filesystem
2044 2044 if sys.platform == 'win32':
2045 2045 m0 = self.clean_glob(text)
2046 2046 else:
2047 2047 m0 = self.clean_glob(text.replace('\\', ''))
2048 2048
2049 2049 if has_protectables:
2050 2050 # If we had protectables, we need to revert our changes to the
2051 2051 # beginning of filename so that we don't double-write the part
2052 2052 # of the filename we have so far
2053 2053 len_lsplit = len(lsplit)
2054 2054 matches = [text_prefix + text0 +
2055 2055 protect_filename(f[len_lsplit:]) for f in m0]
2056 2056 else:
2057 2057 if open_quotes:
2058 2058 # if we have a string with an open quote, we don't need to
2059 2059 # protect the names beyond the quote (and we _shouldn't_, as
2060 2060 # it would cause bugs when the filesystem call is made).
2061 2061 matches = m0 if sys.platform == "win32" else\
2062 2062 [protect_filename(f, open_quotes) for f in m0]
2063 2063 else:
2064 2064 matches = [text_prefix +
2065 2065 protect_filename(f) for f in m0]
2066 2066
2067 2067 # Mark directories in input list by appending '/' to their names.
2068 2068 return [x+'/' if os.path.isdir(x) else x for x in matches]
2069 2069
2070 2070 @context_matcher()
2071 2071 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2072 2072 """Match magics."""
2073 2073 text = context.token
2074 2074 matches = self.magic_matches(text)
2075 2075 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2076 2076 is_magic_prefix = len(text) > 0 and text[0] == "%"
2077 2077 result["suppress"] = is_magic_prefix and bool(result["completions"])
2078 2078 return result
2079 2079
2080 2080 def magic_matches(self, text: str):
2081 2081 """Match magics.
2082 2082
2083 2083 .. deprecated:: 8.6
2084 2084 You can use :meth:`magic_matcher` instead.
2085 2085 """
2086 2086 # Get all shell magics now rather than statically, so magics loaded at
2087 2087 # runtime show up too.
2088 2088 lsm = self.shell.magics_manager.lsmagic()
2089 2089 line_magics = lsm['line']
2090 2090 cell_magics = lsm['cell']
2091 2091 pre = self.magic_escape
2092 2092 pre2 = pre+pre
2093 2093
2094 2094 explicit_magic = text.startswith(pre)
2095 2095
2096 2096 # Completion logic:
2097 2097 # - user gives %%: only do cell magics
2098 2098 # - user gives %: do both line and cell magics
2099 2099 # - no prefix: do both
2100 2100 # In other words, line magics are skipped if the user gives %% explicitly
2101 2101 #
2102 2102 # We also exclude magics that match any currently visible names:
2103 2103 # https://github.com/ipython/ipython/issues/4877, unless the user has
2104 2104 # typed a %:
2105 2105 # https://github.com/ipython/ipython/issues/10754
2106 2106 bare_text = text.lstrip(pre)
2107 2107 global_matches = self.global_matches(bare_text)
2108 2108 if not explicit_magic:
2109 2109 def matches(magic):
2110 2110 """
2111 2111 Filter magics, in particular remove magics that match
2112 2112 a name present in global namespace.
2113 2113 """
2114 2114 return ( magic.startswith(bare_text) and
2115 2115 magic not in global_matches )
2116 2116 else:
2117 2117 def matches(magic):
2118 2118 return magic.startswith(bare_text)
2119 2119
2120 2120 comp = [ pre2+m for m in cell_magics if matches(m)]
2121 2121 if not text.startswith(pre2):
2122 2122 comp += [ pre+m for m in line_magics if matches(m)]
2123 2123
2124 2124 return comp
2125 2125
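# Hedged sketch (illustrative; assuming ``ip = get_ipython()`` -- the actual
# list depends on which magics are loaded; cell magics come first):
#
# >>> ip.Completer.magic_matches('%ti')           # doctest: +SKIP
# ['%%time', '%%timeit', '%time', '%timeit']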
2126 2126 @context_matcher()
2127 2127 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2128 2128 """Match class names and attributes for %config magic."""
2129 2129 # NOTE: uses `line_buffer` equivalent for compatibility
2130 2130 matches = self.magic_config_matches(context.line_with_cursor)
2131 2131 return _convert_matcher_v1_result_to_v2(matches, type="param")
2132 2132
2133 2133 def magic_config_matches(self, text: str) -> List[str]:
2134 2134 """Match class names and attributes for %config magic.
2135 2135
2136 2136 .. deprecated:: 8.6
2137 2137 You can use :meth:`magic_config_matcher` instead.
2138 2138 """
2139 2139 texts = text.strip().split()
2140 2140
2141 2141 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2142 2142 # get all configuration classes
2143 2143 classes = sorted(set([ c for c in self.shell.configurables
2144 2144 if c.__class__.class_traits(config=True)
2145 2145 ]), key=lambda x: x.__class__.__name__)
2146 2146 classnames = [ c.__class__.__name__ for c in classes ]
2147 2147
2148 2148 # return all classnames if config or %config is given
2149 2149 if len(texts) == 1:
2150 2150 return classnames
2151 2151
2152 2152 # match classname
2153 2153 classname_texts = texts[1].split('.')
2154 2154 classname = classname_texts[0]
2155 2155 classname_matches = [ c for c in classnames
2156 2156 if c.startswith(classname) ]
2157 2157
2158 2158 # return matched classes or the matched class with attributes
2159 2159 if texts[1].find('.') < 0:
2160 2160 return classname_matches
2161 2161 elif len(classname_matches) == 1 and \
2162 2162 classname_matches[0] == classname:
2163 2163 cls = classes[classnames.index(classname)].__class__
2164 2164 help = cls.class_get_help()
2165 2165 # strip leading '--' from cl-args:
2166 2166 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2167 2167 return [ attr.split('=')[0]
2168 2168 for attr in help.strip().splitlines()
2169 2169 if attr.startswith(texts[1]) ]
2170 2170 return []
2171 2171
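# Hedged sketch (illustrative; assuming ``ip = get_ipython()``). A trailing
# dot after the class name would complete the class's configurable attributes.
#
# >>> ip.Completer.magic_config_matches('%config IPCompleter')   # doctest: +SKIP
# ['IPCompleter']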
2172 2172 @context_matcher()
2173 2173 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2174 2174 """Match color schemes for %colors magic."""
2175 2175 # NOTE: uses `line_buffer` equivalent for compatibility
2176 2176 matches = self.magic_color_matches(context.line_with_cursor)
2177 2177 return _convert_matcher_v1_result_to_v2(matches, type="param")
2178 2178
2179 2179 def magic_color_matches(self, text: str) -> List[str]:
2180 2180 """Match color schemes for %colors magic.
2181 2181
2182 2182 .. deprecated:: 8.6
2183 2183 You can use :meth:`magic_color_matcher` instead.
2184 2184 """
2185 2185 texts = text.split()
2186 2186 if text.endswith(' '):
2187 2187 # .split() strips off the trailing whitespace. Add '' back
2188 2188 # so that: '%colors ' -> ['%colors', '']
2189 2189 texts.append('')
2190 2190
2191 2191 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2192 2192 prefix = texts[1]
2193 2193 return [ color for color in InspectColors.keys()
2194 2194 if color.startswith(prefix) ]
2195 2195 return []
2196 2196
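# Hedged sketch (illustrative; assuming ``ip = get_ipython()`` -- the
# available schemes may vary between IPython versions):
#
# >>> ip.Completer.magic_color_matches('%colors L')              # doctest: +SKIP
# ['Linux', 'LightBG']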
2197 2197 @context_matcher(identifier="IPCompleter.jedi_matcher")
2198 2198 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2199 2199 matches = self._jedi_matches(
2200 2200 cursor_column=context.cursor_position,
2201 2201 cursor_line=context.cursor_line,
2202 2202 text=context.full_text,
2203 2203 )
2204 2204 return {
2205 2205 "completions": matches,
2206 2206 # static analysis should not suppress other matchers
2207 2207 "suppress": False,
2208 2208 }
2209 2209
2210 2210 def _jedi_matches(
2211 2211 self, cursor_column: int, cursor_line: int, text: str
2212 2212 ) -> Iterator[_JediCompletionLike]:
2213 2213 """
2214 2214 Return a list of :any:`jedi.api.Completion`s object from a ``text`` and
2215 2215 cursor position.
2216 2216
2217 2217 Parameters
2218 2218 ----------
2219 2219 cursor_column : int
2220 2220 column position of the cursor in ``text``, 0-indexed.
2221 2221 cursor_line : int
2222 2222 line position of the cursor in ``text``, 0-indexed
2223 2223 text : str
2224 2224 text to complete
2225 2225
2226 2226 Notes
2227 2227 -----
2228 2228 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2229 2229 object containing a string with the Jedi debug information attached.
2230 2230
2231 2231 .. deprecated:: 8.6
2232 2232 You can use :meth:`_jedi_matcher` instead.
2233 2233 """
2234 2234 namespaces = [self.namespace]
2235 2235 if self.global_namespace is not None:
2236 2236 namespaces.append(self.global_namespace)
2237 2237
2238 2238 completion_filter = lambda x:x
2239 2239 offset = cursor_to_position(text, cursor_line, cursor_column)
2240 2240 # filter output if we are completing for object members
2241 2241 if offset:
2242 2242 pre = text[offset-1]
2243 2243 if pre == '.':
2244 2244 if self.omit__names == 2:
2245 2245 completion_filter = lambda c:not c.name.startswith('_')
2246 2246 elif self.omit__names == 1:
2247 2247 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2248 2248 elif self.omit__names == 0:
2249 2249 completion_filter = lambda x:x
2250 2250 else:
2251 2251 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2252 2252
2253 2253 interpreter = jedi.Interpreter(text[:offset], namespaces)
2254 2254 try_jedi = True
2255 2255
2256 2256 try:
2257 2257 # find the first token in the current tree -- if it is a ' or " then we are in a string
2258 2258 completing_string = False
2259 2259 try:
2260 2260 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2261 2261 except StopIteration:
2262 2262 pass
2263 2263 else:
2264 2264 # note the value may be ', ", or it may also be ''' or """, or
2265 2265 # in some cases, """what/you/typed..., but all of these are
2266 2266 # strings.
2267 2267 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2268 2268
2269 2269 # if we are in a string jedi is likely not the right candidate for
2270 2270 # now. Skip it.
2271 2271 try_jedi = not completing_string
2272 2272 except Exception as e:
2273 2273 # many things can go wrong; we are using a private API, just don't crash.
2274 2274 if self.debug:
2275 2275 print("Error detecting if completing a non-finished string :", e, '|')
2276 2276
2277 2277 if not try_jedi:
2278 2278 return iter([])
2279 2279 try:
2280 2280 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2281 2281 except Exception as e:
2282 2282 if self.debug:
2283 2283 return iter(
2284 2284 [
2285 2285 _FakeJediCompletion(
2286 2286 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2287 2287 % (e)
2288 2288 )
2289 2289 ]
2290 2290 )
2291 2291 else:
2292 2292 return iter([])
2293 2293
2294 2294 @completion_matcher(api_version=1)
2295 2295 def python_matches(self, text: str) -> Iterable[str]:
2296 2296 """Match attributes or global python names"""
2297 2297 if "." in text:
2298 2298 try:
2299 2299 matches = self.attr_matches(text)
2300 2300 if text.endswith('.') and self.omit__names:
2301 2301 if self.omit__names == 1:
2302 2302 # true if txt is _not_ a __ name, false otherwise:
2303 2303 no__name = (lambda txt:
2304 2304 re.match(r'.*\.__.*?__',txt) is None)
2305 2305 else:
2306 2306 # true if txt is _not_ a _ name, false otherwise:
2307 2307 no__name = (lambda txt:
2308 2308 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2309 2309 matches = filter(no__name, matches)
2310 2310 except NameError:
2311 2311 # catches <undefined attributes>.<tab>
2312 2312 matches = []
2313 2313 else:
2314 2314 matches = self.global_matches(text)
2315 2315 return matches
2316 2316
2317 2317 def _default_arguments_from_docstring(self, doc):
2318 2318 """Parse the first line of docstring for call signature.
2319 2319
2320 2320 Docstring should be of the form 'min(iterable[, key=func])\n'.
2321 2321 It can also parse cython docstring of the form
2322 2322 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2323 2323 """
2324 2324 if doc is None:
2325 2325 return []
2326 2326
2327 2327 # only care about the first line
2328 2328 line = doc.lstrip().splitlines()[0]
2329 2329
2330 2330 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2331 2331 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2332 2332 sig = self.docstring_sig_re.search(line)
2333 2333 if sig is None:
2334 2334 return []
2335 2335 # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
2336 2336 sig = sig.groups()[0].split(',')
2337 2337 ret = []
2338 2338 for s in sig:
2339 2339 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2340 2340 ret += self.docstring_kwd_re.findall(s)
2341 2341 return ret
2342 2342
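# Hedged usage sketch (illustrative; assuming ``ip = get_ipython()``):
#
# >>> ip.Completer._default_arguments_from_docstring('min(iterable[, key=func])\n')
# ['key']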
2343 2343 def _default_arguments(self, obj):
2344 2344 """Return the list of default arguments of obj if it is callable,
2345 2345 or empty list otherwise."""
2346 2346 call_obj = obj
2347 2347 ret = []
2348 2348 if inspect.isbuiltin(obj):
2349 2349 pass
2350 2350 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2351 2351 if inspect.isclass(obj):
2352 2352 #for cython embedsignature=True the constructor docstring
2353 2353 #belongs to the object itself not __init__
2354 2354 ret += self._default_arguments_from_docstring(
2355 2355 getattr(obj, '__doc__', ''))
2356 2356 # for classes, check for __init__,__new__
2357 2357 call_obj = (getattr(obj, '__init__', None) or
2358 2358 getattr(obj, '__new__', None))
2359 2359 # for all others, check if they are __call__able
2360 2360 elif hasattr(obj, '__call__'):
2361 2361 call_obj = obj.__call__
2362 2362 ret += self._default_arguments_from_docstring(
2363 2363 getattr(call_obj, '__doc__', ''))
2364 2364
2365 2365 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2366 2366 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2367 2367
2368 2368 try:
2369 2369 sig = inspect.signature(obj)
2370 2370 ret.extend(k for k, v in sig.parameters.items() if
2371 2371 v.kind in _keeps)
2372 2372 except ValueError:
2373 2373 pass
2374 2374
2375 2375 return list(set(ret))
2376 2376
2377 2377 @context_matcher()
2378 2378 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2379 2379 """Match named parameters (kwargs) of the last open function."""
2380 2380 matches = self.python_func_kw_matches(context.token)
2381 2381 return _convert_matcher_v1_result_to_v2(matches, type="param")
2382 2382
2383 2383 def python_func_kw_matches(self, text):
2384 2384 """Match named parameters (kwargs) of the last open function.
2385 2385
2386 2386 .. deprecated:: 8.6
2387 2387 You can use :meth:`python_func_kw_matcher` instead.
2388 2388 """
2389 2389
2390 2390 if "." in text: # a parameter cannot be dotted
2391 2391 return []
2392 2392 try: regexp = self.__funcParamsRegex
2393 2393 except AttributeError:
2394 2394 regexp = self.__funcParamsRegex = re.compile(r'''
2395 2395 '.*?(?<!\\)' | # single quoted strings or
2396 2396 ".*?(?<!\\)" | # double quoted strings or
2397 2397 \w+ | # identifier
2398 2398 \S # other characters
2399 2399 ''', re.VERBOSE | re.DOTALL)
2400 2400 # 1. find the nearest identifier that comes before an unclosed
2401 2401 # parenthesis before the cursor
2402 2402 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2403 2403 tokens = regexp.findall(self.text_until_cursor)
2404 2404 iterTokens = reversed(tokens); openPar = 0
2405 2405
2406 2406 for token in iterTokens:
2407 2407 if token == ')':
2408 2408 openPar -= 1
2409 2409 elif token == '(':
2410 2410 openPar += 1
2411 2411 if openPar > 0:
2412 2412 # found the last unclosed parenthesis
2413 2413 break
2414 2414 else:
2415 2415 return []
2416 2416 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2417 2417 ids = []
2418 2418 isId = re.compile(r'\w+$').match
2419 2419
2420 2420 while True:
2421 2421 try:
2422 2422 ids.append(next(iterTokens))
2423 2423 if not isId(ids[-1]):
2424 2424 ids.pop(); break
2425 2425 if not next(iterTokens) == '.':
2426 2426 break
2427 2427 except StopIteration:
2428 2428 break
2429 2429
2430 2430 # Find all named arguments already assigned to, so as to avoid suggesting
2431 2431 # them again
2432 2432 usedNamedArgs = set()
2433 2433 par_level = -1
2434 2434 for token, next_token in zip(tokens, tokens[1:]):
2435 2435 if token == '(':
2436 2436 par_level += 1
2437 2437 elif token == ')':
2438 2438 par_level -= 1
2439 2439
2440 2440 if par_level != 0:
2441 2441 continue
2442 2442
2443 2443 if next_token != '=':
2444 2444 continue
2445 2445
2446 2446 usedNamedArgs.add(token)
2447 2447
2448 2448 argMatches = []
2449 2449 try:
2450 2450 callableObj = '.'.join(ids[::-1])
2451 2451 namedArgs = self._default_arguments(eval(callableObj,
2452 2452 self.namespace))
2453 2453
2454 2454 # Remove used named arguments from the list, no need to show twice
2455 2455 for namedArg in set(namedArgs) - usedNamedArgs:
2456 2456 if namedArg.startswith(text):
2457 2457 argMatches.append("%s=" %namedArg)
2458 2458 except:
2459 2459 pass
2460 2460
2461 2461 return argMatches
2462 2462
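# Hedged sketch (illustrative; assuming ``ip = get_ipython()``): completing
# inside an open call offers the callable's keyword arguments.
#
# >>> text, matches = ip.Completer.complete(line_buffer='sorted(data, rev')
# >>> 'reverse=' in matches                       # doctest: +SKIP
# True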
2463 2463 @staticmethod
2464 2464 def _get_keys(obj: Any) -> List[Any]:
2465 2465 # Objects can define their own completions by defining an
2466 2466 # _ipython_key_completions_() method.
2467 2467 method = get_real_method(obj, '_ipython_key_completions_')
2468 2468 if method is not None:
2469 2469 return method()
2470 2470
2471 2471 # Special case some common in-memory dict-like types
2472 2472 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2473 2473 try:
2474 2474 return list(obj.keys())
2475 2475 except Exception:
2476 2476 return []
2477 2477 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2478 2478 try:
2479 2479 return list(obj.obj.keys())
2480 2480 except Exception:
2481 2481 return []
2482 2482 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2483 2483 _safe_isinstance(obj, 'numpy', 'void'):
2484 2484 return obj.dtype.names or []
2485 2485 return []
2486 2486
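# Hedged usage sketch (illustrative): plain dicts (and a few other known
# containers) fall back to their keys; numpy structured arrays expose
# ``dtype.names`` instead.
#
# >>> IPCompleter._get_keys({'alpha': 1, 'beta': 2})
# ['alpha', 'beta']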
2487 2487 @context_matcher()
2488 2488 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2489 2489 """Match string keys in a dictionary, after e.g. ``foo[``."""
2490 2490 matches = self.dict_key_matches(context.token)
2491 2491 return _convert_matcher_v1_result_to_v2(
2492 2492 matches, type="dict key", suppress_if_matches=True
2493 2493 )
2494 2494
2495 2495 def dict_key_matches(self, text: str) -> List[str]:
2496 2496 """Match string keys in a dictionary, after e.g. ``foo[``.
2497 2497
2498 2498 .. deprecated:: 8.6
2499 2499 You can use :meth:`dict_key_matcher` instead.
2500 2500 """
2501 2501
2502 2502 # Short-circuit on closed dictionary (regular expression would
2503 2503 # not match anyway, but would take quite a while).
2504 2504 if self.text_until_cursor.strip().endswith("]"):
2505 2505 return []
2506 2506
2507 2507 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2508 2508
2509 2509 if match is None:
2510 2510 return []
2511 2511
2512 2512 expr, prior_tuple_keys, key_prefix = match.groups()
2513 2513
2514 2514 obj = self._evaluate_expr(expr)
2515 2515
2516 2516 if obj is not_found:
2517 2517 return []
2518 2518
2519 2519 keys = self._get_keys(obj)
2520 2520 if not keys:
2521 2521 return keys
2522 2522
2523 2523 tuple_prefix = guarded_eval(
2524 2524 prior_tuple_keys,
2525 2525 EvaluationContext(
2526 2526 globals=self.global_namespace,
2527 2527 locals=self.namespace,
2528 2528 evaluation=self.evaluation,
2529 2529 in_subscript=True,
2530 2530 ),
2531 2531 )
2532 2532
2533 2533 closing_quote, token_offset, matches = match_dict_keys(
2534 2534 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2535 2535 )
2536 2536 if not matches:
2537 2537 return []
2538 2538
2539 2539 # get the cursor position of
2540 2540 # - the text being completed
2541 2541 # - the start of the key text
2542 2542 # - the start of the completion
2543 2543 text_start = len(self.text_until_cursor) - len(text)
2544 2544 if key_prefix:
2545 2545 key_start = match.start(3)
2546 2546 completion_start = key_start + token_offset
2547 2547 else:
2548 2548 key_start = completion_start = match.end()
2549 2549
2550 2550 # grab the leading prefix, to make sure all completions start with `text`
2551 2551 if text_start > key_start:
2552 2552 leading = ''
2553 2553 else:
2554 2554 leading = text[text_start:completion_start]
2555 2555
2556 2556 # append closing quote and bracket as appropriate
2557 2557 # this is *not* appropriate if the opening quote or bracket is outside
2558 2558 # the text given to this method, e.g. `d["""a\nt
2559 2559 can_close_quote = False
2560 2560 can_close_bracket = False
2561 2561
2562 2562 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2563 2563
2564 2564 if continuation.startswith(closing_quote):
2565 2565 # do not close if already closed, e.g. `d['a<tab>'`
2566 2566 continuation = continuation[len(closing_quote) :]
2567 2567 else:
2568 2568 can_close_quote = True
2569 2569
2570 2570 continuation = continuation.strip()
2571 2571
2572 2572 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2573 2573 # handling it is out of scope, so let's avoid appending suffixes.
2574 2574 has_known_tuple_handling = isinstance(obj, dict)
2575 2575
2576 2576 can_close_bracket = (
2577 2577 not continuation.startswith("]") and self.auto_close_dict_keys
2578 2578 )
2579 2579 can_close_tuple_item = (
2580 2580 not continuation.startswith(",")
2581 2581 and has_known_tuple_handling
2582 2582 and self.auto_close_dict_keys
2583 2583 )
2584 2584 can_close_quote = can_close_quote and self.auto_close_dict_keys
2585 2585
2586 2586 # fast path if the closing quote should be appended but no suffix is allowed
2587 2587 if not can_close_quote and not can_close_bracket and closing_quote:
2588 2588 return [leading + k for k in matches]
2589 2589
2590 2590 results = []
2591 2591
2592 2592 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2593 2593
2594 2594 for k, state_flag in matches.items():
2595 2595 result = leading + k
2596 2596 if can_close_quote and closing_quote:
2597 2597 result += closing_quote
2598 2598
2599 2599 if state_flag == end_of_tuple_or_item:
2600 2600 # We do not know which suffix to add,
2601 2601 # e.g. both tuple item and string
2602 2602 # match this item.
2603 2603 pass
2604 2604
2605 2605 if state_flag in end_of_tuple_or_item and can_close_bracket:
2606 2606 result += "]"
2607 2607 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2608 2608 result += ", "
2609 2609 results.append(result)
2610 2610 return results
2611 2611
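# Hedged sketch of the intended behaviour (illustrative, not a doctest):
#
#     In [1]: d = {'hello': None, 'help': None}
#     In [2]: d['he<tab>        # offers the keys hello/help with matching
#                               # quoting; with ``auto_close_dict_keys`` the
#                               # closing quote and bracket are appended too.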
2612 2612 @context_matcher()
2613 2613 def unicode_name_matcher(self, context: CompletionContext):
2614 2614 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2615 2615 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2616 2616 return _convert_matcher_v1_result_to_v2(
2617 2617 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2618 2618 )
2619 2619
2620 2620 @staticmethod
2621 2621 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2622 2622 """Match Latex-like syntax for unicode characters base
2623 2623 on the name of the character.
2624 2624
2625 2625 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2626 2626
2627 2627 Works only on valid python 3 identifiers, or on combining characters that
2628 2628 will combine to form a valid identifier.
2629 2629 """
2630 2630 slashpos = text.rfind('\\')
2631 2631 if slashpos > -1:
2632 2632 s = text[slashpos+1:]
2633 2633 try :
2634 2634 unic = unicodedata.lookup(s)
2635 2635 # allow combining chars
2636 2636 if ('a'+unic).isidentifier():
2637 2637 return '\\'+s,[unic]
2638 2638 except KeyError:
2639 2639 pass
2640 2640 return '', []
2641 2641
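# Hedged usage sketch (illustrative):
#
# >>> IPCompleter.unicode_name_matches('\\GREEK SMALL LETTER ETA')
# ('\\GREEK SMALL LETTER ETA', ['Ξ·'])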
2642 2642 @context_matcher()
2643 2643 def latex_name_matcher(self, context: CompletionContext):
2644 2644 """Match Latex syntax for unicode characters.
2645 2645
2646 2646 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2647 2647 """
2648 2648 fragment, matches = self.latex_matches(context.text_until_cursor)
2649 2649 return _convert_matcher_v1_result_to_v2(
2650 2650 matches, type="latex", fragment=fragment, suppress_if_matches=True
2651 2651 )
2652 2652
2653 2653 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2654 2654 """Match Latex syntax for unicode characters.
2655 2655
2656 2656 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2657 2657
2658 2658 .. deprecated:: 8.6
2659 2659 You can use :meth:`latex_name_matcher` instead.
2660 2660 """
2661 2661 slashpos = text.rfind('\\')
2662 2662 if slashpos > -1:
2663 2663 s = text[slashpos:]
2664 2664 if s in latex_symbols:
2665 2665 # Try to complete a full latex symbol to unicode
2666 2666 # \\alpha -> Ξ±
2667 2667 return s, [latex_symbols[s]]
2668 2668 else:
2669 2669 # If a user has partially typed a latex symbol, give them
2670 2670 # a full list of options \al -> [\aleph, \alpha]
2671 2671 matches = [k for k in latex_symbols if k.startswith(s)]
2672 2672 if matches:
2673 2673 return s, matches
2674 2674 return '', ()
2675 2675
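# Hedged usage sketch (illustrative; assuming ``ip = get_ipython()``):
#
# >>> ip.Completer.latex_matches('\\alpha')       # doctest: +SKIP
# ('\\alpha', ['Ξ±'])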
2676 2676 @context_matcher()
2677 2677 def custom_completer_matcher(self, context):
2678 2678 """Dispatch custom completer.
2679 2679
2680 2680 If a match is found, suppresses all other matchers except for Jedi.
2681 2681 """
2682 2682 matches = self.dispatch_custom_completer(context.token) or []
2683 2683 result = _convert_matcher_v1_result_to_v2(
2684 2684 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2685 2685 )
2686 2686 result["ordered"] = True
2687 2687 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2688 2688 return result
2689 2689
2690 2690 def dispatch_custom_completer(self, text):
2691 2691 """
2692 2692 .. deprecated:: 8.6
2693 2693 You can use :meth:`custom_completer_matcher` instead.
2694 2694 """
2695 2695 if not self.custom_completers:
2696 2696 return
2697 2697
2698 2698 line = self.line_buffer
2699 2699 if not line.strip():
2700 2700 return None
2701 2701
2702 2702 # Create a little structure to pass all the relevant information about
2703 2703 # the current completion to any custom completer.
2704 2704 event = SimpleNamespace()
2705 2705 event.line = line
2706 2706 event.symbol = text
2707 2707 cmd = line.split(None,1)[0]
2708 2708 event.command = cmd
2709 2709 event.text_until_cursor = self.text_until_cursor
2710 2710
2711 2711 # for foo etc, try also to find completer for %foo
2712 2712 if not cmd.startswith(self.magic_escape):
2713 2713 try_magic = self.custom_completers.s_matches(
2714 2714 self.magic_escape + cmd)
2715 2715 else:
2716 2716 try_magic = []
2717 2717
2718 2718 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2719 2719 try_magic,
2720 2720 self.custom_completers.flat_matches(self.text_until_cursor)):
2721 2721 try:
2722 2722 res = c(event)
2723 2723 if res:
2724 2724 # first, try case sensitive match
2725 2725 withcase = [r for r in res if r.startswith(text)]
2726 2726 if withcase:
2727 2727 return withcase
2728 2728 # if none, then case insensitive ones are ok too
2729 2729 text_low = text.lower()
2730 2730 return [r for r in res if r.lower().startswith(text_low)]
2731 2731 except TryNext:
2732 2732 pass
2733 2733 except KeyboardInterrupt:
2734 2734 """
2735 2735 If a custom completer takes too long,
2736 2736 let the keyboard interrupt abort it and return nothing.
2737 2737 """
2738 2738 break
2739 2739
2740 2740 return None
2741 2741
2742 2742 def completions(self, text: str, offset: int)->Iterator[Completion]:
2743 2743 """
2744 2744 Returns an iterator over the possible completions
2745 2745
2746 2746 .. warning::
2747 2747
2748 2748 Unstable
2749 2749
2750 2750 This function is unstable, API may change without warning.
2751 2751 It will also raise unless used in a proper context manager.
2752 2752
2753 2753 Parameters
2754 2754 ----------
2755 2755 text : str
2756 2756 Full text of the current input, multi line string.
2757 2757 offset : int
2758 2758 Integer representing the position of the cursor in ``text``. Offset
2759 2759 is 0-based indexed.
2760 2760
2761 2761 Yields
2762 2762 ------
2763 2763 Completion
2764 2764
2765 2765 Notes
2766 2766 -----
2767 2767 The cursor in a text can either be seen as being "in between"
2768 2768 characters or "on" a character, depending on the interface visible to
2769 2769 the user. For consistency, the cursor being "in between" characters X
2770 2770 and Y is equivalent to the cursor being "on" character Y, that is to say
2771 2771 the character the cursor is on is considered as being after the cursor.
2772 2772
2773 2773 Combining characters may span more than one position in the
2774 2774 text.
2775 2775
2776 2776 .. note::
2777 2777
2778 2778 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2779 2779 fake Completion token to distinguish completions returned by Jedi
2780 2780 from usual IPython completions.
2781 2781
2782 2782 .. note::
2783 2783
2784 2784 Completions are not completely deduplicated yet. If identical
2785 2785 completions are coming from different sources this function does not
2786 2786 ensure that each completion object will only be present once.
2787 2787 """
2788 2788 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2789 2789 "It may change without warnings. "
2790 2790 "Use in corresponding context manager.",
2791 2791 category=ProvisionalCompleterWarning, stacklevel=2)
2792 2792
2793 2793 seen = set()
2794 2794 profiler:Optional[cProfile.Profile]
2795 2795 try:
2796 2796 if self.profile_completions:
2797 2797 import cProfile
2798 2798 profiler = cProfile.Profile()
2799 2799 profiler.enable()
2800 2800 else:
2801 2801 profiler = None
2802 2802
2803 2803 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2804 2804 if c and (c in seen):
2805 2805 continue
2806 2806 yield c
2807 2807 seen.add(c)
2808 2808 except KeyboardInterrupt:
2809 2809 """if completions take too long and users send keyboard interrupt,
2810 2810 do not crash and return ASAP. """
2811 2811 pass
2812 2812 finally:
2813 2813 if profiler is not None:
2814 2814 profiler.disable()
2815 2815 ensure_dir_exists(self.profiler_output_dir)
2816 2816 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2817 2817 print("Writing profiler output to", output_path)
2818 2818 profiler.dump_stats(output_path)
2819 2819
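# Hedged usage sketch (illustrative; assuming ``ip = get_ipython()``):
#
# >>> with provisionalcompleter():
# ...     comps = list(ip.Completer.completions('list.app', 8))
# >>> [c.text for c in comps]                     # doctest: +SKIP
# ['append']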
2820 2820 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2821 2821 """
2822 2822 Core completion method. Same signature as :any:`completions`, with the
2823 2823 extra `timeout` parameter (in seconds).
2824 2824
2825 2825 Computing jedi's completion ``.type`` can be quite expensive (it is a
2826 2826 lazy property) and can require some warm-up, more warm up than just
2827 2827 computing the ``name`` of a completion. The warm-up can be:
2828 2828
2829 2829 - Long warm-up the first time a module is encountered after
2830 2830 install/update: actually build parse/inference tree.
2831 2831
2832 2832 - first time the module is encountered in a session: load tree from
2833 2833 disk.
2834 2834
2835 2835 We don't want to block completions for tens of seconds so we give the
2836 2836 completer a "budget" of ``_timeout`` seconds per invocation to compute
2837 2837 completion types; the completions that have not yet been computed will
2838 2838 be marked as "unknown" and will have a chance to be computed in the next round
2839 2839 as things get cached.
2840 2840
2841 2841 Keep in mind that Jedi is not the only thing processing the completion, so
2842 2842 keep the timeout short-ish; if we take more than 0.3 seconds we still
2843 2843 have lots of processing to do.
2844 2844
2845 2845 """
2846 2846 deadline = time.monotonic() + _timeout
2847 2847
2848 2848 before = full_text[:offset]
2849 2849 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2850 2850
2851 2851 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2852 2852
2853 2853 def is_non_jedi_result(
2854 2854 result: MatcherResult, identifier: str
2855 2855 ) -> TypeGuard[SimpleMatcherResult]:
2856 2856 return identifier != jedi_matcher_id
2857 2857
2858 2858 results = self._complete(
2859 2859 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2860 2860 )
2861 2861
2862 2862 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2863 2863 identifier: result
2864 2864 for identifier, result in results.items()
2865 2865 if is_non_jedi_result(result, identifier)
2866 2866 }
2867 2867
2868 2868 jedi_matches = (
2869 2869 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2870 2870 if jedi_matcher_id in results
2871 2871 else ()
2872 2872 )
2873 2873
2874 2874 iter_jm = iter(jedi_matches)
2875 2875 if _timeout:
2876 2876 for jm in iter_jm:
2877 2877 try:
2878 2878 type_ = jm.type
2879 2879 except Exception:
2880 2880 if self.debug:
2881 2881 print("Error in Jedi getting type of ", jm)
2882 2882 type_ = None
2883 2883 delta = len(jm.name_with_symbols) - len(jm.complete)
2884 2884 if type_ == 'function':
2885 2885 signature = _make_signature(jm)
2886 2886 else:
2887 2887 signature = ''
2888 2888 yield Completion(start=offset - delta,
2889 2889 end=offset,
2890 2890 text=jm.name_with_symbols,
2891 2891 type=type_,
2892 2892 signature=signature,
2893 2893 _origin='jedi')
2894 2894
2895 2895 if time.monotonic() > deadline:
2896 2896 break
2897 2897
2898 2898 for jm in iter_jm:
2899 2899 delta = len(jm.name_with_symbols) - len(jm.complete)
2900 2900 yield Completion(
2901 2901 start=offset - delta,
2902 2902 end=offset,
2903 2903 text=jm.name_with_symbols,
2904 2904 type=_UNKNOWN_TYPE, # don't compute type for speed
2905 2905 _origin="jedi",
2906 2906 signature="",
2907 2907 )
2908 2908
2909 2909 # TODO:
2910 2910 # Suppress this, right now just for debug.
2911 2911 if jedi_matches and non_jedi_results and self.debug:
2912 2912 some_start_offset = before.rfind(
2913 2913 next(iter(non_jedi_results.values()))["matched_fragment"]
2914 2914 )
2915 2915 yield Completion(
2916 2916 start=some_start_offset,
2917 2917 end=offset,
2918 2918 text="--jedi/ipython--",
2919 2919 _origin="debug",
2920 2920 type="none",
2921 2921 signature="",
2922 2922 )
2923 2923
2924 2924 ordered: List[Completion] = []
2925 2925 sortable: List[Completion] = []
2926 2926
2927 2927 for origin, result in non_jedi_results.items():
2928 2928 matched_text = result["matched_fragment"]
2929 2929 start_offset = before.rfind(matched_text)
2930 2930 is_ordered = result.get("ordered", False)
2931 2931 container = ordered if is_ordered else sortable
2932 2932
2933 2933 # I'm unsure if this is always true, so let's assert and see if it
2934 2934 # crashes
2935 2935 assert before.endswith(matched_text)
2936 2936
2937 2937 for simple_completion in result["completions"]:
2938 2938 completion = Completion(
2939 2939 start=start_offset,
2940 2940 end=offset,
2941 2941 text=simple_completion.text,
2942 2942 _origin=origin,
2943 2943 signature="",
2944 2944 type=simple_completion.type or _UNKNOWN_TYPE,
2945 2945 )
2946 2946 container.append(completion)
2947 2947
2948 2948 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2949 2949 :MATCHES_LIMIT
2950 2950 ]
2951 2951
2952 2952 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2953 2953 """Find completions for the given text and line context.
2954 2954
2955 2955 Note that both the text and the line_buffer are optional, but at least
2956 2956 one of them must be given.
2957 2957
2958 2958 Parameters
2959 2959 ----------
2960 2960 text : string, optional
2961 2961 Text to perform the completion on. If not given, the line buffer
2962 2962 is split using the instance's CompletionSplitter object.
2963 2963 line_buffer : string, optional
2964 2964 If not given, the completer attempts to obtain the current line
2965 2965 buffer via readline. This keyword allows clients which are
2966 2966 requesting text completions in non-readline contexts to inform
2967 2967 the completer of the entire text.
2968 2968 cursor_pos : int, optional
2969 2969 Index of the cursor in the full line buffer. Should be provided by
2970 2970 remote frontends where the kernel has no access to frontend state.
2971 2971
2972 2972 Returns
2973 2973 -------
2974 2974 Tuple of two items:
2975 2975 text : str
2976 2976 Text that was actually used in the completion.
2977 2977 matches : list
2978 2978 A list of completion matches.
2979 2979
2980 2980 Notes
2981 2981 -----
2982 2982 This API is likely to be deprecated and replaced by
2983 2983 :any:`IPCompleter.completions` in the future.
2984 2984
2985 2985 """
2986 2986 warnings.warn('`Completer.complete` is pending deprecation since '
2987 2987 'IPython 6.0 and will be replaced by `Completer.completions`.',
2988 2988 PendingDeprecationWarning)
2989 2989 # potential todo: fold the 3rd throwaway argument of _complete
2990 2990 # into the first 2.
2991 2991 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2992 2992 # TODO: should we deprecate now, or does it stay?
2993 2993
2994 2994 results = self._complete(
2995 2995 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2996 2996 )
2997 2997
2998 2998 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2999 2999
3000 3000 return self._arrange_and_extract(
3001 3001 results,
3002 3002 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3003 3003 skip_matchers={jedi_matcher_id},
3004 3004 # this API does not support different start/end positions (fragments of token).
3005 3005 abort_if_offset_changes=True,
3006 3006 )
3007 3007
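# Hedged usage sketch (illustrative; assuming ``ip = get_ipython()``):
#
# >>> text, matches = ip.Completer.complete(text='ran')
# >>> text, 'range' in matches                    # doctest: +SKIP
# ('ran', True)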
3008 3008 def _arrange_and_extract(
3009 3009 self,
3010 3010 results: Dict[str, MatcherResult],
3011 3011 skip_matchers: Set[str],
3012 3012 abort_if_offset_changes: bool,
3013 3013 ):
3014 3014
3015 3015 sortable: List[AnyMatcherCompletion] = []
3016 3016 ordered: List[AnyMatcherCompletion] = []
3017 3017 most_recent_fragment = None
3018 3018 for identifier, result in results.items():
3019 3019 if identifier in skip_matchers:
3020 3020 continue
3021 3021 if not result["completions"]:
3022 3022 continue
3023 3023 if not most_recent_fragment:
3024 3024 most_recent_fragment = result["matched_fragment"]
3025 3025 if (
3026 3026 abort_if_offset_changes
3027 3027 and result["matched_fragment"] != most_recent_fragment
3028 3028 ):
3029 3029 break
3030 3030 if result.get("ordered", False):
3031 3031 ordered.extend(result["completions"])
3032 3032 else:
3033 3033 sortable.extend(result["completions"])
3034 3034
3035 3035 if not most_recent_fragment:
3036 3036 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3037 3037
3038 3038 return most_recent_fragment, [
3039 3039 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3040 3040 ]
3041 3041
3042 3042 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3043 3043 full_text=None) -> _CompleteResult:
3044 3044 """
3045 3045 Like complete, but can also return raw Jedi completions as well as the
3046 3046 origin of the completion text. This could (and should) be made much
3047 3047 cleaner, but that will be simpler once we drop the old (and stateful)
3048 3048 :any:`complete` API.
3049 3049 
3050 3050 With the current provisional API, cursor_pos acts (depending on the
3051 3051 caller) either as the offset in ``text`` or ``line_buffer``, or as the
3052 3052 ``column`` when passing multiline strings; it could/should be renamed,
3053 3053 but that would add extra noise.
3054 3054
3055 3055 Parameters
3056 3056 ----------
3057 3057 cursor_line
3058 3058 Index of the line the cursor is on. 0 indexed.
3059 3059 cursor_pos
3060 3060 Position of the cursor in the current line/line_buffer/text. 0
3061 3061 indexed.
3062 3062 line_buffer : optional, str
3063 3063 The current line the cursor is in; this exists mostly for legacy
3064 3064 reasons, as readline could only give us the single current line.
3065 3065 Prefer `full_text`.
3066 3066 text : str
3067 3067 The current "token" the cursor is in, also mostly for historical
3068 3068 reasons, as the completer would trigger only after the current line
3069 3069 was parsed.
3070 3070 full_text : str
3071 3071 Full text of the current cell.
3072 3072
3073 3073 Returns
3074 3074 -------
3075 3075 An ordered dictionary where keys are identifiers of completion
3076 3076 matchers and values are ``MatcherResult``s.
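
Examples
--------
A rough, illustrative sketch only (``completer`` stands for an
``IPCompleter`` instance; which matcher identifiers appear depends on
configuration)::

    results = completer._complete(
        full_text="os.pa", cursor_line=0, cursor_pos=5
    )
    for matcher_id, result in results.items():
        print(matcher_id, len(result["completions"]))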
3077 3077 """
3078 3078
3079 3079 # if the cursor position isn't given, the only sane assumption we can
3080 3080 # make is that it's at the end of the line (the common case)
3081 3081 if cursor_pos is None:
3082 3082 cursor_pos = len(line_buffer) if text is None else len(text)
3083 3083
3084 3084 if self.use_main_ns:
3085 3085 self.namespace = __main__.__dict__
3086 3086
3087 3087 # if text is either None or an empty string, rely on the line buffer
3088 3088 if (not line_buffer) and full_text:
3089 3089 line_buffer = full_text.split('\n')[cursor_line]
3090 3090 if not text: # issue #11508: check line_buffer before calling split_line
3091 3091 text = (
3092 3092 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3093 3093 )
3094 3094
3095 3095 # If no line buffer is given, assume the input text is all there was
3096 3096 if line_buffer is None:
3097 3097 line_buffer = text
3098 3098
3099 3099 # deprecated - do not use `line_buffer` in new code.
3100 3100 self.line_buffer = line_buffer
3101 3101 self.text_until_cursor = self.line_buffer[:cursor_pos]
3102 3102
3103 3103 if not full_text:
3104 3104 full_text = line_buffer
3105 3105
3106 3106 context = CompletionContext(
3107 3107 full_text=full_text,
3108 3108 cursor_position=cursor_pos,
3109 3109 cursor_line=cursor_line,
3110 3110 token=text,
3111 3111 limit=MATCHES_LIMIT,
3112 3112 )
3113 3113
3114 3114 # Start with a clean slate of completions
3115 3115 results: Dict[str, MatcherResult] = {}
3116 3116
3117 3117 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3118 3118
3119 3119 suppressed_matchers: Set[str] = set()
3120 3120
3121 3121 matchers = {
3122 3122 _get_matcher_id(matcher): matcher
3123 3123 for matcher in sorted(
3124 3124 self.matchers, key=_get_matcher_priority, reverse=True
3125 3125 )
3126 3126 }
3127 3127
3128 3128 for matcher_id, matcher in matchers.items():
3130 3130
3131 3131 if matcher_id in self.disable_matchers:
3132 3132 continue
3133 3133
3134 3134 if matcher_id in results:
3135 3135 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3136 3136
3137 3137 if matcher_id in suppressed_matchers:
3138 3138 continue
3139 3139
3140 3140 result: MatcherResult
3141 3141 try:
3142 3142 if _is_matcher_v1(matcher):
3143 3143 result = _convert_matcher_v1_result_to_v2(
3144 3144 matcher(text), type=_UNKNOWN_TYPE
3145 3145 )
3146 3146 elif _is_matcher_v2(matcher):
3147 3147 result = matcher(context)
3148 3148 else:
3149 3149 api_version = _get_matcher_api_version(matcher)
3150 3150 raise ValueError(f"Unsupported API version {api_version}")
3151 3151 except:
3152 3152 # Show the ugly traceback if the matcher causes an
3153 3153 # exception, but do NOT crash the kernel!
3154 3154 sys.excepthook(*sys.exc_info())
3155 3155 continue
3156 3156
3157 3157 # default the matched fragment to the current token if the matcher did not set one.
3158 3158 result["matched_fragment"] = result.get("matched_fragment", context.token)
3159 3159
3160 3160 if not suppressed_matchers:
3161 3161 suppression_recommended: Union[bool, Set[str]] = result.get(
3162 3162 "suppress", False
3163 3163 )
3164 3164
3165 3165 suppression_config = (
3166 3166 self.suppress_competing_matchers.get(matcher_id, None)
3167 3167 if isinstance(self.suppress_competing_matchers, dict)
3168 3168 else self.suppress_competing_matchers
3169 3169 )
3170 3170 should_suppress = (
3171 3171 (suppression_config is True)
3172 3172 or (suppression_recommended and (suppression_config is not False))
3173 3173 ) and has_any_completions(result)
3174 3174
3175 3175 if should_suppress:
3176 3176 suppression_exceptions: Set[str] = result.get(
3177 3177 "do_not_suppress", set()
3178 3178 )
3179 3179 if isinstance(suppression_recommended, Iterable):
3180 3180 to_suppress = set(suppression_recommended)
3181 3181 else:
3182 3182 to_suppress = set(matchers)
3183 3183 suppressed_matchers = to_suppress - suppression_exceptions
3184 3184
3185 3185 new_results = {}
3186 3186 for previous_matcher_id, previous_result in results.items():
3187 3187 if previous_matcher_id not in suppressed_matchers:
3188 3188 new_results[previous_matcher_id] = previous_result
3189 3189 results = new_results
3190 3190
3191 3191 results[matcher_id] = result
3192 3192
3193 3193 _, matches = self._arrange_and_extract(
3194 3194 results,
3195 3195 # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission?
3196 3196 # If it was an omission, we can remove the filtering step; otherwise remove this comment.
3197 3197 skip_matchers={jedi_matcher_id},
3198 3198 abort_if_offset_changes=False,
3199 3199 )
3200 3200
3201 3201 # populate legacy stateful API
3202 3202 self.matches = matches
3203 3203
3204 3204 return results
3205 3205
3206 3206 @staticmethod
3207 3207 def _deduplicate(
3208 3208 matches: Sequence[AnyCompletion],
3209 3209 ) -> Iterable[AnyCompletion]:
3210 3210 filtered_matches: Dict[str, AnyCompletion] = {}
3211 3211 for match in matches:
3212 3212 text = match.text
3213 3213 if (
3214 3214 text not in filtered_matches
3215 3215 or filtered_matches[text].type == _UNKNOWN_TYPE
3216 3216 ):
3217 3217 filtered_matches[text] = match
3218 3218
3219 3219 return filtered_matches.values()
3220 3220
3221 3221 @staticmethod
3222 3222 def _sort(matches: Sequence[AnyCompletion]):
3223 3223 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3224 3224
3225 3225 @context_matcher()
3226 3226 def fwd_unicode_matcher(self, context: CompletionContext):
3227 3227 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3228 3228 # TODO: use `context.limit` to terminate early once we matched the maximum
3229 3229 # number that will be used downstream; it can be added as an optional argument
3230 3230 # to `fwd_unicode_match(text: str, limit: int = None)`, or we could re-implement it here.
3231 3231 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3232 3232 return _convert_matcher_v1_result_to_v2(
3233 3233 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3234 3234 )
3235 3235
3236 3236 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3237 3237 """
3238 3238 Forward match a string starting with a backslash with a list of
3239 3239 potential Unicode completions.
3240 3240
3241 3241 Will compute list of Unicode character names on first call and cache it.
3242 3242
3243 3243 .. deprecated:: 8.6
3244 3244 You can use :meth:`fwd_unicode_matcher` instead.
3245 3245
3246 3246 Returns
3247 3247 -------
3248 3248 A tuple with:
3249 3249 - matched text (empty if no matches)
3250 3250 - list of potential completions (empty tuple if no matches)
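
Examples
--------
Illustrative sketch (``ip`` is assumed to stand for an ``InteractiveShell``
instance; the candidate list is long and only partially shown)::

    text, matches = ip.Completer.fwd_unicode_match("\\GREEK SMALL LETTER AL")
    # text == 'GREEK SMALL LETTER AL'
    # 'GREEK SMALL LETTER ALPHA' is expected to be among the matches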
3251 3251 """
3252 3252 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call.
3253 3253 # We could do a faster match using a Trie.
3254 3254
3255 3255 # Using pygtrie the following seem to work:
3256 3256
3257 3257 # s = PrefixSet()
3258 3258
3259 3259 # for c in range(0,0x10FFFF + 1):
3260 3260 # try:
3261 3261 # s.add(unicodedata.name(chr(c)))
3262 3262 # except ValueError:
3263 3263 # pass
3264 3264 # [''.join(k) for k in s.iter(prefix)]
3265 3265
3266 3266 # But this needs to be timed, and it adds an extra dependency.
3267 3267
3268 3268 slashpos = text.rfind('\\')
3269 3269 # if text contains a backslash
3270 3270 if slashpos > -1:
3271 3271 # PERF: It's important that we don't access self._unicode_names
3272 3272 # until we're inside this if-block. _unicode_names is lazily
3273 3273 # initialized, and it takes a user-noticeable amount of time to
3274 3274 # initialize it, so we don't want to initialize it unless we're
3275 3275 # actually going to use it.
3276 3276 s = text[slashpos + 1 :]
3277 3277 sup = s.upper()
3278 3278 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3279 3279 if candidates:
3280 3280 return s, candidates
3281 3281 candidates = [x for x in self.unicode_names if sup in x]
3282 3282 if candidates:
3283 3283 return s, candidates
3284 3284 splitsup = sup.split(" ")
3285 3285 candidates = [
3286 3286 x for x in self.unicode_names if all(u in x for u in splitsup)
3287 3287 ]
3288 3288 if candidates:
3289 3289 return s, candidates
3290 3290
3291 3291 return "", ()
3292 3292
3293 3293 # if text contains no backslash
3294 3294 else:
3295 3295 return '', ()
3296 3296
3297 3297 @property
3298 3298 def unicode_names(self) -> List[str]:
3299 3299 """List of names of unicode code points that can be completed.
3300 3300
3301 3301 The list is lazily initialized on first access.
3302 3302 """
3303 3303 if self._unicode_names is None:
3310 3310 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3311 3311
3312 3312 return self._unicode_names
3313 3313
3314 3314 def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
3315 3315 names = []
3316 3316 for start, stop in ranges:
3317 3317 for c in range(start, stop):
3318 3318 try:
3319 3319 names.append(unicodedata.name(chr(c)))
3320 3320 except ValueError:
3321 3321 pass
3322 3322 return names
@@ -1,252 +1,259 b''
1 1 # Simple tool to help for release
2 2 # when releasing with bash, simple source it to get asked questions.
3 3
4 4 # misc check before starting
5
6 python -c 'import keyring'
7 python -c 'import twine'
8 python -c 'import sphinx'
9 python -c 'import sphinx_rtd_theme'
10 python -c 'import pytest'
11 python -c 'import build'
12
13
14 5 BLACK=$(tput setaf 1)
15 6 RED=$(tput setaf 1)
16 7 GREEN=$(tput setaf 2)
17 8 YELLOW=$(tput setaf 3)
18 9 BLUE=$(tput setaf 4)
19 10 MAGENTA=$(tput setaf 5)
20 11 CYAN=$(tput setaf 6)
21 12 WHITE=$(tput setaf 7)
22 13 NOR=$(tput sgr0)
23 14
24 15
16 echo "Checking all tools are installed..."
17
18 python -c 'import keyring'
19 python -c 'import twine'
20 python -c 'import sphinx'
21 python -c 'import sphinx_rtd_theme'
22 python -c 'import pytest'
23 python -c 'import build'
24 # these are necessary for building the docs
25 echo "Checking imports for docs"
26 python -c 'import numpy'
27 python -c 'import matplotlib'
28
29
30
31
25 32 echo "Will use $BLUE'$EDITOR'$NOR to edit files when necessary"
26 33 echo -n "PREV_RELEASE (X.y.z) [$PREV_RELEASE]: "
27 34 read input
28 35 PREV_RELEASE=${input:-$PREV_RELEASE}
29 36 echo -n "MILESTONE (X.y) [$MILESTONE]: "
30 37 read input
31 38 MILESTONE=${input:-$MILESTONE}
32 39 echo -n "VERSION (X.y.z) [$VERSION]:"
33 40 read input
34 41 VERSION=${input:-$VERSION}
35 42 echo -n "BRANCH (main|X.y) [$BRANCH]:"
36 43 read input
37 44 BRANCH=${input:-$BRANCH}
38 45
39 46 ask_section(){
40 47 echo
41 48 echo $BLUE"$1"$NOR
42 49 echo -n $GREEN"Press Enter to continue, S to skip: "$NOR
43 50 if [ "$ZSH_NAME" = "zsh" ] ; then
44 51 read -k1 value
45 52 value=${value%$'\n'}
46 53 else
47 54 read -n1 value
48 55 fi
49 56 if [ -z "$value" ] || [ $value = 'y' ]; then
50 57 return 0
51 58 fi
52 59 return 1
53 60 }
54 61
55 62
56 63 maybe_edit(){
57 64 echo
58 65 echo $BLUE"$1"$NOR
59 66 echo -n $GREEN"Press ${BLUE}e$GREEN to Edit ${BLUE}$1$GREEN, any other keys to skip: "$NOR
60 67 if [ "$ZSH_NAME" = "zsh" ] ; then
61 68 read -k1 value
62 69 value=${value%$'\n'}
63 70 else
64 71 read -n1 value
65 72 fi
66 73
67 74 echo
68 75 if [ $value = 'e' ] ; then
69 76 $=EDITOR $1
70 77 fi
71 78 }
72 79
73 80
74 81
75 82 echo
76 83 if ask_section "Updating what's new with information from docs/source/whatsnew/pr"
77 84 then
78 85 python tools/update_whatsnew.py
79 86
80 87 echo
81 88 echo $BLUE"please move the contents of "docs/source/whatsnew/development.rst" to version-X.rst"$NOR
82 89 echo $GREEN"Press enter to continue"$NOR
83 90 read
84 91 fi
85 92
86 93 if ask_section "Gen Stats, and authors"
87 94 then
88 95
89 96 echo
90 97 echo $BLUE"here are all the authors that contributed to this release:"$NOR
91 98 git log --format="%aN <%aE>" $PREV_RELEASE... | sort -u -f
92 99
93 100 echo
94 101 echo $BLUE"If you see any duplicates, cancel (Ctrl-C), then edit .mailmap."$NOR
95 102 echo $GREEN"Press enter to continue:"$NOR
96 103 read
97 104
98 105 echo $BLUE"generating stats"$NOR
99 106 python tools/github_stats.py --milestone $MILESTONE > stats.rst
100 107
101 108 echo $BLUE"stats.rst files generated."$NOR
102 109 echo $GREEN"Please merge it with the right file (github-stats-X.rst) and commit."$NOR
103 110 echo $GREEN"press enter to continue."$NOR
104 111 read
105 112
106 113 fi
107 114
108 115 if ask_section "Generate API difference (using frapuccino)"
109 116 then
110 117 echo $BLUE"Checking out $PREV_RELEASE"$NOR
111 118 git checkout $PREV_RELEASE
112 119 sleep 1
113 120 echo $BLUE"Saving API to file $PREV_RELEASE"$NOR
114 121 frappuccino IPython IPython.kernel IPython.lib IPython.qt IPython.lib.kernel IPython.html IPython.frontend IPython.external --save IPython-$PREV_RELEASE.json
115 122 echo $BLUE"coming back to $BRANCH"$NOR
116 123 git checkout $BRANCH
117 124 sleep 1
118 125 echo $BLUE"comparing ..."$NOR
119 126 frappuccino IPython IPython.kernel IPython.lib --compare IPython-$PREV_RELEASE.json
120 127 echo $GREEN"Use the above guideline to write an API changelog ..."$NOR
121 128 echo $GREEN"Press enter to continue"$NOR
122 129 read
123 130 fi
124 131
125 132 echo "Cleaning repository"
126 133 git clean -xfdi
127 134
128 135 echo $GREEN"please update the version number in ${RED}IPython/core/release.py${NOR}. Do not commit yet; we'll do it later."$NOR
129 136 echo $GREEN"I tried ${RED}sed -i bkp -e '/Uncomment/s/^# //g' IPython/core/release.py${NOR}"
130 137 sed -i bkp -e '/Uncomment/s/^# //g' IPython/core/release.py
131 138 rm IPython/core/release.pybkp
132 139 git diff | cat
133 140 maybe_edit IPython/core/release.py
134 141
135 142 echo $GREEN"Press enter to continue"$NOR
136 143 read
137 144
138 145 if ask_section "Build the documentation ?"
139 146 then
140 147 make html -C docs
141 148 echo
142 149 echo $GREEN"Check the docs, press enter to continue"$NOR
143 150 read
144 151
145 152 fi
146 153
147 154 if ask_section "Should we commit, tag, push... etc ? "
148 155 then
149 156 echo
150 157 echo $BLUE"Let's commit : git commit -am \"release $VERSION\" -S"
151 158 echo $GREEN"Press enter to commit"$NOR
152 159 read
153 160 git commit -am "release $VERSION" -S
154 161
155 162 echo
156 163 echo $BLUE"git push origin \$BRANCH ($BRANCH)?"$NOR
157 164 echo $GREEN"Make sure you can push"$NOR
158 165 echo $GREEN"Press enter to continue"$NOR
159 166 read
160 167 git push origin $BRANCH
161 168
162 169 echo
163 170 echo "Let's tag : git tag -am \"release $VERSION\" \"$VERSION\" -s"
164 171 echo $GREEN"Press enter to tag commit"$NOR
165 172 read
166 173 git tag -am "release $VERSION" "$VERSION" -s
167 174
168 175 echo
169 176 echo $BLUE"And push the tag: git push origin \$VERSION ?"$NOR
170 177 echo $GREEN"Press enter to continue"$NOR
171 178 read
172 179 git push origin $VERSION
173 180
174 181
175 182 echo $GREEN"please update the version number and switch back to .dev in ${RED}IPython/core/release.py${NOR}"
176 183 echo $GREEN"I tried ${RED}sed -i bkp -e '/Uncomment/s/^/# /g' IPython/core/release.py${NOR}"
177 184 sed -i bkp -e '/Uncomment/s/^/# /g' IPython/core/release.py
178 185 rm IPython/core/release.pybkp
179 186 git diff | cat
180 187 echo $GREEN"Please bump ${RED}the minor version number${NOR}"
181 188 maybe_edit IPython/core/release.py
182 189 echo ${BLUE}"Do not commit yet – we'll do it later."$NOR
183 190
184 191
185 192 echo $GREEN"Press enter to continue"$NOR
186 193 read
187 194
188 195 echo
189 196 echo "Let's commit : "$BLUE"git commit -am \"back to dev\""$NOR
190 197 echo $GREEN"Press enter to commit"$NOR
191 198 read
192 199 git commit -am "back to dev"
193 200
194 201 echo
195 202 echo $BLUE"git push origin \$BRANCH ($BRANCH)?"$NOR
196 203 echo $GREEN"Press enter to continue"$NOR
197 204 read
198 205 git push origin $BRANCH
199 206
200 207
201 208 echo
202 209 echo $BLUE"let's : git checkout $VERSION"$NOR
203 210 echo $GREEN"Press enter to continue"$NOR
204 211 read
205 212 git checkout $VERSION
206 213 fi
207 214
208 215 if ask_section "Should we build and release ?"
209 216 then
210 217
211 218 echo $BLUE"going to set SOURCE_DATE_EPOCH"$NOR
212 219 echo $BLUE'export SOURCE_DATE_EPOCH=$(git show -s --format=%ct HEAD)'$NOR
213 220 echo $GREEN"Press enter to continue"$NOR
214 221 read
215 222
216 223 export SOURCE_DATE_EPOCH=$(git show -s --format=%ct HEAD)
217 224
218 225 echo $BLUE"SOURCE_DATE_EPOCH set to $SOURCE_DATE_EPOCH"$NOR
219 226 echo $GREEN"Press enter to continue"$NOR
220 227 read
221 228
222 229
223 230
224 231 echo
225 232 echo $BLUE"Attempting to build package..."$NOR
226 233
227 234 tools/release
228 235
229 236
230 237 echo $RED'$ shasum -a 256 dist/*'
231 238 shasum -a 256 dist/*
232 239 echo $NOR
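# Optional sketch, not part of the original flow: also save this first
# checksum run to a file so the rebuild below can be compared automatically.
# The /tmp path is illustrative.
shasum -a 256 dist/* > /tmp/ipython-build-1.sha256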
233 240
234 241 echo $BLUE"We are going to rebuild; note the hashes above and compare them to the rebuild."$NOR
235 242 echo $GREEN"Press enter to continue"$NOR
236 243 read
237 244
238 245 echo
239 246 echo $BLUE"Attempting to build package..."$NOR
240 247
241 248 tools/release
242 249
243 250 echo $RED"Check the shasum for SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH"
244 251 echo $RED'$ shasum -a 256 dist/*'
245 252 shasum -a 256 dist/*
246 253 echo $NOR
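# Optional sketch, not part of the original flow: compare the rebuild's
# checksums with the ones saved after the first build (if that file exists).
shasum -a 256 dist/* > /tmp/ipython-build-2.sha256
if [ -f /tmp/ipython-build-1.sha256 ]; then
    diff /tmp/ipython-build-1.sha256 /tmp/ipython-build-2.sha256 \
        && echo $GREEN"Rebuild matches the first build."$NOR \
        || echo $RED"Rebuild differs from the first build!"$NOR
fi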
247 254
248 255 if ask_section "upload packages ?"
249 256 then
250 257 tools/release upload
251 258 fi
252 259 fi