"""Completion for IPython.

This module started as a fork of the rlcompleter module in the Python standard
library. The original enhancements made to rlcompleter have been sent
upstream and were accepted as of Python 2.3.

This module now supports a wide variety of completion mechanisms, both for
normal classic Python code and for IPython-specific syntax like magics.

Latex and Unicode completion
============================

IPython and compatible frontends not only can complete your code, but can help
you input a wide range of characters. In particular, we allow you to insert
a unicode character using the tab completion mechanism.

Forward latex/unicode completion
--------------------------------

Forward completion allows you to easily type a unicode character using its latex
name, or its unicode long description. To do so, type a backslash followed by the
relevant name and press tab:


Using latex completion:

.. code::

    \\alpha<tab>
    α

or using unicode completion:


.. code::

    \\GREEK SMALL LETTER ALPHA<tab>
    α


Only valid Python identifiers will complete. Combining characters (like arrows or
dots) are also available; unlike latex, they need to be put after their
counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.

Some browsers are known to display combining characters incorrectly.

Backward latex completion
-------------------------

It is sometimes challenging to know how to type a character. If you are using
IPython, or any compatible frontend, you can prepend a backslash to the character
and press ``<tab>`` to expand it to its latex form.

.. code::

    \\α<tab>
    \\alpha


Both forward and backward completions can be deactivated by setting the
``Completer.backslash_combining_completions`` option to ``False``.
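For instance, both completions can be turned off from a traitlets configuration
file. This is a sketch, not the only way to set the option; the path assumes the
default IPython profile:

```python
# In ~/.ipython/profile_default/ipython_config.py (create a profile
# with `ipython profile create` if the file does not exist yet).
c = get_config()  # `get_config` is injected by IPython when loading this file

# Disable both forward (\alpha<tab>) and backward (\α<tab>) completions:
c.Completer.backslash_combining_completions = False
```

The same value can be set for a single session with
``--Completer.backslash_combining_completions=False`` on the command line.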


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions, both using static analysis of the code and by dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and will
raise unless used in a :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if the current
IPython completer pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

    >>> myvar = ['hello', 42]
    ... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is an integer without
executing any code, unlike the previously available ``IPCompleter.greedy``
option.

Be sure to update :any:`jedi` to the latest stable version, or try the
current development version, to get better completions.

Matchers
========

All completion routines are implemented using the unified *Matchers* API.
The matchers API is provisional and subject to change without notice.

The built-in matchers include:

- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
- :any:`IPCompleter.magic_matcher`: completions for magics,
- :any:`IPCompleter.unicode_name_matcher`,
  :any:`IPCompleter.fwd_unicode_matcher`
  and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
- :any:`IPCompleter.file_matcher`: paths to files and directories,
- :any:`IPCompleter.python_func_kw_matcher`: function keywords,
- :any:`IPCompleter.python_matches`: globals and attributes (v1 API),
- ``IPCompleter.jedi_matcher``: static analysis with Jedi,
- :any:`IPCompleter.custom_completer_matcher`: a pluggable completer with a default
  implementation in :any:`InteractiveShell`, which uses the IPython hooks system
  (`complete_command`) with string dispatch (including regular expressions).
  Unlike other matchers, ``custom_completer_matcher`` will not suppress
  Jedi results, to match behaviour in earlier IPython versions.

Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.

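For illustration, a v1-style custom matcher is just a callable from the text
being completed to a list of candidate strings. The sketch below is
self-contained and hypothetical (the vocabulary and the ``unit_matcher`` name
are made up); in a live session it would be registered with
``get_ipython().Completer.custom_matchers.append(unit_matcher)``:

```python
# Hypothetical v1-style matcher completing a fixed vocabulary of unit names.
UNITS = ["seconds", "minutes", "hours"]

def unit_matcher(text: str) -> list:
    """Return vocabulary entries starting with the token being completed."""
    # Take the last whitespace-separated token as the fragment to complete.
    token = text.split()[-1] if text.split() else text
    return [u for u in UNITS if u.startswith(token)]

print(unit_matcher("sleep se"))  # → ['seconds']
```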
Matcher API
-----------

Simplifying some details, the ``Matcher`` interface can be described as

.. code-block::

    MatcherAPIv1 = Callable[[str], list[str]]
    MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]

    Matcher = MatcherAPIv1 | MatcherAPIv2

``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
and remains supported as the simplest way of generating completions. This is also
currently the only API supported by the IPython hooks system `complete_command`.

To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
More precisely, the API allows omitting ``matcher_api_version`` for v1 matchers,
and requires a literal ``2`` for v2 matchers.

Once the API stabilises, future versions may relax the requirement to specify
``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore
please do not rely on the presence of ``matcher_api_version`` for any purpose.
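The version negotiation described above can be sketched with a plain
``getattr`` call (the helper name below is made up for illustration; it mirrors
the convention, not a public API):

```python
def matcher_api_version(matcher) -> int:
    # v1 matchers may omit the attribute entirely; v2 matchers must carry
    # a literal ``2`` (per the convention described above).
    return getattr(matcher, "matcher_api_version", 1)

def legacy_matcher(text):        # no attribute: treated as API v1
    return []

def context_matcher(context):    # explicitly marked as API v2
    return {"completions": []}

context_matcher.matcher_api_version = 2

print(matcher_api_version(legacy_matcher), matcher_api_version(context_matcher))  # → 1 2
```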

Suppression of competing matchers
---------------------------------

By default, results from all matchers are combined, in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results of
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most, but not all, other matchers;
this can be achieved by adding a list of identifiers of matchers which
should not be suppressed to the ``MatcherResult`` under the ``do_not_suppress`` key.

The suppression behaviour is user-configurable via
:any:`IPCompleter.suppress_competing_matchers`.
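A minimal sketch of how suppression could combine results (illustrative only;
the real combination logic lives in ``IPCompleter`` and is more involved):

```python
def combine(results):
    """Merge matcher results ordered by descending priority.

    Each item is ``(matcher_id, result_dict)``. The first (highest-priority)
    result that sets ``suppress`` hides all later matchers, except those
    whitelisted in its ``do_not_suppress`` set.
    """
    allowed = None  # None means "no suppression requested yet"
    merged = []
    for matcher_id, result in results:
        if allowed is not None and matcher_id not in allowed:
            continue  # suppressed by an earlier, higher-priority matcher
        merged.extend(result["completions"])
        if result.get("suppress") and allowed is None:
            allowed = result.get("do_not_suppress", set())
    return merged

results = [
    ("magics", {"completions": ["%time"], "suppress": True,
                "do_not_suppress": {"files"}}),
    ("python", {"completions": ["type"]}),      # suppressed
    ("files", {"completions": ["train.csv"]}),  # whitelisted, survives
]
print(combine(results))  # → ['%time', 'train.csv']
```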
"""


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org

from __future__ import annotations
import builtins as builtin_mod
import enum
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import tokenize
import time
import unicodedata
import uuid
import warnings
from ast import literal_eval
from collections import defaultdict
from contextlib import contextmanager
from dataclasses import dataclass
from functools import cached_property, partial
from types import SimpleNamespace
from typing import (
    Iterable,
    Iterator,
    List,
    Tuple,
    Union,
    Any,
    Sequence,
    Dict,
    Optional,
    TYPE_CHECKING,
    Set,
    Literal,
)

from IPython.core.guarded_eval import guarded_eval, EvaluationContext
from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
from IPython.utils.decorators import sphinx_options
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.docs import GENERATING_DOCUMENTATION
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import (
    Bool,
    Enum,
    Int,
    List as ListTrait,
    Unicode,
    Dict as DictTrait,
    Union as UnionTrait,
    observe,
)
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
__skip_doctest__ = True


try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False


if TYPE_CHECKING or GENERATING_DOCUMENTATION:
    from typing import cast
    from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias
else:

    def cast(obj, type_):
        """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
        return obj

    # do not require at runtime
    NotRequired = Tuple  # requires Python >=3.11
    TypedDict = Dict  # by extension of `NotRequired` requires 3.11 too
    Protocol = object  # requires Python >=3.8
    TypeAlias = Any  # requires Python >=3.10
    if GENERATING_DOCUMENTATION:
        from typing import TypedDict

# -----------------------------------------------------------------------------
# Globals
# -----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be more
# fine-grained, but is it worth it for performance? While unicode has characters
# in the range 0-0x110000, we seem to have names for only about 10% of those
# (131808 as I write this). With the ranges below we cover them all, with a
# density of ~67%; the biggest next gap we could consider only adds about 1%
# density, and there are 600 gaps that would need hard coding.
_UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]

# Public API
__all__ = ["Completer", "IPCompleter"]

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

# Completion type reported when no type can be inferred.
_UNKNOWN_TYPE = "<unknown>"

# Sentinel value to signal lack of a match
not_found = object()

class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in the :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things()  # works

    >>> completer.do_experimental_things()  # raises

    .. note::

        Unstable

        By using this context manager, you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should report
        a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements on
        any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False


def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s


def expand_user(path: str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val


def compress_user(path: str, tilde_expand: bool, tilde_val: str) -> str:
    """Does the opposite of expand_user, given its outputs."""
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path

def completions_sorting_key(word):
    """Key for sorting completions.

    This does several things:

    - Demote any completions starting with underscores to the end.
    - Insert any %magic and %%cellmagic completions into alphabetical order
      by their name.
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if "%" not in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if "%" not in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2

class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0, so should likely be removed for 7.0.
    """

    def __init__(self, name):
        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ''
        self._origin = 'fake'

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


_JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]

class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This class is unstable; the API may change without warning.
        It will also raise unless used in the proper context manager.

    This acts as a middle-ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how the
    code should be run/inspected, Prompt Toolkit (and other frontends) mostly
    need user-facing information:

    - Which range should be replaced by what.
    - Some metadata (like the completion type), or meta-information to be
      displayed to the user.

    For debugging purposes we can also store the origin of the completion (``jedi``,
    ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(self, start: int, end: int, text: str, *, type: Optional[str] = None, _origin='', signature='') -> None:
        warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
                      "It may change without warnings. "
                      "Use in corresponding context manager.",
                      category=ProvisionalCompleterWarning, stacklevel=2)

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
            (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other) -> bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer it), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing, as it depends on the surrounding text, which Completions are
        not aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))

class SimpleCompletion:
    """Completion item to be included in the dictionary returned by a new-style Matcher (API v2).

    .. warning::

        Provisional

        This class is used to describe the currently supported attributes of
        simple completion items, and any additional implementation details
        should not be relied on. Additional attributes may be included in
        future versions, and the meaning of ``text`` disambiguated from its
        current dual meaning of "text to insert" and "text to use as a label".
    """

    __slots__ = ["text", "type"]

    def __init__(self, text: str, *, type: Optional[str] = None):
        self.text = text
        self.type = type

    def __repr__(self):
        return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"


class _MatcherResultBase(TypedDict):
    """Definition of the dictionary to be returned by a new-style Matcher (API v2)."""

    #: Suffix of the provided ``CompletionContext.token``; if not given, defaults to the full token.
    matched_fragment: NotRequired[str]

    #: Whether to suppress results from all other matchers (True), some
    #: matchers (set of identifiers) or none (False); default is False.
    suppress: NotRequired[Union[bool, Set[str]]]

    #: Identifiers of matchers which should NOT be suppressed when this matcher
    #: requests to suppress all other matchers; defaults to an empty set.
    do_not_suppress: NotRequired[Set[str]]

    #: Are completions already ordered and should they be left as-is? Default is False.
    ordered: NotRequired[bool]


@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
class SimpleMatcherResult(_MatcherResultBase, TypedDict):
    """Result of a new-style completion matcher."""

    # note: TypedDict is added again to the inheritance chain
    # in order to get __orig_bases__ for documentation

    #: List of candidate completions
    completions: Sequence[SimpleCompletion]


class _JediMatcherResult(_MatcherResultBase):
    """Matching result returned by Jedi (will be processed differently)."""

    #: List of candidate completions
    completions: Iterable[_JediCompletionLike]


@dataclass
class CompletionContext:
    """Completion context provided as an argument to matchers in the Matcher API v2."""

    # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
    # which was not explicitly visible as an argument of the matcher, making any refactor
    # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
    # from the completer, and make substituting them in sub-classes easier.

    #: Relevant fragment of code directly preceding the cursor.
    #: The extraction of the token is implemented via a splitter heuristic
    #: (following readline behaviour for legacy reasons), which is user-configurable
    #: (by switching the greedy mode).
    token: str

    #: The full available content of the editor or buffer
    full_text: str

    #: Cursor position in the line (the same for ``full_text`` and ``text``).
    cursor_position: int

    #: Cursor line in ``full_text``.
    cursor_line: int

    #: The maximum number of completions that will be used downstream.
    #: Matchers can use this information to abort early.
    #: The built-in Jedi matcher is currently exempt from this limit.
    #: If not given, return all possible completions.
    limit: Optional[int]

    @cached_property
    def text_until_cursor(self) -> str:
        return self.line_with_cursor[: self.cursor_position]

    @cached_property
    def line_with_cursor(self) -> str:
        return self.full_text.split("\n")[self.cursor_line]


#: Matcher results for API v2.
MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]

class _MatcherAPIv1Base(Protocol):
    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...


class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
    #: API version
    matcher_api_version: Optional[Literal[1]]

    def __call__(self, text: str) -> List[str]:
        """Call signature."""
        ...


#: Protocol describing Matcher API v1.
MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]


class MatcherAPIv2(Protocol):
    """Protocol describing Matcher API v2."""

    #: API version
    matcher_api_version: Literal[2] = 2

    def __call__(self, context: CompletionContext) -> MatcherResult:
        """Call signature."""
        ...


Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]


def has_any_completions(result: MatcherResult) -> bool:
    """Check if the result includes any completions."""
    if hasattr(result["completions"], "__len__"):
        return len(result["completions"]) != 0
    try:
        old_iterator = result["completions"]
        first = next(old_iterator)
        result["completions"] = itertools.chain([first], old_iterator)
        return True
    except StopIteration:
        return False

def completion_matcher(
    *,
    priority: Optional[float] = None,
    identifier: Optional[str] = None,
    api_version: int = 1,
):
    """Adds attributes describing the matcher.

    Parameters
    ----------
    priority : Optional[float]
        The priority of the matcher; determines the order of execution of matchers.
        Higher priority means that the matcher will be executed first. Defaults to 0.
    identifier : Optional[str]
        Identifier of the matcher, allowing users to modify the behaviour via traitlets,
        and also used for debugging (will be passed as ``origin`` with the completions).

        Defaults to the matcher function's ``__qualname__`` (for example,
        ``IPCompleter.file_matcher`` for the built-in matcher defined
        as a ``file_matcher`` method of the ``IPCompleter`` class).
    api_version : Optional[int]
        Version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.
    """

    def wrapper(func: Matcher):
        func.matcher_priority = priority or 0
        func.matcher_identifier = identifier or func.__qualname__
        func.matcher_api_version = api_version
        if TYPE_CHECKING:
            if api_version == 1:
                func = cast(func, MatcherAPIv1)
            elif api_version == 2:
                func = cast(func, MatcherAPIv2)
        return func

    return wrapper


def _get_matcher_priority(matcher: Matcher):
    return getattr(matcher, "matcher_priority", 0)


def _get_matcher_id(matcher: Matcher):
    return getattr(matcher, "matcher_identifier", matcher.__qualname__)


def _get_matcher_api_version(matcher):
    return getattr(matcher, "matcher_api_version", 1)


context_matcher = partial(completion_matcher, api_version=2)


_IC = Iterable[Completion]

def _deduplicate_completions(text: str, completions: _IC) -> _IC:
    """
    Deduplicate a set of completions.

    .. warning::

        Unstable

        This function is unstable; the API may change without warning.

    Parameters
    ----------
    text : str
        Text that should be completed.
    completions : Iterator[Completion]
        Iterator over the completions to deduplicate.

    Yields
    ------
    `Completions` objects
        Completions coming from multiple sources may be different but end up having
        the same effect when applied to ``text``. If this is the case, this will
        consider the completions equal and only emit the first one encountered.
        Not folded into `completions()` yet, for debugging purposes and to detect
        when the IPython completer does return things that Jedi does not, but
        should be at some point.
    """
    completions = list(completions)
    if not completions:
        return

    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)

    seen = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            yield c
            seen.add(new_text)

def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
    """
    Rectify a set of completions to all have the same ``start`` and ``end``.

    .. warning::

        Unstable

        This function is unstable; the API may change without warning.
        It will also raise unless used in the proper context manager.

    Parameters
    ----------
    text : str
        Text that should be completed.
    completions : Iterator[Completion]
        Iterator over the completions to rectify.
    _debug : bool
        Log failed completions.

    Notes
    -----
    :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
    the Jupyter Protocol requires them to behave like so. This will readjust
    the completions to have the same ``start`` and ``end`` by padding both
    extremities with the surrounding text.

    During stabilisation, this should support a ``_debug`` option to log which
    completions are returned by the IPython completer and not found in Jedi, in
    order to make upstream bug reports.
    """
    warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
                  "It may change without warnings. "
                  "Use in corresponding context manager.",
                  category=ProvisionalCompleterWarning, stacklevel=2)

    completions = list(completions)
    if not completions:
        return
    starts = (c.start for c in completions)
    ends = (c.end for c in completions)

    new_start = min(starts)
    new_end = max(ends)

    seen_jedi = set()
    seen_python_matches = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if c._origin == 'jedi':
            seen_jedi.add(new_text)
        elif c._origin == 'IPCompleter.python_matches':
            seen_python_matches.add(new_text)
        yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
    diff = seen_python_matches.difference(seen_jedi)
    if diff and _debug:
        print('IPython.python matches have extras:', diff)


850 850 if sys.platform == 'win32':
851 851 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
852 852 else:
853 853 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
854 854
855 855 GREEDY_DELIMS = ' =\r\n'
856 856
857 857
858 858 class CompletionSplitter(object):
859 859 """An object to split an input line in a manner similar to readline.
860 860
861 861 By having our own implementation, we can expose readline-like completion in
862 862 a uniform manner to all frontends. This object only needs to be given the
863 863 line of text to be split and the cursor position on said line, and it
864 864 returns the 'word' to be completed on at the cursor after splitting the
865 865 entire line.
866 866
867 867 What characters are used as splitting delimiters can be controlled by
868 868 setting the ``delims`` attribute (this is a property that internally
869 869 automatically builds the necessary regular expression)"""
870 870
871 871 # Private interface
872 872
873 873 # A string of delimiter characters. The default value makes sense for
874 874 # IPython's most typical usage patterns.
875 875 _delims = DELIMS
876 876
877 877 # The expression (a normal string) to be compiled into a regular expression
878 878 # for actual splitting. We store it as an attribute mostly for ease of
879 879 # debugging, since this type of code can be so tricky to debug.
880 880 _delim_expr = None
881 881
882 882 # The regular expression that does the actual splitting
883 883 _delim_re = None
884 884
885 885 def __init__(self, delims=None):
886 886 delims = CompletionSplitter._delims if delims is None else delims
887 887 self.delims = delims
888 888
889 889 @property
890 890 def delims(self):
891 891 """Return the string of delimiter characters."""
892 892 return self._delims
893 893
894 894 @delims.setter
895 895 def delims(self, delims):
896 896 """Set the delimiters for line splitting."""
897 897 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
898 898 self._delim_re = re.compile(expr)
899 899 self._delims = delims
900 900 self._delim_expr = expr
901 901
902 902 def split_line(self, line, cursor_pos=None):
903 903 """Split a line of text with a cursor at the given position.
904 904 """
905 905 l = line if cursor_pos is None else line[:cursor_pos]
906 906 return self._delim_re.split(l)[-1]
907 907
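As a standalone sketch (hypothetical names, mirroring but not reusing the class above), the splitting boils down to a single regex split on the text left of the cursor:

```python
import re

# Delimiters similar to the non-Windows DELIMS defined above
delims = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
delim_re = re.compile('[' + ''.join('\\' + c for c in delims) + ']')

def split_line(line, cursor_pos=None):
    # Keep only the text left of the cursor, then take the last
    # delimiter-separated chunk: that is the word being completed.
    text = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(text)[-1]

print(split_line("run(my_var"))     # my_var
print(split_line("run(my_var", 3))  # run
```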
908 908
909 909
910 910 class Completer(Configurable):
911 911
912 912 greedy = Bool(
913 913 False,
914 914 help="""Activate greedy completion.
915 915
916 916 .. deprecated:: 8.8
917 917 Use :any:`evaluation` and :any:`auto_close_dict_keys` instead.
918 918
919 Whent enabled in IPython 8.8+ activates following settings for compatibility:
919 When enabled in IPython 8.8+, it activates the following settings for compatibility:
920 920 - ``evaluation = 'unsafe'``
921 921 - ``auto_close_dict_keys = True``
922 922 """,
923 923 ).tag(config=True)
924 924
925 925 evaluation = Enum(
926 ("forbidden", "minimal", "limitted", "unsafe", "dangerous"),
927 default_value="limitted",
926 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
927 default_value="limited",
928 928 help="""Code evaluation under completion.
929 929
930 930 Successive options enable more eager evaluation for more accurate completion suggestions,
931 931 including for nested dictionaries, nested lists, or even results of function calls. Setting `unsafe`
932 932 or higher can lead to evaluation of arbitrary user code on TAB with potentially dangerous side effects.
933 933
934 934 Allowed values are:
935 935 - `forbidden`: no evaluation at all
936 936 - `minimal`: evaluation of literals and access to built-in namespaces; no item/attribute evaluation nor access to locals/globals
937 - `limitted` (default): access to all namespaces, evaluation of hard-coded methods (``keys()``, ``__getattr__``, ``__getitems__``, etc) on allow-listed objects (e.g. ``dict``, ``list``, ``tuple``, ``pandas.Series``)
937 - `limited` (default): access to all namespaces, evaluation of hard-coded methods (``keys()``, ``__getattr__``, ``__getitem__``, etc) on allow-listed objects (e.g. ``dict``, ``list``, ``tuple``, ``pandas.Series``)
938 938 - `unsafe`: evaluation of all methods and function calls but not of syntax with side-effects like `del x`,
939 939 - `dangerous`: completely arbitrary evaluation
940 940 """,
941 941 ).tag(config=True)
942 942
943 943 use_jedi = Bool(default_value=JEDI_INSTALLED,
944 944 help="Experimental: Use Jedi to generate autocompletions. "
945 945 "Default to True if jedi is installed.").tag(config=True)
946 946
947 947 jedi_compute_type_timeout = Int(default_value=400,
948 948 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
949 949 Set to 0 to stop computing types. Non-zero values lower than 100ms may hurt
950 950 performance by preventing Jedi from building its cache.
951 951 """).tag(config=True)
952 952
953 953 debug = Bool(default_value=False,
954 954 help='Enable debug for the Completer. Mostly print extra '
955 955 'information for experimental jedi integration.')\
956 956 .tag(config=True)
957 957
958 958 backslash_combining_completions = Bool(True,
959 959 help="Enable unicode completions, e.g. \\alpha<tab> . "
960 960 "Includes completion of latex commands, unicode names, and expanding "
961 961 "unicode characters back to latex commands.").tag(config=True)
962 962
963 963 auto_close_dict_keys = Bool(
964 964 False, help="""Enable auto-closing dictionary keys."""
965 965 ).tag(config=True)
966 966
967 967 def __init__(self, namespace=None, global_namespace=None, **kwargs):
968 968 """Create a new completer for the command line.
969 969
970 970 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
971 971
972 972 If unspecified, the default namespace where completions are performed
973 973 is __main__ (technically, __main__.__dict__). Namespaces should be
974 974 given as dictionaries.
975 975
976 976 An optional second namespace can be given. This allows the completer
977 977 to handle cases where both the local and global scopes need to be
978 978 distinguished.
979 979 """
980 980
981 981 # Don't bind to namespace quite yet, but flag whether the user wants a
982 982 # specific namespace or to use __main__.__dict__. This will allow us
983 983 # to bind to __main__.__dict__ at completion time, not now.
984 984 if namespace is None:
985 985 self.use_main_ns = True
986 986 else:
987 987 self.use_main_ns = False
988 988 self.namespace = namespace
989 989
990 990 # The global namespace, if given, can be bound directly
991 991 if global_namespace is None:
992 992 self.global_namespace = {}
993 993 else:
994 994 self.global_namespace = global_namespace
995 995
996 996 self.custom_matchers = []
997 997
998 998 super(Completer, self).__init__(**kwargs)
999 999
1000 1000 def complete(self, text, state):
1001 1001 """Return the next possible completion for 'text'.
1002 1002
1003 1003 This is called successively with state == 0, 1, 2, ... until it
1004 1004 returns None. The completion should begin with 'text'.
1005 1005
1006 1006 """
1007 1007 if self.use_main_ns:
1008 1008 self.namespace = __main__.__dict__
1009 1009
1010 1010 if state == 0:
1011 1011 if "." in text:
1012 1012 self.matches = self.attr_matches(text)
1013 1013 else:
1014 1014 self.matches = self.global_matches(text)
1015 1015 try:
1016 1016 return self.matches[state]
1017 1017 except IndexError:
1018 1018 return None
1019 1019
1020 1020 def global_matches(self, text):
1021 1021 """Compute matches when text is a simple name.
1022 1022
1023 1023 Return a list of all keywords, built-in functions and names currently
1024 1024 defined in self.namespace or self.global_namespace that match.
1025 1025
1026 1026 """
1027 1027 matches = []
1028 1028 match_append = matches.append
1029 1029 n = len(text)
1030 1030 for lst in [
1031 1031 keyword.kwlist,
1032 1032 builtin_mod.__dict__.keys(),
1033 1033 list(self.namespace.keys()),
1034 1034 list(self.global_namespace.keys()),
1035 1035 ]:
1036 1036 for word in lst:
1037 1037 if word[:n] == text and word != "__builtins__":
1038 1038 match_append(word)
1039 1039
1040 1040 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1041 1041 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1042 1042 shortened = {
1043 1043 "_".join([sub[0] for sub in word.split("_")]): word
1044 1044 for word in lst
1045 1045 if snake_case_re.match(word)
1046 1046 }
1047 1047 for word in shortened.keys():
1048 1048 if word[:n] == text and word != "__builtins__":
1049 1049 match_append(shortened[word])
1050 1050 return matches
1051 1051
1052 1052 def attr_matches(self, text):
1053 1053 """Compute matches when text contains a dot.
1054 1054
1055 1055 Assuming the text is of the form NAME.NAME....[NAME], and is
1056 1056 evaluatable in self.namespace or self.global_namespace, it will be
1057 1057 evaluated and its attributes (as revealed by dir()) are used as
1058 1058 possible completions. (For class instances, class members are
1059 1059 also considered.)
1060 1060
1061 1061 WARNING: this can still invoke arbitrary C code, if an object
1062 1062 with a __getattr__ hook is evaluated.
1063 1063
1064 1064 """
1065 1065 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1066 1066 if not m2:
1067 1067 return []
1068 1068 expr, attr = m2.group(1, 2)
1069 1069
1070 1070 obj = self._evaluate_expr(expr)
1071 1071
1072 1072 if obj is not_found:
1073 1073 return []
1074 1074
1075 1075 if self.limit_to__all__ and hasattr(obj, '__all__'):
1076 1076 words = get__all__entries(obj)
1077 1077 else:
1078 1078 words = dir2(obj)
1079 1079
1080 1080 try:
1081 1081 words = generics.complete_object(obj, words)
1082 1082 except TryNext:
1083 1083 pass
1084 1084 except AssertionError:
1085 1085 raise
1086 1086 except Exception:
1087 1087 # Silence errors from completion function
1088 1088 #raise # dbg
1089 1089 pass
1090 1090 # Build match list to return
1091 1091 n = len(attr)
1092 1092 return ["%s.%s" % (expr, w) for w in words if w[:n] == attr]
1093 1093
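The regular expression used by `attr_matches` is greedy, so the last dot splits the expression from the partial attribute name; a quick standalone check:

```python
import re

# Split "expression.attr_prefix" the same way attr_matches does:
# the greedy (.+) consumes up to the final dot.
m = re.match(r"(.+)\.(\w*)$", "np.linalg.nor")
expr, attr = m.group(1, 2)
print(expr, attr)  # np.linalg nor
```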
1094 1094 def _evaluate_expr(self, expr):
1095 1095 obj = not_found
1096 1096 done = False
1097 1097 while not done and expr:
1098 1098 try:
1099 1099 obj = guarded_eval(
1100 1100 expr,
1101 1101 EvaluationContext(
1102 1102 globals_=self.global_namespace,
1103 1103 locals_=self.namespace,
1104 1104 evaluation=self.evaluation,
1105 1105 ),
1106 1106 )
1107 1107 done = True
1108 1108 except Exception as e:
1109 1109 if self.debug:
1110 1110 print("Evaluation exception", e)
1111 1111 # trim the expression to remove any invalid prefix
1112 1112 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1113 1113 # where parenthesis is not closed.
1114 1114 # TODO: make this faster by reusing parts of the computation?
1115 1115 expr = expr[1:]
1116 1116 return obj
1117 1117
1118 1118 def get__all__entries(obj):
1119 1119 """returns the strings in the __all__ attribute"""
1120 1120 try:
1121 1121 words = getattr(obj, '__all__')
1122 1122 except:
1123 1123 return []
1124 1124
1125 1125 return [w for w in words if isinstance(w, str)]
1126 1126
1127 1127
1128 1128 class DictKeyState(enum.Flag):
1129 1129 """Represent state of the key match in context of other possible matches.
1130 1130
1131 1131 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1132 1132 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1133 1133 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1134 1134 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | END_OF_TUPLE}`
1135 1135 """
1136 1136
1137 1137 BASELINE = 0
1138 1138 END_OF_ITEM = enum.auto()
1139 1139 END_OF_TUPLE = enum.auto()
1140 1140 IN_TUPLE = enum.auto()
1141 1141
1142 1142
1143 1143 def _parse_tokens(c):
1144 1144 tokens = []
1145 1145 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1146 1146 while True:
1147 1147 try:
1148 1148 tokens.append(next(token_generator))
1149 1149 except tokenize.TokenError:
1150 1150 return tokens
1151 1151 except StopIteration:
1152 1152 return tokens
1153 1153
1154 1154
1155 1155 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1156 1156 """Match any valid Python numeric literal in a prefix of dictionary keys.
1157 1157
1158 1158 References:
1159 1159 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1160 1160 - https://docs.python.org/3/library/tokenize.html
1161 1161 """
1162 1162 if prefix[-1].isspace():
1163 1163 # if user typed a space we do not have anything to complete
1164 1164 # even if there was a valid number token before
1165 1165 return None
1166 1166 tokens = _parse_tokens(prefix)
1167 1167 rev_tokens = reversed(tokens)
1168 1168 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1169 1169 number = None
1170 1170 for token in rev_tokens:
1171 1171 if token.type in skip_over:
1172 1172 continue
1173 1173 if number is None:
1174 1174 if token.type == tokenize.NUMBER:
1175 1175 number = token.string
1176 1176 continue
1177 1177 else:
1178 1178 # we did not match a number
1179 1179 return None
1180 1180 if token.type == tokenize.OP:
1181 1181 if token.string == ",":
1182 1182 break
1183 1183 if token.string in {"+", "-"}:
1184 1184 number = token.string + number
1185 1185 else:
1186 1186 return None
1187 1187 return number
1188 1188
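A condensed, standalone sketch of this reverse token scan (hypothetical `trailing_number` helper; sign and comma handling simplified relative to the function above):

```python
import tokenize

def trailing_number(prefix):
    # Tokenize the prefix, tolerating the unterminated input that is
    # normal while the user is still typing.
    tokens = []
    gen = tokenize.generate_tokens(iter(prefix.splitlines()).__next__)
    try:
        for tok in gen:
            tokens.append(tok)
    except (tokenize.TokenError, StopIteration):
        pass
    number = None
    # Walk backwards: the last real token must be a number, optionally
    # preceded by a unary sign.
    for tok in reversed(tokens):
        if tok.type in (tokenize.ENDMARKER, tokenize.NEWLINE):
            continue
        if number is None:
            if tok.type == tokenize.NUMBER:
                number = tok.string
                continue
            return None  # last token is not a number
        if tok.type == tokenize.OP and tok.string in {"+", "-"}:
            number = tok.string + number  # absorb a leading sign
        break
    return number

print(trailing_number("x = -12"))   # -12
print(trailing_number("y = 0x1f"))  # 0x1f
```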
1189 1189
1190 1190 _INT_FORMATS = {
1191 1191 "0b": bin,
1192 1192 "0o": oct,
1193 1193 "0x": hex,
1194 1194 }
1195 1195
1196 1196
1197 1197 def match_dict_keys(
1198 1198 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1199 1199 prefix: str,
1200 1200 delims: str,
1201 1201 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1202 1202 ) -> Tuple[str, int, Dict[str, DictKeyState]]:
1203 1203 """Used by dict_key_matches, matching the prefix to a list of keys
1204 1204
1205 1205 Parameters
1206 1206 ----------
1207 1207 keys
1208 1208 list of keys in dictionary currently being completed.
1209 1209 prefix
1210 1210 Part of the text already typed by the user. E.g. `mydict[b'fo`
1211 1211 delims
1212 1212 String of delimiters to consider when finding the current key.
1213 1213 extra_prefix : optional
1214 1214 Part of the text already typed in multi-key index cases. E.g. for
1215 1215 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1216 1216
1217 1217 Returns
1218 1218 -------
1219 1219 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1220 1220 ``quote`` being the quote that needs to be used to close the current string,
1221 1221 ``token_start`` the position where the replacement should start occurring,
1222 1222 ``matched`` a dictionary of replacement/completion strings, with values
1223 1223 indicating the state of each match.
1224 1224 """
1225 1225 prefix_tuple = extra_prefix if extra_prefix else ()
1226 1226
1227 1227 prefix_tuple_size = sum(
1228 1228 [
1229 1229 # for pandas, do not count slices as taking space
1230 1230 not isinstance(k, slice)
1231 1231 for k in prefix_tuple
1232 1232 ]
1233 1233 )
1234 1234 text_serializable_types = (str, bytes, int, float, slice)
1235 1235
1236 1236 def filter_prefix_tuple(key):
1237 1237 # Reject too short keys
1238 1238 if len(key) <= prefix_tuple_size:
1239 1239 return False
1240 1240 # Reject keys which cannot be serialised to text
1241 1241 for k in key:
1242 1242 if not isinstance(k, text_serializable_types):
1243 1243 return False
1244 1244 # Reject keys that do not match the prefix
1245 1245 for k, pt in zip(key, prefix_tuple):
1246 1246 if k != pt and not isinstance(pt, slice):
1247 1247 return False
1248 1248 # All checks passed!
1249 1249 return True
1250 1250
1251 1251 filtered_key_is_final: Dict[
1252 1252 Union[str, bytes, int, float], DictKeyState
1253 1253 ] = defaultdict(lambda: DictKeyState.BASELINE)
1254 1254
1255 1255 for k in keys:
1256 1256 # If at least one of the matches is not final, mark as undetermined.
1257 1257 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1258 1258 # `111` appears final on first match but is not final on the second.
1259 1259
1260 1260 if isinstance(k, tuple):
1261 1261 if filter_prefix_tuple(k):
1262 1262 key_fragment = k[prefix_tuple_size]
1263 1263 filtered_key_is_final[key_fragment] |= (
1264 1264 DictKeyState.END_OF_TUPLE
1265 1265 if len(k) == prefix_tuple_size + 1
1266 1266 else DictKeyState.IN_TUPLE
1267 1267 )
1268 1268 elif prefix_tuple_size > 0:
1269 1269 # we are completing a tuple but this key is not a tuple,
1270 1270 # so we should ignore it
1271 1271 pass
1272 1272 else:
1273 1273 if isinstance(k, text_serializable_types):
1274 1274 filtered_key_is_final[k] |= DictKeyState.END_OF_ITEM
1275 1275
1276 1276 filtered_keys = filtered_key_is_final.keys()
1277 1277
1278 1278 if not prefix:
1279 1279 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1280 1280
1281 1281 quote_match = re.search("(?:\"|')", prefix)
1282 1282 is_user_prefix_numeric = False
1283 1283
1284 1284 if quote_match:
1285 1285 quote = quote_match.group()
1286 1286 valid_prefix = prefix + quote
1287 1287 try:
1288 1288 prefix_str = literal_eval(valid_prefix)
1289 1289 except Exception:
1290 1290 return "", 0, {}
1291 1291 else:
1292 1292 # If it does not look like a string, let's assume
1293 1293 # we are dealing with a number or variable.
1294 1294 number_match = _match_number_in_dict_key_prefix(prefix)
1295 1295
1296 1296 # We do not want the key matcher to suggest variable names, so we return early:
1297 1297 if number_match is None:
1298 1298 # The alternative would be to assume that the user forgot the quote
1299 1299 # and if the substring matches, suggest adding it at the start.
1300 1300 return "", 0, {}
1301 1301
1302 1302 prefix_str = number_match
1303 1303 is_user_prefix_numeric = True
1304 1304 quote = ""
1305 1305
1306 1306 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1307 1307 token_match = re.search(pattern, prefix, re.UNICODE)
1308 1308 assert token_match is not None # silence mypy
1309 1309 token_start = token_match.start()
1310 1310 token_prefix = token_match.group()
1311 1311
1312 1312 matched: Dict[str, DictKeyState] = {}
1313 1313
1314 1314 for key in filtered_keys:
1315 1315 if isinstance(key, (int, float)):
1316 1316 # User typed a number but this key is not a number.
1317 1317 if not is_user_prefix_numeric:
1318 1318 continue
1319 1319 str_key = str(key)
1320 1320 if isinstance(key, int):
1321 1321 int_base = prefix_str[:2].lower()
1322 1322 # if user typed integer using binary/oct/hex notation:
1323 1323 if int_base in _INT_FORMATS:
1324 1324 int_format = _INT_FORMATS[int_base]
1325 1325 str_key = int_format(key)
1326 1326 else:
1327 1327 # User typed a string but this key is a number.
1328 1328 if is_user_prefix_numeric:
1329 1329 continue
1330 1330 str_key = key
1331 1331 try:
1332 1332 if not str_key.startswith(prefix_str):
1333 1333 continue
1334 1334 except (AttributeError, TypeError, UnicodeError) as e:
1335 1335 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1336 1336 continue
1337 1337
1338 1338 # reformat remainder of key to begin with prefix
1339 1339 rem = str_key[len(prefix_str) :]
1340 1340 # force repr wrapped in '
1341 1341 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1342 1342 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1343 1343 if quote == '"':
1344 1344 # The entered prefix is quoted with ",
1345 1345 # but the match is quoted with '.
1346 1346 # A contained " hence needs escaping for comparison:
1347 1347 rem_repr = rem_repr.replace('"', '\\"')
1348 1348
1349 1349 # then reinsert prefix from start of token
1350 1350 match = "%s%s" % (token_prefix, rem_repr)
1351 1351
1352 1352 matched[match] = filtered_key_is_final[key]
1353 1353 return quote, token_start, matched
1354 1354
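The quoted-string branch can be illustrated in isolation (hypothetical `match_str_keys` helper, a heavily simplified sketch of the logic above): close the user's open quote, `literal_eval` the result, and keep the keys that extend the typed prefix:

```python
import re
from ast import literal_eval

def match_str_keys(keys, prefix):
    # `prefix` is what the user typed after the bracket, e.g. `'fo`
    quote_match = re.search("(?:\"|')", prefix)
    if not quote_match:
        return "", []
    quote = quote_match.group()
    try:
        prefix_str = literal_eval(prefix + quote)  # close the open string
    except Exception:
        return "", []
    return quote, [k for k in keys
                   if isinstance(k, str) and k.startswith(prefix_str)]

print(match_str_keys(["foo", "food", "bar"], "'fo"))  # ("'", ['foo', 'food'])
```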
1355 1355
1356 1356 def cursor_to_position(text:str, line:int, column:int)->int:
1357 1357 """
1358 1358 Convert the (line,column) position of the cursor in text to an offset in a
1359 1359 string.
1360 1360
1361 1361 Parameters
1362 1362 ----------
1363 1363 text : str
1364 1364 The text in which to calculate the cursor offset
1365 1365 line : int
1366 1366 Line of the cursor; 0-indexed
1367 1367 column : int
1368 1368 Column of the cursor 0-indexed
1369 1369
1370 1370 Returns
1371 1371 -------
1372 1372 Position of the cursor in ``text``, 0-indexed.
1373 1373
1374 1374 See Also
1375 1375 --------
1376 1376 position_to_cursor : reciprocal of this function
1377 1377
1378 1378 """
1379 1379 lines = text.split('\n')
1380 1380 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1381 1381
1382 1382 return sum(len(l) + 1 for l in lines[:line]) + column
1383 1383
1384 1384 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1385 1385 """
1386 1386 Convert the position of the cursor in text (0 indexed) to a line
1387 1387 number (0-indexed) and a column number (0-indexed) pair
1388 1388
1389 1389 Position should be a valid position in ``text``.
1390 1390
1391 1391 Parameters
1392 1392 ----------
1393 1393 text : str
1394 1394 The text in which to calculate the cursor offset
1395 1395 offset : int
1396 1396 Position of the cursor in ``text``, 0-indexed.
1397 1397
1398 1398 Returns
1399 1399 -------
1400 1400 (line, column) : (int, int)
1401 1401 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1402 1402
1403 1403 See Also
1404 1404 --------
1405 1405 cursor_to_position : reciprocal of this function
1406 1406
1407 1407 """
1408 1408
1409 1409 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1410 1410
1411 1411 before = text[:offset]
1412 1412 blines = before.split('\n') # ! splitlines trims the trailing \n
1413 1413 line = before.count('\n')
1414 1414 col = len(blines[-1])
1415 1415 return line, col
1416 1416
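`cursor_to_position` and `position_to_cursor` are mutual inverses; a self-contained round-trip check of the same arithmetic:

```python
def cursor_to_position(text, line, column):
    # Offset = lengths of the lines above the cursor (each plus its '\n'),
    # plus the column within the cursor's line.
    lines = text.split('\n')
    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text, offset):
    # Count newlines before the offset for the line, and measure the
    # tail of the last partial line for the column.
    before = text[:offset]
    return before.count('\n'), len(before.split('\n')[-1])

text = "ab\ncde\nf"
offset = cursor_to_position(text, 1, 2)
print(offset)                            # 5
print(position_to_cursor(text, offset))  # (1, 2)
```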
1417 1417
1418 1418 def _safe_isinstance(obj, module, class_name, *attrs):
1419 1419 """Checks if obj is an instance of module.class_name if loaded
1420 1420 """
1421 1421 if module in sys.modules:
1422 1422 m = sys.modules[module]
1423 1423 for attr in [class_name, *attrs]:
1424 1424 m = getattr(m, attr)
1425 1425 return isinstance(obj, m)
1426 1426
1427 1427
1428 1428 @context_matcher()
1429 1429 def back_unicode_name_matcher(context: CompletionContext):
1430 1430 """Match Unicode characters back to Unicode name
1431 1431
1432 1432 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API.
1433 1433 """
1434 1434 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1435 1435 return _convert_matcher_v1_result_to_v2(
1436 1436 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1437 1437 )
1438 1438
1439 1439
1440 1440 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1441 1441 """Match Unicode characters back to Unicode name
1442 1442
1443 1443 This does ``β˜ƒ`` -> ``\\snowman``
1444 1444
1445 1445 Note that snowman is not a valid python3 combining character but will be expanded.
1446 1446 Though the completion machinery will not recombine it back into the snowman character.
1447 1447
1448 1448 Nor will this back-complete standard sequences like \\n, \\b ...
1449 1449
1450 1450 .. deprecated:: 8.6
1451 1451 You can use :meth:`back_unicode_name_matcher` instead.
1452 1452
1453 1453 Returns
1454 1454 =======
1455 1455
1456 1456 Return a tuple with two elements:
1457 1457
1458 1458 - The Unicode character that was matched (preceded with a backslash), or
1459 1459 empty string,
1460 1460 - a sequence (of 1), name for the match Unicode character, preceded by
1461 1461 backslash, or empty if no match.
1462 1462 """
1463 1463 if len(text)<2:
1464 1464 return '', ()
1465 1465 maybe_slash = text[-2]
1466 1466 if maybe_slash != '\\':
1467 1467 return '', ()
1468 1468
1469 1469 char = text[-1]
1470 1470 # no expand on quote for completion in strings.
1471 1471 # nor backcomplete standard ascii keys
1472 1472 if char in string.ascii_letters or char in ('"',"'"):
1473 1473 return '', ()
1474 1474 try :
1475 1475 unic = unicodedata.name(char)
1476 1476 return '\\'+char,('\\'+unic,)
1477 1477 except KeyError:
1478 1478 pass
1479 1479 return '', ()
1480 1480
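The core of the back-completion above is a `unicodedata.name` lookup on the character after the backslash; for example:

```python
import unicodedata

text = "\\\u2603"  # user typed a backslash followed by the snowman character
char = text[-1]
# unicodedata.name raises ValueError for characters without a name.
name = unicodedata.name(char)
print("\\" + char, "->", "\\" + name)  # \β˜ƒ -> \SNOWMAN
```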
1481 1481
1482 1482 @context_matcher()
1483 1483 def back_latex_name_matcher(context: CompletionContext):
1484 1484 """Match latex characters back to unicode name
1485 1485
1486 1486 Same as :any:`back_latex_name_matches`, but adopted to new Matcher API.
1487 1487 """
1488 1488 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1489 1489 return _convert_matcher_v1_result_to_v2(
1490 1490 matches, type="latex", fragment=fragment, suppress_if_matches=True
1491 1491 )
1492 1492
1493 1493
1494 1494 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1495 1495 """Match latex characters back to unicode name
1496 1496
1497 1497 This does ``\\β„΅`` -> ``\\aleph``
1498 1498
1499 1499 .. deprecated:: 8.6
1500 1500 You can use :meth:`back_latex_name_matcher` instead.
1501 1501 """
1502 1502 if len(text)<2:
1503 1503 return '', ()
1504 1504 maybe_slash = text[-2]
1505 1505 if maybe_slash != '\\':
1506 1506 return '', ()
1507 1507
1508 1508
1509 1509 char = text[-1]
1510 1510 # no expand on quote for completion in strings.
1511 1511 # nor backcomplete standard ascii keys
1512 1512 if char in string.ascii_letters or char in ('"',"'"):
1513 1513 return '', ()
1514 1514 try :
1515 1515 latex = reverse_latex_symbol[char]
1516 1516 # '\\' replace the \ as well
1517 1517 return '\\'+char,[latex]
1518 1518 except KeyError:
1519 1519 pass
1520 1520 return '', ()
1521 1521
1522 1522
1523 1523 def _formatparamchildren(parameter) -> str:
1524 1524 """
1525 1525 Get parameter name and value from Jedi Private API
1526 1526
1527 1527 Jedi does not expose a simple way to get `param=value` from its API.
1528 1528
1529 1529 Parameters
1530 1530 ----------
1531 1531 parameter
1532 1532 Jedi's function `Param`
1533 1533
1534 1534 Returns
1535 1535 -------
1536 1536 A string like 'a', 'b=1', '*args', '**kwargs'
1537 1537
1538 1538 """
1539 1539 description = parameter.description
1540 1540 if not description.startswith('param '):
1541 1541 raise ValueError('Jedi function parameter description has changed format. '
1542 1542 'Expected "param ...", found %r.' % description)
1543 1543 return description[6:]
1544 1544
1545 1545 def _make_signature(completion)-> str:
1546 1546 """
1547 1547 Make the signature from a jedi completion
1548 1548
1549 1549 Parameters
1550 1550 ----------
1551 1551 completion : jedi.Completion
1552 1552 the completion object; it may or may not correspond to a function
1553 1553
1554 1554 Returns
1555 1555 -------
1556 1556 a string consisting of the function signature, with the parenthesis but
1557 1557 without the function name. example:
1558 1558 `(a, *args, b=1, **kwargs)`
1559 1559
1560 1560 """
1561 1561
1562 1562 # it looks like this might work on jedi 0.17
1563 1563 if hasattr(completion, 'get_signatures'):
1564 1564 signatures = completion.get_signatures()
1565 1565 if not signatures:
1566 1566 return '(?)'
1567 1567
1568 1568 c0 = completion.get_signatures()[0]
1569 1569 return '('+c0.to_string().split('(', maxsplit=1)[1]
1570 1570
1571 1571 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1572 1572 for p in signature.defined_names()) if f])
1573 1573
1574 1574
1575 1575 _CompleteResult = Dict[str, MatcherResult]
1576 1576
1577 1577
1578 1578 DICT_MATCHER_REGEX = re.compile(
1579 1579 r"""(?x)
1580 1580 ( # match dict-referring - or any get item object - expression
1581 1581 .+
1582 1582 )
1583 1583 \[ # open bracket
1584 1584 \s* # and optional whitespace
1585 1585 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1586 1586 # and slices
1587 1587 ((?:(?:
1588 1588 (?: # closed string
1589 1589 [uUbB]? # string prefix (r not handled)
1590 1590 (?:
1591 1591 '(?:[^']|(?<!\\)\\')*'
1592 1592 |
1593 1593 "(?:[^"]|(?<!\\)\\")*"
1594 1594 )
1595 1595 )
1596 1596 |
1597 1597 # capture integers and slices
1598 1598 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1599 1599 |
1600 1600 # integer in bin/hex/oct notation
1601 1601 0[bBxXoO]_?(?:\w|\d)+
1602 1602 )
1603 1603 \s*,\s*
1604 1604 )*)
1605 1605 ((?:
1606 1606 (?: # unclosed string
1607 1607 [uUbB]? # string prefix (r not handled)
1608 1608 (?:
1609 1609 '(?:[^']|(?<!\\)\\')*
1610 1610 |
1611 1611 "(?:[^"]|(?<!\\)\\")*
1612 1612 )
1613 1613 )
1614 1614 |
1615 1615 # unfinished integer
1616 1616 (?:[-+]?\d+)
1617 1617 |
1618 1618 # integer in bin/hex/oct notation
1619 1619 0[bBxXoO]_?(?:\w|\d)+
1620 1620 )
1621 1621 )?
1622 1622 $
1623 1623 """
1624 1624 )
1625 1625
1626 1626
1627 1627 def _convert_matcher_v1_result_to_v2(
1628 1628 matches: Sequence[str],
1629 1629 type: str,
1630 1630 fragment: Optional[str] = None,
1631 1631 suppress_if_matches: bool = False,
1632 1632 ) -> SimpleMatcherResult:
1633 1633 """Utility to help with transition"""
1634 1634 result = {
1635 1635 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1636 1636 "suppress": (True if matches else False) if suppress_if_matches else False,
1637 1637 }
1638 1638 if fragment is not None:
1639 1639 result["matched_fragment"] = fragment
1640 1640 return result
1641 1641
1642 1642
1643 1643 class IPCompleter(Completer):
1644 1644 """Extension of the completer class with IPython-specific features"""
1645 1645
1646 1646 @observe('greedy')
1647 1647 def _greedy_changed(self, change):
1648 1648 """update the splitter and readline delims when greedy is changed"""
1649 1649 if change["new"]:
1650 1650 self.evaluation = "unsafe"
1651 1651 self.auto_close_dict_keys = True
1652 1652 self.splitter.delims = GREEDY_DELIMS
1653 1653 else:
1654 self.evaluation = "limitted"
1654 self.evaluation = "limited"
1655 1655 self.auto_close_dict_keys = False
1656 1656 self.splitter.delims = DELIMS
1657 1657
1658 1658 dict_keys_only = Bool(
1659 1659 False,
1660 1660 help="""
1661 1661 Whether to show dict key matches only.
1662 1662
1663 1663 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1664 1664 """,
1665 1665 )
1666 1666
1667 1667 suppress_competing_matchers = UnionTrait(
1668 1668 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1669 1669 default_value=None,
1670 1670 help="""
1671 1671 Whether to suppress completions from other *Matchers*.
1672 1672
1673 1673 When set to ``None`` (default) the matchers will attempt to auto-detect
1674 1674 whether suppression of other matchers is desirable. For example, after
1675 1675 `%` at the beginning of a line we expect a magic completion
1676 1676 to be the only applicable option, and after ``my_dict['`` we usually
1677 1677 expect a completion with an existing dictionary key.
1678 1678
1679 1679 If you want to disable this heuristic and see completions from all matchers,
1680 1680 set ``IPCompleter.suppress_competing_matchers = False``.
1681 1681 To disable the heuristic for specific matchers provide a dictionary mapping:
1682 1682 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1683 1683
1684 1684 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1685 1685 completions to the set of matchers with the highest priority;
1686 1686 this is equivalent to ``IPCompleter.merge_completions`` and
1687 1687 can be beneficial for performance, but will sometimes omit relevant
1688 1688 candidates from matchers further down the priority list.
1689 1689 """,
1690 1690 ).tag(config=True)
1691 1691
1692 1692 merge_completions = Bool(
1693 1693 True,
1694 1694 help="""Whether to merge completion results into a single list
1695 1695
1696 1696 If False, only the completion results from the first non-empty
1697 1697 completer will be returned.
1698 1698
1699 1699 As of version 8.6.0, setting the value to ``False`` is an alias for:
1700 1700 ``IPCompleter.suppress_competing_matchers = True``.
1701 1701 """,
1702 1702 ).tag(config=True)
1703 1703
1704 1704 disable_matchers = ListTrait(
1705 1705 Unicode(),
1706 1706 help="""List of matchers to disable.
1707 1707
1708 1708 The list should contain matcher identifiers (see :any:`completion_matcher`).
1709 1709 """,
1710 1710 ).tag(config=True)
1711 1711
1712 1712 omit__names = Enum(
1713 1713 (0, 1, 2),
1714 1714 default_value=2,
1715 1715 help="""Instruct the completer to omit private method names
1716 1716
1717 1717 Specifically, when completing on ``object.<tab>``.
1718 1718
1719 1719 When 2 [default]: all names that start with '_' will be excluded.
1720 1720
1721 1721 When 1: all 'magic' names (``__foo__``) will be excluded.
1722 1722
1723 1723 When 0: nothing will be excluded.
1724 1724 """
1725 1725 ).tag(config=True)
1726 1726 limit_to__all__ = Bool(False,
1727 1727 help="""
1728 1728 DEPRECATED as of version 5.0.
1729 1729
1730 1730 Instruct the completer to use __all__ for the completion
1731 1731
1732 1732 Specifically, when completing on ``object.<tab>``.
1733 1733
1734 1734 When True: only those names in obj.__all__ will be included.
1735 1735
1736 1736 When False [default]: the __all__ attribute is ignored
1737 1737 """,
1738 1738 ).tag(config=True)
1739 1739
1740 1740 profile_completions = Bool(
1741 1741 default_value=False,
1742 1742 help="If True, emit profiling data for completion subsystem using cProfile."
1743 1743 ).tag(config=True)
1744 1744
1745 1745 profiler_output_dir = Unicode(
1746 1746 default_value=".completion_profiles",
1747 1747 help="Template for path at which to output profile data for completions."
1748 1748 ).tag(config=True)
1749 1749
1750 1750 @observe('limit_to__all__')
1751 1751 def _limit_to_all_changed(self, change):
1752 1752 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1753 1753 'value has been deprecated since IPython 5.0, and will be made to have '
1754 1754 'no effect and then removed in a future version of IPython.',
1755 1755 UserWarning)
1756 1756
1757 1757 def __init__(
1758 1758 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1759 1759 ):
1760 1760 """IPCompleter() -> completer
1761 1761
1762 1762 Return a completer object.
1763 1763
1764 1764 Parameters
1765 1765 ----------
1766 1766 shell
1767 1767 a pointer to the ipython shell itself. This is needed
1768 1768 because this completer knows about magic functions, and those can
1769 1769 only be accessed via the ipython instance.
1770 1770 namespace : dict, optional
1771 1771 an optional dict where completions are performed.
1772 1772 global_namespace : dict, optional
1773 1773 secondary optional dict for completions, to
1774 1774 handle cases (such as IPython embedded inside functions) where
1775 1775 both Python scopes are visible.
1776 1776 config : Config
1777 1777 traitlet's config object
1778 1778 **kwargs
1779 1779 passed to super class unmodified.
1780 1780 """
1781 1781
1782 1782 self.magic_escape = ESC_MAGIC
1783 1783 self.splitter = CompletionSplitter()
1784 1784
1785 1785 # _greedy_changed() depends on splitter and readline being defined:
1786 1786 super().__init__(
1787 1787 namespace=namespace,
1788 1788 global_namespace=global_namespace,
1789 1789 config=config,
1790 1790 **kwargs,
1791 1791 )
1792 1792
1793 1793 # List where completion matches will be stored
1794 1794 self.matches = []
1795 1795 self.shell = shell
1796 1796 # Regexp to split filenames with spaces in them
1797 1797 self.space_name_re = re.compile(r'([^\\] )')
1798 1798 # Hold a local ref. to glob.glob for speed
1799 1799 self.glob = glob.glob
1800 1800
1801 1801 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1802 1802 # buffers, to avoid completion problems.
1803 1803 term = os.environ.get('TERM','xterm')
1804 1804 self.dumb_terminal = term in ['dumb','emacs']
1805 1805
1806 1806 # Special handling of backslashes needed in win32 platforms
1807 1807 if sys.platform == "win32":
1808 1808 self.clean_glob = self._clean_glob_win32
1809 1809 else:
1810 1810 self.clean_glob = self._clean_glob
1811 1811
1812 1812 #regexp to parse docstring for function signature
1813 1813 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1814 1814 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1815 1815 #use this if positional argument name is also needed
1816 1816 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1817 1817
1818 1818 self.magic_arg_matchers = [
1819 1819 self.magic_config_matcher,
1820 1820 self.magic_color_matcher,
1821 1821 ]
1822 1822
1823 1823 # This is set externally by InteractiveShell
1824 1824 self.custom_completers = None
1825 1825
1826 1826 # This is a list of names of unicode characters that can be completed
1827 1827 # into their corresponding unicode value. The list is large, so we
1828 1828 # lazily initialize it on first use. Consuming code should access this
1829 1829 # attribute through the `@unicode_names` property.
1830 1830 self._unicode_names = None
1831 1831
1832 1832 self._backslash_combining_matchers = [
1833 1833 self.latex_name_matcher,
1834 1834 self.unicode_name_matcher,
1835 1835 back_latex_name_matcher,
1836 1836 back_unicode_name_matcher,
1837 1837 self.fwd_unicode_matcher,
1838 1838 ]
1839 1839
1840 1840 if not self.backslash_combining_completions:
1841 1841 for matcher in self._backslash_combining_matchers:
1842 1842 self.disable_matchers.append(matcher.matcher_identifier)
1843 1843
1844 1844 if not self.merge_completions:
1845 1845 self.suppress_competing_matchers = True
1846 1846
1847 1847 @property
1848 1848 def matchers(self) -> List[Matcher]:
1849 1849 """All active matcher routines for completion"""
1850 1850 if self.dict_keys_only:
1851 1851 return [self.dict_key_matcher]
1852 1852
1853 1853 if self.use_jedi:
1854 1854 return [
1855 1855 *self.custom_matchers,
1856 1856 *self._backslash_combining_matchers,
1857 1857 *self.magic_arg_matchers,
1858 1858 self.custom_completer_matcher,
1859 1859 self.magic_matcher,
1860 1860 self._jedi_matcher,
1861 1861 self.dict_key_matcher,
1862 1862 self.file_matcher,
1863 1863 ]
1864 1864 else:
1865 1865 return [
1866 1866 *self.custom_matchers,
1867 1867 *self._backslash_combining_matchers,
1868 1868 *self.magic_arg_matchers,
1869 1869 self.custom_completer_matcher,
1870 1870 self.dict_key_matcher,
1871 1871 # TODO: convert python_matches to v2 API
1872 1872 self.magic_matcher,
1873 1873 self.python_matches,
1874 1874 self.file_matcher,
1875 1875 self.python_func_kw_matcher,
1876 1876 ]
1877 1877
1878 1878 def all_completions(self, text:str) -> List[str]:
1879 1879 """
1880 1880 Wrapper around the completion methods for the benefit of emacs.
1881 1881 """
1882 1882 prefix = text.rpartition('.')[0]
1883 1883 with provisionalcompleter():
1884 1884 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1885 1885 for c in self.completions(text, len(text))]
1888 1888
1889 1889 def _clean_glob(self, text:str):
1890 1890 return self.glob("%s*" % text)
1891 1891
1892 1892 def _clean_glob_win32(self, text:str):
1893 1893 return [f.replace("\\","/")
1894 1894 for f in self.glob("%s*" % text)]
1895 1895
1896 1896 @context_matcher()
1897 1897 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1898 1898 """Same as :any:`file_matches`, but adopted to new Matcher API."""
1899 1899 matches = self.file_matches(context.token)
1900 1900 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1901 1901 # starts with `/home/`, `C:\`, etc)
1902 1902 return _convert_matcher_v1_result_to_v2(matches, type="path")
1903 1903
1904 1904 def file_matches(self, text: str) -> List[str]:
1905 1905 """Match filenames, expanding ~USER type strings.
1906 1906
1907 1907 Most of the seemingly convoluted logic in this completer is an
1908 1908 attempt to handle filenames with spaces in them. And yet it's not
1909 1909 quite perfect, because Python's readline doesn't expose all of the
1910 1910 GNU readline details needed for this to be done correctly.
1911 1911
1912 1912 For a filename with a space in it, the printed completions will be
1913 1913 only the parts after what's already been typed (instead of the
1914 1914 full completions, as is normally done). I don't think with the
1915 1915 current (as of Python 2.3) Python readline it's possible to do
1916 1916 better.
1917 1917
1918 1918 .. deprecated:: 8.6
1919 1919 You can use :meth:`file_matcher` instead.
1920 1920 """
1921 1921
1922 1922 # chars that require escaping with backslash - i.e. chars
1923 1923 # that readline treats incorrectly as delimiters, but we
1924 1924 # don't want to treat as delimiters in filename matching
1925 1925 # when escaped with backslash
1926 1926 if text.startswith('!'):
1927 1927 text = text[1:]
1928 1928 text_prefix = u'!'
1929 1929 else:
1930 1930 text_prefix = u''
1931 1931
1932 1932 text_until_cursor = self.text_until_cursor
1933 1933 # track strings with open quotes
1934 1934 open_quotes = has_open_quotes(text_until_cursor)
1935 1935
1936 1936 if '(' in text_until_cursor or '[' in text_until_cursor:
1937 1937 lsplit = text
1938 1938 else:
1939 1939 try:
1940 1940 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1941 1941 lsplit = arg_split(text_until_cursor)[-1]
1942 1942 except ValueError:
1943 1943 # typically an unmatched ", or backslash without escaped char.
1944 1944 if open_quotes:
1945 1945 lsplit = text_until_cursor.split(open_quotes)[-1]
1946 1946 else:
1947 1947 return []
1948 1948 except IndexError:
1949 1949 # tab pressed on empty line
1950 1950 lsplit = ""
1951 1951
1952 1952 if not open_quotes and lsplit != protect_filename(lsplit):
1953 1953 # if protectables are found, do matching on the whole escaped name
1954 1954 has_protectables = True
1955 1955 text0,text = text,lsplit
1956 1956 else:
1957 1957 has_protectables = False
1958 1958 text = os.path.expanduser(text)
1959 1959
1960 1960 if text == "":
1961 1961 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1962 1962
1963 1963 # Compute the matches from the filesystem
1964 1964 if sys.platform == 'win32':
1965 1965 m0 = self.clean_glob(text)
1966 1966 else:
1967 1967 m0 = self.clean_glob(text.replace('\\', ''))
1968 1968
1969 1969 if has_protectables:
1970 1970 # If we had protectables, we need to revert our changes to the
1971 1971 # beginning of filename so that we don't double-write the part
1972 1972 # of the filename we have so far
1973 1973 len_lsplit = len(lsplit)
1974 1974 matches = [text_prefix + text0 +
1975 1975 protect_filename(f[len_lsplit:]) for f in m0]
1976 1976 else:
1977 1977 if open_quotes:
1978 1978 # if we have a string with an open quote, we don't need to
1979 1979 # protect the names beyond the quote (and we _shouldn't_, as
1980 1980 # it would cause bugs when the filesystem call is made).
1981 1981 matches = m0 if sys.platform == "win32" else\
1982 1982 [protect_filename(f, open_quotes) for f in m0]
1983 1983 else:
1984 1984 matches = [text_prefix +
1985 1985 protect_filename(f) for f in m0]
1986 1986
1987 1987 # Mark directories in input list by appending '/' to their names.
1988 1988 return [x+'/' if os.path.isdir(x) else x for x in matches]
1989 1989
1990 1990 @context_matcher()
1991 1991 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1992 1992 """Match magics."""
1993 1993 text = context.token
1994 1994 matches = self.magic_matches(text)
1995 1995 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1996 1996 is_magic_prefix = len(text) > 0 and text[0] == "%"
1997 1997 result["suppress"] = is_magic_prefix and bool(result["completions"])
1998 1998 return result
1999 1999
2000 2000 def magic_matches(self, text: str):
2001 2001 """Match magics.
2002 2002
2003 2003 .. deprecated:: 8.6
2004 2004 You can use :meth:`magic_matcher` instead.
2005 2005 """
2006 2006 # Get all shell magics now rather than statically, so magics loaded at
2007 2007 # runtime show up too.
2008 2008 lsm = self.shell.magics_manager.lsmagic()
2009 2009 line_magics = lsm['line']
2010 2010 cell_magics = lsm['cell']
2011 2011 pre = self.magic_escape
2012 2012 pre2 = pre+pre
2013 2013
2014 2014 explicit_magic = text.startswith(pre)
2015 2015
2016 2016 # Completion logic:
2017 2017 # - user gives %%: only do cell magics
2018 2018 # - user gives %: do both line and cell magics
2019 2019 # - no prefix: do both
2020 2020 # In other words, line magics are skipped if the user gives %% explicitly
2021 2021 #
2022 2022 # We also exclude magics that match any currently visible names:
2023 2023 # https://github.com/ipython/ipython/issues/4877, unless the user has
2024 2024 # typed a %:
2025 2025 # https://github.com/ipython/ipython/issues/10754
2026 2026 bare_text = text.lstrip(pre)
2027 2027 global_matches = self.global_matches(bare_text)
2028 2028 if not explicit_magic:
2029 2029 def matches(magic):
2030 2030 """
2031 2031 Filter magics, in particular remove magics that match
2032 2032 a name present in global namespace.
2033 2033 """
2034 2034 return ( magic.startswith(bare_text) and
2035 2035 magic not in global_matches )
2036 2036 else:
2037 2037 def matches(magic):
2038 2038 return magic.startswith(bare_text)
2039 2039
2040 2040 comp = [ pre2+m for m in cell_magics if matches(m)]
2041 2041 if not text.startswith(pre2):
2042 2042 comp += [ pre+m for m in line_magics if matches(m)]
2043 2043
2044 2044 return comp
2045 2045
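A minimal standalone sketch of the `%`/`%%` completion rules described in the comments above (the `complete_magic` name and toy magic lists are illustrative, not part of this module):

```python
# Toy sketch: '%%' limits results to cell magics; '%' (or no prefix)
# also offers line magics, mirroring magic_matches above.
def complete_magic(text, line_magics, cell_magics):
    pre, pre2 = '%', '%%'
    bare = text.lstrip('%')
    # cell magics always qualify, rendered with the '%%' prefix
    comp = [pre2 + m for m in cell_magics if m.startswith(bare)]
    # line magics are skipped only when the user typed '%%' explicitly
    if not text.startswith(pre2):
        comp += [pre + m for m in line_magics if m.startswith(bare)]
    return comp
```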
2046 2046 @context_matcher()
2047 2047 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2048 2048 """Match class names and attributes for %config magic."""
2049 2049 # NOTE: uses `line_buffer` equivalent for compatibility
2050 2050 matches = self.magic_config_matches(context.line_with_cursor)
2051 2051 return _convert_matcher_v1_result_to_v2(matches, type="param")
2052 2052
2053 2053 def magic_config_matches(self, text: str) -> List[str]:
2054 2054 """Match class names and attributes for %config magic.
2055 2055
2056 2056 .. deprecated:: 8.6
2057 2057 You can use :meth:`magic_config_matcher` instead.
2058 2058 """
2059 2059 texts = text.strip().split()
2060 2060
2061 2061 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2062 2062 # get all configuration classes
2063 2063 classes = sorted(set([ c for c in self.shell.configurables
2064 2064 if c.__class__.class_traits(config=True)
2065 2065 ]), key=lambda x: x.__class__.__name__)
2066 2066 classnames = [ c.__class__.__name__ for c in classes ]
2067 2067
2068 2068 # return all classnames if config or %config is given
2069 2069 if len(texts) == 1:
2070 2070 return classnames
2071 2071
2072 2072 # match classname
2073 2073 classname_texts = texts[1].split('.')
2074 2074 classname = classname_texts[0]
2075 2075 classname_matches = [ c for c in classnames
2076 2076 if c.startswith(classname) ]
2077 2077
2078 2078 # return matched classes or the matched class with attributes
2079 2079 if texts[1].find('.') < 0:
2080 2080 return classname_matches
2081 2081 elif len(classname_matches) == 1 and \
2082 2082 classname_matches[0] == classname:
2083 2083 cls = classes[classnames.index(classname)].__class__
2084 2084 help = cls.class_get_help()
2085 2085 # strip leading '--' from cl-args:
2086 2086 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2087 2087 return [ attr.split('=')[0]
2088 2088 for attr in help.strip().splitlines()
2089 2089 if attr.startswith(texts[1]) ]
2090 2090 return []
2091 2091
2092 2092 @context_matcher()
2093 2093 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2094 2094 """Match color schemes for %colors magic."""
2095 2095 # NOTE: uses `line_buffer` equivalent for compatibility
2096 2096 matches = self.magic_color_matches(context.line_with_cursor)
2097 2097 return _convert_matcher_v1_result_to_v2(matches, type="param")
2098 2098
2099 2099 def magic_color_matches(self, text: str) -> List[str]:
2100 2100 """Match color schemes for %colors magic.
2101 2101
2102 2102 .. deprecated:: 8.6
2103 2103 You can use :meth:`magic_color_matcher` instead.
2104 2104 """
2105 2105 texts = text.split()
2106 2106 if text.endswith(' '):
2107 2107 # .split() strips off the trailing whitespace. Add '' back
2108 2108 # so that: '%colors ' -> ['%colors', '']
2109 2109 texts.append('')
2110 2110
2111 2111 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2112 2112 prefix = texts[1]
2113 2113 return [ color for color in InspectColors.keys()
2114 2114 if color.startswith(prefix) ]
2115 2115 return []
2116 2116
2117 2117 @context_matcher(identifier="IPCompleter.jedi_matcher")
2118 2118 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2119 2119 matches = self._jedi_matches(
2120 2120 cursor_column=context.cursor_position,
2121 2121 cursor_line=context.cursor_line,
2122 2122 text=context.full_text,
2123 2123 )
2124 2124 return {
2125 2125 "completions": matches,
2126 2126 # static analysis should not suppress other matchers
2127 2127 "suppress": False,
2128 2128 }
2129 2129
2130 2130 def _jedi_matches(
2131 2131 self, cursor_column: int, cursor_line: int, text: str
2132 2132 ) -> Iterable[_JediCompletionLike]:
2133 2133 """
2134 2134 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
2135 2135 cursor position.
2136 2136
2137 2137 Parameters
2138 2138 ----------
2139 2139 cursor_column : int
2140 2140 column position of the cursor in ``text``, 0-indexed.
2141 2141 cursor_line : int
2142 2142 line position of the cursor in ``text``, 0-indexed
2143 2143 text : str
2144 2144 text to complete
2145 2145
2146 2146 Notes
2147 2147 -----
2148 2148 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
2149 2149 object containing a string with the Jedi debug information attached.
2150 2150
2151 2151 .. deprecated:: 8.6
2152 2152 You can use :meth:`_jedi_matcher` instead.
2153 2153 """
2154 2154 namespaces = [self.namespace]
2155 2155 if self.global_namespace is not None:
2156 2156 namespaces.append(self.global_namespace)
2157 2157
2158 2158 completion_filter = lambda x:x
2159 2159 offset = cursor_to_position(text, cursor_line, cursor_column)
2160 2160 # filter output if we are completing for object members
2161 2161 if offset:
2162 2162 pre = text[offset-1]
2163 2163 if pre == '.':
2164 2164 if self.omit__names == 2:
2165 2165 completion_filter = lambda c:not c.name.startswith('_')
2166 2166 elif self.omit__names == 1:
2167 2167 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2168 2168 elif self.omit__names == 0:
2169 2169 completion_filter = lambda x:x
2170 2170 else:
2171 2171 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2172 2172
2173 2173 interpreter = jedi.Interpreter(text[:offset], namespaces)
2174 2174 try_jedi = True
2175 2175
2176 2176 try:
2177 2177 # find the first token in the current tree -- if it is a ' or " then we are in a string
2178 2178 completing_string = False
2179 2179 try:
2180 2180 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2181 2181 except StopIteration:
2182 2182 pass
2183 2183 else:
2184 2184 # note the value may be ', ", or it may also be ''' or """, or
2185 2185 # in some cases, """what/you/typed..., but all of these are
2186 2186 # strings.
2187 2187 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2188 2188
2189 2189 # if we are in a string jedi is likely not the right candidate for
2190 2190 # now. Skip it.
2191 2191 try_jedi = not completing_string
2192 2192 except Exception as e:
2193 2193 # many things can go wrong; we are using a private API, just don't crash.
2194 2194 if self.debug:
2195 2195 print("Error detecting if completing a non-finished string:", e, '|')
2196 2196
2197 2197 if not try_jedi:
2198 2198 return []
2199 2199 try:
2200 2200 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2201 2201 except Exception as e:
2202 2202 if self.debug:
2203 2203 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
2204 2204 else:
2205 2205 return []
2206 2206
2207 2207 def python_matches(self, text: str) -> Iterable[str]:
2208 2208 """Match attributes or global python names"""
2209 2209 if "." in text:
2210 2210 try:
2211 2211 matches = self.attr_matches(text)
2212 2212 if text.endswith('.') and self.omit__names:
2213 2213 if self.omit__names == 1:
2214 2214 # true if txt is _not_ a __ name, false otherwise:
2215 2215 no__name = (lambda txt:
2216 2216 re.match(r'.*\.__.*?__',txt) is None)
2217 2217 else:
2218 2218 # true if txt is _not_ a _ name, false otherwise:
2219 2219 no__name = (lambda txt:
2220 2220 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2221 2221 matches = filter(no__name, matches)
2222 2222 except NameError:
2223 2223 # catches <undefined attributes>.<tab>
2224 2224 matches = []
2225 2225 else:
2226 2226 matches = self.global_matches(text)
2227 2227 return matches
2228 2228
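The two `omit__names` filters above can be sketched standalone; the `filter_private` helper below is illustrative only, using the same regular expressions:

```python
import re

# Sketch of the private-name filtering in python_matches:
# level 1 drops only dunder names ('obj.__init__'),
# level 2 drops anything whose last attribute starts with '_'.
def filter_private(matches, omit_level):
    if omit_level == 1:
        keep = lambda txt: re.match(r'.*\.__.*?__', txt) is None
    else:
        keep = lambda txt: re.match(r'\._.*?', txt[txt.rindex('.'):]) is None
    return [m for m in matches if keep(m)]
```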
2229 2229 def _default_arguments_from_docstring(self, doc):
2230 2230 """Parse the first line of docstring for call signature.
2231 2231
2232 2232 Docstring should be of the form 'min(iterable[, key=func])\n'.
2233 2233 It can also parse Cython docstrings of the form
2234 2234 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2235 2235 """
2236 2236 if doc is None:
2237 2237 return []
2238 2238
2239 2239 # care only about the first line
2240 2240 line = doc.lstrip().splitlines()[0]
2241 2241
2242 2242 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2243 2243 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2244 2244 sig = self.docstring_sig_re.search(line)
2245 2245 if sig is None:
2246 2246 return []
2247 2247 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2248 2248 sig = sig.groups()[0].split(',')
2249 2249 ret = []
2250 2250 for s in sig:
2251 2251 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2252 2252 ret += self.docstring_kwd_re.findall(s)
2253 2253 return ret
2254 2254
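The docstring-signature parsing above can be exercised on its own; `kwargs_from_docstring` below is an illustrative standalone wrapper around the same two regular expressions:

```python
import re

# Same patterns as docstring_sig_re / docstring_kwd_re above:
# first capture the '(...)' argument list, then pick out 'name=' keywords.
sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')

def kwargs_from_docstring(line):
    sig = sig_re.search(line)
    if sig is None:
        return []
    out = []
    for part in sig.groups()[0].split(','):
        out += kwd_re.findall(part)
    return out
```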
2255 2255 def _default_arguments(self, obj):
2256 2256 """Return the list of default arguments of obj if it is callable,
2257 2257 or empty list otherwise."""
2258 2258 call_obj = obj
2259 2259 ret = []
2260 2260 if inspect.isbuiltin(obj):
2261 2261 pass
2262 2262 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2263 2263 if inspect.isclass(obj):
2264 2264 #for cython embedsignature=True the constructor docstring
2265 2265 #belongs to the object itself not __init__
2266 2266 ret += self._default_arguments_from_docstring(
2267 2267 getattr(obj, '__doc__', ''))
2268 2268 # for classes, check for __init__,__new__
2269 2269 call_obj = (getattr(obj, '__init__', None) or
2270 2270 getattr(obj, '__new__', None))
2271 2271 # for all others, check if they are __call__able
2272 2272 elif hasattr(obj, '__call__'):
2273 2273 call_obj = obj.__call__
2274 2274 ret += self._default_arguments_from_docstring(
2275 2275 getattr(call_obj, '__doc__', ''))
2276 2276
2277 2277 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2278 2278 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2279 2279
2280 2280 try:
2281 2281 sig = inspect.signature(obj)
2282 2282 ret.extend(k for k, v in sig.parameters.items() if
2283 2283 v.kind in _keeps)
2284 2284 except ValueError:
2285 2285 pass
2286 2286
2287 2287 return list(set(ret))
2288 2288
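The signature-based half of `_default_arguments` amounts to keeping parameters that can be passed by keyword; a minimal sketch (the `keyword_names` name is illustrative):

```python
import inspect

# Keep only parameters that can be passed as keywords, as above.
_keeps = (inspect.Parameter.KEYWORD_ONLY,
          inspect.Parameter.POSITIONAL_OR_KEYWORD)

def keyword_names(obj):
    try:
        sig = inspect.signature(obj)
    except (ValueError, TypeError):
        # e.g. some builtins expose no signature
        return []
    return [k for k, v in sig.parameters.items() if v.kind in _keeps]
```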
2289 2289 @context_matcher()
2290 2290 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2291 2291 """Match named parameters (kwargs) of the last open function."""
2292 2292 matches = self.python_func_kw_matches(context.token)
2293 2293 return _convert_matcher_v1_result_to_v2(matches, type="param")
2294 2294
2295 2295 def python_func_kw_matches(self, text):
2296 2296 """Match named parameters (kwargs) of the last open function.
2297 2297
2298 2298 .. deprecated:: 8.6
2299 2299 You can use :meth:`python_func_kw_matcher` instead.
2300 2300 """
2301 2301
2302 2302 if "." in text: # a parameter cannot be dotted
2303 2303 return []
2304 2304 try: regexp = self.__funcParamsRegex
2305 2305 except AttributeError:
2306 2306 regexp = self.__funcParamsRegex = re.compile(r'''
2307 2307 '.*?(?<!\\)' | # single quoted strings or
2308 2308 ".*?(?<!\\)" | # double quoted strings or
2309 2309 \w+ | # identifier
2310 2310 \S # other characters
2311 2311 ''', re.VERBOSE | re.DOTALL)
2312 2312 # 1. find the nearest identifier that comes before an unclosed
2313 2313 # parenthesis before the cursor
2314 2314 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2315 2315 tokens = regexp.findall(self.text_until_cursor)
2316 2316 iterTokens = reversed(tokens); openPar = 0
2317 2317
2318 2318 for token in iterTokens:
2319 2319 if token == ')':
2320 2320 openPar -= 1
2321 2321 elif token == '(':
2322 2322 openPar += 1
2323 2323 if openPar > 0:
2324 2324 # found the last unclosed parenthesis
2325 2325 break
2326 2326 else:
2327 2327 return []
2328 2328 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2329 2329 ids = []
2330 2330 isId = re.compile(r'\w+$').match
2331 2331
2332 2332 while True:
2333 2333 try:
2334 2334 ids.append(next(iterTokens))
2335 2335 if not isId(ids[-1]):
2336 2336 ids.pop(); break
2337 2337 if not next(iterTokens) == '.':
2338 2338 break
2339 2339 except StopIteration:
2340 2340 break
2341 2341
2342 2342 # Find all named arguments already assigned to, so as to avoid suggesting
2343 2343 # them again
2344 2344 usedNamedArgs = set()
2345 2345 par_level = -1
2346 2346 for token, next_token in zip(tokens, tokens[1:]):
2347 2347 if token == '(':
2348 2348 par_level += 1
2349 2349 elif token == ')':
2350 2350 par_level -= 1
2351 2351
2352 2352 if par_level != 0:
2353 2353 continue
2354 2354
2355 2355 if next_token != '=':
2356 2356 continue
2357 2357
2358 2358 usedNamedArgs.add(token)
2359 2359
2360 2360 argMatches = []
2361 2361 try:
2362 2362 callableObj = '.'.join(ids[::-1])
2363 2363 namedArgs = self._default_arguments(eval(callableObj,
2364 2364 self.namespace))
2365 2365
2366 2366 # Remove used named arguments from the list, no need to show twice
2367 2367 for namedArg in set(namedArgs) - usedNamedArgs:
2368 2368 if namedArg.startswith(text):
2369 2369 argMatches.append("%s=" %namedArg)
2370 2370 except:
2371 2371 pass
2372 2372
2373 2373 return argMatches
2374 2374
2375 2375 @staticmethod
2376 2376 def _get_keys(obj: Any) -> List[Any]:
2377 2377 # Objects can define their own completions by defining an
2378 2378 # _ipython_key_completions_() method.
2379 2379 method = get_real_method(obj, '_ipython_key_completions_')
2380 2380 if method is not None:
2381 2381 return method()
2382 2382
2383 2383 # Special case some common in-memory dict-like types
2384 2384 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2385 2385 try:
2386 2386 return list(obj.keys())
2387 2387 except Exception:
2388 2388 return []
2389 2389 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2390 2390 try:
2391 2391 return list(obj.obj.keys())
2392 2392 except Exception:
2393 2393 return []
2394 2394 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2395 2395 _safe_isinstance(obj, 'numpy', 'void'):
2396 2396 return obj.dtype.names or []
2397 2397 return []
2398 2398
2399 2399 @context_matcher()
2400 2400 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2401 2401 """Match string keys in a dictionary, after e.g. ``foo[``."""
2402 2402 matches = self.dict_key_matches(context.token)
2403 2403 return _convert_matcher_v1_result_to_v2(
2404 2404 matches, type="dict key", suppress_if_matches=True
2405 2405 )
2406 2406
2407 2407 def dict_key_matches(self, text: str) -> List[str]:
2408 2408 """Match string keys in a dictionary, after e.g. ``foo[``.
2409 2409
2410 2410 .. deprecated:: 8.6
2411 2411 You can use :meth:`dict_key_matcher` instead.
2412 2412 """
2413 2413
2414 2414 # Short-circuit on closed dictionary (regular expression would
2415 2415 # not match anyway, but would take quite a while).
2416 2416 if self.text_until_cursor.strip().endswith("]"):
2417 2417 return []
2418 2418
2419 2419 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2420 2420
2421 2421 if match is None:
2422 2422 return []
2423 2423
2424 2424 expr, prior_tuple_keys, key_prefix = match.groups()
2425 2425
2426 2426 obj = self._evaluate_expr(expr)
2427 2427
2428 2428 if obj is not_found:
2429 2429 return []
2430 2430
2431 2431 keys = self._get_keys(obj)
2432 2432 if not keys:
2433 2433 return keys
2434 2434
2435 2435 tuple_prefix = guarded_eval(
2436 2436 prior_tuple_keys,
2437 2437 EvaluationContext(
2438 2438 globals_=self.global_namespace,
2439 2439 locals_=self.namespace,
2440 2440 evaluation=self.evaluation,
2441 2441 in_subscript=True,
2442 2442 ),
2443 2443 )
2444 2444
2445 2445 closing_quote, token_offset, matches = match_dict_keys(
2446 2446 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2447 2447 )
2448 2448 if not matches:
2449 2449 return []
2450 2450
2451 2451 # get the cursor position of
2452 2452 # - the text being completed
2453 2453 # - the start of the key text
2454 2454 # - the start of the completion
2455 2455 text_start = len(self.text_until_cursor) - len(text)
2456 2456 if key_prefix:
2457 2457 key_start = match.start(3)
2458 2458 completion_start = key_start + token_offset
2459 2459 else:
2460 2460 key_start = completion_start = match.end()
2461 2461
2462 2462 # grab the leading prefix, to make sure all completions start with `text`
2463 2463 if text_start > key_start:
2464 2464 leading = ''
2465 2465 else:
2466 2466 leading = text[text_start:completion_start]
2467 2467
2468 2468 # append closing quote and bracket as appropriate
2469 2469 # this is *not* appropriate if the opening quote or bracket is outside
2470 2470 # the text given to this method, e.g. `d["""a\nt
2471 2471 can_close_quote = False
2472 2472 can_close_bracket = False
2473 2473
2474 2474 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2475 2475
2476 2476 if continuation.startswith(closing_quote):
2477 2477 # do not close if already closed, e.g. `d['a<tab>'`
2478 2478 continuation = continuation[len(closing_quote) :]
2479 2479 else:
2480 2480 can_close_quote = True
2481 2481
2482 2482 continuation = continuation.strip()
2483 2483
2484 2484 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2485 2485 # handling it is out of scope, so let's avoid appending suffixes.
2486 2486 has_known_tuple_handling = isinstance(obj, dict)
2487 2487
2488 2488 can_close_bracket = (
2489 2489 not continuation.startswith("]") and self.auto_close_dict_keys
2490 2490 )
2491 2491 can_close_tuple_item = (
2492 2492 not continuation.startswith(",")
2493 2493 and has_known_tuple_handling
2494 2494 and self.auto_close_dict_keys
2495 2495 )
2496 2496 can_close_quote = can_close_quote and self.auto_close_dict_keys
2497 2497
2498 2498 # fast path if a closing quote should be appended but no suffix is allowed
2499 2499 if not can_close_quote and not can_close_bracket and closing_quote:
2500 2500 return [leading + k for k in matches]
2501 2501
2502 2502 results = []
2503 2503
2504 2504 end_of_tuple_or_item = DictKeyState.END_OF_TUPLE | DictKeyState.END_OF_ITEM
2505 2505
2506 2506 for k, state_flag in matches.items():
2507 2507 result = leading + k
2508 2508 if can_close_quote and closing_quote:
2509 2509 result += closing_quote
2510 2510
2511 2511 if state_flag == end_of_tuple_or_item:
2512 2512 # We do not know which suffix to add,
2513 2513 # e.g. both tuple item and string
2514 2514 # match this item.
2515 2515 pass
2516 2516
2517 2517 if state_flag in end_of_tuple_or_item and can_close_bracket:
2518 2518 result += "]"
2519 2519 if state_flag == DictKeyState.IN_TUPLE and can_close_tuple_item:
2520 2520 result += ", "
2521 2521 results.append(result)
2522 2522 return results
2523 2523
2524 2524 @context_matcher()
2525 2525 def unicode_name_matcher(self, context: CompletionContext):
2526 2526 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2527 2527 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2528 2528 return _convert_matcher_v1_result_to_v2(
2529 2529 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2530 2530 )
2531 2531
2532 2532 @staticmethod
2533 2533 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2534 2534 """Match Latex-like syntax for unicode characters base
2535 2535 on the name of the character.
2536 2536
2537 2537 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2538 2538
2539 2539 Works only on valid Python 3 identifiers, or on combining characters that
2540 2540 will combine to form a valid identifier.
2541 2541 """
2542 2542 slashpos = text.rfind('\\')
2543 2543 if slashpos > -1:
2544 2544 s = text[slashpos+1:]
2545 2545 try:
2546 2546 unic = unicodedata.lookup(s)
2547 2547 # allow combining chars
2548 2548 if ('a'+unic).isidentifier():
2549 2549 return '\\'+s,[unic]
2550 2550 except KeyError:
2551 2551 pass
2552 2552 return '', []
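The name-lookup strategy above can be sketched in isolation. This is a standalone sketch, not IPython's API; `match_unicode_name` is a hypothetical helper mirroring the logic of `unicode_name_matches`:

```python
import unicodedata

def match_unicode_name(text):
    """Return (fragment, matches) for a trailing \\NAME fragment, if any."""
    slashpos = text.rfind("\\")
    if slashpos > -1:
        name = text[slashpos + 1:]
        try:
            char = unicodedata.lookup(name)
        except KeyError:
            return "", []
        # prepending a letter lets combining characters pass the identifier check
        if ("a" + char).isidentifier():
            return "\\" + name, [char]
    return "", []
```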
2553 2553
2554 2554 @context_matcher()
2555 2555 def latex_name_matcher(self, context: CompletionContext):
2556 2556 """Match Latex syntax for unicode characters.
2557 2557
2558 2558 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2559 2559 """
2560 2560 fragment, matches = self.latex_matches(context.text_until_cursor)
2561 2561 return _convert_matcher_v1_result_to_v2(
2562 2562 matches, type="latex", fragment=fragment, suppress_if_matches=True
2563 2563 )
2564 2564
2565 2565 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2566 2566 """Match Latex syntax for unicode characters.
2567 2567
2568 2568 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2569 2569
2570 2570 .. deprecated:: 8.6
2571 2571 You can use :meth:`latex_name_matcher` instead.
2572 2572 """
2573 2573 slashpos = text.rfind('\\')
2574 2574 if slashpos > -1:
2575 2575 s = text[slashpos:]
2576 2576 if s in latex_symbols:
2577 2577 # Try to complete a full latex symbol to unicode
2578 2578 # \\alpha -> Ξ±
2579 2579 return s, [latex_symbols[s]]
2580 2580 else:
2581 2581 # If a user has partially typed a latex symbol, give them
2582 2582 # a full list of options \al -> [\aleph, \alpha]
2583 2583 matches = [k for k in latex_symbols if k.startswith(s)]
2584 2584 if matches:
2585 2585 return s, matches
2586 2586 return '', ()
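The two branches above (exact symbol to character, prefix to candidate list) can be sketched with a tiny stand-in table; the real `latex_symbols` table in IPython is much larger, this three-entry dict is an assumption for illustration:

```python
# stand-in for IPython's latex_symbols table (illustrative subset)
latex_symbols = {"\\alpha": "\u03b1", "\\aleph": "\u2135", "\\beta": "\u03b2"}

def latex_matches(text):
    slashpos = text.rfind("\\")
    if slashpos > -1:
        s = text[slashpos:]
        if s in latex_symbols:
            # full latex symbol -> its unicode character
            return s, [latex_symbols[s]]
        # partial symbol -> list of candidate symbol names
        matches = [k for k in latex_symbols if k.startswith(s)]
        if matches:
            return s, matches
    return "", ()
```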
2587 2587
2588 2588 @context_matcher()
2589 2589 def custom_completer_matcher(self, context):
2590 2590 """Dispatch custom completer.
2591 2591
2592 2592 If a match is found, suppresses all other matchers except for Jedi.
2593 2593 """
2594 2594 matches = self.dispatch_custom_completer(context.token) or []
2595 2595 result = _convert_matcher_v1_result_to_v2(
2596 2596 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2597 2597 )
2598 2598 result["ordered"] = True
2599 2599 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2600 2600 return result
2601 2601
2602 2602 def dispatch_custom_completer(self, text):
2603 2603 """
2604 2604 .. deprecated:: 8.6
2605 2605 You can use :meth:`custom_completer_matcher` instead.
2606 2606 """
2607 2607 if not self.custom_completers:
2608 2608 return
2609 2609
2610 2610 line = self.line_buffer
2611 2611 if not line.strip():
2612 2612 return None
2613 2613
2614 2614 # Create a little structure to pass all the relevant information about
2615 2615 # the current completion to any custom completer.
2616 2616 event = SimpleNamespace()
2617 2617 event.line = line
2618 2618 event.symbol = text
2619 2619 cmd = line.split(None,1)[0]
2620 2620 event.command = cmd
2621 2621 event.text_until_cursor = self.text_until_cursor
2622 2622
2623 2623 # for foo etc, try also to find completer for %foo
2624 2624 if not cmd.startswith(self.magic_escape):
2625 2625 try_magic = self.custom_completers.s_matches(
2626 2626 self.magic_escape + cmd)
2627 2627 else:
2628 2628 try_magic = []
2629 2629
2630 2630 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2631 2631 try_magic,
2632 2632 self.custom_completers.flat_matches(self.text_until_cursor)):
2633 2633 try:
2634 2634 res = c(event)
2635 2635 if res:
2636 2636 # first, try case sensitive match
2637 2637 withcase = [r for r in res if r.startswith(text)]
2638 2638 if withcase:
2639 2639 return withcase
2640 2640 # if none, then case insensitive ones are ok too
2641 2641 text_low = text.lower()
2642 2642 return [r for r in res if r.lower().startswith(text_low)]
2643 2643 except TryNext:
2644 2644 pass
2645 2645 except KeyboardInterrupt:
2646 2646 """
2647 2647 If a custom completer takes too long,
2648 2648 let keyboard interrupt abort and return nothing.
2649 2649 """
2650 2650 break
2651 2651
2652 2652 return None
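The case-sensitive-first filtering applied to each custom completer's results can be sketched on its own (`filter_candidates` is a hypothetical name for the inline logic above):

```python
def filter_candidates(candidates, text):
    """Prefer case-sensitive prefix matches; fall back to case-insensitive ones."""
    withcase = [c for c in candidates if c.startswith(text)]
    if withcase:
        return withcase
    low = text.lower()
    return [c for c in candidates if c.lower().startswith(low)]
```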
2653 2653
2654 2654 def completions(self, text: str, offset: int)->Iterator[Completion]:
2655 2655 """
2656 2656 Returns an iterator over the possible completions
2657 2657
2658 2658 .. warning::
2659 2659
2660 2660 Unstable
2661 2661
2662 2662 This function is unstable, API may change without warning.
2663 2663 It will also raise unless used in a proper context manager.
2664 2664
2665 2665 Parameters
2666 2666 ----------
2667 2667 text : str
2668 2668 Full text of the current input, multi line string.
2669 2669 offset : int
2670 2670 Integer representing the position of the cursor in ``text``. Offset
2671 2671 is 0-based indexed.
2672 2672
2673 2673 Yields
2674 2674 ------
2675 2675 Completion
2676 2676
2677 2677 Notes
2678 2678 -----
2679 2679 The cursor on a text can either be seen as being "in between"
2680 2680 characters or "On" a character depending on the interface visible to
2681 2681 the user. For consistency, the cursor being "in between" characters X
2682 2682 and Y is equivalent to the cursor being "on" character Y, that is to say
2683 2683 the character the cursor is on is considered as being after the cursor.
2684 2684
2685 2685 Combining characters may span more than one position in the
2686 2686 text.
2687 2687
2688 2688 .. note::
2689 2689
2690 2690 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2691 2691 fake Completion token to distinguish completions returned by Jedi
2692 2692 from usual IPython completion.
2693 2693
2694 2694 .. note::
2695 2695
2696 2696 Completions are not completely deduplicated yet. If identical
2697 2697 completions are coming from different sources this function does not
2698 2698 ensure that each completion object will only be present once.
2699 2699 """
2700 2700 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2701 2701 "It may change without warnings. "
2702 2702 "Use in corresponding context manager.",
2703 2703 category=ProvisionalCompleterWarning, stacklevel=2)
2704 2704
2705 2705 seen = set()
2706 2706 profiler:Optional[cProfile.Profile]
2707 2707 try:
2708 2708 if self.profile_completions:
2709 2709 import cProfile
2710 2710 profiler = cProfile.Profile()
2711 2711 profiler.enable()
2712 2712 else:
2713 2713 profiler = None
2714 2714
2715 2715 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2716 2716 if c and (c in seen):
2717 2717 continue
2718 2718 yield c
2719 2719 seen.add(c)
2720 2720 except KeyboardInterrupt:
2721 2721 """if completions take too long and users send keyboard interrupt,
2722 2722 do not crash and return ASAP. """
2723 2723 pass
2724 2724 finally:
2725 2725 if profiler is not None:
2726 2726 profiler.disable()
2727 2727 ensure_dir_exists(self.profiler_output_dir)
2728 2728 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2729 2729 print("Writing profiler output to", output_path)
2730 2730 profiler.dump_stats(output_path)
2731 2731
2732 2732 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2733 2733 """
2734 2734 Core completion module. Same signature as :any:`completions`, with the
2735 2735 extra `timeout` parameter (in seconds).
2736 2736
2737 2737 Computing jedi's completion ``.type`` can be quite expensive (it is a
2738 2738 lazy property) and can require some warm-up, more warm-up than just
2739 2739 computing the ``name`` of a completion. The warm-up can be:
2740 2740
2741 2741 - Long warm-up the first time a module is encountered after
2742 2742 install/update: actually build parse/inference tree.
2743 2743
2744 2744 - first time the module is encountered in a session: load tree from
2745 2745 disk.
2746 2746
2747 2747 We don't want to block completions for tens of seconds so we give the
2748 2748 completer a "budget" of ``_timeout`` seconds per invocation to compute
2749 2749 completion types; the completions that have not yet been computed will
2750 2750 be marked as "unknown" and will have a chance to be computed next round
2751 2751 as things get cached.
2752 2752
2753 2753 Keep in mind that Jedi is not the only thing treating the completion so
2754 2754 keep the timeout short-ish: if we take more than 0.3 seconds we still
2755 2755 have lots of processing to do.
2756 2756
2757 2757 """
2758 2758 deadline = time.monotonic() + _timeout
2759 2759
2760 2760 before = full_text[:offset]
2761 2761 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2762 2762
2763 2763 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2764 2764
2765 2765 results = self._complete(
2766 2766 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2767 2767 )
2768 2768 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2769 2769 identifier: result
2770 2770 for identifier, result in results.items()
2771 2771 if identifier != jedi_matcher_id
2772 2772 }
2773 2773
2774 2774 jedi_matches = (
2775 2775 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2776 2776 if jedi_matcher_id in results
2777 2777 else ()
2778 2778 )
2779 2779
2780 2780 iter_jm = iter(jedi_matches)
2781 2781 if _timeout:
2782 2782 for jm in iter_jm:
2783 2783 try:
2784 2784 type_ = jm.type
2785 2785 except Exception:
2786 2786 if self.debug:
2787 2787 print("Error in Jedi getting type of ", jm)
2788 2788 type_ = None
2789 2789 delta = len(jm.name_with_symbols) - len(jm.complete)
2790 2790 if type_ == 'function':
2791 2791 signature = _make_signature(jm)
2792 2792 else:
2793 2793 signature = ''
2794 2794 yield Completion(start=offset - delta,
2795 2795 end=offset,
2796 2796 text=jm.name_with_symbols,
2797 2797 type=type_,
2798 2798 signature=signature,
2799 2799 _origin='jedi')
2800 2800
2801 2801 if time.monotonic() > deadline:
2802 2802 break
2803 2803
2804 2804 for jm in iter_jm:
2805 2805 delta = len(jm.name_with_symbols) - len(jm.complete)
2806 2806 yield Completion(
2807 2807 start=offset - delta,
2808 2808 end=offset,
2809 2809 text=jm.name_with_symbols,
2810 2810 type=_UNKNOWN_TYPE, # don't compute type for speed
2811 2811 _origin="jedi",
2812 2812 signature="",
2813 2813 )
2814 2814
2815 2815 # TODO:
2816 2816 # Suppress this, right now just for debug.
2817 2817 if jedi_matches and non_jedi_results and self.debug:
2818 2818 some_start_offset = before.rfind(
2819 2819 next(iter(non_jedi_results.values()))["matched_fragment"]
2820 2820 )
2821 2821 yield Completion(
2822 2822 start=some_start_offset,
2823 2823 end=offset,
2824 2824 text="--jedi/ipython--",
2825 2825 _origin="debug",
2826 2826 type="none",
2827 2827 signature="",
2828 2828 )
2829 2829
2830 2830 ordered = []
2831 2831 sortable = []
2832 2832
2833 2833 for origin, result in non_jedi_results.items():
2834 2834 matched_text = result["matched_fragment"]
2835 2835 start_offset = before.rfind(matched_text)
2836 2836 is_ordered = result.get("ordered", False)
2837 2837 container = ordered if is_ordered else sortable
2838 2838
2839 2839 # I'm unsure if this is always true, so let's assert and see if it
2840 2840 # crashes
2841 2841 assert before.endswith(matched_text)
2842 2842
2843 2843 for simple_completion in result["completions"]:
2844 2844 completion = Completion(
2845 2845 start=start_offset,
2846 2846 end=offset,
2847 2847 text=simple_completion.text,
2848 2848 _origin=origin,
2849 2849 signature="",
2850 2850 type=simple_completion.type or _UNKNOWN_TYPE,
2851 2851 )
2852 2852 container.append(completion)
2853 2853
2854 2854 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2855 2855 :MATCHES_LIMIT
2856 2856 ]
2857 2857
2858 2858 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2859 2859 """Find completions for the given text and line context.
2860 2860
2861 2861 Note that both the text and the line_buffer are optional, but at least
2862 2862 one of them must be given.
2863 2863
2864 2864 Parameters
2865 2865 ----------
2866 2866 text : string, optional
2867 2867 Text to perform the completion on. If not given, the line buffer
2868 2868 is split using the instance's CompletionSplitter object.
2869 2869 line_buffer : string, optional
2870 2870 If not given, the completer attempts to obtain the current line
2871 2871 buffer via readline. This keyword allows clients which are
2872 2872 requesting text completions in non-readline contexts to inform
2873 2873 the completer of the entire text.
2874 2874 cursor_pos : int, optional
2875 2875 Index of the cursor in the full line buffer. Should be provided by
2876 2876 remote frontends where the kernel has no access to frontend state.
2877 2877
2878 2878 Returns
2879 2879 -------
2880 2880 Tuple of two items:
2881 2881 text : str
2882 2882 Text that was actually used in the completion.
2883 2883 matches : list
2884 2884 A list of completion matches.
2885 2885
2886 2886 Notes
2887 2887 -----
2888 2888 This API is likely to be deprecated and replaced by
2889 2889 :any:`IPCompleter.completions` in the future.
2890 2890
2891 2891 """
2892 2892 warnings.warn('`Completer.complete` is pending deprecation since '
2893 2893 'IPython 6.0 and will be replaced by `Completer.completions`.',
2894 2894 PendingDeprecationWarning)
2895 2895 # potential todo, FOLD the 3rd throw away argument of _complete
2896 2896 # into the first two.
2897 2897 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2898 2898 # TODO: should we deprecate now, or does it stay?
2899 2899
2900 2900 results = self._complete(
2901 2901 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2902 2902 )
2903 2903
2904 2904 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2905 2905
2906 2906 return self._arrange_and_extract(
2907 2907 results,
2908 2908 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
2909 2909 skip_matchers={jedi_matcher_id},
2910 2910 # this API does not support different start/end positions (fragments of token).
2911 2911 abort_if_offset_changes=True,
2912 2912 )
2913 2913
2914 2914 def _arrange_and_extract(
2915 2915 self,
2916 2916 results: Dict[str, MatcherResult],
2917 2917 skip_matchers: Set[str],
2918 2918 abort_if_offset_changes: bool,
2919 2919 ):
2920 2920
2921 2921 sortable = []
2922 2922 ordered = []
2923 2923 most_recent_fragment = None
2924 2924 for identifier, result in results.items():
2925 2925 if identifier in skip_matchers:
2926 2926 continue
2927 2927 if not result["completions"]:
2928 2928 continue
2929 2929 if not most_recent_fragment:
2930 2930 most_recent_fragment = result["matched_fragment"]
2931 2931 if (
2932 2932 abort_if_offset_changes
2933 2933 and result["matched_fragment"] != most_recent_fragment
2934 2934 ):
2935 2935 break
2936 2936 if result.get("ordered", False):
2937 2937 ordered.extend(result["completions"])
2938 2938 else:
2939 2939 sortable.extend(result["completions"])
2940 2940
2941 2941 if not most_recent_fragment:
2942 2942 most_recent_fragment = "" # to satisfy typechecker (and just in case)
2943 2943
2944 2944 return most_recent_fragment, [
2945 2945 m.text for m in self._deduplicate(ordered + self._sort(sortable))
2946 2946 ]
2947 2947
2948 2948 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2949 2949 full_text=None) -> _CompleteResult:
2950 2950 """
2951 2951 Like complete but can also return raw jedi completions as well as the
2952 2952 origin of the completion text. This could (and should) be made much
2953 2953 cleaner but that will be simpler once we drop the old (and stateful)
2954 2954 :any:`complete` API.
2955 2955
2956 2956 With the current provisional API, cursor_pos acts both (depending on the
2957 2957 caller) as the offset in the ``text`` or ``line_buffer``, or as the
2958 2958 ``column`` when passing multiline strings. This could/should be renamed
2959 2959 but would add extra noise.
2960 2960
2961 2961 Parameters
2962 2962 ----------
2963 2963 cursor_line
2964 2964 Index of the line the cursor is on. 0 indexed.
2965 2965 cursor_pos
2966 2966 Position of the cursor in the current line/line_buffer/text. 0
2967 2967 indexed.
2968 2968 line_buffer : optional, str
2969 2969 The current line the cursor is in, this is mostly due to legacy
2970 2970 reasons: readline could only give us the single current line.
2971 2971 Prefer `full_text`.
2972 2972 text : str
2973 2973 The current "token" the cursor is in, mostly also for historical
2974 2974 reasons, as the completer would trigger only after the current line
2975 2975 was parsed.
2976 2976 full_text : str
2977 2977 Full text of the current cell.
2978 2978
2979 2979 Returns
2980 2980 -------
2981 2981 An ordered dictionary where keys are identifiers of completion
2982 2982 matchers and values are ``MatcherResult``s.
2983 2983 """
2984 2984
2985 2985 # if the cursor position isn't given, the only sane assumption we can
2986 2986 # make is that it's at the end of the line (the common case)
2987 2987 if cursor_pos is None:
2988 2988 cursor_pos = len(line_buffer) if text is None else len(text)
2989 2989
2990 2990 if self.use_main_ns:
2991 2991 self.namespace = __main__.__dict__
2992 2992
2993 2993 # if text is either None or an empty string, rely on the line buffer
2994 2994 if (not line_buffer) and full_text:
2995 2995 line_buffer = full_text.split('\n')[cursor_line]
2996 2996 if not text: # issue #11508: check line_buffer before calling split_line
2997 2997 text = (
2998 2998 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
2999 2999 )
3000 3000
3001 3001 # If no line buffer is given, assume the input text is all there was
3002 3002 if line_buffer is None:
3003 3003 line_buffer = text
3004 3004
3005 3005 # deprecated - do not use `line_buffer` in new code.
3006 3006 self.line_buffer = line_buffer
3007 3007 self.text_until_cursor = self.line_buffer[:cursor_pos]
3008 3008
3009 3009 if not full_text:
3010 3010 full_text = line_buffer
3011 3011
3012 3012 context = CompletionContext(
3013 3013 full_text=full_text,
3014 3014 cursor_position=cursor_pos,
3015 3015 cursor_line=cursor_line,
3016 3016 token=text,
3017 3017 limit=MATCHES_LIMIT,
3018 3018 )
3019 3019
3020 3020 # Start with a clean slate of completions
3021 3021 results = {}
3022 3022
3023 3023 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3024 3024
3025 3025 suppressed_matchers = set()
3026 3026
3027 3027 matchers = {
3028 3028 _get_matcher_id(matcher): matcher
3029 3029 for matcher in sorted(
3030 3030 self.matchers, key=_get_matcher_priority, reverse=True
3031 3031 )
3032 3032 }
3033 3033
3034 3034 for matcher_id, matcher in matchers.items():
3035 3035 api_version = _get_matcher_api_version(matcher)
3036 3036 matcher_id = _get_matcher_id(matcher)
3037 3037
3038 3038 if matcher_id in self.disable_matchers:
3039 3039 continue
3040 3040
3041 3041 if matcher_id in results:
3042 3042 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3043 3043
3044 3044 if matcher_id in suppressed_matchers:
3045 3045 continue
3046 3046
3047 3047 try:
3048 3048 if api_version == 1:
3049 3049 result = _convert_matcher_v1_result_to_v2(
3050 3050 matcher(text), type=_UNKNOWN_TYPE
3051 3051 )
3052 3052 elif api_version == 2:
3053 3053 result = cast(MatcherAPIv2, matcher)(context)
3054 3054 else:
3055 3055 raise ValueError(f"Unsupported API version {api_version}")
3056 3056 except:
3057 3057 # Show the ugly traceback if the matcher causes an
3058 3058 # exception, but do NOT crash the kernel!
3059 3059 sys.excepthook(*sys.exc_info())
3060 3060 continue
3061 3061
3062 3062 # set default value for matched fragment if suffix was not selected.
3063 3063 result["matched_fragment"] = result.get("matched_fragment", context.token)
3064 3064
3065 3065 if not suppressed_matchers:
3066 3066 suppression_recommended = result.get("suppress", False)
3067 3067
3068 3068 suppression_config = (
3069 3069 self.suppress_competing_matchers.get(matcher_id, None)
3070 3070 if isinstance(self.suppress_competing_matchers, dict)
3071 3071 else self.suppress_competing_matchers
3072 3072 )
3073 3073 should_suppress = (
3074 3074 (suppression_config is True)
3075 3075 or (suppression_recommended and (suppression_config is not False))
3076 3076 ) and has_any_completions(result)
3077 3077
3078 3078 if should_suppress:
3079 3079 suppression_exceptions = result.get("do_not_suppress", set())
3080 3080 try:
3081 3081 to_suppress = set(suppression_recommended)
3082 3082 except TypeError:
3083 3083 to_suppress = set(matchers)
3084 3084 suppressed_matchers = to_suppress - suppression_exceptions
3085 3085
3086 3086 new_results = {}
3087 3087 for previous_matcher_id, previous_result in results.items():
3088 3088 if previous_matcher_id not in suppressed_matchers:
3089 3089 new_results[previous_matcher_id] = previous_result
3090 3090 results = new_results
3091 3091
3092 3092 results[matcher_id] = result
3093 3093
3094 3094 _, matches = self._arrange_and_extract(
3095 3095 results,
3096 3096 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3097 3097 # if it was omission, we can remove the filtering step, otherwise remove this comment.
3098 3098 skip_matchers={jedi_matcher_id},
3099 3099 abort_if_offset_changes=False,
3100 3100 )
3101 3101
3102 3102 # populate legacy stateful API
3103 3103 self.matches = matches
3104 3104
3105 3105 return results
3106 3106
3107 3107 @staticmethod
3108 3108 def _deduplicate(
3109 3109 matches: Sequence[SimpleCompletion],
3110 3110 ) -> Iterable[SimpleCompletion]:
3111 3111 filtered_matches = {}
3112 3112 for match in matches:
3113 3113 text = match.text
3114 3114 if (
3115 3115 text not in filtered_matches
3116 3116 or filtered_matches[text].type == _UNKNOWN_TYPE
3117 3117 ):
3118 3118 filtered_matches[text] = match
3119 3119
3120 3120 return filtered_matches.values()
3121 3121
3122 3122 @staticmethod
3123 3123 def _sort(matches: Sequence[SimpleCompletion]):
3124 3124 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3125 3125
3126 3126 @context_matcher()
3127 3127 def fwd_unicode_matcher(self, context: CompletionContext):
3128 3128 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3129 3129 # TODO: use `context.limit` to terminate early once we matched the maximum
3130 3130 # number that will be used downstream; can be added as an optional to
3131 3131 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3132 3132 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3133 3133 return _convert_matcher_v1_result_to_v2(
3134 3134 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3135 3135 )
3136 3136
3137 3137 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3138 3138 """
3139 3139 Forward match a string starting with a backslash with a list of
3140 3140 potential Unicode completions.
3141 3141
3142 3142 Will compute list of Unicode character names on first call and cache it.
3143 3143
3144 3144 .. deprecated:: 8.6
3145 3145 You can use :meth:`fwd_unicode_matcher` instead.
3146 3146
3147 3147 Returns
3148 3148 -------
3149 3149 A tuple with:
3150 3150 - matched text (empty if no matches)
3151 3151 - list of potential completions (empty tuple if no matches)
3152 3152 """
3153 3153 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3154 3154 # We could do a faster match using a Trie.
3155 3155
3156 3156 # Using pygtrie the following seems to work:
3157 3157
3158 3158 # s = PrefixSet()
3159 3159
3160 3160 # for c in range(0,0x10FFFF + 1):
3161 3161 # try:
3162 3162 # s.add(unicodedata.name(chr(c)))
3163 3163 # except ValueError:
3164 3164 # pass
3165 3165 # [''.join(k) for k in s.iter(prefix)]
3166 3166
3167 3167 # But this needs to be timed and adds an extra dependency.
3168 3168
3169 3169 slashpos = text.rfind('\\')
3170 3170 # if text starts with slash
3171 3171 if slashpos > -1:
3172 3172 # PERF: It's important that we don't access self._unicode_names
3173 3173 # until we're inside this if-block. _unicode_names is lazily
3174 3174 # initialized, and it takes a user-noticeable amount of time to
3175 3175 # initialize it, so we don't want to initialize it unless we're
3176 3176 # actually going to use it.
3177 3177 s = text[slashpos + 1 :]
3178 3178 sup = s.upper()
3179 3179 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3180 3180 if candidates:
3181 3181 return s, candidates
3182 3182 candidates = [x for x in self.unicode_names if sup in x]
3183 3183 if candidates:
3184 3184 return s, candidates
3185 3185 splitsup = sup.split(" ")
3186 3186 candidates = [
3187 3187 x for x in self.unicode_names if all(u in x for u in splitsup)
3188 3188 ]
3189 3189 if candidates:
3190 3190 return s, candidates
3191 3191
3192 3192 return "", ()
3193 3193
3194 3194 # if text does not start with slash
3195 3195 else:
3196 3196 return '', ()
3197 3197
3198 3198 @property
3199 3199 def unicode_names(self) -> List[str]:
3200 3200 """List of names of unicode code points that can be completed.
3201 3201
3202 3202 The list is lazily initialized on first access.
3203 3203 """
3204 3204 if self._unicode_names is None:
3205 3205 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3212 3212
3213 3213 return self._unicode_names
3214 3214
3215 3215 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3216 3216 names = []
3217 3217 for start,stop in ranges:
3218 3218 for c in range(start, stop):
3219 3219 try:
3220 3220 names.append(unicodedata.name(chr(c)))
3221 3221 except ValueError:
3222 3222 pass
3223 3223 return names
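The range-driven name computation above is self-contained enough to demonstrate directly; `unicode_names_in` is a hypothetical name for a function with the same body as `_unicode_name_compute`:

```python
import unicodedata

def unicode_names_in(ranges):
    """Collect unicode character names for the given (start, stop) ranges."""
    names = []
    for start, stop in ranges:
        for c in range(start, stop):
            try:
                names.append(unicodedata.name(chr(c)))
            except ValueError:
                # unassigned code points have no name
                pass
    return names
```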
@@ -1,525 +1,524 b''
1 1 from typing import Callable, Set, Tuple, NamedTuple, Literal, Union, TYPE_CHECKING
2 2 import collections
3 3 import sys
4 4 import ast
5 5 from functools import cached_property
6 6 from dataclasses import dataclass, field
7 7
8 8 from IPython.utils.docs import GENERATING_DOCUMENTATION
9 9
10 10
11 11 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
12 12 from typing_extensions import Protocol
13 13 else:
14 14 # do not require at runtime
15 15 Protocol = object # requires Python >=3.8
16 16
17 17
18 18 class HasGetItem(Protocol):
19 19 def __getitem__(self, key) -> None:
20 20 ...
21 21
22 22
23 23 class InstancesHaveGetItem(Protocol):
24 24 def __call__(self) -> HasGetItem:
25 25 ...
26 26
27 27
28 28 class HasGetAttr(Protocol):
29 29 def __getattr__(self, key) -> None:
30 30 ...
31 31
32 32
33 33 class DoesNotHaveGetAttr(Protocol):
34 34 pass
35 35
36 36
37 37 # By default `__getattr__` is not explicitly implemented on most objects
38 38 MayHaveGetattr = Union[HasGetAttr, DoesNotHaveGetAttr]
39 39
40 40
41 41 def unbind_method(func: Callable) -> Union[Callable, None]:
42 42 """Get unbound method for given bound method.
43 43
44 44 Returns None if cannot get unbound method."""
45 45 owner = getattr(func, "__self__", None)
46 46 owner_class = type(owner)
47 47 name = getattr(func, "__name__", None)
48 48 instance_dict_overrides = getattr(owner, "__dict__", None)
49 49 if (
50 50 owner is not None
51 51 and name
52 52 and (
53 53 not instance_dict_overrides
54 54 or (instance_dict_overrides and name not in instance_dict_overrides)
55 55 )
56 56 ):
57 57 return getattr(owner_class, name)
58 58
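The unbinding logic above can be exercised on a small class; this is a sketch reproducing `unbind_method`'s behavior, with `Greeter` as a made-up example class:

```python
def unbind_method(func):
    """Return the class-level function behind a bound method, or None."""
    owner = getattr(func, "__self__", None)
    name = getattr(func, "__name__", None)
    overrides = getattr(owner, "__dict__", None)
    # only unbind if the instance does not shadow the method in its own __dict__
    if owner is not None and name and (not overrides or name not in overrides):
        return getattr(type(owner), name)

class Greeter:
    def hello(self):
        return "hi"

g = Greeter()
```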
59 59
60 60 @dataclass
61 61 class EvaluationPolicy:
62 62 allow_locals_access: bool = False
63 63 allow_globals_access: bool = False
64 64 allow_item_access: bool = False
65 65 allow_attr_access: bool = False
66 66 allow_builtins_access: bool = False
67 67 allow_any_calls: bool = False
68 68 allowed_calls: Set[Callable] = field(default_factory=set)
69 69
70 70 def can_get_item(self, value, item):
71 71 return self.allow_item_access
72 72
73 73 def can_get_attr(self, value, attr):
74 74 return self.allow_attr_access
75 75
76 76 def can_call(self, func):
77 77 if self.allow_any_calls:
78 78 return True
79 79
80 80 if func in self.allowed_calls:
81 81 return True
82 82
83 83 owner_method = unbind_method(func)
84 84 if owner_method and owner_method in self.allowed_calls:
85 85 return True
86 86
87 87
88 88 def has_original_dunder_external(
89 89 value,
90 90 module_name,
91 91 access_path,
92 92 method_name,
93 93 ):
94 94 try:
95 95 if module_name not in sys.modules:
96 96 return False
97 97 member_type = sys.modules[module_name]
98 98 for attr in access_path:
99 99 member_type = getattr(member_type, attr)
100 100 value_type = type(value)
101 101 if type(value) == member_type:
102 102 return True
103 103 if isinstance(value, member_type):
104 104 method = getattr(value_type, method_name, None)
105 105 member_method = getattr(member_type, method_name, None)
106 106 if member_method == method:
107 107 return True
108 108 except (AttributeError, KeyError):
109 109 return False
110 110
111 111
112 112 def has_original_dunder(
113 113 value, allowed_types, allowed_methods, allowed_external, method_name
114 114 ):
115 115 # note: Python ignores `__getattr__`/`__getitem__` on instances,
116 116 # we only need to check at class level
117 117 value_type = type(value)
118 118
119 119 # strict type check passes β†’ no need to check method
120 120 if value_type in allowed_types:
121 121 return True
122 122
123 123 method = getattr(value_type, method_name, None)
124 124
125 125 if not method:
126 126 return None
127 127
128 128 if method in allowed_methods:
129 129 return True
130 130
131 131 for module_name, *access_path in allowed_external:
132 132 if has_original_dunder_external(value, module_name, access_path, method_name):
133 133 return True
134 134
135 135 return False
136 136
137 137
138 138 @dataclass
139 139 class SelectivePolicy(EvaluationPolicy):
140 140 allowed_getitem: Set[HasGetItem] = field(default_factory=set)
141 141 allowed_getitem_external: Set[Tuple[str, ...]] = field(default_factory=set)
142 142 allowed_getattr: Set[MayHaveGetattr] = field(default_factory=set)
143 143 allowed_getattr_external: Set[Tuple[str, ...]] = field(default_factory=set)
144 144
145 145 def can_get_attr(self, value, attr):
146 146 has_original_attribute = has_original_dunder(
147 147 value,
148 148 allowed_types=self.allowed_getattr,
149 149 allowed_methods=self._getattribute_methods,
150 150 allowed_external=self.allowed_getattr_external,
151 151 method_name="__getattribute__",
152 152 )
153 153 has_original_attr = has_original_dunder(
154 154 value,
155 155 allowed_types=self.allowed_getattr,
156 156 allowed_methods=self._getattr_methods,
157 157 allowed_external=self.allowed_getattr_external,
158 158 method_name="__getattr__",
159 159 )
160 160 # Many objects do not have `__getattr__`, this is fine
161 161 if has_original_attr is None and has_original_attribute:
162 162 return True
163 163
164 164 # Accept objects without modifications to `__getattr__` and `__getattribute__`
165 165 return has_original_attr and has_original_attribute
166 166
167 167 def get_attr(self, value, attr):
168 168 if self.can_get_attr(value, attr):
169 169 return getattr(value, attr)
170 170
171 171 def can_get_item(self, value, item):
172 172 """Allow accessing `__getiitem__` of allow-listed instances unless it was not modified."""
173 173 return has_original_dunder(
174 174 value,
175 175 allowed_types=self.allowed_getitem,
176 176 allowed_methods=self._getitem_methods,
177 177 allowed_external=self.allowed_getitem_external,
178 178 method_name="__getitem__",
179 179 )
180 180
181 181 @cached_property
182 182 def _getitem_methods(self) -> Set[Callable]:
183 183 return self._safe_get_methods(self.allowed_getitem, "__getitem__")
184 184
185 185 @cached_property
186 186 def _getattr_methods(self) -> Set[Callable]:
187 187 return self._safe_get_methods(self.allowed_getattr, "__getattr__")
188 188
189 189 @cached_property
190 190 def _getattribute_methods(self) -> Set[Callable]:
191 191 return self._safe_get_methods(self.allowed_getattr, "__getattribute__")
192 192
193 193 def _safe_get_methods(self, classes, name) -> Set[Callable]:
194 194 return {
195 195 method
196 196 for class_ in classes
197 197 for method in [getattr(class_, name, None)]
198 198 if method
199 199 }
200 200
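For context, the `has_original_dunder` check that `can_get_attr` relies on above can be sketched in isolation. This is a simplified illustration, not the module's actual implementation; the real check also consults the `allowed_external` module paths:

```python
def has_original_dunder_sketch(value, allowed_types, allowed_methods, method_name):
    """Simplified sketch: accept a dunder only if it is the untouched
    method inherited from an allow-listed class."""
    # Dunders are looked up on the type, not the instance.
    method = getattr(type(value), method_name, None)
    if method is None:
        return None  # the object has no such dunder at all
    if type(value) in allowed_types:
        return True  # exact allow-listed type
    # Subclasses pass only if they did not override the method.
    return method in allowed_methods
```

A subclass that overrides `__getitem__` is rejected even though its base type is allow-listed, which is the point of comparing against the original methods rather than just the types.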
201 201
202 202 class DummyNamedTuple(NamedTuple):
203 203 pass
204 204
205 205
206 206 class EvaluationContext(NamedTuple):
207 207 locals_: dict
208 208 globals_: dict
209 209 evaluation: Literal[
210 "forbidden", "minimal", "limitted", "unsafe", "dangerous"
210 "forbidden", "minimal", "limited", "unsafe", "dangerous"
211 211 ] = "forbidden"
212 212 in_subscript: bool = False
213 213
214 214
215 215 class IdentitySubscript:
216 216 def __getitem__(self, key):
217 217 return key
218 218
219 219
220 220 IDENTITY_SUBSCRIPT = IdentitySubscript()
221 221 SUBSCRIPT_MARKER = "__SUBSCRIPT_SENTINEL__"
222 222
223 223
224 224 class GuardRejection(ValueError):
225 225 pass
226 226
227 227
228 228 def guarded_eval(code: str, context: EvaluationContext):
229 229 locals_ = context.locals_
230 230
231 231 if context.evaluation == "forbidden":
232 232 raise GuardRejection("Forbidden mode")
233 233
234 234 # note: not using `ast.literal_eval` as it does not implement
235 235 # getitem at all, for example it fails on simple `[0][1]`
236 236
237 237 if context.in_subscript:
238 238 # syntactic sugar for ellipsis (:) is only available in subscripts
239 239 # so we need to trick the ast parser into thinking that we have
240 240 # a subscript, but we need to be able to later recognise that we did
241 241 # it so we can ignore the actual __getitem__ operation
242 242 if not code:
243 243 return tuple()
244 244 locals_ = locals_.copy()
245 245 locals_[SUBSCRIPT_MARKER] = IDENTITY_SUBSCRIPT
246 246 code = SUBSCRIPT_MARKER + "[" + code + "]"
247 247 context = EvaluationContext(**{**context._asdict(), **{"locals_": locals_}})
248 248
249 249 if context.evaluation == "dangerous":
250 250 return eval(code, context.globals_, context.locals_)
251 251
252 252 expression = ast.parse(code, mode="eval")
253 253
254 254 return eval_node(expression, context)
255 255
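The subscript-sentinel trick used above can be demonstrated in isolation: slice syntax like `1:5` only parses inside a subscript, so the key expression is wrapped in a dummy subscript whose `__getitem__` hands the key straight back (plain `eval` is used here for brevity; the module routes through `eval_node` instead):

```python
import ast

class IdentitySubscript:
    def __getitem__(self, key):
        return key

def parse_subscript_key(code: str):
    # Wrap the key expression in a sentinel subscript so the parser
    # accepts slice syntax, then let IdentitySubscript return the key.
    wrapped = "__SENTINEL__[" + code + "]"
    tree = ast.parse(wrapped, mode="eval")
    return eval(compile(tree, "<key>", "eval"), {"__SENTINEL__": IdentitySubscript()})

print(parse_subscript_key("1:5"))     # slice(1, 5, None)
print(parse_subscript_key("1:5, 2"))  # (slice(1, 5, None), 2)
```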
256 256
257 257 def eval_node(node: Union[ast.AST, None], context: EvaluationContext):
258 258 """
259 259 Evaluate AST node in provided context.
260 260
261 261 Applies evaluation restrictions defined in the context.
262 262
263 Currently does not support evaluation of functions with arguments.
263 Currently does not support evaluation of functions with keyword arguments.
264 264
265 265 Does not evaluate actions which always have side effects:
266 266 - class definitions (``class sth: ...``)
267 267 - function definitions (``def sth: ...``)
268 268 - variable assignments (``x = 1``)
269 - augumented assignments (``x += 1``)
269 - augmented assignments (``x += 1``)
270 270 - deletions (``del x``)
271 271
272 272 Does not evaluate operations which do not return values:
273 273 - assertions (``assert x``)
274 274 - pass (``pass``)
275 275 - imports (``import x``)
276 276 - control flow
277 - conditionals (``if x:``) except for terenary IfExp (``a if x else b``)
277 - conditionals (``if x:``) except for ternary IfExp (``a if x else b``)
278 278 - loops (``for`` and ``while``)
279 279 - exception handling
280 280
281 281 The purpose of this function is to guard against unwanted side-effects;
282 282 it does not give guarantees on protection from malicious code execution.
283 283 """
284 284 policy = EVALUATION_POLICIES[context.evaluation]
285 285 if node is None:
286 286 return None
287 287 if isinstance(node, ast.Expression):
288 288 return eval_node(node.body, context)
289 289 if isinstance(node, ast.BinOp):
290 290 # TODO: add guards
291 291 left = eval_node(node.left, context)
292 292 right = eval_node(node.right, context)
293 293 if isinstance(node.op, ast.Add):
294 294 return left + right
295 295 if isinstance(node.op, ast.Sub):
296 296 return left - right
297 297 if isinstance(node.op, ast.Mult):
298 298 return left * right
299 299 if isinstance(node.op, ast.Div):
300 300 return left / right
301 301 if isinstance(node.op, ast.FloorDiv):
302 302 return left // right
303 303 if isinstance(node.op, ast.Mod):
304 304 return left % right
305 305 if isinstance(node.op, ast.Pow):
306 306 return left**right
307 307 if isinstance(node.op, ast.LShift):
308 308 return left << right
309 309 if isinstance(node.op, ast.RShift):
310 310 return left >> right
311 311 if isinstance(node.op, ast.BitOr):
312 312 return left | right
313 313 if isinstance(node.op, ast.BitXor):
314 314 return left ^ right
315 315 if isinstance(node.op, ast.BitAnd):
316 316 return left & right
317 317 if isinstance(node.op, ast.MatMult):
318 318 return left @ right
319 319 if isinstance(node, ast.Constant):
320 320 return node.value
321 321 if isinstance(node, ast.Index):
322 322 return eval_node(node.value, context)
323 323 if isinstance(node, ast.Tuple):
324 324 return tuple(eval_node(e, context) for e in node.elts)
325 325 if isinstance(node, ast.List):
326 326 return [eval_node(e, context) for e in node.elts]
327 327 if isinstance(node, ast.Set):
328 328 return {eval_node(e, context) for e in node.elts}
329 329 if isinstance(node, ast.Dict):
330 330 return dict(
331 331 zip(
332 332 [eval_node(k, context) for k in node.keys],
333 333 [eval_node(v, context) for v in node.values],
334 334 )
335 335 )
336 336 if isinstance(node, ast.Slice):
337 337 return slice(
338 338 eval_node(node.lower, context),
339 339 eval_node(node.upper, context),
340 340 eval_node(node.step, context),
341 341 )
342 342 if isinstance(node, ast.ExtSlice):
343 343 return tuple([eval_node(dim, context) for dim in node.dims])
344 344 if isinstance(node, ast.UnaryOp):
345 345 # TODO: add guards
346 346 value = eval_node(node.operand, context)
347 347 if isinstance(node.op, ast.USub):
348 348 return -value
349 349 if isinstance(node.op, ast.UAdd):
350 350 return +value
351 351 if isinstance(node.op, ast.Invert):
352 352 return ~value
353 353 if isinstance(node.op, ast.Not):
354 354 return not value
355 355 raise ValueError("Unhandled unary operation:", node.op)
356 356 if isinstance(node, ast.Subscript):
357 357 value = eval_node(node.value, context)
358 358 slice_ = eval_node(node.slice, context)
359 359 if policy.can_get_item(value, slice_):
360 360 return value[slice_]
361 361 raise GuardRejection(
362 362 "Subscript access (`__getitem__`) for",
363 363 type(value), # not joined to avoid calling `repr`
364 364 f" not allowed in {context.evaluation} mode",
365 365 )
366 366 if isinstance(node, ast.Name):
367 367 if policy.allow_locals_access and node.id in context.locals_:
368 368 return context.locals_[node.id]
369 369 if policy.allow_globals_access and node.id in context.globals_:
370 370 return context.globals_[node.id]
371 371 if policy.allow_builtins_access and node.id in __builtins__:
372 372 return __builtins__[node.id]
373 373 if not policy.allow_globals_access and not policy.allow_locals_access:
374 374 raise GuardRejection(
375 375 f"Namespace access not allowed in {context.evaluation} mode"
376 376 )
377 377 else:
378 378 raise NameError(f"{node.id} not found in locals nor globals")
379 379 if isinstance(node, ast.Attribute):
380 380 value = eval_node(node.value, context)
381 381 if policy.can_get_attr(value, node.attr):
382 382 return getattr(value, node.attr)
383 383 raise GuardRejection(
384 384 "Attribute access (`__getattr__`) for",
385 385 type(value), # not joined to avoid calling `repr`
386 386 f"not allowed in {context.evaluation} mode",
387 387 )
388 388 if isinstance(node, ast.IfExp):
389 389 test = eval_node(node.test, context)
390 390 if test:
391 391 return eval_node(node.body, context)
392 392 else:
393 393 return eval_node(node.orelse, context)
394 394 if isinstance(node, ast.Call):
395 395 func = eval_node(node.func, context)
396 print(node.keywords)
397 396 if policy.can_call(func) and not node.keywords:
398 397 args = [eval_node(arg, context) for arg in node.args]
399 398 return func(*args)
400 399 raise GuardRejection(
401 400 "Call for",
402 401 func, # not joined to avoid calling `repr`
403 402 f"not allowed in {context.evaluation} mode",
404 403 )
405 404 raise ValueError("Unhandled node", node)
406 405
407 406
408 407 SUPPORTED_EXTERNAL_GETITEM = {
409 408 ("pandas", "core", "indexing", "_iLocIndexer"),
410 409 ("pandas", "core", "indexing", "_LocIndexer"),
411 410 ("pandas", "DataFrame"),
412 411 ("pandas", "Series"),
413 412 ("numpy", "ndarray"),
414 413 ("numpy", "void"),
415 414 }
416 415
417 416 BUILTIN_GETITEM = {
418 417 dict,
419 418 str,
420 419 bytes,
421 420 list,
422 421 tuple,
423 422 collections.defaultdict,
424 423 collections.deque,
425 424 collections.OrderedDict,
426 425 collections.ChainMap,
427 426 collections.UserDict,
428 427 collections.UserList,
429 428 collections.UserString,
430 429 DummyNamedTuple,
431 430 IdentitySubscript,
432 431 }
433 432
434 433
435 434 def _list_methods(cls, source=None):
436 435 """For use on immutable objects or with methods returning a copy"""
437 436 return [getattr(cls, k) for k in (source if source else dir(cls))]
438 437
439 438
440 439 dict_non_mutating_methods = ("copy", "keys", "values", "items")
441 440 list_non_mutating_methods = ("copy", "index", "count")
442 441 set_non_mutating_methods = set(dir(set)) & set(dir(frozenset))
443 442
444 443
445 444 dict_keys = type({}.keys())
446 445 method_descriptor = type(list.copy)
447 446
448 447 ALLOWED_CALLS = {
449 448 bytes,
450 449 *_list_methods(bytes),
451 450 dict,
452 451 *_list_methods(dict, dict_non_mutating_methods),
453 452 dict_keys.isdisjoint,
454 453 list,
455 454 *_list_methods(list, list_non_mutating_methods),
456 455 set,
457 456 *_list_methods(set, set_non_mutating_methods),
458 457 frozenset,
459 458 *_list_methods(frozenset),
460 459 range,
461 460 str,
462 461 *_list_methods(str),
463 462 tuple,
464 463 *_list_methods(tuple),
465 464 collections.deque,
466 465 *_list_methods(collections.deque, list_non_mutating_methods),
467 466 collections.defaultdict,
468 467 *_list_methods(collections.defaultdict, dict_non_mutating_methods),
469 468 collections.OrderedDict,
470 469 *_list_methods(collections.OrderedDict, dict_non_mutating_methods),
471 470 collections.UserDict,
472 471 *_list_methods(collections.UserDict, dict_non_mutating_methods),
473 472 collections.UserList,
474 473 *_list_methods(collections.UserList, list_non_mutating_methods),
475 474 collections.UserString,
476 475 *_list_methods(collections.UserString, dir(str)),
477 476 collections.Counter,
478 477 *_list_methods(collections.Counter, dict_non_mutating_methods),
479 478 collections.Counter.elements,
480 479 collections.Counter.most_common,
481 480 }
482 481
483 482 EVALUATION_POLICIES = {
484 483 "minimal": EvaluationPolicy(
485 484 allow_builtins_access=True,
486 485 allow_locals_access=False,
487 486 allow_globals_access=False,
488 487 allow_item_access=False,
489 488 allow_attr_access=False,
490 489 allowed_calls=set(),
491 490 allow_any_calls=False,
492 491 ),
493 "limitted": SelectivePolicy(
492 "limited": SelectivePolicy(
494 493 # TODO:
495 494 # - should reject binary and unary operations if custom methods would be dispatched
496 495 allowed_getitem=BUILTIN_GETITEM,
497 496 allowed_getitem_external=SUPPORTED_EXTERNAL_GETITEM,
498 497 allowed_getattr={
499 498 *BUILTIN_GETITEM,
500 499 set,
501 500 frozenset,
502 501 object,
503 502 type, # `type` handles a lot of generic cases, e.g. numbers as in `int.real`.
504 503 dict_keys,
505 504 method_descriptor,
506 505 },
507 506 allowed_getattr_external={
508 507 # pandas Series/Frame implements custom `__getattr__`
509 508 ("pandas", "DataFrame"),
510 509 ("pandas", "Series"),
511 510 },
512 511 allow_builtins_access=True,
513 512 allow_locals_access=True,
514 513 allow_globals_access=True,
515 514 allowed_calls=ALLOWED_CALLS,
516 515 ),
517 516 "unsafe": EvaluationPolicy(
518 517 allow_builtins_access=True,
519 518 allow_locals_access=True,
520 519 allow_globals_access=True,
521 520 allow_attr_access=True,
522 521 allow_item_access=True,
523 522 allow_any_calls=True,
524 523 ),
525 524 }
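The overall approach — parse to an AST, then walk only an allow-list of node types and reject everything else — can be illustrated with a stripped-down, self-contained sketch (illustrative only; the names here are not the module's API):

```python
import ast

class Rejected(ValueError):
    """Raised for any expression outside the allowed subset."""

def tiny_guarded_eval(code: str, names: dict):
    """Evaluate a tiny, side-effect-free subset of Python expressions."""
    return _eval(ast.parse(code, mode="eval").body, names)

def _eval(node, names):
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.Name):
        if node.id in names:
            return names[node.id]
        raise NameError(node.id)
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        return _eval(node.left, names) + _eval(node.right, names)
    if isinstance(node, ast.List):
        return [_eval(e, names) for e in node.elts]
    # Calls, attribute access, subscripts, etc. are rejected outright;
    # the real implementation instead consults the active policy.
    raise Rejected(f"Unsupported node: {type(node).__name__}")

print(tiny_guarded_eval("x + 1", {"x": 41}))  # 42
```

Anything not explicitly handled, such as `__import__('os')`, raises `Rejected` before any evaluation happens, which is the same fail-closed posture the policies above encode.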
@@ -1,1740 +1,1740 b''
1 1 # encoding: utf-8
2 2 """Tests for the IPython tab-completion machinery."""
3 3
4 4 # Copyright (c) IPython Development Team.
5 5 # Distributed under the terms of the Modified BSD License.
6 6
7 7 import os
8 8 import pytest
9 9 import sys
10 10 import textwrap
11 11 import unittest
12 12
13 13 from contextlib import contextmanager
14 14
15 15 from traitlets.config.loader import Config
16 16 from IPython import get_ipython
17 17 from IPython.core import completer
18 18 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
19 19 from IPython.utils.generics import complete_object
20 20 from IPython.testing import decorators as dec
21 21
22 22 from IPython.core.completer import (
23 23 Completion,
24 24 provisionalcompleter,
25 25 match_dict_keys,
26 26 _deduplicate_completions,
27 27 _match_number_in_dict_key_prefix,
28 28 completion_matcher,
29 29 SimpleCompletion,
30 30 CompletionContext,
31 31 )
32 32
33 33 # -----------------------------------------------------------------------------
34 34 # Test functions
35 35 # -----------------------------------------------------------------------------
36 36
37 37 def recompute_unicode_ranges():
38 38 """
39 39 Utility to recompute the largest unicode range without any characters.
40 40
41 41 Use to recompute the gap in the global _UNICODE_RANGES of completer.py.
42 42 """
43 43 import itertools
44 44 import unicodedata
45 45 valid = []
46 46 for c in range(0,0x10FFFF + 1):
47 47 try:
48 48 unicodedata.name(chr(c))
49 49 except ValueError:
50 50 continue
51 51 valid.append(c)
52 52
53 53 def ranges(i):
54 54 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
55 55 b = list(b)
56 56 yield b[0][1], b[-1][1]
57 57
58 58 rg = list(ranges(valid))
59 59 lens = []
60 60 gap_lens = []
61 61 pstart, pstop = 0,0
62 62 for start, stop in rg:
63 63 lens.append(stop-start)
64 64 gap_lens.append((start - pstop, hex(pstop), hex(start), f'{round((start - pstop)/0xe01f0*100)}%'))
65 65 pstart, pstop = start, stop
66 66
67 67 return sorted(gap_lens)[-1]
68 68
69 69
70 70
71 71 def test_unicode_range():
72 72 """
73 73 Test that the ranges we test for unicode names give the same number of
74 74 results as testing the full range.
75 75 """
76 76 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
77 77
78 78 expected_list = _unicode_name_compute([(0, 0x110000)])
79 79 test = _unicode_name_compute(_UNICODE_RANGES)
80 80 len_exp = len(expected_list)
81 81 len_test = len(test)
82 82
83 83 # do not inline the len() or on error pytest will try to print the 130 000 +
84 84 # elements.
85 85 message = None
86 86 if len_exp != len_test or len_exp > 131808:
87 87 size, start, stop, prct = recompute_unicode_ranges()
88 88 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
89 89 likely due to a new release of Python. We've found that the biggest gap
90 90 in unicode characters has been reduced in size to {size} characters
91 91 ({prct}), from {start} to {stop}. In completer.py, likely update to
92 92
93 93 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
94 94
95 95 And update the assertion below to use
96 96
97 97 len_exp <= {len_exp}
98 98 """
99 99 assert len_exp == len_test, message
100 100
101 101 # fail if new unicode symbols have been added.
102 102 assert len_exp <= 138552, message
103 103
104 104
105 105 @contextmanager
106 106 def greedy_completion():
107 107 ip = get_ipython()
108 108 greedy_original = ip.Completer.greedy
109 109 try:
110 110 ip.Completer.greedy = True
111 111 yield
112 112 finally:
113 113 ip.Completer.greedy = greedy_original
114 114
115 115
116 116 @contextmanager
117 117 def evaluation_level(evaluation: str):
118 118 ip = get_ipython()
119 119 evaluation_original = ip.Completer.evaluation
120 120 try:
121 121 ip.Completer.evaluation = evaluation
122 122 yield
123 123 finally:
124 124 ip.Completer.evaluation = evaluation_original
125 125
126 126
127 127 @contextmanager
128 128 def custom_matchers(matchers):
129 129 ip = get_ipython()
130 130 try:
131 131 ip.Completer.custom_matchers.extend(matchers)
132 132 yield
133 133 finally:
134 134 ip.Completer.custom_matchers.clear()
135 135
136 136
137 137 def test_protect_filename():
138 138 if sys.platform == "win32":
139 139 pairs = [
140 140 ("abc", "abc"),
141 141 (" abc", '" abc"'),
142 142 ("a bc", '"a bc"'),
143 143 ("a bc", '"a bc"'),
144 144 (" bc", '" bc"'),
145 145 ]
146 146 else:
147 147 pairs = [
148 148 ("abc", "abc"),
149 149 (" abc", r"\ abc"),
150 150 ("a bc", r"a\ bc"),
151 151 ("a bc", r"a\ \ bc"),
152 152 (" bc", r"\ \ bc"),
153 153 # On posix, we also protect parens and other special characters.
154 154 ("a(bc", r"a\(bc"),
155 155 ("a)bc", r"a\)bc"),
156 156 ("a( )bc", r"a\(\ \)bc"),
157 157 ("a[1]bc", r"a\[1\]bc"),
158 158 ("a{1}bc", r"a\{1\}bc"),
159 159 ("a#bc", r"a\#bc"),
160 160 ("a?bc", r"a\?bc"),
161 161 ("a=bc", r"a\=bc"),
162 162 ("a\\bc", r"a\\bc"),
163 163 ("a|bc", r"a\|bc"),
164 164 ("a;bc", r"a\;bc"),
165 165 ("a:bc", r"a\:bc"),
166 166 ("a'bc", r"a\'bc"),
167 167 ("a*bc", r"a\*bc"),
168 168 ('a"bc', r"a\"bc"),
169 169 ("a^bc", r"a\^bc"),
170 170 ("a&bc", r"a\&bc"),
171 171 ]
172 172 # run the actual tests
173 173 for s1, s2 in pairs:
174 174 s1p = completer.protect_filename(s1)
175 175 assert s1p == s2
176 176
177 177
178 178 def check_line_split(splitter, test_specs):
179 179 for part1, part2, split in test_specs:
180 180 cursor_pos = len(part1)
181 181 line = part1 + part2
182 182 out = splitter.split_line(line, cursor_pos)
183 183 assert out == split
184 184
185 185 def test_line_split():
186 186 """Basic line splitter test with default specs."""
187 187 sp = completer.CompletionSplitter()
188 188 # The format of the test specs is: part1, part2, expected answer. Parts 1
189 189 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
190 190 # was at the end of part1. So an empty part2 represents someone hitting
191 191 # tab at the end of the line, the most common case.
192 192 t = [
193 193 ("run some/scrip", "", "some/scrip"),
194 194 ("run scripts/er", "ror.py foo", "scripts/er"),
195 195 ("echo $HOM", "", "HOM"),
196 196 ("print sys.pa", "", "sys.pa"),
197 197 ("print(sys.pa", "", "sys.pa"),
198 198 ("execfile('scripts/er", "", "scripts/er"),
199 199 ("a[x.", "", "x."),
200 200 ("a[x.", "y", "x."),
201 201 ('cd "some_file/', "", "some_file/"),
202 202 ]
203 203 check_line_split(sp, t)
204 204 # Ensure splitting works OK with unicode by re-running the tests with
205 205 # all inputs turned into unicode
206 206 check_line_split(sp, [map(str, p) for p in t])
207 207
208 208
209 209 class NamedInstanceClass:
210 210 instances = {}
211 211
212 212 def __init__(self, name):
213 213 self.instances[name] = self
214 214
215 215 @classmethod
216 216 def _ipython_key_completions_(cls):
217 217 return cls.instances.keys()
218 218
219 219
220 220 class KeyCompletable:
221 221 def __init__(self, things=()):
222 222 self.things = things
223 223
224 224 def _ipython_key_completions_(self):
225 225 return list(self.things)
226 226
227 227
228 228 class TestCompleter(unittest.TestCase):
229 229 def setUp(self):
230 230 """
231 231 We want to silence all PendingDeprecationWarning when testing the completer
232 232 """
233 233 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
234 234 self._assertwarns.__enter__()
235 235
236 236 def tearDown(self):
237 237 try:
238 238 self._assertwarns.__exit__(None, None, None)
239 239 except AssertionError:
240 240 pass
241 241
242 242 def test_custom_completion_error(self):
243 243 """Test that errors from custom attribute completers are silenced."""
244 244 ip = get_ipython()
245 245
246 246 class A:
247 247 pass
248 248
249 249 ip.user_ns["x"] = A()
250 250
251 251 @complete_object.register(A)
252 252 def complete_A(a, existing_completions):
253 253 raise TypeError("this should be silenced")
254 254
255 255 ip.complete("x.")
256 256
257 257 def test_custom_completion_ordering(self):
258 258 """Test that completion matches preserve the order provided by completers."""
259 259 ip = get_ipython()
260 260
261 261 _, matches = ip.complete('in')
262 262 assert matches.index('input') < matches.index('int')
263 263
264 264 def complete_example(a):
265 265 return ['example2', 'example1']
266 266
267 267 ip.Completer.custom_completers.add_re('ex*', complete_example)
268 268 _, matches = ip.complete('ex')
269 269 assert matches.index('example2') < matches.index('example1')
270 270
271 271 def test_unicode_completions(self):
272 272 ip = get_ipython()
273 273 # Some strings that trigger different types of completion. Check them both
274 274 # in str and unicode forms
275 275 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
276 276 for t in s + list(map(str, s)):
277 277 # We don't need to check exact completion values (they may change
278 278 # depending on the state of the namespace, but at least no exceptions
279 279 # should be thrown and the return value should be a pair of text, list
280 280 # values.
281 281 text, matches = ip.complete(t)
282 282 self.assertIsInstance(text, str)
283 283 self.assertIsInstance(matches, list)
284 284
285 285 def test_latex_completions(self):
286 286 from IPython.core.latex_symbols import latex_symbols
287 287 import random
288 288
289 289 ip = get_ipython()
290 290 # Test some random unicode symbols
291 291 keys = random.sample(sorted(latex_symbols), 10)
292 292 for k in keys:
293 293 text, matches = ip.complete(k)
294 294 self.assertEqual(text, k)
295 295 self.assertEqual(matches, [latex_symbols[k]])
296 296 # Test a more complex line
297 297 text, matches = ip.complete("print(\\alpha")
298 298 self.assertEqual(text, "\\alpha")
299 299 self.assertEqual(matches[0], latex_symbols["\\alpha"])
300 300 # Test multiple matching latex symbols
301 301 text, matches = ip.complete("\\al")
302 302 self.assertIn("\\alpha", matches)
303 303 self.assertIn("\\aleph", matches)
304 304
305 305 def test_latex_no_results(self):
306 306 """
307 307 forward latex should really return nothing in either field if nothing is found.
308 308 """
309 309 ip = get_ipython()
310 310 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
311 311 self.assertEqual(text, "")
312 312 self.assertEqual(matches, ())
313 313
314 314 def test_back_latex_completion(self):
315 315 ip = get_ipython()
316 316
317 317 # do not return more than 1 match for \beta, only the latex one.
318 318 name, matches = ip.complete("\\β")
319 319 self.assertEqual(matches, ["\\beta"])
320 320
321 321 def test_back_unicode_completion(self):
322 322 ip = get_ipython()
323 323
324 324 name, matches = ip.complete("\\Ⅴ")
325 325 self.assertEqual(matches, ["\\ROMAN NUMERAL FIVE"])
326 326
327 327 def test_forward_unicode_completion(self):
328 328 ip = get_ipython()
329 329
330 330 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
331 331 self.assertEqual(matches, ["Ⅴ"]) # This is not a V
332 332 self.assertEqual(matches, ["\u2164"]) # same as above but explicit.
333 333
334 334 def test_delim_setting(self):
335 335 sp = completer.CompletionSplitter()
336 336 sp.delims = " "
337 337 self.assertEqual(sp.delims, " ")
338 338 self.assertEqual(sp._delim_expr, r"[\ ]")
339 339
340 340 def test_spaces(self):
341 341 """Test with only spaces as split chars."""
342 342 sp = completer.CompletionSplitter()
343 343 sp.delims = " "
344 344 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
345 345 check_line_split(sp, t)
346 346
347 347 def test_has_open_quotes1(self):
348 348 for s in ["'", "'''", "'hi' '"]:
349 349 self.assertEqual(completer.has_open_quotes(s), "'")
350 350
351 351 def test_has_open_quotes2(self):
352 352 for s in ['"', '"""', '"hi" "']:
353 353 self.assertEqual(completer.has_open_quotes(s), '"')
354 354
355 355 def test_has_open_quotes3(self):
356 356 for s in ["''", "''' '''", "'hi' 'ipython'"]:
357 357 self.assertFalse(completer.has_open_quotes(s))
358 358
359 359 def test_has_open_quotes4(self):
360 360 for s in ['""', '""" """', '"hi" "ipython"']:
361 361 self.assertFalse(completer.has_open_quotes(s))
362 362
363 363 @pytest.mark.xfail(
364 364 sys.platform == "win32", reason="abspath completions fail on Windows"
365 365 )
366 366 def test_abspath_file_completions(self):
367 367 ip = get_ipython()
368 368 with TemporaryDirectory() as tmpdir:
369 369 prefix = os.path.join(tmpdir, "foo")
370 370 suffixes = ["1", "2"]
371 371 names = [prefix + s for s in suffixes]
372 372 for n in names:
373 373 open(n, "w", encoding="utf-8").close()
374 374
375 375 # Check simple completion
376 376 c = ip.complete(prefix)[1]
377 377 self.assertEqual(c, names)
378 378
379 379 # Now check with a function call
380 380 cmd = 'a = f("%s' % prefix
381 381 c = ip.complete(prefix, cmd)[1]
382 382 comp = [prefix + s for s in suffixes]
383 383 self.assertEqual(c, comp)
384 384
385 385 def test_local_file_completions(self):
386 386 ip = get_ipython()
387 387 with TemporaryWorkingDirectory():
388 388 prefix = "./foo"
389 389 suffixes = ["1", "2"]
390 390 names = [prefix + s for s in suffixes]
391 391 for n in names:
392 392 open(n, "w", encoding="utf-8").close()
393 393
394 394 # Check simple completion
395 395 c = ip.complete(prefix)[1]
396 396 self.assertEqual(c, names)
397 397
398 398 # Now check with a function call
399 399 cmd = 'a = f("%s' % prefix
400 400 c = ip.complete(prefix, cmd)[1]
401 401 comp = {prefix + s for s in suffixes}
402 402 self.assertTrue(comp.issubset(set(c)))
403 403
404 404 def test_quoted_file_completions(self):
405 405 ip = get_ipython()
406 406
407 407 def _(text):
408 408 return ip.Completer._complete(
409 409 cursor_line=0, cursor_pos=len(text), full_text=text
410 410 )["IPCompleter.file_matcher"]["completions"]
411 411
412 412 with TemporaryWorkingDirectory():
413 413 name = "foo'bar"
414 414 open(name, "w", encoding="utf-8").close()
415 415
416 416 # Don't escape Windows
417 417 escaped = name if sys.platform == "win32" else "foo\\'bar"
418 418
419 419 # Single quote matches embedded single quote
420 420 c = _("open('foo")[0]
421 421 self.assertEqual(c.text, escaped)
422 422
423 423 # Double quote requires no escape
424 424 c = _('open("foo')[0]
425 425 self.assertEqual(c.text, name)
426 426
427 427 # No quote requires an escape
428 428 c = _("%ls foo")[0]
429 429 self.assertEqual(c.text, escaped)
430 430
431 431 def test_all_completions_dups(self):
432 432 """
433 433 Make sure the output of `IPCompleter.all_completions` does not have
434 434 duplicated prefixes.
435 435 """
436 436 ip = get_ipython()
437 437 c = ip.Completer
438 438 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
439 439 for jedi_status in [True, False]:
440 440 with provisionalcompleter():
441 441 ip.Completer.use_jedi = jedi_status
442 442 matches = c.all_completions("TestCl")
443 443 assert matches == ["TestClass"], (jedi_status, matches)
444 444 matches = c.all_completions("TestClass.")
445 445 assert len(matches) > 2, (jedi_status, matches)
446 446 matches = c.all_completions("TestClass.a")
447 447 assert matches == ['TestClass.a', 'TestClass.a1'], jedi_status
448 448
449 449 def test_jedi(self):
450 450 """
451 451 A couple of issues we had with Jedi
452 452 """
453 453 ip = get_ipython()
454 454
455 455 def _test_complete(reason, s, comp, start=None, end=None):
456 456 l = len(s)
457 457 start = start if start is not None else l
458 458 end = end if end is not None else l
459 459 with provisionalcompleter():
460 460 ip.Completer.use_jedi = True
461 461 completions = set(ip.Completer.completions(s, l))
462 462 ip.Completer.use_jedi = False
463 463 assert Completion(start, end, comp) in completions, reason
464 464
465 465 def _test_not_complete(reason, s, comp):
466 466 l = len(s)
467 467 with provisionalcompleter():
468 468 ip.Completer.use_jedi = True
469 469 completions = set(ip.Completer.completions(s, l))
470 470 ip.Completer.use_jedi = False
471 471 assert Completion(l, l, comp) not in completions, reason
472 472
473 473 import jedi
474 474
475 475 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
476 476 if jedi_version > (0, 10):
477 477 _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
478 478 _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
479 479 _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
480 480 _test_complete("cover duplicate completions", "im", "import", 0, 2)
481 481
482 482 _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")
483 483
484 484 def test_completion_have_signature(self):
485 485 """
486 486 Let's make sure jedi is capable of pulling out the signature of the function we are completing.
487 487 """
488 488 ip = get_ipython()
489 489 with provisionalcompleter():
490 490 ip.Completer.use_jedi = True
491 491 completions = ip.Completer.completions("ope", 3)
492 492 c = next(completions) # should be `open`
493 493 ip.Completer.use_jedi = False
494 494 assert "file" in c.signature, "Signature of function was not found by completer"
495 495 assert (
496 496 "encoding" in c.signature
497 497 ), "Signature of function was not found by completer"
498 498
499 499 def test_completions_have_type(self):
500 500 """
501 501 Let's make sure matchers provide completion type.
502 502 """
503 503 ip = get_ipython()
504 504 with provisionalcompleter():
505 505 ip.Completer.use_jedi = False
506 506 completions = ip.Completer.completions("%tim", 3)
507 507 c = next(completions) # should be `%time` or similar
508 508 assert c.type == "magic", "Type of magic was not assigned by completer"
509 509
510 510 @pytest.mark.xfail(reason="Known failure on jedi<=0.18.0")
511 511 def test_deduplicate_completions(self):
512 512 """
513 513 Test that completions are correctly deduplicated (even if ranges are not the same)
514 514 """
515 515 ip = get_ipython()
516 516 ip.ex(
517 517 textwrap.dedent(
518 518 """
519 519 class Z:
520 520 zoo = 1
521 521 """
522 522 )
523 523 )
524 524 with provisionalcompleter():
525 525 ip.Completer.use_jedi = True
526 526 l = list(
527 527 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
528 528 )
529 529 ip.Completer.use_jedi = False
530 530
531 531 assert len(l) == 1, "Completions (Z.z<tab>) should deduplicate: %s " % l
532 532 assert l[0].text == "zoo" # and not `it.accumulate`
533 533
534 534 def test_greedy_completions(self):
535 535 """
536 536 Test the capability of the Greedy completer.
537 537
538 538 Most of the tests here do not really show off the greedy completer; as proof,
539 539 each of the tests below now passes with Jedi. The greedy completer is capable of more.
540 540
541 541 See the :any:`test_dict_key_completion_contexts`
542 542
543 543 """
544 544 ip = get_ipython()
545 545 ip.ex("a=list(range(5))")
546 546 _, c = ip.complete(".", line="a[0].")
547 547 self.assertFalse(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
548 548
549 549 def _(line, cursor_pos, expect, message, completion):
550 550 with greedy_completion(), provisionalcompleter():
551 551 ip.Completer.use_jedi = False
552 552 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
553 553 self.assertIn(expect, c, message % c)
554 554
555 555 ip.Completer.use_jedi = True
556 556 with provisionalcompleter():
557 557 completions = ip.Completer.completions(line, cursor_pos)
558 558 self.assertIn(completion, completions)
559 559
560 560 with provisionalcompleter():
561 561 _(
562 562 "a[0].",
563 563 5,
564 564 "a[0].real",
565 565 "Should have completed on a[0].: %s",
566 566 Completion(5, 5, "real"),
567 567 )
568 568 _(
569 569 "a[0].r",
570 570 6,
571 571 "a[0].real",
572 572 "Should have completed on a[0].r: %s",
573 573 Completion(5, 6, "real"),
574 574 )
575 575
576 576 _(
577 577 "a[0].from_",
578 578 10,
579 579 "a[0].from_bytes",
580 580 "Should have completed on a[0].from_: %s",
581 581 Completion(5, 10, "from_bytes"),
582 582 )
583 583
584 584 def test_omit__names(self):
585 585 # also happens to test IPCompleter as a configurable
586 586 ip = get_ipython()
587 587 ip._hidden_attr = 1
588 588 ip._x = {}
589 589 c = ip.Completer
590 590 ip.ex("ip=get_ipython()")
591 591 cfg = Config()
592 592 cfg.IPCompleter.omit__names = 0
593 593 c.update_config(cfg)
594 594 with provisionalcompleter():
595 595 c.use_jedi = False
596 596 s, matches = c.complete("ip.")
597 597 self.assertIn("ip.__str__", matches)
598 598 self.assertIn("ip._hidden_attr", matches)
599 599
600 600 # c.use_jedi = True
601 601 # completions = set(c.completions('ip.', 3))
602 602 # self.assertIn(Completion(3, 3, '__str__'), completions)
603 603 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
604 604
605 605 cfg = Config()
606 606 cfg.IPCompleter.omit__names = 1
607 607 c.update_config(cfg)
608 608 with provisionalcompleter():
609 609 c.use_jedi = False
610 610 s, matches = c.complete("ip.")
611 611 self.assertNotIn("ip.__str__", matches)
612 612 # self.assertIn('ip._hidden_attr', matches)
613 613
614 614 # c.use_jedi = True
615 615 # completions = set(c.completions('ip.', 3))
616 616 # self.assertNotIn(Completion(3,3,'__str__'), completions)
617 617 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
618 618
619 619 cfg = Config()
620 620 cfg.IPCompleter.omit__names = 2
621 621 c.update_config(cfg)
622 622 with provisionalcompleter():
623 623 c.use_jedi = False
624 624 s, matches = c.complete("ip.")
625 625 self.assertNotIn("ip.__str__", matches)
626 626 self.assertNotIn("ip._hidden_attr", matches)
627 627
628 628 # c.use_jedi = True
629 629 # completions = set(c.completions('ip.', 3))
630 630 # self.assertNotIn(Completion(3,3,'__str__'), completions)
631 631 # self.assertNotIn(Completion(3,3, "_hidden_attr"), completions)
632 632
633 633 with provisionalcompleter():
634 634 c.use_jedi = False
635 635 s, matches = c.complete("ip._x.")
636 636 self.assertIn("ip._x.keys", matches)
637 637
638 638 # c.use_jedi = True
639 639 # completions = set(c.completions('ip._x.', 6))
640 640 # self.assertIn(Completion(6,6, "keys"), completions)
641 641
642 642 del ip._hidden_attr
643 643 del ip._x
644 644
645 645 def test_limit_to__all__False_ok(self):
646 646 """
647 647 ``limit_to__all__`` is deprecated; once we remove it this test can go away.
648 648 """
649 649 ip = get_ipython()
650 650 c = ip.Completer
651 651 c.use_jedi = False
652 652 ip.ex("class D: x=24")
653 653 ip.ex("d=D()")
654 654 cfg = Config()
655 655 cfg.IPCompleter.limit_to__all__ = False
656 656 c.update_config(cfg)
657 657 s, matches = c.complete("d.")
658 658 self.assertIn("d.x", matches)
659 659
660 660 def test_get__all__entries_ok(self):
661 661 class A:
662 662 __all__ = ["x", 1]
663 663
664 664 words = completer.get__all__entries(A())
665 665 self.assertEqual(words, ["x"])
666 666
667 667 def test_get__all__entries_no__all__ok(self):
668 668 class A:
669 669 pass
670 670
671 671 words = completer.get__all__entries(A())
672 672 self.assertEqual(words, [])
673 673
674 674 def test_func_kw_completions(self):
675 675 ip = get_ipython()
676 676 c = ip.Completer
677 677 c.use_jedi = False
678 678 ip.ex("def myfunc(a=1,b=2): return a+b")
679 679 s, matches = c.complete(None, "myfunc(1,b")
680 680 self.assertIn("b=", matches)
681 681 # Simulate completing with cursor right after b (pos==10):
682 682 s, matches = c.complete(None, "myfunc(1,b)", 10)
683 683 self.assertIn("b=", matches)
684 684 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
685 685 self.assertIn("b=", matches)
686 686 # builtin function
687 687 s, matches = c.complete(None, "min(k, k")
688 688 self.assertIn("key=", matches)
689 689
690 690 def test_default_arguments_from_docstring(self):
691 691 ip = get_ipython()
692 692 c = ip.Completer
693 693 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
694 694 self.assertEqual(kwd, ["key"])
695 695 # with cython type etc
696 696 kwd = c._default_arguments_from_docstring(
697 697 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
698 698 )
699 699 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
700 700 # white spaces
701 701 kwd = c._default_arguments_from_docstring(
702 702 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
703 703 )
704 704 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
705 705
706 706 def test_line_magics(self):
707 707 ip = get_ipython()
708 708 c = ip.Completer
709 709 s, matches = c.complete(None, "lsmag")
710 710 self.assertIn("%lsmagic", matches)
711 711 s, matches = c.complete(None, "%lsmag")
712 712 self.assertIn("%lsmagic", matches)
713 713
714 714 def test_cell_magics(self):
715 715 from IPython.core.magic import register_cell_magic
716 716
717 717 @register_cell_magic
718 718 def _foo_cellm(line, cell):
719 719 pass
720 720
721 721 ip = get_ipython()
722 722 c = ip.Completer
723 723
724 724 s, matches = c.complete(None, "_foo_ce")
725 725 self.assertIn("%%_foo_cellm", matches)
726 726 s, matches = c.complete(None, "%%_foo_ce")
727 727 self.assertIn("%%_foo_cellm", matches)
728 728
729 729 def test_line_cell_magics(self):
730 730 from IPython.core.magic import register_line_cell_magic
731 731
732 732 @register_line_cell_magic
733 733 def _bar_cellm(line, cell):
734 734 pass
735 735
736 736 ip = get_ipython()
737 737 c = ip.Completer
738 738
739 739 # The policy here is trickier, see comments in completion code. The
740 740 # returned values depend on whether the user passes %% or not explicitly,
741 741 # and this will show a difference if the same name is both a line and cell
742 742 # magic.
743 743 s, matches = c.complete(None, "_bar_ce")
744 744 self.assertIn("%_bar_cellm", matches)
745 745 self.assertIn("%%_bar_cellm", matches)
746 746 s, matches = c.complete(None, "%_bar_ce")
747 747 self.assertIn("%_bar_cellm", matches)
748 748 self.assertIn("%%_bar_cellm", matches)
749 749 s, matches = c.complete(None, "%%_bar_ce")
750 750 self.assertNotIn("%_bar_cellm", matches)
751 751 self.assertIn("%%_bar_cellm", matches)
752 752
753 753 def test_magic_completion_order(self):
754 754 ip = get_ipython()
755 755 c = ip.Completer
756 756
757 757 # Test ordering of line and cell magics.
758 758 text, matches = c.complete("timeit")
759 759 self.assertEqual(matches, ["%timeit", "%%timeit"])
760 760
761 761 def test_magic_completion_shadowing(self):
762 762 ip = get_ipython()
763 763 c = ip.Completer
764 764 c.use_jedi = False
765 765
766 766 # Before importing matplotlib, %matplotlib magic should be the only option.
767 767 text, matches = c.complete("mat")
768 768 self.assertEqual(matches, ["%matplotlib"])
769 769
770 770 # The newly introduced name should shadow the magic.
771 771 ip.run_cell("matplotlib = 1")
772 772 text, matches = c.complete("mat")
773 773 self.assertEqual(matches, ["matplotlib"])
774 774
775 775 # After removing matplotlib from namespace, the magic should again be
776 776 # the only option.
777 777 del ip.user_ns["matplotlib"]
778 778 text, matches = c.complete("mat")
779 779 self.assertEqual(matches, ["%matplotlib"])
780 780
781 781 def test_magic_completion_shadowing_explicit(self):
782 782 """
783 783 If the user tries to complete a shadowed magic, an explicit % start should
784 784 still return the completions.
785 785 """
786 786 ip = get_ipython()
787 787 c = ip.Completer
788 788
789 789 # Before importing matplotlib, %matplotlib magic should be the only option.
790 790 text, matches = c.complete("%mat")
791 791 self.assertEqual(matches, ["%matplotlib"])
792 792
793 793 ip.run_cell("matplotlib = 1")
794 794
795 795 # Even with matplotlib in the namespace, an explicit % prefix should
796 796 # still return the magic as the only option.
797 797 text, matches = c.complete("%mat")
798 798 self.assertEqual(matches, ["%matplotlib"])
799 799
800 800 def test_magic_config(self):
801 801 ip = get_ipython()
802 802 c = ip.Completer
803 803
804 804 s, matches = c.complete(None, "conf")
805 805 self.assertIn("%config", matches)
806 806 s, matches = c.complete(None, "conf")
807 807 self.assertNotIn("AliasManager", matches)
808 808 s, matches = c.complete(None, "config ")
809 809 self.assertIn("AliasManager", matches)
810 810 s, matches = c.complete(None, "%config ")
811 811 self.assertIn("AliasManager", matches)
812 812 s, matches = c.complete(None, "config Ali")
813 813 self.assertListEqual(["AliasManager"], matches)
814 814 s, matches = c.complete(None, "%config Ali")
815 815 self.assertListEqual(["AliasManager"], matches)
816 816 s, matches = c.complete(None, "config AliasManager")
817 817 self.assertListEqual(["AliasManager"], matches)
818 818 s, matches = c.complete(None, "%config AliasManager")
819 819 self.assertListEqual(["AliasManager"], matches)
820 820 s, matches = c.complete(None, "config AliasManager.")
821 821 self.assertIn("AliasManager.default_aliases", matches)
822 822 s, matches = c.complete(None, "%config AliasManager.")
823 823 self.assertIn("AliasManager.default_aliases", matches)
824 824 s, matches = c.complete(None, "config AliasManager.de")
825 825 self.assertListEqual(["AliasManager.default_aliases"], matches)
826 826 s, matches = c.complete(None, "config AliasManager.de")
827 827 self.assertListEqual(["AliasManager.default_aliases"], matches)
828 828
829 829 def test_magic_color(self):
830 830 ip = get_ipython()
831 831 c = ip.Completer
832 832
833 833 s, matches = c.complete(None, "colo")
834 834 self.assertIn("%colors", matches)
835 835 s, matches = c.complete(None, "colo")
836 836 self.assertNotIn("NoColor", matches)
837 837 s, matches = c.complete(None, "%colors") # No trailing space
838 838 self.assertNotIn("NoColor", matches)
839 839 s, matches = c.complete(None, "colors ")
840 840 self.assertIn("NoColor", matches)
841 841 s, matches = c.complete(None, "%colors ")
842 842 self.assertIn("NoColor", matches)
843 843 s, matches = c.complete(None, "colors NoCo")
844 844 self.assertListEqual(["NoColor"], matches)
845 845 s, matches = c.complete(None, "%colors NoCo")
846 846 self.assertListEqual(["NoColor"], matches)
847 847
848 848 def test_match_dict_keys(self):
849 849 """
850 850 Test that match_dict_keys works on a couple of use cases, returns what is
851 851 expected, and does not crash.
852 852 """
853 853 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
854 854
855 855 def match(*args, **kwargs):
856 856 quote, offset, matches = match_dict_keys(*args, **kwargs)
857 857 return quote, offset, list(matches)
858 858
859 859 keys = ["foo", b"far"]
860 860 assert match(keys, "b'", delims=delims) == ("'", 2, ["far"])
861 861 assert match(keys, "b'f", delims=delims) == ("'", 2, ["far"])
862 862 assert match(keys, 'b"', delims=delims) == ('"', 2, ["far"])
863 863 assert match(keys, 'b"f', delims=delims) == ('"', 2, ["far"])
864 864
865 865 assert match(keys, "'", delims=delims) == ("'", 1, ["foo"])
866 866 assert match(keys, "'f", delims=delims) == ("'", 1, ["foo"])
867 867 assert match(keys, '"', delims=delims) == ('"', 1, ["foo"])
868 868 assert match(keys, '"f', delims=delims) == ('"', 1, ["foo"])
869 869
870 870 # Completion on first item of tuple
871 871 keys = [("foo", 1111), ("foo", 2222), (3333, "bar"), (3333, "test")]
872 872 assert match(keys, "'f", delims=delims) == ("'", 1, ["foo"])
873 873 assert match(keys, "33", delims=delims) == ("", 0, ["3333"])
874 874
875 875 # Completion on numbers
876 876 keys = [0xDEADBEEF, 1111, 1234, "1999", 0b10101, 22] # 3735928559 # 21
877 877 assert match(keys, "0xdead", delims=delims) == ("", 0, ["0xdeadbeef"])
878 878 assert match(keys, "1", delims=delims) == ("", 0, ["1111", "1234"])
879 879 assert match(keys, "2", delims=delims) == ("", 0, ["21", "22"])
880 880 assert match(keys, "0b101", delims=delims) == ("", 0, ["0b10101", "0b10110"])
881 881
882 882 def test_match_dict_keys_tuple(self):
883 883 """
884 884 Test that match_dict_keys called with an extra prefix works on a couple of
885 885 use cases, returns what is expected, and does not crash.
886 886 """
887 887 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
888 888
889 889 keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ('other', 'test')]
890 890
891 891 def match(*args, **kwargs):
892 892 quote, offset, matches = match_dict_keys(*args, **kwargs)
893 893 return quote, offset, list(matches)
894 894
895 895 # Completion on first key == "foo"
896 896 assert match(keys, "'", delims=delims, extra_prefix=("foo",)) == (
897 897 "'",
898 898 1,
899 899 ["bar", "oof"],
900 900 )
901 901 assert match(keys, '"', delims=delims, extra_prefix=("foo",)) == (
902 902 '"',
903 903 1,
904 904 ["bar", "oof"],
905 905 )
906 906 assert match(keys, "'o", delims=delims, extra_prefix=("foo",)) == (
907 907 "'",
908 908 1,
909 909 ["oof"],
910 910 )
911 911 assert match(keys, '"o', delims=delims, extra_prefix=("foo",)) == (
912 912 '"',
913 913 1,
914 914 ["oof"],
915 915 )
916 916 assert match(keys, "b'", delims=delims, extra_prefix=("foo",)) == (
917 917 "'",
918 918 2,
919 919 ["bar"],
920 920 )
921 921 assert match(keys, 'b"', delims=delims, extra_prefix=("foo",)) == (
922 922 '"',
923 923 2,
924 924 ["bar"],
925 925 )
926 926 assert match(keys, "b'b", delims=delims, extra_prefix=("foo",)) == (
927 927 "'",
928 928 2,
929 929 ["bar"],
930 930 )
931 931 assert match(keys, 'b"b', delims=delims, extra_prefix=("foo",)) == (
932 932 '"',
933 933 2,
934 934 ["bar"],
935 935 )
936 936
937 937 # No Completion
938 938 assert match(keys, "'", delims=delims, extra_prefix=("no_foo",)) == ("'", 1, [])
939 939 assert match(keys, "'", delims=delims, extra_prefix=("fo",)) == ("'", 1, [])
940 940
941 941 keys = [("foo1", "foo2", "foo3", "foo4"), ("foo1", "foo2", "bar", "foo4")]
942 942 assert match(keys, "'foo", delims=delims, extra_prefix=("foo1",)) == (
943 943 "'",
944 944 1,
945 945 ["foo2"],
946 946 )
947 947 assert match(keys, "'foo", delims=delims, extra_prefix=("foo1", "foo2")) == (
948 948 "'",
949 949 1,
950 950 ["foo3"],
951 951 )
952 952 assert match(
953 953 keys, "'foo", delims=delims, extra_prefix=("foo1", "foo2", "foo3")
954 954 ) == ("'", 1, ["foo4"])
955 955 assert match(
956 956 keys, "'foo", delims=delims, extra_prefix=("foo1", "foo2", "foo3", "foo4")
957 957 ) == ("'", 1, [])
958 958
959 959 keys = [("foo", 1111), ("foo", "2222"), (3333, "bar"), (3333, 4444)]
960 960 assert match(keys, "'", delims=delims, extra_prefix=("foo",)) == (
961 961 "'",
962 962 1,
963 963 ["2222"],
964 964 )
965 965 assert match(keys, "", delims=delims, extra_prefix=("foo",)) == (
966 966 "",
967 967 0,
968 968 ["1111", "'2222'"],
969 969 )
970 970 assert match(keys, "'", delims=delims, extra_prefix=(3333,)) == (
971 971 "'",
972 972 1,
973 973 ["bar"],
974 974 )
975 975 assert match(keys, "", delims=delims, extra_prefix=(3333,)) == (
976 976 "",
977 977 0,
978 978 ["'bar'", "4444"],
979 979 )
980 980 assert match(keys, "'", delims=delims, extra_prefix=("3333",)) == ("'", 1, [])
981 981 assert match(keys, "33", delims=delims) == ("", 0, ["3333"])
982 982
983 983 def test_dict_key_completion_closures(self):
984 984 ip = get_ipython()
985 985 complete = ip.Completer.complete
986 986 ip.Completer.auto_close_dict_keys = True
987 987
988 988 ip.user_ns["d"] = {
989 989 # tuple only
990 990 ("aa", 11): None,
991 991 # tuple and non-tuple
992 992 ("bb", 22): None,
993 993 "bb": None,
994 994 # non-tuple only
995 995 "cc": None,
996 996 # numeric tuple only
997 997 (77, "x"): None,
998 998 # numeric tuple and non-tuple
999 999 (88, "y"): None,
1000 1000 88: None,
1001 1001 # numeric non-tuple only
1002 1002 99: None,
1003 1003 }
1004 1004
1005 1005 _, matches = complete(line_buffer="d[")
1006 1006 # should append `, ` if it matches a tuple only
1007 1007 self.assertIn("'aa', ", matches)
1008 1008 # should not append anything if it matches both a tuple and an item
1009 1009 self.assertIn("'bb'", matches)
1010 1010 # should append `]` if it matches an item only
1011 1011 self.assertIn("'cc']", matches)
1012 1012
1013 1013 # should append `, ` if it matches a tuple only
1014 1014 self.assertIn("77, ", matches)
1015 1015 # should not append anything if it matches both a tuple and an item
1016 1016 self.assertIn("88", matches)
1017 1017 # should append `]` if it matches an item only
1018 1018 self.assertIn("99]", matches)
1019 1019
1020 1020 _, matches = complete(line_buffer="d['aa', ")
1021 1021 # should restrict matches to those matching tuple prefix
1022 1022 self.assertIn("11]", matches)
1023 1023 self.assertNotIn("'bb'", matches)
1024 1024 self.assertNotIn("'bb', ", matches)
1025 1025 self.assertNotIn("'bb']", matches)
1026 1026 self.assertNotIn("'cc'", matches)
1027 1027 self.assertNotIn("'cc', ", matches)
1028 1028 self.assertNotIn("'cc']", matches)
1029 1029 ip.Completer.auto_close_dict_keys = False
1030 1030
1031 1031 def test_dict_key_completion_string(self):
1032 1032 """Test dictionary key completion for string keys"""
1033 1033 ip = get_ipython()
1034 1034 complete = ip.Completer.complete
1035 1035
1036 1036 ip.user_ns["d"] = {"abc": None}
1037 1037
1038 1038 # check completion at different stages
1039 1039 _, matches = complete(line_buffer="d[")
1040 1040 self.assertIn("'abc'", matches)
1041 1041 self.assertNotIn("'abc']", matches)
1042 1042
1043 1043 _, matches = complete(line_buffer="d['")
1044 1044 self.assertIn("abc", matches)
1045 1045 self.assertNotIn("abc']", matches)
1046 1046
1047 1047 _, matches = complete(line_buffer="d['a")
1048 1048 self.assertIn("abc", matches)
1049 1049 self.assertNotIn("abc']", matches)
1050 1050
1051 1051 # check use of different quoting
1052 1052 _, matches = complete(line_buffer='d["')
1053 1053 self.assertIn("abc", matches)
1054 1054 self.assertNotIn('abc"]', matches)
1055 1055
1056 1056 _, matches = complete(line_buffer='d["a')
1057 1057 self.assertIn("abc", matches)
1058 1058 self.assertNotIn('abc"]', matches)
1059 1059
1060 1060 # check sensitivity to following context
1061 1061 _, matches = complete(line_buffer="d[]", cursor_pos=2)
1062 1062 self.assertIn("'abc'", matches)
1063 1063
1064 1064 _, matches = complete(line_buffer="d['']", cursor_pos=3)
1065 1065 self.assertIn("abc", matches)
1066 1066 self.assertNotIn("abc'", matches)
1067 1067 self.assertNotIn("abc']", matches)
1068 1068
1069 1069 # check multiple solutions are correctly returned and that noise is not
1070 1070 ip.user_ns["d"] = {
1071 1071 "abc": None,
1072 1072 "abd": None,
1073 1073 "bad": None,
1074 1074 object(): None,
1075 1075 5: None,
1076 1076 ("abe", None): None,
1077 1077 (None, "abf"): None
1078 1078 }
1079 1079
1080 1080 _, matches = complete(line_buffer="d['a")
1081 1081 self.assertIn("abc", matches)
1082 1082 self.assertIn("abd", matches)
1083 1083 self.assertNotIn("bad", matches)
1084 1084 self.assertNotIn("abe", matches)
1085 1085 self.assertNotIn("abf", matches)
1086 1086 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1087 1087
1088 1088 # check escaping and whitespace
1089 1089 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
1090 1090 _, matches = complete(line_buffer="d['a")
1091 1091 self.assertIn("a\\nb", matches)
1092 1092 self.assertIn("a\\'b", matches)
1093 1093 self.assertIn('a"b', matches)
1094 1094 self.assertIn("a word", matches)
1095 1095 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1096 1096
1097 1097 # - can complete on non-initial word of the string
1098 1098 _, matches = complete(line_buffer="d['a w")
1099 1099 self.assertIn("word", matches)
1100 1100
1101 1101 # - understands quote escaping
1102 1102 _, matches = complete(line_buffer="d['a\\'")
1103 1103 self.assertIn("b", matches)
1104 1104
1105 1105 # - default quoting should work like repr
1106 1106 _, matches = complete(line_buffer="d[")
1107 1107 self.assertIn('"a\'b"', matches)
1108 1108
1109 1109 # - when opening quote with ", possible to match with unescaped apostrophe
1110 1110 _, matches = complete(line_buffer="d[\"a'")
1111 1111 self.assertIn("b", matches)
1112 1112
1113 1113 # need to not split at delims that readline won't split at
1114 1114 if "-" not in ip.Completer.splitter.delims:
1115 1115 ip.user_ns["d"] = {"before-after": None}
1116 1116 _, matches = complete(line_buffer="d['before-af")
1117 1117 self.assertIn("before-after", matches)
1118 1118
1119 1119 # check completion on tuple-of-string keys at different stage - on first key
1120 1120 ip.user_ns["d"] = {('foo', 'bar'): None}
1121 1121 _, matches = complete(line_buffer="d[")
1122 1122 self.assertIn("'foo'", matches)
1123 1123 self.assertNotIn("'foo']", matches)
1124 1124 self.assertNotIn("'bar'", matches)
1125 1125 self.assertNotIn("foo", matches)
1126 1126 self.assertNotIn("bar", matches)
1127 1127
1128 1128 # - match the prefix
1129 1129 _, matches = complete(line_buffer="d['f")
1130 1130 self.assertIn("foo", matches)
1131 1131 self.assertNotIn("foo']", matches)
1132 1132 self.assertNotIn('foo"]', matches)
1133 1133 _, matches = complete(line_buffer="d['foo")
1134 1134 self.assertIn("foo", matches)
1135 1135
1136 1136 # - can complete on second key
1137 1137 _, matches = complete(line_buffer="d['foo', ")
1138 1138 self.assertIn("'bar'", matches)
1139 1139 _, matches = complete(line_buffer="d['foo', 'b")
1140 1140 self.assertIn("bar", matches)
1141 1141 self.assertNotIn("foo", matches)
1142 1142
1143 1143 # - does not propose missing keys
1144 1144 _, matches = complete(line_buffer="d['foo', 'f")
1145 1145 self.assertNotIn("bar", matches)
1146 1146 self.assertNotIn("foo", matches)
1147 1147
1148 1148 # check sensitivity to following context
1149 1149 _, matches = complete(line_buffer="d['foo',]", cursor_pos=8)
1150 1150 self.assertIn("'bar'", matches)
1151 1151 self.assertNotIn("bar", matches)
1152 1152 self.assertNotIn("'foo'", matches)
1153 1153 self.assertNotIn("foo", matches)
1154 1154
1155 1155 _, matches = complete(line_buffer="d['']", cursor_pos=3)
1156 1156 self.assertIn("foo", matches)
1157 1157 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1158 1158
1159 1159 _, matches = complete(line_buffer='d[""]', cursor_pos=3)
1160 1160 self.assertIn("foo", matches)
1161 1161 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1162 1162
1163 1163 _, matches = complete(line_buffer='d["foo","]', cursor_pos=9)
1164 1164 self.assertIn("bar", matches)
1165 1165 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1166 1166
1167 1167 _, matches = complete(line_buffer='d["foo",]', cursor_pos=8)
1168 1168 self.assertIn("'bar'", matches)
1169 1169 self.assertNotIn("bar", matches)
1170 1170
1171 1171 # Can complete with longer tuple keys
1172 1172 ip.user_ns["d"] = {('foo', 'bar', 'foobar'): None}
1173 1173
1174 1174 # - can complete second key
1175 1175 _, matches = complete(line_buffer="d['foo', 'b")
1176 1176 self.assertIn("bar", matches)
1177 1177 self.assertNotIn("foo", matches)
1178 1178 self.assertNotIn("foobar", matches)
1179 1179
1180 1180 # - can complete third key
1181 1181 _, matches = complete(line_buffer="d['foo', 'bar', 'fo")
1182 1182 self.assertIn("foobar", matches)
1183 1183 self.assertNotIn("foo", matches)
1184 1184 self.assertNotIn("bar", matches)
1185 1185
1186 1186 def test_dict_key_completion_numbers(self):
1187 1187 ip = get_ipython()
1188 1188 complete = ip.Completer.complete
1189 1189
1190 1190 ip.user_ns["d"] = {
1191 1191 0xDEADBEEF: None, # 3735928559
1192 1192 1111: None,
1193 1193 1234: None,
1194 1194 "1999": None,
1195 1195 0b10101: None, # 21
1196 1196 22: None,
1197 1197 }
1198 1198 _, matches = complete(line_buffer="d[1")
1199 1199 self.assertIn("1111", matches)
1200 1200 self.assertIn("1234", matches)
1201 1201 self.assertNotIn("1999", matches)
1202 1202 self.assertNotIn("'1999'", matches)
1203 1203
1204 1204 _, matches = complete(line_buffer="d[0xdead")
1205 1205 self.assertIn("0xdeadbeef", matches)
1206 1206
1207 1207 _, matches = complete(line_buffer="d[2")
1208 1208 self.assertIn("21", matches)
1209 1209 self.assertIn("22", matches)
1210 1210
1211 1211 _, matches = complete(line_buffer="d[0b101")
1212 1212 self.assertIn("0b10101", matches)
1213 1213 self.assertIn("0b10110", matches)
1214 1214
1215 1215 def test_dict_key_completion_contexts(self):
1216 1216 """Test expression contexts in which dict key completion occurs"""
1217 1217 ip = get_ipython()
1218 1218 complete = ip.Completer.complete
1219 1219 d = {"abc": None}
1220 1220 ip.user_ns["d"] = d
1221 1221
1222 1222 class C:
1223 1223 data = d
1224 1224
1225 1225 ip.user_ns["C"] = C
1226 1226 ip.user_ns["get"] = lambda: d
1227 1227 ip.user_ns["nested"] = {"x": d}
1228 1228
1229 1229 def assert_no_completion(**kwargs):
1230 1230 _, matches = complete(**kwargs)
1231 1231 self.assertNotIn("abc", matches)
1232 1232 self.assertNotIn("abc'", matches)
1233 1233 self.assertNotIn("abc']", matches)
1234 1234 self.assertNotIn("'abc'", matches)
1235 1235 self.assertNotIn("'abc']", matches)
1236 1236
1237 1237 def assert_completion(**kwargs):
1238 1238 _, matches = complete(**kwargs)
1239 1239 self.assertIn("'abc'", matches)
1240 1240 self.assertNotIn("'abc']", matches)
1241 1241
1242 1242 # no completion after string closed, even if reopened
1243 1243 assert_no_completion(line_buffer="d['a'")
1244 1244 assert_no_completion(line_buffer='d["a"')
1245 1245 assert_no_completion(line_buffer="d['a' + ")
1246 1246 assert_no_completion(line_buffer="d['a' + '")
1247 1247
1248 1248 # completion in non-trivial expressions
1249 1249 assert_completion(line_buffer="+ d[")
1250 1250 assert_completion(line_buffer="(d[")
1251 1251 assert_completion(line_buffer="C.data[")
1252 1252
1253 1253 # nested dict completion
1254 1254 assert_completion(line_buffer="nested['x'][")
1255 1255
1256 1256 with evaluation_level("minimal"):
1257 1257 with pytest.raises(AssertionError):
1258 1258 assert_completion(line_buffer="nested['x'][")
1259 1259
1260 1260 # greedy flag
1261 1261 def assert_completion(**kwargs):
1262 1262 _, matches = complete(**kwargs)
1263 1263 self.assertIn("get()['abc']", matches)
1264 1264
1265 1265 assert_no_completion(line_buffer="get()[")
1266 1266 with greedy_completion():
1267 1267 assert_completion(line_buffer="get()[")
1268 1268 assert_completion(line_buffer="get()['")
1269 1269 assert_completion(line_buffer="get()['a")
1270 1270 assert_completion(line_buffer="get()['ab")
1271 1271 assert_completion(line_buffer="get()['abc")
1272 1272
1273 1273 def test_dict_key_completion_bytes(self):
1274 1274 """Test handling of bytes in dict key completion"""
1275 1275 ip = get_ipython()
1276 1276 complete = ip.Completer.complete
1277 1277
1278 1278 ip.user_ns["d"] = {"abc": None, b"abd": None}
1279 1279
1280 1280 _, matches = complete(line_buffer="d[")
1281 1281 self.assertIn("'abc'", matches)
1282 1282 self.assertIn("b'abd'", matches)
1283 1283
1284 1284 if False: # not currently implemented
1285 1285 _, matches = complete(line_buffer="d[b")
1286 1286 self.assertIn("b'abd'", matches)
1287 1287 self.assertNotIn("b'abc'", matches)
1288 1288
1289 1289 _, matches = complete(line_buffer="d[b'")
1290 1290 self.assertIn("abd", matches)
1291 1291 self.assertNotIn("abc", matches)
1292 1292
1293 1293 _, matches = complete(line_buffer="d[B'")
1294 1294 self.assertIn("abd", matches)
1295 1295 self.assertNotIn("abc", matches)
1296 1296
1297 1297 _, matches = complete(line_buffer="d['")
1298 1298 self.assertIn("abc", matches)
1299 1299 self.assertNotIn("abd", matches)
1300 1300
1301 1301 def test_dict_key_completion_unicode_py3(self):
1302 1302 """Test handling of unicode in dict key completion"""
1303 1303 ip = get_ipython()
1304 1304 complete = ip.Completer.complete
1305 1305
1306 1306 ip.user_ns["d"] = {"a\u05d0": None}
1307 1307
1308 1308 # query using escape
1309 1309 if sys.platform != "win32":
1310 1310 # Known failure on Windows
1311 1311 _, matches = complete(line_buffer="d['a\\u05d0")
1312 1312 self.assertIn("u05d0", matches) # tokenized after \\
1313 1313
1314 1314 # query using character
1315 1315 _, matches = complete(line_buffer="d['a\u05d0")
1316 1316 self.assertIn("a\u05d0", matches)
1317 1317
1318 1318 with greedy_completion():
1319 1319 # query using escape
1320 1320 _, matches = complete(line_buffer="d['a\\u05d0")
1321 1321 self.assertIn("d['a\\u05d0']", matches) # tokenized after \\
1322 1322
1323 1323 # query using character
1324 1324 _, matches = complete(line_buffer="d['a\u05d0")
1325 1325 self.assertIn("d['a\u05d0']", matches)
1326 1326
1327 1327 @dec.skip_without("numpy")
1328 1328 def test_struct_array_key_completion(self):
1329 1329 """Test dict key completion applies to numpy struct arrays"""
1330 1330 import numpy
1331 1331
1332 1332 ip = get_ipython()
1333 1333 complete = ip.Completer.complete
1334 1334 ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
1335 1335 _, matches = complete(line_buffer="d['")
1336 1336 self.assertIn("hello", matches)
1337 1337 self.assertIn("world", matches)
1338 1338 # complete on the numpy struct itself
1339 1339 dt = numpy.dtype(
1340 1340 [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
1341 1341 )
1342 1342 x = numpy.zeros(2, dtype=dt)
1343 1343 ip.user_ns["d"] = x[1]
1344 1344 _, matches = complete(line_buffer="d['")
1345 1345 self.assertIn("my_head", matches)
1346 1346 self.assertIn("my_data", matches)
1347 1347
1348 1348 def completes_on_nested():
1349 1349 ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
1350 1350 _, matches = complete(line_buffer="d[1]['my_head']['")
1351 1351 self.assertTrue(any(["my_dt" in m for m in matches]))
1352 1352 self.assertTrue(any(["my_df" in m for m in matches]))
1353 1353 # complete on a nested level
1354 1354 with greedy_completion():
1355 1355 completes_on_nested()
1356 1356
1357 with evaluation_level("limitted"):
1357 with evaluation_level("limited"):
1358 1358 completes_on_nested()
1359 1359
1360 1360 with evaluation_level("minimal"):
1361 1361 with pytest.raises(AssertionError):
1362 1362 completes_on_nested()
1363 1363
1364 1364 @dec.skip_without("pandas")
1365 1365 def test_dataframe_key_completion(self):
1366 1366 """Test dict key completion applies to pandas DataFrames"""
1367 1367 import pandas
1368 1368
1369 1369 ip = get_ipython()
1370 1370 complete = ip.Completer.complete
1371 1371 ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
1372 1372 _, matches = complete(line_buffer="d['")
1373 1373 self.assertIn("hello", matches)
1374 1374 self.assertIn("world", matches)
1375 1375 _, matches = complete(line_buffer="d.loc[:, '")
1376 1376 self.assertIn("hello", matches)
1377 1377 self.assertIn("world", matches)
1378 1378 _, matches = complete(line_buffer="d.loc[1:, '")
1379 1379 self.assertIn("hello", matches)
1380 1380 _, matches = complete(line_buffer="d.loc[1:1, '")
1381 1381 self.assertIn("hello", matches)
1382 1382 _, matches = complete(line_buffer="d.loc[1:1:-1, '")
1383 1383 self.assertIn("hello", matches)
1384 1384 _, matches = complete(line_buffer="d.loc[::, '")
1385 1385 self.assertIn("hello", matches)
1386 1386
1387 1387 def test_dict_key_completion_invalids(self):
1388 1388 """Smoke test cases dict key completion can't handle"""
1389 1389 ip = get_ipython()
1390 1390 complete = ip.Completer.complete
1391 1391
1392 1392 ip.user_ns["no_getitem"] = None
1393 1393 ip.user_ns["no_keys"] = []
1394 1394 ip.user_ns["cant_call_keys"] = dict
1395 1395 ip.user_ns["empty"] = {}
1396 1396 ip.user_ns["d"] = {"abc": 5}
1397 1397
1398 1398 _, matches = complete(line_buffer="no_getitem['")
1399 1399 _, matches = complete(line_buffer="no_keys['")
1400 1400 _, matches = complete(line_buffer="cant_call_keys['")
1401 1401 _, matches = complete(line_buffer="empty['")
1402 1402 _, matches = complete(line_buffer="name_error['")
1403 1403 _, matches = complete(line_buffer="d['\\") # incomplete escape
1404 1404
1405 1405 def test_object_key_completion(self):
1406 1406 ip = get_ipython()
1407 1407 ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])
1408 1408
1409 1409 _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
1410 1410 self.assertIn("qwerty", matches)
1411 1411 self.assertIn("qwick", matches)
1412 1412
1413 1413 def test_class_key_completion(self):
1414 1414 ip = get_ipython()
1415 1415 NamedInstanceClass("qwerty")
1416 1416 NamedInstanceClass("qwick")
1417 1417 ip.user_ns["named_instance_class"] = NamedInstanceClass
1418 1418
1419 1419 _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
1420 1420 self.assertIn("qwerty", matches)
1421 1421 self.assertIn("qwick", matches)
1422 1422
1423 1423 def test_tryimport(self):
1424 1424 """
1425 1425 Test that try_import doesn't crash on a trailing dot, and imports the module before it
1426 1426 """
1427 1427 from IPython.core.completerlib import try_import
1428 1428
1429 1429 assert try_import("IPython.")
1430 1430
1431 1431 def test_aimport_module_completer(self):
1432 1432 ip = get_ipython()
1433 1433 _, matches = ip.complete("i", "%aimport i")
1434 1434 self.assertIn("io", matches)
1435 1435 self.assertNotIn("int", matches)
1436 1436
1437 1437 def test_nested_import_module_completer(self):
1438 1438 ip = get_ipython()
1439 1439 _, matches = ip.complete(None, "import IPython.co", 17)
1440 1440 self.assertIn("IPython.core", matches)
1441 1441 self.assertNotIn("import IPython.core", matches)
1442 1442 self.assertNotIn("IPython.display", matches)
1443 1443
1444 1444 def test_import_module_completer(self):
1445 1445 ip = get_ipython()
1446 1446 _, matches = ip.complete("i", "import i")
1447 1447 self.assertIn("io", matches)
1448 1448 self.assertNotIn("int", matches)
1449 1449
1450 1450 def test_from_module_completer(self):
1451 1451 ip = get_ipython()
1452 1452 _, matches = ip.complete("B", "from io import B", 16)
1453 1453 self.assertIn("BytesIO", matches)
1454 1454 self.assertNotIn("BaseException", matches)
1455 1455
1456 1456 def test_snake_case_completion(self):
1457 1457 ip = get_ipython()
1458 1458 ip.Completer.use_jedi = False
1459 1459 ip.user_ns["some_three"] = 3
1460 1460 ip.user_ns["some_four"] = 4
1461 1461 _, matches = ip.complete("s_", "print(s_f")
1462 1462 self.assertIn("some_three", matches)
1463 1463 self.assertIn("some_four", matches)
1464 1464
1465 1465 def test_mix_terms(self):
1466 1466 ip = get_ipython()
1467 1467 from textwrap import dedent
1468 1468
1469 1469 ip.Completer.use_jedi = False
1470 1470 ip.ex(
1471 1471 dedent(
1472 1472 """
1473 1473 class Test:
1474 1474 def meth(self, meth_arg1):
1475 1475 print("meth")
1476 1476
1477 1477 def meth_1(self, meth1_arg1, meth1_arg2):
1478 1478 print("meth1")
1479 1479
1480 1480 def meth_2(self, meth2_arg1, meth2_arg2):
1481 1481 print("meth2")
1482 1482 test = Test()
1483 1483 """
1484 1484 )
1485 1485 )
1486 1486 _, matches = ip.complete(None, "test.meth(")
1487 1487 self.assertIn("meth_arg1=", matches)
1488 1488 self.assertNotIn("meth2_arg1=", matches)
1489 1489
1490 1490 def test_percent_symbol_restrict_to_magic_completions(self):
1491 1491 ip = get_ipython()
1492 1492 completer = ip.Completer
1493 1493 text = "%a"
1494 1494
1495 1495 with provisionalcompleter():
1496 1496 completer.use_jedi = True
1497 1497 completions = completer.completions(text, len(text))
1498 1498 for c in completions:
1499 1499 self.assertEqual(c.text[0], "%")
1500 1500
1501 1501 def test_fwd_unicode_restricts(self):
1502 1502 ip = get_ipython()
1503 1503 completer = ip.Completer
1504 1504 text = "\\ROMAN NUMERAL FIVE"
1505 1505
1506 1506 with provisionalcompleter():
1507 1507 completer.use_jedi = True
1508 1508 completions = [
1509 1509 completion.text for completion in completer.completions(text, len(text))
1510 1510 ]
1511 1511 self.assertEqual(completions, ["\u2164"])
1512 1512
1513 1513 def test_dict_key_restrict_to_dicts(self):
1514 1514 """Test that dict key suppresses non-dict completion items"""
1515 1515 ip = get_ipython()
1516 1516 c = ip.Completer
1517 1517 d = {"abc": None}
1518 1518 ip.user_ns["d"] = d
1519 1519
1520 1520 text = 'd["a'
1521 1521
1522 1522 def _():
1523 1523 with provisionalcompleter():
1524 1524 c.use_jedi = True
1525 1525 return [
1526 1526 completion.text for completion in c.completions(text, len(text))
1527 1527 ]
1528 1528
1529 1529 completions = _()
1530 1530 self.assertEqual(completions, ["abc"])
1531 1531
1532 1532 # check that it can be disabled in granular manner:
1533 1533 cfg = Config()
1534 1534 cfg.IPCompleter.suppress_competing_matchers = {
1535 1535 "IPCompleter.dict_key_matcher": False
1536 1536 }
1537 1537 c.update_config(cfg)
1538 1538
1539 1539 completions = _()
1540 1540 self.assertIn("abc", completions)
1541 1541 self.assertGreater(len(completions), 1)
1542 1542
1543 1543 def test_matcher_suppression(self):
1544 1544 @completion_matcher(identifier="a_matcher")
1545 1545 def a_matcher(text):
1546 1546 return ["completion_a"]
1547 1547
1548 1548 @completion_matcher(identifier="b_matcher", api_version=2)
1549 1549 def b_matcher(context: CompletionContext):
1550 1550 text = context.token
1551 1551 result = {"completions": [SimpleCompletion("completion_b")]}
1552 1552
1553 1553 if text == "suppress c":
1554 1554 result["suppress"] = {"c_matcher"}
1555 1555
1556 1556 if text.startswith("suppress all"):
1557 1557 result["suppress"] = True
1558 1558 if text == "suppress all but c":
1559 1559 result["do_not_suppress"] = {"c_matcher"}
1560 1560 if text == "suppress all but a":
1561 1561 result["do_not_suppress"] = {"a_matcher"}
1562 1562
1563 1563 return result
1564 1564
1565 1565 @completion_matcher(identifier="c_matcher")
1566 1566 def c_matcher(text):
1567 1567 return ["completion_c"]
1568 1568
1569 1569 with custom_matchers([a_matcher, b_matcher, c_matcher]):
1570 1570 ip = get_ipython()
1571 1571 c = ip.Completer
1572 1572
1573 1573 def _(text, expected):
1574 1574 c.use_jedi = False
1575 1575 s, matches = c.complete(text)
1576 1576 self.assertEqual(expected, matches)
1577 1577
1578 1578 _("do not suppress", ["completion_a", "completion_b", "completion_c"])
1579 1579 _("suppress all", ["completion_b"])
1580 1580 _("suppress all but a", ["completion_a", "completion_b"])
1581 1581 _("suppress all but c", ["completion_b", "completion_c"])
1582 1582
1583 1583 def configure(suppression_config):
1584 1584 cfg = Config()
1585 1585 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1586 1586 c.update_config(cfg)
1587 1587
1588 1588 # test that configuration takes priority over the run-time decisions
1589 1589
1590 1590 configure(False)
1591 1591 _("suppress all", ["completion_a", "completion_b", "completion_c"])
1592 1592
1593 1593 configure({"b_matcher": False})
1594 1594 _("suppress all", ["completion_a", "completion_b", "completion_c"])
1595 1595
1596 1596 configure({"a_matcher": False})
1597 1597 _("suppress all", ["completion_b"])
1598 1598
1599 1599 configure({"b_matcher": True})
1600 1600 _("do not suppress", ["completion_b"])
1601 1601
1602 1602 configure(True)
1603 1603 _("do not suppress", ["completion_a"])
1604 1604
1605 1605 def test_matcher_suppression_with_iterator(self):
1606 1606 @completion_matcher(identifier="matcher_returning_iterator")
1607 1607 def matcher_returning_iterator(text):
1608 1608 return iter(["completion_iter"])
1609 1609
1610 1610 @completion_matcher(identifier="matcher_returning_list")
1611 1611 def matcher_returning_list(text):
1612 1612 return ["completion_list"]
1613 1613
1614 1614 with custom_matchers([matcher_returning_iterator, matcher_returning_list]):
1615 1615 ip = get_ipython()
1616 1616 c = ip.Completer
1617 1617
1618 1618 def _(text, expected):
1619 1619 c.use_jedi = False
1620 1620 s, matches = c.complete(text)
1621 1621 self.assertEqual(expected, matches)
1622 1622
1623 1623 def configure(suppression_config):
1624 1624 cfg = Config()
1625 1625 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1626 1626 c.update_config(cfg)
1627 1627
1628 1628 configure(False)
1629 1629 _("---", ["completion_iter", "completion_list"])
1630 1630
1631 1631 configure(True)
1632 1632 _("---", ["completion_iter"])
1633 1633
1634 1634 configure(None)
1635 1635 _("--", ["completion_iter", "completion_list"])
1636 1636
1637 1637 def test_matcher_suppression_with_jedi(self):
1638 1638 ip = get_ipython()
1639 1639 c = ip.Completer
1640 1640 c.use_jedi = True
1641 1641
1642 1642 def configure(suppression_config):
1643 1643 cfg = Config()
1644 1644 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1645 1645 c.update_config(cfg)
1646 1646
1647 1647 def _():
1648 1648 with provisionalcompleter():
1649 1649 matches = [completion.text for completion in c.completions("dict.", 5)]
1650 1650 self.assertIn("keys", matches)
1651 1651
1652 1652 configure(False)
1653 1653 _()
1654 1654
1655 1655 configure(True)
1656 1656 _()
1657 1657
1658 1658 configure(None)
1659 1659 _()
1660 1660
1661 1661 def test_matcher_disabling(self):
1662 1662 @completion_matcher(identifier="a_matcher")
1663 1663 def a_matcher(text):
1664 1664 return ["completion_a"]
1665 1665
1666 1666 @completion_matcher(identifier="b_matcher")
1667 1667 def b_matcher(text):
1668 1668 return ["completion_b"]
1669 1669
1670 1670 def _(expected):
1671 1671 s, matches = c.complete("completion_")
1672 1672 self.assertEqual(expected, matches)
1673 1673
1674 1674 with custom_matchers([a_matcher, b_matcher]):
1675 1675 ip = get_ipython()
1676 1676 c = ip.Completer
1677 1677
1678 1678 _(["completion_a", "completion_b"])
1679 1679
1680 1680 cfg = Config()
1681 1681 cfg.IPCompleter.disable_matchers = ["b_matcher"]
1682 1682 c.update_config(cfg)
1683 1683
1684 1684 _(["completion_a"])
1685 1685
1686 1686 cfg.IPCompleter.disable_matchers = []
1687 1687 c.update_config(cfg)
1688 1688
1689 1689 def test_matcher_priority(self):
1690 1690 @completion_matcher(identifier="a_matcher", priority=0, api_version=2)
1691 1691 def a_matcher(text):
1692 1692 return {"completions": [SimpleCompletion("completion_a")], "suppress": True}
1693 1693
1694 1694 @completion_matcher(identifier="b_matcher", priority=2, api_version=2)
1695 1695 def b_matcher(text):
1696 1696 return {"completions": [SimpleCompletion("completion_b")], "suppress": True}
1697 1697
1698 1698 def _(expected):
1699 1699 s, matches = c.complete("completion_")
1700 1700 self.assertEqual(expected, matches)
1701 1701
1702 1702 with custom_matchers([a_matcher, b_matcher]):
1703 1703 ip = get_ipython()
1704 1704 c = ip.Completer
1705 1705
1706 1706 _(["completion_b"])
1707 1707 a_matcher.matcher_priority = 3
1708 1708 _(["completion_a"])
1709 1709
1710 1710
1711 1711 @pytest.mark.parametrize(
1712 1712 "input, expected",
1713 1713 [
1714 1714 ["1.234", "1.234"],
1715 1715 # should match signed numbers
1716 1716 ["+1", "+1"],
1717 1717 ["-1", "-1"],
1718 1718 ["-1.0", "-1.0"],
1719 1719 ["-1.", "-1."],
1720 1720 ["+1.", "+1."],
1721 1721 [".1", ".1"],
1722 1722 # should not match non-numbers
1723 1723 ["1..", None],
1724 1724 ["..", None],
1725 1725 [".1.", None],
1726 1726 # should match after comma
1727 1727 [",1", "1"],
1728 1728 [", 1", "1"],
1729 1729 [", .1", ".1"],
1730 1730 [", +.1", "+.1"],
1731 1731 # should not match after trailing spaces
1732 1732 [".1 ", None],
1733 1733 # some complex cases
1734 1734 ["0b_0011_1111_0100_1110", "0b_0011_1111_0100_1110"],
1735 1735 ["0xdeadbeef", "0xdeadbeef"],
1736 1736 ["0b_1110_0101", "0b_1110_0101"],
1737 1737 ],
1738 1738 )
1739 1739 def test_match_numeric_literal_for_dict_key(input, expected):
1740 1740 assert _match_number_in_dict_key_prefix(input) == expected
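The numeric-literal cases in the parametrized test above can be reproduced with a small regex sketch. This is a hypothetical stand-in for illustration only, not IPython's actual `_match_number_in_dict_key_prefix` implementation:

```python
import re

# Hypothetical sketch: extract a signed numeric literal (int, float, hex or
# binary) at the end of a dict-key prefix, either at the start of the buffer
# or right after a comma separator. Not IPython's real implementation.
_NUMBER_AT_END = re.compile(
    r"(?:^|,\s*)"  # anchored at the buffer start or after a comma
    r"([+-]?(?:0[xb][0-9a-f_]+|\d*\.?\d+|\d+\.))$",  # signed int/float/hex/bin
    re.IGNORECASE,
)

def match_number_prefix(text):
    """Return the matched literal, or None (mirroring the test's `expected`)."""
    m = _NUMBER_AT_END.search(text)
    return m.group(1) if m else None
```

Anchoring the pattern at the end of the buffer is what makes `".1 "` (trailing space) and `"1.."` fail to match, while `", +.1"` still yields `"+.1"`.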
@@ -1,256 +1,256 b''
1 1 from typing import NamedTuple
2 2 from IPython.core.guarded_eval import (
3 3 EvaluationContext,
4 4 GuardRejection,
5 5 guarded_eval,
6 6 unbind_method,
7 7 )
8 8 from IPython.testing import decorators as dec
9 9 import pytest
10 10
11 11
12 def limitted(**kwargs):
13 return EvaluationContext(locals_=kwargs, globals_={}, evaluation="limitted")
12 def limited(**kwargs):
13 return EvaluationContext(locals_=kwargs, globals_={}, evaluation="limited")
14 14
15 15
16 16 def unsafe(**kwargs):
17 17 return EvaluationContext(locals_=kwargs, globals_={}, evaluation="unsafe")
18 18
19 19
20 20 @dec.skip_without("pandas")
21 21 def test_pandas_series_iloc():
22 22 import pandas as pd
23 23
24 24 series = pd.Series([1], index=["a"])
25 context = limitted(data=series)
25 context = limited(data=series)
26 26 assert guarded_eval("data.iloc[0]", context) == 1
27 27
28 28
29 29 @dec.skip_without("pandas")
30 30 def test_pandas_series():
31 31 import pandas as pd
32 32
33 context = limitted(data=pd.Series([1], index=["a"]))
33 context = limited(data=pd.Series([1], index=["a"]))
34 34 assert guarded_eval('data["a"]', context) == 1
35 35 with pytest.raises(KeyError):
36 36 guarded_eval('data["c"]', context)
37 37
38 38
39 39 @dec.skip_without("pandas")
40 40 def test_pandas_bad_series():
41 41 import pandas as pd
42 42
43 43 class BadItemSeries(pd.Series):
44 44 def __getitem__(self, key):
45 45 return "CUSTOM_ITEM"
46 46
47 47 class BadAttrSeries(pd.Series):
48 48 def __getattr__(self, key):
49 49 return "CUSTOM_ATTR"
50 50
51 51 bad_series = BadItemSeries([1], index=["a"])
52 context = limitted(data=bad_series)
52 context = limited(data=bad_series)
53 53
54 54 with pytest.raises(GuardRejection):
55 55 guarded_eval('data["a"]', context)
56 56 with pytest.raises(GuardRejection):
57 57 guarded_eval('data["c"]', context)
58 58
59 59 # note: here result is a bit unexpected because
60 60 # pandas `__getattr__` calls `__getitem__`;
61 61 # FIXME - special case to handle it?
62 62 assert guarded_eval("data.a", context) == "CUSTOM_ITEM"
63 63
64 64 context = unsafe(data=bad_series)
65 65 assert guarded_eval('data["a"]', context) == "CUSTOM_ITEM"
66 66
67 67 bad_attr_series = BadAttrSeries([1], index=["a"])
68 context = limitted(data=bad_attr_series)
68 context = limited(data=bad_attr_series)
69 69 assert guarded_eval('data["a"]', context) == 1
70 70 with pytest.raises(GuardRejection):
71 71 guarded_eval("data.a", context)
72 72
73 73
74 74 @dec.skip_without("pandas")
75 75 def test_pandas_dataframe_loc():
76 76 import pandas as pd
77 77 from pandas.testing import assert_series_equal
78 78
79 79 data = pd.DataFrame([{"a": 1}])
80 context = limitted(data=data)
80 context = limited(data=data)
81 81 assert_series_equal(guarded_eval('data.loc[:, "a"]', context), data["a"])
82 82
83 83
84 84 def test_named_tuple():
85 85 class GoodNamedTuple(NamedTuple):
86 86 a: str
87 87 pass
88 88
89 89 class BadNamedTuple(NamedTuple):
90 90 a: str
91 91
92 92 def __getitem__(self, key):
93 93 return None
94 94
95 95 good = GoodNamedTuple(a="x")
96 96 bad = BadNamedTuple(a="x")
97 97
98 context = limitted(data=good)
98 context = limited(data=good)
99 99 assert guarded_eval("data[0]", context) == "x"
100 100
101 context = limitted(data=bad)
101 context = limited(data=bad)
102 102 with pytest.raises(GuardRejection):
103 103 guarded_eval("data[0]", context)
104 104
105 105
106 106 def test_dict():
107 context = limitted(data={"a": 1, "b": {"x": 2}, ("x", "y"): 3})
107 context = limited(data={"a": 1, "b": {"x": 2}, ("x", "y"): 3})
108 108 assert guarded_eval('data["a"]', context) == 1
109 109 assert guarded_eval('data["b"]', context) == {"x": 2}
110 110 assert guarded_eval('data["b"]["x"]', context) == 2
111 111 assert guarded_eval('data["x", "y"]', context) == 3
112 112
113 113 assert guarded_eval("data.keys", context)
114 114
115 115
116 116 def test_set():
117 context = limitted(data={"a", "b"})
117 context = limited(data={"a", "b"})
118 118 assert guarded_eval("data.difference", context)
119 119
120 120
121 121 def test_list():
122 context = limitted(data=[1, 2, 3])
122 context = limited(data=[1, 2, 3])
123 123 assert guarded_eval("data[1]", context) == 2
124 124 assert guarded_eval("data.copy", context)
125 125
126 126
127 127 def test_dict_literal():
128 context = limitted()
128 context = limited()
129 129 assert guarded_eval("{}", context) == {}
130 130 assert guarded_eval('{"a": 1}', context) == {"a": 1}
131 131
132 132
133 133 def test_list_literal():
134 context = limitted()
134 context = limited()
135 135 assert guarded_eval("[]", context) == []
136 136 assert guarded_eval('[1, "a"]', context) == [1, "a"]
137 137
138 138
139 139 def test_set_literal():
140 context = limitted()
140 context = limited()
141 141 assert guarded_eval("set()", context) == set()
142 142 assert guarded_eval('{"a"}', context) == {"a"}
143 143
144 144
145 145 def test_if_expression():
146 context = limitted()
146 context = limited()
147 147 assert guarded_eval("2 if True else 3", context) == 2
148 148 assert guarded_eval("4 if False else 5", context) == 5
149 149
150 150
151 151 def test_object():
152 152 obj = object()
153 context = limitted(obj=obj)
153 context = limited(obj=obj)
154 154 assert guarded_eval("obj.__dir__", context) == obj.__dir__
155 155
156 156
157 157 @pytest.mark.parametrize(
158 158 "code,expected",
159 159 [
160 160 ["int.numerator", int.numerator],
161 161 ["float.is_integer", float.is_integer],
162 162 ["complex.real", complex.real],
163 163 ],
164 164 )
165 165 def test_number_attributes(code, expected):
166 assert guarded_eval(code, limitted()) == expected
166 assert guarded_eval(code, limited()) == expected
167 167
168 168
169 169 def test_method_descriptor():
170 context = limitted()
170 context = limited()
171 171 assert guarded_eval("list.copy.__name__", context) == "copy"
172 172
173 173
174 174 @pytest.mark.parametrize(
175 175 "data,good,bad,expected",
176 176 [
177 177 [[1, 2, 3], "data.index(2)", "data.append(4)", 1],
178 178 [{"a": 1}, "data.keys().isdisjoint({})", "data.update()", True],
179 179 ],
180 180 )
181 181 def test_calls(data, good, bad, expected):
182 context = limitted(data=data)
182 context = limited(data=data)
183 183 assert guarded_eval(good, context) == expected
184 184
185 185 with pytest.raises(GuardRejection):
186 186 guarded_eval(bad, context)
187 187
188 188
189 189 @pytest.mark.parametrize(
190 190 "code,expected",
191 191 [
192 192 ["(1\n+\n1)", 2],
193 193 ["list(range(10))[-1:]", [9]],
194 194 ["list(range(20))[3:-2:3]", [3, 6, 9, 12, 15]],
195 195 ],
196 196 )
197 197 def test_literals(code, expected):
198 context = limitted()
198 context = limited()
199 199 assert guarded_eval(code, context) == expected
200 200
201 201
202 202 def test_subscript():
203 203 context = EvaluationContext(
204 locals_={}, globals_={}, evaluation="limitted", in_subscript=True
204 locals_={}, globals_={}, evaluation="limited", in_subscript=True
205 205 )
206 206 empty_slice = slice(None, None, None)
207 207 assert guarded_eval("", context) == tuple()
208 208 assert guarded_eval(":", context) == empty_slice
209 209 assert guarded_eval("1:2:3", context) == slice(1, 2, 3)
210 210 assert guarded_eval(':, "a"', context) == (empty_slice, "a")
211 211
212 212
213 213 def test_unbind_method():
214 214 class X(list):
215 215 def index(self, k):
216 216 return "CUSTOM"
217 217
218 218 x = X()
219 219 assert unbind_method(x.index) is X.index
220 220 assert unbind_method([].index) is list.index
221 221
222 222
223 223 def test_assumption_instance_attr_do_not_matter():
224 224 """This is semi-specified in Python documentation.
225 225
226 226 However, since the specification says 'not guaranted
227 227 to work' rather than 'is forbidden to work', future
228 228 versions could invalidate this assumptions. This test
229 229 is meant to catch such a change if it ever comes true.
230 230 """
231 231
232 232 class T:
233 233 def __getitem__(self, k):
234 234 return "a"
235 235
236 236 def __getattr__(self, k):
237 237 return "a"
238 238
239 239 t = T()
240 240 t.__getitem__ = lambda f: "b"
241 241 t.__getattr__ = lambda f: "b"
242 242 assert t[1] == "a"
243 243 assert t.a == "a"
244 244
245 245
246 246 def test_assumption_named_tuples_share_getitem():
247 247 """Check assumption on named tuples sharing __getitem__"""
248 248 from typing import NamedTuple
249 249
250 250 class A(NamedTuple):
251 251 pass
252 252
253 253 class B(NamedTuple):
254 254 pass
255 255
256 256 assert A.__getitem__ == B.__getitem__
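As a rough illustration of the mechanism these tests exercise, guarded evaluation can be thought of as walking an expression's AST and rejecting any node outside an allow-list before evaluating it. Below is a toy stdlib-only sketch; the real `guarded_eval` policy is far more elaborate, with per-context evaluation levels such as "limited" and "unsafe":

```python
import ast

class GuardRejection(Exception):
    """Raised when an expression uses a construct outside the allow-list."""

# Allow only names, constants, subscripts, tuples and slices; no calls,
# attribute access, or operators. (Toy policy, not IPython's actual one.)
_ALLOWED_NODES = tuple(
    node_type
    for node_type in (
        ast.Expression, ast.Constant, ast.Name, ast.Load,
        ast.Subscript, ast.Tuple, ast.Slice,
        getattr(ast, "Index", None),  # pre-3.9 subscript wrapper, if present
    )
    if node_type is not None
)

def toy_guarded_eval(code, namespace):
    """Evaluate `code` against `namespace` only if every AST node is allowed."""
    tree = ast.parse(code, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, _ALLOWED_NODES):
            raise GuardRejection(type(node).__name__)
    return eval(compile(tree, "<guarded>", "eval"), {}, dict(namespace))
```

With this sketch, `toy_guarded_eval('data["a"]', {"data": {"a": 1}})` returns `1`, while `toy_guarded_eval("data.update()", {"data": {}})` raises `GuardRejection`, because `Attribute` and `Call` nodes are not on the allow-list.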