Fix completion tuple (#14594)...
M Bussonnier -
r28978:9cdf92d3 merge
@@ -1,3389 +1,3421
1 1 """Completion for IPython.
2 2
3 3 This module started as fork of the rlcompleter module in the Python standard
4 4 library. The original enhancements made to rlcompleter have been sent
5 5 upstream and were accepted as of Python 2.3,
6 6
7 7 This module now supports a wide variety of completion mechanisms, both
8 8 for normal classic Python code and for IPython-specific syntax such as
9 9 magics.
10 10
11 11 Latex and Unicode completion
12 12 ============================
13 13
14 14 IPython and compatible frontends can not only complete your code, but also
15 15 help you input a wide range of characters. In particular, you can insert
16 16 a Unicode character using the tab-completion mechanism.
17 17
18 18 Forward latex/unicode completion
19 19 --------------------------------
20 20
21 21 Forward completion allows you to easily type a Unicode character using its
22 22 latex name or Unicode long description. To do so, type a backslash followed
23 23 by the relevant name and press tab:
24 24
25 25
26 26 Using latex completion:
27 27
28 28 .. code::
29 29
30 30 \\alpha<tab>
31 31 α
32 32
33 33 or using unicode completion:
34 34
35 35
36 36 .. code::
37 37
38 38 \\GREEK SMALL LETTER ALPHA<tab>
39 39 α
40 40
41 41
42 42 Only valid Python identifiers will complete. Combining characters (like arrows
43 43 or dots) are also available; unlike in latex, they need to be put after their
44 44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45 45
46 46 Some browsers are known to display combining characters incorrectly.
47 47
48 48 Backward latex completion
49 49 -------------------------
50 50
51 51 It is sometimes challenging to know how to type a character. If you are using
52 52 IPython or any compatible frontend, you can prepend a backslash to the
53 53 character and press :kbd:`Tab` to expand it to its latex form.
54 54
55 55 .. code::
56 56
57 57 \\α<tab>
58 58 \\alpha
59 59
60 60
61 61 Both forward and backward completions can be deactivated by setting the
62 62 :std:configtrait:`Completer.backslash_combining_completions` option to
63 63 ``False``.
64 64
65 65
66 66 Experimental
67 67 ============
68 68
69 69 Starting with IPython 6.0, this module can make use of the Jedi library to
70 70 generate completions both by static analysis of the code and by dynamically
71 71 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
72 72 library for Python. The APIs attached to this new mechanism are unstable and
73 73 will raise unless used in a :any:`provisionalcompleter` context manager.
74 74
75 75 You will find that the following are experimental:
76 76
77 77 - :any:`provisionalcompleter`
78 78 - :any:`IPCompleter.completions`
79 79 - :any:`Completion`
80 80 - :any:`rectify_completions`
81 81
82 82 .. note::
83 83
84 84 better name for :any:`rectify_completions` ?
85 85
86 86 We welcome any feedback on these new APIs, and we also encourage you to try this
87 87 module in debug mode (start IPython with ``--Completer.debug=True``) in order
88 88 to have extra logging information if :any:`jedi` is crashing, or if the current
89 89 IPython completer's pending deprecations are returning results not yet handled
90 90 by :any:`jedi`.
91 91
92 92 Using Jedi for tab completion allows snippets like the following to work without
93 93 having to execute any code:
94 94
95 95 >>> myvar = ['hello', 42]
96 96 ... myvar[1].bi<tab>
97 97
98 98 Tab completion will be able to infer that ``myvar[1]`` is a real number without
99 99 executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
100 100 option.
101 101
102 102 Be sure to update :any:`jedi` to the latest stable version or to try the
103 103 current development version to get better completions.
104 104
105 105 Matchers
106 106 ========
107 107
108 108 All completion routines are implemented using the unified *Matchers* API.
109 109 The matchers API is provisional and subject to change without notice.
110 110
111 111 The built-in matchers include:
112 112
113 113 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
114 114 - :any:`IPCompleter.magic_matcher`: completions for magics,
115 115 - :any:`IPCompleter.unicode_name_matcher`,
116 116 :any:`IPCompleter.fwd_unicode_matcher`
117 117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
118 118 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
119 119 - :any:`IPCompleter.file_matcher`: paths to files and directories,
120 120 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
121 121 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
122 122 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
123 123 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
124 124 implementation in :any:`InteractiveShell` which uses IPython hooks system
125 125 (`complete_command`) with string dispatch (including regular expressions).
126 126 Unlike other matchers, ``custom_completer_matcher`` will not suppress
127 127 Jedi results, to match behaviour in earlier IPython versions.
128 128
129 129 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
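As a sketch of the legacy (API v1) shape, a custom matcher is simply a callable taking the current token and returning a list of strings. The vocabulary, the ``fruit_matcher`` name, and the registration snippet in the comment are illustrative assumptions, not part of IPython itself:

```python
# Minimal sketch of a legacy (API v1) custom matcher: a callable taking the
# current token and returning a list of string completions. The vocabulary
# and function name are hypothetical.
from typing import List

def fruit_matcher(text: str) -> List[str]:
    """Complete a fixed, made-up vocabulary against the current token."""
    vocabulary = ["apple", "apricot", "banana"]
    return [word for word in vocabulary if word.startswith(text)]

# In a live IPython session one would register it roughly like this
# (not executed here, so the sketch stays self-contained):
#     get_ipython().Completer.custom_matchers.append(fruit_matcher)
print(fruit_matcher("ap"))  # ['apple', 'apricot']
```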
130 130
131 131 Matcher API
132 132 -----------
133 133
134 134 Simplifying some details, the ``Matcher`` interface can be described as
135 135
136 136 .. code-block::
137 137
138 138 MatcherAPIv1 = Callable[[str], list[str]]
139 139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
140 140
141 141 Matcher = MatcherAPIv1 | MatcherAPIv2
142 142
143 143 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
144 144 and remains supported as the simplest way of generating completions. This is also
145 145 currently the only API supported by the IPython hooks system `complete_command`.
146 146
147 147 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
148 148 More precisely, the API allows omitting ``matcher_api_version`` for v1 matchers,
149 149 and requires a literal ``2`` for v2 matchers.
150 150
151 151 Once the API stabilises future versions may relax the requirement for specifying
152 152 ``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore
153 153 please do not rely on the presence of ``matcher_api_version`` for any purposes.
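A v2-style matcher can be sketched as below. The stand-in dataclasses are defined locally so the example runs without IPython installed; the real ``CompletionContext`` and ``SimpleCompletion`` classes live in ``IPython.core.completer`` and carry more detail than shown here:

```python
# Sketch of a Matcher API v2-style matcher, using local stand-ins for
# IPython's CompletionContext / SimpleCompletion so the example is
# self-contained. Field names mirror the real API; the matcher itself
# and its vocabulary are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CompletionContext:
    token: str            # code fragment directly preceding the cursor
    full_text: str
    cursor_position: int
    cursor_line: int
    limit: Optional[int] = None

@dataclass
class SimpleCompletion:
    text: str
    type: Optional[str] = None

def color_matcher(context: CompletionContext) -> dict:
    """Return a SimpleMatcherResult-shaped dict for a fixed vocabulary."""
    names = ["red", "green", "blue"]
    matches = [n for n in names if n.startswith(context.token)]
    return {
        "completions": [SimpleCompletion(text=n, type="color") for n in matches],
    }

# v2 matchers must advertise their API version with a literal ``2``.
color_matcher.matcher_api_version = 2

ctx = CompletionContext(token="gr", full_text="gr", cursor_position=2, cursor_line=0)
print([c.text for c in color_matcher(ctx)["completions"]])  # ['green']
```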
154 154
155 155 Suppression of competing matchers
156 156 ---------------------------------
157 157
158 158 By default results from all matchers are combined, in the order determined by
159 159 their priority. Matchers can request to suppress results from subsequent
160 160 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
161 161
162 162 When multiple matchers simultaneously request suppression, the results from
163 163 the matcher with the higher priority will be returned.
164 164
165 165 Sometimes it is desirable to suppress most but not all other matchers;
166 166 this can be achieved by adding a set of identifiers of matchers which
167 167 should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key.
168 168
169 169 The suppression behaviour is user-configurable via
170 170 :std:configtrait:`IPCompleter.suppress_competing_matchers`.
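The suppression keys can be illustrated with a result dict of the shape described above. The matcher, its vocabulary, and the choice of which matcher to exempt are hypothetical; completions are plain strings here to keep the sketch minimal:

```python
# Illustrative sketch of the ``suppress`` / ``do_not_suppress`` keys in a
# MatcherResult-shaped dict. The matcher and vocabulary are made up;
# ``IPCompleter.magic_matcher`` is a real built-in matcher identifier.
def exclusive_matcher(token: str) -> dict:
    return {
        "completions": [w for w in ("alpha", "alias") if w.startswith(token)],
        # Ask the completer to drop results from other matchers...
        "suppress": True,
        # ...except those whose identifiers are listed here:
        "do_not_suppress": {"IPCompleter.magic_matcher"},
    }

result = exclusive_matcher("al")
print(result["completions"], result["suppress"])  # ['alpha', 'alias'] True
```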
171 171 """
172 172
173 173
174 174 # Copyright (c) IPython Development Team.
175 175 # Distributed under the terms of the Modified BSD License.
176 176 #
177 177 # Some of this code originated from rlcompleter in the Python standard library
178 178 # Copyright (C) 2001 Python Software Foundation, www.python.org
179 179
180 180 from __future__ import annotations
181 181 import builtins as builtin_mod
182 182 import enum
183 183 import glob
184 184 import inspect
185 185 import itertools
186 186 import keyword
187 import ast
187 188 import os
188 189 import re
189 190 import string
190 191 import sys
191 192 import tokenize
192 193 import time
193 194 import unicodedata
194 195 import uuid
195 196 import warnings
196 197 from ast import literal_eval
197 198 from collections import defaultdict
198 199 from contextlib import contextmanager
199 200 from dataclasses import dataclass
200 201 from functools import cached_property, partial
201 202 from types import SimpleNamespace
202 203 from typing import (
203 204 Iterable,
204 205 Iterator,
205 206 List,
206 207 Tuple,
207 208 Union,
208 209 Any,
209 210 Sequence,
210 211 Dict,
211 212 Optional,
212 213 TYPE_CHECKING,
213 214 Set,
214 215 Sized,
215 216 TypeVar,
216 217 Literal,
217 218 )
218 219
219 220 from IPython.core.guarded_eval import guarded_eval, EvaluationContext
220 221 from IPython.core.error import TryNext
221 222 from IPython.core.inputtransformer2 import ESC_MAGIC
222 223 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
223 224 from IPython.core.oinspect import InspectColors
224 225 from IPython.testing.skipdoctest import skip_doctest
225 226 from IPython.utils import generics
226 227 from IPython.utils.decorators import sphinx_options
227 228 from IPython.utils.dir2 import dir2, get_real_method
228 229 from IPython.utils.docs import GENERATING_DOCUMENTATION
229 230 from IPython.utils.path import ensure_dir_exists
230 231 from IPython.utils.process import arg_split
231 232 from traitlets import (
232 233 Bool,
233 234 Enum,
234 235 Int,
235 236 List as ListTrait,
236 237 Unicode,
237 238 Dict as DictTrait,
238 239 Union as UnionTrait,
239 240 observe,
240 241 )
241 242 from traitlets.config.configurable import Configurable
242 243
243 244 import __main__
244 245
245 246 # skip module docstests
246 247 __skip_doctest__ = True
247 248
248 249
249 250 try:
250 251 import jedi
251 252 jedi.settings.case_insensitive_completion = False
252 253 import jedi.api.helpers
253 254 import jedi.api.classes
254 255 JEDI_INSTALLED = True
255 256 except ImportError:
256 257 JEDI_INSTALLED = False
257 258
258 259
259 260 if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
260 261 from typing import cast
261 262 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
262 263 else:
263 264 from typing import Generic
264 265
265 266 def cast(type_, obj):
266 267 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
267 268 return obj
268 269
269 270 # not required at runtime
270 271 NotRequired = Tuple # requires Python >=3.11
271 272 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
272 273 Protocol = object # requires Python >=3.8
273 274 TypeAlias = Any # requires Python >=3.10
274 275 TypeGuard = Generic # requires Python >=3.10
275 276 if GENERATING_DOCUMENTATION:
276 277 from typing import TypedDict
277 278
278 279 # -----------------------------------------------------------------------------
279 280 # Globals
280 281 # -----------------------------------------------------------------------------
281 282
282 283 # Ranges where we have most of the valid Unicode names. We could be finer
283 284 # grained, but is it worth it for performance? While Unicode has characters in
284 285 # the range 0-0x110000, we seem to have names for only about 10% of those
285 286 # (131808 as I write this). With the ranges below we cover them all, with a
286 287 # density of ~67%; the biggest next gap we could consider only adds about 1%
287 288 # density, and there are 600 gaps that would need hard coding.
288 289 _UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]
289 290
290 291 # Public API
291 292 __all__ = ["Completer", "IPCompleter"]
292 293
293 294 if sys.platform == 'win32':
294 295 PROTECTABLES = ' '
295 296 else:
296 297 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
297 298
298 299 # Protect against returning an enormous number of completions which the frontend
299 300 # may have trouble processing.
300 301 MATCHES_LIMIT = 500
301 302
302 303 # Completion type reported when no type can be inferred.
303 304 _UNKNOWN_TYPE = "<unknown>"
304 305
305 306 # sentinel value to signal lack of a match
306 307 not_found = object()
307 308
308 309 class ProvisionalCompleterWarning(FutureWarning):
309 310 """
310 311 Exception raised by an experimental feature in this module.
311 312
312 313 Wrap code in :any:`provisionalcompleter` context manager if you
313 314 are certain you want to use an unstable feature.
314 315 """
315 316 pass
316 317
317 318 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
318 319
319 320
320 321 @skip_doctest
321 322 @contextmanager
322 323 def provisionalcompleter(action='ignore'):
323 324 """
324 325 This context manager has to be used in any place where unstable completer
325 326 behavior and API may be called.
326 327
327 328 >>> with provisionalcompleter():
328 329 ... completer.do_experimental_things() # works
329 330
330 331 >>> completer.do_experimental_things() # raises.
331 332
332 333 .. note::
333 334
334 335 Unstable
335 336
336 337 By using this context manager you agree that the APIs in use may change
337 338 without warning, and that you won't complain if they do so.
338 339
339 340 You also understand that, if the API is not to your liking, you should report
340 341 a bug to explain your use case upstream.
341 342
342 343 We'll be happy to get your feedback, feature requests, and improvements on
343 344 any of the unstable APIs!
344 345 """
345 346 with warnings.catch_warnings():
346 347 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
347 348 yield
348 349
349 350
350 def has_open_quotes(s):
351 def has_open_quotes(s: str) -> Union[str, bool]:
351 352 """Return whether a string has open quotes.
352 353
353 354 This simply counts whether the number of quote characters of either type in
354 355 the string is odd.
355 356
356 357 Returns
357 358 -------
358 359 If there is an open quote, the quote character is returned. Else, return
359 360 False.
360 361 """
361 362 # We check " first, then ', so complex cases with nested quotes will get
362 363 # the " to take precedence.
363 364 if s.count('"') % 2:
364 365 return '"'
365 366 elif s.count("'") % 2:
366 367 return "'"
367 368 else:
368 369 return False
369 370
370 371
371 def protect_filename(s, protectables=PROTECTABLES):
372 def protect_filename(s: str, protectables: str = PROTECTABLES) -> str:
372 373 """Escape a string to protect certain characters."""
373 374 if set(s) & set(protectables):
374 375 if sys.platform == "win32":
375 376 return '"' + s + '"'
376 377 else:
377 378 return "".join(("\\" + c if c in protectables else c) for c in s)
378 379 else:
379 380 return s
380 381
381 382
382 383 def expand_user(path:str) -> Tuple[str, bool, str]:
383 384 """Expand ``~``-style usernames in strings.
384 385
385 386 This is similar to :func:`os.path.expanduser`, but it computes and returns
386 387 extra information that will be useful if the input was being used in
387 388 computing completions, and you wish to return the completions with the
388 389 original '~' instead of its expanded value.
389 390
390 391 Parameters
391 392 ----------
392 393 path : str
393 394 String to be expanded. If no ~ is present, the output is the same as the
394 395 input.
395 396
396 397 Returns
397 398 -------
398 399 newpath : str
399 400 Result of ~ expansion in the input path.
400 401 tilde_expand : bool
401 402 Whether any expansion was performed or not.
402 403 tilde_val : str
403 404 The value that ~ was replaced with.
404 405 """
405 406 # Default values
406 407 tilde_expand = False
407 408 tilde_val = ''
408 409 newpath = path
409 410
410 411 if path.startswith('~'):
411 412 tilde_expand = True
412 413 rest = len(path)-1
413 414 newpath = os.path.expanduser(path)
414 415 if rest:
415 416 tilde_val = newpath[:-rest]
416 417 else:
417 418 tilde_val = newpath
418 419
419 420 return newpath, tilde_expand, tilde_val
420 421
421 422
422 423 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
423 424 """Does the opposite of expand_user, with its outputs.
424 425 """
425 426 if tilde_expand:
426 427 return path.replace(tilde_val, '~')
427 428 else:
428 429 return path
429 430
430 431
431 432 def completions_sorting_key(word):
432 433 """key for sorting completions
433 434
434 435 This does several things:
435 436
436 437 - Demote any completions starting with underscores to the end
437 438 - Insert any %magic and %%cellmagic completions in the alphabetical order
438 439 by their name
439 440 """
440 441 prio1, prio2 = 0, 0
441 442
442 443 if word.startswith('__'):
443 444 prio1 = 2
444 445 elif word.startswith('_'):
445 446 prio1 = 1
446 447
447 448 if word.endswith('='):
448 449 prio1 = -1
449 450
450 451 if word.startswith('%%'):
451 452 # If there's another % in there, this is something else, so leave it alone
452 if not "%" in word[2:]:
453 if "%" not in word[2:]:
453 454 word = word[2:]
454 455 prio2 = 2
455 456 elif word.startswith('%'):
456 if not "%" in word[1:]:
457 if "%" not in word[1:]:
457 458 word = word[1:]
458 459 prio2 = 1
459 460
460 461 return prio1, word, prio2
461 462
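The sorting behaviour of this key can be demonstrated with a condensed re-implementation (duplicated here only so the sketch runs standalone; the real function is ``completions_sorting_key`` above, and the candidate list is hypothetical):

```python
# Condensed re-implementation of completions_sorting_key, for illustration
# only. Magics sort by their bare name among ordinary words; underscored
# names sink to the end.
def sorting_key(word):
    prio1, prio2 = 0, 0
    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1
    if word.endswith('='):
        prio1 = -1
    if word.startswith('%%'):
        if '%' not in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if '%' not in word[1:]:
            word = word[1:]
            prio2 = 1
    return prio1, word, prio2

words = ['_hidden', 'alpha', '%%time', '__dunder', 'beta', '%ls']
print(sorted(words, key=sorting_key))
# ['alpha', 'beta', '%ls', '%%time', '_hidden', '__dunder']
```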
462 463
463 464 class _FakeJediCompletion:
464 465 """
465 466 This is a workaround to communicate to the UI that Jedi has crashed and to
466 467 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
467 468
468 469 Added in IPython 6.0 so should likely be removed for 7.0
469 470
470 471 """
471 472
472 473 def __init__(self, name):
473 474
474 475 self.name = name
475 476 self.complete = name
476 477 self.type = 'crashed'
477 478 self.name_with_symbols = name
478 479 self.signature = ""
479 480 self._origin = "fake"
480 481 self.text = "crashed"
481 482
482 483 def __repr__(self):
483 484 return '<Fake completion object jedi has crashed>'
484 485
485 486
486 487 _JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
487 488
488 489
489 490 class Completion:
490 491 """
491 492 Completion object used and returned by IPython completers.
492 493
493 494 .. warning::
494 495
495 496 Unstable
496 497
497 498 This function is unstable, the API may change without warning.
498 499 It will also raise unless used in the proper context manager.
499 500
500 501 This acts as a middle-ground :any:`Completion` object between the
501 502 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
502 503 object. While Jedi needs a lot of information about the evaluator and how the
503 504 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
504 505 need user-facing information:
505 506
506 507 - Which range should be replaced by what.
507 508 - Some metadata (like the completion type), or meta-information to display to
508 509 the user.
509 510
510 511 For debugging purposes we can also store the origin of the completion (``jedi``,
511 512 ``IPython.python_matches``, ``IPython.magics_matches``...).
512 513 """
513 514
514 515 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
515 516
516 517 def __init__(
517 518 self,
518 519 start: int,
519 520 end: int,
520 521 text: str,
521 522 *,
522 523 type: Optional[str] = None,
523 524 _origin="",
524 525 signature="",
525 526 ) -> None:
526 527 warnings.warn(
527 528 "``Completion`` is a provisional API (as of IPython 6.0). "
528 529 "It may change without warnings. "
529 530 "Use in corresponding context manager.",
530 531 category=ProvisionalCompleterWarning,
531 532 stacklevel=2,
532 533 )
533 534
534 535 self.start = start
535 536 self.end = end
536 537 self.text = text
537 538 self.type = type
538 539 self.signature = signature
539 540 self._origin = _origin
540 541
541 542 def __repr__(self):
542 543 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
543 544 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
544 545
545 546 def __eq__(self, other) -> bool:
546 547 """
547 548 Equality and hash do not include the type (as some completers may not be
548 549 able to infer the type), but are used to (partially) de-duplicate
549 550 completions.
550 551
551 552 Completely de-duplicating completions is a bit trickier than just
552 553 comparing, as it depends on the surrounding text, which Completions are not
553 554 aware of.
554 555 """
555 556 return self.start == other.start and \
556 557 self.end == other.end and \
557 558 self.text == other.text
558 559
559 560 def __hash__(self):
560 561 return hash((self.start, self.end, self.text))
561 562
562 563
563 564 class SimpleCompletion:
564 565 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
565 566
566 567 .. warning::
567 568
568 569 Provisional
569 570
570 571 This class is used to describe the currently supported attributes of
571 572 simple completion items, and any additional implementation details
572 573 should not be relied on. Additional attributes may be included in
573 574 future versions, and the meaning of text may be disambiguated from its current
574 575 dual meaning of "text to insert" and "text to use as a label".
575 576 """
576 577
577 578 __slots__ = ["text", "type"]
578 579
579 580 def __init__(self, text: str, *, type: Optional[str] = None):
580 581 self.text = text
581 582 self.type = type
582 583
583 584 def __repr__(self):
584 585 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
585 586
586 587
587 588 class _MatcherResultBase(TypedDict):
588 589 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
589 590
590 591 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
591 592 matched_fragment: NotRequired[str]
592 593
593 594 #: Whether to suppress results from all other matchers (True), some
594 595 #: matchers (set of identifiers) or none (False); default is False.
595 596 suppress: NotRequired[Union[bool, Set[str]]]
596 597
597 598 #: Identifiers of matchers which should NOT be suppressed when this matcher
598 599 #: requests to suppress all other matchers; defaults to an empty set.
599 600 do_not_suppress: NotRequired[Set[str]]
600 601
601 602 #: Are completions already ordered and should be left as-is? default is False.
602 603 ordered: NotRequired[bool]
603 604
604 605
605 606 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
606 607 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
607 608 """Result of new-style completion matcher."""
608 609
609 610 # note: TypedDict is added again to the inheritance chain
610 611 # in order to get __orig_bases__ for documentation
611 612
612 613 #: List of candidate completions
613 614 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
614 615
615 616
616 617 class _JediMatcherResult(_MatcherResultBase):
617 618 """Matching result returned by Jedi (will be processed differently)"""
618 619
619 620 #: list of candidate completions
620 621 completions: Iterator[_JediCompletionLike]
621 622
622 623
623 624 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
624 625 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
625 626
626 627
627 628 @dataclass
628 629 class CompletionContext:
629 630 """Completion context provided as an argument to matchers in the Matcher API v2."""
630 631
631 632 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
632 633 # which was not explicitly visible as an argument of the matcher, making any refactor
633 634 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
634 635 # from the completer, and make substituting them in sub-classes easier.
635 636
636 637 #: Relevant fragment of code directly preceding the cursor.
637 638 #: The extraction of the token is implemented via a splitter heuristic
638 639 #: (following readline behaviour for legacy reasons), which is user configurable
639 640 #: (by switching the greedy mode).
640 641 token: str
641 642
642 643 #: The full available content of the editor or buffer
643 644 full_text: str
644 645
645 646 #: Cursor position in the line (the same for ``full_text`` and ``text``).
646 647 cursor_position: int
647 648
648 649 #: Cursor line in ``full_text``.
649 650 cursor_line: int
650 651
651 652 #: The maximum number of completions that will be used downstream.
652 653 #: Matchers can use this information to abort early.
653 654 #: The built-in Jedi matcher is currently excepted from this limit.
654 655 # If not given, return all possible completions.
655 656 limit: Optional[int]
656 657
657 658 @cached_property
658 659 def text_until_cursor(self) -> str:
659 660 return self.line_with_cursor[: self.cursor_position]
660 661
661 662 @cached_property
662 663 def line_with_cursor(self) -> str:
663 664 return self.full_text.split("\n")[self.cursor_line]
664 665
665 666
666 667 #: Matcher results for API v2.
667 668 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
668 669
669 670
670 671 class _MatcherAPIv1Base(Protocol):
671 672 def __call__(self, text: str) -> List[str]:
672 673 """Call signature."""
673 674 ...
674 675
675 676 #: Used to construct the default matcher identifier
676 677 __qualname__: str
677 678
678 679
679 680 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
680 681 #: API version
681 682 matcher_api_version: Optional[Literal[1]]
682 683
683 684 def __call__(self, text: str) -> List[str]:
684 685 """Call signature."""
685 686 ...
686 687
687 688
688 689 #: Protocol describing Matcher API v1.
689 690 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
690 691
691 692
692 693 class MatcherAPIv2(Protocol):
693 694 """Protocol describing Matcher API v2."""
694 695
695 696 #: API version
696 697 matcher_api_version: Literal[2] = 2
697 698
698 699 def __call__(self, context: CompletionContext) -> MatcherResult:
699 700 """Call signature."""
700 701 ...
701 702
702 703 #: Used to construct the default matcher identifier
703 704 __qualname__: str
704 705
705 706
706 707 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
707 708
708 709
709 710 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
710 711 api_version = _get_matcher_api_version(matcher)
711 712 return api_version == 1
712 713
713 714
714 715 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
715 716 api_version = _get_matcher_api_version(matcher)
716 717 return api_version == 2
717 718
718 719
719 720 def _is_sizable(value: Any) -> TypeGuard[Sized]:
720 721 """Determine whether the object is sizable."""
721 722 return hasattr(value, "__len__")
722 723
723 724
724 725 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
725 726 """Determine whether the object is an iterator."""
726 727 return hasattr(value, "__next__")
727 728
728 729
729 730 def has_any_completions(result: MatcherResult) -> bool:
730 731 """Check if any result includes any completions."""
731 732 completions = result["completions"]
732 733 if _is_sizable(completions):
733 734 return len(completions) != 0
734 735 if _is_iterator(completions):
735 736 try:
736 737 old_iterator = completions
737 738 first = next(old_iterator)
738 739 result["completions"] = cast(
739 740 Iterator[SimpleCompletion],
740 741 itertools.chain([first], old_iterator),
741 742 )
742 743 return True
743 744 except StopIteration:
744 745 return False
745 746 raise ValueError(
746 747 "Completions returned by matcher need to be an Iterator or a Sizable"
747 748 )
748 749
749 750
750 751 def completion_matcher(
751 752 *,
752 753 priority: Optional[float] = None,
753 754 identifier: Optional[str] = None,
754 755 api_version: int = 1,
755 ):
756 ) -> Callable[[Matcher], Matcher]:
756 757 """Adds attributes describing the matcher.
757 758
758 759 Parameters
759 760 ----------
760 761 priority : Optional[float]
761 762 The priority of the matcher, determines the order of execution of matchers.
762 763 Higher priority means that the matcher will be executed first. Defaults to 0.
763 764 identifier : Optional[str]
764 765 identifier of the matcher allowing users to modify the behaviour via traitlets,
765 766 and also used for debugging (will be passed as ``origin`` with the completions).
766 767
767 768 Defaults to matcher function's ``__qualname__`` (for example,
767 768 ``IPCompleter.file_matcher`` for the built-in matcher defined
769 770 as a ``file_matcher`` method of the ``IPCompleter`` class).
770 771 api_version: Optional[int]
771 772 version of the Matcher API used by this matcher.
772 773 Currently supported values are 1 and 2.
773 774 Defaults to 1.
774 775 """
775 776
776 777 def wrapper(func: Matcher):
777 778 func.matcher_priority = priority or 0 # type: ignore
778 779 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
779 780 func.matcher_api_version = api_version # type: ignore
780 781 if TYPE_CHECKING:
781 782 if api_version == 1:
782 783 func = cast(MatcherAPIv1, func)
783 784 elif api_version == 2:
784 785 func = cast(MatcherAPIv2, func)
785 786 return func
786 787
787 788 return wrapper
788 789
789 790
790 791 def _get_matcher_priority(matcher: Matcher):
791 792 return getattr(matcher, "matcher_priority", 0)
792 793
793 794
794 795 def _get_matcher_id(matcher: Matcher):
795 796 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
796 797
797 798
798 799 def _get_matcher_api_version(matcher):
799 800 return getattr(matcher, "matcher_api_version", 1)
800 801
801 802
802 803 context_matcher = partial(completion_matcher, api_version=2)
803 804
804 805
805 806 _IC = Iterable[Completion]
806 807
807 808
808 809 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
809 810 """
810 811 Deduplicate a set of completions.
811 812
812 813 .. warning::
813 814
814 815 Unstable
815 816
816 817 This function is unstable, API may change without warning.
817 818
818 819 Parameters
819 820 ----------
820 821 text : str
821 822 text that should be completed.
822 823 completions : Iterator[Completion]
823 824 iterator over the completions to deduplicate
824 825
825 826 Yields
826 827 ------
827 828 `Completions` objects
828 829 Completions coming from multiple sources may be different but end up having
829 830 the same effect when applied to ``text``. If this is the case, this will
830 831 consider the completions as equal and only emit the first encountered.
831 832 Not folded into `completions()` yet for debugging purposes, and to detect when
832 833 the IPython completer does return things that Jedi does not, but it should be
833 834 at some point.
834 835 """
835 836 completions = list(completions)
836 837 if not completions:
837 838 return
838 839
839 840 new_start = min(c.start for c in completions)
840 841 new_end = max(c.end for c in completions)
841 842
842 843 seen = set()
843 844 for c in completions:
844 845 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
845 846 if new_text not in seen:
846 847 yield c
847 848 seen.add(new_text)
848 849
849 850
850 851 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
851 852 """
852 853 Rectify a set of completions to all have the same ``start`` and ``end``
853 854
854 855 .. warning::
855 856
856 857 Unstable
857 858
858 859 This function is unstable, the API may change without warning.
859 860 It will also raise unless used in the proper context manager.
860 861
861 862 Parameters
862 863 ----------
863 864 text : str
864 865 text that should be completed.
865 866 completions : Iterator[Completion]
866 867 iterator over the completions to rectify
867 868 _debug : bool
868 869 Log failed completion
869 870
870 871 Notes
871 872 -----
872 873 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
873 874 the Jupyter Protocol requires them to behave like so. This will readjust
874 875 the completion to have the same ``start`` and ``end`` by padding both
875 876 extremities with surrounding text.
876 877
877 878 During stabilisation this should support a ``_debug`` option to log which
878 879 completions are returned by the IPython completer but not found by Jedi, in
879 880 order to make upstream bug reports.
880 881 """
881 882 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
882 883 "It may change without warnings. "
883 884 "Use in corresponding context manager.",
884 885 category=ProvisionalCompleterWarning, stacklevel=2)
885 886
886 887 completions = list(completions)
887 888 if not completions:
888 889 return
889 890 starts = (c.start for c in completions)
890 891 ends = (c.end for c in completions)
891 892
892 893 new_start = min(starts)
893 894 new_end = max(ends)
894 895
895 896 seen_jedi = set()
896 897 seen_python_matches = set()
897 898 for c in completions:
898 899 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
899 900 if c._origin == 'jedi':
900 901 seen_jedi.add(new_text)
901 902 elif c._origin == "IPCompleter.python_matcher":
902 903 seen_python_matches.add(new_text)
903 904 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
904 905 diff = seen_python_matches.difference(seen_jedi)
905 906 if diff and _debug:
906 907 print('IPython.python matches have extras:', diff)
907 908
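The same padding trick, used here to normalize spans rather than deduplicate, can be sketched on its own (hypothetical `C` and `rectify` names; the real function also tracks origins and yields `Completion` objects):

```python
from collections import namedtuple

C = namedtuple("C", ["start", "end", "text"])

def rectify(text, comps):
    """Rewrite completions so they all share one (start, end) span."""
    comps = list(comps)
    new_start = min(c.start for c in comps)
    new_end = max(c.end for c in comps)
    return [
        C(new_start, new_end, text[new_start:c.start] + c.text + text[c.end:new_end])
        for c in comps
    ]

# spans (3, 5) and (4, 5) both become (3, 5) after padding:
out = rectify("d['ab", [C(3, 5, "abc']"), C(4, 5, "bd']")])
print(out)
```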
908 909
909 910 if sys.platform == 'win32':
910 911 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
911 912 else:
912 913 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
913 914
914 915 GREEDY_DELIMS = ' =\r\n'
915 916
916 917
917 918 class CompletionSplitter(object):
918 919 """An object to split an input line in a manner similar to readline.
919 920
920 921 By having our own implementation, we can expose readline-like completion in
921 922 a uniform manner to all frontends. This object only needs to be given the
922 923 line of text to be split and the cursor position on said line, and it
923 924 returns the 'word' to be completed on at the cursor after splitting the
924 925 entire line.
925 926
926 927 What characters are used as splitting delimiters can be controlled by
927 928 setting the ``delims`` attribute (this is a property that internally
928 929 automatically builds the necessary regular expression)"""
929 930
930 931 # Private interface
931 932
932 933 # A string of delimiter characters. The default value makes sense for
933 934 # IPython's most typical usage patterns.
934 935 _delims = DELIMS
935 936
936 937 # The expression (a normal string) to be compiled into a regular expression
937 938 # for actual splitting. We store it as an attribute mostly for ease of
938 939 # debugging, since this type of code can be so tricky to debug.
939 940 _delim_expr = None
940 941
941 942 # The regular expression that does the actual splitting
942 943 _delim_re = None
943 944
944 945 def __init__(self, delims=None):
945 946 delims = CompletionSplitter._delims if delims is None else delims
946 947 self.delims = delims
947 948
948 949 @property
949 950 def delims(self):
950 951 """Return the string of delimiter characters."""
951 952 return self._delims
952 953
953 954 @delims.setter
954 955 def delims(self, delims):
955 956 """Set the delimiters for line splitting."""
956 957 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
957 958 self._delim_re = re.compile(expr)
958 959 self._delims = delims
959 960 self._delim_expr = expr
960 961
961 962 def split_line(self, line, cursor_pos=None):
962 963 """Split a line of text with a cursor at the given position.
963 964 """
964 l = line if cursor_pos is None else line[:cursor_pos]
965 return self._delim_re.split(l)[-1]
965 cut_line = line if cursor_pos is None else line[:cursor_pos]
966 return self._delim_re.split(cut_line)[-1]
966 967
967 968
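A minimal stand-in for the splitter above (same delimiter set, `split_line` mirroring the method): everything left of the cursor is split on the delimiters and the last fragment is the word to complete.

```python
import re

DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
# escape every delimiter and build a character class, as the property setter does
delim_re = re.compile('[' + ''.join('\\' + c for c in DELIMS) + ']')

def split_line(line, cursor_pos=None):
    cut_line = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(cut_line)[-1]

print(split_line("print(foo.ba"))     # '.' is not a delimiter → 'foo.ba'
print(split_line("a = obj.attr", 8))  # only text before the cursor counts → 'obj.'
```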
968 969
969 970 class Completer(Configurable):
970 971
971 972 greedy = Bool(
972 973 False,
973 974 help="""Activate greedy completion.
974 975
975 976 .. deprecated:: 8.8
976 977 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977 978
978 979 When enabled in IPython 8.8 or newer, changes configuration as follows:
979 980
980 981 - ``Completer.evaluation = 'unsafe'``
981 982 - ``Completer.auto_close_dict_keys = True``
982 983 """,
983 984 ).tag(config=True)
984 985
985 986 evaluation = Enum(
986 987 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 988 default_value="limited",
988 989 help="""Policy for code evaluation under completion.
989 990
990 991 Successive options enable progressively more eager evaluation for better
991 992 completion suggestions, including for nested dictionaries, nested lists,
992 993 or even results of function calls.
993 994 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 995 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995 996
996 997 Allowed values are:
997 998
998 999 - ``forbidden``: no evaluation of code is permitted,
999 1000 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 1001 no item/attribute evaluation, no access to locals/globals,
1001 1002 no evaluation of any operations or comparisons.
1002 1003 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 1004 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 1005 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 1006 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 1007 - ``unsafe``: evaluation of all methods and function calls but not of
1007 1008 syntax with side-effects like `del x`,
1008 1009 - ``dangerous``: completely arbitrary evaluation.
1009 1010 """,
1010 1011 ).tag(config=True)
1011 1012
1012 1013 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 1014 help="Experimental: Use Jedi to generate autocompletions. "
1014 1015 "Defaults to True if jedi is installed.").tag(config=True)
1015 1016
1016 1017 jedi_compute_type_timeout = Int(default_value=400,
1017 1018 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 1019 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1019 1020 performance by preventing jedi from building its cache.
1020 1021 """).tag(config=True)
1021 1022
1022 1023 debug = Bool(default_value=False,
1023 1024 help='Enable debug for the Completer. Mostly print extra '
1024 1025 'information for experimental jedi integration.')\
1025 1026 .tag(config=True)
1026 1027
1027 1028 backslash_combining_completions = Bool(True,
1028 1029 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 1030 "Includes completion of latex commands, unicode names, and expanding "
1030 1031 "unicode characters back to latex commands.").tag(config=True)
1031 1032
1032 1033 auto_close_dict_keys = Bool(
1033 1034 False,
1034 1035 help="""
1035 1036 Enable auto-closing dictionary keys.
1036 1037
1037 1038 When enabled string keys will be suffixed with a final quote
1038 1039 (matching the opening quote), tuple keys will also receive a
1039 1040 separating comma if needed, and keys which are final will
1040 1041 receive a closing bracket (``]``).
1041 1042 """,
1042 1043 ).tag(config=True)
1043 1044
1044 1045 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 1046 """Create a new completer for the command line.
1046 1047
1047 1048 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048 1049
1049 1050 If unspecified, the default namespace where completions are performed
1050 1051 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 1052 given as dictionaries.
1052 1053
1053 1054 An optional second namespace can be given. This allows the completer
1054 1055 to handle cases where both the local and global scopes need to be
1055 1056 distinguished.
1056 1057 """
1057 1058
1058 1059 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 1060 # specific namespace or to use __main__.__dict__. This will allow us
1060 1061 # to bind to __main__.__dict__ at completion time, not now.
1061 1062 if namespace is None:
1062 1063 self.use_main_ns = True
1063 1064 else:
1064 1065 self.use_main_ns = False
1065 1066 self.namespace = namespace
1066 1067
1067 1068 # The global namespace, if given, can be bound directly
1068 1069 if global_namespace is None:
1069 1070 self.global_namespace = {}
1070 1071 else:
1071 1072 self.global_namespace = global_namespace
1072 1073
1073 1074 self.custom_matchers = []
1074 1075
1075 1076 super(Completer, self).__init__(**kwargs)
1076 1077
1077 1078 def complete(self, text, state):
1078 1079 """Return the next possible completion for 'text'.
1079 1080
1080 1081 This is called successively with state == 0, 1, 2, ... until it
1081 1082 returns None. The completion should begin with 'text'.
1082 1083
1083 1084 """
1084 1085 if self.use_main_ns:
1085 1086 self.namespace = __main__.__dict__
1086 1087
1087 1088 if state == 0:
1088 1089 if "." in text:
1089 1090 self.matches = self.attr_matches(text)
1090 1091 else:
1091 1092 self.matches = self.global_matches(text)
1092 1093 try:
1093 1094 return self.matches[state]
1094 1095 except IndexError:
1095 1096 return None
1096 1097
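The readline-style protocol of `complete` (call with state 0, 1, 2, ... until it returns None) can be exercised with a toy completer; `ToyCompleter` and `collect` are hypothetical names used only for illustration:

```python
class ToyCompleter:
    """Follows the state protocol: compute matches on state 0, then index."""
    words = ["print", "property", "pass"]

    def complete(self, text, state):
        if state == 0:
            self.matches = [w for w in self.words if w.startswith(text)]
        try:
            return self.matches[state]
        except IndexError:
            return None

def collect(completer, text):
    """Drive the protocol until the completer returns None."""
    out, state = [], 0
    while (m := completer.complete(text, state)) is not None:
        out.append(m)
        state += 1
    return out

print(collect(ToyCompleter(), "pr"))  # → ['print', 'property']
```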
1097 1098 def global_matches(self, text):
1098 1099 """Compute matches when text is a simple name.
1099 1100
1100 1101 Return a list of all keywords, built-in functions and names currently
1101 1102 defined in self.namespace or self.global_namespace that match.
1102 1103
1103 1104 """
1104 1105 matches = []
1105 1106 match_append = matches.append
1106 1107 n = len(text)
1107 1108 for lst in [
1108 1109 keyword.kwlist,
1109 1110 builtin_mod.__dict__.keys(),
1110 1111 list(self.namespace.keys()),
1111 1112 list(self.global_namespace.keys()),
1112 1113 ]:
1113 1114 for word in lst:
1114 1115 if word[:n] == text and word != "__builtins__":
1115 1116 match_append(word)
1116 1117
1117 1118 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 1119 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 1120 shortened = {
1120 1121 "_".join([sub[0] for sub in word.split("_")]): word
1121 1122 for word in lst
1122 1123 if snake_case_re.match(word)
1123 1124 }
1124 1125 for word in shortened.keys():
1125 1126 if word[:n] == text and word != "__builtins__":
1126 1127 match_append(shortened[word])
1127 1128 return matches
1128 1129
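The snake-case branch above enables abbreviated completion: typing the first letter of each underscore-separated part matches the full name. A self-contained sketch (hypothetical `abbrev_matches` name):

```python
import re

# at least two non-underscore parts joined by underscores
snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

def abbrev_matches(text, names):
    shortened = {
        "_".join(sub[0] for sub in word.split("_")): word
        for word in names
        if snake_case_re.match(word)
    }
    n = len(text)
    return [full for short, full in shortened.items() if short[:n] == text]

print(abbrev_matches("d_f", ["data_frame", "load_json", "data"]))  # → ['data_frame']
```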
1129 1130 def attr_matches(self, text):
1130 1131 """Compute matches when text contains a dot.
1131 1132
1132 1133 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 1134 evaluatable in self.namespace or self.global_namespace, it will be
1134 1135 evaluated and its attributes (as revealed by dir()) are used as
1135 1136 possible completions. (For class instances, class members are
1136 1137 also considered.)
1137 1138
1138 1139 WARNING: this can still invoke arbitrary C code, if an object
1139 1140 with a __getattr__ hook is evaluated.
1140 1141
1141 1142 """
1142 1143 return self._attr_matches(text)[0]
1143 1144
1144 def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
1145 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1145 # regular expression for simple attribute matching on normal identifiers.
1146 _ATTR_MATCH_RE = re.compile(r"(.+)\.(\w*)$")
1147
1148 def _attr_matches(
1149 self, text: str, include_prefix: bool = True
1150 ) -> Tuple[Sequence[str], str]:
1151 m2 = self._ATTR_MATCH_RE.match(self.line_buffer)
1146 1152 if not m2:
1147 1153 return [], ""
1148 1154 expr, attr = m2.group(1, 2)
1149 1155
1150 1156 obj = self._evaluate_expr(expr)
1151 1157
1152 1158 if obj is not_found:
1153 1159 return [], ""
1154 1160
1155 1161 if self.limit_to__all__ and hasattr(obj, '__all__'):
1156 1162 words = get__all__entries(obj)
1157 1163 else:
1158 1164 words = dir2(obj)
1159 1165
1160 1166 try:
1161 1167 words = generics.complete_object(obj, words)
1162 1168 except TryNext:
1163 1169 pass
1164 1170 except AssertionError:
1165 1171 raise
1166 1172 except Exception:
1167 1173 # Silence errors from completion function
1168 1174 pass
1169 1175 # Build match list to return
1170 1176 n = len(attr)
1171 1177
1172 1178 # Note: ideally we would just return words here and the prefix
1173 1179 # reconciliator would know that we intend to append to rather than
1174 1180 # replace the input text; this requires refactoring to return range
1175 1181 # which ought to be replaced (as does jedi).
1176 1182 if include_prefix:
1177 1183 tokens = _parse_tokens(expr)
1178 1184 rev_tokens = reversed(tokens)
1179 1185 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1180 1186 name_turn = True
1181 1187
1182 1188 parts = []
1183 1189 for token in rev_tokens:
1184 1190 if token.type in skip_over:
1185 1191 continue
1186 1192 if token.type == tokenize.NAME and name_turn:
1187 1193 parts.append(token.string)
1188 1194 name_turn = False
1189 1195 elif (
1190 1196 token.type == tokenize.OP and token.string == "." and not name_turn
1191 1197 ):
1192 1198 parts.append(token.string)
1193 1199 name_turn = True
1194 1200 else:
1195 1201 # short-circuit if not empty nor name token
1196 1202 break
1197 1203
1198 1204 prefix_after_space = "".join(reversed(parts))
1199 1205 else:
1200 1206 prefix_after_space = ""
1201 1207
1202 1208 return (
1203 1209 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1204 1210 "." + attr,
1205 1211 )
1206 1212
1213 def _trim_expr(self, code: str) -> str:
1214 """
1215 Trim the code until it is a valid expression and not a tuple;
1216
1217 return the trimmed expression for guarded_eval.
1218 """
1219 while code:
1220 code = code[1:]
1221 try:
1222 res = ast.parse(code)
1223 except SyntaxError:
1224 continue
1225
1226 assert res is not None
1227 if len(res.body) != 1:
1228 continue
1229 expr = res.body[0].value
1230 if isinstance(expr, ast.Tuple) and not code[-1] == ")":
1231 # we skip implicit tuples, like when trimming `fun(a,b`<completion>
1232 # as `a,b` would be a tuple, and we actually expect to get only `b`
1233 continue
1234 return code
1235 return ""
1236
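The trimming strategy can be shown standalone: drop leading characters until the remainder parses as a single non-tuple expression. This sketch mirrors `_trim_expr`, with an explicit `ast.Expr` check added for safety:

```python
import ast

def trim_expr(code):
    while code:
        code = code[1:]
        try:
            res = ast.parse(code)
        except SyntaxError:
            continue
        if len(res.body) != 1:
            continue
        node = res.body[0]
        if not isinstance(node, ast.Expr):
            continue
        if isinstance(node.value, ast.Tuple) and not code.endswith(")"):
            # skip implicit tuples such as the `a,b` inside `fun(a,b`
            continue
        return code
    return ""

print(trim_expr("(d"))       # unclosed paren stripped → 'd'
print(trim_expr("fun(a,b"))  # implicit tuple skipped → 'b'
```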
1207 1237 def _evaluate_expr(self, expr):
1208 1238 obj = not_found
1209 1239 done = False
1210 1240 while not done and expr:
1211 1241 try:
1212 1242 obj = guarded_eval(
1213 1243 expr,
1214 1244 EvaluationContext(
1215 1245 globals=self.global_namespace,
1216 1246 locals=self.namespace,
1217 1247 evaluation=self.evaluation,
1218 1248 ),
1219 1249 )
1220 1250 done = True
1221 1251 except Exception as e:
1222 1252 if self.debug:
1223 1253 print("Evaluation exception", e)
1224 1254 # trim the expression to remove any invalid prefix
1225 1255 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1226 1256 # where parenthesis is not closed.
1227 1257 # TODO: make this faster by reusing parts of the computation?
1228 expr = expr[1:]
1258 expr = self._trim_expr(expr)
1229 1259 return obj
1230 1260
1231 1261 def get__all__entries(obj):
1232 1262 """returns the strings in the __all__ attribute"""
1233 1263 try:
1234 1264 words = getattr(obj, '__all__')
1235 except:
1265 except Exception:
1236 1266 return []
1237 1267
1238 1268 return [w for w in words if isinstance(w, str)]
1239 1269
1240 1270
1241 1271 class _DictKeyState(enum.Flag):
1242 1272 """Represent state of the key match in context of other possible matches.
1243 1273
1244 1274 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1245 1275 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1246 1276 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1247 1277 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | END_OF_TUPLE}`
1248 1278 """
1249 1279
1250 1280 BASELINE = 0
1251 1281 END_OF_ITEM = enum.auto()
1252 1282 END_OF_TUPLE = enum.auto()
1253 1283 IN_TUPLE = enum.auto()
1254 1284
1255 1285
1256 1286 def _parse_tokens(c):
1257 1287 """Parse tokens even if there is an error."""
1258 1288 tokens = []
1259 1289 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1260 1290 while True:
1261 1291 try:
1262 1292 tokens.append(next(token_generator))
1263 1293 except tokenize.TokenError:
1264 1294 return tokens
1265 1295 except StopIteration:
1266 1296 return tokens
1267 1297
1268 1298
1269 1299 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1270 1300 """Match any valid Python numeric literal in a prefix of dictionary keys.
1271 1301
1272 1302 References:
1273 1303 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1274 1304 - https://docs.python.org/3/library/tokenize.html
1275 1305 """
1276 1306 if prefix[-1].isspace():
1277 1307 # if user typed a space we do not have anything to complete
1278 1308 # even if there was a valid number token before
1279 1309 return None
1280 1310 tokens = _parse_tokens(prefix)
1281 1311 rev_tokens = reversed(tokens)
1282 1312 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1283 1313 number = None
1284 1314 for token in rev_tokens:
1285 1315 if token.type in skip_over:
1286 1316 continue
1287 1317 if number is None:
1288 1318 if token.type == tokenize.NUMBER:
1289 1319 number = token.string
1290 1320 continue
1291 1321 else:
1292 1322 # we did not match a number
1293 1323 return None
1294 1324 if token.type == tokenize.OP:
1295 1325 if token.string == ",":
1296 1326 break
1297 1327 if token.string in {"+", "-"}:
1298 1328 number = token.string + number
1299 1329 else:
1300 1330 return None
1301 1331 return number
1302 1332
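The token-based number matching can be reproduced with the standard `tokenize` module. This sketch is slightly restructured from the function above (it rejects any non-operator token once a number has been seen) but handles the same core cases:

```python
import tokenize

def parse_tokens(c):
    """Collect tokens even when the input is incomplete."""
    tokens = []
    gen = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(gen))
        except (tokenize.TokenError, StopIteration):
            return tokens

def match_number(prefix):
    """Return the trailing numeric literal (with sign) in prefix, or None."""
    if prefix[-1].isspace():
        return None
    number = None
    skip = {tokenize.ENDMARKER, tokenize.NEWLINE}
    for token in reversed(parse_tokens(prefix)):
        if token.type in skip:
            continue
        if number is None:
            if token.type != tokenize.NUMBER:
                return None
            number = token.string
            continue
        if token.type != tokenize.OP:
            return None
        if token.string == ",":
            break
        if token.string in {"+", "-"}:
            number = token.string + number
        else:
            return None
    return number

print(match_number("-12"))   # → '-12'
print(match_number("0x1f"))  # → '0x1f'
print(match_number("abc"))   # → None
```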
1303 1333
1304 1334 _INT_FORMATS = {
1305 1335 "0b": bin,
1306 1336 "0o": oct,
1307 1337 "0x": hex,
1308 1338 }
1309 1339
1310 1340
1311 1341 def match_dict_keys(
1312 1342 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1313 1343 prefix: str,
1314 1344 delims: str,
1315 1345 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1316 1346 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1317 1347 """Used by dict_key_matches, matching the prefix to a list of keys
1318 1348
1319 1349 Parameters
1320 1350 ----------
1321 1351 keys
1322 1352 list of keys in dictionary currently being completed.
1323 1353 prefix
1324 1354 Part of the text already typed by the user. E.g. `mydict[b'fo`
1325 1355 delims
1326 1356 String of delimiters to consider when finding the current key.
1327 1357 extra_prefix : optional
1328 1358 Part of the text already typed in multi-key index cases. E.g. for
1329 1359 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1330 1360
1331 1361 Returns
1332 1362 -------
1333 1363 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1334 1364 ``quote`` being the quote that needs to be used to close the current string,
1335 1365 ``token_start`` the position where the replacement should start occurring, and
1336 1366 ``matched`` a dictionary with completion matches as keys and values
1337 1367 indicating the state of each match.
1338 1368 """
1339 1369 prefix_tuple = extra_prefix if extra_prefix else ()
1340 1370
1341 1371 prefix_tuple_size = sum(
1342 1372 [
1343 1373 # for pandas, do not count slices as taking space
1344 1374 not isinstance(k, slice)
1345 1375 for k in prefix_tuple
1346 1376 ]
1347 1377 )
1348 1378 text_serializable_types = (str, bytes, int, float, slice)
1349 1379
1350 1380 def filter_prefix_tuple(key):
1351 1381 # Reject too short keys
1352 1382 if len(key) <= prefix_tuple_size:
1353 1383 return False
1354 1384 # Reject keys which cannot be serialised to text
1355 1385 for k in key:
1356 1386 if not isinstance(k, text_serializable_types):
1357 1387 return False
1358 1388 # Reject keys that do not match the prefix
1359 1389 for k, pt in zip(key, prefix_tuple):
1360 1390 if k != pt and not isinstance(pt, slice):
1361 1391 return False
1362 1392 # All checks passed!
1363 1393 return True
1364 1394
1365 1395 filtered_key_is_final: Dict[Union[str, bytes, int, float], _DictKeyState] = (
1366 1396 defaultdict(lambda: _DictKeyState.BASELINE)
1367 1397 )
1368 1398
1369 1399 for k in keys:
1370 1400 # If at least one of the matches is not final, mark as undetermined.
1371 1401 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1372 1402 # `111` appears final on first match but is not final on the second.
1373 1403
1374 1404 if isinstance(k, tuple):
1375 1405 if filter_prefix_tuple(k):
1376 1406 key_fragment = k[prefix_tuple_size]
1377 1407 filtered_key_is_final[key_fragment] |= (
1378 1408 _DictKeyState.END_OF_TUPLE
1379 1409 if len(k) == prefix_tuple_size + 1
1380 1410 else _DictKeyState.IN_TUPLE
1381 1411 )
1382 1412 elif prefix_tuple_size > 0:
1383 1413 # we are completing a tuple but this key is not a tuple,
1384 1414 # so we should ignore it
1385 1415 pass
1386 1416 else:
1387 1417 if isinstance(k, text_serializable_types):
1388 1418 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1389 1419
1390 1420 filtered_keys = filtered_key_is_final.keys()
1391 1421
1392 1422 if not prefix:
1393 1423 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1394 1424
1395 1425 quote_match = re.search("(?:\"|')", prefix)
1396 1426 is_user_prefix_numeric = False
1397 1427
1398 1428 if quote_match:
1399 1429 quote = quote_match.group()
1400 1430 valid_prefix = prefix + quote
1401 1431 try:
1402 1432 prefix_str = literal_eval(valid_prefix)
1403 1433 except Exception:
1404 1434 return "", 0, {}
1405 1435 else:
1406 1436 # If it does not look like a string, let's assume
1407 1437 # we are dealing with a number or variable.
1408 1438 number_match = _match_number_in_dict_key_prefix(prefix)
1409 1439
1410 1440 # We do not want the key matcher to suggest variable names so we yield:
1411 1441 if number_match is None:
1412 1442 # The alternative would be to assume that the user forgot the quote
1413 1443 # and if the substring matches, suggest adding it at the start.
1414 1444 return "", 0, {}
1415 1445
1416 1446 prefix_str = number_match
1417 1447 is_user_prefix_numeric = True
1418 1448 quote = ""
1419 1449
1420 1450 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1421 1451 token_match = re.search(pattern, prefix, re.UNICODE)
1422 1452 assert token_match is not None # silence mypy
1423 1453 token_start = token_match.start()
1424 1454 token_prefix = token_match.group()
1425 1455
1426 1456 matched: Dict[str, _DictKeyState] = {}
1427 1457
1428 1458 str_key: Union[str, bytes]
1429 1459
1430 1460 for key in filtered_keys:
1431 1461 if isinstance(key, (int, float)):
1432 1462 # User typed a number but this key is not a number.
1433 1463 if not is_user_prefix_numeric:
1434 1464 continue
1435 1465 str_key = str(key)
1436 1466 if isinstance(key, int):
1437 1467 int_base = prefix_str[:2].lower()
1438 1468 # if user typed integer using binary/oct/hex notation:
1439 1469 if int_base in _INT_FORMATS:
1440 1470 int_format = _INT_FORMATS[int_base]
1441 1471 str_key = int_format(key)
1442 1472 else:
1443 1473 # User typed a string but this key is a number.
1444 1474 if is_user_prefix_numeric:
1445 1475 continue
1446 1476 str_key = key
1447 1477 try:
1448 1478 if not str_key.startswith(prefix_str):
1449 1479 continue
1450 except (AttributeError, TypeError, UnicodeError) as e:
1480 except (AttributeError, TypeError, UnicodeError):
1451 1481 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1452 1482 continue
1453 1483
1454 1484 # reformat remainder of key to begin with prefix
1455 1485 rem = str_key[len(prefix_str) :]
1456 1486 # force repr wrapped in '
1457 1487 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1458 1488 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1459 1489 if quote == '"':
1460 1490 # The entered prefix is quoted with ",
1461 1491 # but the match is quoted with '.
1462 1492 # A contained " hence needs escaping for comparison:
1463 1493 rem_repr = rem_repr.replace('"', '\\"')
1464 1494
1465 1495 # then reinsert prefix from start of token
1466 1496 match = "%s%s" % (token_prefix, rem_repr)
1467 1497
1468 1498 matched[match] = filtered_key_is_final[key]
1469 1499 return quote, token_start, matched
1470 1500
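The core string-matching step of the function above, in isolation: detect the opening quote, close the typed prefix with the same quote, `literal_eval` it, and keep keys that start with the result. The `simple_match` name is hypothetical; the real function also handles bytes, numbers and tuple keys:

```python
import re
from ast import literal_eval

def simple_match(keys, prefix):
    quote_match = re.search("(?:\"|')", prefix)
    if not quote_match:
        return {}
    quote = quote_match.group()
    try:
        # close the unterminated string so it parses, e.g. "'al" -> "'al'"
        prefix_str = literal_eval(prefix + quote)
    except Exception:
        return {}
    return {
        k: k[len(prefix_str):]
        for k in keys
        if isinstance(k, str) and k.startswith(prefix_str)
    }

print(simple_match(["alpha", "beta", "alps"], "'al"))
# → {'alpha': 'pha', 'alps': 'ps'}
```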
1471 1501
1472 1502 def cursor_to_position(text:str, line:int, column:int)->int:
1473 1503 """
1474 1504 Convert the (line,column) position of the cursor in text to an offset in a
1475 1505 string.
1476 1506
1477 1507 Parameters
1478 1508 ----------
1479 1509 text : str
1480 1510 The text in which to calculate the cursor offset
1481 1511 line : int
1482 1512 Line of the cursor; 0-indexed
1483 1513 column : int
1484 1514 Column of the cursor 0-indexed
1485 1515
1486 1516 Returns
1487 1517 -------
1488 1518 Position of the cursor in ``text``, 0-indexed.
1489 1519
1490 1520 See Also
1491 1521 --------
1492 1522 position_to_cursor : reciprocal of this function
1493 1523
1494 1524 """
1495 1525 lines = text.split('\n')
1496 1526 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1497 1527
1498 return sum(len(l) + 1 for l in lines[:line]) + column
1528 return sum(len(line) + 1 for line in lines[:line]) + column
1499 1529
1500 1530 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1501 1531 """
1502 1532 Convert the position of the cursor in text (0 indexed) to a line
1503 1533 number(0-indexed) and a column number (0-indexed) pair
1504 1534
1505 1535 Position should be a valid position in ``text``.
1506 1536
1507 1537 Parameters
1508 1538 ----------
1509 1539 text : str
1510 1540 The text in which to calculate the cursor offset
1511 1541 offset : int
1512 1542 Position of the cursor in ``text``, 0-indexed.
1513 1543
1514 1544 Returns
1515 1545 -------
1516 1546 (line, column) : (int, int)
1517 1547 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1518 1548
1519 1549 See Also
1520 1550 --------
1521 1551 cursor_to_position : reciprocal of this function
1522 1552
1523 1553 """
1524 1554
1525 1555 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1526 1556
1527 1557 before = text[:offset]
1528 1558 blines = before.split('\n') # ! splitlines trims the trailing \n
1529 1559 line = before.count('\n')
1530 1560 col = len(blines[-1])
1531 1561 return line, col
1532 1562
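The two conversions above are inverses of each other; a compact sketch with a round trip:

```python
def cursor_to_position(text, line, column):
    lines = text.split("\n")
    # each full line before the cursor contributes its length plus the newline
    return sum(len(ln) + 1 for ln in lines[:line]) + column

def position_to_cursor(text, offset):
    before = text[:offset]
    return before.count("\n"), len(before.split("\n")[-1])

text = "ab\ncde\nf"
off = cursor_to_position(text, 1, 2)
print(off, text[off], position_to_cursor(text, off))  # → 5 e (1, 2)
```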
1533 1563
1534 1564 def _safe_isinstance(obj, module, class_name, *attrs):
1535 1565 """Checks if obj is an instance of module.class_name if loaded
1536 1566 """
1537 1567 if module in sys.modules:
1538 1568 m = sys.modules[module]
1539 1569 for attr in [class_name, *attrs]:
1540 1570 m = getattr(m, attr)
1541 1571 return isinstance(obj, m)
1542 1572
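`_safe_isinstance` deliberately avoids importing a module just to run a type check; it only consults modules that are already loaded. A standalone sketch:

```python
import sys

def safe_isinstance(obj, module, class_name, *attrs):
    # only check if the module is already imported; never import it ourselves
    if module in sys.modules:
        m = sys.modules[module]
        for attr in [class_name, *attrs]:
            m = getattr(m, attr)
        return isinstance(obj, m)
    # implicitly returns None when the module was never loaded

print(safe_isinstance({}, "builtins", "dict"))              # → True
print(safe_isinstance({}, "never_imported_module", "Foo"))  # → None
```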
1543 1573
1544 1574 @context_matcher()
1545 1575 def back_unicode_name_matcher(context: CompletionContext):
1546 1576 """Match Unicode characters back to Unicode name
1547 1577
1548 1578 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API.
1549 1579 """
1550 1580 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1551 1581 return _convert_matcher_v1_result_to_v2(
1552 1582 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1553 1583 )
1554 1584
1555 1585
1556 1586 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1557 1587 """Match Unicode characters back to Unicode name
1558 1588
1559 1589 This does ``☃`` -> ``\\snowman``
1560 1590
1561 1591 Note that snowman is not a valid python3 combining character but will be expanded.
1562 1592 Though the completion machinery will not recombine it back to the snowman character.
1563 1593
1564 1594 Nor will this back-complete standard escape sequences like \\n, \\b ...
1565 1595
1566 1596 .. deprecated:: 8.6
1567 1597 You can use :meth:`back_unicode_name_matcher` instead.
1568 1598
1569 1599 Returns
1570 1600 =======
1571 1601
1572 1602 Return a tuple with two elements:
1573 1603
1574 1604 - The Unicode character that was matched (preceded with a backslash), or
1575 1605 empty string,
1576 1606 - a sequence (of 1), name for the match Unicode character, preceded by
1577 1607 backslash, or empty if no match.
1578 1608 """
1579 1609 if len(text)<2:
1580 1610 return '', ()
1581 1611 maybe_slash = text[-2]
1582 1612 if maybe_slash != '\\':
1583 1613 return '', ()
1584 1614
1585 1615 char = text[-1]
1586 1616 # no expand on quote for completion in strings.
1587 1617 # nor backcomplete standard ascii keys
1588 1618 if char in string.ascii_letters or char in ('"',"'"):
1589 1619 return '', ()
1590 1620 try :
1591 1621 unic = unicodedata.name(char)
1592 1622 return '\\'+char,('\\'+unic,)
1593 1623 except KeyError:
1594 1624 pass
1595 1625 return '', ()
1596 1626
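The backward lookup above can be demonstrated with `unicodedata` alone. Note this sketch catches ValueError as well, since `unicodedata.name` raises ValueError (not KeyError) for characters without a name:

```python
import string
import unicodedata

def back_unicode_name(text):
    if len(text) < 2 or text[-2] != "\\":
        return "", ()
    char = text[-1]
    # do not expand quotes or plain ascii letters
    if char in string.ascii_letters or char in ('"', "'"):
        return "", ()
    try:
        return "\\" + char, ("\\" + unicodedata.name(char),)
    except (KeyError, ValueError):
        return "", ()

print(back_unicode_name("\\☃"))  # → ('\\☃', ('\\SNOWMAN',))
print(back_unicode_name("\\a"))  # → ('', ())
```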
1597 1627
1598 1628 @context_matcher()
1599 1629 def back_latex_name_matcher(context: CompletionContext):
1600 1630 """Match latex characters back to unicode name
1601 1631
1602 1632 Same as :any:`back_latex_name_matches`, but adopted to new Matcher API.
1603 1633 """
1604 1634 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1605 1635 return _convert_matcher_v1_result_to_v2(
1606 1636 matches, type="latex", fragment=fragment, suppress_if_matches=True
1607 1637 )
1608 1638
1609 1639
1610 1640 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1611 1641 """Match latex characters back to unicode name
1612 1642
1613 1643 This does ``\\ℵ`` -> ``\\aleph``
1614 1644
1615 1645 .. deprecated:: 8.6
1616 1646 You can use :meth:`back_latex_name_matcher` instead.
1617 1647 """
1618 1648 if len(text)<2:
1619 1649 return '', ()
1620 1650 maybe_slash = text[-2]
1621 1651 if maybe_slash != '\\':
1622 1652 return '', ()
1623 1653
1624 1654
1625 1655 char = text[-1]
1626 1656 # no expand on quote for completion in strings.
1627 1657 # nor backcomplete standard ascii keys
1628 1658 if char in string.ascii_letters or char in ('"',"'"):
1629 1659 return '', ()
1630 1660 try :
1631 1661 latex = reverse_latex_symbol[char]
1632 1662 # '\\' replace the \ as well
1633 1663 return '\\'+char,[latex]
1634 1664 except KeyError:
1635 1665 pass
1636 1666 return '', ()
1637 1667
1638 1668
1639 1669 def _formatparamchildren(parameter) -> str:
1640 1670 """
1641 1671 Get parameter name and value from Jedi Private API
1642 1672
1643 1673 Jedi does not expose a simple way to get `param=value` from its API.
1644 1674
1645 1675 Parameters
1646 1676 ----------
1647 1677 parameter
1648 1678 Jedi's function `Param`
1649 1679
1650 1680 Returns
1651 1681 -------
1652 1682 A string like 'a', 'b=1', '*args', '**kwargs'
1653 1683
1654 1684 """
1655 1685 description = parameter.description
1656 1686 if not description.startswith('param '):
1657 1687 raise ValueError('Jedi function parameter description has changed format. '
1658 1688 'Expected "param ...", found %r.' % description)
1659 1689 return description[6:]
1660 1690
1661 1691 def _make_signature(completion)-> str:
1662 1692 """
1663 1693 Make the signature from a jedi completion
1664 1694
1665 1695 Parameters
1666 1696 ----------
1667 1697 completion : jedi.Completion
1668 1698 the Jedi completion object (it need not complete to a function type)
1669 1699
1670 1700 Returns
1671 1701 -------
1672 1702 a string consisting of the function signature, with the parenthesis but
1673 1703 without the function name. example:
1674 1704 `(a, *args, b=1, **kwargs)`
1675 1705
1676 1706 """
1677 1707
1678 1708 # it looks like this might work on jedi 0.17
1679 1709 if hasattr(completion, 'get_signatures'):
1680 1710 signatures = completion.get_signatures()
1681 1711 if not signatures:
1682 1712 return '(?)'
1683 1713
1684 1714 c0 = completion.get_signatures()[0]
1685 1715 return '('+c0.to_string().split('(', maxsplit=1)[1]
1686 1716
1687 1717 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1688 1718 for p in signature.defined_names()) if f])
1689 1719
1690 1720
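For comparison, the same end result (parentheses, no function name) is what the standard library's `inspect.signature` produces for live Python objects; Jedi is needed here because completion often describes objects that do not exist yet:

```python
import inspect

def make_signature(func):
    """Signature string with parentheses but without the function name."""
    return str(inspect.signature(func))

def f(a, *args, b=1, **kwargs):
    pass

print(make_signature(f))  # → (a, *args, b=1, **kwargs)
```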
1691 1721 _CompleteResult = Dict[str, MatcherResult]
1692 1722
1693 1723
1694 1724 DICT_MATCHER_REGEX = re.compile(
1695 1725 r"""(?x)
1696 1726 ( # match a dict-referring (or any getitem-supporting object) expression
1697 1727 .+
1698 1728 )
1699 1729 \[ # open bracket
1700 1730 \s* # and optional whitespace
1701 1731 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1702 1732 # and slices
1703 1733 ((?:(?:
1704 1734 (?: # closed string
1705 1735 [uUbB]? # string prefix (r not handled)
1706 1736 (?:
1707 1737 '(?:[^']|(?<!\\)\\')*'
1708 1738 |
1709 1739 "(?:[^"]|(?<!\\)\\")*"
1710 1740 )
1711 1741 )
1712 1742 |
1713 1743 # capture integers and slices
1714 1744 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1715 1745 |
1716 1746 # integer in bin/hex/oct notation
1717 1747 0[bBxXoO]_?(?:\w|\d)+
1718 1748 )
1719 1749 \s*,\s*
1720 1750 )*)
1721 1751 ((?:
1722 1752 (?: # unclosed string
1723 1753 [uUbB]? # string prefix (r not handled)
1724 1754 (?:
1725 1755 '(?:[^']|(?<!\\)\\')*
1726 1756 |
1727 1757 "(?:[^"]|(?<!\\)\\")*
1728 1758 )
1729 1759 )
1730 1760 |
1731 1761 # unfinished integer
1732 1762 (?:[-+]?\d+)
1733 1763 |
1734 1764 # integer in bin/hex/oct notation
1735 1765 0[bBxXoO]_?(?:\w|\d)+
1736 1766 )
1737 1767 )?
1738 1768 $
1739 1769 """
1740 1770 )
1741 1771
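A simplified sketch of what `DICT_MATCHER_REGEX` above captures (the real pattern additionally handles slices, numeric literals, and string prefixes): group 1 is the subscripted expression, group 2 the already-closed keys, group 3 the unfinished key fragment.

```python
import re

# Simplified sketch of DICT_MATCHER_REGEX: expression, closed string keys,
# then an optional unfinished string key, anchored at end of input.
simple_dict_key_re = re.compile(
    r"""(?x)
    (.+)                                      # subscripted expression
    \[\s*                                     # open bracket
    ((?:(?:'[^']*'|"[^"]*")\s*,\s*)*)         # zero or more closed string keys
    ((?:'[^']*|"[^"]*))?                      # optional unfinished string key
    $
    """
)

match = simple_dict_key_re.match("my_dict['al")
expr, closed_keys, fragment = match.groups()
```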
1742 1772
1743 1773 def _convert_matcher_v1_result_to_v2(
1744 1774 matches: Sequence[str],
1745 1775 type: str,
1746 1776 fragment: Optional[str] = None,
1747 1777 suppress_if_matches: bool = False,
1748 1778 ) -> SimpleMatcherResult:
1749 1779 """Utility to help with transition"""
1750 1780 result = {
1751 1781 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1752 1782 "suppress": (True if matches else False) if suppress_if_matches else False,
1753 1783 }
1754 1784 if fragment is not None:
1755 1785 result["matched_fragment"] = fragment
1756 1786 return cast(SimpleMatcherResult, result)
1757 1787
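The conversion helper above has a simple shape; a stand-alone equivalent, with a `NamedTuple` standing in for IPython's real `SimpleCompletion` class, looks like:

```python
from typing import NamedTuple, Optional, Sequence

# Stand-in for IPython's SimpleCompletion, just to show the v1 -> v2 shape.
class SimpleCompletion(NamedTuple):
    text: str
    type: str

def convert_v1_result_to_v2(
    matches: Sequence[str],
    type: str,
    fragment: Optional[str] = None,
    suppress_if_matches: bool = False,
) -> dict:
    result = {
        "completions": [SimpleCompletion(text=m, type=type) for m in matches],
        # only suppress other matchers when asked to *and* we actually matched
        "suppress": bool(matches) if suppress_if_matches else False,
    }
    if fragment is not None:
        result["matched_fragment"] = fragment
    return result
```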
1758 1788
1759 1789 class IPCompleter(Completer):
1760 1790 """Extension of the completer class with IPython-specific features"""
1761 1791
1762 1792 @observe('greedy')
1763 1793 def _greedy_changed(self, change):
1764 1794 """update the splitter and readline delims when greedy is changed"""
1765 1795 if change["new"]:
1766 1796 self.evaluation = "unsafe"
1767 1797 self.auto_close_dict_keys = True
1768 1798 self.splitter.delims = GREEDY_DELIMS
1769 1799 else:
1770 1800 self.evaluation = "limited"
1771 1801 self.auto_close_dict_keys = False
1772 1802 self.splitter.delims = DELIMS
1773 1803
1774 1804 dict_keys_only = Bool(
1775 1805 False,
1776 1806 help="""
1777 1807 Whether to show dict key matches only.
1778 1808
1779 1809 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1780 1810 """,
1781 1811 )
1782 1812
1783 1813 suppress_competing_matchers = UnionTrait(
1784 1814 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1785 1815 default_value=None,
1786 1816 help="""
1787 1817 Whether to suppress completions from other *Matchers*.
1788 1818
1789 1819 When set to ``None`` (default) the matchers will attempt to auto-detect
1790 1820 whether suppression of other matchers is desirable. For example, at
1791 1821 the beginning of a line followed by `%` we expect a magic completion
1792 1822 to be the only applicable option, and after ``my_dict['`` we usually
1793 1823 expect a completion with an existing dictionary key.
1794 1824
1795 1825 If you want to disable this heuristic and see completions from all matchers,
1796 1826 set ``IPCompleter.suppress_competing_matchers = False``.
1797 1827 To disable the heuristic for specific matchers provide a dictionary mapping:
1798 1828 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1799 1829
1800 1830 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1801 1831 completions to the set of matchers with the highest priority;
1802 1832         this is equivalent to ``IPCompleter.merge_completions = False`` and
1803 1833 can be beneficial for performance, but will sometimes omit relevant
1804 1834 candidates from matchers further down the priority list.
1805 1835 """,
1806 1836 ).tag(config=True)
1807 1837
1808 1838 merge_completions = Bool(
1809 1839 True,
1810 1840 help="""Whether to merge completion results into a single list
1811 1841
1812 1842 If False, only the completion results from the first non-empty
1813 1843 completer will be returned.
1814 1844
1815 1845 As of version 8.6.0, setting the value to ``False`` is an alias for:
1816 1846         ``IPCompleter.suppress_competing_matchers = True``.
1817 1847 """,
1818 1848 ).tag(config=True)
1819 1849
1820 1850 disable_matchers = ListTrait(
1821 1851 Unicode(),
1822 1852 help="""List of matchers to disable.
1823 1853
1824 1854 The list should contain matcher identifiers (see :any:`completion_matcher`).
1825 1855 """,
1826 1856 ).tag(config=True)
1827 1857
1828 1858 omit__names = Enum(
1829 1859 (0, 1, 2),
1830 1860 default_value=2,
1831 1861 help="""Instruct the completer to omit private method names
1832 1862
1833 1863 Specifically, when completing on ``object.<tab>``.
1834 1864
1835 1865 When 2 [default]: all names that start with '_' will be excluded.
1836 1866
1837 1867 When 1: all 'magic' names (``__foo__``) will be excluded.
1838 1868
1839 1869 When 0: nothing will be excluded.
1840 1870 """
1841 1871 ).tag(config=True)
1842 1872 limit_to__all__ = Bool(False,
1843 1873 help="""
1844 1874 DEPRECATED as of version 5.0.
1845 1875
1846 1876 Instruct the completer to use __all__ for the completion
1847 1877
1848 1878 Specifically, when completing on ``object.<tab>``.
1849 1879
1850 1880 When True: only those names in obj.__all__ will be included.
1851 1881
1852 1882 When False [default]: the __all__ attribute is ignored
1853 1883 """,
1854 1884 ).tag(config=True)
1855 1885
1856 1886 profile_completions = Bool(
1857 1887 default_value=False,
1858 1888 help="If True, emit profiling data for completion subsystem using cProfile."
1859 1889 ).tag(config=True)
1860 1890
1861 1891 profiler_output_dir = Unicode(
1862 1892 default_value=".completion_profiles",
1863 1893 help="Template for path at which to output profile data for completions."
1864 1894 ).tag(config=True)
1865 1895
1866 1896 @observe('limit_to__all__')
1867 1897 def _limit_to_all_changed(self, change):
1868 1898 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1869 1899             'value has been deprecated since IPython 5.0; it will be made to have '
1870 1900             'no effect and then be removed in a future version of IPython.',
1871 1901 UserWarning)
1872 1902
1873 1903 def __init__(
1874 1904 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1875 1905 ):
1876 1906 """IPCompleter() -> completer
1877 1907
1878 1908 Return a completer object.
1879 1909
1880 1910 Parameters
1881 1911 ----------
1882 1912 shell
1883 1913 a pointer to the ipython shell itself. This is needed
1884 1914 because this completer knows about magic functions, and those can
1885 1915 only be accessed via the ipython instance.
1886 1916 namespace : dict, optional
1887 1917 an optional dict where completions are performed.
1888 1918 global_namespace : dict, optional
1889 1919 secondary optional dict for completions, to
1890 1920 handle cases (such as IPython embedded inside functions) where
1891 1921 both Python scopes are visible.
1892 1922 config : Config
1893 1923             traitlets Config object
1894 1924 **kwargs
1895 1925 passed to super class unmodified.
1896 1926 """
1897 1927
1898 1928 self.magic_escape = ESC_MAGIC
1899 1929 self.splitter = CompletionSplitter()
1900 1930
1901 1931 # _greedy_changed() depends on splitter and readline being defined:
1902 1932 super().__init__(
1903 1933 namespace=namespace,
1904 1934 global_namespace=global_namespace,
1905 1935 config=config,
1906 1936 **kwargs,
1907 1937 )
1908 1938
1909 1939 # List where completion matches will be stored
1910 1940 self.matches = []
1911 1941 self.shell = shell
1912 1942 # Regexp to split filenames with spaces in them
1913 1943 self.space_name_re = re.compile(r'([^\\] )')
1914 1944 # Hold a local ref. to glob.glob for speed
1915 1945 self.glob = glob.glob
1916 1946
1917 1947 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1918 1948 # buffers, to avoid completion problems.
1919 1949 term = os.environ.get('TERM','xterm')
1920 1950 self.dumb_terminal = term in ['dumb','emacs']
1921 1951
1922 1952 # Special handling of backslashes needed in win32 platforms
1923 1953 if sys.platform == "win32":
1924 1954 self.clean_glob = self._clean_glob_win32
1925 1955 else:
1926 1956 self.clean_glob = self._clean_glob
1927 1957
1928 1958 #regexp to parse docstring for function signature
1929 1959 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1930 1960 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1931 1961 #use this if positional argument name is also needed
1932 1962 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1933 1963
1934 1964 self.magic_arg_matchers = [
1935 1965 self.magic_config_matcher,
1936 1966 self.magic_color_matcher,
1937 1967 ]
1938 1968
1939 1969 # This is set externally by InteractiveShell
1940 1970 self.custom_completers = None
1941 1971
1942 1972 # This is a list of names of unicode characters that can be completed
1943 1973 # into their corresponding unicode value. The list is large, so we
1944 1974 # lazily initialize it on first use. Consuming code should access this
1945 1975 # attribute through the `@unicode_names` property.
1946 1976 self._unicode_names = None
1947 1977
1948 1978 self._backslash_combining_matchers = [
1949 1979 self.latex_name_matcher,
1950 1980 self.unicode_name_matcher,
1951 1981 back_latex_name_matcher,
1952 1982 back_unicode_name_matcher,
1953 1983 self.fwd_unicode_matcher,
1954 1984 ]
1955 1985
1956 1986 if not self.backslash_combining_completions:
1957 1987 for matcher in self._backslash_combining_matchers:
1958 1988 self.disable_matchers.append(_get_matcher_id(matcher))
1959 1989
1960 1990 if not self.merge_completions:
1961 1991 self.suppress_competing_matchers = True
1962 1992
1963 1993 @property
1964 1994 def matchers(self) -> List[Matcher]:
1965 1995 """All active matcher routines for completion"""
1966 1996 if self.dict_keys_only:
1967 1997 return [self.dict_key_matcher]
1968 1998
1969 1999 if self.use_jedi:
1970 2000 return [
1971 2001 *self.custom_matchers,
1972 2002 *self._backslash_combining_matchers,
1973 2003 *self.magic_arg_matchers,
1974 2004 self.custom_completer_matcher,
1975 2005 self.magic_matcher,
1976 2006 self._jedi_matcher,
1977 2007 self.dict_key_matcher,
1978 2008 self.file_matcher,
1979 2009 ]
1980 2010 else:
1981 2011 return [
1982 2012 *self.custom_matchers,
1983 2013 *self._backslash_combining_matchers,
1984 2014 *self.magic_arg_matchers,
1985 2015 self.custom_completer_matcher,
1986 2016 self.dict_key_matcher,
1987 2017 self.magic_matcher,
1988 2018 self.python_matcher,
1989 2019 self.file_matcher,
1990 2020 self.python_func_kw_matcher,
1991 2021 ]
1992 2022
1993 2023 def all_completions(self, text:str) -> List[str]:
1994 2024 """
1995 2025 Wrapper around the completion methods for the benefit of emacs.
1996 2026 """
1997 2027 prefix = text.rpartition('.')[0]
1998 2028 with provisionalcompleter():
1999 2029 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
2000 2030 for c in self.completions(text, len(text))]
2001 2031
2003 2033
2004 2034 def _clean_glob(self, text:str):
2005 2035 return self.glob("%s*" % text)
2006 2036
2007 2037 def _clean_glob_win32(self, text:str):
2008 2038 return [f.replace("\\","/")
2009 2039 for f in self.glob("%s*" % text)]
2010 2040
2011 2041 @context_matcher()
2012 2042 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2013 2043 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2014 2044 matches = self.file_matches(context.token)
2015 2045 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2016 2046 # starts with `/home/`, `C:\`, etc)
2017 2047 return _convert_matcher_v1_result_to_v2(matches, type="path")
2018 2048
2019 2049 def file_matches(self, text: str) -> List[str]:
2020 2050 """Match filenames, expanding ~USER type strings.
2021 2051
2022 2052 Most of the seemingly convoluted logic in this completer is an
2023 2053 attempt to handle filenames with spaces in them. And yet it's not
2024 2054 quite perfect, because Python's readline doesn't expose all of the
2025 2055 GNU readline details needed for this to be done correctly.
2026 2056
2027 2057 For a filename with a space in it, the printed completions will be
2028 2058 only the parts after what's already been typed (instead of the
2029 2059 full completions, as is normally done). I don't think with the
2030 2060 current (as of Python 2.3) Python readline it's possible to do
2031 2061 better.
2032 2062
2033 2063 .. deprecated:: 8.6
2034 2064 You can use :meth:`file_matcher` instead.
2035 2065 """
2036 2066
2037 2067 # chars that require escaping with backslash - i.e. chars
2038 2068 # that readline treats incorrectly as delimiters, but we
2039 2069 # don't want to treat as delimiters in filename matching
2040 2070 # when escaped with backslash
2041 2071 if text.startswith('!'):
2042 2072 text = text[1:]
2043 2073 text_prefix = u'!'
2044 2074 else:
2045 2075 text_prefix = u''
2046 2076
2047 2077 text_until_cursor = self.text_until_cursor
2048 2078 # track strings with open quotes
2049 2079 open_quotes = has_open_quotes(text_until_cursor)
2050 2080
2051 2081 if '(' in text_until_cursor or '[' in text_until_cursor:
2052 2082 lsplit = text
2053 2083 else:
2054 2084 try:
2055 2085 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2056 2086 lsplit = arg_split(text_until_cursor)[-1]
2057 2087 except ValueError:
2058 2088 # typically an unmatched ", or backslash without escaped char.
2059 2089 if open_quotes:
2060 2090 lsplit = text_until_cursor.split(open_quotes)[-1]
2061 2091 else:
2062 2092 return []
2063 2093 except IndexError:
2064 2094 # tab pressed on empty line
2065 2095 lsplit = ""
2066 2096
2067 2097 if not open_quotes and lsplit != protect_filename(lsplit):
2068 2098 # if protectables are found, do matching on the whole escaped name
2069 2099 has_protectables = True
2070 2100 text0,text = text,lsplit
2071 2101 else:
2072 2102 has_protectables = False
2073 2103 text = os.path.expanduser(text)
2074 2104
2075 2105 if text == "":
2076 2106 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2077 2107
2078 2108 # Compute the matches from the filesystem
2079 2109 if sys.platform == 'win32':
2080 2110 m0 = self.clean_glob(text)
2081 2111 else:
2082 2112 m0 = self.clean_glob(text.replace('\\', ''))
2083 2113
2084 2114 if has_protectables:
2085 2115 # If we had protectables, we need to revert our changes to the
2086 2116 # beginning of filename so that we don't double-write the part
2087 2117 # of the filename we have so far
2088 2118 len_lsplit = len(lsplit)
2089 2119 matches = [text_prefix + text0 +
2090 2120 protect_filename(f[len_lsplit:]) for f in m0]
2091 2121 else:
2092 2122 if open_quotes:
2093 2123 # if we have a string with an open quote, we don't need to
2094 2124 # protect the names beyond the quote (and we _shouldn't_, as
2095 2125 # it would cause bugs when the filesystem call is made).
2096 2126 matches = m0 if sys.platform == "win32" else\
2097 2127 [protect_filename(f, open_quotes) for f in m0]
2098 2128 else:
2099 2129 matches = [text_prefix +
2100 2130 protect_filename(f) for f in m0]
2101 2131
2102 2132 # Mark directories in input list by appending '/' to their names.
2103 2133 return [x+'/' if os.path.isdir(x) else x for x in matches]
2104 2134
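The escaping step used throughout `file_matches` can be sketched as follows; the exact set of protectable characters is an assumption here (the real `PROTECTABLES` constant lives elsewhere in this module):

```python
# Hedged sketch of protect_filename: backslash-escape characters that
# readline would otherwise treat as delimiters. PROTECTABLES is an
# illustrative subset, not the module's real constant.
PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

def protect_filename(name: str) -> str:
    return "".join('\\' + ch if ch in PROTECTABLES else ch for ch in name)
```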
2105 2135 @context_matcher()
2106 2136 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2107 2137 """Match magics."""
2108 2138 text = context.token
2109 2139 matches = self.magic_matches(text)
2110 2140 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2111 2141 is_magic_prefix = len(text) > 0 and text[0] == "%"
2112 2142 result["suppress"] = is_magic_prefix and bool(result["completions"])
2113 2143 return result
2114 2144
2115 def magic_matches(self, text: str):
2145 def magic_matches(self, text: str) -> List[str]:
2116 2146 """Match magics.
2117 2147
2118 2148 .. deprecated:: 8.6
2119 2149 You can use :meth:`magic_matcher` instead.
2120 2150 """
2121 2151 # Get all shell magics now rather than statically, so magics loaded at
2122 2152 # runtime show up too.
2123 2153 lsm = self.shell.magics_manager.lsmagic()
2124 2154 line_magics = lsm['line']
2125 2155 cell_magics = lsm['cell']
2126 2156 pre = self.magic_escape
2127 2157 pre2 = pre+pre
2128 2158
2129 2159 explicit_magic = text.startswith(pre)
2130 2160
2131 2161 # Completion logic:
2132 2162 # - user gives %%: only do cell magics
2133 2163 # - user gives %: do both line and cell magics
2134 2164 # - no prefix: do both
2135 2165 # In other words, line magics are skipped if the user gives %% explicitly
2136 2166 #
2137 2167 # We also exclude magics that match any currently visible names:
2138 2168 # https://github.com/ipython/ipython/issues/4877, unless the user has
2139 2169 # typed a %:
2140 2170 # https://github.com/ipython/ipython/issues/10754
2141 2171 bare_text = text.lstrip(pre)
2142 2172 global_matches = self.global_matches(bare_text)
2143 2173 if not explicit_magic:
2144 2174 def matches(magic):
2145 2175 """
2146 2176 Filter magics, in particular remove magics that match
2147 2177 a name present in global namespace.
2148 2178 """
2149 2179 return ( magic.startswith(bare_text) and
2150 2180 magic not in global_matches )
2151 2181 else:
2152 2182 def matches(magic):
2153 2183 return magic.startswith(bare_text)
2154 2184
2155 2185 comp = [ pre2+m for m in cell_magics if matches(m)]
2156 2186 if not text.startswith(pre2):
2157 2187 comp += [ pre+m for m in line_magics if matches(m)]
2158 2188
2159 2189 return comp
2160 2190
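The prefix rules in the comment block above (`%%` restricts to cell magics; `%` or no prefix offers both) can be sketched with a fixed magic set. The magic names below are only examples:

```python
# Sketch of the %%/% prefix logic with hard-coded example magics.
def match_magics(text, line_magics=("timeit", "run"), cell_magics=("timeit", "writefile")):
    pre, pre2 = "%", "%%"
    bare = text.lstrip(pre)  # note: lstrip removes *all* leading % chars
    comp = [pre2 + m for m in cell_magics if m.startswith(bare)]
    if not text.startswith(pre2):
        # line magics are skipped only when %% was given explicitly
        comp += [pre + m for m in line_magics if m.startswith(bare)]
    return comp
```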
2161 2191 @context_matcher()
2162 2192 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2163 2193 """Match class names and attributes for %config magic."""
2164 2194 # NOTE: uses `line_buffer` equivalent for compatibility
2165 2195 matches = self.magic_config_matches(context.line_with_cursor)
2166 2196 return _convert_matcher_v1_result_to_v2(matches, type="param")
2167 2197
2168 2198 def magic_config_matches(self, text: str) -> List[str]:
2169 2199 """Match class names and attributes for %config magic.
2170 2200
2171 2201 .. deprecated:: 8.6
2172 2202 You can use :meth:`magic_config_matcher` instead.
2173 2203 """
2174 2204 texts = text.strip().split()
2175 2205
2176 2206 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2177 2207 # get all configuration classes
2178 2208 classes = sorted(set([ c for c in self.shell.configurables
2179 2209 if c.__class__.class_traits(config=True)
2180 2210 ]), key=lambda x: x.__class__.__name__)
2181 2211 classnames = [ c.__class__.__name__ for c in classes ]
2182 2212
2183 2213 # return all classnames if config or %config is given
2184 2214 if len(texts) == 1:
2185 2215 return classnames
2186 2216
2187 2217 # match classname
2188 2218 classname_texts = texts[1].split('.')
2189 2219 classname = classname_texts[0]
2190 2220 classname_matches = [ c for c in classnames
2191 2221 if c.startswith(classname) ]
2192 2222
2193 2223 # return matched classes or the matched class with attributes
2194 2224 if texts[1].find('.') < 0:
2195 2225 return classname_matches
2196 2226 elif len(classname_matches) == 1 and \
2197 2227 classname_matches[0] == classname:
2198 2228 cls = classes[classnames.index(classname)].__class__
2199 2229 help = cls.class_get_help()
2200 2230 # strip leading '--' from cl-args:
2201 2231 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2202 2232 return [ attr.split('=')[0]
2203 2233 for attr in help.strip().splitlines()
2204 2234 if attr.startswith(texts[1]) ]
2205 2235 return []
2206 2236
2207 2237 @context_matcher()
2208 2238 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2209 2239 """Match color schemes for %colors magic."""
2210 2240 # NOTE: uses `line_buffer` equivalent for compatibility
2211 2241 matches = self.magic_color_matches(context.line_with_cursor)
2212 2242 return _convert_matcher_v1_result_to_v2(matches, type="param")
2213 2243
2214 2244 def magic_color_matches(self, text: str) -> List[str]:
2215 2245 """Match color schemes for %colors magic.
2216 2246
2217 2247 .. deprecated:: 8.6
2218 2248 You can use :meth:`magic_color_matcher` instead.
2219 2249 """
2220 2250 texts = text.split()
2221 2251 if text.endswith(' '):
2222 2252 # .split() strips off the trailing whitespace. Add '' back
2223 2253 # so that: '%colors ' -> ['%colors', '']
2224 2254 texts.append('')
2225 2255
2226 2256 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2227 2257 prefix = texts[1]
2228 2258 return [ color for color in InspectColors.keys()
2229 2259 if color.startswith(prefix) ]
2230 2260 return []
2231 2261
2232 2262 @context_matcher(identifier="IPCompleter.jedi_matcher")
2233 2263 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2234 2264 matches = self._jedi_matches(
2235 2265 cursor_column=context.cursor_position,
2236 2266 cursor_line=context.cursor_line,
2237 2267 text=context.full_text,
2238 2268 )
2239 2269 return {
2240 2270 "completions": matches,
2241 2271 # static analysis should not suppress other matchers
2242 2272 "suppress": False,
2243 2273 }
2244 2274
2245 2275 def _jedi_matches(
2246 2276 self, cursor_column: int, cursor_line: int, text: str
2247 2277 ) -> Iterator[_JediCompletionLike]:
2248 2278 """
2249 2279         Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
2250 2280 cursor position.
2251 2281
2252 2282 Parameters
2253 2283 ----------
2254 2284 cursor_column : int
2255 2285 column position of the cursor in ``text``, 0-indexed.
2256 2286 cursor_line : int
2257 2287 line position of the cursor in ``text``, 0-indexed
2258 2288 text : str
2259 2289 text to complete
2260 2290
2261 2291 Notes
2262 2292 -----
2263 2293         If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
2264 2294 object containing a string with the Jedi debug information attached.
2265 2295
2266 2296 .. deprecated:: 8.6
2267 2297 You can use :meth:`_jedi_matcher` instead.
2268 2298 """
2269 2299 namespaces = [self.namespace]
2270 2300 if self.global_namespace is not None:
2271 2301 namespaces.append(self.global_namespace)
2272 2302
2273 2303 completion_filter = lambda x:x
2274 2304 offset = cursor_to_position(text, cursor_line, cursor_column)
2275 2305 # filter output if we are completing for object members
2276 2306 if offset:
2277 2307 pre = text[offset-1]
2278 2308 if pre == '.':
2279 2309 if self.omit__names == 2:
2280 2310 completion_filter = lambda c:not c.name.startswith('_')
2281 2311 elif self.omit__names == 1:
2282 2312 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2283 2313 elif self.omit__names == 0:
2284 2314 completion_filter = lambda x:x
2285 2315 else:
2286 2316 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2287 2317
2288 2318 interpreter = jedi.Interpreter(text[:offset], namespaces)
2289 2319 try_jedi = True
2290 2320
2291 2321 try:
2292 2322 # find the first token in the current tree -- if it is a ' or " then we are in a string
2293 2323 completing_string = False
2294 2324 try:
2295 2325 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2296 2326 except StopIteration:
2297 2327 pass
2298 2328 else:
2299 2329 # note the value may be ', ", or it may also be ''' or """, or
2300 2330 # in some cases, """what/you/typed..., but all of these are
2301 2331 # strings.
2302 2332 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2303 2333
2304 2334 # if we are in a string jedi is likely not the right candidate for
2305 2335 # now. Skip it.
2306 2336 try_jedi = not completing_string
2307 2337 except Exception as e:
2308 2338             # many things can go wrong; we are using a private API, just don't crash.
2309 2339 if self.debug:
2310 2340                 print("Error detecting if completing a non-finished string:", e, '|')
2311 2341
2312 2342 if not try_jedi:
2313 2343 return iter([])
2314 2344 try:
2315 2345 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2316 2346 except Exception as e:
2317 2347 if self.debug:
2318 2348 return iter(
2319 2349 [
2320 2350 _FakeJediCompletion(
2321 2351                             'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2322 2352 % (e)
2323 2353 )
2324 2354 ]
2325 2355 )
2326 2356 else:
2327 2357 return iter([])
2328 2358
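`cursor_to_position` (imported from IPython's utilities) maps the 0-indexed `(line, column)` pair used above onto a flat string offset; a hedged re-implementation of that mapping:

```python
# Convert a 0-indexed (line, column) cursor into a flat offset into `text`.
# Sketch of IPython's cursor_to_position helper, assuming '\n' line endings.
def cursor_to_position(text: str, line: int, column: int) -> int:
    lines = text.split('\n')
    # each preceding line contributes its length plus the '\n' that split() removed
    return sum(len(l) + 1 for l in lines[:line]) + column
```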
2329 2359 @context_matcher()
2330 2360 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2331 2361 """Match attributes or global python names"""
2332 2362 text = context.line_with_cursor
2333 2363 if "." in text:
2334 2364 try:
2335 2365 matches, fragment = self._attr_matches(text, include_prefix=False)
2336 2366 if text.endswith(".") and self.omit__names:
2337 2367 if self.omit__names == 1:
2338 2368 # true if txt is _not_ a __ name, false otherwise:
2339 2369 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2340 2370 else:
2341 2371 # true if txt is _not_ a _ name, false otherwise:
2342 2372 no__name = (
2343 2373 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2344 2374 is None
2345 2375 )
2346 2376 matches = filter(no__name, matches)
2347 2377 return _convert_matcher_v1_result_to_v2(
2348 2378 matches, type="attribute", fragment=fragment
2349 2379 )
2350 2380 except NameError:
2351 2381 # catches <undefined attributes>.<tab>
2352 2382 matches = []
2353 2383 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2354 2384 else:
2355 2385 matches = self.global_matches(context.token)
2356 2386 # TODO: maybe distinguish between functions, modules and just "variables"
2357 2387 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2358 2388
2359 2389 @completion_matcher(api_version=1)
2360 2390 def python_matches(self, text: str) -> Iterable[str]:
2361 2391 """Match attributes or global python names.
2362 2392
2363 2393 .. deprecated:: 8.27
2364 2394 You can use :meth:`python_matcher` instead."""
2365 2395 if "." in text:
2366 2396 try:
2367 2397 matches = self.attr_matches(text)
2368 2398 if text.endswith('.') and self.omit__names:
2369 2399 if self.omit__names == 1:
2370 2400 # true if txt is _not_ a __ name, false otherwise:
2371 2401 no__name = (lambda txt:
2372 2402 re.match(r'.*\.__.*?__',txt) is None)
2373 2403 else:
2374 2404 # true if txt is _not_ a _ name, false otherwise:
2375 2405 no__name = (lambda txt:
2376 2406 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2377 2407 matches = filter(no__name, matches)
2378 2408 except NameError:
2379 2409 # catches <undefined attributes>.<tab>
2380 2410 matches = []
2381 2411 else:
2382 2412 matches = self.global_matches(text)
2383 2413 return matches
2384 2414
2385 2415 def _default_arguments_from_docstring(self, doc):
2386 2416 """Parse the first line of docstring for call signature.
2387 2417
2388 2418 Docstring should be of the form 'min(iterable[, key=func])\n'.
2389 2419 It can also parse cython docstring of the form
2390 2420 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2391 2421 """
2392 2422 if doc is None:
2393 2423 return []
2394 2424
2395 2425         # care only about the first line
2396 2426 line = doc.lstrip().splitlines()[0]
2397 2427
2398 2428 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2399 2429 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2400 2430 sig = self.docstring_sig_re.search(line)
2401 2431 if sig is None:
2402 2432 return []
2403 2433         # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
2404 2434 sig = sig.groups()[0].split(',')
2405 2435 ret = []
2406 2436 for s in sig:
2407 2437 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2408 2438 ret += self.docstring_kwd_re.findall(s)
2409 2439 return ret
2410 2440
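Using the same two regexes the completer compiles in `__init__`, the parse above goes docstring line, then argument list, then keyword names:

```python
import re

# The two patterns compiled in IPCompleter.__init__.
docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')

def default_arguments_from_docstring(doc):
    """Extract keyword-argument names from a docstring's first line."""
    if doc is None:
        return []
    line = doc.lstrip().splitlines()[0]
    sig = docstring_sig_re.search(line)
    if sig is None:
        return []
    ret = []
    # 'iterable[, key=func]' -> ['iterable[', ' key=func]'] -> ['key']
    for part in sig.groups()[0].split(','):
        ret += docstring_kwd_re.findall(part)
    return ret
```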
2411 2441 def _default_arguments(self, obj):
2412 2442 """Return the list of default arguments of obj if it is callable,
2413 2443 or empty list otherwise."""
2414 2444 call_obj = obj
2415 2445 ret = []
2416 2446 if inspect.isbuiltin(obj):
2417 2447 pass
2418 2448 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2419 2449 if inspect.isclass(obj):
2420 2450 #for cython embedsignature=True the constructor docstring
2421 2451 #belongs to the object itself not __init__
2422 2452 ret += self._default_arguments_from_docstring(
2423 2453 getattr(obj, '__doc__', ''))
2424 2454 # for classes, check for __init__,__new__
2425 2455 call_obj = (getattr(obj, '__init__', None) or
2426 2456 getattr(obj, '__new__', None))
2427 2457 # for all others, check if they are __call__able
2428 2458 elif hasattr(obj, '__call__'):
2429 2459 call_obj = obj.__call__
2430 2460 ret += self._default_arguments_from_docstring(
2431 2461 getattr(call_obj, '__doc__', ''))
2432 2462
2433 2463 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2434 2464 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2435 2465
2436 2466 try:
2437 2467 sig = inspect.signature(obj)
2438 2468 ret.extend(k for k, v in sig.parameters.items() if
2439 2469 v.kind in _keeps)
2440 2470 except ValueError:
2441 2471 pass
2442 2472
2443 2473 return list(set(ret))
2444 2474
2445 2475 @context_matcher()
2446 2476 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2447 2477 """Match named parameters (kwargs) of the last open function."""
2448 2478 matches = self.python_func_kw_matches(context.token)
2449 2479 return _convert_matcher_v1_result_to_v2(matches, type="param")
2450 2480
2451 2481 def python_func_kw_matches(self, text):
2452 2482 """Match named parameters (kwargs) of the last open function.
2453 2483
2454 2484 .. deprecated:: 8.6
2455 2485 You can use :meth:`python_func_kw_matcher` instead.
2456 2486 """
2457 2487
2458 2488 if "." in text: # a parameter cannot be dotted
2459 2489 return []
2460 2490 try: regexp = self.__funcParamsRegex
2461 2491 except AttributeError:
2462 2492 regexp = self.__funcParamsRegex = re.compile(r'''
2463 2493 '.*?(?<!\\)' | # single quoted strings or
2464 2494 ".*?(?<!\\)" | # double quoted strings or
2465 2495 \w+ | # identifier
2466 2496 \S # other characters
2467 2497 ''', re.VERBOSE | re.DOTALL)
2468 2498 # 1. find the nearest identifier that comes before an unclosed
2469 2499 # parenthesis before the cursor
2470 2500 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2471 2501 tokens = regexp.findall(self.text_until_cursor)
2472 iterTokens = reversed(tokens); openPar = 0
2502 iterTokens = reversed(tokens)
2503 openPar = 0
2473 2504
2474 2505 for token in iterTokens:
2475 2506 if token == ')':
2476 2507 openPar -= 1
2477 2508 elif token == '(':
2478 2509 openPar += 1
2479 2510 if openPar > 0:
2480 2511 # found the last unclosed parenthesis
2481 2512 break
2482 2513 else:
2483 2514 return []
2484 2515 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2485 2516 ids = []
2486 2517 isId = re.compile(r'\w+$').match
2487 2518
2488 2519 while True:
2489 2520 try:
2490 2521 ids.append(next(iterTokens))
2491 2522 if not isId(ids[-1]):
2492 ids.pop(); break
2523 ids.pop()
2524 break
2493 2525 if not next(iterTokens) == '.':
2494 2526 break
2495 2527 except StopIteration:
2496 2528 break
2497 2529
2498 2530 # Find all named arguments already assigned to, as to avoid suggesting
2499 2531 # them again
2500 2532 usedNamedArgs = set()
2501 2533 par_level = -1
2502 2534 for token, next_token in zip(tokens, tokens[1:]):
2503 2535 if token == '(':
2504 2536 par_level += 1
2505 2537 elif token == ')':
2506 2538 par_level -= 1
2507 2539
2508 2540 if par_level != 0:
2509 2541 continue
2510 2542
2511 2543 if next_token != '=':
2512 2544 continue
2513 2545
2514 2546 usedNamedArgs.add(token)
2515 2547
2516 2548 argMatches = []
2517 2549 try:
2518 2550 callableObj = '.'.join(ids[::-1])
2519 2551 namedArgs = self._default_arguments(eval(callableObj,
2520 2552 self.namespace))
2521 2553
2522 2554 # Remove used named arguments from the list, no need to show twice
2523 2555 for namedArg in set(namedArgs) - usedNamedArgs:
2524 2556 if namedArg.startswith(text):
2525 2557 argMatches.append("%s=" %namedArg)
2526 2558 except:
2527 2559 pass
2528 2560
2529 2561 return argMatches
2530 2562
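The backward scan above (steps 1 and 2) can be sketched in isolation. This is a simplified stand-alone version with a hypothetical helper name, ignoring string literals and the named-argument bookkeeping:

```python
import re

TOKEN = re.compile(r"\w+|\S")

def enclosing_callable(text_until_cursor):
    """Simplified sketch: find the dotted name whose call is still open at the cursor."""
    tokens = TOKEN.findall(text_until_cursor)
    it = reversed(tokens)
    depth = 0
    for tok in it:  # 1. walk backwards to the nearest unclosed '('
        if tok == ')':
            depth -= 1
        elif tok == '(':
            depth += 1
            if depth > 0:
                break
    else:
        return None  # no unclosed parenthesis before the cursor
    ids = []  # 2. collect the dotted name sitting right before that '('
    while True:
        try:
            ids.append(next(it))
            if not re.match(r"\w+$", ids[-1]):
                ids.pop()
                break
            if next(it) != '.':
                break
        except StopIteration:
            break
    return '.'.join(reversed(ids)) or None
```

For `"foo (1+bar(x), pa"` this returns `"foo"`: the inner `bar(...)` call is balanced, so the scan settles on the outer unclosed parenthesis.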
2531 2563 @staticmethod
2532 2564 def _get_keys(obj: Any) -> List[Any]:
2533 2565 # Objects can define their own completions by defining an
2534 2566 # _ipython_key_completions_() method.
2535 2567 method = get_real_method(obj, '_ipython_key_completions_')
2536 2568 if method is not None:
2537 2569 return method()
2538 2570
2539 2571 # Special case some common in-memory dict-like types
2540 2572 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2541 2573 try:
2542 2574 return list(obj.keys())
2543 2575 except Exception:
2544 2576 return []
2545 2577 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2546 2578 try:
2547 2579 return list(obj.obj.keys())
2548 2580 except Exception:
2549 2581 return []
2550 2582 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2551 2583 _safe_isinstance(obj, 'numpy', 'void'):
2552 2584 return obj.dtype.names or []
2553 2585 return []
2554 2586
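The `_ipython_key_completions_` protocol checked first in `_get_keys` is public: any object can advertise its own keys. A minimal illustration (the class name and data are made up):

```python
class Settings:
    """Dict-like object advertising its keys to IPython's completer."""

    def __init__(self):
        self._data = {"host": "localhost", "port": 8080}

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        # called by the completer when the user types: settings["<tab>
        return list(self._data)
```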
2555 2587 @context_matcher()
2556 2588 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2557 2589 """Match string keys in a dictionary, after e.g. ``foo[``."""
2558 2590 matches = self.dict_key_matches(context.token)
2559 2591 return _convert_matcher_v1_result_to_v2(
2560 2592 matches, type="dict key", suppress_if_matches=True
2561 2593 )
2562 2594
2563 2595 def dict_key_matches(self, text: str) -> List[str]:
2564 2596 """Match string keys in a dictionary, after e.g. ``foo[``.
2565 2597
2566 2598 .. deprecated:: 8.6
2567 2599 You can use :meth:`dict_key_matcher` instead.
2568 2600 """
2569 2601
2570 2602 # Short-circuit on closed dictionary (regular expression would
2571 2603 # not match anyway, but would take quite a while).
2572 2604 if self.text_until_cursor.strip().endswith("]"):
2573 2605 return []
2574 2606
2575 2607 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2576 2608
2577 2609 if match is None:
2578 2610 return []
2579 2611
2580 2612 expr, prior_tuple_keys, key_prefix = match.groups()
2581 2613
2582 2614 obj = self._evaluate_expr(expr)
2583 2615
2584 2616 if obj is not_found:
2585 2617 return []
2586 2618
2587 2619 keys = self._get_keys(obj)
2588 2620 if not keys:
2589 2621 return keys
2590 2622
2591 2623 tuple_prefix = guarded_eval(
2592 2624 prior_tuple_keys,
2593 2625 EvaluationContext(
2594 2626 globals=self.global_namespace,
2595 2627 locals=self.namespace,
2596 2628 evaluation=self.evaluation, # type: ignore
2597 2629 in_subscript=True,
2598 2630 ),
2599 2631 )
2600 2632
2601 2633 closing_quote, token_offset, matches = match_dict_keys(
2602 2634 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2603 2635 )
2604 2636 if not matches:
2605 2637 return []
2606 2638
2607 2639 # get the cursor position of
2608 2640 # - the text being completed
2609 2641 # - the start of the key text
2610 2642 # - the start of the completion
2611 2643 text_start = len(self.text_until_cursor) - len(text)
2612 2644 if key_prefix:
2613 2645 key_start = match.start(3)
2614 2646 completion_start = key_start + token_offset
2615 2647 else:
2616 2648 key_start = completion_start = match.end()
2617 2649
2618 2650 # grab the leading prefix, to make sure all completions start with `text`
2619 2651 if text_start > key_start:
2620 2652 leading = ''
2621 2653 else:
2622 2654 leading = text[text_start:completion_start]
2623 2655
2624 2656 # append closing quote and bracket as appropriate
2625 2657 # this is *not* appropriate if the opening quote or bracket is outside
2626 2658 # the text given to this method, e.g. `d["""a\nt
2627 2659 can_close_quote = False
2628 2660 can_close_bracket = False
2629 2661
2630 2662 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2631 2663
2632 2664 if continuation.startswith(closing_quote):
2633 2665 # do not close if already closed, e.g. `d['a<tab>'`
2634 2666 continuation = continuation[len(closing_quote) :]
2635 2667 else:
2636 2668 can_close_quote = True
2637 2669
2638 2670 continuation = continuation.strip()
2639 2671
2640 2672 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2641 2673 # handling it is out of scope, so let's avoid appending suffixes.
2642 2674 has_known_tuple_handling = isinstance(obj, dict)
2643 2675
2644 2676 can_close_bracket = (
2645 2677 not continuation.startswith("]") and self.auto_close_dict_keys
2646 2678 )
2647 2679 can_close_tuple_item = (
2648 2680 not continuation.startswith(",")
2649 2681 and has_known_tuple_handling
2650 2682 and self.auto_close_dict_keys
2651 2683 )
2652 2684 can_close_quote = can_close_quote and self.auto_close_dict_keys
2653 2685
2654 2686 # fast path if closing quote should be appended but no suffix is allowed
2655 2687 if not can_close_quote and not can_close_bracket and closing_quote:
2656 2688 return [leading + k for k in matches]
2657 2689
2658 2690 results = []
2659 2691
2660 2692 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2661 2693
2662 2694 for k, state_flag in matches.items():
2663 2695 result = leading + k
2664 2696 if can_close_quote and closing_quote:
2665 2697 result += closing_quote
2666 2698
2667 2699 if state_flag == end_of_tuple_or_item:
2668 2700 # We do not know which suffix to add,
2669 2701 # e.g. both tuple item and string
2670 2702 # match this item.
2671 2703 pass
2672 2704
2673 2705 if state_flag in end_of_tuple_or_item and can_close_bracket:
2674 2706 result += "]"
2675 2707 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2676 2708 result += ", "
2677 2709 results.append(result)
2678 2710 return results
2679 2711
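The quote/bracket auto-closing decision above can be distilled into a small sketch; `suffix_for` is a hypothetical helper, not part of the real API, and it ignores the tuple-key cases:

```python
def suffix_for(key, closing_quote, continuation, auto_close=True):
    """Simplified sketch of the suffix decision: append the closing quote and
    bracket only when the text after the cursor has not already closed them."""
    result = key
    rest = continuation.strip()
    if rest.startswith(closing_quote):
        # already closed, e.g. `d['a<tab>'` -- consume it instead of duplicating
        rest = rest[len(closing_quote):].strip()
    elif auto_close:
        result += closing_quote
    if auto_close and not rest.startswith("]"):
        result += "]"
    return result
```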
2680 2712 @context_matcher()
2681 2713 def unicode_name_matcher(self, context: CompletionContext):
2682 2714 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2683 2715 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2684 2716 return _convert_matcher_v1_result_to_v2(
2685 2717 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2686 2718 )
2687 2719
2688 2720 @staticmethod
2689 2721 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2689 2721 """Match Latex-like syntax for unicode characters based
2690 2722 on the name of the character.
2692 2724
2693 2725 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2694 2726
2695 2727 Works only on valid Python 3 identifiers, or on combining characters that
2696 2728 will combine to form a valid identifier.
2697 2729 """
2698 2730 slashpos = text.rfind('\\')
2699 2731 if slashpos > -1:
2700 2732 s = text[slashpos+1:]
2701 2733 try:
2702 2734 unic = unicodedata.lookup(s)
2703 2735 # allow combining chars
2704 2736 if ('a'+unic).isidentifier():
2705 2737 return '\\'+s,[unic]
2706 2738 except KeyError:
2707 2739 pass
2708 2740 return '', []
2709 2741
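The core of the lookup above is the standard library's `unicodedata.lookup`, which resolves an official character name; the `('a' + unic).isidentifier()` check is what admits combining characters. For example:

```python
import unicodedata

# resolve an official Unicode name to the character itself
eta = unicodedata.lookup("GREEK SMALL LETTER ETA")
print(eta)  # η

# a combining character is not an identifier on its own,
# but becomes valid identifier material after a base character:
vec = unicodedata.lookup("COMBINING RIGHT ARROW ABOVE")
print(vec.isidentifier(), ("a" + vec).isidentifier())  # False True
```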
2710 2742 @context_matcher()
2711 2743 def latex_name_matcher(self, context: CompletionContext):
2712 2744 """Match Latex syntax for unicode characters.
2713 2745
2714 2746 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2715 2747 """
2716 2748 fragment, matches = self.latex_matches(context.text_until_cursor)
2717 2749 return _convert_matcher_v1_result_to_v2(
2718 2750 matches, type="latex", fragment=fragment, suppress_if_matches=True
2719 2751 )
2720 2752
2721 2753 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2722 2754 """Match Latex syntax for unicode characters.
2723 2755
2724 2756 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2725 2757
2726 2758 .. deprecated:: 8.6
2727 2759 You can use :meth:`latex_name_matcher` instead.
2728 2760 """
2729 2761 slashpos = text.rfind('\\')
2730 2762 if slashpos > -1:
2731 2763 s = text[slashpos:]
2732 2764 if s in latex_symbols:
2733 2765 # Try to complete a full latex symbol to unicode
2734 2766 # \\alpha -> α
2735 2767 return s, [latex_symbols[s]]
2736 2768 else:
2737 2769 # If a user has partially typed a latex symbol, give them
2738 2770 # a full list of options \al -> [\aleph, \alpha]
2739 2771 matches = [k for k in latex_symbols if k.startswith(s)]
2740 2772 if matches:
2741 2773 return s, matches
2742 2774 return '', ()
2743 2775
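The two branches above amount to an exact-or-prefix lookup in the symbol table. A sketch with a tiny stand-in table (the real one lives in `IPython.core.latex_symbols`):

```python
latex_symbols = {"\\alpha": "α", "\\aleph": "ℵ", "\\beta": "β"}  # tiny stand-in

def latex_matches(text):
    """Exact symbol -> expand to unicode; otherwise list prefix matches."""
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", ()
    s = text[slashpos:]
    if s in latex_symbols:
        return s, [latex_symbols[s]]  # \alpha -> α
    matches = [k for k in latex_symbols if k.startswith(s)]
    return (s, matches) if matches else ("", ())
```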
2744 2776 @context_matcher()
2745 2777 def custom_completer_matcher(self, context):
2746 2778 """Dispatch custom completer.
2747 2779
2748 2780 If a match is found, suppresses all other matchers except for Jedi.
2749 2781 """
2750 2782 matches = self.dispatch_custom_completer(context.token) or []
2751 2783 result = _convert_matcher_v1_result_to_v2(
2752 2784 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2753 2785 )
2754 2786 result["ordered"] = True
2755 2787 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2756 2788 return result
2757 2789
2758 2790 def dispatch_custom_completer(self, text):
2759 2791 """
2760 2792 .. deprecated:: 8.6
2761 2793 You can use :meth:`custom_completer_matcher` instead.
2762 2794 """
2763 2795 if not self.custom_completers:
2764 2796 return
2765 2797
2766 2798 line = self.line_buffer
2767 2799 if not line.strip():
2768 2800 return None
2769 2801
2770 2802 # Create a little structure to pass all the relevant information about
2771 2803 # the current completion to any custom completer.
2772 2804 event = SimpleNamespace()
2773 2805 event.line = line
2774 2806 event.symbol = text
2775 2807 cmd = line.split(None,1)[0]
2776 2808 event.command = cmd
2777 2809 event.text_until_cursor = self.text_until_cursor
2778 2810
2779 2811 # for foo etc, try also to find completer for %foo
2780 2812 if not cmd.startswith(self.magic_escape):
2781 2813 try_magic = self.custom_completers.s_matches(
2782 2814 self.magic_escape + cmd)
2783 2815 else:
2784 2816 try_magic = []
2785 2817
2786 2818 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2787 2819 try_magic,
2788 2820 self.custom_completers.flat_matches(self.text_until_cursor)):
2789 2821 try:
2790 2822 res = c(event)
2791 2823 if res:
2792 2824 # first, try case sensitive match
2793 2825 withcase = [r for r in res if r.startswith(text)]
2794 2826 if withcase:
2795 2827 return withcase
2796 2828 # if none, then case insensitive ones are ok too
2797 2829 text_low = text.lower()
2798 2830 return [r for r in res if r.lower().startswith(text_low)]
2799 2831 except TryNext:
2800 2832 pass
2801 2833 except KeyboardInterrupt:
2802 2834 """
2803 2835 If a custom completer takes too long,
2804 2836 let the keyboard interrupt abort it and return nothing.
2805 2837 """
2806 2838 break
2807 2839
2808 2840 return None
2809 2841
2810 2842 def completions(self, text: str, offset: int)->Iterator[Completion]:
2811 2843 """
2812 2844 Returns an iterator over the possible completions
2813 2845
2814 2846 .. warning::
2815 2847
2816 2848 Unstable
2817 2849
2818 2850 This function is unstable, API may change without warning.
2819 2851 It will also raise unless used in the proper context manager.
2820 2852
2821 2853 Parameters
2822 2854 ----------
2823 2855 text : str
2824 2856 Full text of the current input, multi line string.
2825 2857 offset : int
2826 2858 Integer representing the position of the cursor in ``text``. Offset
2827 2859 is 0-based indexed.
2828 2860
2829 2861 Yields
2830 2862 ------
2831 2863 Completion
2832 2864
2833 2865 Notes
2834 2866 -----
2835 2867 The cursor on a text can either be seen as being "in between"
2836 2868 characters or "on" a character, depending on the interface visible to
2837 2869 the user. For consistency, the cursor being "in between" characters X
2838 2870 and Y is equivalent to the cursor being "on" character Y, that is to say
2839 2871 the character the cursor is on is considered as being after the cursor.
2840 2872
2841 2873 Combining characters may span more than one position in the
2842 2874 text.
2843 2875
2844 2876 .. note::
2845 2877
2846 2878 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2847 2879 fake Completion token to distinguish completion returned by Jedi
2848 2880 and usual IPython completion.
2849 2881
2850 2882 .. note::
2851 2883
2852 2884 Completions are not completely deduplicated yet. If identical
2853 2885 completions are coming from different sources this function does not
2854 2886 ensure that each completion object will only be present once.
2855 2887 """
2856 2888 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2857 2889 "It may change without warnings. "
2858 2890 "Use in corresponding context manager.",
2859 2891 category=ProvisionalCompleterWarning, stacklevel=2)
2860 2892
2861 2893 seen = set()
2862 2894 profiler: Optional[cProfile.Profile]
2863 2895 try:
2864 2896 if self.profile_completions:
2865 2897 import cProfile
2866 2898 profiler = cProfile.Profile()
2867 2899 profiler.enable()
2868 2900 else:
2869 2901 profiler = None
2870 2902
2871 2903 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2872 2904 if c and (c in seen):
2873 2905 continue
2874 2906 yield c
2875 2907 seen.add(c)
2876 2908 except KeyboardInterrupt:
2877 2909 """if completions take too long and users send keyboard interrupt,
2878 2910 do not crash and return ASAP. """
2879 2911 pass
2880 2912 finally:
2881 2913 if profiler is not None:
2882 2914 profiler.disable()
2883 2915 ensure_dir_exists(self.profiler_output_dir)
2884 2916 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2885 2917 print("Writing profiler output to", output_path)
2886 2918 profiler.dump_stats(output_path)
2887 2919
2888 2920 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2889 2921 """
2890 2922 Core completion module. Same signature as :any:`completions`, with the
2891 2923 extra ``_timeout`` parameter (in seconds).
2892 2924
2893 2925 Computing jedi's completion ``.type`` can be quite expensive (it is a
2894 2926 lazy property) and can require some warm-up, more warm up than just
2895 2927 computing the ``name`` of a completion. The warm-up can be:
2896 2928
2897 2929 - Long warm-up the first time a module is encountered after
2898 2930 install/update: actually build parse/inference tree.
2899 2931
2900 2932 - first time the module is encountered in a session: load tree from
2901 2933 disk.
2902 2934
2903 2935 We don't want to block completions for tens of seconds so we give the
2904 2936 completer a "budget" of ``_timeout`` seconds per invocation to compute
2905 2937 completion types; the completions that have not yet been computed will
2906 2938 be marked as "unknown" and will have a chance to be computed next round
2907 2939 as things get cached.
2908 2940
2909 2941 Keep in mind that Jedi is not the only thing processing the completions,
2910 2942 so keep the timeout short-ish: if we take more than 0.3 seconds we still
2911 2943 have lots of processing to do.
2912 2944
2913 2945 """
2914 2946 deadline = time.monotonic() + _timeout
2915 2947
2916 2948 before = full_text[:offset]
2917 2949 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2918 2950
2919 2951 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2920 2952
2921 2953 def is_non_jedi_result(
2922 2954 result: MatcherResult, identifier: str
2923 2955 ) -> TypeGuard[SimpleMatcherResult]:
2924 2956 return identifier != jedi_matcher_id
2925 2957
2926 2958 results = self._complete(
2927 2959 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2928 2960 )
2929 2961
2930 2962 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2931 2963 identifier: result
2932 2964 for identifier, result in results.items()
2933 2965 if is_non_jedi_result(result, identifier)
2934 2966 }
2935 2967
2936 2968 jedi_matches = (
2937 2969 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2938 2970 if jedi_matcher_id in results
2939 2971 else ()
2940 2972 )
2941 2973
2942 2974 iter_jm = iter(jedi_matches)
2943 2975 if _timeout:
2944 2976 for jm in iter_jm:
2945 2977 try:
2946 2978 type_ = jm.type
2947 2979 except Exception:
2948 2980 if self.debug:
2949 2981 print("Error in Jedi getting type of ", jm)
2950 2982 type_ = None
2951 2983 delta = len(jm.name_with_symbols) - len(jm.complete)
2952 2984 if type_ == 'function':
2953 2985 signature = _make_signature(jm)
2954 2986 else:
2955 2987 signature = ''
2956 2988 yield Completion(start=offset - delta,
2957 2989 end=offset,
2958 2990 text=jm.name_with_symbols,
2959 2991 type=type_,
2960 2992 signature=signature,
2961 2993 _origin='jedi')
2962 2994
2963 2995 if time.monotonic() > deadline:
2964 2996 break
2965 2997
2966 2998 for jm in iter_jm:
2967 2999 delta = len(jm.name_with_symbols) - len(jm.complete)
2968 3000 yield Completion(
2969 3001 start=offset - delta,
2970 3002 end=offset,
2971 3003 text=jm.name_with_symbols,
2972 3004 type=_UNKNOWN_TYPE, # don't compute type for speed
2973 3005 _origin="jedi",
2974 3006 signature="",
2975 3007 )
2976 3008
2977 3009 # TODO:
2978 3010 # Suppress this, right now just for debug.
2979 3011 if jedi_matches and non_jedi_results and self.debug:
2980 3012 some_start_offset = before.rfind(
2981 3013 next(iter(non_jedi_results.values()))["matched_fragment"]
2982 3014 )
2983 3015 yield Completion(
2984 3016 start=some_start_offset,
2985 3017 end=offset,
2986 3018 text="--jedi/ipython--",
2987 3019 _origin="debug",
2988 3020 type="none",
2989 3021 signature="",
2990 3022 )
2991 3023
2992 3024 ordered: List[Completion] = []
2993 3025 sortable: List[Completion] = []
2994 3026
2995 3027 for origin, result in non_jedi_results.items():
2996 3028 matched_text = result["matched_fragment"]
2997 3029 start_offset = before.rfind(matched_text)
2998 3030 is_ordered = result.get("ordered", False)
2999 3031 container = ordered if is_ordered else sortable
3000 3032
3001 3033 # I'm unsure if this is always true, so let's assert and see if it
3002 3034 # crashes
3003 3035 assert before.endswith(matched_text)
3004 3036
3005 3037 for simple_completion in result["completions"]:
3006 3038 completion = Completion(
3007 3039 start=start_offset,
3008 3040 end=offset,
3009 3041 text=simple_completion.text,
3010 3042 _origin=origin,
3011 3043 signature="",
3012 3044 type=simple_completion.type or _UNKNOWN_TYPE,
3013 3045 )
3014 3046 container.append(completion)
3015 3047
3016 3048 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
3017 3049 :MATCHES_LIMIT
3018 3050 ]
3019 3051
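The type-computation budget described in the docstring follows a simple pattern: one iterator consumed in two phases, expensive work before the deadline and a cheap placeholder after it. A stand-alone sketch (names are illustrative, not the real API):

```python
import time

def with_budget(items, expensive, budget_s=0.3, placeholder="<unknown>"):
    """Yield (item, info) pairs; compute the expensive info only until the deadline."""
    deadline = time.monotonic() + budget_s
    it = iter(items)
    for item in it:
        yield item, expensive(item)
        if time.monotonic() > deadline:
            break
    for item in it:  # past the deadline: fall back to a cheap placeholder
        yield item, placeholder
```

Because both loops share the same iterator, no item is processed twice and none is dropped; only the amount of work per item changes once the budget is spent.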
3020 3052 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
3021 3053 """Find completions for the given text and line context.
3022 3054
3023 3055 Note that both the text and the line_buffer are optional, but at least
3024 3056 one of them must be given.
3025 3057
3026 3058 Parameters
3027 3059 ----------
3028 3060 text : string, optional
3029 3061 Text to perform the completion on. If not given, the line buffer
3030 3062 is split using the instance's CompletionSplitter object.
3031 3063 line_buffer : string, optional
3032 3064 If not given, the completer attempts to obtain the current line
3033 3065 buffer via readline. This keyword allows clients which are
3034 3066 requesting text completions in non-readline contexts to inform
3035 3067 the completer of the entire text.
3036 3068 cursor_pos : int, optional
3037 3069 Index of the cursor in the full line buffer. Should be provided by
3038 3070 remote frontends where the kernel has no access to frontend state.
3039 3071
3040 3072 Returns
3041 3073 -------
3042 3074 Tuple of two items:
3043 3075 text : str
3044 3076 Text that was actually used in the completion.
3045 3077 matches : list
3046 3078 A list of completion matches.
3047 3079
3048 3080 Notes
3049 3081 -----
3050 3082 This API is likely to be deprecated and replaced by
3051 3083 :any:`IPCompleter.completions` in the future.
3052 3084
3053 3085 """
3054 3086 warnings.warn('`Completer.complete` is pending deprecation since '
3055 3087 'IPython 6.0 and will be replaced by `Completer.completions`.',
3056 3088 PendingDeprecationWarning)
3057 3089 # potential todo, FOLD the 3rd throw away argument of _complete
3058 3090 # into the first 2 one.
3059 3091 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3060 3092 # TODO: should we deprecate now, or does it stay?
3061 3093
3062 3094 results = self._complete(
3063 3095 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3064 3096 )
3065 3097
3066 3098 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3067 3099
3068 3100 return self._arrange_and_extract(
3069 3101 results,
3070 3102 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3071 3103 skip_matchers={jedi_matcher_id},
3072 3104 # this API does not support different start/end positions (fragments of token).
3073 3105 abort_if_offset_changes=True,
3074 3106 )
3075 3107
3076 3108 def _arrange_and_extract(
3077 3109 self,
3078 3110 results: Dict[str, MatcherResult],
3079 3111 skip_matchers: Set[str],
3080 3112 abort_if_offset_changes: bool,
3081 3113 ):
3082 3114 sortable: List[AnyMatcherCompletion] = []
3083 3115 ordered: List[AnyMatcherCompletion] = []
3084 3116 most_recent_fragment = None
3085 3117 for identifier, result in results.items():
3086 3118 if identifier in skip_matchers:
3087 3119 continue
3088 3120 if not result["completions"]:
3089 3121 continue
3090 3122 if not most_recent_fragment:
3091 3123 most_recent_fragment = result["matched_fragment"]
3092 3124 if (
3093 3125 abort_if_offset_changes
3094 3126 and result["matched_fragment"] != most_recent_fragment
3095 3127 ):
3096 3128 break
3097 3129 if result.get("ordered", False):
3098 3130 ordered.extend(result["completions"])
3099 3131 else:
3100 3132 sortable.extend(result["completions"])
3101 3133
3102 3134 if not most_recent_fragment:
3103 3135 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3104 3136
3105 3137 return most_recent_fragment, [
3106 3138 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3107 3139 ]
3108 3140
3109 3141 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3110 3142 full_text=None) -> _CompleteResult:
3111 3143 """
3112 3144 Like complete, but also returns raw Jedi completions as well as the
3113 3145 origin of the completion text. This could (and should) be made much
3114 3146 cleaner but that will be simpler once we drop the old (and stateful)
3115 3147 :any:`complete` API.
3116 3148
3117 3149 With the current provisional API, cursor_pos acts (depending on the
3118 3150 caller) as the offset in the ``text`` or ``line_buffer``, or as the
3119 3151 ``column`` when passing multiline strings. This could/should be renamed,
3120 3152 but that would add extra noise.
3121 3153
3122 3154 Parameters
3123 3155 ----------
3124 3156 cursor_line
3125 3157 Index of the line the cursor is on. 0 indexed.
3126 3158 cursor_pos
3127 3159 Position of the cursor in the current line/line_buffer/text. 0
3128 3160 indexed.
3129 3161 line_buffer : optional, str
3130 3162 The current line the cursor is in; this is mostly due to the legacy
3131 3163 reason that readline could only give us the single current line.
3132 3164 Prefer `full_text`.
3133 3165 text : str
3134 3166 The current "token" the cursor is in, mostly also for historical
3135 3167 reasons, as the completer would trigger only after the current line
3136 3168 was parsed.
3137 3169 full_text : str
3138 3170 Full text of the current cell.
3139 3171
3140 3172 Returns
3141 3173 -------
3142 3174 An ordered dictionary where keys are identifiers of completion
3143 3175 matchers and values are ``MatcherResult``s.
3144 3176 """
3145 3177
3146 3178 # if the cursor position isn't given, the only sane assumption we can
3147 3179 # make is that it's at the end of the line (the common case)
3148 3180 if cursor_pos is None:
3149 3181 cursor_pos = len(line_buffer) if text is None else len(text)
3150 3182
3151 3183 if self.use_main_ns:
3152 3184 self.namespace = __main__.__dict__
3153 3185
3154 3186 # if text is either None or an empty string, rely on the line buffer
3155 3187 if (not line_buffer) and full_text:
3156 3188 line_buffer = full_text.split('\n')[cursor_line]
3157 3189 if not text: # issue #11508: check line_buffer before calling split_line
3158 3190 text = (
3159 3191 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3160 3192 )
3161 3193
3162 3194 # If no line buffer is given, assume the input text is all there was
3163 3195 if line_buffer is None:
3164 3196 line_buffer = text
3165 3197
3166 3198 # deprecated - do not use `line_buffer` in new code.
3167 3199 self.line_buffer = line_buffer
3168 3200 self.text_until_cursor = self.line_buffer[:cursor_pos]
3169 3201
3170 3202 if not full_text:
3171 3203 full_text = line_buffer
3172 3204
3173 3205 context = CompletionContext(
3174 3206 full_text=full_text,
3175 3207 cursor_position=cursor_pos,
3176 3208 cursor_line=cursor_line,
3177 3209 token=text,
3178 3210 limit=MATCHES_LIMIT,
3179 3211 )
3180 3212
3181 3213 # Start with a clean slate of completions
3182 3214 results: Dict[str, MatcherResult] = {}
3183 3215
3184 3216 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3185 3217
3186 3218 suppressed_matchers: Set[str] = set()
3187 3219
3188 3220 matchers = {
3189 3221 _get_matcher_id(matcher): matcher
3190 3222 for matcher in sorted(
3191 3223 self.matchers, key=_get_matcher_priority, reverse=True
3192 3224 )
3193 3225 }
3194 3226
3195 3227 for matcher_id, matcher in matchers.items():
3196 3228 matcher_id = _get_matcher_id(matcher)
3197 3229
3198 3230 if matcher_id in self.disable_matchers:
3199 3231 continue
3200 3232
3201 3233 if matcher_id in results:
3202 3234 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3203 3235
3204 3236 if matcher_id in suppressed_matchers:
3205 3237 continue
3206 3238
3207 3239 result: MatcherResult
3208 3240 try:
3209 3241 if _is_matcher_v1(matcher):
3210 3242 result = _convert_matcher_v1_result_to_v2(
3211 3243 matcher(text), type=_UNKNOWN_TYPE
3212 3244 )
3213 3245 elif _is_matcher_v2(matcher):
3214 3246 result = matcher(context)
3215 3247 else:
3216 3248 api_version = _get_matcher_api_version(matcher)
3217 3249 raise ValueError(f"Unsupported API version {api_version}")
3218 except:
3250 except BaseException:
3219 3251 # Show the ugly traceback if the matcher causes an
3220 3252 # exception, but do NOT crash the kernel!
3221 3253 sys.excepthook(*sys.exc_info())
3222 3254 continue
3223 3255
3224 3256 # set default value for matched fragment if suffix was not selected.
3225 3257 result["matched_fragment"] = result.get("matched_fragment", context.token)
3226 3258
3227 3259 if not suppressed_matchers:
3228 3260 suppression_recommended: Union[bool, Set[str]] = result.get(
3229 3261 "suppress", False
3230 3262 )
3231 3263
3232 3264 suppression_config = (
3233 3265 self.suppress_competing_matchers.get(matcher_id, None)
3234 3266 if isinstance(self.suppress_competing_matchers, dict)
3235 3267 else self.suppress_competing_matchers
3236 3268 )
3237 3269 should_suppress = (
3238 3270 (suppression_config is True)
3239 3271 or (suppression_recommended and (suppression_config is not False))
3240 3272 ) and has_any_completions(result)
3241 3273
3242 3274 if should_suppress:
3243 3275 suppression_exceptions: Set[str] = result.get(
3244 3276 "do_not_suppress", set()
3245 3277 )
3246 3278 if isinstance(suppression_recommended, Iterable):
3247 3279 to_suppress = set(suppression_recommended)
3248 3280 else:
3249 3281 to_suppress = set(matchers)
3250 3282 suppressed_matchers = to_suppress - suppression_exceptions
3251 3283
3252 3284 new_results = {}
3253 3285 for previous_matcher_id, previous_result in results.items():
3254 3286 if previous_matcher_id not in suppressed_matchers:
3255 3287 new_results[previous_matcher_id] = previous_result
3256 3288 results = new_results
3257 3289
3258 3290 results[matcher_id] = result
3259 3291
3260 3292 _, matches = self._arrange_and_extract(
3261 3293 results,
3262 3294 # TODO Jedi completions non included in legacy stateful API; was this deliberate or omission?
3263 3295 # if it was omission, we can remove the filtering step, otherwise remove this comment.
3264 3296 skip_matchers={jedi_matcher_id},
3265 3297 abort_if_offset_changes=False,
3266 3298 )
3267 3299
3268 3300 # populate legacy stateful API
3269 3301 self.matches = matches
3270 3302
3271 3303 return results
3272 3304
3273 3305 @staticmethod
3274 3306 def _deduplicate(
3275 3307 matches: Sequence[AnyCompletion],
3276 3308 ) -> Iterable[AnyCompletion]:
3277 3309 filtered_matches: Dict[str, AnyCompletion] = {}
3278 3310 for match in matches:
3279 3311 text = match.text
3280 3312 if (
3281 3313 text not in filtered_matches
3282 3314 or filtered_matches[text].type == _UNKNOWN_TYPE
3283 3315 ):
3284 3316 filtered_matches[text] = match
3285 3317
3286 3318 return filtered_matches.values()
3287 3319
3288 3320 @staticmethod
3289 3321 def _sort(matches: Sequence[AnyCompletion]):
3290 3322 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3291 3323
3292 3324 @context_matcher()
3293 3325 def fwd_unicode_matcher(self, context: CompletionContext):
3294 3326 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3295 3327 # TODO: use `context.limit` to terminate early once we matched the maximum
3296 3328 # number that will be used downstream; can be added as an optional to
3297 3329 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3298 3330 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3299 3331 return _convert_matcher_v1_result_to_v2(
3300 3332 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3301 3333 )
3302 3334
3303 3335 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3304 3336 """
3305 3337 Forward match a string starting with a backslash with a list of
3306 3338 potential Unicode completions.
3307 3339
3308 3340 Will compute list of Unicode character names on first call and cache it.
3309 3341
3310 3342 .. deprecated:: 8.6
3311 3343 You can use :meth:`fwd_unicode_matcher` instead.
3312 3344
3313 3345 Returns
3314 3346 -------
3315 3347 A tuple with:
3316 3348 - matched text (empty if no matches)
3317 3349 - list of potential completions (empty tuple otherwise)
3318 3350 """
3319 3351 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3320 3352 # We could do a faster match using a Trie.
3321 3353
3322 3354 # Using pygtrie the following seem to work:
3323 3355
3324 3356 # s = PrefixSet()
3325 3357
3326 3358 # for c in range(0,0x10FFFF + 1):
3327 3359 # try:
3328 3360 # s.add(unicodedata.name(chr(c)))
3329 3361 # except ValueError:
3330 3362 # pass
3331 3363 # [''.join(k) for k in s.iter(prefix)]
3332 3364
3333 3365 # But need to be timed and adds an extra dependency.
3334 3366
3335 3367 slashpos = text.rfind('\\')
3336 3368 # if text contains a backslash
3337 3369 if slashpos > -1:
3338 3370 # PERF: It's important that we don't access self._unicode_names
3339 3371 # until we're inside this if-block. _unicode_names is lazily
3340 3372 # initialized, and it takes a user-noticeable amount of time to
3341 3373 # initialize it, so we don't want to initialize it unless we're
3342 3374 # actually going to use it.
3343 3375 s = text[slashpos + 1 :]
3344 3376 sup = s.upper()
3345 3377 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3346 3378 if candidates:
3347 3379 return s, candidates
3348 3380 candidates = [x for x in self.unicode_names if sup in x]
3349 3381 if candidates:
3350 3382 return s, candidates
3351 3383 splitsup = sup.split(" ")
3352 3384 candidates = [
3353 3385 x for x in self.unicode_names if all(u in x for u in splitsup)
3354 3386 ]
3355 3387 if candidates:
3356 3388 return s, candidates
3357 3389
3358 3390 return "", ()
3359 3391
3360 3392 # if text does not contain a backslash
3361 3393 else:
3362 3394 return '', ()
3363 3395
3364 3396 @property
3365 3397 def unicode_names(self) -> List[str]:
3366 3398 """List of names of unicode code points that can be completed.
3367 3399
3368 3400 The list is lazily initialized on first access.
3369 3401 """
3370 3402 if self._unicode_names is None:
3371 3403             self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3378 3410
3379 3411 return self._unicode_names
3380 3412
3381 3413 def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
3382 3414     names = []
3383 3415     for start, stop in ranges:
3384 3416         for c in range(start, stop):
3385 3417 try:
3386 3418 names.append(unicodedata.name(chr(c)))
3387 3419 except ValueError:
3388 3420 pass
3389 3421 return names
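The `_fwd_unicode_match` logic above tries three matching strategies in order: exact prefix, then substring, then all-words. A minimal standalone sketch of that cascade (the `match_names` helper and the sample name list are illustrative, not part of IPython's API):

```python
def match_names(query: str, names: list) -> list:
    """Cascade: prefix match, then substring match, then all-words match."""
    q = query.upper()
    candidates = [n for n in names if n.startswith(q)]
    if candidates:
        return candidates
    candidates = [n for n in names if q in n]
    if candidates:
        return candidates
    words = q.split(" ")
    return [n for n in names if all(w in n for w in words)]

names = ["GREEK SMALL LETTER ALPHA", "LATIN SMALL LETTER A", "ROMAN NUMERAL FIVE"]
print(match_names("greek", names))        # prefix: ['GREEK SMALL LETTER ALPHA']
print(match_names("numeral", names))      # substring: ['ROMAN NUMERAL FIVE']
print(match_names("alpha greek", names))  # all-words: ['GREEK SMALL LETTER ALPHA']
```

Later stages only run when the earlier, stricter ones find nothing, so the cheapest and most specific matches win.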
@@ -1,1769 +1,1819
1 1 # encoding: utf-8
2 2 """Tests for the IPython tab-completion machinery."""
3 3
4 4 # Copyright (c) IPython Development Team.
5 5 # Distributed under the terms of the Modified BSD License.
6 6
7 7 import os
8 8 import pytest
9 9 import sys
10 10 import textwrap
11 11 import unittest
12 import random
12 13
13 14 from importlib.metadata import version
14 15
15
16 16 from contextlib import contextmanager
17 17
18 18 from traitlets.config.loader import Config
19 19 from IPython import get_ipython
20 20 from IPython.core import completer
21 21 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
22 22 from IPython.utils.generics import complete_object
23 23 from IPython.testing import decorators as dec
24 from IPython.core.latex_symbols import latex_symbols
24 25
25 26 from IPython.core.completer import (
26 27 Completion,
27 28 provisionalcompleter,
28 29 match_dict_keys,
29 30 _deduplicate_completions,
30 31 _match_number_in_dict_key_prefix,
31 32 completion_matcher,
32 33 SimpleCompletion,
33 34 CompletionContext,
35 _unicode_name_compute,
36 _UNICODE_RANGES,
34 37 )
35 38
36 39 from packaging.version import parse
37 40
38 41
42 @contextmanager
43 def jedi_status(status: bool):
44 completer = get_ipython().Completer
45 try:
46 old = completer.use_jedi
47 completer.use_jedi = status
48 yield
49 finally:
50 completer.use_jedi = old
51
52
39 53 # -----------------------------------------------------------------------------
40 54 # Test functions
41 55 # -----------------------------------------------------------------------------
42 56
43 57
44 58 def recompute_unicode_ranges():
45 59 """
46 60     Utility to recompute the largest unicode range without any named characters.
47 61
48 62     Use it to recompute the gap in the global _UNICODE_RANGES of completer.py.
49 63 """
50 64 import itertools
51 65 import unicodedata
52 66
53 67 valid = []
54 68 for c in range(0, 0x10FFFF + 1):
55 69 try:
56 70 unicodedata.name(chr(c))
57 71 except ValueError:
58 72 continue
59 73 valid.append(c)
60 74
61 75 def ranges(i):
62 76 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
63 77 b = list(b)
64 78 yield b[0][1], b[-1][1]
65 79
66 80 rg = list(ranges(valid))
67 81 lens = []
68 82 gap_lens = []
69 pstart, pstop = 0, 0
83 _pstart, pstop = 0, 0
70 84 for start, stop in rg:
71 85 lens.append(stop - start)
72 86 gap_lens.append(
73 87 (
74 88 start - pstop,
75 89 hex(pstop + 1),
76 90 hex(start),
77 91 f"{round((start - pstop)/0xe01f0*100)}%",
78 92 )
79 93 )
80 pstart, pstop = start, stop
94 _pstart, pstop = start, stop
81 95
82 96 return sorted(gap_lens)[-1]
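The inner `ranges` helper above compresses a sorted list of code points into consecutive runs via the classic `itertools.groupby` trick: within a run of consecutive integers, `value - index` is constant, so grouping on that key splits the list at every gap. A self-contained sketch:

```python
import itertools

def ranges(values):
    # value - index is constant within a run of consecutive integers,
    # so groupby on that key yields one group per run.
    for _, grp in itertools.groupby(enumerate(values), lambda pair: pair[1] - pair[0]):
        grp = list(grp)
        yield grp[0][1], grp[-1][1]

print(list(ranges([1, 2, 3, 7, 8, 10])))  # [(1, 3), (7, 8), (10, 10)]
```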
83 97
84 98
85 99 def test_unicode_range():
86 100 """
87 101     Test that the ranges we test for unicode names give the same number of
88 102     results as testing the full length.
89 103 """
90 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
91 104
92 105 expected_list = _unicode_name_compute([(0, 0x110000)])
93 106 test = _unicode_name_compute(_UNICODE_RANGES)
94 107 len_exp = len(expected_list)
95 108 len_test = len(test)
96 109
97 110     # Do not inline the len() calls, or on error pytest will try to print all
98 111     # 130,000+ elements.
99 112 message = None
100 113 if len_exp != len_test or len_exp > 131808:
101 114 size, start, stop, prct = recompute_unicode_ranges()
102 115         message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
103 116         likely due to a new release of Python. We've found that the biggest gap
104 117         in unicode characters has shrunk to {size} characters
105 118         ({prct}), from {start} to {stop}. In completer.py, likely update to
106 119
107 120 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
108 121
109 122 And update the assertion below to use
110 123
111 124 len_exp <= {len_exp}
112 125 """
113 126 assert len_exp == len_test, message
114 127
115 128 # fail if new unicode symbols have been added.
116 129 assert len_exp <= 143668, message
117 130
118 131
119 132 @contextmanager
120 133 def greedy_completion():
121 134 ip = get_ipython()
122 135 greedy_original = ip.Completer.greedy
123 136 try:
124 137 ip.Completer.greedy = True
125 138 yield
126 139 finally:
127 140 ip.Completer.greedy = greedy_original
128 141
129 142
130 143 @contextmanager
131 144 def evaluation_policy(evaluation: str):
132 145 ip = get_ipython()
133 146 evaluation_original = ip.Completer.evaluation
134 147 try:
135 148 ip.Completer.evaluation = evaluation
136 149 yield
137 150 finally:
138 151 ip.Completer.evaluation = evaluation_original
139 152
140 153
141 154 @contextmanager
142 155 def custom_matchers(matchers):
143 156 ip = get_ipython()
144 157 try:
145 158 ip.Completer.custom_matchers.extend(matchers)
146 159 yield
147 160 finally:
148 161 ip.Completer.custom_matchers.clear()
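The fixtures above (`jedi_status`, `greedy_completion`, `evaluation_policy`, `custom_matchers`) all follow the same save/override/restore pattern; the `finally` clause guarantees the original setting comes back even when the test body raises. A minimal sketch with a stand-in completer object (`FakeCompleter` and `greedy_completion` here are illustrative, not the real fixtures):

```python
from contextlib import contextmanager

class FakeCompleter:
    greedy = False

@contextmanager
def greedy_completion(comp):
    # Save the old value, override it, and restore on exit --
    # even when the body raises an exception.
    old = comp.greedy
    try:
        comp.greedy = True
        yield comp
    finally:
        comp.greedy = old

c = FakeCompleter()
with greedy_completion(c):
    print(c.greedy)  # True
print(c.greedy)      # False
```

Because the restore happens in `finally`, a failing assertion inside one test cannot leak a modified setting into the next test.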
149 162
150 163
151 def test_protect_filename():
152 if sys.platform == "win32":
153 pairs = [
154 ("abc", "abc"),
155 (" abc", '" abc"'),
156 ("a bc", '"a bc"'),
157 ("a bc", '"a bc"'),
158 (" bc", '" bc"'),
159 ]
160 else:
161 pairs = [
162 ("abc", "abc"),
163 (" abc", r"\ abc"),
164 ("a bc", r"a\ bc"),
165 ("a bc", r"a\ \ bc"),
166 (" bc", r"\ \ bc"),
167 # On posix, we also protect parens and other special characters.
168 ("a(bc", r"a\(bc"),
169 ("a)bc", r"a\)bc"),
170 ("a( )bc", r"a\(\ \)bc"),
171 ("a[1]bc", r"a\[1\]bc"),
172 ("a{1}bc", r"a\{1\}bc"),
173 ("a#bc", r"a\#bc"),
174 ("a?bc", r"a\?bc"),
175 ("a=bc", r"a\=bc"),
176 ("a\\bc", r"a\\bc"),
177 ("a|bc", r"a\|bc"),
178 ("a;bc", r"a\;bc"),
179 ("a:bc", r"a\:bc"),
180 ("a'bc", r"a\'bc"),
181 ("a*bc", r"a\*bc"),
182 ('a"bc', r"a\"bc"),
183 ("a^bc", r"a\^bc"),
184 ("a&bc", r"a\&bc"),
185 ]
186 # run the actual tests
187 for s1, s2 in pairs:
188 s1p = completer.protect_filename(s1)
189 assert s1p == s2
164 if sys.platform == "win32":
165 pairs = [
166 ("abc", "abc"),
167 (" abc", '" abc"'),
168 ("a bc", '"a bc"'),
169 ("a bc", '"a bc"'),
170 (" bc", '" bc"'),
171 ]
172 else:
173 pairs = [
174 ("abc", "abc"),
175 (" abc", r"\ abc"),
176 ("a bc", r"a\ bc"),
177 ("a bc", r"a\ \ bc"),
178 (" bc", r"\ \ bc"),
179 # On posix, we also protect parens and other special characters.
180 ("a(bc", r"a\(bc"),
181 ("a)bc", r"a\)bc"),
182 ("a( )bc", r"a\(\ \)bc"),
183 ("a[1]bc", r"a\[1\]bc"),
184 ("a{1}bc", r"a\{1\}bc"),
185 ("a#bc", r"a\#bc"),
186 ("a?bc", r"a\?bc"),
187 ("a=bc", r"a\=bc"),
188 ("a\\bc", r"a\\bc"),
189 ("a|bc", r"a\|bc"),
190 ("a;bc", r"a\;bc"),
191 ("a:bc", r"a\:bc"),
192 ("a'bc", r"a\'bc"),
193 ("a*bc", r"a\*bc"),
194 ('a"bc', r"a\"bc"),
195 ("a^bc", r"a\^bc"),
196 ("a&bc", r"a\&bc"),
197 ]
198
199
200 @pytest.mark.parametrize("s1,expected", pairs)
201 def test_protect_filename(s1, expected):
202 assert completer.protect_filename(s1) == expected
190 203
191 204
192 205 def check_line_split(splitter, test_specs):
193 206 for part1, part2, split in test_specs:
194 207 cursor_pos = len(part1)
195 208 line = part1 + part2
196 209 out = splitter.split_line(line, cursor_pos)
197 210 assert out == split
198 211
199 212 def test_line_split():
200 213 """Basic line splitter test with default specs."""
201 214 sp = completer.CompletionSplitter()
202 215 # The format of the test specs is: part1, part2, expected answer. Parts 1
203 216 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
204 217 # was at the end of part1. So an empty part2 represents someone hitting
205 218 # tab at the end of the line, the most common case.
206 219 t = [
207 220 ("run some/script", "", "some/script"),
208 221 ("run scripts/er", "ror.py foo", "scripts/er"),
209 222 ("echo $HOM", "", "HOM"),
210 223 ("print sys.pa", "", "sys.pa"),
211 224 ("print(sys.pa", "", "sys.pa"),
212 225 ("execfile('scripts/er", "", "scripts/er"),
213 226 ("a[x.", "", "x."),
214 227 ("a[x.", "y", "x."),
215 228 ('cd "some_file/', "", "some_file/"),
216 229 ]
217 230 check_line_split(sp, t)
218 231 # Ensure splitting works OK with unicode by re-running the tests with
219 232 # all inputs turned into unicode
220 233 check_line_split(sp, [map(str, p) for p in t])
221 234
222 235
223 236 class NamedInstanceClass:
224 237 instances = {}
225 238
226 239 def __init__(self, name):
227 240 self.instances[name] = self
228 241
229 242 @classmethod
230 243 def _ipython_key_completions_(cls):
231 244 return cls.instances.keys()
232 245
233 246
234 247 class KeyCompletable:
235 248 def __init__(self, things=()):
236 249 self.things = things
237 250
238 251 def _ipython_key_completions_(self):
239 252 return list(self.things)
240 253
241 254
242 255 class TestCompleter(unittest.TestCase):
243 256 def setUp(self):
244 257 """
245 258 We want to silence all PendingDeprecationWarning when testing the completer
246 259 """
247 260 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
248 261 self._assertwarns.__enter__()
249 262
250 263 def tearDown(self):
251 264 try:
252 265 self._assertwarns.__exit__(None, None, None)
253 266 except AssertionError:
254 267 pass
255 268
256 269 def test_custom_completion_error(self):
257 270 """Test that errors from custom attribute completers are silenced."""
258 271 ip = get_ipython()
259 272
260 273 class A:
261 274 pass
262 275
263 276 ip.user_ns["x"] = A()
264 277
265 278 @complete_object.register(A)
266 279 def complete_A(a, existing_completions):
267 280 raise TypeError("this should be silenced")
268 281
269 282 ip.complete("x.")
270 283
271 284 def test_custom_completion_ordering(self):
272 285 """Test that errors from custom attribute completers are silenced."""
273 286 ip = get_ipython()
274 287
275 288 _, matches = ip.complete('in')
276 289 assert matches.index('input') < matches.index('int')
277 290
278 291 def complete_example(a):
279 292 return ['example2', 'example1']
280 293
281 294 ip.Completer.custom_completers.add_re('ex*', complete_example)
282 295 _, matches = ip.complete('ex')
283 296 assert matches.index('example2') < matches.index('example1')
284 297
285 298 def test_unicode_completions(self):
286 299 ip = get_ipython()
287 300 # Some strings that trigger different types of completion. Check them both
288 301 # in str and unicode forms
289 302 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
290 303 for t in s + list(map(str, s)):
291 304 # We don't need to check exact completion values (they may change
292 305 # depending on the state of the namespace, but at least no exceptions
293 306 # should be thrown and the return value should be a pair of text, list
294 307 # values.
295 308 text, matches = ip.complete(t)
296 309 self.assertIsInstance(text, str)
297 310 self.assertIsInstance(matches, list)
298 311
299 312 def test_latex_completions(self):
300 from IPython.core.latex_symbols import latex_symbols
301 import random
302 313
303 314 ip = get_ipython()
304 315 # Test some random unicode symbols
305 316 keys = random.sample(sorted(latex_symbols), 10)
306 317 for k in keys:
307 318 text, matches = ip.complete(k)
308 319 self.assertEqual(text, k)
309 320 self.assertEqual(matches, [latex_symbols[k]])
310 321 # Test a more complex line
311 322 text, matches = ip.complete("print(\\alpha")
312 323 self.assertEqual(text, "\\alpha")
313 324 self.assertEqual(matches[0], latex_symbols["\\alpha"])
314 325 # Test multiple matching latex symbols
315 326 text, matches = ip.complete("\\al")
316 327 self.assertIn("\\alpha", matches)
317 328 self.assertIn("\\aleph", matches)
318 329
319 330 def test_latex_no_results(self):
320 331 """
321 332         Forward latex completion should return nothing in either field if nothing is found.
322 333 """
323 334 ip = get_ipython()
324 335 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
325 336 self.assertEqual(text, "")
326 337 self.assertEqual(matches, ())
327 338
328 339 def test_back_latex_completion(self):
329 340 ip = get_ipython()
330 341
331 342 # do not return more than 1 matches for \beta, only the latex one.
332 343 name, matches = ip.complete("\\β")
333 344 self.assertEqual(matches, ["\\beta"])
334 345
335 346 def test_back_unicode_completion(self):
336 347 ip = get_ipython()
337 348
338 349 name, matches = ip.complete("\\Ⅴ")
339 350 self.assertEqual(matches, ["\\ROMAN NUMERAL FIVE"])
340 351
341 352 def test_forward_unicode_completion(self):
342 353 ip = get_ipython()
343 354
344 355 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
345 356 self.assertEqual(matches, ["Ⅴ"]) # This is not a V
346 357 self.assertEqual(matches, ["\u2164"]) # same as above but explicit.
347 358
348 359 def test_delim_setting(self):
349 360 sp = completer.CompletionSplitter()
350 361 sp.delims = " "
351 362 self.assertEqual(sp.delims, " ")
352 363 self.assertEqual(sp._delim_expr, r"[\ ]")
353 364
354 365 def test_spaces(self):
355 366 """Test with only spaces as split chars."""
356 367 sp = completer.CompletionSplitter()
357 368 sp.delims = " "
358 369 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
359 370 check_line_split(sp, t)
360 371
361 372 def test_has_open_quotes1(self):
362 373 for s in ["'", "'''", "'hi' '"]:
363 374 self.assertEqual(completer.has_open_quotes(s), "'")
364 375
365 376 def test_has_open_quotes2(self):
366 377 for s in ['"', '"""', '"hi" "']:
367 378 self.assertEqual(completer.has_open_quotes(s), '"')
368 379
369 380 def test_has_open_quotes3(self):
370 381 for s in ["''", "''' '''", "'hi' 'ipython'"]:
371 382 self.assertFalse(completer.has_open_quotes(s))
372 383
373 384 def test_has_open_quotes4(self):
374 385 for s in ['""', '""" """', '"hi" "ipython"']:
375 386 self.assertFalse(completer.has_open_quotes(s))
376 387
377 388 @pytest.mark.xfail(
378 389 sys.platform == "win32", reason="abspath completions fail on Windows"
379 390 )
380 391 def test_abspath_file_completions(self):
381 392 ip = get_ipython()
382 393 with TemporaryDirectory() as tmpdir:
383 394 prefix = os.path.join(tmpdir, "foo")
384 395 suffixes = ["1", "2"]
385 396 names = [prefix + s for s in suffixes]
386 397 for n in names:
387 398 open(n, "w", encoding="utf-8").close()
388 399
389 400 # Check simple completion
390 401 c = ip.complete(prefix)[1]
391 402 self.assertEqual(c, names)
392 403
393 404 # Now check with a function call
394 405 cmd = 'a = f("%s' % prefix
395 406 c = ip.complete(prefix, cmd)[1]
396 407 comp = [prefix + s for s in suffixes]
397 408 self.assertEqual(c, comp)
398 409
399 410 def test_local_file_completions(self):
400 411 ip = get_ipython()
401 412 with TemporaryWorkingDirectory():
402 413 prefix = "./foo"
403 414 suffixes = ["1", "2"]
404 415 names = [prefix + s for s in suffixes]
405 416 for n in names:
406 417 open(n, "w", encoding="utf-8").close()
407 418
408 419 # Check simple completion
409 420 c = ip.complete(prefix)[1]
410 421 self.assertEqual(c, names)
411 422
412 423 # Now check with a function call
413 424 cmd = 'a = f("%s' % prefix
414 425 c = ip.complete(prefix, cmd)[1]
415 426 comp = {prefix + s for s in suffixes}
416 427 self.assertTrue(comp.issubset(set(c)))
417 428
418 429 def test_quoted_file_completions(self):
419 430 ip = get_ipython()
420 431
421 432 def _(text):
422 433 return ip.Completer._complete(
423 434 cursor_line=0, cursor_pos=len(text), full_text=text
424 435 )["IPCompleter.file_matcher"]["completions"]
425 436
426 437 with TemporaryWorkingDirectory():
427 438 name = "foo'bar"
428 439 open(name, "w", encoding="utf-8").close()
429 440
430 441             # Don't escape on Windows
431 442 escaped = name if sys.platform == "win32" else "foo\\'bar"
432 443
433 444 # Single quote matches embedded single quote
434 445 c = _("open('foo")[0]
435 446 self.assertEqual(c.text, escaped)
436 447
437 448 # Double quote requires no escape
438 449 c = _('open("foo')[0]
439 450 self.assertEqual(c.text, name)
440 451
441 452 # No quote requires an escape
442 453 c = _("%ls foo")[0]
443 454 self.assertEqual(c.text, escaped)
444 455
445 456 @pytest.mark.xfail(
446 457 sys.version_info.releaselevel in ("alpha",),
447 458 reason="Parso does not yet parse 3.13",
448 459 )
449 460 def test_all_completions_dups(self):
450 461 """
451 462 Make sure the output of `IPCompleter.all_completions` does not have
452 463 duplicated prefixes.
453 464 """
454 465 ip = get_ipython()
455 466 c = ip.Completer
456 467 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
457 468 for jedi_status in [True, False]:
458 469 with provisionalcompleter():
459 470 ip.Completer.use_jedi = jedi_status
460 471 matches = c.all_completions("TestCl")
461 472 assert matches == ["TestClass"], (jedi_status, matches)
462 473 matches = c.all_completions("TestClass.")
463 474 assert len(matches) > 2, (jedi_status, matches)
464 475 matches = c.all_completions("TestClass.a")
465 476 if jedi_status:
466 477 assert matches == ["TestClass.a", "TestClass.a1"], jedi_status
467 478 else:
468 479 assert matches == [".a", ".a1"], jedi_status
469 480
470 481 @pytest.mark.xfail(
471 482 sys.version_info.releaselevel in ("alpha",),
472 483 reason="Parso does not yet parse 3.13",
473 484 )
474 485 def test_jedi(self):
475 486 """
476 487         A couple of issues we had with Jedi.
477 488 """
478 489 ip = get_ipython()
479 490
480 491 def _test_complete(reason, s, comp, start=None, end=None):
481 492 l = len(s)
482 493 start = start if start is not None else l
483 494 end = end if end is not None else l
484 495 with provisionalcompleter():
485 496 ip.Completer.use_jedi = True
486 497 completions = set(ip.Completer.completions(s, l))
487 498 ip.Completer.use_jedi = False
488 499 assert Completion(start, end, comp) in completions, reason
489 500
490 501 def _test_not_complete(reason, s, comp):
491 502 l = len(s)
492 503 with provisionalcompleter():
493 504 ip.Completer.use_jedi = True
494 505 completions = set(ip.Completer.completions(s, l))
495 506 ip.Completer.use_jedi = False
496 507 assert Completion(l, l, comp) not in completions, reason
497 508
498 509 import jedi
499 510
500 511 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
501 512 if jedi_version > (0, 10):
502 513 _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
503 514 _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
504 515 _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
505 516 _test_complete("cover duplicate completions", "im", "import", 0, 2)
506 517
507 518 _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")
508 519
509 520 @pytest.mark.xfail(
510 521 sys.version_info.releaselevel in ("alpha",),
511 522 reason="Parso does not yet parse 3.13",
512 523 )
513 524 def test_completion_have_signature(self):
514 525 """
515 526         Let's make sure jedi is capable of pulling out the signature of the function we are completing.
516 527 """
517 528 ip = get_ipython()
518 529 with provisionalcompleter():
519 530 ip.Completer.use_jedi = True
520 531 completions = ip.Completer.completions("ope", 3)
521 532 c = next(completions) # should be `open`
522 533 ip.Completer.use_jedi = False
523 534 assert "file" in c.signature, "Signature of function was not found by completer"
524 535 assert (
525 536 "encoding" in c.signature
526 537 ), "Signature of function was not found by completer"
527 538
528 539 @pytest.mark.xfail(
529 540 sys.version_info.releaselevel in ("alpha",),
530 541 reason="Parso does not yet parse 3.13",
531 542 )
532 543 def test_completions_have_type(self):
533 544 """
534 545 Lets make sure matchers provide completion type.
535 546 """
536 547 ip = get_ipython()
537 548 with provisionalcompleter():
538 549 ip.Completer.use_jedi = False
539 550 completions = ip.Completer.completions("%tim", 3)
540 551 c = next(completions) # should be `%time` or similar
541 552 assert c.type == "magic", "Type of magic was not assigned by completer"
542 553
543 554 @pytest.mark.xfail(
544 555 parse(version("jedi")) <= parse("0.18.0"),
545 556 reason="Known failure on jedi<=0.18.0",
546 557 strict=True,
547 558 )
548 559 def test_deduplicate_completions(self):
549 560 """
550 561 Test that completions are correctly deduplicated (even if ranges are not the same)
551 562 """
552 563 ip = get_ipython()
553 564 ip.ex(
554 565 textwrap.dedent(
555 566 """
556 567 class Z:
557 568 zoo = 1
558 569 """
559 570 )
560 571 )
561 572 with provisionalcompleter():
562 573 ip.Completer.use_jedi = True
563 574 l = list(
564 575 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
565 576 )
566 577 ip.Completer.use_jedi = False
567 578
568 579             assert len(l) == 1, "Completions (Z.z<tab>) should deduplicate: %s" % l
569 580 assert l[0].text == "zoo" # and not `it.accumulate`
570 581
571 582 @pytest.mark.xfail(
572 583 sys.version_info.releaselevel in ("alpha",),
573 584 reason="Parso does not yet parse 3.13",
574 585 )
575 586 def test_greedy_completions(self):
576 587 """
577 588 Test the capability of the Greedy completer.
578 589
579 590         Most of the tests here do not really show off the greedy completer; as proof,
580 591         each of the texts below now passes with Jedi. The greedy completer is capable of more.
581 592
582 593 See the :any:`test_dict_key_completion_contexts`
583 594
584 595 """
585 596 ip = get_ipython()
586 597 ip.ex("a=list(range(5))")
587 598 ip.ex("d = {'a b': str}")
588 599 _, c = ip.complete(".", line="a[0].")
589 600 self.assertFalse(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
590 601
591 602 def _(line, cursor_pos, expect, message, completion):
592 603 with greedy_completion(), provisionalcompleter():
593 604 ip.Completer.use_jedi = False
594 605 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
595 606 self.assertIn(expect, c, message % c)
596 607
597 608 ip.Completer.use_jedi = True
598 609 with provisionalcompleter():
599 610 completions = ip.Completer.completions(line, cursor_pos)
600 611 self.assertIn(completion, list(completions))
601 612
602 613 with provisionalcompleter():
603 614 _(
604 615 "a[0].",
605 616 5,
606 617 ".real",
607 618 "Should have completed on a[0].: %s",
608 619 Completion(5, 5, "real"),
609 620 )
610 621 _(
611 622 "a[0].r",
612 623 6,
613 624 ".real",
614 625 "Should have completed on a[0].r: %s",
615 626 Completion(5, 6, "real"),
616 627 )
617 628
618 629 _(
619 630 "a[0].from_",
620 631 10,
621 632 ".from_bytes",
622 633 "Should have completed on a[0].from_: %s",
623 634 Completion(5, 10, "from_bytes"),
624 635 )
625 636 _(
626 637 "assert str.star",
627 638 14,
628 639 ".startswith",
629 640 "Should have completed on `assert str.star`: %s",
630 641 Completion(11, 14, "startswith"),
631 642 )
632 643 _(
633 644 "d['a b'].str",
634 645 12,
635 646 ".strip",
636 647 "Should have completed on `d['a b'].str`: %s",
637 648 Completion(9, 12, "strip"),
638 649 )
639 650 _(
640 651 "a.app",
641 652 4,
642 653 ".append",
643 654 "Should have completed on `a.app`: %s",
644 655 Completion(2, 4, "append"),
645 656 )
646 657
647 658 def test_omit__names(self):
648 659 # also happens to test IPCompleter as a configurable
649 660 ip = get_ipython()
650 661 ip._hidden_attr = 1
651 662 ip._x = {}
652 663 c = ip.Completer
653 664 ip.ex("ip=get_ipython()")
654 665 cfg = Config()
655 666 cfg.IPCompleter.omit__names = 0
656 667 c.update_config(cfg)
657 668 with provisionalcompleter():
658 669 c.use_jedi = False
659 670 s, matches = c.complete("ip.")
660 671 self.assertIn(".__str__", matches)
661 672 self.assertIn("._hidden_attr", matches)
662 673
663 674 # c.use_jedi = True
664 675 # completions = set(c.completions('ip.', 3))
665 676 # self.assertIn(Completion(3, 3, '__str__'), completions)
666 677 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
667 678
668 679 cfg = Config()
669 680 cfg.IPCompleter.omit__names = 1
670 681 c.update_config(cfg)
671 682 with provisionalcompleter():
672 683 c.use_jedi = False
673 684 s, matches = c.complete("ip.")
674 685 self.assertNotIn(".__str__", matches)
675 686 # self.assertIn('ip._hidden_attr', matches)
676 687
677 688 # c.use_jedi = True
678 689 # completions = set(c.completions('ip.', 3))
679 690 # self.assertNotIn(Completion(3,3,'__str__'), completions)
680 691 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
681 692
682 693 cfg = Config()
683 694 cfg.IPCompleter.omit__names = 2
684 695 c.update_config(cfg)
685 696 with provisionalcompleter():
686 697 c.use_jedi = False
687 698 s, matches = c.complete("ip.")
688 699 self.assertNotIn(".__str__", matches)
689 700 self.assertNotIn("._hidden_attr", matches)
690 701
691 702 # c.use_jedi = True
692 703 # completions = set(c.completions('ip.', 3))
693 704 # self.assertNotIn(Completion(3,3,'__str__'), completions)
694 705 # self.assertNotIn(Completion(3,3, "_hidden_attr"), completions)
695 706
696 707 with provisionalcompleter():
697 708 c.use_jedi = False
698 709 s, matches = c.complete("ip._x.")
699 710 self.assertIn(".keys", matches)
700 711
701 712 # c.use_jedi = True
702 713 # completions = set(c.completions('ip._x.', 6))
703 714 # self.assertIn(Completion(6,6, "keys"), completions)
704 715
705 716 del ip._hidden_attr
706 717 del ip._x
707 718
708 719 def test_limit_to__all__False_ok(self):
709 720 """
710 721         limit_to__all__ is deprecated; once we remove it, this test can go away.
711 722 """
712 723 ip = get_ipython()
713 724 c = ip.Completer
714 725 c.use_jedi = False
715 726 ip.ex("class D: x=24")
716 727 ip.ex("d=D()")
717 728 cfg = Config()
718 729 cfg.IPCompleter.limit_to__all__ = False
719 730 c.update_config(cfg)
720 731 s, matches = c.complete("d.")
721 732 self.assertIn(".x", matches)
722 733
723 734 def test_get__all__entries_ok(self):
724 735 class A:
725 736 __all__ = ["x", 1]
726 737
727 738 words = completer.get__all__entries(A())
728 739 self.assertEqual(words, ["x"])
729 740
730 741 def test_get__all__entries_no__all__ok(self):
731 742 class A:
732 743 pass
733 744
734 745 words = completer.get__all__entries(A())
735 746 self.assertEqual(words, [])
736 747
737 748 def test_func_kw_completions(self):
738 749 ip = get_ipython()
739 750 c = ip.Completer
740 751 c.use_jedi = False
741 752 ip.ex("def myfunc(a=1,b=2): return a+b")
742 753 s, matches = c.complete(None, "myfunc(1,b")
743 754 self.assertIn("b=", matches)
744 755 # Simulate completing with cursor right after b (pos==10):
745 756 s, matches = c.complete(None, "myfunc(1,b)", 10)
746 757 self.assertIn("b=", matches)
747 758 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
748 759 self.assertIn("b=", matches)
749 760 # builtin function
750 761 s, matches = c.complete(None, "min(k, k")
751 762 self.assertIn("key=", matches)
752 763
753 764 def test_default_arguments_from_docstring(self):
754 765 ip = get_ipython()
755 766 c = ip.Completer
756 767 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
757 768 self.assertEqual(kwd, ["key"])
758 769 # with cython type etc
759 770 kwd = c._default_arguments_from_docstring(
760 771 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
761 772 )
762 773 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
763 774 # white spaces
764 775 kwd = c._default_arguments_from_docstring(
765 776 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
766 777 )
767 778 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
768 779
769 780 def test_line_magics(self):
770 781 ip = get_ipython()
771 782 c = ip.Completer
772 783 s, matches = c.complete(None, "lsmag")
773 784 self.assertIn("%lsmagic", matches)
774 785 s, matches = c.complete(None, "%lsmag")
775 786 self.assertIn("%lsmagic", matches)
776 787
777 788 def test_cell_magics(self):
778 789 from IPython.core.magic import register_cell_magic
779 790
780 791 @register_cell_magic
781 792 def _foo_cellm(line, cell):
782 793 pass
783 794
784 795 ip = get_ipython()
785 796 c = ip.Completer
786 797
787 798 s, matches = c.complete(None, "_foo_ce")
788 799 self.assertIn("%%_foo_cellm", matches)
789 800 s, matches = c.complete(None, "%%_foo_ce")
790 801 self.assertIn("%%_foo_cellm", matches)
791 802
792 803 def test_line_cell_magics(self):
793 804 from IPython.core.magic import register_line_cell_magic
794 805
795 806 @register_line_cell_magic
796 807 def _bar_cellm(line, cell):
797 808 pass
798 809
799 810 ip = get_ipython()
800 811 c = ip.Completer
801 812
802 813 # The policy here is trickier, see comments in completion code. The
803 814 # returned values depend on whether the user passes %% or not explicitly,
804 815 # and this will show a difference if the same name is both a line and cell
805 816 # magic.
806 817 s, matches = c.complete(None, "_bar_ce")
807 818 self.assertIn("%_bar_cellm", matches)
808 819 self.assertIn("%%_bar_cellm", matches)
809 820 s, matches = c.complete(None, "%_bar_ce")
810 821 self.assertIn("%_bar_cellm", matches)
811 822 self.assertIn("%%_bar_cellm", matches)
812 823 s, matches = c.complete(None, "%%_bar_ce")
813 824 self.assertNotIn("%_bar_cellm", matches)
814 825 self.assertIn("%%_bar_cellm", matches)
815 826
816 827 def test_magic_completion_order(self):
817 828 ip = get_ipython()
818 829 c = ip.Completer
819 830
820 831 # Test ordering of line and cell magics.
821 832 text, matches = c.complete("timeit")
822 833 self.assertEqual(matches, ["%timeit", "%%timeit"])
823 834
824 835 def test_magic_completion_shadowing(self):
825 836 ip = get_ipython()
826 837 c = ip.Completer
827 838 c.use_jedi = False
828 839
829 840 # Before importing matplotlib, %matplotlib magic should be the only option.
830 841 text, matches = c.complete("mat")
831 842 self.assertEqual(matches, ["%matplotlib"])
832 843
833 844 # The newly introduced name should shadow the magic.
834 845 ip.run_cell("matplotlib = 1")
835 846 text, matches = c.complete("mat")
836 847 self.assertEqual(matches, ["matplotlib"])
837 848
838 849 # After removing matplotlib from namespace, the magic should again be
839 850 # the only option.
840 851 del ip.user_ns["matplotlib"]
841 852 text, matches = c.complete("mat")
842 853 self.assertEqual(matches, ["%matplotlib"])
843 854
844 855 def test_magic_completion_shadowing_explicit(self):
845 856 """
846 857         If the user tries to complete a shadowed magic, an explicit % prefix should
847 858         still return the completions.
848 859 """
849 860 ip = get_ipython()
850 861 c = ip.Completer
851 862
852 863 # Before importing matplotlib, %matplotlib magic should be the only option.
853 864 text, matches = c.complete("%mat")
854 865 self.assertEqual(matches, ["%matplotlib"])
855 866
856 867 ip.run_cell("matplotlib = 1")
857 868
858 869         # Even with matplotlib in the namespace, the explicit % prefix should
859 870         # still match only the magic.
860 871 text, matches = c.complete("%mat")
861 872 self.assertEqual(matches, ["%matplotlib"])
862 873
863 874 def test_magic_config(self):
864 875 ip = get_ipython()
865 876 c = ip.Completer
866 877
867 878 s, matches = c.complete(None, "conf")
868 879 self.assertIn("%config", matches)
869 880 s, matches = c.complete(None, "conf")
870 881 self.assertNotIn("AliasManager", matches)
871 882 s, matches = c.complete(None, "config ")
872 883 self.assertIn("AliasManager", matches)
873 884 s, matches = c.complete(None, "%config ")
874 885 self.assertIn("AliasManager", matches)
875 886 s, matches = c.complete(None, "config Ali")
876 887 self.assertListEqual(["AliasManager"], matches)
877 888 s, matches = c.complete(None, "%config Ali")
878 889 self.assertListEqual(["AliasManager"], matches)
879 890 s, matches = c.complete(None, "config AliasManager")
880 891 self.assertListEqual(["AliasManager"], matches)
881 892 s, matches = c.complete(None, "%config AliasManager")
882 893 self.assertListEqual(["AliasManager"], matches)
883 894 s, matches = c.complete(None, "config AliasManager.")
884 895 self.assertIn("AliasManager.default_aliases", matches)
885 896 s, matches = c.complete(None, "%config AliasManager.")
886 897 self.assertIn("AliasManager.default_aliases", matches)
887 898 s, matches = c.complete(None, "config AliasManager.de")
888 899 self.assertListEqual(["AliasManager.default_aliases"], matches)
889 900 s, matches = c.complete(None, "config AliasManager.de")
890 901 self.assertListEqual(["AliasManager.default_aliases"], matches)
891 902
892 903 def test_magic_color(self):
893 904 ip = get_ipython()
894 905 c = ip.Completer
895 906
896 907 s, matches = c.complete(None, "colo")
897 908 self.assertIn("%colors", matches)
898 909 s, matches = c.complete(None, "colo")
899 910 self.assertNotIn("NoColor", matches)
900 911 s, matches = c.complete(None, "%colors") # No trailing space
901 912 self.assertNotIn("NoColor", matches)
902 913 s, matches = c.complete(None, "colors ")
903 914 self.assertIn("NoColor", matches)
904 915 s, matches = c.complete(None, "%colors ")
905 916 self.assertIn("NoColor", matches)
906 917 s, matches = c.complete(None, "colors NoCo")
907 918 self.assertListEqual(["NoColor"], matches)
908 919 s, matches = c.complete(None, "%colors NoCo")
909 920 self.assertListEqual(["NoColor"], matches)
910 921
911 922 def test_match_dict_keys(self):
912 923 """
913 924 Test that match_dict_keys works on a couple of use cases, returns what is
914 925 expected, and does not crash.
915 926 """
916 927 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
917 928
918 929 def match(*args, **kwargs):
919 930 quote, offset, matches = match_dict_keys(*args, delims=delims, **kwargs)
920 931 return quote, offset, list(matches)
921 932
922 933 keys = ["foo", b"far"]
923 934 assert match(keys, "b'") == ("'", 2, ["far"])
924 935 assert match(keys, "b'f") == ("'", 2, ["far"])
925 936 assert match(keys, 'b"') == ('"', 2, ["far"])
926 937 assert match(keys, 'b"f') == ('"', 2, ["far"])
927 938
928 939 assert match(keys, "'") == ("'", 1, ["foo"])
929 940 assert match(keys, "'f") == ("'", 1, ["foo"])
930 941 assert match(keys, '"') == ('"', 1, ["foo"])
931 942 assert match(keys, '"f') == ('"', 1, ["foo"])
932 943
933 944 # Completion on first item of tuple
934 945 keys = [("foo", 1111), ("foo", 2222), (3333, "bar"), (3333, "test")]
935 946 assert match(keys, "'f") == ("'", 1, ["foo"])
936 947 assert match(keys, "33") == ("", 0, ["3333"])
937 948
938 949 # Completion on numbers
939 950 keys = [
940 951 0xDEADBEEF,
941 952 1111,
942 953 1234,
943 954 "1999",
944 955 0b10101,
945 956 22,
946 957 ] # 0xDEADBEEF = 3735928559; 0b10101 = 21
947 958 assert match(keys, "0xdead") == ("", 0, ["0xdeadbeef"])
948 959 assert match(keys, "1") == ("", 0, ["1111", "1234"])
949 960 assert match(keys, "2") == ("", 0, ["21", "22"])
950 961 assert match(keys, "0b101") == ("", 0, ["0b10101", "0b10110"])
951 962
952 963 # Should yield no matches for variable names
953 964 assert match(keys, "a_variable") == ("", 0, [])
954 965
955 966 # Should pass over invalid literals
956 967 assert match(keys, "'' ''") == ("", 0, [])
957 968
958 969 def test_match_dict_keys_tuple(self):
959 970 """
960 971 Test that match_dict_keys called with an extra prefix works on a couple
961 972 of use cases, returns what is expected, and does not crash.
962 973 """
963 974 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
964 975
965 976 keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ('other', 'test')]
966 977
967 978 def match(*args, extra=None, **kwargs):
968 979 quote, offset, matches = match_dict_keys(
969 980 *args, delims=delims, extra_prefix=extra, **kwargs
970 981 )
971 982 return quote, offset, list(matches)
972 983
973 984 # Completion on first key == "foo"
974 985 assert match(keys, "'", extra=("foo",)) == ("'", 1, ["bar", "oof"])
975 986 assert match(keys, '"', extra=("foo",)) == ('"', 1, ["bar", "oof"])
976 987 assert match(keys, "'o", extra=("foo",)) == ("'", 1, ["oof"])
977 988 assert match(keys, '"o', extra=("foo",)) == ('"', 1, ["oof"])
978 989 assert match(keys, "b'", extra=("foo",)) == ("'", 2, ["bar"])
979 990 assert match(keys, 'b"', extra=("foo",)) == ('"', 2, ["bar"])
980 991 assert match(keys, "b'b", extra=("foo",)) == ("'", 2, ["bar"])
981 992 assert match(keys, 'b"b', extra=("foo",)) == ('"', 2, ["bar"])
982 993
983 994 # No Completion
984 995 assert match(keys, "'", extra=("no_foo",)) == ("'", 1, [])
985 996 assert match(keys, "'", extra=("fo",)) == ("'", 1, [])
986 997
987 998 keys = [("foo1", "foo2", "foo3", "foo4"), ("foo1", "foo2", "bar", "foo4")]
988 999 assert match(keys, "'foo", extra=("foo1",)) == ("'", 1, ["foo2"])
989 1000 assert match(keys, "'foo", extra=("foo1", "foo2")) == ("'", 1, ["foo3"])
990 1001 assert match(keys, "'foo", extra=("foo1", "foo2", "foo3")) == ("'", 1, ["foo4"])
991 1002 assert match(keys, "'foo", extra=("foo1", "foo2", "foo3", "foo4")) == (
992 1003 "'",
993 1004 1,
994 1005 [],
995 1006 )
996 1007
997 1008 keys = [("foo", 1111), ("foo", "2222"), (3333, "bar"), (3333, 4444)]
998 1009 assert match(keys, "'", extra=("foo",)) == ("'", 1, ["2222"])
999 1010 assert match(keys, "", extra=("foo",)) == ("", 0, ["1111", "'2222'"])
1000 1011 assert match(keys, "'", extra=(3333,)) == ("'", 1, ["bar"])
1001 1012 assert match(keys, "", extra=(3333,)) == ("", 0, ["'bar'", "4444"])
1002 1013 assert match(keys, "'", extra=("3333",)) == ("'", 1, [])
1003 1014 assert match(keys, "33") == ("", 0, ["3333"])
1004 1015
1005 1016 def test_dict_key_completion_closures(self):
1006 1017 ip = get_ipython()
1007 1018 complete = ip.Completer.complete
1008 1019 ip.Completer.auto_close_dict_keys = True
1009 1020
1010 1021 ip.user_ns["d"] = {
1011 1022 # tuple only
1012 1023 ("aa", 11): None,
1013 1024 # tuple and non-tuple
1014 1025 ("bb", 22): None,
1015 1026 "bb": None,
1016 1027 # non-tuple only
1017 1028 "cc": None,
1018 1029 # numeric tuple only
1019 1030 (77, "x"): None,
1020 1031 # numeric tuple and non-tuple
1021 1032 (88, "y"): None,
1022 1033 88: None,
1023 1034 # numeric non-tuple only
1024 1035 99: None,
1025 1036 }
1026 1037
1027 1038 _, matches = complete(line_buffer="d[")
1028 1039 # should append `, ` if it matches a tuple only
1029 1040 self.assertIn("'aa', ", matches)
1030 1041 # should not append anything if it matches both a tuple and an item
1031 1042 self.assertIn("'bb'", matches)
1032 1043 # should append `]` if it matches an item only
1033 1044 self.assertIn("'cc']", matches)
1034 1045
1035 1046 # should append `, ` if it matches a tuple only
1036 1047 self.assertIn("77, ", matches)
1037 1048 # should not append anything if it matches both a tuple and an item
1038 1049 self.assertIn("88", matches)
1039 1050 # should append `]` if it matches an item only
1040 1051 self.assertIn("99]", matches)
1041 1052
1042 1053 _, matches = complete(line_buffer="d['aa', ")
1043 1054 # should restrict matches to those matching tuple prefix
1044 1055 self.assertIn("11]", matches)
1045 1056 self.assertNotIn("'bb'", matches)
1046 1057 self.assertNotIn("'bb', ", matches)
1047 1058 self.assertNotIn("'bb']", matches)
1048 1059 self.assertNotIn("'cc'", matches)
1049 1060 self.assertNotIn("'cc', ", matches)
1050 1061 self.assertNotIn("'cc']", matches)
1051 1062 ip.Completer.auto_close_dict_keys = False
1052 1063
1053 1064 def test_dict_key_completion_string(self):
1054 1065 """Test dictionary key completion for string keys"""
1055 1066 ip = get_ipython()
1056 1067 complete = ip.Completer.complete
1057 1068
1058 1069 ip.user_ns["d"] = {"abc": None}
1059 1070
1060 1071 # check completion at different stages
1061 1072 _, matches = complete(line_buffer="d[")
1062 1073 self.assertIn("'abc'", matches)
1063 1074 self.assertNotIn("'abc']", matches)
1064 1075
1065 1076 _, matches = complete(line_buffer="d['")
1066 1077 self.assertIn("abc", matches)
1067 1078 self.assertNotIn("abc']", matches)
1068 1079
1069 1080 _, matches = complete(line_buffer="d['a")
1070 1081 self.assertIn("abc", matches)
1071 1082 self.assertNotIn("abc']", matches)
1072 1083
1073 1084 # check use of different quoting
1074 1085 _, matches = complete(line_buffer='d["')
1075 1086 self.assertIn("abc", matches)
1076 1087 self.assertNotIn('abc"]', matches)
1077 1088
1078 1089 _, matches = complete(line_buffer='d["a')
1079 1090 self.assertIn("abc", matches)
1080 1091 self.assertNotIn('abc"]', matches)
1081 1092
1082 1093 # check sensitivity to following context
1083 1094 _, matches = complete(line_buffer="d[]", cursor_pos=2)
1084 1095 self.assertIn("'abc'", matches)
1085 1096
1086 1097 _, matches = complete(line_buffer="d['']", cursor_pos=3)
1087 1098 self.assertIn("abc", matches)
1088 1099 self.assertNotIn("abc'", matches)
1089 1100 self.assertNotIn("abc']", matches)
1090 1101
1091 1102 # check multiple solutions are correctly returned and that noise is not
1092 1103 ip.user_ns["d"] = {
1093 1104 "abc": None,
1094 1105 "abd": None,
1095 1106 "bad": None,
1096 1107 object(): None,
1097 1108 5: None,
1098 1109 ("abe", None): None,
1099 1110 (None, "abf"): None
1100 1111 }
1101 1112
1102 1113 _, matches = complete(line_buffer="d['a")
1103 1114 self.assertIn("abc", matches)
1104 1115 self.assertIn("abd", matches)
1105 1116 self.assertNotIn("bad", matches)
1106 1117 self.assertNotIn("abe", matches)
1107 1118 self.assertNotIn("abf", matches)
1108 1119 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1109 1120
1110 1121 # check escaping and whitespace
1111 1122 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
1112 1123 _, matches = complete(line_buffer="d['a")
1113 1124 self.assertIn("a\\nb", matches)
1114 1125 self.assertIn("a\\'b", matches)
1115 1126 self.assertIn('a"b', matches)
1116 1127 self.assertIn("a word", matches)
1117 1128 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1118 1129
1119 1130 # - can complete on non-initial word of the string
1120 1131 _, matches = complete(line_buffer="d['a w")
1121 1132 self.assertIn("word", matches)
1122 1133
1123 1134 # - understands quote escaping
1124 1135 _, matches = complete(line_buffer="d['a\\'")
1125 1136 self.assertIn("b", matches)
1126 1137
1127 1138 # - default quoting should work like repr
1128 1139 _, matches = complete(line_buffer="d[")
1129 1140 self.assertIn('"a\'b"', matches)
1130 1141
1131 1142 # - when opening quote with ", possible to match with unescaped apostrophe
1132 1143 _, matches = complete(line_buffer="d[\"a'")
1133 1144 self.assertIn("b", matches)
1134 1145
1135 1146 # need to not split at delims that readline won't split at
1136 1147 if "-" not in ip.Completer.splitter.delims:
1137 1148 ip.user_ns["d"] = {"before-after": None}
1138 1149 _, matches = complete(line_buffer="d['before-af")
1139 1150 self.assertIn("before-after", matches)
1140 1151
1141 1152 # check completion on tuple-of-string keys at different stages - on first key
1142 1153 ip.user_ns["d"] = {('foo', 'bar'): None}
1143 1154 _, matches = complete(line_buffer="d[")
1144 1155 self.assertIn("'foo'", matches)
1145 1156 self.assertNotIn("'foo']", matches)
1146 1157 self.assertNotIn("'bar'", matches)
1147 1158 self.assertNotIn("foo", matches)
1148 1159 self.assertNotIn("bar", matches)
1149 1160
1150 1161 # - match the prefix
1151 1162 _, matches = complete(line_buffer="d['f")
1152 1163 self.assertIn("foo", matches)
1153 1164 self.assertNotIn("foo']", matches)
1154 1165 self.assertNotIn('foo"]', matches)
1155 1166 _, matches = complete(line_buffer="d['foo")
1156 1167 self.assertIn("foo", matches)
1157 1168
1158 1169 # - can complete on second key
1159 1170 _, matches = complete(line_buffer="d['foo', ")
1160 1171 self.assertIn("'bar'", matches)
1161 1172 _, matches = complete(line_buffer="d['foo', 'b")
1162 1173 self.assertIn("bar", matches)
1163 1174 self.assertNotIn("foo", matches)
1164 1175
1165 1176 # - does not propose missing keys
1166 1177 _, matches = complete(line_buffer="d['foo', 'f")
1167 1178 self.assertNotIn("bar", matches)
1168 1179 self.assertNotIn("foo", matches)
1169 1180
1170 1181 # check sensitivity to following context
1171 1182 _, matches = complete(line_buffer="d['foo',]", cursor_pos=8)
1172 1183 self.assertIn("'bar'", matches)
1173 1184 self.assertNotIn("bar", matches)
1174 1185 self.assertNotIn("'foo'", matches)
1175 1186 self.assertNotIn("foo", matches)
1176 1187
1177 1188 _, matches = complete(line_buffer="d['']", cursor_pos=3)
1178 1189 self.assertIn("foo", matches)
1179 1190 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1180 1191
1181 1192 _, matches = complete(line_buffer='d[""]', cursor_pos=3)
1182 1193 self.assertIn("foo", matches)
1183 1194 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1184 1195
1185 1196 _, matches = complete(line_buffer='d["foo","]', cursor_pos=9)
1186 1197 self.assertIn("bar", matches)
1187 1198 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1188 1199
1189 1200 _, matches = complete(line_buffer='d["foo",]', cursor_pos=8)
1190 1201 self.assertIn("'bar'", matches)
1191 1202 self.assertNotIn("bar", matches)
1192 1203
1193 1204 # Can complete with longer tuple keys
1194 1205 ip.user_ns["d"] = {('foo', 'bar', 'foobar'): None}
1195 1206
1196 1207 # - can complete second key
1197 1208 _, matches = complete(line_buffer="d['foo', 'b")
1198 1209 self.assertIn("bar", matches)
1199 1210 self.assertNotIn("foo", matches)
1200 1211 self.assertNotIn("foobar", matches)
1201 1212
1202 1213 # - can complete third key
1203 1214 _, matches = complete(line_buffer="d['foo', 'bar', 'fo")
1204 1215 self.assertIn("foobar", matches)
1205 1216 self.assertNotIn("foo", matches)
1206 1217 self.assertNotIn("bar", matches)
1207 1218
1208 1219 def test_dict_key_completion_numbers(self):
1209 1220 ip = get_ipython()
1210 1221 complete = ip.Completer.complete
1211 1222
1212 1223 ip.user_ns["d"] = {
1213 1224 0xDEADBEEF: None, # 3735928559
1214 1225 1111: None,
1215 1226 1234: None,
1216 1227 "1999": None,
1217 1228 0b10101: None, # 21
1218 1229 22: None,
1219 1230 }
1220 1231 _, matches = complete(line_buffer="d[1")
1221 1232 self.assertIn("1111", matches)
1222 1233 self.assertIn("1234", matches)
1223 1234 self.assertNotIn("1999", matches)
1224 1235 self.assertNotIn("'1999'", matches)
1225 1236
1226 1237 _, matches = complete(line_buffer="d[0xdead")
1227 1238 self.assertIn("0xdeadbeef", matches)
1228 1239
1229 1240 _, matches = complete(line_buffer="d[2")
1230 1241 self.assertIn("21", matches)
1231 1242 self.assertIn("22", matches)
1232 1243
1233 1244 _, matches = complete(line_buffer="d[0b101")
1234 1245 self.assertIn("0b10101", matches)
1235 1246 self.assertIn("0b10110", matches)
1236 1247
1237 1248 def test_dict_key_completion_contexts(self):
1238 1249 """Test expression contexts in which dict key completion occurs"""
1239 1250 ip = get_ipython()
1240 1251 complete = ip.Completer.complete
1241 1252 d = {"abc": None}
1242 1253 ip.user_ns["d"] = d
1243 1254
1244 1255 class C:
1245 1256 data = d
1246 1257
1247 1258 ip.user_ns["C"] = C
1248 1259 ip.user_ns["get"] = lambda: d
1249 1260 ip.user_ns["nested"] = {"x": d}
1250 1261
1251 1262 def assert_no_completion(**kwargs):
1252 1263 _, matches = complete(**kwargs)
1253 1264 self.assertNotIn("abc", matches)
1254 1265 self.assertNotIn("abc'", matches)
1255 1266 self.assertNotIn("abc']", matches)
1256 1267 self.assertNotIn("'abc'", matches)
1257 1268 self.assertNotIn("'abc']", matches)
1258 1269
1259 1270 def assert_completion(**kwargs):
1260 1271 _, matches = complete(**kwargs)
1261 1272 self.assertIn("'abc'", matches)
1262 1273 self.assertNotIn("'abc']", matches)
1263 1274
1264 1275 # no completion after string closed, even if reopened
1265 1276 assert_no_completion(line_buffer="d['a'")
1266 1277 assert_no_completion(line_buffer='d["a"')
1267 1278 assert_no_completion(line_buffer="d['a' + ")
1268 1279 assert_no_completion(line_buffer="d['a' + '")
1269 1280
1270 1281 # completion in non-trivial expressions
1271 1282 assert_completion(line_buffer="+ d[")
1272 1283 assert_completion(line_buffer="(d[")
1273 1284 assert_completion(line_buffer="C.data[")
1274 1285
1275 1286 # nested dict completion
1276 1287 assert_completion(line_buffer="nested['x'][")
1277 1288
1278 1289 with evaluation_policy("minimal"):
1279 1290 with pytest.raises(AssertionError):
1280 1291 assert_completion(line_buffer="nested['x'][")
1281 1292
1282 1293 # greedy flag
1283 1294 def assert_completion(**kwargs):
1284 1295 _, matches = complete(**kwargs)
1285 1296 self.assertIn("get()['abc']", matches)
1286 1297
1287 1298 assert_no_completion(line_buffer="get()[")
1288 1299 with greedy_completion():
1289 1300 assert_completion(line_buffer="get()[")
1290 1301 assert_completion(line_buffer="get()['")
1291 1302 assert_completion(line_buffer="get()['a")
1292 1303 assert_completion(line_buffer="get()['ab")
1293 1304 assert_completion(line_buffer="get()['abc")
1294 1305
1295 1306 def test_dict_key_completion_bytes(self):
1296 1307 """Test handling of bytes in dict key completion"""
1297 1308 ip = get_ipython()
1298 1309 complete = ip.Completer.complete
1299 1310
1300 1311 ip.user_ns["d"] = {"abc": None, b"abd": None}
1301 1312
1302 1313 _, matches = complete(line_buffer="d[")
1303 1314 self.assertIn("'abc'", matches)
1304 1315 self.assertIn("b'abd'", matches)
1305 1316
1306 1317 if False: # not currently implemented
1307 1318 _, matches = complete(line_buffer="d[b")
1308 1319 self.assertIn("b'abd'", matches)
1309 1320 self.assertNotIn("b'abc'", matches)
1310 1321
1311 1322 _, matches = complete(line_buffer="d[b'")
1312 1323 self.assertIn("abd", matches)
1313 1324 self.assertNotIn("abc", matches)
1314 1325
1315 1326 _, matches = complete(line_buffer="d[B'")
1316 1327 self.assertIn("abd", matches)
1317 1328 self.assertNotIn("abc", matches)
1318 1329
1319 1330 _, matches = complete(line_buffer="d['")
1320 1331 self.assertIn("abc", matches)
1321 1332 self.assertNotIn("abd", matches)
1322 1333
1323 1334 def test_dict_key_completion_unicode_py3(self):
1324 1335 """Test handling of unicode in dict key completion"""
1325 1336 ip = get_ipython()
1326 1337 complete = ip.Completer.complete
1327 1338
1328 1339 ip.user_ns["d"] = {"a\u05d0": None}
1329 1340
1330 1341 # query using escape
1331 1342 if sys.platform != "win32":
1332 1343 # Known failure on Windows
1333 1344 _, matches = complete(line_buffer="d['a\\u05d0")
1334 1345 self.assertIn("u05d0", matches) # tokenized after \\
1335 1346
1336 1347 # query using character
1337 1348 _, matches = complete(line_buffer="d['a\u05d0")
1338 1349 self.assertIn("a\u05d0", matches)
1339 1350
1340 1351 with greedy_completion():
1341 1352 # query using escape
1342 1353 _, matches = complete(line_buffer="d['a\\u05d0")
1343 1354 self.assertIn("d['a\\u05d0']", matches) # tokenized after \\
1344 1355
1345 1356 # query using character
1346 1357 _, matches = complete(line_buffer="d['a\u05d0")
1347 1358 self.assertIn("d['a\u05d0']", matches)
1348 1359
1349 1360 @dec.skip_without("numpy")
1350 1361 def test_struct_array_key_completion(self):
1351 1362 """Test dict key completion applies to numpy struct arrays"""
1352 1363 import numpy
1353 1364
1354 1365 ip = get_ipython()
1355 1366 complete = ip.Completer.complete
1356 1367 ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
1357 1368 _, matches = complete(line_buffer="d['")
1358 1369 self.assertIn("hello", matches)
1359 1370 self.assertIn("world", matches)
1360 1371 # complete on the numpy struct itself
1361 1372 dt = numpy.dtype(
1362 1373 [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
1363 1374 )
1364 1375 x = numpy.zeros(2, dtype=dt)
1365 1376 ip.user_ns["d"] = x[1]
1366 1377 _, matches = complete(line_buffer="d['")
1367 1378 self.assertIn("my_head", matches)
1368 1379 self.assertIn("my_data", matches)
1369 1380
1370 1381 def completes_on_nested():
1371 1382 ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
1372 1383 _, matches = complete(line_buffer="d[1]['my_head']['")
1373 1384 self.assertTrue(any(["my_dt" in m for m in matches]))
1374 1385 self.assertTrue(any(["my_df" in m for m in matches]))
1375 1386 # complete on a nested level
1376 1387 with greedy_completion():
1377 1388 completes_on_nested()
1378 1389
1379 1390 with evaluation_policy("limited"):
1380 1391 completes_on_nested()
1381 1392
1382 1393 with evaluation_policy("minimal"):
1383 1394 with pytest.raises(AssertionError):
1384 1395 completes_on_nested()
1385 1396
1386 1397 @dec.skip_without("pandas")
1387 1398 def test_dataframe_key_completion(self):
1388 1399 """Test dict key completion applies to pandas DataFrames"""
1389 1400 import pandas
1390 1401
1391 1402 ip = get_ipython()
1392 1403 complete = ip.Completer.complete
1393 1404 ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
1394 1405 _, matches = complete(line_buffer="d['")
1395 1406 self.assertIn("hello", matches)
1396 1407 self.assertIn("world", matches)
1397 1408 _, matches = complete(line_buffer="d.loc[:, '")
1398 1409 self.assertIn("hello", matches)
1399 1410 self.assertIn("world", matches)
1400 1411 _, matches = complete(line_buffer="d.loc[1:, '")
1401 1412 self.assertIn("hello", matches)
1402 1413 _, matches = complete(line_buffer="d.loc[1:1, '")
1403 1414 self.assertIn("hello", matches)
1404 1415 _, matches = complete(line_buffer="d.loc[1:1:-1, '")
1405 1416 self.assertIn("hello", matches)
1406 1417 _, matches = complete(line_buffer="d.loc[::, '")
1407 1418 self.assertIn("hello", matches)
1408 1419
1409 1420 def test_dict_key_completion_invalids(self):
1410 1421 """Smoke test for cases that dict key completion can't handle"""
1411 1422 ip = get_ipython()
1412 1423 complete = ip.Completer.complete
1413 1424
1414 1425 ip.user_ns["no_getitem"] = None
1415 1426 ip.user_ns["no_keys"] = []
1416 1427 ip.user_ns["cant_call_keys"] = dict
1417 1428 ip.user_ns["empty"] = {}
1418 1429 ip.user_ns["d"] = {"abc": 5}
1419 1430
1420 1431 _, matches = complete(line_buffer="no_getitem['")
1421 1432 _, matches = complete(line_buffer="no_keys['")
1422 1433 _, matches = complete(line_buffer="cant_call_keys['")
1423 1434 _, matches = complete(line_buffer="empty['")
1424 1435 _, matches = complete(line_buffer="name_error['")
1425 1436 _, matches = complete(line_buffer="d['\\") # incomplete escape
1426 1437
1427 1438 def test_object_key_completion(self):
1428 1439 ip = get_ipython()
1429 1440 ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])
1430 1441
1431 1442 _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
1432 1443 self.assertIn("qwerty", matches)
1433 1444 self.assertIn("qwick", matches)
1434 1445
1435 1446 def test_class_key_completion(self):
1436 1447 ip = get_ipython()
1437 1448 NamedInstanceClass("qwerty")
1438 1449 NamedInstanceClass("qwick")
1439 1450 ip.user_ns["named_instance_class"] = NamedInstanceClass
1440 1451
1441 1452 _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
1442 1453 self.assertIn("qwerty", matches)
1443 1454 self.assertIn("qwick", matches)
1444 1455
1445 1456 def test_tryimport(self):
1446 1457 """
1447 1458 Test that try_import doesn't crash on a trailing dot, and imports the parent modules first.
1448 1459 """
1449 1460 from IPython.core.completerlib import try_import
1450 1461
1451 1462 assert try_import("IPython.")
1452 1463
1453 1464 def test_aimport_module_completer(self):
1454 1465 ip = get_ipython()
1455 1466 _, matches = ip.complete("i", "%aimport i")
1456 1467 self.assertIn("io", matches)
1457 1468 self.assertNotIn("int", matches)
1458 1469
1459 1470 def test_nested_import_module_completer(self):
1460 1471 ip = get_ipython()
1461 1472 _, matches = ip.complete(None, "import IPython.co", 17)
1462 1473 self.assertIn("IPython.core", matches)
1463 1474 self.assertNotIn("import IPython.core", matches)
1464 1475 self.assertNotIn("IPython.display", matches)
1465 1476
1466 1477 def test_import_module_completer(self):
1467 1478 ip = get_ipython()
1468 1479 _, matches = ip.complete("i", "import i")
1469 1480 self.assertIn("io", matches)
1470 1481 self.assertNotIn("int", matches)
1471 1482
1472 1483 def test_from_module_completer(self):
1473 1484 ip = get_ipython()
1474 1485 _, matches = ip.complete("B", "from io import B", 16)
1475 1486 self.assertIn("BytesIO", matches)
1476 1487 self.assertNotIn("BaseException", matches)
1477 1488
1478 1489 def test_snake_case_completion(self):
1479 1490 ip = get_ipython()
1480 1491 ip.Completer.use_jedi = False
1481 1492 ip.user_ns["some_three"] = 3
1482 1493 ip.user_ns["some_four"] = 4
1483 1494 _, matches = ip.complete("s_", "print(s_f")
1484 1495 self.assertIn("some_three", matches)
1485 1496 self.assertIn("some_four", matches)
1486 1497
1487 1498 def test_mix_terms(self):
1488 1499 ip = get_ipython()
1489 1500 from textwrap import dedent
1490 1501
1491 1502 ip.Completer.use_jedi = False
1492 1503 ip.ex(
1493 1504 dedent(
1494 1505 """
1495 1506 class Test:
1496 1507 def meth(self, meth_arg1):
1497 1508 print("meth")
1498 1509
1499 1510 def meth_1(self, meth1_arg1, meth1_arg2):
1500 1511 print("meth1")
1501 1512
1502 1513 def meth_2(self, meth2_arg1, meth2_arg2):
1503 1514 print("meth2")
1504 1515 test = Test()
1505 1516 """
1506 1517 )
1507 1518 )
1508 1519 _, matches = ip.complete(None, "test.meth(")
1509 1520 self.assertIn("meth_arg1=", matches)
1510 1521 self.assertNotIn("meth2_arg1=", matches)
1511 1522
1512 1523 def test_percent_symbol_restrict_to_magic_completions(self):
1513 1524 ip = get_ipython()
1514 1525 completer = ip.Completer
1515 1526 text = "%a"
1516 1527
1517 1528 with provisionalcompleter():
1518 1529 completer.use_jedi = True
1519 1530 completions = completer.completions(text, len(text))
1520 1531 for c in completions:
1521 1532 self.assertEqual(c.text[0], "%")
1522 1533
1523 1534 def test_fwd_unicode_restricts(self):
1524 1535 ip = get_ipython()
1525 1536 completer = ip.Completer
1526 1537 text = "\\ROMAN NUMERAL FIVE"
1527 1538
1528 1539 with provisionalcompleter():
1529 1540 completer.use_jedi = True
1530 1541 completions = [
1531 1542 completion.text for completion in completer.completions(text, len(text))
1532 1543 ]
1533 1544 self.assertEqual(completions, ["\u2164"])
1534 1545
1535 1546 def test_dict_key_restrict_to_dicts(self):
1536 1547 """Test that dict key suppresses non-dict completion items"""
1537 1548 ip = get_ipython()
1538 1549 c = ip.Completer
1539 1550 d = {"abc": None}
1540 1551 ip.user_ns["d"] = d
1541 1552
1542 1553 text = 'd["a'
1543 1554
1544 1555 def _():
1545 1556 with provisionalcompleter():
1546 1557 c.use_jedi = True
1547 1558 return [
1548 1559 completion.text for completion in c.completions(text, len(text))
1549 1560 ]
1550 1561
1551 1562 completions = _()
1552 1563 self.assertEqual(completions, ["abc"])
1553 1564
1554 1565 # check that it can be disabled in granular manner:
1555 1566 cfg = Config()
1556 1567 cfg.IPCompleter.suppress_competing_matchers = {
1557 1568 "IPCompleter.dict_key_matcher": False
1558 1569 }
1559 1570 c.update_config(cfg)
1560 1571
1561 1572 completions = _()
1562 1573 self.assertIn("abc", completions)
1563 1574 self.assertGreater(len(completions), 1)
1564 1575
1565 1576 def test_matcher_suppression(self):
1566 1577 @completion_matcher(identifier="a_matcher")
1567 1578 def a_matcher(text):
1568 1579 return ["completion_a"]
1569 1580
1570 1581 @completion_matcher(identifier="b_matcher", api_version=2)
1571 1582 def b_matcher(context: CompletionContext):
1572 1583 text = context.token
1573 1584 result = {"completions": [SimpleCompletion("completion_b")]}
1574 1585
1575 1586 if text == "suppress c":
1576 1587 result["suppress"] = {"c_matcher"}
1577 1588
1578 1589 if text.startswith("suppress all"):
1579 1590 result["suppress"] = True
1580 1591 if text == "suppress all but c":
1581 1592 result["do_not_suppress"] = {"c_matcher"}
1582 1593 if text == "suppress all but a":
1583 1594 result["do_not_suppress"] = {"a_matcher"}
1584 1595
1585 1596 return result
1586 1597
1587 1598 @completion_matcher(identifier="c_matcher")
1588 1599 def c_matcher(text):
1589 1600 return ["completion_c"]
1590 1601
1591 1602 with custom_matchers([a_matcher, b_matcher, c_matcher]):
1592 1603 ip = get_ipython()
1593 1604 c = ip.Completer
1594 1605
1595 1606 def _(text, expected):
1596 1607 c.use_jedi = False
1597 1608 s, matches = c.complete(text)
1598 1609 self.assertEqual(expected, matches)
1599 1610
1600 1611 _("do not suppress", ["completion_a", "completion_b", "completion_c"])
1601 1612 _("suppress all", ["completion_b"])
1602 1613 _("suppress all but a", ["completion_a", "completion_b"])
1603 1614 _("suppress all but c", ["completion_b", "completion_c"])
1604 1615
1605 1616 def configure(suppression_config):
1606 1617 cfg = Config()
1607 1618 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1608 1619 c.update_config(cfg)
1609 1620
1610 1621 # test that configuration takes priority over the run-time decisions
1611 1622
1612 1623 configure(False)
1613 1624 _("suppress all", ["completion_a", "completion_b", "completion_c"])
1614 1625
1615 1626 configure({"b_matcher": False})
1616 1627 _("suppress all", ["completion_a", "completion_b", "completion_c"])
1617 1628
1618 1629 configure({"a_matcher": False})
1619 1630 _("suppress all", ["completion_b"])
1620 1631
1621 1632 configure({"b_matcher": True})
1622 1633 _("do not suppress", ["completion_b"])
1623 1634
1624 1635 configure(True)
1625 1636 _("do not suppress", ["completion_a"])
1626 1637
1627 1638 def test_matcher_suppression_with_iterator(self):
1628 1639 @completion_matcher(identifier="matcher_returning_iterator")
1629 1640 def matcher_returning_iterator(text):
1630 1641 return iter(["completion_iter"])
1631 1642
1632 1643 @completion_matcher(identifier="matcher_returning_list")
1633 1644 def matcher_returning_list(text):
1634 1645 return ["completion_list"]
1635 1646
1636 1647 with custom_matchers([matcher_returning_iterator, matcher_returning_list]):
1637 1648 ip = get_ipython()
1638 1649 c = ip.Completer
1639 1650
1640 1651 def _(text, expected):
1641 1652 c.use_jedi = False
1642 1653 s, matches = c.complete(text)
1643 1654 self.assertEqual(expected, matches)
1644 1655
1645 1656 def configure(suppression_config):
1646 1657 cfg = Config()
1647 1658 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1648 1659 c.update_config(cfg)
1649 1660
1650 1661 configure(False)
1651 1662 _("---", ["completion_iter", "completion_list"])
1652 1663
1653 1664 configure(True)
1654 1665 _("---", ["completion_iter"])
1655 1666
1656 1667 configure(None)
1657 1668 _("--", ["completion_iter", "completion_list"])
1658 1669
1659 1670 @pytest.mark.xfail(
1660 1671 sys.version_info.releaselevel in ("alpha",),
1661 1672 reason="Parso does not yet parse 3.13",
1662 1673 )
1663 1674 def test_matcher_suppression_with_jedi(self):
1664 1675 ip = get_ipython()
1665 1676 c = ip.Completer
1666 1677 c.use_jedi = True
1667 1678
1668 1679 def configure(suppression_config):
1669 1680 cfg = Config()
1670 1681 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1671 1682 c.update_config(cfg)
1672 1683
1673 1684 def _():
1674 1685 with provisionalcompleter():
1675 1686 matches = [completion.text for completion in c.completions("dict.", 5)]
1676 1687 self.assertIn("keys", matches)
1677 1688
1678 1689 configure(False)
1679 1690 _()
1680 1691
1681 1692 configure(True)
1682 1693 _()
1683 1694
1684 1695 configure(None)
1685 1696 _()
1686 1697
1687 1698 def test_matcher_disabling(self):
1688 1699 @completion_matcher(identifier="a_matcher")
1689 1700 def a_matcher(text):
1690 1701 return ["completion_a"]
1691 1702
1692 1703 @completion_matcher(identifier="b_matcher")
1693 1704 def b_matcher(text):
1694 1705 return ["completion_b"]
1695 1706
1696 1707 def _(expected):
1697 1708 s, matches = c.complete("completion_")
1698 1709 self.assertEqual(expected, matches)
1699 1710
1700 1711 with custom_matchers([a_matcher, b_matcher]):
1701 1712 ip = get_ipython()
1702 1713 c = ip.Completer
1703 1714
1704 1715 _(["completion_a", "completion_b"])
1705 1716
1706 1717 cfg = Config()
1707 1718 cfg.IPCompleter.disable_matchers = ["b_matcher"]
1708 1719 c.update_config(cfg)
1709 1720
1710 1721 _(["completion_a"])
1711 1722
1712 1723 cfg.IPCompleter.disable_matchers = []
1713 1724 c.update_config(cfg)
1714 1725
1715 1726 def test_matcher_priority(self):
1716 1727 @completion_matcher(identifier="a_matcher", priority=0, api_version=2)
1717 1728 def a_matcher(text):
1718 1729 return {"completions": [SimpleCompletion("completion_a")], "suppress": True}
1719 1730
1720 1731 @completion_matcher(identifier="b_matcher", priority=2, api_version=2)
1721 1732 def b_matcher(text):
1722 1733 return {"completions": [SimpleCompletion("completion_b")], "suppress": True}
1723 1734
1724 1735 def _(expected):
1725 1736 s, matches = c.complete("completion_")
1726 1737 self.assertEqual(expected, matches)
1727 1738
1728 1739 with custom_matchers([a_matcher, b_matcher]):
1729 1740 ip = get_ipython()
1730 1741 c = ip.Completer
1731 1742
1732 1743 _(["completion_b"])
1733 1744 a_matcher.matcher_priority = 3
1734 1745 _(["completion_a"])
1735 1746
1736 1747
1737 1748 @pytest.mark.parametrize(
1749 "setup,code,expected,not_expected",
1750 [
1751 ('a="str"; b=1', "(a, b.", [".bit_count", ".conjugate"], [".count"]),
1752 ('a="str"; b=1', "(a, b).", [".count"], [".bit_count", ".capitalize"]),
1753 ('x="str"; y=1', "x = {1, y.", [".bit_count"], [".count"]),
1754 ('x="str"; y=1', "x = [1, y.", [".bit_count"], [".count"]),
1755 ('x="str"; y=1; fun=lambda x:x', "x = fun(1, y.", [".bit_count"], [".count"]),
1756 ],
1757 )
1758 def test_misc_no_jedi_completions(setup, code, expected, not_expected):
1759 ip = get_ipython()
1760 c = ip.Completer
1761 ip.ex(setup)
1762 with provisionalcompleter(), jedi_status(False):
1763 matches = c.all_completions(code)
1764 assert set(expected) - set(matches) == set(), set(matches)
1765 assert set(matches).intersection(set(not_expected)) == set()
1766
1767
1768 @pytest.mark.parametrize(
1769 "code,expected",
1770 [
1771 (" (a, b", "b"),
1772 ("(a, b", "b"),
1652 1773 ("(a, b)", ""),  # a fully closed expression trims to empty
1774 (" (a, b)", "(a, b)"),
1775 (" [a, b]", "[a, b]"),
1776 (" a, b", "b"),
1777 ("x = {1, y", "y"),
1778 ("x = [1, y", "y"),
1779 ("x = fun(1, y", "y"),
1780 ],
1781 )
1782 def test_trim_expr(code, expected):
1783 c = get_ipython().Completer
1784 assert c._trim_expr(code) == expected
1785
1786
1787 @pytest.mark.parametrize(
1738 1788 "input, expected",
1739 1789 [
1740 1790 ["1.234", "1.234"],
1741 1791 # should match signed numbers
1742 1792 ["+1", "+1"],
1743 1793 ["-1", "-1"],
1744 1794 ["-1.0", "-1.0"],
1745 1795 ["-1.", "-1."],
1746 1796 ["+1.", "+1."],
1747 1797 [".1", ".1"],
1748 1798 # should not match non-numbers
1749 1799 ["1..", None],
1750 1800 ["..", None],
1751 1801 [".1.", None],
1752 1802 # should match after comma
1753 1803 [",1", "1"],
1754 1804 [", 1", "1"],
1755 1805 [", .1", ".1"],
1756 1806 [", +.1", "+.1"],
1757 1807 # should not match after trailing spaces
1758 1808 [".1 ", None],
1759 1809 # some complex cases
1760 1810 ["0b_0011_1111_0100_1110", "0b_0011_1111_0100_1110"],
1761 1811 ["0xdeadbeef", "0xdeadbeef"],
1762 1812 ["0b_1110_0101", "0b_1110_0101"],
1763 1813 # should not match if in an operation
1764 1814 ["1 + 1", None],
1765 1815 [", 1 + 1", None],
1766 1816 ],
1767 1817 )
1768 1818 def test_match_numeric_literal_for_dict_key(input, expected):
1769 1819 assert _match_number_in_dict_key_prefix(input) == expected