fix IPCompleter inside tuples/arrays when jedi is disabled...
M Bussonnier -
@@ -1,3389 +1,3420
1 1 """Completion for IPython.
2 2
3 3 This module started as fork of the rlcompleter module in the Python standard
4 4 library. The original enhancements made to rlcompleter have been sent
5 5 upstream and were accepted as of Python 2.3,
6 6
7 7 This module now supports a wide variety of completion mechanisms, both
8 8 for normal classic Python code and for IPython-specific
9 9 syntax like magics.
10 10
11 11 Latex and Unicode completion
12 12 ============================
13 13
14 14 IPython and compatible frontends can not only complete your code, but can also
15 15 help you input a wide range of characters. In particular, we allow you to insert
16 16 a unicode character using the tab completion mechanism.
17 17
18 18 Forward latex/unicode completion
19 19 --------------------------------
20 20
21 21 Forward completion allows you to easily type a unicode character using its latex
22 22 name or its long unicode description. To do so, type a backslash followed by the
23 23 relevant name and press tab:
24 24
25 25
26 26 Using latex completion:
27 27
28 28 .. code::
29 29
30 30 \\alpha<tab>
31 31 α
32 32
33 33 or using unicode completion:
34 34
35 35
36 36 .. code::
37 37
38 38 \\GREEK SMALL LETTER ALPHA<tab>
39 39 α
40 40
41 41
42 42 Only valid Python identifiers will complete. Combining characters (like arrows or
43 43 dots) are also available; unlike latex, they need to be put after their
44 44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45 45
46 46 Some browsers are known to display combining characters incorrectly.
47 47
48 48 Backward latex completion
49 49 -------------------------
50 50
51 51 It is sometimes challenging to know how to type a character. If you are using
52 52 IPython or any compatible frontend, you can prepend a backslash to the character
53 53 and press :kbd:`Tab` to expand it to its latex form.
54 54
55 55 .. code::
56 56
57 57 \\α<tab>
58 58 \\alpha
59 59
60 60
61 61 Both forward and backward completions can be deactivated by setting the
62 62 :std:configtrait:`Completer.backslash_combining_completions` option to
63 63 ``False``.
64 64
65 65
66 66 Experimental
67 67 ============
68 68
69 69 Starting with IPython 6.0, this module can make use of the Jedi library to
70 70 generate completions, both using static analysis of the code and by dynamically
71 71 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
72 72 library for Python. The APIs attached to this new mechanism are unstable and will
73 73 raise unless used in a :any:`provisionalcompleter` context manager.
74 74
75 75 You will find that the following are experimental:
76 76
77 77 - :any:`provisionalcompleter`
78 78 - :any:`IPCompleter.completions`
79 79 - :any:`Completion`
80 80 - :any:`rectify_completions`
81 81
82 82 .. note::
83 83
84 84 better name for :any:`rectify_completions` ?
85 85
86 86 We welcome any feedback on these new APIs, and we also encourage you to try this
87 87 module in debug mode (start IPython with ``--Completer.debug=True``) in order
88 88 to have extra logging information if :any:`jedi` is crashing, or if the current
89 89 IPython completer's pending deprecations are returning results not yet handled
90 90 by :any:`jedi`.
91 91
92 92 Using Jedi for tab completion allows snippets like the following to work without
93 93 having to execute any code:
94 94
95 95 >>> myvar = ['hello', 42]
96 96 ... myvar[1].bi<tab>
97 97
98 98 Tab completion will be able to infer that ``myvar[1]`` is a real number almost
99 99 without executing any code, unlike the deprecated :any:`IPCompleter.greedy`
100 100 option.
101 101
102 102 Be sure to update :any:`jedi` to the latest stable version or to try the
103 103 current development version to get better completions.
104 104
105 105 Matchers
106 106 ========
107 107
108 108 All completion routines are implemented using the unified *Matchers* API.
109 109 The matchers API is provisional and subject to change without notice.
110 110
111 111 The built-in matchers include:
112 112
113 113 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
114 114 - :any:`IPCompleter.magic_matcher`: completions for magics,
115 115 - :any:`IPCompleter.unicode_name_matcher`,
116 116 :any:`IPCompleter.fwd_unicode_matcher`
117 117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
118 118 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
119 119 - :any:`IPCompleter.file_matcher`: paths to files and directories,
120 120 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
121 121 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
122 122 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
123 123 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
124 124 implementation in :any:`InteractiveShell` which uses IPython hooks system
125 125 (`complete_command`) with string dispatch (including regular expressions).
126 126 Unlike other matchers, ``custom_completer_matcher`` will not suppress
127 127 Jedi results, to match behaviour in earlier IPython versions.
128 128
129 129 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
130 130
131 131 Matcher API
132 132 -----------
133 133
134 134 Simplifying some details, the ``Matcher`` interface can be described as
135 135
136 136 .. code-block::
137 137
138 138 MatcherAPIv1 = Callable[[str], list[str]]
139 139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
140 140
141 141 Matcher = MatcherAPIv1 | MatcherAPIv2
142 142
143 143 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
144 144 and remains supported as the simplest way of generating completions. This is also
145 145 currently the only API supported by the IPython hooks system `complete_command`.
146 146
147 147 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
148 148 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
149 149 and requires a literal ``2`` for v2 Matchers.
150 150
151 151 Once the API stabilises, future versions may relax the requirement for specifying
152 152 ``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore
153 153 please do not rely on the presence of ``matcher_api_version`` for any purpose.
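As a sketch only (the matcher names and completion words below are made up for illustration, and a plain dict stands in for ``SimpleMatcherResult``), a v1 matcher is a plain callable from the token to a list of strings, while a v2 matcher declares a literal ``matcher_api_version = 2`` and returns a result dictionary:

```python
from types import SimpleNamespace

def greet_matcher_v1(text):
    # MatcherAPIv1: Callable[[str], list[str]]
    return [w for w in ("hello", "help", "heap") if w.startswith(text)]

def greet_matcher_v2(context):
    # MatcherAPIv2: Callable[[CompletionContext], SimpleMatcherResult];
    # a plain dict stands in for SimpleMatcherResult here.
    token = context.token
    return {
        "completions": [
            {"text": w} for w in ("hello", "help", "heap") if w.startswith(token)
        ],
        "suppress": False,
    }

# v2 matchers must carry a literal ``2`` as their API version.
greet_matcher_v2.matcher_api_version = 2

ctx = SimpleNamespace(token="hel")  # stand-in for a CompletionContext
assert greet_matcher_v1("hel") == ["hello", "help"]
assert [c["text"] for c in greet_matcher_v2(ctx)["completions"]] == ["hello", "help"]
```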
154 154
155 155 Suppression of competing matchers
156 156 ---------------------------------
157 157
158 158 By default, results from all matchers are combined in the order determined by
159 159 their priority. Matchers can request to suppress results from subsequent
160 160 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
161 161
162 162 When multiple matchers simultaneously request suppression, the results from
163 163 the matcher with the highest priority will be returned.
164 164
165 165 Sometimes it is desirable to suppress most but not all other matchers;
166 166 this can be achieved by adding a set of identifiers of matchers which
167 167 should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.
168 168
169 169 The suppression behaviour is user-configurable via
170 170 :std:configtrait:`IPCompleter.suppress_competing_matchers`.
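For instance, a hypothetical matcher result that suppresses all competitors except the Jedi matcher could look like the following (the completion text is illustrative):

```python
# Illustrative MatcherResult payload: suppress every other matcher,
# but let the Jedi matcher's results through.
result = {
    "completions": [{"text": "example_completion"}],
    "suppress": True,
    "do_not_suppress": {"IPCompleter.jedi_matcher"},
}

assert result["suppress"] is True
assert "IPCompleter.jedi_matcher" in result["do_not_suppress"]
```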
171 171 """
172 172
173 173
174 174 # Copyright (c) IPython Development Team.
175 175 # Distributed under the terms of the Modified BSD License.
176 176 #
177 177 # Some of this code originated from rlcompleter in the Python standard library
178 178 # Copyright (C) 2001 Python Software Foundation, www.python.org
179 179
180 180 from __future__ import annotations
181 181 import builtins as builtin_mod
182 182 import enum
183 183 import glob
184 184 import inspect
185 185 import itertools
186 186 import keyword
187 import ast
187 188 import os
188 189 import re
189 190 import string
190 191 import sys
191 192 import tokenize
192 193 import time
193 194 import unicodedata
194 195 import uuid
195 196 import warnings
196 197 from ast import literal_eval
197 198 from collections import defaultdict
198 199 from contextlib import contextmanager
199 200 from dataclasses import dataclass
200 201 from functools import cached_property, partial
201 202 from types import SimpleNamespace
202 203 from typing import (
203 204 Iterable,
204 205 Iterator,
205 206 List,
206 207 Tuple,
207 208 Union,
208 209 Any,
209 210 Sequence,
210 211 Dict,
211 212 Optional,
212 213 TYPE_CHECKING,
213 214 Set,
214 215 Sized,
215 216 TypeVar,
216 217 Literal,
217 218 )
218 219
219 220 from IPython.core.guarded_eval import guarded_eval, EvaluationContext
220 221 from IPython.core.error import TryNext
221 222 from IPython.core.inputtransformer2 import ESC_MAGIC
222 223 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
223 224 from IPython.core.oinspect import InspectColors
224 225 from IPython.testing.skipdoctest import skip_doctest
225 226 from IPython.utils import generics
226 227 from IPython.utils.decorators import sphinx_options
227 228 from IPython.utils.dir2 import dir2, get_real_method
228 229 from IPython.utils.docs import GENERATING_DOCUMENTATION
229 230 from IPython.utils.path import ensure_dir_exists
230 231 from IPython.utils.process import arg_split
231 232 from traitlets import (
232 233 Bool,
233 234 Enum,
234 235 Int,
235 236 List as ListTrait,
236 237 Unicode,
237 238 Dict as DictTrait,
238 239 Union as UnionTrait,
239 240 observe,
240 241 )
241 242 from traitlets.config.configurable import Configurable
242 243
243 244 import __main__
244 245
245 246 # skip module doctests
246 247 __skip_doctest__ = True
247 248
248 249
249 250 try:
250 251 import jedi
251 252 jedi.settings.case_insensitive_completion = False
252 253 import jedi.api.helpers
253 254 import jedi.api.classes
254 255 JEDI_INSTALLED = True
255 256 except ImportError:
256 257 JEDI_INSTALLED = False
257 258
258 259
259 260 if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
260 261 from typing import cast
261 262 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
262 263 else:
263 264 from typing import Generic
264 265
265 266 def cast(type_, obj):
266 267 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
267 268 return obj
268 269
269 270 # do not require on runtime
270 271 NotRequired = Tuple # requires Python >=3.11
271 272 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
272 273 Protocol = object # requires Python >=3.8
273 274 TypeAlias = Any # requires Python >=3.10
274 275 TypeGuard = Generic # requires Python >=3.10
275 276 if GENERATING_DOCUMENTATION:
276 277 from typing import TypedDict
277 278
278 279 # -----------------------------------------------------------------------------
279 280 # Globals
280 281 #-----------------------------------------------------------------------------
281 282
282 283 # Ranges where we have most of the valid unicode names. We could be more fine-
283 284 # grained, but is it worth it for performance? While unicode has characters in the
284 285 # range 0..0x110000, we seem to have names for only about 10% of those (131808 as I
285 286 # write this). With the ranges below we cover them all, with a density of ~67%; the
286 287 # biggest next gap we considered only adds about 1% density and there are 600
287 288 # gaps that would need hard coding.
288 289 _UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]
289 290
290 291 # Public API
291 292 __all__ = ["Completer", "IPCompleter"]
292 293
293 294 if sys.platform == 'win32':
294 295 PROTECTABLES = ' '
295 296 else:
296 297 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
297 298
298 299 # Protect against returning an enormous number of completions which the frontend
299 300 # may have trouble processing.
300 301 MATCHES_LIMIT = 500
301 302
302 303 # Completion type reported when no type can be inferred.
303 304 _UNKNOWN_TYPE = "<unknown>"
304 305
305 306 # sentinel value to signal lack of a match
306 307 not_found = object()
307 308
308 309 class ProvisionalCompleterWarning(FutureWarning):
309 310 """
310 311 Exception raised by an experimental feature in this module.
311 312
312 313 Wrap code in :any:`provisionalcompleter` context manager if you
313 314 are certain you want to use an unstable feature.
314 315 """
315 316 pass
316 317
317 318 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
318 319
319 320
320 321 @skip_doctest
321 322 @contextmanager
322 323 def provisionalcompleter(action='ignore'):
323 324 """
324 325 This context manager has to be used in any place where unstable completer
325 326 behavior and API may be called.
326 327
327 328 >>> with provisionalcompleter():
328 329 ... completer.do_experimental_things() # works
329 330
330 331 >>> completer.do_experimental_things() # raises.
331 332
332 333 .. note::
333 334
334 335 Unstable
335 336
336 337 By using this context manager you agree that the APIs in use may change
337 338 without warning, and that you won't complain if they do so.
338 339
339 340 You also understand that, if the API is not to your liking, you should report
340 341 a bug to explain your use case upstream.
341 342
342 343 We'll be happy to get your feedback, feature requests, and improvements on
343 344 any of the unstable APIs!
344 345 """
345 346 with warnings.catch_warnings():
346 347 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
347 348 yield
348 349
349 350
350 351 def has_open_quotes(s):
351 352 """Return whether a string has open quotes.
352 353
353 354 This simply counts whether the number of quote characters of either type in
354 355 the string is odd.
355 356
356 357 Returns
357 358 -------
358 359 If there is an open quote, the quote character is returned. Else, return
359 360 False.
360 361 """
361 362 # We check " first, then ', so complex cases with nested quotes will get
362 363 # the " to take precedence.
363 364 if s.count('"') % 2:
364 365 return '"'
365 366 elif s.count("'") % 2:
366 367 return "'"
367 368 else:
368 369 return False
369 370
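A quick illustration of the behaviour described in the docstring (the helper body is mirrored here so the snippet is self-contained):

```python
def has_open_quotes(s):
    # '"' is checked first, so nested cases give precedence to '"'.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    return False

assert has_open_quotes('print("abc') == '"'
assert has_open_quotes("it's open") == "'"
assert has_open_quotes("all 'closed'") is False
```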
370 371
371 372 def protect_filename(s, protectables=PROTECTABLES):
372 373 """Escape a string to protect certain characters."""
373 374 if set(s) & set(protectables):
374 375 if sys.platform == "win32":
375 376 return '"' + s + '"'
376 377 else:
377 378 return "".join(("\\" + c if c in protectables else c) for c in s)
378 379 else:
379 380 return s
380 381
381 382
382 383 def expand_user(path:str) -> Tuple[str, bool, str]:
383 384 """Expand ``~``-style usernames in strings.
384 385
385 386 This is similar to :func:`os.path.expanduser`, but it computes and returns
386 387 extra information that will be useful if the input was being used in
387 388 computing completions, and you wish to return the completions with the
388 389 original '~' instead of its expanded value.
389 390
390 391 Parameters
391 392 ----------
392 393 path : str
393 394 String to be expanded. If no ~ is present, the output is the same as the
394 395 input.
395 396
396 397 Returns
397 398 -------
398 399 newpath : str
399 400 Result of ~ expansion in the input path.
400 401 tilde_expand : bool
401 402 Whether any expansion was performed or not.
402 403 tilde_val : str
403 404 The value that ~ was replaced with.
404 405 """
405 406 # Default values
406 407 tilde_expand = False
407 408 tilde_val = ''
408 409 newpath = path
409 410
410 411 if path.startswith('~'):
411 412 tilde_expand = True
412 413 rest = len(path)-1
413 414 newpath = os.path.expanduser(path)
414 415 if rest:
415 416 tilde_val = newpath[:-rest]
416 417 else:
417 418 tilde_val = newpath
418 419
419 420 return newpath, tilde_expand, tilde_val
420 421
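A usage sketch for ``expand_user`` (the body is mirrored here so the example runs standalone):

```python
import os

def expand_user(path):
    # Mirror of the helper above, for illustration only.
    tilde_expand = False
    tilde_val = ""
    newpath = path
    if path.startswith("~"):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath
    return newpath, tilde_expand, tilde_val

newpath, expanded, tilde_val = expand_user("~/data")
assert expanded is True
assert tilde_val == os.path.expanduser("~")   # the value '~' expanded to
assert expand_user("plain.txt") == ("plain.txt", False, "")
```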
421 422
422 423 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
423 424 """Does the opposite of expand_user, with its outputs.
424 425 """
425 426 if tilde_expand:
426 427 return path.replace(tilde_val, '~')
427 428 else:
428 429 return path
429 430
430 431
431 432 def completions_sorting_key(word):
432 433 """key for sorting completions
433 434
434 435 This does several things:
435 436
436 437 - Demote any completions starting with underscores to the end
437 438 - Insert any %magic and %%cellmagic completions in the alphabetical order
438 439 by their name
439 440 """
440 441 prio1, prio2 = 0, 0
441 442
442 443 if word.startswith('__'):
443 444 prio1 = 2
444 445 elif word.startswith('_'):
445 446 prio1 = 1
446 447
447 448 if word.endswith('='):
448 449 prio1 = -1
449 450
450 451 if word.startswith('%%'):
451 452 # If there's another % in there, this is something else, so leave it alone
452 if not "%" in word[2:]:
453 if "%" not in word[2:]:
453 454 word = word[2:]
454 455 prio2 = 2
455 456 elif word.startswith('%'):
456 if not "%" in word[1:]:
457 if "%" not in word[1:]:
457 458 word = word[1:]
458 459 prio2 = 1
459 460
460 461 return prio1, word, prio2
461 462
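The effect of this key can be demonstrated directly (the key is mirrored here so the snippet runs standalone):

```python
def completions_sorting_key(word):
    # Mirror of the key above, for illustration only.
    prio1, prio2 = 0, 0
    if word.startswith("__"):
        prio1 = 2
    elif word.startswith("_"):
        prio1 = 1
    if word.endswith("="):
        prio1 = -1
    if word.startswith("%%"):
        if "%" not in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith("%"):
        if "%" not in word[1:]:
            word = word[1:]
            prio2 = 1
    return prio1, word, prio2

words = ["_private", "alpha", "__dunder", "%magic"]
# Magics sort alphabetically by their bare name; underscores go to the end.
assert sorted(words, key=completions_sorting_key) == [
    "alpha", "%magic", "_private", "__dunder"
]
```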
462 463
463 464 class _FakeJediCompletion:
464 465 """
465 466 This is a workaround to communicate to the UI that Jedi has crashed and to
466 467 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
467 468
468 469 Added in IPython 6.0 so should likely be removed for 7.0
469 470
470 471 """
471 472
472 473 def __init__(self, name):
473 474
474 475 self.name = name
475 476 self.complete = name
476 477 self.type = 'crashed'
477 478 self.name_with_symbols = name
478 479 self.signature = ""
479 480 self._origin = "fake"
480 481 self.text = "crashed"
481 482
482 483 def __repr__(self):
483 484 return '<Fake completion object jedi has crashed>'
484 485
485 486
486 487 _JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
487 488
488 489
489 490 class Completion:
490 491 """
491 492 Completion object used and returned by IPython completers.
492 493
493 494 .. warning::
494 495
495 496 Unstable
496 497
497 498 This class is unstable; its API may change without warning.
498 499 It will also raise unless used in the proper context manager.
499 500
500 501 This acts as a middle-ground :any:`Completion` object between the
501 502 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
502 503 object. While Jedi needs a lot of information about the evaluator and how the
503 504 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
504 505 need user-facing information:
505 506
506 507 - Which range should be replaced by what.
507 508 - Some metadata (like the completion type), or meta information to be displayed to
508 509 the user.
509 510
510 511 For debugging purposes we can also store the origin of the completion (``jedi``,
511 512 ``IPython.python_matches``, ``IPython.magics_matches``...).
512 513 """
513 514
514 515 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
515 516
516 517 def __init__(
517 518 self,
518 519 start: int,
519 520 end: int,
520 521 text: str,
521 522 *,
522 523 type: Optional[str] = None,
523 524 _origin="",
524 525 signature="",
525 526 ) -> None:
526 527 warnings.warn(
527 528 "``Completion`` is a provisional API (as of IPython 6.0). "
528 529 "It may change without warnings. "
529 530 "Use in corresponding context manager.",
530 531 category=ProvisionalCompleterWarning,
531 532 stacklevel=2,
532 533 )
533 534
534 535 self.start = start
535 536 self.end = end
536 537 self.text = text
537 538 self.type = type
538 539 self.signature = signature
539 540 self._origin = _origin
540 541
541 542 def __repr__(self):
542 543 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
543 544 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
544 545
545 546 def __eq__(self, other) -> bool:
546 547 """
547 548 Equality and hash do not hash the type (as some completers may not be
548 549 able to infer the type), but are used to (partially) de-duplicate
549 550 completions.
550 551
551 552 Completely de-duplicating completions is a bit trickier than just
552 553 comparing, as it depends on surrounding text, which Completions are not
553 554 aware of.
554 555 """
555 556 return self.start == other.start and \
556 557 self.end == other.end and \
557 558 self.text == other.text
558 559
559 560 def __hash__(self):
560 561 return hash((self.start, self.end, self.text))
561 562
562 563
563 564 class SimpleCompletion:
564 565 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
565 566
566 567 .. warning::
567 568
568 569 Provisional
569 570
570 571 This class is used to describe the currently supported attributes of
571 572 simple completion items, and any additional implementation details
572 573 should not be relied on. Additional attributes may be included in
573 574 future versions, and the meaning of ``text`` disambiguated from its current
574 575 dual meaning of "text to insert" and "text to use as a label".
575 576 """
576 577
577 578 __slots__ = ["text", "type"]
578 579
579 580 def __init__(self, text: str, *, type: Optional[str] = None):
580 581 self.text = text
581 582 self.type = type
582 583
583 584 def __repr__(self):
584 585 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
585 586
586 587
587 588 class _MatcherResultBase(TypedDict):
588 589 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
589 590
590 591 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
591 592 matched_fragment: NotRequired[str]
592 593
593 594 #: Whether to suppress results from all other matchers (True), some
594 595 #: matchers (set of identifiers) or none (False); default is False.
595 596 suppress: NotRequired[Union[bool, Set[str]]]
596 597
597 598 #: Identifiers of matchers which should NOT be suppressed when this matcher
598 599 #: requests to suppress all other matchers; defaults to an empty set.
599 600 do_not_suppress: NotRequired[Set[str]]
600 601
601 602 #: Are completions already ordered and should be left as-is? default is False.
602 603 ordered: NotRequired[bool]
603 604
604 605
605 606 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
606 607 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
607 608 """Result of new-style completion matcher."""
608 609
609 610 # note: TypedDict is added again to the inheritance chain
610 611 # in order to get __orig_bases__ for documentation
611 612
612 613 #: List of candidate completions
613 614 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
614 615
615 616
616 617 class _JediMatcherResult(_MatcherResultBase):
617 618 """Matching result returned by Jedi (will be processed differently)"""
618 619
619 620 #: list of candidate completions
620 621 completions: Iterator[_JediCompletionLike]
621 622
622 623
623 624 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
624 625 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
625 626
626 627
627 628 @dataclass
628 629 class CompletionContext:
629 630 """Completion context provided as an argument to matchers in the Matcher API v2."""
630 631
631 632 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
632 633 # which was not explicitly visible as an argument of the matcher, making any refactor
633 634 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
634 635 # from the completer, and make substituting them in sub-classes easier.
635 636
636 637 #: Relevant fragment of code directly preceding the cursor.
637 638 #: The extraction of token is implemented via splitter heuristic
638 639 #: (following readline behaviour for legacy reasons), which is user configurable
639 640 #: (by switching the greedy mode).
640 641 token: str
641 642
642 643 #: The full available content of the editor or buffer
643 644 full_text: str
644 645
645 646 #: Cursor position in the line (the same for ``full_text`` and ``text``).
646 647 cursor_position: int
647 648
648 649 #: Cursor line in ``full_text``.
649 650 cursor_line: int
650 651
651 652 #: The maximum number of completions that will be used downstream.
652 653 #: Matchers can use this information to abort early.
653 654 #: The built-in Jedi matcher is currently exempt from this limit.
654 655 # If not given, return all possible completions.
655 656 limit: Optional[int]
656 657
657 658 @cached_property
658 659 def text_until_cursor(self) -> str:
659 660 return self.line_with_cursor[: self.cursor_position]
660 661
661 662 @cached_property
662 663 def line_with_cursor(self) -> str:
663 664 return self.full_text.split("\n")[self.cursor_line]
664 665
665 666
666 667 #: Matcher results for API v2.
667 668 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
668 669
669 670
670 671 class _MatcherAPIv1Base(Protocol):
671 672 def __call__(self, text: str) -> List[str]:
672 673 """Call signature."""
673 674 ...
674 675
675 676 #: Used to construct the default matcher identifier
676 677 __qualname__: str
677 678
678 679
679 680 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
680 681 #: API version
681 682 matcher_api_version: Optional[Literal[1]]
682 683
683 684 def __call__(self, text: str) -> List[str]:
684 685 """Call signature."""
685 686 ...
686 687
687 688
688 689 #: Protocol describing Matcher API v1.
689 690 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
690 691
691 692
692 693 class MatcherAPIv2(Protocol):
693 694 """Protocol describing Matcher API v2."""
694 695
695 696 #: API version
696 697 matcher_api_version: Literal[2] = 2
697 698
698 699 def __call__(self, context: CompletionContext) -> MatcherResult:
699 700 """Call signature."""
700 701 ...
701 702
702 703 #: Used to construct the default matcher identifier
703 704 __qualname__: str
704 705
705 706
706 707 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
707 708
708 709
709 710 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
710 711 api_version = _get_matcher_api_version(matcher)
711 712 return api_version == 1
712 713
713 714
714 715 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
715 716 api_version = _get_matcher_api_version(matcher)
716 717 return api_version == 2
717 718
718 719
719 720 def _is_sizable(value: Any) -> TypeGuard[Sized]:
720 721 """Determines whether the object is sizable"""
721 722 return hasattr(value, "__len__")
722 723
723 724
724 725 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
725 726 """Determines whether the object is an iterator"""
726 727 return hasattr(value, "__next__")
727 728
728 729
729 730 def has_any_completions(result: MatcherResult) -> bool:
730 731 """Check if any result includes any completions."""
731 732 completions = result["completions"]
732 733 if _is_sizable(completions):
733 734 return len(completions) != 0
734 735 if _is_iterator(completions):
735 736 try:
736 737 old_iterator = completions
737 738 first = next(old_iterator)
738 739 result["completions"] = cast(
739 740 Iterator[SimpleCompletion],
740 741 itertools.chain([first], old_iterator),
741 742 )
742 743 return True
743 744 except StopIteration:
744 745 return False
745 746 raise ValueError(
746 747 "Completions returned by matcher need to be an Iterator or a Sizable"
747 748 )
748 749
749 750
750 751 def completion_matcher(
751 752 *,
752 753 priority: Optional[float] = None,
753 754 identifier: Optional[str] = None,
754 755 api_version: int = 1,
755 756 ):
756 757 """Adds attributes describing the matcher.
757 758
758 759 Parameters
759 760 ----------
760 761 priority : Optional[float]
761 762 The priority of the matcher, determines the order of execution of matchers.
762 763 Higher priority means that the matcher will be executed first. Defaults to 0.
763 764 identifier : Optional[str]
764 765 identifier of the matcher allowing users to modify the behaviour via traitlets,
765 766 and also used for debugging (will be passed as ``origin`` with the completions).
766 767
767 768 Defaults to the matcher function's ``__qualname__`` (for example,
768 769 ``IPCompleter.file_matcher`` for the built-in matcher defined
769 770 as a ``file_matcher`` method of the ``IPCompleter`` class).
770 771 api_version: Optional[int]
771 772 version of the Matcher API used by this matcher.
772 773 Currently supported values are 1 and 2.
773 774 Defaults to 1.
774 775 """
775 776
776 777 def wrapper(func: Matcher):
777 778 func.matcher_priority = priority or 0 # type: ignore
778 779 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
779 780 func.matcher_api_version = api_version # type: ignore
780 781 if TYPE_CHECKING:
781 782 if api_version == 1:
782 783 func = cast(MatcherAPIv1, func)
783 784 elif api_version == 2:
784 785 func = cast(MatcherAPIv2, func)
785 786 return func
786 787
787 788 return wrapper
788 789
789 790
790 791 def _get_matcher_priority(matcher: Matcher):
791 792 return getattr(matcher, "matcher_priority", 0)
792 793
793 794
794 795 def _get_matcher_id(matcher: Matcher):
795 796 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
796 797
797 798
798 799 def _get_matcher_api_version(matcher):
799 800 return getattr(matcher, "matcher_api_version", 1)
800 801
801 802
802 803 context_matcher = partial(completion_matcher, api_version=2)
803 804
804 805
805 806 _IC = Iterable[Completion]
806 807
807 808
808 809 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
809 810 """
810 811 Deduplicate a set of completions.
811 812
812 813 .. warning::
813 814
814 815 Unstable
815 816
816 817 This function is unstable; the API may change without warning.
817 818
818 819 Parameters
819 820 ----------
820 821 text : str
821 822 text that should be completed.
822 823 completions : Iterator[Completion]
823 824 iterator over the completions to deduplicate
824 825
825 826 Yields
826 827 ------
827 828 `Completions` objects
828 829 Completions coming from multiple sources may be different but end up having
829 830 the same effect when applied to ``text``. If this is the case, this will
830 831 consider the completions equal and only emit the first one encountered.
831 832 Not folded into `completions()` yet for debugging purposes, and to detect when
832 833 the IPython completer does return things that Jedi does not, but should be
833 834 at some point.
834 835 """
835 836 completions = list(completions)
836 837 if not completions:
837 838 return
838 839
839 840 new_start = min(c.start for c in completions)
840 841 new_end = max(c.end for c in completions)
841 842
842 843 seen = set()
843 844 for c in completions:
844 845 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
845 846 if new_text not in seen:
846 847 yield c
847 848 seen.add(new_text)
848 849
849 850
850 851 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
851 852 """
852 853 Rectify a set of completions to all have the same ``start`` and ``end``
853 854
854 855 .. warning::
855 856
856 857 Unstable
857 858
858 859 This function is unstable; the API may change without warning.
859 860 It will also raise unless used in the proper context manager.
860 861
861 862 Parameters
862 863 ----------
863 864 text : str
864 865 text that should be completed.
865 866 completions : Iterator[Completion]
866 867 iterator over the completions to rectify
867 868 _debug : bool
868 869 Log failed completion
869 870
870 871 Notes
871 872 -----
872 873 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
873 874 the Jupyter Protocol requires them to. This will readjust
874 875 the completions to have the same ``start`` and ``end`` by padding both
875 876 extremities with surrounding text.
876 877
877 878 During stabilisation, this should support a ``_debug`` option to log which
878 879 completions are returned by the IPython completer but not found in Jedi, in
879 880 order to make upstream bug reports.
880 881 """
881 882 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
882 883 "It may change without warnings. "
883 884 "Use in corresponding context manager.",
884 885 category=ProvisionalCompleterWarning, stacklevel=2)
885 886
886 887 completions = list(completions)
887 888 if not completions:
888 889 return
889 890 starts = (c.start for c in completions)
890 891 ends = (c.end for c in completions)
891 892
892 893 new_start = min(starts)
893 894 new_end = max(ends)
894 895
895 896 seen_jedi = set()
896 897 seen_python_matches = set()
897 898 for c in completions:
898 899 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
899 900 if c._origin == 'jedi':
900 901 seen_jedi.add(new_text)
901 902 elif c._origin == "IPCompleter.python_matcher":
902 903 seen_python_matches.add(new_text)
903 904 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
904 905 diff = seen_python_matches.difference(seen_jedi)
905 906 if diff and _debug:
906 907 print('IPython.python matches have extras:', diff)
907 908
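The padding described in the Notes above can be sketched with plain tuples; `pad` is a hypothetical helper operating on `(start, end, text)` triples, not the real `Completion` API:

```python
# A minimal sketch of the padding idea in rectify_completions: give
# every completion the same (start, end) span by borrowing the
# surrounding text on both sides.
def pad(text, completions):
    # completions: list of (start, end, completion_text) tuples
    new_start = min(c[0] for c in completions)
    new_end = max(c[1] for c in completions)
    return [
        (new_start, new_end, text[new_start:c[0]] + c[2] + text[c[1]:new_end])
        for c in completions
    ]
```

Applying the wider completion's span to the narrower one leaves both replacements interchangeable from the frontend's point of view.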
908 909
909 910 if sys.platform == 'win32':
910 911 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
911 912 else:
912 913 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
913 914
914 915 GREEDY_DELIMS = ' =\r\n'
915 916
916 917
917 918 class CompletionSplitter(object):
918 919 """An object to split an input line in a manner similar to readline.
919 920
920 921 By having our own implementation, we can expose readline-like completion in
921 922 a uniform manner to all frontends. This object only needs to be given the
922 923 line of text to be split and the cursor position on said line, and it
923 924 returns the 'word' to be completed on at the cursor after splitting the
924 925 entire line.
925 926
926 927 What characters are used as splitting delimiters can be controlled by
927 928 setting the ``delims`` attribute (this is a property that internally
928 929 automatically builds the necessary regular expression)"""
929 930
930 931 # Private interface
931 932
932 933 # A string of delimiter characters. The default value makes sense for
933 934 # IPython's most typical usage patterns.
934 935 _delims = DELIMS
935 936
936 937 # The expression (a normal string) to be compiled into a regular expression
937 938 # for actual splitting. We store it as an attribute mostly for ease of
938 939 # debugging, since this type of code can be so tricky to debug.
939 940 _delim_expr = None
940 941
941 942 # The regular expression that does the actual splitting
942 943 _delim_re = None
943 944
944 945 def __init__(self, delims=None):
945 946 delims = CompletionSplitter._delims if delims is None else delims
946 947 self.delims = delims
947 948
948 949 @property
949 950 def delims(self):
950 951 """Return the string of delimiter characters."""
951 952 return self._delims
952 953
953 954 @delims.setter
954 955 def delims(self, delims):
955 956 """Set the delimiters for line splitting."""
956 957 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
957 958 self._delim_re = re.compile(expr)
958 959 self._delims = delims
959 960 self._delim_expr = expr
960 961
961 962 def split_line(self, line, cursor_pos=None):
962 963 """Split a line of text with a cursor at the given position.
963 964 """
964 l = line if cursor_pos is None else line[:cursor_pos]
965 return self._delim_re.split(l)[-1]
965 cut_line = line if cursor_pos is None else line[:cursor_pos]
966 return self._delim_re.split(cut_line)[-1]
966 967
967 968
968 969
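The splitting behaviour can be exercised outside the class; this is a standalone stand-in for `CompletionSplitter.split_line`, re-using the non-Windows `DELIMS` shown above:

```python
import re

# Split the text before the cursor on delimiter characters and keep the
# last fragment: that fragment is the 'word' to complete on.
DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
delim_re = re.compile('[' + ''.join('\\' + c for c in DELIMS) + ']')

def split_line(line, cursor_pos=None):
    cut_line = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(cut_line)[-1]
```

Note that `.` is deliberately not a delimiter, so dotted attribute paths like `foo.ba` survive as a single word.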
969 970 class Completer(Configurable):
970 971
971 972 greedy = Bool(
972 973 False,
973 974 help="""Activate greedy completion.
974 975
975 976 .. deprecated:: 8.8
976 977 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977 978
978 979 When enabled in IPython 8.8 or newer, changes configuration as follows:
979 980
980 981 - ``Completer.evaluation = 'unsafe'``
981 982 - ``Completer.auto_close_dict_keys = True``
982 983 """,
983 984 ).tag(config=True)
984 985
985 986 evaluation = Enum(
986 987 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 988 default_value="limited",
988 989 help="""Policy for code evaluation under completion.
989 990
990 991 Successive options enable progressively more eager evaluation for better
991 992 completion suggestions, including for nested dictionaries, nested lists,
992 993 or even results of function calls.
993 994 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 995 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995 996
996 997 Allowed values are:
997 998
998 999 - ``forbidden``: no evaluation of code is permitted,
999 1000 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 1001 no item/attribute evaluation, no access to locals/globals,
1001 1002 no evaluation of any operations or comparisons.
1002 1003 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 1004 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 1005 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 1006 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 1007 - ``unsafe``: evaluation of all methods and function calls but not of
1007 1008 syntax with side-effects like `del x`,
1008 1009 - ``dangerous``: completely arbitrary evaluation.
1009 1010 """,
1010 1011 ).tag(config=True)
1011 1012
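As the help text notes, this is an ordinary configurable trait; a hypothetical `ipython_config.py` fragment tightening the policy might look like:

```python
# In ipython_config.py: restrict completion-time evaluation to literals
# and the built-in namespace only.
c = get_config()  # noqa: F821 -- injected by IPython's config loader
c.Completer.evaluation = "minimal"
```

The same trait can be changed at runtime with `%config Completer.evaluation = "minimal"`.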
1012 1013 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 1014 help="Experimental: Use Jedi to generate autocompletions. "
1014 1015 "Default to True if jedi is installed.").tag(config=True)
1015 1016
1016 1017 jedi_compute_type_timeout = Int(default_value=400,
1017 1018 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 1019 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1019 1020 performance by preventing jedi from building its cache.
1020 1021 """).tag(config=True)
1021 1022
1022 1023 debug = Bool(default_value=False,
1023 1024 help='Enable debug for the Completer. Mostly print extra '
1024 1025 'information for experimental jedi integration.')\
1025 1026 .tag(config=True)
1026 1027
1027 1028 backslash_combining_completions = Bool(True,
1028 1029 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 1030 "Includes completion of latex commands, unicode names, and expanding "
1030 1031 "unicode characters back to latex commands.").tag(config=True)
1031 1032
1032 1033 auto_close_dict_keys = Bool(
1033 1034 False,
1034 1035 help="""
1035 1036 Enable auto-closing dictionary keys.
1036 1037
1037 1038 When enabled, string keys will be suffixed with a final quote
1038 1039 (matching the opening quote), tuple keys will also receive a
1039 1040 separating comma if needed, and keys which are final will
1040 1041 receive a closing bracket (``]``).
1041 1042 """,
1042 1043 ).tag(config=True)
1043 1044
1044 1045 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 1046 """Create a new completer for the command line.
1046 1047
1047 1048 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048 1049
1049 1050 If unspecified, the default namespace where completions are performed
1050 1051 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 1052 given as dictionaries.
1052 1053
1053 1054 An optional second namespace can be given. This allows the completer
1054 1055 to handle cases where both the local and global scopes need to be
1055 1056 distinguished.
1056 1057 """
1057 1058
1058 1059 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 1060 # specific namespace or to use __main__.__dict__. This will allow us
1060 1061 # to bind to __main__.__dict__ at completion time, not now.
1061 1062 if namespace is None:
1062 1063 self.use_main_ns = True
1063 1064 else:
1064 1065 self.use_main_ns = False
1065 1066 self.namespace = namespace
1066 1067
1067 1068 # The global namespace, if given, can be bound directly
1068 1069 if global_namespace is None:
1069 1070 self.global_namespace = {}
1070 1071 else:
1071 1072 self.global_namespace = global_namespace
1072 1073
1073 1074 self.custom_matchers = []
1074 1075
1075 1076 super(Completer, self).__init__(**kwargs)
1076 1077
1077 1078 def complete(self, text, state):
1078 1079 """Return the next possible completion for 'text'.
1079 1080
1080 1081 This is called successively with state == 0, 1, 2, ... until it
1081 1082 returns None. The completion should begin with 'text'.
1082 1083
1083 1084 """
1084 1085 if self.use_main_ns:
1085 1086 self.namespace = __main__.__dict__
1086 1087
1087 1088 if state == 0:
1088 1089 if "." in text:
1089 1090 self.matches = self.attr_matches(text)
1090 1091 else:
1091 1092 self.matches = self.global_matches(text)
1092 1093 try:
1093 1094 return self.matches[state]
1094 1095 except IndexError:
1095 1096 return None
1096 1097
1097 1098 def global_matches(self, text):
1098 1099 """Compute matches when text is a simple name.
1099 1100
1100 1101 Return a list of all keywords, built-in functions and names currently
1101 1102 defined in self.namespace or self.global_namespace that match.
1102 1103
1103 1104 """
1104 1105 matches = []
1105 1106 match_append = matches.append
1106 1107 n = len(text)
1107 1108 for lst in [
1108 1109 keyword.kwlist,
1109 1110 builtin_mod.__dict__.keys(),
1110 1111 list(self.namespace.keys()),
1111 1112 list(self.global_namespace.keys()),
1112 1113 ]:
1113 1114 for word in lst:
1114 1115 if word[:n] == text and word != "__builtins__":
1115 1116 match_append(word)
1116 1117
1117 1118 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 1119 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 1120 shortened = {
1120 1121 "_".join([sub[0] for sub in word.split("_")]): word
1121 1122 for word in lst
1122 1123 if snake_case_re.match(word)
1123 1124 }
1124 1125 for word in shortened.keys():
1125 1126 if word[:n] == text and word != "__builtins__":
1126 1127 match_append(shortened[word])
1127 1128 return matches
1128 1129
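The snake_case abbreviation pass of `global_matches` can be sketched as a standalone helper (hypothetical name `abbreviation_matches`): `d_f` completes to `data_frame` by taking the first letter of each underscore-separated part.

```python
import re

# Same regex as in global_matches: at least two non-empty parts
# separated by single underscores.
snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

def abbreviation_matches(text, names):
    # map abbreviated form ("d_f") back to the full name ("data_frame")
    shortened = {
        "_".join(sub[0] for sub in name.split("_")): name
        for name in names
        if snake_case_re.match(name)
    }
    n = len(text)
    return [full for short, full in shortened.items() if short[:n] == text]
```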
1129 1130 def attr_matches(self, text):
1130 1131 """Compute matches when text contains a dot.
1131 1132
1132 1133 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 1134 evaluatable in self.namespace or self.global_namespace, it will be
1134 1135 evaluated and its attributes (as revealed by dir()) are used as
1135 1136 possible completions. (For class instances, class members are
1136 1137 also considered.)
1137 1138
1138 1139 WARNING: this can still invoke arbitrary C code, if an object
1139 1140 with a __getattr__ hook is evaluated.
1140 1141
1141 1142 """
1142 1143 return self._attr_matches(text)[0]
1143 1144
1145 # we do simple attribute matching with normal identifiers.
1146 _ATTR_MATCH_RE = re.compile(r"(.+)\.(\w*)$")
1147
1144 1148 def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
1145 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1149
1150 m2 = self._ATTR_MATCH_RE.match(self.line_buffer)
1146 1151 if not m2:
1147 1152 return [], ""
1148 1153 expr, attr = m2.group(1, 2)
1149 1154
1150 1155 obj = self._evaluate_expr(expr)
1151 1156
1152 1157 if obj is not_found:
1153 1158 return [], ""
1154 1159
1155 1160 if self.limit_to__all__ and hasattr(obj, '__all__'):
1156 1161 words = get__all__entries(obj)
1157 1162 else:
1158 1163 words = dir2(obj)
1159 1164
1160 1165 try:
1161 1166 words = generics.complete_object(obj, words)
1162 1167 except TryNext:
1163 1168 pass
1164 1169 except AssertionError:
1165 1170 raise
1166 1171 except Exception:
1167 1172 # Silence errors from completion function
1168 1173 pass
1169 1174 # Build match list to return
1170 1175 n = len(attr)
1171 1176
1172 1177 # Note: ideally we would just return words here and the prefix
1173 1178 # reconciliator would know that we intend to append to rather than
1174 1179 # replace the input text; this requires refactoring to return range
1175 1180 # which ought to be replaced (as does jedi).
1176 1181 if include_prefix:
1177 1182 tokens = _parse_tokens(expr)
1178 1183 rev_tokens = reversed(tokens)
1179 1184 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1180 1185 name_turn = True
1181 1186
1182 1187 parts = []
1183 1188 for token in rev_tokens:
1184 1189 if token.type in skip_over:
1185 1190 continue
1186 1191 if token.type == tokenize.NAME and name_turn:
1187 1192 parts.append(token.string)
1188 1193 name_turn = False
1189 1194 elif (
1190 1195 token.type == tokenize.OP and token.string == "." and not name_turn
1191 1196 ):
1192 1197 parts.append(token.string)
1193 1198 name_turn = True
1194 1199 else:
1195 1200 # short-circuit if not empty nor name token
1196 1201 break
1197 1202
1198 1203 prefix_after_space = "".join(reversed(parts))
1199 1204 else:
1200 1205 prefix_after_space = ""
1201 1206
1202 1207 return (
1203 1208 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1204 1209 "." + attr,
1205 1210 )
1206 1211
1212 def _trim_expr(self, code: str) -> str:
1213 """
1214 Trim the code until it is a valid expression and not a tuple;
1215
1216 return the trimmed expression for guarded_eval.
1217 """
1218 while code:
1219 code = code[1:]
1220 try:
1221 res = ast.parse(code)
1222 except SyntaxError:
1223 continue
1224
1225 assert res is not None
1226 if len(res.body) != 1:
1227 continue
1228 expr = res.body[0].value
1229 if isinstance(expr, ast.Tuple) and not code[-1] == ")":
1230 # we skip implicit tuple, like when trimming `fun(a,b`<completion>
1231 # as `a,b` would be a tuple, and we actually expect to get only `b`
1232 continue
1233 return code
1234 return ""
1235
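The trimming loop can be exercised in isolation; `trim_expr` below is a sketch mirroring `_trim_expr`, not the method itself:

```python
import ast

# Repeatedly drop the leading character until the remainder parses as a
# single expression that is not an implicit tuple (e.g. the `a,b`
# inside an unclosed `fun(a,b`).
def trim_expr(code):
    while code:
        code = code[1:]
        try:
            res = ast.parse(code)
        except SyntaxError:
            continue
        if len(res.body) != 1:
            continue
        expr = res.body[0].value
        if isinstance(expr, ast.Tuple) and code[-1] != ")":
            # implicit tuple: keep trimming so only the last element survives
            continue
        return code
    return ""
```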
1207 1236 def _evaluate_expr(self, expr):
1208 1237 obj = not_found
1209 1238 done = False
1210 1239 while not done and expr:
1211 1240 try:
1212 1241 obj = guarded_eval(
1213 1242 expr,
1214 1243 EvaluationContext(
1215 1244 globals=self.global_namespace,
1216 1245 locals=self.namespace,
1217 1246 evaluation=self.evaluation,
1218 1247 ),
1219 1248 )
1220 1249 done = True
1221 1250 except Exception as e:
1222 1251 if self.debug:
1223 1252 print("Evaluation exception", e)
1224 1253 # trim the expression to remove any invalid prefix
1225 1254 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1226 1255 # where parenthesis is not closed.
1227 1256 # TODO: make this faster by reusing parts of the computation?
1228 expr = expr[1:]
1257 expr = self._trim_expr(expr)
1229 1258 return obj
1230 1259
1231 1260 def get__all__entries(obj):
1232 1261 """returns the strings in the __all__ attribute"""
1233 1262 try:
1234 1263 words = getattr(obj, '__all__')
1235 except:
1264 except Exception:
1236 1265 return []
1237 1266
1238 1267 return [w for w in words if isinstance(w, str)]
1239 1268
1240 1269
1241 1270 class _DictKeyState(enum.Flag):
1242 1271 """Represent state of the key match in context of other possible matches.
1243 1272
1244 1273 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1245 1274 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
1246 1275 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1247 1276 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | IN_TUPLE}`
1248 1277 """
1249 1278
1250 1279 BASELINE = 0
1251 1280 END_OF_ITEM = enum.auto()
1252 1281 END_OF_TUPLE = enum.auto()
1253 1282 IN_TUPLE = enum.auto()
1254 1283
1255 1284
1256 1285 def _parse_tokens(c):
1257 1286 """Parse tokens even if there is an error."""
1258 1287 tokens = []
1259 1288 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1260 1289 while True:
1261 1290 try:
1262 1291 tokens.append(next(token_generator))
1263 1292 except tokenize.TokenError:
1264 1293 return tokens
1265 1294 except StopIteration:
1266 1295 return tokens
1267 1296
1268 1297
1269 1298 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1270 1299 """Match any valid Python numeric literal in a prefix of dictionary keys.
1271 1300
1272 1301 References:
1273 1302 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1274 1303 - https://docs.python.org/3/library/tokenize.html
1275 1304 """
1276 1305 if prefix[-1].isspace():
1277 1306 # if user typed a space we do not have anything to complete
1278 1307 # even if there was a valid number token before
1279 1308 return None
1280 1309 tokens = _parse_tokens(prefix)
1281 1310 rev_tokens = reversed(tokens)
1282 1311 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1283 1312 number = None
1284 1313 for token in rev_tokens:
1285 1314 if token.type in skip_over:
1286 1315 continue
1287 1316 if number is None:
1288 1317 if token.type == tokenize.NUMBER:
1289 1318 number = token.string
1290 1319 continue
1291 1320 else:
1292 1321 # we did not match a number
1293 1322 return None
1294 1323 if token.type == tokenize.OP:
1295 1324 if token.string == ",":
1296 1325 break
1297 1326 if token.string in {"+", "-"}:
1298 1327 number = token.string + number
1299 1328 else:
1300 1329 return None
1301 1330 return number
1302 1331
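The backwards token walk can be sketched with the stdlib `tokenize` module; `parse_tokens` and `match_number` are standalone stand-ins for `_parse_tokens` and `_match_number_in_dict_key_prefix`:

```python
import tokenize

# Tokenize the key prefix, tolerating errors from incomplete input.
def parse_tokens(c):
    tokens = []
    gen = tokenize.generate_tokens(iter(c.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(gen))
        except (tokenize.TokenError, StopIteration):
            return tokens

# Walk the tokens backwards to pick up a trailing numeric literal,
# honouring a leading sign.
def match_number(prefix):
    if prefix[-1].isspace():
        return None
    skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
    number = None
    for token in reversed(parse_tokens(prefix)):
        if token.type in skip_over:
            continue
        if number is None:
            if token.type != tokenize.NUMBER:
                return None
            number = token.string
            continue
        if token.type == tokenize.OP:
            if token.string == ",":
                break
            if token.string in {"+", "-"}:
                number = token.string + number
            else:
                return None
    return number
```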
1303 1332
1304 1333 _INT_FORMATS = {
1305 1334 "0b": bin,
1306 1335 "0o": oct,
1307 1336 "0x": hex,
1308 1337 }
1309 1338
1310 1339
1311 1340 def match_dict_keys(
1312 1341 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1313 1342 prefix: str,
1314 1343 delims: str,
1315 1344 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1316 1345 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1317 1346 """Used by dict_key_matches, matching the prefix to a list of keys
1318 1347
1319 1348 Parameters
1320 1349 ----------
1321 1350 keys
1322 1351 list of keys in dictionary currently being completed.
1323 1352 prefix
1324 1353 Part of the text already typed by the user. E.g. `mydict[b'fo`
1325 1354 delims
1326 1355 String of delimiters to consider when finding the current key.
1327 1356 extra_prefix : optional
1328 1357 Part of the text already typed in multi-key index cases. E.g. for
1329 1358 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1330 1359
1331 1360 Returns
1332 1361 -------
1333 1362 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1334 1363 ``quote`` being the quote that needs to be used to close the current string,
1335 1364 ``token_start`` the position where the replacement should start occurring,
1336 1365 and ``matched`` a dictionary mapping each replacement/completion string to a
1337 1366 value indicating its key state.
1338 1367 """
1339 1368 prefix_tuple = extra_prefix if extra_prefix else ()
1340 1369
1341 1370 prefix_tuple_size = sum(
1342 1371 [
1343 1372 # for pandas, do not count slices as taking space
1344 1373 not isinstance(k, slice)
1345 1374 for k in prefix_tuple
1346 1375 ]
1347 1376 )
1348 1377 text_serializable_types = (str, bytes, int, float, slice)
1349 1378
1350 1379 def filter_prefix_tuple(key):
1351 1380 # Reject too short keys
1352 1381 if len(key) <= prefix_tuple_size:
1353 1382 return False
1354 1383 # Reject keys which cannot be serialised to text
1355 1384 for k in key:
1356 1385 if not isinstance(k, text_serializable_types):
1357 1386 return False
1358 1387 # Reject keys that do not match the prefix
1359 1388 for k, pt in zip(key, prefix_tuple):
1360 1389 if k != pt and not isinstance(pt, slice):
1361 1390 return False
1362 1391 # All checks passed!
1363 1392 return True
1364 1393
1365 1394 filtered_key_is_final: Dict[Union[str, bytes, int, float], _DictKeyState] = (
1366 1395 defaultdict(lambda: _DictKeyState.BASELINE)
1367 1396 )
1368 1397
1369 1398 for k in keys:
1370 1399 # If at least one of the matches is not final, mark as undetermined.
1371 1400 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1372 1401 # `111` appears final on first match but is not final on the second.
1373 1402
1374 1403 if isinstance(k, tuple):
1375 1404 if filter_prefix_tuple(k):
1376 1405 key_fragment = k[prefix_tuple_size]
1377 1406 filtered_key_is_final[key_fragment] |= (
1378 1407 _DictKeyState.END_OF_TUPLE
1379 1408 if len(k) == prefix_tuple_size + 1
1380 1409 else _DictKeyState.IN_TUPLE
1381 1410 )
1382 1411 elif prefix_tuple_size > 0:
1383 1412 # we are completing a tuple but this key is not a tuple,
1384 1413 # so we should ignore it
1385 1414 pass
1386 1415 else:
1387 1416 if isinstance(k, text_serializable_types):
1388 1417 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1389 1418
1390 1419 filtered_keys = filtered_key_is_final.keys()
1391 1420
1392 1421 if not prefix:
1393 1422 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1394 1423
1395 1424 quote_match = re.search("(?:\"|')", prefix)
1396 1425 is_user_prefix_numeric = False
1397 1426
1398 1427 if quote_match:
1399 1428 quote = quote_match.group()
1400 1429 valid_prefix = prefix + quote
1401 1430 try:
1402 1431 prefix_str = literal_eval(valid_prefix)
1403 1432 except Exception:
1404 1433 return "", 0, {}
1405 1434 else:
1406 1435 # If it does not look like a string, let's assume
1407 1436 # we are dealing with a number or variable.
1408 1437 number_match = _match_number_in_dict_key_prefix(prefix)
1409 1438
1410 1439 # We do not want the key matcher to suggest variable names so we yield:
1411 1440 if number_match is None:
1412 1441 # The alternative would be to assume that the user forgot the quote
1413 1442 # and if the substring matches, suggest adding it at the start.
1414 1443 return "", 0, {}
1415 1444
1416 1445 prefix_str = number_match
1417 1446 is_user_prefix_numeric = True
1418 1447 quote = ""
1419 1448
1420 1449 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1421 1450 token_match = re.search(pattern, prefix, re.UNICODE)
1422 1451 assert token_match is not None # silence mypy
1423 1452 token_start = token_match.start()
1424 1453 token_prefix = token_match.group()
1425 1454
1426 1455 matched: Dict[str, _DictKeyState] = {}
1427 1456
1428 1457 str_key: Union[str, bytes]
1429 1458
1430 1459 for key in filtered_keys:
1431 1460 if isinstance(key, (int, float)):
1432 1461 # User typed a number but this key is not a number.
1433 1462 if not is_user_prefix_numeric:
1434 1463 continue
1435 1464 str_key = str(key)
1436 1465 if isinstance(key, int):
1437 1466 int_base = prefix_str[:2].lower()
1438 1467 # if user typed integer using binary/oct/hex notation:
1439 1468 if int_base in _INT_FORMATS:
1440 1469 int_format = _INT_FORMATS[int_base]
1441 1470 str_key = int_format(key)
1442 1471 else:
1443 1472 # User typed a string but this key is a number.
1444 1473 if is_user_prefix_numeric:
1445 1474 continue
1446 1475 str_key = key
1447 1476 try:
1448 1477 if not str_key.startswith(prefix_str):
1449 1478 continue
1450 except (AttributeError, TypeError, UnicodeError) as e:
1479 except (AttributeError, TypeError, UnicodeError):
1451 1480 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1452 1481 continue
1453 1482
1454 1483 # reformat remainder of key to begin with prefix
1455 1484 rem = str_key[len(prefix_str) :]
1456 1485 # force repr wrapped in '
1457 1486 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1458 1487 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1459 1488 if quote == '"':
1460 1489 # The entered prefix is quoted with ",
1461 1490 # but the match is quoted with '.
1462 1491 # A contained " hence needs escaping for comparison:
1463 1492 rem_repr = rem_repr.replace('"', '\\"')
1464 1493
1465 1494 # then reinsert prefix from start of token
1466 1495 match = "%s%s" % (token_prefix, rem_repr)
1467 1496
1468 1497 matched[match] = filtered_key_is_final[key]
1469 1498 return quote, token_start, matched
1470 1499
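The string-key branch above (close the open quote, `literal_eval` the prefix, then prefix-filter the keys) can be sketched in isolation; `simple_key_matches` is a hypothetical reduction, not the full matcher:

```python
import re
from ast import literal_eval

def simple_key_matches(keys, prefix):
    # find the quote the user opened, e.g. the ' in mydict['fo
    quote_match = re.search("(?:\"|')", prefix)
    if not quote_match:
        # no quote typed: the real matcher then tries numeric prefixes
        return []
    quote = quote_match.group()
    try:
        # close the quote so the prefix becomes a valid string literal
        prefix_str = literal_eval(prefix + quote)
    except Exception:
        return []
    return [k for k in keys if isinstance(k, str) and k.startswith(prefix_str)]
```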
1471 1500
1472 1501 def cursor_to_position(text:str, line:int, column:int)->int:
1473 1502 """
1474 1503 Convert the (line,column) position of the cursor in text to an offset in a
1475 1504 string.
1476 1505
1477 1506 Parameters
1478 1507 ----------
1479 1508 text : str
1480 1509 The text in which to calculate the cursor offset
1481 1510 line : int
1482 1511 Line of the cursor; 0-indexed
1483 1512 column : int
1484 1513 Column of the cursor 0-indexed
1485 1514
1486 1515 Returns
1487 1516 -------
1488 1517 Position of the cursor in ``text``, 0-indexed.
1489 1518
1490 1519 See Also
1491 1520 --------
1492 1521 position_to_cursor : reciprocal of this function
1493 1522
1494 1523 """
1495 1524 lines = text.split('\n')
1496 1525 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1497 1526
1498 return sum(len(l) + 1 for l in lines[:line]) + column
1527 return sum(len(line) + 1 for line in lines[:line]) + column
1499 1528
1500 1529 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1501 1530 """
1502 1531 Convert the position of the cursor in text (0 indexed) to a line
1503 1532 number(0-indexed) and a column number (0-indexed) pair
1504 1533
1505 1534 Position should be a valid position in ``text``.
1506 1535
1507 1536 Parameters
1508 1537 ----------
1509 1538 text : str
1510 1539 The text in which to calculate the cursor offset
1511 1540 offset : int
1512 1541 Position of the cursor in ``text``, 0-indexed.
1513 1542
1514 1543 Returns
1515 1544 -------
1516 1545 (line, column) : (int, int)
1517 1546 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1518 1547
1519 1548 See Also
1520 1549 --------
1521 1550 cursor_to_position : reciprocal of this function
1522 1551
1523 1552 """
1524 1553
1525 1554 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1526 1555
1527 1556 before = text[:offset]
1528 1557 blines = before.split('\n') # ! splitlines trims trailing \n
1529 1558 line = before.count('\n')
1530 1559 col = len(blines[-1])
1531 1560 return line, col
1532 1561
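The two conversions are inverses of each other; a minimal round-trip sketch with standalone re-implementations:

```python
# cursor_to_position: sum the lengths of the lines before the cursor
# (plus one per newline) and add the column offset.
def cursor_to_position(text, line, column):
    lines = text.split('\n')
    return sum(len(ln) + 1 for ln in lines[:line]) + column

# position_to_cursor: count newlines before the offset for the line,
# and measure the last partial line for the column.
def position_to_cursor(text, offset):
    before = text[:offset]
    return before.count('\n'), len(before.split('\n')[-1])

text = "ab\ncd\nef"
offset = cursor_to_position(text, 1, 1)  # cursor on the 'd' in line 1
line_col = position_to_cursor(text, offset)
```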
1533 1562
1534 1563 def _safe_isinstance(obj, module, class_name, *attrs):
1535 1564 """Checks if obj is an instance of module.class_name if loaded
1536 1565 """
1537 1566 if module in sys.modules:
1538 1567 m = sys.modules[module]
1539 1568 for attr in [class_name, *attrs]:
1540 1569 m = getattr(m, attr)
1541 1570 return isinstance(obj, m)
1542 1571
1543 1572
1544 1573 @context_matcher()
1545 1574 def back_unicode_name_matcher(context: CompletionContext):
1546 1575 """Match Unicode characters back to Unicode name
1547 1576
1548 1577 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API.
1549 1578 """
1550 1579 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1551 1580 return _convert_matcher_v1_result_to_v2(
1552 1581 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1553 1582 )
1554 1583
1555 1584
1556 1585 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1557 1586 """Match Unicode characters back to Unicode name
1558 1587
1559 1588 This does ``β˜ƒ`` -> ``\\snowman``
1560 1589
1561 1590 Note that snowman is not a valid python3 combining character but will be expanded.
1562 1591 It will, however, not be recombined back to the snowman character by the completion machinery.
1563 1592
1564 1593 Nor will this back-complete standard escape sequences like \\n, \\b ...
1565 1594
1566 1595 .. deprecated:: 8.6
1567 1596 You can use :meth:`back_unicode_name_matcher` instead.
1568 1597
1569 1598 Returns
1570 1599 =======
1571 1600
1572 1601 Return a tuple with two elements:
1573 1602
1574 1603 - The Unicode character that was matched (preceded with a backslash), or
1575 1604 empty string,
1576 1605 - a sequence (of 1), name for the match Unicode character, preceded by
1577 1606 backslash, or empty if no match.
1578 1607 """
1579 1608 if len(text)<2:
1580 1609 return '', ()
1581 1610 maybe_slash = text[-2]
1582 1611 if maybe_slash != '\\':
1583 1612 return '', ()
1584 1613
1585 1614 char = text[-1]
1586 1615 # no expand on quote for completion in strings.
1587 1616 # nor backcomplete standard ascii keys
1588 1617 if char in string.ascii_letters or char in ('"',"'"):
1589 1618 return '', ()
1590 1619 try :
1591 1620 unic = unicodedata.name(char)
1592 1621 return '\\'+char,('\\'+unic,)
1593 1622 except KeyError:
1594 1623 pass
1595 1624 return '', ()
1596 1625
1597 1626
1598 1627 @context_matcher()
1599 1628 def back_latex_name_matcher(context: CompletionContext):
1600 1629 """Match latex characters back to unicode name
1601 1630
1602 1631 Same as :any:`back_latex_name_matches`, but adopted to new Matcher API.
1603 1632 """
1604 1633 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1605 1634 return _convert_matcher_v1_result_to_v2(
1606 1635 matches, type="latex", fragment=fragment, suppress_if_matches=True
1607 1636 )
1608 1637
1609 1638
1610 1639 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1611 1640 """Match latex characters back to unicode name
1612 1641
1613 1642 This does ``\\β„΅`` -> ``\\aleph``
1614 1643
1615 1644 .. deprecated:: 8.6
1616 1645 You can use :meth:`back_latex_name_matcher` instead.
1617 1646 """
1618 1647 if len(text)<2:
1619 1648 return '', ()
1620 1649 maybe_slash = text[-2]
1621 1650 if maybe_slash != '\\':
1622 1651 return '', ()
1623 1652
1624 1653
1625 1654 char = text[-1]
1626 1655 # no expand on quote for completion in strings.
1627 1656 # nor backcomplete standard ascii keys
1628 1657 if char in string.ascii_letters or char in ('"',"'"):
1629 1658 return '', ()
1630 1659 try :
1631 1660 latex = reverse_latex_symbol[char]
1632 1661 # '\\' replace the \ as well
1633 1662 return '\\'+char,[latex]
1634 1663 except KeyError:
1635 1664 pass
1636 1665 return '', ()
1637 1666
1638 1667
1639 1668 def _formatparamchildren(parameter) -> str:
1640 1669 """
1641 1670 Get parameter name and value from Jedi Private API
1642 1671
1643 1672 Jedi does not expose a simple way to get `param=value` from its API.
1644 1673
1645 1674 Parameters
1646 1675 ----------
1647 1676 parameter
1648 1677 Jedi's function `Param`
1649 1678
1650 1679 Returns
1651 1680 -------
1652 1681 A string like 'a', 'b=1', '*args', '**kwargs'
1653 1682
1654 1683 """
1655 1684 description = parameter.description
1656 1685 if not description.startswith('param '):
1657 1686 raise ValueError('Jedi function parameter description has changed format. '
1658 1687 'Expected "param ...", found %r.' % description)
1659 1688 return description[6:]
1660 1689
1661 1690 def _make_signature(completion)-> str:
1662 1691 """
1663 1692 Make the signature from a jedi completion
1664 1693
1665 1694 Parameters
1666 1695 ----------
1667 1696 completion : jedi.Completion
1668 1697 a Jedi completion object, which may not complete to a function type
1669 1698
1670 1699 Returns
1671 1700 -------
1672 1701 a string consisting of the function signature, with the parenthesis but
1673 1702 without the function name. example:
1674 1703 `(a, *args, b=1, **kwargs)`
1675 1704
1676 1705 """
1677 1706
1678 1707 # it looks like this might work on jedi 0.17
1679 1708 if hasattr(completion, 'get_signatures'):
1680 1709 signatures = completion.get_signatures()
1681 1710 if not signatures:
1682 1711 return '(?)'
1683 1712
1684 1713 c0 = completion.get_signatures()[0]
1685 1714 return '('+c0.to_string().split('(', maxsplit=1)[1]
1686 1715
1687 1716 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1688 1717 for p in signature.defined_names()) if f])
1689 1718
1690 1719
1691 1720 _CompleteResult = Dict[str, MatcherResult]
1692 1721
1693 1722
1694 1723 DICT_MATCHER_REGEX = re.compile(
1695 1724 r"""(?x)
1696 1725 ( # match dict-referring - or any get item object - expression
1697 1726 .+
1698 1727 )
1699 1728 \[ # open bracket
1700 1729 \s* # and optional whitespace
1701 1730 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1702 1731 # and slices
1703 1732 ((?:(?:
1704 1733 (?: # closed string
1705 1734 [uUbB]? # string prefix (r not handled)
1706 1735 (?:
1707 1736 '(?:[^']|(?<!\\)\\')*'
1708 1737 |
1709 1738 "(?:[^"]|(?<!\\)\\")*"
1710 1739 )
1711 1740 )
1712 1741 |
1713 1742 # capture integers and slices
1714 1743 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1715 1744 |
1716 1745 # integer in bin/hex/oct notation
1717 1746 0[bBxXoO]_?(?:\w|\d)+
1718 1747 )
1719 1748 \s*,\s*
1720 1749 )*)
1721 1750 ((?:
1722 1751 (?: # unclosed string
1723 1752 [uUbB]? # string prefix (r not handled)
1724 1753 (?:
1725 1754 '(?:[^']|(?<!\\)\\')*
1726 1755 |
1727 1756 "(?:[^"]|(?<!\\)\\")*
1728 1757 )
1729 1758 )
1730 1759 |
1731 1760 # unfinished integer
1732 1761 (?:[-+]?\d+)
1733 1762 |
1734 1763 # integer in bin/hex/oct notation
1735 1764 0[bBxXoO]_?(?:\w|\d)+
1736 1765 )
1737 1766 )?
1738 1767 $
1739 1768 """
1740 1769 )
1741 1770
1742 1771
1743 1772 def _convert_matcher_v1_result_to_v2(
1744 1773 matches: Sequence[str],
1745 1774 type: str,
1746 1775 fragment: Optional[str] = None,
1747 1776 suppress_if_matches: bool = False,
1748 1777 ) -> SimpleMatcherResult:
1749 1778 """Utility to help with transition"""
1750 1779 result = {
1751 1780 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1752 1781 "suppress": (True if matches else False) if suppress_if_matches else False,
1753 1782 }
1754 1783 if fragment is not None:
1755 1784 result["matched_fragment"] = fragment
1756 1785 return cast(SimpleMatcherResult, result)
1757 1786
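The v1-to-v2 conversion above amounts to wrapping plain strings into typed completion records. A minimal standalone sketch, with a simplified stand-in for IPython's `SimpleCompletion`:

```python
from dataclasses import dataclass

@dataclass
class SimpleCompletion:  # simplified stand-in for IPython's class
    text: str
    type: str

def convert_v1_to_v2(matches, type, fragment=None, suppress_if_matches=False):
    # mirror the shape of the utility above: wrap strings, set suppression flag
    result = {
        "completions": [SimpleCompletion(text=m, type=type) for m in matches],
        "suppress": bool(matches) if suppress_if_matches else False,
    }
    if fragment is not None:
        result["matched_fragment"] = fragment
    return result

r = convert_v1_to_v2(["foo", "bar"], type="path")
print(r["suppress"], [c.text for c in r["completions"]])  # False ['foo', 'bar']
```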
1758 1787
1759 1788 class IPCompleter(Completer):
1760 1789 """Extension of the completer class with IPython-specific features"""
1761 1790
1762 1791 @observe('greedy')
1763 1792 def _greedy_changed(self, change):
1764 1793 """update the splitter and readline delims when greedy is changed"""
1765 1794 if change["new"]:
1766 1795 self.evaluation = "unsafe"
1767 1796 self.auto_close_dict_keys = True
1768 1797 self.splitter.delims = GREEDY_DELIMS
1769 1798 else:
1770 1799 self.evaluation = "limited"
1771 1800 self.auto_close_dict_keys = False
1772 1801 self.splitter.delims = DELIMS
1773 1802
1774 1803 dict_keys_only = Bool(
1775 1804 False,
1776 1805 help="""
1777 1806 Whether to show dict key matches only.
1778 1807
1779 1808 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1780 1809 """,
1781 1810 )
1782 1811
1783 1812 suppress_competing_matchers = UnionTrait(
1784 1813 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1785 1814 default_value=None,
1786 1815 help="""
1787 1816 Whether to suppress completions from other *Matchers*.
1788 1817
1789 1818 When set to ``None`` (default) the matchers will attempt to auto-detect
1790 1819 whether suppression of other matchers is desirable. For example, at
1791 1820 the beginning of a line followed by `%` we expect a magic completion
1792 1821 to be the only applicable option, and after ``my_dict['`` we usually
1793 1822 expect a completion with an existing dictionary key.
1794 1823
1795 1824 If you want to disable this heuristic and see completions from all matchers,
1796 1825 set ``IPCompleter.suppress_competing_matchers = False``.
1797 1826 To disable the heuristic for specific matchers provide a dictionary mapping:
1798 1827 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1799 1828
1800 1829 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1801 1830 completions to the set of matchers with the highest priority;
1802 1831         this is equivalent to ``IPCompleter.merge_completions = False`` and
1803 1832 can be beneficial for performance, but will sometimes omit relevant
1804 1833 candidates from matchers further down the priority list.
1805 1834 """,
1806 1835 ).tag(config=True)
1807 1836
1808 1837 merge_completions = Bool(
1809 1838 True,
1810 1839 help="""Whether to merge completion results into a single list
1811 1840
1812 1841 If False, only the completion results from the first non-empty
1813 1842 completer will be returned.
1814 1843
1815 1844 As of version 8.6.0, setting the value to ``False`` is an alias for:
1816 1845         ``IPCompleter.suppress_competing_matchers = True``.
1817 1846 """,
1818 1847 ).tag(config=True)
1819 1848
1820 1849 disable_matchers = ListTrait(
1821 1850 Unicode(),
1822 1851 help="""List of matchers to disable.
1823 1852
1824 1853 The list should contain matcher identifiers (see :any:`completion_matcher`).
1825 1854 """,
1826 1855 ).tag(config=True)
1827 1856
1828 1857 omit__names = Enum(
1829 1858 (0, 1, 2),
1830 1859 default_value=2,
1831 1860 help="""Instruct the completer to omit private method names
1832 1861
1833 1862 Specifically, when completing on ``object.<tab>``.
1834 1863
1835 1864 When 2 [default]: all names that start with '_' will be excluded.
1836 1865
1837 1866 When 1: all 'magic' names (``__foo__``) will be excluded.
1838 1867
1839 1868 When 0: nothing will be excluded.
1840 1869 """
1841 1870 ).tag(config=True)
1842 1871 limit_to__all__ = Bool(False,
1843 1872 help="""
1844 1873 DEPRECATED as of version 5.0.
1845 1874
1846 1875 Instruct the completer to use __all__ for the completion
1847 1876
1848 1877 Specifically, when completing on ``object.<tab>``.
1849 1878
1850 1879 When True: only those names in obj.__all__ will be included.
1851 1880
1852 1881 When False [default]: the __all__ attribute is ignored
1853 1882 """,
1854 1883 ).tag(config=True)
1855 1884
1856 1885 profile_completions = Bool(
1857 1886 default_value=False,
1858 1887 help="If True, emit profiling data for completion subsystem using cProfile."
1859 1888 ).tag(config=True)
1860 1889
1861 1890 profiler_output_dir = Unicode(
1862 1891 default_value=".completion_profiles",
1863 1892 help="Template for path at which to output profile data for completions."
1864 1893 ).tag(config=True)
1865 1894
1866 1895 @observe('limit_to__all__')
1867 1896 def _limit_to_all_changed(self, change):
1868 1897         warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1869 1898             'value has been deprecated since IPython 5.0; it will be made to have '
1870 1899             'no effect and then removed in a future version of IPython.',
1871 1900             UserWarning)
1872 1901
1873 1902 def __init__(
1874 1903 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1875 1904 ):
1876 1905 """IPCompleter() -> completer
1877 1906
1878 1907 Return a completer object.
1879 1908
1880 1909 Parameters
1881 1910 ----------
1882 1911 shell
1883 1912 a pointer to the ipython shell itself. This is needed
1884 1913 because this completer knows about magic functions, and those can
1885 1914 only be accessed via the ipython instance.
1886 1915 namespace : dict, optional
1887 1916 an optional dict where completions are performed.
1888 1917 global_namespace : dict, optional
1889 1918 secondary optional dict for completions, to
1890 1919 handle cases (such as IPython embedded inside functions) where
1891 1920 both Python scopes are visible.
1892 1921 config : Config
1893 1922 traitlet's config object
1894 1923 **kwargs
1895 1924 passed to super class unmodified.
1896 1925 """
1897 1926
1898 1927 self.magic_escape = ESC_MAGIC
1899 1928 self.splitter = CompletionSplitter()
1900 1929
1901 1930 # _greedy_changed() depends on splitter and readline being defined:
1902 1931 super().__init__(
1903 1932 namespace=namespace,
1904 1933 global_namespace=global_namespace,
1905 1934 config=config,
1906 1935 **kwargs,
1907 1936 )
1908 1937
1909 1938 # List where completion matches will be stored
1910 1939 self.matches = []
1911 1940 self.shell = shell
1912 1941 # Regexp to split filenames with spaces in them
1913 1942 self.space_name_re = re.compile(r'([^\\] )')
1914 1943 # Hold a local ref. to glob.glob for speed
1915 1944 self.glob = glob.glob
1916 1945
1917 1946 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1918 1947 # buffers, to avoid completion problems.
1919 1948 term = os.environ.get('TERM','xterm')
1920 1949 self.dumb_terminal = term in ['dumb','emacs']
1921 1950
1922 1951 # Special handling of backslashes needed in win32 platforms
1923 1952 if sys.platform == "win32":
1924 1953 self.clean_glob = self._clean_glob_win32
1925 1954 else:
1926 1955 self.clean_glob = self._clean_glob
1927 1956
1928 1957 #regexp to parse docstring for function signature
1929 1958 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1930 1959 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1931 1960 #use this if positional argument name is also needed
1932 1961 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1933 1962
1934 1963 self.magic_arg_matchers = [
1935 1964 self.magic_config_matcher,
1936 1965 self.magic_color_matcher,
1937 1966 ]
1938 1967
1939 1968 # This is set externally by InteractiveShell
1940 1969 self.custom_completers = None
1941 1970
1942 1971 # This is a list of names of unicode characters that can be completed
1943 1972 # into their corresponding unicode value. The list is large, so we
1944 1973 # lazily initialize it on first use. Consuming code should access this
1945 1974 # attribute through the `@unicode_names` property.
1946 1975 self._unicode_names = None
1947 1976
1948 1977 self._backslash_combining_matchers = [
1949 1978 self.latex_name_matcher,
1950 1979 self.unicode_name_matcher,
1951 1980 back_latex_name_matcher,
1952 1981 back_unicode_name_matcher,
1953 1982 self.fwd_unicode_matcher,
1954 1983 ]
1955 1984
1956 1985 if not self.backslash_combining_completions:
1957 1986 for matcher in self._backslash_combining_matchers:
1958 1987 self.disable_matchers.append(_get_matcher_id(matcher))
1959 1988
1960 1989 if not self.merge_completions:
1961 1990 self.suppress_competing_matchers = True
1962 1991
1963 1992 @property
1964 1993 def matchers(self) -> List[Matcher]:
1965 1994 """All active matcher routines for completion"""
1966 1995 if self.dict_keys_only:
1967 1996 return [self.dict_key_matcher]
1968 1997
1969 1998 if self.use_jedi:
1970 1999 return [
1971 2000 *self.custom_matchers,
1972 2001 *self._backslash_combining_matchers,
1973 2002 *self.magic_arg_matchers,
1974 2003 self.custom_completer_matcher,
1975 2004 self.magic_matcher,
1976 2005 self._jedi_matcher,
1977 2006 self.dict_key_matcher,
1978 2007 self.file_matcher,
1979 2008 ]
1980 2009 else:
1981 2010 return [
1982 2011 *self.custom_matchers,
1983 2012 *self._backslash_combining_matchers,
1984 2013 *self.magic_arg_matchers,
1985 2014 self.custom_completer_matcher,
1986 2015 self.dict_key_matcher,
1987 2016 self.magic_matcher,
1988 2017 self.python_matcher,
1989 2018 self.file_matcher,
1990 2019 self.python_func_kw_matcher,
1991 2020 ]
1992 2021
1993 2022 def all_completions(self, text:str) -> List[str]:
1994 2023 """
1995 2024 Wrapper around the completion methods for the benefit of emacs.
1996 2025 """
1997 2026 prefix = text.rpartition('.')[0]
1998 2027 with provisionalcompleter():
1999 2028 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
2000 2029 for c in self.completions(text, len(text))]
2001 2030
2003 2032
2004 2033 def _clean_glob(self, text:str):
2005 2034 return self.glob("%s*" % text)
2006 2035
2007 2036 def _clean_glob_win32(self, text:str):
2008 2037 return [f.replace("\\","/")
2009 2038 for f in self.glob("%s*" % text)]
2010 2039
2011 2040 @context_matcher()
2012 2041 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2013 2042 """Same as :any:`file_matches`, but adopted to new Matcher API."""
2014 2043 matches = self.file_matches(context.token)
2015 2044 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2016 2045 # starts with `/home/`, `C:\`, etc)
2017 2046 return _convert_matcher_v1_result_to_v2(matches, type="path")
2018 2047
2019 2048 def file_matches(self, text: str) -> List[str]:
2020 2049 """Match filenames, expanding ~USER type strings.
2021 2050
2022 2051 Most of the seemingly convoluted logic in this completer is an
2023 2052 attempt to handle filenames with spaces in them. And yet it's not
2024 2053 quite perfect, because Python's readline doesn't expose all of the
2025 2054 GNU readline details needed for this to be done correctly.
2026 2055
2027 2056 For a filename with a space in it, the printed completions will be
2028 2057 only the parts after what's already been typed (instead of the
2029 2058 full completions, as is normally done). I don't think with the
2030 2059 current (as of Python 2.3) Python readline it's possible to do
2031 2060 better.
2032 2061
2033 2062 .. deprecated:: 8.6
2034 2063 You can use :meth:`file_matcher` instead.
2035 2064 """
2036 2065
2037 2066 # chars that require escaping with backslash - i.e. chars
2038 2067 # that readline treats incorrectly as delimiters, but we
2039 2068 # don't want to treat as delimiters in filename matching
2040 2069 # when escaped with backslash
2041 2070 if text.startswith('!'):
2042 2071 text = text[1:]
2043 2072 text_prefix = u'!'
2044 2073 else:
2045 2074 text_prefix = u''
2046 2075
2047 2076 text_until_cursor = self.text_until_cursor
2048 2077 # track strings with open quotes
2049 2078 open_quotes = has_open_quotes(text_until_cursor)
2050 2079
2051 2080 if '(' in text_until_cursor or '[' in text_until_cursor:
2052 2081 lsplit = text
2053 2082 else:
2054 2083 try:
2055 2084 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2056 2085 lsplit = arg_split(text_until_cursor)[-1]
2057 2086 except ValueError:
2058 2087 # typically an unmatched ", or backslash without escaped char.
2059 2088 if open_quotes:
2060 2089 lsplit = text_until_cursor.split(open_quotes)[-1]
2061 2090 else:
2062 2091 return []
2063 2092 except IndexError:
2064 2093 # tab pressed on empty line
2065 2094 lsplit = ""
2066 2095
2067 2096 if not open_quotes and lsplit != protect_filename(lsplit):
2068 2097 # if protectables are found, do matching on the whole escaped name
2069 2098 has_protectables = True
2070 2099 text0,text = text,lsplit
2071 2100 else:
2072 2101 has_protectables = False
2073 2102 text = os.path.expanduser(text)
2074 2103
2075 2104 if text == "":
2076 2105 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2077 2106
2078 2107 # Compute the matches from the filesystem
2079 2108 if sys.platform == 'win32':
2080 2109 m0 = self.clean_glob(text)
2081 2110 else:
2082 2111 m0 = self.clean_glob(text.replace('\\', ''))
2083 2112
2084 2113 if has_protectables:
2085 2114 # If we had protectables, we need to revert our changes to the
2086 2115 # beginning of filename so that we don't double-write the part
2087 2116 # of the filename we have so far
2088 2117 len_lsplit = len(lsplit)
2089 2118 matches = [text_prefix + text0 +
2090 2119 protect_filename(f[len_lsplit:]) for f in m0]
2091 2120 else:
2092 2121 if open_quotes:
2093 2122 # if we have a string with an open quote, we don't need to
2094 2123 # protect the names beyond the quote (and we _shouldn't_, as
2095 2124 # it would cause bugs when the filesystem call is made).
2096 2125 matches = m0 if sys.platform == "win32" else\
2097 2126 [protect_filename(f, open_quotes) for f in m0]
2098 2127 else:
2099 2128 matches = [text_prefix +
2100 2129 protect_filename(f) for f in m0]
2101 2130
2102 2131 # Mark directories in input list by appending '/' to their names.
2103 2132 return [x+'/' if os.path.isdir(x) else x for x in matches]
2104 2133
2105 2134 @context_matcher()
2106 2135 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2107 2136 """Match magics."""
2108 2137 text = context.token
2109 2138 matches = self.magic_matches(text)
2110 2139 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2111 2140 is_magic_prefix = len(text) > 0 and text[0] == "%"
2112 2141 result["suppress"] = is_magic_prefix and bool(result["completions"])
2113 2142 return result
2114 2143
2115 2144 def magic_matches(self, text: str):
2116 2145 """Match magics.
2117 2146
2118 2147 .. deprecated:: 8.6
2119 2148 You can use :meth:`magic_matcher` instead.
2120 2149 """
2121 2150 # Get all shell magics now rather than statically, so magics loaded at
2122 2151 # runtime show up too.
2123 2152 lsm = self.shell.magics_manager.lsmagic()
2124 2153 line_magics = lsm['line']
2125 2154 cell_magics = lsm['cell']
2126 2155 pre = self.magic_escape
2127 2156 pre2 = pre+pre
2128 2157
2129 2158 explicit_magic = text.startswith(pre)
2130 2159
2131 2160 # Completion logic:
2132 2161 # - user gives %%: only do cell magics
2133 2162 # - user gives %: do both line and cell magics
2134 2163 # - no prefix: do both
2135 2164 # In other words, line magics are skipped if the user gives %% explicitly
2136 2165 #
2137 2166 # We also exclude magics that match any currently visible names:
2138 2167 # https://github.com/ipython/ipython/issues/4877, unless the user has
2139 2168 # typed a %:
2140 2169 # https://github.com/ipython/ipython/issues/10754
2141 2170 bare_text = text.lstrip(pre)
2142 2171 global_matches = self.global_matches(bare_text)
2143 2172 if not explicit_magic:
2144 2173 def matches(magic):
2145 2174 """
2146 2175 Filter magics, in particular remove magics that match
2147 2176 a name present in global namespace.
2148 2177 """
2149 2178 return ( magic.startswith(bare_text) and
2150 2179 magic not in global_matches )
2151 2180 else:
2152 2181 def matches(magic):
2153 2182 return magic.startswith(bare_text)
2154 2183
2155 2184 comp = [ pre2+m for m in cell_magics if matches(m)]
2156 2185 if not text.startswith(pre2):
2157 2186 comp += [ pre+m for m in line_magics if matches(m)]
2158 2187
2159 2188 return comp
2160 2189
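The prefix logic described in the comments above (`%%` restricts to cell magics, `%` or no prefix completes both) can be sketched standalone with hypothetical magic names:

```python
# Standalone sketch of the magic-prefix filtering above (hypothetical names).
def complete_magics(text, line_magics, cell_magics, pre="%"):
    pre2 = pre + pre
    bare = text.lstrip(pre)
    # cell magics always qualify; rendered with the '%%' prefix
    comp = [pre2 + m for m in cell_magics if m.startswith(bare)]
    # line magics are skipped only when the user explicitly typed '%%'
    if not text.startswith(pre2):
        comp += [pre + m for m in line_magics if m.startswith(bare)]
    return comp

print(complete_magics("%%ti", ["time", "timeit"], ["time", "timeit"]))
# ['%%time', '%%timeit']
print(complete_magics("%ti", ["time", "timeit"], ["time", "timeit"]))
# ['%%time', '%%timeit', '%time', '%timeit']
```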
2161 2190 @context_matcher()
2162 2191 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2163 2192 """Match class names and attributes for %config magic."""
2164 2193 # NOTE: uses `line_buffer` equivalent for compatibility
2165 2194 matches = self.magic_config_matches(context.line_with_cursor)
2166 2195 return _convert_matcher_v1_result_to_v2(matches, type="param")
2167 2196
2168 2197 def magic_config_matches(self, text: str) -> List[str]:
2169 2198 """Match class names and attributes for %config magic.
2170 2199
2171 2200 .. deprecated:: 8.6
2172 2201 You can use :meth:`magic_config_matcher` instead.
2173 2202 """
2174 2203 texts = text.strip().split()
2175 2204
2176 2205 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2177 2206 # get all configuration classes
2178 2207 classes = sorted(set([ c for c in self.shell.configurables
2179 2208 if c.__class__.class_traits(config=True)
2180 2209 ]), key=lambda x: x.__class__.__name__)
2181 2210 classnames = [ c.__class__.__name__ for c in classes ]
2182 2211
2183 2212 # return all classnames if config or %config is given
2184 2213 if len(texts) == 1:
2185 2214 return classnames
2186 2215
2187 2216 # match classname
2188 2217 classname_texts = texts[1].split('.')
2189 2218 classname = classname_texts[0]
2190 2219 classname_matches = [ c for c in classnames
2191 2220 if c.startswith(classname) ]
2192 2221
2193 2222 # return matched classes or the matched class with attributes
2194 2223 if texts[1].find('.') < 0:
2195 2224 return classname_matches
2196 2225 elif len(classname_matches) == 1 and \
2197 2226 classname_matches[0] == classname:
2198 2227 cls = classes[classnames.index(classname)].__class__
2199 2228 help = cls.class_get_help()
2200 2229 # strip leading '--' from cl-args:
2201 2230 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2202 2231 return [ attr.split('=')[0]
2203 2232 for attr in help.strip().splitlines()
2204 2233 if attr.startswith(texts[1]) ]
2205 2234 return []
2206 2235
2207 2236 @context_matcher()
2208 2237 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2209 2238 """Match color schemes for %colors magic."""
2210 2239 # NOTE: uses `line_buffer` equivalent for compatibility
2211 2240 matches = self.magic_color_matches(context.line_with_cursor)
2212 2241 return _convert_matcher_v1_result_to_v2(matches, type="param")
2213 2242
2214 2243 def magic_color_matches(self, text: str) -> List[str]:
2215 2244 """Match color schemes for %colors magic.
2216 2245
2217 2246 .. deprecated:: 8.6
2218 2247 You can use :meth:`magic_color_matcher` instead.
2219 2248 """
2220 2249 texts = text.split()
2221 2250 if text.endswith(' '):
2222 2251 # .split() strips off the trailing whitespace. Add '' back
2223 2252 # so that: '%colors ' -> ['%colors', '']
2224 2253 texts.append('')
2225 2254
2226 2255 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2227 2256 prefix = texts[1]
2228 2257 return [ color for color in InspectColors.keys()
2229 2258 if color.startswith(prefix) ]
2230 2259 return []
2231 2260
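The trailing-space handling above matters because `str.split()` drops a trailing empty token; re-adding `''` makes `'%colors '` match every scheme. A standalone check, with hypothetical scheme names:

```python
# Sketch of the %colors matching above; scheme names are illustrative.
def color_candidates(text, schemes=("Linux", "LightBG", "Neutral", "NoColor")):
    texts = text.split()
    if text.endswith(" "):
        # '%colors ' -> ['%colors', ''] so an empty prefix matches everything
        texts.append("")
    if len(texts) == 2 and texts[0] in ("colors", "%colors"):
        prefix = texts[1]
        return [c for c in schemes if c.startswith(prefix)]
    return []

print(color_candidates("%colors "))    # all four schemes
print(color_candidates("%colors Li"))  # ['Linux', 'LightBG']
print(color_candidates("%colors"))     # [] -- no argument started yet
```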
2232 2261 @context_matcher(identifier="IPCompleter.jedi_matcher")
2233 2262 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2234 2263 matches = self._jedi_matches(
2235 2264 cursor_column=context.cursor_position,
2236 2265 cursor_line=context.cursor_line,
2237 2266 text=context.full_text,
2238 2267 )
2239 2268 return {
2240 2269 "completions": matches,
2241 2270 # static analysis should not suppress other matchers
2242 2271 "suppress": False,
2243 2272 }
2244 2273
2245 2274 def _jedi_matches(
2246 2275 self, cursor_column: int, cursor_line: int, text: str
2247 2276 ) -> Iterator[_JediCompletionLike]:
2248 2277 """
2249 2278         Return a list of :any:`jedi.api.Completion`\\s objects from a ``text`` and
2250 2279 cursor position.
2251 2280
2252 2281 Parameters
2253 2282 ----------
2254 2283 cursor_column : int
2255 2284 column position of the cursor in ``text``, 0-indexed.
2256 2285 cursor_line : int
2257 2286 line position of the cursor in ``text``, 0-indexed
2258 2287 text : str
2259 2288 text to complete
2260 2289
2261 2290 Notes
2262 2291 -----
2263 2292         If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
2264 2293 object containing a string with the Jedi debug information attached.
2265 2294
2266 2295 .. deprecated:: 8.6
2267 2296 You can use :meth:`_jedi_matcher` instead.
2268 2297 """
2269 2298 namespaces = [self.namespace]
2270 2299 if self.global_namespace is not None:
2271 2300 namespaces.append(self.global_namespace)
2272 2301
2273 2302 completion_filter = lambda x:x
2274 2303 offset = cursor_to_position(text, cursor_line, cursor_column)
2275 2304 # filter output if we are completing for object members
2276 2305 if offset:
2277 2306 pre = text[offset-1]
2278 2307 if pre == '.':
2279 2308 if self.omit__names == 2:
2280 2309 completion_filter = lambda c:not c.name.startswith('_')
2281 2310 elif self.omit__names == 1:
2282 2311 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2283 2312 elif self.omit__names == 0:
2284 2313 completion_filter = lambda x:x
2285 2314 else:
2286 2315 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2287 2316
2288 2317 interpreter = jedi.Interpreter(text[:offset], namespaces)
2289 2318 try_jedi = True
2290 2319
2291 2320 try:
2292 2321 # find the first token in the current tree -- if it is a ' or " then we are in a string
2293 2322 completing_string = False
2294 2323 try:
2295 2324 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2296 2325 except StopIteration:
2297 2326 pass
2298 2327 else:
2299 2328 # note the value may be ', ", or it may also be ''' or """, or
2300 2329 # in some cases, """what/you/typed..., but all of these are
2301 2330 # strings.
2302 2331 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2303 2332
2304 2333 # if we are in a string jedi is likely not the right candidate for
2305 2334 # now. Skip it.
2306 2335 try_jedi = not completing_string
2307 2336 except Exception as e:
2308 2337             # many things can go wrong; we are using a private API, just don't crash.
2309 2338 if self.debug:
2310 2339 print("Error detecting if completing a non-finished string :", e, '|')
2311 2340
2312 2341 if not try_jedi:
2313 2342 return iter([])
2314 2343 try:
2315 2344 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2316 2345 except Exception as e:
2317 2346 if self.debug:
2318 2347 return iter(
2319 2348 [
2320 2349 _FakeJediCompletion(
2321 2350                         'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2322 2351 % (e)
2323 2352 )
2324 2353 ]
2325 2354 )
2326 2355 else:
2327 2356 return iter([])
2328 2357
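The offset used above comes from `cursor_to_position`, which flattens a 0-indexed `(line, column)` cursor into a string offset. A sketch of that translation (the real helper lives in `IPython.core.completer`):

```python
def cursor_to_position(text: str, line: int, column: int) -> int:
    """Convert a 0-indexed (line, column) cursor into an offset into ``text``.

    Sketch: sum the lengths of the preceding lines (+1 each for the
    newline that split() stripped), then add the column.
    """
    lines = text.split("\n")
    assert line <= len(lines) - 1
    return sum(len(l) + 1 for l in lines[:line]) + column

text = "ab\ncdef\ng"
print(cursor_to_position(text, 1, 2))  # 5 -> text[5] == 'e'
```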
2329 2358 @context_matcher()
2330 2359 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2331 2360 """Match attributes or global python names"""
2332 2361 text = context.line_with_cursor
2333 2362 if "." in text:
2334 2363 try:
2335 2364 matches, fragment = self._attr_matches(text, include_prefix=False)
2336 2365 if text.endswith(".") and self.omit__names:
2337 2366 if self.omit__names == 1:
2338 2367 # true if txt is _not_ a __ name, false otherwise:
2339 2368 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2340 2369 else:
2341 2370 # true if txt is _not_ a _ name, false otherwise:
2342 2371 no__name = (
2343 2372 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2344 2373 is None
2345 2374 )
2346 2375 matches = filter(no__name, matches)
2347 2376 return _convert_matcher_v1_result_to_v2(
2348 2377 matches, type="attribute", fragment=fragment
2349 2378 )
2350 2379 except NameError:
2351 2380 # catches <undefined attributes>.<tab>
2352 2381 matches = []
2353 2382 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2354 2383 else:
2355 2384 matches = self.global_matches(context.token)
2356 2385 # TODO: maybe distinguish between functions, modules and just "variables"
2357 2386 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2358 2387
2359 2388 @completion_matcher(api_version=1)
2360 2389 def python_matches(self, text: str) -> Iterable[str]:
2361 2390 """Match attributes or global python names.
2362 2391
2363 2392 .. deprecated:: 8.27
2364 2393 You can use :meth:`python_matcher` instead."""
2365 2394 if "." in text:
2366 2395 try:
2367 2396 matches = self.attr_matches(text)
2368 2397 if text.endswith('.') and self.omit__names:
2369 2398 if self.omit__names == 1:
2370 2399 # true if txt is _not_ a __ name, false otherwise:
2371 2400 no__name = (lambda txt:
2372 2401 re.match(r'.*\.__.*?__',txt) is None)
2373 2402 else:
2374 2403 # true if txt is _not_ a _ name, false otherwise:
2375 2404 no__name = (lambda txt:
2376 2405 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2377 2406 matches = filter(no__name, matches)
2378 2407 except NameError:
2379 2408 # catches <undefined attributes>.<tab>
2380 2409 matches = []
2381 2410 else:
2382 2411 matches = self.global_matches(text)
2383 2412 return matches
2384 2413
2385 2414 def _default_arguments_from_docstring(self, doc):
2386 2415 """Parse the first line of docstring for call signature.
2387 2416
2388 2417 Docstring should be of the form 'min(iterable[, key=func])\n'.
2389 2418 It can also parse cython docstring of the form
2390 2419 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2391 2420 """
2392 2421 if doc is None:
2393 2422 return []
2394 2423
2395 2424         # care only about the first line
2396 2425 line = doc.lstrip().splitlines()[0]
2397 2426
2398 2427 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2399 2428 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2400 2429 sig = self.docstring_sig_re.search(line)
2401 2430 if sig is None:
2402 2431 return []
2403 2432 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2404 2433 sig = sig.groups()[0].split(',')
2405 2434 ret = []
2406 2435 for s in sig:
2407 2436 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2408 2437 ret += self.docstring_kwd_re.findall(s)
2409 2438 return ret
2410 2439
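The two regexes above can be exercised standalone on the docstring example from the method's own docs; note that only arguments containing `=` survive the keyword pattern:

```python
import re

# same patterns as self.docstring_sig_re / self.docstring_kwd_re above
docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')

line = 'min(iterable[, key=func])'
sig = docstring_sig_re.search(line)
parts = sig.groups()[0].split(',')   # ['iterable[', ' key=func]']
ret = []
for s in parts:
    # only keyword arguments (containing '=') are captured
    ret += docstring_kwd_re.findall(s)
print(parts, ret)  # ['iterable[', ' key=func]'] ['key']
```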
2411 2440 def _default_arguments(self, obj):
2412 2441 """Return the list of default arguments of obj if it is callable,
2413 2442 or empty list otherwise."""
2414 2443 call_obj = obj
2415 2444 ret = []
2416 2445 if inspect.isbuiltin(obj):
2417 2446 pass
2418 2447 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2419 2448 if inspect.isclass(obj):
2420 2449 #for cython embedsignature=True the constructor docstring
2421 2450 #belongs to the object itself not __init__
2422 2451 ret += self._default_arguments_from_docstring(
2423 2452 getattr(obj, '__doc__', ''))
2424 2453 # for classes, check for __init__,__new__
2425 2454 call_obj = (getattr(obj, '__init__', None) or
2426 2455 getattr(obj, '__new__', None))
2427 2456 # for all others, check if they are __call__able
2428 2457 elif hasattr(obj, '__call__'):
2429 2458 call_obj = obj.__call__
2430 2459 ret += self._default_arguments_from_docstring(
2431 2460 getattr(call_obj, '__doc__', ''))
2432 2461
2433 2462 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2434 2463 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2435 2464
2436 2465 try:
2437 2466 sig = inspect.signature(obj)
2438 2467 ret.extend(k for k, v in sig.parameters.items() if
2439 2468 v.kind in _keeps)
2440 2469 except ValueError:
2441 2470 pass
2442 2471
2443 2472 return list(set(ret))
2444 2473
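The `inspect`-based branch above keeps only parameters that can be passed by keyword, excluding `*args` and `**kwargs`. Standalone:

```python
import inspect

def example(a, b=1, *args, c=2, **kwargs):
    pass

# same kinds kept by _default_arguments above
_keeps = (inspect.Parameter.KEYWORD_ONLY,
          inspect.Parameter.POSITIONAL_OR_KEYWORD)

names = [k for k, v in inspect.signature(example).parameters.items()
         if v.kind in _keeps]
print(names)  # ['a', 'b', 'c'] -- *args and **kwargs are excluded
```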
2445 2474 @context_matcher()
2446 2475 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2447 2476 """Match named parameters (kwargs) of the last open function."""
2448 2477 matches = self.python_func_kw_matches(context.token)
2449 2478 return _convert_matcher_v1_result_to_v2(matches, type="param")
2450 2479
2451 2480 def python_func_kw_matches(self, text):
2452 2481 """Match named parameters (kwargs) of the last open function.
2453 2482
2454 2483 .. deprecated:: 8.6
2455 2484 You can use :meth:`python_func_kw_matcher` instead.
2456 2485 """
2457 2486
2458 2487 if "." in text: # a parameter cannot be dotted
2459 2488 return []
2460 2489 try: regexp = self.__funcParamsRegex
2461 2490 except AttributeError:
2462 2491 regexp = self.__funcParamsRegex = re.compile(r'''
2463 2492 '.*?(?<!\\)' | # single quoted strings or
2464 2493 ".*?(?<!\\)" | # double quoted strings or
2465 2494 \w+ | # identifier
2466 2495 \S # other characters
2467 2496 ''', re.VERBOSE | re.DOTALL)
2468 2497 # 1. find the nearest identifier that comes before an unclosed
2469 2498 # parenthesis before the cursor
2470 2499 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2471 2500 tokens = regexp.findall(self.text_until_cursor)
2472 iterTokens = reversed(tokens); openPar = 0
2501 iterTokens = reversed(tokens)
2502 openPar = 0
2473 2503
2474 2504 for token in iterTokens:
2475 2505 if token == ')':
2476 2506 openPar -= 1
2477 2507 elif token == '(':
2478 2508 openPar += 1
2479 2509 if openPar > 0:
2480 2510 # found the last unclosed parenthesis
2481 2511 break
2482 2512 else:
2483 2513 return []
2484 2514 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2485 2515 ids = []
2486 2516 isId = re.compile(r'\w+$').match
2487 2517
2488 2518 while True:
2489 2519 try:
2490 2520 ids.append(next(iterTokens))
2491 2521 if not isId(ids[-1]):
2492 ids.pop(); break
2522 ids.pop()
2523 break
2493 2524 if not next(iterTokens) == '.':
2494 2525 break
2495 2526 except StopIteration:
2496 2527 break
2497 2528
2498 2529 # Find all named arguments already assigned to, as to avoid suggesting
2499 2530 # them again
2500 2531 usedNamedArgs = set()
2501 2532 par_level = -1
2502 2533 for token, next_token in zip(tokens, tokens[1:]):
2503 2534 if token == '(':
2504 2535 par_level += 1
2505 2536 elif token == ')':
2506 2537 par_level -= 1
2507 2538
2508 2539 if par_level != 0:
2509 2540 continue
2510 2541
2511 2542 if next_token != '=':
2512 2543 continue
2513 2544
2514 2545 usedNamedArgs.add(token)
2515 2546
2516 2547 argMatches = []
2517 2548 try:
2518 2549 callableObj = '.'.join(ids[::-1])
2519 2550 namedArgs = self._default_arguments(eval(callableObj,
2520 2551 self.namespace))
2521 2552
2522 2553 # Remove used named arguments from the list, no need to show twice
2523 2554 for namedArg in set(namedArgs) - usedNamedArgs:
2524 2555 if namedArg.startswith(text):
2525 2556 argMatches.append("%s=" %namedArg)
2526 2557 except:
2527 2558 pass
2528 2559
2529 2560 return argMatches
2530 2561
2531 2562 @staticmethod
2532 2563 def _get_keys(obj: Any) -> List[Any]:
2533 2564 # Objects can define their own completions by defining an
2534 2565 # _ipython_key_completions_() method.
2535 2566 method = get_real_method(obj, '_ipython_key_completions_')
2536 2567 if method is not None:
2537 2568 return method()
2538 2569
2539 2570 # Special case some common in-memory dict-like types
2540 2571 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2541 2572 try:
2542 2573 return list(obj.keys())
2543 2574 except Exception:
2544 2575 return []
2545 2576 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2546 2577 try:
2547 2578 return list(obj.obj.keys())
2548 2579 except Exception:
2549 2580 return []
2550 2581 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2551 2582 _safe_isinstance(obj, 'numpy', 'void'):
2552 2583 return obj.dtype.names or []
2553 2584 return []
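Because the lookup above checks the `_ipython_key_completions_` protocol first, any object can advertise its own completable keys. A minimal sketch (the `Config` class is hypothetical):

```python
class Config:
    """A hypothetical container opting into IPython's dict-key
    completion via the _ipython_key_completions_ protocol."""

    def __init__(self, data):
        self._data = data

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        # Called by the completer when completing cfg["<tab>
        return list(self._data)

cfg = Config({"host": "localhost", "port": 8080})
print(cfg._ipython_key_completions_())  # ['host', 'port']
```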
2554 2585
2555 2586 @context_matcher()
2556 2587 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2557 2588 """Match string keys in a dictionary, after e.g. ``foo[``."""
2558 2589 matches = self.dict_key_matches(context.token)
2559 2590 return _convert_matcher_v1_result_to_v2(
2560 2591 matches, type="dict key", suppress_if_matches=True
2561 2592 )
2562 2593
2563 2594 def dict_key_matches(self, text: str) -> List[str]:
2564 2595 """Match string keys in a dictionary, after e.g. ``foo[``.
2565 2596
2566 2597 .. deprecated:: 8.6
2567 2598 You can use :meth:`dict_key_matcher` instead.
2568 2599 """
2569 2600
2570 2601 # Short-circuit on closed dictionary (regular expression would
2571 2602 # not match anyway, but would take quite a while).
2572 2603 if self.text_until_cursor.strip().endswith("]"):
2573 2604 return []
2574 2605
2575 2606 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2576 2607
2577 2608 if match is None:
2578 2609 return []
2579 2610
2580 2611 expr, prior_tuple_keys, key_prefix = match.groups()
2581 2612
2582 2613 obj = self._evaluate_expr(expr)
2583 2614
2584 2615 if obj is not_found:
2585 2616 return []
2586 2617
2587 2618 keys = self._get_keys(obj)
2588 2619 if not keys:
2589 2620 return keys
2590 2621
2591 2622 tuple_prefix = guarded_eval(
2592 2623 prior_tuple_keys,
2593 2624 EvaluationContext(
2594 2625 globals=self.global_namespace,
2595 2626 locals=self.namespace,
2596 2627 evaluation=self.evaluation, # type: ignore
2597 2628 in_subscript=True,
2598 2629 ),
2599 2630 )
2600 2631
2601 2632 closing_quote, token_offset, matches = match_dict_keys(
2602 2633 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2603 2634 )
2604 2635 if not matches:
2605 2636 return []
2606 2637
2607 2638 # get the cursor position of
2608 2639 # - the text being completed
2609 2640 # - the start of the key text
2610 2641 # - the start of the completion
2611 2642 text_start = len(self.text_until_cursor) - len(text)
2612 2643 if key_prefix:
2613 2644 key_start = match.start(3)
2614 2645 completion_start = key_start + token_offset
2615 2646 else:
2616 2647 key_start = completion_start = match.end()
2617 2648
2618 2649 # grab the leading prefix, to make sure all completions start with `text`
2619 2650 if text_start > key_start:
2620 2651 leading = ''
2621 2652 else:
2622 2653 leading = text[text_start:completion_start]
2623 2654
2624 2655 # append closing quote and bracket as appropriate
2625 2656 # this is *not* appropriate if the opening quote or bracket is outside
2626 2657 # the text given to this method, e.g. `d["""a\nt
2627 2658 can_close_quote = False
2628 2659 can_close_bracket = False
2629 2660
2630 2661 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2631 2662
2632 2663 if continuation.startswith(closing_quote):
2633 2664 # do not close if already closed, e.g. `d['a<tab>'`
2634 2665 continuation = continuation[len(closing_quote) :]
2635 2666 else:
2636 2667 can_close_quote = True
2637 2668
2638 2669 continuation = continuation.strip()
2639 2670
2640 2671 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2641 2672 # handling it is out of scope, so let's avoid appending suffixes.
2642 2673 has_known_tuple_handling = isinstance(obj, dict)
2643 2674
2644 2675 can_close_bracket = (
2645 2676 not continuation.startswith("]") and self.auto_close_dict_keys
2646 2677 )
2647 2678 can_close_tuple_item = (
2648 2679 not continuation.startswith(",")
2649 2680 and has_known_tuple_handling
2650 2681 and self.auto_close_dict_keys
2651 2682 )
2652 2683 can_close_quote = can_close_quote and self.auto_close_dict_keys
2653 2684
2654 2685 # fast path if closing quote should be appended but no suffix is allowed
2655 2686 if not can_close_quote and not can_close_bracket and closing_quote:
2656 2687 return [leading + k for k in matches]
2657 2688
2658 2689 results = []
2659 2690
2660 2691 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2661 2692
2662 2693 for k, state_flag in matches.items():
2663 2694 result = leading + k
2664 2695 if can_close_quote and closing_quote:
2665 2696 result += closing_quote
2666 2697
2667 2698 if state_flag == end_of_tuple_or_item:
2668 2699 # We do not know which suffix to add,
2669 2700 # e.g. both tuple item and string
2670 2701 # match this item.
2671 2702 pass
2672 2703
2673 2704 if state_flag in end_of_tuple_or_item and can_close_bracket:
2674 2705 result += "]"
2675 2706 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2676 2707 result += ", "
2677 2708 results.append(result)
2678 2709 return results
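The quote/bracket closing decision can be condensed into a sketch. Hedged: `close_suffixes` is a hypothetical helper that only mirrors the quote and bracket cases above, not the tuple-item handling.

```python
def close_suffixes(match, closing_quote, continuation, auto_close):
    """Append the closing quote and bracket only when the text after
    the cursor does not already contain them."""
    result = match
    cont = continuation.strip()
    if closing_quote:
        if cont.startswith(closing_quote):
            # do not close if already closed, e.g. `d['a<tab>'`
            cont = cont[len(closing_quote):].strip()
        elif auto_close:
            result += closing_quote
    if auto_close and not cont.startswith(']'):
        result += ']'
    return result

print(close_suffixes("key", "'", "", True))    # key']
print(close_suffixes("key", "'", "']", True))  # key
```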
2679 2710
2680 2711 @context_matcher()
2681 2712 def unicode_name_matcher(self, context: CompletionContext):
2682 2713 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2683 2714 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2684 2715 return _convert_matcher_v1_result_to_v2(
2685 2716 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2686 2717 )
2687 2718
2688 2719 @staticmethod
2689 2720 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2689 2720 """Match Latex-like syntax for unicode characters based
2691 2722 on the name of the character.
2692 2723
2693 2724 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2694 2725
2695 2726 Works only on valid Python 3 identifiers, or on combining characters that
2696 2727 will combine to form a valid identifier.
2697 2728 """
2698 2729 slashpos = text.rfind('\\')
2699 2730 if slashpos > -1:
2700 2731 s = text[slashpos+1:]
2701 2732 try:
2702 2733 unic = unicodedata.lookup(s)
2703 2734 # allow combining chars
2704 2735 if ('a'+unic).isidentifier():
2705 2736 return '\\'+s,[unic]
2706 2737 except KeyError:
2707 2738 pass
2708 2739 return '', []
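The lookup path above can be exercised on its own with just the standard library; a sketch (`unicode_name_match` is a hypothetical name mirroring the static method):

```python
import unicodedata

def unicode_name_match(text):
    """Expand a trailing \\NAME to its character when the result
    (possibly a combining character) forms a valid identifier."""
    slashpos = text.rfind('\\')
    if slashpos > -1:
        s = text[slashpos + 1:]
        try:
            unic = unicodedata.lookup(s)
            # allow combining chars by testing 'a' + char
            if ('a' + unic).isidentifier():
                return '\\' + s, [unic]
        except KeyError:
            pass
    return '', []

print(unicode_name_match('\\GREEK SMALL LETTER ETA'))
```

The `'a' + unic` trick is what lets combining characters through: alone they are not identifiers, but appended to a letter they are.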
2709 2740
2710 2741 @context_matcher()
2711 2742 def latex_name_matcher(self, context: CompletionContext):
2712 2743 """Match Latex syntax for unicode characters.
2713 2744
2714 2745 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2715 2746 """
2716 2747 fragment, matches = self.latex_matches(context.text_until_cursor)
2717 2748 return _convert_matcher_v1_result_to_v2(
2718 2749 matches, type="latex", fragment=fragment, suppress_if_matches=True
2719 2750 )
2720 2751
2721 2752 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2722 2753 """Match Latex syntax for unicode characters.
2723 2754
2724 2755 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2725 2756
2726 2757 .. deprecated:: 8.6
2727 2758 You can use :meth:`latex_name_matcher` instead.
2728 2759 """
2729 2760 slashpos = text.rfind('\\')
2730 2761 if slashpos > -1:
2731 2762 s = text[slashpos:]
2732 2763 if s in latex_symbols:
2733 2764 # Try to complete a full latex symbol to unicode
2734 2765 # \\alpha -> Ξ±
2735 2766 return s, [latex_symbols[s]]
2736 2767 else:
2737 2768 # If a user has partially typed a latex symbol, give them
2738 2769 # a full list of options \al -> [\aleph, \alpha]
2739 2770 matches = [k for k in latex_symbols if k.startswith(s)]
2740 2771 if matches:
2741 2772 return s, matches
2742 2773 return '', ()
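The exact-then-prefix lookup can be sketched against a tiny stand-in table (IPython's real `latex_symbols` mapping has far more entries):

```python
# Tiny stand-in for IPython.core.latex_symbols.latex_symbols.
latex_symbols = {'\\alpha': '\u03b1', '\\aleph': '\u2135', '\\beta': '\u03b2'}

def latex_match(text):
    """\\alpha -> the character; \\al -> all symbol names sharing the prefix."""
    slashpos = text.rfind('\\')
    if slashpos > -1:
        s = text[slashpos:]
        if s in latex_symbols:
            # exact symbol: return the unicode character itself
            return s, [latex_symbols[s]]
        matches = [k for k in latex_symbols if k.startswith(s)]
        if matches:
            # partial symbol: offer the candidate names
            return s, matches
    return '', ()

print(latex_match('\\alpha'))  # exact: the unicode character
print(latex_match('\\al'))     # partial: candidate symbol names
```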
2743 2774
2744 2775 @context_matcher()
2745 2776 def custom_completer_matcher(self, context):
2746 2777 """Dispatch custom completer.
2747 2778
2748 2779 If a match is found, suppresses all other matchers except for Jedi.
2749 2780 """
2750 2781 matches = self.dispatch_custom_completer(context.token) or []
2751 2782 result = _convert_matcher_v1_result_to_v2(
2752 2783 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2753 2784 )
2754 2785 result["ordered"] = True
2755 2786 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2756 2787 return result
2757 2788
2758 2789 def dispatch_custom_completer(self, text):
2759 2790 """
2760 2791 .. deprecated:: 8.6
2761 2792 You can use :meth:`custom_completer_matcher` instead.
2762 2793 """
2763 2794 if not self.custom_completers:
2764 2795 return
2765 2796
2766 2797 line = self.line_buffer
2767 2798 if not line.strip():
2768 2799 return None
2769 2800
2770 2801 # Create a little structure to pass all the relevant information about
2771 2802 # the current completion to any custom completer.
2772 2803 event = SimpleNamespace()
2773 2804 event.line = line
2774 2805 event.symbol = text
2775 2806 cmd = line.split(None,1)[0]
2776 2807 event.command = cmd
2777 2808 event.text_until_cursor = self.text_until_cursor
2778 2809
2779 2810 # for foo etc, try also to find completer for %foo
2780 2811 if not cmd.startswith(self.magic_escape):
2781 2812 try_magic = self.custom_completers.s_matches(
2782 2813 self.magic_escape + cmd)
2783 2814 else:
2784 2815 try_magic = []
2785 2816
2786 2817 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2787 2818 try_magic,
2788 2819 self.custom_completers.flat_matches(self.text_until_cursor)):
2789 2820 try:
2790 2821 res = c(event)
2791 2822 if res:
2792 2823 # first, try case sensitive match
2793 2824 withcase = [r for r in res if r.startswith(text)]
2794 2825 if withcase:
2795 2826 return withcase
2796 2827 # if none, then case insensitive ones are ok too
2797 2828 text_low = text.lower()
2798 2829 return [r for r in res if r.lower().startswith(text_low)]
2799 2830 except TryNext:
2800 2831 pass
2801 2832 except KeyboardInterrupt:
2802 2833 """
2803 2834 If a custom completer takes too long,
2804 2835 let the keyboard interrupt abort it and return nothing.
2805 2836 """
2806 2837 break
2807 2838
2808 2839 return None
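The case-sensitive-first filtering applied to each custom completer's results can be isolated as a sketch (`filter_matches` is a hypothetical name):

```python
def filter_matches(candidates, text):
    """Prefer case-sensitive prefix matches; fall back to
    case-insensitive ones only when none match exactly."""
    withcase = [r for r in candidates if r.startswith(text)]
    if withcase:
        return withcase
    text_low = text.lower()
    return [r for r in candidates if r.lower().startswith(text_low)]

print(filter_matches(["Timer", "timeit", "TIME"], "tim"))  # ['timeit']
print(filter_matches(["Timer", "TIME"], "tim"))            # ['Timer', 'TIME']
```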
2809 2840
2810 2841 def completions(self, text: str, offset: int)->Iterator[Completion]:
2811 2842 """
2812 2843 Returns an iterator over the possible completions
2813 2844
2814 2845 .. warning::
2815 2846
2816 2847 Unstable
2817 2848
2818 2849 This function is unstable, API may change without warning.
2819 2850 It will also raise unless used in the proper context manager.
2820 2851
2821 2852 Parameters
2822 2853 ----------
2823 2854 text : str
2824 2855 Full text of the current input, multi line string.
2825 2856 offset : int
2826 2857 Integer representing the position of the cursor in ``text``. Offset
2827 2858 is 0-based indexed.
2828 2859
2829 2860 Yields
2830 2861 ------
2831 2862 Completion
2832 2863
2833 2864 Notes
2834 2865 -----
2835 2866 The cursor on a text can either be seen as being "in between"
2836 2867 characters or "On" a character depending on the interface visible to
2837 2868 the user. For consistency, the cursor being "in between" characters X
2838 2869 and Y is equivalent to the cursor being "on" character Y, that is to say
2839 2870 the character the cursor is on is considered as being after the cursor.
2840 2871
2841 2872 Combining characters may span more than one position in the
2842 2873 text.
2843 2874
2844 2875 .. note::
2845 2876
2846 2877 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2847 2878 fake Completion token to distinguish completion returned by Jedi
2848 2879 and usual IPython completion.
2849 2880
2850 2881 .. note::
2851 2882
2852 2883 Completions are not completely deduplicated yet. If identical
2853 2884 completions are coming from different sources this function does not
2854 2885 ensure that each completion object will only be present once.
2855 2886 """
2856 2887 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2857 2888 "It may change without warnings. "
2858 2889 "Use in corresponding context manager.",
2859 2890 category=ProvisionalCompleterWarning, stacklevel=2)
2860 2891
2861 2892 seen = set()
2862 2893 profiler:Optional[cProfile.Profile]
2863 2894 try:
2864 2895 if self.profile_completions:
2865 2896 import cProfile
2866 2897 profiler = cProfile.Profile()
2867 2898 profiler.enable()
2868 2899 else:
2869 2900 profiler = None
2870 2901
2871 2902 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2872 2903 if c and (c in seen):
2873 2904 continue
2874 2905 yield c
2875 2906 seen.add(c)
2876 2907 except KeyboardInterrupt:
2877 2908 """if completions take too long and users send keyboard interrupt,
2878 2909 do not crash and return ASAP. """
2879 2910 pass
2880 2911 finally:
2881 2912 if profiler is not None:
2882 2913 profiler.disable()
2883 2914 ensure_dir_exists(self.profiler_output_dir)
2884 2915 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2885 2916 print("Writing profiler output to", output_path)
2886 2917 profiler.dump_stats(output_path)
2887 2918
2888 2919 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2889 2920 """
2890 2921 Core completion method. Same signature as :any:`completions`, with the
2891 2922 extra ``_timeout`` parameter (in seconds).
2892 2923
2893 2924 Computing jedi's completion ``.type`` can be quite expensive (it is a
2894 2925 lazy property) and can require some warm-up, more warm up than just
2895 2926 computing the ``name`` of a completion. The warm-up can be:
2896 2927
2897 2928 - Long warm-up the first time a module is encountered after
2898 2929 install/update: actually build parse/inference tree.
2899 2930
2900 2931 - first time the module is encountered in a session: load tree from
2901 2932 disk.
2902 2933
2903 2934 We don't want to block completions for tens of seconds so we give the
2904 2935 completer a "budget" of ``_timeout`` seconds per invocation to compute
2905 2936 completions types, the completions that have not yet been computed will
2906 2937 be marked as "unknown" an will have a chance to be computed next round
2907 2938 are things get cached.
2908 2939
2909 2940 Keep in mind that Jedi is not the only thing processing the completion, so
2910 2941 keep the timeout short-ish: if we take more than 0.3 seconds we still
2911 2942 have lots of processing to do.
2912 2943
2913 2944 """
2914 2945 deadline = time.monotonic() + _timeout
2915 2946
2916 2947 before = full_text[:offset]
2917 2948 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2918 2949
2919 2950 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2920 2951
2921 2952 def is_non_jedi_result(
2922 2953 result: MatcherResult, identifier: str
2923 2954 ) -> TypeGuard[SimpleMatcherResult]:
2924 2955 return identifier != jedi_matcher_id
2925 2956
2926 2957 results = self._complete(
2927 2958 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2928 2959 )
2929 2960
2930 2961 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2931 2962 identifier: result
2932 2963 for identifier, result in results.items()
2933 2964 if is_non_jedi_result(result, identifier)
2934 2965 }
2935 2966
2936 2967 jedi_matches = (
2937 2968 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2938 2969 if jedi_matcher_id in results
2939 2970 else ()
2940 2971 )
2941 2972
2942 2973 iter_jm = iter(jedi_matches)
2943 2974 if _timeout:
2944 2975 for jm in iter_jm:
2945 2976 try:
2946 2977 type_ = jm.type
2947 2978 except Exception:
2948 2979 if self.debug:
2949 2980 print("Error in Jedi getting type of ", jm)
2950 2981 type_ = None
2951 2982 delta = len(jm.name_with_symbols) - len(jm.complete)
2952 2983 if type_ == 'function':
2953 2984 signature = _make_signature(jm)
2954 2985 else:
2955 2986 signature = ''
2956 2987 yield Completion(start=offset - delta,
2957 2988 end=offset,
2958 2989 text=jm.name_with_symbols,
2959 2990 type=type_,
2960 2991 signature=signature,
2961 2992 _origin='jedi')
2962 2993
2963 2994 if time.monotonic() > deadline:
2964 2995 break
2965 2996
2966 2997 for jm in iter_jm:
2967 2998 delta = len(jm.name_with_symbols) - len(jm.complete)
2968 2999 yield Completion(
2969 3000 start=offset - delta,
2970 3001 end=offset,
2971 3002 text=jm.name_with_symbols,
2972 3003 type=_UNKNOWN_TYPE, # don't compute type for speed
2973 3004 _origin="jedi",
2974 3005 signature="",
2975 3006 )
2976 3007
2977 3008 # TODO:
2978 3009 # Suppress this, right now just for debug.
2979 3010 if jedi_matches and non_jedi_results and self.debug:
2980 3011 some_start_offset = before.rfind(
2981 3012 next(iter(non_jedi_results.values()))["matched_fragment"]
2982 3013 )
2983 3014 yield Completion(
2984 3015 start=some_start_offset,
2985 3016 end=offset,
2986 3017 text="--jedi/ipython--",
2987 3018 _origin="debug",
2988 3019 type="none",
2989 3020 signature="",
2990 3021 )
2991 3022
2992 3023 ordered: List[Completion] = []
2993 3024 sortable: List[Completion] = []
2994 3025
2995 3026 for origin, result in non_jedi_results.items():
2996 3027 matched_text = result["matched_fragment"]
2997 3028 start_offset = before.rfind(matched_text)
2998 3029 is_ordered = result.get("ordered", False)
2999 3030 container = ordered if is_ordered else sortable
3000 3031
3001 3032 # I'm unsure if this is always true, so let's assert and see if it
3002 3033 # crashes
3003 3034 assert before.endswith(matched_text)
3004 3035
3005 3036 for simple_completion in result["completions"]:
3006 3037 completion = Completion(
3007 3038 start=start_offset,
3008 3039 end=offset,
3009 3040 text=simple_completion.text,
3010 3041 _origin=origin,
3011 3042 signature="",
3012 3043 type=simple_completion.type or _UNKNOWN_TYPE,
3013 3044 )
3014 3045 container.append(completion)
3015 3046
3016 3047 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
3017 3048 :MATCHES_LIMIT
3018 3049 ]
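The per-invocation type budget described above boils down to iterating with a deadline and downgrading the remainder; a minimal sketch (`with_budget` and the `"<unknown>"` placeholder are hypothetical):

```python
import time

def with_budget(items, compute, budget_s):
    """Call `compute` per item until the deadline passes; emit the
    remaining items with a placeholder instead of blocking."""
    deadline = time.monotonic() + budget_s
    it = iter(items)
    results = []
    for x in it:
        results.append((x, compute(x)))
        if time.monotonic() > deadline:
            break
    for x in it:
        # past the budget: skip the expensive computation
        results.append((x, "<unknown>"))
    return results

print(with_budget([1, 2, 3], lambda x: x * x, -1.0))
```

With a negative budget only the first item is computed, mirroring how remaining Jedi matches are yielded with `_UNKNOWN_TYPE` once the deadline is hit.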
3019 3050
3020 3051 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
3021 3052 """Find completions for the given text and line context.
3022 3053
3023 3054 Note that both the text and the line_buffer are optional, but at least
3024 3055 one of them must be given.
3025 3056
3026 3057 Parameters
3027 3058 ----------
3028 3059 text : string, optional
3029 3060 Text to perform the completion on. If not given, the line buffer
3030 3061 is split using the instance's CompletionSplitter object.
3031 3062 line_buffer : string, optional
3032 3063 If not given, the completer attempts to obtain the current line
3033 3064 buffer via readline. This keyword allows clients which are
3034 3065 requesting for text completions in non-readline contexts to inform
3035 3066 the completer of the entire text.
3036 3067 cursor_pos : int, optional
3037 3068 Index of the cursor in the full line buffer. Should be provided by
3038 3069 remote frontends where kernel has no access to frontend state.
3039 3070
3040 3071 Returns
3041 3072 -------
3042 3073 Tuple of two items:
3043 3074 text : str
3044 3075 Text that was actually used in the completion.
3045 3076 matches : list
3046 3077 A list of completion matches.
3047 3078
3048 3079 Notes
3049 3080 -----
3050 3081 This API is likely to be deprecated and replaced by
3051 3082 :any:`IPCompleter.completions` in the future.
3052 3083
3053 3084 """
3054 3085 warnings.warn('`Completer.complete` is pending deprecation since '
3055 3086 'IPython 6.0 and will be replaced by `Completer.completions`.',
3056 3087 PendingDeprecationWarning)
3057 3088 # potential todo: fold the 3rd throw-away argument of _complete
3058 3089 # into the first two.
3059 3090 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3060 3091 # TODO: should we deprecate now, or does it stay?
3061 3092
3062 3093 results = self._complete(
3063 3094 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3064 3095 )
3065 3096
3066 3097 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3067 3098
3068 3099 return self._arrange_and_extract(
3069 3100 results,
3070 3101 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3071 3102 skip_matchers={jedi_matcher_id},
3072 3103 # this API does not support different start/end positions (fragments of token).
3073 3104 abort_if_offset_changes=True,
3074 3105 )
3075 3106
3076 3107 def _arrange_and_extract(
3077 3108 self,
3078 3109 results: Dict[str, MatcherResult],
3079 3110 skip_matchers: Set[str],
3080 3111 abort_if_offset_changes: bool,
3081 3112 ):
3082 3113 sortable: List[AnyMatcherCompletion] = []
3083 3114 ordered: List[AnyMatcherCompletion] = []
3084 3115 most_recent_fragment = None
3085 3116 for identifier, result in results.items():
3086 3117 if identifier in skip_matchers:
3087 3118 continue
3088 3119 if not result["completions"]:
3089 3120 continue
3090 3121 if not most_recent_fragment:
3091 3122 most_recent_fragment = result["matched_fragment"]
3092 3123 if (
3093 3124 abort_if_offset_changes
3094 3125 and result["matched_fragment"] != most_recent_fragment
3095 3126 ):
3096 3127 break
3097 3128 if result.get("ordered", False):
3098 3129 ordered.extend(result["completions"])
3099 3130 else:
3100 3131 sortable.extend(result["completions"])
3101 3132
3102 3133 if not most_recent_fragment:
3103 3134 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3104 3135
3105 3136 return most_recent_fragment, [
3106 3137 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3107 3138 ]
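The arrangement above can be sketched over plain dicts. Hedged: the real results carry `SimpleCompletion` objects and a dedicated sorting key; plain `sorted` here is a simplification.

```python
def arrange_and_extract(results, skip, abort_if_offset_changes):
    """Split matcher results into pre-ordered and sortable buckets,
    stopping early if a matcher completed a different fragment."""
    ordered, sortable = [], []
    fragment = None
    for ident, res in results.items():
        if ident in skip or not res["completions"]:
            continue
        if fragment is None:
            fragment = res["matched_fragment"]
        if abort_if_offset_changes and res["matched_fragment"] != fragment:
            break
        bucket = ordered if res.get("ordered", False) else sortable
        bucket.extend(res["completions"])
    return fragment or "", ordered + sorted(sortable)

results = {
    "m1": {"completions": ["z", "x"], "matched_fragment": "fr", "ordered": True},
    "m2": {"completions": ["b", "a"], "matched_fragment": "fr"},
}
print(arrange_and_extract(results, set(), True))
```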
3108 3139
3109 3140 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3110 3141 full_text=None) -> _CompleteResult:
3111 3142 """
3112 3143 Like complete but can also return raw jedi completions, as well as the
3113 3144 origin of the completion text. This could (and should) be made much
3114 3145 cleaner but that will be simpler once we drop the old (and stateful)
3115 3146 :any:`complete` API.
3116 3147
3117 3148 With the current provisional API, cursor_pos acts (depending on the
3118 3149 caller) either as the offset in ``text`` or ``line_buffer``, or as the
3119 3150 ``column`` when passing multiline strings; this could/should be renamed
3120 3151 but would add extra noise.
3121 3152
3122 3153 Parameters
3123 3154 ----------
3124 3155 cursor_line
3125 3156 Index of the line the cursor is on. 0 indexed.
3126 3157 cursor_pos
3127 3158 Position of the cursor in the current line/line_buffer/text. 0
3128 3159 indexed.
3129 3160 line_buffer : optional, str
3130 3161 The current line the cursor is in; this is mostly for legacy
3131 3162 reasons, as readline could only give us the single current line.
3132 3163 Prefer `full_text`.
3133 3164 text : str
3134 3165 The current "token" the cursor is in, mostly also for historical
3135 3166 reasons, as the completer would trigger only after the current line
3136 3167 was parsed.
3137 3168 full_text : str
3138 3169 Full text of the current cell.
3139 3170
3140 3171 Returns
3141 3172 -------
3142 3173 An ordered dictionary where keys are identifiers of completion
3143 3174 matchers and values are ``MatcherResult``s.
3144 3175 """
3145 3176
3146 3177 # if the cursor position isn't given, the only sane assumption we can
3147 3178 # make is that it's at the end of the line (the common case)
3148 3179 if cursor_pos is None:
3149 3180 cursor_pos = len(line_buffer) if text is None else len(text)
3150 3181
3151 3182 if self.use_main_ns:
3152 3183 self.namespace = __main__.__dict__
3153 3184
3154 3185 # if text is either None or an empty string, rely on the line buffer
3155 3186 if (not line_buffer) and full_text:
3156 3187 line_buffer = full_text.split('\n')[cursor_line]
3157 3188 if not text: # issue #11508: check line_buffer before calling split_line
3158 3189 text = (
3159 3190 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3160 3191 )
3161 3192
3162 3193 # If no line buffer is given, assume the input text is all there was
3163 3194 if line_buffer is None:
3164 3195 line_buffer = text
3165 3196
3166 3197 # deprecated - do not use `line_buffer` in new code.
3167 3198 self.line_buffer = line_buffer
3168 3199 self.text_until_cursor = self.line_buffer[:cursor_pos]
3169 3200
3170 3201 if not full_text:
3171 3202 full_text = line_buffer
3172 3203
3173 3204 context = CompletionContext(
3174 3205 full_text=full_text,
3175 3206 cursor_position=cursor_pos,
3176 3207 cursor_line=cursor_line,
3177 3208 token=text,
3178 3209 limit=MATCHES_LIMIT,
3179 3210 )
3180 3211
3181 3212 # Start with a clean slate of completions
3182 3213 results: Dict[str, MatcherResult] = {}
3183 3214
3184 3215 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3185 3216
3186 3217 suppressed_matchers: Set[str] = set()
3187 3218
3188 3219 matchers = {
3189 3220 _get_matcher_id(matcher): matcher
3190 3221 for matcher in sorted(
3191 3222 self.matchers, key=_get_matcher_priority, reverse=True
3192 3223 )
3193 3224 }
3194 3225
3195 3226 for matcher_id, matcher in matchers.items():
3196 3227 matcher_id = _get_matcher_id(matcher)
3197 3228
3198 3229 if matcher_id in self.disable_matchers:
3199 3230 continue
3200 3231
3201 3232 if matcher_id in results:
3202 3233 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3203 3234
3204 3235 if matcher_id in suppressed_matchers:
3205 3236 continue
3206 3237
3207 3238 result: MatcherResult
3208 3239 try:
3209 3240 if _is_matcher_v1(matcher):
3210 3241 result = _convert_matcher_v1_result_to_v2(
3211 3242 matcher(text), type=_UNKNOWN_TYPE
3212 3243 )
3213 3244 elif _is_matcher_v2(matcher):
3214 3245 result = matcher(context)
3215 3246 else:
3216 3247 api_version = _get_matcher_api_version(matcher)
3217 3248 raise ValueError(f"Unsupported API version {api_version}")
3218 except:
3249 except BaseException:
3219 3250 # Show the ugly traceback if the matcher causes an
3220 3251 # exception, but do NOT crash the kernel!
3221 3252 sys.excepthook(*sys.exc_info())
3222 3253 continue
3223 3254
3224 3255 # set default value for matched fragment if suffix was not selected.
3225 3256 result["matched_fragment"] = result.get("matched_fragment", context.token)
3226 3257
3227 3258 if not suppressed_matchers:
3228 3259 suppression_recommended: Union[bool, Set[str]] = result.get(
3229 3260 "suppress", False
3230 3261 )
3231 3262
3232 3263 suppression_config = (
3233 3264 self.suppress_competing_matchers.get(matcher_id, None)
3234 3265 if isinstance(self.suppress_competing_matchers, dict)
3235 3266 else self.suppress_competing_matchers
3236 3267 )
3237 3268 should_suppress = (
3238 3269 (suppression_config is True)
3239 3270 or (suppression_recommended and (suppression_config is not False))
3240 3271 ) and has_any_completions(result)
3241 3272
3242 3273 if should_suppress:
3243 3274 suppression_exceptions: Set[str] = result.get(
3244 3275 "do_not_suppress", set()
3245 3276 )
3246 3277 if isinstance(suppression_recommended, Iterable):
3247 3278 to_suppress = set(suppression_recommended)
3248 3279 else:
3249 3280 to_suppress = set(matchers)
3250 3281 suppressed_matchers = to_suppress - suppression_exceptions
3251 3282
3252 3283 new_results = {}
3253 3284 for previous_matcher_id, previous_result in results.items():
3254 3285 if previous_matcher_id not in suppressed_matchers:
3255 3286 new_results[previous_matcher_id] = previous_result
3256 3287 results = new_results
3257 3288
3258 3289 results[matcher_id] = result
3259 3290
3260 3291 _, matches = self._arrange_and_extract(
3261 3292 results,
3262 3293 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3263 3294 # if it was omission, we can remove the filtering step, otherwise remove this comment.
3264 3295 skip_matchers={jedi_matcher_id},
3265 3296 abort_if_offset_changes=False,
3266 3297 )
3267 3298
3268 3299 # populate legacy stateful API
3269 3300 self.matches = matches
3270 3301
3271 3302 return results
3272 3303
3273 3304 @staticmethod
3274 3305 def _deduplicate(
3275 3306 matches: Sequence[AnyCompletion],
3276 3307 ) -> Iterable[AnyCompletion]:
3277 3308 filtered_matches: Dict[str, AnyCompletion] = {}
3278 3309 for match in matches:
3279 3310 text = match.text
3280 3311 if (
3281 3312 text not in filtered_matches
3282 3313 or filtered_matches[text].type == _UNKNOWN_TYPE
3283 3314 ):
3284 3315 filtered_matches[text] = match
3285 3316
3286 3317 return filtered_matches.values()
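The prefer-known-type deduplication can be shown on (text, type) pairs; a sketch, since the real method works on completion objects and `_UNKNOWN_TYPE`:

```python
UNKNOWN = "<unknown>"

def deduplicate(matches):
    """Keep one entry per text, letting a typed entry replace an
    earlier untyped one while preserving first-seen order."""
    filtered = {}
    for text, type_ in matches:
        if text not in filtered or filtered[text][1] == UNKNOWN:
            filtered[text] = (text, type_)
    return list(filtered.values())

print(deduplicate([("a", UNKNOWN), ("a", "function"), ("b", "module")]))
```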
3287 3318
3288 3319 @staticmethod
3289 3320 def _sort(matches: Sequence[AnyCompletion]):
3290 3321 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3291 3322
3292 3323 @context_matcher()
3293 3324 def fwd_unicode_matcher(self, context: CompletionContext):
3294 3325 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3295 3326 # TODO: use `context.limit` to terminate early once we matched the maximum
3296 3327 # number that will be used downstream; can be added as an optional to
3297 3328 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3298 3329 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3299 3330 return _convert_matcher_v1_result_to_v2(
3300 3331 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3301 3332 )
3302 3333
3303 3334 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3304 3335 """
3305 3336 Forward match a string starting with a backslash with a list of
3306 3337 potential Unicode completions.
3307 3338
3308 3339 Will compute list of Unicode character names on first call and cache it.
3309 3340
3310 3341 .. deprecated:: 8.6
3311 3342 You can use :meth:`fwd_unicode_matcher` instead.
3312 3343
3313 3344 Returns
3314 3345 -------
3315 3346 A tuple with:
3316 3347 - matched text (empty if no matches)
3317 3348 - list of potential completions (empty tuple if none)
3318 3349 """
3319 3350 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3320 3351 # We could do a faster match using a Trie.
3321 3352
3322 3353 # Using pygtrie the following seems to work:
3323 3354
3324 3355 # s = PrefixSet()
3325 3356
3326 3357 # for c in range(0,0x10FFFF + 1):
3327 3358 # try:
3328 3359 # s.add(unicodedata.name(chr(c)))
3329 3360 # except ValueError:
3330 3361 # pass
3331 3362 # [''.join(k) for k in s.iter(prefix)]
3332 3363
3333 3364 # But it needs to be timed and adds an extra dependency.
3334 3365
3335 3366 slashpos = text.rfind('\\')
3336 3367 # if text starts with slash
3337 3368 if slashpos > -1:
3338 3369 # PERF: It's important that we don't access self._unicode_names
3339 3370 # until we're inside this if-block. _unicode_names is lazily
3340 3371 # initialized, and it takes a user-noticeable amount of time to
3341 3372 # initialize it, so we don't want to initialize it unless we're
3342 3373 # actually going to use it.
3343 3374 s = text[slashpos + 1 :]
3344 3375 sup = s.upper()
3345 3376 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3346 3377 if candidates:
3347 3378 return s, candidates
3348 3379 candidates = [x for x in self.unicode_names if sup in x]
3349 3380 if candidates:
3350 3381 return s, candidates
3351 3382 splitsup = sup.split(" ")
3352 3383 candidates = [
3353 3384 x for x in self.unicode_names if all(u in x for u in splitsup)
3354 3385 ]
3355 3386 if candidates:
3356 3387 return s, candidates
3357 3388
3358 3389 return "", ()
3359 3390
3360 3391 # if text does not start with slash
3361 3392 else:
3362 3393 return '', ()
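The three search stages above (prefix, then substring, then all-words-present) can be condensed into one loop; a sketch over a caller-supplied name list rather than the lazily built `unicode_names`:

```python
def fwd_match(names, text):
    """Try prefix, then substring, then all-words-present matching
    of the upper-cased fragment after the last backslash."""
    slashpos = text.rfind('\\')
    if slashpos == -1:
        return '', ()
    s = text[slashpos + 1:]
    sup = s.upper()
    stages = (
        lambda x: x.startswith(sup),
        lambda x: sup in x,
        lambda x: all(u in x for u in sup.split(' ')),
    )
    for pred in stages:
        candidates = [x for x in names if pred(x)]
        if candidates:
            return s, candidates
    return '', ()

names = ["GREEK SMALL LETTER ALPHA", "GREEK SMALL LETTER BETA", "LATIN SMALL LETTER A"]
print(fwd_match(names, "\\GREEK"))
```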
3363 3394
3364 3395 @property
3365 3396 def unicode_names(self) -> List[str]:
3366 3397 """List of names of unicode code points that can be completed.
3367 3398
3368 3399 The list is lazily initialized on first access.
3369 3400 """
3370 3401 if self._unicode_names is None:
3377 3408 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3378 3409
3379 3410 return self._unicode_names
3380 3411
3381 3412 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3382 3413 names = []
3383 3414 for start, stop in ranges:
3384 3415 for c in range(start, stop):
3385 3416 try:
3386 3417 names.append(unicodedata.name(chr(c)))
3387 3418 except ValueError:
3388 3419 pass
3389 3420 return names
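The backslash-matching code above tries three tiers in order: prefix match, then substring match, then all space-separated words present. A self-contained sketch of that logic (a simplified illustration, not the module's API; `unicode_name_candidates` is a hypothetical name):

```python
import unicodedata

def unicode_name_candidates(text, names):
    """Three-tier match over uppercase unicode names: prefix,
    then substring, then all space-separated words present."""
    slashpos = text.rfind("\\")
    if slashpos == -1:  # no backslash: nothing to complete
        return "", ()
    s = text[slashpos + 1:]
    sup = s.upper()
    tiers = (
        lambda x: x.startswith(sup),                    # 1: prefix
        lambda x: sup in x,                             # 2: substring
        lambda x: all(u in x for u in sup.split(" ")),  # 3: all words
    )
    for predicate in tiers:
        candidates = [x for x in names if predicate(x)]
        if candidates:
            return s, candidates
    return "", ()

# Build names for the Greek block only, instead of all of unicode.
names = []
for c in range(0x0370, 0x0400):
    try:
        names.append(unicodedata.name(chr(c)))
    except ValueError:
        pass  # unassigned code point

print(unicode_name_candidates("\\GREEK SMALL LETTER A", names))
```

Returning early from the first non-empty tier means a cheap prefix scan usually short-circuits the two more expensive passes.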
@@ -1,1769 +1,1819
1 1 # encoding: utf-8
2 2 """Tests for the IPython tab-completion machinery."""
3 3
4 4 # Copyright (c) IPython Development Team.
5 5 # Distributed under the terms of the Modified BSD License.
6 6
7 7 import os
8 8 import pytest
9 9 import sys
10 10 import textwrap
11 11 import unittest
12 import random
12 13
13 14 from importlib.metadata import version
14 15
15
16 16 from contextlib import contextmanager
17 17
18 18 from traitlets.config.loader import Config
19 19 from IPython import get_ipython
20 20 from IPython.core import completer
21 21 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
22 22 from IPython.utils.generics import complete_object
23 23 from IPython.testing import decorators as dec
24 from IPython.core.latex_symbols import latex_symbols
24 25
25 26 from IPython.core.completer import (
26 27 Completion,
27 28 provisionalcompleter,
28 29 match_dict_keys,
29 30 _deduplicate_completions,
30 31 _match_number_in_dict_key_prefix,
31 32 completion_matcher,
32 33 SimpleCompletion,
33 34 CompletionContext,
35 _unicode_name_compute,
36 _UNICODE_RANGES,
34 37 )
35 38
36 39 from packaging.version import parse
37 40
38 41
42 @contextmanager
43 def jedi_status(status: bool):
44 completer = get_ipython().Completer
45 try:
46 old = completer.use_jedi
47 completer.use_jedi = status
48 yield
49 finally:
50 completer.use_jedi = old
51
52
39 53 # -----------------------------------------------------------------------------
40 54 # Test functions
41 55 # -----------------------------------------------------------------------------
42 56
43 57
44 58 def recompute_unicode_ranges():
45 59 """
46 60 Utility to recompute the largest range of unicode code points without any named characters.
47 61
48 62 Use it to recompute the gap in the global _UNICODE_RANGES of completer.py.
49 63 """
50 64 import itertools
51 65 import unicodedata
52 66
53 67 valid = []
54 68 for c in range(0, 0x10FFFF + 1):
55 69 try:
56 70 unicodedata.name(chr(c))
57 71 except ValueError:
58 72 continue
59 73 valid.append(c)
60 74
61 75 def ranges(i):
62 76 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
63 77 b = list(b)
64 78 yield b[0][1], b[-1][1]
65 79
66 80 rg = list(ranges(valid))
67 81 lens = []
68 82 gap_lens = []
69 pstart, pstop = 0, 0
83 _pstart, pstop = 0, 0
70 84 for start, stop in rg:
71 85 lens.append(stop - start)
72 86 gap_lens.append(
73 87 (
74 88 start - pstop,
75 89 hex(pstop + 1),
76 90 hex(start),
77 91 f"{round((start - pstop)/0xe01f0*100)}%",
78 92 )
79 93 )
80 pstart, pstop = start, stop
94 _pstart, pstop = start, stop
81 95
82 96 return sorted(gap_lens)[-1]
83 97
84 98
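The `ranges` helper inside `recompute_unicode_ranges` uses a classic itertools idiom: within a run of consecutive integers, `value - index` is constant, so grouping on that difference isolates each run. A minimal standalone sketch:

```python
import itertools

def consecutive_ranges(values):
    # Within a run of consecutive integers, value - index is constant,
    # so groupby on that difference splits the sequence into runs.
    for _, group in itertools.groupby(
        enumerate(values), lambda pair: pair[1] - pair[0]
    ):
        group = list(group)
        yield group[0][1], group[-1][1]  # (first, last) of the run

print(list(consecutive_ranges([1, 2, 3, 7, 8, 20])))  # -> [(1, 3), (7, 8), (20, 20)]
```

The input must be sorted and duplicate-free for the difference key to stay constant within a run, which holds here since code points are appended in increasing order.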
85 99 def test_unicode_range():
86 100 """
87 101 Test that the ranges we test for unicode names give the same number of
88 102 results as testing the full range.
89 103 """
90 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
91 104
92 105 expected_list = _unicode_name_compute([(0, 0x110000)])
93 106 test = _unicode_name_compute(_UNICODE_RANGES)
94 107 len_exp = len(expected_list)
95 108 len_test = len(test)
96 109
97 110 # do not inline the len() or, on error, pytest will try to print the
98 111 # 130,000+ elements.
99 112 message = None
100 113 if len_exp != len_test or len_exp > 131808:
101 114 size, start, stop, prct = recompute_unicode_ranges()
102 115 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
103 116 likely due to a new release of Python. We've found that the biggest gap
104 117 in unicode characters has been reduced in size to {size} characters
105 118 ({prct}), from {start} to {stop}. In completer.py, likely update to
106 119
107 120 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
108 121
109 122 And update the assertion below to use
110 123
111 124 len_exp <= {len_exp}
112 125 """
113 126 assert len_exp == len_test, message
114 127
115 128 # fail if new unicode symbols have been added.
116 129 assert len_exp <= 143668, message
117 130
118 131
119 132 @contextmanager
120 133 def greedy_completion():
121 134 ip = get_ipython()
122 135 greedy_original = ip.Completer.greedy
123 136 try:
124 137 ip.Completer.greedy = True
125 138 yield
126 139 finally:
127 140 ip.Completer.greedy = greedy_original
128 141
129 142
130 143 @contextmanager
131 144 def evaluation_policy(evaluation: str):
132 145 ip = get_ipython()
133 146 evaluation_original = ip.Completer.evaluation
134 147 try:
135 148 ip.Completer.evaluation = evaluation
136 149 yield
137 150 finally:
138 151 ip.Completer.evaluation = evaluation_original
139 152
140 153
141 154 @contextmanager
142 155 def custom_matchers(matchers):
143 156 ip = get_ipython()
144 157 try:
145 158 ip.Completer.custom_matchers.extend(matchers)
146 159 yield
147 160 finally:
148 161 ip.Completer.custom_matchers.clear()
149 162
150 163
151 def test_protect_filename():
152 if sys.platform == "win32":
153 pairs = [
154 ("abc", "abc"),
155 (" abc", '" abc"'),
156 ("a bc", '"a bc"'),
157 ("a bc", '"a bc"'),
158 (" bc", '" bc"'),
159 ]
160 else:
161 pairs = [
162 ("abc", "abc"),
163 (" abc", r"\ abc"),
164 ("a bc", r"a\ bc"),
165 ("a bc", r"a\ \ bc"),
166 (" bc", r"\ \ bc"),
167 # On posix, we also protect parens and other special characters.
168 ("a(bc", r"a\(bc"),
169 ("a)bc", r"a\)bc"),
170 ("a( )bc", r"a\(\ \)bc"),
171 ("a[1]bc", r"a\[1\]bc"),
172 ("a{1}bc", r"a\{1\}bc"),
173 ("a#bc", r"a\#bc"),
174 ("a?bc", r"a\?bc"),
175 ("a=bc", r"a\=bc"),
176 ("a\\bc", r"a\\bc"),
177 ("a|bc", r"a\|bc"),
178 ("a;bc", r"a\;bc"),
179 ("a:bc", r"a\:bc"),
180 ("a'bc", r"a\'bc"),
181 ("a*bc", r"a\*bc"),
182 ('a"bc', r"a\"bc"),
183 ("a^bc", r"a\^bc"),
184 ("a&bc", r"a\&bc"),
185 ]
186 # run the actual tests
187 for s1, s2 in pairs:
188 s1p = completer.protect_filename(s1)
189 assert s1p == s2
164 if sys.platform == "win32":
165 pairs = [
166 ("abc", "abc"),
167 (" abc", '" abc"'),
168 ("a bc", '"a bc"'),
169 ("a bc", '"a bc"'),
170 (" bc", '" bc"'),
171 ]
172 else:
173 pairs = [
174 ("abc", "abc"),
175 (" abc", r"\ abc"),
176 ("a bc", r"a\ bc"),
177 ("a bc", r"a\ \ bc"),
178 (" bc", r"\ \ bc"),
179 # On posix, we also protect parens and other special characters.
180 ("a(bc", r"a\(bc"),
181 ("a)bc", r"a\)bc"),
182 ("a( )bc", r"a\(\ \)bc"),
183 ("a[1]bc", r"a\[1\]bc"),
184 ("a{1}bc", r"a\{1\}bc"),
185 ("a#bc", r"a\#bc"),
186 ("a?bc", r"a\?bc"),
187 ("a=bc", r"a\=bc"),
188 ("a\\bc", r"a\\bc"),
189 ("a|bc", r"a\|bc"),
190 ("a;bc", r"a\;bc"),
191 ("a:bc", r"a\:bc"),
192 ("a'bc", r"a\'bc"),
193 ("a*bc", r"a\*bc"),
194 ('a"bc', r"a\"bc"),
195 ("a^bc", r"a\^bc"),
196 ("a&bc", r"a\&bc"),
197 ]
198
199
200 @pytest.mark.parametrize("s1,expected", pairs)
201 def test_protect_filename(s1, expected):
202 assert completer.protect_filename(s1) == expected
190 203
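For the posix pairs above, the escaping rule is uniform: prefix each shell-special character with a backslash. A rough standalone approximation (not IPython's actual `protect_filename`; the character set is assumed from the test pairs, and `protect_filename_sketch` is a hypothetical name):

```python
# Characters the posix pairs above expect to be escaped (assumed set).
SPECIALS = " ()[]{}#?=\\|;:'\"*^&"

def protect_filename_sketch(s):
    # Prefix every shell-special character with a backslash (posix branch only).
    return "".join("\\" + c if c in SPECIALS else c for c in s)

print(protect_filename_sketch("a( )bc"))  # -> a\(\ \)bc
```

The Windows branch instead wraps the whole name in double quotes when it contains a space, which is why its pairs look so different.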
191 204
192 205 def check_line_split(splitter, test_specs):
193 206 for part1, part2, split in test_specs:
194 207 cursor_pos = len(part1)
195 208 line = part1 + part2
196 209 out = splitter.split_line(line, cursor_pos)
197 210 assert out == split
198 211
199 212 def test_line_split():
200 213 """Basic line splitter test with default specs."""
201 214 sp = completer.CompletionSplitter()
202 215 # The format of the test specs is: part1, part2, expected answer. Parts 1
203 216 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
204 217 # was at the end of part1. So an empty part2 represents someone hitting
205 218 # tab at the end of the line, the most common case.
206 219 t = [
207 220 ("run some/script", "", "some/script"),
208 221 ("run scripts/er", "ror.py foo", "scripts/er"),
209 222 ("echo $HOM", "", "HOM"),
210 223 ("print sys.pa", "", "sys.pa"),
211 224 ("print(sys.pa", "", "sys.pa"),
212 225 ("execfile('scripts/er", "", "scripts/er"),
213 226 ("a[x.", "", "x."),
214 227 ("a[x.", "y", "x."),
215 228 ('cd "some_file/', "", "some_file/"),
216 229 ]
217 230 check_line_split(sp, t)
218 231 # Ensure splitting works OK with unicode by re-running the tests with
219 232 # all inputs turned into unicode
220 233 check_line_split(sp, [map(str, p) for p in t])
221 234
222 235
223 236 class NamedInstanceClass:
224 237 instances = {}
225 238
226 239 def __init__(self, name):
227 240 self.instances[name] = self
228 241
229 242 @classmethod
230 243 def _ipython_key_completions_(cls):
231 244 return cls.instances.keys()
232 245
233 246
234 247 class KeyCompletable:
235 248 def __init__(self, things=()):
236 249 self.things = things
237 250
238 251 def _ipython_key_completions_(self):
239 252 return list(self.things)
240 253
241 254
242 255 class TestCompleter(unittest.TestCase):
243 256 def setUp(self):
244 257 """
245 258 We want to silence all PendingDeprecationWarning when testing the completer
246 259 """
247 260 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
248 261 self._assertwarns.__enter__()
249 262
250 263 def tearDown(self):
251 264 try:
252 265 self._assertwarns.__exit__(None, None, None)
253 266 except AssertionError:
254 267 pass
255 268
256 269 def test_custom_completion_error(self):
257 270 """Test that errors from custom attribute completers are silenced."""
258 271 ip = get_ipython()
259 272
260 273 class A:
261 274 pass
262 275
263 276 ip.user_ns["x"] = A()
264 277
265 278 @complete_object.register(A)
266 279 def complete_A(a, existing_completions):
267 280 raise TypeError("this should be silenced")
268 281
269 282 ip.complete("x.")
270 283
271 284 def test_custom_completion_ordering(self):
272 285 """Test that errors from custom attribute completers are silenced."""
273 286 ip = get_ipython()
274 287
275 288 _, matches = ip.complete('in')
276 289 assert matches.index('input') < matches.index('int')
277 290
278 291 def complete_example(a):
279 292 return ['example2', 'example1']
280 293
281 294 ip.Completer.custom_completers.add_re('ex*', complete_example)
282 295 _, matches = ip.complete('ex')
283 296 assert matches.index('example2') < matches.index('example1')
284 297
285 298 def test_unicode_completions(self):
286 299 ip = get_ipython()
287 300 # Some strings that trigger different types of completion. Check them both
288 301 # in str and unicode forms
289 302 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
290 303 for t in s + list(map(str, s)):
291 304 # We don't need to check exact completion values (they may change
292 305 # depending on the state of the namespace, but at least no exceptions
293 306 # should be thrown and the return value should be a pair of text, list
294 307 # values.
295 308 text, matches = ip.complete(t)
296 309 self.assertIsInstance(text, str)
297 310 self.assertIsInstance(matches, list)
298 311
299 312 def test_latex_completions(self):
300 from IPython.core.latex_symbols import latex_symbols
301 import random
302 313
303 314 ip = get_ipython()
304 315 # Test some random unicode symbols
305 316 keys = random.sample(sorted(latex_symbols), 10)
306 317 for k in keys:
307 318 text, matches = ip.complete(k)
308 319 self.assertEqual(text, k)
309 320 self.assertEqual(matches, [latex_symbols[k]])
310 321 # Test a more complex line
311 322 text, matches = ip.complete("print(\\alpha")
312 323 self.assertEqual(text, "\\alpha")
313 324 self.assertEqual(matches[0], latex_symbols["\\alpha"])
314 325 # Test multiple matching latex symbols
315 326 text, matches = ip.complete("\\al")
316 327 self.assertIn("\\alpha", matches)
317 328 self.assertIn("\\aleph", matches)
318 329
319 330 def test_latex_no_results(self):
320 331 """
321 332 Forward latex completion should return nothing in either field if nothing is found.
322 333 """
323 334 ip = get_ipython()
324 335 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
325 336 self.assertEqual(text, "")
326 337 self.assertEqual(matches, ())
327 338
328 339 def test_back_latex_completion(self):
329 340 ip = get_ipython()
330 341
331 342 # do not return more than 1 matches for \beta, only the latex one.
332 343 name, matches = ip.complete("\\Ξ²")
333 344 self.assertEqual(matches, ["\\beta"])
334 345
335 346 def test_back_unicode_completion(self):
336 347 ip = get_ipython()
337 348
338 349 name, matches = ip.complete("\\β…€")
339 350 self.assertEqual(matches, ["\\ROMAN NUMERAL FIVE"])
340 351
341 352 def test_forward_unicode_completion(self):
342 353 ip = get_ipython()
343 354
344 355 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
345 356 self.assertEqual(matches, ["β…€"]) # This is not a V
346 357 self.assertEqual(matches, ["\u2164"]) # same as above but explicit.
347 358
348 359 def test_delim_setting(self):
349 360 sp = completer.CompletionSplitter()
350 361 sp.delims = " "
351 362 self.assertEqual(sp.delims, " ")
352 363 self.assertEqual(sp._delim_expr, r"[\ ]")
353 364
354 365 def test_spaces(self):
355 366 """Test with only spaces as split chars."""
356 367 sp = completer.CompletionSplitter()
357 368 sp.delims = " "
358 369 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
359 370 check_line_split(sp, t)
360 371
361 372 def test_has_open_quotes1(self):
362 373 for s in ["'", "'''", "'hi' '"]:
363 374 self.assertEqual(completer.has_open_quotes(s), "'")
364 375
365 376 def test_has_open_quotes2(self):
366 377 for s in ['"', '"""', '"hi" "']:
367 378 self.assertEqual(completer.has_open_quotes(s), '"')
368 379
369 380 def test_has_open_quotes3(self):
370 381 for s in ["''", "''' '''", "'hi' 'ipython'"]:
371 382 self.assertFalse(completer.has_open_quotes(s))
372 383
373 384 def test_has_open_quotes4(self):
374 385 for s in ['""', '""" """', '"hi" "ipython"']:
375 386 self.assertFalse(completer.has_open_quotes(s))
376 387
377 388 @pytest.mark.xfail(
378 389 sys.platform == "win32", reason="abspath completions fail on Windows"
379 390 )
380 391 def test_abspath_file_completions(self):
381 392 ip = get_ipython()
382 393 with TemporaryDirectory() as tmpdir:
383 394 prefix = os.path.join(tmpdir, "foo")
384 395 suffixes = ["1", "2"]
385 396 names = [prefix + s for s in suffixes]
386 397 for n in names:
387 398 open(n, "w", encoding="utf-8").close()
388 399
389 400 # Check simple completion
390 401 c = ip.complete(prefix)[1]
391 402 self.assertEqual(c, names)
392 403
393 404 # Now check with a function call
394 405 cmd = 'a = f("%s' % prefix
395 406 c = ip.complete(prefix, cmd)[1]
396 407 comp = [prefix + s for s in suffixes]
397 408 self.assertEqual(c, comp)
398 409
399 410 def test_local_file_completions(self):
400 411 ip = get_ipython()
401 412 with TemporaryWorkingDirectory():
402 413 prefix = "./foo"
403 414 suffixes = ["1", "2"]
404 415 names = [prefix + s for s in suffixes]
405 416 for n in names:
406 417 open(n, "w", encoding="utf-8").close()
407 418
408 419 # Check simple completion
409 420 c = ip.complete(prefix)[1]
410 421 self.assertEqual(c, names)
411 422
412 423 # Now check with a function call
413 424 cmd = 'a = f("%s' % prefix
414 425 c = ip.complete(prefix, cmd)[1]
415 426 comp = {prefix + s for s in suffixes}
416 427 self.assertTrue(comp.issubset(set(c)))
417 428
418 429 def test_quoted_file_completions(self):
419 430 ip = get_ipython()
420 431
421 432 def _(text):
422 433 return ip.Completer._complete(
423 434 cursor_line=0, cursor_pos=len(text), full_text=text
424 435 )["IPCompleter.file_matcher"]["completions"]
425 436
426 437 with TemporaryWorkingDirectory():
427 438 name = "foo'bar"
428 439 open(name, "w", encoding="utf-8").close()
429 440
430 441 # Don't escape Windows
431 442 escaped = name if sys.platform == "win32" else "foo\\'bar"
432 443
433 444 # Single quote matches embedded single quote
434 445 c = _("open('foo")[0]
435 446 self.assertEqual(c.text, escaped)
436 447
437 448 # Double quote requires no escape
438 449 c = _('open("foo')[0]
439 450 self.assertEqual(c.text, name)
440 451
441 452 # No quote requires an escape
442 453 c = _("%ls foo")[0]
443 454 self.assertEqual(c.text, escaped)
444 455
445 456 @pytest.mark.xfail(
446 457 sys.version_info.releaselevel in ("alpha",),
447 458 reason="Parso does not yet parse 3.13",
448 459 )
449 460 def test_all_completions_dups(self):
450 461 """
451 462 Make sure the output of `IPCompleter.all_completions` does not have
452 463 duplicated prefixes.
453 464 """
454 465 ip = get_ipython()
455 466 c = ip.Completer
456 467 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
457 468 for jedi_status in [True, False]:
458 469 with provisionalcompleter():
459 470 ip.Completer.use_jedi = jedi_status
460 471 matches = c.all_completions("TestCl")
461 472 assert matches == ["TestClass"], (jedi_status, matches)
462 473 matches = c.all_completions("TestClass.")
463 474 assert len(matches) > 2, (jedi_status, matches)
464 475 matches = c.all_completions("TestClass.a")
465 476 if jedi_status:
466 477 assert matches == ["TestClass.a", "TestClass.a1"], jedi_status
467 478 else:
468 479 assert matches == [".a", ".a1"], jedi_status
469 480
470 481 @pytest.mark.xfail(
471 482 sys.version_info.releaselevel in ("alpha",),
472 483 reason="Parso does not yet parse 3.13",
473 484 )
474 485 def test_jedi(self):
475 486 """
476 487 A couple of issues we had with Jedi.
477 488 """
478 489 ip = get_ipython()
479 490
480 491 def _test_complete(reason, s, comp, start=None, end=None):
481 492 l = len(s)
482 493 start = start if start is not None else l
483 494 end = end if end is not None else l
484 495 with provisionalcompleter():
485 496 ip.Completer.use_jedi = True
486 497 completions = set(ip.Completer.completions(s, l))
487 498 ip.Completer.use_jedi = False
488 499 assert Completion(start, end, comp) in completions, reason
489 500
490 501 def _test_not_complete(reason, s, comp):
491 502 l = len(s)
492 503 with provisionalcompleter():
493 504 ip.Completer.use_jedi = True
494 505 completions = set(ip.Completer.completions(s, l))
495 506 ip.Completer.use_jedi = False
496 507 assert Completion(l, l, comp) not in completions, reason
497 508
498 509 import jedi
499 510
500 511 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
501 512 if jedi_version > (0, 10):
502 513 _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
503 514 _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
504 515 _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
505 516 _test_complete("cover duplicate completions", "im", "import", 0, 2)
506 517
507 518 _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")
508 519
509 520 @pytest.mark.xfail(
510 521 sys.version_info.releaselevel in ("alpha",),
511 522 reason="Parso does not yet parse 3.13",
512 523 )
513 524 def test_completion_have_signature(self):
514 525 """
515 526 Let's make sure jedi is capable of pulling out the signature of the function we are completing.
516 527 """
517 528 ip = get_ipython()
518 529 with provisionalcompleter():
519 530 ip.Completer.use_jedi = True
520 531 completions = ip.Completer.completions("ope", 3)
521 532 c = next(completions) # should be `open`
522 533 ip.Completer.use_jedi = False
523 534 assert "file" in c.signature, "Signature of function was not found by completer"
524 535 assert (
525 536 "encoding" in c.signature
526 537 ), "Signature of function was not found by completer"
527 538
528 539 @pytest.mark.xfail(
529 540 sys.version_info.releaselevel in ("alpha",),
530 541 reason="Parso does not yet parse 3.13",
531 542 )
532 543 def test_completions_have_type(self):
533 544 """
534 545 Let's make sure matchers provide the completion type.
535 546 """
536 547 ip = get_ipython()
537 548 with provisionalcompleter():
538 549 ip.Completer.use_jedi = False
539 550 completions = ip.Completer.completions("%tim", 3)
540 551 c = next(completions) # should be `%time` or similar
541 552 assert c.type == "magic", "Type of magic was not assigned by completer"
542 553
543 554 @pytest.mark.xfail(
544 555 parse(version("jedi")) <= parse("0.18.0"),
545 556 reason="Known failure on jedi<=0.18.0",
546 557 strict=True,
547 558 )
548 559 def test_deduplicate_completions(self):
549 560 """
550 561 Test that completions are correctly deduplicated (even if ranges are not the same)
551 562 """
552 563 ip = get_ipython()
553 564 ip.ex(
554 565 textwrap.dedent(
555 566 """
556 567 class Z:
557 568 zoo = 1
558 569 """
559 570 )
560 571 )
561 572 with provisionalcompleter():
562 573 ip.Completer.use_jedi = True
563 574 l = list(
564 575 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
565 576 )
566 577 ip.Completer.use_jedi = False
567 578
568 579 assert len(l) == 1, "Completions (Z.z<tab>) correctly deduplicate: %s " % l
569 580 assert l[0].text == "zoo" # and not `it.accumulate`
570 581
571 582 @pytest.mark.xfail(
572 583 sys.version_info.releaselevel in ("alpha",),
573 584 reason="Parso does not yet parse 3.13",
574 585 )
575 586 def test_greedy_completions(self):
576 587 """
577 588 Test the capability of the Greedy completer.
578 589
579 590 Most of the tests here do not really show off the greedy completer; as proof,
580 591 each of the inputs below now passes with Jedi. The greedy completer is capable of more.
581 592
582 593 See the :any:`test_dict_key_completion_contexts`
583 594
584 595 """
585 596 ip = get_ipython()
586 597 ip.ex("a=list(range(5))")
587 598 ip.ex("d = {'a b': str}")
588 599 _, c = ip.complete(".", line="a[0].")
589 600 self.assertFalse(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
590 601
591 602 def _(line, cursor_pos, expect, message, completion):
592 603 with greedy_completion(), provisionalcompleter():
593 604 ip.Completer.use_jedi = False
594 605 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
595 606 self.assertIn(expect, c, message % c)
596 607
597 608 ip.Completer.use_jedi = True
598 609 with provisionalcompleter():
599 610 completions = ip.Completer.completions(line, cursor_pos)
600 611 self.assertIn(completion, list(completions))
601 612
602 613 with provisionalcompleter():
603 614 _(
604 615 "a[0].",
605 616 5,
606 617 ".real",
607 618 "Should have completed on a[0].: %s",
608 619 Completion(5, 5, "real"),
609 620 )
610 621 _(
611 622 "a[0].r",
612 623 6,
613 624 ".real",
614 625 "Should have completed on a[0].r: %s",
615 626 Completion(5, 6, "real"),
616 627 )
617 628
618 629 _(
619 630 "a[0].from_",
620 631 10,
621 632 ".from_bytes",
622 633 "Should have completed on a[0].from_: %s",
623 634 Completion(5, 10, "from_bytes"),
624 635 )
625 636 _(
626 637 "assert str.star",
627 638 14,
628 639 ".startswith",
629 640 "Should have completed on `assert str.star`: %s",
630 641 Completion(11, 14, "startswith"),
631 642 )
632 643 _(
633 644 "d['a b'].str",
634 645 12,
635 646 ".strip",
636 647 "Should have completed on `d['a b'].str`: %s",
637 648 Completion(9, 12, "strip"),
638 649 )
639 650 _(
640 651 "a.app",
641 652 4,
642 653 ".append",
643 654 "Should have completed on `a.app`: %s",
644 655 Completion(2, 4, "append"),
645 656 )
646 657
647 658 def test_omit__names(self):
648 659 # also happens to test IPCompleter as a configurable
649 660 ip = get_ipython()
650 661 ip._hidden_attr = 1
651 662 ip._x = {}
652 663 c = ip.Completer
653 664 ip.ex("ip=get_ipython()")
654 665 cfg = Config()
655 666 cfg.IPCompleter.omit__names = 0
656 667 c.update_config(cfg)
657 668 with provisionalcompleter():
658 669 c.use_jedi = False
659 670 s, matches = c.complete("ip.")
660 671 self.assertIn(".__str__", matches)
661 672 self.assertIn("._hidden_attr", matches)
662 673
663 674 # c.use_jedi = True
664 675 # completions = set(c.completions('ip.', 3))
665 676 # self.assertIn(Completion(3, 3, '__str__'), completions)
666 677 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
667 678
668 679 cfg = Config()
669 680 cfg.IPCompleter.omit__names = 1
670 681 c.update_config(cfg)
671 682 with provisionalcompleter():
672 683 c.use_jedi = False
673 684 s, matches = c.complete("ip.")
674 685 self.assertNotIn(".__str__", matches)
675 686 # self.assertIn('ip._hidden_attr', matches)
676 687
677 688 # c.use_jedi = True
678 689 # completions = set(c.completions('ip.', 3))
679 690 # self.assertNotIn(Completion(3,3,'__str__'), completions)
680 691 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
681 692
682 693 cfg = Config()
683 694 cfg.IPCompleter.omit__names = 2
684 695 c.update_config(cfg)
685 696 with provisionalcompleter():
686 697 c.use_jedi = False
687 698 s, matches = c.complete("ip.")
688 699 self.assertNotIn(".__str__", matches)
689 700 self.assertNotIn("._hidden_attr", matches)
690 701
691 702 # c.use_jedi = True
692 703 # completions = set(c.completions('ip.', 3))
693 704 # self.assertNotIn(Completion(3,3,'__str__'), completions)
694 705 # self.assertNotIn(Completion(3,3, "_hidden_attr"), completions)
695 706
696 707 with provisionalcompleter():
697 708 c.use_jedi = False
698 709 s, matches = c.complete("ip._x.")
699 710 self.assertIn(".keys", matches)
700 711
701 712 # c.use_jedi = True
702 713 # completions = set(c.completions('ip._x.', 6))
703 714 # self.assertIn(Completion(6,6, "keys"), completions)
704 715
705 716 del ip._hidden_attr
706 717 del ip._x
707 718
708 719 def test_limit_to__all__False_ok(self):
709 720 """
710 721 limit_to__all__ is deprecated; once we remove it, this test can go away.
711 722 """
712 723 ip = get_ipython()
713 724 c = ip.Completer
714 725 c.use_jedi = False
715 726 ip.ex("class D: x=24")
716 727 ip.ex("d=D()")
717 728 cfg = Config()
718 729 cfg.IPCompleter.limit_to__all__ = False
719 730 c.update_config(cfg)
720 731 s, matches = c.complete("d.")
721 732 self.assertIn(".x", matches)
722 733
723 734 def test_get__all__entries_ok(self):
724 735 class A:
725 736 __all__ = ["x", 1]
726 737
727 738 words = completer.get__all__entries(A())
728 739 self.assertEqual(words, ["x"])
729 740
730 741 def test_get__all__entries_no__all__ok(self):
731 742 class A:
732 743 pass
733 744
734 745 words = completer.get__all__entries(A())
735 746 self.assertEqual(words, [])
736 747
737 748 def test_func_kw_completions(self):
738 749 ip = get_ipython()
739 750 c = ip.Completer
740 751 c.use_jedi = False
741 752 ip.ex("def myfunc(a=1,b=2): return a+b")
742 753 s, matches = c.complete(None, "myfunc(1,b")
743 754 self.assertIn("b=", matches)
744 755 # Simulate completing with cursor right after b (pos==10):
745 756 s, matches = c.complete(None, "myfunc(1,b)", 10)
746 757 self.assertIn("b=", matches)
747 758 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
748 759 self.assertIn("b=", matches)
749 760 # builtin function
750 761 s, matches = c.complete(None, "min(k, k")
751 762 self.assertIn("key=", matches)
752 763
753 764 def test_default_arguments_from_docstring(self):
754 765 ip = get_ipython()
755 766 c = ip.Completer
756 767 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
757 768 self.assertEqual(kwd, ["key"])
758 769 # with cython type etc
759 770 kwd = c._default_arguments_from_docstring(
760 771 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
761 772 )
762 773 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
763 774 # white spaces
764 775 kwd = c._default_arguments_from_docstring(
765 776 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
766 777 )
767 778 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
768 779
769 780 def test_line_magics(self):
770 781 ip = get_ipython()
771 782 c = ip.Completer
772 783 s, matches = c.complete(None, "lsmag")
773 784 self.assertIn("%lsmagic", matches)
774 785 s, matches = c.complete(None, "%lsmag")
775 786 self.assertIn("%lsmagic", matches)
776 787
777 788 def test_cell_magics(self):
778 789 from IPython.core.magic import register_cell_magic
779 790
780 791 @register_cell_magic
781 792 def _foo_cellm(line, cell):
782 793 pass
783 794
784 795 ip = get_ipython()
785 796 c = ip.Completer
786 797
787 798 s, matches = c.complete(None, "_foo_ce")
788 799 self.assertIn("%%_foo_cellm", matches)
789 800 s, matches = c.complete(None, "%%_foo_ce")
790 801 self.assertIn("%%_foo_cellm", matches)
791 802
792 803 def test_line_cell_magics(self):
793 804 from IPython.core.magic import register_line_cell_magic
794 805
795 806 @register_line_cell_magic
796 807 def _bar_cellm(line, cell):
797 808 pass
798 809
799 810 ip = get_ipython()
800 811 c = ip.Completer
801 812
802 813 # The policy here is trickier, see comments in completion code. The
803 814 # returned values depend on whether the user passes %% or not explicitly,
804 815 # and this will show a difference if the same name is both a line and cell
805 816 # magic.
806 817 s, matches = c.complete(None, "_bar_ce")
807 818 self.assertIn("%_bar_cellm", matches)
808 819 self.assertIn("%%_bar_cellm", matches)
809 820 s, matches = c.complete(None, "%_bar_ce")
810 821 self.assertIn("%_bar_cellm", matches)
811 822 self.assertIn("%%_bar_cellm", matches)
812 823 s, matches = c.complete(None, "%%_bar_ce")
813 824 self.assertNotIn("%_bar_cellm", matches)
814 825 self.assertIn("%%_bar_cellm", matches)
815 826
816 827 def test_magic_completion_order(self):
817 828 ip = get_ipython()
818 829 c = ip.Completer
819 830
820 831 # Test ordering of line and cell magics.
821 832 text, matches = c.complete("timeit")
822 833 self.assertEqual(matches, ["%timeit", "%%timeit"])
823 834
824 835 def test_magic_completion_shadowing(self):
825 836 ip = get_ipython()
826 837 c = ip.Completer
827 838 c.use_jedi = False
828 839
829 840 # Before importing matplotlib, %matplotlib magic should be the only option.
830 841 text, matches = c.complete("mat")
831 842 self.assertEqual(matches, ["%matplotlib"])
832 843
833 844 # The newly introduced name should shadow the magic.
834 845 ip.run_cell("matplotlib = 1")
835 846 text, matches = c.complete("mat")
836 847 self.assertEqual(matches, ["matplotlib"])
837 848
838 849 # After removing matplotlib from namespace, the magic should again be
839 850 # the only option.
840 851 del ip.user_ns["matplotlib"]
841 852 text, matches = c.complete("mat")
842 853 self.assertEqual(matches, ["%matplotlib"])
843 854
844 855 def test_magic_completion_shadowing_explicit(self):
845 856 """
846 857 If the user tries to complete a shadowed magic, an explicit % prefix should
847 858 still return the completions.
848 859 """
849 860 ip = get_ipython()
850 861 c = ip.Completer
851 862
852 863 # Before importing matplotlib, %matplotlib magic should be the only option.
853 864 text, matches = c.complete("%mat")
854 865 self.assertEqual(matches, ["%matplotlib"])
855 866
856 867 ip.run_cell("matplotlib = 1")
857 868
858 869 # Even with matplotlib in the namespace, the explicit % form should still
859 870 # be the only option.
860 871 text, matches = c.complete("%mat")
861 872 self.assertEqual(matches, ["%matplotlib"])
862 873
863 874 def test_magic_config(self):
864 875 ip = get_ipython()
865 876 c = ip.Completer
866 877
867 878 s, matches = c.complete(None, "conf")
868 879 self.assertIn("%config", matches)
869 880 s, matches = c.complete(None, "conf")
870 881 self.assertNotIn("AliasManager", matches)
871 882 s, matches = c.complete(None, "config ")
872 883 self.assertIn("AliasManager", matches)
873 884 s, matches = c.complete(None, "%config ")
874 885 self.assertIn("AliasManager", matches)
875 886 s, matches = c.complete(None, "config Ali")
876 887 self.assertListEqual(["AliasManager"], matches)
877 888 s, matches = c.complete(None, "%config Ali")
878 889 self.assertListEqual(["AliasManager"], matches)
879 890 s, matches = c.complete(None, "config AliasManager")
880 891 self.assertListEqual(["AliasManager"], matches)
881 892 s, matches = c.complete(None, "%config AliasManager")
882 893 self.assertListEqual(["AliasManager"], matches)
883 894 s, matches = c.complete(None, "config AliasManager.")
884 895 self.assertIn("AliasManager.default_aliases", matches)
885 896 s, matches = c.complete(None, "%config AliasManager.")
886 897 self.assertIn("AliasManager.default_aliases", matches)
887 898 s, matches = c.complete(None, "config AliasManager.de")
888 899 self.assertListEqual(["AliasManager.default_aliases"], matches)
889 900 s, matches = c.complete(None, "config AliasManager.de")
890 901 self.assertListEqual(["AliasManager.default_aliases"], matches)
891 902
892 903 def test_magic_color(self):
893 904 ip = get_ipython()
894 905 c = ip.Completer
895 906
896 907 s, matches = c.complete(None, "colo")
897 908 self.assertIn("%colors", matches)
898 909 s, matches = c.complete(None, "colo")
899 910 self.assertNotIn("NoColor", matches)
900 911 s, matches = c.complete(None, "%colors") # No trailing space
901 912 self.assertNotIn("NoColor", matches)
902 913 s, matches = c.complete(None, "colors ")
903 914 self.assertIn("NoColor", matches)
904 915 s, matches = c.complete(None, "%colors ")
905 916 self.assertIn("NoColor", matches)
906 917 s, matches = c.complete(None, "colors NoCo")
907 918 self.assertListEqual(["NoColor"], matches)
908 919 s, matches = c.complete(None, "%colors NoCo")
909 920 self.assertListEqual(["NoColor"], matches)
910 921
911 922 def test_match_dict_keys(self):
912 923 """
913 924 Test that match_dict_keys works on a couple of use cases, returns what is
914 925 expected, and does not crash.
915 926 """
916 927 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
917 928
918 929 def match(*args, **kwargs):
919 930 quote, offset, matches = match_dict_keys(*args, delims=delims, **kwargs)
920 931 return quote, offset, list(matches)
921 932
922 933 keys = ["foo", b"far"]
923 934 assert match(keys, "b'") == ("'", 2, ["far"])
924 935 assert match(keys, "b'f") == ("'", 2, ["far"])
925 936 assert match(keys, 'b"') == ('"', 2, ["far"])
926 937 assert match(keys, 'b"f') == ('"', 2, ["far"])
927 938
928 939 assert match(keys, "'") == ("'", 1, ["foo"])
929 940 assert match(keys, "'f") == ("'", 1, ["foo"])
930 941 assert match(keys, '"') == ('"', 1, ["foo"])
931 942 assert match(keys, '"f') == ('"', 1, ["foo"])
932 943
933 944 # Completion on first item of tuple
934 945 keys = [("foo", 1111), ("foo", 2222), (3333, "bar"), (3333, "test")]
935 946 assert match(keys, "'f") == ("'", 1, ["foo"])
936 947 assert match(keys, "33") == ("", 0, ["3333"])
937 948
938 949 # Completion on numbers
939 950 keys = [
940 951 0xDEADBEEF,
941 952 1111,
942 953 1234,
943 954 "1999",
944 955 0b10101,
945 956 22,
946 957 ] # 0xDEADBEEF = 3735928559; 0b10101 = 21
947 958 assert match(keys, "0xdead") == ("", 0, ["0xdeadbeef"])
948 959 assert match(keys, "1") == ("", 0, ["1111", "1234"])
949 960 assert match(keys, "2") == ("", 0, ["21", "22"])
950 961 assert match(keys, "0b101") == ("", 0, ["0b10101", "0b10110"])
951 962
953 964 # Should return no matches for variable names
953 964 assert match(keys, "a_variable") == ("", 0, [])
954 965
955 966 # Should pass over invalid literals
956 967 assert match(keys, "'' ''") == ("", 0, [])
957 968
958 969 def test_match_dict_keys_tuple(self):
959 970 """
960 971 Test that match_dict_keys called with an extra prefix works on a couple of
961 972 use cases, returns what is expected, and does not crash.
962 973 """
963 974 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
964 975
965 976 keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ('other', 'test')]
966 977
967 978 def match(*args, extra=None, **kwargs):
968 979 quote, offset, matches = match_dict_keys(
969 980 *args, delims=delims, extra_prefix=extra, **kwargs
970 981 )
971 982 return quote, offset, list(matches)
972 983
973 984 # Completion on first key == "foo"
974 985 assert match(keys, "'", extra=("foo",)) == ("'", 1, ["bar", "oof"])
975 986 assert match(keys, '"', extra=("foo",)) == ('"', 1, ["bar", "oof"])
976 987 assert match(keys, "'o", extra=("foo",)) == ("'", 1, ["oof"])
977 988 assert match(keys, '"o', extra=("foo",)) == ('"', 1, ["oof"])
978 989 assert match(keys, "b'", extra=("foo",)) == ("'", 2, ["bar"])
979 990 assert match(keys, 'b"', extra=("foo",)) == ('"', 2, ["bar"])
980 991 assert match(keys, "b'b", extra=("foo",)) == ("'", 2, ["bar"])
981 992 assert match(keys, 'b"b', extra=("foo",)) == ('"', 2, ["bar"])
982 993
983 994 # No Completion
984 995 assert match(keys, "'", extra=("no_foo",)) == ("'", 1, [])
985 996 assert match(keys, "'", extra=("fo",)) == ("'", 1, [])
986 997
987 998 keys = [("foo1", "foo2", "foo3", "foo4"), ("foo1", "foo2", "bar", "foo4")]
988 999 assert match(keys, "'foo", extra=("foo1",)) == ("'", 1, ["foo2"])
989 1000 assert match(keys, "'foo", extra=("foo1", "foo2")) == ("'", 1, ["foo3"])
990 1001 assert match(keys, "'foo", extra=("foo1", "foo2", "foo3")) == ("'", 1, ["foo4"])
991 1002 assert match(keys, "'foo", extra=("foo1", "foo2", "foo3", "foo4")) == (
992 1003 "'",
993 1004 1,
994 1005 [],
995 1006 )
996 1007
997 1008 keys = [("foo", 1111), ("foo", "2222"), (3333, "bar"), (3333, 4444)]
998 1009 assert match(keys, "'", extra=("foo",)) == ("'", 1, ["2222"])
999 1010 assert match(keys, "", extra=("foo",)) == ("", 0, ["1111", "'2222'"])
1000 1011 assert match(keys, "'", extra=(3333,)) == ("'", 1, ["bar"])
1001 1012 assert match(keys, "", extra=(3333,)) == ("", 0, ["'bar'", "4444"])
1002 1013 assert match(keys, "'", extra=("3333",)) == ("'", 1, [])
1003 1014 assert match(keys, "33") == ("", 0, ["3333"])
1004 1015
1005 1016 def test_dict_key_completion_closures(self):
1006 1017 ip = get_ipython()
1007 1018 complete = ip.Completer.complete
1008 1019 ip.Completer.auto_close_dict_keys = True
1009 1020
1010 1021 ip.user_ns["d"] = {
1011 1022 # tuple only
1012 1023 ("aa", 11): None,
1013 1024 # tuple and non-tuple
1014 1025 ("bb", 22): None,
1015 1026 "bb": None,
1016 1027 # non-tuple only
1017 1028 "cc": None,
1018 1029 # numeric tuple only
1019 1030 (77, "x"): None,
1020 1031 # numeric tuple and non-tuple
1021 1032 (88, "y"): None,
1022 1033 88: None,
1023 1034 # numeric non-tuple only
1024 1035 99: None,
1025 1036 }
1026 1037
1027 1038 _, matches = complete(line_buffer="d[")
1028 1039 # should append `, ` if it matches a tuple only
1029 1040 self.assertIn("'aa', ", matches)
1030 1041 # should not append anything if it matches both a tuple and an item
1031 1042 self.assertIn("'bb'", matches)
1032 1043 # should append `]` if it matches an item only
1033 1044 self.assertIn("'cc']", matches)
1034 1045
1035 1046 # should append `, ` if it matches a tuple only
1036 1047 self.assertIn("77, ", matches)
1037 1048 # should not append anything if it matches both a tuple and an item
1038 1049 self.assertIn("88", matches)
1039 1050 # should append `]` if it matches an item only
1040 1051 self.assertIn("99]", matches)
1041 1052
1042 1053 _, matches = complete(line_buffer="d['aa', ")
1043 1054 # should restrict matches to those matching tuple prefix
1044 1055 self.assertIn("11]", matches)
1045 1056 self.assertNotIn("'bb'", matches)
1046 1057 self.assertNotIn("'bb', ", matches)
1047 1058 self.assertNotIn("'bb']", matches)
1048 1059 self.assertNotIn("'cc'", matches)
1049 1060 self.assertNotIn("'cc', ", matches)
1050 1061 self.assertNotIn("'cc']", matches)
1051 1062 ip.Completer.auto_close_dict_keys = False
1052 1063
1053 1064 def test_dict_key_completion_string(self):
1054 1065 """Test dictionary key completion for string keys"""
1055 1066 ip = get_ipython()
1056 1067 complete = ip.Completer.complete
1057 1068
1058 1069 ip.user_ns["d"] = {"abc": None}
1059 1070
1060 1071 # check completion at different stages
1061 1072 _, matches = complete(line_buffer="d[")
1062 1073 self.assertIn("'abc'", matches)
1063 1074 self.assertNotIn("'abc']", matches)
1064 1075
1065 1076 _, matches = complete(line_buffer="d['")
1066 1077 self.assertIn("abc", matches)
1067 1078 self.assertNotIn("abc']", matches)
1068 1079
1069 1080 _, matches = complete(line_buffer="d['a")
1070 1081 self.assertIn("abc", matches)
1071 1082 self.assertNotIn("abc']", matches)
1072 1083
1073 1084 # check use of different quoting
1074 1085 _, matches = complete(line_buffer='d["')
1075 1086 self.assertIn("abc", matches)
1076 1087 self.assertNotIn('abc"]', matches)
1077 1088
1078 1089 _, matches = complete(line_buffer='d["a')
1079 1090 self.assertIn("abc", matches)
1080 1091 self.assertNotIn('abc"]', matches)
1081 1092
1082 1093 # check sensitivity to following context
1083 1094 _, matches = complete(line_buffer="d[]", cursor_pos=2)
1084 1095 self.assertIn("'abc'", matches)
1085 1096
1086 1097 _, matches = complete(line_buffer="d['']", cursor_pos=3)
1087 1098 self.assertIn("abc", matches)
1088 1099 self.assertNotIn("abc'", matches)
1089 1100 self.assertNotIn("abc']", matches)
1090 1101
1091 1102 # check that multiple solutions are correctly returned and that noise is not
1092 1103 ip.user_ns["d"] = {
1093 1104 "abc": None,
1094 1105 "abd": None,
1095 1106 "bad": None,
1096 1107 object(): None,
1097 1108 5: None,
1098 1109 ("abe", None): None,
1099 1110 (None, "abf"): None
1100 1111 }
1101 1112
1102 1113 _, matches = complete(line_buffer="d['a")
1103 1114 self.assertIn("abc", matches)
1104 1115 self.assertIn("abd", matches)
1105 1116 self.assertNotIn("bad", matches)
1106 1117 self.assertNotIn("abe", matches)
1107 1118 self.assertNotIn("abf", matches)
1108 1119 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1109 1120
1110 1121 # check escaping and whitespace
1111 1122 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
1112 1123 _, matches = complete(line_buffer="d['a")
1113 1124 self.assertIn("a\\nb", matches)
1114 1125 self.assertIn("a\\'b", matches)
1115 1126 self.assertIn('a"b', matches)
1116 1127 self.assertIn("a word", matches)
1117 1128 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1118 1129
1119 1130 # - can complete on non-initial word of the string
1120 1131 _, matches = complete(line_buffer="d['a w")
1121 1132 self.assertIn("word", matches)
1122 1133
1123 1134 # - understands quote escaping
1124 1135 _, matches = complete(line_buffer="d['a\\'")
1125 1136 self.assertIn("b", matches)
1126 1137
1127 1138 # - default quoting should work like repr
1128 1139 _, matches = complete(line_buffer="d[")
1129 1140 self.assertIn('"a\'b"', matches)
1130 1141
1131 1142 # - when opening quote with ", possible to match with unescaped apostrophe
1132 1143 _, matches = complete(line_buffer="d[\"a'")
1133 1144 self.assertIn("b", matches)
1134 1145
1135 1146 # must not split at delims that readline won't split at
1136 1147 if "-" not in ip.Completer.splitter.delims:
1137 1148 ip.user_ns["d"] = {"before-after": None}
1138 1149 _, matches = complete(line_buffer="d['before-af")
1139 1150 self.assertIn("before-after", matches)
1140 1151
1141 1152 # check completion on tuple-of-string keys at different stages - on the first key
1142 1153 ip.user_ns["d"] = {('foo', 'bar'): None}
1143 1154 _, matches = complete(line_buffer="d[")
1144 1155 self.assertIn("'foo'", matches)
1145 1156 self.assertNotIn("'foo']", matches)
1146 1157 self.assertNotIn("'bar'", matches)
1147 1158 self.assertNotIn("foo", matches)
1148 1159 self.assertNotIn("bar", matches)
1149 1160
1150 1161 # - match the prefix
1151 1162 _, matches = complete(line_buffer="d['f")
1152 1163 self.assertIn("foo", matches)
1153 1164 self.assertNotIn("foo']", matches)
1154 1165 self.assertNotIn('foo"]', matches)
1155 1166 _, matches = complete(line_buffer="d['foo")
1156 1167 self.assertIn("foo", matches)
1157 1168
1158 1169 # - can complete on second key
1159 1170 _, matches = complete(line_buffer="d['foo', ")
1160 1171 self.assertIn("'bar'", matches)
1161 1172 _, matches = complete(line_buffer="d['foo', 'b")
1162 1173 self.assertIn("bar", matches)
1163 1174 self.assertNotIn("foo", matches)
1164 1175
1165 1176 # - does not propose missing keys
1166 1177 _, matches = complete(line_buffer="d['foo', 'f")
1167 1178 self.assertNotIn("bar", matches)
1168 1179 self.assertNotIn("foo", matches)
1169 1180
1170 1181 # check sensitivity to following context
1171 1182 _, matches = complete(line_buffer="d['foo',]", cursor_pos=8)
1172 1183 self.assertIn("'bar'", matches)
1173 1184 self.assertNotIn("bar", matches)
1174 1185 self.assertNotIn("'foo'", matches)
1175 1186 self.assertNotIn("foo", matches)
1176 1187
1177 1188 _, matches = complete(line_buffer="d['']", cursor_pos=3)
1178 1189 self.assertIn("foo", matches)
1179 1190 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1180 1191
1181 1192 _, matches = complete(line_buffer='d[""]', cursor_pos=3)
1182 1193 self.assertIn("foo", matches)
1183 1194 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1184 1195
1185 1196 _, matches = complete(line_buffer='d["foo","]', cursor_pos=9)
1186 1197 self.assertIn("bar", matches)
1187 1198 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1188 1199
1189 1200 _, matches = complete(line_buffer='d["foo",]', cursor_pos=8)
1190 1201 self.assertIn("'bar'", matches)
1191 1202 self.assertNotIn("bar", matches)
1192 1203
1193 1204 # Can complete with longer tuple keys
1194 1205 ip.user_ns["d"] = {('foo', 'bar', 'foobar'): None}
1195 1206
1196 1207 # - can complete second key
1197 1208 _, matches = complete(line_buffer="d['foo', 'b")
1198 1209 self.assertIn("bar", matches)
1199 1210 self.assertNotIn("foo", matches)
1200 1211 self.assertNotIn("foobar", matches)
1201 1212
1202 1213 # - can complete third key
1203 1214 _, matches = complete(line_buffer="d['foo', 'bar', 'fo")
1204 1215 self.assertIn("foobar", matches)
1205 1216 self.assertNotIn("foo", matches)
1206 1217 self.assertNotIn("bar", matches)
1207 1218
1208 1219 def test_dict_key_completion_numbers(self):
1209 1220 ip = get_ipython()
1210 1221 complete = ip.Completer.complete
1211 1222
1212 1223 ip.user_ns["d"] = {
1213 1224 0xDEADBEEF: None, # 3735928559
1214 1225 1111: None,
1215 1226 1234: None,
1216 1227 "1999": None,
1217 1228 0b10101: None, # 21
1218 1229 22: None,
1219 1230 }
1220 1231 _, matches = complete(line_buffer="d[1")
1221 1232 self.assertIn("1111", matches)
1222 1233 self.assertIn("1234", matches)
1223 1234 self.assertNotIn("1999", matches)
1224 1235 self.assertNotIn("'1999'", matches)
1225 1236
1226 1237 _, matches = complete(line_buffer="d[0xdead")
1227 1238 self.assertIn("0xdeadbeef", matches)
1228 1239
1229 1240 _, matches = complete(line_buffer="d[2")
1230 1241 self.assertIn("21", matches)
1231 1242 self.assertIn("22", matches)
1232 1243
1233 1244 _, matches = complete(line_buffer="d[0b101")
1234 1245 self.assertIn("0b10101", matches)
1235 1246 self.assertIn("0b10110", matches)
1236 1247
1237 1248 def test_dict_key_completion_contexts(self):
1238 1249 """Test expression contexts in which dict key completion occurs"""
1239 1250 ip = get_ipython()
1240 1251 complete = ip.Completer.complete
1241 1252 d = {"abc": None}
1242 1253 ip.user_ns["d"] = d
1243 1254
1244 1255 class C:
1245 1256 data = d
1246 1257
1247 1258 ip.user_ns["C"] = C
1248 1259 ip.user_ns["get"] = lambda: d
1249 1260 ip.user_ns["nested"] = {"x": d}
1250 1261
1251 1262 def assert_no_completion(**kwargs):
1252 1263 _, matches = complete(**kwargs)
1253 1264 self.assertNotIn("abc", matches)
1254 1265 self.assertNotIn("abc'", matches)
1255 1266 self.assertNotIn("abc']", matches)
1256 1267 self.assertNotIn("'abc'", matches)
1257 1268 self.assertNotIn("'abc']", matches)
1258 1269
1259 1270 def assert_completion(**kwargs):
1260 1271 _, matches = complete(**kwargs)
1261 1272 self.assertIn("'abc'", matches)
1262 1273 self.assertNotIn("'abc']", matches)
1263 1274
1264 1275 # no completion after string closed, even if reopened
1265 1276 assert_no_completion(line_buffer="d['a'")
1266 1277 assert_no_completion(line_buffer='d["a"')
1267 1278 assert_no_completion(line_buffer="d['a' + ")
1268 1279 assert_no_completion(line_buffer="d['a' + '")
1269 1280
1270 1281 # completion in non-trivial expressions
1271 1282 assert_completion(line_buffer="+ d[")
1272 1283 assert_completion(line_buffer="(d[")
1273 1284 assert_completion(line_buffer="C.data[")
1274 1285
1275 1286 # nested dict completion
1276 1287 assert_completion(line_buffer="nested['x'][")
1277 1288
1278 1289 with evaluation_policy("minimal"):
1279 1290 with pytest.raises(AssertionError):
1280 1291 assert_completion(line_buffer="nested['x'][")
1281 1292
1282 1293 # greedy flag
1283 1294 def assert_completion(**kwargs):
1284 1295 _, matches = complete(**kwargs)
1285 1296 self.assertIn("get()['abc']", matches)
1286 1297
1287 1298 assert_no_completion(line_buffer="get()[")
1288 1299 with greedy_completion():
1289 1300 assert_completion(line_buffer="get()[")
1290 1301 assert_completion(line_buffer="get()['")
1291 1302 assert_completion(line_buffer="get()['a")
1292 1303 assert_completion(line_buffer="get()['ab")
1293 1304 assert_completion(line_buffer="get()['abc")
1294 1305
1295 1306 def test_dict_key_completion_bytes(self):
1296 1307 """Test handling of bytes in dict key completion"""
1297 1308 ip = get_ipython()
1298 1309 complete = ip.Completer.complete
1299 1310
1300 1311 ip.user_ns["d"] = {"abc": None, b"abd": None}
1301 1312
1302 1313 _, matches = complete(line_buffer="d[")
1303 1314 self.assertIn("'abc'", matches)
1304 1315 self.assertIn("b'abd'", matches)
1305 1316
1306 1317 if False: # not currently implemented
1307 1318 _, matches = complete(line_buffer="d[b")
1308 1319 self.assertIn("b'abd'", matches)
1309 1320 self.assertNotIn("b'abc'", matches)
1310 1321
1311 1322 _, matches = complete(line_buffer="d[b'")
1312 1323 self.assertIn("abd", matches)
1313 1324 self.assertNotIn("abc", matches)
1314 1325
1315 1326 _, matches = complete(line_buffer="d[B'")
1316 1327 self.assertIn("abd", matches)
1317 1328 self.assertNotIn("abc", matches)
1318 1329
1319 1330 _, matches = complete(line_buffer="d['")
1320 1331 self.assertIn("abc", matches)
1321 1332 self.assertNotIn("abd", matches)
1322 1333
1323 1334 def test_dict_key_completion_unicode_py3(self):
1324 1335 """Test handling of unicode in dict key completion"""
1325 1336 ip = get_ipython()
1326 1337 complete = ip.Completer.complete
1327 1338
1328 1339 ip.user_ns["d"] = {"a\u05d0": None}
1329 1340
1330 1341 # query using escape
1331 1342 if sys.platform != "win32":
1332 1343 # Known failure on Windows
1333 1344 _, matches = complete(line_buffer="d['a\\u05d0")
1334 1345 self.assertIn("u05d0", matches) # tokenized after \\
1335 1346
1336 1347 # query using character
1337 1348 _, matches = complete(line_buffer="d['a\u05d0")
1338 1349 self.assertIn("a\u05d0", matches)
1339 1350
1340 1351 with greedy_completion():
1341 1352 # query using escape
1342 1353 _, matches = complete(line_buffer="d['a\\u05d0")
1343 1354 self.assertIn("d['a\\u05d0']", matches) # tokenized after \\
1344 1355
1345 1356 # query using character
1346 1357 _, matches = complete(line_buffer="d['a\u05d0")
1347 1358 self.assertIn("d['a\u05d0']", matches)
1348 1359
1349 1360 @dec.skip_without("numpy")
1350 1361 def test_struct_array_key_completion(self):
1351 1362 """Test dict key completion applies to numpy struct arrays"""
1352 1363 import numpy
1353 1364
1354 1365 ip = get_ipython()
1355 1366 complete = ip.Completer.complete
1356 1367 ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
1357 1368 _, matches = complete(line_buffer="d['")
1358 1369 self.assertIn("hello", matches)
1359 1370 self.assertIn("world", matches)
1360 1371 # complete on the numpy struct itself
1361 1372 dt = numpy.dtype(
1362 1373 [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
1363 1374 )
1364 1375 x = numpy.zeros(2, dtype=dt)
1365 1376 ip.user_ns["d"] = x[1]
1366 1377 _, matches = complete(line_buffer="d['")
1367 1378 self.assertIn("my_head", matches)
1368 1379 self.assertIn("my_data", matches)
1369 1380
1370 1381 def completes_on_nested():
1371 1382 ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
1372 1383 _, matches = complete(line_buffer="d[1]['my_head']['")
1373 1384 self.assertTrue(any(["my_dt" in m for m in matches]))
1374 1385 self.assertTrue(any(["my_df" in m for m in matches]))
1375 1386 # complete on a nested level
1376 1387 with greedy_completion():
1377 1388 completes_on_nested()
1378 1389
1379 1390 with evaluation_policy("limited"):
1380 1391 completes_on_nested()
1381 1392
1382 1393 with evaluation_policy("minimal"):
1383 1394 with pytest.raises(AssertionError):
1384 1395 completes_on_nested()
1385 1396
1386 1397 @dec.skip_without("pandas")
1387 1398 def test_dataframe_key_completion(self):
1388 1399 """Test dict key completion applies to pandas DataFrames"""
1389 1400 import pandas
1390 1401
1391 1402 ip = get_ipython()
1392 1403 complete = ip.Completer.complete
1393 1404 ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
1394 1405 _, matches = complete(line_buffer="d['")
1395 1406 self.assertIn("hello", matches)
1396 1407 self.assertIn("world", matches)
1397 1408 _, matches = complete(line_buffer="d.loc[:, '")
1398 1409 self.assertIn("hello", matches)
1399 1410 self.assertIn("world", matches)
1400 1411 _, matches = complete(line_buffer="d.loc[1:, '")
1401 1412 self.assertIn("hello", matches)
1402 1413 _, matches = complete(line_buffer="d.loc[1:1, '")
1403 1414 self.assertIn("hello", matches)
1404 1415 _, matches = complete(line_buffer="d.loc[1:1:-1, '")
1405 1416 self.assertIn("hello", matches)
1406 1417 _, matches = complete(line_buffer="d.loc[::, '")
1407 1418 self.assertIn("hello", matches)
1408 1419
1409 1420 def test_dict_key_completion_invalids(self):
1410 1421 """Smoke test for cases that dict key completion can't handle"""
1411 1422 ip = get_ipython()
1412 1423 complete = ip.Completer.complete
1413 1424
1414 1425 ip.user_ns["no_getitem"] = None
1415 1426 ip.user_ns["no_keys"] = []
1416 1427 ip.user_ns["cant_call_keys"] = dict
1417 1428 ip.user_ns["empty"] = {}
1418 1429 ip.user_ns["d"] = {"abc": 5}
1419 1430
1420 1431 _, matches = complete(line_buffer="no_getitem['")
1421 1432 _, matches = complete(line_buffer="no_keys['")
1422 1433 _, matches = complete(line_buffer="cant_call_keys['")
1423 1434 _, matches = complete(line_buffer="empty['")
1424 1435 _, matches = complete(line_buffer="name_error['")
1425 1436 _, matches = complete(line_buffer="d['\\") # incomplete escape
1426 1437
1427 1438 def test_object_key_completion(self):
1428 1439 ip = get_ipython()
1429 1440 ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])
1430 1441
1431 1442 _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
1432 1443 self.assertIn("qwerty", matches)
1433 1444 self.assertIn("qwick", matches)
1434 1445
1435 1446 def test_class_key_completion(self):
1436 1447 ip = get_ipython()
1437 1448 NamedInstanceClass("qwerty")
1438 1449 NamedInstanceClass("qwick")
1439 1450 ip.user_ns["named_instance_class"] = NamedInstanceClass
1440 1451
1441 1452 _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
1442 1453 self.assertIn("qwerty", matches)
1443 1454 self.assertIn("qwick", matches)
1444 1455
1445 1456 def test_tryimport(self):
1446 1457 """
1447 1458 Test that try_import does not crash on a trailing dot, and imports the modules before the dot.
1448 1459 """
1449 1460 from IPython.core.completerlib import try_import
1450 1461
1451 1462 assert try_import("IPython.")
1452 1463
1453 1464 def test_aimport_module_completer(self):
1454 1465 ip = get_ipython()
1455 1466 _, matches = ip.complete("i", "%aimport i")
1456 1467 self.assertIn("io", matches)
1457 1468 self.assertNotIn("int", matches)
1458 1469
1459 1470 def test_nested_import_module_completer(self):
1460 1471 ip = get_ipython()
1461 1472 _, matches = ip.complete(None, "import IPython.co", 17)
1462 1473 self.assertIn("IPython.core", matches)
1463 1474 self.assertNotIn("import IPython.core", matches)
1464 1475 self.assertNotIn("IPython.display", matches)
1465 1476
1466 1477 def test_import_module_completer(self):
1467 1478 ip = get_ipython()
1468 1479 _, matches = ip.complete("i", "import i")
1469 1480 self.assertIn("io", matches)
1470 1481 self.assertNotIn("int", matches)
1471 1482
1472 1483 def test_from_module_completer(self):
1473 1484 ip = get_ipython()
1474 1485 _, matches = ip.complete("B", "from io import B", 16)
1475 1486 self.assertIn("BytesIO", matches)
1476 1487 self.assertNotIn("BaseException", matches)
1477 1488
1478 1489 def test_snake_case_completion(self):
1479 1490 ip = get_ipython()
1480 1491 ip.Completer.use_jedi = False
1481 1492 ip.user_ns["some_three"] = 3
1482 1493 ip.user_ns["some_four"] = 4
1483 1494 _, matches = ip.complete("s_", "print(s_f")
1484 1495 self.assertIn("some_three", matches)
1485 1496 self.assertIn("some_four", matches)
1486 1497
1487 1498 def test_mix_terms(self):
1488 1499 ip = get_ipython()
1489 1500 from textwrap import dedent
1490 1501
1491 1502 ip.Completer.use_jedi = False
1492 1503 ip.ex(
1493 1504 dedent(
1494 1505 """
1495 1506 class Test:
1496 1507 def meth(self, meth_arg1):
1497 1508 print("meth")
1498 1509
1499 1510 def meth_1(self, meth1_arg1, meth1_arg2):
1500 1511 print("meth1")
1501 1512
1502 1513 def meth_2(self, meth2_arg1, meth2_arg2):
1503 1514 print("meth2")
1504 1515 test = Test()
1505 1516 """
1506 1517 )
1507 1518 )
1508 1519 _, matches = ip.complete(None, "test.meth(")
1509 1520 self.assertIn("meth_arg1=", matches)
1510 1521 self.assertNotIn("meth2_arg1=", matches)
1511 1522
1512 1523 def test_percent_symbol_restrict_to_magic_completions(self):
1513 1524 ip = get_ipython()
1514 1525 completer = ip.Completer
1515 1526 text = "%a"
1516 1527
1517 1528 with provisionalcompleter():
1518 1529 completer.use_jedi = True
1519 1530 completions = completer.completions(text, len(text))
1520 1531 for c in completions:
1521 1532 self.assertEqual(c.text[0], "%")
1522 1533
1523 1534 def test_fwd_unicode_restricts(self):
1524 1535 ip = get_ipython()
1525 1536 completer = ip.Completer
1526 1537 text = "\\ROMAN NUMERAL FIVE"
1527 1538
1528 1539 with provisionalcompleter():
1529 1540 completer.use_jedi = True
1530 1541 completions = [
1531 1542 completion.text for completion in completer.completions(text, len(text))
1532 1543 ]
1533 1544 self.assertEqual(completions, ["\u2164"])
1534 1545
1535 1546 def test_dict_key_restrict_to_dicts(self):
1536 1547 """Test that dict key suppresses non-dict completion items"""
1537 1548 ip = get_ipython()
1538 1549 c = ip.Completer
1539 1550 d = {"abc": None}
1540 1551 ip.user_ns["d"] = d
1541 1552
1542 1553 text = 'd["a'
1543 1554
1544 1555 def _():
1545 1556 with provisionalcompleter():
1546 1557 c.use_jedi = True
1547 1558 return [
1548 1559 completion.text for completion in c.completions(text, len(text))
1549 1560 ]
1550 1561
1551 1562 completions = _()
1552 1563 self.assertEqual(completions, ["abc"])
1553 1564
1554 1565 # check that it can be disabled in granular manner:
1555 1566 cfg = Config()
1556 1567 cfg.IPCompleter.suppress_competing_matchers = {
1557 1568 "IPCompleter.dict_key_matcher": False
1558 1569 }
1559 1570 c.update_config(cfg)
1560 1571
1561 1572 completions = _()
1562 1573 self.assertIn("abc", completions)
1563 1574 self.assertGreater(len(completions), 1)
1564 1575
1565 1576 def test_matcher_suppression(self):
1566 1577 @completion_matcher(identifier="a_matcher")
1567 1578 def a_matcher(text):
1568 1579 return ["completion_a"]
1569 1580
1570 1581 @completion_matcher(identifier="b_matcher", api_version=2)
1571 1582 def b_matcher(context: CompletionContext):
1572 1583 text = context.token
1573 1584 result = {"completions": [SimpleCompletion("completion_b")]}
1574 1585
1575 1586 if text == "suppress c":
1576 1587 result["suppress"] = {"c_matcher"}
1577 1588
1578 1589 if text.startswith("suppress all"):
1579 1590 result["suppress"] = True
1580 1591 if text == "suppress all but c":
1581 1592 result["do_not_suppress"] = {"c_matcher"}
1582 1593 if text == "suppress all but a":
1583 1594 result["do_not_suppress"] = {"a_matcher"}
1584 1595
1585 1596 return result
1586 1597
1587 1598 @completion_matcher(identifier="c_matcher")
1588 1599 def c_matcher(text):
1589 1600 return ["completion_c"]
1590 1601
1591 1602 with custom_matchers([a_matcher, b_matcher, c_matcher]):
1592 1603 ip = get_ipython()
1593 1604 c = ip.Completer
1594 1605
1595 1606 def _(text, expected):
1596 1607 c.use_jedi = False
1597 1608 s, matches = c.complete(text)
1598 1609 self.assertEqual(expected, matches)
1599 1610
1600 1611 _("do not suppress", ["completion_a", "completion_b", "completion_c"])
1601 1612 _("suppress all", ["completion_b"])
1602 1613 _("suppress all but a", ["completion_a", "completion_b"])
1603 1614 _("suppress all but c", ["completion_b", "completion_c"])
1604 1615
1605 1616 def configure(suppression_config):
1606 1617 cfg = Config()
1607 1618 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1608 1619 c.update_config(cfg)
1609 1620
1610 1621 # test that configuration takes priority over the run-time decisions
1611 1622
1612 1623 configure(False)
1613 1624 _("suppress all", ["completion_a", "completion_b", "completion_c"])
1614 1625
1615 1626 configure({"b_matcher": False})
1616 1627 _("suppress all", ["completion_a", "completion_b", "completion_c"])
1617 1628
1618 1629 configure({"a_matcher": False})
1619 1630 _("suppress all", ["completion_b"])
1620 1631
1621 1632 configure({"b_matcher": True})
1622 1633 _("do not suppress", ["completion_b"])
1623 1634
1624 1635 configure(True)
1625 1636 _("do not suppress", ["completion_a"])
1626 1637
1627 1638 def test_matcher_suppression_with_iterator(self):
1628 1639 @completion_matcher(identifier="matcher_returning_iterator")
1629 1640 def matcher_returning_iterator(text):
1630 1641 return iter(["completion_iter"])
1631 1642
1632 1643 @completion_matcher(identifier="matcher_returning_list")
1633 1644 def matcher_returning_list(text):
1634 1645 return ["completion_list"]
1635 1646
1636 1647 with custom_matchers([matcher_returning_iterator, matcher_returning_list]):
1637 1648 ip = get_ipython()
1638 1649 c = ip.Completer
1639 1650
1640 1651 def _(text, expected):
1641 1652 c.use_jedi = False
1642 1653 s, matches = c.complete(text)
1643 1654 self.assertEqual(expected, matches)
1644 1655
1645 1656 def configure(suppression_config):
1646 1657 cfg = Config()
1647 1658 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1648 1659 c.update_config(cfg)
1649 1660
1650 1661 configure(False)
1651 1662 _("---", ["completion_iter", "completion_list"])
1652 1663
1653 1664 configure(True)
1654 1665 _("---", ["completion_iter"])
1655 1666
1656 1667 configure(None)
1657 1668 _("--", ["completion_iter", "completion_list"])
1658 1669
1659 1670 @pytest.mark.xfail(
1660 1671 sys.version_info.releaselevel in ("alpha",),
1661 1672 reason="Parso does not yet parse 3.13",
1662 1673 )
1663 1674 def test_matcher_suppression_with_jedi(self):
1664 1675 ip = get_ipython()
1665 1676 c = ip.Completer
1666 1677 c.use_jedi = True
1667 1678
1668 1679 def configure(suppression_config):
1669 1680 cfg = Config()
1670 1681 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1671 1682 c.update_config(cfg)
1672 1683
1673 1684 def _():
1674 1685 with provisionalcompleter():
1675 1686 matches = [completion.text for completion in c.completions("dict.", 5)]
1676 1687 self.assertIn("keys", matches)
1677 1688
1678 1689 configure(False)
1679 1690 _()
1680 1691
1681 1692 configure(True)
1682 1693 _()
1683 1694
1684 1695 configure(None)
1685 1696 _()
1686 1697
1687 1698 def test_matcher_disabling(self):
1688 1699 @completion_matcher(identifier="a_matcher")
1689 1700 def a_matcher(text):
1690 1701 return ["completion_a"]
1691 1702
1692 1703 @completion_matcher(identifier="b_matcher")
1693 1704 def b_matcher(text):
1694 1705 return ["completion_b"]
1695 1706
1696 1707 def _(expected):
1697 1708 s, matches = c.complete("completion_")
1698 1709 self.assertEqual(expected, matches)
1699 1710
1700 1711 with custom_matchers([a_matcher, b_matcher]):
1701 1712 ip = get_ipython()
1702 1713 c = ip.Completer
1703 1714
1704 1715 _(["completion_a", "completion_b"])
1705 1716
1706 1717 cfg = Config()
1707 1718 cfg.IPCompleter.disable_matchers = ["b_matcher"]
1708 1719 c.update_config(cfg)
1709 1720
1710 1721 _(["completion_a"])
1711 1722
1712 1723 cfg.IPCompleter.disable_matchers = []
1713 1724 c.update_config(cfg)
1714 1725
1715 1726 def test_matcher_priority(self):
1716 1727 @completion_matcher(identifier="a_matcher", priority=0, api_version=2)
1717 1728 def a_matcher(text):
1718 1729 return {"completions": [SimpleCompletion("completion_a")], "suppress": True}
1719 1730
1720 1731 @completion_matcher(identifier="b_matcher", priority=2, api_version=2)
1721 1732 def b_matcher(text):
1722 1733 return {"completions": [SimpleCompletion("completion_b")], "suppress": True}
1723 1734
1724 1735 def _(expected):
1725 1736 s, matches = c.complete("completion_")
1726 1737 self.assertEqual(expected, matches)
1727 1738
1728 1739 with custom_matchers([a_matcher, b_matcher]):
1729 1740 ip = get_ipython()
1730 1741 c = ip.Completer
1731 1742
1732 1743 _(["completion_b"])
1733 1744 a_matcher.matcher_priority = 3
1734 1745 _(["completion_a"])
1735 1746
1736 1747
1737 1748 @pytest.mark.parametrize(
1749 "setup,code,expected,not_expected",
1750 [
1751 ('a="str"; b=1', "(a, b.", [".bit_count", ".conjugate"], [".count"]),
1752 ('a="str"; b=1', "(a, b).", [".count"], [".bit_count", ".capitalize"]),
1753 ('x="str"; y=1', "x = {1, y.", [".bit_count"], [".count"]),
1754 ('x="str"; y=1', "x = [1, y.", [".bit_count"], [".count"]),
1755 ('x="str"; y=1; fun=lambda x:x', "x = fun(1, y.", [".bit_count"], [".count"]),
1756 ],
1757 )
1758 def test_misc_no_jedi_completions(setup, code, expected, not_expected):
1759 ip = get_ipython()
1760 c = ip.Completer
1761 ip.ex(setup)
1762 with provisionalcompleter(), jedi_status(False):
1763 matches = c.all_completions(code)
1764 assert set(expected) - set(matches) == set(), set(matches)
1765 assert set(matches).intersection(set(not_expected)) == set()
1766
1767
1768 @pytest.mark.parametrize(
1769 "code,expected",
1770 [
1771 (" (a, b", "b"),
1772 ("(a, b", "b"),
1773 ("(a, b)", ""), # trimming always starts by stripping at least one character
1774 (" (a, b)", "(a, b)"),
1775 (" [a, b]", "[a, b]"),
1776 (" a, b", "b"),
1777 ("x = {1, y", "y"),
1778 ("x = [1, y", "y"),
1779 ("x = fun(1, y", "y"),
1780 ],
1781 )
1782 def test_trim_expr(code, expected):
1783 c = get_ipython().Completer
1784 assert c._trim_expr(code) == expected
1785
1786
1787 @pytest.mark.parametrize(
1738 1788 "input, expected",
1739 1789 [
1740 1790 ["1.234", "1.234"],
1741 1791 # should match signed numbers
1742 1792 ["+1", "+1"],
1743 1793 ["-1", "-1"],
1744 1794 ["-1.0", "-1.0"],
1745 1795 ["-1.", "-1."],
1746 1796 ["+1.", "+1."],
1747 1797 [".1", ".1"],
1748 1798 # should not match non-numbers
1749 1799 ["1..", None],
1750 1800 ["..", None],
1751 1801 [".1.", None],
1752 1802 # should match after comma
1753 1803 [",1", "1"],
1754 1804 [", 1", "1"],
1755 1805 [", .1", ".1"],
1756 1806 [", +.1", "+.1"],
1757 1807 # should not match after trailing spaces
1758 1808 [".1 ", None],
1759 1809 # some complex cases
1760 1810 ["0b_0011_1111_0100_1110", "0b_0011_1111_0100_1110"],
1761 1811 ["0xdeadbeef", "0xdeadbeef"],
1762 1812 ["0b_1110_0101", "0b_1110_0101"],
1763 1813 # should not match if in an operation
1764 1814 ["1 + 1", None],
1765 1815 [", 1 + 1", None],
1766 1816 ],
1767 1817 )
1768 1818 def test_match_numeric_literal_for_dict_key(input, expected):
1769 1819 assert _match_number_in_dict_key_prefix(input) == expected