Fix completion in indented lines dropping prefix when jedi is disabled (#14474)...
M Bussonnier
r28832:5c8bc514 merge
@@ -1,3346 +1,3389 @@
1 1 """Completion for IPython.
2 2
3 3 This module started as fork of the rlcompleter module in the Python standard
4 4 library. The original enhancements made to rlcompleter have been sent
5 5 upstream and were accepted as of Python 2.3,
6 6
7 7 This module now supports a wide variety of completion mechanisms, both for
8 8 normal classic Python code and for IPython-specific
9 9 syntax such as magics.
10 10
11 11 Latex and Unicode completion
12 12 ============================
13 13
14 14 IPython and compatible frontends can not only complete your code, but can also
15 15 help you input a wide range of characters. In particular we allow you to insert
16 16 a unicode character using the tab completion mechanism.
17 17
18 18 Forward latex/unicode completion
19 19 --------------------------------
20 20
21 21 Forward completion allows you to easily type a unicode character using its latex
22 22 name, or unicode long description. To do so, type a backslash followed by the
23 23 relevant name and press tab:
24 24
25 25
26 26 Using latex completion:
27 27
28 28 .. code::
29 29
30 30 \\alpha<tab>
31 31 α
32 32
33 33 or using unicode completion:
34 34
35 35
36 36 .. code::
37 37
38 38 \\GREEK SMALL LETTER ALPHA<tab>
39 39 α
40 40
41 41
42 42 Only valid Python identifiers will complete. Combining characters (like arrows or
43 43 dots) are also available; unlike in latex, they need to be put after their
44 44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45 45
46 46 Some browsers are known to display combining characters incorrectly.
47 47
48 48 Backward latex completion
49 49 -------------------------
50 50
51 51 It is sometimes challenging to know how to type a character. If you are using
52 52 IPython or any compatible frontend, you can prepend a backslash to the character
53 53 and press :kbd:`Tab` to expand it to its latex form.
54 54
55 55 .. code::
56 56
57 57 \\Ξ±<tab>
58 58 \\alpha
59 59
60 60
61 61 Both forward and backward completions can be deactivated by setting the
62 62 :std:configtrait:`Completer.backslash_combining_completions` option to
63 63 ``False``.
64 64
65 65
66 66 Experimental
67 67 ============
68 68
69 69 Starting with IPython 6.0, this module can make use of the Jedi library to
70 70 generate completions both using static analysis of the code, and dynamically
71 71 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
72 72 library for Python. The APIs attached to this new mechanism are unstable and will
73 73 raise unless used in a :any:`provisionalcompleter` context manager.
74 74
75 75 You will find that the following are experimental:
76 76
77 77 - :any:`provisionalcompleter`
78 78 - :any:`IPCompleter.completions`
79 79 - :any:`Completion`
80 80 - :any:`rectify_completions`
81 81
82 82 .. note::
83 83
84 84 better name for :any:`rectify_completions` ?
85 85
86 86 We welcome any feedback on these new APIs, and we also encourage you to try this
87 87 module in debug mode (start IPython with ``--Completer.debug=True``) in order
88 88 to have extra logging information if :any:`jedi` is crashing, or if the current
89 89 IPython completer's pending deprecations are returning results not yet handled
90 90 by :any:`jedi`.
91 91
92 92 Using Jedi for tab completion allows snippets like the following to work without
93 93 having to execute any code:
94 94
95 95 >>> myvar = ['hello', 42]
96 96 ... myvar[1].bi<tab>
97 97
98 98 Tab completion will be able to infer that ``myvar[1]`` is an integer without
99 99 executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
100 100 option.
101 101
102 102 Be sure to update :any:`jedi` to the latest stable version or to try the
103 103 current development version to get better completions.
104 104
105 105 Matchers
106 106 ========
107 107
108 108 All completion routines are implemented using the unified *Matchers* API.
109 109 The matchers API is provisional and subject to change without notice.
110 110
111 111 The built-in matchers include:
112 112
113 113 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
114 114 - :any:`IPCompleter.magic_matcher`: completions for magics,
115 115 - :any:`IPCompleter.unicode_name_matcher`,
116 116 :any:`IPCompleter.fwd_unicode_matcher`
117 117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
118 118 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
119 119 - :any:`IPCompleter.file_matcher`: paths to files and directories,
120 120 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
121 121 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
122 122 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
123 123 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
124 124 implementation in :any:`InteractiveShell` which uses IPython hooks system
125 125 (`complete_command`) with string dispatch (including regular expressions).
126 126 Unlike other matchers, ``custom_completer_matcher`` will not suppress
127 127 Jedi results to match behaviour in earlier IPython versions.
128 128
129 129 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
130 130
131 131 Matcher API
132 132 -----------
133 133
134 134 Simplifying some details, the ``Matcher`` interface can be described as
135 135
136 136 .. code-block::
137 137
138 138 MatcherAPIv1 = Callable[[str], list[str]]
139 139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
140 140
141 141 Matcher = MatcherAPIv1 | MatcherAPIv2
142 142
143 143 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
144 144 and remains supported as the simplest way of generating completions. This is also
145 145 currently the only API supported by the IPython hooks system `complete_command`.
146 146
147 147 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
148 148 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
149 149 and requires a literal ``2`` for v2 Matchers.
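As an illustration, the two API versions can be sketched as below (all names are hypothetical, and plain dicts stand in for ``SimpleCompletion`` objects in the v2 result):

```python
# Hypothetical v1 matcher: a plain callable taking the token and
# returning a list of matching strings; no version attribute needed.
def color_matcher(text):
    colors = ["red", "green", "blue"]
    return [c for c in colors if c.startswith(text)]

# Hypothetical v2 matcher: takes a completion context and returns a
# result dictionary; it must declare a literal API version of 2.
def color_matcher_v2(context):
    matches = color_matcher(context.token)
    return {"completions": [{"text": m} for m in matches]}

color_matcher_v2.matcher_api_version = 2
```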
150 150
151 151 Once the API stabilises, future versions may relax the requirement for specifying
152 152 ``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore
153 153 please do not rely on the presence of ``matcher_api_version`` for any purposes.
154 154
155 155 Suppression of competing matchers
156 156 ---------------------------------
157 157
158 158 By default results from all matchers are combined, in the order determined by
159 159 their priority. Matchers can request to suppress results from subsequent
160 160 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
161 161
162 162 When multiple matchers simultaneously request suppression, the results from
163 163 the matcher with the higher priority will be returned.
164 164
165 165 Sometimes it is desirable to suppress most but not all other matchers;
166 166 this can be achieved by adding a set of identifiers of matchers which
167 167 should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key.
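For instance, a matcher result that suppresses all competitors except one could be sketched as below (the exempted identifier is only an example):

```python
# Sketch of a MatcherResult-style dictionary: suppress other matchers,
# but exempt one of them by its identifier.
result = {
    "completions": [],  # candidate completions would go here
    "suppress": True,  # request suppression of the other matchers
    "do_not_suppress": {"IPCompleter.magic_matcher"},  # ...except this one
}
```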
168 168
169 169 The suppression behaviour is user-configurable via
170 170 :std:configtrait:`IPCompleter.suppress_competing_matchers`.
171 171 """
172 172
173 173
174 174 # Copyright (c) IPython Development Team.
175 175 # Distributed under the terms of the Modified BSD License.
176 176 #
177 177 # Some of this code originated from rlcompleter in the Python standard library
178 178 # Copyright (C) 2001 Python Software Foundation, www.python.org
179 179
180 180 from __future__ import annotations
181 181 import builtins as builtin_mod
182 182 import enum
183 183 import glob
184 184 import inspect
185 185 import itertools
186 186 import keyword
187 187 import os
188 188 import re
189 189 import string
190 190 import sys
191 191 import tokenize
192 192 import time
193 193 import unicodedata
194 194 import uuid
195 195 import warnings
196 196 from ast import literal_eval
197 197 from collections import defaultdict
198 198 from contextlib import contextmanager
199 199 from dataclasses import dataclass
200 200 from functools import cached_property, partial
201 201 from types import SimpleNamespace
202 202 from typing import (
203 203 Iterable,
204 204 Iterator,
205 205 List,
206 206 Tuple,
207 207 Union,
208 208 Any,
209 209 Sequence,
210 210 Dict,
211 211 Optional,
212 212 TYPE_CHECKING,
213 213 Set,
214 214 Sized,
215 215 TypeVar,
216 216 Literal,
217 217 )
218 218
219 219 from IPython.core.guarded_eval import guarded_eval, EvaluationContext
220 220 from IPython.core.error import TryNext
221 221 from IPython.core.inputtransformer2 import ESC_MAGIC
222 222 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
223 223 from IPython.core.oinspect import InspectColors
224 224 from IPython.testing.skipdoctest import skip_doctest
225 225 from IPython.utils import generics
226 226 from IPython.utils.decorators import sphinx_options
227 227 from IPython.utils.dir2 import dir2, get_real_method
228 228 from IPython.utils.docs import GENERATING_DOCUMENTATION
229 229 from IPython.utils.path import ensure_dir_exists
230 230 from IPython.utils.process import arg_split
231 231 from traitlets import (
232 232 Bool,
233 233 Enum,
234 234 Int,
235 235 List as ListTrait,
236 236 Unicode,
237 237 Dict as DictTrait,
238 238 Union as UnionTrait,
239 239 observe,
240 240 )
241 241 from traitlets.config.configurable import Configurable
242 242
243 243 import __main__
244 244
245 245 # skip module doctests
246 246 __skip_doctest__ = True
247 247
248 248
249 249 try:
250 250 import jedi
251 251 jedi.settings.case_insensitive_completion = False
252 252 import jedi.api.helpers
253 253 import jedi.api.classes
254 254 JEDI_INSTALLED = True
255 255 except ImportError:
256 256 JEDI_INSTALLED = False
257 257
258 258
259 259 if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
260 260 from typing import cast
261 261 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
262 262 else:
263 263 from typing import Generic
264 264
265 265 def cast(type_, obj):
266 266 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
267 267 return obj
268 268
269 269 # do not require at runtime
270 270 NotRequired = Tuple # requires Python >=3.11
271 271 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
272 272 Protocol = object # requires Python >=3.8
273 273 TypeAlias = Any # requires Python >=3.10
274 274 TypeGuard = Generic # requires Python >=3.10
275 275 if GENERATING_DOCUMENTATION:
276 276 from typing import TypedDict
277 277
278 278 # -----------------------------------------------------------------------------
279 279 # Globals
280 280 #-----------------------------------------------------------------------------
281 281
282 282 # Ranges where we have most of the valid unicode names. We could be finer-
283 283 # grained, but is it worth it for performance? While unicode has characters in
284 284 # the range 0-0x110000, only about 10% of those seem to have names (131808 as I
285 285 # write this). The ranges below cover them all, with a density of ~67%; the
286 286 # biggest next gap would only add about 1% density and there are 600
287 287 # gaps that would need hard-coding.
288 288 _UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]
289 289
290 290 # Public API
291 291 __all__ = ["Completer", "IPCompleter"]
292 292
293 293 if sys.platform == 'win32':
294 294 PROTECTABLES = ' '
295 295 else:
296 296 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
297 297
298 298 # Protect against returning an enormous number of completions which the frontend
299 299 # may have trouble processing.
300 300 MATCHES_LIMIT = 500
301 301
302 302 # Completion type reported when no type can be inferred.
303 303 _UNKNOWN_TYPE = "<unknown>"
304 304
305 305 # sentinel value to signal lack of a match
306 306 not_found = object()
307 307
308 308 class ProvisionalCompleterWarning(FutureWarning):
309 309 """
310 310 Exception raised by an experimental feature in this module.
311 311
312 312 Wrap code in :any:`provisionalcompleter` context manager if you
313 313 are certain you want to use an unstable feature.
314 314 """
315 315 pass
316 316
317 317 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
318 318
319 319
320 320 @skip_doctest
321 321 @contextmanager
322 322 def provisionalcompleter(action='ignore'):
323 323 """
324 324 This context manager has to be used in any place where unstable completer
325 325 behavior and API may be called.
326 326
327 327 >>> with provisionalcompleter():
328 328 ... completer.do_experimental_things() # works
329 329
330 330 >>> completer.do_experimental_things() # raises.
331 331
332 332 .. note::
333 333
334 334 Unstable
335 335
336 336 By using this context manager you agree that the API in use may change
337 337 without warning, and that you won't complain if it does.
338 338
339 339 You also understand that, if the API is not to your liking, you should report
340 340 a bug to explain your use case upstream.
341 341
342 342 We'll be happy to get your feedback, feature requests, and improvements on
343 343 any of the unstable APIs!
344 344 """
345 345 with warnings.catch_warnings():
346 346 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
347 347 yield
348 348
349 349
350 350 def has_open_quotes(s):
351 351 """Return whether a string has open quotes.
352 352
353 353 This simply counts whether the number of quote characters of either type in
354 354 the string is odd.
355 355
356 356 Returns
357 357 -------
358 358 If there is an open quote, the quote character is returned. Else, return
359 359 False.
360 360 """
361 361 # We check " first, then ', so complex cases with nested quotes will get
362 362 # the " to take precedence.
363 363 if s.count('"') % 2:
364 364 return '"'
365 365 elif s.count("'") % 2:
366 366 return "'"
367 367 else:
368 368 return False
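A quick sketch of how the quote-counting rule behaves (the function is re-declared here so the snippet runs standalone):

```python
def has_open_quotes(s):
    # Same logic as above: an odd count of a quote character means the
    # quote is still open; double quotes take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    return False

assert has_open_quotes('print("hello') == '"'   # unmatched double quote
assert has_open_quotes("don't") == "'"          # unmatched single quote
assert has_open_quotes("'closed'") is False     # everything balanced
```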
369 369
370 370
371 371 def protect_filename(s, protectables=PROTECTABLES):
372 372 """Escape a string to protect certain characters."""
373 373 if set(s) & set(protectables):
374 374 if sys.platform == "win32":
375 375 return '"' + s + '"'
376 376 else:
377 377 return "".join(("\\" + c if c in protectables else c) for c in s)
378 378 else:
379 379 return s
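On POSIX platforms the escaping above behaves as follows (a condensed copy that drops the Windows branch, so the snippet runs standalone):

```python
PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

def protect_filename(s, protectables=PROTECTABLES):
    # Backslash-escape every protectable character (POSIX branch only).
    if set(s) & set(protectables):
        return "".join(("\\" + c if c in protectables else c) for c in s)
    return s

assert protect_filename("my file.txt") == "my\\ file.txt"
assert protect_filename("plain.txt") == "plain.txt"
```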
380 380
381 381
382 382 def expand_user(path:str) -> Tuple[str, bool, str]:
383 383 """Expand ``~``-style usernames in strings.
384 384
385 385 This is similar to :func:`os.path.expanduser`, but it computes and returns
386 386 extra information that will be useful if the input was being used in
387 387 computing completions, and you wish to return the completions with the
388 388 original '~' instead of its expanded value.
389 389
390 390 Parameters
391 391 ----------
392 392 path : str
393 393 String to be expanded. If no ~ is present, the output is the same as the
394 394 input.
395 395
396 396 Returns
397 397 -------
398 398 newpath : str
399 399 Result of ~ expansion in the input path.
400 400 tilde_expand : bool
401 401 Whether any expansion was performed or not.
402 402 tilde_val : str
403 403 The value that ~ was replaced with.
404 404 """
405 405 # Default values
406 406 tilde_expand = False
407 407 tilde_val = ''
408 408 newpath = path
409 409
410 410 if path.startswith('~'):
411 411 tilde_expand = True
412 412 rest = len(path)-1
413 413 newpath = os.path.expanduser(path)
414 414 if rest:
415 415 tilde_val = newpath[:-rest]
416 416 else:
417 417 tilde_val = newpath
418 418
419 419 return newpath, tilde_expand, tilde_val
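The three return values fit together so that ``tilde_val`` plus the remainder of the path reconstructs ``newpath`` (the function is re-declared here so the example runs standalone):

```python
import os

def expand_user(path):
    # Same logic as above: expand a leading ~ and remember what it became.
    tilde_expand = False
    tilde_val = ''
    newpath = path
    if path.startswith('~'):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        tilde_val = newpath[:-rest] if rest else newpath
    return newpath, tilde_expand, tilde_val

newpath, expanded, tilde_val = expand_user("~/notebooks")
assert expanded is True
assert newpath == tilde_val + "/notebooks"  # tilde_val is the home directory
assert expand_user("data/file.txt") == ("data/file.txt", False, '')
```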
420 420
421 421
422 422 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
423 423 """Does the opposite of expand_user, with its outputs.
424 424 """
425 425 if tilde_expand:
426 426 return path.replace(tilde_val, '~')
427 427 else:
428 428 return path
429 429
430 430
431 431 def completions_sorting_key(word):
432 432 """key for sorting completions
433 433
434 434 This does several things:
435 435
436 436 - Demote any completions starting with underscores to the end
437 437 - Insert any %magic and %%cellmagic completions in the alphabetical order
438 438 by their name
439 439 """
440 440 prio1, prio2 = 0, 0
441 441
442 442 if word.startswith('__'):
443 443 prio1 = 2
444 444 elif word.startswith('_'):
445 445 prio1 = 1
446 446
447 447 if word.endswith('='):
448 448 prio1 = -1
449 449
450 450 if word.startswith('%%'):
451 451 # If there's another % in there, this is something else, so leave it alone
452 452 if not "%" in word[2:]:
453 453 word = word[2:]
454 454 prio2 = 2
455 455 elif word.startswith('%'):
456 456 if not "%" in word[1:]:
457 457 word = word[1:]
458 458 prio2 = 1
459 459
460 460 return prio1, word, prio2
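The effect of the key is easiest to see on a small sorted list (the function is re-declared here so the example runs standalone):

```python
def completions_sorting_key(word):
    # Same logic as above: demote underscored names, and fold %magics
    # into the alphabetical order of their bare name.
    prio1, prio2 = 0, 0
    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1
    if word.endswith('='):
        prio1 = -1
    if word.startswith('%%'):
        if "%" not in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if "%" not in word[1:]:
            word = word[1:]
            prio2 = 1
    return prio1, word, prio2

words = ["__dunder", "_private", "%time", "alpha", "zeta"]
assert sorted(words, key=completions_sorting_key) == [
    "alpha", "%time", "zeta", "_private", "__dunder"
]
```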
461 461
462 462
463 463 class _FakeJediCompletion:
464 464 """
465 465 This is a workaround to communicate to the UI that Jedi has crashed and to
466 466 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
467 467
468 468 Added in IPython 6.0, so it should likely be removed in 7.0.
469 469
470 470 """
471 471
472 472 def __init__(self, name):
473 473
474 474 self.name = name
475 475 self.complete = name
476 476 self.type = 'crashed'
477 477 self.name_with_symbols = name
478 478 self.signature = ""
479 479 self._origin = "fake"
480 480 self.text = "crashed"
481 481
482 482 def __repr__(self):
483 483 return '<Fake completion object jedi has crashed>'
484 484
485 485
486 486 _JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
487 487
488 488
489 489 class Completion:
490 490 """
491 491 Completion object used and returned by IPython completers.
492 492
493 493 .. warning::
494 494
495 495 Unstable
496 496
497 497 This class is unstable; the API may change without warning.
498 498 It will also raise unless used in the proper context manager.
499 499
500 500 This acts as a middle-ground :any:`Completion` object between the
501 501 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
502 502 object. While Jedi needs a lot of information about the evaluator and how the
503 503 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
504 504 need user-facing information:
505 505
506 506 - Which range should be replaced by what.
507 507 - Some metadata (like completion type), or meta information to be displayed
508 508 to the user.
509 509
510 510 For debugging purpose we can also store the origin of the completion (``jedi``,
511 511 ``IPython.python_matches``, ``IPython.magics_matches``...).
512 512 """
513 513
514 514 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
515 515
516 516 def __init__(
517 517 self,
518 518 start: int,
519 519 end: int,
520 520 text: str,
521 521 *,
522 522 type: Optional[str] = None,
523 523 _origin="",
524 524 signature="",
525 525 ) -> None:
526 526 warnings.warn(
527 527 "``Completion`` is a provisional API (as of IPython 6.0). "
528 528 "It may change without warnings. "
529 529 "Use in corresponding context manager.",
530 530 category=ProvisionalCompleterWarning,
531 531 stacklevel=2,
532 532 )
533 533
534 534 self.start = start
535 535 self.end = end
536 536 self.text = text
537 537 self.type = type
538 538 self.signature = signature
539 539 self._origin = _origin
540 540
541 541 def __repr__(self):
542 542 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
543 543 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
544 544
545 545 def __eq__(self, other) -> bool:
546 546 """
547 547 Equality and hash do not include the type (as some completers may not be
548 548 able to infer the type), but are used to (partially) de-duplicate
549 549 completions.
550 550
551 551 Completely de-duplicating completions is a bit trickier than just
552 552 comparing, as it depends on the surrounding text, which Completions are not
553 553 aware of.
554 554 """
555 555 return self.start == other.start and \
556 556 self.end == other.end and \
557 557 self.text == other.text
558 558
559 559 def __hash__(self):
560 560 return hash((self.start, self.end, self.text))
561 561
562 562
563 563 class SimpleCompletion:
564 564 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
565 565
566 566 .. warning::
567 567
568 568 Provisional
569 569
570 570 This class is used to describe the currently supported attributes of
571 571 simple completion items, and any additional implementation details
572 572 should not be relied on. Additional attributes may be included in
573 573 future versions, and the meaning of ``text`` disambiguated from its current
574 574 dual meaning of "text to insert" and "text to use as a label".
575 575 """
576 576
577 577 __slots__ = ["text", "type"]
578 578
579 579 def __init__(self, text: str, *, type: Optional[str] = None):
580 580 self.text = text
581 581 self.type = type
582 582
583 583 def __repr__(self):
584 584 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
585 585
586 586
587 587 class _MatcherResultBase(TypedDict):
588 588 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
589 589
590 590 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
591 591 matched_fragment: NotRequired[str]
592 592
593 593 #: Whether to suppress results from all other matchers (True), some
594 594 #: matchers (set of identifiers) or none (False); default is False.
595 595 suppress: NotRequired[Union[bool, Set[str]]]
596 596
597 597 #: Identifiers of matchers which should NOT be suppressed when this matcher
598 598 #: requests to suppress all other matchers; defaults to an empty set.
599 599 do_not_suppress: NotRequired[Set[str]]
600 600
601 601 #: Are completions already ordered and should be left as-is? Default is False.
602 602 ordered: NotRequired[bool]
603 603
604 604
605 605 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
606 606 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
607 607 """Result of new-style completion matcher."""
608 608
609 609 # note: TypedDict is added again to the inheritance chain
610 610 # in order to get __orig_bases__ for documentation
611 611
612 612 #: List of candidate completions
613 613 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
614 614
615 615
616 616 class _JediMatcherResult(_MatcherResultBase):
617 617 """Matching result returned by Jedi (will be processed differently)"""
618 618
619 619 #: list of candidate completions
620 620 completions: Iterator[_JediCompletionLike]
621 621
622 622
623 623 AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
624 624 AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
625 625
626 626
627 627 @dataclass
628 628 class CompletionContext:
629 629 """Completion context provided as an argument to matchers in the Matcher API v2."""
630 630
631 631 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
632 632 # which was not explicitly visible as an argument of the matcher, making any refactor
633 633 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
634 634 # from the completer, and make substituting them in sub-classes easier.
635 635
636 636 #: Relevant fragment of code directly preceding the cursor.
637 637 #: The extraction of token is implemented via splitter heuristic
638 638 #: (following readline behaviour for legacy reasons), which is user configurable
639 639 #: (by switching the greedy mode).
640 640 token: str
641 641
642 642 #: The full available content of the editor or buffer
643 643 full_text: str
644 644
645 645 #: Cursor position in the line (the same for ``full_text`` and ``text``).
646 646 cursor_position: int
647 647
648 648 #: Cursor line in ``full_text``.
649 649 cursor_line: int
650 650
651 651 #: The maximum number of completions that will be used downstream.
652 652 #: Matchers can use this information to abort early.
653 653 #: The built-in Jedi matcher is currently excepted from this limit.
654 654 # If not given, return all possible completions.
655 655 limit: Optional[int]
656 656
657 657 @cached_property
658 658 def text_until_cursor(self) -> str:
659 659 return self.line_with_cursor[: self.cursor_position]
660 660
661 661 @cached_property
662 662 def line_with_cursor(self) -> str:
663 663 return self.full_text.split("\n")[self.cursor_line]
664 664
665 665
666 666 #: Matcher results for API v2.
667 667 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
668 668
669 669
670 670 class _MatcherAPIv1Base(Protocol):
671 671 def __call__(self, text: str) -> List[str]:
672 672 """Call signature."""
673 673 ...
674 674
675 675 #: Used to construct the default matcher identifier
676 676 __qualname__: str
677 677
678 678
679 679 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
680 680 #: API version
681 681 matcher_api_version: Optional[Literal[1]]
682 682
683 683 def __call__(self, text: str) -> List[str]:
684 684 """Call signature."""
685 685 ...
686 686
687 687
688 688 #: Protocol describing Matcher API v1.
689 689 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
690 690
691 691
692 692 class MatcherAPIv2(Protocol):
693 693 """Protocol describing Matcher API v2."""
694 694
695 695 #: API version
696 696 matcher_api_version: Literal[2] = 2
697 697
698 698 def __call__(self, context: CompletionContext) -> MatcherResult:
699 699 """Call signature."""
700 700 ...
701 701
702 702 #: Used to construct the default matcher identifier
703 703 __qualname__: str
704 704
705 705
706 706 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
707 707
708 708
709 709 def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
710 710 api_version = _get_matcher_api_version(matcher)
711 711 return api_version == 1
712 712
713 713
714 714 def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
715 715 api_version = _get_matcher_api_version(matcher)
716 716 return api_version == 2
717 717
718 718
719 719 def _is_sizable(value: Any) -> TypeGuard[Sized]:
720 720 """Determines whether the object is sizable"""
721 721 return hasattr(value, "__len__")
722 722
723 723
724 724 def _is_iterator(value: Any) -> TypeGuard[Iterator]:
725 725 """Determines whether the object is an iterator"""
726 726 return hasattr(value, "__next__")
727 727
728 728
729 729 def has_any_completions(result: MatcherResult) -> bool:
730 730 """Check if any result includes any completions."""
731 731 completions = result["completions"]
732 732 if _is_sizable(completions):
733 733 return len(completions) != 0
734 734 if _is_iterator(completions):
735 735 try:
736 736 old_iterator = completions
737 737 first = next(old_iterator)
738 738 result["completions"] = cast(
739 739 Iterator[SimpleCompletion],
740 740 itertools.chain([first], old_iterator),
741 741 )
742 742 return True
743 743 except StopIteration:
744 744 return False
745 745 raise ValueError(
746 746 "Completions returned by matcher need to be an Iterator or a Sizable"
747 747 )
748 748
749 749
750 750 def completion_matcher(
751 751 *,
752 752 priority: Optional[float] = None,
753 753 identifier: Optional[str] = None,
754 754 api_version: int = 1,
755 755 ):
756 756 """Adds attributes describing the matcher.
757 757
758 758 Parameters
759 759 ----------
760 760 priority : Optional[float]
761 761 The priority of the matcher, determines the order of execution of matchers.
762 762 Higher priority means that the matcher will be executed first. Defaults to 0.
763 763 identifier : Optional[str]
764 764 identifier of the matcher allowing users to modify the behaviour via traitlets,
765 765 and also used for debugging (will be passed as ``origin`` with the completions).
766 766
767 767 Defaults to matcher function's ``__qualname__`` (for example,
768 768 ``IPCompleter.file_matcher`` for the built-in matcher defined
769 769 as a ``file_matcher`` method of the ``IPCompleter`` class).
770 770 api_version: Optional[int]
771 771 version of the Matcher API used by this matcher.
772 772 Currently supported values are 1 and 2.
773 773 Defaults to 1.
774 774 """
775 775
776 776 def wrapper(func: Matcher):
777 777 func.matcher_priority = priority or 0 # type: ignore
778 778 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
779 779 func.matcher_api_version = api_version # type: ignore
780 780 if TYPE_CHECKING:
781 781 if api_version == 1:
782 782 func = cast(MatcherAPIv1, func)
783 783 elif api_version == 2:
784 784 func = cast(MatcherAPIv2, func)
785 785 return func
786 786
787 787 return wrapper
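Decorating a matcher attaches the attributes that the dispatch helpers below read back (a simplified copy of the decorator, applied to a hypothetical matcher, so the example runs standalone):

```python
def completion_matcher(*, priority=None, identifier=None, api_version=1):
    # Simplified copy of the decorator above (TYPE_CHECKING casts omitted).
    def wrapper(func):
        func.matcher_priority = priority or 0
        func.matcher_identifier = identifier or func.__qualname__
        func.matcher_api_version = api_version
        return func
    return wrapper

@completion_matcher(priority=10, identifier="example.fruit_matcher")
def fruit_matcher(text):
    # Hypothetical v1 matcher used purely for illustration.
    return [f for f in ("apple", "apricot", "banana") if f.startswith(text)]

assert fruit_matcher.matcher_priority == 10
assert fruit_matcher.matcher_identifier == "example.fruit_matcher"
assert fruit_matcher("ap") == ["apple", "apricot"]
```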
788 788
789 789
790 790 def _get_matcher_priority(matcher: Matcher):
791 791 return getattr(matcher, "matcher_priority", 0)
792 792
793 793
794 794 def _get_matcher_id(matcher: Matcher):
795 795 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
796 796
797 797
798 798 def _get_matcher_api_version(matcher):
799 799 return getattr(matcher, "matcher_api_version", 1)
800 800
801 801
802 802 context_matcher = partial(completion_matcher, api_version=2)
803 803
804 804
805 805 _IC = Iterable[Completion]
806 806
807 807
808 808 def _deduplicate_completions(text: str, completions: _IC) -> _IC:
809 809 """
810 810 Deduplicate a set of completions.
811 811
812 812 .. warning::
813 813
814 814 Unstable
815 815
816 816 This function is unstable; the API may change without warning.
817 817
818 818 Parameters
819 819 ----------
820 820 text : str
821 821 text that should be completed.
822 822 completions : Iterator[Completion]
823 823 iterator over the completions to deduplicate
824 824
825 825 Yields
826 826 ------
827 827 `Completions` objects
828 828 Completions coming from multiple sources may be different but end up having
829 829 the same effect when applied to ``text``. If this is the case, this will
830 830 consider completions as equal and only emit the first encountered.
831 831 Not folded into `completions()` yet for debugging purposes, and to detect when
832 832 the IPython completer returns things that Jedi does not, but it should be
833 833 folded in at some point.
834 834 """
835 835 completions = list(completions)
836 836 if not completions:
837 837 return
838 838
839 839 new_start = min(c.start for c in completions)
840 840 new_end = max(c.end for c in completions)
841 841
842 842 seen = set()
843 843 for c in completions:
844 844 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
845 845 if new_text not in seen:
846 846 yield c
847 847 seen.add(new_text)
848 848
849 849
850 850 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
851 851 """
852 852 Rectify a set of completions to all have the same ``start`` and ``end``
853 853
854 854 .. warning::
855 855
856 856 Unstable
857 857
858 858 This function is unstable; the API may change without warning.
859 859 It will also raise unless used in the proper context manager.
860 860
861 861 Parameters
862 862 ----------
863 863 text : str
864 864 text that should be completed.
865 865 completions : Iterator[Completion]
866 866 iterator over the completions to rectify
867 867 _debug : bool
868 868 Log failed completion
869 869
870 870 Notes
871 871 -----
872 872 The :any:`jedi.api.classes.Completion` objects returned by Jedi may not have the same start and end, though
873 873 the Jupyter Protocol requires them to. This will readjust
874 874 the completion to have the same ``start`` and ``end`` by padding both
875 875 extremities with surrounding text.
876 876
877 877 During stabilisation this should support a ``_debug`` option to log which
878 878 completions are returned by the IPython completer and not found in Jedi, in
879 879 order to make upstream bug reports.
880 880 """
881 881 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
882 882 "It may change without warnings. "
883 883 "Use in corresponding context manager.",
884 884 category=ProvisionalCompleterWarning, stacklevel=2)
885 885
886 886 completions = list(completions)
887 887 if not completions:
888 888 return
889 889 starts = (c.start for c in completions)
890 890 ends = (c.end for c in completions)
891 891
892 892 new_start = min(starts)
893 893 new_end = max(ends)
894 894
895 895 seen_jedi = set()
896 896 seen_python_matches = set()
897 897 for c in completions:
898 898 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
899 899 if c._origin == 'jedi':
900 900 seen_jedi.add(new_text)
901 elif c._origin == 'IPCompleter.python_matches':
901 elif c._origin == "IPCompleter.python_matcher":
902 902 seen_python_matches.add(new_text)
903 903 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
904 904 diff = seen_python_matches.difference(seen_jedi)
905 905 if diff and _debug:
906 906 print('IPython.python matches have extras:', diff)
907 907
908 908
909 909 if sys.platform == 'win32':
910 910 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
911 911 else:
912 912 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
913 913
914 914 GREEDY_DELIMS = ' =\r\n'
915 915
916 916
917 917 class CompletionSplitter(object):
918 918 """An object to split an input line in a manner similar to readline.
919 919
920 920 By having our own implementation, we can expose readline-like completion in
921 921 a uniform manner to all frontends. This object only needs to be given the
922 922 line of text to be split and the cursor position on said line, and it
923 923 returns the 'word' to be completed on at the cursor after splitting the
924 924 entire line.
925 925
926 926 What characters are used as splitting delimiters can be controlled by
927 927 setting the ``delims`` attribute (this is a property that internally
928 928 automatically builds the necessary regular expression)"""
929 929
930 930 # Private interface
931 931
932 932 # A string of delimiter characters. The default value makes sense for
933 933 # IPython's most typical usage patterns.
934 934 _delims = DELIMS
935 935
936 936 # The expression (a normal string) to be compiled into a regular expression
937 937 # for actual splitting. We store it as an attribute mostly for ease of
938 938 # debugging, since this type of code can be so tricky to debug.
939 939 _delim_expr = None
940 940
941 941 # The regular expression that does the actual splitting
942 942 _delim_re = None
943 943
944 944 def __init__(self, delims=None):
945 945 delims = CompletionSplitter._delims if delims is None else delims
946 946 self.delims = delims
947 947
948 948 @property
949 949 def delims(self):
950 950 """Return the string of delimiter characters."""
951 951 return self._delims
952 952
953 953 @delims.setter
954 954 def delims(self, delims):
955 955 """Set the delimiters for line splitting."""
956 956 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
957 957 self._delim_re = re.compile(expr)
958 958 self._delims = delims
959 959 self._delim_expr = expr
960 960
961 961 def split_line(self, line, cursor_pos=None):
962 962 """Split a line of text with a cursor at the given position.
963 963 """
964 964 l = line if cursor_pos is None else line[:cursor_pos]
965 965 return self._delim_re.split(l)[-1]
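For illustration, the split performed by `split_line` can be sketched standalone using the same kind of character-class regex, with the delimiter set copied from the non-Windows `DELIMS` above:

```python
import re

# Sketch of CompletionSplitter.split_line: build a character-class regex
# from the delimiter string and keep the last fragment left of the cursor.
delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
delim_re = re.compile("[" + "".join("\\" + c for c in delims) + "]")

def split_line(line, cursor_pos=None):
    # Only the text before the cursor participates in the split.
    l = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(l)[-1]

split_line("print(foo.ba")     # 'foo.ba' -- dots are not delimiters
split_line("a = obj.attr", 8)  # 'obj.' -- only text before the cursor counts
```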
966 966
967 967
968 968
969 969 class Completer(Configurable):
970 970
971 971 greedy = Bool(
972 972 False,
973 973 help="""Activate greedy completion.
974 974
975 975 .. deprecated:: 8.8
976 976 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
977 977
978 978 When enabled in IPython 8.8 or newer, changes configuration as follows:
979 979
980 980 - ``Completer.evaluation = 'unsafe'``
981 981 - ``Completer.auto_close_dict_keys = True``
982 982 """,
983 983 ).tag(config=True)
984 984
985 985 evaluation = Enum(
986 986 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
987 987 default_value="limited",
988 988 help="""Policy for code evaluation under completion.
989 989
990 990 Successive options allow enabling more eager evaluation for better
991 991 completion suggestions, including for nested dictionaries, nested lists,
992 992 or even results of function calls.
993 993 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
994 994 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
995 995
996 996 Allowed values are:
997 997
998 998 - ``forbidden``: no evaluation of code is permitted,
999 999 - ``minimal``: evaluation of literals and access to built-in namespace;
1000 1000 no item/attribute evaluation, no access to locals/globals,
1001 1001 no evaluation of any operations or comparisons.
1002 1002 - ``limited``: access to all namespaces, evaluation of hard-coded methods
1003 1003 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
1004 1004 :any:`object.__getitem__`) on allow-listed objects (for example:
1005 1005 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
1006 1006 - ``unsafe``: evaluation of all methods and function calls but not of
1007 1007 syntax with side-effects like `del x`,
1008 1008 - ``dangerous``: completely arbitrary evaluation.
1009 1009 """,
1010 1010 ).tag(config=True)
1011 1011
1012 1012 use_jedi = Bool(default_value=JEDI_INSTALLED,
1013 1013 help="Experimental: Use Jedi to generate autocompletions. "
1014 1014 "Defaults to True if jedi is installed.").tag(config=True)
1015 1015
1016 1016 jedi_compute_type_timeout = Int(default_value=400,
1017 1017 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1018 1018 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1019 1019 performance by preventing Jedi from building its cache.
1020 1020 """).tag(config=True)
1021 1021
1022 1022 debug = Bool(default_value=False,
1023 1023 help='Enable debug for the Completer. Mostly print extra '
1024 1024 'information for experimental jedi integration.')\
1025 1025 .tag(config=True)
1026 1026
1027 1027 backslash_combining_completions = Bool(True,
1028 1028 help="Enable unicode completions, e.g. \\alpha<tab> . "
1029 1029 "Includes completion of latex commands, unicode names, and expanding "
1030 1030 "unicode characters back to latex commands.").tag(config=True)
1031 1031
1032 1032 auto_close_dict_keys = Bool(
1033 1033 False,
1034 1034 help="""
1035 1035 Enable auto-closing dictionary keys.
1036 1036
1037 1037 When enabled string keys will be suffixed with a final quote
1038 1038 (matching the opening quote), tuple keys will also receive a
1039 1039 separating comma if needed, and keys which are final will
1040 1040 receive a closing bracket (``]``).
1041 1041 """,
1042 1042 ).tag(config=True)
1043 1043
1044 1044 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1045 1045 """Create a new completer for the command line.
1046 1046
1047 1047 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1048 1048
1049 1049 If unspecified, the default namespace where completions are performed
1050 1050 is __main__ (technically, __main__.__dict__). Namespaces should be
1051 1051 given as dictionaries.
1052 1052
1053 1053 An optional second namespace can be given. This allows the completer
1054 1054 to handle cases where both the local and global scopes need to be
1055 1055 distinguished.
1056 1056 """
1057 1057
1058 1058 # Don't bind to namespace quite yet, but flag whether the user wants a
1059 1059 # specific namespace or to use __main__.__dict__. This will allow us
1060 1060 # to bind to __main__.__dict__ at completion time, not now.
1061 1061 if namespace is None:
1062 1062 self.use_main_ns = True
1063 1063 else:
1064 1064 self.use_main_ns = False
1065 1065 self.namespace = namespace
1066 1066
1067 1067 # The global namespace, if given, can be bound directly
1068 1068 if global_namespace is None:
1069 1069 self.global_namespace = {}
1070 1070 else:
1071 1071 self.global_namespace = global_namespace
1072 1072
1073 1073 self.custom_matchers = []
1074 1074
1075 1075 super(Completer, self).__init__(**kwargs)
1076 1076
1077 1077 def complete(self, text, state):
1078 1078 """Return the next possible completion for 'text'.
1079 1079
1080 1080 This is called successively with state == 0, 1, 2, ... until it
1081 1081 returns None. The completion should begin with 'text'.
1082 1082
1083 1083 """
1084 1084 if self.use_main_ns:
1085 1085 self.namespace = __main__.__dict__
1086 1086
1087 1087 if state == 0:
1088 1088 if "." in text:
1089 1089 self.matches = self.attr_matches(text)
1090 1090 else:
1091 1091 self.matches = self.global_matches(text)
1092 1092 try:
1093 1093 return self.matches[state]
1094 1094 except IndexError:
1095 1095 return None
1096 1096
1097 1097 def global_matches(self, text):
1098 1098 """Compute matches when text is a simple name.
1099 1099
1100 1100 Return a list of all keywords, built-in functions and names currently
1101 1101 defined in self.namespace or self.global_namespace that match.
1102 1102
1103 1103 """
1104 1104 matches = []
1105 1105 match_append = matches.append
1106 1106 n = len(text)
1107 1107 for lst in [
1108 1108 keyword.kwlist,
1109 1109 builtin_mod.__dict__.keys(),
1110 1110 list(self.namespace.keys()),
1111 1111 list(self.global_namespace.keys()),
1112 1112 ]:
1113 1113 for word in lst:
1114 1114 if word[:n] == text and word != "__builtins__":
1115 1115 match_append(word)
1116 1116
1117 1117 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1118 1118 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1119 1119 shortened = {
1120 1120 "_".join([sub[0] for sub in word.split("_")]): word
1121 1121 for word in lst
1122 1122 if snake_case_re.match(word)
1123 1123 }
1124 1124 for word in shortened.keys():
1125 1125 if word[:n] == text and word != "__builtins__":
1126 1126 match_append(shortened[word])
1127 1127 return matches
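The second loop in `global_matches` above implements abbreviation matching for snake_case names: typing the initials of each part, joined by underscores, completes to the full name. A standalone sketch of that idea (helper name and sample names are illustrative only):

```python
import re

# Same regex as global_matches: at least two non-empty parts joined by '_'.
snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

def abbrev_matches(text, names):
    # Map each snake_case name to the initials of its parts, e.g.
    # 'data_frame' -> 'd_f', then match the typed text as a prefix.
    shortened = {
        "_".join(part[0] for part in name.split("_")): name
        for name in names
        if snake_case_re.match(name)
    }
    return [name for abbrev, name in shortened.items()
            if abbrev.startswith(text)]

abbrev_matches("d_f", ["data_frame", "load_csv", "total"])  # ['data_frame']
```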
1128 1128
1129 1129 def attr_matches(self, text):
1130 1130 """Compute matches when text contains a dot.
1131 1131
1132 1132 Assuming the text is of the form NAME.NAME....[NAME], and is
1133 1133 evaluatable in self.namespace or self.global_namespace, it will be
1134 1134 evaluated and its attributes (as revealed by dir()) are used as
1135 1135 possible completions. (For class instances, class members are
1136 1136 also considered.)
1137 1137
1138 1138 WARNING: this can still invoke arbitrary C code, if an object
1139 1139 with a __getattr__ hook is evaluated.
1140 1140
1141 1141 """
1142 return self._attr_matches(text)[0]
1143
1144 def _attr_matches(self, text, include_prefix=True) -> Tuple[Sequence[str], str]:
1142 1145 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1143 1146 if not m2:
1144 return []
1147 return [], ""
1145 1148 expr, attr = m2.group(1, 2)
1146 1149
1147 1150 obj = self._evaluate_expr(expr)
1148 1151
1149 1152 if obj is not_found:
1150 return []
1153 return [], ""
1151 1154
1152 1155 if self.limit_to__all__ and hasattr(obj, '__all__'):
1153 1156 words = get__all__entries(obj)
1154 1157 else:
1155 1158 words = dir2(obj)
1156 1159
1157 1160 try:
1158 1161 words = generics.complete_object(obj, words)
1159 1162 except TryNext:
1160 1163 pass
1161 1164 except AssertionError:
1162 1165 raise
1163 1166 except Exception:
1164 1167 # Silence errors from completion function
1165 1168 pass
1166 1169 # Build match list to return
1167 1170 n = len(attr)
1168 1171
1169 1172 # Note: ideally we would just return words here and the prefix
1170 1173 # reconciliator would know that we intend to append to rather than
1171 1174 # replace the input text; this requires refactoring to return range
1172 1175 # which ought to be replaced (as does jedi).
1176 if include_prefix:
1173 1177 tokens = _parse_tokens(expr)
1174 1178 rev_tokens = reversed(tokens)
1175 1179 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1176 1180 name_turn = True
1177 1181
1178 1182 parts = []
1179 1183 for token in rev_tokens:
1180 1184 if token.type in skip_over:
1181 1185 continue
1182 1186 if token.type == tokenize.NAME and name_turn:
1183 1187 parts.append(token.string)
1184 1188 name_turn = False
1185 elif token.type == tokenize.OP and token.string == "." and not name_turn:
1189 elif (
1190 token.type == tokenize.OP and token.string == "." and not name_turn
1191 ):
1186 1192 parts.append(token.string)
1187 1193 name_turn = True
1188 1194 else:
1189 1195 # short-circuit if not empty nor name token
1190 1196 break
1191 1197
1192 1198 prefix_after_space = "".join(reversed(parts))
1199 else:
1200 prefix_after_space = ""
1193 1201
1194 return ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr]
1202 return (
1203 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1204 "." + attr,
1205 )
1195 1206
1196 1207 def _evaluate_expr(self, expr):
1197 1208 obj = not_found
1198 1209 done = False
1199 1210 while not done and expr:
1200 1211 try:
1201 1212 obj = guarded_eval(
1202 1213 expr,
1203 1214 EvaluationContext(
1204 1215 globals=self.global_namespace,
1205 1216 locals=self.namespace,
1206 1217 evaluation=self.evaluation,
1207 1218 ),
1208 1219 )
1209 1220 done = True
1210 1221 except Exception as e:
1211 1222 if self.debug:
1212 1223 print("Evaluation exception", e)
1213 1224 # trim the expression to remove any invalid prefix
1214 1225 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1215 1226 # where parenthesis is not closed.
1216 1227 # TODO: make this faster by reusing parts of the computation?
1217 1228 expr = expr[1:]
1218 1229 return obj
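The retry loop in `_evaluate_expr` trims characters from the front until the remaining expression evaluates. A minimal sketch of the same idea, using plain `eval` purely as a stand-in for `guarded_eval` (never do this on untrusted input; the real completer applies an evaluation policy here):

```python
def evaluate_with_trim(expr, namespace):
    """Strip invalid leading characters until the expression evaluates."""
    while expr:
        try:
            # Stand-in for guarded_eval with its EvaluationContext.
            return eval(expr, {}, namespace)
        except Exception:
            # e.g. the user typed `(d[`, leaving `(d` with an unclosed
            # parenthesis; dropping the leading `(` yields a valid expression.
            expr = expr[1:]
    return None

result = evaluate_with_trim("(d", {"d": {"a": 1}})  # evaluates 'd'
```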
1219 1230
1220 1231 def get__all__entries(obj):
1221 1232 """returns the strings in the __all__ attribute"""
1222 1233 try:
1223 1234 words = getattr(obj, '__all__')
1224 1235 except:
1225 1236 return []
1226 1237
1227 1238 return [w for w in words if isinstance(w, str)]
1228 1239
1229 1240
1230 1241 class _DictKeyState(enum.Flag):
1231 1242 """Represent state of the key match in context of other possible matches.
1232 1243
1233 1244 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1234 1245 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there is no tuple members to add beyond `'b'`.
1235 1246 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1236 1247 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM & END_OF_TUPLE}`
1237 1248 """
1238 1249
1239 1250 BASELINE = 0
1240 1251 END_OF_ITEM = enum.auto()
1241 1252 END_OF_TUPLE = enum.auto()
1242 1253 IN_TUPLE = enum.auto()
1243 1254
1244 1255
1245 1256 def _parse_tokens(c):
1246 1257 """Parse tokens even if there is an error."""
1247 1258 tokens = []
1248 1259 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1249 1260 while True:
1250 1261 try:
1251 1262 tokens.append(next(token_generator))
1252 1263 except tokenize.TokenError:
1253 1264 return tokens
1254 1265 except StopIteration:
1255 1266 return tokens
1256 1267
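`_parse_tokens` exists because `tokenize` raises `TokenError` on incomplete input, such as an unclosed bracket, which the completer routinely sees mid-typing. A self-contained sketch of the error-tolerant collection:

```python
import tokenize

def parse_tokens(code):
    """Collect tokens even when the input is syntactically incomplete."""
    tokens = []
    gen = tokenize.generate_tokens(iter(code.splitlines()).__next__)
    while True:
        try:
            tokens.append(next(gen))
        except (tokenize.TokenError, StopIteration):
            # TokenError: e.g. EOF inside an unclosed bracket; keep what we have.
            return tokens

toks = parse_tokens("d[0x1")  # unclosed bracket, but no exception escapes
strings = [t.string for t in toks
           if t.type in (tokenize.NAME, tokenize.OP, tokenize.NUMBER)]
# strings == ['d', '[', '0x1']
```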
1257 1268
1258 1269 def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1259 1270 """Match any valid Python numeric literal in a prefix of dictionary keys.
1260 1271
1261 1272 References:
1262 1273 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1263 1274 - https://docs.python.org/3/library/tokenize.html
1264 1275 """
1265 1276 if prefix[-1].isspace():
1266 1277 # if user typed a space we do not have anything to complete
1267 1278 # even if there was a valid number token before
1268 1279 return None
1269 1280 tokens = _parse_tokens(prefix)
1270 1281 rev_tokens = reversed(tokens)
1271 1282 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1272 1283 number = None
1273 1284 for token in rev_tokens:
1274 1285 if token.type in skip_over:
1275 1286 continue
1276 1287 if number is None:
1277 1288 if token.type == tokenize.NUMBER:
1278 1289 number = token.string
1279 1290 continue
1280 1291 else:
1281 1292 # we did not match a number
1282 1293 return None
1283 1294 if token.type == tokenize.OP:
1284 1295 if token.string == ",":
1285 1296 break
1286 1297 if token.string in {"+", "-"}:
1287 1298 number = token.string + number
1288 1299 else:
1289 1300 return None
1290 1301 return number
1291 1302
1292 1303
1293 1304 _INT_FORMATS = {
1294 1305 "0b": bin,
1295 1306 "0o": oct,
1296 1307 "0x": hex,
1297 1308 }
1298 1309
1299 1310
1300 1311 def match_dict_keys(
1301 1312 keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
1302 1313 prefix: str,
1303 1314 delims: str,
1304 1315 extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
1305 1316 ) -> Tuple[str, int, Dict[str, _DictKeyState]]:
1306 1317 """Used by dict_key_matches, matching the prefix to a list of keys
1307 1318
1308 1319 Parameters
1309 1320 ----------
1310 1321 keys
1311 1322 list of keys in dictionary currently being completed.
1312 1323 prefix
1313 1324 Part of the text already typed by the user. E.g. `mydict[b'fo`
1314 1325 delims
1315 1326 String of delimiters to consider when finding the current key.
1316 1327 extra_prefix : optional
1317 1328 Part of the text already typed in multi-key index cases. E.g. for
1318 1329 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1319 1330
1320 1331 Returns
1321 1332 -------
1322 1333 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1323 1334 ``quote`` being the quote that needs to be used to close the current string,
1324 1335 ``token_start`` the position where the replacement should start occurring,
1325 1336 ``matched`` a dictionary mapping completion replacements to the state of
1326 1337 the corresponding key match.
1327 1338 """
1328 1339 prefix_tuple = extra_prefix if extra_prefix else ()
1329 1340
1330 1341 prefix_tuple_size = sum(
1331 1342 [
1332 1343 # for pandas, do not count slices as taking space
1333 1344 not isinstance(k, slice)
1334 1345 for k in prefix_tuple
1335 1346 ]
1336 1347 )
1337 1348 text_serializable_types = (str, bytes, int, float, slice)
1338 1349
1339 1350 def filter_prefix_tuple(key):
1340 1351 # Reject too short keys
1341 1352 if len(key) <= prefix_tuple_size:
1342 1353 return False
1343 1354 # Reject keys which cannot be serialised to text
1344 1355 for k in key:
1345 1356 if not isinstance(k, text_serializable_types):
1346 1357 return False
1347 1358 # Reject keys that do not match the prefix
1348 1359 for k, pt in zip(key, prefix_tuple):
1349 1360 if k != pt and not isinstance(pt, slice):
1350 1361 return False
1351 1362 # All checks passed!
1352 1363 return True
1353 1364
1354 1365 filtered_key_is_final: Dict[
1355 1366 Union[str, bytes, int, float], _DictKeyState
1356 1367 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1357 1368
1358 1369 for k in keys:
1359 1370 # If at least one of the matches is not final, mark as undetermined.
1360 1371 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1361 1372 # `111` appears final on first match but is not final on the second.
1362 1373
1363 1374 if isinstance(k, tuple):
1364 1375 if filter_prefix_tuple(k):
1365 1376 key_fragment = k[prefix_tuple_size]
1366 1377 filtered_key_is_final[key_fragment] |= (
1367 1378 _DictKeyState.END_OF_TUPLE
1368 1379 if len(k) == prefix_tuple_size + 1
1369 1380 else _DictKeyState.IN_TUPLE
1370 1381 )
1371 1382 elif prefix_tuple_size > 0:
1372 1383 # we are completing a tuple but this key is not a tuple,
1373 1384 # so we should ignore it
1374 1385 pass
1375 1386 else:
1376 1387 if isinstance(k, text_serializable_types):
1377 1388 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1378 1389
1379 1390 filtered_keys = filtered_key_is_final.keys()
1380 1391
1381 1392 if not prefix:
1382 1393 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1383 1394
1384 1395 quote_match = re.search("(?:\"|')", prefix)
1385 1396 is_user_prefix_numeric = False
1386 1397
1387 1398 if quote_match:
1388 1399 quote = quote_match.group()
1389 1400 valid_prefix = prefix + quote
1390 1401 try:
1391 1402 prefix_str = literal_eval(valid_prefix)
1392 1403 except Exception:
1393 1404 return "", 0, {}
1394 1405 else:
1395 1406 # If it does not look like a string, let's assume
1396 1407 # we are dealing with a number or variable.
1397 1408 number_match = _match_number_in_dict_key_prefix(prefix)
1398 1409
1399 1410 # We do not want the key matcher to suggest variable names, so we give up:
1400 1411 if number_match is None:
1401 1412 # The alternative would be to assume that the user forgot the quote
1402 1413 # and if the substring matches, suggest adding it at the start.
1403 1414 return "", 0, {}
1404 1415
1405 1416 prefix_str = number_match
1406 1417 is_user_prefix_numeric = True
1407 1418 quote = ""
1408 1419
1409 1420 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1410 1421 token_match = re.search(pattern, prefix, re.UNICODE)
1411 1422 assert token_match is not None # silence mypy
1412 1423 token_start = token_match.start()
1413 1424 token_prefix = token_match.group()
1414 1425
1415 1426 matched: Dict[str, _DictKeyState] = {}
1416 1427
1417 1428 str_key: Union[str, bytes]
1418 1429
1419 1430 for key in filtered_keys:
1420 1431 if isinstance(key, (int, float)):
1421 1432 # User typed a number but this key is not a number.
1422 1433 if not is_user_prefix_numeric:
1423 1434 continue
1424 1435 str_key = str(key)
1425 1436 if isinstance(key, int):
1426 1437 int_base = prefix_str[:2].lower()
1427 1438 # if user typed integer using binary/oct/hex notation:
1428 1439 if int_base in _INT_FORMATS:
1429 1440 int_format = _INT_FORMATS[int_base]
1430 1441 str_key = int_format(key)
1431 1442 else:
1432 1443 # User typed a string but this key is a number.
1433 1444 if is_user_prefix_numeric:
1434 1445 continue
1435 1446 str_key = key
1436 1447 try:
1437 1448 if not str_key.startswith(prefix_str):
1438 1449 continue
1439 1450 except (AttributeError, TypeError, UnicodeError) as e:
1440 1451 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1441 1452 continue
1442 1453
1443 1454 # reformat remainder of key to begin with prefix
1444 1455 rem = str_key[len(prefix_str) :]
1445 1456 # force repr wrapped in '
1446 1457 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1447 1458 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1448 1459 if quote == '"':
1449 1460 # The entered prefix is quoted with ",
1450 1461 # but the match is quoted with '.
1451 1462 # A contained " hence needs escaping for comparison:
1452 1463 rem_repr = rem_repr.replace('"', '\\"')
1453 1464
1454 1465 # then reinsert prefix from start of token
1455 1466 match = "%s%s" % (token_prefix, rem_repr)
1456 1467
1457 1468 matched[match] = filtered_key_is_final[key]
1458 1469 return quote, token_start, matched
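The string-key branch above hinges on closing the user's half-typed quote and evaluating the result. A heavily simplified, hypothetical sketch of just that branch (no tuple keys, numbers, delimiters, or re-quoting of the match):

```python
import re
from ast import literal_eval

def match_str_keys(keys, prefix):
    """Match string keys against a typed prefix such as "'fo" (quote included)."""
    quote_match = re.search("(?:\"|')", prefix)
    if not quote_match:
        return []
    quote = quote_match.group()
    try:
        # Close the quote the user opened, then evaluate to get the raw text.
        typed = literal_eval(prefix + quote)
    except Exception:
        return []
    return [k for k in keys if isinstance(k, str) and k.startswith(typed)]

match_str_keys(["foo", "food", "bar"], "'fo")  # ['foo', 'food']
```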
1459 1470
1460 1471
1461 1472 def cursor_to_position(text:str, line:int, column:int)->int:
1462 1473 """
1463 1474 Convert the (line,column) position of the cursor in text to an offset in a
1464 1475 string.
1465 1476
1466 1477 Parameters
1467 1478 ----------
1468 1479 text : str
1469 1480 The text in which to calculate the cursor offset
1470 1481 line : int
1471 1482 Line of the cursor; 0-indexed
1472 1483 column : int
1473 1484 Column of the cursor 0-indexed
1474 1485
1475 1486 Returns
1476 1487 -------
1477 1488 Position of the cursor in ``text``, 0-indexed.
1478 1489
1479 1490 See Also
1480 1491 --------
1481 1492 position_to_cursor : reciprocal of this function
1482 1493
1483 1494 """
1484 1495 lines = text.split('\n')
1485 1496 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1486 1497
1487 1498 return sum(len(l) + 1 for l in lines[:line]) + column
1488 1499
1489 1500 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1490 1501 """
1491 1502 Convert the position of the cursor in text (0 indexed) to a line
1492 1503 number(0-indexed) and a column number (0-indexed) pair
1493 1504
1494 1505 Position should be a valid position in ``text``.
1495 1506
1496 1507 Parameters
1497 1508 ----------
1498 1509 text : str
1499 1510 The text in which to calculate the cursor offset
1500 1511 offset : int
1501 1512 Position of the cursor in ``text``, 0-indexed.
1502 1513
1503 1514 Returns
1504 1515 -------
1505 1516 (line, column) : (int, int)
1506 1517 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1507 1518
1508 1519 See Also
1509 1520 --------
1510 1521 cursor_to_position : reciprocal of this function
1511 1522
1512 1523 """
1513 1524
1514 1525 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1515 1526
1516 1527 before = text[:offset]
1517 1528 blines = before.split('\n') # ! splitlines trims a trailing \n
1518 1529 line = before.count('\n')
1519 1530 col = len(blines[-1])
1520 1531 return line, col
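The two conversions above are reciprocal; a condensed sketch of both with a round-trip check:

```python
def cursor_to_position(text, line, column):
    """Convert a 0-indexed (line, column) pair to an offset in text."""
    lines = text.split("\n")
    # Each earlier line contributes its length plus one for the '\n'.
    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text, offset):
    """Convert an offset in text back to a 0-indexed (line, column) pair."""
    before = text[:offset]
    return before.count("\n"), len(before.split("\n")[-1])

text = "ab\ncd"
cursor_to_position(text, 1, 1)  # 4, i.e. text[4] == 'd'
position_to_cursor(text, 4)     # (1, 1): round-trips
```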
1521 1532
1522 1533
1523 1534 def _safe_isinstance(obj, module, class_name, *attrs):
1524 1535 """Checks if obj is an instance of module.class_name if loaded
1525 1536 """
1526 1537 if module in sys.modules:
1527 1538 m = sys.modules[module]
1528 1539 for attr in [class_name, *attrs]:
1529 1540 m = getattr(m, attr)
1530 1541 return isinstance(obj, m)
1531 1542
1532 1543
1533 1544 @context_matcher()
1534 1545 def back_unicode_name_matcher(context: CompletionContext):
1535 1546 """Match Unicode characters back to Unicode name
1536 1547
1537 1548 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API.
1538 1549 """
1539 1550 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1540 1551 return _convert_matcher_v1_result_to_v2(
1541 1552 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1542 1553 )
1543 1554
1544 1555
1545 1556 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1546 1557 """Match Unicode characters back to Unicode name
1547 1558
1548 1559 This does ``β˜ƒ`` -> ``\\snowman``
1549 1560
1550 1561 Note that snowman is not a valid python3 combining character but will be expanded.
1551 1562 It will, however, not be recombined back into the snowman character by the completion machinery.
1552 1563
1553 1564 Nor will this back-complete standard escape sequences like \\n, \\b ...
1554 1565
1555 1566 .. deprecated:: 8.6
1556 1567 You can use :meth:`back_unicode_name_matcher` instead.
1557 1568
1558 1569 Returns
1559 1570 =======
1560 1571
1561 1572 Return a tuple with two elements:
1562 1573
1563 1574 - The Unicode character that was matched (preceded with a backslash), or
1564 1575 empty string,
1565 1576 - a sequence (of length 1) with the name of the matched Unicode character,
1566 1577 preceded by a backslash, or empty if no match.
1567 1578 """
1568 1579 if len(text)<2:
1569 1580 return '', ()
1570 1581 maybe_slash = text[-2]
1571 1582 if maybe_slash != '\\':
1572 1583 return '', ()
1573 1584
1574 1585 char = text[-1]
1575 1586 # no expand on quote for completion in strings.
1576 1587 # nor backcomplete standard ascii keys
1577 1588 if char in string.ascii_letters or char in ('"',"'"):
1578 1589 return '', ()
1579 1590 try :
1580 1591 unic = unicodedata.name(char)
1581 1592 return '\\'+char,('\\'+unic,)
1582 1593 except KeyError:
1583 1594 pass
1584 1595 return '', ()
1585 1596
1586 1597
1587 1598 @context_matcher()
1588 1599 def back_latex_name_matcher(context: CompletionContext):
1589 1600 """Match latex characters back to unicode name
1590 1601
1591 1602 Same as :any:`back_latex_name_matches`, but adopted to new Matcher API.
1592 1603 """
1593 1604 fragment, matches = back_latex_name_matches(context.text_until_cursor)
1594 1605 return _convert_matcher_v1_result_to_v2(
1595 1606 matches, type="latex", fragment=fragment, suppress_if_matches=True
1596 1607 )
1597 1608
1598 1609
1599 1610 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1600 1611 """Match latex characters back to unicode name
1601 1612
1602 1613 This does ``\\β„΅`` -> ``\\aleph``
1603 1614
1604 1615 .. deprecated:: 8.6
1605 1616 You can use :meth:`back_latex_name_matcher` instead.
1606 1617 """
1607 1618 if len(text)<2:
1608 1619 return '', ()
1609 1620 maybe_slash = text[-2]
1610 1621 if maybe_slash != '\\':
1611 1622 return '', ()
1612 1623
1613 1624
1614 1625 char = text[-1]
1615 1626 # no expand on quote for completion in strings.
1616 1627 # nor backcomplete standard ascii keys
1617 1628 if char in string.ascii_letters or char in ('"',"'"):
1618 1629 return '', ()
1619 1630 try :
1620 1631 latex = reverse_latex_symbol[char]
1621 1632 # '\\' replace the \ as well
1622 1633 return '\\'+char,[latex]
1623 1634 except KeyError:
1624 1635 pass
1625 1636 return '', ()
1626 1637
1627 1638
1628 1639 def _formatparamchildren(parameter) -> str:
1629 1640 """
1630 1641 Get parameter name and value from Jedi Private API
1631 1642
1632 1643 Jedi does not expose a simple way to get `param=value` from its API.
1633 1644
1634 1645 Parameters
1635 1646 ----------
1636 1647 parameter
1637 1648 Jedi's function `Param`
1638 1649
1639 1650 Returns
1640 1651 -------
1641 1652 A string like 'a', 'b=1', '*args', '**kwargs'
1642 1653
1643 1654 """
1644 1655 description = parameter.description
1645 1656 if not description.startswith('param '):
1646 1657 raise ValueError('Jedi function parameter description has changed format. '
1647 1658 'Expected "param ...", found %r.' % description)
1648 1659 return description[6:]
1649 1660
1650 1661 def _make_signature(completion)-> str:
1651 1662 """
1652 1663 Make the signature from a jedi completion
1653 1664
1654 1665 Parameters
1655 1666 ----------
1656 1667 completion : jedi.Completion
1657 1668 the Jedi completion object, which may or may not complete to a function
1658 1669
1659 1670 Returns
1660 1671 -------
1661 1672 a string consisting of the function signature, with the parenthesis but
1662 1673 without the function name. example:
1663 1674 `(a, *args, b=1, **kwargs)`
1664 1675
1665 1676 """
1666 1677
1667 1678 # it looks like this might work on jedi 0.17
1668 1679 if hasattr(completion, 'get_signatures'):
1669 1680 signatures = completion.get_signatures()
1670 1681 if not signatures:
1671 1682 return '(?)'
1672 1683
1673 1684 c0 = completion.get_signatures()[0]
1674 1685 return '('+c0.to_string().split('(', maxsplit=1)[1]
1675 1686
1676 1687 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1677 1688 for p in signature.defined_names()) if f])
1678 1689
1679 1690
1680 1691 _CompleteResult = Dict[str, MatcherResult]
1681 1692
1682 1693
1683 1694 DICT_MATCHER_REGEX = re.compile(
1684 1695 r"""(?x)
1685 1696 ( # match dict-referring - or any get item object - expression
1686 1697 .+
1687 1698 )
1688 1699 \[ # open bracket
1689 1700 \s* # and optional whitespace
1690 1701 # Capture any number of serializable objects (e.g. "a", "b", 'c')
1691 1702 # and slices
1692 1703 ((?:(?:
1693 1704 (?: # closed string
1694 1705 [uUbB]? # string prefix (r not handled)
1695 1706 (?:
1696 1707 '(?:[^']|(?<!\\)\\')*'
1697 1708 |
1698 1709 "(?:[^"]|(?<!\\)\\")*"
1699 1710 )
1700 1711 )
1701 1712 |
1702 1713 # capture integers and slices
1703 1714 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1704 1715 |
1705 1716 # integer in bin/hex/oct notation
1706 1717 0[bBxXoO]_?(?:\w|\d)+
1707 1718 )
1708 1719 \s*,\s*
1709 1720 )*)
1710 1721 ((?:
1711 1722 (?: # unclosed string
1712 1723 [uUbB]? # string prefix (r not handled)
1713 1724 (?:
1714 1725 '(?:[^']|(?<!\\)\\')*
1715 1726 |
1716 1727 "(?:[^"]|(?<!\\)\\")*
1717 1728 )
1718 1729 )
1719 1730 |
1720 1731 # unfinished integer
1721 1732 (?:[-+]?\d+)
1722 1733 |
1723 1734 # integer in bin/hex/oct notation
1724 1735 0[bBxXoO]_?(?:\w|\d)+
1725 1736 )
1726 1737 )?
1727 1738 $
1728 1739 """
1729 1740 )
1730 1741
1731 1742
1732 1743 def _convert_matcher_v1_result_to_v2(
1733 1744 matches: Sequence[str],
1734 1745 type: str,
1735 1746 fragment: Optional[str] = None,
1736 1747 suppress_if_matches: bool = False,
1737 1748 ) -> SimpleMatcherResult:
1738 1749 """Utility to help with transition"""
1739 1750 result = {
1740 1751 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1741 1752 "suppress": (True if matches else False) if suppress_if_matches else False,
1742 1753 }
1743 1754 if fragment is not None:
1744 1755 result["matched_fragment"] = fragment
1745 1756 return cast(SimpleMatcherResult, result)
1746 1757
1747 1758
1748 1759 class IPCompleter(Completer):
1749 1760 """Extension of the completer class with IPython-specific features"""
1750 1761
1751 1762 @observe('greedy')
1752 1763 def _greedy_changed(self, change):
1753 1764 """update the splitter and readline delims when greedy is changed"""
1754 1765 if change["new"]:
1755 1766 self.evaluation = "unsafe"
1756 1767 self.auto_close_dict_keys = True
1757 1768 self.splitter.delims = GREEDY_DELIMS
1758 1769 else:
1759 1770 self.evaluation = "limited"
1760 1771 self.auto_close_dict_keys = False
1761 1772 self.splitter.delims = DELIMS
1762 1773
1763 1774 dict_keys_only = Bool(
1764 1775 False,
1765 1776 help="""
1766 1777 Whether to show dict key matches only.
1767 1778
1768 1779 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1769 1780 """,
1770 1781 )
1771 1782
1772 1783 suppress_competing_matchers = UnionTrait(
1773 1784 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1774 1785 default_value=None,
1775 1786 help="""
1776 1787 Whether to suppress completions from other *Matchers*.
1777 1788
 1778 1789 When set to ``None`` (default) the matchers will attempt to auto-detect
 1779 1790 whether suppression of other matchers is desirable. For example, when
 1780 1791 a line begins with `%` we expect a magic completion to be the only
 1781 1792 applicable option, and after ``my_dict['`` we usually expect a
 1782 1793 completion with an existing dictionary key.
1783 1794
1784 1795 If you want to disable this heuristic and see completions from all matchers,
1785 1796 set ``IPCompleter.suppress_competing_matchers = False``.
1786 1797 To disable the heuristic for specific matchers provide a dictionary mapping:
1787 1798 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1788 1799
1789 1800 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1790 1801 completions to the set of matchers with the highest priority;
1791 1802 this is equivalent to ``IPCompleter.merge_completions`` and
1792 1803 can be beneficial for performance, but will sometimes omit relevant
1793 1804 candidates from matchers further down the priority list.
1794 1805 """,
1795 1806 ).tag(config=True)
1796 1807
1797 1808 merge_completions = Bool(
1798 1809 True,
1799 1810 help="""Whether to merge completion results into a single list
1800 1811
1801 1812 If False, only the completion results from the first non-empty
1802 1813 completer will be returned.
1803 1814
1804 1815 As of version 8.6.0, setting the value to ``False`` is an alias for:
 1805 1816 ``IPCompleter.suppress_competing_matchers = True``.
1806 1817 """,
1807 1818 ).tag(config=True)
1808 1819
1809 1820 disable_matchers = ListTrait(
1810 1821 Unicode(),
1811 1822 help="""List of matchers to disable.
1812 1823
1813 1824 The list should contain matcher identifiers (see :any:`completion_matcher`).
1814 1825 """,
1815 1826 ).tag(config=True)
1816 1827
1817 1828 omit__names = Enum(
1818 1829 (0, 1, 2),
1819 1830 default_value=2,
1820 1831 help="""Instruct the completer to omit private method names
1821 1832
1822 1833 Specifically, when completing on ``object.<tab>``.
1823 1834
1824 1835 When 2 [default]: all names that start with '_' will be excluded.
1825 1836
1826 1837 When 1: all 'magic' names (``__foo__``) will be excluded.
1827 1838
1828 1839 When 0: nothing will be excluded.
1829 1840 """
1830 1841 ).tag(config=True)
1831 1842 limit_to__all__ = Bool(False,
1832 1843 help="""
1833 1844 DEPRECATED as of version 5.0.
1834 1845
1835 1846 Instruct the completer to use __all__ for the completion
1836 1847
1837 1848 Specifically, when completing on ``object.<tab>``.
1838 1849
1839 1850 When True: only those names in obj.__all__ will be included.
1840 1851
1841 1852 When False [default]: the __all__ attribute is ignored
1842 1853 """,
1843 1854 ).tag(config=True)
1844 1855
1845 1856 profile_completions = Bool(
1846 1857 default_value=False,
1847 1858 help="If True, emit profiling data for completion subsystem using cProfile."
1848 1859 ).tag(config=True)
1849 1860
1850 1861 profiler_output_dir = Unicode(
1851 1862 default_value=".completion_profiles",
1852 1863 help="Template for path at which to output profile data for completions."
1853 1864 ).tag(config=True)
1854 1865
1855 1866 @observe('limit_to__all__')
1856 1867 def _limit_to_all_changed(self, change):
 1857 1868 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
 1858 1869 'value has been deprecated since IPython 5.0, will be made to have '
 1859 1870 'no effect, and will be removed in a future version of IPython.',
 1860 1871 UserWarning)
1861 1872
1862 1873 def __init__(
1863 1874 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1864 1875 ):
1865 1876 """IPCompleter() -> completer
1866 1877
1867 1878 Return a completer object.
1868 1879
1869 1880 Parameters
1870 1881 ----------
1871 1882 shell
1872 1883 a pointer to the ipython shell itself. This is needed
1873 1884 because this completer knows about magic functions, and those can
1874 1885 only be accessed via the ipython instance.
1875 1886 namespace : dict, optional
1876 1887 an optional dict where completions are performed.
1877 1888 global_namespace : dict, optional
1878 1889 secondary optional dict for completions, to
1879 1890 handle cases (such as IPython embedded inside functions) where
1880 1891 both Python scopes are visible.
1881 1892 config : Config
 1882 1893 traitlets config object
1883 1894 **kwargs
1884 1895 passed to super class unmodified.
1885 1896 """
1886 1897
1887 1898 self.magic_escape = ESC_MAGIC
1888 1899 self.splitter = CompletionSplitter()
1889 1900
1890 1901 # _greedy_changed() depends on splitter and readline being defined:
1891 1902 super().__init__(
1892 1903 namespace=namespace,
1893 1904 global_namespace=global_namespace,
1894 1905 config=config,
1895 1906 **kwargs,
1896 1907 )
1897 1908
1898 1909 # List where completion matches will be stored
1899 1910 self.matches = []
1900 1911 self.shell = shell
1901 1912 # Regexp to split filenames with spaces in them
1902 1913 self.space_name_re = re.compile(r'([^\\] )')
1903 1914 # Hold a local ref. to glob.glob for speed
1904 1915 self.glob = glob.glob
1905 1916
1906 1917 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1907 1918 # buffers, to avoid completion problems.
1908 1919 term = os.environ.get('TERM','xterm')
1909 1920 self.dumb_terminal = term in ['dumb','emacs']
1910 1921
1911 1922 # Special handling of backslashes needed in win32 platforms
1912 1923 if sys.platform == "win32":
1913 1924 self.clean_glob = self._clean_glob_win32
1914 1925 else:
1915 1926 self.clean_glob = self._clean_glob
1916 1927
1917 1928 #regexp to parse docstring for function signature
1918 1929 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1919 1930 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1920 1931 #use this if positional argument name is also needed
1921 1932 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1922 1933
1923 1934 self.magic_arg_matchers = [
1924 1935 self.magic_config_matcher,
1925 1936 self.magic_color_matcher,
1926 1937 ]
1927 1938
1928 1939 # This is set externally by InteractiveShell
1929 1940 self.custom_completers = None
1930 1941
1931 1942 # This is a list of names of unicode characters that can be completed
1932 1943 # into their corresponding unicode value. The list is large, so we
1933 1944 # lazily initialize it on first use. Consuming code should access this
1934 1945 # attribute through the `@unicode_names` property.
1935 1946 self._unicode_names = None
1936 1947
1937 1948 self._backslash_combining_matchers = [
1938 1949 self.latex_name_matcher,
1939 1950 self.unicode_name_matcher,
1940 1951 back_latex_name_matcher,
1941 1952 back_unicode_name_matcher,
1942 1953 self.fwd_unicode_matcher,
1943 1954 ]
1944 1955
1945 1956 if not self.backslash_combining_completions:
1946 1957 for matcher in self._backslash_combining_matchers:
1947 1958 self.disable_matchers.append(_get_matcher_id(matcher))
1948 1959
1949 1960 if not self.merge_completions:
1950 1961 self.suppress_competing_matchers = True
1951 1962
1952 1963 @property
1953 1964 def matchers(self) -> List[Matcher]:
1954 1965 """All active matcher routines for completion"""
1955 1966 if self.dict_keys_only:
1956 1967 return [self.dict_key_matcher]
1957 1968
1958 1969 if self.use_jedi:
1959 1970 return [
1960 1971 *self.custom_matchers,
1961 1972 *self._backslash_combining_matchers,
1962 1973 *self.magic_arg_matchers,
1963 1974 self.custom_completer_matcher,
1964 1975 self.magic_matcher,
1965 1976 self._jedi_matcher,
1966 1977 self.dict_key_matcher,
1967 1978 self.file_matcher,
1968 1979 ]
1969 1980 else:
1970 1981 return [
1971 1982 *self.custom_matchers,
1972 1983 *self._backslash_combining_matchers,
1973 1984 *self.magic_arg_matchers,
1974 1985 self.custom_completer_matcher,
1975 1986 self.dict_key_matcher,
1976 # TODO: convert python_matches to v2 API
1977 1987 self.magic_matcher,
1978 self.python_matches,
1988 self.python_matcher,
1979 1989 self.file_matcher,
1980 1990 self.python_func_kw_matcher,
1981 1991 ]
1982 1992
1983 1993 def all_completions(self, text:str) -> List[str]:
1984 1994 """
1985 1995 Wrapper around the completion methods for the benefit of emacs.
1986 1996 """
1987 1997 prefix = text.rpartition('.')[0]
1988 1998 with provisionalcompleter():
1989 1999 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1990 2000 for c in self.completions(text, len(text))]
1991 2001
1993 2003
1994 2004 def _clean_glob(self, text:str):
1995 2005 return self.glob("%s*" % text)
1996 2006
1997 2007 def _clean_glob_win32(self, text:str):
1998 2008 return [f.replace("\\","/")
1999 2009 for f in self.glob("%s*" % text)]
2000 2010
2001 2011 @context_matcher()
2002 2012 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
 2003 2013 """Same as :any:`file_matches`, but adapted to the new Matcher API."""
2004 2014 matches = self.file_matches(context.token)
2005 2015 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2006 2016 # starts with `/home/`, `C:\`, etc)
2007 2017 return _convert_matcher_v1_result_to_v2(matches, type="path")
2008 2018
2009 2019 def file_matches(self, text: str) -> List[str]:
2010 2020 """Match filenames, expanding ~USER type strings.
2011 2021
2012 2022 Most of the seemingly convoluted logic in this completer is an
2013 2023 attempt to handle filenames with spaces in them. And yet it's not
2014 2024 quite perfect, because Python's readline doesn't expose all of the
2015 2025 GNU readline details needed for this to be done correctly.
2016 2026
2017 2027 For a filename with a space in it, the printed completions will be
2018 2028 only the parts after what's already been typed (instead of the
2019 2029 full completions, as is normally done). I don't think with the
2020 2030 current (as of Python 2.3) Python readline it's possible to do
2021 2031 better.
2022 2032
2023 2033 .. deprecated:: 8.6
2024 2034 You can use :meth:`file_matcher` instead.
2025 2035 """
2026 2036
2027 2037 # chars that require escaping with backslash - i.e. chars
2028 2038 # that readline treats incorrectly as delimiters, but we
2029 2039 # don't want to treat as delimiters in filename matching
2030 2040 # when escaped with backslash
2031 2041 if text.startswith('!'):
2032 2042 text = text[1:]
2033 2043 text_prefix = u'!'
2034 2044 else:
2035 2045 text_prefix = u''
2036 2046
2037 2047 text_until_cursor = self.text_until_cursor
2038 2048 # track strings with open quotes
2039 2049 open_quotes = has_open_quotes(text_until_cursor)
2040 2050
2041 2051 if '(' in text_until_cursor or '[' in text_until_cursor:
2042 2052 lsplit = text
2043 2053 else:
2044 2054 try:
2045 2055 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2046 2056 lsplit = arg_split(text_until_cursor)[-1]
2047 2057 except ValueError:
2048 2058 # typically an unmatched ", or backslash without escaped char.
2049 2059 if open_quotes:
2050 2060 lsplit = text_until_cursor.split(open_quotes)[-1]
2051 2061 else:
2052 2062 return []
2053 2063 except IndexError:
2054 2064 # tab pressed on empty line
2055 2065 lsplit = ""
2056 2066
2057 2067 if not open_quotes and lsplit != protect_filename(lsplit):
2058 2068 # if protectables are found, do matching on the whole escaped name
2059 2069 has_protectables = True
2060 2070 text0,text = text,lsplit
2061 2071 else:
2062 2072 has_protectables = False
2063 2073 text = os.path.expanduser(text)
2064 2074
2065 2075 if text == "":
2066 2076 return [text_prefix + protect_filename(f) for f in self.glob("*")]
2067 2077
2068 2078 # Compute the matches from the filesystem
2069 2079 if sys.platform == 'win32':
2070 2080 m0 = self.clean_glob(text)
2071 2081 else:
2072 2082 m0 = self.clean_glob(text.replace('\\', ''))
2073 2083
2074 2084 if has_protectables:
2075 2085 # If we had protectables, we need to revert our changes to the
2076 2086 # beginning of filename so that we don't double-write the part
2077 2087 # of the filename we have so far
2078 2088 len_lsplit = len(lsplit)
2079 2089 matches = [text_prefix + text0 +
2080 2090 protect_filename(f[len_lsplit:]) for f in m0]
2081 2091 else:
2082 2092 if open_quotes:
2083 2093 # if we have a string with an open quote, we don't need to
2084 2094 # protect the names beyond the quote (and we _shouldn't_, as
2085 2095 # it would cause bugs when the filesystem call is made).
2086 2096 matches = m0 if sys.platform == "win32" else\
2087 2097 [protect_filename(f, open_quotes) for f in m0]
2088 2098 else:
2089 2099 matches = [text_prefix +
2090 2100 protect_filename(f) for f in m0]
2091 2101
2092 2102 # Mark directories in input list by appending '/' to their names.
2093 2103 return [x+'/' if os.path.isdir(x) else x for x in matches]
2094 2104
2095 2105 @context_matcher()
2096 2106 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2097 2107 """Match magics."""
2098 2108 text = context.token
2099 2109 matches = self.magic_matches(text)
2100 2110 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
2101 2111 is_magic_prefix = len(text) > 0 and text[0] == "%"
2102 2112 result["suppress"] = is_magic_prefix and bool(result["completions"])
2103 2113 return result
2104 2114
2105 2115 def magic_matches(self, text: str):
2106 2116 """Match magics.
2107 2117
2108 2118 .. deprecated:: 8.6
2109 2119 You can use :meth:`magic_matcher` instead.
2110 2120 """
2111 2121 # Get all shell magics now rather than statically, so magics loaded at
2112 2122 # runtime show up too.
2113 2123 lsm = self.shell.magics_manager.lsmagic()
2114 2124 line_magics = lsm['line']
2115 2125 cell_magics = lsm['cell']
2116 2126 pre = self.magic_escape
2117 2127 pre2 = pre+pre
2118 2128
2119 2129 explicit_magic = text.startswith(pre)
2120 2130
2121 2131 # Completion logic:
2122 2132 # - user gives %%: only do cell magics
2123 2133 # - user gives %: do both line and cell magics
2124 2134 # - no prefix: do both
2125 2135 # In other words, line magics are skipped if the user gives %% explicitly
2126 2136 #
2127 2137 # We also exclude magics that match any currently visible names:
2128 2138 # https://github.com/ipython/ipython/issues/4877, unless the user has
2129 2139 # typed a %:
2130 2140 # https://github.com/ipython/ipython/issues/10754
2131 2141 bare_text = text.lstrip(pre)
2132 2142 global_matches = self.global_matches(bare_text)
2133 2143 if not explicit_magic:
2134 2144 def matches(magic):
2135 2145 """
2136 2146 Filter magics, in particular remove magics that match
2137 2147 a name present in global namespace.
2138 2148 """
2139 2149 return ( magic.startswith(bare_text) and
2140 2150 magic not in global_matches )
2141 2151 else:
2142 2152 def matches(magic):
2143 2153 return magic.startswith(bare_text)
2144 2154
2145 2155 comp = [ pre2+m for m in cell_magics if matches(m)]
2146 2156 if not text.startswith(pre2):
2147 2157 comp += [ pre+m for m in line_magics if matches(m)]
2148 2158
2149 2159 return comp
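The `%`/`%%` prefix rules described in the comment above can be run standalone; the sketch below (hypothetical `sketch_magic_matches` helper, not the real API) reproduces that logic:

```python
# Standalone sketch of the %/%% completion rules described above
# (hypothetical helper; the real logic lives in IPCompleter.magic_matches):
def sketch_magic_matches(text, line_magics, cell_magics, global_names=()):
    pre, pre2 = "%", "%%"
    explicit = text.startswith(pre)
    bare = text.lstrip(pre)

    def matches(name):
        # without an explicit %, skip magics shadowed by visible names
        return name.startswith(bare) and (explicit or name not in global_names)

    comp = [pre2 + m for m in cell_magics if matches(m)]
    if not text.startswith(pre2):  # a %% prefix means: cell magics only
        comp += [pre + m for m in line_magics if matches(m)]
    return comp

print(sketch_magic_matches("%%ti", ["time", "timeit"], ["timeit"]))
# ['%%timeit']
```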
2150 2160
2151 2161 @context_matcher()
2152 2162 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2153 2163 """Match class names and attributes for %config magic."""
2154 2164 # NOTE: uses `line_buffer` equivalent for compatibility
2155 2165 matches = self.magic_config_matches(context.line_with_cursor)
2156 2166 return _convert_matcher_v1_result_to_v2(matches, type="param")
2157 2167
2158 2168 def magic_config_matches(self, text: str) -> List[str]:
2159 2169 """Match class names and attributes for %config magic.
2160 2170
2161 2171 .. deprecated:: 8.6
2162 2172 You can use :meth:`magic_config_matcher` instead.
2163 2173 """
2164 2174 texts = text.strip().split()
2165 2175
2166 2176 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2167 2177 # get all configuration classes
2168 2178 classes = sorted(set([ c for c in self.shell.configurables
2169 2179 if c.__class__.class_traits(config=True)
2170 2180 ]), key=lambda x: x.__class__.__name__)
2171 2181 classnames = [ c.__class__.__name__ for c in classes ]
2172 2182
2173 2183 # return all classnames if config or %config is given
2174 2184 if len(texts) == 1:
2175 2185 return classnames
2176 2186
2177 2187 # match classname
2178 2188 classname_texts = texts[1].split('.')
2179 2189 classname = classname_texts[0]
2180 2190 classname_matches = [ c for c in classnames
2181 2191 if c.startswith(classname) ]
2182 2192
2183 2193 # return matched classes or the matched class with attributes
2184 2194 if texts[1].find('.') < 0:
2185 2195 return classname_matches
2186 2196 elif len(classname_matches) == 1 and \
2187 2197 classname_matches[0] == classname:
2188 2198 cls = classes[classnames.index(classname)].__class__
2189 2199 help = cls.class_get_help()
2190 2200 # strip leading '--' from cl-args:
2191 2201 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2192 2202 return [ attr.split('=')[0]
2193 2203 for attr in help.strip().splitlines()
2194 2204 if attr.startswith(texts[1]) ]
2195 2205 return []
2196 2206
2197 2207 @context_matcher()
2198 2208 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2199 2209 """Match color schemes for %colors magic."""
2200 2210 # NOTE: uses `line_buffer` equivalent for compatibility
2201 2211 matches = self.magic_color_matches(context.line_with_cursor)
2202 2212 return _convert_matcher_v1_result_to_v2(matches, type="param")
2203 2213
2204 2214 def magic_color_matches(self, text: str) -> List[str]:
2205 2215 """Match color schemes for %colors magic.
2206 2216
2207 2217 .. deprecated:: 8.6
2208 2218 You can use :meth:`magic_color_matcher` instead.
2209 2219 """
2210 2220 texts = text.split()
2211 2221 if text.endswith(' '):
2212 2222 # .split() strips off the trailing whitespace. Add '' back
2213 2223 # so that: '%colors ' -> ['%colors', '']
2214 2224 texts.append('')
2215 2225
2216 2226 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2217 2227 prefix = texts[1]
2218 2228 return [ color for color in InspectColors.keys()
2219 2229 if color.startswith(prefix) ]
2220 2230 return []
2221 2231
2222 2232 @context_matcher(identifier="IPCompleter.jedi_matcher")
2223 2233 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2224 2234 matches = self._jedi_matches(
2225 2235 cursor_column=context.cursor_position,
2226 2236 cursor_line=context.cursor_line,
2227 2237 text=context.full_text,
2228 2238 )
2229 2239 return {
2230 2240 "completions": matches,
2231 2241 # static analysis should not suppress other matchers
2232 2242 "suppress": False,
2233 2243 }
2234 2244
2235 2245 def _jedi_matches(
2236 2246 self, cursor_column: int, cursor_line: int, text: str
2237 2247 ) -> Iterator[_JediCompletionLike]:
2238 2248 """
 2239 2249 Return a list of :any:`jedi.api.Completion`\\s from a ``text`` and
2240 2250 cursor position.
2241 2251
2242 2252 Parameters
2243 2253 ----------
2244 2254 cursor_column : int
2245 2255 column position of the cursor in ``text``, 0-indexed.
2246 2256 cursor_line : int
2247 2257 line position of the cursor in ``text``, 0-indexed
2248 2258 text : str
2249 2259 text to complete
2250 2260
2251 2261 Notes
2252 2262 -----
2253 2263 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2254 2264 object containing a string with the Jedi debug information attached.
2255 2265
2256 2266 .. deprecated:: 8.6
2257 2267 You can use :meth:`_jedi_matcher` instead.
2258 2268 """
2259 2269 namespaces = [self.namespace]
2260 2270 if self.global_namespace is not None:
2261 2271 namespaces.append(self.global_namespace)
2262 2272
2263 2273 completion_filter = lambda x:x
2264 2274 offset = cursor_to_position(text, cursor_line, cursor_column)
2265 2275 # filter output if we are completing for object members
2266 2276 if offset:
2267 2277 pre = text[offset-1]
2268 2278 if pre == '.':
2269 2279 if self.omit__names == 2:
2270 2280 completion_filter = lambda c:not c.name.startswith('_')
2271 2281 elif self.omit__names == 1:
2272 2282 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2273 2283 elif self.omit__names == 0:
2274 2284 completion_filter = lambda x:x
2275 2285 else:
2276 2286 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2277 2287
2278 2288 interpreter = jedi.Interpreter(text[:offset], namespaces)
2279 2289 try_jedi = True
2280 2290
2281 2291 try:
2282 2292 # find the first token in the current tree -- if it is a ' or " then we are in a string
2283 2293 completing_string = False
2284 2294 try:
2285 2295 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2286 2296 except StopIteration:
2287 2297 pass
2288 2298 else:
2289 2299 # note the value may be ', ", or it may also be ''' or """, or
2290 2300 # in some cases, """what/you/typed..., but all of these are
2291 2301 # strings.
2292 2302 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2293 2303
2294 2304 # if we are in a string jedi is likely not the right candidate for
2295 2305 # now. Skip it.
2296 2306 try_jedi = not completing_string
2297 2307 except Exception as e:
 2298 2308 # many things can go wrong; we are using a private API, just don't crash.
2299 2309 if self.debug:
 2300 2310 print("Error detecting if completing a non-finished string:", e, '|')
2301 2311
2302 2312 if not try_jedi:
2303 2313 return iter([])
2304 2314 try:
2305 2315 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2306 2316 except Exception as e:
2307 2317 if self.debug:
2308 2318 return iter(
2309 2319 [
2310 2320 _FakeJediCompletion(
 2311 2321 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""'
2312 2322 % (e)
2313 2323 )
2314 2324 ]
2315 2325 )
2316 2326 else:
2317 2327 return iter([])
2318 2328
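The `cursor_to_position` call above converts a 0-indexed line/column pair into a flat string offset; a minimal re-implementation sketch (hypothetical `sketch_cursor_to_position` name) of that conversion:

```python
# Minimal sketch of the line/column -> offset conversion performed by
# cursor_to_position (both line and column are 0-indexed):
def sketch_cursor_to_position(text: str, line: int, column: int) -> int:
    lines = text.split("\n")
    # each preceding line contributes its length plus the newline
    return sum(len(l) + 1 for l in lines[:line]) + column

offset = sketch_cursor_to_position("ab\ncd", 1, 1)
# offset == 4, i.e. it points at "d"
```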
2329 @context_matcher()
2330 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2331 """Match attributes or global python names"""
2332 text = context.line_with_cursor
2333 if "." in text:
2334 try:
2335 matches, fragment = self._attr_matches(text, include_prefix=False)
2336 if text.endswith(".") and self.omit__names:
2337 if self.omit__names == 1:
2338 # true if txt is _not_ a __ name, false otherwise:
2339 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2340 else:
2341 # true if txt is _not_ a _ name, false otherwise:
2342 no__name = (
2343 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2344 is None
2345 )
2346 matches = filter(no__name, matches)
2347 return _convert_matcher_v1_result_to_v2(
2348 matches, type="attribute", fragment=fragment
2349 )
2350 except NameError:
2351 # catches <undefined attributes>.<tab>
2352 matches = []
2353 return _convert_matcher_v1_result_to_v2(matches, type="attribute")
2354 else:
2355 matches = self.global_matches(context.token)
2356 # TODO: maybe distinguish between functions, modules and just "variables"
2357 return _convert_matcher_v1_result_to_v2(matches, type="variable")
2358
2319 2359 @completion_matcher(api_version=1)
2320 2360 def python_matches(self, text: str) -> Iterable[str]:
2321 """Match attributes or global python names"""
2361 """Match attributes or global python names.
2362
2363 .. deprecated:: 8.27
2364 You can use :meth:`python_matcher` instead."""
2322 2365 if "." in text:
2323 2366 try:
2324 2367 matches = self.attr_matches(text)
2325 2368 if text.endswith('.') and self.omit__names:
2326 2369 if self.omit__names == 1:
2327 2370 # true if txt is _not_ a __ name, false otherwise:
2328 2371 no__name = (lambda txt:
2329 2372 re.match(r'.*\.__.*?__',txt) is None)
2330 2373 else:
2331 2374 # true if txt is _not_ a _ name, false otherwise:
2332 2375 no__name = (lambda txt:
2333 2376 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2334 2377 matches = filter(no__name, matches)
2335 2378 except NameError:
2336 2379 # catches <undefined attributes>.<tab>
2337 2380 matches = []
2338 2381 else:
2339 2382 matches = self.global_matches(text)
2340 2383 return matches
2341 2384
2342 2385 def _default_arguments_from_docstring(self, doc):
2343 2386 """Parse the first line of docstring for call signature.
2344 2387
2345 2388 Docstring should be of the form 'min(iterable[, key=func])\n'.
2346 2389 It can also parse cython docstring of the form
2347 2390 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2348 2391 """
2349 2392 if doc is None:
2350 2393 return []
2351 2394
 2352 2395 # care only about the first line
2353 2396 line = doc.lstrip().splitlines()[0]
2354 2397
2355 2398 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2356 2399 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2357 2400 sig = self.docstring_sig_re.search(line)
2358 2401 if sig is None:
2359 2402 return []
 2360 2403 # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
2361 2404 sig = sig.groups()[0].split(',')
2362 2405 ret = []
2363 2406 for s in sig:
2364 2407 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2365 2408 ret += self.docstring_kwd_re.findall(s)
2366 2409 return ret
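A standalone run of the two regexes above (copied verbatim), applied to the docstring example cited in the method's own docstring, shows why only keyword arguments survive:

```python
import re

# The two patterns used above, applied to 'min(iterable[, key=func])\n':
docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')

line = 'min(iterable[, key=func])\n'.lstrip().splitlines()[0]
sig = docstring_sig_re.search(line)        # captures 'iterable[, key=func'
ret = []
for s in sig.groups()[0].split(','):
    ret += docstring_kwd_re.findall(s)     # only 'name=...' fragments match
# ret == ['key']
```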
2367 2410
2368 2411 def _default_arguments(self, obj):
2369 2412 """Return the list of default arguments of obj if it is callable,
2370 2413 or empty list otherwise."""
2371 2414 call_obj = obj
2372 2415 ret = []
2373 2416 if inspect.isbuiltin(obj):
2374 2417 pass
2375 2418 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2376 2419 if inspect.isclass(obj):
2377 2420 #for cython embedsignature=True the constructor docstring
2378 2421 #belongs to the object itself not __init__
2379 2422 ret += self._default_arguments_from_docstring(
2380 2423 getattr(obj, '__doc__', ''))
2381 2424 # for classes, check for __init__,__new__
2382 2425 call_obj = (getattr(obj, '__init__', None) or
2383 2426 getattr(obj, '__new__', None))
2384 2427 # for all others, check if they are __call__able
2385 2428 elif hasattr(obj, '__call__'):
2386 2429 call_obj = obj.__call__
2387 2430 ret += self._default_arguments_from_docstring(
2388 2431 getattr(call_obj, '__doc__', ''))
2389 2432
2390 2433 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2391 2434 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2392 2435
2393 2436 try:
2394 2437 sig = inspect.signature(obj)
2395 2438 ret.extend(k for k, v in sig.parameters.items() if
2396 2439 v.kind in _keeps)
2397 2440 except ValueError:
2398 2441 pass
2399 2442
2400 2443 return list(set(ret))
2401 2444
2402 2445 @context_matcher()
2403 2446 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2404 2447 """Match named parameters (kwargs) of the last open function."""
2405 2448 matches = self.python_func_kw_matches(context.token)
2406 2449 return _convert_matcher_v1_result_to_v2(matches, type="param")
2407 2450
2408 2451 def python_func_kw_matches(self, text):
2409 2452 """Match named parameters (kwargs) of the last open function.
2410 2453
2411 2454 .. deprecated:: 8.6
2412 2455 You can use :meth:`python_func_kw_matcher` instead.
2413 2456 """
2414 2457
2415 2458 if "." in text: # a parameter cannot be dotted
2416 2459 return []
2417 2460 try: regexp = self.__funcParamsRegex
2418 2461 except AttributeError:
2419 2462 regexp = self.__funcParamsRegex = re.compile(r'''
2420 2463 '.*?(?<!\\)' | # single quoted strings or
2421 2464 ".*?(?<!\\)" | # double quoted strings or
2422 2465 \w+ | # identifier
2423 2466 \S # other characters
2424 2467 ''', re.VERBOSE | re.DOTALL)
2425 2468 # 1. find the nearest identifier that comes before an unclosed
2426 2469 # parenthesis before the cursor
2427 2470 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2428 2471 tokens = regexp.findall(self.text_until_cursor)
2429 2472 iterTokens = reversed(tokens); openPar = 0
2430 2473
2431 2474 for token in iterTokens:
2432 2475 if token == ')':
2433 2476 openPar -= 1
2434 2477 elif token == '(':
2435 2478 openPar += 1
2436 2479 if openPar > 0:
2437 2480 # found the last unclosed parenthesis
2438 2481 break
2439 2482 else:
2440 2483 return []
2441 2484 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2442 2485 ids = []
2443 2486 isId = re.compile(r'\w+$').match
2444 2487
2445 2488 while True:
2446 2489 try:
2447 2490 ids.append(next(iterTokens))
2448 2491 if not isId(ids[-1]):
2449 2492 ids.pop(); break
2450 2493 if not next(iterTokens) == '.':
2451 2494 break
2452 2495 except StopIteration:
2453 2496 break
2454 2497
 2455 2498 # Find all named arguments already assigned to, so as to avoid
 2456 2499 # suggesting them again
2457 2500 usedNamedArgs = set()
2458 2501 par_level = -1
2459 2502 for token, next_token in zip(tokens, tokens[1:]):
2460 2503 if token == '(':
2461 2504 par_level += 1
2462 2505 elif token == ')':
2463 2506 par_level -= 1
2464 2507
2465 2508 if par_level != 0:
2466 2509 continue
2467 2510
2468 2511 if next_token != '=':
2469 2512 continue
2470 2513
2471 2514 usedNamedArgs.add(token)
2472 2515
2473 2516 argMatches = []
2474 2517 try:
2475 2518 callableObj = '.'.join(ids[::-1])
2476 2519 namedArgs = self._default_arguments(eval(callableObj,
2477 2520 self.namespace))
2478 2521
2479 2522 # Remove used named arguments from the list, no need to show twice
2480 2523 for namedArg in set(namedArgs) - usedNamedArgs:
2481 2524 if namedArg.startswith(text):
2482 2525 argMatches.append("%s=" %namedArg)
2483 2526 except:
2484 2527 pass
2485 2528
2486 2529 return argMatches
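The parenthesis-level scan for already-used named arguments (the loop over `zip(tokens, tokens[1:])` above) can be run standalone on a small input; the token regex is copied from the code above, the variable names mirror it:

```python
import re

# Standalone run of the "already-used named arguments" scan above:
token_re = re.compile(r'''
    '.*?(?<!\\)' | # single quoted strings or
    ".*?(?<!\\)" | # double quoted strings or
    \w+          | # identifier
    \S             # other characters
    ''', re.VERBOSE | re.DOTALL)

tokens = token_re.findall("foo(a=1, b")
used = set()
par_level = -1
for token, next_token in zip(tokens, tokens[1:]):
    if token == '(':
        par_level += 1
    elif token == ')':
        par_level -= 1
    if par_level != 0:          # only look inside the outermost open call
        continue
    if next_token != '=':       # a name is "used" when followed by '='
        continue
    used.add(token)
# used == {'a'}, so only 'b...' completions remain to be suggested
```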
2487 2530
2488 2531 @staticmethod
2489 2532 def _get_keys(obj: Any) -> List[Any]:
2490 2533 # Objects can define their own completions by defining an
 2491 2534 # _ipython_key_completions_() method.
2492 2535 method = get_real_method(obj, '_ipython_key_completions_')
2493 2536 if method is not None:
2494 2537 return method()
2495 2538
2496 2539 # Special case some common in-memory dict-like types
2497 2540 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
2498 2541 try:
2499 2542 return list(obj.keys())
2500 2543 except Exception:
2501 2544 return []
2502 2545 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
2503 2546 try:
2504 2547 return list(obj.obj.keys())
2505 2548 except Exception:
2506 2549 return []
2507 2550 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2508 2551 _safe_isinstance(obj, 'numpy', 'void'):
2509 2552 return obj.dtype.names or []
2510 2553 return []
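Beyond the special-cased containers, the `_ipython_key_completions_` protocol checked first above lets any object supply its own keys; a minimal example (the class name is hypothetical):

```python
# Minimal example of the _ipython_key_completions_ protocol that
# _get_keys checks first (class name is hypothetical):
class FruitBasket:
    def __init__(self, contents):
        self._contents = contents

    def __getitem__(self, key):
        return self._contents[key]

    def _ipython_key_completions_(self):
        # consulted by the completer after e.g. `basket["<tab>`
        return sorted(self._contents)

basket = FruitBasket({"apple": 3, "banana": 5})
# basket._ipython_key_completions_() == ['apple', 'banana']
```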
2511 2554
2512 2555 @context_matcher()
2513 2556 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2514 2557 """Match string keys in a dictionary, after e.g. ``foo[``."""
2515 2558 matches = self.dict_key_matches(context.token)
2516 2559 return _convert_matcher_v1_result_to_v2(
2517 2560 matches, type="dict key", suppress_if_matches=True
2518 2561 )
2519 2562
2520 2563 def dict_key_matches(self, text: str) -> List[str]:
2521 2564 """Match string keys in a dictionary, after e.g. ``foo[``.
2522 2565
2523 2566 .. deprecated:: 8.6
2524 2567 You can use :meth:`dict_key_matcher` instead.
2525 2568 """
2526 2569
2527 2570 # Short-circuit on closed dictionary (regular expression would
2528 2571 # not match anyway, but would take quite a while).
2529 2572 if self.text_until_cursor.strip().endswith("]"):
2530 2573 return []
2531 2574
2532 2575 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
2533 2576
2534 2577 if match is None:
2535 2578 return []
2536 2579
2537 2580 expr, prior_tuple_keys, key_prefix = match.groups()
2538 2581
2539 2582 obj = self._evaluate_expr(expr)
2540 2583
2541 2584 if obj is not_found:
2542 2585 return []
2543 2586
2544 2587 keys = self._get_keys(obj)
2545 2588 if not keys:
2546 2589 return keys
2547 2590
2548 2591 tuple_prefix = guarded_eval(
2549 2592 prior_tuple_keys,
2550 2593 EvaluationContext(
2551 2594 globals=self.global_namespace,
2552 2595 locals=self.namespace,
2553 2596 evaluation=self.evaluation, # type: ignore
2554 2597 in_subscript=True,
2555 2598 ),
2556 2599 )
2557 2600
2558 2601 closing_quote, token_offset, matches = match_dict_keys(
2559 2602 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
2560 2603 )
2561 2604 if not matches:
2562 2605 return []
2563 2606
2564 2607 # get the cursor position of
2565 2608 # - the text being completed
2566 2609 # - the start of the key text
2567 2610 # - the start of the completion
2568 2611 text_start = len(self.text_until_cursor) - len(text)
2569 2612 if key_prefix:
2570 2613 key_start = match.start(3)
2571 2614 completion_start = key_start + token_offset
2572 2615 else:
2573 2616 key_start = completion_start = match.end()
2574 2617
2575 2618 # grab the leading prefix, to make sure all completions start with `text`
2576 2619 if text_start > key_start:
2577 2620 leading = ''
2578 2621 else:
2579 2622 leading = text[text_start:completion_start]
2580 2623
2581 2624 # append closing quote and bracket as appropriate
2582 2625 # this is *not* appropriate if the opening quote or bracket is outside
2583 2626 # the text given to this method, e.g. `d["""a\nt
2584 2627 can_close_quote = False
2585 2628 can_close_bracket = False
2586 2629
2587 2630 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
2588 2631
2589 2632 if continuation.startswith(closing_quote):
2590 2633 # do not close if already closed, e.g. `d['a<tab>'`
2591 2634 continuation = continuation[len(closing_quote) :]
2592 2635 else:
2593 2636 can_close_quote = True
2594 2637
2595 2638 continuation = continuation.strip()
2596 2639
2597 2640 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
2598 2641 # handling it is out of scope, so let's avoid appending suffixes.
2599 2642 has_known_tuple_handling = isinstance(obj, dict)
2600 2643
2601 2644 can_close_bracket = (
2602 2645 not continuation.startswith("]") and self.auto_close_dict_keys
2603 2646 )
2604 2647 can_close_tuple_item = (
2605 2648 not continuation.startswith(",")
2606 2649 and has_known_tuple_handling
2607 2650 and self.auto_close_dict_keys
2608 2651 )
2609 2652 can_close_quote = can_close_quote and self.auto_close_dict_keys
2610 2653
2611 2654         # fast path if a closing quote should be appended but no suffix is allowed
2612 2655 if not can_close_quote and not can_close_bracket and closing_quote:
2613 2656 return [leading + k for k in matches]
2614 2657
2615 2658 results = []
2616 2659
2617 2660 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
2618 2661
2619 2662 for k, state_flag in matches.items():
2620 2663 result = leading + k
2621 2664 if can_close_quote and closing_quote:
2622 2665 result += closing_quote
2623 2666
2624 2667 if state_flag == end_of_tuple_or_item:
2625 2668 # We do not know which suffix to add,
2626 2669 # e.g. both tuple item and string
2627 2670 # match this item.
2628 2671 pass
2629 2672
2630 2673 if state_flag in end_of_tuple_or_item and can_close_bracket:
2631 2674 result += "]"
2632 2675 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
2633 2676 result += ", "
2634 2677 results.append(result)
2635 2678 return results
2636 2679
2637 2680 @context_matcher()
2638 2681 def unicode_name_matcher(self, context: CompletionContext):
2639 2682 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2640 2683 fragment, matches = self.unicode_name_matches(context.text_until_cursor)
2641 2684 return _convert_matcher_v1_result_to_v2(
2642 2685 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2643 2686 )
2644 2687
2645 2688 @staticmethod
2646 2689 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2647 2690         """Match Latex-like syntax for unicode characters based
2648 2691         on the name of the character.
2649 2692
2650 2693 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2651 2694
2652 2695         Works only on valid Python 3 identifiers, or on combining characters that
2653 2696         will combine to form a valid identifier.
2654 2697 """
2655 2698 slashpos = text.rfind('\\')
2656 2699 if slashpos > -1:
2657 2700 s = text[slashpos+1:]
2658 2701             try:
2659 2702 unic = unicodedata.lookup(s)
2660 2703 # allow combining chars
2661 2704 if ('a'+unic).isidentifier():
2662 2705 return '\\'+s,[unic]
2663 2706 except KeyError:
2664 2707 pass
2665 2708 return '', []
2666 2709
2667 2710 @context_matcher()
2668 2711 def latex_name_matcher(self, context: CompletionContext):
2669 2712 """Match Latex syntax for unicode characters.
2670 2713
2671 2714 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2672 2715 """
2673 2716 fragment, matches = self.latex_matches(context.text_until_cursor)
2674 2717 return _convert_matcher_v1_result_to_v2(
2675 2718 matches, type="latex", fragment=fragment, suppress_if_matches=True
2676 2719 )
2677 2720
2678 2721 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2679 2722 """Match Latex syntax for unicode characters.
2680 2723
2681 2724 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2682 2725
2683 2726 .. deprecated:: 8.6
2684 2727 You can use :meth:`latex_name_matcher` instead.
2685 2728 """
2686 2729 slashpos = text.rfind('\\')
2687 2730 if slashpos > -1:
2688 2731 s = text[slashpos:]
2689 2732 if s in latex_symbols:
2690 2733 # Try to complete a full latex symbol to unicode
2691 2734 # \\alpha -> Ξ±
2692 2735 return s, [latex_symbols[s]]
2693 2736 else:
2694 2737 # If a user has partially typed a latex symbol, give them
2695 2738 # a full list of options \al -> [\aleph, \alpha]
2696 2739 matches = [k for k in latex_symbols if k.startswith(s)]
2697 2740 if matches:
2698 2741 return s, matches
2699 2742 return '', ()
2700 2743
2701 2744 @context_matcher()
2702 2745 def custom_completer_matcher(self, context):
2703 2746 """Dispatch custom completer.
2704 2747
2705 2748 If a match is found, suppresses all other matchers except for Jedi.
2706 2749 """
2707 2750 matches = self.dispatch_custom_completer(context.token) or []
2708 2751 result = _convert_matcher_v1_result_to_v2(
2709 2752 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2710 2753 )
2711 2754 result["ordered"] = True
2712 2755 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2713 2756 return result
2714 2757
2715 2758 def dispatch_custom_completer(self, text):
2716 2759 """
2717 2760 .. deprecated:: 8.6
2718 2761 You can use :meth:`custom_completer_matcher` instead.
2719 2762 """
2720 2763 if not self.custom_completers:
2721 2764 return
2722 2765
2723 2766 line = self.line_buffer
2724 2767 if not line.strip():
2725 2768 return None
2726 2769
2727 2770 # Create a little structure to pass all the relevant information about
2728 2771 # the current completion to any custom completer.
2729 2772 event = SimpleNamespace()
2730 2773 event.line = line
2731 2774 event.symbol = text
2732 2775 cmd = line.split(None,1)[0]
2733 2776 event.command = cmd
2734 2777 event.text_until_cursor = self.text_until_cursor
2735 2778
2736 2779 # for foo etc, try also to find completer for %foo
2737 2780 if not cmd.startswith(self.magic_escape):
2738 2781 try_magic = self.custom_completers.s_matches(
2739 2782 self.magic_escape + cmd)
2740 2783 else:
2741 2784 try_magic = []
2742 2785
2743 2786 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2744 2787 try_magic,
2745 2788 self.custom_completers.flat_matches(self.text_until_cursor)):
2746 2789 try:
2747 2790 res = c(event)
2748 2791 if res:
2749 2792 # first, try case sensitive match
2750 2793 withcase = [r for r in res if r.startswith(text)]
2751 2794 if withcase:
2752 2795 return withcase
2753 2796 # if none, then case insensitive ones are ok too
2754 2797 text_low = text.lower()
2755 2798 return [r for r in res if r.lower().startswith(text_low)]
2756 2799 except TryNext:
2757 2800 pass
2758 2801 except KeyboardInterrupt:
2759 2802                 """
2760 2803                 If a custom completer takes too long,
2761 2804                 let the keyboard interrupt abort it and return nothing.
2762 2805                 """
2763 2806 break
2764 2807
2765 2808 return None
2766 2809
2767 2810 def completions(self, text: str, offset: int)->Iterator[Completion]:
2768 2811 """
2769 2812 Returns an iterator over the possible completions
2770 2813
2771 2814 .. warning::
2772 2815
2773 2816 Unstable
2774 2817
2775 2818             This function is unstable, the API may change without warning.
2776 2819             It will also raise unless used in a proper context manager.
2777 2820
2778 2821 Parameters
2779 2822 ----------
2780 2823 text : str
2781 2824 Full text of the current input, multi line string.
2782 2825 offset : int
2783 2826 Integer representing the position of the cursor in ``text``. Offset
2784 2827 is 0-based indexed.
2785 2828
2786 2829 Yields
2787 2830 ------
2788 2831 Completion
2789 2832
2790 2833 Notes
2791 2834 -----
2792 2835         The cursor in a text can either be seen as being "in between"
2793 2836         characters or "on" a character, depending on the interface visible to
2794 2837         the user. For consistency, the cursor being "in between" characters X
2795 2838         and Y is equivalent to the cursor being "on" character Y, that is to say
2796 2839         the character the cursor is on is considered as being after the cursor.
2797 2840
2798 2841         Combining characters may span more than one position in the
2799 2842         text.
2800 2843
2801 2844 .. note::
2802 2845
2803 2846             If ``IPCompleter.debug`` is :any:`True`, this will yield a
2804 2847             ``--jedi/ipython--`` fake Completion token to distinguish completions
2805 2848             returned by Jedi from the usual IPython completions.
2806 2849
2807 2850 .. note::
2808 2851
2809 2852 Completions are not completely deduplicated yet. If identical
2810 2853 completions are coming from different sources this function does not
2811 2854 ensure that each completion object will only be present once.
2812 2855 """
2813 2856 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2814 2857 "It may change without warnings. "
2815 2858 "Use in corresponding context manager.",
2816 2859 category=ProvisionalCompleterWarning, stacklevel=2)
2817 2860
2818 2861 seen = set()
2819 2862 profiler:Optional[cProfile.Profile]
2820 2863 try:
2821 2864 if self.profile_completions:
2822 2865 import cProfile
2823 2866 profiler = cProfile.Profile()
2824 2867 profiler.enable()
2825 2868 else:
2826 2869 profiler = None
2827 2870
2828 2871 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2829 2872 if c and (c in seen):
2830 2873 continue
2831 2874 yield c
2832 2875 seen.add(c)
2833 2876 except KeyboardInterrupt:
2834 2877 """if completions take too long and users send keyboard interrupt,
2835 2878 do not crash and return ASAP. """
2836 2879 pass
2837 2880 finally:
2838 2881 if profiler is not None:
2839 2882 profiler.disable()
2840 2883 ensure_dir_exists(self.profiler_output_dir)
2841 2884 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2842 2885 print("Writing profiler output to", output_path)
2843 2886 profiler.dump_stats(output_path)
2844 2887
2845 2888 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2846 2889         Core completion module. Same signature as :any:`completions`, with the
2847 2890         extra ``_timeout`` parameter (in seconds).
2848 2891 extra `timeout` parameter (in seconds).
2849 2892
2850 2893 Computing jedi's completion ``.type`` can be quite expensive (it is a
2851 2894         lazy property) and can require some warm-up, more warm-up than just
2852 2895         computing the ``name`` of a completion. The warm-up can be:
2853 2896
2854 2897 - Long warm-up the first time a module is encountered after
2855 2898 install/update: actually build parse/inference tree.
2856 2899
2857 2900 - first time the module is encountered in a session: load tree from
2858 2901 disk.
2859 2902
2860 2903         We don't want to block completions for tens of seconds, so we give the
2861 2904         completer a "budget" of ``_timeout`` seconds per invocation to compute
2862 2905         completion types; the completions that have not yet been computed will
2863 2906         be marked as "unknown" and will have a chance to be computed in the next
2864 2907         round as things get cached.
2865 2908
2866 2909         Keep in mind that Jedi is not the only thing processing the completions,
2867 2910         so keep the timeout short-ish: if we take more than 0.3 seconds we still
2868 2911         have lots of processing to do.
2869 2912
2870 2913 """
2871 2914 deadline = time.monotonic() + _timeout
2872 2915
2873 2916 before = full_text[:offset]
2874 2917 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2875 2918
2876 2919 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2877 2920
2878 2921 def is_non_jedi_result(
2879 2922 result: MatcherResult, identifier: str
2880 2923 ) -> TypeGuard[SimpleMatcherResult]:
2881 2924 return identifier != jedi_matcher_id
2882 2925
2883 2926 results = self._complete(
2884 2927 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2885 2928 )
2886 2929
2887 2930 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2888 2931 identifier: result
2889 2932 for identifier, result in results.items()
2890 2933 if is_non_jedi_result(result, identifier)
2891 2934 }
2892 2935
2893 2936 jedi_matches = (
2894 2937 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2895 2938 if jedi_matcher_id in results
2896 2939 else ()
2897 2940 )
2898 2941
2899 2942 iter_jm = iter(jedi_matches)
2900 2943 if _timeout:
2901 2944 for jm in iter_jm:
2902 2945 try:
2903 2946 type_ = jm.type
2904 2947 except Exception:
2905 2948 if self.debug:
2906 2949 print("Error in Jedi getting type of ", jm)
2907 2950 type_ = None
2908 2951 delta = len(jm.name_with_symbols) - len(jm.complete)
2909 2952 if type_ == 'function':
2910 2953 signature = _make_signature(jm)
2911 2954 else:
2912 2955 signature = ''
2913 2956 yield Completion(start=offset - delta,
2914 2957 end=offset,
2915 2958 text=jm.name_with_symbols,
2916 2959 type=type_,
2917 2960 signature=signature,
2918 2961 _origin='jedi')
2919 2962
2920 2963 if time.monotonic() > deadline:
2921 2964 break
2922 2965
2923 2966 for jm in iter_jm:
2924 2967 delta = len(jm.name_with_symbols) - len(jm.complete)
2925 2968 yield Completion(
2926 2969 start=offset - delta,
2927 2970 end=offset,
2928 2971 text=jm.name_with_symbols,
2929 2972 type=_UNKNOWN_TYPE, # don't compute type for speed
2930 2973 _origin="jedi",
2931 2974 signature="",
2932 2975 )
2933 2976
2934 2977 # TODO:
2935 2978 # Suppress this, right now just for debug.
2936 2979 if jedi_matches and non_jedi_results and self.debug:
2937 2980 some_start_offset = before.rfind(
2938 2981 next(iter(non_jedi_results.values()))["matched_fragment"]
2939 2982 )
2940 2983 yield Completion(
2941 2984 start=some_start_offset,
2942 2985 end=offset,
2943 2986 text="--jedi/ipython--",
2944 2987 _origin="debug",
2945 2988 type="none",
2946 2989 signature="",
2947 2990 )
2948 2991
2949 2992 ordered: List[Completion] = []
2950 2993 sortable: List[Completion] = []
2951 2994
2952 2995 for origin, result in non_jedi_results.items():
2953 2996 matched_text = result["matched_fragment"]
2954 2997 start_offset = before.rfind(matched_text)
2955 2998 is_ordered = result.get("ordered", False)
2956 2999 container = ordered if is_ordered else sortable
2957 3000
2958 3001             # I'm unsure if this is always true, so let's assert and see if it
2959 3002             # crashes
2960 3003 assert before.endswith(matched_text)
2961 3004
2962 3005 for simple_completion in result["completions"]:
2963 3006 completion = Completion(
2964 3007 start=start_offset,
2965 3008 end=offset,
2966 3009 text=simple_completion.text,
2967 3010 _origin=origin,
2968 3011 signature="",
2969 3012 type=simple_completion.type or _UNKNOWN_TYPE,
2970 3013 )
2971 3014 container.append(completion)
2972 3015
2973 3016 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2974 3017 :MATCHES_LIMIT
2975 3018 ]
2976 3019
2977 3020 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2978 3021 """Find completions for the given text and line context.
2979 3022
2980 3023 Note that both the text and the line_buffer are optional, but at least
2981 3024 one of them must be given.
2982 3025
2983 3026 Parameters
2984 3027 ----------
2985 3028 text : string, optional
2986 3029 Text to perform the completion on. If not given, the line buffer
2987 3030 is split using the instance's CompletionSplitter object.
2988 3031 line_buffer : string, optional
2989 3032 If not given, the completer attempts to obtain the current line
2990 3033 buffer via readline. This keyword allows clients which are
2991 3034 requesting for text completions in non-readline contexts to inform
2992 3035 the completer of the entire text.
2993 3036 cursor_pos : int, optional
2994 3037 Index of the cursor in the full line buffer. Should be provided by
2995 3038 remote frontends where kernel has no access to frontend state.
2996 3039
2997 3040 Returns
2998 3041 -------
2999 3042 Tuple of two items:
3000 3043 text : str
3001 3044 Text that was actually used in the completion.
3002 3045 matches : list
3003 3046 A list of completion matches.
3004 3047
3005 3048 Notes
3006 3049 -----
3007 3050 This API is likely to be deprecated and replaced by
3008 3051 :any:`IPCompleter.completions` in the future.
3009 3052
3010 3053 """
3011 3054 warnings.warn('`Completer.complete` is pending deprecation since '
3012 3055 'IPython 6.0 and will be replaced by `Completer.completions`.',
3013 3056 PendingDeprecationWarning)
3014 3057         # potential todo: FOLD the 3rd throw-away argument of _complete
3015 3058         # into the first two.
3016 3059 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3017 3060 # TODO: should we deprecate now, or does it stay?
3018 3061
3019 3062 results = self._complete(
3020 3063 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3021 3064 )
3022 3065
3023 3066 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3024 3067
3025 3068 return self._arrange_and_extract(
3026 3069 results,
3027 3070 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3028 3071 skip_matchers={jedi_matcher_id},
3029 3072 # this API does not support different start/end positions (fragments of token).
3030 3073 abort_if_offset_changes=True,
3031 3074 )
3032 3075
3033 3076 def _arrange_and_extract(
3034 3077 self,
3035 3078 results: Dict[str, MatcherResult],
3036 3079 skip_matchers: Set[str],
3037 3080 abort_if_offset_changes: bool,
3038 3081 ):
3039 3082 sortable: List[AnyMatcherCompletion] = []
3040 3083 ordered: List[AnyMatcherCompletion] = []
3041 3084 most_recent_fragment = None
3042 3085 for identifier, result in results.items():
3043 3086 if identifier in skip_matchers:
3044 3087 continue
3045 3088 if not result["completions"]:
3046 3089 continue
3047 3090 if not most_recent_fragment:
3048 3091 most_recent_fragment = result["matched_fragment"]
3049 3092 if (
3050 3093 abort_if_offset_changes
3051 3094 and result["matched_fragment"] != most_recent_fragment
3052 3095 ):
3053 3096 break
3054 3097 if result.get("ordered", False):
3055 3098 ordered.extend(result["completions"])
3056 3099 else:
3057 3100 sortable.extend(result["completions"])
3058 3101
3059 3102 if not most_recent_fragment:
3060 3103 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3061 3104
3062 3105 return most_recent_fragment, [
3063 3106 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3064 3107 ]
3065 3108
3066 3109 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3067 3110 full_text=None) -> _CompleteResult:
3068 3111 """
3069 3112         Like complete but can also return raw jedi completions as well as the
3070 3113 origin of the completion text. This could (and should) be made much
3071 3114 cleaner but that will be simpler once we drop the old (and stateful)
3072 3115 :any:`complete` API.
3073 3116
3074 3117         With the current provisional API, cursor_pos acts (depending on the
3075 3118         caller) either as the offset in ``text`` or ``line_buffer``, or as the
3076 3119         ``column`` when passing multiline strings; this could/should be renamed
3077 3120         but would add extra noise.
3078 3121
3079 3122 Parameters
3080 3123 ----------
3081 3124 cursor_line
3082 3125 Index of the line the cursor is on. 0 indexed.
3083 3126 cursor_pos
3084 3127 Position of the cursor in the current line/line_buffer/text. 0
3085 3128 indexed.
3086 3129 line_buffer : optional, str
3087 3130             The current line the cursor is in; this exists mostly for legacy
3088 3131             reasons, as readline could only give us the single current line.
3089 3132             Prefer `full_text`.
3090 3133 text : str
3091 3134             The current "token" the cursor is in, also mostly for historical
3092 3135             reasons, as the completer would trigger only after the current line
3093 3136             was parsed.
3094 3137 full_text : str
3095 3138 Full text of the current cell.
3096 3139
3097 3140 Returns
3098 3141 -------
3099 3142 An ordered dictionary where keys are identifiers of completion
3100 3143 matchers and values are ``MatcherResult``s.
3101 3144 """
3102 3145
3103 3146 # if the cursor position isn't given, the only sane assumption we can
3104 3147 # make is that it's at the end of the line (the common case)
3105 3148 if cursor_pos is None:
3106 3149 cursor_pos = len(line_buffer) if text is None else len(text)
3107 3150
3108 3151 if self.use_main_ns:
3109 3152 self.namespace = __main__.__dict__
3110 3153
3111 3154 # if text is either None or an empty string, rely on the line buffer
3112 3155 if (not line_buffer) and full_text:
3113 3156 line_buffer = full_text.split('\n')[cursor_line]
3114 3157 if not text: # issue #11508: check line_buffer before calling split_line
3115 3158 text = (
3116 3159 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3117 3160 )
3118 3161
3119 3162 # If no line buffer is given, assume the input text is all there was
3120 3163 if line_buffer is None:
3121 3164 line_buffer = text
3122 3165
3123 3166 # deprecated - do not use `line_buffer` in new code.
3124 3167 self.line_buffer = line_buffer
3125 3168 self.text_until_cursor = self.line_buffer[:cursor_pos]
3126 3169
3127 3170 if not full_text:
3128 3171 full_text = line_buffer
3129 3172
3130 3173 context = CompletionContext(
3131 3174 full_text=full_text,
3132 3175 cursor_position=cursor_pos,
3133 3176 cursor_line=cursor_line,
3134 3177 token=text,
3135 3178 limit=MATCHES_LIMIT,
3136 3179 )
3137 3180
3138 3181 # Start with a clean slate of completions
3139 3182 results: Dict[str, MatcherResult] = {}
3140 3183
3141 3184 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3142 3185
3143 3186 suppressed_matchers: Set[str] = set()
3144 3187
3145 3188 matchers = {
3146 3189 _get_matcher_id(matcher): matcher
3147 3190 for matcher in sorted(
3148 3191 self.matchers, key=_get_matcher_priority, reverse=True
3149 3192 )
3150 3193 }
3151 3194
3152 3195 for matcher_id, matcher in matchers.items():
3153 3196 matcher_id = _get_matcher_id(matcher)
3154 3197
3155 3198 if matcher_id in self.disable_matchers:
3156 3199 continue
3157 3200
3158 3201 if matcher_id in results:
3159 3202 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3160 3203
3161 3204 if matcher_id in suppressed_matchers:
3162 3205 continue
3163 3206
3164 3207 result: MatcherResult
3165 3208 try:
3166 3209 if _is_matcher_v1(matcher):
3167 3210 result = _convert_matcher_v1_result_to_v2(
3168 3211 matcher(text), type=_UNKNOWN_TYPE
3169 3212 )
3170 3213 elif _is_matcher_v2(matcher):
3171 3214 result = matcher(context)
3172 3215 else:
3173 3216 api_version = _get_matcher_api_version(matcher)
3174 3217 raise ValueError(f"Unsupported API version {api_version}")
3175 3218 except:
3176 3219 # Show the ugly traceback if the matcher causes an
3177 3220 # exception, but do NOT crash the kernel!
3178 3221 sys.excepthook(*sys.exc_info())
3179 3222 continue
3180 3223
3181 3224 # set default value for matched fragment if suffix was not selected.
3182 3225 result["matched_fragment"] = result.get("matched_fragment", context.token)
3183 3226
3184 3227 if not suppressed_matchers:
3185 3228 suppression_recommended: Union[bool, Set[str]] = result.get(
3186 3229 "suppress", False
3187 3230 )
3188 3231
3189 3232 suppression_config = (
3190 3233 self.suppress_competing_matchers.get(matcher_id, None)
3191 3234 if isinstance(self.suppress_competing_matchers, dict)
3192 3235 else self.suppress_competing_matchers
3193 3236 )
3194 3237 should_suppress = (
3195 3238 (suppression_config is True)
3196 3239 or (suppression_recommended and (suppression_config is not False))
3197 3240 ) and has_any_completions(result)
3198 3241
3199 3242 if should_suppress:
3200 3243 suppression_exceptions: Set[str] = result.get(
3201 3244 "do_not_suppress", set()
3202 3245 )
3203 3246 if isinstance(suppression_recommended, Iterable):
3204 3247 to_suppress = set(suppression_recommended)
3205 3248 else:
3206 3249 to_suppress = set(matchers)
3207 3250 suppressed_matchers = to_suppress - suppression_exceptions
3208 3251
3209 3252 new_results = {}
3210 3253 for previous_matcher_id, previous_result in results.items():
3211 3254 if previous_matcher_id not in suppressed_matchers:
3212 3255 new_results[previous_matcher_id] = previous_result
3213 3256 results = new_results
3214 3257
3215 3258 results[matcher_id] = result
3216 3259
3217 3260 _, matches = self._arrange_and_extract(
3218 3261 results,
3219 3262             # TODO: Jedi completions not included in legacy stateful API; was this deliberate or an omission?
3220 3263             # if it was an omission, we can remove the filtering step, otherwise remove this comment.
3221 3264 skip_matchers={jedi_matcher_id},
3222 3265 abort_if_offset_changes=False,
3223 3266 )
3224 3267
3225 3268 # populate legacy stateful API
3226 3269 self.matches = matches
3227 3270
3228 3271 return results
3229 3272
3230 3273 @staticmethod
3231 3274 def _deduplicate(
3232 3275 matches: Sequence[AnyCompletion],
3233 3276 ) -> Iterable[AnyCompletion]:
3234 3277 filtered_matches: Dict[str, AnyCompletion] = {}
3235 3278 for match in matches:
3236 3279 text = match.text
3237 3280 if (
3238 3281 text not in filtered_matches
3239 3282 or filtered_matches[text].type == _UNKNOWN_TYPE
3240 3283 ):
3241 3284 filtered_matches[text] = match
3242 3285
3243 3286 return filtered_matches.values()
3244 3287
3245 3288 @staticmethod
3246 3289 def _sort(matches: Sequence[AnyCompletion]):
3247 3290 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3248 3291
3249 3292 @context_matcher()
3250 3293 def fwd_unicode_matcher(self, context: CompletionContext):
3251 3294 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3252 3295 # TODO: use `context.limit` to terminate early once we matched the maximum
3253 3296 # number that will be used downstream; can be added as an optional to
3254 3297 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3255 3298 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3256 3299 return _convert_matcher_v1_result_to_v2(
3257 3300 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3258 3301 )
3259 3302
3260 3303 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
3261 3304 """
3262 3305 Forward match a string starting with a backslash with a list of
3263 3306 potential Unicode completions.
3264 3307
3265 3308 Will compute list of Unicode character names on first call and cache it.
3266 3309
3267 3310 .. deprecated:: 8.6
3268 3311 You can use :meth:`fwd_unicode_matcher` instead.
3269 3312
3270 3313 Returns
3271 3314 -------
3272 3315         A tuple with:
3273 3316         - matched text (empty if no matches)
3274 3317         - list of potential completions (empty tuple if no matches)
3275 3318 """
3276 3319 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3277 3320 # We could do a faster match using a Trie.
3278 3321
3279 3322         # Using pygtrie the following seems to work:
3280 3323
3281 3324 # s = PrefixSet()
3282 3325
3283 3326 # for c in range(0,0x10FFFF + 1):
3284 3327 # try:
3285 3328 # s.add(unicodedata.name(chr(c)))
3286 3329 # except ValueError:
3287 3330 # pass
3288 3331 # [''.join(k) for k in s.iter(prefix)]
3289 3332
3290 3333 # But need to be timed and adds an extra dependency.
3291 3334
3292 3335 slashpos = text.rfind('\\')
3293 3336 # if text starts with slash
3294 3337 if slashpos > -1:
3295 3338 # PERF: It's important that we don't access self._unicode_names
3296 3339 # until we're inside this if-block. _unicode_names is lazily
3297 3340 # initialized, and it takes a user-noticeable amount of time to
3298 3341 # initialize it, so we don't want to initialize it unless we're
3299 3342 # actually going to use it.
3300 3343 s = text[slashpos + 1 :]
3301 3344 sup = s.upper()
3302 3345 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3303 3346 if candidates:
3304 3347 return s, candidates
3305 3348 candidates = [x for x in self.unicode_names if sup in x]
3306 3349 if candidates:
3307 3350 return s, candidates
3308 3351 splitsup = sup.split(" ")
3309 3352 candidates = [
3310 3353 x for x in self.unicode_names if all(u in x for u in splitsup)
3311 3354 ]
3312 3355 if candidates:
3313 3356 return s, candidates
3314 3357
3315 3358 return "", ()
3316 3359
3317 3360 # if text does not start with slash
3318 3361 else:
3319 3362 return '', ()
3320 3363
3321 3364 @property
3322 3365 def unicode_names(self) -> List[str]:
3323 3366 """List of names of unicode code points that can be completed.
3324 3367
3325 3368 The list is lazily initialized on first access.
3326 3369 """
3327 3370 if self._unicode_names is None:
3334 3377 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3335 3378
3336 3379 return self._unicode_names
3337 3380
3338 3381 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
3339 3382 names = []
3340 3383     for start, stop in ranges:
3341 3384         for c in range(start, stop):
3342 3385 try:
3343 3386 names.append(unicodedata.name(chr(c)))
3344 3387 except ValueError:
3345 3388 pass
3346 3389 return names
@@ -1,1759 +1,1769 b''
1 1 # encoding: utf-8
2 2 """Tests for the IPython tab-completion machinery."""
3 3
4 4 # Copyright (c) IPython Development Team.
5 5 # Distributed under the terms of the Modified BSD License.
6 6
7 7 import os
8 8 import pytest
9 9 import sys
10 10 import textwrap
11 11 import unittest
12 12
13 13 from importlib.metadata import version
14 14
15 15
16 16 from contextlib import contextmanager
17 17
18 18 from traitlets.config.loader import Config
19 19 from IPython import get_ipython
20 20 from IPython.core import completer
21 21 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
22 22 from IPython.utils.generics import complete_object
23 23 from IPython.testing import decorators as dec
24 24
25 25 from IPython.core.completer import (
26 26 Completion,
27 27 provisionalcompleter,
28 28 match_dict_keys,
29 29 _deduplicate_completions,
30 30 _match_number_in_dict_key_prefix,
31 31 completion_matcher,
32 32 SimpleCompletion,
33 33 CompletionContext,
34 34 )
35 35
36 36 from packaging.version import parse
37 37
38 38
39 39 # -----------------------------------------------------------------------------
40 40 # Test functions
41 41 # -----------------------------------------------------------------------------
42 42
43 43
44 44 def recompute_unicode_ranges():
45 45 """
46 46 utility to recompute the largest unicode range without any characters
47 47
48 48 use to recompute the gap in the global _UNICODE_RANGES of completer.py
49 49 """
50 50 import itertools
51 51 import unicodedata
52 52
53 53 valid = []
54 54 for c in range(0, 0x10FFFF + 1):
55 55 try:
56 56 unicodedata.name(chr(c))
57 57 except ValueError:
58 58 continue
59 59 valid.append(c)
60 60
61 61 def ranges(i):
62 62 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
63 63 b = list(b)
64 64 yield b[0][1], b[-1][1]
65 65
66 66 rg = list(ranges(valid))
67 67 lens = []
68 68 gap_lens = []
69 69 pstart, pstop = 0, 0
70 70 for start, stop in rg:
71 71 lens.append(stop - start)
72 72 gap_lens.append(
73 73 (
74 74 start - pstop,
75 75 hex(pstop + 1),
76 76 hex(start),
77 77 f"{round((start - pstop)/0xe01f0*100)}%",
78 78 )
79 79 )
80 80 pstart, pstop = start, stop
81 81
82 82 return sorted(gap_lens)[-1]
83 83
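The `ranges` helper above relies on a classic `itertools.groupby` trick: within a run of consecutive integers, `value - index` (under `enumerate`) is constant, so grouping on that difference splits the sequence into runs. A standalone sketch of the same technique:

```python
import itertools

def consecutive_ranges(values):
    # Runs of consecutive integers share a constant (value - index) key,
    # so groupby on that key yields one group per run.
    for _, group in itertools.groupby(
        enumerate(values), lambda pair: pair[1] - pair[0]
    ):
        group = list(group)
        yield group[0][1], group[-1][1]

print(list(consecutive_ranges([1, 2, 3, 7, 8, 10])))
# [(1, 3), (7, 8), (10, 10)]
```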
84 84
85 85 def test_unicode_range():
86 86 """
87 87 Test that the ranges we test for unicode names give the same number of
88 88 results than testing the full length.
89 89 """
90 90 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
91 91
92 92 expected_list = _unicode_name_compute([(0, 0x110000)])
93 93 test = _unicode_name_compute(_UNICODE_RANGES)
94 94 len_exp = len(expected_list)
95 95 len_test = len(test)
96 96
97 97 # Do not inline the len() calls, or on error pytest will try to print the
98 98 # 130,000+ elements.
99 99 message = None
100 100 if len_exp != len_test or len_exp > 131808:
101 101 size, start, stop, prct = recompute_unicode_ranges()
102 102 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
103 103 likely due to a new release of Python. We've found that the biggest gap
104 104 in unicode characters has reduced in size to {size} characters
105 105 ({prct}), from {start} to {stop}. In completer.py, likely update to
106 106
107 107 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
108 108
109 109 And update the assertion below to use
110 110
111 111 len_exp <= {len_exp}
112 112 """
113 113 assert len_exp == len_test, message
114 114
115 115 # fail if new unicode symbols have been added.
116 116 assert len_exp <= 143668, message
117 117
118 118
119 119 @contextmanager
120 120 def greedy_completion():
121 121 ip = get_ipython()
122 122 greedy_original = ip.Completer.greedy
123 123 try:
124 124 ip.Completer.greedy = True
125 125 yield
126 126 finally:
127 127 ip.Completer.greedy = greedy_original
128 128
129 129
130 130 @contextmanager
131 131 def evaluation_policy(evaluation: str):
132 132 ip = get_ipython()
133 133 evaluation_original = ip.Completer.evaluation
134 134 try:
135 135 ip.Completer.evaluation = evaluation
136 136 yield
137 137 finally:
138 138 ip.Completer.evaluation = evaluation_original
139 139
140 140
141 141 @contextmanager
142 142 def custom_matchers(matchers):
143 143 ip = get_ipython()
144 144 try:
145 145 ip.Completer.custom_matchers.extend(matchers)
146 146 yield
147 147 finally:
148 148 ip.Completer.custom_matchers.clear()
149 149
150 150
151 151 def test_protect_filename():
152 152 if sys.platform == "win32":
153 153 pairs = [
154 154 ("abc", "abc"),
155 155 (" abc", '" abc"'),
156 156 ("a bc", '"a bc"'),
157 157 ("a bc", '"a bc"'),
158 158 (" bc", '" bc"'),
159 159 ]
160 160 else:
161 161 pairs = [
162 162 ("abc", "abc"),
163 163 (" abc", r"\ abc"),
164 164 ("a bc", r"a\ bc"),
165 165 ("a bc", r"a\ \ bc"),
166 166 (" bc", r"\ \ bc"),
167 167 # On posix, we also protect parens and other special characters.
168 168 ("a(bc", r"a\(bc"),
169 169 ("a)bc", r"a\)bc"),
170 170 ("a( )bc", r"a\(\ \)bc"),
171 171 ("a[1]bc", r"a\[1\]bc"),
172 172 ("a{1}bc", r"a\{1\}bc"),
173 173 ("a#bc", r"a\#bc"),
174 174 ("a?bc", r"a\?bc"),
175 175 ("a=bc", r"a\=bc"),
176 176 ("a\\bc", r"a\\bc"),
177 177 ("a|bc", r"a\|bc"),
178 178 ("a;bc", r"a\;bc"),
179 179 ("a:bc", r"a\:bc"),
180 180 ("a'bc", r"a\'bc"),
181 181 ("a*bc", r"a\*bc"),
182 182 ('a"bc', r"a\"bc"),
183 183 ("a^bc", r"a\^bc"),
184 184 ("a&bc", r"a\&bc"),
185 185 ]
186 186 # run the actual tests
187 187 for s1, s2 in pairs:
188 188 s1p = completer.protect_filename(s1)
189 189 assert s1p == s2
190 190
191 191
192 192 def check_line_split(splitter, test_specs):
193 193 for part1, part2, split in test_specs:
194 194 cursor_pos = len(part1)
195 195 line = part1 + part2
196 196 out = splitter.split_line(line, cursor_pos)
197 197 assert out == split
198 198
199 199 def test_line_split():
200 200 """Basic line splitter test with default specs."""
201 201 sp = completer.CompletionSplitter()
202 202 # The format of the test specs is: part1, part2, expected answer. Parts 1
203 203 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
204 204 # was at the end of part1. So an empty part2 represents someone hitting
205 205 # tab at the end of the line, the most common case.
206 206 t = [
207 207 ("run some/scrip", "", "some/scrip"),
208 208 ("run scripts/er", "ror.py foo", "scripts/er"),
209 209 ("echo $HOM", "", "HOM"),
210 210 ("print sys.pa", "", "sys.pa"),
211 211 ("print(sys.pa", "", "sys.pa"),
212 212 ("execfile('scripts/er", "", "scripts/er"),
213 213 ("a[x.", "", "x."),
214 214 ("a[x.", "y", "x."),
215 215 ('cd "some_file/', "", "some_file/"),
216 216 ]
217 217 check_line_split(sp, t)
218 218 # Ensure splitting works OK with unicode by re-running the tests with
219 219 # all inputs turned into unicode
220 220 check_line_split(sp, [map(str, p) for p in t])
221 221
222 222
223 223 class NamedInstanceClass:
224 224 instances = {}
225 225
226 226 def __init__(self, name):
227 227 self.instances[name] = self
228 228
229 229 @classmethod
230 230 def _ipython_key_completions_(cls):
231 231 return cls.instances.keys()
232 232
233 233
234 234 class KeyCompletable:
235 235 def __init__(self, things=()):
236 236 self.things = things
237 237
238 238 def _ipython_key_completions_(self):
239 239 return list(self.things)
240 240
241 241
242 242 class TestCompleter(unittest.TestCase):
243 243 def setUp(self):
244 244 """
245 245 We want to silence all PendingDeprecationWarning when testing the completer
246 246 """
247 247 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
248 248 self._assertwarns.__enter__()
249 249
250 250 def tearDown(self):
251 251 try:
252 252 self._assertwarns.__exit__(None, None, None)
253 253 except AssertionError:
254 254 pass
255 255
256 256 def test_custom_completion_error(self):
257 257 """Test that errors from custom attribute completers are silenced."""
258 258 ip = get_ipython()
259 259
260 260 class A:
261 261 pass
262 262
263 263 ip.user_ns["x"] = A()
264 264
265 265 @complete_object.register(A)
266 266 def complete_A(a, existing_completions):
267 267 raise TypeError("this should be silenced")
268 268
269 269 ip.complete("x.")
270 270
271 271 def test_custom_completion_ordering(self):
272 272 """Test that errors from custom attribute completers are silenced."""
273 273 ip = get_ipython()
274 274
275 275 _, matches = ip.complete('in')
276 276 assert matches.index('input') < matches.index('int')
277 277
278 278 def complete_example(a):
279 279 return ['example2', 'example1']
280 280
281 281 ip.Completer.custom_completers.add_re('ex*', complete_example)
282 282 _, matches = ip.complete('ex')
283 283 assert matches.index('example2') < matches.index('example1')
284 284
285 285 def test_unicode_completions(self):
286 286 ip = get_ipython()
287 287 # Some strings that trigger different types of completion. Check them both
288 288 # in str and unicode forms
289 289 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
290 290 for t in s + list(map(str, s)):
291 291 # We don't need to check exact completion values (they may change
292 292 # depending on the state of the namespace, but at least no exceptions
293 293 # should be thrown and the return value should be a pair of text, list
294 294 # values.
295 295 text, matches = ip.complete(t)
296 296 self.assertIsInstance(text, str)
297 297 self.assertIsInstance(matches, list)
298 298
299 299 def test_latex_completions(self):
300 300 from IPython.core.latex_symbols import latex_symbols
301 301 import random
302 302
303 303 ip = get_ipython()
304 304 # Test some random unicode symbols
305 305 keys = random.sample(sorted(latex_symbols), 10)
306 306 for k in keys:
307 307 text, matches = ip.complete(k)
308 308 self.assertEqual(text, k)
309 309 self.assertEqual(matches, [latex_symbols[k]])
310 310 # Test a more complex line
311 311 text, matches = ip.complete("print(\\alpha")
312 312 self.assertEqual(text, "\\alpha")
313 313 self.assertEqual(matches[0], latex_symbols["\\alpha"])
314 314 # Test multiple matching latex symbols
315 315 text, matches = ip.complete("\\al")
316 316 self.assertIn("\\alpha", matches)
317 317 self.assertIn("\\aleph", matches)
318 318
319 319 def test_latex_no_results(self):
320 320 """
321 321 Forward latex should return nothing in either field if nothing is found.
322 322 """
323 323 ip = get_ipython()
324 324 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
325 325 self.assertEqual(text, "")
326 326 self.assertEqual(matches, ())
327 327
328 328 def test_back_latex_completion(self):
329 329 ip = get_ipython()
330 330
331 331 # do not return more than 1 matches for \beta, only the latex one.
332 332 name, matches = ip.complete("\\Ξ²")
333 333 self.assertEqual(matches, ["\\beta"])
334 334
335 335 def test_back_unicode_completion(self):
336 336 ip = get_ipython()
337 337
338 338 name, matches = ip.complete("\\β…€")
339 339 self.assertEqual(matches, ["\\ROMAN NUMERAL FIVE"])
340 340
341 341 def test_forward_unicode_completion(self):
342 342 ip = get_ipython()
343 343
344 344 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
345 345 self.assertEqual(matches, ["β…€"]) # This is not a V
346 346 self.assertEqual(matches, ["\u2164"]) # same as above but explicit.
347 347
348 348 def test_delim_setting(self):
349 349 sp = completer.CompletionSplitter()
350 350 sp.delims = " "
351 351 self.assertEqual(sp.delims, " ")
352 352 self.assertEqual(sp._delim_expr, r"[\ ]")
353 353
354 354 def test_spaces(self):
355 355 """Test with only spaces as split chars."""
356 356 sp = completer.CompletionSplitter()
357 357 sp.delims = " "
358 358 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
359 359 check_line_split(sp, t)
360 360
361 361 def test_has_open_quotes1(self):
362 362 for s in ["'", "'''", "'hi' '"]:
363 363 self.assertEqual(completer.has_open_quotes(s), "'")
364 364
365 365 def test_has_open_quotes2(self):
366 366 for s in ['"', '"""', '"hi" "']:
367 367 self.assertEqual(completer.has_open_quotes(s), '"')
368 368
369 369 def test_has_open_quotes3(self):
370 370 for s in ["''", "''' '''", "'hi' 'ipython'"]:
371 371 self.assertFalse(completer.has_open_quotes(s))
372 372
373 373 def test_has_open_quotes4(self):
374 374 for s in ['""', '""" """', '"hi" "ipython"']:
375 375 self.assertFalse(completer.has_open_quotes(s))
376 376
377 377 @pytest.mark.xfail(
378 378 sys.platform == "win32", reason="abspath completions fail on Windows"
379 379 )
380 380 def test_abspath_file_completions(self):
381 381 ip = get_ipython()
382 382 with TemporaryDirectory() as tmpdir:
383 383 prefix = os.path.join(tmpdir, "foo")
384 384 suffixes = ["1", "2"]
385 385 names = [prefix + s for s in suffixes]
386 386 for n in names:
387 387 open(n, "w", encoding="utf-8").close()
388 388
389 389 # Check simple completion
390 390 c = ip.complete(prefix)[1]
391 391 self.assertEqual(c, names)
392 392
393 393 # Now check with a function call
394 394 cmd = 'a = f("%s' % prefix
395 395 c = ip.complete(prefix, cmd)[1]
396 396 comp = [prefix + s for s in suffixes]
397 397 self.assertEqual(c, comp)
398 398
399 399 def test_local_file_completions(self):
400 400 ip = get_ipython()
401 401 with TemporaryWorkingDirectory():
402 402 prefix = "./foo"
403 403 suffixes = ["1", "2"]
404 404 names = [prefix + s for s in suffixes]
405 405 for n in names:
406 406 open(n, "w", encoding="utf-8").close()
407 407
408 408 # Check simple completion
409 409 c = ip.complete(prefix)[1]
410 410 self.assertEqual(c, names)
411 411
412 412 # Now check with a function call
413 413 cmd = 'a = f("%s' % prefix
414 414 c = ip.complete(prefix, cmd)[1]
415 415 comp = {prefix + s for s in suffixes}
416 416 self.assertTrue(comp.issubset(set(c)))
417 417
418 418 def test_quoted_file_completions(self):
419 419 ip = get_ipython()
420 420
421 421 def _(text):
422 422 return ip.Completer._complete(
423 423 cursor_line=0, cursor_pos=len(text), full_text=text
424 424 )["IPCompleter.file_matcher"]["completions"]
425 425
426 426 with TemporaryWorkingDirectory():
427 427 name = "foo'bar"
428 428 open(name, "w", encoding="utf-8").close()
429 429
430 430 # Don't escape Windows
431 431 escaped = name if sys.platform == "win32" else "foo\\'bar"
432 432
433 433 # Single quote matches embedded single quote
434 434 c = _("open('foo")[0]
435 435 self.assertEqual(c.text, escaped)
436 436
437 437 # Double quote requires no escape
438 438 c = _('open("foo')[0]
439 439 self.assertEqual(c.text, name)
440 440
441 441 # No quote requires an escape
442 442 c = _("%ls foo")[0]
443 443 self.assertEqual(c.text, escaped)
444 444
445 445 @pytest.mark.xfail(
446 446 sys.version_info.releaselevel in ("alpha",),
447 447 reason="Parso does not yet parse 3.13",
448 448 )
449 449 def test_all_completions_dups(self):
450 450 """
451 451 Make sure the output of `IPCompleter.all_completions` does not have
452 452 duplicated prefixes.
453 453 """
454 454 ip = get_ipython()
455 455 c = ip.Completer
456 456 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
457 457 for jedi_status in [True, False]:
458 458 with provisionalcompleter():
459 459 ip.Completer.use_jedi = jedi_status
460 460 matches = c.all_completions("TestCl")
461 461 assert matches == ["TestClass"], (jedi_status, matches)
462 462 matches = c.all_completions("TestClass.")
463 463 assert len(matches) > 2, (jedi_status, matches)
464 464 matches = c.all_completions("TestClass.a")
465 assert matches == ['TestClass.a', 'TestClass.a1'], jedi_status
465 if jedi_status:
466 assert matches == ["TestClass.a", "TestClass.a1"], jedi_status
467 else:
468 assert matches == [".a", ".a1"], jedi_status
466 469
467 470 @pytest.mark.xfail(
468 471 sys.version_info.releaselevel in ("alpha",),
469 472 reason="Parso does not yet parse 3.13",
470 473 )
471 474 def test_jedi(self):
472 475 """
473 476 A couple of issues we had with Jedi.
474 477 """
475 478 ip = get_ipython()
476 479
477 480 def _test_complete(reason, s, comp, start=None, end=None):
478 481 l = len(s)
479 482 start = start if start is not None else l
480 483 end = end if end is not None else l
481 484 with provisionalcompleter():
482 485 ip.Completer.use_jedi = True
483 486 completions = set(ip.Completer.completions(s, l))
484 487 ip.Completer.use_jedi = False
485 488 assert Completion(start, end, comp) in completions, reason
486 489
487 490 def _test_not_complete(reason, s, comp):
488 491 l = len(s)
489 492 with provisionalcompleter():
490 493 ip.Completer.use_jedi = True
491 494 completions = set(ip.Completer.completions(s, l))
492 495 ip.Completer.use_jedi = False
493 496 assert Completion(l, l, comp) not in completions, reason
494 497
495 498 import jedi
496 499
497 500 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
498 501 if jedi_version > (0, 10):
499 502 _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
500 503 _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
501 504 _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
502 505 _test_complete("cover duplicate completions", "im", "import", 0, 2)
503 506
504 507 _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")
505 508
506 509 @pytest.mark.xfail(
507 510 sys.version_info.releaselevel in ("alpha",),
508 511 reason="Parso does not yet parse 3.13",
509 512 )
510 513 def test_completion_have_signature(self):
511 514 """
512 515 Let's make sure jedi is capable of pulling out the signature of the function we are completing.
513 516 """
514 517 ip = get_ipython()
515 518 with provisionalcompleter():
516 519 ip.Completer.use_jedi = True
517 520 completions = ip.Completer.completions("ope", 3)
518 521 c = next(completions) # should be `open`
519 522 ip.Completer.use_jedi = False
520 523 assert "file" in c.signature, "Signature of function was not found by completer"
521 524 assert (
522 525 "encoding" in c.signature
523 526 ), "Signature of function was not found by completer"
524 527
525 528 @pytest.mark.xfail(
526 529 sys.version_info.releaselevel in ("alpha",),
527 530 reason="Parso does not yet parse 3.13",
528 531 )
529 532 def test_completions_have_type(self):
530 533 """
531 534 Let's make sure matchers provide the completion type.
532 535 """
533 536 ip = get_ipython()
534 537 with provisionalcompleter():
535 538 ip.Completer.use_jedi = False
536 539 completions = ip.Completer.completions("%tim", 3)
537 540 c = next(completions) # should be `%time` or similar
538 541 assert c.type == "magic", "Type of magic was not assigned by completer"
539 542
540 543 @pytest.mark.xfail(
541 544 parse(version("jedi")) <= parse("0.18.0"),
542 545 reason="Known failure on jedi<=0.18.0",
543 546 strict=True,
544 547 )
545 548 def test_deduplicate_completions(self):
546 549 """
547 550 Test that completions are correctly deduplicated (even if ranges are not the same)
548 551 """
549 552 ip = get_ipython()
550 553 ip.ex(
551 554 textwrap.dedent(
552 555 """
553 556 class Z:
554 557 zoo = 1
555 558 """
556 559 )
557 560 )
558 561 with provisionalcompleter():
559 562 ip.Completer.use_jedi = True
560 563 l = list(
561 564 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
562 565 )
563 566 ip.Completer.use_jedi = False
564 567
565 568 assert len(l) == 1, "Completions (Z.z<tab>) correctly deduplicate: %s " % l
566 569 assert l[0].text == "zoo" # and not `it.accumulate`
567 570
568 571 @pytest.mark.xfail(
569 572 sys.version_info.releaselevel in ("alpha",),
570 573 reason="Parso does not yet parse 3.13",
571 574 )
572 575 def test_greedy_completions(self):
573 576 """
574 577 Test the capability of the Greedy completer.
575 578
576 579 Most of the tests here do not really show off the greedy completer; as proof,
577 580 each of the texts below now passes with Jedi. The greedy completer is capable of more.
578 581
579 582 See the :any:`test_dict_key_completion_contexts`
580 583
581 584 """
582 585 ip = get_ipython()
583 586 ip.ex("a=list(range(5))")
584 587 ip.ex("d = {'a b': str}")
585 588 _, c = ip.complete(".", line="a[0].")
586 589 self.assertFalse(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
587 590
588 591 def _(line, cursor_pos, expect, message, completion):
589 592 with greedy_completion(), provisionalcompleter():
590 593 ip.Completer.use_jedi = False
591 594 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
592 595 self.assertIn(expect, c, message % c)
593 596
594 597 ip.Completer.use_jedi = True
595 598 with provisionalcompleter():
596 599 completions = ip.Completer.completions(line, cursor_pos)
597 self.assertIn(completion, completions)
600 self.assertIn(completion, list(completions))
598 601
599 602 with provisionalcompleter():
600 603 _(
601 604 "a[0].",
602 605 5,
603 606 ".real",
604 607 "Should have completed on a[0].: %s",
605 608 Completion(5, 5, "real"),
606 609 )
607 610 _(
608 611 "a[0].r",
609 612 6,
610 613 ".real",
611 614 "Should have completed on a[0].r: %s",
612 615 Completion(5, 6, "real"),
613 616 )
614 617
615 618 _(
616 619 "a[0].from_",
617 620 10,
618 621 ".from_bytes",
619 622 "Should have completed on a[0].from_: %s",
620 623 Completion(5, 10, "from_bytes"),
621 624 )
622 625 _(
623 626 "assert str.star",
624 627 14,
625 "str.startswith",
628 ".startswith",
626 629 "Should have completed on `assert str.star`: %s",
627 630 Completion(11, 14, "startswith"),
628 631 )
629 632 _(
630 633 "d['a b'].str",
631 634 12,
632 635 ".strip",
633 636 "Should have completed on `d['a b'].str`: %s",
634 637 Completion(9, 12, "strip"),
635 638 )
639 _(
640 "a.app",
641 4,
642 ".append",
643 "Should have completed on `a.app`: %s",
644 Completion(2, 4, "append"),
645 )
636 646
637 647 def test_omit__names(self):
638 648 # also happens to test IPCompleter as a configurable
639 649 ip = get_ipython()
640 650 ip._hidden_attr = 1
641 651 ip._x = {}
642 652 c = ip.Completer
643 653 ip.ex("ip=get_ipython()")
644 654 cfg = Config()
645 655 cfg.IPCompleter.omit__names = 0
646 656 c.update_config(cfg)
647 657 with provisionalcompleter():
648 658 c.use_jedi = False
649 659 s, matches = c.complete("ip.")
650 self.assertIn("ip.__str__", matches)
651 self.assertIn("ip._hidden_attr", matches)
660 self.assertIn(".__str__", matches)
661 self.assertIn("._hidden_attr", matches)
652 662
653 663 # c.use_jedi = True
654 664 # completions = set(c.completions('ip.', 3))
655 665 # self.assertIn(Completion(3, 3, '__str__'), completions)
656 666 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
657 667
658 668 cfg = Config()
659 669 cfg.IPCompleter.omit__names = 1
660 670 c.update_config(cfg)
661 671 with provisionalcompleter():
662 672 c.use_jedi = False
663 673 s, matches = c.complete("ip.")
664 self.assertNotIn("ip.__str__", matches)
674 self.assertNotIn(".__str__", matches)
665 675 # self.assertIn('ip._hidden_attr', matches)
666 676
667 677 # c.use_jedi = True
668 678 # completions = set(c.completions('ip.', 3))
669 679 # self.assertNotIn(Completion(3,3,'__str__'), completions)
670 680 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
671 681
672 682 cfg = Config()
673 683 cfg.IPCompleter.omit__names = 2
674 684 c.update_config(cfg)
675 685 with provisionalcompleter():
676 686 c.use_jedi = False
677 687 s, matches = c.complete("ip.")
678 self.assertNotIn("ip.__str__", matches)
679 self.assertNotIn("ip._hidden_attr", matches)
688 self.assertNotIn(".__str__", matches)
689 self.assertNotIn("._hidden_attr", matches)
680 690
681 691 # c.use_jedi = True
682 692 # completions = set(c.completions('ip.', 3))
683 693 # self.assertNotIn(Completion(3,3,'__str__'), completions)
684 694 # self.assertNotIn(Completion(3,3, "_hidden_attr"), completions)
685 695
686 696 with provisionalcompleter():
687 697 c.use_jedi = False
688 698 s, matches = c.complete("ip._x.")
689 self.assertIn("ip._x.keys", matches)
699 self.assertIn(".keys", matches)
690 700
691 701 # c.use_jedi = True
692 702 # completions = set(c.completions('ip._x.', 6))
693 703 # self.assertIn(Completion(6,6, "keys"), completions)
694 704
695 705 del ip._hidden_attr
696 706 del ip._x
697 707
698 708 def test_limit_to__all__False_ok(self):
699 709 """
700 710 Limit to all is deprecated, once we remove it this test can go away.
701 711 """
702 712 ip = get_ipython()
703 713 c = ip.Completer
704 714 c.use_jedi = False
705 715 ip.ex("class D: x=24")
706 716 ip.ex("d=D()")
707 717 cfg = Config()
708 718 cfg.IPCompleter.limit_to__all__ = False
709 719 c.update_config(cfg)
710 720 s, matches = c.complete("d.")
711 self.assertIn("d.x", matches)
721 self.assertIn(".x", matches)
712 722
713 723 def test_get__all__entries_ok(self):
714 724 class A:
715 725 __all__ = ["x", 1]
716 726
717 727 words = completer.get__all__entries(A())
718 728 self.assertEqual(words, ["x"])
719 729
720 730 def test_get__all__entries_no__all__ok(self):
721 731 class A:
722 732 pass
723 733
724 734 words = completer.get__all__entries(A())
725 735 self.assertEqual(words, [])
726 736
727 737 def test_func_kw_completions(self):
728 738 ip = get_ipython()
729 739 c = ip.Completer
730 740 c.use_jedi = False
731 741 ip.ex("def myfunc(a=1,b=2): return a+b")
732 742 s, matches = c.complete(None, "myfunc(1,b")
733 743 self.assertIn("b=", matches)
734 744 # Simulate completing with cursor right after b (pos==10):
735 745 s, matches = c.complete(None, "myfunc(1,b)", 10)
736 746 self.assertIn("b=", matches)
737 747 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
738 748 self.assertIn("b=", matches)
739 749 # builtin function
740 750 s, matches = c.complete(None, "min(k, k")
741 751 self.assertIn("key=", matches)
742 752
743 753 def test_default_arguments_from_docstring(self):
744 754 ip = get_ipython()
745 755 c = ip.Completer
746 756 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
747 757 self.assertEqual(kwd, ["key"])
748 758 # with cython type etc
749 759 kwd = c._default_arguments_from_docstring(
750 760 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
751 761 )
752 762 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
753 763 # white spaces
754 764 kwd = c._default_arguments_from_docstring(
755 765 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
756 766 )
757 767 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
758 768
759 769 def test_line_magics(self):
760 770 ip = get_ipython()
761 771 c = ip.Completer
762 772 s, matches = c.complete(None, "lsmag")
763 773 self.assertIn("%lsmagic", matches)
764 774 s, matches = c.complete(None, "%lsmag")
765 775 self.assertIn("%lsmagic", matches)
766 776
767 777 def test_cell_magics(self):
768 778 from IPython.core.magic import register_cell_magic
769 779
770 780 @register_cell_magic
771 781 def _foo_cellm(line, cell):
772 782 pass
773 783
774 784 ip = get_ipython()
775 785 c = ip.Completer
776 786
777 787 s, matches = c.complete(None, "_foo_ce")
778 788 self.assertIn("%%_foo_cellm", matches)
779 789 s, matches = c.complete(None, "%%_foo_ce")
780 790 self.assertIn("%%_foo_cellm", matches)
781 791
782 792 def test_line_cell_magics(self):
783 793 from IPython.core.magic import register_line_cell_magic
784 794
785 795 @register_line_cell_magic
786 796 def _bar_cellm(line, cell):
787 797 pass
788 798
789 799 ip = get_ipython()
790 800 c = ip.Completer
791 801
792 802 # The policy here is trickier, see comments in completion code. The
793 803 # returned values depend on whether the user passes %% or not explicitly,
794 804 # and this will show a difference if the same name is both a line and cell
795 805 # magic.
796 806 s, matches = c.complete(None, "_bar_ce")
797 807 self.assertIn("%_bar_cellm", matches)
798 808 self.assertIn("%%_bar_cellm", matches)
799 809 s, matches = c.complete(None, "%_bar_ce")
800 810 self.assertIn("%_bar_cellm", matches)
801 811 self.assertIn("%%_bar_cellm", matches)
802 812 s, matches = c.complete(None, "%%_bar_ce")
803 813 self.assertNotIn("%_bar_cellm", matches)
804 814 self.assertIn("%%_bar_cellm", matches)
805 815
806 816 def test_magic_completion_order(self):
807 817 ip = get_ipython()
808 818 c = ip.Completer
809 819
810 820 # Test ordering of line and cell magics.
811 821 text, matches = c.complete("timeit")
812 822 self.assertEqual(matches, ["%timeit", "%%timeit"])
813 823
814 824 def test_magic_completion_shadowing(self):
815 825 ip = get_ipython()
816 826 c = ip.Completer
817 827 c.use_jedi = False
818 828
819 829 # Before importing matplotlib, %matplotlib magic should be the only option.
820 830 text, matches = c.complete("mat")
821 831 self.assertEqual(matches, ["%matplotlib"])
822 832
823 833 # The newly introduced name should shadow the magic.
824 834 ip.run_cell("matplotlib = 1")
825 835 text, matches = c.complete("mat")
826 836 self.assertEqual(matches, ["matplotlib"])
827 837
828 838 # After removing matplotlib from namespace, the magic should again be
829 839 # the only option.
830 840 del ip.user_ns["matplotlib"]
831 841 text, matches = c.complete("mat")
832 842 self.assertEqual(matches, ["%matplotlib"])
833 843
834 844 def test_magic_completion_shadowing_explicit(self):
835 845 """
836 846 If the user tries to complete a shadowed magic, an explicit % start should
837 847 still return the completions.
838 848 """
839 849 ip = get_ipython()
840 850 c = ip.Completer
841 851
842 852 # Before importing matplotlib, %matplotlib magic should be the only option.
843 853 text, matches = c.complete("%mat")
844 854 self.assertEqual(matches, ["%matplotlib"])
845 855
846 856 ip.run_cell("matplotlib = 1")
847 857
848 858 # After removing matplotlib from namespace, the magic should still be
849 859 # the only option.
850 860 text, matches = c.complete("%mat")
851 861 self.assertEqual(matches, ["%matplotlib"])
852 862
853 863 def test_magic_config(self):
854 864 ip = get_ipython()
855 865 c = ip.Completer
856 866
857 867 s, matches = c.complete(None, "conf")
858 868 self.assertIn("%config", matches)
859 869 s, matches = c.complete(None, "conf")
860 870 self.assertNotIn("AliasManager", matches)
861 871 s, matches = c.complete(None, "config ")
862 872 self.assertIn("AliasManager", matches)
863 873 s, matches = c.complete(None, "%config ")
864 874 self.assertIn("AliasManager", matches)
865 875 s, matches = c.complete(None, "config Ali")
866 876 self.assertListEqual(["AliasManager"], matches)
867 877 s, matches = c.complete(None, "%config Ali")
868 878 self.assertListEqual(["AliasManager"], matches)
869 879 s, matches = c.complete(None, "config AliasManager")
870 880 self.assertListEqual(["AliasManager"], matches)
871 881 s, matches = c.complete(None, "%config AliasManager")
872 882 self.assertListEqual(["AliasManager"], matches)
873 883 s, matches = c.complete(None, "config AliasManager.")
874 884 self.assertIn("AliasManager.default_aliases", matches)
875 885 s, matches = c.complete(None, "%config AliasManager.")
876 886 self.assertIn("AliasManager.default_aliases", matches)
877 887 s, matches = c.complete(None, "config AliasManager.de")
878 888 self.assertListEqual(["AliasManager.default_aliases"], matches)
879 889 s, matches = c.complete(None, "config AliasManager.de")
880 890 self.assertListEqual(["AliasManager.default_aliases"], matches)
881 891
882 892 def test_magic_color(self):
883 893 ip = get_ipython()
884 894 c = ip.Completer
885 895
886 896 s, matches = c.complete(None, "colo")
887 897 self.assertIn("%colors", matches)
888 898 s, matches = c.complete(None, "colo")
889 899 self.assertNotIn("NoColor", matches)
890 900 s, matches = c.complete(None, "%colors") # No trailing space
891 901 self.assertNotIn("NoColor", matches)
892 902 s, matches = c.complete(None, "colors ")
893 903 self.assertIn("NoColor", matches)
894 904 s, matches = c.complete(None, "%colors ")
895 905 self.assertIn("NoColor", matches)
896 906 s, matches = c.complete(None, "colors NoCo")
897 907 self.assertListEqual(["NoColor"], matches)
898 908 s, matches = c.complete(None, "%colors NoCo")
899 909 self.assertListEqual(["NoColor"], matches)
900 910
901 911 def test_match_dict_keys(self):
902 912 """
903 913 Test that match_dict_keys works on a couple of use cases, returns what is
904 914 expected, and does not crash.
905 915 """
906 916 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
907 917
908 918 def match(*args, **kwargs):
909 919 quote, offset, matches = match_dict_keys(*args, delims=delims, **kwargs)
910 920 return quote, offset, list(matches)
911 921
912 922 keys = ["foo", b"far"]
913 923 assert match(keys, "b'") == ("'", 2, ["far"])
914 924 assert match(keys, "b'f") == ("'", 2, ["far"])
915 925 assert match(keys, 'b"') == ('"', 2, ["far"])
916 926 assert match(keys, 'b"f') == ('"', 2, ["far"])
917 927
918 928 assert match(keys, "'") == ("'", 1, ["foo"])
919 929 assert match(keys, "'f") == ("'", 1, ["foo"])
920 930 assert match(keys, '"') == ('"', 1, ["foo"])
921 931 assert match(keys, '"f') == ('"', 1, ["foo"])
922 932
923 933 # Completion on first item of tuple
924 934 keys = [("foo", 1111), ("foo", 2222), (3333, "bar"), (3333, "test")]
925 935 assert match(keys, "'f") == ("'", 1, ["foo"])
926 936 assert match(keys, "33") == ("", 0, ["3333"])
927 937
928 938 # Completion on numbers
929 939 keys = [
930 940 0xDEADBEEF,
931 941 1111,
932 942 1234,
933 943 "1999",
934 944 0b10101,
935 945 22,
936 946 ] # 0xDEADBEEF = 3735928559; 0b10101 = 21
937 947 assert match(keys, "0xdead") == ("", 0, ["0xdeadbeef"])
938 948 assert match(keys, "1") == ("", 0, ["1111", "1234"])
939 949 assert match(keys, "2") == ("", 0, ["21", "22"])
940 950 assert match(keys, "0b101") == ("", 0, ["0b10101", "0b10110"])
941 951
942 952 # Should yield no matches on variable names
943 953 assert match(keys, "a_variable") == ("", 0, [])
944 954
945 955 # Should pass over invalid literals
946 956 assert match(keys, "'' ''") == ("", 0, [])
947 957
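The behaviour checked above — splitting the typed prefix into an optional `b` marker, an opening quote, and the partial key, then filtering candidate keys — can be illustrated with a minimal standalone sketch. This is an assumed simplification for illustration only, not IPython's `match_dict_keys` implementation; the name `match_keys_sketch` is hypothetical.

```python
def match_keys_sketch(keys, prefix):
    # Split the typed prefix: optional bytes marker, then an opening quote,
    # then the partial key text (simplified model of the test's inputs).
    is_bytes = prefix[:1] in ("b", "B")
    body = prefix[1:] if is_bytes else prefix
    quote = body[:1] if body[:1] in ("'", '"') else ""
    typed = body[1:] if quote else body
    offset = len(prefix) - len(typed)

    matches = []
    for k in keys:
        # bytes keys only complete behind a b-prefixed quote, str keys
        # only behind a plain quote
        if isinstance(k, bytes) and is_bytes and quote:
            text = k.decode()
        elif isinstance(k, str) and not is_bytes and quote:
            text = k
        else:
            continue
        if text.startswith(typed):
            matches.append(text)
    return quote, offset, matches
```

The returned `offset` tells the frontend how many typed characters (the `b` marker and quote) precede the part being completed, mirroring the tuples asserted in the test.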
948 958 def test_match_dict_keys_tuple(self):
949 959 """
950 960 Test that match_dict_keys called with an extra prefix works on a couple of
951 961 use cases, returns what is expected, and does not crash.
952 962 """
953 963 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
954 964
955 965 keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ('other', 'test')]
956 966
957 967 def match(*args, extra=None, **kwargs):
958 968 quote, offset, matches = match_dict_keys(
959 969 *args, delims=delims, extra_prefix=extra, **kwargs
960 970 )
961 971 return quote, offset, list(matches)
962 972
963 973 # Completion on first key == "foo"
964 974 assert match(keys, "'", extra=("foo",)) == ("'", 1, ["bar", "oof"])
965 975 assert match(keys, '"', extra=("foo",)) == ('"', 1, ["bar", "oof"])
966 976 assert match(keys, "'o", extra=("foo",)) == ("'", 1, ["oof"])
967 977 assert match(keys, '"o', extra=("foo",)) == ('"', 1, ["oof"])
968 978 assert match(keys, "b'", extra=("foo",)) == ("'", 2, ["bar"])
969 979 assert match(keys, 'b"', extra=("foo",)) == ('"', 2, ["bar"])
970 980 assert match(keys, "b'b", extra=("foo",)) == ("'", 2, ["bar"])
971 981 assert match(keys, 'b"b', extra=("foo",)) == ('"', 2, ["bar"])
972 982
973 983 # No Completion
974 984 assert match(keys, "'", extra=("no_foo",)) == ("'", 1, [])
975 985 assert match(keys, "'", extra=("fo",)) == ("'", 1, [])
976 986
977 987 keys = [("foo1", "foo2", "foo3", "foo4"), ("foo1", "foo2", "bar", "foo4")]
978 988 assert match(keys, "'foo", extra=("foo1",)) == ("'", 1, ["foo2"])
979 989 assert match(keys, "'foo", extra=("foo1", "foo2")) == ("'", 1, ["foo3"])
980 990 assert match(keys, "'foo", extra=("foo1", "foo2", "foo3")) == ("'", 1, ["foo4"])
981 991 assert match(keys, "'foo", extra=("foo1", "foo2", "foo3", "foo4")) == (
982 992 "'",
983 993 1,
984 994 [],
985 995 )
986 996
987 997 keys = [("foo", 1111), ("foo", "2222"), (3333, "bar"), (3333, 4444)]
988 998 assert match(keys, "'", extra=("foo",)) == ("'", 1, ["2222"])
989 999 assert match(keys, "", extra=("foo",)) == ("", 0, ["1111", "'2222'"])
990 1000 assert match(keys, "'", extra=(3333,)) == ("'", 1, ["bar"])
991 1001 assert match(keys, "", extra=(3333,)) == ("", 0, ["'bar'", "4444"])
992 1002 assert match(keys, "'", extra=("3333",)) == ("'", 1, [])
993 1003 assert match(keys, "33") == ("", 0, ["3333"])
994 1004
995 1005 def test_dict_key_completion_closures(self):
996 1006 ip = get_ipython()
997 1007 complete = ip.Completer.complete
998 1008 ip.Completer.auto_close_dict_keys = True
999 1009
1000 1010 ip.user_ns["d"] = {
1001 1011 # tuple only
1002 1012 ("aa", 11): None,
1003 1013 # tuple and non-tuple
1004 1014 ("bb", 22): None,
1005 1015 "bb": None,
1006 1016 # non-tuple only
1007 1017 "cc": None,
1008 1018 # numeric tuple only
1009 1019 (77, "x"): None,
1010 1020 # numeric tuple and non-tuple
1011 1021 (88, "y"): None,
1012 1022 88: None,
1013 1023 # numeric non-tuple only
1014 1024 99: None,
1015 1025 }
1016 1026
1017 1027 _, matches = complete(line_buffer="d[")
1019 1029 # should append `, ` if it matches a tuple only
1020 1030 self.assertIn("'aa', ", matches)
1021 1031 # should not append anything if it matches both a tuple and an item
1022 1032 self.assertIn("'bb'", matches)
1023 1033 # should append `]` if it matches an item only
1024 1034 self.assertIn("'cc']", matches)
1024 1034
1025 1035 # should append `, ` if it matches a tuple only
1026 1036 self.assertIn("77, ", matches)
1027 1037 # should not append anything if it matches both a tuple and an item
1028 1038 self.assertIn("88", matches)
1029 1039 # should append `]` if it matches an item only
1030 1040 self.assertIn("99]", matches)
1031 1041
1032 1042 _, matches = complete(line_buffer="d['aa', ")
1033 1043 # should restrict matches to those matching tuple prefix
1034 1044 self.assertIn("11]", matches)
1035 1045 self.assertNotIn("'bb'", matches)
1036 1046 self.assertNotIn("'bb', ", matches)
1037 1047 self.assertNotIn("'bb']", matches)
1038 1048 self.assertNotIn("'cc'", matches)
1039 1049 self.assertNotIn("'cc', ", matches)
1040 1050 self.assertNotIn("'cc']", matches)
1041 1051 ip.Completer.auto_close_dict_keys = False
1042 1052
1043 1053 def test_dict_key_completion_string(self):
1044 1054 """Test dictionary key completion for string keys"""
1045 1055 ip = get_ipython()
1046 1056 complete = ip.Completer.complete
1047 1057
1048 1058 ip.user_ns["d"] = {"abc": None}
1049 1059
1050 1060 # check completion at different stages
1051 1061 _, matches = complete(line_buffer="d[")
1052 1062 self.assertIn("'abc'", matches)
1053 1063 self.assertNotIn("'abc']", matches)
1054 1064
1055 1065 _, matches = complete(line_buffer="d['")
1056 1066 self.assertIn("abc", matches)
1057 1067 self.assertNotIn("abc']", matches)
1058 1068
1059 1069 _, matches = complete(line_buffer="d['a")
1060 1070 self.assertIn("abc", matches)
1061 1071 self.assertNotIn("abc']", matches)
1062 1072
1063 1073 # check use of different quoting
1064 1074 _, matches = complete(line_buffer='d["')
1065 1075 self.assertIn("abc", matches)
1066 1076 self.assertNotIn('abc"]', matches)
1067 1077
1068 1078 _, matches = complete(line_buffer='d["a')
1069 1079 self.assertIn("abc", matches)
1070 1080 self.assertNotIn('abc"]', matches)
1071 1081
1072 1082 # check sensitivity to following context
1073 1083 _, matches = complete(line_buffer="d[]", cursor_pos=2)
1074 1084 self.assertIn("'abc'", matches)
1075 1085
1076 1086 _, matches = complete(line_buffer="d['']", cursor_pos=3)
1077 1087 self.assertIn("abc", matches)
1078 1088 self.assertNotIn("abc'", matches)
1079 1089 self.assertNotIn("abc']", matches)
1080 1090
1081 1091 # check multiple solutions are correctly returned and that noise is filtered out
1082 1092 ip.user_ns["d"] = {
1083 1093 "abc": None,
1084 1094 "abd": None,
1085 1095 "bad": None,
1086 1096 object(): None,
1087 1097 5: None,
1088 1098 ("abe", None): None,
1089 1099 (None, "abf"): None
1090 1100 }
1091 1101
1092 1102 _, matches = complete(line_buffer="d['a")
1093 1103 self.assertIn("abc", matches)
1094 1104 self.assertIn("abd", matches)
1095 1105 self.assertNotIn("bad", matches)
1096 1106 self.assertNotIn("abe", matches)
1097 1107 self.assertNotIn("abf", matches)
1098 1108 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1099 1109
1100 1110 # check escaping and whitespace
1101 1111 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
1102 1112 _, matches = complete(line_buffer="d['a")
1103 1113 self.assertIn("a\\nb", matches)
1104 1114 self.assertIn("a\\'b", matches)
1105 1115 self.assertIn('a"b', matches)
1106 1116 self.assertIn("a word", matches)
1107 1117 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1108 1118
1109 1119 # - can complete on non-initial word of the string
1110 1120 _, matches = complete(line_buffer="d['a w")
1111 1121 self.assertIn("word", matches)
1112 1122
1113 1123 # - understands quote escaping
1114 1124 _, matches = complete(line_buffer="d['a\\'")
1115 1125 self.assertIn("b", matches)
1116 1126
1117 1127 # - default quoting should work like repr
1118 1128 _, matches = complete(line_buffer="d[")
1119 1129 self.assertIn('"a\'b"', matches)
1120 1130
1121 1131 # - when opening quote with ", possible to match with unescaped apostrophe
1122 1132 _, matches = complete(line_buffer="d[\"a'")
1123 1133 self.assertIn("b", matches)
1124 1134
1125 1135 # need to not split at delims that readline won't split at
1126 1136 if "-" not in ip.Completer.splitter.delims:
1127 1137 ip.user_ns["d"] = {"before-after": None}
1128 1138 _, matches = complete(line_buffer="d['before-af")
1129 1139 self.assertIn("before-after", matches)
1130 1140
1131 1141 # check completion on tuple-of-string keys at different stage - on first key
1132 1142 ip.user_ns["d"] = {('foo', 'bar'): None}
1133 1143 _, matches = complete(line_buffer="d[")
1134 1144 self.assertIn("'foo'", matches)
1135 1145 self.assertNotIn("'foo']", matches)
1136 1146 self.assertNotIn("'bar'", matches)
1137 1147 self.assertNotIn("foo", matches)
1138 1148 self.assertNotIn("bar", matches)
1139 1149
1140 1150 # - match the prefix
1141 1151 _, matches = complete(line_buffer="d['f")
1142 1152 self.assertIn("foo", matches)
1143 1153 self.assertNotIn("foo']", matches)
1144 1154 self.assertNotIn('foo"]', matches)
1145 1155 _, matches = complete(line_buffer="d['foo")
1146 1156 self.assertIn("foo", matches)
1147 1157
1148 1158 # - can complete on second key
1149 1159 _, matches = complete(line_buffer="d['foo', ")
1150 1160 self.assertIn("'bar'", matches)
1151 1161 _, matches = complete(line_buffer="d['foo', 'b")
1152 1162 self.assertIn("bar", matches)
1153 1163 self.assertNotIn("foo", matches)
1154 1164
1155 1165 # - does not propose missing keys
1156 1166 _, matches = complete(line_buffer="d['foo', 'f")
1157 1167 self.assertNotIn("bar", matches)
1158 1168 self.assertNotIn("foo", matches)
1159 1169
1160 1170 # check sensitivity to following context
1161 1171 _, matches = complete(line_buffer="d['foo',]", cursor_pos=8)
1162 1172 self.assertIn("'bar'", matches)
1163 1173 self.assertNotIn("bar", matches)
1164 1174 self.assertNotIn("'foo'", matches)
1165 1175 self.assertNotIn("foo", matches)
1166 1176
1167 1177 _, matches = complete(line_buffer="d['']", cursor_pos=3)
1168 1178 self.assertIn("foo", matches)
1169 1179 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1170 1180
1171 1181 _, matches = complete(line_buffer='d[""]', cursor_pos=3)
1172 1182 self.assertIn("foo", matches)
1173 1183 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1174 1184
1175 1185 _, matches = complete(line_buffer='d["foo","]', cursor_pos=9)
1176 1186 self.assertIn("bar", matches)
1177 1187 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1178 1188
1179 1189 _, matches = complete(line_buffer='d["foo",]', cursor_pos=8)
1180 1190 self.assertIn("'bar'", matches)
1181 1191 self.assertNotIn("bar", matches)
1182 1192
1183 1193 # Can complete with longer tuple keys
1184 1194 ip.user_ns["d"] = {('foo', 'bar', 'foobar'): None}
1185 1195
1186 1196 # - can complete second key
1187 1197 _, matches = complete(line_buffer="d['foo', 'b")
1188 1198 self.assertIn("bar", matches)
1189 1199 self.assertNotIn("foo", matches)
1190 1200 self.assertNotIn("foobar", matches)
1191 1201
1192 1202 # - can complete third key
1193 1203 _, matches = complete(line_buffer="d['foo', 'bar', 'fo")
1194 1204 self.assertIn("foobar", matches)
1195 1205 self.assertNotIn("foo", matches)
1196 1206 self.assertNotIn("bar", matches)
1197 1207
1198 1208 def test_dict_key_completion_numbers(self):
1199 1209 ip = get_ipython()
1200 1210 complete = ip.Completer.complete
1201 1211
1202 1212 ip.user_ns["d"] = {
1203 1213 0xDEADBEEF: None, # 3735928559
1204 1214 1111: None,
1205 1215 1234: None,
1206 1216 "1999": None,
1207 1217 0b10101: None, # 21
1208 1218 22: None,
1209 1219 }
1210 1220 _, matches = complete(line_buffer="d[1")
1211 1221 self.assertIn("1111", matches)
1212 1222 self.assertIn("1234", matches)
1213 1223 self.assertNotIn("1999", matches)
1214 1224 self.assertNotIn("'1999'", matches)
1215 1225
1216 1226 _, matches = complete(line_buffer="d[0xdead")
1217 1227 self.assertIn("0xdeadbeef", matches)
1218 1228
1219 1229 _, matches = complete(line_buffer="d[2")
1220 1230 self.assertIn("21", matches)
1221 1231 self.assertIn("22", matches)
1222 1232
1223 1233 _, matches = complete(line_buffer="d[0b101")
1224 1234 self.assertIn("0b10101", matches)
1225 1235 self.assertIn("0b10110", matches)
1226 1236
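The numeric-key cases above rely on rendering each integer key in the base the user started typing (hex for `0x…`, binary for `0b…`, decimal otherwise) and prefix-matching the rendering. A minimal illustrative sketch of that idea, under the stated assumption that this simplification (and the name `numeric_key_matches`) is not IPython's actual code:

```python
def numeric_key_matches(keys, typed):
    # Render integer keys in the base implied by the typed prefix and keep
    # those whose rendering starts with the typed text. Non-int keys
    # (including the string "1999" in the test) are skipped.
    out = []
    for k in keys:
        if not isinstance(k, int):
            continue
        if typed.startswith("0x"):
            text = hex(k)
        elif typed.startswith("0b"):
            text = bin(k)
        else:
            text = str(k)
        if text.startswith(typed):
            out.append(text)
    return out
```

With the test's key set, `"0xdead"` matches only `0xdeadbeef`, and `"2"` matches both `21` (i.e. `0b10101`) and `22`.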
1227 1237 def test_dict_key_completion_contexts(self):
1228 1238 """Test expression contexts in which dict key completion occurs"""
1229 1239 ip = get_ipython()
1230 1240 complete = ip.Completer.complete
1231 1241 d = {"abc": None}
1232 1242 ip.user_ns["d"] = d
1233 1243
1234 1244 class C:
1235 1245 data = d
1236 1246
1237 1247 ip.user_ns["C"] = C
1238 1248 ip.user_ns["get"] = lambda: d
1239 1249 ip.user_ns["nested"] = {"x": d}
1240 1250
1241 1251 def assert_no_completion(**kwargs):
1242 1252 _, matches = complete(**kwargs)
1243 1253 self.assertNotIn("abc", matches)
1244 1254 self.assertNotIn("abc'", matches)
1245 1255 self.assertNotIn("abc']", matches)
1246 1256 self.assertNotIn("'abc'", matches)
1247 1257 self.assertNotIn("'abc']", matches)
1248 1258
1249 1259 def assert_completion(**kwargs):
1250 1260 _, matches = complete(**kwargs)
1251 1261 self.assertIn("'abc'", matches)
1252 1262 self.assertNotIn("'abc']", matches)
1253 1263
1254 1264 # no completion after string closed, even if reopened
1255 1265 assert_no_completion(line_buffer="d['a'")
1256 1266 assert_no_completion(line_buffer='d["a"')
1257 1267 assert_no_completion(line_buffer="d['a' + ")
1258 1268 assert_no_completion(line_buffer="d['a' + '")
1259 1269
1260 1270 # completion in non-trivial expressions
1261 1271 assert_completion(line_buffer="+ d[")
1262 1272 assert_completion(line_buffer="(d[")
1263 1273 assert_completion(line_buffer="C.data[")
1264 1274
1265 1275 # nested dict completion
1266 1276 assert_completion(line_buffer="nested['x'][")
1267 1277
1268 1278 with evaluation_policy("minimal"):
1269 1279 with pytest.raises(AssertionError):
1270 1280 assert_completion(line_buffer="nested['x'][")
1271 1281
1272 1282 # greedy flag
1273 1283 def assert_completion(**kwargs):
1274 1284 _, matches = complete(**kwargs)
1275 1285 self.assertIn("get()['abc']", matches)
1276 1286
1277 1287 assert_no_completion(line_buffer="get()[")
1278 1288 with greedy_completion():
1279 1289 assert_completion(line_buffer="get()[")
1280 1290 assert_completion(line_buffer="get()['")
1281 1291 assert_completion(line_buffer="get()['a")
1282 1292 assert_completion(line_buffer="get()['ab")
1283 1293 assert_completion(line_buffer="get()['abc")
1284 1294
1285 1295 def test_dict_key_completion_bytes(self):
1286 1296 """Test handling of bytes in dict key completion"""
1287 1297 ip = get_ipython()
1288 1298 complete = ip.Completer.complete
1289 1299
1290 1300 ip.user_ns["d"] = {"abc": None, b"abd": None}
1291 1301
1292 1302 _, matches = complete(line_buffer="d[")
1293 1303 self.assertIn("'abc'", matches)
1294 1304 self.assertIn("b'abd'", matches)
1295 1305
1296 1306 if False: # not currently implemented
1297 1307 _, matches = complete(line_buffer="d[b")
1298 1308 self.assertIn("b'abd'", matches)
1299 1309 self.assertNotIn("b'abc'", matches)
1300 1310
1301 1311 _, matches = complete(line_buffer="d[b'")
1302 1312 self.assertIn("abd", matches)
1303 1313 self.assertNotIn("abc", matches)
1304 1314
1305 1315 _, matches = complete(line_buffer="d[B'")
1306 1316 self.assertIn("abd", matches)
1307 1317 self.assertNotIn("abc", matches)
1308 1318
1309 1319 _, matches = complete(line_buffer="d['")
1310 1320 self.assertIn("abc", matches)
1311 1321 self.assertNotIn("abd", matches)
1312 1322
1313 1323 def test_dict_key_completion_unicode_py3(self):
1314 1324 """Test handling of unicode in dict key completion"""
1315 1325 ip = get_ipython()
1316 1326 complete = ip.Completer.complete
1317 1327
1318 1328 ip.user_ns["d"] = {"a\u05d0": None}
1319 1329
1320 1330 # query using escape
1321 1331 if sys.platform != "win32":
1322 1332 # Known failure on Windows
1323 1333 _, matches = complete(line_buffer="d['a\\u05d0")
1324 1334 self.assertIn("u05d0", matches) # tokenized after \\
1325 1335
1326 1336 # query using character
1327 1337 _, matches = complete(line_buffer="d['a\u05d0")
1328 1338 self.assertIn("a\u05d0", matches)
1329 1339
1330 1340 with greedy_completion():
1331 1341 # query using escape
1332 1342 _, matches = complete(line_buffer="d['a\\u05d0")
1333 1343 self.assertIn("d['a\\u05d0']", matches) # tokenized after \\
1334 1344
1335 1345 # query using character
1336 1346 _, matches = complete(line_buffer="d['a\u05d0")
1337 1347 self.assertIn("d['a\u05d0']", matches)
1338 1348
1339 1349 @dec.skip_without("numpy")
1340 1350 def test_struct_array_key_completion(self):
1341 1351 """Test dict key completion applies to numpy struct arrays"""
1342 1352 import numpy
1343 1353
1344 1354 ip = get_ipython()
1345 1355 complete = ip.Completer.complete
1346 1356 ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
1347 1357 _, matches = complete(line_buffer="d['")
1348 1358 self.assertIn("hello", matches)
1349 1359 self.assertIn("world", matches)
1350 1360 # complete on the numpy struct itself
1351 1361 dt = numpy.dtype(
1352 1362 [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
1353 1363 )
1354 1364 x = numpy.zeros(2, dtype=dt)
1355 1365 ip.user_ns["d"] = x[1]
1356 1366 _, matches = complete(line_buffer="d['")
1357 1367 self.assertIn("my_head", matches)
1358 1368 self.assertIn("my_data", matches)
1359 1369
1360 1370 def completes_on_nested():
1361 1371 ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
1362 1372 _, matches = complete(line_buffer="d[1]['my_head']['")
1363 1373 self.assertTrue(any(["my_dt" in m for m in matches]))
1364 1374 self.assertTrue(any(["my_df" in m for m in matches]))
1365 1375 # complete on a nested level
1366 1376 with greedy_completion():
1367 1377 completes_on_nested()
1368 1378
1369 1379 with evaluation_policy("limited"):
1370 1380 completes_on_nested()
1371 1381
1372 1382 with evaluation_policy("minimal"):
1373 1383 with pytest.raises(AssertionError):
1374 1384 completes_on_nested()
1375 1385
1376 1386 @dec.skip_without("pandas")
1377 1387 def test_dataframe_key_completion(self):
1378 1388 """Test dict key completion applies to pandas DataFrames"""
1379 1389 import pandas
1380 1390
1381 1391 ip = get_ipython()
1382 1392 complete = ip.Completer.complete
1383 1393 ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
1384 1394 _, matches = complete(line_buffer="d['")
1385 1395 self.assertIn("hello", matches)
1386 1396 self.assertIn("world", matches)
1387 1397 _, matches = complete(line_buffer="d.loc[:, '")
1388 1398 self.assertIn("hello", matches)
1389 1399 self.assertIn("world", matches)
1390 1400 _, matches = complete(line_buffer="d.loc[1:, '")
1391 1401 self.assertIn("hello", matches)
1392 1402 _, matches = complete(line_buffer="d.loc[1:1, '")
1393 1403 self.assertIn("hello", matches)
1394 1404 _, matches = complete(line_buffer="d.loc[1:1:-1, '")
1395 1405 self.assertIn("hello", matches)
1396 1406 _, matches = complete(line_buffer="d.loc[::, '")
1397 1407 self.assertIn("hello", matches)
1398 1408
1399 1409 def test_dict_key_completion_invalids(self):
1400 1410 """Smoke test cases dict key completion can't handle"""
1401 1411 ip = get_ipython()
1402 1412 complete = ip.Completer.complete
1403 1413
1404 1414 ip.user_ns["no_getitem"] = None
1405 1415 ip.user_ns["no_keys"] = []
1406 1416 ip.user_ns["cant_call_keys"] = dict
1407 1417 ip.user_ns["empty"] = {}
1408 1418 ip.user_ns["d"] = {"abc": 5}
1409 1419
1410 1420 _, matches = complete(line_buffer="no_getitem['")
1411 1421 _, matches = complete(line_buffer="no_keys['")
1412 1422 _, matches = complete(line_buffer="cant_call_keys['")
1413 1423 _, matches = complete(line_buffer="empty['")
1414 1424 _, matches = complete(line_buffer="name_error['")
1415 1425 _, matches = complete(line_buffer="d['\\") # incomplete escape
1416 1426
1417 1427 def test_object_key_completion(self):
1418 1428 ip = get_ipython()
1419 1429 ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])
1420 1430
1421 1431 _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
1422 1432 self.assertIn("qwerty", matches)
1423 1433 self.assertIn("qwick", matches)
1424 1434
1425 1435 def test_class_key_completion(self):
1426 1436 ip = get_ipython()
1427 1437 NamedInstanceClass("qwerty")
1428 1438 NamedInstanceClass("qwick")
1429 1439 ip.user_ns["named_instance_class"] = NamedInstanceClass
1430 1440
1431 1441 _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
1432 1442 self.assertIn("qwerty", matches)
1433 1443 self.assertIn("qwick", matches)
1434 1444
1435 1445 def test_tryimport(self):
1436 1446 """
1437 1447 Test that try_import doesn't crash on a trailing dot, and imports the modules before it
1438 1448 """
1439 1449 from IPython.core.completerlib import try_import
1440 1450
1441 1451 assert try_import("IPython.")
1442 1452
1443 1453 def test_aimport_module_completer(self):
1444 1454 ip = get_ipython()
1445 1455 _, matches = ip.complete("i", "%aimport i")
1446 1456 self.assertIn("io", matches)
1447 1457 self.assertNotIn("int", matches)
1448 1458
1449 1459 def test_nested_import_module_completer(self):
1450 1460 ip = get_ipython()
1451 1461 _, matches = ip.complete(None, "import IPython.co", 17)
1452 1462 self.assertIn("IPython.core", matches)
1453 1463 self.assertNotIn("import IPython.core", matches)
1454 1464 self.assertNotIn("IPython.display", matches)
1455 1465
1456 1466 def test_import_module_completer(self):
1457 1467 ip = get_ipython()
1458 1468 _, matches = ip.complete("i", "import i")
1459 1469 self.assertIn("io", matches)
1460 1470 self.assertNotIn("int", matches)
1461 1471
1462 1472 def test_from_module_completer(self):
1463 1473 ip = get_ipython()
1464 1474 _, matches = ip.complete("B", "from io import B", 16)
1465 1475 self.assertIn("BytesIO", matches)
1466 1476 self.assertNotIn("BaseException", matches)
1467 1477
1468 1478 def test_snake_case_completion(self):
1469 1479 ip = get_ipython()
1470 1480 ip.Completer.use_jedi = False
1471 1481 ip.user_ns["some_three"] = 3
1472 1482 ip.user_ns["some_four"] = 4
1473 1483 _, matches = ip.complete("s_", "print(s_f")
1474 1484 self.assertIn("some_three", matches)
1475 1485 self.assertIn("some_four", matches)
1476 1486
1477 1487 def test_mix_terms(self):
1478 1488 ip = get_ipython()
1479 1489 from textwrap import dedent
1480 1490
1481 1491 ip.Completer.use_jedi = False
1482 1492 ip.ex(
1483 1493 dedent(
1484 1494 """
1485 1495 class Test:
1486 1496 def meth(self, meth_arg1):
1487 1497 print("meth")
1488 1498
1489 1499 def meth_1(self, meth1_arg1, meth1_arg2):
1490 1500 print("meth1")
1491 1501
1492 1502 def meth_2(self, meth2_arg1, meth2_arg2):
1493 1503 print("meth2")
1494 1504 test = Test()
1495 1505 """
1496 1506 )
1497 1507 )
1498 1508 _, matches = ip.complete(None, "test.meth(")
1499 1509 self.assertIn("meth_arg1=", matches)
1500 1510 self.assertNotIn("meth2_arg1=", matches)
1501 1511
1502 1512 def test_percent_symbol_restrict_to_magic_completions(self):
1503 1513 ip = get_ipython()
1504 1514 completer = ip.Completer
1505 1515 text = "%a"
1506 1516
1507 1517 with provisionalcompleter():
1508 1518 completer.use_jedi = True
1509 1519 completions = completer.completions(text, len(text))
1510 1520 for c in completions:
1511 1521 self.assertEqual(c.text[0], "%")
1512 1522
1513 1523 def test_fwd_unicode_restricts(self):
1514 1524 ip = get_ipython()
1515 1525 completer = ip.Completer
1516 1526 text = "\\ROMAN NUMERAL FIVE"
1517 1527
1518 1528 with provisionalcompleter():
1519 1529 completer.use_jedi = True
1520 1530 completions = [
1521 1531 completion.text for completion in completer.completions(text, len(text))
1522 1532 ]
1523 1533 self.assertEqual(completions, ["\u2164"])
1524 1534
1525 1535 def test_dict_key_restrict_to_dicts(self):
1526 1536 """Test that dict key suppresses non-dict completion items"""
1527 1537 ip = get_ipython()
1528 1538 c = ip.Completer
1529 1539 d = {"abc": None}
1530 1540 ip.user_ns["d"] = d
1531 1541
1532 1542 text = 'd["a'
1533 1543
1534 1544 def _():
1535 1545 with provisionalcompleter():
1536 1546 c.use_jedi = True
1537 1547 return [
1538 1548 completion.text for completion in c.completions(text, len(text))
1539 1549 ]
1540 1550
1541 1551 completions = _()
1542 1552 self.assertEqual(completions, ["abc"])
1543 1553
1544 1554 # check that it can be disabled in granular manner:
1545 1555 cfg = Config()
1546 1556 cfg.IPCompleter.suppress_competing_matchers = {
1547 1557 "IPCompleter.dict_key_matcher": False
1548 1558 }
1549 1559 c.update_config(cfg)
1550 1560
1551 1561 completions = _()
1552 1562 self.assertIn("abc", completions)
1553 1563 self.assertGreater(len(completions), 1)
1554 1564
1555 1565 def test_matcher_suppression(self):
1556 1566 @completion_matcher(identifier="a_matcher")
1557 1567 def a_matcher(text):
1558 1568 return ["completion_a"]
1559 1569
1560 1570 @completion_matcher(identifier="b_matcher", api_version=2)
1561 1571 def b_matcher(context: CompletionContext):
1562 1572 text = context.token
1563 1573 result = {"completions": [SimpleCompletion("completion_b")]}
1564 1574
1565 1575 if text == "suppress c":
1566 1576 result["suppress"] = {"c_matcher"}
1567 1577
1568 1578 if text.startswith("suppress all"):
1569 1579 result["suppress"] = True
1570 1580 if text == "suppress all but c":
1571 1581 result["do_not_suppress"] = {"c_matcher"}
1572 1582 if text == "suppress all but a":
1573 1583 result["do_not_suppress"] = {"a_matcher"}
1574 1584
1575 1585 return result
1576 1586
1577 1587 @completion_matcher(identifier="c_matcher")
1578 1588 def c_matcher(text):
1579 1589 return ["completion_c"]
1580 1590
1581 1591 with custom_matchers([a_matcher, b_matcher, c_matcher]):
1582 1592 ip = get_ipython()
1583 1593 c = ip.Completer
1584 1594
1585 1595 def _(text, expected):
1586 1596 c.use_jedi = False
1587 1597 s, matches = c.complete(text)
1588 1598 self.assertEqual(expected, matches)
1589 1599
1590 1600 _("do not suppress", ["completion_a", "completion_b", "completion_c"])
1591 1601 _("suppress all", ["completion_b"])
1592 1602 _("suppress all but a", ["completion_a", "completion_b"])
1593 1603 _("suppress all but c", ["completion_b", "completion_c"])
1594 1604
1595 1605 def configure(suppression_config):
1596 1606 cfg = Config()
1597 1607 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1598 1608 c.update_config(cfg)
1599 1609
1600 1610 # test that configuration takes priority over the run-time decisions
1601 1611
1602 1612 configure(False)
1603 1613 _("suppress all", ["completion_a", "completion_b", "completion_c"])
1604 1614
1605 1615 configure({"b_matcher": False})
1606 1616 _("suppress all", ["completion_a", "completion_b", "completion_c"])
1607 1617
1608 1618 configure({"a_matcher": False})
1609 1619 _("suppress all", ["completion_b"])
1610 1620
1611 1621 configure({"b_matcher": True})
1612 1622 _("do not suppress", ["completion_b"])
1613 1623
1614 1624 configure(True)
1615 1625 _("do not suppress", ["completion_a"])
1616 1626
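The suppression semantics exercised above — a matcher result may suppress a named set of other matchers, or suppress everything except itself and an explicit `do_not_suppress` set — can be modelled with a small standalone resolver. This is an assumed simplification for illustration, not IPython's dispatch code; the function name `resolve` and the result-dict shape are hypothetical.

```python
def resolve(results):
    """results maps matcher id -> {"completions": [...],
    optional "suppress": bool | set, optional "do_not_suppress": set}."""
    suppressed = set()
    for mid, res in results.items():
        sup = res.get("suppress", False)
        if sup is True:
            # suppress every other matcher, minus explicit exemptions
            suppressed |= set(results) - {mid} - set(res.get("do_not_suppress", ()))
        elif sup:
            # suppress only the named matchers
            suppressed |= set(sup)

    out = []
    for mid, res in results.items():
        if mid not in suppressed:
            out.extend(res["completions"])
    return out
```

Fed results shaped like `a_matcher`/`b_matcher`/`c_matcher` above, this reproduces the expected match lists, e.g. "suppress all but c" keeps `completion_b` and `completion_c`.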
1617 1627 def test_matcher_suppression_with_iterator(self):
1618 1628 @completion_matcher(identifier="matcher_returning_iterator")
1619 1629 def matcher_returning_iterator(text):
1620 1630 return iter(["completion_iter"])
1621 1631
1622 1632 @completion_matcher(identifier="matcher_returning_list")
1623 1633 def matcher_returning_list(text):
1624 1634 return ["completion_list"]
1625 1635
1626 1636 with custom_matchers([matcher_returning_iterator, matcher_returning_list]):
1627 1637 ip = get_ipython()
1628 1638 c = ip.Completer
1629 1639
1630 1640 def _(text, expected):
1631 1641 c.use_jedi = False
1632 1642 s, matches = c.complete(text)
1633 1643 self.assertEqual(expected, matches)
1634 1644
1635 1645 def configure(suppression_config):
1636 1646 cfg = Config()
1637 1647 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1638 1648 c.update_config(cfg)
1639 1649
1640 1650 configure(False)
1641 1651 _("---", ["completion_iter", "completion_list"])
1642 1652
1643 1653 configure(True)
1644 1654 _("---", ["completion_iter"])
1645 1655
1646 1656 configure(None)
1647 1657 _("--", ["completion_iter", "completion_list"])
1648 1658
1649 1659 @pytest.mark.xfail(
1650 1660 sys.version_info.releaselevel in ("alpha",),
1651 1661 reason="Parso does not yet parse 3.13",
1652 1662 )
1653 1663 def test_matcher_suppression_with_jedi(self):
1654 1664 ip = get_ipython()
1655 1665 c = ip.Completer
1656 1666 c.use_jedi = True
1657 1667
1658 1668 def configure(suppression_config):
1659 1669 cfg = Config()
1660 1670 cfg.IPCompleter.suppress_competing_matchers = suppression_config
1661 1671 c.update_config(cfg)
1662 1672
1663 1673 def _():
1664 1674 with provisionalcompleter():
1665 1675 matches = [completion.text for completion in c.completions("dict.", 5)]
1666 1676 self.assertIn("keys", matches)
1667 1677
1668 1678 configure(False)
1669 1679 _()
1670 1680
1671 1681 configure(True)
1672 1682 _()
1673 1683
1674 1684 configure(None)
1675 1685 _()
1676 1686
1677 1687 def test_matcher_disabling(self):
1678 1688 @completion_matcher(identifier="a_matcher")
1679 1689 def a_matcher(text):
1680 1690 return ["completion_a"]
1681 1691
1682 1692 @completion_matcher(identifier="b_matcher")
1683 1693 def b_matcher(text):
1684 1694 return ["completion_b"]
1685 1695
1686 1696 def _(expected):
1687 1697 s, matches = c.complete("completion_")
1688 1698 self.assertEqual(expected, matches)
1689 1699
1690 1700 with custom_matchers([a_matcher, b_matcher]):
1691 1701 ip = get_ipython()
1692 1702 c = ip.Completer
1693 1703
1694 1704 _(["completion_a", "completion_b"])
1695 1705
1696 1706 cfg = Config()
1697 1707 cfg.IPCompleter.disable_matchers = ["b_matcher"]
1698 1708 c.update_config(cfg)
1699 1709
1700 1710 _(["completion_a"])
1701 1711
1702 1712 cfg.IPCompleter.disable_matchers = []
1703 1713 c.update_config(cfg)
1704 1714
1705 1715 def test_matcher_priority(self):
1706 1716 @completion_matcher(identifier="a_matcher", priority=0, api_version=2)
1707 1717 def a_matcher(text):
1708 1718 return {"completions": [SimpleCompletion("completion_a")], "suppress": True}
1709 1719
1710 1720 @completion_matcher(identifier="b_matcher", priority=2, api_version=2)
1711 1721 def b_matcher(text):
1712 1722 return {"completions": [SimpleCompletion("completion_b")], "suppress": True}
1713 1723
1714 1724 def _(expected):
1715 1725 s, matches = c.complete("completion_")
1716 1726 self.assertEqual(expected, matches)
1717 1727
1718 1728 with custom_matchers([a_matcher, b_matcher]):
1719 1729 ip = get_ipython()
1720 1730 c = ip.Completer
1721 1731
1722 1732 _(["completion_b"])
1723 1733 a_matcher.matcher_priority = 3
1724 1734 _(["completion_a"])
1725 1735
1726 1736
1727 1737 @pytest.mark.parametrize(
1728 1738 "input, expected",
1729 1739 [
1730 1740 ["1.234", "1.234"],
1731 1741 # should match signed numbers
1732 1742 ["+1", "+1"],
1733 1743 ["-1", "-1"],
1734 1744 ["-1.0", "-1.0"],
1735 1745 ["-1.", "-1."],
1736 1746 ["+1.", "+1."],
1737 1747 [".1", ".1"],
1738 1748 # should not match non-numbers
1739 1749 ["1..", None],
1740 1750 ["..", None],
1741 1751 [".1.", None],
1742 1752 # should match after comma
1743 1753 [",1", "1"],
1744 1754 [", 1", "1"],
1745 1755 [", .1", ".1"],
1746 1756 [", +.1", "+.1"],
1747 1757 # should not match after trailing spaces
1748 1758 [".1 ", None],
1749 1759 # some complex cases
1750 1760 ["0b_0011_1111_0100_1110", "0b_0011_1111_0100_1110"],
1751 1761 ["0xdeadbeef", "0xdeadbeef"],
1752 1762 ["0b_1110_0101", "0b_1110_0101"],
1753 1763 # should not match if in an operation
1754 1764 ["1 + 1", None],
1755 1765 [", 1 + 1", None],
1756 1766 ],
1757 1767 )
1758 1768 def test_match_numeric_literal_for_dict_key(input, expected):
1759 1769 assert _match_number_in_dict_key_prefix(input) == expected