Improve type hinting and documentation
krassowski -
@@ -0,0 +1,7 b''
1 # encoding: utf-8
2
3 # Copyright (c) IPython Development Team.
4 # Distributed under the terms of the Modified BSD License.
5 import os
6
7 GENERATING_DOCUMENTATION = os.environ.get("IN_SPHINX_RUN", None) == "True"
@@ -1,2862 +1,2953 b''
1 1 """Completion for IPython.
2 2
3 3 This module started as a fork of the rlcompleter module in the Python standard
4 4 library. The original enhancements made to rlcompleter have been sent
5 5 upstream and were accepted as of Python 2.3.
6 6
7 7 This module now supports a wide variety of completion mechanisms, both
8 8 available for normal classic Python code, and completers for IPython-specific
9 9 syntax such as magics.
10 10
11 11 Latex and Unicode completion
12 12 ============================
13 13
14 14 IPython and compatible frontends can not only complete your code, but can also
15 15 help you input a wide range of characters. In particular, we allow you to
16 16 insert a unicode character using the tab completion mechanism.
17 17
18 18 Forward latex/unicode completion
19 19 --------------------------------
20 20
21 21 Forward completion allows you to easily type a unicode character using its latex
22 22 name, or its unicode long description. To do so, type a backslash followed by the
23 23 relevant name and press tab:
24 24
25 25
26 26 Using latex completion:
27 27
28 28 .. code::
29 29
30 30 \\alpha<tab>
31 31 Ξ±
32 32
33 33 or using unicode completion:
34 34
35 35
36 36 .. code::
37 37
38 38 \\GREEK SMALL LETTER ALPHA<tab>
39 39 Ξ±
40 40
41 41
42 42 Only valid Python identifiers will complete. Combining characters (like arrows or
43 43 dots) are also available; unlike in latex, they need to be put after their
44 44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45 45
46 46 Some browsers are known to display combining characters incorrectly.
47 47
48 48 Backward latex completion
49 49 -------------------------
50 50
51 51 It is sometimes challenging to know how to type a character. If you are using
52 52 IPython or any compatible frontend, you can prepend a backslash to the character
53 53 and press ``<tab>`` to expand it to its latex form.
54 54
55 55 .. code::
56 56
57 57 \\Ξ±<tab>
58 58 \\alpha
59 59
60 60
61 61 Both forward and backward completions can be deactivated by setting the
62 62 ``Completer.backslash_combining_completions`` option to ``False``.
63 63
64 64
65 65 Experimental
66 66 ============
67 67
68 68 Starting with IPython 6.0, this module can make use of the Jedi library to
69 69 generate completions both using static analysis of the code, and by dynamically
70 70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
71 71 library for Python. The APIs attached to this new mechanism are unstable and will
72 72 raise unless used in a :any:`provisionalcompleter` context manager.
73 73
74 74 You will find that the following are experimental:
75 75
76 76 - :any:`provisionalcompleter`
77 77 - :any:`IPCompleter.completions`
78 78 - :any:`Completion`
79 79 - :any:`rectify_completions`
80 80
81 81 .. note::
82 82
83 83 better name for :any:`rectify_completions` ?
84 84
85 85 We welcome any feedback on these new APIs, and we also encourage you to try this
86 86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
87 87 to have extra logging information if :any:`jedi` is crashing, or if the current
88 88 IPython completer's pending deprecations are returning results not yet handled
89 89 by :any:`jedi`.
90 90
91 91 Using Jedi for tab completion allows snippets like the following to work without
92 92 having to execute any code:
93 93
94 94 >>> myvar = ['hello', 42]
95 95 ... myvar[1].bi<tab>
96 96
97 97 Tab completion will be able to infer that ``myvar[1]`` is an integer without
98 98 executing any code, unlike the previously available ``IPCompleter.greedy``
99 99 option.
100 100
101 101 Be sure to update :any:`jedi` to the latest stable version or to try the
102 102 current development version to get better completions.
103 103
104 104 Matchers
105 105 ========
106 106
107 107 All completion routines are implemented using a unified *Matchers* API.
108 108 The matchers API is provisional and subject to change without notice.
109 109
110 110 The built-in matchers include:
111 111
112 - ``IPCompleter.dict_key_matcher``: dictionary key completions,
113 - ``IPCompleter.magic_matcher``: completions for magics,
114 - ``IPCompleter.unicode_name_matcher``, ``IPCompleter.fwd_unicode_matcher`` and ``IPCompleter.latex_matcher``: see `Forward latex/unicode completion`_,
115 - ``back_unicode_name_matcher`` and ``back_latex_name_matcher``: see `Backward latex completion`_,
116 - ``IPCompleter.file_matcher``: paths to files and directories,
117 - ``IPCompleter.python_func_kw_matcher`` - function keywords,
118 - ``IPCompleter.python_matches`` - globals and attributes (v1 API),
112 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
113 - :any:`IPCompleter.magic_matcher`: completions for magics,
114 - :any:`IPCompleter.unicode_name_matcher`,
115 :any:`IPCompleter.fwd_unicode_matcher`
116 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
117 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
118 - :any:`IPCompleter.file_matcher`: paths to files and directories,
119 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
120 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
119 121 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
120 - ``IPCompleter.custom_completer_matcher`` - pluggable completer with a default implementation in any:`core.InteractiveShell`
121 which uses uses IPython hooks system (`complete_command`) with string dispatch (including regular expressions).
122 Differently to other matchers, ``custom_completer_matcher`` will not suppress Jedi results to match
123 behaviour in earlier IPython versions.
122 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
123 implementation in :any:`InteractiveShell` which uses IPython hooks system
124 (`complete_command`) with string dispatch (including regular expressions).
125 Unlike other matchers, ``custom_completer_matcher`` will not suppress
126 Jedi results, to match behaviour in earlier IPython versions.
124 127
125 128 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.
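To make the registration above concrete, here is a minimal sketch of a v1-style custom matcher: a plain callable that takes the token being completed and returns a list of candidate strings. The fruit list, the function name, and the registration snippet are hypothetical, purely for illustration.

```python
# Hypothetical candidate list, purely for illustration.
FRUITS = ["apple", "apricot", "banana", "cherry"]

def fruit_matcher(text: str) -> list:
    """Return the fruit names starting with the typed fragment."""
    return [fruit for fruit in FRUITS if fruit.startswith(text)]

# In a live IPython session it would be registered along the lines of:
#     get_ipython().Completer.custom_matchers.append(fruit_matcher)
```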
126 129
130 Matcher API
131 -----------
132
133 Simplifying some details, the ``Matcher`` interface can be described as
134
135 .. code::
136
137 MatcherAPIv1 = Callable[[str], list[str]]
138 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
139
140 Matcher = MatcherAPIv1 | MatcherAPIv2
141
142 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
143 and remains supported as the simplest way of generating completions. This is also
144 currently the only API supported by the IPython hooks system `complete_command`.
145
146 The ``matcher_api_version`` attribute is used to distinguish between matcher versions.
147 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
148 and requires a literal ``2`` for v2 Matchers.
149
150 Once the API stabilises, future versions may relax the requirement for specifying
151 ``matcher_api_version`` by switching to :any:`functools.singledispatch`, so
152 please do not rely on the presence of ``matcher_api_version`` for any purpose.
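A rough sketch of the two call shapes and the version detection just described (both matchers and the helper are hypothetical stand-ins, not the actual IPython implementation):

```python
def v1_matcher(text):
    # MatcherAPIv1: takes the token, returns a plain list of strings;
    # the matcher_api_version attribute may be omitted entirely.
    return [text + "_option"]

def v2_matcher(context):
    # MatcherAPIv2: takes a CompletionContext, returns a MatcherResult-like
    # dict; the literal version attribute is mandatory for v2.
    return {"completions": []}
v2_matcher.matcher_api_version = 2

def get_api_version(matcher):
    # Mirrors the detection rule above: a missing attribute means v1.
    return getattr(matcher, "matcher_api_version", 1)
```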
153
127 154 Suppression of competing matchers
128 155 ---------------------------------
129 156
130 157 By default, results from all matchers are combined in the order determined by
131 158 their priority. Matchers can request to suppress results from subsequent
132 159 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
133 160
134 161 When multiple matchers simultaneously request suppression, the results of
135 162 the matcher with the highest priority will be returned.
136 163
137 164 Sometimes it is desirable to suppress most but not all other matchers;
138 165 this can be achieved by adding a list of identifiers of matchers which
139 166 should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key.
167
168 The suppression behaviour is user-configurable via
169 :any:`IPCompleter.suppress_competing_matchers`.
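To make the keys concrete, here is a hypothetical v2 result asking to suppress every other matcher except one named matcher (the identifier and candidate strings are invented for illustration; real results carry ``SimpleCompletion`` objects):

```python
# Hypothetical MatcherResult-shaped dict, for illustration only.
result = {
    # real matchers return SimpleCompletion objects here, not strings
    "completions": ["%timeit", "%time"],
    # suppress results from all other matchers...
    "suppress": True,
    # ...except the matcher with this identifier
    "do_not_suppress": {"IPCompleter.magic_matcher"},
}
```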
140 170 """
141 171
142 172
143 173 # Copyright (c) IPython Development Team.
144 174 # Distributed under the terms of the Modified BSD License.
145 175 #
146 176 # Some of this code originated from rlcompleter in the Python standard library
147 177 # Copyright (C) 2001 Python Software Foundation, www.python.org
148 178
149
179 from __future__ import annotations
150 180 import builtins as builtin_mod
151 181 import glob
152 182 import inspect
153 183 import itertools
154 184 import keyword
155 185 import os
156 186 import re
157 187 import string
158 188 import sys
159 189 import time
160 190 import unicodedata
161 191 import uuid
162 192 import warnings
163 193 from contextlib import contextmanager
164 194 from functools import lru_cache, partial
165 195 from importlib import import_module
166 196 from types import SimpleNamespace
167 197 from typing import (
168 198 Iterable,
169 199 Iterator,
170 200 List,
171 201 Tuple,
172 202 Union,
173 203 Any,
174 204 Sequence,
175 205 Dict,
176 206 NamedTuple,
177 207 Pattern,
178 208 Optional,
179 Callable,
180 209 TYPE_CHECKING,
181 210 Set,
211 Literal,
182 212 )
183 213
184 214 from IPython.core.error import TryNext
185 215 from IPython.core.inputtransformer2 import ESC_MAGIC
186 216 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
187 217 from IPython.core.oinspect import InspectColors
188 218 from IPython.testing.skipdoctest import skip_doctest
189 219 from IPython.utils import generics
220 from IPython.utils.decorators import sphinx_options
190 221 from IPython.utils.dir2 import dir2, get_real_method
222 from IPython.utils.docs import GENERATING_DOCUMENTATION
191 223 from IPython.utils.path import ensure_dir_exists
192 224 from IPython.utils.process import arg_split
193 225 from traitlets import (
194 226 Bool,
195 227 Enum,
196 228 Int,
197 229 List as ListTrait,
198 230 Unicode,
199 231 Dict as DictTrait,
200 232 Union as UnionTrait,
201 233 default,
202 234 observe,
203 235 )
204 236 from traitlets.config.configurable import Configurable
205 237
206 238 import __main__
207 239
208 240 # skip module docstests
209 241 __skip_doctest__ = True
210 242
211 243
212 244 try:
213 245 import jedi
214 246 jedi.settings.case_insensitive_completion = False
215 247 import jedi.api.helpers
216 248 import jedi.api.classes
217 249 JEDI_INSTALLED = True
218 250 except ImportError:
219 251 JEDI_INSTALLED = False
220 252
221 if TYPE_CHECKING:
253
254 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
222 255 from typing import cast
223 from typing_extensions import TypedDict, NotRequired
256 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias
224 257 else:
225 258
226 def cast(obj, _type):
259 def cast(obj, type_):
260 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
227 261 return obj
228 262
229 TypedDict = Dict
230 NotRequired = Tuple
263 # do not require on runtime
264 NotRequired = Tuple # requires Python >=3.11
265 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
266 Protocol = object # requires Python >=3.8
267 TypeAlias = Any # requires Python >=3.10
268 if GENERATING_DOCUMENTATION:
269 from typing import TypedDict
231 270
232 271 # -----------------------------------------------------------------------------
233 272 # Globals
234 273 #-----------------------------------------------------------------------------
235 274
236 275 # Ranges where we have most of the valid unicode names. We could be finer
237 276 # grained, but is it worth it for performance? While unicode has characters in
238 277 # the range 0-0x110000, we seem to have names for only about 10% of those
239 278 # (131808 as I write this). With the ranges below we cover them all, with a
240 279 # density of ~67%; the biggest next gap we could add only contributes about 1%
241 280 # density, and there are 600 gaps that would need hard coding.
242 281 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
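A quick sanity check of the densities quoted in the comment above. The name count of 131808 is taken from the comment itself; exact figures vary by Unicode version, so treat the numbers as approximate.

```python
# The two ranges from _UNICODE_RANGES above.
RANGES = [(32, 0x3134B), (0xE0001, 0xE01F0)]
NAMED = 131_808  # count quoted in the source comment

covered = sum(stop - start for start, stop in RANGES)
density = NAMED / covered     # ~0.65, roughly the "~67%" quoted above
overall = NAMED / 0x110000    # ~0.12, i.e. "about 10%" of all code points
```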
243 282
244 283 # Public API
245 284 __all__ = ["Completer", "IPCompleter"]
246 285
247 286 if sys.platform == 'win32':
248 287 PROTECTABLES = ' '
249 288 else:
250 289 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
251 290
252 291 # Protect against returning an enormous number of completions which the frontend
253 292 # may have trouble processing.
254 293 MATCHES_LIMIT = 500
255 294
256 295 # Completion type reported when no type can be inferred.
257 296 _UNKNOWN_TYPE = "<unknown>"
258 297
259 298 class ProvisionalCompleterWarning(FutureWarning):
260 299 """
261 300 Exception raised by an experimental feature in this module.
262 301
263 302 Wrap code in :any:`provisionalcompleter` context manager if you
264 303 are certain you want to use an unstable feature.
265 304 """
266 305 pass
267 306
268 307 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
269 308
270 309
271 310 @skip_doctest
272 311 @contextmanager
273 312 def provisionalcompleter(action='ignore'):
274 313 """
275 314 This context manager has to be used in any place where unstable completer
276 315 behavior or APIs may be called.
277 316
278 317 >>> with provisionalcompleter():
279 318 ... completer.do_experimental_things() # works
280 319
281 320 >>> completer.do_experimental_things() # raises.
282 321
283 322 .. note::
284 323
285 324 Unstable
286 325
287 326 By using this context manager you agree that the API in use may change
288 327 without warning, and that you won't complain if it does so.
289 328
290 329 You also understand that, if the API is not to your liking, you should report
291 330 a bug to explain your use case upstream.
292 331
293 332 We'll be happy to get your feedback, feature requests, and improvements on
294 333 any of the unstable APIs!
295 334 """
296 335 with warnings.catch_warnings():
297 336 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
298 337 yield
299 338
300 339
301 340 def has_open_quotes(s):
302 341 """Return whether a string has open quotes.
303 342
304 343 This simply counts whether the number of quote characters of either type in
305 344 the string is odd.
306 345
307 346 Returns
308 347 -------
309 348 If there is an open quote, the quote character is returned. Else, return
310 349 False.
311 350 """
312 351 # We check " first, then ', so complex cases with nested quotes will get
313 352 # the " to take precedence.
314 353 if s.count('"') % 2:
315 354 return '"'
316 355 elif s.count("'") % 2:
317 356 return "'"
318 357 else:
319 358 return False
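The precedence rule in the docstring (``"`` is checked before ``'``) can be verified directly. The function is restated here verbatim from the source above so the demo is self-contained.

```python
def has_open_quotes(s):
    # An odd count of a quote character means that quote is still open;
    # double quotes take precedence over single quotes.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    return False

# has_open_quotes('print("ab')  -> '"'
# has_open_quotes("it's")       -> "'"
# has_open_quotes("closed()")   -> False
```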
320 359
321 360
322 361 def protect_filename(s, protectables=PROTECTABLES):
323 362 """Escape a string to protect certain characters."""
324 363 if set(s) & set(protectables):
325 364 if sys.platform == "win32":
326 365 return '"' + s + '"'
327 366 else:
328 367 return "".join(("\\" + c if c in protectables else c) for c in s)
329 368 else:
330 369 return s
331 370
332 371
333 372 def expand_user(path:str) -> Tuple[str, bool, str]:
334 373 """Expand ``~``-style usernames in strings.
335 374
336 375 This is similar to :func:`os.path.expanduser`, but it computes and returns
337 376 extra information that will be useful if the input was being used in
338 377 computing completions, and you wish to return the completions with the
339 378 original '~' instead of its expanded value.
340 379
341 380 Parameters
342 381 ----------
343 382 path : str
344 383 String to be expanded. If no ~ is present, the output is the same as the
345 384 input.
346 385
347 386 Returns
348 387 -------
349 388 newpath : str
350 389 Result of ~ expansion in the input path.
351 390 tilde_expand : bool
352 391 Whether any expansion was performed or not.
353 392 tilde_val : str
354 393 The value that ~ was replaced with.
355 394 """
356 395 # Default values
357 396 tilde_expand = False
358 397 tilde_val = ''
359 398 newpath = path
360 399
361 400 if path.startswith('~'):
362 401 tilde_expand = True
363 402 rest = len(path)-1
364 403 newpath = os.path.expanduser(path)
365 404 if rest:
366 405 tilde_val = newpath[:-rest]
367 406 else:
368 407 tilde_val = newpath
369 408
370 409 return newpath, tilde_expand, tilde_val
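The three return values above fit together as follows; the function is restated compactly from the source so the demo is runnable. Note that the expanded home path depends on the environment, so only invariants are shown.

```python
import os

def expand_user(path):
    # Restated from the source above: expand a leading '~' and remember
    # what it expanded to, so completions can be shown un-expanded.
    tilde_expand, tilde_val, newpath = False, "", path
    if path.startswith("~"):
        tilde_expand = True
        rest = len(path) - 1
        newpath = os.path.expanduser(path)
        tilde_val = newpath[:-rest] if rest else newpath
    return newpath, tilde_expand, tilde_val

# A path without '~' passes through untouched:
#     expand_user("data/file.txt") -> ("data/file.txt", False, "")
# For '~' paths, compress_user(newpath, tilde_expand, tilde_val)
# restores the original '~'-prefixed spelling.
```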
371 410
372 411
373 412 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
374 413 """Does the opposite of expand_user, with its outputs.
375 414 """
376 415 if tilde_expand:
377 416 return path.replace(tilde_val, '~')
378 417 else:
379 418 return path
380 419
381 420
382 421 def completions_sorting_key(word):
383 422 """key for sorting completions
384 423
385 424 This does several things:
386 425
387 426 - Demote any completions starting with underscores to the end
388 427 - Insert any %magic and %%cellmagic completions in the alphabetical order
389 428 by their name
390 429 """
391 430 prio1, prio2 = 0, 0
392 431
393 432 if word.startswith('__'):
394 433 prio1 = 2
395 434 elif word.startswith('_'):
396 435 prio1 = 1
397 436
398 437 if word.endswith('='):
399 438 prio1 = -1
400 439
401 440 if word.startswith('%%'):
402 441 # If there's another % in there, this is something else, so leave it alone
403 442 if not "%" in word[2:]:
404 443 word = word[2:]
405 444 prio2 = 2
406 445 elif word.startswith('%'):
407 446 if not "%" in word[1:]:
408 447 word = word[1:]
409 448 prio2 = 1
410 449
411 450 return prio1, word, prio2
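The effect of the sort key above is easiest to see on a mixed list: dunder and underscore names sink to the end, while magics interleave alphabetically by their bare name. The function is restated from the source so the demo is self-contained.

```python
def completions_sorting_key(word):
    # Restated from the source above.
    prio1, prio2 = 0, 0
    if word.startswith("__"):
        prio1 = 2
    elif word.startswith("_"):
        prio1 = 1
    if word.endswith("="):
        prio1 = -1
    if word.startswith("%%"):
        # another % means something else; leave it alone
        if "%" not in word[2:]:
            word, prio2 = word[2:], 2
    elif word.startswith("%"):
        if "%" not in word[1:]:
            word, prio2 = word[1:], 1
    return prio1, word, prio2

words = ["_private", "__dunder", "alpha", "%magic", "beta"]
# sorted(words, key=completions_sorting_key)
# -> ['alpha', 'beta', '%magic', '_private', '__dunder']
```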
412 451
413 452
414 453 class _FakeJediCompletion:
415 454 """
416 455 This is a workaround to communicate to the UI that Jedi has crashed and to
417 456 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
418 457
419 458 Added in IPython 6.0, so should likely be removed for 7.0.
420 459
421 460 """
422 461
423 462 def __init__(self, name):
424 463
425 464 self.name = name
426 465 self.complete = name
427 466 self.type = 'crashed'
428 467 self.name_with_symbols = name
429 468 self.signature = ''
430 469 self._origin = 'fake'
431 470
432 471 def __repr__(self):
433 472 return '<Fake completion object jedi has crashed>'
434 473
435 474
436 475 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
437 476
438 477
439 478 class Completion:
440 479 """
441 480 Completion object used and returned by IPython completers.
442 481
443 482 .. warning::
444 483
445 484 Unstable
446 485
447 486 This function is unstable, API may change without warning.
448 487 It will also raise unless used in the proper context manager.
449 488
450 489 This acts as a middle-ground :any:`Completion` object between the
451 490 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
452 491 object. While Jedi needs a lot of information about the evaluator and how the
453 492 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
454 493 need user-facing information:
455 494
456 495 - Which range should be replaced by what.
457 496 - Some metadata (like the completion type), or meta information to display to
458 497 the user.
459 498
460 499 For debugging purposes we can also store the origin of the completion (``jedi``,
461 500 ``IPython.python_matches``, ``IPython.magics_matches``...).
462 501 """
463 502
464 503 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
465 504
466 505 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
467 506 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
468 507 "It may change without warnings. "
469 508 "Use in corresponding context manager.",
470 509 category=ProvisionalCompleterWarning, stacklevel=2)
471 510
472 511 self.start = start
473 512 self.end = end
474 513 self.text = text
475 514 self.type = type
476 515 self.signature = signature
477 516 self._origin = _origin
478 517
479 518 def __repr__(self):
480 519 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
481 520 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
482 521
483 522 def __eq__(self, other) -> bool:
484 523 """
485 524 Equality and hash do not include the type (as some completers may not be
486 525 able to infer the type), but are used to (partially) de-duplicate
487 526 completions.
488 527
489 528 Completely de-duplicating completions is a bit trickier than just
490 529 comparing, as it depends on the surrounding text, which Completions are
491 530 not aware of.
492 531 """
493 532 return self.start == other.start and \
494 533 self.end == other.end and \
495 534 self.text == other.text
496 535
497 536 def __hash__(self):
498 537 return hash((self.start, self.end, self.text))
499 538
500 539
501 540 class SimpleCompletion:
502 541 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
503 542
504 543 .. warning::
505 544
506 545 Provisional
507 546
508 547 This class is used to describe the currently supported attributes of
509 548 simple completion items, and any additional implementation details
510 549 should not be relied on. Additional attributes may be included in
511 550 future versions, and the meaning of ``text`` may be disambiguated from its
512 551 current dual meaning of "text to insert" and "text to be used as a label".
513 552 """
514 553
515 554 __slots__ = ["text", "type"]
516 555
517 556 def __init__(self, text: str, *, type: str = None):
518 557 self.text = text
519 558 self.type = type
520 559
521 560 def __repr__(self):
522 561 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
523 562
524 563
525 class MatcherResultBase(TypedDict):
564 class _MatcherResultBase(TypedDict):
526 565 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
527 566
528 #: suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
567 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
529 568 matched_fragment: NotRequired[str]
530 569
531 #: whether to suppress results from all other matchers (True), some
570 #: Whether to suppress results from all other matchers (True), some
532 571 #: matchers (set of identifiers) or none (False); default is False.
533 572 suppress: NotRequired[Union[bool, Set[str]]]
534 573
535 #: identifiers of matchers which should NOT be suppressed
574 #: Identifiers of matchers which should NOT be suppressed when this matcher
575 #: requests to suppress all other matchers; defaults to an empty set.
536 576 do_not_suppress: NotRequired[Set[str]]
537 577
538 #: are completions already ordered and should be left as-is? default is False.
578 #: Are completions already ordered and should be left as-is? default is False.
539 579 ordered: NotRequired[bool]
540 580
541 581
542 class SimpleMatcherResult(MatcherResultBase):
582 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
583 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
543 584 """Result of new-style completion matcher."""
544 585
545 #: list of candidate completions
586 # note: TypedDict is added again to the inheritance chain
587 # in order to get __orig_bases__ for documentation
588
589 #: List of candidate completions
546 590 completions: Sequence[SimpleCompletion]
547 591
548 592
549 class _JediMatcherResult(MatcherResultBase):
593 class _JediMatcherResult(_MatcherResultBase):
550 594 """Matching result returned by Jedi (will be processed differently)"""
551 595
552 596 #: list of candidate completions
553 597 completions: Iterable[_JediCompletionLike]
554 598
555 599
556 600 class CompletionContext(NamedTuple):
557 601 """Completion context provided as an argument to matchers in the Matcher API v2."""
558 602
559 603 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
560 604 # which was not explicitly visible as an argument of the matcher, making any refactor
561 605 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
562 606 # from the completer, and make substituting them in sub-classes easier.
563 607
564 608 #: Relevant fragment of code directly preceding the cursor.
565 609 #: The extraction of token is implemented via splitter heuristic
566 610 #: (following readline behaviour for legacy reasons), which is user configurable
567 611 #: (by switching the greedy mode).
568 612 token: str
569 613
570 614 #: The full available content of the editor or buffer
571 615 full_text: str
572 616
573 617 #: Cursor position in the line (the same for ``full_text`` and ``text``).
574 618 cursor_position: int
575 619
576 620 #: Cursor line in ``full_text``.
577 621 cursor_line: int
578 622
579 623 #: The maximum number of completions that will be used downstream.
580 624 #: Matchers can use this information to abort early.
581 625 #: The built-in Jedi matcher is currently exempt from this limit.
582 626 limit: int
583 627
584 628 @property
585 629 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
586 630 def text_until_cursor(self) -> str:
587 631 return self.line_with_cursor[: self.cursor_position]
588 632
589 633 @property
590 634 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
591 635 def line_with_cursor(self) -> str:
592 636 return self.full_text.split("\n")[self.cursor_line]
593 637
594 638
639 #: Matcher results for API v2.
595 640 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
596 641
597 MatcherAPIv1 = Callable[[str], List[str]]
598 MatcherAPIv2 = Callable[[CompletionContext], MatcherResult]
599 Matcher = Union[MatcherAPIv1, MatcherAPIv2]
642
643 class _MatcherAPIv1Base(Protocol):
644 def __call__(self, text: str) -> list[str]:
645 """Call signature."""
646
647
648 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
649 #: API version
650 matcher_api_version: Optional[Literal[1]]
651
652 def __call__(self, text: str) -> list[str]:
653 """Call signature."""
654
655
656 #: Protocol describing Matcher API v1.
657 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
658
659
660 class MatcherAPIv2(Protocol):
661 """Protocol describing Matcher API v2."""
662
663 #: API version
664 matcher_api_version: Literal[2] = 2
665
666 def __call__(self, context: CompletionContext) -> MatcherResult:
667 """Call signature."""
668
669
670 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
600 671
601 672
602 673 def completion_matcher(
603 674 *, priority: float = None, identifier: str = None, api_version: int = 1
604 675 ):
605 676 """Adds attributes describing the matcher.
606 677
607 678 Parameters
608 679 ----------
609 680 priority : Optional[float]
610 681 The priority of the matcher, determines the order of execution of matchers.
611 682 Higher priority means that the matcher will be executed first. Defaults to 0.
612 683 identifier : Optional[str]
613 684 identifier of the matcher, allowing users to modify the behaviour via traitlets,
614 685 and also used for debugging (will be passed as ``origin`` with the completions).
615 686 Defaults to matcher function ``__qualname__``.
616 687 api_version: Optional[int]
617 688 version of the Matcher API used by this matcher.
618 689 Currently supported values are 1 and 2.
619 690 Defaults to 1.
620 691 """
621 692
622 693 def wrapper(func: Matcher):
623 694 func.matcher_priority = priority or 0
624 695 func.matcher_identifier = identifier or func.__qualname__
625 696 func.matcher_api_version = api_version
626 697 if TYPE_CHECKING:
627 698 if api_version == 1:
628 699 func = cast(func, MatcherAPIv1)
629 700 elif api_version == 2:
630 701 func = cast(func, MatcherAPIv2)
631 702 return func
632 703
633 704 return wrapper
634 705
635 706
636 707 def _get_matcher_priority(matcher: Matcher):
637 708 return getattr(matcher, "matcher_priority", 0)
638 709
639 710
640 711 def _get_matcher_id(matcher: Matcher):
641 712 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
642 713
643 714
644 715 def _get_matcher_api_version(matcher):
645 716 return getattr(matcher, "matcher_api_version", 1)
646 717
647 718
648 719 context_matcher = partial(completion_matcher, api_version=2)
649 720
650 721
651 722 _IC = Iterable[Completion]
652 723
653 724
654 725 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
655 726 """
656 727 Deduplicate a set of completions.
657 728
658 729 .. warning::
659 730
660 731 Unstable
661 732
662 733 This function is unstable, API may change without warning.
663 734
664 735 Parameters
665 736 ----------
666 737 text : str
667 738 text that should be completed.
668 739 completions : Iterator[Completion]
669 740 iterator over the completions to deduplicate
670 741
671 742 Yields
672 743 ------
673 744 `Completions` objects
674 745 Completions coming from multiple sources may be different but end up having
675 746 the same effect when applied to ``text``. If this is the case, this will
676 747 consider the completions as equal and only emit the first one encountered.
677 748 Not folded into `completions()` yet for debugging purposes, and to detect
678 749 when the IPython completer returns things that Jedi does not, but it
679 750 should be at some point.
680 751 """
681 752 completions = list(completions)
682 753 if not completions:
683 754 return
684 755
685 756 new_start = min(c.start for c in completions)
686 757 new_end = max(c.end for c in completions)
687 758
688 759 seen = set()
689 760 for c in completions:
690 761 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
691 762 if new_text not in seen:
692 763 yield c
693 764 seen.add(new_text)
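The normalisation trick above (extending every completion to a common span before comparing the resulting text) can be demonstrated with a lightweight stand-in for ``Completion``, used here because the real class is provisional and warns on construction.

```python
from collections import namedtuple

Comp = namedtuple("Comp", ["start", "end", "text"])  # stand-in for Completion

def deduplicate(text, completions):
    # Same logic as _deduplicate_completions above, collected into a list.
    completions = list(completions)
    if not completions:
        return []
    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)
    seen, unique = set(), []
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            unique.append(c)
            seen.add(new_text)
    return unique

# Two completions from different sources that produce the same final text
# ("myvar.bit_length") collapse to the first one encountered:
text = "myvar.bi"
comps = [Comp(6, 8, "bit_length"), Comp(0, 8, "myvar.bit_length")]
```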
694 765
695 766
696 767 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
697 768 """
698 769 Rectify a set of completions to all have the same ``start`` and ``end``
699 770
700 771 .. warning::
701 772
702 773 Unstable
703 774
704 775 This function is unstable, API may change without warning.
705 776 It will also raise unless used in the proper context manager.
706 777
707 778 Parameters
708 779 ----------
709 780 text : str
710 781 text that should be completed.
711 782 completions : Iterator[Completion]
712 783 iterator over the completions to rectify
713 784 _debug : bool
714 785 Log failed completion
715 786
716 787 Notes
717 788 -----
718 789 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
719 790 the Jupyter Protocol requires them to. This will readjust
720 791 the completions to have the same ``start`` and ``end`` by padding both
721 792 extremities with surrounding text.
722 793
723 794 During stabilisation, this should support a ``_debug`` option to log which
724 795 completions are returned by the IPython completer but not found by Jedi,
725 796 in order to make upstream bug reports.
726 797 """
727 798 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
728 799 "It may change without warnings. "
729 800 "Use in corresponding context manager.",
730 801 category=ProvisionalCompleterWarning, stacklevel=2)
731 802
732 803 completions = list(completions)
733 804 if not completions:
734 805 return
735 806 starts = (c.start for c in completions)
736 807 ends = (c.end for c in completions)
737 808
738 809 new_start = min(starts)
739 810 new_end = max(ends)
740 811
741 812 seen_jedi = set()
742 813 seen_python_matches = set()
743 814 for c in completions:
744 815 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
745 816 if c._origin == 'jedi':
746 817 seen_jedi.add(new_text)
747 818 elif c._origin == 'IPCompleter.python_matches':
748 819 seen_python_matches.add(new_text)
749 820 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
750 821 diff = seen_python_matches.difference(seen_jedi)
751 822 if diff and _debug:
752 823 print('IPython.python matches have extras:', diff)
753 824
754 825
755 826 if sys.platform == 'win32':
756 827 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
757 828 else:
758 829 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
759 830
760 831 GREEDY_DELIMS = ' =\r\n'
761 832
762 833
763 834 class CompletionSplitter(object):
764 835 """An object to split an input line in a manner similar to readline.
765 836
766 837 By having our own implementation, we can expose readline-like completion in
767 838 a uniform manner to all frontends. This object only needs to be given the
768 839 line of text to be split and the cursor position on said line, and it
769 840 returns the 'word' to be completed on at the cursor after splitting the
770 841 entire line.
771 842
772 843 What characters are used as splitting delimiters can be controlled by
773 844 setting the ``delims`` attribute (this is a property that internally
774 845 automatically builds the necessary regular expression)"""
775 846
776 847 # Private interface
777 848
778 849 # A string of delimiter characters. The default value makes sense for
779 850 # IPython's most typical usage patterns.
780 851 _delims = DELIMS
781 852
782 853 # The expression (a normal string) to be compiled into a regular expression
783 854 # for actual splitting. We store it as an attribute mostly for ease of
784 855 # debugging, since this type of code can be so tricky to debug.
785 856 _delim_expr = None
786 857
787 858 # The regular expression that does the actual splitting
788 859 _delim_re = None
789 860
790 861 def __init__(self, delims=None):
791 862 delims = CompletionSplitter._delims if delims is None else delims
792 863 self.delims = delims
793 864
794 865 @property
795 866 def delims(self):
796 867 """Return the string of delimiter characters."""
797 868 return self._delims
798 869
799 870 @delims.setter
800 871 def delims(self, delims):
801 872 """Set the delimiters for line splitting."""
802 873 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
803 874 self._delim_re = re.compile(expr)
804 875 self._delims = delims
805 876 self._delim_expr = expr
806 877
807 878 def split_line(self, line, cursor_pos=None):
808 879 """Split a line of text with a cursor at the given position.
809 880 """
810 881 l = line if cursor_pos is None else line[:cursor_pos]
811 882 return self._delim_re.split(l)[-1]
812 883
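The delimiter-splitting behaviour described above can be sketched standalone. This is a minimal re-implementation for illustration, copying the non-Windows ``DELIMS`` from this module; the real class additionally caches the compiled regex via the ``delims`` property:

```python
import re

# Delimiters copied from the non-Windows DELIMS defined above.
DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'

def split_line(line, cursor_pos=None):
    # Build a character class escaping every delimiter, as the delims setter does.
    expr = '[' + ''.join('\\' + c for c in DELIMS) + ']'
    delim_re = re.compile(expr)
    # Only text up to the cursor matters; the last field is the word to complete.
    l = line if cursor_pos is None else line[:cursor_pos]
    return delim_re.split(l)[-1]

print(split_line("plot(data.col"))      # 'data.col' -- '.' is not a delimiter
print(split_line("plot(data.col", 4))   # 'plot' -- truncated at the cursor
```

Note that ``.`` is deliberately not a delimiter, so attribute chains survive the split and can be handled by the attribute matcher.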
813 884
814 885
815 886 class Completer(Configurable):
816 887
817 888 greedy = Bool(False,
818 889 help="""Activate greedy completion
819 890 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
820 891
821 892 This will enable completion on elements of lists, results of function calls, etc.,
822 893 but can be unsafe because the code is actually evaluated on TAB.
823 894 """,
824 895 ).tag(config=True)
825 896
826 897 use_jedi = Bool(default_value=JEDI_INSTALLED,
827 898 help="Experimental: Use Jedi to generate autocompletions. "
828 899 "Defaults to True if Jedi is installed.").tag(config=True)
829 900
830 901 jedi_compute_type_timeout = Int(default_value=400,
831 902 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
832 903 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
833 904 performance by preventing jedi to build its cache.
834 905 """).tag(config=True)
835 906
836 907 debug = Bool(default_value=False,
837 908 help='Enable debug for the Completer. Mostly print extra '
838 909 'information for experimental jedi integration.')\
839 910 .tag(config=True)
840 911
841 912 backslash_combining_completions = Bool(True,
842 913 help="Enable unicode completions, e.g. \\alpha<tab> . "
843 914 "Includes completion of latex commands, unicode names, and expanding "
844 915 "unicode characters back to latex commands.").tag(config=True)
845 916
846 917 def __init__(self, namespace=None, global_namespace=None, **kwargs):
847 918 """Create a new completer for the command line.
848 919
849 920 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
850 921
851 922 If unspecified, the default namespace where completions are performed
852 923 is __main__ (technically, __main__.__dict__). Namespaces should be
853 924 given as dictionaries.
854 925
855 926 An optional second namespace can be given. This allows the completer
856 927 to handle cases where both the local and global scopes need to be
857 928 distinguished.
858 929 """
859 930
860 931 # Don't bind to namespace quite yet, but flag whether the user wants a
861 932 # specific namespace or to use __main__.__dict__. This will allow us
862 933 # to bind to __main__.__dict__ at completion time, not now.
863 934 if namespace is None:
864 935 self.use_main_ns = True
865 936 else:
866 937 self.use_main_ns = False
867 938 self.namespace = namespace
868 939
869 940 # The global namespace, if given, can be bound directly
870 941 if global_namespace is None:
871 942 self.global_namespace = {}
872 943 else:
873 944 self.global_namespace = global_namespace
874 945
875 946 self.custom_matchers = []
876 947
877 948 super(Completer, self).__init__(**kwargs)
878 949
879 950 def complete(self, text, state):
880 951 """Return the next possible completion for 'text'.
881 952
882 953 This is called successively with state == 0, 1, 2, ... until it
883 954 returns None. The completion should begin with 'text'.
884 955
885 956 """
886 957 if self.use_main_ns:
887 958 self.namespace = __main__.__dict__
888 959
889 960 if state == 0:
890 961 if "." in text:
891 962 self.matches = self.attr_matches(text)
892 963 else:
893 964 self.matches = self.global_matches(text)
894 965 try:
895 966 return self.matches[state]
896 967 except IndexError:
897 968 return None
898 969
899 970 def global_matches(self, text):
900 971 """Compute matches when text is a simple name.
901 972
902 973 Return a list of all keywords, built-in functions and names currently
903 974 defined in self.namespace or self.global_namespace that match.
904 975
905 976 """
906 977 matches = []
907 978 match_append = matches.append
908 979 n = len(text)
909 980 for lst in [
910 981 keyword.kwlist,
911 982 builtin_mod.__dict__.keys(),
912 983 list(self.namespace.keys()),
913 984 list(self.global_namespace.keys()),
914 985 ]:
915 986 for word in lst:
916 987 if word[:n] == text and word != "__builtins__":
917 988 match_append(word)
918 989
919 990 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
920 991 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
921 992 shortened = {
922 993 "_".join([sub[0] for sub in word.split("_")]): word
923 994 for word in lst
924 995 if snake_case_re.match(word)
925 996 }
926 997 for word in shortened.keys():
927 998 if word[:n] == text and word != "__builtins__":
928 999 match_append(shortened[word])
929 1000 return matches
930 1001
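The abbreviation branch above (offering a full ``snake_case`` name when its underscore-joined initials are typed) can be illustrated in isolation. The names used here are made up for the example; the regex and the shortening expression are the ones from ``global_matches``:

```python
import re

# Requires at least one underscore-separated part, as in global_matches.
snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")

def abbreviation_matches(text, names):
    # Map initials joined by '_' (e.g. 'd_m') back to the full name.
    shortened = {
        "_".join(sub[0] for sub in word.split("_")): word
        for word in names
        if snake_case_re.match(word)
    }
    n = len(text)
    return [full for short, full in shortened.items() if short[:n] == text]

print(abbreviation_matches("d_m", ["display_markdown", "plain_text", "single"]))
# ['display_markdown']
```

``single`` is filtered out because it contains no underscore; ``plain_text`` shortens to ``p_t`` and so does not match ``d_m``.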
931 1002 def attr_matches(self, text):
932 1003 """Compute matches when text contains a dot.
933 1004
934 1005 Assuming the text is of the form NAME.NAME....[NAME], and is
935 1006 evaluatable in self.namespace or self.global_namespace, it will be
936 1007 evaluated and its attributes (as revealed by dir()) are used as
937 1008 possible completions. (For class instances, class members are
938 1009 also considered.)
939 1010
940 1011 WARNING: this can still invoke arbitrary C code, if an object
941 1012 with a __getattr__ hook is evaluated.
942 1013
943 1014 """
944 1015
945 1016 # Another option, seems to work great. Catches things like ''.<tab>
946 1017 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
947 1018
948 1019 if m:
949 1020 expr, attr = m.group(1, 3)
950 1021 elif self.greedy:
951 1022 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
952 1023 if not m2:
953 1024 return []
954 1025 expr, attr = m2.group(1,2)
955 1026 else:
956 1027 return []
957 1028
958 1029 try:
959 1030 obj = eval(expr, self.namespace)
960 1031 except:
961 1032 try:
962 1033 obj = eval(expr, self.global_namespace)
963 1034 except:
964 1035 return []
965 1036
966 1037 if self.limit_to__all__ and hasattr(obj, '__all__'):
967 1038 words = get__all__entries(obj)
968 1039 else:
969 1040 words = dir2(obj)
970 1041
971 1042 try:
972 1043 words = generics.complete_object(obj, words)
973 1044 except TryNext:
974 1045 pass
975 1046 except AssertionError:
976 1047 raise
977 1048 except Exception:
978 1049 # Silence errors from completion function
979 1050 #raise # dbg
980 1051 pass
981 1052 # Build match list to return
982 1053 n = len(attr)
983 1054 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
984 1055
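A much-simplified sketch of the dotted-attribute matching above, for illustration only: it keeps the ``eval`` + ``dir()`` + prefix-filter shape but drops the greedy branch, ``__all__`` handling, and the ``generics.complete_object`` hook (and uses ``\w`` rather than ``\S`` in the regex):

```python
import re

def attr_matches(text, namespace):
    # 'NAME.NAME....prefix' -> evaluate the object part, then filter dir().
    m = re.match(r"(\w+(\.\w+)*)\.(\w*)$", text)
    if not m:
        return []
    expr, attr = m.group(1, 3)
    try:
        # Caution: this runs code, as the WARNING in the docstring above notes.
        obj = eval(expr, namespace)
    except Exception:
        return []
    return ["%s.%s" % (expr, w) for w in dir(obj) if w.startswith(attr)]

print(attr_matches("math.sq", {"math": __import__("math")}))  # ['math.sqrt']
```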
985 1056
986 1057 def get__all__entries(obj):
987 1058 """returns the strings in the __all__ attribute"""
988 1059 try:
989 1060 words = getattr(obj, '__all__')
990 1061 except:
991 1062 return []
992 1063
993 1064 return [w for w in words if isinstance(w, str)]
994 1065
995 1066
996 1067 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
997 1068 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
998 1069 """Used by dict_key_matches, matching the prefix to a list of keys
999 1070
1000 1071 Parameters
1001 1072 ----------
1002 1073 keys
1003 1074 list of keys in dictionary currently being completed.
1004 1075 prefix
1005 1076 Part of the text already typed by the user. E.g. `mydict[b'fo`
1006 1077 delims
1007 1078 String of delimiters to consider when finding the current key.
1008 1079 extra_prefix : optional
1009 1080 Part of the text already typed in multi-key index cases. E.g. for
1010 1081 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1011 1082
1012 1083 Returns
1013 1084 -------
1014 1085 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1015 1086 ``quote`` being the quote that needs to be used to close the current string,
1016 1087 ``token_start`` the position where the replacement should start occurring, and
1017 1088 ``matched`` a list of replacement/completion candidates.
1018 1089
1019 1090 """
1020 1091 prefix_tuple = extra_prefix if extra_prefix else ()
1021 1092 Nprefix = len(prefix_tuple)
1022 1093 def filter_prefix_tuple(key):
1023 1094 # Reject too short keys
1024 1095 if len(key) <= Nprefix:
1025 1096 return False
1026 1097 # Reject keys with non str/bytes in it
1027 1098 for k in key:
1028 1099 if not isinstance(k, (str, bytes)):
1029 1100 return False
1030 1101 # Reject keys that do not match the prefix
1031 1102 for k, pt in zip(key, prefix_tuple):
1032 1103 if k != pt:
1033 1104 return False
1034 1105 # All checks passed!
1035 1106 return True
1036 1107
1037 1108 filtered_keys:List[Union[str,bytes]] = []
1038 1109 def _add_to_filtered_keys(key):
1039 1110 if isinstance(key, (str, bytes)):
1040 1111 filtered_keys.append(key)
1041 1112
1042 1113 for k in keys:
1043 1114 if isinstance(k, tuple):
1044 1115 if filter_prefix_tuple(k):
1045 1116 _add_to_filtered_keys(k[Nprefix])
1046 1117 else:
1047 1118 _add_to_filtered_keys(k)
1048 1119
1049 1120 if not prefix:
1050 1121 return '', 0, [repr(k) for k in filtered_keys]
1051 1122 quote_match = re.search('["\']', prefix)
1052 1123 assert quote_match is not None # silence mypy
1053 1124 quote = quote_match.group()
1054 1125 try:
1055 1126 prefix_str = eval(prefix + quote, {})
1056 1127 except Exception:
1057 1128 return '', 0, []
1058 1129
1059 1130 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1060 1131 token_match = re.search(pattern, prefix, re.UNICODE)
1061 1132 assert token_match is not None # silence mypy
1062 1133 token_start = token_match.start()
1063 1134 token_prefix = token_match.group()
1064 1135
1065 1136 matched:List[str] = []
1066 1137 for key in filtered_keys:
1067 1138 try:
1068 1139 if not key.startswith(prefix_str):
1069 1140 continue
1070 1141 except (AttributeError, TypeError, UnicodeError):
1071 1142 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1072 1143 continue
1073 1144
1074 1145 # reformat remainder of key to begin with prefix
1075 1146 rem = key[len(prefix_str):]
1076 1147 # force repr wrapped in '
1077 1148 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1078 1149 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1079 1150 if quote == '"':
1080 1151 # The entered prefix is quoted with ",
1081 1152 # but the match is quoted with '.
1082 1153 # A contained " hence needs escaping for comparison:
1083 1154 rem_repr = rem_repr.replace('"', '\\"')
1084 1155
1085 1156 # then reinsert prefix from start of token
1086 1157 matched.append('%s%s' % (token_prefix, rem_repr))
1087 1158 return quote, token_start, matched
1088 1159
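The core idea of the key matching above can be sketched in a much-simplified standalone form. This deliberately ignores the quoting/escaping, byte keys, and tuple prefixes handled by the real helper; the function name is made up:

```python
def simple_dict_key_matches(keys, prefix):
    """Return string keys matching what the user typed after the quote."""
    if not prefix:
        # No text typed yet: offer every key, repr-quoted as the real helper does.
        return [repr(k) for k in keys]
    typed = prefix[1:]  # drop the opening quote, e.g. "'fo" -> "fo"
    return [k for k in keys if isinstance(k, str) and k.startswith(typed)]

print(simple_dict_key_matches(["foo", "food", "bar"], "'fo"))  # ['foo', 'food']
print(simple_dict_key_matches(["a"], ""))                      # ["'a'"]
```

The real implementation additionally re-``repr``s the remainder of each key so that the completion text closes with the same quote the user opened.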
1089 1160
1090 1161 def cursor_to_position(text:str, line:int, column:int)->int:
1091 1162 """
1092 1163 Convert the (line,column) position of the cursor in text to an offset in a
1093 1164 string.
1094 1165
1095 1166 Parameters
1096 1167 ----------
1097 1168 text : str
1098 1169 The text in which to calculate the cursor offset
1099 1170 line : int
1100 1171 Line of the cursor; 0-indexed
1101 1172 column : int
1102 1173 Column of the cursor; 0-indexed
1103 1174
1104 1175 Returns
1105 1176 -------
1106 1177 Position of the cursor in ``text``, 0-indexed.
1107 1178
1108 1179 See Also
1109 1180 --------
1110 1181 position_to_cursor : reciprocal of this function
1111 1182
1112 1183 """
1113 1184 lines = text.split('\n')
1114 1185 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1115 1186
1116 1187 return sum(len(l) + 1 for l in lines[:line]) + column
1117 1188
1118 1189 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1119 1190 """
1120 1191 Convert the position of the cursor in text (0 indexed) to a line
1121 1192 number(0-indexed) and a column number (0-indexed) pair
1122 1193
1123 1194 Position should be a valid position in ``text``.
1124 1195
1125 1196 Parameters
1126 1197 ----------
1127 1198 text : str
1128 1199 The text in which to calculate the cursor offset
1129 1200 offset : int
1130 1201 Position of the cursor in ``text``, 0-indexed.
1131 1202
1132 1203 Returns
1133 1204 -------
1134 1205 (line, column) : (int, int)
1135 1206 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1136 1207
1137 1208 See Also
1138 1209 --------
1139 1210 cursor_to_position : reciprocal of this function
1140 1211
1141 1212 """
1142 1213
1143 1214 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1144 1215
1145 1216 before = text[:offset]
1146 1217 blines = before.split('\n') # not str.splitlines: it would drop a trailing '\n'
1147 1218 line = before.count('\n')
1148 1219 col = len(blines[-1])
1149 1220 return line, col
1150 1221
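Since the two helpers above are reciprocal, a standalone sketch of the arithmetic makes the invariant explicit (each preceding line contributes its length plus one for the ``'\n'`` separator):

```python
def cursor_to_position(text, line, column):
    lines = text.split('\n')
    # Each full line before the cursor contributes len(line) + 1 for '\n'.
    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text, offset):
    before = text[:offset]
    # Line index is the number of newlines before the offset;
    # column is the length of the last partial line.
    return before.count('\n'), len(before.split('\n')[-1])

text = "def f():\n    return 1\n"
offset = cursor_to_position(text, 1, 4)   # cursor at start of 'return'
print(offset, repr(text[offset:offset + 6]))  # 13 'return'
assert position_to_cursor(text, offset) == (1, 4)  # round-trips back
```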
1151 1222
1152 1223 def _safe_isinstance(obj, module, class_name):
1153 1224 """Checks if obj is an instance of module.class_name if loaded
1154 1225 """
1155 1226 return (module in sys.modules and
1156 1227 isinstance(obj, getattr(import_module(module), class_name)))
1157 1228
1158 1229
1159 1230 @context_matcher()
1160 1231 def back_unicode_name_matcher(context):
1161 1232 """Match Unicode characters back to Unicode name
1162 1233
1163 Same as ``back_unicode_name_matches``, but adopted to new Matcher API.
1234 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API.
1164 1235 """
1165 1236 fragment, matches = back_unicode_name_matches(context.token)
1166 1237 return _convert_matcher_v1_result_to_v2(
1167 1238 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1168 1239 )
1169 1240
1170 1241
1171 1242 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1172 1243 """Match Unicode characters back to Unicode name
1173 1244
1174 1245 This does ``β˜ƒ`` -> ``\\snowman``
1175 1246
1176 1247 Note that snowman is not a valid python3 combining character but will be expanded.
1177 1248 Though it will not be recombined back into the snowman character by the completion machinery.
1178 1249
1179 1250 Neither will this back-complete standard escape sequences like \\n, \\b ...
1180 1251
1252 .. deprecated:: 8.6
1253 You can use :meth:`back_unicode_name_matcher` instead.
1254
1181 1255 Returns
1182 1256 =======
1183 1257
1184 1258 Return a tuple with two elements:
1185 1259
1186 1260 - The Unicode character that was matched (preceded with a backslash), or
1187 1261 empty string,
1188 1262 - a sequence (of 1), name for the match Unicode character, preceded by
1189 1263 backslash, or empty if no match.
1190
1191 1264 """
1192 1265 if len(text)<2:
1193 1266 return '', ()
1194 1267 maybe_slash = text[-2]
1195 1268 if maybe_slash != '\\':
1196 1269 return '', ()
1197 1270
1198 1271 char = text[-1]
1199 1272 # no expand on quote for completion in strings.
1200 1273 # nor backcomplete standard ascii keys
1201 1274 if char in string.ascii_letters or char in ('"',"'"):
1202 1275 return '', ()
1203 1276 try :
1204 1277 unic = unicodedata.name(char)
1205 1278 return '\\'+char,('\\'+unic,)
1206 1279 except (KeyError, ValueError): # unicodedata.name raises ValueError for unnamed chars
1207 1280 pass
1208 1281 return '', ()
1209 1282
1210 1283
1211 1284 @context_matcher()
1212 1285 def back_latex_name_matcher(context):
1213 1286 """Match latex characters back to unicode name
1214 1287
1215 Same as ``back_latex_name_matches``, but adopted to new Matcher API.
1288 Same as :any:`back_latex_name_matches`, but adopted to new Matcher API.
1216 1289 """
1217 1290 fragment, matches = back_latex_name_matches(context.token)
1218 1291 return _convert_matcher_v1_result_to_v2(
1219 1292 matches, type="latex", fragment=fragment, suppress_if_matches=True
1220 1293 )
1221 1294
1222 1295
1223 1296 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1224 1297 """Match latex characters back to unicode name
1225 1298
1226 1299 This does ``\\β„΅`` -> ``\\aleph``
1227 1300
1301 .. deprecated:: 8.6
1302 You can use :meth:`back_latex_name_matcher` instead.
1228 1303 """
1229 1304 if len(text)<2:
1230 1305 return '', ()
1231 1306 maybe_slash = text[-2]
1232 1307 if maybe_slash != '\\':
1233 1308 return '', ()
1234 1309
1235 1310
1236 1311 char = text[-1]
1237 1312 # no expand on quote for completion in strings.
1238 1313 # nor backcomplete standard ascii keys
1239 1314 if char in string.ascii_letters or char in ('"',"'"):
1240 1315 return '', ()
1241 1316 try :
1242 1317 latex = reverse_latex_symbol[char]
1243 1318 # '\\' replace the \ as well
1244 1319 return '\\'+char,[latex]
1245 1320 except KeyError:
1246 1321 pass
1247 1322 return '', ()
1248 1323
1249 1324
1250 1325 def _formatparamchildren(parameter) -> str:
1251 1326 """
1252 1327 Get parameter name and value from Jedi Private API
1253 1328
1254 1329 Jedi does not expose a simple way to get `param=value` from its API.
1255 1330
1256 1331 Parameters
1257 1332 ----------
1258 1333 parameter
1259 1334 Jedi's function `Param`
1260 1335
1261 1336 Returns
1262 1337 -------
1263 1338 A string like 'a', 'b=1', '*args', '**kwargs'
1264 1339
1265 1340 """
1266 1341 description = parameter.description
1267 1342 if not description.startswith('param '):
1268 1343 raise ValueError('Jedi function parameter description has changed format. '
1269 1344 'Expected "param ...", found %r.' % description)
1270 1345 return description[6:]
1271 1346
1272 1347 def _make_signature(completion)-> str:
1273 1348 """
1274 1349 Make the signature from a jedi completion
1275 1350
1276 1351 Parameters
1277 1352 ----------
1278 1353 completion : jedi.Completion
1279 1354 object; it need not complete to a function type
1280 1355
1281 1356 Returns
1282 1357 -------
1283 1358 a string consisting of the function signature, with the parenthesis but
1284 1359 without the function name. example:
1285 1360 `(a, *args, b=1, **kwargs)`
1286 1361
1287 1362 """
1288 1363
1289 1364 # it looks like this might work on jedi 0.17
1290 1365 if hasattr(completion, 'get_signatures'):
1291 1366 signatures = completion.get_signatures()
1292 1367 if not signatures:
1293 1368 return '(?)'
1294 1369
1295 1370 c0 = completion.get_signatures()[0]
1296 1371 return '('+c0.to_string().split('(', maxsplit=1)[1]
1297 1372
1298 1373 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1299 1374 for p in signature.defined_names()) if f])
1300 1375
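For comparison, the same kind of parenthesised signature string can be produced for a live object with the standard ``inspect`` module. This is only an analogy: the code above works on static Jedi ``Completion`` objects, not live objects, and the helper name here is made up:

```python
import inspect

def make_signature(obj):
    # str(Signature) already includes the parentheses and defaults,
    # e.g. '(a, *args, b=1, **kwargs)'.
    return str(inspect.signature(obj))

def example(a, *args, b=1, **kwargs):
    pass

print(make_signature(example))  # (a, *args, b=1, **kwargs)
```

The Jedi-based version is needed because completion must work on objects that do not exist yet at runtime (static analysis of the buffer).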
1301 1376
1302 1377 _CompleteResult = Dict[str, MatcherResult]
1303 1378
1304 1379
1305 1380 def _convert_matcher_v1_result_to_v2(
1306 1381 matches: Sequence[str],
1307 1382 type: str,
1308 1383 fragment: Optional[str] = None,
1309 1384 suppress_if_matches: bool = False,
1310 1385 ) -> SimpleMatcherResult:
1311 1386 """Utility to help with transition"""
1312 1387 result = {
1313 1388 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1314 1389 "suppress": bool(matches) if suppress_if_matches else False,
1315 1390 }
1316 1391 if fragment is not None:
1317 1392 result["matched_fragment"] = fragment
1318 1393 return result
1319 1394
1320 1395
1321 1396 class IPCompleter(Completer):
1322 1397 """Extension of the completer class with IPython-specific features"""
1323 1398
1324 1399 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1325 1400
1326 1401 @observe('greedy')
1327 1402 def _greedy_changed(self, change):
1328 1403 """update the splitter and readline delims when greedy is changed"""
1329 1404 if change['new']:
1330 1405 self.splitter.delims = GREEDY_DELIMS
1331 1406 else:
1332 1407 self.splitter.delims = DELIMS
1333 1408
1334 1409 dict_keys_only = Bool(
1335 1410 False,
1336 1411 help="""
1337 1412 Whether to show dict key matches only.
1338 1413
1339 1414 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1340 1415 """,
1341 1416 )
1342 1417
1343 1418 suppress_competing_matchers = UnionTrait(
1344 1419 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1345 1420 default_value=None,
1346 1421 help="""
1347 1422 Whether to suppress completions from other *Matchers*.
1348 1423
1349 1424 When set to ``None`` (default) the matchers will attempt to auto-detect
1350 1425 whether suppression of other matchers is desirable. For example, at
1351 1426 the beginning of a line followed by `%` we expect a magic completion
1352 1427 to be the only applicable option, and after ``my_dict['`` we usually
1353 1428 expect a completion with an existing dictionary key.
1354 1429
1355 1430 If you want to disable this heuristic and see completions from all matchers,
1356 1431 set ``IPCompleter.suppress_competing_matchers = False``.
1357 1432 To disable the heuristic for specific matchers provide a dictionary mapping:
1358 1433 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1359 1434
1360 1435 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1361 1436 completions to the set of matchers with the highest priority;
1362 1437 this is equivalent to ``IPCompleter.merge_completions`` and
1363 1438 can be beneficial for performance, but will sometimes omit relevant
1364 1439 candidates from matchers further down the priority list.
1365 1440 """,
1366 1441 ).tag(config=True)
1367 1442
1368 1443 merge_completions = Bool(
1369 1444 True,
1370 1445 help="""Whether to merge completion results into a single list
1371 1446
1372 1447 If False, only the completion results from the first non-empty
1373 1448 completer will be returned.
1374 1449
1375 1450 As of version 8.6.0, setting the value to ``False`` is an alias for:
1376 1451 ``IPCompleter.suppress_competing_matchers = True``.
1377 1452 """,
1378 1453 ).tag(config=True)
1379 1454
1380 1455 disable_matchers = ListTrait(
1381 1456 Unicode(), help="""List of matchers to disable."""
1382 1457 ).tag(config=True)
1383 1458
1384 1459 omit__names = Enum(
1385 1460 (0, 1, 2),
1386 1461 default_value=2,
1387 1462 help="""Instruct the completer to omit private method names
1388 1463
1389 1464 Specifically, when completing on ``object.<tab>``.
1390 1465
1391 1466 When 2 [default]: all names that start with '_' will be excluded.
1392 1467
1393 1468 When 1: all 'magic' names (``__foo__``) will be excluded.
1394 1469
1395 1470 When 0: nothing will be excluded.
1396 1471 """
1397 1472 ).tag(config=True)
1398 1473 limit_to__all__ = Bool(False,
1399 1474 help="""
1400 1475 DEPRECATED as of version 5.0.
1401 1476
1402 1477 Instruct the completer to use __all__ for the completion
1403 1478
1404 1479 Specifically, when completing on ``object.<tab>``.
1405 1480
1406 1481 When True: only those names in obj.__all__ will be included.
1407 1482
1408 1483 When False [default]: the __all__ attribute is ignored
1409 1484 """,
1410 1485 ).tag(config=True)
1411 1486
1412 1487 profile_completions = Bool(
1413 1488 default_value=False,
1414 1489 help="If True, emit profiling data for completion subsystem using cProfile."
1415 1490 ).tag(config=True)
1416 1491
1417 1492 profiler_output_dir = Unicode(
1418 1493 default_value=".completion_profiles",
1419 1494 help="Template for path at which to output profile data for completions."
1420 1495 ).tag(config=True)
1421 1496
1422 1497 @observe('limit_to__all__')
1423 1498 def _limit_to_all_changed(self, change):
1424 1499 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1425 1500 'value has been deprecated since IPython 5.0, will be made to have '
1426 1501 'no effect and then removed in a future version of IPython.',
1427 1502 UserWarning)
1428 1503
1429 1504 def __init__(
1430 1505 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1431 1506 ):
1432 1507 """IPCompleter() -> completer
1433 1508
1434 1509 Return a completer object.
1435 1510
1436 1511 Parameters
1437 1512 ----------
1438 1513 shell
1439 1514 a pointer to the ipython shell itself. This is needed
1440 1515 because this completer knows about magic functions, and those can
1441 1516 only be accessed via the ipython instance.
1442 1517 namespace : dict, optional
1443 1518 an optional dict where completions are performed.
1444 1519 global_namespace : dict, optional
1445 1520 secondary optional dict for completions, to
1446 1521 handle cases (such as IPython embedded inside functions) where
1447 1522 both Python scopes are visible.
1448 1523 config : Config
1449 1524 traitlet's config object
1450 1525 **kwargs
1451 1526 passed to super class unmodified.
1452 1527 """
1453 1528
1454 1529 self.magic_escape = ESC_MAGIC
1455 1530 self.splitter = CompletionSplitter()
1456 1531
1457 1532 # _greedy_changed() depends on splitter and readline being defined:
1458 1533 super().__init__(
1459 1534 namespace=namespace,
1460 1535 global_namespace=global_namespace,
1461 1536 config=config,
1462 1537 **kwargs,
1463 1538 )
1464 1539
1465 1540 # List where completion matches will be stored
1466 1541 self.matches = []
1467 1542 self.shell = shell
1468 1543 # Regexp to split filenames with spaces in them
1469 1544 self.space_name_re = re.compile(r'([^\\] )')
1470 1545 # Hold a local ref. to glob.glob for speed
1471 1546 self.glob = glob.glob
1472 1547
1473 1548 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1474 1549 # buffers, to avoid completion problems.
1475 1550 term = os.environ.get('TERM','xterm')
1476 1551 self.dumb_terminal = term in ['dumb','emacs']
1477 1552
1478 1553 # Special handling of backslashes needed in win32 platforms
1479 1554 if sys.platform == "win32":
1480 1555 self.clean_glob = self._clean_glob_win32
1481 1556 else:
1482 1557 self.clean_glob = self._clean_glob
1483 1558
1484 1559 #regexp to parse docstring for function signature
1485 1560 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1486 1561 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1487 1562 #use this if positional argument name is also needed
1488 1563 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1489 1564
1490 1565 self.magic_arg_matchers = [
1491 1566 self.magic_config_matcher,
1492 1567 self.magic_color_matcher,
1493 1568 ]
1494 1569
1495 1570 # This is set externally by InteractiveShell
1496 1571 self.custom_completers = None
1497 1572
1498 1573 # This is a list of names of unicode characters that can be completed
1499 1574 # into their corresponding unicode value. The list is large, so we
1500 1575 # lazily initialize it on first use. Consuming code should access this
1501 1576 # attribute through the `@unicode_names` property.
1502 1577 self._unicode_names = None
1503 1578
1504 1579 self._backslash_combining_matchers = [
1505 1580 self.latex_name_matcher,
1506 1581 self.unicode_name_matcher,
1507 1582 back_latex_name_matcher,
1508 1583 back_unicode_name_matcher,
1509 1584 self.fwd_unicode_matcher,
1510 1585 ]
1511 1586
1512 1587 if not self.backslash_combining_completions:
1513 1588 for matcher in self._backslash_combining_matchers:
1514 1589 self.disable_matchers.append(matcher.matcher_identifier)
1515 1590
1516 1591 if not self.merge_completions:
1517 1592 self.suppress_competing_matchers = True
1518 1593
1519 1594 @property
1520 1595 def matchers(self) -> List[Matcher]:
1521 1596 """All active matcher routines for completion"""
1522 1597 if self.dict_keys_only:
1523 1598 return [self.dict_key_matcher]
1524 1599
1525 1600 if self.use_jedi:
1526 1601 return [
1527 1602 *self.custom_matchers,
1528 1603 *self._backslash_combining_matchers,
1529 1604 *self.magic_arg_matchers,
1530 1605 self.custom_completer_matcher,
1531 1606 self.magic_matcher,
1532 1607 self._jedi_matcher,
1533 1608 self.dict_key_matcher,
1534 1609 self.file_matcher,
1535 1610 ]
1536 1611 else:
1537 1612 return [
1538 1613 *self.custom_matchers,
1539 1614 *self._backslash_combining_matchers,
1540 1615 *self.magic_arg_matchers,
1541 1616 self.custom_completer_matcher,
1542 1617 self.dict_key_matcher,
1543 1618 # TODO: convert python_matches to v2 API
1544 1619 self.magic_matcher,
1545 1620 self.python_matches,
1546 1621 self.file_matcher,
1547 1622 self.python_func_kw_matcher,
1548 1623 ]
1549 1624
1550 1625 def all_completions(self, text:str) -> List[str]:
1551 1626 """
1552 1627 Wrapper around the completion methods for the benefit of emacs.
1553 1628 """
1554 1629 prefix = text.rpartition('.')[0]
1555 1630 with provisionalcompleter():
1556 1631 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1557 1632 for c in self.completions(text, len(text))]
1558 1633
1559 1634 return self.complete(text)[1]
1560 1635
1561 1636 def _clean_glob(self, text:str):
1562 1637 return self.glob("%s*" % text)
1563 1638
1564 1639 def _clean_glob_win32(self, text:str):
1565 1640 return [f.replace("\\","/")
1566 1641 for f in self.glob("%s*" % text)]
1567 1642
1568 1643 @context_matcher()
1569 1644 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1570 """Same as ``file_matches``, but adopted to new Matcher API."""
1645 """Same as :any:`file_matches`, but adopted to new Matcher API."""
1571 1646 matches = self.file_matches(context.token)
1572 1647 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1573 1648 # starts with `/home/`, `C:\`, etc)
1574 1649 return _convert_matcher_v1_result_to_v2(matches, type="path")
1575 1650
1576 1651 def file_matches(self, text: str) -> List[str]:
1577 1652 """Match filenames, expanding ~USER type strings.
1578 1653
1579 1654 Most of the seemingly convoluted logic in this completer is an
1580 1655 attempt to handle filenames with spaces in them. And yet it's not
1581 1656 quite perfect, because Python's readline doesn't expose all of the
1582 1657 GNU readline details needed for this to be done correctly.
1583 1658
1584 1659 For a filename with a space in it, the printed completions will be
1585 1660 only the parts after what's already been typed (instead of the
1586 1661 full completions, as is normally done). I don't think with the
1587 1662 current (as of Python 2.3) Python readline it's possible to do
1588 1663 better.
1589 1664
1590 DEPRECATED: Deprecated since 8.6. Use ``file_matcher`` instead.
1665 .. deprecated:: 8.6
1666 You can use :meth:`file_matcher` instead.
1591 1667 """
1592 1668
1593 1669 # chars that require escaping with backslash - i.e. chars
1594 1670 # that readline treats incorrectly as delimiters, but we
1595 1671 # don't want to treat as delimiters in filename matching
1596 1672 # when escaped with backslash
1597 1673 if text.startswith('!'):
1598 1674 text = text[1:]
1599 1675 text_prefix = u'!'
1600 1676 else:
1601 1677 text_prefix = u''
1602 1678
1603 1679 text_until_cursor = self.text_until_cursor
1604 1680 # track strings with open quotes
1605 1681 open_quotes = has_open_quotes(text_until_cursor)
1606 1682
1607 1683 if '(' in text_until_cursor or '[' in text_until_cursor:
1608 1684 lsplit = text
1609 1685 else:
1610 1686 try:
1611 1687 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1612 1688 lsplit = arg_split(text_until_cursor)[-1]
1613 1689 except ValueError:
1614 1690 # typically an unmatched ", or backslash without escaped char.
1615 1691 if open_quotes:
1616 1692 lsplit = text_until_cursor.split(open_quotes)[-1]
1617 1693 else:
1618 1694 return []
1619 1695 except IndexError:
1620 1696 # tab pressed on empty line
1621 1697 lsplit = ""
1622 1698
1623 1699 if not open_quotes and lsplit != protect_filename(lsplit):
1624 1700 # if protectables are found, do matching on the whole escaped name
1625 1701 has_protectables = True
1626 1702 text0,text = text,lsplit
1627 1703 else:
1628 1704 has_protectables = False
1629 1705 text = os.path.expanduser(text)
1630 1706
1631 1707 if text == "":
1632 1708 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1633 1709
1634 1710 # Compute the matches from the filesystem
1635 1711 if sys.platform == 'win32':
1636 1712 m0 = self.clean_glob(text)
1637 1713 else:
1638 1714 m0 = self.clean_glob(text.replace('\\', ''))
1639 1715
1640 1716 if has_protectables:
1641 1717 # If we had protectables, we need to revert our changes to the
1642 1718 # beginning of filename so that we don't double-write the part
1643 1719 # of the filename we have so far
1644 1720 len_lsplit = len(lsplit)
1645 1721 matches = [text_prefix + text0 +
1646 1722 protect_filename(f[len_lsplit:]) for f in m0]
1647 1723 else:
1648 1724 if open_quotes:
1649 1725 # if we have a string with an open quote, we don't need to
1650 1726 # protect the names beyond the quote (and we _shouldn't_, as
1651 1727 # it would cause bugs when the filesystem call is made).
1652 1728 matches = m0 if sys.platform == "win32" else\
1653 1729 [protect_filename(f, open_quotes) for f in m0]
1654 1730 else:
1655 1731 matches = [text_prefix +
1656 1732 protect_filename(f) for f in m0]
1657 1733
1658 1734 # Mark directories in input list by appending '/' to their names.
1659 1735 return [x+'/' if os.path.isdir(x) else x for x in matches]
1660 1736
1661 1737 @context_matcher()
1662 1738 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1739 """Match magics."""
1663 1740 text = context.token
1664 1741 matches = self.magic_matches(text)
1665 1742 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1666 1743 is_magic_prefix = len(text) > 0 and text[0] == "%"
1667 1744 result["suppress"] = is_magic_prefix and bool(result["completions"])
1668 1745 return result
1669 1746
1670 1747 def magic_matches(self, text: str):
1671 1748 """Match magics.
1672 1749
1673 DEPRECATED: Deprecated since 8.6. Use ``magic_matcher`` instead.
1750 .. deprecated:: 8.6
1751 You can use :meth:`magic_matcher` instead.
1674 1752 """
1675 1753 # Get all shell magics now rather than statically, so magics loaded at
1676 1754 # runtime show up too.
1677 1755 lsm = self.shell.magics_manager.lsmagic()
1678 1756 line_magics = lsm['line']
1679 1757 cell_magics = lsm['cell']
1680 1758 pre = self.magic_escape
1681 1759 pre2 = pre+pre
1682 1760
1683 1761 explicit_magic = text.startswith(pre)
1684 1762
1685 1763 # Completion logic:
1686 1764 # - user gives %%: only do cell magics
1687 1765 # - user gives %: do both line and cell magics
1688 1766 # - no prefix: do both
1689 1767 # In other words, line magics are skipped if the user gives %% explicitly
1690 1768 #
1691 1769 # We also exclude magics that match any currently visible names:
1692 1770 # https://github.com/ipython/ipython/issues/4877, unless the user has
1693 1771 # typed a %:
1694 1772 # https://github.com/ipython/ipython/issues/10754
1695 1773 bare_text = text.lstrip(pre)
1696 1774 global_matches = self.global_matches(bare_text)
1697 1775 if not explicit_magic:
1698 1776 def matches(magic):
1699 1777 """
1700 1778 Filter magics, in particular remove magics that match
1701 1779 a name present in global namespace.
1702 1780 """
1703 1781 return ( magic.startswith(bare_text) and
1704 1782 magic not in global_matches )
1705 1783 else:
1706 1784 def matches(magic):
1707 1785 return magic.startswith(bare_text)
1708 1786
1709 1787 comp = [ pre2+m for m in cell_magics if matches(m)]
1710 1788 if not text.startswith(pre2):
1711 1789 comp += [ pre+m for m in line_magics if matches(m)]
1712 1790
1713 1791 return comp
1714 1792
1715 1793 @context_matcher()
1716 1794 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1717 1795 """Match class names and attributes for %config magic."""
1718 1796 # NOTE: uses `line_buffer` equivalent for compatibility
1719 1797 matches = self.magic_config_matches(context.line_with_cursor)
1720 1798 return _convert_matcher_v1_result_to_v2(matches, type="param")
1721 1799
1722 1800 def magic_config_matches(self, text: str) -> List[str]:
1723 1801 """Match class names and attributes for %config magic.
1724 1802
1725 DEPRECATED: Deprecated since 8.6. Use ``magic_config_matcher`` instead.
1803 .. deprecated:: 8.6
1804 You can use :meth:`magic_config_matcher` instead.
1726 1805 """
1727 1806 texts = text.strip().split()
1728 1807
1729 1808 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1730 1809 # get all configuration classes
1731 1810 classes = sorted(set([ c for c in self.shell.configurables
1732 1811 if c.__class__.class_traits(config=True)
1733 1812 ]), key=lambda x: x.__class__.__name__)
1734 1813 classnames = [ c.__class__.__name__ for c in classes ]
1735 1814
1736 1815 # return all classnames if config or %config is given
1737 1816 if len(texts) == 1:
1738 1817 return classnames
1739 1818
1740 1819 # match classname
1741 1820 classname_texts = texts[1].split('.')
1742 1821 classname = classname_texts[0]
1743 1822 classname_matches = [ c for c in classnames
1744 1823 if c.startswith(classname) ]
1745 1824
1746 1825 # return matched classes or the matched class with attributes
1747 1826 if texts[1].find('.') < 0:
1748 1827 return classname_matches
1749 1828 elif len(classname_matches) == 1 and \
1750 1829 classname_matches[0] == classname:
1751 1830 cls = classes[classnames.index(classname)].__class__
1752 1831 help = cls.class_get_help()
1753 1832 # strip leading '--' from cl-args:
1754 1833 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1755 1834 return [ attr.split('=')[0]
1756 1835 for attr in help.strip().splitlines()
1757 1836 if attr.startswith(texts[1]) ]
1758 1837 return []
1759 1838
1760 1839 @context_matcher()
1761 1840 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1762 1841 """Match color schemes for %colors magic."""
1763 1842 # NOTE: uses `line_buffer` equivalent for compatibility
1764 1843 matches = self.magic_color_matches(context.line_with_cursor)
1765 1844 return _convert_matcher_v1_result_to_v2(matches, type="param")
1766 1845
1767 1846 def magic_color_matches(self, text: str) -> List[str]:
1768 1847 """Match color schemes for %colors magic.
1769 1848
1770 DEPRECATED: Deprecated since 8.6. Use ``magic_color_matcher`` instead.
1849 .. deprecated:: 8.6
1850 You can use :meth:`magic_color_matcher` instead.
1771 1851 """
1772 1852 texts = text.split()
1773 1853 if text.endswith(' '):
1774 1854 # .split() strips off the trailing whitespace. Add '' back
1775 1855 # so that: '%colors ' -> ['%colors', '']
1776 1856 texts.append('')
1777 1857
1778 1858 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1779 1859 prefix = texts[1]
1780 1860 return [ color for color in InspectColors.keys()
1781 1861 if color.startswith(prefix) ]
1782 1862 return []
1783 1863
1784 1864 @context_matcher(identifier="IPCompleter.jedi_matcher")
1785 1865 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1786 1866 matches = self._jedi_matches(
1787 1867 cursor_column=context.cursor_position,
1788 1868 cursor_line=context.cursor_line,
1789 1869 text=context.full_text,
1790 1870 )
1791 1871 return {
1792 1872 "completions": matches,
1793 1873 # static analysis should not suppress other matchers
1794 1874 "suppress": False,
1795 1875 }
1796 1876
1797 1877 def _jedi_matches(
1798 1878 self, cursor_column: int, cursor_line: int, text: str
1799 1879 ) -> Iterable[_JediCompletionLike]:
1800 1880 """
1801 1881 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
1802 1882 cursor position.
1803 1883
1804 1884 Parameters
1805 1885 ----------
1806 1886 cursor_column : int
1807 1887 column position of the cursor in ``text``, 0-indexed.
1808 1888 cursor_line : int
1809 1889 line position of the cursor in ``text``, 0-indexed
1810 1890 text : str
1811 1891 text to complete
1812 1892
1813 1893 Notes
1814 1894 -----
1815 1895 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1816 1896 object containing a string with the Jedi debug information attached.
1817 1897
1818 DEPRECATED: Deprecated since 8.6. Use ``_jedi_matcher`` instead.
1898 .. deprecated:: 8.6
1899 You can use :meth:`_jedi_matcher` instead.
1819 1900 """
1820 1901 namespaces = [self.namespace]
1821 1902 if self.global_namespace is not None:
1822 1903 namespaces.append(self.global_namespace)
1823 1904
1824 1905 completion_filter = lambda x:x
1825 1906 offset = cursor_to_position(text, cursor_line, cursor_column)
1826 1907 # filter output if we are completing for object members
1827 1908 if offset:
1828 1909 pre = text[offset-1]
1829 1910 if pre == '.':
1830 1911 if self.omit__names == 2:
1831 1912 completion_filter = lambda c:not c.name.startswith('_')
1832 1913 elif self.omit__names == 1:
1833 1914 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1834 1915 elif self.omit__names == 0:
1835 1916 completion_filter = lambda x:x
1836 1917 else:
1837 1918 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1838 1919
1839 1920 interpreter = jedi.Interpreter(text[:offset], namespaces)
1840 1921 try_jedi = True
1841 1922
1842 1923 try:
1843 1924 # find the first token in the current tree -- if it is a ' or " then we are in a string
1844 1925 completing_string = False
1845 1926 try:
1846 1927 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1847 1928 except StopIteration:
1848 1929 pass
1849 1930 else:
1850 1931 # note the value may be ', ", or it may also be ''' or """, or
1851 1932 # in some cases, """what/you/typed..., but all of these are
1852 1933 # strings.
1853 1934 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1854 1935
1855 1936 # if we are in a string jedi is likely not the right candidate for
1856 1937 # now. Skip it.
1857 1938 try_jedi = not completing_string
1858 1939 except Exception as e:
1859 1940 # many things can go wrong; we are using a private API, just don't crash.
1860 1941 if self.debug:
1861 1942 print("Error detecting if completing a non-finished string :", e, '|')
1862 1943
1863 1944 if not try_jedi:
1864 1945 return []
1865 1946 try:
1866 1947 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1867 1948 except Exception as e:
1868 1949 if self.debug:
1869 1950 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1870 1951 else:
1871 1952 return []
1872 1953
1873 1954 def python_matches(self, text:str)->List[str]:
1874 1955 """Match attributes or global python names"""
1875 1956 if "." in text:
1876 1957 try:
1877 1958 matches = self.attr_matches(text)
1878 1959 if text.endswith('.') and self.omit__names:
1879 1960 if self.omit__names == 1:
1880 1961 # true if txt is _not_ a __ name, false otherwise:
1881 1962 no__name = (lambda txt:
1882 1963 re.match(r'.*\.__.*?__',txt) is None)
1883 1964 else:
1884 1965 # true if txt is _not_ a _ name, false otherwise:
1885 1966 no__name = (lambda txt:
1886 1967 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1887 1968 matches = filter(no__name, matches)
1888 1969 except NameError:
1889 1970 # catches <undefined attributes>.<tab>
1890 1971 matches = []
1891 1972 else:
1892 1973 matches = self.global_matches(text)
1893 1974 return matches
1894 1975
1895 1976 def _default_arguments_from_docstring(self, doc):
1896 1977 """Parse the first line of docstring for call signature.
1897 1978
1898 1979 Docstring should be of the form 'min(iterable[, key=func])\n'.
1899 1980 It can also parse a cython docstring of the form
1900 1981 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1901 1982 """
1902 1983 if doc is None:
1903 1984 return []
1904 1985
1905 1986 # care only about the first line
1906 1987 line = doc.lstrip().splitlines()[0]
1907 1988
1908 1989 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1909 1990 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1910 1991 sig = self.docstring_sig_re.search(line)
1911 1992 if sig is None:
1912 1993 return []
1913 1994 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1914 1995 sig = sig.groups()[0].split(',')
1915 1996 ret = []
1916 1997 for s in sig:
1917 1998 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1918 1999 ret += self.docstring_kwd_re.findall(s)
1919 2000 return ret
1920 2001
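The commented-out patterns above appear to be the ``docstring_sig_re`` and ``docstring_kwd_re`` used by the method (both defined elsewhere in the class). A hypothetical standalone sketch of the same parsing, assuming those exact regexes:

```python
import re

# patterns copied from the inline comments above; assumed to match the
# class attributes docstring_sig_re and docstring_kwd_re
SIG_RE = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
KWD_RE = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')

def default_args_from_docstring(doc):
    """Hypothetical standalone version of _default_arguments_from_docstring."""
    if not doc:
        return []
    lines = doc.lstrip().splitlines()
    if not lines:
        return []
    # only the first line carries the call signature
    sig = SIG_RE.search(lines[0])
    if sig is None:
        return []
    # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
    ret = []
    for s in sig.groups()[0].split(','):
        # keep only names that are followed by '=<default>'
        ret += KWD_RE.findall(s)
    return ret
```

Only arguments with an explicit default survive the second regex, which is why plain positional names like ``iterable`` or ``self`` are dropped.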
1921 2002 def _default_arguments(self, obj):
1922 2003 """Return the list of default arguments of obj if it is callable,
1923 2004 or empty list otherwise."""
1924 2005 call_obj = obj
1925 2006 ret = []
1926 2007 if inspect.isbuiltin(obj):
1927 2008 pass
1928 2009 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1929 2010 if inspect.isclass(obj):
1930 2011 #for cython embedsignature=True the constructor docstring
1931 2012 #belongs to the object itself not __init__
1932 2013 ret += self._default_arguments_from_docstring(
1933 2014 getattr(obj, '__doc__', ''))
1934 2015 # for classes, check for __init__,__new__
1935 2016 call_obj = (getattr(obj, '__init__', None) or
1936 2017 getattr(obj, '__new__', None))
1937 2018 # for all others, check if they are __call__able
1938 2019 elif hasattr(obj, '__call__'):
1939 2020 call_obj = obj.__call__
1940 2021 ret += self._default_arguments_from_docstring(
1941 2022 getattr(call_obj, '__doc__', ''))
1942 2023
1943 2024 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1944 2025 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1945 2026
1946 2027 try:
1947 2028 sig = inspect.signature(obj)
1948 2029 ret.extend(k for k, v in sig.parameters.items() if
1949 2030 v.kind in _keeps)
1950 2031 except ValueError:
1951 2032 pass
1952 2033
1953 2034 return list(set(ret))
1954 2035
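The signature-based branch above keeps only parameters that may be passed as keywords. A minimal illustration of that filter (``f`` is a made-up function):

```python
import inspect

def keyword_capable_params(obj):
    """Sketch of the inspect.signature step in _default_arguments above."""
    keeps = (inspect.Parameter.KEYWORD_ONLY,
             inspect.Parameter.POSITIONAL_OR_KEYWORD)
    try:
        sig = inspect.signature(obj)
    except ValueError:
        # some builtins expose no signature
        return []
    return [name for name, p in sig.parameters.items() if p.kind in keeps]

def f(a, b=1, *args, c, **kwargs):
    pass

# *args (VAR_POSITIONAL) and **kwargs (VAR_KEYWORD) are excluded;
# a, b, and c can all be passed by keyword
```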
1955 2036 @context_matcher()
1956 2037 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1957 2038 """Match named parameters (kwargs) of the last open function."""
1958 2039 matches = self.python_func_kw_matches(context.token)
1959 2040 return _convert_matcher_v1_result_to_v2(matches, type="param")
1960 2041
1961 2042 def python_func_kw_matches(self, text):
1962 2043 """Match named parameters (kwargs) of the last open function.
1963 2044
1964 DEPRECATED: Deprecated since 8.6. Use ``magic_config_matcher`` instead.
2045 .. deprecated:: 8.6
2046 You can use :meth:`python_func_kw_matcher` instead.
1965 2047 """
1966 2048
1967 2049 if "." in text: # a parameter cannot be dotted
1968 2050 return []
1969 2051 try: regexp = self.__funcParamsRegex
1970 2052 except AttributeError:
1971 2053 regexp = self.__funcParamsRegex = re.compile(r'''
1972 2054 '.*?(?<!\\)' | # single quoted strings or
1973 2055 ".*?(?<!\\)" | # double quoted strings or
1974 2056 \w+ | # identifier
1975 2057 \S # other characters
1976 2058 ''', re.VERBOSE | re.DOTALL)
1977 2059 # 1. find the nearest identifier that comes before an unclosed
1978 2060 # parenthesis before the cursor
1979 2061 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1980 2062 tokens = regexp.findall(self.text_until_cursor)
1981 2063 iterTokens = reversed(tokens); openPar = 0
1982 2064
1983 2065 for token in iterTokens:
1984 2066 if token == ')':
1985 2067 openPar -= 1
1986 2068 elif token == '(':
1987 2069 openPar += 1
1988 2070 if openPar > 0:
1989 2071 # found the last unclosed parenthesis
1990 2072 break
1991 2073 else:
1992 2074 return []
1993 2075 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1994 2076 ids = []
1995 2077 isId = re.compile(r'\w+$').match
1996 2078
1997 2079 while True:
1998 2080 try:
1999 2081 ids.append(next(iterTokens))
2000 2082 if not isId(ids[-1]):
2001 2083 ids.pop(); break
2002 2084 if not next(iterTokens) == '.':
2003 2085 break
2004 2086 except StopIteration:
2005 2087 break
2006 2088
2007 2089 # Find all named arguments already assigned to, as to avoid suggesting
2008 2090 # them again
2009 2091 usedNamedArgs = set()
2010 2092 par_level = -1
2011 2093 for token, next_token in zip(tokens, tokens[1:]):
2012 2094 if token == '(':
2013 2095 par_level += 1
2014 2096 elif token == ')':
2015 2097 par_level -= 1
2016 2098
2017 2099 if par_level != 0:
2018 2100 continue
2019 2101
2020 2102 if next_token != '=':
2021 2103 continue
2022 2104
2023 2105 usedNamedArgs.add(token)
2024 2106
2025 2107 argMatches = []
2026 2108 try:
2027 2109 callableObj = '.'.join(ids[::-1])
2028 2110 namedArgs = self._default_arguments(eval(callableObj,
2029 2111 self.namespace))
2030 2112
2031 2113 # Remove used named arguments from the list, no need to show twice
2032 2114 for namedArg in set(namedArgs) - usedNamedArgs:
2033 2115 if namedArg.startswith(text):
2034 2116 argMatches.append("%s=" %namedArg)
2035 2117 except:
2036 2118 pass
2037 2119
2038 2120 return argMatches
2039 2121
2040 2122 @staticmethod
2041 2123 def _get_keys(obj: Any) -> List[Any]:
2042 2124 # Objects can define their own completions by defining an
2043 2125 # _ipython_key_completions_() method.
2044 2126 method = get_real_method(obj, '_ipython_key_completions_')
2045 2127 if method is not None:
2046 2128 return method()
2047 2129
2048 2130 # Special case some common in-memory dict-like types
2049 2131 if isinstance(obj, dict) or\
2050 2132 _safe_isinstance(obj, 'pandas', 'DataFrame'):
2051 2133 try:
2052 2134 return list(obj.keys())
2053 2135 except Exception:
2054 2136 return []
2055 2137 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2056 2138 _safe_isinstance(obj, 'numpy', 'void'):
2057 2139 return obj.dtype.names or []
2058 2140 return []
2059 2141
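``_ipython_key_completions_`` is the hook checked first above: any object may define it to control what ``obj["<tab>`` offers. A toy sketch of the protocol, with a made-up ``Config`` class:

```python
class Config:
    """Toy mapping that advertises its own key completions to IPython."""

    def __init__(self, data):
        self._data = data

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        # called via _get_keys above; return the keys to offer
        return list(self._data)

cfg = Config({"host": "localhost", "port": 8080})
```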
2060 2142 @context_matcher()
2061 2143 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2062 2144 """Match string keys in a dictionary, after e.g. ``foo[``."""
2063 2145 matches = self.dict_key_matches(context.token)
2064 2146 return _convert_matcher_v1_result_to_v2(
2065 2147 matches, type="dict key", suppress_if_matches=True
2066 2148 )
2067 2149
2068 2150 def dict_key_matches(self, text: str) -> List[str]:
2069 2151 """Match string keys in a dictionary, after e.g. ``foo[``.
2070 2152
2071 DEPRECATED: Deprecated since 8.6. Use `dict_key_matcher` instead.
2153 .. deprecated:: 8.6
2154 You can use :meth:`dict_key_matcher` instead.
2072 2155 """
2073 2156
2074 2157 if self.__dict_key_regexps is not None:
2075 2158 regexps = self.__dict_key_regexps
2076 2159 else:
2077 2160 dict_key_re_fmt = r'''(?x)
2078 2161 ( # match dict-referring expression wrt greedy setting
2079 2162 %s
2080 2163 )
2081 2164 \[ # open bracket
2082 2165 \s* # and optional whitespace
2083 2166 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2084 2167 ((?:[uUbB]? # string prefix (r not handled)
2085 2168 (?:
2086 2169 '(?:[^']|(?<!\\)\\')*'
2087 2170 |
2088 2171 "(?:[^"]|(?<!\\)\\")*"
2089 2172 )
2090 2173 \s*,\s*
2091 2174 )*)
2092 2175 ([uUbB]? # string prefix (r not handled)
2093 2176 (?: # unclosed string
2094 2177 '(?:[^']|(?<!\\)\\')*
2095 2178 |
2096 2179 "(?:[^"]|(?<!\\)\\")*
2097 2180 )
2098 2181 )?
2099 2182 $
2100 2183 '''
2101 2184 regexps = self.__dict_key_regexps = {
2102 2185 False: re.compile(dict_key_re_fmt % r'''
2103 2186 # identifiers separated by .
2104 2187 (?!\d)\w+
2105 2188 (?:\.(?!\d)\w+)*
2106 2189 '''),
2107 2190 True: re.compile(dict_key_re_fmt % '''
2108 2191 .+
2109 2192 ''')
2110 2193 }
2111 2194
2112 2195 match = regexps[self.greedy].search(self.text_until_cursor)
2113 2196
2114 2197 if match is None:
2115 2198 return []
2116 2199
2117 2200 expr, prefix0, prefix = match.groups()
2118 2201 try:
2119 2202 obj = eval(expr, self.namespace)
2120 2203 except Exception:
2121 2204 try:
2122 2205 obj = eval(expr, self.global_namespace)
2123 2206 except Exception:
2124 2207 return []
2125 2208
2126 2209 keys = self._get_keys(obj)
2127 2210 if not keys:
2128 2211 return keys
2129 2212
2130 2213 extra_prefix = eval(prefix0) if prefix0 != '' else None
2131 2214
2132 2215 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2133 2216 if not matches:
2134 2217 return matches
2135 2218
2136 2219 # get the cursor position of
2137 2220 # - the text being completed
2138 2221 # - the start of the key text
2139 2222 # - the start of the completion
2140 2223 text_start = len(self.text_until_cursor) - len(text)
2141 2224 if prefix:
2142 2225 key_start = match.start(3)
2143 2226 completion_start = key_start + token_offset
2144 2227 else:
2145 2228 key_start = completion_start = match.end()
2146 2229
2147 2230 # grab the leading prefix, to make sure all completions start with `text`
2148 2231 if text_start > key_start:
2149 2232 leading = ''
2150 2233 else:
2151 2234 leading = text[text_start:completion_start]
2152 2235
2153 2236 # the index of the `[` character
2154 2237 bracket_idx = match.end(1)
2155 2238
2156 2239 # append closing quote and bracket as appropriate
2157 2240 # this is *not* appropriate if the opening quote or bracket is outside
2158 2241 # the text given to this method
2159 2242 suf = ''
2160 2243 continuation = self.line_buffer[len(self.text_until_cursor):]
2161 2244 if key_start > text_start and closing_quote:
2162 2245 # quotes were opened inside text, maybe close them
2163 2246 if continuation.startswith(closing_quote):
2164 2247 continuation = continuation[len(closing_quote):]
2165 2248 else:
2166 2249 suf += closing_quote
2167 2250 if bracket_idx > text_start:
2168 2251 # brackets were opened inside text, maybe close them
2169 2252 if not continuation.startswith(']'):
2170 2253 suf += ']'
2171 2254
2172 2255 return [leading + k + suf for k in matches]
2173 2256
2174 2257 @context_matcher()
2175 2258 def unicode_name_matcher(self, context):
2259 """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
2176 2260 fragment, matches = self.unicode_name_matches(context.token)
2177 2261 return _convert_matcher_v1_result_to_v2(
2178 2262 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2179 2263 )
2180 2264
2181 2265 @staticmethod
2182 2266 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2183 2267 """Match Latex-like syntax for unicode characters base
2184 2268 on the name of the character.
2185 2269
2186 2270 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2187 2271
2188 2272 Works only on valid Python 3 identifiers, or on combining characters that
2189 2273 will combine to form a valid identifier.
2190 2274 """
2191 2275 slashpos = text.rfind('\\')
2192 2276 if slashpos > -1:
2193 2277 s = text[slashpos+1:]
2194 2278 try:
2195 2279 unic = unicodedata.lookup(s)
2196 2280 # allow combining chars
2197 2281 if ('a'+unic).isidentifier():
2198 2282 return '\\'+s,[unic]
2199 2283 except KeyError:
2200 2284 pass
2201 2285 return '', []
2202 2286
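The lookup above leans directly on the standard library: the text after the last backslash is resolved with :any:`unicodedata.lookup` and kept only if it forms (part of) a valid identifier. A small sketch of that resolve-and-validate step (``lookup_or_none`` is a hypothetical helper):

```python
import unicodedata

# \GREEK SMALL LETTER ETA<tab> -> Ξ·: the looked-up character is offered
# because "a" + Ξ· is still a valid Python identifier
eta = unicodedata.lookup("GREEK SMALL LETTER ETA")

def lookup_or_none(name):
    """Unknown names raise KeyError, which the method above swallows."""
    try:
        return unicodedata.lookup(name)
    except KeyError:
        return None
```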
2203 2287 @context_matcher()
2204 2288 def latex_name_matcher(self, context):
2205 2289 """Match Latex syntax for unicode characters.
2206 2290
2207 2291 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2208 2292 """
2209 2293 fragment, matches = self.latex_matches(context.token)
2210 2294 return _convert_matcher_v1_result_to_v2(
2211 2295 matches, type="latex", fragment=fragment, suppress_if_matches=True
2212 2296 )
2213 2297
2214 2298 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2215 2299 """Match Latex syntax for unicode characters.
2216 2300
2217 2301 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2218 2302
2219 DEPRECATED: Deprecated since 8.6. Use `latex_matcher` instead.
2303 .. deprecated:: 8.6
2304 You can use :meth:`latex_name_matcher` instead.
2220 2305 """
2221 2306 slashpos = text.rfind('\\')
2222 2307 if slashpos > -1:
2223 2308 s = text[slashpos:]
2224 2309 if s in latex_symbols:
2225 2310 # Try to complete a full latex symbol to unicode
2226 2311 # \\alpha -> Ξ±
2227 2312 return s, [latex_symbols[s]]
2228 2313 else:
2229 2314 # If a user has partially typed a latex symbol, give them
2230 2315 # a full list of options \al -> [\aleph, \alpha]
2231 2316 matches = [k for k in latex_symbols if k.startswith(s)]
2232 2317 if matches:
2233 2318 return s, matches
2234 2319 return '', ()
2235 2320
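The logic above amounts to an exact hit or a prefix scan over the symbol table. A self-contained sketch against a tiny stand-in table (the real one lives in ``IPython.core.latex_symbols``):

```python
# three-entry stand-in for IPython's full latex_symbols table
latex_symbols = {"\\alpha": "\u03b1", "\\aleph": "\u2135", "\\beta": "\u03b2"}

def latex_matches(text):
    """Sketch of the method above, run against the stand-in table."""
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", ()
    s = text[slashpos:]
    if s in latex_symbols:
        # full symbol name: offer the character itself, \alpha -> Ξ±
        return s, [latex_symbols[s]]
    # partial name: offer every symbol sharing the prefix, \al -> [\aleph, \alpha]
    matches = [k for k in latex_symbols if k.startswith(s)]
    return (s, matches) if matches else ("", ())
```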
2236 2321 @context_matcher()
2237 2322 def custom_completer_matcher(self, context):
2323 """Dispatch custom completer.
2324
2325 If a match is found, suppresses all other matchers except for Jedi.
2326 """
2238 2327 matches = self.dispatch_custom_completer(context.token) or []
2239 2328 result = _convert_matcher_v1_result_to_v2(
2240 matches, type="<unknown>", suppress_if_matches=True
2329 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2241 2330 )
2242 2331 result["ordered"] = True
2243 2332 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2244 2333 return result
2245 2334
2246 2335 def dispatch_custom_completer(self, text):
2247 2336 """
2248 DEPRECATED: Deprecated since 8.6. Use `custom_completer_matcher` instead.
2337 .. deprecated:: 8.6
2338 You can use :meth:`custom_completer_matcher` instead.
2249 2339 """
2250 2340 if not self.custom_completers:
2251 2341 return
2252 2342
2253 2343 line = self.line_buffer
2254 2344 if not line.strip():
2255 2345 return None
2256 2346
2257 2347 # Create a little structure to pass all the relevant information about
2258 2348 # the current completion to any custom completer.
2259 2349 event = SimpleNamespace()
2260 2350 event.line = line
2261 2351 event.symbol = text
2262 2352 cmd = line.split(None,1)[0]
2263 2353 event.command = cmd
2264 2354 event.text_until_cursor = self.text_until_cursor
2265 2355
2266 2356 # for foo etc, try also to find completer for %foo
2267 2357 if not cmd.startswith(self.magic_escape):
2268 2358 try_magic = self.custom_completers.s_matches(
2269 2359 self.magic_escape + cmd)
2270 2360 else:
2271 2361 try_magic = []
2272 2362
2273 2363 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2274 2364 try_magic,
2275 2365 self.custom_completers.flat_matches(self.text_until_cursor)):
2276 2366 try:
2277 2367 res = c(event)
2278 2368 if res:
2279 2369 # first, try case sensitive match
2280 2370 withcase = [r for r in res if r.startswith(text)]
2281 2371 if withcase:
2282 2372 return withcase
2283 2373 # if none, then case insensitive ones are ok too
2284 2374 text_low = text.lower()
2285 2375 return [r for r in res if r.lower().startswith(text_low)]
2286 2376 except TryNext:
2287 2377 pass
2288 2378 except KeyboardInterrupt:
2289 2379 """
2290 2380 If a custom completer takes too long,
2291 2381 let keyboard interrupt abort and return nothing.
2292 2382 """
2293 2383 break
2294 2384
2295 2385 return None
2296 2386
2297 2387 def completions(self, text: str, offset: int)->Iterator[Completion]:
2298 2388 """
2299 2389 Returns an iterator over the possible completions
2300 2390
2301 2391 .. warning::
2302 2392
2303 2393 Unstable
2304 2394
2305 2395 This function is unstable, API may change without warning.
2306 2396 It will also raise unless used in the proper context manager.
2307 2397
2308 2398 Parameters
2309 2399 ----------
2310 2400 text : str
2311 2401 Full text of the current input, multi line string.
2312 2402 offset : int
2313 2403 Integer representing the position of the cursor in ``text``. Offset
2314 2404 is 0-based indexed.
2315 2405
2316 2406 Yields
2317 2407 ------
2318 2408 Completion
2319 2409
2320 2410 Notes
2321 2411 -----
2322 2412 The cursor on a text can either be seen as being "in between"
2323 2413 characters or "On" a character depending on the interface visible to
2324 2414 the user. For consistency, the cursor being "in between" characters X
2325 2415 and Y is equivalent to the cursor being "on" character Y, that is to say
2326 2416 the character the cursor is on is considered as being after the cursor.
2327 2417
2328 2418 Combining characters may span more than one position in the
2329 2419 text.
2330 2420
2331 2421 .. note::
2332 2422
2333 2423 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2334 2424 fake Completion token to distinguish completions returned by Jedi
2335 2425 from the usual IPython completions.
2336 2426
2337 2427 .. note::
2338 2428
2339 2429 Completions are not completely deduplicated yet. If identical
2340 2430 completions are coming from different sources this function does not
2341 2431 ensure that each completion object will only be present once.
2342 2432 """
2343 2433 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2344 2434 "It may change without warnings. "
2345 2435 "Use in corresponding context manager.",
2346 2436 category=ProvisionalCompleterWarning, stacklevel=2)
2347 2437
2348 2438 seen = set()
2349 2439 profiler:Optional[cProfile.Profile]
2350 2440 try:
2351 2441 if self.profile_completions:
2352 2442 import cProfile
2353 2443 profiler = cProfile.Profile()
2354 2444 profiler.enable()
2355 2445 else:
2356 2446 profiler = None
2357 2447
2358 2448 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2359 2449 if c and (c in seen):
2360 2450 continue
2361 2451 yield c
2362 2452 seen.add(c)
2363 2453 except KeyboardInterrupt:
2364 2454 """if completions take too long and users send keyboard interrupt,
2365 2455 do not crash and return ASAP. """
2366 2456 pass
2367 2457 finally:
2368 2458 if profiler is not None:
2369 2459 profiler.disable()
2370 2460 ensure_dir_exists(self.profiler_output_dir)
2371 2461 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2372 2462 print("Writing profiler output to", output_path)
2373 2463 profiler.dump_stats(output_path)
2374 2464
2375 2465 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2376 2466 """
2377 2467 Core completion module. Same signature as :any:`completions`, with the
2378 2468 extra ``_timeout`` parameter (in seconds).
2379 2469
2380 2470 Computing jedi's completion ``.type`` can be quite expensive (it is a
2381 2471 lazy property) and can require some warm-up, more warm up than just
2382 2472 computing the ``name`` of a completion. The warm-up can be:
2383 2473
2384 2474 - Long warm-up the first time a module is encountered after
2385 2475 install/update: actually build parse/inference tree.
2386 2476
2387 2477 - first time the module is encountered in a session: load tree from
2388 2478 disk.
2389 2479
2390 2480 We don't want to block completions for tens of seconds so we give the
2391 2481 completer a "budget" of ``_timeout`` seconds per invocation to compute
2392 2482 completions types, the completions that have not yet been computed will
2393 2483 be marked as "unknown" an will have a chance to be computed next round
2394 2484 are things get cached.
2395 2485
2396 2486 Keep in mind that Jedi is not the only thing processing the completion, so
2397 2487 keep the timeout short-ish: if we take more than 0.3 seconds we still
2398 2488 have lots of processing to do.
2399 2489
2400 2490 """
2401 2491 deadline = time.monotonic() + _timeout
2402 2492
2403 2493 before = full_text[:offset]
2404 2494 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2405 2495
2406 2496 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2407 2497
2408 2498 results = self._complete(
2409 2499 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2410 2500 )
2411 2501 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2412 2502 identifier: result
2413 2503 for identifier, result in results.items()
2414 2504 if identifier != jedi_matcher_id
2415 2505 }
2416 2506
2417 2507 jedi_matches = (
2418 2508 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2419 2509 if jedi_matcher_id in results
2420 2510 else ()
2421 2511 )
2422 2512
2423 2513 iter_jm = iter(jedi_matches)
2424 2514 if _timeout:
2425 2515 for jm in iter_jm:
2426 2516 try:
2427 2517 type_ = jm.type
2428 2518 except Exception:
2429 2519 if self.debug:
2430 2520 print("Error in Jedi getting type of ", jm)
2431 2521 type_ = None
2432 2522 delta = len(jm.name_with_symbols) - len(jm.complete)
2433 2523 if type_ == 'function':
2434 2524 signature = _make_signature(jm)
2435 2525 else:
2436 2526 signature = ''
2437 2527 yield Completion(start=offset - delta,
2438 2528 end=offset,
2439 2529 text=jm.name_with_symbols,
2440 2530 type=type_,
2441 2531 signature=signature,
2442 2532 _origin='jedi')
2443 2533
2444 2534 if time.monotonic() > deadline:
2445 2535 break
2446 2536
2447 2537 for jm in iter_jm:
2448 2538 delta = len(jm.name_with_symbols) - len(jm.complete)
2449 2539 yield Completion(
2450 2540 start=offset - delta,
2451 2541 end=offset,
2452 2542 text=jm.name_with_symbols,
2453 2543 type=_UNKNOWN_TYPE, # don't compute type for speed
2454 2544 _origin="jedi",
2455 2545 signature="",
2456 2546 )
2457 2547
2458 2548 # TODO:
2459 2549 # Suppress this, right now just for debug.
2460 2550 if jedi_matches and non_jedi_results and self.debug:
2461 2551 some_start_offset = before.rfind(
2462 2552 next(iter(non_jedi_results.values()))["matched_fragment"]
2463 2553 )
2464 2554 yield Completion(
2465 2555 start=some_start_offset,
2466 2556 end=offset,
2467 2557 text="--jedi/ipython--",
2468 2558 _origin="debug",
2469 2559 type="none",
2470 2560 signature="",
2471 2561 )
2472 2562
2473 2563 ordered = []
2474 2564 sortable = []
2475 2565
2476 2566 for origin, result in non_jedi_results.items():
2477 2567 matched_text = result["matched_fragment"]
2478 2568 start_offset = before.rfind(matched_text)
2479 2569 is_ordered = result.get("ordered", False)
2480 2570 container = ordered if is_ordered else sortable
2481 2571
2482 2572 # I'm unsure if this is always true, so let's assert and see if it
2483 2573 # crashes
2484 2574 assert before.endswith(matched_text)
2485 2575
2486 2576 for simple_completion in result["completions"]:
2487 2577 completion = Completion(
2488 2578 start=start_offset,
2489 2579 end=offset,
2490 2580 text=simple_completion.text,
2491 2581 _origin=origin,
2492 2582 signature="",
2493 2583 type=simple_completion.type or _UNKNOWN_TYPE,
2494 2584 )
2495 2585 container.append(completion)
2496 2586
2497 2587 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2498 2588 :MATCHES_LIMIT
2499 2589 ]
2500 2590
2501 2591 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2502 2592 """Find completions for the given text and line context.
2503 2593
2504 2594 Note that both the text and the line_buffer are optional, but at least
2505 2595 one of them must be given.
2506 2596
2507 2597 Parameters
2508 2598 ----------
2509 2599 text : string, optional
2510 2600 Text to perform the completion on. If not given, the line buffer
2511 2601 is split using the instance's CompletionSplitter object.
2512 2602 line_buffer : string, optional
2513 2603 If not given, the completer attempts to obtain the current line
2514 2604 buffer via readline. This keyword allows clients which are
2515 2605 requesting for text completions in non-readline contexts to inform
2516 2606 the completer of the entire text.
2517 2607 cursor_pos : int, optional
2518 2608 Index of the cursor in the full line buffer. Should be provided by
2519 2609 remote frontends where kernel has no access to frontend state.
2520 2610
2521 2611 Returns
2522 2612 -------
2523 2613 Tuple of two items:
2524 2614 text : str
2525 2615 Text that was actually used in the completion.
2526 2616 matches : list
2527 2617 A list of completion matches.
2528 2618
2529 2619 Notes
2530 2620 -----
2531 2621 This API is likely to be deprecated and replaced by
2532 2622 :any:`IPCompleter.completions` in the future.
2533 2623
2534 2624 """
2535 2625 warnings.warn('`Completer.complete` is pending deprecation since '
2536 2626 'IPython 6.0 and will be replaced by `Completer.completions`.',
2537 2627 PendingDeprecationWarning)
2538 2628 # potential todo, FOLD the 3rd throw away argument of _complete
2539 2629 # into the first 2 one.
2540 2630 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2541 2631 # TODO: should we deprecate now, or does it stay?
2542 2632
2543 2633 results = self._complete(
2544 2634 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2545 2635 )
2546 2636
2547 2637 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2548 2638
2549 2639 return self._arrange_and_extract(
2550 2640 results,
2551 2641 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
2552 2642 skip_matchers={jedi_matcher_id},
2553 2643 # this API does not support different start/end positions (fragments of token).
2554 2644 abort_if_offset_changes=True,
2555 2645 )
2556 2646
2557 2647 def _arrange_and_extract(
2558 2648 self,
2559 2649 results: Dict[str, MatcherResult],
2560 2650 skip_matchers: Set[str],
2561 2651 abort_if_offset_changes: bool,
2562 2652 ):
2563 2653
2564 2654 sortable = []
2565 2655 ordered = []
2566 2656 most_recent_fragment = None
2567 2657 for identifier, result in results.items():
2568 2658 if identifier in skip_matchers:
2569 2659 continue
2570 2660 if not result["completions"]:
2571 2661 continue
2572 2662 if not most_recent_fragment:
2573 2663 most_recent_fragment = result["matched_fragment"]
2574 2664 if (
2575 2665 abort_if_offset_changes
2576 2666 and result["matched_fragment"] != most_recent_fragment
2577 2667 ):
2578 2668 break
2579 2669 if result.get("ordered", False):
2580 2670 ordered.extend(result["completions"])
2581 2671 else:
2582 2672 sortable.extend(result["completions"])
2583 2673
2584 2674 if not most_recent_fragment:
2585 2675 most_recent_fragment = "" # to satisfy typechecker (and just in case)
2586 2676
2587 2677 return most_recent_fragment, [
2588 2678 m.text for m in self._deduplicate(ordered + self._sort(sortable))
2589 2679 ]
2590 2680
2591 2681 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2592 2682 full_text=None) -> _CompleteResult:
2593 2683 """
2594 2684 Like complete but can also return raw jedi completions as well as the
2595 2685 origin of the completion text. This could (and should) be made much
2596 2686 cleaner but that will be simpler once we drop the old (and stateful)
2597 2687 :any:`complete` API.
2598 2688
2599 2689 With the current provisional API, cursor_pos acts both (depending on the
2600 2690 caller) as the offset in the ``text`` or ``line_buffer``, or as the
2601 2691 ``column`` when passing multiline strings. This could/should be renamed
2602 2692 but would add extra noise.
2603 2693
2604 2694 Parameters
2605 2695 ----------
2606 2696 cursor_line
2607 2697 Index of the line the cursor is on. 0 indexed.
2608 2698 cursor_pos
2609 2699 Position of the cursor in the current line/line_buffer/text. 0
2610 2700 indexed.
2611 2701 line_buffer : str, optional
2612 2702 The current line the cursor is in; this is mostly due to the legacy
2613 2703 reason that readline could only give us the single current line.
2614 2704 Prefer `full_text`.
2615 2705 text : str
2616 2706 The current "token" the cursor is in, mostly also for historical
2617 2707 reasons, as the completer would trigger only after the current line
2618 2708 was parsed.
2619 2709 full_text : str
2620 2710 Full text of the current cell.
2621 2711
2622 2712 Returns
2623 2713 -------
2624 2714 An ordered dictionary where keys are identifiers of completion
2625 2715 matchers and values are ``MatcherResult``s.
2626 2716 """
2627 2717
2628 2718 # if the cursor position isn't given, the only sane assumption we can
2629 2719 # make is that it's at the end of the line (the common case)
2630 2720 if cursor_pos is None:
2631 2721 cursor_pos = len(line_buffer) if text is None else len(text)
2632 2722
2633 2723 if self.use_main_ns:
2634 2724 self.namespace = __main__.__dict__
2635 2725
2636 2726 # if text is either None or an empty string, rely on the line buffer
2637 2727 if (not line_buffer) and full_text:
2638 2728 line_buffer = full_text.split('\n')[cursor_line]
2639 2729 if not text: # issue #11508: check line_buffer before calling split_line
2640 2730 text = (
2641 2731 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
2642 2732 )
2643 2733
2644 2734 # If no line buffer is given, assume the input text is all there was
2645 2735 if line_buffer is None:
2646 2736 line_buffer = text
2647 2737
2648 2738 # deprecated - do not use `line_buffer` in new code.
2649 2739 self.line_buffer = line_buffer
2650 2740 self.text_until_cursor = self.line_buffer[:cursor_pos]
2651 2741
2652 2742 if not full_text:
2653 2743 full_text = line_buffer
2654 2744
2655 2745 context = CompletionContext(
2656 2746 full_text=full_text,
2657 2747 cursor_position=cursor_pos,
2658 2748 cursor_line=cursor_line,
2659 2749 token=text,
2660 2750 limit=MATCHES_LIMIT,
2661 2751 )
2662 2752
2663 2753 # Start with a clean slate of completions
2664 2754 results = {}
2665 2755
2666 2756 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2667 2757
2668 2758 suppressed_matchers = set()
2669 2759
2670 2760 matchers = {
2671 2761 _get_matcher_id(matcher): matcher
2672 2762 for matcher in sorted(
2673 2763 self.matchers, key=_get_matcher_priority, reverse=True
2674 2764 )
2675 2765 }
2676 2766
2677 2767 for matcher_id, matcher in matchers.items():
2678 2768 api_version = _get_matcher_api_version(matcher)
2679 2769 matcher_id = _get_matcher_id(matcher)
2680 2770
2681 2771 if matcher_id in self.disable_matchers:
2682 2772 continue
2683 2773
2684 2774 if matcher_id in results:
2685 2775 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
2686 2776
2687 2777 if matcher_id in suppressed_matchers:
2688 2778 continue
2689 2779
2690 2780 try:
2691 2781 if api_version == 1:
2692 2782 result = _convert_matcher_v1_result_to_v2(
2693 2783 matcher(text), type=_UNKNOWN_TYPE
2694 2784 )
2695 2785 elif api_version == 2:
2696 2786 result = cast(MatcherAPIv2, matcher)(context)
2697 2787 else:
2698 2788 raise ValueError(f"Unsupported API version {api_version}")
2699 2789 except:
2700 2790 # Show the ugly traceback if the matcher causes an
2701 2791 # exception, but do NOT crash the kernel!
2702 2792 sys.excepthook(*sys.exc_info())
2703 2793 continue
2704 2794
2705 2795 # set default value for matched fragment if suffix was not selected.
2706 2796 result["matched_fragment"] = result.get("matched_fragment", context.token)
2707 2797
2708 2798 if not suppressed_matchers:
2709 2799 suppression_recommended = result.get("suppress", False)
2710 2800
2711 2801 suppression_config = (
2712 2802 self.suppress_competing_matchers.get(matcher_id, None)
2713 2803 if isinstance(self.suppress_competing_matchers, dict)
2714 2804 else self.suppress_competing_matchers
2715 2805 )
2716 2806 should_suppress = (
2717 2807 (suppression_config is True)
2718 2808 or (suppression_recommended and (suppression_config is not False))
2719 2809 ) and len(result["completions"])
2720 2810
2721 2811 if should_suppress:
2722 2812 suppression_exceptions = result.get("do_not_suppress", set())
2723 2813 try:
2724 2814 to_suppress = set(suppression_recommended)
2725 2815 except TypeError:
2726 2816 to_suppress = set(matchers)
2727 2817 suppressed_matchers = to_suppress - suppression_exceptions
2728 2818
2729 2819 new_results = {}
2730 2820 for previous_matcher_id, previous_result in results.items():
2731 2821 if previous_matcher_id not in suppressed_matchers:
2732 2822 new_results[previous_matcher_id] = previous_result
2733 2823 results = new_results
2734 2824
2735 2825 results[matcher_id] = result
2736 2826
2737 2827 _, matches = self._arrange_and_extract(
2738 2828 results,
2739 2829 # TODO: Jedi completions not included in legacy stateful API; was this deliberate or an omission?
2740 2830 # if it was an omission, we can remove the filtering step, otherwise remove this comment.
2741 2831 skip_matchers={jedi_matcher_id},
2742 2832 abort_if_offset_changes=False,
2743 2833 )
2744 2834
2745 2835 # populate legacy stateful API
2746 2836 self.matches = matches
2747 2837
2748 2838 return results
2749 2839
2750 2840 @staticmethod
2751 2841 def _deduplicate(
2752 2842 matches: Sequence[SimpleCompletion],
2753 2843 ) -> Iterable[SimpleCompletion]:
2754 2844 filtered_matches = {}
2755 2845 for match in matches:
2756 2846 text = match.text
2757 2847 if (
2758 2848 text not in filtered_matches
2759 2849 or filtered_matches[text].type == _UNKNOWN_TYPE
2760 2850 ):
2761 2851 filtered_matches[text] = match
2762 2852
2763 2853 return filtered_matches.values()
2764 2854
2765 2855 @staticmethod
2766 2856 def _sort(matches: Sequence[SimpleCompletion]):
2767 2857 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
2768 2858
2769 2859 @context_matcher()
2770 2860 def fwd_unicode_matcher(self, context):
2771 """Same as ``fwd_unicode_match``, but adopted to new Matcher API."""
2861 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
2772 2862 fragment, matches = self.latex_matches(context.token)
2773 2863 return _convert_matcher_v1_result_to_v2(
2774 2864 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2775 2865 )
2776 2866
2777 2867 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
2778 2868 """
2779 2869 Forward match a string starting with a backslash with a list of
2780 2870 potential Unicode completions.
2781 2871
2782 Will compute list list of Unicode character names on first call and cache it.
2872 Will compute list of Unicode character names on first call and cache it.
2873
2874 .. deprecated:: 8.6
2875 You can use :meth:`fwd_unicode_matcher` instead.
2783 2876
2784 2877 Returns
2785 2878 -------
2786 2879 A tuple with:
2787 2880 - matched text (empty if no matches)
2788 2881 - list of potential completions (empty tuple otherwise)
2789
2790 DEPRECATED: Deprecated since 8.6. Use `fwd_unicode_matcher` instead.
2791 2882 """
2792 2883 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call.
2793 2884 # We could do a faster match using a Trie.
2794 2885
2795 2886 # Using pygtrie the following seem to work:
2796 2887
2797 2888 # s = PrefixSet()
2798 2889
2799 2890 # for c in range(0,0x10FFFF + 1):
2800 2891 # try:
2801 2892 # s.add(unicodedata.name(chr(c)))
2802 2893 # except ValueError:
2803 2894 # pass
2804 2895 # [''.join(k) for k in s.iter(prefix)]
2805 2896
2806 2897 # But need to be timed and adds an extra dependency.
2807 2898
2808 2899 slashpos = text.rfind('\\')
2809 2900 # if text starts with slash
2810 2901 if slashpos > -1:
2811 2902 # PERF: It's important that we don't access self._unicode_names
2812 2903 # until we're inside this if-block. _unicode_names is lazily
2813 2904 # initialized, and it takes a user-noticeable amount of time to
2814 2905 # initialize it, so we don't want to initialize it unless we're
2815 2906 # actually going to use it.
2816 2907 s = text[slashpos + 1 :]
2817 2908 sup = s.upper()
2818 2909 candidates = [x for x in self.unicode_names if x.startswith(sup)]
2819 2910 if candidates:
2820 2911 return s, candidates
2821 2912 candidates = [x for x in self.unicode_names if sup in x]
2822 2913 if candidates:
2823 2914 return s, candidates
2824 2915 splitsup = sup.split(" ")
2825 2916 candidates = [
2826 2917 x for x in self.unicode_names if all(u in x for u in splitsup)
2827 2918 ]
2828 2919 if candidates:
2829 2920 return s, candidates
2830 2921
2831 2922 return "", ()
2832 2923
2833 2924 # if text does not start with slash
2834 2925 else:
2835 2926 return '', ()
2836 2927
2837 2928 @property
2838 2929 def unicode_names(self) -> List[str]:
2839 2930 """List of names of unicode code points that can be completed.
2840 2931
2841 2932 The list is lazily initialized on first access.
2842 2933 """
2843 2934 if self._unicode_names is None:
2850 2941 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2851 2942
2852 2943 return self._unicode_names
2853 2944
2854 2945 def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
2855 2946 names = []
2856 2947 for start, stop in ranges:
2857 2948 for c in range(start, stop):
2858 2949 try:
2859 2950 names.append(unicodedata.name(chr(c)))
2860 2951 except ValueError:
2861 2952 pass
2862 2953 return names
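The time-budget pattern used by `_completions` above — compute the expensive Jedi `.type` for each match until the deadline passes, then emit the remaining matches with a cheap "unknown" placeholder — can be sketched in isolation. The `budgeted` helper below is purely illustrative (it is not part of IPython's API); it shares one iterator between the expensive and the cheap loop, exactly as `iter_jm` is shared above:

```python
import time

def budgeted(items, expensive, cheap, budget_s=0.3):
    """Yield expensive(item) until the time budget is spent,
    then fall back to cheap(item) for the remainder.

    Hypothetical helper mirroring the deadline logic in _completions.
    """
    deadline = time.monotonic() + budget_s
    it = iter(items)  # single iterator shared by both loops
    for item in it:
        yield expensive(item)
        if time.monotonic() > deadline:
            break  # budget exhausted: stop doing expensive work
    for item in it:
        yield cheap(item)  # remaining items get the cheap fallback
```

With a generous budget every item gets the expensive treatment; with an already-expired budget only the first item does, and the rest fall through to the cheap path — matching how leftover Jedi matches are tagged `_UNKNOWN_TYPE`.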
@@ -1,58 +1,83 b''
1 1 # encoding: utf-8
2 2 """Decorators that don't go anywhere else.
3 3
4 4 This module contains misc. decorators that don't really go with another module
5 in :mod:`IPython.utils`. Beore putting something here please see if it should
5 in :mod:`IPython.utils`. Before putting something here please see if it should
6 6 go into another topical module in :mod:`IPython.utils`.
7 7 """
8 8
9 9 #-----------------------------------------------------------------------------
10 10 # Copyright (C) 2008-2011 The IPython Development Team
11 11 #
12 12 # Distributed under the terms of the BSD License. The full license is in
13 13 # the file COPYING, distributed as part of this software.
14 14 #-----------------------------------------------------------------------------
15 15
16 16 #-----------------------------------------------------------------------------
17 17 # Imports
18 18 #-----------------------------------------------------------------------------
19 from typing import Sequence
20
21 from IPython.utils.docs import GENERATING_DOCUMENTATION
22
19 23
20 24 #-----------------------------------------------------------------------------
21 25 # Code
22 26 #-----------------------------------------------------------------------------
23 27
24 28 def flag_calls(func):
25 29 """Wrap a function to detect and flag when it gets called.
26 30
27 31 This is a decorator which takes a function and wraps it in a function with
28 32 a 'called' attribute. wrapper.called is initialized to False.
29 33
30 34 The wrapper.called attribute is set to False right before each call to the
31 35 wrapped function, so if the call fails it remains False. After the call
32 36 completes, wrapper.called is set to True and the output is returned.
33 37
34 38 Testing for truth in wrapper.called allows you to determine if a call to
35 39 func() was attempted and succeeded."""
36 40
37 41 # don't wrap twice
38 42 if hasattr(func, 'called'):
39 43 return func
40 44
41 45 def wrapper(*args,**kw):
42 46 wrapper.called = False
43 47 out = func(*args,**kw)
44 48 wrapper.called = True
45 49 return out
46 50
47 51 wrapper.called = False
48 52 wrapper.__doc__ = func.__doc__
49 53 return wrapper
50 54
55
51 56 def undoc(func):
52 57 """Mark a function or class as undocumented.
53 58
54 59 This is found by inspecting the AST, so for now it must be used directly
55 60 as @undoc, not as e.g. @decorators.undoc
56 61 """
57 62 return func
58 63
64
65 def sphinx_options(
66 show_inheritance: bool = True,
67 show_inherited_members: bool = False,
68 exclude_inherited_from: Sequence[str] = tuple(),
69 ):
70 """Set sphinx options"""
71
72 def wrapper(func):
73 if not GENERATING_DOCUMENTATION:
74 return func
75
76 func._sphinx_options = dict(
77 show_inheritance=show_inheritance,
78 show_inherited_members=show_inherited_members,
79 exclude_inherited_from=exclude_inherited_from,
80 )
81 return func
82
83 return wrapper
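A quick usage sketch of the `flag_calls` decorator defined above. The decorator body is restated verbatim so the example runs standalone; `compute` is a hypothetical function used only for the demo:

```python
def flag_calls(func):
    # Restated from IPython.utils.decorators.flag_calls for a standalone demo.
    if hasattr(func, 'called'):
        return func

    def wrapper(*args, **kw):
        wrapper.called = False  # reset before each call, so failures leave it False
        out = func(*args, **kw)
        wrapper.called = True   # only reached if the call succeeded
        return out

    wrapper.called = False
    wrapper.__doc__ = func.__doc__
    return wrapper

@flag_calls
def compute(x):
    return x * 2

print(compute.called)  # False: no call attempted yet
print(compute(3))      # 6
print(compute.called)  # True: call attempted and succeeded
```

Note that `wrapper.called` is reset to `False` at the start of every call, so after a raising call the flag reads `False` even if an earlier call had succeeded.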
@@ -1,326 +1,334 b''
1 1 # -*- coding: utf-8 -*-
2 2 #
3 3 # IPython documentation build configuration file.
4 4
5 5 # NOTE: This file has been edited manually from the auto-generated one from
6 6 # sphinx. Do NOT delete and re-generate. If any changes from sphinx are
7 7 # needed, generate a scratch one and merge by hand any new fields needed.
8 8
9 9 #
10 10 # This file is execfile()d with the current directory set to its containing dir.
11 11 #
12 12 # The contents of this file are pickled, so don't put values in the namespace
13 13 # that aren't pickleable (module imports are okay, they're removed automatically).
14 14 #
15 15 # All configuration values have a default value; values that are commented out
16 16 # serve to show the default value.
17 17
18 18 import sys, os
19 19 from pathlib import Path
20 20
21 21 # https://read-the-docs.readthedocs.io/en/latest/faq.html
22 22 ON_RTD = os.environ.get('READTHEDOCS', None) == 'True'
23 23
24 24 if ON_RTD:
25 25 tags.add('rtd')
26 26
27 27 # RTD doesn't use the Makefile, so re-run autogen_{things}.py here.
28 28 for name in ("config", "api", "magics", "shortcuts"):
29 29 fname = Path("autogen_{}.py".format(name))
30 30 fpath = (Path(__file__).parent).joinpath("..", fname)
31 31 with open(fpath, encoding="utf-8") as f:
32 32 exec(
33 33 compile(f.read(), fname, "exec"),
34 34 {
35 35 "__file__": fpath,
36 36 "__name__": "__main__",
37 37 },
38 38 )
39 39 else:
40 40 import sphinx_rtd_theme
41 41 html_theme = "sphinx_rtd_theme"
42 42 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
43 43
44 # Allow Python scripts to change behaviour during sphinx run
45 os.environ["IN_SPHINX_RUN"] = "True"
46
47 autodoc_type_aliases = {
48 "Matcher": " IPython.core.completer.Matcher",
49 "MatcherAPIv1": " IPython.core.completer.MatcherAPIv1",
50 }
51
44 52 # If your extensions are in another directory, add it here. If the directory
45 53 # is relative to the documentation root, use os.path.abspath to make it
46 54 # absolute, like shown here.
47 55 sys.path.insert(0, os.path.abspath('../sphinxext'))
48 56
49 57 # We load the ipython release info into a dict by explicit execution
50 58 iprelease = {}
51 59 exec(
52 60 compile(
53 61 open("../../IPython/core/release.py", encoding="utf-8").read(),
54 62 "../../IPython/core/release.py",
55 63 "exec",
56 64 ),
57 65 iprelease,
58 66 )
59 67
60 68 # General configuration
61 69 # ---------------------
62 70
63 71 # Add any Sphinx extension module names here, as strings. They can be extensions
64 72 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
65 73 extensions = [
66 74 'sphinx.ext.autodoc',
67 75 'sphinx.ext.autosummary',
68 76 'sphinx.ext.doctest',
69 77 'sphinx.ext.inheritance_diagram',
70 78 'sphinx.ext.intersphinx',
71 79 'sphinx.ext.graphviz',
72 80 'IPython.sphinxext.ipython_console_highlighting',
73 81 'IPython.sphinxext.ipython_directive',
74 82 'sphinx.ext.napoleon', # to preprocess docstrings
75 83 'github', # for easy GitHub links
76 84 'magics',
77 85 'configtraits',
78 86 ]
79 87
80 88 # Add any paths that contain templates here, relative to this directory.
81 89 templates_path = ['_templates']
82 90
83 91 # The suffix of source filenames.
84 92 source_suffix = '.rst'
85 93
86 94 rst_prolog = ''
87 95
88 96 def is_stable(extra):
89 97 for ext in {'dev', 'b', 'rc'}:
90 98 if ext in extra:
91 99 return False
92 100 return True
93 101
94 102 if is_stable(iprelease['_version_extra']):
95 103 tags.add('ipystable')
96 104 print('Adding Tag: ipystable')
97 105 else:
98 106 tags.add('ipydev')
99 107 print('Adding Tag: ipydev')
100 108 rst_prolog += """
101 109 .. warning::
102 110
103 111 This documentation covers a development version of IPython. The development
104 112 version may differ significantly from the latest stable release.
105 113 """
106 114
107 115 rst_prolog += """
108 116 .. important::
109 117
110 118 This documentation covers IPython versions 6.0 and higher. Beginning with
111 119 version 6.0, IPython stopped supporting compatibility with Python versions
112 120 lower than 3.3 including all versions of Python 2.7.
113 121
114 122 If you are looking for an IPython version compatible with Python 2.7,
115 123 please use the IPython 5.x LTS release and refer to its documentation (LTS
116 124 is the long term support release).
117 125
118 126 """
119 127
120 128 # The master toctree document.
121 129 master_doc = 'index'
122 130
123 131 # General substitutions.
124 132 project = 'IPython'
125 133 copyright = 'The IPython Development Team'
126 134
127 135 # ghissue config
128 136 github_project_url = "https://github.com/ipython/ipython"
129 137
130 138 # numpydoc config
131 139 numpydoc_show_class_members = False # Otherwise Sphinx emits thousands of warnings
132 140 numpydoc_class_members_toctree = False
133 141 warning_is_error = True
134 142
135 143 import logging
136 144
137 145 class ConfigtraitFilter(logging.Filter):
138 146 """
139 147 This is a filter to remove in sphinx 3+ the error about config traits being duplicated.
140 148
141 149 As we autogenerate configuration traits from, subclasses have lots of
142 150 duplication and we want to silence them. Indeed we build on travis with
143 151 warnings-as-error set to True, so those duplicate items make the build fail.
144 152 """
145 153
146 154 def filter(self, record):
147 155 if record.args and record.args[0] == 'configtrait' and 'duplicate' in record.msg:
148 156 return False
149 157 return True
150 158
151 159 ct_filter = ConfigtraitFilter()
152 160
153 161 import sphinx.util
154 162 logger = sphinx.util.logging.getLogger('sphinx.domains.std').logger
155 163
156 164 logger.addFilter(ct_filter)
157 165
158 166 # The default replacements for |version| and |release|, also used in various
159 167 # other places throughout the built documents.
160 168 #
161 169 # The full version, including alpha/beta/rc tags.
162 170 release = "%s" % iprelease['version']
163 171 # Just the X.Y.Z part, no '-dev'
164 172 version = iprelease['version'].split('-', 1)[0]
165 173
166 174
167 175 # There are two options for replacing |today|: either, you set today to some
168 176 # non-false value, then it is used:
169 177 #today = ''
170 178 # Else, today_fmt is used as the format for a strftime call.
171 179 today_fmt = '%B %d, %Y'
172 180
173 181 # List of documents that shouldn't be included in the build.
174 182 #unused_docs = []
175 183
176 184 # Exclude these glob-style patterns when looking for source files. They are
177 185 # relative to the source/ directory.
178 186 exclude_patterns = []
179 187
180 188
181 189 # If true, '()' will be appended to :func: etc. cross-reference text.
182 190 #add_function_parentheses = True
183 191
184 192 # If true, the current module name will be prepended to all description
185 193 # unit titles (such as .. function::).
186 194 #add_module_names = True
187 195
188 196 # If true, sectionauthor and moduleauthor directives will be shown in the
189 197 # output. They are ignored by default.
190 198 #show_authors = False
191 199
192 200 # The name of the Pygments (syntax highlighting) style to use.
193 201 pygments_style = 'sphinx'
194 202
195 203 # Set the default role so we can use `foo` instead of ``foo``
196 204 default_role = 'literal'
197 205
198 206 # Options for HTML output
199 207 # -----------------------
200 208
201 209 # The style sheet to use for HTML and HTML Help pages. A file of that name
202 210 # must exist either in Sphinx' static/ path, or in one of the custom paths
203 211 # given in html_static_path.
204 212 # html_style = 'default.css'
205 213
206 214
207 215 # The name for this set of Sphinx documents. If None, it defaults to
208 216 # "<project> v<release> documentation".
209 217 #html_title = None
210 218
211 219 # The name of an image file (within the static path) to place at the top of
212 220 # the sidebar.
213 221 #html_logo = None
214 222
215 223 # Add any paths that contain custom static files (such as style sheets) here,
216 224 # relative to this directory. They are copied after the builtin static files,
217 225 # so a file named "default.css" will overwrite the builtin "default.css".
218 226 html_static_path = ['_static']
219 227
220 228 # Favicon needs the directory name
221 229 html_favicon = '_static/favicon.ico'
222 230 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
223 231 # using the given strftime format.
224 232 html_last_updated_fmt = '%b %d, %Y'
225 233
226 234 # If true, SmartyPants will be used to convert quotes and dashes to
227 235 # typographically correct entities.
228 236 #html_use_smartypants = True
229 237
230 238 # Custom sidebar templates, maps document names to template names.
231 239 #html_sidebars = {}
232 240
233 241 # Additional templates that should be rendered to pages, maps page names to
234 242 # template names.
235 243 html_additional_pages = {
236 244 'interactive/htmlnotebook': 'notebook_redirect.html',
237 245 'interactive/notebook': 'notebook_redirect.html',
238 246 'interactive/nbconvert': 'notebook_redirect.html',
239 247 'interactive/public_server': 'notebook_redirect.html',
240 248 }
241 249
242 250 # If false, no module index is generated.
243 251 #html_use_modindex = True
244 252
245 253 # If true, the reST sources are included in the HTML build as _sources/<name>.
246 254 #html_copy_source = True
247 255
248 256 # If true, an OpenSearch description file will be output, and all pages will
249 257 # contain a <link> tag referring to it. The value of this option must be the
250 258 # base URL from which the finished HTML is served.
251 259 #html_use_opensearch = ''
252 260
253 261 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
254 262 #html_file_suffix = ''
255 263
256 264 # Output file base name for HTML help builder.
257 265 htmlhelp_basename = 'ipythondoc'
258 266
259 267 intersphinx_mapping = {'python': ('https://docs.python.org/3/', None),
260 268 'rpy2': ('https://rpy2.github.io/doc/latest/html/', None),
261 269 'jupyterclient': ('https://jupyter-client.readthedocs.io/en/latest/', None),
262 270 'jupyter': ('https://jupyter.readthedocs.io/en/latest/', None),
263 271 'jedi': ('https://jedi.readthedocs.io/en/latest/', None),
264 272 'traitlets': ('https://traitlets.readthedocs.io/en/latest/', None),
265 273 'ipykernel': ('https://ipykernel.readthedocs.io/en/latest/', None),
266 274 'prompt_toolkit' : ('https://python-prompt-toolkit.readthedocs.io/en/stable/', None),
267 275 'ipywidgets': ('https://ipywidgets.readthedocs.io/en/stable/', None),
268 276 'ipyparallel': ('https://ipyparallel.readthedocs.io/en/stable/', None),
269 277 'pip': ('https://pip.pypa.io/en/stable/', None)
270 278 }
271 279
272 280 # Options for LaTeX output
273 281 # ------------------------
274 282
275 283 # The font size ('10pt', '11pt' or '12pt').
276 284 latex_font_size = '11pt'
277 285
278 286 # Grouping the document tree into LaTeX files. List of tuples
279 287 # (source start file, target name, title, author, document class [howto/manual]).
280 288
281 289 latex_documents = [
282 290 ('index', 'ipython.tex', 'IPython Documentation',
283 291 u"""The IPython Development Team""", 'manual', True),
284 292 ('parallel/winhpc_index', 'winhpc_whitepaper.tex',
285 293 'Using IPython on Windows HPC Server 2008',
286 294 u"Brian E. Granger", 'manual', True)
287 295 ]
288 296
289 297 # The name of an image file (relative to this directory) to place at the top of
290 298 # the title page.
291 299 #latex_logo = None
292 300
293 301 # For "manual" documents, if this is true, then toplevel headings are parts,
294 302 # not chapters.
295 303 #latex_use_parts = False
296 304
297 305 # Additional stuff for the LaTeX preamble.
298 306 #latex_preamble = ''
299 307
300 308 # Documents to append as an appendix to all manuals.
301 309 #latex_appendices = []
302 310
303 311 # If false, no module index is generated.
304 312 latex_use_modindex = True
305 313
306 314
307 315 # Options for texinfo output
308 316 # --------------------------
309 317
310 318 texinfo_documents = [
311 319 (master_doc, 'ipython', 'IPython Documentation',
312 320 'The IPython Development Team',
313 321 'IPython',
314 322 'IPython Documentation',
315 323 'Programming',
316 324 1),
317 325 ]
318 326
319 327 modindex_common_prefix = ['IPython.']
320 328
321 329
322 330 # Cleanup
323 331 # -------
324 332 # delete release info to avoid pickling errors from sphinx
325 333
326 334 del iprelease
@@ -1,452 +1,463 b''
1 1 """Attempt to generate templates for module reference with Sphinx
2 2
3 3 XXX - we exclude extension modules
4 4
5 5 To include extension modules, first identify them as valid in the
6 6 ``_uri2path`` method, then handle them in the ``_parse_module`` script.
7 7
8 8 We get functions and classes by parsing the text of .py files.
9 9 Alternatively we could import the modules for discovery, and we'd have
10 10 to do that for extension modules. This would involve changing the
11 11 ``_parse_module`` method to work via import and introspection, and
12 12 might involve changing ``discover_modules`` (which determines which
13 13 files are modules, and therefore which module URIs will be passed to
14 14 ``_parse_module``).
15 15
16 16 NOTE: this is a modified version of a script originally shipped with the
17 17 PyMVPA project, which we've adapted for NIPY use. PyMVPA is an MIT-licensed
18 18 project."""
19 19
20 20
21 21 # Stdlib imports
22 22 import ast
23 23 import inspect
24 24 import os
25 25 import re
26 26 from importlib import import_module
27 from types import SimpleNamespace as Obj
27 28
28 29
29 class Obj(object):
30 '''Namespace to hold arbitrary information.'''
31 def __init__(self, **kwargs):
32 for k, v in kwargs.items():
33 setattr(self, k, v)
34
35 30 class FuncClsScanner(ast.NodeVisitor):
36 31 """Scan a module for top-level functions and classes.
37 32
38 33 Skips objects with an @undoc decorator, or a name starting with '_'.
39 34 """
40 35 def __init__(self):
41 36 ast.NodeVisitor.__init__(self)
42 37 self.classes = []
43 38 self.classes_seen = set()
44 39 self.functions = []
45
40
46 41 @staticmethod
47 42 def has_undoc_decorator(node):
48 43 return any(isinstance(d, ast.Name) and d.id == 'undoc' \
49 44 for d in node.decorator_list)
50 45
51 46 def visit_If(self, node):
52 47 if isinstance(node.test, ast.Compare) \
53 48 and isinstance(node.test.left, ast.Name) \
54 49 and node.test.left.id == '__name__':
55 50 return # Ignore classes defined in "if __name__ == '__main__':"
56 51
57 52 self.generic_visit(node)
58 53
59 54 def visit_FunctionDef(self, node):
60 55 if not (node.name.startswith('_') or self.has_undoc_decorator(node)) \
61 56 and node.name not in self.functions:
62 57 self.functions.append(node.name)
63 58
64 59 def visit_ClassDef(self, node):
65 if not (node.name.startswith('_') or self.has_undoc_decorator(node)) \
66 and node.name not in self.classes_seen:
67 cls = Obj(name=node.name)
68 cls.has_init = any(isinstance(n, ast.FunctionDef) and \
69 n.name=='__init__' for n in node.body)
60 if (
61 not (node.name.startswith("_") or self.has_undoc_decorator(node))
62 and node.name not in self.classes_seen
63 ):
64 cls = Obj(name=node.name, sphinx_options={})
65 cls.has_init = any(
66 isinstance(n, ast.FunctionDef) and n.name == "__init__"
67 for n in node.body
68 )
70 69 self.classes.append(cls)
71 70 self.classes_seen.add(node.name)
72 71
73 72 def scan(self, mod):
74 73 self.visit(mod)
75 74 return self.functions, self.classes
76 75
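The `FuncClsScanner` above walks a module's AST without importing it, collecting public top-level functions and classes. A minimal, self-contained sketch of the same `ast.NodeVisitor` idiom (re-declared here in reduced form; the class name `MiniScanner` and the sample source are illustrative, not part of the script):

```python
import ast

source = '''
def public(): pass
def _private(): pass

class Widget:
    def __init__(self): pass
'''

class MiniScanner(ast.NodeVisitor):
    """Reduced sketch of FuncClsScanner: collect public top-level names."""
    def __init__(self):
        self.functions, self.classes = [], []

    def visit_FunctionDef(self, node):
        if not node.name.startswith('_'):
            self.functions.append(node.name)

    def visit_ClassDef(self, node):
        # No generic_visit() call, so methods inside the class body
        # (e.g. __init__) are not recorded as top-level functions.
        if not node.name.startswith('_'):
            self.classes.append(node.name)

scanner = MiniScanner()
scanner.visit(ast.parse(source))
print(scanner.functions, scanner.classes)  # ['public'] ['Widget']
```

Note that omitting `generic_visit` inside `visit_ClassDef` is what keeps nested functions out of the top-level list, mirroring how the real scanner only inspects `node.body` for `__init__`.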
77 76 # Functions and classes
78 77 class ApiDocWriter(object):
79 78 ''' Class for automatic detection and parsing of API docs
80 79 to Sphinx-parsable reST format'''
81 80
82 81 # only separating first two levels
83 82 rst_section_levels = ['*', '=', '-', '~', '^']
84 83
85 84 def __init__(self,
86 85 package_name,
87 86 rst_extension='.rst',
88 87 package_skip_patterns=None,
89 88 module_skip_patterns=None,
90 89 names_from__all__=None,
91 90 ):
92 91 ''' Initialize package for parsing
93 92
94 93 Parameters
95 94 ----------
96 95 package_name : string
97 96 Name of the top-level package. *package_name* must be the
98 97 name of an importable package
99 98 rst_extension : string, optional
100 99 Extension for reST files, default '.rst'
101 100 package_skip_patterns : None or sequence of {strings, regexps}
102 101 Sequence of strings giving URIs of packages to be excluded
103 102 Operates on the package path, starting at (including) the
104 103 first dot in the package path, after *package_name* - so,
105 104 if *package_name* is ``sphinx``, then ``sphinx.util`` will
106 105 result in ``.util`` being passed for searching by these
107 106 regexps. If None, the default is used. Default is:
108 107 ['\\.tests$']
109 108 module_skip_patterns : None or sequence
110 109 Sequence of strings giving URIs of modules to be excluded
111 110 Operates on the module name including preceding URI path,
112 111 back to the first dot after *package_name*. For example
113 112 ``sphinx.util.console`` results in the search string
114 113 ``.util.console``.
115 114 If None, the default is used. Default is:
116 115 ['\\.setup$', '\\._']
117 116 names_from__all__ : set, optional
118 117 Modules listed in here will be scanned by doing ``from mod import *``,
119 118 rather than finding function and class definitions by scanning the
120 119 AST. This is intended for API modules which expose things defined in
121 120 other files. Modules listed here must define ``__all__`` to avoid
122 121 exposing everything they import.
123 122 '''
124 123 if package_skip_patterns is None:
125 124 package_skip_patterns = ['\\.tests$']
126 125 if module_skip_patterns is None:
127 126 module_skip_patterns = ['\\.setup$', '\\._']
128 127 self.package_name = package_name
129 128 self.rst_extension = rst_extension
130 129 self.package_skip_patterns = package_skip_patterns
131 130 self.module_skip_patterns = module_skip_patterns
132 131 self.names_from__all__ = names_from__all__ or set()
133 132
134 133 def get_package_name(self):
135 134 return self._package_name
136 135
137 136 def set_package_name(self, package_name):
138 137 ''' Set package_name
139 138
140 139 >>> docwriter = ApiDocWriter('sphinx')
141 140 >>> import sphinx
142 141 >>> docwriter.root_path == sphinx.__path__[0]
143 142 True
144 143 >>> docwriter.package_name = 'docutils'
145 144 >>> import docutils
146 145 >>> docwriter.root_path == docutils.__path__[0]
147 146 True
148 147 '''
149 148 # It's also possible to imagine caching the module parsing here
150 149 self._package_name = package_name
151 150 self.root_module = import_module(package_name)
152 151 self.root_path = self.root_module.__path__[0]
153 152 self.written_modules = None
154 153
155 154 package_name = property(get_package_name, set_package_name, None,
156 155 'get/set package_name')
157 156
158 157 def _uri2path(self, uri):
159 158 ''' Convert uri to absolute filepath
160 159
161 160 Parameters
162 161 ----------
163 162 uri : string
164 163 URI of python module to return path for
165 164
166 165 Returns
167 166 -------
168 167 path : None or string
169 168 Returns None if there is no valid path for this URI
170 169 Otherwise returns absolute file system path for URI
171 170
172 171 Examples
173 172 --------
174 173 >>> docwriter = ApiDocWriter('sphinx')
175 174 >>> import sphinx
176 175 >>> modpath = sphinx.__path__[0]
177 176 >>> res = docwriter._uri2path('sphinx.builder')
178 177 >>> res == os.path.join(modpath, 'builder.py')
179 178 True
180 179 >>> res = docwriter._uri2path('sphinx')
181 180 >>> res == os.path.join(modpath, '__init__.py')
182 181 True
183 182 >>> docwriter._uri2path('sphinx.does_not_exist')
184 183
185 184 '''
186 185 if uri == self.package_name:
187 186 return os.path.join(self.root_path, '__init__.py')
188 187 path = uri.replace('.', os.path.sep)
189 188 path = path.replace(self.package_name + os.path.sep, '')
190 189 path = os.path.join(self.root_path, path)
191 190 # XXX maybe check for extensions as well?
192 191 if os.path.exists(path + '.py'): # file
193 192 path += '.py'
194 193 elif os.path.exists(os.path.join(path, '__init__.py')):
195 194 path = os.path.join(path, '__init__.py')
196 195 else:
197 196 return None
198 197 return path
199 198
200 199 def _path2uri(self, dirpath):
201 200 ''' Convert directory path to uri '''
202 201 relpath = dirpath.replace(self.root_path, self.package_name)
203 202 if relpath.startswith(os.path.sep):
204 203 relpath = relpath[1:]
205 204 return relpath.replace(os.path.sep, '.')
206 205
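The `_path2uri` helper above maps a filesystem directory back to a dotted module URI by string substitution. A small sketch of that logic under assumed paths (the `site-packages/sphinx` layout here is hypothetical):

```python
import os

# Assumed values mirroring ApiDocWriter state (illustrative only).
package_name = 'sphinx'
root_path = os.path.join('site-packages', 'sphinx')
dirpath = os.path.join(root_path, 'util', 'console')

# Same steps as _path2uri: swap root for package name, trim a leading
# separator, then convert path separators to dots.
relpath = dirpath.replace(root_path, package_name)
if relpath.startswith(os.path.sep):
    relpath = relpath[1:]
uri = relpath.replace(os.path.sep, '.')
# uri == 'sphinx.util.console'
```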
207 206 def _parse_module(self, uri):
208 207 ''' Parse module defined in *uri* '''
209 208 filename = self._uri2path(uri)
210 209 if filename is None:
211 210 # nothing that we could handle here.
212 211 return ([],[])
213 212 with open(filename, 'rb') as f:
214 213 mod = ast.parse(f.read())
215 214 return FuncClsScanner().scan(mod)
216 215
217 216 def _import_funcs_classes(self, uri):
218 217 """Import * from uri, and separate out functions and classes."""
219 218 ns = {}
220 219 exec('from %s import *' % uri, ns)
221 220 funcs, classes = [], []
222 221 for name, obj in ns.items():
223 222 if inspect.isclass(obj):
224 cls = Obj(name=name, has_init='__init__' in obj.__dict__)
223 cls = Obj(
224 name=name,
225 has_init="__init__" in obj.__dict__,
226 sphinx_options=getattr(obj, "_sphinx_options", {}),
227 )
225 228 classes.append(cls)
226 229 elif inspect.isfunction(obj):
227 230 funcs.append(name)
228 231
229 232 return sorted(funcs), sorted(classes, key=lambda x: x.name)
230 233
231 234 def find_funcs_classes(self, uri):
232 235 """Find the functions and classes defined in the module ``uri``"""
233 236 if uri in self.names_from__all__:
234 237 # For API modules which expose things defined elsewhere, import them
235 238 return self._import_funcs_classes(uri)
236 239 else:
237 240 # For other modules, scan their AST to see what they define
238 241 return self._parse_module(uri)
239 242
240 243 def generate_api_doc(self, uri):
241 244 '''Make autodoc documentation template string for a module
242 245
243 246 Parameters
244 247 ----------
245 248 uri : string
246 249 python location of module - e.g. 'sphinx.builder'
247 250
248 251 Returns
249 252 -------
250 253 S : string
251 254 Contents of API doc
252 255 '''
253 256 # get the names of all classes and functions
254 257 functions, classes = self.find_funcs_classes(uri)
255 258 if not len(functions) and not len(classes):
256 259 #print ('WARNING: Empty -', uri) # dbg
257 260 return ''
258 261
259 262 # Make a shorter version of the uri that omits the package name for
260 263 # titles
261 264 uri_short = re.sub(r'^%s\.' % self.package_name,'',uri)
262 265
263 266 ad = '.. AUTO-GENERATED FILE -- DO NOT EDIT!\n\n'
264 267
265 268 # Set the chapter title to read 'Module:' for all modules except for the
266 269 # main packages
267 270 if '.' in uri:
268 271 chap_title = 'Module: :mod:`' + uri_short + '`'
269 272 else:
270 273 chap_title = ':mod:`' + uri_short + '`'
271 274 ad += chap_title + '\n' + self.rst_section_levels[1] * len(chap_title)
272 275
273 276 ad += '\n.. automodule:: ' + uri + '\n'
274 277 ad += '\n.. currentmodule:: ' + uri + '\n'
275 278
276 279 if classes:
277 280 subhead = str(len(classes)) + (' Classes' if len(classes) > 1 else ' Class')
278 281 ad += '\n'+ subhead + '\n' + \
279 282 self.rst_section_levels[2] * len(subhead) + '\n'
280 283
281 284 for c in classes:
282 ad += '\n.. autoclass:: ' + c.name + '\n'
285 opts = c.sphinx_options
286 ad += "\n.. autoclass:: " + c.name + "\n"
283 287 # must NOT exclude from index to keep cross-refs working
284 ad += ' :members:\n' \
285 ' :show-inheritance:\n'
288 ad += " :members:\n"
289 if opts.get("show_inheritance", True):
290 ad += " :show-inheritance:\n"
291 if opts.get("show_inherited_members", False):
292 exclusions_list = opts.get("exclude_inherited_from", [])
293 exclusions = (
294 (" " + " ".join(exclusions_list)) if exclusions_list else ""
295 )
296 ad += f" :inherited-members:{exclusions}\n"
286 297 if c.has_init:
287 298 ad += '\n .. automethod:: __init__\n'
288 299
289 300 if functions:
290 301 subhead = str(len(functions)) + (' Functions' if len(functions) > 1 else ' Function')
291 302 ad += '\n'+ subhead + '\n' + \
292 303 self.rst_section_levels[2] * len(subhead) + '\n'
293 304 for f in functions:
294 305 # must NOT exclude from index to keep cross-refs working
295 306 ad += '\n.. autofunction:: ' + uri + '.' + f + '\n\n'
296 307 return ad
297 308
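The new `sphinx_options` handling in `generate_api_doc` gates which autodoc options each class's `.. autoclass::` directive receives. A sketch of just that branch, with a hypothetical class name and option values:

```python
# Hypothetical options dict, as would come from a class's
# _sphinx_options attribute or an empty default.
opts = {
    "show_inheritance": True,
    "show_inherited_members": True,
    "exclude_inherited_from": ["object"],
}

ad = "\n.. autoclass:: Example\n"
ad += "   :members:\n"
if opts.get("show_inheritance", True):
    ad += "   :show-inheritance:\n"
if opts.get("show_inherited_members", False):
    exclusions_list = opts.get("exclude_inherited_from", [])
    exclusions = (" " + " ".join(exclusions_list)) if exclusions_list else ""
    ad += f"   :inherited-members:{exclusions}\n"

print(ad)
```

With these options the directive gains `:inherited-members: object`, i.e. inherited members are documented except those coming from `object`.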
298 309 def _survives_exclude(self, matchstr, match_type):
299 310 ''' Returns True if *matchstr* does not match patterns
300 311
301 312 ``self.package_name`` removed from front of string if present
302 313
303 314 Examples
304 315 --------
305 316 >>> dw = ApiDocWriter('sphinx')
306 317 >>> dw._survives_exclude('sphinx.okpkg', 'package')
307 318 True
308 319 >>> dw.package_skip_patterns.append('^\\.badpkg$')
309 320 >>> dw._survives_exclude('sphinx.badpkg', 'package')
310 321 False
311 322 >>> dw._survives_exclude('sphinx.badpkg', 'module')
312 323 True
313 324 >>> dw._survives_exclude('sphinx.badmod', 'module')
314 325 True
315 326 >>> dw.module_skip_patterns.append('^\\.badmod$')
316 327 >>> dw._survives_exclude('sphinx.badmod', 'module')
317 328 False
318 329 '''
319 330 if match_type == 'module':
320 331 patterns = self.module_skip_patterns
321 332 elif match_type == 'package':
322 333 patterns = self.package_skip_patterns
323 334 else:
324 335 raise ValueError('Cannot interpret match type "%s"'
325 336 % match_type)
326 337 # Match to URI without package name
327 338 L = len(self.package_name)
328 339 if matchstr[:L] == self.package_name:
329 340 matchstr = matchstr[L:]
330 341 for pat in patterns:
331 342 try:
332 343 pat.search
333 344 except AttributeError:
334 345 pat = re.compile(pat)
335 346 if pat.search(matchstr):
336 347 return False
337 348 return True
338 349
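`_survives_exclude` accepts both plain strings and precompiled regexes in the skip-pattern lists, compiling strings lazily on first use. A standalone sketch of that idiom (the function name `survives` is illustrative):

```python
import re

def survives(matchstr, patterns):
    """Return True if matchstr matches none of the patterns.

    Patterns may be strings or precompiled regex objects; strings
    are compiled on demand, as in _survives_exclude.
    """
    for pat in patterns:
        if not hasattr(pat, 'search'):  # plain string -> compile it
            pat = re.compile(pat)
        if pat.search(matchstr):
            return False
    return True

print(survives('.badpkg', ['^\\.badpkg$']))  # False
print(survives('.okpkg', ['^\\.badpkg$']))   # True
```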
339 350 def discover_modules(self):
340 351 ''' Return module sequence discovered from ``self.package_name``
341 352
342 353
343 354 Parameters
344 355 ----------
345 356 None
346 357
347 358 Returns
348 359 -------
349 360 mods : sequence
350 361 Sequence of module names within ``self.package_name``
351 362
352 363 Examples
353 364 --------
354 365 >>> dw = ApiDocWriter('sphinx')
355 366 >>> mods = dw.discover_modules()
356 367 >>> 'sphinx.util' in mods
357 368 True
358 369 >>> dw.package_skip_patterns.append('\\.util$')
359 370 >>> 'sphinx.util' in dw.discover_modules()
360 371 False
361 372 >>>
362 373 '''
363 374 modules = [self.package_name]
364 375 # raw directory parsing
365 376 for dirpath, dirnames, filenames in os.walk(self.root_path):
366 377 # Check directory names for packages
367 378 root_uri = self._path2uri(os.path.join(self.root_path,
368 379 dirpath))
369 380 for dirname in dirnames[:]: # copy list - we modify inplace
370 381 package_uri = '.'.join((root_uri, dirname))
371 382 if (self._uri2path(package_uri) and
372 383 self._survives_exclude(package_uri, 'package')):
373 384 modules.append(package_uri)
374 385 else:
375 386 dirnames.remove(dirname)
376 387 # Check filenames for modules
377 388 for filename in filenames:
378 389 module_name = filename[:-3]
379 390 module_uri = '.'.join((root_uri, module_name))
380 391 if (self._uri2path(module_uri) and
381 392 self._survives_exclude(module_uri, 'module')):
382 393 modules.append(module_uri)
383 394 return sorted(modules)
384 395
385 396 def write_modules_api(self, modules,outdir):
386 397 # write the list
387 398 written_modules = []
388 399 for m in modules:
389 400 api_str = self.generate_api_doc(m)
390 401 if not api_str:
391 402 continue
392 403 # write out to file
393 404 outfile = os.path.join(outdir, m + self.rst_extension)
394 405 with open(outfile, "wt", encoding="utf-8") as fileobj:
395 406 fileobj.write(api_str)
396 407 written_modules.append(m)
397 408 self.written_modules = written_modules
398 409
399 410 def write_api_docs(self, outdir):
400 411 """Generate API reST files.
401 412
402 413 Parameters
403 414 ----------
404 415 outdir : string
405 416 Directory name in which to store files
406 417 We create automatic filenames for each module
407 418
408 419 Returns
409 420 -------
410 421 None
411 422
412 423 Notes
413 424 -----
414 425 Sets self.written_modules to list of written modules
415 426 """
416 427 if not os.path.exists(outdir):
417 428 os.mkdir(outdir)
418 429 # compose list of modules
419 430 modules = self.discover_modules()
420 431 self.write_modules_api(modules,outdir)
421 432
422 433 def write_index(self, outdir, path='gen.rst', relative_to=None):
423 434 """Make a reST API index file from written files
424 435
425 436 Parameters
426 437 ----------
427 438 outdir : string
428 439 Directory to which to write generated index file
429 440 path : string
430 441 Filename to write index to
431 442 relative_to : string
432 443 path to which written filenames are relative. This
433 444 component of the written file path will be removed from
434 445 outdir, in the generated index. Default is None, meaning,
435 446 leave path as it is.
436 447 """
437 448 if self.written_modules is None:
438 449 raise ValueError('No modules written')
439 450 # Get full filename path
440 451 path = os.path.join(outdir, path)
441 452 # Path written into index is relative to rootpath
442 453 if relative_to is not None:
443 454 relpath = outdir.replace(relative_to + os.path.sep, '')
444 455 else:
445 456 relpath = outdir
446 457 with open(path, "wt", encoding="utf-8") as idx:
447 458 w = idx.write
448 459 w('.. AUTO-GENERATED FILE -- DO NOT EDIT!\n\n')
449 460 w('.. autosummary::\n'
450 461 ' :toctree: %s\n\n' % relpath)
451 462 for mod in self.written_modules:
452 463 w(' %s\n' % mod)