Improve type hinting and documentation
krassowski -
@@ -0,0 +1,7 b''
1 # encoding: utf-8
2
3 # Copyright (c) IPython Development Team.
4 # Distributed under the terms of the Modified BSD License.
5 import os
6
7 GENERATING_DOCUMENTATION = os.environ.get("IN_SPHINX_RUN", None) == "True"
@@ -1,2862 +1,2953 b''
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as a fork of the rlcompleter module in the Python standard
3 This module started as a fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3.
5 upstream and were accepted as of Python 2.3.
6
6
7 This module now supports a wide variety of completion mechanisms, both for
7 This module now supports a wide variety of completion mechanisms, both for
8 normal classic Python code and for IPython-specific
8 normal classic Python code and for IPython-specific
9 syntax like magics.
9 syntax like magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends can not only complete your code, but can also help
14 IPython and compatible frontends can not only complete your code, but can also help
15 you input a wide range of characters. In particular, we allow you to insert
15 you input a wide range of characters. In particular, we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name or unicode long description. To do so, type a backslash followed by the
22 name or unicode long description. To do so, type a backslash followed by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 Ξ±
31 Ξ±
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\GREEK SMALL LETTER ALPHA<tab>
38 \\GREEK SMALL LETTER ALPHA<tab>
39 Ξ±
39 Ξ±
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrows or
42 Only valid Python identifiers will complete. Combining characters (like arrows or
43 dots) are also available; unlike latex, they need to be put after their
43 dots) are also available; unlike latex, they need to be put after their
44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometimes challenging to know how to type a character. If you are using
51 It is sometimes challenging to know how to type a character. If you are using
52 IPython, or any compatible frontend, you can prepend a backslash to the character
52 IPython, or any compatible frontend, you can prepend a backslash to the character
53 and press ``<tab>`` to expand it to its latex form.
53 and press ``<tab>`` to expand it to its latex form.
54
54
55 .. code::
55 .. code::
56
56
57 \\Ξ±<tab>
57 \\Ξ±<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 ``Completer.backslash_combining_completions`` option to ``False``.
62 ``Completer.backslash_combining_completions`` option to ``False``.
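
As an illustrative sketch, this is a regular traitlets option, so it can be set
from the command line or from a configuration file (``ipython_config.py`` in a
profile is the usual location):

.. code::

    # at the command line
    ipython --Completer.backslash_combining_completions=False

    # or in a profile's ipython_config.py
    c.Completer.backslash_combining_completions = False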
63
63
64
64
65 Experimental
65 Experimental
66 ============
66 ============
67
67
68 Starting with IPython 6.0, this module can make use of the Jedi library to
68 Starting with IPython 6.0, this module can make use of the Jedi library to
69 generate completions both using static analysis of the code, and dynamically
69 generate completions both using static analysis of the code, and dynamically
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
71 library for Python. The APIs attached to this new mechanism are unstable and will
71 library for Python. The APIs attached to this new mechanism are unstable and will
72 raise unless used in a :any:`provisionalcompleter` context manager.
72 raise unless used in a :any:`provisionalcompleter` context manager.
73
73
74 You will find that the following are experimental:
74 You will find that the following are experimental:
75
75
76 - :any:`provisionalcompleter`
76 - :any:`provisionalcompleter`
77 - :any:`IPCompleter.completions`
77 - :any:`IPCompleter.completions`
78 - :any:`Completion`
78 - :any:`Completion`
79 - :any:`rectify_completions`
79 - :any:`rectify_completions`
80
80
81 .. note::
81 .. note::
82
82
83 better name for :any:`rectify_completions` ?
83 better name for :any:`rectify_completions` ?
84
84
85 We welcome any feedback on these new APIs, and we also encourage you to try this
85 We welcome any feedback on these new APIs, and we also encourage you to try this
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
87 to have extra logging information if :any:`jedi` is crashing, or if current
87 to have extra logging information if :any:`jedi` is crashing, or if current
88 IPython completer pending deprecations are returning results not yet handled
88 IPython completer pending deprecations are returning results not yet handled
89 by :any:`jedi`.
89 by :any:`jedi`.
90
90
91 Using Jedi for tab completion allows snippets like the following to work without
91 Using Jedi for tab completion allows snippets like the following to work without
92 having to execute any code:
92 having to execute any code:
93
93
94 >>> myvar = ['hello', 42]
94 >>> myvar = ['hello', 42]
95 ... myvar[1].bi<tab>
95 ... myvar[1].bi<tab>
96
96
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
98 executing any code, unlike the previously available ``IPCompleter.greedy``
98 executing any code, unlike the previously available ``IPCompleter.greedy``
99 option.
99 option.
100
100
101 Be sure to update :any:`jedi` to the latest stable version or to try the
101 Be sure to update :any:`jedi` to the latest stable version or to try the
102 current development version to get better completions.
102 current development version to get better completions.
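
As a minimal sketch (assuming a running IPython session where ``get_ipython()``
is available), the provisional completion API described in this section can be
exercised by hand:

.. code::

    from IPython.core.completer import provisionalcompleter

    ip = get_ipython()  # completer attached to the running shell
    with provisionalcompleter():
        # ``completions`` yields provisional ``Completion`` objects
        completions = list(ip.Completer.completions("myvar[1].bi", 11))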
103
103
104 Matchers
104 Matchers
105 ========
105 ========
106
106
107 All completion routines are implemented using the unified *Matchers* API.
107 All completion routines are implemented using the unified *Matchers* API.
108 The matchers API is provisional and subject to change without notice.
108 The matchers API is provisional and subject to change without notice.
109
109
110 The built-in matchers include:
110 The built-in matchers include:
111
111
112 - ``IPCompleter.dict_key_matcher``: dictionary key completions,
112 - :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
113 - ``IPCompleter.magic_matcher``: completions for magics,
113 - :any:`IPCompleter.magic_matcher`: completions for magics,
114 - ``IPCompleter.unicode_name_matcher``, ``IPCompleter.fwd_unicode_matcher`` and ``IPCompleter.latex_matcher``: see `Forward latex/unicode completion`_,
114 - :any:`IPCompleter.unicode_name_matcher`,
115 - ``back_unicode_name_matcher`` and ``back_latex_name_matcher``: see `Backward latex completion`_,
115 :any:`IPCompleter.fwd_unicode_matcher`
116 - ``IPCompleter.file_matcher``: paths to files and directories,
116 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
117 - ``IPCompleter.python_func_kw_matcher`` - function keywords,
117 - :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
118 - ``IPCompleter.python_matches`` - globals and attributes (v1 API),
118 - :any:`IPCompleter.file_matcher`: paths to files and directories,
119 - :any:`IPCompleter.python_func_kw_matcher` - function keywords,
120 - :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
119 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
121 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
120 - ``IPCompleter.custom_completer_matcher`` - pluggable completer with a default implementation in any:`core.InteractiveShell`
122 - :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
121 which uses uses IPython hooks system (`complete_command`) with string dispatch (including regular expressions).
123 implementation in :any:`InteractiveShell` which uses IPython hooks system
122 Differently to other matchers, ``custom_completer_matcher`` will not suppress Jedi results to match
124 (`complete_command`) with string dispatch (including regular expressions).
123 behaviour in earlier IPython versions.
125 Differently to other matchers, ``custom_completer_matcher`` will not suppress
126 Jedi results to match behaviour in earlier IPython versions.
124
127
125 Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list.
128 Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list.
126
129
130 Matcher API
131 -----------
132
133 Simplifying some details, the ``Matcher`` interface can be described as
134
135 .. code::
136
137 MatcherAPIv1 = Callable[[str], list[str]]
138 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
139
140 Matcher = MatcherAPIv1 | MatcherAPIv2
141
142 The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
143 and remains supported as the simplest way of generating completions. This is also
144 currently the only API supported by the IPython hooks system `complete_command`.
145
146 To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
147 More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
148 and requires a literal ``2`` for v2 Matchers.
149
150 Once the API stabilises, future versions may relax the requirement for specifying
151 ``matcher_api_version`` by switching to :any:`functools.singledispatch`; therefore,
152 please do not rely on the presence of ``matcher_api_version`` for any purpose.
153
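As a sketch of what a v2 matcher might look like (the ``color_matcher`` name, the
candidate values and the ``param`` type label are made up for illustration;
``context_matcher``, ``SimpleCompletion``, ``SimpleMatcherResult`` and
``CompletionContext`` are defined in this module):

.. code::

    from IPython.core.completer import (
        CompletionContext,
        SimpleCompletion,
        SimpleMatcherResult,
        context_matcher,
    )

    @context_matcher()
    def color_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # hypothetical matcher completing a few colour names
        names = ["red", "green", "blue"]
        completions = [
            SimpleCompletion(text=name, type="param")
            for name in names
            if name.startswith(context.token)
        ]
        return {"completions": completions}

    # hypothetical registration on a running shell:
    # get_ipython().Completer.custom_matchers.append(color_matcher)
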
127 Suppression of competing matchers
154 Suppression of competing matchers
128 ---------------------------------
155 ---------------------------------
129
156
130 By default results from all matchers are combined, in the order determined by
157 By default results from all matchers are combined, in the order determined by
131 their priority. Matchers can request to suppress results from subsequent
158 their priority. Matchers can request to suppress results from subsequent
132 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
159 matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
133
160
134 When multiple matchers simultaneously request suppression, the results of
161 When multiple matchers simultaneously request suppression, the results of
135 the matcher with the higher priority will be returned.
162 the matcher with the higher priority will be returned.
136
163
137 Sometimes it is desirable to suppress most but not all other matchers;
164 Sometimes it is desirable to suppress most but not all other matchers;
138 this can be achieved by adding a list of identifiers of matchers which
165 this can be achieved by adding a list of identifiers of matchers which
139 should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.
166 should not be suppressed to ``MatcherResult`` under the ``do_not_suppress`` key.
167
168 The suppression behaviour is user-configurable via
169 :any:`IPCompleter.suppress_competing_matchers`.
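
For example (a sketch only; the completion text and the matcher identifier
listed in ``do_not_suppress`` are illustrative), a v2 matcher can request
suppression via the keys described above:

.. code::

    from IPython.core.completer import (
        CompletionContext,
        SimpleCompletion,
        SimpleMatcherResult,
        context_matcher,
    )

    @context_matcher()
    def exclusive_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # hypothetical matcher that silences most other matchers
        return {
            "completions": [SimpleCompletion(text="example-completion")],
            # hide results from all other matchers...
            "suppress": True,
            # ...except the matchers whose identifiers are listed here
            "do_not_suppress": {"IPCompleter.python_matches"},
        }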
140 """
170 """
141
171
142
172
143 # Copyright (c) IPython Development Team.
173 # Copyright (c) IPython Development Team.
144 # Distributed under the terms of the Modified BSD License.
174 # Distributed under the terms of the Modified BSD License.
145 #
175 #
146 # Some of this code originated from rlcompleter in the Python standard library
176 # Some of this code originated from rlcompleter in the Python standard library
147 # Copyright (C) 2001 Python Software Foundation, www.python.org
177 # Copyright (C) 2001 Python Software Foundation, www.python.org
148
178
149
179 from __future__ import annotations
150 import builtins as builtin_mod
180 import builtins as builtin_mod
151 import glob
181 import glob
152 import inspect
182 import inspect
153 import itertools
183 import itertools
154 import keyword
184 import keyword
155 import os
185 import os
156 import re
186 import re
157 import string
187 import string
158 import sys
188 import sys
159 import time
189 import time
160 import unicodedata
190 import unicodedata
161 import uuid
191 import uuid
162 import warnings
192 import warnings
163 from contextlib import contextmanager
193 from contextlib import contextmanager
164 from functools import lru_cache, partial
194 from functools import lru_cache, partial
165 from importlib import import_module
195 from importlib import import_module
166 from types import SimpleNamespace
196 from types import SimpleNamespace
167 from typing import (
197 from typing import (
168 Iterable,
198 Iterable,
169 Iterator,
199 Iterator,
170 List,
200 List,
171 Tuple,
201 Tuple,
172 Union,
202 Union,
173 Any,
203 Any,
174 Sequence,
204 Sequence,
175 Dict,
205 Dict,
176 NamedTuple,
206 NamedTuple,
177 Pattern,
207 Pattern,
178 Optional,
208 Optional,
179 Callable,
180 TYPE_CHECKING,
209 TYPE_CHECKING,
181 Set,
210 Set,
211 Literal,
182 )
212 )
183
213
184 from IPython.core.error import TryNext
214 from IPython.core.error import TryNext
185 from IPython.core.inputtransformer2 import ESC_MAGIC
215 from IPython.core.inputtransformer2 import ESC_MAGIC
186 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
216 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
187 from IPython.core.oinspect import InspectColors
217 from IPython.core.oinspect import InspectColors
188 from IPython.testing.skipdoctest import skip_doctest
218 from IPython.testing.skipdoctest import skip_doctest
189 from IPython.utils import generics
219 from IPython.utils import generics
220 from IPython.utils.decorators import sphinx_options
190 from IPython.utils.dir2 import dir2, get_real_method
221 from IPython.utils.dir2 import dir2, get_real_method
222 from IPython.utils.docs import GENERATING_DOCUMENTATION
191 from IPython.utils.path import ensure_dir_exists
223 from IPython.utils.path import ensure_dir_exists
192 from IPython.utils.process import arg_split
224 from IPython.utils.process import arg_split
193 from traitlets import (
225 from traitlets import (
194 Bool,
226 Bool,
195 Enum,
227 Enum,
196 Int,
228 Int,
197 List as ListTrait,
229 List as ListTrait,
198 Unicode,
230 Unicode,
199 Dict as DictTrait,
231 Dict as DictTrait,
200 Union as UnionTrait,
232 Union as UnionTrait,
201 default,
233 default,
202 observe,
234 observe,
203 )
235 )
204 from traitlets.config.configurable import Configurable
236 from traitlets.config.configurable import Configurable
205
237
206 import __main__
238 import __main__
207
239
208 # skip module doctests
240 # skip module doctests
209 __skip_doctest__ = True
241 __skip_doctest__ = True
210
242
211
243
212 try:
244 try:
213 import jedi
245 import jedi
214 jedi.settings.case_insensitive_completion = False
246 jedi.settings.case_insensitive_completion = False
215 import jedi.api.helpers
247 import jedi.api.helpers
216 import jedi.api.classes
248 import jedi.api.classes
217 JEDI_INSTALLED = True
249 JEDI_INSTALLED = True
218 except ImportError:
250 except ImportError:
219 JEDI_INSTALLED = False
251 JEDI_INSTALLED = False
220
252
221 if TYPE_CHECKING:
253
254 if TYPE_CHECKING or GENERATING_DOCUMENTATION:
222 from typing import cast
255 from typing import cast
223 from typing_extensions import TypedDict, NotRequired
256 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias
224 else:
257 else:
225
258
226 def cast(obj, _type):
259 def cast(obj, type_):
260 """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
227 return obj
261 return obj
228
262
229 TypedDict = Dict
263 # do not require on runtime
230 NotRequired = Tuple
264 NotRequired = Tuple # requires Python >=3.11
265 TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
266 Protocol = object # requires Python >=3.8
267 TypeAlias = Any # requires Python >=3.10
268 if GENERATING_DOCUMENTATION:
269 from typing import TypedDict
231
270
232 # -----------------------------------------------------------------------------
271 # -----------------------------------------------------------------------------
233 # Globals
272 # Globals
234 #-----------------------------------------------------------------------------
273 #-----------------------------------------------------------------------------
235
274
236 # Ranges where we have most of the valid unicode names. We could be finer
275 # Ranges where we have most of the valid unicode names. We could be finer
237 # grained, but is it worth it for performance? While unicode has characters in the
276 # grained, but is it worth it for performance? While unicode has characters in the
238 # range 0, 0x110000, we seem to have names for about 10% of those (131808 as I
277 # range 0, 0x110000, we seem to have names for about 10% of those (131808 as I
239 # write this). With the ranges below we cover them all, with a density of ~67%;
278 # write this). With the ranges below we cover them all, with a density of ~67%;
240 # the biggest next gap we could consider only adds about 1% density and there are 600
279 # the biggest next gap we could consider only adds about 1% density and there are 600
241 # gaps that would need hard coding.
280 # gaps that would need hard coding.
242 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
281 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
243
282
244 # Public API
283 # Public API
245 __all__ = ["Completer", "IPCompleter"]
284 __all__ = ["Completer", "IPCompleter"]
246
285
247 if sys.platform == 'win32':
286 if sys.platform == 'win32':
248 PROTECTABLES = ' '
287 PROTECTABLES = ' '
249 else:
288 else:
250 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
289 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
251
290
252 # Protect against returning an enormous number of completions which the frontend
291 # Protect against returning an enormous number of completions which the frontend
253 # may have trouble processing.
292 # may have trouble processing.
254 MATCHES_LIMIT = 500
293 MATCHES_LIMIT = 500
255
294
256 # Completion type reported when no type can be inferred.
295 # Completion type reported when no type can be inferred.
257 _UNKNOWN_TYPE = "<unknown>"
296 _UNKNOWN_TYPE = "<unknown>"
258
297
259 class ProvisionalCompleterWarning(FutureWarning):
298 class ProvisionalCompleterWarning(FutureWarning):
260 """
299 """
261 Exception raised by an experimental feature in this module.
300 Exception raised by an experimental feature in this module.
262
301
263 Wrap code in :any:`provisionalcompleter` context manager if you
302 Wrap code in :any:`provisionalcompleter` context manager if you
264 are certain you want to use an unstable feature.
303 are certain you want to use an unstable feature.
265 """
304 """
266 pass
305 pass
267
306
268 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
307 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
269
308
270
309
271 @skip_doctest
310 @skip_doctest
272 @contextmanager
311 @contextmanager
273 def provisionalcompleter(action='ignore'):
312 def provisionalcompleter(action='ignore'):
274 """
313 """
275 This context manager has to be used in any place where unstable completer
314 This context manager has to be used in any place where unstable completer
276 behavior and API may be called.
315 behavior and API may be called.
277
316
278 >>> with provisionalcompleter():
317 >>> with provisionalcompleter():
279 ... completer.do_experimental_things() # works
318 ... completer.do_experimental_things() # works
280
319
281 >>> completer.do_experimental_things() # raises.
320 >>> completer.do_experimental_things() # raises.
282
321
283 .. note::
322 .. note::
284
323
285 Unstable
324 Unstable
286
325
287 By using this context manager you agree that the API in use may change
326 By using this context manager you agree that the API in use may change
288 without warning, and that you won't complain if it does so.
327 without warning, and that you won't complain if it does so.
289
328
290 You also understand that, if the API is not to your liking, you should report
329 You also understand that, if the API is not to your liking, you should report
291 a bug to explain your use case upstream.
330 a bug to explain your use case upstream.
292
331
293 We'll be happy to get your feedback, feature requests, and improvements on
332 We'll be happy to get your feedback, feature requests, and improvements on
294 any of the unstable APIs!
333 any of the unstable APIs!
295 """
334 """
296 with warnings.catch_warnings():
335 with warnings.catch_warnings():
297 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
336 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
298 yield
337 yield
299
338
300
339
301 def has_open_quotes(s):
340 def has_open_quotes(s):
302 """Return whether a string has open quotes.
341 """Return whether a string has open quotes.
303
342
304 This simply counts whether the number of quote characters of either type in
343 This simply counts whether the number of quote characters of either type in
305 the string is odd.
344 the string is odd.
306
345
307 Returns
346 Returns
308 -------
347 -------
309 If there is an open quote, the quote character is returned. Else, return
348 If there is an open quote, the quote character is returned. Else, return
310 False.
349 False.
311 """
350 """
312 # We check " first, then ', so complex cases with nested quotes will get
351 # We check " first, then ', so complex cases with nested quotes will get
313 # the " to take precedence.
352 # the " to take precedence.
314 if s.count('"') % 2:
353 if s.count('"') % 2:
315 return '"'
354 return '"'
316 elif s.count("'") % 2:
355 elif s.count("'") % 2:
317 return "'"
356 return "'"
318 else:
357 else:
319 return False
358 return False
320
359
321
360
322 def protect_filename(s, protectables=PROTECTABLES):
361 def protect_filename(s, protectables=PROTECTABLES):
323 """Escape a string to protect certain characters."""
362 """Escape a string to protect certain characters."""
324 if set(s) & set(protectables):
363 if set(s) & set(protectables):
325 if sys.platform == "win32":
364 if sys.platform == "win32":
326 return '"' + s + '"'
365 return '"' + s + '"'
327 else:
366 else:
328 return "".join(("\\" + c if c in protectables else c) for c in s)
367 return "".join(("\\" + c if c in protectables else c) for c in s)
329 else:
368 else:
330 return s
369 return s
331
370
332
371
333 def expand_user(path:str) -> Tuple[str, bool, str]:
372 def expand_user(path:str) -> Tuple[str, bool, str]:
334 """Expand ``~``-style usernames in strings.
373 """Expand ``~``-style usernames in strings.
335
374
336 This is similar to :func:`os.path.expanduser`, but it computes and returns
375 This is similar to :func:`os.path.expanduser`, but it computes and returns
337 extra information that will be useful if the input was being used in
376 extra information that will be useful if the input was being used in
338 computing completions, and you wish to return the completions with the
377 computing completions, and you wish to return the completions with the
339 original '~' instead of its expanded value.
378 original '~' instead of its expanded value.
340
379
341 Parameters
380 Parameters
342 ----------
381 ----------
343 path : str
382 path : str
344 String to be expanded. If no ~ is present, the output is the same as the
383 String to be expanded. If no ~ is present, the output is the same as the
345 input.
384 input.
346
385
347 Returns
386 Returns
348 -------
387 -------
349 newpath : str
388 newpath : str
350 Result of ~ expansion in the input path.
389 Result of ~ expansion in the input path.
351 tilde_expand : bool
390 tilde_expand : bool
352 Whether any expansion was performed or not.
391 Whether any expansion was performed or not.
353 tilde_val : str
392 tilde_val : str
354 The value that ~ was replaced with.
393 The value that ~ was replaced with.
355 """
394 """
356 # Default values
395 # Default values
357 tilde_expand = False
396 tilde_expand = False
358 tilde_val = ''
397 tilde_val = ''
359 newpath = path
398 newpath = path
360
399
361 if path.startswith('~'):
400 if path.startswith('~'):
362 tilde_expand = True
401 tilde_expand = True
363 rest = len(path)-1
402 rest = len(path)-1
364 newpath = os.path.expanduser(path)
403 newpath = os.path.expanduser(path)
365 if rest:
404 if rest:
366 tilde_val = newpath[:-rest]
405 tilde_val = newpath[:-rest]
367 else:
406 else:
368 tilde_val = newpath
407 tilde_val = newpath
369
408
370 return newpath, tilde_expand, tilde_val
409 return newpath, tilde_expand, tilde_val
371
410
372
411
373 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
412 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
374 """Does the opposite of expand_user, with its outputs.
413 """Does the opposite of expand_user, with its outputs.
375 """
414 """
376 if tilde_expand:
415 if tilde_expand:
377 return path.replace(tilde_val, '~')
416 return path.replace(tilde_val, '~')
378 else:
417 else:
379 return path
418 return path
380
419
381
420
382 def completions_sorting_key(word):
421 def completions_sorting_key(word):
383 """key for sorting completions
422 """key for sorting completions
384
423
385 This does several things:
424 This does several things:
386
425
387 - Demote any completions starting with underscores to the end
426 - Demote any completions starting with underscores to the end
388 - Insert any %magic and %%cellmagic completions in the alphabetical order
427 - Insert any %magic and %%cellmagic completions in the alphabetical order
389 by their name
428 by their name
390 """
429 """
391 prio1, prio2 = 0, 0
430 prio1, prio2 = 0, 0
392
431
393 if word.startswith('__'):
432 if word.startswith('__'):
394 prio1 = 2
433 prio1 = 2
395 elif word.startswith('_'):
434 elif word.startswith('_'):
396 prio1 = 1
435 prio1 = 1
397
436
398 if word.endswith('='):
437 if word.endswith('='):
399 prio1 = -1
438 prio1 = -1
400
439
401 if word.startswith('%%'):
440 if word.startswith('%%'):
402 # If there's another % in there, this is something else, so leave it alone
441 # If there's another % in there, this is something else, so leave it alone
403 if not "%" in word[2:]:
442 if not "%" in word[2:]:
404 word = word[2:]
443 word = word[2:]
405 prio2 = 2
444 prio2 = 2
406 elif word.startswith('%'):
445 elif word.startswith('%'):
407 if not "%" in word[1:]:
446 if not "%" in word[1:]:
408 word = word[1:]
447 word = word[1:]
409 prio2 = 1
448 prio2 = 1
410
449
411 return prio1, word, prio2
450 return prio1, word, prio2
412
451
413
452
414 class _FakeJediCompletion:
453 class _FakeJediCompletion:
415 """
454 """
416 This is a workaround to communicate to the UI that Jedi has crashed and to
455 This is a workaround to communicate to the UI that Jedi has crashed and to
417 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
456 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
418
457
419 Added in IPython 6.0 so should likely be removed for 7.0
458 Added in IPython 6.0 so should likely be removed for 7.0
420
459
421 """
460 """
422
461
423 def __init__(self, name):
462 def __init__(self, name):
424
463
425 self.name = name
464 self.name = name
426 self.complete = name
465 self.complete = name
427 self.type = 'crashed'
466 self.type = 'crashed'
428 self.name_with_symbols = name
467 self.name_with_symbols = name
429 self.signature = ''
468 self.signature = ''
430 self._origin = 'fake'
469 self._origin = 'fake'
431
470
432 def __repr__(self):
471 def __repr__(self):
433 return '<Fake completion object jedi has crashed>'
472 return '<Fake completion object jedi has crashed>'
434
473
435
474
436 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
475 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
437
476
438
477
439 class Completion:
478 class Completion:
440 """
479 """
441 Completion object used and returned by IPython completers.
480 Completion object used and returned by IPython completers.
442
481
443 .. warning::
482 .. warning::
444
483
445 Unstable
484 Unstable
446
485
447 This function is unstable, API may change without warning.
486 This function is unstable, API may change without warning.
448 It will also raise unless used in the proper context manager.
487 It will also raise unless used in the proper context manager.
449
488
450 This acts as a middle ground :any:`Completion` object between the
489 This acts as a middle ground :any:`Completion` object between the
451 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
490 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
452 object. While Jedi needs a lot of information about the evaluator and how the
491 object. While Jedi needs a lot of information about the evaluator and how the
453 code should be run/inspected, PromptToolkit (and other frontends) mostly
492 code should be run/inspected, PromptToolkit (and other frontends) mostly
454 need user-facing information.
493 need user-facing information.
455
494
456 - Which range should be replaced by what.
495 - Which range should be replaced by what.
457 - Some metadata (like completion type), or meta information to be displayed to
496 - Some metadata (like completion type), or meta information to be displayed to
458 the user.
497 the user.
459
498
460 For debugging purposes we can also store the origin of the completion (``jedi``,
499 For debugging purposes we can also store the origin of the completion (``jedi``,
461 ``IPython.python_matches``, ``IPython.magics_matches``...).
500 ``IPython.python_matches``, ``IPython.magics_matches``...).
462 """
501 """
463
502
464 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
503 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
465
504
466 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
505 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
467 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
506 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
468 "It may change without warnings. "
507 "It may change without warnings. "
469 "Use in corresponding context manager.",
508 "Use in corresponding context manager.",
470 category=ProvisionalCompleterWarning, stacklevel=2)
509 category=ProvisionalCompleterWarning, stacklevel=2)
471
510
472 self.start = start
511 self.start = start
473 self.end = end
512 self.end = end
474 self.text = text
513 self.text = text
475 self.type = type
514 self.type = type
476 self.signature = signature
515 self.signature = signature
477 self._origin = _origin
516 self._origin = _origin
478
517
479 def __repr__(self):
518 def __repr__(self):
480 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
519 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
481 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
520 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
482
521
483 def __eq__(self, other)->Bool:
522 def __eq__(self, other)->Bool:
484 """
523 """
485 Equality and hash do not hash the type (as some completers may not be
524 Equality and hash do not hash the type (as some completers may not be
486 able to infer the type), but are used to (partially) de-duplicate
525 able to infer the type), but are used to (partially) de-duplicate
487 completions.
526 completions.
488
527
489 Completely de-duplicating completions is a bit trickier than just
528 Completely de-duplicating completions is a bit trickier than just
490 comparing, as it depends on the surrounding text, of which Completions are not
529 comparing, as it depends on the surrounding text, of which Completions are not
491 aware.
530 aware.
492 """
531 """
493 return self.start == other.start and \
532 return self.start == other.start and \
494 self.end == other.end and \
533 self.end == other.end and \
495 self.text == other.text
534 self.text == other.text
496
535
497 def __hash__(self):
536 def __hash__(self):
498 return hash((self.start, self.end, self.text))
537 return hash((self.start, self.end, self.text))
499
538
500
539
501 class SimpleCompletion:
540 class SimpleCompletion:
502 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
541 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
503
542
504 .. warning::
543 .. warning::
505
544
506 Provisional
545 Provisional
507
546
508 This class is used to describe the currently supported attributes of
547 This class is used to describe the currently supported attributes of
509 simple completion items, and any additional implementation details
548 simple completion items, and any additional implementation details
510 should not be relied on. Additional attributes may be included in
549 should not be relied on. Additional attributes may be included in
511 future versions, and the meaning of text disambiguated from the current
550 future versions, and the meaning of text disambiguated from the current
512 dual meaning of "text to insert" and "text to be used as a label".
551 dual meaning of "text to insert" and "text to be used as a label".
513 """
552 """
514
553
515 __slots__ = ["text", "type"]
554 __slots__ = ["text", "type"]
516
555
517 def __init__(self, text: str, *, type: str = None):
556 def __init__(self, text: str, *, type: str = None):
518 self.text = text
557 self.text = text
519 self.type = type
558 self.type = type
520
559
521 def __repr__(self):
560 def __repr__(self):
522 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
561 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
523
562
524
563
525 class MatcherResultBase(TypedDict):
564 class _MatcherResultBase(TypedDict):
526 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
565 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
527
566
528 #: suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
567 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
529 matched_fragment: NotRequired[str]
568 matched_fragment: NotRequired[str]
530
569
531 #: whether to suppress results from all other matchers (True), some
570 #: Whether to suppress results from all other matchers (True), some
532 #: matchers (set of identifiers) or none (False); default is False.
571 #: matchers (set of identifiers) or none (False); default is False.
533 suppress: NotRequired[Union[bool, Set[str]]]
572 suppress: NotRequired[Union[bool, Set[str]]]
534
573
535 #: identifiers of matchers which should NOT be suppressed
574 #: Identifiers of matchers which should NOT be suppressed when this matcher
575 #: requests to suppress all other matchers; defaults to an empty set.
536 do_not_suppress: NotRequired[Set[str]]
576 do_not_suppress: NotRequired[Set[str]]
537
577
538 #: are completions already ordered and should be left as-is? default is False.
578 #: Are completions already ordered and should be left as-is? default is False.
539 ordered: NotRequired[bool]
579 ordered: NotRequired[bool]
540
580
541
581
542 class SimpleMatcherResult(MatcherResultBase):
582 @sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
583 class SimpleMatcherResult(_MatcherResultBase, TypedDict):
543 """Result of new-style completion matcher."""
584 """Result of new-style completion matcher."""
544
585
545 #: list of candidate completions
586 # note: TypedDict is added again to the inheritance chain
587 # in order to get __orig_bases__ for documentation
588
589 #: List of candidate completions
546 completions: Sequence[SimpleCompletion]
590 completions: Sequence[SimpleCompletion]
547
591
548
592
549 class _JediMatcherResult(MatcherResultBase):
593 class _JediMatcherResult(_MatcherResultBase):
550 """Matching result returned by Jedi (will be processed differently)"""
594 """Matching result returned by Jedi (will be processed differently)"""
551
595
552 #: list of candidate completions
596 #: list of candidate completions
553 completions: Iterable[_JediCompletionLike]
597 completions: Iterable[_JediCompletionLike]
554
598
555
599
556 class CompletionContext(NamedTuple):
600 class CompletionContext(NamedTuple):
557 """Completion context provided as an argument to matchers in the Matcher API v2."""
601 """Completion context provided as an argument to matchers in the Matcher API v2."""
558
602
559 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
603 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
560 # which was not explicitly visible as an argument of the matcher, making any refactor
604 # which was not explicitly visible as an argument of the matcher, making any refactor
561 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
605 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
562 # from the completer, and make substituting them in sub-classes easier.
606 # from the completer, and make substituting them in sub-classes easier.
563
607
564 #: Relevant fragment of code directly preceding the cursor.
608 #: Relevant fragment of code directly preceding the cursor.
565 #: The extraction of token is implemented via splitter heuristic
609 #: The extraction of token is implemented via splitter heuristic
566 #: (following readline behaviour for legacy reasons), which is user configurable
610 #: (following readline behaviour for legacy reasons), which is user configurable
567 #: (by switching the greedy mode).
611 #: (by switching the greedy mode).
568 token: str
612 token: str
569
613
570 #: The full available content of the editor or buffer
614 #: The full available content of the editor or buffer
571 full_text: str
615 full_text: str
572
616
573 #: Cursor position in the line (the same for ``full_text`` and ``text``).
617 #: Cursor position in the line (the same for ``full_text`` and ``text``).
574 cursor_position: int
618 cursor_position: int
575
619
576 #: Cursor line in ``full_text``.
620 #: Cursor line in ``full_text``.
577 cursor_line: int
621 cursor_line: int
578
622
579 #: The maximum number of completions that will be used downstream.
623 #: The maximum number of completions that will be used downstream.
580 #: Matchers can use this information to abort early.
624 #: Matchers can use this information to abort early.
581 #: The built-in Jedi matcher is currently excepted from this limit.
625 #: The built-in Jedi matcher is currently excepted from this limit.
582 limit: int
626 limit: int
583
627
584 @property
628 @property
585 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
629 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
586 def text_until_cursor(self) -> str:
630 def text_until_cursor(self) -> str:
587 return self.line_with_cursor[: self.cursor_position]
631 return self.line_with_cursor[: self.cursor_position]
588
632
589 @property
633 @property
590 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
634 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
591 def line_with_cursor(self) -> str:
635 def line_with_cursor(self) -> str:
592 return self.full_text.split("\n")[self.cursor_line]
636 return self.full_text.split("\n")[self.cursor_line]
593
637
594
638
639 #: Matcher results for API v2.
595 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
640 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
596
641
597 MatcherAPIv1 = Callable[[str], List[str]]
642
598 MatcherAPIv2 = Callable[[CompletionContext], MatcherResult]
643 class _MatcherAPIv1Base(Protocol):
599 Matcher = Union[MatcherAPIv1, MatcherAPIv2]
644 def __call__(self, text: str) -> list[str]:
645 """Call signature."""
646
647
648 class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
649 #: API version
650 matcher_api_version: Optional[Literal[1]]
651
652 def __call__(self, text: str) -> list[str]:
653 """Call signature."""
654
655
656 #: Protocol describing Matcher API v1.
657 MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
658
659
660 class MatcherAPIv2(Protocol):
661 """Protocol describing Matcher API v2."""
662
663 #: API version
664 matcher_api_version: Literal[2] = 2
665
666 def __call__(self, context: CompletionContext) -> MatcherResult:
667 """Call signature."""
668
669
670 Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
600
671
601
672
602 def completion_matcher(
673 def completion_matcher(
603 *, priority: float = None, identifier: str = None, api_version: int = 1
674 *, priority: float = None, identifier: str = None, api_version: int = 1
604 ):
675 ):
605 """Adds attributes describing the matcher.
676 """Adds attributes describing the matcher.
606
677
607 Parameters
678 Parameters
608 ----------
679 ----------
609 priority : Optional[float]
680 priority : Optional[float]
610 The priority of the matcher, determines the order of execution of matchers.
681 The priority of the matcher, determines the order of execution of matchers.
611 Higher priority means that the matcher will be executed first. Defaults to 0.
682 Higher priority means that the matcher will be executed first. Defaults to 0.
612 identifier : Optional[str]
683 identifier : Optional[str]
613 identifier of the matcher allowing users to modify the behaviour via traitlets,
684 identifier of the matcher allowing users to modify the behaviour via traitlets,
614 and also used for debugging (will be passed as ``origin`` with the completions).
685 and also used for debugging (will be passed as ``origin`` with the completions).
615 Defaults to matcher function ``__qualname__``.
686 Defaults to matcher function ``__qualname__``.
616 api_version: Optional[int]
687 api_version: Optional[int]
617 version of the Matcher API used by this matcher.
688 version of the Matcher API used by this matcher.
618 Currently supported values are 1 and 2.
689 Currently supported values are 1 and 2.
619 Defaults to 1.
690 Defaults to 1.
620 """
691 """
621
692
622 def wrapper(func: Matcher):
693 def wrapper(func: Matcher):
623 func.matcher_priority = priority or 0
694 func.matcher_priority = priority or 0
624 func.matcher_identifier = identifier or func.__qualname__
695 func.matcher_identifier = identifier or func.__qualname__
625 func.matcher_api_version = api_version
696 func.matcher_api_version = api_version
626 if TYPE_CHECKING:
697 if TYPE_CHECKING:
627 if api_version == 1:
698 if api_version == 1:
628 func = cast(func, MatcherAPIv1)
699 func = cast(func, MatcherAPIv1)
629 elif api_version == 2:
700 elif api_version == 2:
630 func = cast(func, MatcherAPIv2)
701 func = cast(func, MatcherAPIv2)
631 return func
702 return func
632
703
633 return wrapper
704 return wrapper
634
705
635
706
636 def _get_matcher_priority(matcher: Matcher):
707 def _get_matcher_priority(matcher: Matcher):
637 return getattr(matcher, "matcher_priority", 0)
708 return getattr(matcher, "matcher_priority", 0)
638
709
639
710
640 def _get_matcher_id(matcher: Matcher):
711 def _get_matcher_id(matcher: Matcher):
641 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
712 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
642
713
643
714
644 def _get_matcher_api_version(matcher):
715 def _get_matcher_api_version(matcher):
645 return getattr(matcher, "matcher_api_version", 1)
716 return getattr(matcher, "matcher_api_version", 1)
646
717
647
718
648 context_matcher = partial(completion_matcher, api_version=2)
719 context_matcher = partial(completion_matcher, api_version=2)
649
720
650
721
651 _IC = Iterable[Completion]
722 _IC = Iterable[Completion]
652
723
653
724
654 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
725 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
655 """
726 """
656 Deduplicate a set of completions.
727 Deduplicate a set of completions.
657
728
658 .. warning::
729 .. warning::
659
730
660 Unstable
731 Unstable
661
732
662 This function is unstable, API may change without warning.
733 This function is unstable, API may change without warning.
663
734
664 Parameters
735 Parameters
665 ----------
736 ----------
666 text : str
737 text : str
667 text that should be completed.
738 text that should be completed.
668 completions : Iterator[Completion]
739 completions : Iterator[Completion]
669 iterator over the completions to deduplicate
740 iterator over the completions to deduplicate
670
741
671 Yields
742 Yields
672 ------
743 ------
673 `Completions` objects
744 `Completions` objects
674 Completions coming from multiple sources may be different but end up having
745 Completions coming from multiple sources may be different but end up having
675 the same effect when applied to ``text``. If this is the case, this will
746 the same effect when applied to ``text``. If this is the case, this will
676 consider completions as equal and only emit the first encountered.
747 consider completions as equal and only emit the first encountered.
677 Not folded into `completions()` yet for debugging purposes, and to detect when
748 Not folded into `completions()` yet for debugging purposes, and to detect when
678 the IPython completer does return things that Jedi does not, but should be
749 the IPython completer does return things that Jedi does not, but should be
679 at some point.
750 at some point.
680 """
751 """
681 completions = list(completions)
752 completions = list(completions)
682 if not completions:
753 if not completions:
683 return
754 return
684
755
685 new_start = min(c.start for c in completions)
756 new_start = min(c.start for c in completions)
686 new_end = max(c.end for c in completions)
757 new_end = max(c.end for c in completions)
687
758
688 seen = set()
759 seen = set()
689 for c in completions:
760 for c in completions:
690 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
761 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
691 if new_text not in seen:
762 if new_text not in seen:
692 yield c
763 yield c
693 seen.add(new_text)
764 seen.add(new_text)
694
765
695
766
696 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
767 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
697 """
768 """
698 Rectify a set of completions to all have the same ``start`` and ``end``
769 Rectify a set of completions to all have the same ``start`` and ``end``
699
770
700 .. warning::
771 .. warning::
701
772
702 Unstable
773 Unstable
703
774
704 This function is unstable, API may change without warning.
775 This function is unstable, API may change without warning.
705 It will also raise unless used in the proper context manager.
776 It will also raise unless used in the proper context manager.
706
777
707 Parameters
778 Parameters
708 ----------
779 ----------
709 text : str
780 text : str
710 text that should be completed.
781 text that should be completed.
711 completions : Iterator[Completion]
782 completions : Iterator[Completion]
712 iterator over the completions to rectify
783 iterator over the completions to rectify
713 _debug : bool
784 _debug : bool
714 Log failed completion
785 Log failed completion
715
786
716 Notes
787 Notes
717 -----
788 -----
718 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
789 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
719 the Jupyter Protocol requires them to behave like so. This will readjust
790 the Jupyter Protocol requires them to behave like so. This will readjust
720 the completion to have the same ``start`` and ``end`` by padding both
791 the completion to have the same ``start`` and ``end`` by padding both
721 extremities with surrounding text.
792 extremities with surrounding text.
722
793
723 During stabilisation this should support a ``_debug`` option to log which
794 During stabilisation this should support a ``_debug`` option to log which
724 completions are returned by the IPython completer but not found in Jedi, in
795 completions are returned by the IPython completer but not found in Jedi, in
725 order to make upstream bug reports.
796 order to make upstream bug reports.
726 """
797 """
727 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
798 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
728 "It may change without warnings. "
799 "It may change without warnings. "
729 "Use in corresponding context manager.",
800 "Use in corresponding context manager.",
730 category=ProvisionalCompleterWarning, stacklevel=2)
801 category=ProvisionalCompleterWarning, stacklevel=2)
731
802
732 completions = list(completions)
803 completions = list(completions)
733 if not completions:
804 if not completions:
734 return
805 return
735 starts = (c.start for c in completions)
806 starts = (c.start for c in completions)
736 ends = (c.end for c in completions)
807 ends = (c.end for c in completions)
737
808
738 new_start = min(starts)
809 new_start = min(starts)
739 new_end = max(ends)
810 new_end = max(ends)
740
811
741 seen_jedi = set()
812 seen_jedi = set()
742 seen_python_matches = set()
813 seen_python_matches = set()
743 for c in completions:
814 for c in completions:
744 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
815 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
745 if c._origin == 'jedi':
816 if c._origin == 'jedi':
746 seen_jedi.add(new_text)
817 seen_jedi.add(new_text)
747 elif c._origin == 'IPCompleter.python_matches':
818 elif c._origin == 'IPCompleter.python_matches':
748 seen_python_matches.add(new_text)
819 seen_python_matches.add(new_text)
749 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
820 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
750 diff = seen_python_matches.difference(seen_jedi)
821 diff = seen_python_matches.difference(seen_jedi)
751 if diff and _debug:
822 if diff and _debug:
752 print('IPython.python matches have extras:', diff)
823 print('IPython.python matches have extras:', diff)
753
824
754
825
755 if sys.platform == 'win32':
826 if sys.platform == 'win32':
756 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
827 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
757 else:
828 else:
758 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
829 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
759
830
760 GREEDY_DELIMS = ' =\r\n'
831 GREEDY_DELIMS = ' =\r\n'
761
832
762
833
763 class CompletionSplitter(object):
834 class CompletionSplitter(object):
764 """An object to split an input line in a manner similar to readline.
835 """An object to split an input line in a manner similar to readline.
765
836
766 By having our own implementation, we can expose readline-like completion in
837 By having our own implementation, we can expose readline-like completion in
767 a uniform manner to all frontends. This object only needs to be given the
838 a uniform manner to all frontends. This object only needs to be given the
768 line of text to be split and the cursor position on said line, and it
839 line of text to be split and the cursor position on said line, and it
769 returns the 'word' to be completed on at the cursor after splitting the
840 returns the 'word' to be completed on at the cursor after splitting the
770 entire line.
841 entire line.
771
842
772 What characters are used as splitting delimiters can be controlled by
843 What characters are used as splitting delimiters can be controlled by
773 setting the ``delims`` attribute (this is a property that internally
844 setting the ``delims`` attribute (this is a property that internally
774 automatically builds the necessary regular expression)"""
845 automatically builds the necessary regular expression)"""
775
846
776 # Private interface
847 # Private interface
777
848
778 # A string of delimiter characters. The default value makes sense for
849 # A string of delimiter characters. The default value makes sense for
779 # IPython's most typical usage patterns.
850 # IPython's most typical usage patterns.
780 _delims = DELIMS
851 _delims = DELIMS
781
852
782 # The expression (a normal string) to be compiled into a regular expression
853 # The expression (a normal string) to be compiled into a regular expression
783 # for actual splitting. We store it as an attribute mostly for ease of
854 # for actual splitting. We store it as an attribute mostly for ease of
784 # debugging, since this type of code can be so tricky to debug.
855 # debugging, since this type of code can be so tricky to debug.
785 _delim_expr = None
856 _delim_expr = None
786
857
787 # The regular expression that does the actual splitting
858 # The regular expression that does the actual splitting
788 _delim_re = None
859 _delim_re = None
789
860
790 def __init__(self, delims=None):
861 def __init__(self, delims=None):
791 delims = CompletionSplitter._delims if delims is None else delims
862 delims = CompletionSplitter._delims if delims is None else delims
792 self.delims = delims
863 self.delims = delims
793
864
794 @property
865 @property
795 def delims(self):
866 def delims(self):
796 """Return the string of delimiter characters."""
867 """Return the string of delimiter characters."""
797 return self._delims
868 return self._delims
798
869
799 @delims.setter
870 @delims.setter
800 def delims(self, delims):
871 def delims(self, delims):
801 """Set the delimiters for line splitting."""
872 """Set the delimiters for line splitting."""
802 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
873 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
803 self._delim_re = re.compile(expr)
874 self._delim_re = re.compile(expr)
804 self._delims = delims
875 self._delims = delims
805 self._delim_expr = expr
876 self._delim_expr = expr
806
877
807 def split_line(self, line, cursor_pos=None):
878 def split_line(self, line, cursor_pos=None):
808 """Split a line of text with a cursor at the given position.
879 """Split a line of text with a cursor at the given position.
809 """
880 """
810 l = line if cursor_pos is None else line[:cursor_pos]
881 l = line if cursor_pos is None else line[:cursor_pos]
811 return self._delim_re.split(l)[-1]
882 return self._delim_re.split(l)[-1]
812
883
813
884
814
885
815 class Completer(Configurable):
886 class Completer(Configurable):
816
887
817 greedy = Bool(False,
888 greedy = Bool(False,
818 help="""Activate greedy completion
889 help="""Activate greedy completion
819 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
890 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
820
891
821 This will enable completion on elements of lists, results of function calls, etc.,
892 This will enable completion on elements of lists, results of function calls, etc.,
822 but can be unsafe because the code is actually evaluated on TAB.
893 but can be unsafe because the code is actually evaluated on TAB.
823 """,
894 """,
824 ).tag(config=True)
895 ).tag(config=True)
825
896
826 use_jedi = Bool(default_value=JEDI_INSTALLED,
897 use_jedi = Bool(default_value=JEDI_INSTALLED,
827 help="Experimental: Use Jedi to generate autocompletions. "
898 help="Experimental: Use Jedi to generate autocompletions. "
828 "Default to True if jedi is installed.").tag(config=True)
899 "Default to True if jedi is installed.").tag(config=True)
829
900
830 jedi_compute_type_timeout = Int(default_value=400,
901 jedi_compute_type_timeout = Int(default_value=400,
831 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
902 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
832 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
903 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
833 performance by preventing Jedi from building its cache.
904 performance by preventing Jedi from building its cache.
834 """).tag(config=True)
905 """).tag(config=True)
835
906
836 debug = Bool(default_value=False,
907 debug = Bool(default_value=False,
837 help='Enable debug for the Completer. Mostly print extra '
908 help='Enable debug for the Completer. Mostly print extra '
838 'information for experimental jedi integration.')\
909 'information for experimental jedi integration.')\
839 .tag(config=True)
910 .tag(config=True)
840
911
841 backslash_combining_completions = Bool(True,
912 backslash_combining_completions = Bool(True,
842 help="Enable unicode completions, e.g. \\alpha<tab> . "
913 help="Enable unicode completions, e.g. \\alpha<tab> . "
843 "Includes completion of latex commands, unicode names, and expanding "
914 "Includes completion of latex commands, unicode names, and expanding "
844 "unicode characters back to latex commands.").tag(config=True)
915 "unicode characters back to latex commands.").tag(config=True)
845
916
846 def __init__(self, namespace=None, global_namespace=None, **kwargs):
917 def __init__(self, namespace=None, global_namespace=None, **kwargs):
847 """Create a new completer for the command line.
918 """Create a new completer for the command line.
848
919
849 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
920 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
850
921
851 If unspecified, the default namespace where completions are performed
922 If unspecified, the default namespace where completions are performed
852 is __main__ (technically, __main__.__dict__). Namespaces should be
923 is __main__ (technically, __main__.__dict__). Namespaces should be
853 given as dictionaries.
924 given as dictionaries.
854
925
855 An optional second namespace can be given. This allows the completer
926 An optional second namespace can be given. This allows the completer
856 to handle cases where both the local and global scopes need to be
927 to handle cases where both the local and global scopes need to be
857 distinguished.
928 distinguished.
858 """
929 """
859
930
860 # Don't bind to namespace quite yet, but flag whether the user wants a
931 # Don't bind to namespace quite yet, but flag whether the user wants a
861 # specific namespace or to use __main__.__dict__. This will allow us
932 # specific namespace or to use __main__.__dict__. This will allow us
862 # to bind to __main__.__dict__ at completion time, not now.
933 # to bind to __main__.__dict__ at completion time, not now.
863 if namespace is None:
934 if namespace is None:
864 self.use_main_ns = True
935 self.use_main_ns = True
865 else:
936 else:
866 self.use_main_ns = False
937 self.use_main_ns = False
867 self.namespace = namespace
938 self.namespace = namespace
868
939
869 # The global namespace, if given, can be bound directly
940 # The global namespace, if given, can be bound directly
870 if global_namespace is None:
941 if global_namespace is None:
871 self.global_namespace = {}
942 self.global_namespace = {}
872 else:
943 else:
873 self.global_namespace = global_namespace
944 self.global_namespace = global_namespace
874
945
875 self.custom_matchers = []
946 self.custom_matchers = []
876
947
877 super(Completer, self).__init__(**kwargs)
948 super(Completer, self).__init__(**kwargs)
878
949
879 def complete(self, text, state):
950 def complete(self, text, state):
880 """Return the next possible completion for 'text'.
951 """Return the next possible completion for 'text'.
881
952
882 This is called successively with state == 0, 1, 2, ... until it
953 This is called successively with state == 0, 1, 2, ... until it
883 returns None. The completion should begin with 'text'.
954 returns None. The completion should begin with 'text'.
884
955
885 """
956 """
886 if self.use_main_ns:
957 if self.use_main_ns:
887 self.namespace = __main__.__dict__
958 self.namespace = __main__.__dict__
888
959
889 if state == 0:
960 if state == 0:
890 if "." in text:
961 if "." in text:
891 self.matches = self.attr_matches(text)
962 self.matches = self.attr_matches(text)
892 else:
963 else:
893 self.matches = self.global_matches(text)
964 self.matches = self.global_matches(text)
894 try:
965 try:
895 return self.matches[state]
966 return self.matches[state]
896 except IndexError:
967 except IndexError:
897 return None
968 return None
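A minimal usage sketch of the readline-style protocol described above (the namespace contents are illustrative):

.. code::

    from IPython.core.completer import Completer

    c = Completer(namespace={"alpha": 1, "alphabet": 2})
    state = 0
    while True:
        match = c.complete("alp", state)   # called with state 0, 1, 2, ... until None
        if match is None:
            break
        print(match)                       # 'alpha', then 'alphabet'
        state += 1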
898
969
899 def global_matches(self, text):
970 def global_matches(self, text):
900 """Compute matches when text is a simple name.
971 """Compute matches when text is a simple name.
901
972
902 Return a list of all keywords, built-in functions and names currently
973 Return a list of all keywords, built-in functions and names currently
903 defined in self.namespace or self.global_namespace that match.
974 defined in self.namespace or self.global_namespace that match.
904
975
905 """
976 """
906 matches = []
977 matches = []
907 match_append = matches.append
978 match_append = matches.append
908 n = len(text)
979 n = len(text)
909 for lst in [
980 for lst in [
910 keyword.kwlist,
981 keyword.kwlist,
911 builtin_mod.__dict__.keys(),
982 builtin_mod.__dict__.keys(),
912 list(self.namespace.keys()),
983 list(self.namespace.keys()),
913 list(self.global_namespace.keys()),
984 list(self.global_namespace.keys()),
914 ]:
985 ]:
915 for word in lst:
986 for word in lst:
916 if word[:n] == text and word != "__builtins__":
987 if word[:n] == text and word != "__builtins__":
917 match_append(word)
988 match_append(word)
918
989
919 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
990 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
920 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
991 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
921 shortened = {
992 shortened = {
922 "_".join([sub[0] for sub in word.split("_")]): word
993 "_".join([sub[0] for sub in word.split("_")]): word
923 for word in lst
994 for word in lst
924 if snake_case_re.match(word)
995 if snake_case_re.match(word)
925 }
996 }
926 for word in shortened.keys():
997 for word in shortened.keys():
927 if word[:n] == text and word != "__builtins__":
998 if word[:n] == text and word != "__builtins__":
928 match_append(shortened[word])
999 match_append(shortened[word])
929 return matches
1000 return matches
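The shortened-key table above means an abbreviation built from the first letter of each underscore-separated part also matches; a small sketch (names are illustrative):

.. code::

    from IPython.core.completer import Completer

    c = Completer(namespace={"numpy_array_sum": 1})
    c.global_matches("n_a")   # -> ['numpy_array_sum'] via the shortened key 'n_a_s'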
930
1001
931 def attr_matches(self, text):
1002 def attr_matches(self, text):
932 """Compute matches when text contains a dot.
1003 """Compute matches when text contains a dot.
933
1004
934 Assuming the text is of the form NAME.NAME....[NAME], and is
1005 Assuming the text is of the form NAME.NAME....[NAME], and is
935 evaluatable in self.namespace or self.global_namespace, it will be
1006 evaluatable in self.namespace or self.global_namespace, it will be
936 evaluated and its attributes (as revealed by dir()) are used as
1007 evaluated and its attributes (as revealed by dir()) are used as
937 possible completions. (For class instances, class members are
1008 possible completions. (For class instances, class members are
938 also considered.)
1009 also considered.)
939
1010
940 WARNING: this can still invoke arbitrary C code, if an object
1011 WARNING: this can still invoke arbitrary C code, if an object
941 with a __getattr__ hook is evaluated.
1012 with a __getattr__ hook is evaluated.
942
1013
943 """
1014 """
944
1015
945 # Another option, seems to work great. Catches things like ''.<tab>
1016 # Another option, seems to work great. Catches things like ''.<tab>
946 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
1017 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
947
1018
948 if m:
1019 if m:
949 expr, attr = m.group(1, 3)
1020 expr, attr = m.group(1, 3)
950 elif self.greedy:
1021 elif self.greedy:
951 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
1022 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
952 if not m2:
1023 if not m2:
953 return []
1024 return []
954 expr, attr = m2.group(1,2)
1025 expr, attr = m2.group(1,2)
955 else:
1026 else:
956 return []
1027 return []
957
1028
958 try:
1029 try:
959 obj = eval(expr, self.namespace)
1030 obj = eval(expr, self.namespace)
960 except:
1031 except:
961 try:
1032 try:
962 obj = eval(expr, self.global_namespace)
1033 obj = eval(expr, self.global_namespace)
963 except:
1034 except:
964 return []
1035 return []
965
1036
966 if self.limit_to__all__ and hasattr(obj, '__all__'):
1037 if self.limit_to__all__ and hasattr(obj, '__all__'):
967 words = get__all__entries(obj)
1038 words = get__all__entries(obj)
968 else:
1039 else:
969 words = dir2(obj)
1040 words = dir2(obj)
970
1041
971 try:
1042 try:
972 words = generics.complete_object(obj, words)
1043 words = generics.complete_object(obj, words)
973 except TryNext:
1044 except TryNext:
974 pass
1045 pass
975 except AssertionError:
1046 except AssertionError:
976 raise
1047 raise
977 except Exception:
1048 except Exception:
978 # Silence errors from completion function
1049 # Silence errors from completion function
979 #raise # dbg
1050 #raise # dbg
980 pass
1051 pass
981 # Build match list to return
1052 # Build match list to return
982 n = len(attr)
1053 n = len(attr)
983 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
1054 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
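A hedged sketch of dotted-name completion in a running IPython session (``get_ipython()`` is only available inside IPython; the output is indicative):

.. code::

    # inside an IPython session
    import math
    ipc = get_ipython().Completer   # the shell's IPCompleter instance
    ipc.attr_matches("math.fl")     # e.g. ['math.floor']: 'math' is eval'd, dir() filtered on 'fl'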
984
1055
985
1056
986 def get__all__entries(obj):
1057 def get__all__entries(obj):
987 """returns the strings in the __all__ attribute"""
1058 """returns the strings in the __all__ attribute"""
988 try:
1059 try:
989 words = getattr(obj, '__all__')
1060 words = getattr(obj, '__all__')
990 except:
1061 except:
991 return []
1062 return []
992
1063
993 return [w for w in words if isinstance(w, str)]
1064 return [w for w in words if isinstance(w, str)]
994
1065
995
1066
996 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
1067 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
997 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
1068 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
998 """Used by dict_key_matches, matching the prefix to a list of keys
1069 """Used by dict_key_matches, matching the prefix to a list of keys
999
1070
1000 Parameters
1071 Parameters
1001 ----------
1072 ----------
1002 keys
1073 keys
1003 list of keys in dictionary currently being completed.
1074 list of keys in dictionary currently being completed.
1004 prefix
1075 prefix
1005 Part of the text already typed by the user. E.g. `mydict[b'fo`
1076 Part of the text already typed by the user. E.g. `mydict[b'fo`
1006 delims
1077 delims
1007 String of delimiters to consider when finding the current key.
1078 String of delimiters to consider when finding the current key.
1008 extra_prefix : optional
1079 extra_prefix : optional
1009 Part of the text already typed in multi-key index cases. E.g. for
1080 Part of the text already typed in multi-key index cases. E.g. for
1010 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1081 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1011
1082
1012 Returns
1083 Returns
1013 -------
1084 -------
1014 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1085 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1015 ``quote`` being the quote that needs to be used to close the current string,
1086 ``quote`` being the quote that needs to be used to close the current string,
1016 ``token_start`` the position where the replacement should start occurring,
1087 ``token_start`` the position where the replacement should start occurring,
1017 and ``matched`` a list of replacement/completion candidates.
1088 and ``matched`` a list of replacement/completion candidates.
1018
1089
1019 """
1090 """
1020 prefix_tuple = extra_prefix if extra_prefix else ()
1091 prefix_tuple = extra_prefix if extra_prefix else ()
1021 Nprefix = len(prefix_tuple)
1092 Nprefix = len(prefix_tuple)
1022 def filter_prefix_tuple(key):
1093 def filter_prefix_tuple(key):
1023 # Reject too short keys
1094 # Reject too short keys
1024 if len(key) <= Nprefix:
1095 if len(key) <= Nprefix:
1025 return False
1096 return False
1026 # Reject keys with non-str/bytes elements in them
1097 # Reject keys with non-str/bytes elements in them
1027 for k in key:
1098 for k in key:
1028 if not isinstance(k, (str, bytes)):
1099 if not isinstance(k, (str, bytes)):
1029 return False
1100 return False
1030 # Reject keys that do not match the prefix
1101 # Reject keys that do not match the prefix
1031 for k, pt in zip(key, prefix_tuple):
1102 for k, pt in zip(key, prefix_tuple):
1032 if k != pt:
1103 if k != pt:
1033 return False
1104 return False
1034 # All checks passed!
1105 # All checks passed!
1035 return True
1106 return True
1036
1107
1037 filtered_keys:List[Union[str,bytes]] = []
1108 filtered_keys:List[Union[str,bytes]] = []
1038 def _add_to_filtered_keys(key):
1109 def _add_to_filtered_keys(key):
1039 if isinstance(key, (str, bytes)):
1110 if isinstance(key, (str, bytes)):
1040 filtered_keys.append(key)
1111 filtered_keys.append(key)
1041
1112
1042 for k in keys:
1113 for k in keys:
1043 if isinstance(k, tuple):
1114 if isinstance(k, tuple):
1044 if filter_prefix_tuple(k):
1115 if filter_prefix_tuple(k):
1045 _add_to_filtered_keys(k[Nprefix])
1116 _add_to_filtered_keys(k[Nprefix])
1046 else:
1117 else:
1047 _add_to_filtered_keys(k)
1118 _add_to_filtered_keys(k)
1048
1119
1049 if not prefix:
1120 if not prefix:
1050 return '', 0, [repr(k) for k in filtered_keys]
1121 return '', 0, [repr(k) for k in filtered_keys]
1051 quote_match = re.search('["\']', prefix)
1122 quote_match = re.search('["\']', prefix)
1052 assert quote_match is not None # silence mypy
1123 assert quote_match is not None # silence mypy
1053 quote = quote_match.group()
1124 quote = quote_match.group()
1054 try:
1125 try:
1055 prefix_str = eval(prefix + quote, {})
1126 prefix_str = eval(prefix + quote, {})
1056 except Exception:
1127 except Exception:
1057 return '', 0, []
1128 return '', 0, []
1058
1129
1059 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1130 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1060 token_match = re.search(pattern, prefix, re.UNICODE)
1131 token_match = re.search(pattern, prefix, re.UNICODE)
1061 assert token_match is not None # silence mypy
1132 assert token_match is not None # silence mypy
1062 token_start = token_match.start()
1133 token_start = token_match.start()
1063 token_prefix = token_match.group()
1134 token_prefix = token_match.group()
1064
1135
1065 matched:List[str] = []
1136 matched:List[str] = []
1066 for key in filtered_keys:
1137 for key in filtered_keys:
1067 try:
1138 try:
1068 if not key.startswith(prefix_str):
1139 if not key.startswith(prefix_str):
1069 continue
1140 continue
1070 except (AttributeError, TypeError, UnicodeError):
1141 except (AttributeError, TypeError, UnicodeError):
1071 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1142 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1072 continue
1143 continue
1073
1144
1074 # reformat remainder of key to begin with prefix
1145 # reformat remainder of key to begin with prefix
1075 rem = key[len(prefix_str):]
1146 rem = key[len(prefix_str):]
1076 # force repr wrapped in '
1147 # force repr wrapped in '
1077 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1148 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1078 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1149 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1079 if quote == '"':
1150 if quote == '"':
1080 # The entered prefix is quoted with ",
1151 # The entered prefix is quoted with ",
1081 # but the match is quoted with '.
1152 # but the match is quoted with '.
1082 # A contained " hence needs escaping for comparison:
1153 # A contained " hence needs escaping for comparison:
1083 rem_repr = rem_repr.replace('"', '\\"')
1154 rem_repr = rem_repr.replace('"', '\\"')
1084
1155
1085 # then reinsert prefix from start of token
1156 # then reinsert prefix from start of token
1086 matched.append('%s%s' % (token_prefix, rem_repr))
1157 matched.append('%s%s' % (token_prefix, rem_repr))
1087 return quote, token_start, matched
1158 return quote, token_start, matched
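A small sketch of the return value, using the module-level ``DELIMS`` as the delimiter set (keys and prefixes are illustrative):

.. code::

    from IPython.core.completer import match_dict_keys, DELIMS

    match_dict_keys(["foo", "food", b"bar"], "'f", DELIMS)
    # -> ("'", 1, ['foo', 'food'])   close with ', replace from offset 1 of the prefix

    match_dict_keys(["foo", b"bar"], "", DELIMS)
    # -> ('', 0, ["'foo'", "b'bar'"])   no prefix: every key is offered as its repr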
1088
1159
1089
1160
1090 def cursor_to_position(text:str, line:int, column:int)->int:
1161 def cursor_to_position(text:str, line:int, column:int)->int:
1091 """
1162 """
1092 Convert the (line,column) position of the cursor in text to an offset in a
1163 Convert the (line,column) position of the cursor in text to an offset in a
1093 string.
1164 string.
1094
1165
1095 Parameters
1166 Parameters
1096 ----------
1167 ----------
1097 text : str
1168 text : str
1098 The text in which to calculate the cursor offset
1169 The text in which to calculate the cursor offset
1099 line : int
1170 line : int
1100 Line of the cursor; 0-indexed
1171 Line of the cursor; 0-indexed
1101 column : int
1172 column : int
1102 Column of the cursor 0-indexed
1173 Column of the cursor 0-indexed
1103
1174
1104 Returns
1175 Returns
1105 -------
1176 -------
1106 Position of the cursor in ``text``, 0-indexed.
1177 Position of the cursor in ``text``, 0-indexed.
1107
1178
1108 See Also
1179 See Also
1109 --------
1180 --------
1110 position_to_cursor : reciprocal of this function
1181 position_to_cursor : reciprocal of this function
1111
1182
1112 """
1183 """
1113 lines = text.split('\n')
1184 lines = text.split('\n')
1114 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1185 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1115
1186
1116 return sum(len(l) + 1 for l in lines[:line]) + column
1187 return sum(len(l) + 1 for l in lines[:line]) + column
1117
1188
1118 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1189 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1119 """
1190 """
1120 Convert the position of the cursor in text (0-indexed) to a line
1191 Convert the position of the cursor in text (0-indexed) to a line
1121 number (0-indexed) and a column number (0-indexed) pair.
1192 number (0-indexed) and a column number (0-indexed) pair.
1122
1193
1123 Position should be a valid position in ``text``.
1194 Position should be a valid position in ``text``.
1124
1195
1125 Parameters
1196 Parameters
1126 ----------
1197 ----------
1127 text : str
1198 text : str
1128 The text in which to calculate the cursor offset
1199 The text in which to calculate the cursor offset
1129 offset : int
1200 offset : int
1130 Position of the cursor in ``text``, 0-indexed.
1201 Position of the cursor in ``text``, 0-indexed.
1131
1202
1132 Returns
1203 Returns
1133 -------
1204 -------
1134 (line, column) : (int, int)
1205 (line, column) : (int, int)
1135 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1206 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1136
1207
1137 See Also
1208 See Also
1138 --------
1209 --------
1139 cursor_to_position : reciprocal of this function
1210 cursor_to_position : reciprocal of this function
1140
1211
1141 """
1212 """
1142
1213
1143 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1214 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1144
1215
1145 before = text[:offset]
1216 before = text[:offset]
1146 blines = before.split('\n')  # ! splitlines trims a trailing \n, hence split('\n')
1217 blines = before.split('\n')  # ! splitlines trims a trailing \n, hence split('\n')
1147 line = before.count('\n')
1218 line = before.count('\n')
1148 col = len(blines[-1])
1219 col = len(blines[-1])
1149 return line, col
1220 return line, col
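A round-trip sketch of the two helpers above (the text is illustrative):

.. code::

    from IPython.core.completer import cursor_to_position, position_to_cursor

    text = "ab\ncd"
    cursor_to_position(text, 1, 1)   # -> 4, the offset of 'd'
    position_to_cursor(text, 4)      # -> (1, 1)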
1150
1221
1151
1222
1152 def _safe_isinstance(obj, module, class_name):
1223 def _safe_isinstance(obj, module, class_name):
1153 """Checks if obj is an instance of module.class_name if loaded
1224 """Checks if obj is an instance of module.class_name if loaded
1154 """
1225 """
1155 return (module in sys.modules and
1226 return (module in sys.modules and
1156 isinstance(obj, getattr(import_module(module), class_name)))
1227 isinstance(obj, getattr(import_module(module), class_name)))
1157
1228
1158
1229
1159 @context_matcher()
1230 @context_matcher()
1160 def back_unicode_name_matcher(context):
1231 def back_unicode_name_matcher(context):
1161 """Match Unicode characters back to Unicode name
1232 """Match Unicode characters back to Unicode name
1162
1233
1163 Same as ``back_unicode_name_matches``, but adopted to new Matcher API.
1234 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API.
1164 """
1235 """
1165 fragment, matches = back_unicode_name_matches(context.token)
1236 fragment, matches = back_unicode_name_matches(context.token)
1166 return _convert_matcher_v1_result_to_v2(
1237 return _convert_matcher_v1_result_to_v2(
1167 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1238 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1168 )
1239 )
1169
1240
1170
1241
1171 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1242 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1172 """Match Unicode characters back to Unicode name
1243 """Match Unicode characters back to Unicode name
1173
1244
1174 This does ``☃`` -> ``\\snowman``
1245 This does ``☃`` -> ``\\snowman``
1175
1246
1176 Note that snowman is not a valid Python 3 combining character but will still be expanded.
1247 Note that snowman is not a valid Python 3 combining character but will still be expanded.
1177 It will, however, not be recombined back into the snowman character by the completion machinery.
1248 It will, however, not be recombined back into the snowman character by the completion machinery.
1178
1249
1179 Nor will this back-complete standard escape sequences like \\n, \\b ...
1250 Nor will this back-complete standard escape sequences like \\n, \\b ...
1180
1251
1252 .. deprecated:: 8.6
1253 You can use :meth:`back_unicode_name_matcher` instead.
1254
1181 Returns
1255 Returns
1182 -------
1256 -------
1183
1257
1184 Return a tuple with two elements:
1258 Return a tuple with two elements:
1185
1259
1186 - The Unicode character that was matched (preceded with a backslash), or
1260 - The Unicode character that was matched (preceded with a backslash), or
1187 empty string,
1261 empty string,
1188 - a sequence (of length 1) with the name of the matched Unicode character, preceded by
1262 - a sequence (of length 1) with the name of the matched Unicode character, preceded by
1189 a backslash, or empty if no match.
1263 a backslash, or empty if no match.
1190
1191 """
1264 """
1192 if len(text)<2:
1265 if len(text)<2:
1193 return '', ()
1266 return '', ()
1194 maybe_slash = text[-2]
1267 maybe_slash = text[-2]
1195 if maybe_slash != '\\':
1268 if maybe_slash != '\\':
1196 return '', ()
1269 return '', ()
1197
1270
1198 char = text[-1]
1271 char = text[-1]
1199 # no expand on quote for completion in strings.
1272 # no expand on quote for completion in strings.
1200 # nor backcomplete standard ascii keys
1273 # nor backcomplete standard ascii keys
1201 if char in string.ascii_letters or char in ('"',"'"):
1274 if char in string.ascii_letters or char in ('"',"'"):
1202 return '', ()
1275 return '', ()
1203 try :
1276 try :
1204 unic = unicodedata.name(char)
1277 unic = unicodedata.name(char)
1205 return '\\'+char,('\\'+unic,)
1278 return '\\'+char,('\\'+unic,)
1206 except KeyError:
1279 except KeyError:
1207 pass
1280 pass
1208 return '', ()
1281 return '', ()
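A quick sketch of the behaviour described in the docstring:

.. code::

    from IPython.core.completer import back_unicode_name_matches

    back_unicode_name_matches("\\☃")   # -> ('\\☃', ('\\SNOWMAN',))
    back_unicode_name_matches("\\n")   # -> ('', ()); standard escapes are not back-completed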
1209
1282
1210
1283
1211 @context_matcher()
1284 @context_matcher()
1212 def back_latex_name_matcher(context):
1285 def back_latex_name_matcher(context):
1213 """Match latex characters back to unicode name
1286 """Match latex characters back to unicode name
1214
1287
1215 Same as ``back_latex_name_matches``, but adopted to new Matcher API.
1288 Same as :any:`back_latex_name_matches`, but adopted to new Matcher API.
1216 """
1289 """
1217 fragment, matches = back_latex_name_matches(context.token)
1290 fragment, matches = back_latex_name_matches(context.token)
1218 return _convert_matcher_v1_result_to_v2(
1291 return _convert_matcher_v1_result_to_v2(
1219 matches, type="latex", fragment=fragment, suppress_if_matches=True
1292 matches, type="latex", fragment=fragment, suppress_if_matches=True
1220 )
1293 )
1221
1294
1222
1295
1223 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1296 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
1224 """Match latex characters back to unicode name
1297 """Match latex characters back to unicode name
1225
1298
1226 This does ``\\ℵ`` -> ``\\aleph``
1299 This does ``\\ℵ`` -> ``\\aleph``
1227
1300
1301 .. deprecated:: 8.6
1302 You can use :meth:`back_latex_name_matcher` instead.
1228 """
1303 """
1229 if len(text)<2:
1304 if len(text)<2:
1230 return '', ()
1305 return '', ()
1231 maybe_slash = text[-2]
1306 maybe_slash = text[-2]
1232 if maybe_slash != '\\':
1307 if maybe_slash != '\\':
1233 return '', ()
1308 return '', ()
1234
1309
1235
1310
1236 char = text[-1]
1311 char = text[-1]
1237 # no expand on quote for completion in strings.
1312 # no expand on quote for completion in strings.
1238 # nor backcomplete standard ascii keys
1313 # nor backcomplete standard ascii keys
1239 if char in string.ascii_letters or char in ('"',"'"):
1314 if char in string.ascii_letters or char in ('"',"'"):
1240 return '', ()
1315 return '', ()
1241 try :
1316 try :
1242 latex = reverse_latex_symbol[char]
1317 latex = reverse_latex_symbol[char]
1243 # '\\'+char so that the leading backslash is replaced as well
1318 # '\\'+char so that the leading backslash is replaced as well
1244 return '\\'+char,[latex]
1319 return '\\'+char,[latex]
1245 except KeyError:
1320 except KeyError:
1246 pass
1321 pass
1247 return '', ()
1322 return '', ()
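And the latex counterpart, following the ``\\aleph`` example from the docstring:

.. code::

    from IPython.core.completer import back_latex_name_matches

    back_latex_name_matches("\\ℵ")   # -> ('\\ℵ', ['\\aleph'])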
1248
1323
1249
1324
1250 def _formatparamchildren(parameter) -> str:
1325 def _formatparamchildren(parameter) -> str:
1251 """
1326 """
1252 Get parameter name and value from Jedi Private API
1327 Get parameter name and value from Jedi Private API
1253
1328
1254 Jedi does not expose a simple way to get `param=value` from its API.
1329 Jedi does not expose a simple way to get `param=value` from its API.
1255
1330
1256 Parameters
1331 Parameters
1257 ----------
1332 ----------
1258 parameter
1333 parameter
1259 Jedi's function `Param`
1334 Jedi's function `Param`
1260
1335
1261 Returns
1336 Returns
1262 -------
1337 -------
1263 A string like 'a', 'b=1', '*args', '**kwargs'
1338 A string like 'a', 'b=1', '*args', '**kwargs'
1264
1339
1265 """
1340 """
1266 description = parameter.description
1341 description = parameter.description
1267 if not description.startswith('param '):
1342 if not description.startswith('param '):
1268 raise ValueError('Jedi function parameter description has changed format. '
1343 raise ValueError('Jedi function parameter description has changed format. '
1269 'Expected "param ...", found %r.' % description)
1344 'Expected "param ...", found %r.' % description)
1270 return description[6:]
1345 return description[6:]
1271
1346
1272 def _make_signature(completion)-> str:
1347 def _make_signature(completion)-> str:
1273 """
1348 """
1274 Make the signature from a jedi completion
1349 Make the signature from a jedi completion
1275
1350
1276 Parameters
1351 Parameters
1277 ----------
1352 ----------
1278 completion : jedi.Completion
1353 completion : jedi.Completion
1279 the Jedi completion object; it does not have to complete to a function type
1354 the Jedi completion object; it does not have to complete to a function type
1280
1355
1281 Returns
1356 Returns
1282 -------
1357 -------
1283 a string consisting of the function signature, with the parentheses but
1358 a string consisting of the function signature, with the parentheses but
1284 without the function name. Example:
1359 without the function name. Example:
1285 `(a, *args, b=1, **kwargs)`
1360 `(a, *args, b=1, **kwargs)`
1286
1361
1287 """
1362 """
1288
1363
1289 # it looks like this might work on jedi 0.17
1364 # it looks like this might work on jedi 0.17
1290 if hasattr(completion, 'get_signatures'):
1365 if hasattr(completion, 'get_signatures'):
1291 signatures = completion.get_signatures()
1366 signatures = completion.get_signatures()
1292 if not signatures:
1367 if not signatures:
1293 return '(?)'
1368 return '(?)'
1294
1369
1295 c0 = completion.get_signatures()[0]
1370 c0 = completion.get_signatures()[0]
1296 return '('+c0.to_string().split('(', maxsplit=1)[1]
1371 return '('+c0.to_string().split('(', maxsplit=1)[1]
1297
1372
1298 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1373 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1299 for p in signature.defined_names()) if f])
1374 for p in signature.defined_names()) if f])
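A rough, hedged sketch of what ``_make_signature`` produces, assuming a recent Jedi (0.17+); the source string and function name are illustrative:

.. code::

    import jedi
    from IPython.core.completer import _make_signature

    src = "def foo(a, *args, b=1, **kwargs):\n    pass\n\nfoo"
    comp = next(c for c in jedi.Script(src).complete(line=4, column=3) if c.name == "foo")
    _make_signature(comp)   # roughly '(a, *args, b=1, **kwargs)'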
1300
1375
1301
1376
1302 _CompleteResult = Dict[str, MatcherResult]
1377 _CompleteResult = Dict[str, MatcherResult]
1303
1378
1304
1379
1305 def _convert_matcher_v1_result_to_v2(
1380 def _convert_matcher_v1_result_to_v2(
1306 matches: Sequence[str],
1381 matches: Sequence[str],
1307 type: str,
1382 type: str,
1308 fragment: str = None,
1383 fragment: str = None,
1309 suppress_if_matches: bool = False,
1384 suppress_if_matches: bool = False,
1310 ) -> SimpleMatcherResult:
1385 ) -> SimpleMatcherResult:
1311 """Utility to help with transition"""
1386 """Utility to help with transition"""
1312 result = {
1387 result = {
1313 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1388 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1314 "suppress": (True if matches else False) if suppress_if_matches else False,
1389 "suppress": (True if matches else False) if suppress_if_matches else False,
1315 }
1390 }
1316 if fragment is not None:
1391 if fragment is not None:
1317 result["matched_fragment"] = fragment
1392 result["matched_fragment"] = fragment
1318 return result
1393 return result
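A sketch of the conversion: a plain list of strings becomes a v2-style result dictionary (the values in comments are indicative):

.. code::

    from IPython.core.completer import _convert_matcher_v1_result_to_v2

    result = _convert_matcher_v1_result_to_v2(
        ["\\alpha", "\\aleph"], type="latex", fragment="\\al", suppress_if_matches=True
    )
    result["completions"]       # two SimpleCompletion objects with type='latex'
    result["suppress"]          # True, because there were matches
    result["matched_fragment"]  # '\\al'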
1319
1394
1320
1395
1321 class IPCompleter(Completer):
1396 class IPCompleter(Completer):
1322 """Extension of the completer class with IPython-specific features"""
1397 """Extension of the completer class with IPython-specific features"""
1323
1398
1324 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1399 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1325
1400
1326 @observe('greedy')
1401 @observe('greedy')
1327 def _greedy_changed(self, change):
1402 def _greedy_changed(self, change):
1328 """update the splitter and readline delims when greedy is changed"""
1403 """update the splitter and readline delims when greedy is changed"""
1329 if change['new']:
1404 if change['new']:
1330 self.splitter.delims = GREEDY_DELIMS
1405 self.splitter.delims = GREEDY_DELIMS
1331 else:
1406 else:
1332 self.splitter.delims = DELIMS
1407 self.splitter.delims = DELIMS
1333
1408
1334 dict_keys_only = Bool(
1409 dict_keys_only = Bool(
1335 False,
1410 False,
1336 help="""
1411 help="""
1337 Whether to show dict key matches only.
1412 Whether to show dict key matches only.
1338
1413
1339 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1414 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1340 """,
1415 """,
1341 )
1416 )
1342
1417
1343 suppress_competing_matchers = UnionTrait(
1418 suppress_competing_matchers = UnionTrait(
1344 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1419 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1345 default_value=None,
1420 default_value=None,
1346 help="""
1421 help="""
1347 Whether to suppress completions from other *Matchers*.
1422 Whether to suppress completions from other *Matchers*.
1348
1423
1349 When set to ``None`` (default) the matchers will attempt to auto-detect
1424 When set to ``None`` (default) the matchers will attempt to auto-detect
1350 whether suppression of other matchers is desirable. For example, at
1425 whether suppression of other matchers is desirable. For example, at
1351 the beginning of a line followed by `%` we expect a magic completion
1426 the beginning of a line followed by `%` we expect a magic completion
1352 to be the only applicable option, and after ``my_dict['`` we usually
1427 to be the only applicable option, and after ``my_dict['`` we usually
1353 expect a completion with an existing dictionary key.
1428 expect a completion with an existing dictionary key.
1354
1429
1355 If you want to disable this heuristic and see completions from all matchers,
1430 If you want to disable this heuristic and see completions from all matchers,
1356 set ``IPCompleter.suppress_competing_matchers = False``.
1431 set ``IPCompleter.suppress_competing_matchers = False``.
1357 To disable the heuristic for specific matchers provide a dictionary mapping:
1432 To disable the heuristic for specific matchers provide a dictionary mapping:
1358 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1433 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1359
1434
1360 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1435 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1361 completions to the set of matchers with the highest priority;
1436 completions to the set of matchers with the highest priority;
1362 this is equivalent to ``IPCompleter.merge_completions`` and
1437 this is equivalent to ``IPCompleter.merge_completions`` and
1363 can be beneficial for performance, but will sometimes omit relevant
1438 can be beneficial for performance, but will sometimes omit relevant
1364 candidates from matchers further down the priority list.
1439 candidates from matchers further down the priority list.
1365 """,
1440 """,
1366 ).tag(config=True)
1441 ).tag(config=True)
1367
1442
1368 merge_completions = Bool(
1443 merge_completions = Bool(
1369 True,
1444 True,
1370 help="""Whether to merge completion results into a single list
1445 help="""Whether to merge completion results into a single list
1371
1446
1372 If False, only the completion results from the first non-empty
1447 If False, only the completion results from the first non-empty
1373 completer will be returned.
1448 completer will be returned.
1374
1449
1375 As of version 8.6.0, setting the value to ``False`` is an alias for:
1450 As of version 8.6.0, setting the value to ``False`` is an alias for:
1376 ``IPCompleter.suppress_competing_matchers = True``.
1451 ``IPCompleter.suppress_competing_matchers = True``.
1377 """,
1452 """,
1378 ).tag(config=True)
1453 ).tag(config=True)
1379
1454
1380 disable_matchers = ListTrait(
1455 disable_matchers = ListTrait(
1381 Unicode(), help="""List of matchers to disable."""
1456 Unicode(), help="""List of matchers to disable."""
1382 ).tag(config=True)
1457 ).tag(config=True)
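The three traits above can be combined in a configuration file; a hedged sketch with illustrative matcher identifiers:

.. code::

    # ipython_config.py -- illustrative values only
    c.IPCompleter.merge_completions = True    # merge results from all matchers (default)
    c.IPCompleter.suppress_competing_matchers = {
        "IPCompleter.dict_key_matcher": False,   # never let dict-key matches hide the others
    }
    c.IPCompleter.disable_matchers = ["IPCompleter.file_matcher"]   # drop a matcher by identifier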
1383
1458
1384 omit__names = Enum(
1459 omit__names = Enum(
1385 (0, 1, 2),
1460 (0, 1, 2),
1386 default_value=2,
1461 default_value=2,
1387 help="""Instruct the completer to omit private method names
1462 help="""Instruct the completer to omit private method names
1388
1463
1389 Specifically, when completing on ``object.<tab>``.
1464 Specifically, when completing on ``object.<tab>``.
1390
1465
1391 When 2 [default]: all names that start with '_' will be excluded.
1466 When 2 [default]: all names that start with '_' will be excluded.
1392
1467
1393 When 1: all 'magic' names (``__foo__``) will be excluded.
1468 When 1: all 'magic' names (``__foo__``) will be excluded.
1394
1469
1395 When 0: nothing will be excluded.
1470 When 0: nothing will be excluded.
1396 """
1471 """
1397 ).tag(config=True)
1472 ).tag(config=True)
1398 limit_to__all__ = Bool(False,
1473 limit_to__all__ = Bool(False,
1399 help="""
1474 help="""
1400 DEPRECATED as of version 5.0.
1475 DEPRECATED as of version 5.0.
1401
1476
1402 Instruct the completer to use __all__ for the completion
1477 Instruct the completer to use __all__ for the completion
1403
1478
1404 Specifically, when completing on ``object.<tab>``.
1479 Specifically, when completing on ``object.<tab>``.
1405
1480
1406 When True: only those names in obj.__all__ will be included.
1481 When True: only those names in obj.__all__ will be included.
1407
1482
1408 When False [default]: the __all__ attribute is ignored
1483 When False [default]: the __all__ attribute is ignored
1409 """,
1484 """,
1410 ).tag(config=True)
1485 ).tag(config=True)
1411
1486
1412 profile_completions = Bool(
1487 profile_completions = Bool(
1413 default_value=False,
1488 default_value=False,
1414 help="If True, emit profiling data for completion subsystem using cProfile."
1489 help="If True, emit profiling data for completion subsystem using cProfile."
1415 ).tag(config=True)
1490 ).tag(config=True)
1416
1491
1417 profiler_output_dir = Unicode(
1492 profiler_output_dir = Unicode(
1418 default_value=".completion_profiles",
1493 default_value=".completion_profiles",
1419 help="Template for path at which to output profile data for completions."
1494 help="Template for path at which to output profile data for completions."
1420 ).tag(config=True)
1495 ).tag(config=True)
1421
1496
1422 @observe('limit_to__all__')
1497 @observe('limit_to__all__')
1423 def _limit_to_all_changed(self, change):
1498 def _limit_to_all_changed(self, change):
1424 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1499 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1425 'value has been deprecated since IPython 5.0, will be made to have '
1500 'value has been deprecated since IPython 5.0, will be made to have '
1426 'no effect and then removed in a future version of IPython.',
1501 'no effect and then removed in a future version of IPython.',
1427 UserWarning)
1502 UserWarning)
1428
1503
1429 def __init__(
1504 def __init__(
1430 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1505 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1431 ):
1506 ):
1432 """IPCompleter() -> completer
1507 """IPCompleter() -> completer
1433
1508
1434 Return a completer object.
1509 Return a completer object.
1435
1510
1436 Parameters
1511 Parameters
1437 ----------
1512 ----------
1438 shell
1513 shell
1439 a pointer to the ipython shell itself. This is needed
1514 a pointer to the ipython shell itself. This is needed
1440 because this completer knows about magic functions, and those can
1515 because this completer knows about magic functions, and those can
1441 only be accessed via the ipython instance.
1516 only be accessed via the ipython instance.
1442 namespace : dict, optional
1517 namespace : dict, optional
1443 an optional dict where completions are performed.
1518 an optional dict where completions are performed.
1444 global_namespace : dict, optional
1519 global_namespace : dict, optional
1445 secondary optional dict for completions, to
1520 secondary optional dict for completions, to
1446 handle cases (such as IPython embedded inside functions) where
1521 handle cases (such as IPython embedded inside functions) where
1447 both Python scopes are visible.
1522 both Python scopes are visible.
1448 config : Config
1523 config : Config
1449 traitlets Config object
1524 traitlets Config object
1450 **kwargs
1525 **kwargs
1451 passed to super class unmodified.
1526 passed to super class unmodified.
1452 """
1527 """
1453
1528
1454 self.magic_escape = ESC_MAGIC
1529 self.magic_escape = ESC_MAGIC
1455 self.splitter = CompletionSplitter()
1530 self.splitter = CompletionSplitter()
1456
1531
1457 # _greedy_changed() depends on splitter and readline being defined:
1532 # _greedy_changed() depends on splitter and readline being defined:
1458 super().__init__(
1533 super().__init__(
1459 namespace=namespace,
1534 namespace=namespace,
1460 global_namespace=global_namespace,
1535 global_namespace=global_namespace,
1461 config=config,
1536 config=config,
1462 **kwargs,
1537 **kwargs,
1463 )
1538 )
1464
1539
1465 # List where completion matches will be stored
1540 # List where completion matches will be stored
1466 self.matches = []
1541 self.matches = []
1467 self.shell = shell
1542 self.shell = shell
1468 # Regexp to split filenames with spaces in them
1543 # Regexp to split filenames with spaces in them
1469 self.space_name_re = re.compile(r'([^\\] )')
1544 self.space_name_re = re.compile(r'([^\\] )')
1470 # Hold a local ref. to glob.glob for speed
1545 # Hold a local ref. to glob.glob for speed
1471 self.glob = glob.glob
1546 self.glob = glob.glob
1472
1547
1473 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1548 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1474 # buffers, to avoid completion problems.
1549 # buffers, to avoid completion problems.
1475 term = os.environ.get('TERM','xterm')
1550 term = os.environ.get('TERM','xterm')
1476 self.dumb_terminal = term in ['dumb','emacs']
1551 self.dumb_terminal = term in ['dumb','emacs']
1477
1552
1478 # Special handling of backslashes needed in win32 platforms
1553 # Special handling of backslashes needed in win32 platforms
1479 if sys.platform == "win32":
1554 if sys.platform == "win32":
1480 self.clean_glob = self._clean_glob_win32
1555 self.clean_glob = self._clean_glob_win32
1481 else:
1556 else:
1482 self.clean_glob = self._clean_glob
1557 self.clean_glob = self._clean_glob
1483
1558
1484 #regexp to parse docstring for function signature
1559 #regexp to parse docstring for function signature
1485 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1560 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1486 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1561 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1487 #use this if positional argument name is also needed
1562 #use this if positional argument name is also needed
1488 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1563 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1489
1564
1490 self.magic_arg_matchers = [
1565 self.magic_arg_matchers = [
1491 self.magic_config_matcher,
1566 self.magic_config_matcher,
1492 self.magic_color_matcher,
1567 self.magic_color_matcher,
1493 ]
1568 ]
1494
1569
1495 # This is set externally by InteractiveShell
1570 # This is set externally by InteractiveShell
1496 self.custom_completers = None
1571 self.custom_completers = None
1497
1572
1498 # This is a list of names of unicode characters that can be completed
1573 # This is a list of names of unicode characters that can be completed
1499 # into their corresponding unicode value. The list is large, so we
1574 # into their corresponding unicode value. The list is large, so we
1500 # lazily initialize it on first use. Consuming code should access this
1575 # lazily initialize it on first use. Consuming code should access this
1501 # attribute through the `@unicode_names` property.
1576 # attribute through the `@unicode_names` property.
1502 self._unicode_names = None
1577 self._unicode_names = None
1503
1578
1504 self._backslash_combining_matchers = [
1579 self._backslash_combining_matchers = [
1505 self.latex_name_matcher,
1580 self.latex_name_matcher,
1506 self.unicode_name_matcher,
1581 self.unicode_name_matcher,
1507 back_latex_name_matcher,
1582 back_latex_name_matcher,
1508 back_unicode_name_matcher,
1583 back_unicode_name_matcher,
1509 self.fwd_unicode_matcher,
1584 self.fwd_unicode_matcher,
1510 ]
1585 ]
1511
1586
1512 if not self.backslash_combining_completions:
1587 if not self.backslash_combining_completions:
1513 for matcher in self._backslash_combining_matchers:
1588 for matcher in self._backslash_combining_matchers:
1514 self.disable_matchers.append(matcher.matcher_identifier)
1589 self.disable_matchers.append(matcher.matcher_identifier)
1515
1590
1516 if not self.merge_completions:
1591 if not self.merge_completions:
1517 self.suppress_competing_matchers = True
1592 self.suppress_competing_matchers = True
1518
1593
1519 @property
1594 @property
1520 def matchers(self) -> List[Matcher]:
1595 def matchers(self) -> List[Matcher]:
1521 """All active matcher routines for completion"""
1596 """All active matcher routines for completion"""
1522 if self.dict_keys_only:
1597 if self.dict_keys_only:
1523 return [self.dict_key_matcher]
1598 return [self.dict_key_matcher]
1524
1599
1525 if self.use_jedi:
1600 if self.use_jedi:
1526 return [
1601 return [
1527 *self.custom_matchers,
1602 *self.custom_matchers,
1528 *self._backslash_combining_matchers,
1603 *self._backslash_combining_matchers,
1529 *self.magic_arg_matchers,
1604 *self.magic_arg_matchers,
1530 self.custom_completer_matcher,
1605 self.custom_completer_matcher,
1531 self.magic_matcher,
1606 self.magic_matcher,
1532 self._jedi_matcher,
1607 self._jedi_matcher,
1533 self.dict_key_matcher,
1608 self.dict_key_matcher,
1534 self.file_matcher,
1609 self.file_matcher,
1535 ]
1610 ]
1536 else:
1611 else:
1537 return [
1612 return [
1538 *self.custom_matchers,
1613 *self.custom_matchers,
1539 *self._backslash_combining_matchers,
1614 *self._backslash_combining_matchers,
1540 *self.magic_arg_matchers,
1615 *self.magic_arg_matchers,
1541 self.custom_completer_matcher,
1616 self.custom_completer_matcher,
1542 self.dict_key_matcher,
1617 self.dict_key_matcher,
1543 # TODO: convert python_matches to v2 API
1618 # TODO: convert python_matches to v2 API
1544 self.magic_matcher,
1619 self.magic_matcher,
1545 self.python_matches,
1620 self.python_matches,
1546 self.file_matcher,
1621 self.file_matcher,
1547 self.python_func_kw_matcher,
1622 self.python_func_kw_matcher,
1548 ]
1623 ]
1549
1624
1550 def all_completions(self, text:str) -> List[str]:
1625 def all_completions(self, text:str) -> List[str]:
1551 """
1626 """
1552 Wrapper around the completion methods for the benefit of emacs.
1627 Wrapper around the completion methods for the benefit of emacs.
1553 """
1628 """
1554 prefix = text.rpartition('.')[0]
1629 prefix = text.rpartition('.')[0]
1555 with provisionalcompleter():
1630 with provisionalcompleter():
1556 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1631 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1557 for c in self.completions(text, len(text))]
1632 for c in self.completions(text, len(text))]
1558
1633
1559 return self.complete(text)[1]
1634 return self.complete(text)[1]
1560
1635
1561 def _clean_glob(self, text:str):
1636 def _clean_glob(self, text:str):
1562 return self.glob("%s*" % text)
1637 return self.glob("%s*" % text)
1563
1638
1564 def _clean_glob_win32(self, text:str):
1639 def _clean_glob_win32(self, text:str):
1565 return [f.replace("\\","/")
1640 return [f.replace("\\","/")
1566 for f in self.glob("%s*" % text)]
1641 for f in self.glob("%s*" % text)]
1567
1642
1568 @context_matcher()
1643 @context_matcher()
1569 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1644 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1570 """Same as ``file_matches``, but adopted to new Matcher API."""
1645 """Same as :any:`file_matches`, but adopted to new Matcher API."""
1571 matches = self.file_matches(context.token)
1646 matches = self.file_matches(context.token)
1572 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1647 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1573 # starts with `/home/`, `C:\`, etc)
1648 # starts with `/home/`, `C:\`, etc)
1574 return _convert_matcher_v1_result_to_v2(matches, type="path")
1649 return _convert_matcher_v1_result_to_v2(matches, type="path")
1575
1650
1576 def file_matches(self, text: str) -> List[str]:
1651 def file_matches(self, text: str) -> List[str]:
1577 """Match filenames, expanding ~USER type strings.
1652 """Match filenames, expanding ~USER type strings.
1578
1653
1579 Most of the seemingly convoluted logic in this completer is an
1654 Most of the seemingly convoluted logic in this completer is an
1580 attempt to handle filenames with spaces in them. And yet it's not
1655 attempt to handle filenames with spaces in them. And yet it's not
1581 quite perfect, because Python's readline doesn't expose all of the
1656 quite perfect, because Python's readline doesn't expose all of the
1582 GNU readline details needed for this to be done correctly.
1657 GNU readline details needed for this to be done correctly.
1583
1658
1584 For a filename with a space in it, the printed completions will be
1659 For a filename with a space in it, the printed completions will be
1585 only the parts after what's already been typed (instead of the
1660 only the parts after what's already been typed (instead of the
1586 full completions, as is normally done). I don't think with the
1661 full completions, as is normally done). I don't think with the
1587 current (as of Python 2.3) Python readline it's possible to do
1662 current (as of Python 2.3) Python readline it's possible to do
1588 better.
1663 better.
1589
1664
1590 DEPRECATED: Deprecated since 8.6. Use ``file_matcher`` instead.
1665 .. deprecated:: 8.6
1666 You can use :meth:`file_matcher` instead.
1591 """
1667 """
1592
1668
1593 # chars that require escaping with backslash - i.e. chars
1669 # chars that require escaping with backslash - i.e. chars
1594 # that readline treats incorrectly as delimiters, but we
1670 # that readline treats incorrectly as delimiters, but we
1595 # don't want to treat as delimiters in filename matching
1671 # don't want to treat as delimiters in filename matching
1596 # when escaped with backslash
1672 # when escaped with backslash
1597 if text.startswith('!'):
1673 if text.startswith('!'):
1598 text = text[1:]
1674 text = text[1:]
1599 text_prefix = u'!'
1675 text_prefix = u'!'
1600 else:
1676 else:
1601 text_prefix = u''
1677 text_prefix = u''
1602
1678
1603 text_until_cursor = self.text_until_cursor
1679 text_until_cursor = self.text_until_cursor
1604 # track strings with open quotes
1680 # track strings with open quotes
1605 open_quotes = has_open_quotes(text_until_cursor)
1681 open_quotes = has_open_quotes(text_until_cursor)
1606
1682
1607 if '(' in text_until_cursor or '[' in text_until_cursor:
1683 if '(' in text_until_cursor or '[' in text_until_cursor:
1608 lsplit = text
1684 lsplit = text
1609 else:
1685 else:
1610 try:
1686 try:
1611 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1687 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1612 lsplit = arg_split(text_until_cursor)[-1]
1688 lsplit = arg_split(text_until_cursor)[-1]
1613 except ValueError:
1689 except ValueError:
1614 # typically an unmatched ", or backslash without escaped char.
1690 # typically an unmatched ", or backslash without escaped char.
1615 if open_quotes:
1691 if open_quotes:
1616 lsplit = text_until_cursor.split(open_quotes)[-1]
1692 lsplit = text_until_cursor.split(open_quotes)[-1]
1617 else:
1693 else:
1618 return []
1694 return []
1619 except IndexError:
1695 except IndexError:
1620 # tab pressed on empty line
1696 # tab pressed on empty line
1621 lsplit = ""
1697 lsplit = ""
1622
1698
1623 if not open_quotes and lsplit != protect_filename(lsplit):
1699 if not open_quotes and lsplit != protect_filename(lsplit):
1624 # if protectables are found, do matching on the whole escaped name
1700 # if protectables are found, do matching on the whole escaped name
1625 has_protectables = True
1701 has_protectables = True
1626 text0,text = text,lsplit
1702 text0,text = text,lsplit
1627 else:
1703 else:
1628 has_protectables = False
1704 has_protectables = False
1629 text = os.path.expanduser(text)
1705 text = os.path.expanduser(text)
1630
1706
1631 if text == "":
1707 if text == "":
1632 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1708 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1633
1709
1634 # Compute the matches from the filesystem
1710 # Compute the matches from the filesystem
1635 if sys.platform == 'win32':
1711 if sys.platform == 'win32':
1636 m0 = self.clean_glob(text)
1712 m0 = self.clean_glob(text)
1637 else:
1713 else:
1638 m0 = self.clean_glob(text.replace('\\', ''))
1714 m0 = self.clean_glob(text.replace('\\', ''))
1639
1715
1640 if has_protectables:
1716 if has_protectables:
1641 # If we had protectables, we need to revert our changes to the
1717 # If we had protectables, we need to revert our changes to the
1642 # beginning of filename so that we don't double-write the part
1718 # beginning of filename so that we don't double-write the part
1643 # of the filename we have so far
1719 # of the filename we have so far
1644 len_lsplit = len(lsplit)
1720 len_lsplit = len(lsplit)
1645 matches = [text_prefix + text0 +
1721 matches = [text_prefix + text0 +
1646 protect_filename(f[len_lsplit:]) for f in m0]
1722 protect_filename(f[len_lsplit:]) for f in m0]
1647 else:
1723 else:
1648 if open_quotes:
1724 if open_quotes:
1649 # if we have a string with an open quote, we don't need to
1725 # if we have a string with an open quote, we don't need to
1650 # protect the names beyond the quote (and we _shouldn't_, as
1726 # protect the names beyond the quote (and we _shouldn't_, as
1651 # it would cause bugs when the filesystem call is made).
1727 # it would cause bugs when the filesystem call is made).
1652 matches = m0 if sys.platform == "win32" else\
1728 matches = m0 if sys.platform == "win32" else\
1653 [protect_filename(f, open_quotes) for f in m0]
1729 [protect_filename(f, open_quotes) for f in m0]
1654 else:
1730 else:
1655 matches = [text_prefix +
1731 matches = [text_prefix +
1656 protect_filename(f) for f in m0]
1732 protect_filename(f) for f in m0]
1657
1733
1658 # Mark directories in input list by appending '/' to their names.
1734 # Mark directories in input list by appending '/' to their names.
1659 return [x+'/' if os.path.isdir(x) else x for x in matches]
1735 return [x+'/' if os.path.isdir(x) else x for x in matches]
1660
1736
1661 @context_matcher()
1737 @context_matcher()
1662 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1738 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1739 """Match magics."""
1663 text = context.token
1740 text = context.token
1664 matches = self.magic_matches(text)
1741 matches = self.magic_matches(text)
1665 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1742 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1666 is_magic_prefix = len(text) > 0 and text[0] == "%"
1743 is_magic_prefix = len(text) > 0 and text[0] == "%"
1667 result["suppress"] = is_magic_prefix and bool(result["completions"])
1744 result["suppress"] = is_magic_prefix and bool(result["completions"])
1668 return result
1745 return result
1669
1746
1670 def magic_matches(self, text: str):
1747 def magic_matches(self, text: str):
1671 """Match magics.
1748 """Match magics.
1672
1749
1673 DEPRECATED: Deprecated since 8.6. Use ``magic_matcher`` instead.
1750 .. deprecated:: 8.6
1751 You can use :meth:`magic_matcher` instead.
1674 """
1752 """
1675 # Get all shell magics now rather than statically, so magics loaded at
1753 # Get all shell magics now rather than statically, so magics loaded at
1676 # runtime show up too.
1754 # runtime show up too.
1677 lsm = self.shell.magics_manager.lsmagic()
1755 lsm = self.shell.magics_manager.lsmagic()
1678 line_magics = lsm['line']
1756 line_magics = lsm['line']
1679 cell_magics = lsm['cell']
1757 cell_magics = lsm['cell']
1680 pre = self.magic_escape
1758 pre = self.magic_escape
1681 pre2 = pre+pre
1759 pre2 = pre+pre
1682
1760
1683 explicit_magic = text.startswith(pre)
1761 explicit_magic = text.startswith(pre)
1684
1762
1685 # Completion logic:
1763 # Completion logic:
1686 # - user gives %%: only do cell magics
1764 # - user gives %%: only do cell magics
1687 # - user gives %: do both line and cell magics
1765 # - user gives %: do both line and cell magics
1688 # - no prefix: do both
1766 # - no prefix: do both
1689 # In other words, line magics are skipped if the user gives %% explicitly
1767 # In other words, line magics are skipped if the user gives %% explicitly
1690 #
1768 #
1691 # We also exclude magics that match any currently visible names:
1769 # We also exclude magics that match any currently visible names:
1692 # https://github.com/ipython/ipython/issues/4877, unless the user has
1770 # https://github.com/ipython/ipython/issues/4877, unless the user has
1693 # typed a %:
1771 # typed a %:
1694 # https://github.com/ipython/ipython/issues/10754
1772 # https://github.com/ipython/ipython/issues/10754
1695 bare_text = text.lstrip(pre)
1773 bare_text = text.lstrip(pre)
1696 global_matches = self.global_matches(bare_text)
1774 global_matches = self.global_matches(bare_text)
1697 if not explicit_magic:
1775 if not explicit_magic:
1698 def matches(magic):
1776 def matches(magic):
1699 """
1777 """
1700 Filter magics, in particular remove magics that match
1778 Filter magics, in particular remove magics that match
1701 a name present in the global namespace.
1779 a name present in the global namespace.
1702 """
1780 """
1703 return ( magic.startswith(bare_text) and
1781 return ( magic.startswith(bare_text) and
1704 magic not in global_matches )
1782 magic not in global_matches )
1705 else:
1783 else:
1706 def matches(magic):
1784 def matches(magic):
1707 return magic.startswith(bare_text)
1785 return magic.startswith(bare_text)
1708
1786
1709 comp = [ pre2+m for m in cell_magics if matches(m)]
1787 comp = [ pre2+m for m in cell_magics if matches(m)]
1710 if not text.startswith(pre2):
1788 if not text.startswith(pre2):
1711 comp += [ pre+m for m in line_magics if matches(m)]
1789 comp += [ pre+m for m in line_magics if matches(m)]
1712
1790
1713 return comp
1791 return comp
1714
1792
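As a rough illustration of the completion logic described above (assuming an interactive IPython session where ``get_ipython()`` is available; the exact results depend on which magics are loaded):

.. code::

    ip = get_ipython()
    completer = ip.Completer

    # '%%' restricts matches to cell magics; '%' (or no prefix) also offers line magics.
    print(completer.magic_matches("%%ti"))  # may include '%%timeit'
    print(completer.magic_matches("%ti"))   # may include '%time', '%timeit' and '%%timeit'
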
1715 @context_matcher()
1793 @context_matcher()
1716 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1794 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1717 """Match class names and attributes for %config magic."""
1795 """Match class names and attributes for %config magic."""
1718 # NOTE: uses `line_buffer` equivalent for compatibility
1796 # NOTE: uses `line_buffer` equivalent for compatibility
1719 matches = self.magic_config_matches(context.line_with_cursor)
1797 matches = self.magic_config_matches(context.line_with_cursor)
1720 return _convert_matcher_v1_result_to_v2(matches, type="param")
1798 return _convert_matcher_v1_result_to_v2(matches, type="param")
1721
1799
1722 def magic_config_matches(self, text: str) -> List[str]:
1800 def magic_config_matches(self, text: str) -> List[str]:
1723 """Match class names and attributes for %config magic.
1801 """Match class names and attributes for %config magic.
1724
1802
1725 DEPRECATED: Deprecated since 8.6. Use ``magic_config_matcher`` instead.
1803 .. deprecated:: 8.6
1804 You can use :meth:`magic_config_matcher` instead.
1726 """
1805 """
1727 texts = text.strip().split()
1806 texts = text.strip().split()
1728
1807
1729 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1808 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1730 # get all configuration classes
1809 # get all configuration classes
1731 classes = sorted(set([ c for c in self.shell.configurables
1810 classes = sorted(set([ c for c in self.shell.configurables
1732 if c.__class__.class_traits(config=True)
1811 if c.__class__.class_traits(config=True)
1733 ]), key=lambda x: x.__class__.__name__)
1812 ]), key=lambda x: x.__class__.__name__)
1734 classnames = [ c.__class__.__name__ for c in classes ]
1813 classnames = [ c.__class__.__name__ for c in classes ]
1735
1814
1736 # return all classnames if config or %config is given
1815 # return all classnames if config or %config is given
1737 if len(texts) == 1:
1816 if len(texts) == 1:
1738 return classnames
1817 return classnames
1739
1818
1740 # match classname
1819 # match classname
1741 classname_texts = texts[1].split('.')
1820 classname_texts = texts[1].split('.')
1742 classname = classname_texts[0]
1821 classname = classname_texts[0]
1743 classname_matches = [ c for c in classnames
1822 classname_matches = [ c for c in classnames
1744 if c.startswith(classname) ]
1823 if c.startswith(classname) ]
1745
1824
1746 # return matched classes or the matched class with attributes
1825 # return matched classes or the matched class with attributes
1747 if texts[1].find('.') < 0:
1826 if texts[1].find('.') < 0:
1748 return classname_matches
1827 return classname_matches
1749 elif len(classname_matches) == 1 and \
1828 elif len(classname_matches) == 1 and \
1750 classname_matches[0] == classname:
1829 classname_matches[0] == classname:
1751 cls = classes[classnames.index(classname)].__class__
1830 cls = classes[classnames.index(classname)].__class__
1752 help = cls.class_get_help()
1831 help = cls.class_get_help()
1753 # strip leading '--' from cl-args:
1832 # strip leading '--' from cl-args:
1754 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1833 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1755 return [ attr.split('=')[0]
1834 return [ attr.split('=')[0]
1756 for attr in help.strip().splitlines()
1835 for attr in help.strip().splitlines()
1757 if attr.startswith(texts[1]) ]
1836 if attr.startswith(texts[1]) ]
1758 return []
1837 return []
1759
1838
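A hedged sketch of how the ``%config`` matcher behaves in an interactive session (the class and trait names below are examples; the actual lists depend on the configurables that are loaded):

.. code::

    ip = get_ipython()
    completer = ip.Completer

    # With only 'config', all configurable class names are offered.
    print(completer.magic_config_matches("%config "))
    # With 'Class.', traits of that class are offered, e.g. 'InteractiveShell.autocall'.
    print(completer.magic_config_matches("%config InteractiveShell.auto"))
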
1760 @context_matcher()
1839 @context_matcher()
1761 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1840 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1762 """Match color schemes for %colors magic."""
1841 """Match color schemes for %colors magic."""
1763 # NOTE: uses `line_buffer` equivalent for compatibility
1842 # NOTE: uses `line_buffer` equivalent for compatibility
1764 matches = self.magic_color_matches(context.line_with_cursor)
1843 matches = self.magic_color_matches(context.line_with_cursor)
1765 return _convert_matcher_v1_result_to_v2(matches, type="param")
1844 return _convert_matcher_v1_result_to_v2(matches, type="param")
1766
1845
1767 def magic_color_matches(self, text: str) -> List[str]:
1846 def magic_color_matches(self, text: str) -> List[str]:
1768 """Match color schemes for %colors magic.
1847 """Match color schemes for %colors magic.
1769
1848
1770 DEPRECATED: Deprecated since 8.6. Use ``magic_color_matcher`` instead.
1849 .. deprecated:: 8.6
1850 You can use :meth:`magic_color_matcher` instead.
1771 """
1851 """
1772 texts = text.split()
1852 texts = text.split()
1773 if text.endswith(' '):
1853 if text.endswith(' '):
1774 # .split() strips off the trailing whitespace. Add '' back
1854 # .split() strips off the trailing whitespace. Add '' back
1775 # so that: '%colors ' -> ['%colors', '']
1855 # so that: '%colors ' -> ['%colors', '']
1776 texts.append('')
1856 texts.append('')
1777
1857
1778 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1858 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1779 prefix = texts[1]
1859 prefix = texts[1]
1780 return [ color for color in InspectColors.keys()
1860 return [ color for color in InspectColors.keys()
1781 if color.startswith(prefix) ]
1861 if color.startswith(prefix) ]
1782 return []
1862 return []
1783
1863
1784 @context_matcher(identifier="IPCompleter.jedi_matcher")
1864 @context_matcher(identifier="IPCompleter.jedi_matcher")
1785 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1865 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1786 matches = self._jedi_matches(
1866 matches = self._jedi_matches(
1787 cursor_column=context.cursor_position,
1867 cursor_column=context.cursor_position,
1788 cursor_line=context.cursor_line,
1868 cursor_line=context.cursor_line,
1789 text=context.full_text,
1869 text=context.full_text,
1790 )
1870 )
1791 return {
1871 return {
1792 "completions": matches,
1872 "completions": matches,
1793 # static analysis should not suppress other matchers
1873 # static analysis should not suppress other matchers
1794 "suppress": False,
1874 "suppress": False,
1795 }
1875 }
1796
1876
1797 def _jedi_matches(
1877 def _jedi_matches(
1798 self, cursor_column: int, cursor_line: int, text: str
1878 self, cursor_column: int, cursor_line: int, text: str
1799 ) -> Iterable[_JediCompletionLike]:
1879 ) -> Iterable[_JediCompletionLike]:
1800 """
1880 """
1801 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
1881 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
1802 cursor position.
1882 cursor position.
1803
1883
1804 Parameters
1884 Parameters
1805 ----------
1885 ----------
1806 cursor_column : int
1886 cursor_column : int
1807 column position of the cursor in ``text``, 0-indexed.
1887 column position of the cursor in ``text``, 0-indexed.
1808 cursor_line : int
1888 cursor_line : int
1809 line position of the cursor in ``text``, 0-indexed
1889 line position of the cursor in ``text``, 0-indexed
1810 text : str
1890 text : str
1811 text to complete
1891 text to complete
1812
1892
1813 Notes
1893 Notes
1814 -----
1894 -----
1815 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1895 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1816 object containing a string with the Jedi debug information attached.
1896 object containing a string with the Jedi debug information attached.
1817
1897
1818 DEPRECATED: Deprecated since 8.6. Use ``_jedi_matcher`` instead.
1898 .. deprecated:: 8.6
1899 You can use :meth:`_jedi_matcher` instead.
1819 """
1900 """
1820 namespaces = [self.namespace]
1901 namespaces = [self.namespace]
1821 if self.global_namespace is not None:
1902 if self.global_namespace is not None:
1822 namespaces.append(self.global_namespace)
1903 namespaces.append(self.global_namespace)
1823
1904
1824 completion_filter = lambda x:x
1905 completion_filter = lambda x:x
1825 offset = cursor_to_position(text, cursor_line, cursor_column)
1906 offset = cursor_to_position(text, cursor_line, cursor_column)
1826 # filter output if we are completing for object members
1907 # filter output if we are completing for object members
1827 if offset:
1908 if offset:
1828 pre = text[offset-1]
1909 pre = text[offset-1]
1829 if pre == '.':
1910 if pre == '.':
1830 if self.omit__names == 2:
1911 if self.omit__names == 2:
1831 completion_filter = lambda c:not c.name.startswith('_')
1912 completion_filter = lambda c:not c.name.startswith('_')
1832 elif self.omit__names == 1:
1913 elif self.omit__names == 1:
1833 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1914 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1834 elif self.omit__names == 0:
1915 elif self.omit__names == 0:
1835 completion_filter = lambda x:x
1916 completion_filter = lambda x:x
1836 else:
1917 else:
1837 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1918 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1838
1919
1839 interpreter = jedi.Interpreter(text[:offset], namespaces)
1920 interpreter = jedi.Interpreter(text[:offset], namespaces)
1840 try_jedi = True
1921 try_jedi = True
1841
1922
1842 try:
1923 try:
1843 # find the first token in the current tree -- if it is a ' or " then we are in a string
1924 # find the first token in the current tree -- if it is a ' or " then we are in a string
1844 completing_string = False
1925 completing_string = False
1845 try:
1926 try:
1846 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1927 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1847 except StopIteration:
1928 except StopIteration:
1848 pass
1929 pass
1849 else:
1930 else:
1850 # note the value may be ', ", or it may also be ''' or """, or
1931 # note the value may be ', ", or it may also be ''' or """, or
1851 # in some cases, """what/you/typed..., but all of these are
1932 # in some cases, """what/you/typed..., but all of these are
1852 # strings.
1933 # strings.
1853 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1934 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1854
1935
1855 # if we are in a string jedi is likely not the right candidate for
1936 # if we are in a string jedi is likely not the right candidate for
1856 # now. Skip it.
1937 # now. Skip it.
1857 try_jedi = not completing_string
1938 try_jedi = not completing_string
1858 except Exception as e:
1939 except Exception as e:
1859 # many things can go wrong; we are using a private API, just don't crash.
1940 # many things can go wrong; we are using a private API, just don't crash.
1860 if self.debug:
1941 if self.debug:
1861 print("Error detecting if completing a non-finished string :", e, '|')
1942 print("Error detecting if completing a non-finished string :", e, '|')
1862
1943
1863 if not try_jedi:
1944 if not try_jedi:
1864 return []
1945 return []
1865 try:
1946 try:
1866 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1947 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1867 except Exception as e:
1948 except Exception as e:
1868 if self.debug:
1949 if self.debug:
1869 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1950 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1870 else:
1951 else:
1871 return []
1952 return []
1872
1953
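The heart of the method above is ``jedi.Interpreter``, which completes over live namespaces. A minimal sketch (assuming a reasonably recent ``jedi`` is installed; the namespace and variable names are made up):

.. code::

    import jedi

    ns = {"data": {"answer": 42}}
    source = "data.ke"
    # jedi expects a 1-based line and a 0-based column, hence the ``cursor_line + 1`` above.
    interp = jedi.Interpreter(source, [ns])
    print([c.name for c in interp.complete(line=1, column=len(source))])  # likely includes 'keys'
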
1873 def python_matches(self, text:str)->List[str]:
1954 def python_matches(self, text:str)->List[str]:
1874 """Match attributes or global python names"""
1955 """Match attributes or global python names"""
1875 if "." in text:
1956 if "." in text:
1876 try:
1957 try:
1877 matches = self.attr_matches(text)
1958 matches = self.attr_matches(text)
1878 if text.endswith('.') and self.omit__names:
1959 if text.endswith('.') and self.omit__names:
1879 if self.omit__names == 1:
1960 if self.omit__names == 1:
1880 # true if txt is _not_ a __ name, false otherwise:
1961 # true if txt is _not_ a __ name, false otherwise:
1881 no__name = (lambda txt:
1962 no__name = (lambda txt:
1882 re.match(r'.*\.__.*?__',txt) is None)
1963 re.match(r'.*\.__.*?__',txt) is None)
1883 else:
1964 else:
1884 # true if txt is _not_ a _ name, false otherwise:
1965 # true if txt is _not_ a _ name, false otherwise:
1885 no__name = (lambda txt:
1966 no__name = (lambda txt:
1886 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1967 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1887 matches = filter(no__name, matches)
1968 matches = filter(no__name, matches)
1888 except NameError:
1969 except NameError:
1889 # catches <undefined attributes>.<tab>
1970 # catches <undefined attributes>.<tab>
1890 matches = []
1971 matches = []
1891 else:
1972 else:
1892 matches = self.global_matches(text)
1973 matches = self.global_matches(text)
1893 return matches
1974 return matches
1894
1975
1895 def _default_arguments_from_docstring(self, doc):
1976 def _default_arguments_from_docstring(self, doc):
1896 """Parse the first line of docstring for call signature.
1977 """Parse the first line of docstring for call signature.
1897
1978
1898 Docstring should be of the form 'min(iterable[, key=func])\n'.
1979 Docstring should be of the form 'min(iterable[, key=func])\n'.
1899 It can also parse cython docstring of the form
1980 It can also parse cython docstring of the form
1900 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1981 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1901 """
1982 """
1902 if doc is None:
1983 if doc is None:
1903 return []
1984 return []
1904
1985
1905 # care only about the first line
1986 # care only about the first line
1906 line = doc.lstrip().splitlines()[0]
1987 line = doc.lstrip().splitlines()[0]
1907
1988
1908 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1989 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1909 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1990 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1910 sig = self.docstring_sig_re.search(line)
1991 sig = self.docstring_sig_re.search(line)
1911 if sig is None:
1992 if sig is None:
1912 return []
1993 return []
1913 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1994 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1914 sig = sig.groups()[0].split(',')
1995 sig = sig.groups()[0].split(',')
1915 ret = []
1996 ret = []
1916 for s in sig:
1997 for s in sig:
1917 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1998 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1918 ret += self.docstring_kwd_re.findall(s)
1999 ret += self.docstring_kwd_re.findall(s)
1919 return ret
2000 return ret
1920
2001
1921 def _default_arguments(self, obj):
2002 def _default_arguments(self, obj):
1922 """Return the list of default arguments of obj if it is callable,
2003 """Return the list of default arguments of obj if it is callable,
1923 or empty list otherwise."""
2004 or empty list otherwise."""
1924 call_obj = obj
2005 call_obj = obj
1925 ret = []
2006 ret = []
1926 if inspect.isbuiltin(obj):
2007 if inspect.isbuiltin(obj):
1927 pass
2008 pass
1928 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2009 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1929 if inspect.isclass(obj):
2010 if inspect.isclass(obj):
1930 #for cython embedsignature=True the constructor docstring
2011 #for cython embedsignature=True the constructor docstring
1931 #belongs to the object itself not __init__
2012 #belongs to the object itself not __init__
1932 ret += self._default_arguments_from_docstring(
2013 ret += self._default_arguments_from_docstring(
1933 getattr(obj, '__doc__', ''))
2014 getattr(obj, '__doc__', ''))
1934 # for classes, check for __init__,__new__
2015 # for classes, check for __init__,__new__
1935 call_obj = (getattr(obj, '__init__', None) or
2016 call_obj = (getattr(obj, '__init__', None) or
1936 getattr(obj, '__new__', None))
2017 getattr(obj, '__new__', None))
1937 # for all others, check if they are __call__able
2018 # for all others, check if they are __call__able
1938 elif hasattr(obj, '__call__'):
2019 elif hasattr(obj, '__call__'):
1939 call_obj = obj.__call__
2020 call_obj = obj.__call__
1940 ret += self._default_arguments_from_docstring(
2021 ret += self._default_arguments_from_docstring(
1941 getattr(call_obj, '__doc__', ''))
2022 getattr(call_obj, '__doc__', ''))
1942
2023
1943 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2024 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1944 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2025 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1945
2026
1946 try:
2027 try:
1947 sig = inspect.signature(obj)
2028 sig = inspect.signature(obj)
1948 ret.extend(k for k, v in sig.parameters.items() if
2029 ret.extend(k for k, v in sig.parameters.items() if
1949 v.kind in _keeps)
2030 v.kind in _keeps)
1950 except ValueError:
2031 except ValueError:
1951 pass
2032 pass
1952
2033
1953 return list(set(ret))
2034 return list(set(ret))
1954
2035
1955 @context_matcher()
2036 @context_matcher()
1956 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2037 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1957 """Match named parameters (kwargs) of the last open function."""
2038 """Match named parameters (kwargs) of the last open function."""
1958 matches = self.python_func_kw_matches(context.token)
2039 matches = self.python_func_kw_matches(context.token)
1959 return _convert_matcher_v1_result_to_v2(matches, type="param")
2040 return _convert_matcher_v1_result_to_v2(matches, type="param")
1960
2041
1961 def python_func_kw_matches(self, text):
2042 def python_func_kw_matches(self, text):
1962 """Match named parameters (kwargs) of the last open function.
2043 """Match named parameters (kwargs) of the last open function.
1963
2044
1964 DEPRECATED: Deprecated since 8.6. Use ``magic_config_matcher`` instead.
2045 .. deprecated:: 8.6
2046 You can use :meth:`python_func_kw_matcher` instead.
1965 """
2047 """
1966
2048
1967 if "." in text: # a parameter cannot be dotted
2049 if "." in text: # a parameter cannot be dotted
1968 return []
2050 return []
1969 try: regexp = self.__funcParamsRegex
2051 try: regexp = self.__funcParamsRegex
1970 except AttributeError:
2052 except AttributeError:
1971 regexp = self.__funcParamsRegex = re.compile(r'''
2053 regexp = self.__funcParamsRegex = re.compile(r'''
1972 '.*?(?<!\\)' | # single quoted strings or
2054 '.*?(?<!\\)' | # single quoted strings or
1973 ".*?(?<!\\)" | # double quoted strings or
2055 ".*?(?<!\\)" | # double quoted strings or
1974 \w+ | # identifier
2056 \w+ | # identifier
1975 \S # other characters
2057 \S # other characters
1976 ''', re.VERBOSE | re.DOTALL)
2058 ''', re.VERBOSE | re.DOTALL)
1977 # 1. find the nearest identifier that comes before an unclosed
2059 # 1. find the nearest identifier that comes before an unclosed
1978 # parenthesis before the cursor
2060 # parenthesis before the cursor
1979 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2061 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1980 tokens = regexp.findall(self.text_until_cursor)
2062 tokens = regexp.findall(self.text_until_cursor)
1981 iterTokens = reversed(tokens); openPar = 0
2063 iterTokens = reversed(tokens); openPar = 0
1982
2064
1983 for token in iterTokens:
2065 for token in iterTokens:
1984 if token == ')':
2066 if token == ')':
1985 openPar -= 1
2067 openPar -= 1
1986 elif token == '(':
2068 elif token == '(':
1987 openPar += 1
2069 openPar += 1
1988 if openPar > 0:
2070 if openPar > 0:
1989 # found the last unclosed parenthesis
2071 # found the last unclosed parenthesis
1990 break
2072 break
1991 else:
2073 else:
1992 return []
2074 return []
1993 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2075 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1994 ids = []
2076 ids = []
1995 isId = re.compile(r'\w+$').match
2077 isId = re.compile(r'\w+$').match
1996
2078
1997 while True:
2079 while True:
1998 try:
2080 try:
1999 ids.append(next(iterTokens))
2081 ids.append(next(iterTokens))
2000 if not isId(ids[-1]):
2082 if not isId(ids[-1]):
2001 ids.pop(); break
2083 ids.pop(); break
2002 if not next(iterTokens) == '.':
2084 if not next(iterTokens) == '.':
2003 break
2085 break
2004 except StopIteration:
2086 except StopIteration:
2005 break
2087 break
2006
2088
2007 # Find all named arguments already assigned to, so as to avoid suggesting
2089 # Find all named arguments already assigned to, so as to avoid suggesting
2008 # them again
2090 # them again
2009 usedNamedArgs = set()
2091 usedNamedArgs = set()
2010 par_level = -1
2092 par_level = -1
2011 for token, next_token in zip(tokens, tokens[1:]):
2093 for token, next_token in zip(tokens, tokens[1:]):
2012 if token == '(':
2094 if token == '(':
2013 par_level += 1
2095 par_level += 1
2014 elif token == ')':
2096 elif token == ')':
2015 par_level -= 1
2097 par_level -= 1
2016
2098
2017 if par_level != 0:
2099 if par_level != 0:
2018 continue
2100 continue
2019
2101
2020 if next_token != '=':
2102 if next_token != '=':
2021 continue
2103 continue
2022
2104
2023 usedNamedArgs.add(token)
2105 usedNamedArgs.add(token)
2024
2106
2025 argMatches = []
2107 argMatches = []
2026 try:
2108 try:
2027 callableObj = '.'.join(ids[::-1])
2109 callableObj = '.'.join(ids[::-1])
2028 namedArgs = self._default_arguments(eval(callableObj,
2110 namedArgs = self._default_arguments(eval(callableObj,
2029 self.namespace))
2111 self.namespace))
2030
2112
2031 # Remove used named arguments from the list, no need to show twice
2113 # Remove used named arguments from the list, no need to show twice
2032 for namedArg in set(namedArgs) - usedNamedArgs:
2114 for namedArg in set(namedArgs) - usedNamedArgs:
2033 if namedArg.startswith(text):
2115 if namedArg.startswith(text):
2034 argMatches.append("%s=" %namedArg)
2116 argMatches.append("%s=" %namedArg)
2035 except:
2117 except:
2036 pass
2118 pass
2037
2119
2038 return argMatches
2120 return argMatches
2039
2121
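The first step of the method above, finding the identifier that owns the last unclosed parenthesis, can be sketched standalone (a simplified tokenizer that ignores string literals; ``last_open_call`` is a hypothetical helper):

.. code::

    import re

    def last_open_call(text_until_cursor):
        # Walk the tokens right to left, tracking parenthesis depth, and stop at
        # the identifier immediately before the last unclosed '('.
        tokens = re.findall(r"\w+|\S", text_until_cursor)
        depth = 0
        for i in range(len(tokens) - 1, -1, -1):
            if tokens[i] == ")":
                depth -= 1
            elif tokens[i] == "(":
                depth += 1
                if depth > 0:
                    return tokens[i - 1] if i > 0 else None
        return None

    print(last_open_call("foo(1 + bar(x), pa"))  # -> 'foo'
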
2040 @staticmethod
2122 @staticmethod
2041 def _get_keys(obj: Any) -> List[Any]:
2123 def _get_keys(obj: Any) -> List[Any]:
2042 # Objects can define their own completions by defining an
2124 # Objects can define their own completions by defining an
2043 # _ipython_key_completions_() method.
2125 # _ipython_key_completions_() method.
2044 method = get_real_method(obj, '_ipython_key_completions_')
2126 method = get_real_method(obj, '_ipython_key_completions_')
2045 if method is not None:
2127 if method is not None:
2046 return method()
2128 return method()
2047
2129
2048 # Special case some common in-memory dict-like types
2130 # Special case some common in-memory dict-like types
2049 if isinstance(obj, dict) or\
2131 if isinstance(obj, dict) or\
2050 _safe_isinstance(obj, 'pandas', 'DataFrame'):
2132 _safe_isinstance(obj, 'pandas', 'DataFrame'):
2051 try:
2133 try:
2052 return list(obj.keys())
2134 return list(obj.keys())
2053 except Exception:
2135 except Exception:
2054 return []
2136 return []
2055 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2137 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2056 _safe_isinstance(obj, 'numpy', 'void'):
2138 _safe_isinstance(obj, 'numpy', 'void'):
2057 return obj.dtype.names or []
2139 return obj.dtype.names or []
2058 return []
2140 return []
2059
2141
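Objects can opt into dict-key style completion by implementing ``_ipython_key_completions_``, as checked above. A small sketch of a user-defined container (the class and attribute names are illustrative):

.. code::

    class Catalog:
        """Mapping-like object that advertises its own bracket completions."""

        def __init__(self, items):
            self._items = dict(items)

        def __getitem__(self, key):
            return self._items[key]

        def _ipython_key_completions_(self):
            # Called by the completer to obtain candidate keys for ``obj[<tab>``.
            return list(self._items)

    cat = Catalog({"alpha": 1, "beta": 2})
    # In an IPython session, typing ``cat["a`` and pressing <tab> should offer 'alpha'.
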
2060 @context_matcher()
2142 @context_matcher()
2061 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2143 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2062 """Match string keys in a dictionary, after e.g. ``foo[``."""
2144 """Match string keys in a dictionary, after e.g. ``foo[``."""
2063 matches = self.dict_key_matches(context.token)
2145 matches = self.dict_key_matches(context.token)
2064 return _convert_matcher_v1_result_to_v2(
2146 return _convert_matcher_v1_result_to_v2(
2065 matches, type="dict key", suppress_if_matches=True
2147 matches, type="dict key", suppress_if_matches=True
2066 )
2148 )
2067
2149
2068 def dict_key_matches(self, text: str) -> List[str]:
2150 def dict_key_matches(self, text: str) -> List[str]:
2069 """Match string keys in a dictionary, after e.g. ``foo[``.
2151 """Match string keys in a dictionary, after e.g. ``foo[``.
2070
2152
2071 DEPRECATED: Deprecated since 8.6. Use `dict_key_matcher` instead.
2153 .. deprecated:: 8.6
2154 You can use :meth:`dict_key_matcher` instead.
2072 """
2155 """
2073
2156
2074 if self.__dict_key_regexps is not None:
2157 if self.__dict_key_regexps is not None:
2075 regexps = self.__dict_key_regexps
2158 regexps = self.__dict_key_regexps
2076 else:
2159 else:
2077 dict_key_re_fmt = r'''(?x)
2160 dict_key_re_fmt = r'''(?x)
2078 ( # match dict-referring expression wrt greedy setting
2161 ( # match dict-referring expression wrt greedy setting
2079 %s
2162 %s
2080 )
2163 )
2081 \[ # open bracket
2164 \[ # open bracket
2082 \s* # and optional whitespace
2165 \s* # and optional whitespace
2083 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2166 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2084 ((?:[uUbB]? # string prefix (r not handled)
2167 ((?:[uUbB]? # string prefix (r not handled)
2085 (?:
2168 (?:
2086 '(?:[^']|(?<!\\)\\')*'
2169 '(?:[^']|(?<!\\)\\')*'
2087 |
2170 |
2088 "(?:[^"]|(?<!\\)\\")*"
2171 "(?:[^"]|(?<!\\)\\")*"
2089 )
2172 )
2090 \s*,\s*
2173 \s*,\s*
2091 )*)
2174 )*)
2092 ([uUbB]? # string prefix (r not handled)
2175 ([uUbB]? # string prefix (r not handled)
2093 (?: # unclosed string
2176 (?: # unclosed string
2094 '(?:[^']|(?<!\\)\\')*
2177 '(?:[^']|(?<!\\)\\')*
2095 |
2178 |
2096 "(?:[^"]|(?<!\\)\\")*
2179 "(?:[^"]|(?<!\\)\\")*
2097 )
2180 )
2098 )?
2181 )?
2099 $
2182 $
2100 '''
2183 '''
2101 regexps = self.__dict_key_regexps = {
2184 regexps = self.__dict_key_regexps = {
2102 False: re.compile(dict_key_re_fmt % r'''
2185 False: re.compile(dict_key_re_fmt % r'''
2103 # identifiers separated by .
2186 # identifiers separated by .
2104 (?!\d)\w+
2187 (?!\d)\w+
2105 (?:\.(?!\d)\w+)*
2188 (?:\.(?!\d)\w+)*
2106 '''),
2189 '''),
2107 True: re.compile(dict_key_re_fmt % '''
2190 True: re.compile(dict_key_re_fmt % '''
2108 .+
2191 .+
2109 ''')
2192 ''')
2110 }
2193 }
2111
2194
2112 match = regexps[self.greedy].search(self.text_until_cursor)
2195 match = regexps[self.greedy].search(self.text_until_cursor)
2113
2196
2114 if match is None:
2197 if match is None:
2115 return []
2198 return []
2116
2199
2117 expr, prefix0, prefix = match.groups()
2200 expr, prefix0, prefix = match.groups()
2118 try:
2201 try:
2119 obj = eval(expr, self.namespace)
2202 obj = eval(expr, self.namespace)
2120 except Exception:
2203 except Exception:
2121 try:
2204 try:
2122 obj = eval(expr, self.global_namespace)
2205 obj = eval(expr, self.global_namespace)
2123 except Exception:
2206 except Exception:
2124 return []
2207 return []
2125
2208
2126 keys = self._get_keys(obj)
2209 keys = self._get_keys(obj)
2127 if not keys:
2210 if not keys:
2128 return keys
2211 return keys
2129
2212
2130 extra_prefix = eval(prefix0) if prefix0 != '' else None
2213 extra_prefix = eval(prefix0) if prefix0 != '' else None
2131
2214
2132 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2215 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2133 if not matches:
2216 if not matches:
2134 return matches
2217 return matches
2135
2218
2136 # get the cursor position of
2219 # get the cursor position of
2137 # - the text being completed
2220 # - the text being completed
2138 # - the start of the key text
2221 # - the start of the key text
2139 # - the start of the completion
2222 # - the start of the completion
2140 text_start = len(self.text_until_cursor) - len(text)
2223 text_start = len(self.text_until_cursor) - len(text)
2141 if prefix:
2224 if prefix:
2142 key_start = match.start(3)
2225 key_start = match.start(3)
2143 completion_start = key_start + token_offset
2226 completion_start = key_start + token_offset
2144 else:
2227 else:
2145 key_start = completion_start = match.end()
2228 key_start = completion_start = match.end()
2146
2229
2147 # grab the leading prefix, to make sure all completions start with `text`
2230 # grab the leading prefix, to make sure all completions start with `text`
2148 if text_start > key_start:
2231 if text_start > key_start:
2149 leading = ''
2232 leading = ''
2150 else:
2233 else:
2151 leading = text[text_start:completion_start]
2234 leading = text[text_start:completion_start]
2152
2235
2153 # the index of the `[` character
2236 # the index of the `[` character
2154 bracket_idx = match.end(1)
2237 bracket_idx = match.end(1)
2155
2238
2156 # append closing quote and bracket as appropriate
2239 # append closing quote and bracket as appropriate
2157 # this is *not* appropriate if the opening quote or bracket is outside
2240 # this is *not* appropriate if the opening quote or bracket is outside
2158 # the text given to this method
2241 # the text given to this method
2159 suf = ''
2242 suf = ''
2160 continuation = self.line_buffer[len(self.text_until_cursor):]
2243 continuation = self.line_buffer[len(self.text_until_cursor):]
2161 if key_start > text_start and closing_quote:
2244 if key_start > text_start and closing_quote:
2162 # quotes were opened inside text, maybe close them
2245 # quotes were opened inside text, maybe close them
2163 if continuation.startswith(closing_quote):
2246 if continuation.startswith(closing_quote):
2164 continuation = continuation[len(closing_quote):]
2247 continuation = continuation[len(closing_quote):]
2165 else:
2248 else:
2166 suf += closing_quote
2249 suf += closing_quote
2167 if bracket_idx > text_start:
2250 if bracket_idx > text_start:
2168 # brackets were opened inside text, maybe close them
2251 # brackets were opened inside text, maybe close them
2169 if not continuation.startswith(']'):
2252 if not continuation.startswith(']'):
2170 suf += ']'
2253 suf += ']'
2171
2254
2172 return [leading + k + suf for k in matches]
2255 return [leading + k + suf for k in matches]
2173
2256
2174 @context_matcher()
2257 @context_matcher()
2175 def unicode_name_matcher(self, context):
2258 def unicode_name_matcher(self, context):
2259 """Same as :any:`unicode_name_matches`, but adapted to the new Matcher API."""
2176 fragment, matches = self.unicode_name_matches(context.token)
2260 fragment, matches = self.unicode_name_matches(context.token)
2177 return _convert_matcher_v1_result_to_v2(
2261 return _convert_matcher_v1_result_to_v2(
2178 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2262 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2179 )
2263 )
2180
2264
2181 @staticmethod
2265 @staticmethod
2182 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2266 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2183 """Match Latex-like syntax for unicode characters based
2267 """Match Latex-like syntax for unicode characters based
2184 on the name of the character.
2268 on the name of the character.
2185
2269
2186 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2270 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2187
2271
2188 Works only on valid Python 3 identifiers, or on combining characters that
2272 Works only on valid Python 3 identifiers, or on combining characters that
2189 will combine to form a valid identifier.
2273 will combine to form a valid identifier.
2190 """
2274 """
2191 slashpos = text.rfind('\\')
2275 slashpos = text.rfind('\\')
2192 if slashpos > -1:
2276 if slashpos > -1:
2193 s = text[slashpos+1:]
2277 s = text[slashpos+1:]
2194 try :
2278 try :
2195 unic = unicodedata.lookup(s)
2279 unic = unicodedata.lookup(s)
2196 # allow combining chars
2280 # allow combining chars
2197 if ('a'+unic).isidentifier():
2281 if ('a'+unic).isidentifier():
2198 return '\\'+s,[unic]
2282 return '\\'+s,[unic]
2199 except KeyError:
2283 except KeyError:
2200 pass
2284 pass
2201 return '', []
2285 return '', []
2202
2286
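The same lookup can be exercised directly with :mod:`unicodedata`; a minimal sketch of the idea (the ``unicode_by_name`` helper is hypothetical):

.. code::

    import unicodedata

    def unicode_by_name(fragment):
        # Take the text after the last backslash, look the character up by its
        # Unicode name, and keep it only if it can appear in an identifier.
        name = fragment.rsplit("\\", 1)[-1]
        try:
            char = unicodedata.lookup(name)
        except KeyError:
            return []
        return [char] if ("a" + char).isidentifier() else []

    print(unicode_by_name("\\GREEK SMALL LETTER ETA"))  # ['η']
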
2203 @context_matcher()
2287 @context_matcher()
2204 def latex_name_matcher(self, context):
2288 def latex_name_matcher(self, context):
2205 """Match Latex syntax for unicode characters.
2289 """Match Latex syntax for unicode characters.
2206
2290
2207 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2291 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2208 """
2292 """
2209 fragment, matches = self.latex_matches(context.token)
2293 fragment, matches = self.latex_matches(context.token)
2210 return _convert_matcher_v1_result_to_v2(
2294 return _convert_matcher_v1_result_to_v2(
2211 matches, type="latex", fragment=fragment, suppress_if_matches=True
2295 matches, type="latex", fragment=fragment, suppress_if_matches=True
2212 )
2296 )
2213
2297
2214 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2298 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2215 """Match Latex syntax for unicode characters.
2299 """Match Latex syntax for unicode characters.
2216
2300
2217 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2301 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2218
2302
2219 DEPRECATED: Deprecated since 8.6. Use `latex_matcher` instead.
2303 .. deprecated:: 8.6
2304 You can use :meth:`latex_name_matcher` instead.
2220 """
2305 """
2221 slashpos = text.rfind('\\')
2306 slashpos = text.rfind('\\')
2222 if slashpos > -1:
2307 if slashpos > -1:
2223 s = text[slashpos:]
2308 s = text[slashpos:]
2224 if s in latex_symbols:
2309 if s in latex_symbols:
2225 # Try to complete a full latex symbol to unicode
2310 # Try to complete a full latex symbol to unicode
2226 # \\alpha -> α
2311 # \\alpha -> α
2227 return s, [latex_symbols[s]]
2312 return s, [latex_symbols[s]]
2228 else:
2313 else:
2229 # If a user has partially typed a latex symbol, give them
2314 # If a user has partially typed a latex symbol, give them
2230 # a full list of options \al -> [\aleph, \alpha]
2315 # a full list of options \al -> [\aleph, \alpha]
2231 matches = [k for k in latex_symbols if k.startswith(s)]
2316 matches = [k for k in latex_symbols if k.startswith(s)]
2232 if matches:
2317 if matches:
2233 return s, matches
2318 return s, matches
2234 return '', ()
2319 return '', ()
2235
2320
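The mapping used above is importable on its own; a short sketch assuming a standard IPython installation (the exact list of matching keys will vary):

.. code::

    from IPython.core.latex_symbols import latex_symbols

    # A full latex name maps straight to its unicode character ...
    print(latex_symbols["\\alpha"])  # 'α'
    # ... while a partial name can be matched by prefix, as latex_matches() does.
    print(sorted(k for k in latex_symbols if k.startswith("\\alp")))
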
2236 @context_matcher()
2321 @context_matcher()
2237 def custom_completer_matcher(self, context):
2322 def custom_completer_matcher(self, context):
2323 """Dispatch custom completer.
2324
2325 If a match is found, suppresses all other matchers except for Jedi.
2326 """
2238 matches = self.dispatch_custom_completer(context.token) or []
2327 matches = self.dispatch_custom_completer(context.token) or []
2239 result = _convert_matcher_v1_result_to_v2(
2328 result = _convert_matcher_v1_result_to_v2(
2240 matches, type="<unknown>", suppress_if_matches=True
2329 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
2241 )
2330 )
2242 result["ordered"] = True
2331 result["ordered"] = True
2243 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2332 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2244 return result
2333 return result
2245
2334
2246 def dispatch_custom_completer(self, text):
2335 def dispatch_custom_completer(self, text):
2247 """
2336 """
2248 DEPRECATED: Deprecated since 8.6. Use `custom_completer_matcher` instead.
2337 .. deprecated:: 8.6
2338 You can use :meth:`custom_completer_matcher` instead.
2249 """
2339 """
2250 if not self.custom_completers:
2340 if not self.custom_completers:
2251 return
2341 return
2252
2342
2253 line = self.line_buffer
2343 line = self.line_buffer
2254 if not line.strip():
2344 if not line.strip():
2255 return None
2345 return None
2256
2346
2257 # Create a little structure to pass all the relevant information about
2347 # Create a little structure to pass all the relevant information about
2258 # the current completion to any custom completer.
2348 # the current completion to any custom completer.
2259 event = SimpleNamespace()
2349 event = SimpleNamespace()
2260 event.line = line
2350 event.line = line
2261 event.symbol = text
2351 event.symbol = text
2262 cmd = line.split(None,1)[0]
2352 cmd = line.split(None,1)[0]
2263 event.command = cmd
2353 event.command = cmd
2264 event.text_until_cursor = self.text_until_cursor
2354 event.text_until_cursor = self.text_until_cursor
2265
2355
2266 # for foo etc, try also to find completer for %foo
2356 # for foo etc, try also to find completer for %foo
2267 if not cmd.startswith(self.magic_escape):
2357 if not cmd.startswith(self.magic_escape):
2268 try_magic = self.custom_completers.s_matches(
2358 try_magic = self.custom_completers.s_matches(
2269 self.magic_escape + cmd)
2359 self.magic_escape + cmd)
2270 else:
2360 else:
2271 try_magic = []
2361 try_magic = []
2272
2362
2273 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2363 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2274 try_magic,
2364 try_magic,
2275 self.custom_completers.flat_matches(self.text_until_cursor)):
2365 self.custom_completers.flat_matches(self.text_until_cursor)):
2276 try:
2366 try:
2277 res = c(event)
2367 res = c(event)
2278 if res:
2368 if res:
2279 # first, try case sensitive match
2369 # first, try case sensitive match
2280 withcase = [r for r in res if r.startswith(text)]
2370 withcase = [r for r in res if r.startswith(text)]
2281 if withcase:
2371 if withcase:
2282 return withcase
2372 return withcase
2283 # if none, then case insensitive ones are ok too
2373 # if none, then case insensitive ones are ok too
2284 text_low = text.lower()
2374 text_low = text.lower()
2285 return [r for r in res if r.lower().startswith(text_low)]
2375 return [r for r in res if r.lower().startswith(text_low)]
2286 except TryNext:
2376 except TryNext:
2287 pass
2377 pass
2288 except KeyboardInterrupt:
2378 except KeyboardInterrupt:
2289 """
2379 """
2290 If a custom completer takes too long,
2380 If a custom completer takes too long,
2291 let the keyboard interrupt abort it and return nothing.
2381 let the keyboard interrupt abort it and return nothing.
2292 """
2382 """
2293 break
2383 break
2294
2384
2295 return None
2385 return None
2296
2386
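Custom completers are typically registered through ``set_hook('complete_command', ...)``; a hypothetical example for a ``%tool`` magic (the magic, function and candidate names are made up, and this assumes an interactive IPython session):

.. code::

    def tool_completer(self, event):
        # ``event`` carries .line, .symbol, .command and .text_until_cursor,
        # mirroring the SimpleNamespace built above.
        return ["build", "test", "deploy"]

    ip = get_ipython()
    ip.set_hook("complete_command", tool_completer, str_key="%tool")
    # Typing '%tool b' and pressing <tab> should now offer 'build'.
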
2297 def completions(self, text: str, offset: int)->Iterator[Completion]:
2387 def completions(self, text: str, offset: int)->Iterator[Completion]:
2298 """
2388 """
2299 Returns an iterator over the possible completions
2389 Returns an iterator over the possible completions
2300
2390
2301 .. warning::
2391 .. warning::
2302
2392
2303 Unstable
2393 Unstable
2304
2394
2305 This function is unstable, API may change without warning.
2395 This function is unstable, API may change without warning.
2306 It will also raise unless used in the proper context manager.
2396 It will also raise unless used in the proper context manager.
2307
2397
2308 Parameters
2398 Parameters
2309 ----------
2399 ----------
2310 text : str
2400 text : str
2311 Full text of the current input, multi line string.
2401 Full text of the current input, multi line string.
2312 offset : int
2402 offset : int
2313 Integer representing the position of the cursor in ``text``. Offset
2403 Integer representing the position of the cursor in ``text``. Offset
2314 is 0-based.
2404 is 0-based.
2315
2405
2316 Yields
2406 Yields
2317 ------
2407 ------
2318 Completion
2408 Completion
2319
2409
2320 Notes
2410 Notes
2321 -----
2411 -----
2322 The cursor on a text can either be seen as being "in between"
2412 The cursor on a text can either be seen as being "in between"
2323 characters or "On" a character depending on the interface visible to
2413 characters or "On" a character depending on the interface visible to
2324 the user. For consistency, the cursor being "in between" characters X
2414 the user. For consistency, the cursor being "in between" characters X
2325 and Y is equivalent to the cursor being "on" character Y, that is to say
2415 and Y is equivalent to the cursor being "on" character Y, that is to say
2326 the character the cursor is on is considered as being after the cursor.
2416 the character the cursor is on is considered as being after the cursor.
2327
2417
2328 Combining characters may span more than one position in the
2418 Combining characters may span more than one position in the
2329 text.
2419 text.
2330
2420
2331 .. note::
2421 .. note::
2332
2422
2333 If ``IPCompleter.debug`` is :any:`True` this will yield a ``--jedi/ipython--``
2423 If ``IPCompleter.debug`` is :any:`True` this will yield a ``--jedi/ipython--``
2334 fake Completion token to distinguish completion returned by Jedi
2424 fake Completion token to distinguish completion returned by Jedi
2335 and usual IPython completion.
2425 and usual IPython completion.
2336
2426
2337 .. note::
2427 .. note::
2338
2428
2339 Completions are not completely deduplicated yet. If identical
2429 Completions are not completely deduplicated yet. If identical
2340 completions are coming from different sources this function does not
2430 completions are coming from different sources this function does not
2341 ensure that each completion object will only be present once.
2431 ensure that each completion object will only be present once.
2342 """
2432 """
2343 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2433 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2344 "It may change without warnings. "
2434 "It may change without warnings. "
2345 "Use in corresponding context manager.",
2435 "Use in corresponding context manager.",
2346 category=ProvisionalCompleterWarning, stacklevel=2)
2436 category=ProvisionalCompleterWarning, stacklevel=2)
2347
2437
2348 seen = set()
2438 seen = set()
2349 profiler:Optional[cProfile.Profile]
2439 profiler:Optional[cProfile.Profile]
2350 try:
2440 try:
2351 if self.profile_completions:
2441 if self.profile_completions:
2352 import cProfile
2442 import cProfile
2353 profiler = cProfile.Profile()
2443 profiler = cProfile.Profile()
2354 profiler.enable()
2444 profiler.enable()
2355 else:
2445 else:
2356 profiler = None
2446 profiler = None
2357
2447
2358 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2448 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2359 if c and (c in seen):
2449 if c and (c in seen):
2360 continue
2450 continue
2361 yield c
2451 yield c
2362 seen.add(c)
2452 seen.add(c)
2363 except KeyboardInterrupt:
2453 except KeyboardInterrupt:
2364 """if completions take too long and the user sends a keyboard interrupt,
2454 """if completions take too long and the user sends a keyboard interrupt,
2365 do not crash and return ASAP. """
2455 do not crash and return ASAP. """
2366 pass
2456 pass
2367 finally:
2457 finally:
2368 if profiler is not None:
2458 if profiler is not None:
2369 profiler.disable()
2459 profiler.disable()
2370 ensure_dir_exists(self.profiler_output_dir)
2460 ensure_dir_exists(self.profiler_output_dir)
2371 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2461 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2372 print("Writing profiler output to", output_path)
2462 print("Writing profiler output to", output_path)
2373 profiler.dump_stats(output_path)
2463 profiler.dump_stats(output_path)
2374
2464
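A sketch of driving this provisional API from an interactive session (assumes ``get_ipython()`` is available; the output will vary with the environment):

.. code::

    from IPython.core.completer import provisionalcompleter

    ip = get_ipython()
    code = "import os\nos.pa"
    with provisionalcompleter():
        # completions() must be used inside this context manager, as warned above.
        for comp in ip.Completer.completions(code, len(code)):
            print(comp.text, comp.type)
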
2375 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2465 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2376 """
2466 """
2377 Core completion module. Same signature as :any:`completions`, with the
2467 Core completion module. Same signature as :any:`completions`, with the
2378 extra `timeout` parameter (in seconds).
2468 extra `timeout` parameter (in seconds).
2379
2469
2380 Computing jedi's completion ``.type`` can be quite expensive (it is a
2470 Computing jedi's completion ``.type`` can be quite expensive (it is a
2381 lazy property) and can require some warm-up, more warm-up than just
2471 lazy property) and can require some warm-up, more warm-up than just
2382 computing the ``name`` of a completion. The warm-up can be:
2472 computing the ``name`` of a completion. The warm-up can be:
2383
2473
2384 - Long warm-up the first time a module is encountered after
2474 - Long warm-up the first time a module is encountered after
2385 install/update: actually build parse/inference tree.
2475 install/update: actually build parse/inference tree.
2386
2476
2387 - first time the module is encountered in a session: load tree from
2477 - first time the module is encountered in a session: load tree from
2388 disk.
2478 disk.
2389
2479
2390 We don't want to block completions for tens of seconds so we give the
2480 We don't want to block completions for tens of seconds so we give the
2391 completer a "budget" of ``_timeout`` seconds per invocation to compute
2481 completer a "budget" of ``_timeout`` seconds per invocation to compute
2392 completion types; the completions that have not yet been computed will
2482 completion types; the completions that have not yet been computed will
2393 be marked as "unknown" and will have a chance to be computed next round
2483 be marked as "unknown" and will have a chance to be computed next round
2394 as things get cached.
2484 as things get cached.
2395
2485
2396 Keep in mind that Jedi is not the only thing processing the completion, so
2486 Keep in mind that Jedi is not the only thing processing the completion, so
2397 keep the timeout short-ish: if we take more than 0.3 seconds we still
2487 keep the timeout short-ish: if we take more than 0.3 seconds we still
2398 have lots of processing to do.
2488 have lots of processing to do.
2399
2489
2400 """
2490 """
2401 deadline = time.monotonic() + _timeout
2491 deadline = time.monotonic() + _timeout
2402
2492
2403 before = full_text[:offset]
2493 before = full_text[:offset]
2404 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2494 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2405
2495
2406 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2496 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2407
2497
2408 results = self._complete(
2498 results = self._complete(
2409 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2499 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2410 )
2500 )
2411 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2501 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2412 identifier: result
2502 identifier: result
2413 for identifier, result in results.items()
2503 for identifier, result in results.items()
2414 if identifier != jedi_matcher_id
2504 if identifier != jedi_matcher_id
2415 }
2505 }
2416
2506
2417 jedi_matches = (
2507 jedi_matches = (
2418 cast(results[jedi_matcher_id], _JediMatcherResult)["completions"]
2508 cast(results[jedi_matcher_id], _JediMatcherResult)["completions"]
2419 if jedi_matcher_id in results
2509 if jedi_matcher_id in results
2420 else ()
2510 else ()
2421 )
2511 )
2422
2512
2423 iter_jm = iter(jedi_matches)
2513 iter_jm = iter(jedi_matches)
2424 if _timeout:
2514 if _timeout:
2425 for jm in iter_jm:
2515 for jm in iter_jm:
2426 try:
2516 try:
2427 type_ = jm.type
2517 type_ = jm.type
2428 except Exception:
2518 except Exception:
2429 if self.debug:
2519 if self.debug:
2430 print("Error in Jedi getting type of ", jm)
2520 print("Error in Jedi getting type of ", jm)
2431 type_ = None
2521 type_ = None
2432 delta = len(jm.name_with_symbols) - len(jm.complete)
2522 delta = len(jm.name_with_symbols) - len(jm.complete)
2433 if type_ == 'function':
2523 if type_ == 'function':
2434 signature = _make_signature(jm)
2524 signature = _make_signature(jm)
2435 else:
2525 else:
2436 signature = ''
2526 signature = ''
2437 yield Completion(start=offset - delta,
2527 yield Completion(start=offset - delta,
2438 end=offset,
2528 end=offset,
2439 text=jm.name_with_symbols,
2529 text=jm.name_with_symbols,
2440 type=type_,
2530 type=type_,
2441 signature=signature,
2531 signature=signature,
2442 _origin='jedi')
2532 _origin='jedi')
2443
2533
2444 if time.monotonic() > deadline:
2534 if time.monotonic() > deadline:
2445 break
2535 break
2446
2536
2447 for jm in iter_jm:
2537 for jm in iter_jm:
2448 delta = len(jm.name_with_symbols) - len(jm.complete)
2538 delta = len(jm.name_with_symbols) - len(jm.complete)
2449 yield Completion(
2539 yield Completion(
2450 start=offset - delta,
2540 start=offset - delta,
2451 end=offset,
2541 end=offset,
2452 text=jm.name_with_symbols,
2542 text=jm.name_with_symbols,
2453 type=_UNKNOWN_TYPE, # don't compute type for speed
2543 type=_UNKNOWN_TYPE, # don't compute type for speed
2454 _origin="jedi",
2544 _origin="jedi",
2455 signature="",
2545 signature="",
2456 )
2546 )
2457
2547
2458 # TODO:
2548 # TODO:
2459 # Suppress this, right now just for debug.
2549 # Suppress this, right now just for debug.
2460 if jedi_matches and non_jedi_results and self.debug:
2550 if jedi_matches and non_jedi_results and self.debug:
2461 some_start_offset = before.rfind(
2551 some_start_offset = before.rfind(
2462 next(iter(non_jedi_results.values()))["matched_fragment"]
2552 next(iter(non_jedi_results.values()))["matched_fragment"]
2463 )
2553 )
2464 yield Completion(
2554 yield Completion(
2465 start=some_start_offset,
2555 start=some_start_offset,
2466 end=offset,
2556 end=offset,
2467 text="--jedi/ipython--",
2557 text="--jedi/ipython--",
2468 _origin="debug",
2558 _origin="debug",
2469 type="none",
2559 type="none",
2470 signature="",
2560 signature="",
2471 )
2561 )
2472
2562
2473 ordered = []
2563 ordered = []
2474 sortable = []
2564 sortable = []
2475
2565
2476 for origin, result in non_jedi_results.items():
2566 for origin, result in non_jedi_results.items():
2477 matched_text = result["matched_fragment"]
2567 matched_text = result["matched_fragment"]
2478 start_offset = before.rfind(matched_text)
2568 start_offset = before.rfind(matched_text)
2479 is_ordered = result.get("ordered", False)
2569 is_ordered = result.get("ordered", False)
2480 container = ordered if is_ordered else sortable
2570 container = ordered if is_ordered else sortable
2481
2571
2482 # I'm unsure if this is always true, so let's assert and see if it
2572 # I'm unsure if this is always true, so let's assert and see if it
2483 # crashes
2573 # crashes
2484 assert before.endswith(matched_text)
2574 assert before.endswith(matched_text)
2485
2575
2486 for simple_completion in result["completions"]:
2576 for simple_completion in result["completions"]:
2487 completion = Completion(
2577 completion = Completion(
2488 start=start_offset,
2578 start=start_offset,
2489 end=offset,
2579 end=offset,
2490 text=simple_completion.text,
2580 text=simple_completion.text,
2491 _origin=origin,
2581 _origin=origin,
2492 signature="",
2582 signature="",
2493 type=simple_completion.type or _UNKNOWN_TYPE,
2583 type=simple_completion.type or _UNKNOWN_TYPE,
2494 )
2584 )
2495 container.append(completion)
2585 container.append(completion)
2496
2586
2497 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2587 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2498 :MATCHES_LIMIT
2588 :MATCHES_LIMIT
2499 ]
2589 ]
2500
2590
2501 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2591 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2502 """Find completions for the given text and line context.
2592 """Find completions for the given text and line context.
2503
2593
2504 Note that both the text and the line_buffer are optional, but at least
2594 Note that both the text and the line_buffer are optional, but at least
2505 one of them must be given.
2595 one of them must be given.
2506
2596
2507 Parameters
2597 Parameters
2508 ----------
2598 ----------
2509 text : string, optional
2599 text : string, optional
2510 Text to perform the completion on. If not given, the line buffer
2600 Text to perform the completion on. If not given, the line buffer
2511 is split using the instance's CompletionSplitter object.
2601 is split using the instance's CompletionSplitter object.
2512 line_buffer : string, optional
2602 line_buffer : string, optional
2513 If not given, the completer attempts to obtain the current line
2603 If not given, the completer attempts to obtain the current line
2514 buffer via readline. This keyword allows clients which are
2604 buffer via readline. This keyword allows clients which are
2515 requesting text completions in non-readline contexts to inform
2605 requesting text completions in non-readline contexts to inform
2516 the completer of the entire text.
2606 the completer of the entire text.
2517 cursor_pos : int, optional
2607 cursor_pos : int, optional
2518 Index of the cursor in the full line buffer. Should be provided by
2608 Index of the cursor in the full line buffer. Should be provided by
2519 remote frontends where the kernel has no access to frontend state.
2609 remote frontends where the kernel has no access to frontend state.
2520
2610
2521 Returns
2611 Returns
2522 -------
2612 -------
2523 Tuple of two items:
2613 Tuple of two items:
2524 text : str
2614 text : str
2525 Text that was actually used in the completion.
2615 Text that was actually used in the completion.
2526 matches : list
2616 matches : list
2527 A list of completion matches.
2617 A list of completion matches.
2528
2618
2529 Notes
2619 Notes
2530 -----
2620 -----
2531 This API is likely to be deprecated and replaced by
2621 This API is likely to be deprecated and replaced by
2532 :any:`IPCompleter.completions` in the future.
2622 :any:`IPCompleter.completions` in the future.
2533
2623
2534 """
2624 """
2535 warnings.warn('`Completer.complete` is pending deprecation since '
2625 warnings.warn('`Completer.complete` is pending deprecation since '
2536 'IPython 6.0 and will be replaced by `Completer.completions`.',
2626 'IPython 6.0 and will be replaced by `Completer.completions`.',
2537 PendingDeprecationWarning)
2627 PendingDeprecationWarning)
2538 # potential todo, FOLD the 3rd throw away argument of _complete
2628 # potential todo, FOLD the 3rd throw away argument of _complete
2539 # into the first two.
2629 # into the first two.
2540 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2630 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2541 # TODO: should we deprecate now, or does it stay?
2631 # TODO: should we deprecate now, or does it stay?
2542
2632
2543 results = self._complete(
2633 results = self._complete(
2544 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2634 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2545 )
2635 )
2546
2636
2547 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2637 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2548
2638
2549 return self._arrange_and_extract(
2639 return self._arrange_and_extract(
2550 results,
2640 results,
2551 # TODO: can we confirm that excluding Jedi here was a deliberate choice in the previous version?
2641 # TODO: can we confirm that excluding Jedi here was a deliberate choice in the previous version?
2552 skip_matchers={jedi_matcher_id},
2642 skip_matchers={jedi_matcher_id},
2553 # this API does not support different start/end positions (fragments of token).
2643 # this API does not support different start/end positions (fragments of token).
2554 abort_if_offset_changes=True,
2644 abort_if_offset_changes=True,
2555 )
2645 )
2556
2646
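A minimal usage sketch of this pending-deprecation API, and of its provisional replacement mentioned in the Notes above, assuming a running interactive shell obtained via `get_ipython()`:

.. code::

    # Hedged sketch: drive the legacy `complete` API from a live shell.
    from IPython import get_ipython
    from IPython.core.completer import provisionalcompleter

    ip = get_ipython()                      # assumes an interactive IPython session
    text, matches = ip.Completer.complete(line_buffer="import o", cursor_pos=8)
    # `text` is the token actually completed (here "o"); `matches` is a flat
    # list of strings such as "os" or "operator".

    # The richer, still-provisional API this one is expected to be replaced by:
    with provisionalcompleter():
        completions = list(ip.Completer.completions("import o", 8))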
2557 def _arrange_and_extract(
2647 def _arrange_and_extract(
2558 self,
2648 self,
2559 results: Dict[str, MatcherResult],
2649 results: Dict[str, MatcherResult],
2560 skip_matchers: Set[str],
2650 skip_matchers: Set[str],
2561 abort_if_offset_changes: bool,
2651 abort_if_offset_changes: bool,
2562 ):
2652 ):
2563
2653
2564 sortable = []
2654 sortable = []
2565 ordered = []
2655 ordered = []
2566 most_recent_fragment = None
2656 most_recent_fragment = None
2567 for identifier, result in results.items():
2657 for identifier, result in results.items():
2568 if identifier in skip_matchers:
2658 if identifier in skip_matchers:
2569 continue
2659 continue
2570 if not result["completions"]:
2660 if not result["completions"]:
2571 continue
2661 continue
2572 if not most_recent_fragment:
2662 if not most_recent_fragment:
2573 most_recent_fragment = result["matched_fragment"]
2663 most_recent_fragment = result["matched_fragment"]
2574 if (
2664 if (
2575 abort_if_offset_changes
2665 abort_if_offset_changes
2576 and result["matched_fragment"] != most_recent_fragment
2666 and result["matched_fragment"] != most_recent_fragment
2577 ):
2667 ):
2578 break
2668 break
2579 if result.get("ordered", False):
2669 if result.get("ordered", False):
2580 ordered.extend(result["completions"])
2670 ordered.extend(result["completions"])
2581 else:
2671 else:
2582 sortable.extend(result["completions"])
2672 sortable.extend(result["completions"])
2583
2673
2584 if not most_recent_fragment:
2674 if not most_recent_fragment:
2585 most_recent_fragment = "" # to satisfy typechecker (and just in case)
2675 most_recent_fragment = "" # to satisfy typechecker (and just in case)
2586
2676
2587 return most_recent_fragment, [
2677 return most_recent_fragment, [
2588 m.text for m in self._deduplicate(ordered + self._sort(sortable))
2678 m.text for m in self._deduplicate(ordered + self._sort(sortable))
2589 ]
2679 ]
2590
2680
2591 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2681 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2592 full_text=None) -> _CompleteResult:
2682 full_text=None) -> _CompleteResult:
2593 """
2683 """
2594 Like complete but can also return raw jedi completions as well as the
2684 Like complete but can also return raw jedi completions as well as the
2595 origin of the completion text. This could (and should) be made much
2685 origin of the completion text. This could (and should) be made much
2596 cleaner but that will be simpler once we drop the old (and stateful)
2686 cleaner but that will be simpler once we drop the old (and stateful)
2597 :any:`complete` API.
2687 :any:`complete` API.
2598
2688
2599 With the current provisional API, cursor_pos acts (depending on the
2689 With the current provisional API, cursor_pos acts (depending on the
2600 caller) as the offset in ``text`` or ``line_buffer``, or as the
2690 caller) as the offset in ``text`` or ``line_buffer``, or as the
2601 ``column`` when passing multiline strings; this could/should be renamed
2691 ``column`` when passing multiline strings; this could/should be renamed
2602 but would add extra noise.
2692 but would add extra noise.
2603
2693
2604 Parameters
2694 Parameters
2605 ----------
2695 ----------
2606 cursor_line
2696 cursor_line
2607 Index of the line the cursor is on. 0 indexed.
2697 Index of the line the cursor is on. 0 indexed.
2608 cursor_pos
2698 cursor_pos
2609 Position of the cursor in the current line/line_buffer/text. 0
2699 Position of the cursor in the current line/line_buffer/text. 0
2610 indexed.
2700 indexed.
2611 line_buffer : optional, str
2701 line_buffer : optional, str
2612 The current line the cursor is in; this is mostly for legacy
2702 The current line the cursor is in; this is mostly for legacy
2613 reasons, as readline could only give us the single current line.
2703 reasons, as readline could only give us the single current line.
2614 Prefer `full_text`.
2704 Prefer `full_text`.
2615 text : str
2705 text : str
2616 The current "token" the cursor is in, mostly also for historical
2706 The current "token" the cursor is in, mostly also for historical
2617 reasons, as the completer would trigger only after the current line
2707 reasons, as the completer would trigger only after the current line
2618 was parsed.
2708 was parsed.
2619 full_text : str
2709 full_text : str
2620 Full text of the current cell.
2710 Full text of the current cell.
2621
2711
2622 Returns
2712 Returns
2623 -------
2713 -------
2624 An ordered dictionary where keys are identifiers of completion
2714 An ordered dictionary where keys are identifiers of completion
2625 matchers and values are ``MatcherResult``s.
2715 matchers and values are ``MatcherResult``s.
2626 """
2716 """
2627
2717
2628 # if the cursor position isn't given, the only sane assumption we can
2718 # if the cursor position isn't given, the only sane assumption we can
2629 # make is that it's at the end of the line (the common case)
2719 # make is that it's at the end of the line (the common case)
2630 if cursor_pos is None:
2720 if cursor_pos is None:
2631 cursor_pos = len(line_buffer) if text is None else len(text)
2721 cursor_pos = len(line_buffer) if text is None else len(text)
2632
2722
2633 if self.use_main_ns:
2723 if self.use_main_ns:
2634 self.namespace = __main__.__dict__
2724 self.namespace = __main__.__dict__
2635
2725
2636 # if text is either None or an empty string, rely on the line buffer
2726 # if text is either None or an empty string, rely on the line buffer
2637 if (not line_buffer) and full_text:
2727 if (not line_buffer) and full_text:
2638 line_buffer = full_text.split('\n')[cursor_line]
2728 line_buffer = full_text.split('\n')[cursor_line]
2639 if not text: # issue #11508: check line_buffer before calling split_line
2729 if not text: # issue #11508: check line_buffer before calling split_line
2640 text = (
2730 text = (
2641 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
2731 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
2642 )
2732 )
2643
2733
2644 # If no line buffer is given, assume the input text is all there was
2734 # If no line buffer is given, assume the input text is all there was
2645 if line_buffer is None:
2735 if line_buffer is None:
2646 line_buffer = text
2736 line_buffer = text
2647
2737
2648 # deprecated - do not use `line_buffer` in new code.
2738 # deprecated - do not use `line_buffer` in new code.
2649 self.line_buffer = line_buffer
2739 self.line_buffer = line_buffer
2650 self.text_until_cursor = self.line_buffer[:cursor_pos]
2740 self.text_until_cursor = self.line_buffer[:cursor_pos]
2651
2741
2652 if not full_text:
2742 if not full_text:
2653 full_text = line_buffer
2743 full_text = line_buffer
2654
2744
2655 context = CompletionContext(
2745 context = CompletionContext(
2656 full_text=full_text,
2746 full_text=full_text,
2657 cursor_position=cursor_pos,
2747 cursor_position=cursor_pos,
2658 cursor_line=cursor_line,
2748 cursor_line=cursor_line,
2659 token=text,
2749 token=text,
2660 limit=MATCHES_LIMIT,
2750 limit=MATCHES_LIMIT,
2661 )
2751 )
2662
2752
2663 # Start with a clean slate of completions
2753 # Start with a clean slate of completions
2664 results = {}
2754 results = {}
2665
2755
2666 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2756 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2667
2757
2668 suppressed_matchers = set()
2758 suppressed_matchers = set()
2669
2759
2670 matchers = {
2760 matchers = {
2671 _get_matcher_id(matcher): matcher
2761 _get_matcher_id(matcher): matcher
2672 for matcher in sorted(
2762 for matcher in sorted(
2673 self.matchers, key=_get_matcher_priority, reverse=True
2763 self.matchers, key=_get_matcher_priority, reverse=True
2674 )
2764 )
2675 }
2765 }
2676
2766
2677 for matcher_id, matcher in matchers.items():
2767 for matcher_id, matcher in matchers.items():
2678 api_version = _get_matcher_api_version(matcher)
2768 api_version = _get_matcher_api_version(matcher)
2679 matcher_id = _get_matcher_id(matcher)
2769 matcher_id = _get_matcher_id(matcher)
2680
2770
2681 if matcher_id in self.disable_matchers:
2771 if matcher_id in self.disable_matchers:
2682 continue
2772 continue
2683
2773
2684 if matcher_id in results:
2774 if matcher_id in results:
2685 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
2775 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
2686
2776
2687 if matcher_id in suppressed_matchers:
2777 if matcher_id in suppressed_matchers:
2688 continue
2778 continue
2689
2779
2690 try:
2780 try:
2691 if api_version == 1:
2781 if api_version == 1:
2692 result = _convert_matcher_v1_result_to_v2(
2782 result = _convert_matcher_v1_result_to_v2(
2693 matcher(text), type=_UNKNOWN_TYPE
2783 matcher(text), type=_UNKNOWN_TYPE
2694 )
2784 )
2695 elif api_version == 2:
2785 elif api_version == 2:
2696 result = cast(MatcherAPIv2, matcher)(context)
2786 result = cast(MatcherAPIv2, matcher)(context)
2697 else:
2787 else:
2698 raise ValueError(f"Unsupported API version {api_version}")
2788 raise ValueError(f"Unsupported API version {api_version}")
2699 except:
2789 except:
2700 # Show the ugly traceback if the matcher causes an
2790 # Show the ugly traceback if the matcher causes an
2701 # exception, but do NOT crash the kernel!
2791 # exception, but do NOT crash the kernel!
2702 sys.excepthook(*sys.exc_info())
2792 sys.excepthook(*sys.exc_info())
2703 continue
2793 continue
2704
2794
2705 # set default value for matched fragment if suffix was not selected.
2795 # set default value for matched fragment if suffix was not selected.
2706 result["matched_fragment"] = result.get("matched_fragment", context.token)
2796 result["matched_fragment"] = result.get("matched_fragment", context.token)
2707
2797
2708 if not suppressed_matchers:
2798 if not suppressed_matchers:
2709 suppression_recommended = result.get("suppress", False)
2799 suppression_recommended = result.get("suppress", False)
2710
2800
2711 suppression_config = (
2801 suppression_config = (
2712 self.suppress_competing_matchers.get(matcher_id, None)
2802 self.suppress_competing_matchers.get(matcher_id, None)
2713 if isinstance(self.suppress_competing_matchers, dict)
2803 if isinstance(self.suppress_competing_matchers, dict)
2714 else self.suppress_competing_matchers
2804 else self.suppress_competing_matchers
2715 )
2805 )
2716 should_suppress = (
2806 should_suppress = (
2717 (suppression_config is True)
2807 (suppression_config is True)
2718 or (suppression_recommended and (suppression_config is not False))
2808 or (suppression_recommended and (suppression_config is not False))
2719 ) and len(result["completions"])
2809 ) and len(result["completions"])
2720
2810
2721 if should_suppress:
2811 if should_suppress:
2722 suppression_exceptions = result.get("do_not_suppress", set())
2812 suppression_exceptions = result.get("do_not_suppress", set())
2723 try:
2813 try:
2724 to_suppress = set(suppression_recommended)
2814 to_suppress = set(suppression_recommended)
2725 except TypeError:
2815 except TypeError:
2726 to_suppress = set(matchers)
2816 to_suppress = set(matchers)
2727 suppressed_matchers = to_suppress - suppression_exceptions
2817 suppressed_matchers = to_suppress - suppression_exceptions
2728
2818
2729 new_results = {}
2819 new_results = {}
2730 for previous_matcher_id, previous_result in results.items():
2820 for previous_matcher_id, previous_result in results.items():
2731 if previous_matcher_id not in suppressed_matchers:
2821 if previous_matcher_id not in suppressed_matchers:
2732 new_results[previous_matcher_id] = previous_result
2822 new_results[previous_matcher_id] = previous_result
2733 results = new_results
2823 results = new_results
2734
2824
2735 results[matcher_id] = result
2825 results[matcher_id] = result
2736
2826
2737 _, matches = self._arrange_and_extract(
2827 _, matches = self._arrange_and_extract(
2738 results,
2828 results,
2739 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
2829 # TODO Jedi completions not included in legacy stateful API; was this deliberate or an omission?
2740 # if it was an omission, we can remove the filtering step; otherwise remove this comment.
2830 # if it was an omission, we can remove the filtering step; otherwise remove this comment.
2741 skip_matchers={jedi_matcher_id},
2831 skip_matchers={jedi_matcher_id},
2742 abort_if_offset_changes=False,
2832 abort_if_offset_changes=False,
2743 )
2833 )
2744
2834
2745 # populate legacy stateful API
2835 # populate legacy stateful API
2746 self.matches = matches
2836 self.matches = matches
2747
2837
2748 return results
2838 return results
2749
2839
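As a rough illustration of the dispatch loop above, a plain callable acts as an API v1 matcher: it receives the current token, returns candidate strings, and is wrapped by `_convert_matcher_v1_result_to_v2`. The matcher identifier in the commented suppression example is a hypothetical placeholder, not a real identifier:

.. code::

    # Hedged sketch of plugging a v1-style matcher into the dispatch above.
    from IPython import get_ipython

    def exclaim_matcher(text):
        # v1 API: called as matcher(text), returns a list of strings.
        return [text + "!"] if text else []

    ip = get_ipython()                      # assumes an interactive IPython session
    ip.Completer.custom_matchers.append(exclaim_matcher)

    # Suppression can be tuned globally or per matcher identifier via
    # configuration, e.g. (identifier below is a placeholder):
    #   c.IPCompleter.suppress_competing_matchers = {"some.matcher.id": False}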
2750 @staticmethod
2840 @staticmethod
2751 def _deduplicate(
2841 def _deduplicate(
2752 matches: Sequence[SimpleCompletion],
2842 matches: Sequence[SimpleCompletion],
2753 ) -> Iterable[SimpleCompletion]:
2843 ) -> Iterable[SimpleCompletion]:
2754 filtered_matches = {}
2844 filtered_matches = {}
2755 for match in matches:
2845 for match in matches:
2756 text = match.text
2846 text = match.text
2757 if (
2847 if (
2758 text not in filtered_matches
2848 text not in filtered_matches
2759 or filtered_matches[text].type == _UNKNOWN_TYPE
2849 or filtered_matches[text].type == _UNKNOWN_TYPE
2760 ):
2850 ):
2761 filtered_matches[text] = match
2851 filtered_matches[text] = match
2762
2852
2763 return filtered_matches.values()
2853 return filtered_matches.values()
2764
2854
2765 @staticmethod
2855 @staticmethod
2766 def _sort(matches: Sequence[SimpleCompletion]):
2856 def _sort(matches: Sequence[SimpleCompletion]):
2767 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
2857 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
2768
2858
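A small sketch of the deduplication rule above; `_deduplicate` and `_UNKNOWN_TYPE` are private names used here purely for illustration:

.. code::

    # Hedged sketch: duplicates collapse, and a typed match replaces one that
    # was first recorded with the unknown type.
    from IPython.core.completer import IPCompleter, SimpleCompletion, _UNKNOWN_TYPE

    matches = [
        SimpleCompletion("path", type=_UNKNOWN_TYPE),
        SimpleCompletion("path", type="module"),
        SimpleCompletion("pathlib", type="module"),
    ]
    unique = list(IPCompleter._deduplicate(matches))
    # -> two completions survive: "path" (now typed "module") and "pathlib"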
2769 @context_matcher()
2859 @context_matcher()
2770 def fwd_unicode_matcher(self, context):
2860 def fwd_unicode_matcher(self, context):
2771 """Same as ``fwd_unicode_match``, but adopted to new Matcher API."""
2861 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
2772 fragment, matches = self.latex_matches(context.token)
2862 fragment, matches = self.latex_matches(context.token)
2773 return _convert_matcher_v1_result_to_v2(
2863 return _convert_matcher_v1_result_to_v2(
2774 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2864 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2775 )
2865 )
2776
2866
2777 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
2867 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
2778 """
2868 """
2779 Forward match a string starting with a backslash with a list of
2869 Forward match a string starting with a backslash with a list of
2780 potential Unicode completions.
2870 potential Unicode completions.
2781
2871
2782 Will compute list list of Unicode character names on first call and cache it.
2872 Will compute list of Unicode character names on first call and cache it.
2873
2874 .. deprecated:: 8.6
2875 You can use :meth:`fwd_unicode_matcher` instead.
2783
2876
2784 Returns
2877 Returns
2785 -------
2878 -------
2786 A tuple with:
2879 A tuple with:
2787 - matched text (empty if no matches)
2880 - matched text (empty if no matches)
2788 - list of potential completions (empty tuple otherwise)
2881 - list of potential completions (empty tuple otherwise)
2789
2790 DEPRECATED: Deprecated since 8.6. Use `fwd_unicode_matcher` instead.
2791 """
2882 """
2792 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call.
2883 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call.
2793 # We could do a faster match using a Trie.
2884 # We could do a faster match using a Trie.
2794
2885
2795 # Using pygtrie, the following seems to work:
2886 # Using pygtrie, the following seems to work:
2796
2887
2797 # s = PrefixSet()
2888 # s = PrefixSet()
2798
2889
2799 # for c in range(0,0x10FFFF + 1):
2890 # for c in range(0,0x10FFFF + 1):
2800 # try:
2891 # try:
2801 # s.add(unicodedata.name(chr(c)))
2892 # s.add(unicodedata.name(chr(c)))
2802 # except ValueError:
2893 # except ValueError:
2803 # pass
2894 # pass
2804 # [''.join(k) for k in s.iter(prefix)]
2895 # [''.join(k) for k in s.iter(prefix)]
2805
2896
2806 # But this needs to be timed and adds an extra dependency.
2897 # But this needs to be timed and adds an extra dependency.
2807
2898
2808 slashpos = text.rfind('\\')
2899 slashpos = text.rfind('\\')
2809 # if the text contains a backslash
2900 # if the text contains a backslash
2810 if slashpos > -1:
2901 if slashpos > -1:
2811 # PERF: It's important that we don't access self._unicode_names
2902 # PERF: It's important that we don't access self._unicode_names
2812 # until we're inside this if-block. _unicode_names is lazily
2903 # until we're inside this if-block. _unicode_names is lazily
2813 # initialized, and it takes a user-noticeable amount of time to
2904 # initialized, and it takes a user-noticeable amount of time to
2814 # initialize it, so we don't want to initialize it unless we're
2905 # initialize it, so we don't want to initialize it unless we're
2815 # actually going to use it.
2906 # actually going to use it.
2816 s = text[slashpos + 1 :]
2907 s = text[slashpos + 1 :]
2817 sup = s.upper()
2908 sup = s.upper()
2818 candidates = [x for x in self.unicode_names if x.startswith(sup)]
2909 candidates = [x for x in self.unicode_names if x.startswith(sup)]
2819 if candidates:
2910 if candidates:
2820 return s, candidates
2911 return s, candidates
2821 candidates = [x for x in self.unicode_names if sup in x]
2912 candidates = [x for x in self.unicode_names if sup in x]
2822 if candidates:
2913 if candidates:
2823 return s, candidates
2914 return s, candidates
2824 splitsup = sup.split(" ")
2915 splitsup = sup.split(" ")
2825 candidates = [
2916 candidates = [
2826 x for x in self.unicode_names if all(u in x for u in splitsup)
2917 x for x in self.unicode_names if all(u in x for u in splitsup)
2827 ]
2918 ]
2828 if candidates:
2919 if candidates:
2829 return s, candidates
2920 return s, candidates
2830
2921
2831 return "", ()
2922 return "", ()
2832
2923
2833 # if the text does not contain a backslash
2924 # if the text does not contain a backslash
2834 else:
2925 else:
2835 return '', ()
2926 return '', ()
2836
2927
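A usage sketch of the (deprecated) forward unicode match, assuming a running shell; the first call is slow because `unicode_names` below is computed lazily:

.. code::

    # Hedged sketch: everything after the last backslash is matched against
    # upper-cased unicode character names.
    from IPython import get_ipython

    ip = get_ipython()                      # assumes an interactive IPython session
    fragment, names = ip.Completer.fwd_unicode_match("\\GREEK SMALL LETTER AL")
    # fragment == "GREEK SMALL LETTER AL"
    # names includes "GREEK SMALL LETTER ALPHA" among other candidates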
2837 @property
2928 @property
2838 def unicode_names(self) -> List[str]:
2929 def unicode_names(self) -> List[str]:
2839 """List of names of unicode code points that can be completed.
2930 """List of names of unicode code points that can be completed.
2840
2931
2841 The list is lazily initialized on first access.
2932 The list is lazily initialized on first access.
2842 """
2933 """
2843 if self._unicode_names is None:
2934 if self._unicode_names is None:
2844 names = []
2935 names = []
2845 for c in range(0,0x10FFFF + 1):
2936 for c in range(0,0x10FFFF + 1):
2846 try:
2937 try:
2847 names.append(unicodedata.name(chr(c)))
2938 names.append(unicodedata.name(chr(c)))
2848 except ValueError:
2939 except ValueError:
2849 pass
2940 pass
2850 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2941 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2851
2942
2852 return self._unicode_names
2943 return self._unicode_names
2853
2944
2854 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
2945 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
2855 names = []
2946 names = []
2856 for start,stop in ranges:
2947 for start,stop in ranges:
2857 for c in range(start, stop) :
2948 for c in range(start, stop) :
2858 try:
2949 try:
2859 names.append(unicodedata.name(chr(c)))
2950 names.append(unicodedata.name(chr(c)))
2860 except ValueError:
2951 except ValueError:
2861 pass
2952 pass
2862 return names
2953 return names
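The helper simply maps code-point ranges to their `unicodedata` names; a tiny sketch over the ASCII capital letters (the function is private and imported here only for illustration):

.. code::

    # Hedged sketch: names for U+0041..U+005A (the range stop is exclusive).
    import unicodedata
    from IPython.core.completer import _unicode_name_compute

    names = _unicode_name_compute([(0x41, 0x5B)])
    assert names[0] == unicodedata.name("A")   # "LATIN CAPITAL LETTER A"
    assert len(names) == 26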
@@ -1,58 +1,83 b''
1 # encoding: utf-8
1 # encoding: utf-8
2 """Decorators that don't go anywhere else.
2 """Decorators that don't go anywhere else.
3
3
4 This module contains misc. decorators that don't really go with another module
4 This module contains misc. decorators that don't really go with another module
5 in :mod:`IPython.utils`. Beore putting something here please see if it should
5 in :mod:`IPython.utils`. Before putting something here please see if it should
6 go into another topical module in :mod:`IPython.utils`.
6 go into another topical module in :mod:`IPython.utils`.
7 """
7 """
8
8
9 #-----------------------------------------------------------------------------
9 #-----------------------------------------------------------------------------
10 # Copyright (C) 2008-2011 The IPython Development Team
10 # Copyright (C) 2008-2011 The IPython Development Team
11 #
11 #
12 # Distributed under the terms of the BSD License. The full license is in
12 # Distributed under the terms of the BSD License. The full license is in
13 # the file COPYING, distributed as part of this software.
13 # the file COPYING, distributed as part of this software.
14 #-----------------------------------------------------------------------------
14 #-----------------------------------------------------------------------------
15
15
16 #-----------------------------------------------------------------------------
16 #-----------------------------------------------------------------------------
17 # Imports
17 # Imports
18 #-----------------------------------------------------------------------------
18 #-----------------------------------------------------------------------------
19 from typing import Sequence
20
21 from IPython.utils.docs import GENERATING_DOCUMENTATION
22
19
23
20 #-----------------------------------------------------------------------------
24 #-----------------------------------------------------------------------------
21 # Code
25 # Code
22 #-----------------------------------------------------------------------------
26 #-----------------------------------------------------------------------------
23
27
24 def flag_calls(func):
28 def flag_calls(func):
25 """Wrap a function to detect and flag when it gets called.
29 """Wrap a function to detect and flag when it gets called.
26
30
27 This is a decorator which takes a function and wraps it in a function with
31 This is a decorator which takes a function and wraps it in a function with
28 a 'called' attribute. wrapper.called is initialized to False.
32 a 'called' attribute. wrapper.called is initialized to False.
29
33
30 The wrapper.called attribute is set to False right before each call to the
34 The wrapper.called attribute is set to False right before each call to the
31 wrapped function, so if the call fails it remains False. After the call
35 wrapped function, so if the call fails it remains False. After the call
32 completes, wrapper.called is set to True and the output is returned.
36 completes, wrapper.called is set to True and the output is returned.
33
37
34 Testing for truth in wrapper.called allows you to determine if a call to
38 Testing for truth in wrapper.called allows you to determine if a call to
35 func() was attempted and succeeded."""
39 func() was attempted and succeeded."""
36
40
37 # don't wrap twice
41 # don't wrap twice
38 if hasattr(func, 'called'):
42 if hasattr(func, 'called'):
39 return func
43 return func
40
44
41 def wrapper(*args,**kw):
45 def wrapper(*args,**kw):
42 wrapper.called = False
46 wrapper.called = False
43 out = func(*args,**kw)
47 out = func(*args,**kw)
44 wrapper.called = True
48 wrapper.called = True
45 return out
49 return out
46
50
47 wrapper.called = False
51 wrapper.called = False
48 wrapper.__doc__ = func.__doc__
52 wrapper.__doc__ = func.__doc__
49 return wrapper
53 return wrapper
50
54
55
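A short usage sketch of `flag_calls`:

.. code::

    # Hedged sketch: `called` flips to True only after a successful call.
    from IPython.utils.decorators import flag_calls

    @flag_calls
    def build_index():
        return 42

    assert build_index.called is False
    build_index()
    assert build_index.called is True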
51 def undoc(func):
56 def undoc(func):
52 """Mark a function or class as undocumented.
57 """Mark a function or class as undocumented.
53
58
54 This is found by inspecting the AST, so for now it must be used directly
59 This is found by inspecting the AST, so for now it must be used directly
55 as @undoc, not as e.g. @decorators.undoc
60 as @undoc, not as e.g. @decorators.undoc
56 """
61 """
57 return func
62 return func
58
63
64
65 def sphinx_options(
66 show_inheritance: bool = True,
67 show_inherited_members: bool = False,
68 exclude_inherited_from: Sequence[str] = tuple(),
69 ):
70 """Set sphinx options"""
71
72 def wrapper(func):
73 if not GENERATING_DOCUMENTATION:
74 return func
75
76 func._sphinx_options = dict(
77 show_inheritance=show_inheritance,
78 show_inherited_members=show_inherited_members,
79 exclude_inherited_from=exclude_inherited_from,
80 )
81 return func
82
83 return wrapper
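A usage sketch of the new decorator; the options only take effect when `IN_SPHINX_RUN` is set (i.e. during a documentation build), and the class and option values below are illustrative:

.. code::

    # Hedged sketch: outside a Sphinx run the target is returned unchanged.
    from IPython.utils.decorators import sphinx_options

    @sphinx_options(show_inherited_members=True, exclude_inherited_from=("Configurable",))
    class ExampleCompleter:
        pass

    # During a docs build, ExampleCompleter._sphinx_options holds the dict of
    # options set above; otherwise the attribute is simply absent.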
@@ -1,326 +1,334 b''
1 # -*- coding: utf-8 -*-
1 # -*- coding: utf-8 -*-
2 #
2 #
3 # IPython documentation build configuration file.
3 # IPython documentation build configuration file.
4
4
5 # NOTE: This file has been edited manually from the auto-generated one from
5 # NOTE: This file has been edited manually from the auto-generated one from
6 # sphinx. Do NOT delete and re-generate. If any changes from sphinx are
6 # sphinx. Do NOT delete and re-generate. If any changes from sphinx are
7 # needed, generate a scratch one and merge by hand any new fields needed.
7 # needed, generate a scratch one and merge by hand any new fields needed.
8
8
9 #
9 #
10 # This file is execfile()d with the current directory set to its containing dir.
10 # This file is execfile()d with the current directory set to its containing dir.
11 #
11 #
12 # The contents of this file are pickled, so don't put values in the namespace
12 # The contents of this file are pickled, so don't put values in the namespace
13 # that aren't pickleable (module imports are okay, they're removed automatically).
13 # that aren't pickleable (module imports are okay, they're removed automatically).
14 #
14 #
15 # All configuration values have a default value; values that are commented out
15 # All configuration values have a default value; values that are commented out
16 # serve to show the default value.
16 # serve to show the default value.
17
17
18 import sys, os
18 import sys, os
19 from pathlib import Path
19 from pathlib import Path
20
20
21 # https://read-the-docs.readthedocs.io/en/latest/faq.html
21 # https://read-the-docs.readthedocs.io/en/latest/faq.html
22 ON_RTD = os.environ.get('READTHEDOCS', None) == 'True'
22 ON_RTD = os.environ.get('READTHEDOCS', None) == 'True'
23
23
24 if ON_RTD:
24 if ON_RTD:
25 tags.add('rtd')
25 tags.add('rtd')
26
26
27 # RTD doesn't use the Makefile, so re-run autogen_{things}.py here.
27 # RTD doesn't use the Makefile, so re-run autogen_{things}.py here.
28 for name in ("config", "api", "magics", "shortcuts"):
28 for name in ("config", "api", "magics", "shortcuts"):
29 fname = Path("autogen_{}.py".format(name))
29 fname = Path("autogen_{}.py".format(name))
30 fpath = (Path(__file__).parent).joinpath("..", fname)
30 fpath = (Path(__file__).parent).joinpath("..", fname)
31 with open(fpath, encoding="utf-8") as f:
31 with open(fpath, encoding="utf-8") as f:
32 exec(
32 exec(
33 compile(f.read(), fname, "exec"),
33 compile(f.read(), fname, "exec"),
34 {
34 {
35 "__file__": fpath,
35 "__file__": fpath,
36 "__name__": "__main__",
36 "__name__": "__main__",
37 },
37 },
38 )
38 )
39 else:
39 else:
40 import sphinx_rtd_theme
40 import sphinx_rtd_theme
41 html_theme = "sphinx_rtd_theme"
41 html_theme = "sphinx_rtd_theme"
42 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
42 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
43
43
44 # Allow Python scripts to change behaviour during sphinx run
45 os.environ["IN_SPHINX_RUN"] = "True"
46
47 autodoc_type_aliases = {
48 "Matcher": " IPython.core.completer.Matcher",
49 "MatcherAPIv1": " IPython.core.completer.MatcherAPIv1",
50 }
51
44 # If your extensions are in another directory, add it here. If the directory
52 # If your extensions are in another directory, add it here. If the directory
45 # is relative to the documentation root, use os.path.abspath to make it
53 # is relative to the documentation root, use os.path.abspath to make it
46 # absolute, like shown here.
54 # absolute, like shown here.
47 sys.path.insert(0, os.path.abspath('../sphinxext'))
55 sys.path.insert(0, os.path.abspath('../sphinxext'))
48
56
49 # We load the ipython release info into a dict by explicit execution
57 # We load the ipython release info into a dict by explicit execution
50 iprelease = {}
58 iprelease = {}
51 exec(
59 exec(
52 compile(
60 compile(
53 open("../../IPython/core/release.py", encoding="utf-8").read(),
61 open("../../IPython/core/release.py", encoding="utf-8").read(),
54 "../../IPython/core/release.py",
62 "../../IPython/core/release.py",
55 "exec",
63 "exec",
56 ),
64 ),
57 iprelease,
65 iprelease,
58 )
66 )
59
67
60 # General configuration
68 # General configuration
61 # ---------------------
69 # ---------------------
62
70
63 # Add any Sphinx extension module names here, as strings. They can be extensions
71 # Add any Sphinx extension module names here, as strings. They can be extensions
64 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
72 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
65 extensions = [
73 extensions = [
66 'sphinx.ext.autodoc',
74 'sphinx.ext.autodoc',
67 'sphinx.ext.autosummary',
75 'sphinx.ext.autosummary',
68 'sphinx.ext.doctest',
76 'sphinx.ext.doctest',
69 'sphinx.ext.inheritance_diagram',
77 'sphinx.ext.inheritance_diagram',
70 'sphinx.ext.intersphinx',
78 'sphinx.ext.intersphinx',
71 'sphinx.ext.graphviz',
79 'sphinx.ext.graphviz',
72 'IPython.sphinxext.ipython_console_highlighting',
80 'IPython.sphinxext.ipython_console_highlighting',
73 'IPython.sphinxext.ipython_directive',
81 'IPython.sphinxext.ipython_directive',
74 'sphinx.ext.napoleon', # to preprocess docstrings
82 'sphinx.ext.napoleon', # to preprocess docstrings
75 'github', # for easy GitHub links
83 'github', # for easy GitHub links
76 'magics',
84 'magics',
77 'configtraits',
85 'configtraits',
78 ]
86 ]
79
87
80 # Add any paths that contain templates here, relative to this directory.
88 # Add any paths that contain templates here, relative to this directory.
81 templates_path = ['_templates']
89 templates_path = ['_templates']
82
90
83 # The suffix of source filenames.
91 # The suffix of source filenames.
84 source_suffix = '.rst'
92 source_suffix = '.rst'
85
93
86 rst_prolog = ''
94 rst_prolog = ''
87
95
88 def is_stable(extra):
96 def is_stable(extra):
89 for ext in {'dev', 'b', 'rc'}:
97 for ext in {'dev', 'b', 'rc'}:
90 if ext in extra:
98 if ext in extra:
91 return False
99 return False
92 return True
100 return True
93
101
94 if is_stable(iprelease['_version_extra']):
102 if is_stable(iprelease['_version_extra']):
95 tags.add('ipystable')
103 tags.add('ipystable')
96 print('Adding Tag: ipystable')
104 print('Adding Tag: ipystable')
97 else:
105 else:
98 tags.add('ipydev')
106 tags.add('ipydev')
99 print('Adding Tag: ipydev')
107 print('Adding Tag: ipydev')
100 rst_prolog += """
108 rst_prolog += """
101 .. warning::
109 .. warning::
102
110
103 This documentation covers a development version of IPython. The development
111 This documentation covers a development version of IPython. The development
104 version may differ significantly from the latest stable release.
112 version may differ significantly from the latest stable release.
105 """
113 """
106
114
107 rst_prolog += """
115 rst_prolog += """
108 .. important::
116 .. important::
109
117
110 This documentation covers IPython versions 6.0 and higher. Beginning with
118 This documentation covers IPython versions 6.0 and higher. Beginning with
111 version 6.0, IPython stopped supporting compatibility with Python versions
119 version 6.0, IPython stopped supporting compatibility with Python versions
112 lower than 3.3 including all versions of Python 2.7.
120 lower than 3.3 including all versions of Python 2.7.
113
121
114 If you are looking for an IPython version compatible with Python 2.7,
122 If you are looking for an IPython version compatible with Python 2.7,
115 please use the IPython 5.x LTS release and refer to its documentation (LTS
123 please use the IPython 5.x LTS release and refer to its documentation (LTS
116 is the long term support release).
124 is the long term support release).
117
125
118 """
126 """
119
127
120 # The master toctree document.
128 # The master toctree document.
121 master_doc = 'index'
129 master_doc = 'index'
122
130
123 # General substitutions.
131 # General substitutions.
124 project = 'IPython'
132 project = 'IPython'
125 copyright = 'The IPython Development Team'
133 copyright = 'The IPython Development Team'
126
134
127 # ghissue config
135 # ghissue config
128 github_project_url = "https://github.com/ipython/ipython"
136 github_project_url = "https://github.com/ipython/ipython"
129
137
130 # numpydoc config
138 # numpydoc config
131 numpydoc_show_class_members = False # Otherwise Sphinx emits thousands of warnings
139 numpydoc_show_class_members = False # Otherwise Sphinx emits thousands of warnings
132 numpydoc_class_members_toctree = False
140 numpydoc_class_members_toctree = False
133 warning_is_error = True
141 warning_is_error = True
134
142
135 import logging
143 import logging
136
144
137 class ConfigtraitFilter(logging.Filter):
145 class ConfigtraitFilter(logging.Filter):
138 """
146 """
139 This is a filter to remove, with Sphinx 3+, the error about config traits being duplicated.
147 This is a filter to remove, with Sphinx 3+, the error about config traits being duplicated.
140
148
141 As we autogenerate configuration traits from subclasses, there is a lot of
149 As we autogenerate configuration traits from subclasses, there is a lot of
142 duplication and we want to silence it. Indeed we build on Travis with
150 duplication and we want to silence it. Indeed we build on Travis with
143 warnings-as-error set to True, so those duplicate items make the build fail.
151 warnings-as-error set to True, so those duplicate items make the build fail.
144 """
152 """
145
153
146 def filter(self, record):
154 def filter(self, record):
147 if record.args and record.args[0] == 'configtrait' and 'duplicate' in record.msg:
155 if record.args and record.args[0] == 'configtrait' and 'duplicate' in record.msg:
148 return False
156 return False
149 return True
157 return True
150
158
151 ct_filter = ConfigtraitFilter()
159 ct_filter = ConfigtraitFilter()
152
160
153 import sphinx.util
161 import sphinx.util
154 logger = sphinx.util.logging.getLogger('sphinx.domains.std').logger
162 logger = sphinx.util.logging.getLogger('sphinx.domains.std').logger
155
163
156 logger.addFilter(ct_filter)
164 logger.addFilter(ct_filter)
157
165
158 # The default replacements for |version| and |release|, also used in various
166 # The default replacements for |version| and |release|, also used in various
159 # other places throughout the built documents.
167 # other places throughout the built documents.
160 #
168 #
161 # The full version, including alpha/beta/rc tags.
169 # The full version, including alpha/beta/rc tags.
162 release = "%s" % iprelease['version']
170 release = "%s" % iprelease['version']
163 # Just the X.Y.Z part, no '-dev'
171 # Just the X.Y.Z part, no '-dev'
164 version = iprelease['version'].split('-', 1)[0]
172 version = iprelease['version'].split('-', 1)[0]
165
173
166
174
167 # There are two options for replacing |today|: either, you set today to some
175 # There are two options for replacing |today|: either, you set today to some
168 # non-false value, then it is used:
176 # non-false value, then it is used:
169 #today = ''
177 #today = ''
170 # Else, today_fmt is used as the format for a strftime call.
178 # Else, today_fmt is used as the format for a strftime call.
171 today_fmt = '%B %d, %Y'
179 today_fmt = '%B %d, %Y'
172
180
173 # List of documents that shouldn't be included in the build.
181 # List of documents that shouldn't be included in the build.
174 #unused_docs = []
182 #unused_docs = []
175
183
176 # Exclude these glob-style patterns when looking for source files. They are
184 # Exclude these glob-style patterns when looking for source files. They are
177 # relative to the source/ directory.
185 # relative to the source/ directory.
178 exclude_patterns = []
186 exclude_patterns = []
179
187
180
188
181 # If true, '()' will be appended to :func: etc. cross-reference text.
189 # If true, '()' will be appended to :func: etc. cross-reference text.
182 #add_function_parentheses = True
190 #add_function_parentheses = True
183
191
184 # If true, the current module name will be prepended to all description
192 # If true, the current module name will be prepended to all description
185 # unit titles (such as .. function::).
193 # unit titles (such as .. function::).
186 #add_module_names = True
194 #add_module_names = True
187
195
188 # If true, sectionauthor and moduleauthor directives will be shown in the
196 # If true, sectionauthor and moduleauthor directives will be shown in the
189 # output. They are ignored by default.
197 # output. They are ignored by default.
190 #show_authors = False
198 #show_authors = False
191
199
192 # The name of the Pygments (syntax highlighting) style to use.
200 # The name of the Pygments (syntax highlighting) style to use.
193 pygments_style = 'sphinx'
201 pygments_style = 'sphinx'
194
202
195 # Set the default role so we can use `foo` instead of ``foo``
203 # Set the default role so we can use `foo` instead of ``foo``
196 default_role = 'literal'
204 default_role = 'literal'
197
205
198 # Options for HTML output
206 # Options for HTML output
199 # -----------------------
207 # -----------------------
200
208
201 # The style sheet to use for HTML and HTML Help pages. A file of that name
209 # The style sheet to use for HTML and HTML Help pages. A file of that name
202 # must exist either in Sphinx' static/ path, or in one of the custom paths
210 # must exist either in Sphinx' static/ path, or in one of the custom paths
203 # given in html_static_path.
211 # given in html_static_path.
204 # html_style = 'default.css'
212 # html_style = 'default.css'
205
213
206
214
207 # The name for this set of Sphinx documents. If None, it defaults to
215 # The name for this set of Sphinx documents. If None, it defaults to
208 # "<project> v<release> documentation".
216 # "<project> v<release> documentation".
209 #html_title = None
217 #html_title = None
210
218
211 # The name of an image file (within the static path) to place at the top of
219 # The name of an image file (within the static path) to place at the top of
212 # the sidebar.
220 # the sidebar.
213 #html_logo = None
221 #html_logo = None
214
222
215 # Add any paths that contain custom static files (such as style sheets) here,
223 # Add any paths that contain custom static files (such as style sheets) here,
216 # relative to this directory. They are copied after the builtin static files,
224 # relative to this directory. They are copied after the builtin static files,
217 # so a file named "default.css" will overwrite the builtin "default.css".
225 # so a file named "default.css" will overwrite the builtin "default.css".
218 html_static_path = ['_static']
226 html_static_path = ['_static']
219
227
220 # Favicon needs the directory name
228 # Favicon needs the directory name
221 html_favicon = '_static/favicon.ico'
229 html_favicon = '_static/favicon.ico'
222 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
230 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
223 # using the given strftime format.
231 # using the given strftime format.
224 html_last_updated_fmt = '%b %d, %Y'
232 html_last_updated_fmt = '%b %d, %Y'
225
233
226 # If true, SmartyPants will be used to convert quotes and dashes to
234 # If true, SmartyPants will be used to convert quotes and dashes to
227 # typographically correct entities.
235 # typographically correct entities.
228 #html_use_smartypants = True
236 #html_use_smartypants = True
229
237
230 # Custom sidebar templates, maps document names to template names.
238 # Custom sidebar templates, maps document names to template names.
231 #html_sidebars = {}
239 #html_sidebars = {}
232
240
233 # Additional templates that should be rendered to pages, maps page names to
241 # Additional templates that should be rendered to pages, maps page names to
234 # template names.
242 # template names.
235 html_additional_pages = {
243 html_additional_pages = {
236 'interactive/htmlnotebook': 'notebook_redirect.html',
244 'interactive/htmlnotebook': 'notebook_redirect.html',
237 'interactive/notebook': 'notebook_redirect.html',
245 'interactive/notebook': 'notebook_redirect.html',
238 'interactive/nbconvert': 'notebook_redirect.html',
246 'interactive/nbconvert': 'notebook_redirect.html',
239 'interactive/public_server': 'notebook_redirect.html',
247 'interactive/public_server': 'notebook_redirect.html',
240 }
248 }
241
249
242 # If false, no module index is generated.
250 # If false, no module index is generated.
243 #html_use_modindex = True
251 #html_use_modindex = True
244
252
245 # If true, the reST sources are included in the HTML build as _sources/<name>.
253 # If true, the reST sources are included in the HTML build as _sources/<name>.
246 #html_copy_source = True
254 #html_copy_source = True
247
255
248 # If true, an OpenSearch description file will be output, and all pages will
256 # If true, an OpenSearch description file will be output, and all pages will
249 # contain a <link> tag referring to it. The value of this option must be the
257 # contain a <link> tag referring to it. The value of this option must be the
250 # base URL from which the finished HTML is served.
258 # base URL from which the finished HTML is served.
251 #html_use_opensearch = ''
259 #html_use_opensearch = ''
252
260
253 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
261 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
254 #html_file_suffix = ''
262 #html_file_suffix = ''
255
263
256 # Output file base name for HTML help builder.
264 # Output file base name for HTML help builder.
257 htmlhelp_basename = 'ipythondoc'
265 htmlhelp_basename = 'ipythondoc'
258
266
259 intersphinx_mapping = {'python': ('https://docs.python.org/3/', None),
267 intersphinx_mapping = {'python': ('https://docs.python.org/3/', None),
260 'rpy2': ('https://rpy2.github.io/doc/latest/html/', None),
268 'rpy2': ('https://rpy2.github.io/doc/latest/html/', None),
261 'jupyterclient': ('https://jupyter-client.readthedocs.io/en/latest/', None),
269 'jupyterclient': ('https://jupyter-client.readthedocs.io/en/latest/', None),
262 'jupyter': ('https://jupyter.readthedocs.io/en/latest/', None),
270 'jupyter': ('https://jupyter.readthedocs.io/en/latest/', None),
263 'jedi': ('https://jedi.readthedocs.io/en/latest/', None),
271 'jedi': ('https://jedi.readthedocs.io/en/latest/', None),
264 'traitlets': ('https://traitlets.readthedocs.io/en/latest/', None),
272 'traitlets': ('https://traitlets.readthedocs.io/en/latest/', None),
265 'ipykernel': ('https://ipykernel.readthedocs.io/en/latest/', None),
273 'ipykernel': ('https://ipykernel.readthedocs.io/en/latest/', None),
266 'prompt_toolkit' : ('https://python-prompt-toolkit.readthedocs.io/en/stable/', None),
274 'prompt_toolkit' : ('https://python-prompt-toolkit.readthedocs.io/en/stable/', None),
267 'ipywidgets': ('https://ipywidgets.readthedocs.io/en/stable/', None),
275 'ipywidgets': ('https://ipywidgets.readthedocs.io/en/stable/', None),
268 'ipyparallel': ('https://ipyparallel.readthedocs.io/en/stable/', None),
276 'ipyparallel': ('https://ipyparallel.readthedocs.io/en/stable/', None),
269 'pip': ('https://pip.pypa.io/en/stable/', None)
277 'pip': ('https://pip.pypa.io/en/stable/', None)
270 }
278 }
271
279
272 # Options for LaTeX output
280 # Options for LaTeX output
273 # ------------------------
281 # ------------------------
274
282
275 # The font size ('10pt', '11pt' or '12pt').
283 # The font size ('10pt', '11pt' or '12pt').
276 latex_font_size = '11pt'
284 latex_font_size = '11pt'
277
285
278 # Grouping the document tree into LaTeX files. List of tuples
286 # Grouping the document tree into LaTeX files. List of tuples
279 # (source start file, target name, title, author, document class [howto/manual]).
287 # (source start file, target name, title, author, document class [howto/manual]).
280
288
281 latex_documents = [
289 latex_documents = [
282 ('index', 'ipython.tex', 'IPython Documentation',
290 ('index', 'ipython.tex', 'IPython Documentation',
283 u"""The IPython Development Team""", 'manual', True),
291 u"""The IPython Development Team""", 'manual', True),
284 ('parallel/winhpc_index', 'winhpc_whitepaper.tex',
292 ('parallel/winhpc_index', 'winhpc_whitepaper.tex',
285 'Using IPython on Windows HPC Server 2008',
293 'Using IPython on Windows HPC Server 2008',
286 u"Brian E. Granger", 'manual', True)
294 u"Brian E. Granger", 'manual', True)
287 ]
295 ]
288
296
289 # The name of an image file (relative to this directory) to place at the top of
297 # The name of an image file (relative to this directory) to place at the top of
290 # the title page.
298 # the title page.
291 #latex_logo = None
299 #latex_logo = None
292
300
293 # For "manual" documents, if this is true, then toplevel headings are parts,
301 # For "manual" documents, if this is true, then toplevel headings are parts,
294 # not chapters.
302 # not chapters.
295 #latex_use_parts = False
303 #latex_use_parts = False
296
304
297 # Additional stuff for the LaTeX preamble.
305 # Additional stuff for the LaTeX preamble.
298 #latex_preamble = ''
306 #latex_preamble = ''
299
307
300 # Documents to append as an appendix to all manuals.
308 # Documents to append as an appendix to all manuals.
301 #latex_appendices = []
309 #latex_appendices = []
302
310
303 # If false, no module index is generated.
311 # If false, no module index is generated.
304 latex_use_modindex = True
312 latex_use_modindex = True
305
313
306
314
307 # Options for texinfo output
315 # Options for texinfo output
308 # --------------------------
316 # --------------------------
309
317
310 texinfo_documents = [
318 texinfo_documents = [
311 (master_doc, 'ipython', 'IPython Documentation',
319 (master_doc, 'ipython', 'IPython Documentation',
312 'The IPython Development Team',
320 'The IPython Development Team',
313 'IPython',
321 'IPython',
314 'IPython Documentation',
322 'IPython Documentation',
315 'Programming',
323 'Programming',
316 1),
324 1),
317 ]
325 ]
318
326
319 modindex_common_prefix = ['IPython.']
327 modindex_common_prefix = ['IPython.']
320
328
321
329
322 # Cleanup
330 # Cleanup
323 # -------
331 # -------
324 # delete release info to avoid pickling errors from sphinx
332 # delete release info to avoid pickling errors from sphinx
325
333
326 del iprelease
334 del iprelease
@@ -1,452 +1,463 b''
1 """Attempt to generate templates for module reference with Sphinx
1 """Attempt to generate templates for module reference with Sphinx
2
2
3 XXX - we exclude extension modules
3 XXX - we exclude extension modules
4
4
5 To include extension modules, first identify them as valid in the
5 To include extension modules, first identify them as valid in the
6 ``_uri2path`` method, then handle them in the ``_parse_module`` script.
6 ``_uri2path`` method, then handle them in the ``_parse_module`` script.
7
7
8 We get functions and classes by parsing the text of .py files.
8 We get functions and classes by parsing the text of .py files.
9 Alternatively we could import the modules for discovery, and we'd have
9 Alternatively we could import the modules for discovery, and we'd have
10 to do that for extension modules. This would involve changing the
10 to do that for extension modules. This would involve changing the
11 ``_parse_module`` method to work via import and introspection, and
11 ``_parse_module`` method to work via import and introspection, and
12 might involve changing ``discover_modules`` (which determines which
12 might involve changing ``discover_modules`` (which determines which
13 files are modules, and therefore which module URIs will be passed to
13 files are modules, and therefore which module URIs will be passed to
14 ``_parse_module``).
14 ``_parse_module``).
15
15
16 NOTE: this is a modified version of a script originally shipped with the
16 NOTE: this is a modified version of a script originally shipped with the
17 PyMVPA project, which we've adapted for NIPY use. PyMVPA is an MIT-licensed
17 PyMVPA project, which we've adapted for NIPY use. PyMVPA is an MIT-licensed
18 project."""
18 project."""
19
19
20
20
21 # Stdlib imports
21 # Stdlib imports
22 import ast
22 import ast
23 import inspect
23 import inspect
24 import os
24 import os
25 import re
25 import re
26 from importlib import import_module
26 from importlib import import_module
27 from types import SimpleNamespace as Obj
27
28
28
29
29 class Obj(object):
30 '''Namespace to hold arbitrary information.'''
31 def __init__(self, **kwargs):
32 for k, v in kwargs.items():
33 setattr(self, k, v)
34
35 class FuncClsScanner(ast.NodeVisitor):
30 class FuncClsScanner(ast.NodeVisitor):
36 """Scan a module for top-level functions and classes.
31 """Scan a module for top-level functions and classes.
37
32
38 Skips objects with an @undoc decorator, or a name starting with '_'.
33 Skips objects with an @undoc decorator, or a name starting with '_'.
39 """
34 """
40 def __init__(self):
35 def __init__(self):
41 ast.NodeVisitor.__init__(self)
36 ast.NodeVisitor.__init__(self)
42 self.classes = []
37 self.classes = []
43 self.classes_seen = set()
38 self.classes_seen = set()
44 self.functions = []
39 self.functions = []
45
40
46 @staticmethod
41 @staticmethod
47 def has_undoc_decorator(node):
42 def has_undoc_decorator(node):
48 return any(isinstance(d, ast.Name) and d.id == 'undoc' \
43 return any(isinstance(d, ast.Name) and d.id == 'undoc' \
49 for d in node.decorator_list)
44 for d in node.decorator_list)
50
45
51 def visit_If(self, node):
46 def visit_If(self, node):
52 if isinstance(node.test, ast.Compare) \
47 if isinstance(node.test, ast.Compare) \
53 and isinstance(node.test.left, ast.Name) \
48 and isinstance(node.test.left, ast.Name) \
54 and node.test.left.id == '__name__':
49 and node.test.left.id == '__name__':
55 return # Ignore classes defined in "if __name__ == '__main__':"
50 return # Ignore classes defined in "if __name__ == '__main__':"
56
51
57 self.generic_visit(node)
52 self.generic_visit(node)
58
53
59 def visit_FunctionDef(self, node):
54 def visit_FunctionDef(self, node):
60 if not (node.name.startswith('_') or self.has_undoc_decorator(node)) \
55 if not (node.name.startswith('_') or self.has_undoc_decorator(node)) \
61 and node.name not in self.functions:
56 and node.name not in self.functions:
62 self.functions.append(node.name)
57 self.functions.append(node.name)
63
58
64 def visit_ClassDef(self, node):
59 def visit_ClassDef(self, node):
65 if not (node.name.startswith('_') or self.has_undoc_decorator(node)) \
60 if (
66 and node.name not in self.classes_seen:
61 not (node.name.startswith("_") or self.has_undoc_decorator(node))
67 cls = Obj(name=node.name)
62 and node.name not in self.classes_seen
68 cls.has_init = any(isinstance(n, ast.FunctionDef) and \
63 ):
69 n.name=='__init__' for n in node.body)
64 cls = Obj(name=node.name, sphinx_options={})
65 cls.has_init = any(
66 isinstance(n, ast.FunctionDef) and n.name == "__init__"
67 for n in node.body
68 )
70 self.classes.append(cls)
69 self.classes.append(cls)
71 self.classes_seen.add(node.name)
70 self.classes_seen.add(node.name)
72
71
73 def scan(self, mod):
72 def scan(self, mod):
74 self.visit(mod)
73 self.visit(mod)
75 return self.functions, self.classes
74 return self.functions, self.classes
76
75
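A sketch of how the scanner is used, with `FuncClsScanner` from this module in scope; underscore-prefixed and `@undoc`-decorated objects are skipped:

.. code::

    # Hedged sketch: scan a small module's source for documentable names.
    import ast
    import textwrap

    source = textwrap.dedent(
        """
        def public(): pass
        def _private(): pass

        class Widget:
            def __init__(self): pass
        """
    )

    functions, classes = FuncClsScanner().scan(ast.parse(source))
    # functions == ['public']; classes holds one namespace object with
    # name == 'Widget' and has_init == True.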
77 # Functions and classes
76 # Functions and classes
78 class ApiDocWriter(object):
77 class ApiDocWriter(object):
79 ''' Class for automatic detection and parsing of API docs
78 ''' Class for automatic detection and parsing of API docs
80 to Sphinx-parsable reST format'''
79 to Sphinx-parsable reST format'''
81
80
82 # only separating first two levels
81 # only separating first two levels
83 rst_section_levels = ['*', '=', '-', '~', '^']
82 rst_section_levels = ['*', '=', '-', '~', '^']
84
83
85 def __init__(self,
84 def __init__(self,
86 package_name,
85 package_name,
87 rst_extension='.rst',
86 rst_extension='.rst',
88 package_skip_patterns=None,
87 package_skip_patterns=None,
89 module_skip_patterns=None,
88 module_skip_patterns=None,
90 names_from__all__=None,
89 names_from__all__=None,
91 ):
90 ):
92 ''' Initialize package for parsing
91 ''' Initialize package for parsing
93
92
94 Parameters
93 Parameters
95 ----------
94 ----------
96 package_name : string
95 package_name : string
97 Name of the top-level package. *package_name* must be the
96 Name of the top-level package. *package_name* must be the
98 name of an importable package
97 name of an importable package
99 rst_extension : string, optional
98 rst_extension : string, optional
100 Extension for reST files, default '.rst'
99 Extension for reST files, default '.rst'
101 package_skip_patterns : None or sequence of {strings, regexps}
100 package_skip_patterns : None or sequence of {strings, regexps}
102 Sequence of strings giving URIs of packages to be excluded.
101 Sequence of strings giving URIs of packages to be excluded.
103 Operates on the package path, starting at (including) the
102 Operates on the package path, starting at (including) the
104 first dot in the package path, after *package_name* - so,
103 first dot in the package path, after *package_name* - so,
105 if *package_name* is ``sphinx``, then ``sphinx.util`` will
104 if *package_name* is ``sphinx``, then ``sphinx.util`` will
106 result in ``.util`` being passed for searching by these
105 result in ``.util`` being passed for searching by these
107 regexps. If None, gives the default. Default is:
106 regexps. If None, gives the default. Default is:
108 ['\\.tests$']
107 ['\\.tests$']
109 module_skip_patterns : None or sequence
108 module_skip_patterns : None or sequence
110 Sequence of strings giving URIs of modules to be excluded.
109 Sequence of strings giving URIs of modules to be excluded.
111 Operates on the module name including preceding URI path,
110 Operates on the module name including preceding URI path,
112 back to the first dot after *package_name*. For example
111 back to the first dot after *package_name*. For example
113 ``sphinx.util.console`` results in the search string
112 ``sphinx.util.console`` results in the search string
114 ``.util.console``.
113 ``.util.console``.
115 If None, gives the default. Default is:
114 If None, gives the default. Default is:
116 ['\\.setup$', '\\._']
115 ['\\.setup$', '\\._']
117 names_from__all__ : set, optional
116 names_from__all__ : set, optional
118 Modules listed in here will be scanned by doing ``from mod import *``,
117 Modules listed in here will be scanned by doing ``from mod import *``,
119 rather than finding function and class definitions by scanning the
118 rather than finding function and class definitions by scanning the
120 AST. This is intended for API modules which expose things defined in
119 AST. This is intended for API modules which expose things defined in
121 other files. Modules listed here must define ``__all__`` to avoid
120 other files. Modules listed here must define ``__all__`` to avoid
122 exposing everything they import.
121 exposing everything they import.
123 '''
122 '''
124 if package_skip_patterns is None:
123 if package_skip_patterns is None:
125 package_skip_patterns = ['\\.tests$']
124 package_skip_patterns = ['\\.tests$']
126 if module_skip_patterns is None:
125 if module_skip_patterns is None:
127 module_skip_patterns = ['\\.setup$', '\\._']
126 module_skip_patterns = ['\\.setup$', '\\._']
128 self.package_name = package_name
127 self.package_name = package_name
129 self.rst_extension = rst_extension
128 self.rst_extension = rst_extension
130 self.package_skip_patterns = package_skip_patterns
129 self.package_skip_patterns = package_skip_patterns
131 self.module_skip_patterns = module_skip_patterns
130 self.module_skip_patterns = module_skip_patterns
132 self.names_from__all__ = names_from__all__ or set()
131 self.names_from__all__ = names_from__all__ or set()
133
132
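    # A minimal construction sketch (illustrative values only; the package
    # name, the extra skip pattern and the module listed in
    # ``names_from__all__`` below are hypothetical, not taken from this file):
    #
    #     writer = ApiDocWriter('IPython',
    #                           package_skip_patterns=['\\.tests$', '\\.external$'],
    #                           names_from__all__={'IPython.display'})
    #
    # Skip patterns are matched against the dotted path *after* the package
    # name, so '\\.external$' would exclude ``IPython.external``.
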
    def get_package_name(self):
        return self._package_name

    def set_package_name(self, package_name):
        ''' Set package_name

        >>> docwriter = ApiDocWriter('sphinx')
        >>> import sphinx
        >>> docwriter.root_path == sphinx.__path__[0]
        True
        >>> docwriter.package_name = 'docutils'
        >>> import docutils
        >>> docwriter.root_path == docutils.__path__[0]
        True
        '''
        # It's also possible to imagine caching the module parsing here
        self._package_name = package_name
        self.root_module = import_module(package_name)
        self.root_path = self.root_module.__path__[0]
        self.written_modules = None

    package_name = property(get_package_name, set_package_name, None,
                            'get/set package_name')

    def _uri2path(self, uri):
        ''' Convert uri to absolute filepath

        Parameters
        ----------
        uri : string
            URI of python module to return path for

        Returns
        -------
        path : None or string
            Returns None if there is no valid path for this URI;
            otherwise returns the absolute file system path for the URI

        Examples
        --------
        >>> docwriter = ApiDocWriter('sphinx')
        >>> import sphinx
        >>> modpath = sphinx.__path__[0]
        >>> res = docwriter._uri2path('sphinx.builder')
        >>> res == os.path.join(modpath, 'builder.py')
        True
        >>> res = docwriter._uri2path('sphinx')
        >>> res == os.path.join(modpath, '__init__.py')
        True
        >>> docwriter._uri2path('sphinx.does_not_exist')

        '''
        if uri == self.package_name:
            return os.path.join(self.root_path, '__init__.py')
        path = uri.replace('.', os.path.sep)
        path = path.replace(self.package_name + os.path.sep, '')
        path = os.path.join(self.root_path, path)
        # XXX maybe check for extensions as well?
        if os.path.exists(path + '.py'):  # file
            path += '.py'
        elif os.path.exists(os.path.join(path, '__init__.py')):
            path = os.path.join(path, '__init__.py')
        else:
            return None
        return path

    def _path2uri(self, dirpath):
        ''' Convert directory path to uri '''
        relpath = dirpath.replace(self.root_path, self.package_name)
        if relpath.startswith(os.path.sep):
            relpath = relpath[1:]
        return relpath.replace(os.path.sep, '.')

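    # Sketch of the expected round trip between the two helpers above (the
    # concrete paths are hypothetical and depend on where the package is
    # installed):
    #
    #     docwriter = ApiDocWriter('sphinx')
    #     p = docwriter._uri2path('sphinx.util.console')   # .../sphinx/util/console.py
    #     docwriter._path2uri(os.path.dirname(p))          # -> 'sphinx.util'
    #
    # Note that _path2uri operates on directory paths, so the file name must
    # be dropped before converting back to a dotted URI.
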
    def _parse_module(self, uri):
        ''' Parse module defined in *uri* '''
        filename = self._uri2path(uri)
        if filename is None:
            # nothing that we could handle here.
            return ([], [])
        with open(filename, 'rb') as f:
            mod = ast.parse(f.read())
        return FuncClsScanner().scan(mod)

    def _import_funcs_classes(self, uri):
        """Import * from uri, and separate out functions and classes."""
        ns = {}
        exec('from %s import *' % uri, ns)
        funcs, classes = [], []
        for name, obj in ns.items():
            if inspect.isclass(obj):
                cls = Obj(
                    name=name,
                    has_init="__init__" in obj.__dict__,
                    sphinx_options=getattr(obj, "_sphinx_options", {}),
                )
                classes.append(cls)
            elif inspect.isfunction(obj):
                funcs.append(name)

        return sorted(funcs), sorted(classes, key=lambda x: x.name)

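    # How the ``_sphinx_options`` hook above is expected to be used: a class
    # picked up via the ``from mod import *`` path can attach per-class
    # autodoc options as a plain dict attribute. A hypothetical example
    # (not taken from this repository):
    #
    #     class Widget:
    #         _sphinx_options = dict(
    #             show_inherited_members=True,
    #             exclude_inherited_from=["object"],
    #         )
    #
    # generate_api_doc() below turns these keys into options on the emitted
    # ``autoclass`` directive.
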
    def find_funcs_classes(self, uri):
        """Find the functions and classes defined in the module ``uri``"""
        if uri in self.names_from__all__:
            # For API modules which expose things defined elsewhere, import them
            return self._import_funcs_classes(uri)
        else:
            # For other modules, scan their AST to see what they define
            return self._parse_module(uri)

    def generate_api_doc(self, uri):
        '''Make autodoc documentation template string for a module

        Parameters
        ----------
        uri : string
            python location of module - e.g. 'sphinx.builder'

        Returns
        -------
        S : string
            Contents of API doc
        '''
        # get the names of all classes and functions
        functions, classes = self.find_funcs_classes(uri)
        if not len(functions) and not len(classes):
            # print('WARNING: Empty -', uri)  # dbg
            return ''

        # Make a shorter version of the uri that omits the package name for
        # titles
        uri_short = re.sub(r'^%s\.' % self.package_name, '', uri)

        ad = '.. AUTO-GENERATED FILE -- DO NOT EDIT!\n\n'

        # Set the chapter title to read 'Module:' for all modules except for the
        # main packages
        if '.' in uri:
            chap_title = 'Module: :mod:`' + uri_short + '`'
        else:
            chap_title = ':mod:`' + uri_short + '`'
        ad += chap_title + '\n' + self.rst_section_levels[1] * len(chap_title)

        ad += '\n.. automodule:: ' + uri + '\n'
        ad += '\n.. currentmodule:: ' + uri + '\n'

        if classes:
            subhead = str(len(classes)) + (' Classes' if len(classes) > 1 else ' Class')
            ad += '\n' + subhead + '\n' + \
                  self.rst_section_levels[2] * len(subhead) + '\n'

            for c in classes:
                opts = c.sphinx_options
                ad += "\n.. autoclass:: " + c.name + "\n"
                # must NOT exclude from index to keep cross-refs working
                ad += "   :members:\n"
                if opts.get("show_inheritance", True):
                    ad += "   :show-inheritance:\n"
                if opts.get("show_inherited_members", False):
                    exclusions_list = opts.get("exclude_inherited_from", [])
                    exclusions = (
                        (" " + " ".join(exclusions_list)) if exclusions_list else ""
                    )
                    ad += f"   :inherited-members:{exclusions}\n"
                if c.has_init:
                    ad += '\n  .. automethod:: __init__\n'

        if functions:
            subhead = str(len(functions)) + (' Functions' if len(functions) > 1 else ' Function')
            ad += '\n' + subhead + '\n' + \
                  self.rst_section_levels[2] * len(subhead) + '\n'
            for f in functions:
                # must NOT exclude from index to keep cross-refs working
                ad += '\n.. autofunction:: ' + uri + '.' + f + '\n\n'
        return ad

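    # For orientation, the directive block emitted above for a class with no
    # custom ``sphinx_options`` looks roughly like this (the class name is
    # made up, and the exact indentation follows the string literals above):
    #
    #     .. autoclass:: SomeClass
    #        :members:
    #        :show-inheritance:
    #
    #        .. automethod:: __init__
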
    def _survives_exclude(self, matchstr, match_type):
        ''' Returns True if *matchstr* does not match patterns

        ``self.package_name`` removed from front of string if present

        Examples
        --------
        >>> dw = ApiDocWriter('sphinx')
        >>> dw._survives_exclude('sphinx.okpkg', 'package')
        True
        >>> dw.package_skip_patterns.append('^\\.badpkg$')
        >>> dw._survives_exclude('sphinx.badpkg', 'package')
        False
        >>> dw._survives_exclude('sphinx.badpkg', 'module')
        True
        >>> dw._survives_exclude('sphinx.badmod', 'module')
        True
        >>> dw.module_skip_patterns.append('^\\.badmod$')
        >>> dw._survives_exclude('sphinx.badmod', 'module')
        False
        '''
        if match_type == 'module':
            patterns = self.module_skip_patterns
        elif match_type == 'package':
            patterns = self.package_skip_patterns
        else:
            raise ValueError('Cannot interpret match type "%s"'
                             % match_type)
        # Match to URI without package name
        L = len(self.package_name)
        if matchstr[:L] == self.package_name:
            matchstr = matchstr[L:]
        for pat in patterns:
            try:
                pat.search
            except AttributeError:
                pat = re.compile(pat)
            if pat.search(matchstr):
                return False
        return True

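    # Both plain strings and pre-compiled regular expressions are accepted by
    # the loop above; strings are compiled lazily on each call. For example
    # (hypothetical pattern):
    #
    #     dw.package_skip_patterns.append(re.compile(r'\.externals$'))
    #
    # behaves the same as appending the raw string r'\.externals$'.
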
    def discover_modules(self):
        ''' Return module sequence discovered from ``self.package_name``

        Parameters
        ----------
        None

        Returns
        -------
        mods : sequence
            Sequence of module names within ``self.package_name``

        Examples
        --------
        >>> dw = ApiDocWriter('sphinx')
        >>> mods = dw.discover_modules()
        >>> 'sphinx.util' in mods
        True
        >>> dw.package_skip_patterns.append('\\.util$')
        >>> 'sphinx.util' in dw.discover_modules()
        False
        >>>
        '''
        modules = [self.package_name]
        # raw directory parsing
        for dirpath, dirnames, filenames in os.walk(self.root_path):
            # Check directory names for packages
            root_uri = self._path2uri(os.path.join(self.root_path,
                                                   dirpath))
            for dirname in dirnames[:]:  # copy list - we modify inplace
                package_uri = '.'.join((root_uri, dirname))
                if (self._uri2path(package_uri) and
                        self._survives_exclude(package_uri, 'package')):
                    modules.append(package_uri)
                else:
                    dirnames.remove(dirname)
            # Check filenames for modules
            for filename in filenames:
                module_name = filename[:-3]
                module_uri = '.'.join((root_uri, module_name))
                if (self._uri2path(module_uri) and
                        self._survives_exclude(module_uri, 'module')):
                    modules.append(module_uri)
        return sorted(modules)

    def write_modules_api(self, modules, outdir):
        # write the list
        written_modules = []
        for m in modules:
            api_str = self.generate_api_doc(m)
            if not api_str:
                continue
            # write out to file
            outfile = os.path.join(outdir, m + self.rst_extension)
            with open(outfile, "wt", encoding="utf-8") as fileobj:
                fileobj.write(api_str)
            written_modules.append(m)
        self.written_modules = written_modules

    def write_api_docs(self, outdir):
        """Generate API reST files.

        Parameters
        ----------
        outdir : string
            Directory name in which to store files.
            We create automatic filenames for each module.

        Returns
        -------
        None

        Notes
        -----
        Sets self.written_modules to list of written modules
        """
        if not os.path.exists(outdir):
            os.mkdir(outdir)
        # compose list of modules
        modules = self.discover_modules()
        self.write_modules_api(modules, outdir)

    def write_index(self, outdir, path='gen.rst', relative_to=None):
        """Make a reST API index file from written files

        Parameters
        ----------
        outdir : string
            Directory to which to write generated index file
        path : string
            Filename to write index to
        relative_to : string
            Path to which written filenames are relative. This
            component of the written file path will be removed from
            outdir, in the generated index. Default is None, meaning
            leave the path as it is.
        """
        if self.written_modules is None:
            raise ValueError('No modules written')
        # Get full filename path
        path = os.path.join(outdir, path)
        # Path written into index is relative to rootpath
        if relative_to is not None:
            relpath = outdir.replace(relative_to + os.path.sep, '')
        else:
            relpath = outdir
        with open(path, "wt", encoding="utf-8") as idx:
            w = idx.write
            w('.. AUTO-GENERATED FILE -- DO NOT EDIT!\n\n')
            w('.. autosummary::\n'
              '   :toctree: %s\n\n' % relpath)
            for mod in self.written_modules:
                w('   %s\n' % mod)
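
    # End-to-end usage sketch, mirroring how a documentation build script
    # might drive this writer (the package name and output paths below are
    # hypothetical):
    #
    #     writer = ApiDocWriter('IPython', names_from__all__={'IPython.display'})
    #     writer.write_api_docs('api/generated')
    #     writer.write_index('api/generated', 'gen.rst', relative_to='api')
    #
    # write_api_docs() must run before write_index(), since the index is
    # built from ``self.written_modules``.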