Correct suppression defaults, add a test for #13735
krassowski
@@ -1,2856 +1,2858 @@
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now support a wide variety of completion mechanism both available
7 This module now support a wide variety of completion mechanism both available
8 for normal classic Python code, as well as completer for IPython specific
8 for normal classic Python code, as well as completer for IPython specific
9 Syntax like magics.
9 Syntax like magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends not only can complete your code, but can help
14 IPython and compatible frontends not only can complete your code, but can help
15 you to input a wide range of characters. In particular we allow you to insert
15 you to input a wide range of characters. In particular we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name, or unicode long description. To do so type a backslash follow by the
22 name, or unicode long description. To do so type a backslash follow by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 Ξ±
31 Ξ±
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\GREEK SMALL LETTER ALPHA<tab>
38 \\GREEK SMALL LETTER ALPHA<tab>
39 Ξ±
39 Ξ±
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrow or
42 Only valid Python identifiers will complete. Combining characters (like arrow or
43 dots) are also available, unlike latex they need to be put after the their
43 dots) are also available, unlike latex they need to be put after the their
44 counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
44 counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometime challenging to know how to type a character, if you are using
51 It is sometime challenging to know how to type a character, if you are using
52 IPython, or any compatible frontend you can prepend backslash to the character
52 IPython, or any compatible frontend you can prepend backslash to the character
53 and press ``<tab>`` to expand it to its latex form.
53 and press ``<tab>`` to expand it to its latex form.
54
54
55 .. code::
55 .. code::
56
56
57 \\Ξ±<tab>
57 \\Ξ±<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 ``Completer.backslash_combining_completions`` option to ``False``.
62 ``Completer.backslash_combining_completions`` option to ``False``.
63
63
64
64
65 Experimental
65 Experimental
66 ============
66 ============
67
67
68 Starting with IPython 6.0, this module can make use of the Jedi library to
68 Starting with IPython 6.0, this module can make use of the Jedi library to
69 generate completions both using static analysis of the code, and dynamically
69 generate completions both using static analysis of the code, and dynamically
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
71 for Python. The APIs attached to this new mechanism is unstable and will
71 for Python. The APIs attached to this new mechanism is unstable and will
72 raise unless use in an :any:`provisionalcompleter` context manager.
72 raise unless use in an :any:`provisionalcompleter` context manager.
73
73
74 You will find that the following are experimental:
74 You will find that the following are experimental:
75
75
76 - :any:`provisionalcompleter`
76 - :any:`provisionalcompleter`
77 - :any:`IPCompleter.completions`
77 - :any:`IPCompleter.completions`
78 - :any:`Completion`
78 - :any:`Completion`
79 - :any:`rectify_completions`
79 - :any:`rectify_completions`
80
80
81 .. note::
81 .. note::
82
82
83 better name for :any:`rectify_completions` ?
83 better name for :any:`rectify_completions` ?
84
84
85 We welcome any feedback on these new API, and we also encourage you to try this
85 We welcome any feedback on these new API, and we also encourage you to try this
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
87 to have extra logging information if :any:`jedi` is crashing, or if current
87 to have extra logging information if :any:`jedi` is crashing, or if current
88 IPython completer pending deprecations are returning results not yet handled
88 IPython completer pending deprecations are returning results not yet handled
89 by :any:`jedi`
89 by :any:`jedi`
90
90
91 Using Jedi for tab completion allow snippets like the following to work without
91 Using Jedi for tab completion allow snippets like the following to work without
92 having to execute any code:
92 having to execute any code:
93
93
94 >>> myvar = ['hello', 42]
94 >>> myvar = ['hello', 42]
95 ... myvar[1].bi<tab>
95 ... myvar[1].bi<tab>
96
96
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
98 executing any code unlike the previously available ``IPCompleter.greedy``
98 executing any code unlike the previously available ``IPCompleter.greedy``
99 option.
99 option.
100
100
101 Be sure to update :any:`jedi` to the latest stable version or to try the
101 Be sure to update :any:`jedi` to the latest stable version or to try the
102 current development version to get better completions.
102 current development version to get better completions.
103
103
104 Matchers
104 Matchers
105 ========
105 ========
106
106
107 All completions routines are implemented using unified *Matchers* API.
107 All completions routines are implemented using unified *Matchers* API.
108 The matchers API is provisional and subject to change without notice.
108 The matchers API is provisional and subject to change without notice.
109
109
110 The built-in matchers include:
110 The built-in matchers include:
111
111
112 - ``IPCompleter.dict_key_matcher``: dictionary key completions,
112 - ``IPCompleter.dict_key_matcher``: dictionary key completions,
113 - ``IPCompleter.magic_matcher``: completions for magics,
113 - ``IPCompleter.magic_matcher``: completions for magics,
114 - ``IPCompleter.unicode_name_matcher``, ``IPCompleter.fwd_unicode_matcher`` and ``IPCompleter.latex_matcher``: see `Forward latex/unicode completion`_,
114 - ``IPCompleter.unicode_name_matcher``, ``IPCompleter.fwd_unicode_matcher`` and ``IPCompleter.latex_matcher``: see `Forward latex/unicode completion`_,
115 - ``back_unicode_name_matcher`` and ``back_latex_name_matcher``: see `Backward latex completion`_,
115 - ``back_unicode_name_matcher`` and ``back_latex_name_matcher``: see `Backward latex completion`_,
116 - ``IPCompleter.file_matcher``: paths to files and directories,
116 - ``IPCompleter.file_matcher``: paths to files and directories,
117 - ``IPCompleter.python_func_kw_matcher`` - function keywords,
117 - ``IPCompleter.python_func_kw_matcher`` - function keywords,
118 - ``IPCompleter.python_matches`` - globals and attributes (v1 API),
118 - ``IPCompleter.python_matches`` - globals and attributes (v1 API),
119 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
119 - ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
120 - ``IPCompleter.custom_completer_matcher`` - pluggable completer with a default implementation in any:`core.InteractiveShell`
120 - ``IPCompleter.custom_completer_matcher`` - pluggable completer with a default implementation in any:`core.InteractiveShell`
121 which uses uses IPython hooks system (`complete_command`) with string dispatch (including regular expressions).
121 which uses uses IPython hooks system (`complete_command`) with string dispatch (including regular expressions).
122 Differently to other matchers, ``custom_completer_matcher`` will not suppress Jedi results to match
122 Differently to other matchers, ``custom_completer_matcher`` will not suppress Jedi results to match
123 behaviour in earlier IPython versions.
123 behaviour in earlier IPython versions.
124
124
125 Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list.
125 Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list.
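
For instance, a matcher can be registered at runtime from an interactive
session (``my_matcher`` here is a hypothetical function, such as the sketch
shown under `Suppression of competing matchers`_ below):

.. code::

    ip = get_ipython()
    ip.Completer.custom_matchers.append(my_matcher)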

Suppression of competing matchers
---------------------------------

By default results from all matchers are combined, in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results from
the matcher with the higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a set of identifiers of matchers which
should not be suppressed to the ``MatcherResult`` under the ``do_not_suppress`` key.
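
For illustration only, a minimal API v2 matcher that supplies its own
candidates and suppresses the remaining matchers might look roughly like this
(the name ``my_matcher`` and its candidate strings are hypothetical):

.. code::

    from IPython.core.completer import (
        CompletionContext,
        SimpleCompletion,
        SimpleMatcherResult,
        context_matcher,
    )

    @context_matcher()
    def my_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # Offer fixed candidates; a real matcher would inspect context.token.
        return {
            "completions": [
                SimpleCompletion("foo", type="keyword"),
                SimpleCompletion("bar", type="keyword"),
            ],
            # True suppresses all other matchers; a set of matcher
            # identifiers can be used to suppress only specific ones.
            "suppress": True,
        }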
140 """
140 """
141
141
142
142
143 # Copyright (c) IPython Development Team.
143 # Copyright (c) IPython Development Team.
144 # Distributed under the terms of the Modified BSD License.
144 # Distributed under the terms of the Modified BSD License.
145 #
145 #
146 # Some of this code originated from rlcompleter in the Python standard library
146 # Some of this code originated from rlcompleter in the Python standard library
147 # Copyright (C) 2001 Python Software Foundation, www.python.org
147 # Copyright (C) 2001 Python Software Foundation, www.python.org
148
148
149
149
150 import builtins as builtin_mod
150 import builtins as builtin_mod
151 import glob
151 import glob
152 import inspect
152 import inspect
153 import itertools
153 import itertools
154 import keyword
154 import keyword
155 import os
155 import os
156 import re
156 import re
157 import string
157 import string
158 import sys
158 import sys
159 import time
159 import time
160 import unicodedata
160 import unicodedata
161 import uuid
161 import uuid
162 import warnings
162 import warnings
163 from contextlib import contextmanager
163 from contextlib import contextmanager
164 from functools import lru_cache, partial
164 from functools import lru_cache, partial
165 from importlib import import_module
165 from importlib import import_module
166 from types import SimpleNamespace
166 from types import SimpleNamespace
167 from typing import (
167 from typing import (
168 Iterable,
168 Iterable,
169 Iterator,
169 Iterator,
170 List,
170 List,
171 Tuple,
171 Tuple,
172 Union,
172 Union,
173 Any,
173 Any,
174 Sequence,
174 Sequence,
175 Dict,
175 Dict,
176 NamedTuple,
176 NamedTuple,
177 Pattern,
177 Pattern,
178 Optional,
178 Optional,
179 Callable,
179 Callable,
180 TYPE_CHECKING,
180 TYPE_CHECKING,
181 Set,
181 Set,
182 )
182 )
183
183
184 from IPython.core.error import TryNext
184 from IPython.core.error import TryNext
185 from IPython.core.inputtransformer2 import ESC_MAGIC
185 from IPython.core.inputtransformer2 import ESC_MAGIC
186 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
186 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
187 from IPython.core.oinspect import InspectColors
187 from IPython.core.oinspect import InspectColors
188 from IPython.testing.skipdoctest import skip_doctest
188 from IPython.testing.skipdoctest import skip_doctest
189 from IPython.utils import generics
189 from IPython.utils import generics
190 from IPython.utils.dir2 import dir2, get_real_method
190 from IPython.utils.dir2 import dir2, get_real_method
191 from IPython.utils.path import ensure_dir_exists
191 from IPython.utils.path import ensure_dir_exists
192 from IPython.utils.process import arg_split
192 from IPython.utils.process import arg_split
193 from traitlets import (
193 from traitlets import (
194 Bool,
194 Bool,
195 Enum,
195 Enum,
196 Int,
196 Int,
197 List as ListTrait,
197 List as ListTrait,
198 Unicode,
198 Unicode,
199 Dict as DictTrait,
199 Dict as DictTrait,
200 Union as UnionTrait,
200 Union as UnionTrait,
201 default,
201 default,
202 observe,
202 observe,
203 )
203 )
204 from traitlets.config.configurable import Configurable
204 from traitlets.config.configurable import Configurable
205
205
206 import __main__
206 import __main__
207
207
208 # skip module docstests
208 # skip module docstests
209 __skip_doctest__ = True
209 __skip_doctest__ = True
210
210
211
211
212 try:
212 try:
213 import jedi
213 import jedi
214 jedi.settings.case_insensitive_completion = False
214 jedi.settings.case_insensitive_completion = False
215 import jedi.api.helpers
215 import jedi.api.helpers
216 import jedi.api.classes
216 import jedi.api.classes
217 JEDI_INSTALLED = True
217 JEDI_INSTALLED = True
218 except ImportError:
218 except ImportError:
219 JEDI_INSTALLED = False
219 JEDI_INSTALLED = False
220
220
221 if TYPE_CHECKING:
221 if TYPE_CHECKING:
222 from typing import cast
222 from typing import cast
223 from typing_extensions import TypedDict, NotRequired
223 from typing_extensions import TypedDict, NotRequired
224 else:
224 else:
225
225
226 def cast(obj, _type):
226 def cast(obj, _type):
227 return obj
227 return obj
228
228
229 TypedDict = Dict
229 TypedDict = Dict
230 NotRequired = Tuple
230 NotRequired = Tuple
231
231
232 # -----------------------------------------------------------------------------
232 # -----------------------------------------------------------------------------
233 # Globals
233 # Globals
234 #-----------------------------------------------------------------------------
234 #-----------------------------------------------------------------------------
235
235
236 # ranges where we have most of the valid unicode names. We could be more finer
236 # ranges where we have most of the valid unicode names. We could be more finer
237 # grained but is it worth it for performance While unicode have character in the
237 # grained but is it worth it for performance While unicode have character in the
238 # range 0, 0x110000, we seem to have name for about 10% of those. (131808 as I
238 # range 0, 0x110000, we seem to have name for about 10% of those. (131808 as I
239 # write this). With below range we cover them all, with a density of ~67%
239 # write this). With below range we cover them all, with a density of ~67%
240 # biggest next gap we consider only adds up about 1% density and there are 600
240 # biggest next gap we consider only adds up about 1% density and there are 600
241 # gaps that would need hard coding.
241 # gaps that would need hard coding.
242 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
242 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
243
243
244 # Public API
244 # Public API
245 __all__ = ["Completer", "IPCompleter"]
245 __all__ = ["Completer", "IPCompleter"]
246
246
247 if sys.platform == 'win32':
247 if sys.platform == 'win32':
248 PROTECTABLES = ' '
248 PROTECTABLES = ' '
249 else:
249 else:
250 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
250 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
251
251
252 # Protect against returning an enormous number of completions which the frontend
252 # Protect against returning an enormous number of completions which the frontend
253 # may have trouble processing.
253 # may have trouble processing.
254 MATCHES_LIMIT = 500
254 MATCHES_LIMIT = 500
255
255
256 # Completion type reported when no type can be inferred.
256 # Completion type reported when no type can be inferred.
257 _UNKNOWN_TYPE = "<unknown>"
257 _UNKNOWN_TYPE = "<unknown>"

class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in the :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things() # works

    >>> completer.do_experimental_things() # raises.

    .. note::

        Unstable

        By using this context manager you agree that the APIs in use may change
        without warning, and that you won't complain if they do so.

        You also understand that, if the API is not to your liking, you should report
        a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements on
        any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
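
    Examples
    --------
    A few illustrative cases:

    >>> has_open_quotes('hello "world')
    '"'
    >>> has_open_quotes("it's")
    "'"
    >>> has_open_quotes('no open quotes')
    False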
311 """
311 """
312 # We check " first, then ', so complex cases with nested quotes will get
312 # We check " first, then ', so complex cases with nested quotes will get
313 # the " to take precedence.
313 # the " to take precedence.
314 if s.count('"') % 2:
314 if s.count('"') % 2:
315 return '"'
315 return '"'
316 elif s.count("'") % 2:
316 elif s.count("'") % 2:
317 return "'"
317 return "'"
318 else:
318 else:
319 return False
319 return False
320
320
321
321
322 def protect_filename(s, protectables=PROTECTABLES):
322 def protect_filename(s, protectables=PROTECTABLES):
323 """Escape a string to protect certain characters."""
323 """Escape a string to protect certain characters."""
324 if set(s) & set(protectables):
324 if set(s) & set(protectables):
325 if sys.platform == "win32":
325 if sys.platform == "win32":
326 return '"' + s + '"'
326 return '"' + s + '"'
327 else:
327 else:
328 return "".join(("\\" + c if c in protectables else c) for c in s)
328 return "".join(("\\" + c if c in protectables else c) for c in s)
329 else:
329 else:
330 return s
330 return s
331
331
332
332
333 def expand_user(path:str) -> Tuple[str, bool, str]:
333 def expand_user(path:str) -> Tuple[str, bool, str]:
334 """Expand ``~``-style usernames in strings.
334 """Expand ``~``-style usernames in strings.
335
335
336 This is similar to :func:`os.path.expanduser`, but it computes and returns
336 This is similar to :func:`os.path.expanduser`, but it computes and returns
337 extra information that will be useful if the input was being used in
337 extra information that will be useful if the input was being used in
338 computing completions, and you wish to return the completions with the
338 computing completions, and you wish to return the completions with the
339 original '~' instead of its expanded value.
339 original '~' instead of its expanded value.
340
340
341 Parameters
341 Parameters
342 ----------
342 ----------
343 path : str
343 path : str
344 String to be expanded. If no ~ is present, the output is the same as the
344 String to be expanded. If no ~ is present, the output is the same as the
345 input.
345 input.
346
346
347 Returns
347 Returns
348 -------
348 -------
349 newpath : str
349 newpath : str
350 Result of ~ expansion in the input path.
350 Result of ~ expansion in the input path.
351 tilde_expand : bool
351 tilde_expand : bool
352 Whether any expansion was performed or not.
352 Whether any expansion was performed or not.
353 tilde_val : str
353 tilde_val : str
354 The value that ~ was replaced with.
354 The value that ~ was replaced with.
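
    Examples
    --------
    A path without a tilde is returned unchanged:

    >>> expand_user('/tmp/data.txt')
    ('/tmp/data.txt', False, '')

    With a leading ``~`` the result depends on the current user's home
    directory, so the output below is illustrative only:

    >>> expand_user('~/data.txt')  # doctest: +SKIP
    ('/home/user/data.txt', True, '/home/user')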
355 """
355 """
356 # Default values
356 # Default values
357 tilde_expand = False
357 tilde_expand = False
358 tilde_val = ''
358 tilde_val = ''
359 newpath = path
359 newpath = path
360
360
361 if path.startswith('~'):
361 if path.startswith('~'):
362 tilde_expand = True
362 tilde_expand = True
363 rest = len(path)-1
363 rest = len(path)-1
364 newpath = os.path.expanduser(path)
364 newpath = os.path.expanduser(path)
365 if rest:
365 if rest:
366 tilde_val = newpath[:-rest]
366 tilde_val = newpath[:-rest]
367 else:
367 else:
368 tilde_val = newpath
368 tilde_val = newpath
369
369
370 return newpath, tilde_expand, tilde_val
370 return newpath, tilde_expand, tilde_val
371
371
372
372
373 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
373 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
374 """Does the opposite of expand_user, with its outputs.
374 """Does the opposite of expand_user, with its outputs.
375 """
375 """
376 if tilde_expand:
376 if tilde_expand:
377 return path.replace(tilde_val, '~')
377 return path.replace(tilde_val, '~')
378 else:
378 else:
379 return path
379 return path
380
380
381
381
382 def completions_sorting_key(word):
382 def completions_sorting_key(word):
383 """key for sorting completions
383 """key for sorting completions
384
384
385 This does several things:
385 This does several things:
386
386
387 - Demote any completions starting with underscores to the end
387 - Demote any completions starting with underscores to the end
388 - Insert any %magic and %%cellmagic completions in the alphabetical order
388 - Insert any %magic and %%cellmagic completions in the alphabetical order
389 by their name
389 by their name
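
    For example, magics sort alongside plain names by their bare name while
    private names go last:

    >>> words = ['_private', 'alpha', '%%timeit', 'beta']
    >>> sorted(words, key=completions_sorting_key)
    ['alpha', 'beta', '%%timeit', '_private']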
390 """
390 """
391 prio1, prio2 = 0, 0
391 prio1, prio2 = 0, 0
392
392
393 if word.startswith('__'):
393 if word.startswith('__'):
394 prio1 = 2
394 prio1 = 2
395 elif word.startswith('_'):
395 elif word.startswith('_'):
396 prio1 = 1
396 prio1 = 1
397
397
398 if word.endswith('='):
398 if word.endswith('='):
399 prio1 = -1
399 prio1 = -1
400
400
401 if word.startswith('%%'):
401 if word.startswith('%%'):
402 # If there's another % in there, this is something else, so leave it alone
402 # If there's another % in there, this is something else, so leave it alone
403 if not "%" in word[2:]:
403 if not "%" in word[2:]:
404 word = word[2:]
404 word = word[2:]
405 prio2 = 2
405 prio2 = 2
406 elif word.startswith('%'):
406 elif word.startswith('%'):
407 if not "%" in word[1:]:
407 if not "%" in word[1:]:
408 word = word[1:]
408 word = word[1:]
409 prio2 = 1
409 prio2 = 1
410
410
411 return prio1, word, prio2
411 return prio1, word, prio2
412
412
413
413
414 class _FakeJediCompletion:
414 class _FakeJediCompletion:
415 """
415 """
416 This is a workaround to communicate to the UI that Jedi has crashed and to
416 This is a workaround to communicate to the UI that Jedi has crashed and to
417 report a bug. Will be used only id :any:`IPCompleter.debug` is set to true.
417 report a bug. Will be used only id :any:`IPCompleter.debug` is set to true.
418
418
419 Added in IPython 6.0 so should likely be removed for 7.0
419 Added in IPython 6.0 so should likely be removed for 7.0
420
420
421 """
421 """
422
422
423 def __init__(self, name):
423 def __init__(self, name):
424
424
425 self.name = name
425 self.name = name
426 self.complete = name
426 self.complete = name
427 self.type = 'crashed'
427 self.type = 'crashed'
428 self.name_with_symbols = name
428 self.name_with_symbols = name
429 self.signature = ''
429 self.signature = ''
430 self._origin = 'fake'
430 self._origin = 'fake'
431
431
432 def __repr__(self):
432 def __repr__(self):
433 return '<Fake completion object jedi has crashed>'
433 return '<Fake completion object jedi has crashed>'
434
434
435
435
436 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
436 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
437
437
438
438
439 class Completion:
439 class Completion:
440 """
440 """
441 Completion object used and returned by IPython completers.
441 Completion object used and returned by IPython completers.
442
442
443 .. warning::
443 .. warning::
444
444
445 Unstable
445 Unstable
446
446
447 This function is unstable, API may change without warning.
447 This function is unstable, API may change without warning.
448 It will also raise unless use in proper context manager.
448 It will also raise unless use in proper context manager.
449
449
450 This act as a middle ground :any:`Completion` object between the
450 This act as a middle ground :any:`Completion` object between the
451 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
451 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
452 object. While Jedi need a lot of information about evaluator and how the
452 object. While Jedi need a lot of information about evaluator and how the
453 code should be ran/inspected, PromptToolkit (and other frontend) mostly
453 code should be ran/inspected, PromptToolkit (and other frontend) mostly
454 need user facing information.
454 need user facing information.
455
455
456 - Which range should be replaced replaced by what.
456 - Which range should be replaced replaced by what.
457 - Some metadata (like completion type), or meta information to displayed to
457 - Some metadata (like completion type), or meta information to displayed to
458 the use user.
458 the use user.
459
459
460 For debugging purpose we can also store the origin of the completion (``jedi``,
460 For debugging purpose we can also store the origin of the completion (``jedi``,
461 ``IPython.python_matches``, ``IPython.magics_matches``...).
461 ``IPython.python_matches``, ``IPython.magics_matches``...).
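
    A short construction sketch (the values shown are arbitrary):

    .. code::

        with provisionalcompleter():
            c = Completion(start=0, end=3, text='abs', type='function')
            # ``c.text`` is the replacement for the ``[start, end)`` slice
            # of the token being completed.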
462 """
462 """
463
463
464 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
464 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
465
465
466 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
466 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
467 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
467 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
468 "It may change without warnings. "
468 "It may change without warnings. "
469 "Use in corresponding context manager.",
469 "Use in corresponding context manager.",
470 category=ProvisionalCompleterWarning, stacklevel=2)
470 category=ProvisionalCompleterWarning, stacklevel=2)
471
471
472 self.start = start
472 self.start = start
473 self.end = end
473 self.end = end
474 self.text = text
474 self.text = text
475 self.type = type
475 self.type = type
476 self.signature = signature
476 self.signature = signature
477 self._origin = _origin
477 self._origin = _origin
478
478
479 def __repr__(self):
479 def __repr__(self):
480 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
480 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
481 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
481 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
482
482
483 def __eq__(self, other)->Bool:
483 def __eq__(self, other)->Bool:
484 """
484 """
485 Equality and hash do not hash the type (as some completer may not be
485 Equality and hash do not hash the type (as some completer may not be
486 able to infer the type), but are use to (partially) de-duplicate
486 able to infer the type), but are use to (partially) de-duplicate
487 completion.
487 completion.
488
488
489 Completely de-duplicating completion is a bit tricker that just
489 Completely de-duplicating completion is a bit tricker that just
490 comparing as it depends on surrounding text, which Completions are not
490 comparing as it depends on surrounding text, which Completions are not
491 aware of.
491 aware of.
492 """
492 """
493 return self.start == other.start and \
493 return self.start == other.start and \
494 self.end == other.end and \
494 self.end == other.end and \
495 self.text == other.text
495 self.text == other.text
496
496
497 def __hash__(self):
497 def __hash__(self):
498 return hash((self.start, self.end, self.text))
498 return hash((self.start, self.end, self.text))
499
499
500
500
501 class SimpleCompletion:
501 class SimpleCompletion:
502 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
502 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
503
503
504 .. warning::
504 .. warning::
505
505
506 Provisional
506 Provisional
507
507
508 This class is used to describe the currently supported attributes of
508 This class is used to describe the currently supported attributes of
509 simple completion items, and any additional implementation details
509 simple completion items, and any additional implementation details
510 should not be relied on. Additional attributes may be included in
510 should not be relied on. Additional attributes may be included in
511 future versions, and meaning of text disambiguated from the current
511 future versions, and meaning of text disambiguated from the current
512 dual meaning of "text to insert" and "text to used as a label".
512 dual meaning of "text to insert" and "text to used as a label".
513 """
513 """
514
514
515 __slots__ = ["text", "type"]
515 __slots__ = ["text", "type"]
516
516
517 def __init__(self, text: str, *, type: str = None):
517 def __init__(self, text: str, *, type: str = None):
518 self.text = text
518 self.text = text
519 self.type = type
519 self.type = type
520
520
521 def __repr__(self):
521 def __repr__(self):
522 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
522 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
523
523
524
524
525 class MatcherResultBase(TypedDict):
525 class MatcherResultBase(TypedDict):
526 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
526 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
527
527
528 #: suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
528 #: suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
529 matched_fragment: NotRequired[str]
529 matched_fragment: NotRequired[str]
530
530
531 #: whether to suppress results from all other matchers (True), some
531 #: whether to suppress results from all other matchers (True), some
532 #: matchers (set of identifiers) or none (False); default is False.
532 #: matchers (set of identifiers) or none (False); default is False.
533 suppress: NotRequired[Union[bool, Set[str]]]
533 suppress: NotRequired[Union[bool, Set[str]]]
534
534
535 #: identifiers of matchers which should NOT be suppressed
535 #: identifiers of matchers which should NOT be suppressed
536 do_not_suppress: NotRequired[Set[str]]
536 do_not_suppress: NotRequired[Set[str]]
537
537
538 #: are completions already ordered and should be left as-is? default is False.
538 #: are completions already ordered and should be left as-is? default is False.
539 ordered: NotRequired[bool]
539 ordered: NotRequired[bool]
540
540
541
541
542 class SimpleMatcherResult(MatcherResultBase):
542 class SimpleMatcherResult(MatcherResultBase):
543 """Result of new-style completion matcher."""
543 """Result of new-style completion matcher."""
544
544
545 #: list of candidate completions
545 #: list of candidate completions
546 completions: Sequence[SimpleCompletion]
546 completions: Sequence[SimpleCompletion]
547
547
548
548
549 class _JediMatcherResult(MatcherResultBase):
549 class _JediMatcherResult(MatcherResultBase):
550 """Matching result returned by Jedi (will be processed differently)"""
550 """Matching result returned by Jedi (will be processed differently)"""
551
551
552 #: list of candidate completions
552 #: list of candidate completions
553 completions: Iterable[_JediCompletionLike]
553 completions: Iterable[_JediCompletionLike]
554
554
555
555
556 class CompletionContext(NamedTuple):
556 class CompletionContext(NamedTuple):
557 """Completion context provided as an argument to matchers in the Matcher API v2."""
557 """Completion context provided as an argument to matchers in the Matcher API v2."""
558
558
559 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
559 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
560 # which was not explicitly visible as an argument of the matcher, making any refactor
560 # which was not explicitly visible as an argument of the matcher, making any refactor
561 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
561 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
562 # from the completer, and make substituting them in sub-classes easier.
562 # from the completer, and make substituting them in sub-classes easier.
563
563
564 #: Relevant fragment of code directly preceding the cursor.
564 #: Relevant fragment of code directly preceding the cursor.
565 #: The extraction of token is implemented via splitter heuristic
565 #: The extraction of token is implemented via splitter heuristic
566 #: (following readline behaviour for legacy reasons), which is user configurable
566 #: (following readline behaviour for legacy reasons), which is user configurable
567 #: (by switching the greedy mode).
567 #: (by switching the greedy mode).
568 token: str
568 token: str
569
569
570 #: The full available content of the editor or buffer
570 #: The full available content of the editor or buffer
571 full_text: str
571 full_text: str
572
572
573 #: Cursor position in the line (the same for ``full_text`` and ``text``).
573 #: Cursor position in the line (the same for ``full_text`` and ``text``).
574 cursor_position: int
574 cursor_position: int
575
575
576 #: Cursor line in ``full_text``.
576 #: Cursor line in ``full_text``.
577 cursor_line: int
577 cursor_line: int
578
578
579 #: The maximum number of completions that will be used downstream.
579 #: The maximum number of completions that will be used downstream.
580 #: Matchers can use this information to abort early.
580 #: Matchers can use this information to abort early.
581 #: The built-in Jedi matcher is currently excepted from this limit.
581 #: The built-in Jedi matcher is currently excepted from this limit.
582 limit: int
582 limit: int
583
583
584 @property
584 @property
585 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
585 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
586 def text_until_cursor(self) -> str:
586 def text_until_cursor(self) -> str:
587 return self.line_with_cursor[: self.cursor_position]
587 return self.line_with_cursor[: self.cursor_position]
588
588
589 @property
589 @property
590 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
590 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
591 def line_with_cursor(self) -> str:
591 def line_with_cursor(self) -> str:
592 return self.full_text.split("\n")[self.cursor_line]
592 return self.full_text.split("\n")[self.cursor_line]
593
593
594
594
595 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
595 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
596
596
597 MatcherAPIv1 = Callable[[str], List[str]]
597 MatcherAPIv1 = Callable[[str], List[str]]
598 MatcherAPIv2 = Callable[[CompletionContext], MatcherResult]
598 MatcherAPIv2 = Callable[[CompletionContext], MatcherResult]
599 Matcher = Union[MatcherAPIv1, MatcherAPIv2]
599 Matcher = Union[MatcherAPIv1, MatcherAPIv2]
600
600
601
601
602 def completion_matcher(
602 def completion_matcher(
603 *, priority: float = None, identifier: str = None, api_version: int = 1
603 *, priority: float = None, identifier: str = None, api_version: int = 1
604 ):
604 ):
605 """Adds attributes describing the matcher.
605 """Adds attributes describing the matcher.
606
606
607 Parameters
607 Parameters
608 ----------
608 ----------
609 priority : Optional[float]
609 priority : Optional[float]
610 The priority of the matcher, determines the order of execution of matchers.
610 The priority of the matcher, determines the order of execution of matchers.
611 Higher priority means that the matcher will be executed first. Defaults to 0.
611 Higher priority means that the matcher will be executed first. Defaults to 0.
612 identifier : Optional[str]
612 identifier : Optional[str]
613 identifier of the matcher allowing users to modify the behaviour via traitlets,
613 identifier of the matcher allowing users to modify the behaviour via traitlets,
614 and also used to for debugging (will be passed as ``origin`` with the completions).
614 and also used to for debugging (will be passed as ``origin`` with the completions).
615 Defaults to matcher function ``__qualname__``.
615 Defaults to matcher function ``__qualname__``.
616 api_version: Optional[int]
616 api_version: Optional[int]
617 version of the Matcher API used by this matcher.
617 version of the Matcher API used by this matcher.
618 Currently supported values are 1 and 2.
618 Currently supported values are 1 and 2.
619 Defaults to 1.
619 Defaults to 1.
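
    Examples
    --------
    A hypothetical matcher registered with the v2 API could be declared as
    follows (the body shown is a minimal sketch)::

        @completion_matcher(identifier="my_matcher", api_version=2)
        def my_matcher(context: CompletionContext) -> SimpleMatcherResult:
            # Offer a single hard-coded candidate; real matchers would
            # inspect ``context.token`` and friends.
            return {"completions": [SimpleCompletion("my-candidate")]}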
620 """
620 """
621
621
622 def wrapper(func: Matcher):
622 def wrapper(func: Matcher):
623 func.matcher_priority = priority or 0
623 func.matcher_priority = priority or 0
624 func.matcher_identifier = identifier or func.__qualname__
624 func.matcher_identifier = identifier or func.__qualname__
625 func.matcher_api_version = api_version
625 func.matcher_api_version = api_version
626 if TYPE_CHECKING:
626 if TYPE_CHECKING:
627 if api_version == 1:
627 if api_version == 1:
628 func = cast(func, MatcherAPIv1)
628 func = cast(func, MatcherAPIv1)
629 elif api_version == 2:
629 elif api_version == 2:
630 func = cast(func, MatcherAPIv2)
630 func = cast(func, MatcherAPIv2)
631 return func
631 return func
632
632
633 return wrapper
633 return wrapper
634
634
635
635
636 def _get_matcher_priority(matcher: Matcher):
636 def _get_matcher_priority(matcher: Matcher):
637 return getattr(matcher, "matcher_priority", 0)
637 return getattr(matcher, "matcher_priority", 0)
638
638
639
639
640 def _get_matcher_id(matcher: Matcher):
640 def _get_matcher_id(matcher: Matcher):
641 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
641 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
642
642
643
643
644 def _get_matcher_api_version(matcher):
644 def _get_matcher_api_version(matcher):
645 return getattr(matcher, "matcher_api_version", 1)
645 return getattr(matcher, "matcher_api_version", 1)
646
646
647
647
648 context_matcher = partial(completion_matcher, api_version=2)
648 context_matcher = partial(completion_matcher, api_version=2)
649
649
650
650
651 _IC = Iterable[Completion]
651 _IC = Iterable[Completion]
652
652
653
653
654 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
654 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
655 """
655 """
656 Deduplicate a set of completions.
656 Deduplicate a set of completions.
657
657
658 .. warning::
658 .. warning::
659
659
660 Unstable
660 Unstable
661
661
662 This function is unstable, API may change without warning.
662 This function is unstable, API may change without warning.
663
663
664 Parameters
664 Parameters
665 ----------
665 ----------
666 text : str
666 text : str
667 text that should be completed.
667 text that should be completed.
668 completions : Iterator[Completion]
668 completions : Iterator[Completion]
669 iterator over the completions to deduplicate
669 iterator over the completions to deduplicate
670
670
671 Yields
671 Yields
672 ------
672 ------
673 `Completions` objects
673 `Completions` objects
674 Completions coming from multiple sources, may be different but end up having
674 Completions coming from multiple sources, may be different but end up having
675 the same effect when applied to ``text``. If this is the case, this will
675 the same effect when applied to ``text``. If this is the case, this will
676 consider completions as equal and only emit the first encountered.
676 consider completions as equal and only emit the first encountered.
677 Not folded in `completions()` yet for debugging purpose, and to detect when
677 Not folded in `completions()` yet for debugging purpose, and to detect when
678 the IPython completer does return things that Jedi does not, but should be
678 the IPython completer does return things that Jedi does not, but should be
679 at some point.
679 at some point.
680 """
680 """
681 completions = list(completions)
681 completions = list(completions)
682 if not completions:
682 if not completions:
683 return
683 return
684
684
685 new_start = min(c.start for c in completions)
685 new_start = min(c.start for c in completions)
686 new_end = max(c.end for c in completions)
686 new_end = max(c.end for c in completions)
687
687
688 seen = set()
688 seen = set()
689 for c in completions:
689 for c in completions:
690 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
690 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
691 if new_text not in seen:
691 if new_text not in seen:
692 yield c
692 yield c
693 seen.add(new_text)
693 seen.add(new_text)
694
694
695
695
696 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
696 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
697 """
697 """
698 Rectify a set of completions to all have the same ``start`` and ``end``
698 Rectify a set of completions to all have the same ``start`` and ``end``
699
699
700 .. warning::
700 .. warning::
701
701
702 Unstable
702 Unstable
703
703
704 This function is unstable, API may change without warning.
704 This function is unstable, API may change without warning.
705 It will also raise unless use in proper context manager.
705 It will also raise unless use in proper context manager.
706
706
707 Parameters
707 Parameters
708 ----------
708 ----------
709 text : str
709 text : str
710 text that should be completed.
710 text that should be completed.
711 completions : Iterator[Completion]
711 completions : Iterator[Completion]
712 iterator over the completions to rectify
712 iterator over the completions to rectify
713 _debug : bool
713 _debug : bool
714 Log failed completion
714 Log failed completion
715
715
716 Notes
716 Notes
717 -----
717 -----
718 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
718 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
719 the Jupyter Protocol requires them to behave like so. This will readjust
719 the Jupyter Protocol requires them to behave like so. This will readjust
720 the completion to have the same ``start`` and ``end`` by padding both
720 the completion to have the same ``start`` and ``end`` by padding both
721 extremities with surrounding text.
721 extremities with surrounding text.
722
722
723 During stabilisation should support a ``_debug`` option to log which
723 During stabilisation should support a ``_debug`` option to log which
724 completion are return by the IPython completer and not found in Jedi in
724 completion are return by the IPython completer and not found in Jedi in
725 order to make upstream bug report.
725 order to make upstream bug report.
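
    Examples
    --------
    A rough sketch of the padding behaviour (run inside the provisional
    context manager, since both ``Completion`` and this function warn
    otherwise):

    >>> with provisionalcompleter():
    ...     cs = [Completion(0, 2, 'aa'), Completion(1, 2, 'b')]
    ...     [(c.start, c.end, c.text) for c in rectify_completions('ab', cs)]
    [(0, 2, 'aa'), (0, 2, 'ab')]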
726 """
726 """
727 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
727 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
728 "It may change without warnings. "
728 "It may change without warnings. "
729 "Use in corresponding context manager.",
729 "Use in corresponding context manager.",
730 category=ProvisionalCompleterWarning, stacklevel=2)
730 category=ProvisionalCompleterWarning, stacklevel=2)
731
731
732 completions = list(completions)
732 completions = list(completions)
733 if not completions:
733 if not completions:
734 return
734 return
735 starts = (c.start for c in completions)
735 starts = (c.start for c in completions)
736 ends = (c.end for c in completions)
736 ends = (c.end for c in completions)
737
737
738 new_start = min(starts)
738 new_start = min(starts)
739 new_end = max(ends)
739 new_end = max(ends)
740
740
741 seen_jedi = set()
741 seen_jedi = set()
742 seen_python_matches = set()
742 seen_python_matches = set()
743 for c in completions:
743 for c in completions:
744 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
744 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
745 if c._origin == 'jedi':
745 if c._origin == 'jedi':
746 seen_jedi.add(new_text)
746 seen_jedi.add(new_text)
747 elif c._origin == 'IPCompleter.python_matches':
747 elif c._origin == 'IPCompleter.python_matches':
748 seen_python_matches.add(new_text)
748 seen_python_matches.add(new_text)
749 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
749 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
750 diff = seen_python_matches.difference(seen_jedi)
750 diff = seen_python_matches.difference(seen_jedi)
751 if diff and _debug:
751 if diff and _debug:
752 print('IPython.python matches have extras:', diff)
752 print('IPython.python matches have extras:', diff)
753
753
754
754
755 if sys.platform == 'win32':
755 if sys.platform == 'win32':
756 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
756 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
757 else:
757 else:
758 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
758 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
759
759
760 GREEDY_DELIMS = ' =\r\n'
760 GREEDY_DELIMS = ' =\r\n'
761
761
762
762
763 class CompletionSplitter(object):
763 class CompletionSplitter(object):
764 """An object to split an input line in a manner similar to readline.
764 """An object to split an input line in a manner similar to readline.
765
765
766 By having our own implementation, we can expose readline-like completion in
766 By having our own implementation, we can expose readline-like completion in
767 a uniform manner to all frontends. This object only needs to be given the
767 a uniform manner to all frontends. This object only needs to be given the
768 line of text to be split and the cursor position on said line, and it
768 line of text to be split and the cursor position on said line, and it
769 returns the 'word' to be completed on at the cursor after splitting the
769 returns the 'word' to be completed on at the cursor after splitting the
770 entire line.
770 entire line.
771
771
772 What characters are used as splitting delimiters can be controlled by
772 What characters are used as splitting delimiters can be controlled by
773 setting the ``delims`` attribute (this is a property that internally
773 setting the ``delims`` attribute (this is a property that internally
774 automatically builds the necessary regular expression)"""
774 automatically builds the necessary regular expression)"""
775
775
776 # Private interface
776 # Private interface
777
777
778 # A string of delimiter characters. The default value makes sense for
778 # A string of delimiter characters. The default value makes sense for
779 # IPython's most typical usage patterns.
779 # IPython's most typical usage patterns.
780 _delims = DELIMS
780 _delims = DELIMS
781
781
782 # The expression (a normal string) to be compiled into a regular expression
782 # The expression (a normal string) to be compiled into a regular expression
783 # for actual splitting. We store it as an attribute mostly for ease of
783 # for actual splitting. We store it as an attribute mostly for ease of
784 # debugging, since this type of code can be so tricky to debug.
784 # debugging, since this type of code can be so tricky to debug.
785 _delim_expr = None
785 _delim_expr = None
786
786
787 # The regular expression that does the actual splitting
787 # The regular expression that does the actual splitting
788 _delim_re = None
788 _delim_re = None
789
789
790 def __init__(self, delims=None):
790 def __init__(self, delims=None):
791 delims = CompletionSplitter._delims if delims is None else delims
791 delims = CompletionSplitter._delims if delims is None else delims
792 self.delims = delims
792 self.delims = delims
793
793
794 @property
794 @property
795 def delims(self):
795 def delims(self):
796 """Return the string of delimiter characters."""
796 """Return the string of delimiter characters."""
797 return self._delims
797 return self._delims
798
798
799 @delims.setter
799 @delims.setter
800 def delims(self, delims):
800 def delims(self, delims):
801 """Set the delimiters for line splitting."""
801 """Set the delimiters for line splitting."""
802 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
802 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
803 self._delim_re = re.compile(expr)
803 self._delim_re = re.compile(expr)
804 self._delims = delims
804 self._delims = delims
805 self._delim_expr = expr
805 self._delim_expr = expr
806
806
807 def split_line(self, line, cursor_pos=None):
807 def split_line(self, line, cursor_pos=None):
808 """Split a line of text with a cursor at the given position.
808 """Split a line of text with a cursor at the given position.
809 """
809 """
810 l = line if cursor_pos is None else line[:cursor_pos]
810 l = line if cursor_pos is None else line[:cursor_pos]
811 return self._delim_re.split(l)[-1]
811 return self._delim_re.split(l)[-1]
812
812
813
813
814
814
815 class Completer(Configurable):
815 class Completer(Configurable):
816
816
817 greedy = Bool(False,
817 greedy = Bool(False,
818 help="""Activate greedy completion
818 help="""Activate greedy completion
819 PENDING DEPRECATION. this is now mostly taken care of with Jedi.
819 PENDING DEPRECATION. this is now mostly taken care of with Jedi.
820
820
821 This will enable completion on elements of lists, results of function calls, etc.,
821 This will enable completion on elements of lists, results of function calls, etc.,
822 but can be unsafe because the code is actually evaluated on TAB.
822 but can be unsafe because the code is actually evaluated on TAB.
823 """,
823 """,
824 ).tag(config=True)
824 ).tag(config=True)
825
825
826 use_jedi = Bool(default_value=JEDI_INSTALLED,
826 use_jedi = Bool(default_value=JEDI_INSTALLED,
827 help="Experimental: Use Jedi to generate autocompletions. "
827 help="Experimental: Use Jedi to generate autocompletions. "
828 "Default to True if jedi is installed.").tag(config=True)
828 "Default to True if jedi is installed.").tag(config=True)
829
829
830 jedi_compute_type_timeout = Int(default_value=400,
830 jedi_compute_type_timeout = Int(default_value=400,
831 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
831 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
832 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
832 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
833 performance by preventing jedi to build its cache.
833 performance by preventing jedi to build its cache.
834 """).tag(config=True)
834 """).tag(config=True)
835
835
836 debug = Bool(default_value=False,
836 debug = Bool(default_value=False,
837 help='Enable debug for the Completer. Mostly print extra '
837 help='Enable debug for the Completer. Mostly print extra '
838 'information for experimental jedi integration.')\
838 'information for experimental jedi integration.')\
839 .tag(config=True)
839 .tag(config=True)
840
840
841 backslash_combining_completions = Bool(True,
841 backslash_combining_completions = Bool(True,
842 help="Enable unicode completions, e.g. \\alpha<tab> . "
842 help="Enable unicode completions, e.g. \\alpha<tab> . "
843 "Includes completion of latex commands, unicode names, and expanding "
843 "Includes completion of latex commands, unicode names, and expanding "
844 "unicode characters back to latex commands.").tag(config=True)
844 "unicode characters back to latex commands.").tag(config=True)
845
845
    def __init__(self, namespace=None, global_namespace=None, **kwargs):
        """Create a new completer for the command line.

        Completer(namespace=ns, global_namespace=ns2) -> completer instance.

        If unspecified, the default namespace where completions are performed
        is __main__ (technically, __main__.__dict__). Namespaces should be
        given as dictionaries.

        An optional second namespace can be given. This allows the completer
        to handle cases where both the local and global scopes need to be
        distinguished.
        """

        # Don't bind to namespace quite yet, but flag whether the user wants a
        # specific namespace or to use __main__.__dict__. This will allow us
        # to bind to __main__.__dict__ at completion time, not now.
        if namespace is None:
            self.use_main_ns = True
        else:
            self.use_main_ns = False
            self.namespace = namespace

        # The global namespace, if given, can be bound directly
        if global_namespace is None:
            self.global_namespace = {}
        else:
            self.global_namespace = global_namespace

        self.custom_matchers = []

        super(Completer, self).__init__(**kwargs)

    def complete(self, text, state):
        """Return the next possible completion for 'text'.

        This is called successively with state == 0, 1, 2, ... until it
        returns None. The completion should begin with 'text'.

        """
        if self.use_main_ns:
            self.namespace = __main__.__dict__

        if state == 0:
            if "." in text:
                self.matches = self.attr_matches(text)
            else:
                self.matches = self.global_matches(text)
        try:
            return self.matches[state]
        except IndexError:
            return None

    def global_matches(self, text):
        """Compute matches when text is a simple name.

        Return a list of all keywords, built-in functions and names currently
        defined in self.namespace or self.global_namespace that match.

        """
        matches = []
        match_append = matches.append
        n = len(text)
        for lst in [keyword.kwlist,
                    builtin_mod.__dict__.keys(),
                    self.namespace.keys(),
                    self.global_namespace.keys()]:
            for word in lst:
                if word[:n] == text and word != "__builtins__":
                    match_append(word)

        snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
        for lst in [self.namespace.keys(),
                    self.global_namespace.keys()]:
            shortened = {"_".join([sub[0] for sub in word.split('_')]) : word
                         for word in lst if snake_case_re.match(word)}
            for word in shortened.keys():
                if word[:n] == text and word != "__builtins__":
                    match_append(shortened[word])
        return matches

    def attr_matches(self, text):
        """Compute matches when text contains a dot.

        Assuming the text is of the form NAME.NAME....[NAME], and is
        evaluatable in self.namespace or self.global_namespace, it will be
        evaluated and its attributes (as revealed by dir()) are used as
        possible completions. (For class instances, class members are
        also considered.)

        WARNING: this can still invoke arbitrary C code, if an object
        with a __getattr__ hook is evaluated.

        """

        # Another option, seems to work great. Catches things like ''.<tab>
        m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)

        if m:
            expr, attr = m.group(1, 3)
        elif self.greedy:
            m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
            if not m2:
                return []
            expr, attr = m2.group(1,2)
        else:
            return []

        try:
            obj = eval(expr, self.namespace)
        except:
            try:
                obj = eval(expr, self.global_namespace)
            except:
                return []

        if self.limit_to__all__ and hasattr(obj, '__all__'):
            words = get__all__entries(obj)
        else:
            words = dir2(obj)

        try:
            words = generics.complete_object(obj, words)
        except TryNext:
            pass
        except AssertionError:
            raise
        except Exception:
            # Silence errors from completion function
            #raise # dbg
            pass
        # Build match list to return
        n = len(attr)
        return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]

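# Illustrative sketch of ``Completer.attr_matches`` (hypothetical session,
# assuming ``completer`` is an ``IPCompleter`` instance whose namespace can
# resolve ``str``):
#
#     completer.attr_matches("str.cap")   # -> ["str.capitalize"]
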
def get__all__entries(obj):
    """Returns the strings in the __all__ attribute"""
    try:
        words = getattr(obj, '__all__')
    except:
        return []

    return [w for w in words if isinstance(w, str)]


def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
                    extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
    """Used by dict_key_matches, matching the prefix to a list of keys

    Parameters
    ----------
    keys
        list of keys in dictionary currently being completed.
    prefix
        Part of the text already typed by the user. E.g. `mydict[b'fo`
    delims
        String of delimiters to consider when finding the current key.
    extra_prefix : optional
        Part of the text already typed in multi-key index cases. E.g. for
        `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.

    Returns
    -------
    A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
    ``quote`` being the quote that needs to be used to close the current string,
    ``token_start`` the position where the replacement should start occurring,
    ``matched`` a list of replacement/completion strings.

    """
    prefix_tuple = extra_prefix if extra_prefix else ()
    Nprefix = len(prefix_tuple)
    def filter_prefix_tuple(key):
        # Reject too short keys
        if len(key) <= Nprefix:
            return False
        # Reject keys with non str/bytes in them
        for k in key:
            if not isinstance(k, (str, bytes)):
                return False
        # Reject keys that do not match the prefix
        for k, pt in zip(key, prefix_tuple):
            if k != pt:
                return False
        # All checks passed!
        return True

    filtered_keys:List[Union[str,bytes]] = []
    def _add_to_filtered_keys(key):
        if isinstance(key, (str, bytes)):
            filtered_keys.append(key)

    for k in keys:
        if isinstance(k, tuple):
            if filter_prefix_tuple(k):
                _add_to_filtered_keys(k[Nprefix])
        else:
            _add_to_filtered_keys(k)

    if not prefix:
        return '', 0, [repr(k) for k in filtered_keys]
    quote_match = re.search('["\']', prefix)
    assert quote_match is not None # silence mypy
    quote = quote_match.group()
    try:
        prefix_str = eval(prefix + quote, {})
    except Exception:
        return '', 0, []

    pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
    token_match = re.search(pattern, prefix, re.UNICODE)
    assert token_match is not None # silence mypy
    token_start = token_match.start()
    token_prefix = token_match.group()

    matched:List[str] = []
    for key in filtered_keys:
        try:
            if not key.startswith(prefix_str):
                continue
        except (AttributeError, TypeError, UnicodeError):
            # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
            continue

        # reformat remainder of key to begin with prefix
        rem = key[len(prefix_str):]
        # force repr wrapped in '
        rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
        rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
        if quote == '"':
            # The entered prefix is quoted with ",
            # but the match is quoted with '.
            # A contained " hence needs escaping for comparison:
            rem_repr = rem_repr.replace('"', '\\"')

        # then reinsert prefix from start of token
        matched.append('%s%s' % (token_prefix, rem_repr))
    return quote, token_start, matched

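# Illustrative sketch of ``match_dict_keys`` (hypothetical call; the ``delims``
# value below is an assumption, real callers pass the completer's splitter
# delimiters):
#
#     match_dict_keys(["foo", "food", "bar"], "'fo", delims=" '\"")
#     # -> ("'", 1, ["foo", "food"])
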
def cursor_to_position(text:str, line:int, column:int)->int:
    """
    Convert the (line, column) position of the cursor in text to an offset in
    a string.

    Parameters
    ----------
    text : str
        The text in which to calculate the cursor offset
    line : int
        Line of the cursor; 0-indexed
    column : int
        Column of the cursor; 0-indexed

    Returns
    -------
    Position of the cursor in ``text``, 0-indexed.

    See Also
    --------
    position_to_cursor : reciprocal of this function

    """
    lines = text.split('\n')
    assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))

    return sum(len(l) + 1 for l in lines[:line]) + column


def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
    """
    Convert the position of the cursor in text (0-indexed) to a line number
    (0-indexed) and a column number (0-indexed) pair.

    Position should be a valid position in ``text``.

    Parameters
    ----------
    text : str
        The text in which to calculate the cursor offset
    offset : int
        Position of the cursor in ``text``, 0-indexed.

    Returns
    -------
    (line, column) : (int, int)
        Line and column of the cursor, both 0-indexed.

    See Also
    --------
    cursor_to_position : reciprocal of this function

    """

    assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))

    before = text[:offset]
    blines = before.split('\n')  # ! splitlines would trim a trailing \n
    line = before.count('\n')
    col = len(blines[-1])
    return line, col

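# Illustrative round trip between the two helpers above (hypothetical values):
#
#     cursor_to_position("ab\ncd", line=1, column=1)   # -> 4
#     position_to_cursor("ab\ncd", 4)                  # -> (1, 1)
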
def _safe_isinstance(obj, module, class_name):
    """Checks if obj is an instance of module.class_name if loaded
    """
    return (module in sys.modules and
            isinstance(obj, getattr(import_module(module), class_name)))


@context_matcher()
def back_unicode_name_matcher(context):
    """Match Unicode characters back to Unicode name

    Same as ``back_unicode_name_matches``, but adapted to the new Matcher API.
    """
    fragment, matches = back_unicode_name_matches(context.token)
    return _convert_matcher_v1_result_to_v2(
        matches, type="unicode", fragment=fragment, suppress_if_matches=True
    )


def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
    """Match Unicode characters back to Unicode name

    This does ``β˜ƒ`` -> ``\\snowman``

    Note that snowman is not a valid python3 combining character but will be expanded.
    It will not, however, be recombined back to the snowman character by the
    completion machinery.

    This will also not back-complete standard sequences like ``\\n``, ``\\b``, ...

    Returns
    -------
    Return a tuple with two elements:

    - The Unicode character that was matched (preceded with a backslash), or
      an empty string,
    - a sequence (of length 1) with the name of the matched Unicode character,
      preceded by a backslash, or empty if no match.

    """
    if len(text)<2:
        return '', ()
    maybe_slash = text[-2]
    if maybe_slash != '\\':
        return '', ()

    char = text[-1]
    # no expand on quote for completion in strings.
    # nor backcomplete standard ascii keys
    if char in string.ascii_letters or char in ('"',"'"):
        return '', ()
    try:
        unic = unicodedata.name(char)
        return '\\'+char,('\\'+unic,)
    except KeyError:
        pass
    return '', ()

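# Illustrative sketch (hypothetical call):
#
#     back_unicode_name_matches("foo \\β˜ƒ")
#     # -> ('\\β˜ƒ', ('\\SNOWMAN',))
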
@context_matcher()
def back_latex_name_matcher(context):
    """Match latex characters back to unicode name

    Same as ``back_latex_name_matches``, but adapted to the new Matcher API.
    """
    fragment, matches = back_latex_name_matches(context.token)
    return _convert_matcher_v1_result_to_v2(
        matches, type="latex", fragment=fragment, suppress_if_matches=True
    )


def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
    """Match latex characters back to unicode name

    This does ``\\β„΅`` -> ``\\aleph``

    """
    if len(text)<2:
        return '', ()
    maybe_slash = text[-2]
    if maybe_slash != '\\':
        return '', ()

    char = text[-1]
    # no expand on quote for completion in strings.
    # nor backcomplete standard ascii keys
    if char in string.ascii_letters or char in ('"',"'"):
        return '', ()
    try:
        latex = reverse_latex_symbol[char]
        # '\\' replaces the \ as well
        return '\\'+char,[latex]
    except KeyError:
        pass
    return '', ()


def _formatparamchildren(parameter) -> str:
    """
    Get parameter name and value from Jedi Private API

    Jedi does not expose a simple way to get `param=value` from its API.

    Parameters
    ----------
    parameter
        Jedi's function `Param`

    Returns
    -------
    A string like 'a', 'b=1', '*args', '**kwargs'

    """
    description = parameter.description
    if not description.startswith('param '):
        raise ValueError('Jedi function parameter description has changed format. '
                         'Expected "param ...", found %r.' % description)
    return description[6:]

def _make_signature(completion) -> str:
    """
    Make the signature from a jedi completion

    Parameters
    ----------
    completion : jedi.Completion
        object does not complete a function type

    Returns
    -------
    a string consisting of the function signature, with the parentheses but
    without the function name. example:
    `(a, *args, b=1, **kwargs)`

    """

    # it looks like this might work on jedi 0.17
    if hasattr(completion, 'get_signatures'):
        signatures = completion.get_signatures()
        if not signatures:
            return '(?)'

        c0 = completion.get_signatures()[0]
        return '('+c0.to_string().split('(', maxsplit=1)[1]

    return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
                                          for p in signature.defined_names()) if f])


_CompleteResult = Dict[str, MatcherResult]


def _convert_matcher_v1_result_to_v2(
    matches: Sequence[str],
    type: str,
    fragment: Optional[str] = None,
    suppress_if_matches: bool = False,
) -> SimpleMatcherResult:
    """Utility to help with transition"""
    result = {
        "completions": [SimpleCompletion(text=match, type=type) for match in matches],
        "suppress": (True if matches else False) if suppress_if_matches else False,
    }
    if fragment is not None:
        result["matched_fragment"] = fragment
    return result

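# Illustrative sketch of the conversion helper above (hypothetical values):
#
#     _convert_matcher_v1_result_to_v2(
#         ["%time", "%timeit"], type="magic", fragment="%ti",
#         suppress_if_matches=True,
#     )
#     # -> {"completions": [SimpleCompletion(text="%time", type="magic"),
#     #                     SimpleCompletion(text="%timeit", type="magic")],
#     #     "suppress": True,
#     #     "matched_fragment": "%ti"}
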
class IPCompleter(Completer):
    """Extension of the completer class with IPython-specific features"""

    __dict_key_regexps: Optional[Dict[bool,Pattern]] = None

    @observe('greedy')
    def _greedy_changed(self, change):
        """Update the splitter and readline delims when greedy is changed"""
        if change['new']:
            self.splitter.delims = GREEDY_DELIMS
        else:
            self.splitter.delims = DELIMS

    dict_keys_only = Bool(
        False,
        help="""
        Whether to show dict key matches only.

        (disables all matchers except for `IPCompleter.dict_key_matcher`).
        """,
    )

    suppress_competing_matchers = UnionTrait(
        [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
        default_value=None,
        help="""
        Whether to suppress completions from other *Matchers*.

        When set to ``None`` (default) the matchers will attempt to auto-detect
        whether suppression of other matchers is desirable. For example, at
        the beginning of a line followed by `%` we expect a magic completion
        to be the only applicable option, and after ``my_dict['`` we usually
        expect a completion with an existing dictionary key.

        If you want to disable this heuristic and see completions from all matchers,
        set ``IPCompleter.suppress_competing_matchers = False``.
        To disable the heuristic for specific matchers provide a dictionary mapping:
        ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.

        Set ``IPCompleter.suppress_competing_matchers = True`` to limit
        completions to the set of matchers with the highest priority;
        this is equivalent to ``IPCompleter.merge_completions = False`` and
        can be beneficial for performance, but will sometimes omit relevant
        candidates from matchers further down the priority list.
        """,
    ).tag(config=True)

    merge_completions = Bool(
        True,
        help="""Whether to merge completion results into a single list

        If False, only the completion results from the first non-empty
        completer will be returned.

        As of version 8.6.0, setting the value to ``False`` is an alias for
        ``IPCompleter.suppress_competing_matchers = True``.
        """,
    ).tag(config=True)

    disable_matchers = ListTrait(
        Unicode(), help="""List of matchers to disable."""
    ).tag(config=True)

    omit__names = Enum(
        (0, 1, 2),
        default_value=2,
        help="""Instruct the completer to omit private method names

        Specifically, when completing on ``object.<tab>``.

        When 2 [default]: all names that start with '_' will be excluded.

        When 1: all 'magic' names (``__foo__``) will be excluded.

        When 0: nothing will be excluded.
        """
    ).tag(config=True)
    limit_to__all__ = Bool(False,
        help="""
        DEPRECATED as of version 5.0.

        Instruct the completer to use __all__ for the completion

        Specifically, when completing on ``object.<tab>``.

        When True: only those names in obj.__all__ will be included.

        When False [default]: the __all__ attribute is ignored
        """,
    ).tag(config=True)

    profile_completions = Bool(
        default_value=False,
        help="If True, emit profiling data for completion subsystem using cProfile."
    ).tag(config=True)

    profiler_output_dir = Unicode(
        default_value=".completion_profiles",
        help="Template for path at which to output profile data for completions."
    ).tag(config=True)

    @observe('limit_to__all__')
    def _limit_to_all_changed(self, change):
        warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
            'value has been deprecated since IPython 5.0, will be made to have '
            'no effect and then removed in a future version of IPython.',
            UserWarning)

    def __init__(
        self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
    ):
        """IPCompleter() -> completer

        Return a completer object.

        Parameters
        ----------
        shell
            a pointer to the ipython shell itself. This is needed
            because this completer knows about magic functions, and those can
            only be accessed via the ipython instance.
        namespace : dict, optional
            an optional dict where completions are performed.
        global_namespace : dict, optional
            secondary optional dict for completions, to
            handle cases (such as IPython embedded inside functions) where
            both Python scopes are visible.
        config : Config
            traitlets config object
        **kwargs
            passed to super class unmodified.
        """

        self.magic_escape = ESC_MAGIC
        self.splitter = CompletionSplitter()

        # _greedy_changed() depends on splitter and readline being defined:
        super().__init__(
            namespace=namespace,
            global_namespace=global_namespace,
            config=config,
            **kwargs,
        )

        # List where completion matches will be stored
        self.matches = []
        self.shell = shell
        # Regexp to split filenames with spaces in them
        self.space_name_re = re.compile(r'([^\\] )')
        # Hold a local ref. to glob.glob for speed
        self.glob = glob.glob

        # Determine if we are running on 'dumb' terminals, like (X)Emacs
        # buffers, to avoid completion problems.
        term = os.environ.get('TERM','xterm')
        self.dumb_terminal = term in ['dumb','emacs']

        # Special handling of backslashes needed in win32 platforms
        if sys.platform == "win32":
            self.clean_glob = self._clean_glob_win32
        else:
            self.clean_glob = self._clean_glob

        # regexp to parse docstring for function signature
        self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
        self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
        # use this if positional argument name is also needed
        # = re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')

        self.magic_arg_matchers = [
            self.magic_config_matcher,
            self.magic_color_matcher,
        ]

        # This is set externally by InteractiveShell
        self.custom_completers = None

        # This is a list of names of unicode characters that can be completed
        # into their corresponding unicode value. The list is large, so we
        # lazily initialize it on first use. Consuming code should access this
        # attribute through the `@unicode_names` property.
        self._unicode_names = None

        self._backslash_combining_matchers = [
            self.latex_name_matcher,
            self.unicode_name_matcher,
            back_latex_name_matcher,
            back_unicode_name_matcher,
            self.fwd_unicode_matcher,
        ]

        if not self.backslash_combining_completions:
            for matcher in self._backslash_combining_matchers:
                self.disable_matchers.append(matcher.matcher_identifier)

        if not self.merge_completions:
            self.suppress_competing_matchers = True

    @property
    def matchers(self) -> List[Matcher]:
        """All active matcher routines for completion"""
        if self.dict_keys_only:
            return [self.dict_key_matcher]

        if self.use_jedi:
            return [
                *self.custom_matchers,
                *self._backslash_combining_matchers,
                *self.magic_arg_matchers,
                self.custom_completer_matcher,
                self.magic_matcher,
                self._jedi_matcher,
                self.dict_key_matcher,
                self.file_matcher,
            ]
        else:
            return [
                *self.custom_matchers,
                *self._backslash_combining_matchers,
                *self.magic_arg_matchers,
                self.custom_completer_matcher,
                self.dict_key_matcher,
                # TODO: convert python_matches to v2 API
                self.magic_matcher,
                self.python_matches,
                self.file_matcher,
                self.python_func_kw_matcher,
            ]

    def all_completions(self, text:str) -> List[str]:
        """
        Wrapper around the completion methods for the benefit of emacs.
        """
        prefix = text.rpartition('.')[0]
        with provisionalcompleter():
            return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
                    for c in self.completions(text, len(text))]

        return self.complete(text)[1]

    def _clean_glob(self, text:str):
        return self.glob("%s*" % text)

    def _clean_glob_win32(self, text:str):
        return [f.replace("\\","/")
                for f in self.glob("%s*" % text)]

    @context_matcher()
    def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        """Same as ``file_matches``, but adapted to the new Matcher API."""
        matches = self.file_matches(context.token)
        # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
        # starts with `/home/`, `C:\`, etc)
        return _convert_matcher_v1_result_to_v2(matches, type="path")

    def file_matches(self, text: str) -> List[str]:
        """Match filenames, expanding ~USER type strings.

        Most of the seemingly convoluted logic in this completer is an
        attempt to handle filenames with spaces in them. And yet it's not
        quite perfect, because Python's readline doesn't expose all of the
        GNU readline details needed for this to be done correctly.

        For a filename with a space in it, the printed completions will be
        only the parts after what's already been typed (instead of the
        full completions, as is normally done). I don't think with the
        current (as of Python 2.3) Python readline it's possible to do
        better.

        DEPRECATED: Deprecated since 8.6. Use ``file_matcher`` instead.
        """

        # chars that require escaping with backslash - i.e. chars
        # that readline treats incorrectly as delimiters, but we
        # don't want to treat as delimiters in filename matching
        # when escaped with backslash
        if text.startswith('!'):
            text = text[1:]
            text_prefix = u'!'
        else:
            text_prefix = u''

        text_until_cursor = self.text_until_cursor
        # track strings with open quotes
        open_quotes = has_open_quotes(text_until_cursor)

        if '(' in text_until_cursor or '[' in text_until_cursor:
            lsplit = text
        else:
            try:
                # arg_split ~ shlex.split, but with unicode bugs fixed by us
                lsplit = arg_split(text_until_cursor)[-1]
            except ValueError:
                # typically an unmatched ", or backslash without escaped char.
                if open_quotes:
                    lsplit = text_until_cursor.split(open_quotes)[-1]
                else:
                    return []
            except IndexError:
                # tab pressed on empty line
                lsplit = ""

        if not open_quotes and lsplit != protect_filename(lsplit):
            # if protectables are found, do matching on the whole escaped name
            has_protectables = True
            text0,text = text,lsplit
        else:
            has_protectables = False
            text = os.path.expanduser(text)

        if text == "":
            return [text_prefix + protect_filename(f) for f in self.glob("*")]

        # Compute the matches from the filesystem
        if sys.platform == 'win32':
            m0 = self.clean_glob(text)
        else:
            m0 = self.clean_glob(text.replace('\\', ''))

        if has_protectables:
            # If we had protectables, we need to revert our changes to the
            # beginning of filename so that we don't double-write the part
            # of the filename we have so far
            len_lsplit = len(lsplit)
            matches = [text_prefix + text0 +
                       protect_filename(f[len_lsplit:]) for f in m0]
        else:
            if open_quotes:
                # if we have a string with an open quote, we don't need to
                # protect the names beyond the quote (and we _shouldn't_, as
                # it would cause bugs when the filesystem call is made).
                matches = m0 if sys.platform == "win32" else\
                    [protect_filename(f, open_quotes) for f in m0]
            else:
                matches = [text_prefix +
                           protect_filename(f) for f in m0]

        # Mark directories in input list by appending '/' to their names.
        return [x+'/' if os.path.isdir(x) else x for x in matches]

    @context_matcher()
    def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
        text = context.token
        matches = self.magic_matches(text)
        result = _convert_matcher_v1_result_to_v2(matches, type="magic")
        is_magic_prefix = len(text) > 0 and text[0] == "%"
        result["suppress"] = is_magic_prefix and bool(result["completions"])
        return result

    def magic_matches(self, text: str):
        """Match magics.

        DEPRECATED: Deprecated since 8.6. Use ``magic_matcher`` instead.
        """
        # Get all shell magics now rather than statically, so magics loaded at
        # runtime show up too.
        lsm = self.shell.magics_manager.lsmagic()
        line_magics = lsm['line']
        cell_magics = lsm['cell']
        pre = self.magic_escape
        pre2 = pre+pre

        explicit_magic = text.startswith(pre)

        # Completion logic:
        # - user gives %%: only do cell magics
        # - user gives %: do both line and cell magics
        # - no prefix: do both
        # In other words, line magics are skipped if the user gives %% explicitly
        #
        # We also exclude magics that match any currently visible names:
        # https://github.com/ipython/ipython/issues/4877, unless the user has
        # typed a %:
        # https://github.com/ipython/ipython/issues/10754
        bare_text = text.lstrip(pre)
        global_matches = self.global_matches(bare_text)
        if not explicit_magic:
            def matches(magic):
                """
                Filter magics, in particular remove magics that match
                a name present in global namespace.
                """
                return ( magic.startswith(bare_text) and
                         magic not in global_matches )
        else:
            def matches(magic):
                return magic.startswith(bare_text)

        comp = [ pre2+m for m in cell_magics if matches(m)]
        if not text.startswith(pre2):
            comp += [ pre+m for m in line_magics if matches(m)]

        return comp

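    # Illustrative sketch of the magic completion logic above (hypothetical
    # shell in which ``time`` and ``timeit`` are registered as both line and
    # cell magics; actual content and ordering depend on the running shell):
    #
    #     completer.magic_matches("%%ti")  # -> ["%%time", "%%timeit"]
    #     completer.magic_matches("%ti")   # -> ["%%time", "%%timeit", "%time", "%timeit"]
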
1710 @context_matcher()
1711 @context_matcher()
1711 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1712 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1712 """Match class names and attributes for %config magic."""
1713 """Match class names and attributes for %config magic."""
1713 # NOTE: uses `line_buffer` equivalent for compatibility
1714 # NOTE: uses `line_buffer` equivalent for compatibility
1714 matches = self.magic_config_matches(context.line_with_cursor)
1715 matches = self.magic_config_matches(context.line_with_cursor)
1715 return _convert_matcher_v1_result_to_v2(matches, type="param")
1716 return _convert_matcher_v1_result_to_v2(matches, type="param")
1716
1717
1717 def magic_config_matches(self, text: str) -> List[str]:
1718 def magic_config_matches(self, text: str) -> List[str]:
1718 """Match class names and attributes for %config magic.
1719 """Match class names and attributes for %config magic.
1719
1720
1720 DEPRECATED: Deprecated since 8.6. Use ``magic_config_matcher`` instead.
1721 DEPRECATED: Deprecated since 8.6. Use ``magic_config_matcher`` instead.
1721 """
1722 """
1722 texts = text.strip().split()
1723 texts = text.strip().split()
1723
1724
1724 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1725 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1725 # get all configuration classes
1726 # get all configuration classes
1726 classes = sorted(set([ c for c in self.shell.configurables
1727 classes = sorted(set([ c for c in self.shell.configurables
1727 if c.__class__.class_traits(config=True)
1728 if c.__class__.class_traits(config=True)
1728 ]), key=lambda x: x.__class__.__name__)
1729 ]), key=lambda x: x.__class__.__name__)
1729 classnames = [ c.__class__.__name__ for c in classes ]
1730 classnames = [ c.__class__.__name__ for c in classes ]
1730
1731
1731 # return all classnames if config or %config is given
1732 # return all classnames if config or %config is given
1732 if len(texts) == 1:
1733 if len(texts) == 1:
1733 return classnames
1734 return classnames
1734
1735
1735 # match classname
1736 # match classname
1736 classname_texts = texts[1].split('.')
1737 classname_texts = texts[1].split('.')
1737 classname = classname_texts[0]
1738 classname = classname_texts[0]
1738 classname_matches = [ c for c in classnames
1739 classname_matches = [ c for c in classnames
1739 if c.startswith(classname) ]
1740 if c.startswith(classname) ]
1740
1741
1741 # return matched classes or the matched class with attributes
1742 # return matched classes or the matched class with attributes
1742 if texts[1].find('.') < 0:
1743 if texts[1].find('.') < 0:
1743 return classname_matches
1744 return classname_matches
1744 elif len(classname_matches) == 1 and \
1745 elif len(classname_matches) == 1 and \
1745 classname_matches[0] == classname:
1746 classname_matches[0] == classname:
1746 cls = classes[classnames.index(classname)].__class__
1747 cls = classes[classnames.index(classname)].__class__
1747 help = cls.class_get_help()
1748 help = cls.class_get_help()
1748 # strip leading '--' from cl-args:
1749 # strip leading '--' from cl-args:
1749 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1750 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1750 return [ attr.split('=')[0]
1751 return [ attr.split('=')[0]
1751 for attr in help.strip().splitlines()
1752 for attr in help.strip().splitlines()
1752 if attr.startswith(texts[1]) ]
1753 if attr.startswith(texts[1]) ]
1753 return []
1754 return []
1754
1755
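# --- Editor's illustrative sketch (not part of completer.py) -----------------
# The %config matching above happens in two stages: complete configurable class
# names first, then trait names once a single class has been fully typed. A
# hedged, stand-alone approximation using a plain dict instead of the shell's
# real configurables:
def sketch_config_matches(text, traits_by_class):
    texts = text.strip().split()
    if not texts or texts[0] not in ("config", "%config"):
        return []
    classnames = sorted(traits_by_class)
    if len(texts) == 1:
        return classnames                        # "%config " -> all class names
    classname = texts[1].split(".")[0]
    hits = [c for c in classnames if c.startswith(classname)]
    if "." not in texts[1]:
        return hits                              # still completing the class name
    if hits == [classname]:
        # class fully typed: complete its attributes, "Class.att<tab>"
        return [f"{classname}.{trait}" for trait in traits_by_class[classname]
                if f"{classname}.{trait}".startswith(texts[1])]
    return []

# sketch_config_matches("%config Completer.", {"Completer": ["greedy", "debug"]})
# -> ['Completer.greedy', 'Completer.debug']
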
1755 @context_matcher()
1756 @context_matcher()
1756 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1757 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1757 """Match color schemes for %colors magic."""
1758 """Match color schemes for %colors magic."""
1758 # NOTE: uses `line_buffer` equivalent for compatibility
1759 # NOTE: uses `line_buffer` equivalent for compatibility
1759 matches = self.magic_color_matches(context.line_with_cursor)
1760 matches = self.magic_color_matches(context.line_with_cursor)
1760 return _convert_matcher_v1_result_to_v2(matches, type="param")
1761 return _convert_matcher_v1_result_to_v2(matches, type="param")
1761
1762
1762 def magic_color_matches(self, text: str) -> List[str]:
1763 def magic_color_matches(self, text: str) -> List[str]:
1763 """Match color schemes for %colors magic.
1764 """Match color schemes for %colors magic.
1764
1765
1765 DEPRECATED: Deprecated since 8.6. Use ``magic_color_matcher`` instead.
1766 DEPRECATED: Deprecated since 8.6. Use ``magic_color_matcher`` instead.
1766 """
1767 """
1767 texts = text.split()
1768 texts = text.split()
1768 if text.endswith(' '):
1769 if text.endswith(' '):
1769 # .split() strips off the trailing whitespace. Add '' back
1770 # .split() strips off the trailing whitespace. Add '' back
1770 # so that: '%colors ' -> ['%colors', '']
1771 # so that: '%colors ' -> ['%colors', '']
1771 texts.append('')
1772 texts.append('')
1772
1773
1773 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1774 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1774 prefix = texts[1]
1775 prefix = texts[1]
1775 return [ color for color in InspectColors.keys()
1776 return [ color for color in InspectColors.keys()
1776 if color.startswith(prefix) ]
1777 if color.startswith(prefix) ]
1777 return []
1778 return []
1778
1779
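# --- Editor's illustrative sketch (not part of completer.py) -----------------
# The %colors matcher above is a plain prefix filter over the known scheme
# names; a hedged stand-in using example scheme names:
def sketch_color_matches(text, schemes=("Neutral", "NoColor", "LightBG", "Linux")):
    texts = text.split() + ([""] if text.endswith(" ") else [])
    if len(texts) == 2 and texts[0] in ("colors", "%colors"):
        return [s for s in schemes if s.startswith(texts[1])]
    return []

# sketch_color_matches("%colors L")  ->  ['LightBG', 'Linux']
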
1779 @context_matcher(identifier="IPCompleter.jedi_matcher")
1780 @context_matcher(identifier="IPCompleter.jedi_matcher")
1780 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1781 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1781 matches = self._jedi_matches(
1782 matches = self._jedi_matches(
1782 cursor_column=context.cursor_position,
1783 cursor_column=context.cursor_position,
1783 cursor_line=context.cursor_line,
1784 cursor_line=context.cursor_line,
1784 text=context.full_text,
1785 text=context.full_text,
1785 )
1786 )
1786 return {
1787 return {
1787 "completions": matches,
1788 "completions": matches,
1788 # static analysis should not suppress other matchers
1789 # static analysis should not suppress other matchers
1789 "suppress": False,
1790 "suppress": False,
1790 }
1791 }
1791
1792
1792 def _jedi_matches(
1793 def _jedi_matches(
1793 self, cursor_column: int, cursor_line: int, text: str
1794 self, cursor_column: int, cursor_line: int, text: str
1794 ) -> Iterable[_JediCompletionLike]:
1795 ) -> Iterable[_JediCompletionLike]:
1795 """
1796 """
1796 Return a list of :any:`jedi.api.Completion`s object from a ``text`` and
1797 Return a list of :any:`jedi.api.Completion`s object from a ``text`` and
1797 cursor position.
1798 cursor position.
1798
1799
1799 Parameters
1800 Parameters
1800 ----------
1801 ----------
1801 cursor_column : int
1802 cursor_column : int
1802 column position of the cursor in ``text``, 0-indexed.
1803 column position of the cursor in ``text``, 0-indexed.
1803 cursor_line : int
1804 cursor_line : int
1804 line position of the cursor in ``text``, 0-indexed
1805 line position of the cursor in ``text``, 0-indexed
1805 text : str
1806 text : str
1806 text to complete
1807 text to complete
1807
1808
1808 Notes
1809 Notes
1809 -----
1810 -----
1810 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1811 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1811 object containing a string with the Jedi debug information attached.
1812 object containing a string with the Jedi debug information attached.
1812
1813
1813 DEPRECATED: Deprecated since 8.6. Use ``_jedi_matcher`` instead.
1814 DEPRECATED: Deprecated since 8.6. Use ``_jedi_matcher`` instead.
1814 """
1815 """
1815 namespaces = [self.namespace]
1816 namespaces = [self.namespace]
1816 if self.global_namespace is not None:
1817 if self.global_namespace is not None:
1817 namespaces.append(self.global_namespace)
1818 namespaces.append(self.global_namespace)
1818
1819
1819 completion_filter = lambda x:x
1820 completion_filter = lambda x:x
1820 offset = cursor_to_position(text, cursor_line, cursor_column)
1821 offset = cursor_to_position(text, cursor_line, cursor_column)
1821 # filter output if we are completing for object members
1822 # filter output if we are completing for object members
1822 if offset:
1823 if offset:
1823 pre = text[offset-1]
1824 pre = text[offset-1]
1824 if pre == '.':
1825 if pre == '.':
1825 if self.omit__names == 2:
1826 if self.omit__names == 2:
1826 completion_filter = lambda c:not c.name.startswith('_')
1827 completion_filter = lambda c:not c.name.startswith('_')
1827 elif self.omit__names == 1:
1828 elif self.omit__names == 1:
1828 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1829 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1829 elif self.omit__names == 0:
1830 elif self.omit__names == 0:
1830 completion_filter = lambda x:x
1831 completion_filter = lambda x:x
1831 else:
1832 else:
1832 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1833 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1833
1834
1834 interpreter = jedi.Interpreter(text[:offset], namespaces)
1835 interpreter = jedi.Interpreter(text[:offset], namespaces)
1835 try_jedi = True
1836 try_jedi = True
1836
1837
1837 try:
1838 try:
1838 # find the first token in the current tree -- if it is a ' or " then we are in a string
1839 # find the first token in the current tree -- if it is a ' or " then we are in a string
1839 completing_string = False
1840 completing_string = False
1840 try:
1841 try:
1841 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1842 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1842 except StopIteration:
1843 except StopIteration:
1843 pass
1844 pass
1844 else:
1845 else:
1845 # note the value may be ', ", or it may also be ''' or """, or
1846 # note the value may be ', ", or it may also be ''' or """, or
1846 # in some cases, """what/you/typed..., but all of these are
1847 # in some cases, """what/you/typed..., but all of these are
1847 # strings.
1848 # strings.
1848 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1849 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1849
1850
1850 # if we are in a string jedi is likely not the right candidate for
1851 # if we are in a string jedi is likely not the right candidate for
1851 # now. Skip it.
1852 # now. Skip it.
1852 try_jedi = not completing_string
1853 try_jedi = not completing_string
1853 except Exception as e:
1854 except Exception as e:
1854 # many things can go wrong; we are using a private API, so just don't crash.
1855 # many things can go wrong; we are using a private API, so just don't crash.
1855 if self.debug:
1856 if self.debug:
1856 print("Error detecting if completing a non-finished string :", e, '|')
1857 print("Error detecting if completing a non-finished string :", e, '|')
1857
1858
1858 if not try_jedi:
1859 if not try_jedi:
1859 return []
1860 return []
1860 try:
1861 try:
1861 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1862 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1862 except Exception as e:
1863 except Exception as e:
1863 if self.debug:
1864 if self.debug:
1864 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1865 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1865 else:
1866 else:
1866 return []
1867 return []
1867
1868
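# --- Editor's illustrative sketch (not part of completer.py) -----------------
# _jedi_matches receives 0-indexed cursor_line/cursor_column, while Jedi's API
# expects 1-indexed lines -- hence ``line=cursor_line + 1`` above. A hedged,
# stand-alone use of jedi.Interpreter (assumes the jedi package is installed):
import jedi

namespace = {"greeting": "hello"}
text = "greeting.up"
cursor_line, cursor_column = 0, len(text)        # 0-indexed, as in _jedi_matches
interpreter = jedi.Interpreter(text, [namespace])
completions = interpreter.complete(column=cursor_column, line=cursor_line + 1)
print([c.name for c in completions])             # expected to include 'upper'
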
1868 def python_matches(self, text:str)->List[str]:
1869 def python_matches(self, text:str)->List[str]:
1869 """Match attributes or global python names"""
1870 """Match attributes or global python names"""
1870 if "." in text:
1871 if "." in text:
1871 try:
1872 try:
1872 matches = self.attr_matches(text)
1873 matches = self.attr_matches(text)
1873 if text.endswith('.') and self.omit__names:
1874 if text.endswith('.') and self.omit__names:
1874 if self.omit__names == 1:
1875 if self.omit__names == 1:
1875 # true if txt is _not_ a __ name, false otherwise:
1876 # true if txt is _not_ a __ name, false otherwise:
1876 no__name = (lambda txt:
1877 no__name = (lambda txt:
1877 re.match(r'.*\.__.*?__',txt) is None)
1878 re.match(r'.*\.__.*?__',txt) is None)
1878 else:
1879 else:
1879 # true if txt is _not_ a _ name, false otherwise:
1880 # true if txt is _not_ a _ name, false otherwise:
1880 no__name = (lambda txt:
1881 no__name = (lambda txt:
1881 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1882 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1882 matches = filter(no__name, matches)
1883 matches = filter(no__name, matches)
1883 except NameError:
1884 except NameError:
1884 # catches <undefined attributes>.<tab>
1885 # catches <undefined attributes>.<tab>
1885 matches = []
1886 matches = []
1886 else:
1887 else:
1887 matches = self.global_matches(text)
1888 matches = self.global_matches(text)
1888 return matches
1889 return matches
1889
1890
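# --- Editor's illustrative sketch (not part of completer.py) -----------------
# How the ``omit__names`` filtering above behaves on attribute matches, using
# the same regular expressions, as a stand-alone snippet:
import re

matches = ["obj.upper", "obj._private", "obj.__class__"]

# omit__names == 1: drop only dunder names (``__x__``)
keep_non_dunder = [m for m in matches if re.match(r".*\.__.*?__", m) is None]
# -> ['obj.upper', 'obj._private']

# omit__names == 2: drop every attribute starting with a single underscore
keep_public = [m for m in matches
               if re.match(r"\._.*?", m[m.rindex("."):]) is None]
# -> ['obj.upper']
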
1890 def _default_arguments_from_docstring(self, doc):
1891 def _default_arguments_from_docstring(self, doc):
1891 """Parse the first line of docstring for call signature.
1892 """Parse the first line of docstring for call signature.
1892
1893
1893 Docstring should be of the form 'min(iterable[, key=func])\n'.
1894 Docstring should be of the form 'min(iterable[, key=func])\n'.
1894 It can also parse cython docstring of the form
1895 It can also parse cython docstring of the form
1895 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1896 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1896 """
1897 """
1897 if doc is None:
1898 if doc is None:
1898 return []
1899 return []
1899
1900
1900 # care only about the first line
1901 # care only about the first line
1901 line = doc.lstrip().splitlines()[0]
1902 line = doc.lstrip().splitlines()[0]
1902
1903
1903 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1904 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1904 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1905 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1905 sig = self.docstring_sig_re.search(line)
1906 sig = self.docstring_sig_re.search(line)
1906 if sig is None:
1907 if sig is None:
1907 return []
1908 return []
1908 # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
1909 # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
1909 sig = sig.groups()[0].split(',')
1910 sig = sig.groups()[0].split(',')
1910 ret = []
1911 ret = []
1911 for s in sig:
1912 for s in sig:
1912 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1913 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1913 ret += self.docstring_kwd_re.findall(s)
1914 ret += self.docstring_kwd_re.findall(s)
1914 return ret
1915 return ret
1915
1916
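# --- Editor's illustrative sketch (not part of completer.py) -----------------
# What the docstring parsing above extracts, using the regular expressions
# quoted in the comments (assumed to mirror ``docstring_sig_re`` and
# ``docstring_kwd_re``); only keyword-style arguments survive:
import re

docstring_sig_re = re.compile(r"^[\w|\s.]+\(([^)]*)\).*")
docstring_kwd_re = re.compile(r"[\s|\[]*(\w+)(?:\s*=\s*.*)")

line = "min(iterable[, key=func])\n".lstrip().splitlines()[0]
sig = docstring_sig_re.search(line).groups()[0]   # 'iterable[, key=func]'
names = []
for part in sig.split(","):
    names += docstring_kwd_re.findall(part)
print(names)                                      # ['key']
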
1916 def _default_arguments(self, obj):
1917 def _default_arguments(self, obj):
1917 """Return the list of default arguments of obj if it is callable,
1918 """Return the list of default arguments of obj if it is callable,
1918 or empty list otherwise."""
1919 or empty list otherwise."""
1919 call_obj = obj
1920 call_obj = obj
1920 ret = []
1921 ret = []
1921 if inspect.isbuiltin(obj):
1922 if inspect.isbuiltin(obj):
1922 pass
1923 pass
1923 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1924 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1924 if inspect.isclass(obj):
1925 if inspect.isclass(obj):
1925 #for cython embedsignature=True the constructor docstring
1926 #for cython embedsignature=True the constructor docstring
1926 #belongs to the object itself not __init__
1927 #belongs to the object itself not __init__
1927 ret += self._default_arguments_from_docstring(
1928 ret += self._default_arguments_from_docstring(
1928 getattr(obj, '__doc__', ''))
1929 getattr(obj, '__doc__', ''))
1929 # for classes, check for __init__,__new__
1930 # for classes, check for __init__,__new__
1930 call_obj = (getattr(obj, '__init__', None) or
1931 call_obj = (getattr(obj, '__init__', None) or
1931 getattr(obj, '__new__', None))
1932 getattr(obj, '__new__', None))
1932 # for all others, check if they are __call__able
1933 # for all others, check if they are __call__able
1933 elif hasattr(obj, '__call__'):
1934 elif hasattr(obj, '__call__'):
1934 call_obj = obj.__call__
1935 call_obj = obj.__call__
1935 ret += self._default_arguments_from_docstring(
1936 ret += self._default_arguments_from_docstring(
1936 getattr(call_obj, '__doc__', ''))
1937 getattr(call_obj, '__doc__', ''))
1937
1938
1938 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1939 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1939 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1940 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1940
1941
1941 try:
1942 try:
1942 sig = inspect.signature(obj)
1943 sig = inspect.signature(obj)
1943 ret.extend(k for k, v in sig.parameters.items() if
1944 ret.extend(k for k, v in sig.parameters.items() if
1944 v.kind in _keeps)
1945 v.kind in _keeps)
1945 except ValueError:
1946 except ValueError:
1946 pass
1947 pass
1947
1948
1948 return list(set(ret))
1949 return list(set(ret))
1949
1950
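# --- Editor's illustrative sketch (not part of completer.py) -----------------
# The signature-based branch above keeps only parameters that can be passed by
# keyword; a stand-alone equivalent:
import inspect

def example(a, b=1, *args, c=2, **kwargs):
    pass

_keeps = (inspect.Parameter.KEYWORD_ONLY, inspect.Parameter.POSITIONAL_OR_KEYWORD)
names = [k for k, v in inspect.signature(example).parameters.items()
         if v.kind in _keeps]
print(names)   # ['a', 'b', 'c'] -- *args and **kwargs are excluded
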
1950 @context_matcher()
1951 @context_matcher()
1951 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1952 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1952 """Match named parameters (kwargs) of the last open function."""
1953 """Match named parameters (kwargs) of the last open function."""
1953 matches = self.python_func_kw_matches(context.token)
1954 matches = self.python_func_kw_matches(context.token)
1954 return _convert_matcher_v1_result_to_v2(matches, type="param")
1955 return _convert_matcher_v1_result_to_v2(matches, type="param")
1955
1956
1956 def python_func_kw_matches(self, text):
1957 def python_func_kw_matches(self, text):
1957 """Match named parameters (kwargs) of the last open function.
1958 """Match named parameters (kwargs) of the last open function.
1958
1959
1959 DEPRECATED: Deprecated since 8.6. Use ``python_func_kw_matcher`` instead.
1960 DEPRECATED: Deprecated since 8.6. Use ``python_func_kw_matcher`` instead.
1960 """
1961 """
1961
1962
1962 if "." in text: # a parameter cannot be dotted
1963 if "." in text: # a parameter cannot be dotted
1963 return []
1964 return []
1964 try: regexp = self.__funcParamsRegex
1965 try: regexp = self.__funcParamsRegex
1965 except AttributeError:
1966 except AttributeError:
1966 regexp = self.__funcParamsRegex = re.compile(r'''
1967 regexp = self.__funcParamsRegex = re.compile(r'''
1967 '.*?(?<!\\)' | # single quoted strings or
1968 '.*?(?<!\\)' | # single quoted strings or
1968 ".*?(?<!\\)" | # double quoted strings or
1969 ".*?(?<!\\)" | # double quoted strings or
1969 \w+ | # identifier
1970 \w+ | # identifier
1970 \S # other characters
1971 \S # other characters
1971 ''', re.VERBOSE | re.DOTALL)
1972 ''', re.VERBOSE | re.DOTALL)
1972 # 1. find the nearest identifier that comes before an unclosed
1973 # 1. find the nearest identifier that comes before an unclosed
1973 # parenthesis before the cursor
1974 # parenthesis before the cursor
1974 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1975 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1975 tokens = regexp.findall(self.text_until_cursor)
1976 tokens = regexp.findall(self.text_until_cursor)
1976 iterTokens = reversed(tokens); openPar = 0
1977 iterTokens = reversed(tokens); openPar = 0
1977
1978
1978 for token in iterTokens:
1979 for token in iterTokens:
1979 if token == ')':
1980 if token == ')':
1980 openPar -= 1
1981 openPar -= 1
1981 elif token == '(':
1982 elif token == '(':
1982 openPar += 1
1983 openPar += 1
1983 if openPar > 0:
1984 if openPar > 0:
1984 # found the last unclosed parenthesis
1985 # found the last unclosed parenthesis
1985 break
1986 break
1986 else:
1987 else:
1987 return []
1988 return []
1988 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1989 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1989 ids = []
1990 ids = []
1990 isId = re.compile(r'\w+$').match
1991 isId = re.compile(r'\w+$').match
1991
1992
1992 while True:
1993 while True:
1993 try:
1994 try:
1994 ids.append(next(iterTokens))
1995 ids.append(next(iterTokens))
1995 if not isId(ids[-1]):
1996 if not isId(ids[-1]):
1996 ids.pop(); break
1997 ids.pop(); break
1997 if not next(iterTokens) == '.':
1998 if not next(iterTokens) == '.':
1998 break
1999 break
1999 except StopIteration:
2000 except StopIteration:
2000 break
2001 break
2001
2002
2002 # Find all named arguments already assigned to, so as to avoid suggesting
2003 # Find all named arguments already assigned to, so as to avoid suggesting
2003 # them again
2004 # them again
2004 usedNamedArgs = set()
2005 usedNamedArgs = set()
2005 par_level = -1
2006 par_level = -1
2006 for token, next_token in zip(tokens, tokens[1:]):
2007 for token, next_token in zip(tokens, tokens[1:]):
2007 if token == '(':
2008 if token == '(':
2008 par_level += 1
2009 par_level += 1
2009 elif token == ')':
2010 elif token == ')':
2010 par_level -= 1
2011 par_level -= 1
2011
2012
2012 if par_level != 0:
2013 if par_level != 0:
2013 continue
2014 continue
2014
2015
2015 if next_token != '=':
2016 if next_token != '=':
2016 continue
2017 continue
2017
2018
2018 usedNamedArgs.add(token)
2019 usedNamedArgs.add(token)
2019
2020
2020 argMatches = []
2021 argMatches = []
2021 try:
2022 try:
2022 callableObj = '.'.join(ids[::-1])
2023 callableObj = '.'.join(ids[::-1])
2023 namedArgs = self._default_arguments(eval(callableObj,
2024 namedArgs = self._default_arguments(eval(callableObj,
2024 self.namespace))
2025 self.namespace))
2025
2026
2026 # Remove used named arguments from the list, no need to show twice
2027 # Remove used named arguments from the list, no need to show twice
2027 for namedArg in set(namedArgs) - usedNamedArgs:
2028 for namedArg in set(namedArgs) - usedNamedArgs:
2028 if namedArg.startswith(text):
2029 if namedArg.startswith(text):
2029 argMatches.append("%s=" %namedArg)
2030 argMatches.append("%s=" %namedArg)
2030 except:
2031 except:
2031 pass
2032 pass
2032
2033
2033 return argMatches
2034 return argMatches
2034
2035
2035 @staticmethod
2036 @staticmethod
2036 def _get_keys(obj: Any) -> List[Any]:
2037 def _get_keys(obj: Any) -> List[Any]:
2037 # Objects can define their own completions by defining an
2038 # Objects can define their own completions by defining an
2038 # _ipython_key_completions_() method.
2039 # _ipython_key_completions_() method.
2039 method = get_real_method(obj, '_ipython_key_completions_')
2040 method = get_real_method(obj, '_ipython_key_completions_')
2040 if method is not None:
2041 if method is not None:
2041 return method()
2042 return method()
2042
2043
2043 # Special case some common in-memory dict-like types
2044 # Special case some common in-memory dict-like types
2044 if isinstance(obj, dict) or\
2045 if isinstance(obj, dict) or\
2045 _safe_isinstance(obj, 'pandas', 'DataFrame'):
2046 _safe_isinstance(obj, 'pandas', 'DataFrame'):
2046 try:
2047 try:
2047 return list(obj.keys())
2048 return list(obj.keys())
2048 except Exception:
2049 except Exception:
2049 return []
2050 return []
2050 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2051 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
2051 _safe_isinstance(obj, 'numpy', 'void'):
2052 _safe_isinstance(obj, 'numpy', 'void'):
2052 return obj.dtype.names or []
2053 return obj.dtype.names or []
2053 return []
2054 return []
2054
2055
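# --- Editor's illustrative sketch (not part of completer.py) -----------------
# Any object can advertise its own key completions through the
# ``_ipython_key_completions_`` protocol checked above; a minimal example:
class Settings:
    def __init__(self):
        self._data = {"theme": "dark", "timeout": 30}

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        # the returned keys are offered after ``settings[`` at the prompt
        return list(self._data)

settings = Settings()
print(settings._ipython_key_completions_())   # ['theme', 'timeout']
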
2055 @context_matcher()
2056 @context_matcher()
2056 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2057 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2057 """Match string keys in a dictionary, after e.g. ``foo[``."""
2058 """Match string keys in a dictionary, after e.g. ``foo[``."""
2058 matches = self.dict_key_matches(context.token)
2059 matches = self.dict_key_matches(context.token)
2059 return _convert_matcher_v1_result_to_v2(
2060 return _convert_matcher_v1_result_to_v2(
2060 matches, type="dict key", suppress_if_matches=True
2061 matches, type="dict key", suppress_if_matches=True
2061 )
2062 )
2062
2063
2063 def dict_key_matches(self, text: str) -> List[str]:
2064 def dict_key_matches(self, text: str) -> List[str]:
2064 """Match string keys in a dictionary, after e.g. ``foo[``.
2065 """Match string keys in a dictionary, after e.g. ``foo[``.
2065
2066
2066 DEPRECATED: Deprecated since 8.6. Use `dict_key_matcher` instead.
2067 DEPRECATED: Deprecated since 8.6. Use `dict_key_matcher` instead.
2067 """
2068 """
2068
2069
2069 if self.__dict_key_regexps is not None:
2070 if self.__dict_key_regexps is not None:
2070 regexps = self.__dict_key_regexps
2071 regexps = self.__dict_key_regexps
2071 else:
2072 else:
2072 dict_key_re_fmt = r'''(?x)
2073 dict_key_re_fmt = r'''(?x)
2073 ( # match dict-referring expression wrt greedy setting
2074 ( # match dict-referring expression wrt greedy setting
2074 %s
2075 %s
2075 )
2076 )
2076 \[ # open bracket
2077 \[ # open bracket
2077 \s* # and optional whitespace
2078 \s* # and optional whitespace
2078 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2079 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2079 ((?:[uUbB]? # string prefix (r not handled)
2080 ((?:[uUbB]? # string prefix (r not handled)
2080 (?:
2081 (?:
2081 '(?:[^']|(?<!\\)\\')*'
2082 '(?:[^']|(?<!\\)\\')*'
2082 |
2083 |
2083 "(?:[^"]|(?<!\\)\\")*"
2084 "(?:[^"]|(?<!\\)\\")*"
2084 )
2085 )
2085 \s*,\s*
2086 \s*,\s*
2086 )*)
2087 )*)
2087 ([uUbB]? # string prefix (r not handled)
2088 ([uUbB]? # string prefix (r not handled)
2088 (?: # unclosed string
2089 (?: # unclosed string
2089 '(?:[^']|(?<!\\)\\')*
2090 '(?:[^']|(?<!\\)\\')*
2090 |
2091 |
2091 "(?:[^"]|(?<!\\)\\")*
2092 "(?:[^"]|(?<!\\)\\")*
2092 )
2093 )
2093 )?
2094 )?
2094 $
2095 $
2095 '''
2096 '''
2096 regexps = self.__dict_key_regexps = {
2097 regexps = self.__dict_key_regexps = {
2097 False: re.compile(dict_key_re_fmt % r'''
2098 False: re.compile(dict_key_re_fmt % r'''
2098 # identifiers separated by .
2099 # identifiers separated by .
2099 (?!\d)\w+
2100 (?!\d)\w+
2100 (?:\.(?!\d)\w+)*
2101 (?:\.(?!\d)\w+)*
2101 '''),
2102 '''),
2102 True: re.compile(dict_key_re_fmt % '''
2103 True: re.compile(dict_key_re_fmt % '''
2103 .+
2104 .+
2104 ''')
2105 ''')
2105 }
2106 }
2106
2107
2107 match = regexps[self.greedy].search(self.text_until_cursor)
2108 match = regexps[self.greedy].search(self.text_until_cursor)
2108
2109
2109 if match is None:
2110 if match is None:
2110 return []
2111 return []
2111
2112
2112 expr, prefix0, prefix = match.groups()
2113 expr, prefix0, prefix = match.groups()
2113 try:
2114 try:
2114 obj = eval(expr, self.namespace)
2115 obj = eval(expr, self.namespace)
2115 except Exception:
2116 except Exception:
2116 try:
2117 try:
2117 obj = eval(expr, self.global_namespace)
2118 obj = eval(expr, self.global_namespace)
2118 except Exception:
2119 except Exception:
2119 return []
2120 return []
2120
2121
2121 keys = self._get_keys(obj)
2122 keys = self._get_keys(obj)
2122 if not keys:
2123 if not keys:
2123 return keys
2124 return keys
2124
2125
2125 extra_prefix = eval(prefix0) if prefix0 != '' else None
2126 extra_prefix = eval(prefix0) if prefix0 != '' else None
2126
2127
2127 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2128 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2128 if not matches:
2129 if not matches:
2129 return matches
2130 return matches
2130
2131
2131 # get the cursor position of
2132 # get the cursor position of
2132 # - the text being completed
2133 # - the text being completed
2133 # - the start of the key text
2134 # - the start of the key text
2134 # - the start of the completion
2135 # - the start of the completion
2135 text_start = len(self.text_until_cursor) - len(text)
2136 text_start = len(self.text_until_cursor) - len(text)
2136 if prefix:
2137 if prefix:
2137 key_start = match.start(3)
2138 key_start = match.start(3)
2138 completion_start = key_start + token_offset
2139 completion_start = key_start + token_offset
2139 else:
2140 else:
2140 key_start = completion_start = match.end()
2141 key_start = completion_start = match.end()
2141
2142
2142 # grab the leading prefix, to make sure all completions start with `text`
2143 # grab the leading prefix, to make sure all completions start with `text`
2143 if text_start > key_start:
2144 if text_start > key_start:
2144 leading = ''
2145 leading = ''
2145 else:
2146 else:
2146 leading = text[text_start:completion_start]
2147 leading = text[text_start:completion_start]
2147
2148
2148 # the index of the `[` character
2149 # the index of the `[` character
2149 bracket_idx = match.end(1)
2150 bracket_idx = match.end(1)
2150
2151
2151 # append closing quote and bracket as appropriate
2152 # append closing quote and bracket as appropriate
2152 # this is *not* appropriate if the opening quote or bracket is outside
2153 # this is *not* appropriate if the opening quote or bracket is outside
2153 # the text given to this method
2154 # the text given to this method
2154 suf = ''
2155 suf = ''
2155 continuation = self.line_buffer[len(self.text_until_cursor):]
2156 continuation = self.line_buffer[len(self.text_until_cursor):]
2156 if key_start > text_start and closing_quote:
2157 if key_start > text_start and closing_quote:
2157 # quotes were opened inside text, maybe close them
2158 # quotes were opened inside text, maybe close them
2158 if continuation.startswith(closing_quote):
2159 if continuation.startswith(closing_quote):
2159 continuation = continuation[len(closing_quote):]
2160 continuation = continuation[len(closing_quote):]
2160 else:
2161 else:
2161 suf += closing_quote
2162 suf += closing_quote
2162 if bracket_idx > text_start:
2163 if bracket_idx > text_start:
2163 # brackets were opened inside text, maybe close them
2164 # brackets were opened inside text, maybe close them
2164 if not continuation.startswith(']'):
2165 if not continuation.startswith(']'):
2165 suf += ']'
2166 suf += ']'
2166
2167
2167 return [leading + k + suf for k in matches]
2168 return [leading + k + suf for k in matches]
2168
2169
2169 @context_matcher()
2170 @context_matcher()
2170 def unicode_name_matcher(self, context):
2171 def unicode_name_matcher(self, context):
2171 fragment, matches = self.unicode_name_matches(context.token)
2172 fragment, matches = self.unicode_name_matches(context.token)
2172 return _convert_matcher_v1_result_to_v2(
2173 return _convert_matcher_v1_result_to_v2(
2173 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2174 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2174 )
2175 )
2175
2176
2176 @staticmethod
2177 @staticmethod
2177 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2178 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
2178 """Match Latex-like syntax for unicode characters base
2179 """Match Latex-like syntax for unicode characters base
2179 on the name of the character.
2180 on the name of the character.
2180
2181
2181 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2182 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
2182
2183
2183 Works only on valid Python 3 identifiers, or on combining characters that
2184 Works only on valid Python 3 identifiers, or on combining characters that
2184 will combine to form a valid identifier.
2185 will combine to form a valid identifier.
2185 """
2186 """
2186 slashpos = text.rfind('\\')
2187 slashpos = text.rfind('\\')
2187 if slashpos > -1:
2188 if slashpos > -1:
2188 s = text[slashpos+1:]
2189 s = text[slashpos+1:]
2189 try :
2190 try :
2190 unic = unicodedata.lookup(s)
2191 unic = unicodedata.lookup(s)
2191 # allow combining chars
2192 # allow combining chars
2192 if ('a'+unic).isidentifier():
2193 if ('a'+unic).isidentifier():
2193 return '\\'+s,[unic]
2194 return '\\'+s,[unic]
2194 except KeyError:
2195 except KeyError:
2195 pass
2196 pass
2196 return '', []
2197 return '', []
2197
2198
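# --- Editor's illustrative sketch (not part of completer.py) -----------------
# The name-based lookup above relies on the standard library; a stand-alone
# equivalent of completing ``\GREEK SMALL LETTER ETA``:
import unicodedata

name = "GREEK SMALL LETTER ETA"
char = unicodedata.lookup(name)          # 'η'
print(("a" + char).isidentifier())       # True: usable inside an identifier
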
2198 @context_matcher()
2199 @context_matcher()
2199 def latex_name_matcher(self, context):
2200 def latex_name_matcher(self, context):
2200 """Match Latex syntax for unicode characters.
2201 """Match Latex syntax for unicode characters.
2201
2202
2202 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2203 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2203 """
2204 """
2204 fragment, matches = self.latex_matches(context.token)
2205 fragment, matches = self.latex_matches(context.token)
2205 return _convert_matcher_v1_result_to_v2(
2206 return _convert_matcher_v1_result_to_v2(
2206 matches, type="latex", fragment=fragment, suppress_if_matches=True
2207 matches, type="latex", fragment=fragment, suppress_if_matches=True
2207 )
2208 )
2208
2209
2209 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2210 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
2210 """Match Latex syntax for unicode characters.
2211 """Match Latex syntax for unicode characters.
2211
2212
2212 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2213 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
2213
2214
2214 DEPRECATED: Deprecated since 8.6. Use `latex_matcher` instead.
2215 DEPRECATED: Deprecated since 8.6. Use `latex_matcher` instead.
2215 """
2216 """
2216 slashpos = text.rfind('\\')
2217 slashpos = text.rfind('\\')
2217 if slashpos > -1:
2218 if slashpos > -1:
2218 s = text[slashpos:]
2219 s = text[slashpos:]
2219 if s in latex_symbols:
2220 if s in latex_symbols:
2220 # Try to complete a full latex symbol to unicode
2221 # Try to complete a full latex symbol to unicode
2221 # \\alpha -> Ξ±
2222 # \\alpha -> Ξ±
2222 return s, [latex_symbols[s]]
2223 return s, [latex_symbols[s]]
2223 else:
2224 else:
2224 # If a user has partially typed a latex symbol, give them
2225 # If a user has partially typed a latex symbol, give them
2225 # a full list of options \al -> [\aleph, \alpha]
2226 # a full list of options \al -> [\aleph, \alpha]
2226 matches = [k for k in latex_symbols if k.startswith(s)]
2227 matches = [k for k in latex_symbols if k.startswith(s)]
2227 if matches:
2228 if matches:
2228 return s, matches
2229 return s, matches
2229 return '', ()
2230 return '', ()
2230
2231
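# --- Editor's illustrative sketch (not part of completer.py) -----------------
# The two branches above (full symbol -> character, partial symbol -> list of
# candidates) in stand-alone form, with a tiny stand-in for the real
# ``latex_symbols`` table:
symbols = {"\\alpha": "α", "\\aleph": "ℵ", "\\beta": "β"}

def sketch_latex_matches(text):
    slashpos = text.rfind("\\")
    if slashpos == -1:
        return "", ()
    s = text[slashpos:]
    if s in symbols:
        return s, [symbols[s]]                   # '\alpha' -> ['α']
    candidates = [k for k in symbols if k.startswith(s)]
    return (s, candidates) if candidates else ("", ())

print(sketch_latex_matches("x = \\al"))   # ('\\al', ['\\alpha', '\\aleph'])
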
2231 @context_matcher()
2232 @context_matcher()
2232 def custom_completer_matcher(self, context):
2233 def custom_completer_matcher(self, context):
2233 matches = self.dispatch_custom_completer(context.token) or []
2234 matches = self.dispatch_custom_completer(context.token) or []
2234 result = _convert_matcher_v1_result_to_v2(
2235 result = _convert_matcher_v1_result_to_v2(
2235 matches, type="<unknown>", suppress_if_matches=True
2236 matches, type="<unknown>", suppress_if_matches=True
2236 )
2237 )
2237 result["ordered"] = True
2238 result["ordered"] = True
2238 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2239 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
2239 return result
2240 return result
2240
2241
2241 def dispatch_custom_completer(self, text):
2242 def dispatch_custom_completer(self, text):
2242 """
2243 """
2243 DEPRECATED: Deprecated since 8.6. Use `custom_completer_matcher` instead.
2244 DEPRECATED: Deprecated since 8.6. Use `custom_completer_matcher` instead.
2244 """
2245 """
2245 if not self.custom_completers:
2246 if not self.custom_completers:
2246 return
2247 return
2247
2248
2248 line = self.line_buffer
2249 line = self.line_buffer
2249 if not line.strip():
2250 if not line.strip():
2250 return None
2251 return None
2251
2252
2252 # Create a little structure to pass all the relevant information about
2253 # Create a little structure to pass all the relevant information about
2253 # the current completion to any custom completer.
2254 # the current completion to any custom completer.
2254 event = SimpleNamespace()
2255 event = SimpleNamespace()
2255 event.line = line
2256 event.line = line
2256 event.symbol = text
2257 event.symbol = text
2257 cmd = line.split(None,1)[0]
2258 cmd = line.split(None,1)[0]
2258 event.command = cmd
2259 event.command = cmd
2259 event.text_until_cursor = self.text_until_cursor
2260 event.text_until_cursor = self.text_until_cursor
2260
2261
2261 # for foo etc, try also to find completer for %foo
2262 # for foo etc, try also to find completer for %foo
2262 if not cmd.startswith(self.magic_escape):
2263 if not cmd.startswith(self.magic_escape):
2263 try_magic = self.custom_completers.s_matches(
2264 try_magic = self.custom_completers.s_matches(
2264 self.magic_escape + cmd)
2265 self.magic_escape + cmd)
2265 else:
2266 else:
2266 try_magic = []
2267 try_magic = []
2267
2268
2268 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2269 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2269 try_magic,
2270 try_magic,
2270 self.custom_completers.flat_matches(self.text_until_cursor)):
2271 self.custom_completers.flat_matches(self.text_until_cursor)):
2271 try:
2272 try:
2272 res = c(event)
2273 res = c(event)
2273 if res:
2274 if res:
2274 # first, try case sensitive match
2275 # first, try case sensitive match
2275 withcase = [r for r in res if r.startswith(text)]
2276 withcase = [r for r in res if r.startswith(text)]
2276 if withcase:
2277 if withcase:
2277 return withcase
2278 return withcase
2278 # if none, then case insensitive ones are ok too
2279 # if none, then case insensitive ones are ok too
2279 text_low = text.lower()
2280 text_low = text.lower()
2280 return [r for r in res if r.lower().startswith(text_low)]
2281 return [r for r in res if r.lower().startswith(text_low)]
2281 except TryNext:
2282 except TryNext:
2282 pass
2283 pass
2283 except KeyboardInterrupt:
2284 except KeyboardInterrupt:
2284 """
2285 """
2285 If a custom completer takes too long,
2286 If a custom completer takes too long,
2286 let keyboard interrupt abort and return nothing.
2287 let keyboard interrupt abort and return nothing.
2287 """
2288 """
2288 break
2289 break
2289
2290
2290 return None
2291 return None
2291
2292
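# --- Editor's illustrative sketch (not part of completer.py) -----------------
# Custom completers receive the small ``event`` namespace built above (with
# ``line``, ``symbol``, ``command`` and ``text_until_cursor``) and return a
# list of strings, or raise TryNext to defer to the next completer. A hedged
# registration example, assuming the standard ``set_hook('complete_command',
# ...)`` API and a running interactive session:
def deploy_completer(shell, event):
    # ``event.symbol`` is the token being completed, ``event.line`` the line
    targets = ["staging", "production"]
    return [t for t in targets if t.startswith(event.symbol)]

# get_ipython().set_hook("complete_command", deploy_completer, str_key="%deploy")
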
2292 def completions(self, text: str, offset: int)->Iterator[Completion]:
2293 def completions(self, text: str, offset: int)->Iterator[Completion]:
2293 """
2294 """
2294 Returns an iterator over the possible completions
2295 Returns an iterator over the possible completions
2295
2296
2296 .. warning::
2297 .. warning::
2297
2298
2298 Unstable
2299 Unstable
2299
2300
2300 This function is unstable, API may change without warning.
2301 This function is unstable, API may change without warning.
2301 It will also raise unless used in a proper context manager.
2302 It will also raise unless used in a proper context manager.
2302
2303
2303 Parameters
2304 Parameters
2304 ----------
2305 ----------
2305 text : str
2306 text : str
2306 Full text of the current input, multi line string.
2307 Full text of the current input, multi line string.
2307 offset : int
2308 offset : int
2308 Integer representing the position of the cursor in ``text``. Offset
2309 Integer representing the position of the cursor in ``text``. Offset
2309 is 0-based.
2310 is 0-based.
2310
2311
2311 Yields
2312 Yields
2312 ------
2313 ------
2313 Completion
2314 Completion
2314
2315
2315 Notes
2316 Notes
2316 -----
2317 -----
2317 The cursor on a text can either be seen as being "in between"
2318 The cursor on a text can either be seen as being "in between"
2318 characters or "on" a character, depending on the interface visible to
2319 characters or "on" a character, depending on the interface visible to
2319 the user. For consistency the cursor being on "in between" characters X
2320 the user. For consistency the cursor being on "in between" characters X
2320 and Y is equivalent to the cursor being "on" character Y, that is to say
2321 and Y is equivalent to the cursor being "on" character Y, that is to say
2321 the character the cursor is on is considered as being after the cursor.
2322 the character the cursor is on is considered as being after the cursor.
2322
2323
2323 Combining characters may span more than one position in the
2324 Combining characters may span more than one position in the
2324 text.
2325 text.
2325
2326
2326 .. note::
2327 .. note::
2327
2328
2328 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2329 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2329 fake Completion token to distinguish completions returned by Jedi
2330 fake Completion token to distinguish completions returned by Jedi
2330 from usual IPython completions.
2331 from usual IPython completions.
2331
2332
2332 .. note::
2333 .. note::
2333
2334
2334 Completions are not completely deduplicated yet. If identical
2335 Completions are not completely deduplicated yet. If identical
2335 completions are coming from different sources this function does not
2336 completions are coming from different sources this function does not
2336 ensure that each completion object will only be present once.
2337 ensure that each completion object will only be present once.
2337 """
2338 """
2338 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2339 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2339 "It may change without warnings. "
2340 "It may change without warnings. "
2340 "Use in corresponding context manager.",
2341 "Use in corresponding context manager.",
2341 category=ProvisionalCompleterWarning, stacklevel=2)
2342 category=ProvisionalCompleterWarning, stacklevel=2)
2342
2343
2343 seen = set()
2344 seen = set()
2344 profiler:Optional[cProfile.Profile]
2345 profiler:Optional[cProfile.Profile]
2345 try:
2346 try:
2346 if self.profile_completions:
2347 if self.profile_completions:
2347 import cProfile
2348 import cProfile
2348 profiler = cProfile.Profile()
2349 profiler = cProfile.Profile()
2349 profiler.enable()
2350 profiler.enable()
2350 else:
2351 else:
2351 profiler = None
2352 profiler = None
2352
2353
2353 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2354 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2354 if c and (c in seen):
2355 if c and (c in seen):
2355 continue
2356 continue
2356 yield c
2357 yield c
2357 seen.add(c)
2358 seen.add(c)
2358 except KeyboardInterrupt:
2359 except KeyboardInterrupt:
2359 """if completions take too long and users send keyboard interrupt,
2360 """if completions take too long and users send keyboard interrupt,
2360 do not crash and return ASAP. """
2361 do not crash and return ASAP. """
2361 pass
2362 pass
2362 finally:
2363 finally:
2363 if profiler is not None:
2364 if profiler is not None:
2364 profiler.disable()
2365 profiler.disable()
2365 ensure_dir_exists(self.profiler_output_dir)
2366 ensure_dir_exists(self.profiler_output_dir)
2366 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2367 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2367 print("Writing profiler output to", output_path)
2368 print("Writing profiler output to", output_path)
2368 profiler.dump_stats(output_path)
2369 profiler.dump_stats(output_path)
2369
2370
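# --- Editor's illustrative sketch (not part of completer.py) -----------------
# The provisional ``completions`` API above is meant to be called inside the
# ``provisionalcompleter`` context manager (see the warning it emits). A hedged
# usage example, assuming a running IPython session:
from IPython import get_ipython
from IPython.core.completer import provisionalcompleter

ip = get_ipython()                        # None outside of an IPython session
code = "import collections\ncollections.name"
with provisionalcompleter():
    # the offset is a 0-indexed position into ``code``; here: end of the text
    comps = list(ip.Completer.completions(code, len(code)))
print([c.text for c in comps])            # expected to include 'namedtuple'
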
2370 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2371 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2371 """
2372 """
2372 Core completion method. Same signature as :any:`completions`, with the
2373 Core completion method. Same signature as :any:`completions`, with the
2373 extra `_timeout` parameter (in seconds).
2374 extra `_timeout` parameter (in seconds).
2374
2375
2375 Computing jedi's completion ``.type`` can be quite expensive (it is a
2376 Computing jedi's completion ``.type`` can be quite expensive (it is a
2376 lazy property) and can require some warm-up, more warm up than just
2377 lazy property) and can require some warm-up, more warm up than just
2377 computing the ``name`` of a completion. The warm-up can be :
2378 computing the ``name`` of a completion. The warm-up can be :
2378
2379
2379 - Long warm-up the first time a module is encountered after
2380 - Long warm-up the first time a module is encountered after
2380 install/update: actually build parse/inference tree.
2381 install/update: actually build parse/inference tree.
2381
2382
2382 - first time the module is encountered in a session: load tree from
2383 - first time the module is encountered in a session: load tree from
2383 disk.
2384 disk.
2384
2385
2385 We don't want to block completions for tens of seconds so we give the
2386 We don't want to block completions for tens of seconds so we give the
2386 completer a "budget" of ``_timeout`` seconds per invocation to compute
2387 completer a "budget" of ``_timeout`` seconds per invocation to compute
2387 completion types; the completions that have not yet been computed will
2388 completion types; the completions that have not yet been computed will
2388 be marked as "unknown" and will have a chance to be computed next round
2389 be marked as "unknown" and will have a chance to be computed next round
2389 as things get cached.
2390 as things get cached.
2390
2391
2391 Keep in mind that Jedi is not the only thing handling the completion, so
2392 Keep in mind that Jedi is not the only thing handling the completion, so
2392 keep the timeout short-ish: if we take more than 0.3 seconds we still
2393 keep the timeout short-ish: if we take more than 0.3 seconds we still
2393 have lots of processing to do.
2394 have lots of processing to do.
2394
2395
2395 """
2396 """
2396 deadline = time.monotonic() + _timeout
2397 deadline = time.monotonic() + _timeout
2397
2398
2398 before = full_text[:offset]
2399 before = full_text[:offset]
2399 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2400 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2400
2401
2401 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2402 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2402
2403
2403 results = self._complete(
2404 results = self._complete(
2404 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2405 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2405 )
2406 )
2406 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2407 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2407 identifier: result
2408 identifier: result
2408 for identifier, result in results.items()
2409 for identifier, result in results.items()
2409 if identifier != jedi_matcher_id
2410 if identifier != jedi_matcher_id
2410 }
2411 }
2411
2412
2412 jedi_matches = (
2413 jedi_matches = (
2413 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2414 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2414 if jedi_matcher_id in results
2415 if jedi_matcher_id in results
2415 else ()
2416 else ()
2416 )
2417 )
2417
2418
2418 iter_jm = iter(jedi_matches)
2419 iter_jm = iter(jedi_matches)
2419 if _timeout:
2420 if _timeout:
2420 for jm in iter_jm:
2421 for jm in iter_jm:
2421 try:
2422 try:
2422 type_ = jm.type
2423 type_ = jm.type
2423 except Exception:
2424 except Exception:
2424 if self.debug:
2425 if self.debug:
2425 print("Error in Jedi getting type of ", jm)
2426 print("Error in Jedi getting type of ", jm)
2426 type_ = None
2427 type_ = None
2427 delta = len(jm.name_with_symbols) - len(jm.complete)
2428 delta = len(jm.name_with_symbols) - len(jm.complete)
2428 if type_ == 'function':
2429 if type_ == 'function':
2429 signature = _make_signature(jm)
2430 signature = _make_signature(jm)
2430 else:
2431 else:
2431 signature = ''
2432 signature = ''
2432 yield Completion(start=offset - delta,
2433 yield Completion(start=offset - delta,
2433 end=offset,
2434 end=offset,
2434 text=jm.name_with_symbols,
2435 text=jm.name_with_symbols,
2435 type=type_,
2436 type=type_,
2436 signature=signature,
2437 signature=signature,
2437 _origin='jedi')
2438 _origin='jedi')
2438
2439
2439 if time.monotonic() > deadline:
2440 if time.monotonic() > deadline:
2440 break
2441 break
2441
2442
2442 for jm in iter_jm:
2443 for jm in iter_jm:
2443 delta = len(jm.name_with_symbols) - len(jm.complete)
2444 delta = len(jm.name_with_symbols) - len(jm.complete)
2444 yield Completion(
2445 yield Completion(
2445 start=offset - delta,
2446 start=offset - delta,
2446 end=offset,
2447 end=offset,
2447 text=jm.name_with_symbols,
2448 text=jm.name_with_symbols,
2448 type=_UNKNOWN_TYPE, # don't compute type for speed
2449 type=_UNKNOWN_TYPE, # don't compute type for speed
2449 _origin="jedi",
2450 _origin="jedi",
2450 signature="",
2451 signature="",
2451 )
2452 )
2452
2453
2453 # TODO:
2454 # TODO:
2454 # Suppress this, right now just for debug.
2455 # Suppress this, right now just for debug.
2455 if jedi_matches and non_jedi_results and self.debug:
2456 if jedi_matches and non_jedi_results and self.debug:
2456 some_start_offset = before.rfind(
2457 some_start_offset = before.rfind(
2457 next(iter(non_jedi_results.values()))["matched_fragment"]
2458 next(iter(non_jedi_results.values()))["matched_fragment"]
2458 )
2459 )
2459 yield Completion(
2460 yield Completion(
2460 start=some_start_offset,
2461 start=some_start_offset,
2461 end=offset,
2462 end=offset,
2462 text="--jedi/ipython--",
2463 text="--jedi/ipython--",
2463 _origin="debug",
2464 _origin="debug",
2464 type="none",
2465 type="none",
2465 signature="",
2466 signature="",
2466 )
2467 )
2467
2468
2468 ordered = []
2469 ordered = []
2469 sortable = []
2470 sortable = []
2470
2471
2471 for origin, result in non_jedi_results.items():
2472 for origin, result in non_jedi_results.items():
2472 matched_text = result["matched_fragment"]
2473 matched_text = result["matched_fragment"]
2473 start_offset = before.rfind(matched_text)
2474 start_offset = before.rfind(matched_text)
2474 is_ordered = result.get("ordered", False)
2475 is_ordered = result.get("ordered", False)
2475 container = ordered if is_ordered else sortable
2476 container = ordered if is_ordered else sortable
2476
2477
2477 # I'm unsure if this is always true, so let's assert and see if it
2478 # I'm unsure if this is always true, so let's assert and see if it
2478 # crashes
2479 # crashes
2479 assert before.endswith(matched_text)
2480 assert before.endswith(matched_text)
2480
2481
2481 for simple_completion in result["completions"]:
2482 for simple_completion in result["completions"]:
2482 completion = Completion(
2483 completion = Completion(
2483 start=start_offset,
2484 start=start_offset,
2484 end=offset,
2485 end=offset,
2485 text=simple_completion.text,
2486 text=simple_completion.text,
2486 _origin=origin,
2487 _origin=origin,
2487 signature="",
2488 signature="",
2488 type=simple_completion.type or _UNKNOWN_TYPE,
2489 type=simple_completion.type or _UNKNOWN_TYPE,
2489 )
2490 )
2490 container.append(completion)
2491 container.append(completion)
2491
2492
2492 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2493 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
2493 :MATCHES_LIMIT
2494 :MATCHES_LIMIT
2494 ]
2495 ]
2495
2496
2496 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2497 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2497 """Find completions for the given text and line context.
2498 """Find completions for the given text and line context.
2498
2499
2499 Note that both the text and the line_buffer are optional, but at least
2500 Note that both the text and the line_buffer are optional, but at least
2500 one of them must be given.
2501 one of them must be given.
2501
2502
2502 Parameters
2503 Parameters
2503 ----------
2504 ----------
2504 text : string, optional
2505 text : string, optional
2505 Text to perform the completion on. If not given, the line buffer
2506 Text to perform the completion on. If not given, the line buffer
2506 is split using the instance's CompletionSplitter object.
2507 is split using the instance's CompletionSplitter object.
2507 line_buffer : string, optional
2508 line_buffer : string, optional
2508 If not given, the completer attempts to obtain the current line
2509 If not given, the completer attempts to obtain the current line
2509 buffer via readline. This keyword allows clients which are
2510 buffer via readline. This keyword allows clients which are
2510 requesting text completions in non-readline contexts to inform
2511 requesting text completions in non-readline contexts to inform
2511 the completer of the entire text.
2512 the completer of the entire text.
2512 cursor_pos : int, optional
2513 cursor_pos : int, optional
2513 Index of the cursor in the full line buffer. Should be provided by
2514 Index of the cursor in the full line buffer. Should be provided by
2514 remote frontends where kernel has no access to frontend state.
2515 remote frontends where kernel has no access to frontend state.
2515
2516
2516 Returns
2517 Returns
2517 -------
2518 -------
2518 Tuple of two items:
2519 Tuple of two items:
2519 text : str
2520 text : str
2520 Text that was actually used in the completion.
2521 Text that was actually used in the completion.
2521 matches : list
2522 matches : list
2522 A list of completion matches.
2523 A list of completion matches.
2523
2524
2524 Notes
2525 Notes
2525 -----
2526 -----
2526 This API is likely to be deprecated and replaced by
2527 This API is likely to be deprecated and replaced by
2527 :any:`IPCompleter.completions` in the future.
2528 :any:`IPCompleter.completions` in the future.
2528
2529
2529 """
2530 """
2530 warnings.warn('`Completer.complete` is pending deprecation since '
2531 warnings.warn('`Completer.complete` is pending deprecation since '
2531 'IPython 6.0 and will be replaced by `Completer.completions`.',
2532 'IPython 6.0 and will be replaced by `Completer.completions`.',
2532 PendingDeprecationWarning)
2533 PendingDeprecationWarning)
2533 # potential todo, FOLD the 3rd throw away argument of _complete
2534 # potential todo, FOLD the 3rd throw away argument of _complete
2534 # into the first 2 one.
2535 # into the first 2 one.
2535 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2536 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2536 # TODO: should we deprecate now, or does it stay?
2537 # TODO: should we deprecate now, or does it stay?
2537
2538
2538 results = self._complete(
2539 results = self._complete(
2539 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2540 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2540 )
2541 )
2541
2542
2542 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2543 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2543
2544
2544 return self._arrange_and_extract(
2545 return self._arrange_and_extract(
2545 results,
2546 results,
2546 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
2547 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
2547 skip_matchers={jedi_matcher_id},
2548 skip_matchers={jedi_matcher_id},
2548 # this API does not support different start/end positions (fragments of token).
2549 # this API does not support different start/end positions (fragments of token).
2549 abort_if_offset_changes=True,
2550 abort_if_offset_changes=True,
2550 )
2551 )
2551
2552
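# --- Editor's illustrative sketch (not part of completer.py) -----------------
# The legacy ``complete`` API above returns the text that was completed plus a
# flat list of string matches; a hedged example, assuming a running IPython
# session:
from IPython import get_ipython

ip = get_ipython()                        # None outside of an IPython session
line = "pri"
matched_text, matches = ip.Completer.complete(line_buffer=line, cursor_pos=len(line))
print(matched_text, matches)              # e.g. 'pri' ['print', ...]
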
2552 def _arrange_and_extract(
2553 def _arrange_and_extract(
2553 self,
2554 self,
2554 results: Dict[str, MatcherResult],
2555 results: Dict[str, MatcherResult],
2555 skip_matchers: Set[str],
2556 skip_matchers: Set[str],
2556 abort_if_offset_changes: bool,
2557 abort_if_offset_changes: bool,
2557 ):
2558 ):
2558
2559
2559 sortable = []
2560 sortable = []
2560 ordered = []
2561 ordered = []
2561 most_recent_fragment = None
2562 most_recent_fragment = None
2562 for identifier, result in results.items():
2563 for identifier, result in results.items():
2563 if identifier in skip_matchers:
2564 if identifier in skip_matchers:
2564 continue
2565 continue
2565 if not result["completions"]:
2566 if not result["completions"]:
2566 continue
2567 continue
2567 if not most_recent_fragment:
2568 if not most_recent_fragment:
2568 most_recent_fragment = result["matched_fragment"]
2569 most_recent_fragment = result["matched_fragment"]
2569 if (
2570 if (
2570 abort_if_offset_changes
2571 abort_if_offset_changes
2571 and result["matched_fragment"] != most_recent_fragment
2572 and result["matched_fragment"] != most_recent_fragment
2572 ):
2573 ):
2573 break
2574 break
2574 if result.get("ordered", False):
2575 if result.get("ordered", False):
2575 ordered.extend(result["completions"])
2576 ordered.extend(result["completions"])
2576 else:
2577 else:
2577 sortable.extend(result["completions"])
2578 sortable.extend(result["completions"])
2578
2579
2579 if not most_recent_fragment:
2580 if not most_recent_fragment:
2580 most_recent_fragment = "" # to satisfy typechecker (and just in case)
2581 most_recent_fragment = "" # to satisfy typechecker (and just in case)
2581
2582
2582 return most_recent_fragment, [
2583 return most_recent_fragment, [
2583 m.text for m in self._deduplicate(ordered + self._sort(sortable))
2584 m.text for m in self._deduplicate(ordered + self._sort(sortable))
2584 ]
2585 ]
2585
2586
2586 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2587 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2587 full_text=None) -> _CompleteResult:
2588 full_text=None) -> _CompleteResult:
2588 """
2589 """
2589 Like complete but can also return raw Jedi completions, as well as the
2590 Like complete but can also return raw Jedi completions, as well as the
2590 origin of the completion text. This could (and should) be made much
2591 origin of the completion text. This could (and should) be made much
2591 cleaner but that will be simpler once we drop the old (and stateful)
2592 cleaner but that will be simpler once we drop the old (and stateful)
2592 :any:`complete` API.
2593 :any:`complete` API.
2593
2594
2594 With the current provisional API, ``cursor_pos`` acts (depending on the
2595 With the current provisional API, ``cursor_pos`` acts (depending on the
2595 caller) both as the offset in ``text`` or ``line_buffer`` and as the
2596 caller) both as the offset in ``text`` or ``line_buffer`` and as the
2596 ``column`` when passing multiline strings; this could/should be renamed,
2597 ``column`` when passing multiline strings; this could/should be renamed,
2597 but that would add extra noise.
2598 but that would add extra noise.
2598
2599
2599 Parameters
2600 Parameters
2600 ----------
2601 ----------
2601 cursor_line
2602 cursor_line
2602 Index of the line the cursor is on. 0 indexed.
2603 Index of the line the cursor is on. 0 indexed.
2603 cursor_pos
2604 cursor_pos
2604 Position of the cursor in the current line/line_buffer/text. 0
2605 Position of the cursor in the current line/line_buffer/text. 0
2605 indexed.
2606 indexed.
2606 line_buffer : optional, str
2607 line_buffer : optional, str
2607 The current line the cursor is in; this is mostly for legacy
2608 The current line the cursor is in; this is mostly for legacy
2608 reasons, as readline could only give us the single current line.
2609 reasons, as readline could only give us the single current line.
2609 Prefer `full_text`.
2610 Prefer `full_text`.
2610 text : str
2611 text : str
2611 The current "token" the cursor is in, mostly also for historical
2612 The current "token" the cursor is in, mostly also for historical
2612 reasons, as the completer would trigger only after the current line
2613 reasons, as the completer would trigger only after the current line
2613 was parsed.
2614 was parsed.
2614 full_text : str
2615 full_text : str
2615 Full text of the current cell.
2616 Full text of the current cell.
2616
2617
2617 Returns
2618 Returns
2618 -------
2619 -------
2619 An ordered dictionary where keys are identifiers of completion
2620 An ordered dictionary where keys are identifiers of completion
2620 matchers and values are ``MatcherResult``s.
2621 matchers and values are ``MatcherResult``s.
2621 """
2622 """
2622
2623
2623 # if the cursor position isn't given, the only sane assumption we can
2624 # if the cursor position isn't given, the only sane assumption we can
2624 # make is that it's at the end of the line (the common case)
2625 # make is that it's at the end of the line (the common case)
2625 if cursor_pos is None:
2626 if cursor_pos is None:
2626 cursor_pos = len(line_buffer) if text is None else len(text)
2627 cursor_pos = len(line_buffer) if text is None else len(text)
2627
2628
2628 if self.use_main_ns:
2629 if self.use_main_ns:
2629 self.namespace = __main__.__dict__
2630 self.namespace = __main__.__dict__
2630
2631
2631 # if text is either None or an empty string, rely on the line buffer
2632 # if text is either None or an empty string, rely on the line buffer
2632 if (not line_buffer) and full_text:
2633 if (not line_buffer) and full_text:
2633 line_buffer = full_text.split('\n')[cursor_line]
2634 line_buffer = full_text.split('\n')[cursor_line]
2634 if not text: # issue #11508: check line_buffer before calling split_line
2635 if not text: # issue #11508: check line_buffer before calling split_line
2635 text = (
2636 text = (
2636 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
2637 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
2637 )
2638 )
2638
2639
2639 # If no line buffer is given, assume the input text is all there was
2640 # If no line buffer is given, assume the input text is all there was
2640 if line_buffer is None:
2641 if line_buffer is None:
2641 line_buffer = text
2642 line_buffer = text
2642
2643
2643 # deprecated - do not use `line_buffer` in new code.
2644 # deprecated - do not use `line_buffer` in new code.
2644 self.line_buffer = line_buffer
2645 self.line_buffer = line_buffer
2645 self.text_until_cursor = self.line_buffer[:cursor_pos]
2646 self.text_until_cursor = self.line_buffer[:cursor_pos]
2646
2647
2647 if not full_text:
2648 if not full_text:
2648 full_text = line_buffer
2649 full_text = line_buffer
2649
2650
2650 context = CompletionContext(
2651 context = CompletionContext(
2651 full_text=full_text,
2652 full_text=full_text,
2652 cursor_position=cursor_pos,
2653 cursor_position=cursor_pos,
2653 cursor_line=cursor_line,
2654 cursor_line=cursor_line,
2654 token=text,
2655 token=text,
2655 limit=MATCHES_LIMIT,
2656 limit=MATCHES_LIMIT,
2656 )
2657 )
2657
2658
2658 # Start with a clean slate of completions
2659 # Start with a clean slate of completions
2659 results = {}
2660 results = {}
2660
2661
2661 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2662 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2662
2663
2663 suppressed_matchers = set()
2664 suppressed_matchers = set()
2664
2665
2665 matchers = {
2666 matchers = {
2666 _get_matcher_id(matcher): matcher
2667 _get_matcher_id(matcher): matcher
2667 for matcher in sorted(
2668 for matcher in sorted(
2668 self.matchers, key=_get_matcher_priority, reverse=True
2669 self.matchers, key=_get_matcher_priority, reverse=True
2669 )
2670 )
2670 }
2671 }
2671
2672
2672 for matcher_id, matcher in matchers.items():
2673 for matcher_id, matcher in matchers.items():
2673 api_version = _get_matcher_api_version(matcher)
2674 api_version = _get_matcher_api_version(matcher)
2674 matcher_id = _get_matcher_id(matcher)
2675 matcher_id = _get_matcher_id(matcher)
2675
2676
2676 if matcher_id in self.disable_matchers:
2677 if matcher_id in self.disable_matchers:
2677 continue
2678 continue
2678
2679
2679 if matcher_id in results:
2680 if matcher_id in results:
2680 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
2681 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
2681
2682
2682 if matcher_id in suppressed_matchers:
2683 if matcher_id in suppressed_matchers:
2683 continue
2684 continue
2684
2685
2685 try:
2686 try:
2686 if api_version == 1:
2687 if api_version == 1:
2687 result = _convert_matcher_v1_result_to_v2(
2688 result = _convert_matcher_v1_result_to_v2(
2688 matcher(text), type=_UNKNOWN_TYPE
2689 matcher(text), type=_UNKNOWN_TYPE
2689 )
2690 )
2690 elif api_version == 2:
2691 elif api_version == 2:
2691 result = cast(MatcherAPIv2, matcher)(context)
2692 result = cast(MatcherAPIv2, matcher)(context)
2692 else:
2693 else:
2693 raise ValueError(f"Unsupported API version {api_version}")
2694 raise ValueError(f"Unsupported API version {api_version}")
2694 except:
2695 except:
2695 # Show the ugly traceback if the matcher causes an
2696 # Show the ugly traceback if the matcher causes an
2696 # exception, but do NOT crash the kernel!
2697 # exception, but do NOT crash the kernel!
2697 sys.excepthook(*sys.exc_info())
2698 sys.excepthook(*sys.exc_info())
2698 continue
2699 continue
2699
2700
2700 # set default value for matched fragment if suffix was not selected.
2701 # set default value for matched fragment if suffix was not selected.
2701 result["matched_fragment"] = result.get("matched_fragment", context.token)
2702 result["matched_fragment"] = result.get("matched_fragment", context.token)
2702
2703
2703 if not suppressed_matchers:
2704 if not suppressed_matchers:
2704 suppression_recommended = result.get("suppress", False)
2705 suppression_recommended = result.get("suppress", False)
2705
2706
2707 suppression_config = (
2708 self.suppress_competing_matchers.get(matcher_id, None)
2709 if isinstance(self.suppress_competing_matchers, dict)
2710 else self.suppress_competing_matchers
2711 )
2706 should_suppress = (
2712 should_suppress = (
2707 self.suppress_competing_matchers is True
2713 (suppression_config is True)
2708 or suppression_recommended
2714 or (suppression_recommended and (suppression_config is not False))
2709 or (
2710 isinstance(self.suppress_competing_matchers, dict)
2711 and self.suppress_competing_matchers[matcher_id]
2712 )
2713 ) and len(result["completions"])
2715 ) and len(result["completions"])
2714
2716
2715 if should_suppress:
2717 if should_suppress:
2716 suppression_exceptions = result.get("do_not_suppress", set())
2718 suppression_exceptions = result.get("do_not_suppress", set())
2717 try:
2719 try:
2718 to_suppress = set(suppression_recommended)
2720 to_suppress = set(suppression_recommended)
2719 except TypeError:
2721 except TypeError:
2720 to_suppress = set(matchers)
2722 to_suppress = set(matchers)
2721 suppressed_matchers = to_suppress - suppression_exceptions
2723 suppressed_matchers = to_suppress - suppression_exceptions
2722
2724
2723 new_results = {}
2725 new_results = {}
2724 for previous_matcher_id, previous_result in results.items():
2726 for previous_matcher_id, previous_result in results.items():
2725 if previous_matcher_id not in suppressed_matchers:
2727 if previous_matcher_id not in suppressed_matchers:
2726 new_results[previous_matcher_id] = previous_result
2728 new_results[previous_matcher_id] = previous_result
2727 results = new_results
2729 results = new_results
2728
2730
2729 results[matcher_id] = result
2731 results[matcher_id] = result
2730
2732
2731 _, matches = self._arrange_and_extract(
2733 _, matches = self._arrange_and_extract(
2732 results,
2734 results,
2733 # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission?
2735 # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission?
2734 # if it was an omission, we can remove the filtering step, otherwise remove this comment.
2736 # if it was an omission, we can remove the filtering step, otherwise remove this comment.
2735 skip_matchers={jedi_matcher_id},
2737 skip_matchers={jedi_matcher_id},
2736 abort_if_offset_changes=False,
2738 abort_if_offset_changes=False,
2737 )
2739 )
2738
2740
2739 # populate legacy stateful API
2741 # populate legacy stateful API
2740 self.matches = matches
2742 self.matches = matches
2741
2743
2742 return results
2744 return results
2743
2745
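A minimal, hedged sketch of consuming the dictionary described in the ``Returns`` section above, mirroring how the test suite later in this change calls the method; ``ip`` is assumed to be a running ``InteractiveShell`` obtained via ``get_ipython()``::

    text = "open('./fo"
    results = ip.Completer._complete(
        cursor_line=0, cursor_pos=len(text), full_text=text
    )
    # each value is a MatcherResult; its completions carry a .text attribute
    file_matches = results["IPCompleter.file_matcher"]["completions"]
    print([completion.text for completion in file_matches])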
2744 @staticmethod
2746 @staticmethod
2745 def _deduplicate(
2747 def _deduplicate(
2746 matches: Sequence[SimpleCompletion],
2748 matches: Sequence[SimpleCompletion],
2747 ) -> Iterable[SimpleCompletion]:
2749 ) -> Iterable[SimpleCompletion]:
2748 filtered_matches = {}
2750 filtered_matches = {}
2749 for match in matches:
2751 for match in matches:
2750 text = match.text
2752 text = match.text
2751 if (
2753 if (
2752 text not in filtered_matches
2754 text not in filtered_matches
2753 or filtered_matches[text].type == _UNKNOWN_TYPE
2755 or filtered_matches[text].type == _UNKNOWN_TYPE
2754 ):
2756 ):
2755 filtered_matches[text] = match
2757 filtered_matches[text] = match
2756
2758
2757 return filtered_matches.values()
2759 return filtered_matches.values()
2758
2760
2759 @staticmethod
2761 @staticmethod
2760 def _sort(matches: Sequence[SimpleCompletion]):
2762 def _sort(matches: Sequence[SimpleCompletion]):
2761 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
2763 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
2762
2764
2763 @context_matcher()
2765 @context_matcher()
2764 def fwd_unicode_matcher(self, context):
2766 def fwd_unicode_matcher(self, context):
2765 """Same as ``fwd_unicode_match``, but adopted to new Matcher API."""
2767 """Same as ``fwd_unicode_match``, but adopted to new Matcher API."""
2766 fragment, matches = self.latex_matches(context.token)
2768 fragment, matches = self.latex_matches(context.token)
2767 return _convert_matcher_v1_result_to_v2(
2769 return _convert_matcher_v1_result_to_v2(
2768 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2770 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2769 )
2771 )
2770
2772
2771 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
2773 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
2772 """
2774 """
2773 Forward match a string starting with a backslash with a list of
2775 Forward match a string starting with a backslash with a list of
2774 potential Unicode completions.
2776 potential Unicode completions.
2775
2777
2776 Will compute the list of Unicode character names on first call and cache it.
2778 Will compute the list of Unicode character names on first call and cache it.
2777
2779
2778 Returns
2780 Returns
2779 -------
2781 -------
2780 A tuple with:
2782 A tuple with:
2781 - matched text (empty if no matches)
2783 - matched text (empty if no matches)
2782 - list of potential completions (an empty tuple if there are no matches)
2784 - list of potential completions (an empty tuple if there are no matches)
2783
2785
2784 DEPRECATED: Deprecated since 8.6. Use `fwd_unicode_matcher` instead.
2786 DEPRECATED: Deprecated since 8.6. Use `fwd_unicode_matcher` instead.
2785 """
2787 """
2786 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
2788 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
2787 # We could do a faster match using a Trie.
2789 # We could do a faster match using a Trie.
2788
2790
2789 # Using pygtrie the following seems to work:
2791 # Using pygtrie the following seems to work:
2790
2792
2791 # s = PrefixSet()
2793 # s = PrefixSet()
2792
2794
2793 # for c in range(0,0x10FFFF + 1):
2795 # for c in range(0,0x10FFFF + 1):
2794 # try:
2796 # try:
2795 # s.add(unicodedata.name(chr(c)))
2797 # s.add(unicodedata.name(chr(c)))
2796 # except ValueError:
2798 # except ValueError:
2797 # pass
2799 # pass
2798 # [''.join(k) for k in s.iter(prefix)]
2800 # [''.join(k) for k in s.iter(prefix)]
2799
2801
2800 # But this needs to be timed, and it adds an extra dependency.
2802 # But this needs to be timed, and it adds an extra dependency.
2801
2803
2802 slashpos = text.rfind('\\')
2804 slashpos = text.rfind('\\')
2803 # if the text contains a backslash
2805 # if the text contains a backslash
2804 if slashpos > -1:
2806 if slashpos > -1:
2805 # PERF: It's important that we don't access self._unicode_names
2807 # PERF: It's important that we don't access self._unicode_names
2806 # until we're inside this if-block. _unicode_names is lazily
2808 # until we're inside this if-block. _unicode_names is lazily
2807 # initialized, and it takes a user-noticeable amount of time to
2809 # initialized, and it takes a user-noticeable amount of time to
2808 # initialize it, so we don't want to initialize it unless we're
2810 # initialize it, so we don't want to initialize it unless we're
2809 # actually going to use it.
2811 # actually going to use it.
2810 s = text[slashpos + 1 :]
2812 s = text[slashpos + 1 :]
2811 sup = s.upper()
2813 sup = s.upper()
2812 candidates = [x for x in self.unicode_names if x.startswith(sup)]
2814 candidates = [x for x in self.unicode_names if x.startswith(sup)]
2813 if candidates:
2815 if candidates:
2814 return s, candidates
2816 return s, candidates
2815 candidates = [x for x in self.unicode_names if sup in x]
2817 candidates = [x for x in self.unicode_names if sup in x]
2816 if candidates:
2818 if candidates:
2817 return s, candidates
2819 return s, candidates
2818 splitsup = sup.split(" ")
2820 splitsup = sup.split(" ")
2819 candidates = [
2821 candidates = [
2820 x for x in self.unicode_names if all(u in x for u in splitsup)
2822 x for x in self.unicode_names if all(u in x for u in splitsup)
2821 ]
2823 ]
2822 if candidates:
2824 if candidates:
2823 return s, candidates
2825 return s, candidates
2824
2826
2825 return "", ()
2827 return "", ()
2826
2828
2827 # if the text does not contain a backslash
2829 # if the text does not contain a backslash
2828 else:
2830 else:
2829 return '', ()
2831 return '', ()
2830
2832
2831 @property
2833 @property
2832 def unicode_names(self) -> List[str]:
2834 def unicode_names(self) -> List[str]:
2833 """List of names of unicode code points that can be completed.
2835 """List of names of unicode code points that can be completed.
2834
2836
2835 The list is lazily initialized on first access.
2837 The list is lazily initialized on first access.
2836 """
2838 """
2837 if self._unicode_names is None:
2839 if self._unicode_names is None:
2844 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2846 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2845
2847
2846 return self._unicode_names
2848 return self._unicode_names
2847
2849
2848 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
2850 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
2849 names = []
2851 names = []
2850 for start, stop in ranges:
2852 for start, stop in ranges:
2851 for c in range(start, stop):
2853 for c in range(start, stop):
2852 try:
2854 try:
2853 names.append(unicodedata.name(chr(c)))
2855 names.append(unicodedata.name(chr(c)))
2854 except ValueError:
2856 except ValueError:
2855 pass
2857 pass
2856 return names
2858 return names
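A small, hedged illustration of the contract documented on ``fwd_unicode_match`` above: the text after the last backslash is returned together with the candidate character names, and ``("", ())`` is returned when no backslash is present. ``ip`` is assumed to be a running ``InteractiveShell``::

    prefix, names = ip.Completer.fwd_unicode_match("\\ROMAN NUMERAL FI")
    assert prefix == "ROMAN NUMERAL FI"
    assert all(name.startswith("ROMAN NUMERAL FI") for name in names)

    # no backslash in the input: nothing to match
    assert ip.Completer.fwd_unicode_match("no backslash here") == ("", ())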
@@ -1,210 +1,210 b''
1 """Implementation of configuration-related magic functions.
1 """Implementation of configuration-related magic functions.
2 """
2 """
3 #-----------------------------------------------------------------------------
3 #-----------------------------------------------------------------------------
4 # Copyright (c) 2012 The IPython Development Team.
4 # Copyright (c) 2012 The IPython Development Team.
5 #
5 #
6 # Distributed under the terms of the Modified BSD License.
6 # Distributed under the terms of the Modified BSD License.
7 #
7 #
8 # The full license is in the file COPYING.txt, distributed with this software.
8 # The full license is in the file COPYING.txt, distributed with this software.
9 #-----------------------------------------------------------------------------
9 #-----------------------------------------------------------------------------
10
10
11 #-----------------------------------------------------------------------------
11 #-----------------------------------------------------------------------------
12 # Imports
12 # Imports
13 #-----------------------------------------------------------------------------
13 #-----------------------------------------------------------------------------
14
14
15 # Stdlib
15 # Stdlib
16 import re
16 import re
17
17
18 # Our own packages
18 # Our own packages
19 from IPython.core.error import UsageError
19 from IPython.core.error import UsageError
20 from IPython.core.magic import Magics, magics_class, line_magic
20 from IPython.core.magic import Magics, magics_class, line_magic
21 from logging import error
21 from logging import error
22
22
23 #-----------------------------------------------------------------------------
23 #-----------------------------------------------------------------------------
24 # Magic implementation classes
24 # Magic implementation classes
25 #-----------------------------------------------------------------------------
25 #-----------------------------------------------------------------------------
26
26
27 reg = re.compile(r'^\w+\.\w+$')
27 reg = re.compile(r'^\w+\.\w+$')
28 @magics_class
28 @magics_class
29 class ConfigMagics(Magics):
29 class ConfigMagics(Magics):
30
30
31 def __init__(self, shell):
31 def __init__(self, shell):
32 super(ConfigMagics, self).__init__(shell)
32 super(ConfigMagics, self).__init__(shell)
33 self.configurables = []
33 self.configurables = []
34
34
35 @line_magic
35 @line_magic
36 def config(self, s):
36 def config(self, s):
37 """configure IPython
37 """configure IPython
38
38
39 %config Class[.trait=value]
39 %config Class[.trait=value]
40
40
41 This magic exposes most of the IPython config system. Any
41 This magic exposes most of the IPython config system. Any
42 Configurable class should be able to be configured with the simple
42 Configurable class should be able to be configured with the simple
43 line::
43 line::
44
44
45 %config Class.trait=value
45 %config Class.trait=value
46
46
47 Where `value` will be resolved in the user's namespace, if it is an
47 Where `value` will be resolved in the user's namespace, if it is an
48 expression or variable name.
48 expression or variable name.
49
49
50 Examples
50 Examples
51 --------
51 --------
52
52
53 To see what classes are available for config, pass no arguments::
53 To see what classes are available for config, pass no arguments::
54
54
55 In [1]: %config
55 In [1]: %config
56 Available objects for config:
56 Available objects for config:
57 AliasManager
57 AliasManager
58 DisplayFormatter
58 DisplayFormatter
59 HistoryManager
59 HistoryManager
60 IPCompleter
60 IPCompleter
61 LoggingMagics
61 LoggingMagics
62 MagicsManager
62 MagicsManager
63 OSMagics
63 OSMagics
64 PrefilterManager
64 PrefilterManager
65 ScriptMagics
65 ScriptMagics
66 TerminalInteractiveShell
66 TerminalInteractiveShell
67
67
68 To view what is configurable on a given class, just pass the class
68 To view what is configurable on a given class, just pass the class
69 name::
69 name::
70
70
71 In [2]: %config IPCompleter
71 In [2]: %config IPCompleter
72 IPCompleter(Completer) options
72 IPCompleter(Completer) options
73 ----------------------------
73 ----------------------------
74 IPCompleter.backslash_combining_completions=<Bool>
74 IPCompleter.backslash_combining_completions=<Bool>
75 Enable unicode completions, e.g. \\alpha<tab> . Includes completion of latex
75 Enable unicode completions, e.g. \\alpha<tab> . Includes completion of latex
76 commands, unicode names, and expanding unicode characters back to latex
76 commands, unicode names, and expanding unicode characters back to latex
77 commands.
77 commands.
78 Current: True
78 Current: True
79 IPCompleter.debug=<Bool>
79 IPCompleter.debug=<Bool>
80 Enable debug for the Completer. Mostly print extra information for
80 Enable debug for the Completer. Mostly print extra information for
81 experimental jedi integration.
81 experimental jedi integration.
82 Current: False
82 Current: False
83 IPCompleter.disable_matchers=<list-item-1>...
83 IPCompleter.disable_matchers=<list-item-1>...
84 List of matchers to disable.
84 List of matchers to disable.
85 Current: []
85 Current: []
86 IPCompleter.greedy=<Bool>
86 IPCompleter.greedy=<Bool>
87 Activate greedy completion
87 Activate greedy completion
88 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
88 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
89 This will enable completion on elements of lists, results of function calls, etc.,
89 This will enable completion on elements of lists, results of function calls, etc.,
90 but can be unsafe because the code is actually evaluated on TAB.
90 but can be unsafe because the code is actually evaluated on TAB.
91 Current: False
91 Current: False
92 IPCompleter.jedi_compute_type_timeout=<Int>
92 IPCompleter.jedi_compute_type_timeout=<Int>
93 Experimental: restrict time (in milliseconds) during which Jedi can compute types.
93 Experimental: restrict time (in milliseconds) during which Jedi can compute types.
94 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
94 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
95 performance by preventing jedi from building its cache.
95 performance by preventing jedi from building its cache.
96 Current: 400
96 Current: 400
97 IPCompleter.limit_to__all__=<Bool>
97 IPCompleter.limit_to__all__=<Bool>
98 DEPRECATED as of version 5.0.
98 DEPRECATED as of version 5.0.
99 Instruct the completer to use __all__ for the completion
99 Instruct the completer to use __all__ for the completion
100 Specifically, when completing on ``object.<tab>``.
100 Specifically, when completing on ``object.<tab>``.
101 When True: only those names in obj.__all__ will be included.
101 When True: only those names in obj.__all__ will be included.
102 When False [default]: the __all__ attribute is ignored
102 When False [default]: the __all__ attribute is ignored
103 Current: False
103 Current: False
104 IPCompleter.merge_completions=<Bool>
104 IPCompleter.merge_completions=<Bool>
105 Whether to merge completion results into a single list
105 Whether to merge completion results into a single list
106 If False, only the completion results from the first non-empty
106 If False, only the completion results from the first non-empty
107 completer will be returned.
107 completer will be returned.
108 As of version 8.6.0, setting the value to ``False`` is an alias for:
108 As of version 8.6.0, setting the value to ``False`` is an alias for:
109 ``IPCompleter.suppress_competing_matchers = True``.
109 ``IPCompleter.suppress_competing_matchers = True``.
110 Current: True
110 Current: True
111 IPCompleter.omit__names=<Enum>
111 IPCompleter.omit__names=<Enum>
112 Instruct the completer to omit private method names
112 Instruct the completer to omit private method names
113 Specifically, when completing on ``object.<tab>``.
113 Specifically, when completing on ``object.<tab>``.
114 When 2 [default]: all names that start with '_' will be excluded.
114 When 2 [default]: all names that start with '_' will be excluded.
115 When 1: all 'magic' names (``__foo__``) will be excluded.
115 When 1: all 'magic' names (``__foo__``) will be excluded.
116 When 0: nothing will be excluded.
116 When 0: nothing will be excluded.
117 Choices: any of [0, 1, 2]
117 Choices: any of [0, 1, 2]
118 Current: 2
118 Current: 2
119 IPCompleter.profile_completions=<Bool>
119 IPCompleter.profile_completions=<Bool>
120 If True, emit profiling data for completion subsystem using cProfile.
120 If True, emit profiling data for completion subsystem using cProfile.
121 Current: False
121 Current: False
122 IPCompleter.profiler_output_dir=<Unicode>
122 IPCompleter.profiler_output_dir=<Unicode>
123 Template for path at which to output profile data for completions.
123 Template for path at which to output profile data for completions.
124 Current: '.completion_profiles'
124 Current: '.completion_profiles'
125 IPCompleter.suppress_competing_matchers=<Union>
125 IPCompleter.suppress_competing_matchers=<Union>
126 Whether to suppress completions from other *Matchers*.
126 Whether to suppress completions from other *Matchers*.
127 When set to ``None`` (default) the matchers will attempt to auto-detect
127 When set to ``None`` (default) the matchers will attempt to auto-detect
128 whether suppression of other matchers is desirable. For example, at the
128 whether suppression of other matchers is desirable. For example, at the
129 beginning of a line followed by `%` we expect a magic completion to be the
129 beginning of a line followed by `%` we expect a magic completion to be the
130 only applicable option, and after ``my_dict['`` we usually expect a
130 only applicable option, and after ``my_dict['`` we usually expect a
131 completion with an existing dictionary key.
131 completion with an existing dictionary key.
132 If you want to disable this heuristic and see completions from all matchers,
132 If you want to disable this heuristic and see completions from all matchers,
133 set ``IPCompleter.suppress_competing_matchers = False``. To disable the
133 set ``IPCompleter.suppress_competing_matchers = False``. To disable the
134 heuristic for specific matchers provide a dictionary mapping:
134 heuristic for specific matchers provide a dictionary mapping:
135 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher':
135 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher':
136 False}``.
136 False}``.
137 Set ``IPCompleter.suppress_competing_matchers = True`` to limit completions
137 Set ``IPCompleter.suppress_competing_matchers = True`` to limit completions
138 to the set of matchers with the highest priority; this is equivalent to
138 to the set of matchers with the highest priority; this is equivalent to
139 ``IPCompleter.merge_completions`` and can be beneficial for performance, but
139 ``IPCompleter.merge_completions`` and can be beneficial for performance, but
140 will sometimes omit relevant candidates from matchers further down the
140 will sometimes omit relevant candidates from matchers further down the
141 priority list.
141 priority list.
142 Current: False
142 Current: None
143 IPCompleter.use_jedi=<Bool>
143 IPCompleter.use_jedi=<Bool>
144 Experimental: Use Jedi to generate autocompletions. Defaults to True if jedi
144 Experimental: Use Jedi to generate autocompletions. Defaults to True if jedi
145 is installed.
145 is installed.
146 Current: True
146 Current: True
147
147
148 but the real use is in setting values::
148 but the real use is in setting values::
149
149
150 In [3]: %config IPCompleter.greedy = True
150 In [3]: %config IPCompleter.greedy = True
151
151
152 and these values are read from the user_ns if they are variables::
152 and these values are read from the user_ns if they are variables::
153
153
154 In [4]: feeling_greedy=False
154 In [4]: feeling_greedy=False
155
155
156 In [5]: %config IPCompleter.greedy = feeling_greedy
156 In [5]: %config IPCompleter.greedy = feeling_greedy
157
157
158 """
158 """
159 from traitlets.config.loader import Config
159 from traitlets.config.loader import Config
160 # some IPython objects are Configurable, but do not yet have
160 # some IPython objects are Configurable, but do not yet have
161 # any configurable traits. Exclude them from the effects of
161 # any configurable traits. Exclude them from the effects of
162 # this magic, as their presence is just noise:
162 # this magic, as their presence is just noise:
163 configurables = sorted(set([ c for c in self.shell.configurables
163 configurables = sorted(set([ c for c in self.shell.configurables
164 if c.__class__.class_traits(config=True)
164 if c.__class__.class_traits(config=True)
165 ]), key=lambda x: x.__class__.__name__)
165 ]), key=lambda x: x.__class__.__name__)
166 classnames = [ c.__class__.__name__ for c in configurables ]
166 classnames = [ c.__class__.__name__ for c in configurables ]
167
167
168 line = s.strip()
168 line = s.strip()
169 if not line:
169 if not line:
170 # print available configurable names
170 # print available configurable names
171 print("Available objects for config:")
171 print("Available objects for config:")
172 for name in classnames:
172 for name in classnames:
173 print(" ", name)
173 print(" ", name)
174 return
174 return
175 elif line in classnames:
175 elif line in classnames:
176 # `%config TerminalInteractiveShell` will print trait info for
176 # `%config TerminalInteractiveShell` will print trait info for
177 # TerminalInteractiveShell
177 # TerminalInteractiveShell
178 c = configurables[classnames.index(line)]
178 c = configurables[classnames.index(line)]
179 cls = c.__class__
179 cls = c.__class__
180 help = cls.class_get_help(c)
180 help = cls.class_get_help(c)
181 # strip leading '--' from cl-args:
181 # strip leading '--' from cl-args:
182 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
182 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
183 print(help)
183 print(help)
184 return
184 return
185 elif reg.match(line):
185 elif reg.match(line):
186 cls, attr = line.split('.')
186 cls, attr = line.split('.')
187 return getattr(configurables[classnames.index(cls)],attr)
187 return getattr(configurables[classnames.index(cls)],attr)
188 elif '=' not in line:
188 elif '=' not in line:
189 msg = "Invalid config statement: %r, "\
189 msg = "Invalid config statement: %r, "\
190 "should be `Class.trait = value`."
190 "should be `Class.trait = value`."
191
191
192 ll = line.lower()
192 ll = line.lower()
193 for classname in classnames:
193 for classname in classnames:
194 if ll == classname.lower():
194 if ll == classname.lower():
195 msg = msg + '\nDid you mean %s (note the case)?' % classname
195 msg = msg + '\nDid you mean %s (note the case)?' % classname
196 break
196 break
197
197
198 raise UsageError( msg % line)
198 raise UsageError( msg % line)
199
199
200 # otherwise, assume we are setting configurables.
200 # otherwise, assume we are setting configurables.
201 # leave quotes on args when splitting, because we want
201 # leave quotes on args when splitting, because we want
202 # unquoted args to eval in user_ns
202 # unquoted args to eval in user_ns
203 cfg = Config()
203 cfg = Config()
204 exec("cfg."+line, self.shell.user_ns, locals())
204 exec("cfg."+line, self.shell.user_ns, locals())
205
205
206 for configurable in configurables:
206 for configurable in configurables:
207 try:
207 try:
208 configurable.update_config(cfg)
208 configurable.update_config(cfg)
209 except Exception as e:
209 except Exception as e:
210 error(e)
210 error(e)
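The ``IPCompleter.suppress_competing_matchers`` help shown in the docstring above accepts ``None``, a boolean, or a per-matcher dictionary. A hedged sketch of setting it from a profile's ``ipython_config.py``; ``get_config()`` is the standard traitlets entry point in such files, and the matcher name is taken from the help text::

    c = get_config()

    # show completions from all matchers (disable the suppression heuristic)
    c.IPCompleter.suppress_competing_matchers = False

    # or: keep the heuristic, but never let the dictionary-key matcher
    # suppress the other matchers
    c.IPCompleter.suppress_competing_matchers = {
        "IPCompleter.dict_key_matcher": False,
    }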
@@ -1,1390 +1,1434 b''
1 # encoding: utf-8
1 # encoding: utf-8
2 """Tests for the IPython tab-completion machinery."""
2 """Tests for the IPython tab-completion machinery."""
3
3
4 # Copyright (c) IPython Development Team.
4 # Copyright (c) IPython Development Team.
5 # Distributed under the terms of the Modified BSD License.
5 # Distributed under the terms of the Modified BSD License.
6
6
7 import os
7 import os
8 import pytest
8 import pytest
9 import sys
9 import sys
10 import textwrap
10 import textwrap
11 import unittest
11 import unittest
12
12
13 from contextlib import contextmanager
13 from contextlib import contextmanager
14
14
15 from traitlets.config.loader import Config
15 from traitlets.config.loader import Config
16 from IPython import get_ipython
16 from IPython import get_ipython
17 from IPython.core import completer
17 from IPython.core import completer
18 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
18 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
19 from IPython.utils.generics import complete_object
19 from IPython.utils.generics import complete_object
20 from IPython.testing import decorators as dec
20 from IPython.testing import decorators as dec
21
21
22 from IPython.core.completer import (
22 from IPython.core.completer import (
23 Completion,
23 Completion,
24 provisionalcompleter,
24 provisionalcompleter,
25 match_dict_keys,
25 match_dict_keys,
26 _deduplicate_completions,
26 _deduplicate_completions,
27 completion_matcher,
27 completion_matcher,
28 SimpleCompletion,
28 SimpleCompletion,
29 CompletionContext,
29 CompletionContext,
30 )
30 )
31
31
32 # -----------------------------------------------------------------------------
32 # -----------------------------------------------------------------------------
33 # Test functions
33 # Test functions
34 # -----------------------------------------------------------------------------
34 # -----------------------------------------------------------------------------
35
35
36 def recompute_unicode_ranges():
36 def recompute_unicode_ranges():
37 """
37 """
38 utility to recompute the largest range of unicode code points without any named characters
38 utility to recompute the largest range of unicode code points without any named characters
39
39
40 used to recompute the gap in the global _UNICODE_RANGES of completer.py
40 used to recompute the gap in the global _UNICODE_RANGES of completer.py
41 """
41 """
42 import itertools
42 import itertools
43 import unicodedata
43 import unicodedata
44 valid = []
44 valid = []
45 for c in range(0,0x10FFFF + 1):
45 for c in range(0,0x10FFFF + 1):
46 try:
46 try:
47 unicodedata.name(chr(c))
47 unicodedata.name(chr(c))
48 except ValueError:
48 except ValueError:
49 continue
49 continue
50 valid.append(c)
50 valid.append(c)
51
51
52 def ranges(i):
52 def ranges(i):
53 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
53 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
54 b = list(b)
54 b = list(b)
55 yield b[0][1], b[-1][1]
55 yield b[0][1], b[-1][1]
56
56
57 rg = list(ranges(valid))
57 rg = list(ranges(valid))
58 lens = []
58 lens = []
59 gap_lens = []
59 gap_lens = []
60 pstart, pstop = 0,0
60 pstart, pstop = 0,0
61 for start, stop in rg:
61 for start, stop in rg:
62 lens.append(stop-start)
62 lens.append(stop-start)
63 gap_lens.append((start - pstop, hex(pstop), hex(start), f'{round((start - pstop)/0xe01f0*100)}%'))
63 gap_lens.append((start - pstop, hex(pstop), hex(start), f'{round((start - pstop)/0xe01f0*100)}%'))
64 pstart, pstop = start, stop
64 pstart, pstop = start, stop
65
65
66 return sorted(gap_lens)[-1]
66 return sorted(gap_lens)[-1]
67
67
68
68
69
69
70 def test_unicode_range():
70 def test_unicode_range():
71 """
71 """
72 Test that the ranges we test for unicode names give the same number of
72 Test that the ranges we test for unicode names give the same number of
73 results as testing the full length.
73 results as testing the full length.
74 """
74 """
75 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
75 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
76
76
77 expected_list = _unicode_name_compute([(0, 0x110000)])
77 expected_list = _unicode_name_compute([(0, 0x110000)])
78 test = _unicode_name_compute(_UNICODE_RANGES)
78 test = _unicode_name_compute(_UNICODE_RANGES)
79 len_exp = len(expected_list)
79 len_exp = len(expected_list)
80 len_test = len(test)
80 len_test = len(test)
81
81
82 # do not inline the len() or on error pytest will try to print the 130 000 +
82 # do not inline the len() or on error pytest will try to print the 130 000 +
83 # elements.
83 # elements.
84 message = None
84 message = None
85 if len_exp != len_test or len_exp > 131808:
85 if len_exp != len_test or len_exp > 131808:
86 size, start, stop, prct = recompute_unicode_ranges()
86 size, start, stop, prct = recompute_unicode_ranges()
87 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
87 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
88 likely due to a new release of Python. We've found that the biggest gap
88 likely due to a new release of Python. We've found that the biggest gap
89 in unicode characters has reduced in size to {size} characters
89 in unicode characters has reduced in size to {size} characters
90 ({prct}), from {start} to {stop}. In completer.py you likely need to update to
90 ({prct}), from {start} to {stop}. In completer.py you likely need to update to
91
91
92 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
92 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
93
93
94 And update the assertion below to use
94 And update the assertion below to use
95
95
96 len_exp <= {len_exp}
96 len_exp <= {len_exp}
97 """
97 """
98 assert len_exp == len_test, message
98 assert len_exp == len_test, message
99
99
100 # fail if new unicode symbols have been added.
100 # fail if new unicode symbols have been added.
101 assert len_exp <= 138552, message
101 assert len_exp <= 138552, message
102
102
103
103
104 @contextmanager
104 @contextmanager
105 def greedy_completion():
105 def greedy_completion():
106 ip = get_ipython()
106 ip = get_ipython()
107 greedy_original = ip.Completer.greedy
107 greedy_original = ip.Completer.greedy
108 try:
108 try:
109 ip.Completer.greedy = True
109 ip.Completer.greedy = True
110 yield
110 yield
111 finally:
111 finally:
112 ip.Completer.greedy = greedy_original
112 ip.Completer.greedy = greedy_original
113
113
114
114
115 @contextmanager
115 @contextmanager
116 def custom_matchers(matchers):
116 def custom_matchers(matchers):
117 ip = get_ipython()
117 ip = get_ipython()
118 try:
118 try:
119 ip.Completer.custom_matchers.extend(matchers)
119 ip.Completer.custom_matchers.extend(matchers)
120 yield
120 yield
121 finally:
121 finally:
122 ip.Completer.custom_matchers.clear()
122 ip.Completer.custom_matchers.clear()
123
123
124
124
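A hedged sketch of how the helper above pairs with the matcher API imported at the top of this file; the identifier and the completion text are purely illustrative and not part of the real test suite::

    @completion_matcher(identifier="sketch_matcher")
    def sketch_matcher(text):
        # API v1 matcher: receives the current token, returns completion strings
        return ["sketch_completion"]

    def test_sketch_matcher():
        ip = get_ipython()
        with custom_matchers([sketch_matcher]):
            _, matches = ip.complete("sketch_comp")
            assert "sketch_completion" in matches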
125 def test_protect_filename():
125 def test_protect_filename():
126 if sys.platform == "win32":
126 if sys.platform == "win32":
127 pairs = [
127 pairs = [
128 ("abc", "abc"),
128 ("abc", "abc"),
129 (" abc", '" abc"'),
129 (" abc", '" abc"'),
130 ("a bc", '"a bc"'),
130 ("a bc", '"a bc"'),
131 ("a bc", '"a bc"'),
131 ("a bc", '"a bc"'),
132 (" bc", '" bc"'),
132 (" bc", '" bc"'),
133 ]
133 ]
134 else:
134 else:
135 pairs = [
135 pairs = [
136 ("abc", "abc"),
136 ("abc", "abc"),
137 (" abc", r"\ abc"),
137 (" abc", r"\ abc"),
138 ("a bc", r"a\ bc"),
138 ("a bc", r"a\ bc"),
139 ("a bc", r"a\ \ bc"),
139 ("a bc", r"a\ \ bc"),
140 (" bc", r"\ \ bc"),
140 (" bc", r"\ \ bc"),
141 # On posix, we also protect parens and other special characters.
141 # On posix, we also protect parens and other special characters.
142 ("a(bc", r"a\(bc"),
142 ("a(bc", r"a\(bc"),
143 ("a)bc", r"a\)bc"),
143 ("a)bc", r"a\)bc"),
144 ("a( )bc", r"a\(\ \)bc"),
144 ("a( )bc", r"a\(\ \)bc"),
145 ("a[1]bc", r"a\[1\]bc"),
145 ("a[1]bc", r"a\[1\]bc"),
146 ("a{1}bc", r"a\{1\}bc"),
146 ("a{1}bc", r"a\{1\}bc"),
147 ("a#bc", r"a\#bc"),
147 ("a#bc", r"a\#bc"),
148 ("a?bc", r"a\?bc"),
148 ("a?bc", r"a\?bc"),
149 ("a=bc", r"a\=bc"),
149 ("a=bc", r"a\=bc"),
150 ("a\\bc", r"a\\bc"),
150 ("a\\bc", r"a\\bc"),
151 ("a|bc", r"a\|bc"),
151 ("a|bc", r"a\|bc"),
152 ("a;bc", r"a\;bc"),
152 ("a;bc", r"a\;bc"),
153 ("a:bc", r"a\:bc"),
153 ("a:bc", r"a\:bc"),
154 ("a'bc", r"a\'bc"),
154 ("a'bc", r"a\'bc"),
155 ("a*bc", r"a\*bc"),
155 ("a*bc", r"a\*bc"),
156 ('a"bc', r"a\"bc"),
156 ('a"bc', r"a\"bc"),
157 ("a^bc", r"a\^bc"),
157 ("a^bc", r"a\^bc"),
158 ("a&bc", r"a\&bc"),
158 ("a&bc", r"a\&bc"),
159 ]
159 ]
160 # run the actual tests
160 # run the actual tests
161 for s1, s2 in pairs:
161 for s1, s2 in pairs:
162 s1p = completer.protect_filename(s1)
162 s1p = completer.protect_filename(s1)
163 assert s1p == s2
163 assert s1p == s2
164
164
165
165
166 def check_line_split(splitter, test_specs):
166 def check_line_split(splitter, test_specs):
167 for part1, part2, split in test_specs:
167 for part1, part2, split in test_specs:
168 cursor_pos = len(part1)
168 cursor_pos = len(part1)
169 line = part1 + part2
169 line = part1 + part2
170 out = splitter.split_line(line, cursor_pos)
170 out = splitter.split_line(line, cursor_pos)
171 assert out == split
171 assert out == split
172
172
173
173
174 def test_line_split():
174 def test_line_split():
175 """Basic line splitter test with default specs."""
175 """Basic line splitter test with default specs."""
176 sp = completer.CompletionSplitter()
176 sp = completer.CompletionSplitter()
177 # The format of the test specs is: part1, part2, expected answer. Parts 1
177 # The format of the test specs is: part1, part2, expected answer. Parts 1
178 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
178 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
179 # was at the end of part1. So an empty part2 represents someone hitting
179 # was at the end of part1. So an empty part2 represents someone hitting
180 # tab at the end of the line, the most common case.
180 # tab at the end of the line, the most common case.
181 t = [
181 t = [
182 ("run some/scrip", "", "some/scrip"),
182 ("run some/scrip", "", "some/scrip"),
183 ("run scripts/er", "ror.py foo", "scripts/er"),
183 ("run scripts/er", "ror.py foo", "scripts/er"),
184 ("echo $HOM", "", "HOM"),
184 ("echo $HOM", "", "HOM"),
185 ("print sys.pa", "", "sys.pa"),
185 ("print sys.pa", "", "sys.pa"),
186 ("print(sys.pa", "", "sys.pa"),
186 ("print(sys.pa", "", "sys.pa"),
187 ("execfile('scripts/er", "", "scripts/er"),
187 ("execfile('scripts/er", "", "scripts/er"),
188 ("a[x.", "", "x."),
188 ("a[x.", "", "x."),
189 ("a[x.", "y", "x."),
189 ("a[x.", "y", "x."),
190 ('cd "some_file/', "", "some_file/"),
190 ('cd "some_file/', "", "some_file/"),
191 ]
191 ]
192 check_line_split(sp, t)
192 check_line_split(sp, t)
193 # Ensure splitting works OK with unicode by re-running the tests with
193 # Ensure splitting works OK with unicode by re-running the tests with
194 # all inputs turned into unicode
194 # all inputs turned into unicode
195 check_line_split(sp, [map(str, p) for p in t])
195 check_line_split(sp, [map(str, p) for p in t])
196
196
197
197
198 class NamedInstanceClass:
198 class NamedInstanceClass:
199 instances = {}
199 instances = {}
200
200
201 def __init__(self, name):
201 def __init__(self, name):
202 self.instances[name] = self
202 self.instances[name] = self
203
203
204 @classmethod
204 @classmethod
205 def _ipython_key_completions_(cls):
205 def _ipython_key_completions_(cls):
206 return cls.instances.keys()
206 return cls.instances.keys()
207
207
208
208
209 class KeyCompletable:
209 class KeyCompletable:
210 def __init__(self, things=()):
210 def __init__(self, things=()):
211 self.things = things
211 self.things = things
212
212
213 def _ipython_key_completions_(self):
213 def _ipython_key_completions_(self):
214 return list(self.things)
214 return list(self.things)
215
215
216
216
217 class TestCompleter(unittest.TestCase):
217 class TestCompleter(unittest.TestCase):
218 def setUp(self):
218 def setUp(self):
219 """
219 """
220 We want to silence all PendingDeprecationWarning when testing the completer
220 We want to silence all PendingDeprecationWarning when testing the completer
221 """
221 """
222 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
222 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
223 self._assertwarns.__enter__()
223 self._assertwarns.__enter__()
224
224
225 def tearDown(self):
225 def tearDown(self):
226 try:
226 try:
227 self._assertwarns.__exit__(None, None, None)
227 self._assertwarns.__exit__(None, None, None)
228 except AssertionError:
228 except AssertionError:
229 pass
229 pass
230
230
231 def test_custom_completion_error(self):
231 def test_custom_completion_error(self):
232 """Test that errors from custom attribute completers are silenced."""
232 """Test that errors from custom attribute completers are silenced."""
233 ip = get_ipython()
233 ip = get_ipython()
234
234
235 class A:
235 class A:
236 pass
236 pass
237
237
238 ip.user_ns["x"] = A()
238 ip.user_ns["x"] = A()
239
239
240 @complete_object.register(A)
240 @complete_object.register(A)
241 def complete_A(a, existing_completions):
241 def complete_A(a, existing_completions):
242 raise TypeError("this should be silenced")
242 raise TypeError("this should be silenced")
243
243
244 ip.complete("x.")
244 ip.complete("x.")
245
245
246 def test_custom_completion_ordering(self):
246 def test_custom_completion_ordering(self):
247 """Test that errors from custom attribute completers are silenced."""
247 """Test that errors from custom attribute completers are silenced."""
248 ip = get_ipython()
248 ip = get_ipython()
249
249
250 _, matches = ip.complete('in')
250 _, matches = ip.complete('in')
251 assert matches.index('input') < matches.index('int')
251 assert matches.index('input') < matches.index('int')
252
252
253 def complete_example(a):
253 def complete_example(a):
254 return ['example2', 'example1']
254 return ['example2', 'example1']
255
255
256 ip.Completer.custom_completers.add_re('ex*', complete_example)
256 ip.Completer.custom_completers.add_re('ex*', complete_example)
257 _, matches = ip.complete('ex')
257 _, matches = ip.complete('ex')
258 assert matches.index('example2') < matches.index('example1')
258 assert matches.index('example2') < matches.index('example1')
259
259
260 def test_unicode_completions(self):
260 def test_unicode_completions(self):
261 ip = get_ipython()
261 ip = get_ipython()
262 # Some strings that trigger different types of completion. Check them both
262 # Some strings that trigger different types of completion. Check them both
263 # in str and unicode forms
263 # in str and unicode forms
264 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
264 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
265 for t in s + list(map(str, s)):
265 for t in s + list(map(str, s)):
266 # We don't need to check exact completion values (they may change
266 # We don't need to check exact completion values (they may change
267 # depending on the state of the namespace), but at least no exceptions
267 # depending on the state of the namespace), but at least no exceptions
268 # should be thrown and the return value should be a pair of text, list
268 # should be thrown and the return value should be a pair of text, list
269 # values.
269 # values.
270 text, matches = ip.complete(t)
270 text, matches = ip.complete(t)
271 self.assertIsInstance(text, str)
271 self.assertIsInstance(text, str)
272 self.assertIsInstance(matches, list)
272 self.assertIsInstance(matches, list)
273
273
274 def test_latex_completions(self):
274 def test_latex_completions(self):
275 from IPython.core.latex_symbols import latex_symbols
275 from IPython.core.latex_symbols import latex_symbols
276 import random
276 import random
277
277
278 ip = get_ipython()
278 ip = get_ipython()
279 # Test some random unicode symbols
279 # Test some random unicode symbols
280 keys = random.sample(sorted(latex_symbols), 10)
280 keys = random.sample(sorted(latex_symbols), 10)
281 for k in keys:
281 for k in keys:
282 text, matches = ip.complete(k)
282 text, matches = ip.complete(k)
283 self.assertEqual(text, k)
283 self.assertEqual(text, k)
284 self.assertEqual(matches, [latex_symbols[k]])
284 self.assertEqual(matches, [latex_symbols[k]])
285 # Test a more complex line
285 # Test a more complex line
286 text, matches = ip.complete("print(\\alpha")
286 text, matches = ip.complete("print(\\alpha")
287 self.assertEqual(text, "\\alpha")
287 self.assertEqual(text, "\\alpha")
288 self.assertEqual(matches[0], latex_symbols["\\alpha"])
288 self.assertEqual(matches[0], latex_symbols["\\alpha"])
289 # Test multiple matching latex symbols
289 # Test multiple matching latex symbols
290 text, matches = ip.complete("\\al")
290 text, matches = ip.complete("\\al")
291 self.assertIn("\\alpha", matches)
291 self.assertIn("\\alpha", matches)
292 self.assertIn("\\aleph", matches)
292 self.assertIn("\\aleph", matches)
293
293
294 def test_latex_no_results(self):
294 def test_latex_no_results(self):
295 """
295 """
296 forward latex should really return nothing in either field if nothing is found.
296 forward latex should really return nothing in either field if nothing is found.
297 """
297 """
298 ip = get_ipython()
298 ip = get_ipython()
299 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
299 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
300 self.assertEqual(text, "")
300 self.assertEqual(text, "")
301 self.assertEqual(matches, ())
301 self.assertEqual(matches, ())
302
302
303 def test_back_latex_completion(self):
303 def test_back_latex_completion(self):
304 ip = get_ipython()
304 ip = get_ipython()
305
305
306 # do not return more than 1 match for \beta, only the latex one.
306 # do not return more than 1 match for \beta, only the latex one.
307 name, matches = ip.complete("\\β")
307 name, matches = ip.complete("\\β")
308 self.assertEqual(matches, ["\\beta"])
308 self.assertEqual(matches, ["\\beta"])
309
309
310 def test_back_unicode_completion(self):
310 def test_back_unicode_completion(self):
311 ip = get_ipython()
311 ip = get_ipython()
312
312
313 name, matches = ip.complete("\\Ⅴ")
313 name, matches = ip.complete("\\Ⅴ")
314 self.assertEqual(matches, ["\\ROMAN NUMERAL FIVE"])
314 self.assertEqual(matches, ["\\ROMAN NUMERAL FIVE"])
315
315
316 def test_forward_unicode_completion(self):
316 def test_forward_unicode_completion(self):
317 ip = get_ipython()
317 ip = get_ipython()
318
318
319 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
319 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
320 self.assertEqual(matches, ["Ⅴ"]) # This is not a V
320 self.assertEqual(matches, ["Ⅴ"]) # This is not a V
321 self.assertEqual(matches, ["\u2164"]) # same as above but explicit.
321 self.assertEqual(matches, ["\u2164"]) # same as above but explicit.
322
322
323 def test_delim_setting(self):
323 def test_delim_setting(self):
324 sp = completer.CompletionSplitter()
324 sp = completer.CompletionSplitter()
325 sp.delims = " "
325 sp.delims = " "
326 self.assertEqual(sp.delims, " ")
326 self.assertEqual(sp.delims, " ")
327 self.assertEqual(sp._delim_expr, r"[\ ]")
327 self.assertEqual(sp._delim_expr, r"[\ ]")
328
328
329 def test_spaces(self):
329 def test_spaces(self):
330 """Test with only spaces as split chars."""
330 """Test with only spaces as split chars."""
331 sp = completer.CompletionSplitter()
331 sp = completer.CompletionSplitter()
332 sp.delims = " "
332 sp.delims = " "
333 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
333 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
334 check_line_split(sp, t)
334 check_line_split(sp, t)
335
335
336 def test_has_open_quotes1(self):
336 def test_has_open_quotes1(self):
337 for s in ["'", "'''", "'hi' '"]:
337 for s in ["'", "'''", "'hi' '"]:
338 self.assertEqual(completer.has_open_quotes(s), "'")
338 self.assertEqual(completer.has_open_quotes(s), "'")
339
339
340 def test_has_open_quotes2(self):
340 def test_has_open_quotes2(self):
341 for s in ['"', '"""', '"hi" "']:
341 for s in ['"', '"""', '"hi" "']:
342 self.assertEqual(completer.has_open_quotes(s), '"')
342 self.assertEqual(completer.has_open_quotes(s), '"')
343
343
344 def test_has_open_quotes3(self):
344 def test_has_open_quotes3(self):
345 for s in ["''", "''' '''", "'hi' 'ipython'"]:
345 for s in ["''", "''' '''", "'hi' 'ipython'"]:
346 self.assertFalse(completer.has_open_quotes(s))
346 self.assertFalse(completer.has_open_quotes(s))
347
347
348 def test_has_open_quotes4(self):
348 def test_has_open_quotes4(self):
349 for s in ['""', '""" """', '"hi" "ipython"']:
349 for s in ['""', '""" """', '"hi" "ipython"']:
350 self.assertFalse(completer.has_open_quotes(s))
350 self.assertFalse(completer.has_open_quotes(s))
351
351
352 @pytest.mark.xfail(
352 @pytest.mark.xfail(
353 sys.platform == "win32", reason="abspath completions fail on Windows"
353 sys.platform == "win32", reason="abspath completions fail on Windows"
354 )
354 )
355 def test_abspath_file_completions(self):
355 def test_abspath_file_completions(self):
356 ip = get_ipython()
356 ip = get_ipython()
357 with TemporaryDirectory() as tmpdir:
357 with TemporaryDirectory() as tmpdir:
358 prefix = os.path.join(tmpdir, "foo")
358 prefix = os.path.join(tmpdir, "foo")
359 suffixes = ["1", "2"]
359 suffixes = ["1", "2"]
360 names = [prefix + s for s in suffixes]
360 names = [prefix + s for s in suffixes]
361 for n in names:
361 for n in names:
362 open(n, "w", encoding="utf-8").close()
362 open(n, "w", encoding="utf-8").close()
363
363
364 # Check simple completion
364 # Check simple completion
365 c = ip.complete(prefix)[1]
365 c = ip.complete(prefix)[1]
366 self.assertEqual(c, names)
366 self.assertEqual(c, names)
367
367
368 # Now check with a function call
368 # Now check with a function call
369 cmd = 'a = f("%s' % prefix
369 cmd = 'a = f("%s' % prefix
370 c = ip.complete(prefix, cmd)[1]
370 c = ip.complete(prefix, cmd)[1]
371 comp = [prefix + s for s in suffixes]
371 comp = [prefix + s for s in suffixes]
372 self.assertEqual(c, comp)
372 self.assertEqual(c, comp)
373
373
374 def test_local_file_completions(self):
374 def test_local_file_completions(self):
375 ip = get_ipython()
375 ip = get_ipython()
376 with TemporaryWorkingDirectory():
376 with TemporaryWorkingDirectory():
377 prefix = "./foo"
377 prefix = "./foo"
378 suffixes = ["1", "2"]
378 suffixes = ["1", "2"]
379 names = [prefix + s for s in suffixes]
379 names = [prefix + s for s in suffixes]
380 for n in names:
380 for n in names:
381 open(n, "w", encoding="utf-8").close()
381 open(n, "w", encoding="utf-8").close()
382
382
383 # Check simple completion
383 # Check simple completion
384 c = ip.complete(prefix)[1]
384 c = ip.complete(prefix)[1]
385 self.assertEqual(c, names)
385 self.assertEqual(c, names)
386
386
387 # Now check with a function call
387 # Now check with a function call
388 cmd = 'a = f("%s' % prefix
388 cmd = 'a = f("%s' % prefix
389 c = ip.complete(prefix, cmd)[1]
389 c = ip.complete(prefix, cmd)[1]
390 comp = {prefix + s for s in suffixes}
390 comp = {prefix + s for s in suffixes}
391 self.assertTrue(comp.issubset(set(c)))
391 self.assertTrue(comp.issubset(set(c)))
392
392
393 def test_quoted_file_completions(self):
393 def test_quoted_file_completions(self):
394 ip = get_ipython()
394 ip = get_ipython()
395
395
396 def _(text):
396 def _(text):
397 return ip.Completer._complete(
397 return ip.Completer._complete(
398 cursor_line=0, cursor_pos=len(text), full_text=text
398 cursor_line=0, cursor_pos=len(text), full_text=text
399 )["IPCompleter.file_matcher"]["completions"]
399 )["IPCompleter.file_matcher"]["completions"]
400
400
401 with TemporaryWorkingDirectory():
401 with TemporaryWorkingDirectory():
402 name = "foo'bar"
402 name = "foo'bar"
403 open(name, "w", encoding="utf-8").close()
403 open(name, "w", encoding="utf-8").close()
404
404
405 # Don't escape on Windows
406 escaped = name if sys.platform == "win32" else "foo\\'bar"
406 escaped = name if sys.platform == "win32" else "foo\\'bar"
407
407
408 # Single quote matches embedded single quote
408 # Single quote matches embedded single quote
409 c = _("open('foo")[0]
409 c = _("open('foo")[0]
410 self.assertEqual(c.text, escaped)
410 self.assertEqual(c.text, escaped)
411
411
412 # Double quote requires no escape
412 # Double quote requires no escape
413 c = _('open("foo')[0]
413 c = _('open("foo')[0]
414 self.assertEqual(c.text, name)
414 self.assertEqual(c.text, name)
415
415
416 # No quote requires an escape
416 # No quote requires an escape
417 c = _("%ls foo")[0]
417 c = _("%ls foo")[0]
418 self.assertEqual(c.text, escaped)
418 self.assertEqual(c.text, escaped)
419
419
420 def test_all_completions_dups(self):
420 def test_all_completions_dups(self):
421 """
421 """
422 Make sure the output of `IPCompleter.all_completions` does not have
422 Make sure the output of `IPCompleter.all_completions` does not have
423 duplicated prefixes.
423 duplicated prefixes.
424 """
424 """
425 ip = get_ipython()
425 ip = get_ipython()
426 c = ip.Completer
426 c = ip.Completer
427 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
427 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
428 for jedi_status in [True, False]:
428 for jedi_status in [True, False]:
429 with provisionalcompleter():
429 with provisionalcompleter():
430 ip.Completer.use_jedi = jedi_status
430 ip.Completer.use_jedi = jedi_status
431 matches = c.all_completions("TestCl")
431 matches = c.all_completions("TestCl")
432 assert matches == ["TestClass"], (jedi_status, matches)
432 assert matches == ["TestClass"], (jedi_status, matches)
433 matches = c.all_completions("TestClass.")
433 matches = c.all_completions("TestClass.")
434 assert len(matches) > 2, (jedi_status, matches)
434 assert len(matches) > 2, (jedi_status, matches)
435 matches = c.all_completions("TestClass.a")
435 matches = c.all_completions("TestClass.a")
436 assert matches == ['TestClass.a', 'TestClass.a1'], jedi_status
436 assert matches == ['TestClass.a', 'TestClass.a1'], jedi_status
437
437
438 def test_jedi(self):
438 def test_jedi(self):
439 """
439 """
440 A couple of issues we had with Jedi.
441 """
441 """
442 ip = get_ipython()
442 ip = get_ipython()
443
443
444 def _test_complete(reason, s, comp, start=None, end=None):
444 def _test_complete(reason, s, comp, start=None, end=None):
445 l = len(s)
445 l = len(s)
446 start = start if start is not None else l
446 start = start if start is not None else l
447 end = end if end is not None else l
447 end = end if end is not None else l
448 with provisionalcompleter():
448 with provisionalcompleter():
449 ip.Completer.use_jedi = True
449 ip.Completer.use_jedi = True
450 completions = set(ip.Completer.completions(s, l))
450 completions = set(ip.Completer.completions(s, l))
451 ip.Completer.use_jedi = False
451 ip.Completer.use_jedi = False
452 assert Completion(start, end, comp) in completions, reason
452 assert Completion(start, end, comp) in completions, reason
453
453
454 def _test_not_complete(reason, s, comp):
454 def _test_not_complete(reason, s, comp):
455 l = len(s)
455 l = len(s)
456 with provisionalcompleter():
456 with provisionalcompleter():
457 ip.Completer.use_jedi = True
457 ip.Completer.use_jedi = True
458 completions = set(ip.Completer.completions(s, l))
458 completions = set(ip.Completer.completions(s, l))
459 ip.Completer.use_jedi = False
459 ip.Completer.use_jedi = False
460 assert Completion(l, l, comp) not in completions, reason
460 assert Completion(l, l, comp) not in completions, reason
461
461
462 import jedi
462 import jedi
463
463
464 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
464 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
465 if jedi_version > (0, 10):
465 if jedi_version > (0, 10):
466 _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
466 _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
467 _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
467 _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
468 _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
468 _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
469 _test_complete("cover duplicate completions", "im", "import", 0, 2)
469 _test_complete("cover duplicate completions", "im", "import", 0, 2)
470
470
471 _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")
471 _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")
472
472
473 def test_completion_have_signature(self):
473 def test_completion_have_signature(self):
474 """
474 """
475 Let's make sure Jedi is capable of pulling out the signature of the function we are completing.
476 """
476 """
477 ip = get_ipython()
477 ip = get_ipython()
478 with provisionalcompleter():
478 with provisionalcompleter():
479 ip.Completer.use_jedi = True
479 ip.Completer.use_jedi = True
480 completions = ip.Completer.completions("ope", 3)
480 completions = ip.Completer.completions("ope", 3)
481 c = next(completions) # should be `open`
481 c = next(completions) # should be `open`
482 ip.Completer.use_jedi = False
482 ip.Completer.use_jedi = False
483 assert "file" in c.signature, "Signature of function was not found by completer"
483 assert "file" in c.signature, "Signature of function was not found by completer"
484 assert (
484 assert (
485 "encoding" in c.signature
485 "encoding" in c.signature
486 ), "Signature of function was not found by completer"
486 ), "Signature of function was not found by completer"
487
487
488 def test_completions_have_type(self):
488 def test_completions_have_type(self):
489 """
489 """
490 Let's make sure matchers provide the completion type.
491 """
491 """
492 ip = get_ipython()
492 ip = get_ipython()
493 with provisionalcompleter():
493 with provisionalcompleter():
494 ip.Completer.use_jedi = False
494 ip.Completer.use_jedi = False
495 completions = ip.Completer.completions("%tim", 3)
495 completions = ip.Completer.completions("%tim", 3)
496 c = next(completions) # should be `%time` or similar
496 c = next(completions) # should be `%time` or similar
497 assert c.type == "magic", "Type of magic was not assigned by completer"
497 assert c.type == "magic", "Type of magic was not assigned by completer"
498
498
499 @pytest.mark.xfail(reason="Known failure on jedi<=0.18.0")
499 @pytest.mark.xfail(reason="Known failure on jedi<=0.18.0")
500 def test_deduplicate_completions(self):
500 def test_deduplicate_completions(self):
501 """
501 """
502 Test that completions are correctly deduplicated (even if ranges are not the same)
502 Test that completions are correctly deduplicated (even if ranges are not the same)
503 """
503 """
504 ip = get_ipython()
504 ip = get_ipython()
505 ip.ex(
505 ip.ex(
506 textwrap.dedent(
506 textwrap.dedent(
507 """
507 """
508 class Z:
508 class Z:
509 zoo = 1
509 zoo = 1
510 """
510 """
511 )
511 )
512 )
512 )
513 with provisionalcompleter():
513 with provisionalcompleter():
514 ip.Completer.use_jedi = True
514 ip.Completer.use_jedi = True
515 l = list(
515 l = list(
516 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
516 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
517 )
517 )
518 ip.Completer.use_jedi = False
518 ip.Completer.use_jedi = False
519
519
520 assert len(l) == 1, "Completions (Z.z<tab>) should deduplicate to a single entry: %s" % l
521 assert l[0].text == "zoo" # and not `it.accumulate`
521 assert l[0].text == "zoo" # and not `it.accumulate`
522
522
523 def test_greedy_completions(self):
523 def test_greedy_completions(self):
524 """
524 """
525 Test the capability of the Greedy completer.
525 Test the capability of the Greedy completer.
526
526
527 Most of the tests here do not really show off the greedy completer; as proof,
528 each of the texts below now passes with Jedi. The greedy completer is capable of more.
529
529
530 See the :any:`test_dict_key_completion_contexts`
530 See the :any:`test_dict_key_completion_contexts`
531
531
532 """
532 """
533 ip = get_ipython()
533 ip = get_ipython()
534 ip.ex("a=list(range(5))")
534 ip.ex("a=list(range(5))")
535 _, c = ip.complete(".", line="a[0].")
535 _, c = ip.complete(".", line="a[0].")
536 self.assertFalse(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
536 self.assertFalse(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
537
537
538 def _(line, cursor_pos, expect, message, completion):
538 def _(line, cursor_pos, expect, message, completion):
539 with greedy_completion(), provisionalcompleter():
539 with greedy_completion(), provisionalcompleter():
540 ip.Completer.use_jedi = False
540 ip.Completer.use_jedi = False
541 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
541 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
542 self.assertIn(expect, c, message % c)
542 self.assertIn(expect, c, message % c)
543
543
544 ip.Completer.use_jedi = True
544 ip.Completer.use_jedi = True
545 with provisionalcompleter():
545 with provisionalcompleter():
546 completions = ip.Completer.completions(line, cursor_pos)
546 completions = ip.Completer.completions(line, cursor_pos)
547 self.assertIn(completion, completions)
547 self.assertIn(completion, completions)
548
548
549 with provisionalcompleter():
549 with provisionalcompleter():
550 _(
550 _(
551 "a[0].",
551 "a[0].",
552 5,
552 5,
553 "a[0].real",
553 "a[0].real",
554 "Should have completed on a[0].: %s",
554 "Should have completed on a[0].: %s",
555 Completion(5, 5, "real"),
555 Completion(5, 5, "real"),
556 )
556 )
557 _(
557 _(
558 "a[0].r",
558 "a[0].r",
559 6,
559 6,
560 "a[0].real",
560 "a[0].real",
561 "Should have completed on a[0].r: %s",
561 "Should have completed on a[0].r: %s",
562 Completion(5, 6, "real"),
562 Completion(5, 6, "real"),
563 )
563 )
564
564
565 _(
565 _(
566 "a[0].from_",
566 "a[0].from_",
567 10,
567 10,
568 "a[0].from_bytes",
568 "a[0].from_bytes",
569 "Should have completed on a[0].from_: %s",
569 "Should have completed on a[0].from_: %s",
570 Completion(5, 10, "from_bytes"),
570 Completion(5, 10, "from_bytes"),
571 )
571 )
572
572
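# For context: greedy completion in the test above is toggled through the
# greedy_completion() helper used in this module. Outside the test suite the
# same switch can be flipped through configuration; a minimal sketch, assuming
# the IPCompleter.greedy option (mirroring the Config/update_config pattern
# used elsewhere in these tests):
#
#     from traitlets.config import Config
#     cfg = Config()
#     cfg.IPCompleter.greedy = True
#     get_ipython().Completer.update_config(cfg)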
573 def test_omit__names(self):
573 def test_omit__names(self):
574 # also happens to test IPCompleter as a configurable
574 # also happens to test IPCompleter as a configurable
575 ip = get_ipython()
575 ip = get_ipython()
576 ip._hidden_attr = 1
576 ip._hidden_attr = 1
577 ip._x = {}
577 ip._x = {}
578 c = ip.Completer
578 c = ip.Completer
579 ip.ex("ip=get_ipython()")
579 ip.ex("ip=get_ipython()")
580 cfg = Config()
580 cfg = Config()
581 cfg.IPCompleter.omit__names = 0
581 cfg.IPCompleter.omit__names = 0
582 c.update_config(cfg)
582 c.update_config(cfg)
583 with provisionalcompleter():
583 with provisionalcompleter():
584 c.use_jedi = False
584 c.use_jedi = False
585 s, matches = c.complete("ip.")
585 s, matches = c.complete("ip.")
586 self.assertIn("ip.__str__", matches)
586 self.assertIn("ip.__str__", matches)
587 self.assertIn("ip._hidden_attr", matches)
587 self.assertIn("ip._hidden_attr", matches)
588
588
589 # c.use_jedi = True
589 # c.use_jedi = True
590 # completions = set(c.completions('ip.', 3))
590 # completions = set(c.completions('ip.', 3))
591 # self.assertIn(Completion(3, 3, '__str__'), completions)
591 # self.assertIn(Completion(3, 3, '__str__'), completions)
592 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
592 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
593
593
594 cfg = Config()
594 cfg = Config()
595 cfg.IPCompleter.omit__names = 1
595 cfg.IPCompleter.omit__names = 1
596 c.update_config(cfg)
596 c.update_config(cfg)
597 with provisionalcompleter():
597 with provisionalcompleter():
598 c.use_jedi = False
598 c.use_jedi = False
599 s, matches = c.complete("ip.")
599 s, matches = c.complete("ip.")
600 self.assertNotIn("ip.__str__", matches)
600 self.assertNotIn("ip.__str__", matches)
601 # self.assertIn('ip._hidden_attr', matches)
601 # self.assertIn('ip._hidden_attr', matches)
602
602
603 # c.use_jedi = True
603 # c.use_jedi = True
604 # completions = set(c.completions('ip.', 3))
604 # completions = set(c.completions('ip.', 3))
605 # self.assertNotIn(Completion(3,3,'__str__'), completions)
605 # self.assertNotIn(Completion(3,3,'__str__'), completions)
606 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
606 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
607
607
608 cfg = Config()
608 cfg = Config()
609 cfg.IPCompleter.omit__names = 2
609 cfg.IPCompleter.omit__names = 2
610 c.update_config(cfg)
610 c.update_config(cfg)
611 with provisionalcompleter():
611 with provisionalcompleter():
612 c.use_jedi = False
612 c.use_jedi = False
613 s, matches = c.complete("ip.")
613 s, matches = c.complete("ip.")
614 self.assertNotIn("ip.__str__", matches)
614 self.assertNotIn("ip.__str__", matches)
615 self.assertNotIn("ip._hidden_attr", matches)
615 self.assertNotIn("ip._hidden_attr", matches)
616
616
617 # c.use_jedi = True
617 # c.use_jedi = True
618 # completions = set(c.completions('ip.', 3))
618 # completions = set(c.completions('ip.', 3))
619 # self.assertNotIn(Completion(3,3,'__str__'), completions)
619 # self.assertNotIn(Completion(3,3,'__str__'), completions)
620 # self.assertNotIn(Completion(3,3, "_hidden_attr"), completions)
620 # self.assertNotIn(Completion(3,3, "_hidden_attr"), completions)
621
621
622 with provisionalcompleter():
622 with provisionalcompleter():
623 c.use_jedi = False
623 c.use_jedi = False
624 s, matches = c.complete("ip._x.")
624 s, matches = c.complete("ip._x.")
625 self.assertIn("ip._x.keys", matches)
625 self.assertIn("ip._x.keys", matches)
626
626
627 # c.use_jedi = True
627 # c.use_jedi = True
628 # completions = set(c.completions('ip._x.', 6))
628 # completions = set(c.completions('ip._x.', 6))
629 # self.assertIn(Completion(6,6, "keys"), completions)
629 # self.assertIn(Completion(6,6, "keys"), completions)
630
630
631 del ip._hidden_attr
631 del ip._hidden_attr
632 del ip._x
632 del ip._x
633
633
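# Summary of what the test above exercises: omit__names == 0 hides nothing,
# == 1 hides __dunder__ attributes after a dot, and == 2 additionally hides
# single-underscore attributes, while keys/attributes reached through an
# explicitly named private object (ip._x.) still complete. A minimal sketch
# of setting the level, mirroring the Config usage above:
#
#     cfg = Config()
#     cfg.IPCompleter.omit__names = 2
#     get_ipython().Completer.update_config(cfg)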
634 def test_limit_to__all__False_ok(self):
634 def test_limit_to__all__False_ok(self):
635 """
635 """
636 The limit_to__all__ option is deprecated; once we remove it this test can go away.
637 """
637 """
638 ip = get_ipython()
638 ip = get_ipython()
639 c = ip.Completer
639 c = ip.Completer
640 c.use_jedi = False
640 c.use_jedi = False
641 ip.ex("class D: x=24")
641 ip.ex("class D: x=24")
642 ip.ex("d=D()")
642 ip.ex("d=D()")
643 cfg = Config()
643 cfg = Config()
644 cfg.IPCompleter.limit_to__all__ = False
644 cfg.IPCompleter.limit_to__all__ = False
645 c.update_config(cfg)
645 c.update_config(cfg)
646 s, matches = c.complete("d.")
646 s, matches = c.complete("d.")
647 self.assertIn("d.x", matches)
647 self.assertIn("d.x", matches)
648
648
649 def test_get__all__entries_ok(self):
649 def test_get__all__entries_ok(self):
650 class A:
650 class A:
651 __all__ = ["x", 1]
651 __all__ = ["x", 1]
652
652
653 words = completer.get__all__entries(A())
653 words = completer.get__all__entries(A())
654 self.assertEqual(words, ["x"])
654 self.assertEqual(words, ["x"])
655
655
656 def test_get__all__entries_no__all__ok(self):
656 def test_get__all__entries_no__all__ok(self):
657 class A:
657 class A:
658 pass
658 pass
659
659
660 words = completer.get__all__entries(A())
660 words = completer.get__all__entries(A())
661 self.assertEqual(words, [])
661 self.assertEqual(words, [])
662
662
663 def test_func_kw_completions(self):
663 def test_func_kw_completions(self):
664 ip = get_ipython()
664 ip = get_ipython()
665 c = ip.Completer
665 c = ip.Completer
666 c.use_jedi = False
666 c.use_jedi = False
667 ip.ex("def myfunc(a=1,b=2): return a+b")
667 ip.ex("def myfunc(a=1,b=2): return a+b")
668 s, matches = c.complete(None, "myfunc(1,b")
668 s, matches = c.complete(None, "myfunc(1,b")
669 self.assertIn("b=", matches)
669 self.assertIn("b=", matches)
670 # Simulate completing with cursor right after b (pos==10):
670 # Simulate completing with cursor right after b (pos==10):
671 s, matches = c.complete(None, "myfunc(1,b)", 10)
671 s, matches = c.complete(None, "myfunc(1,b)", 10)
672 self.assertIn("b=", matches)
672 self.assertIn("b=", matches)
673 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
673 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
674 self.assertIn("b=", matches)
674 self.assertIn("b=", matches)
675 # builtin function
675 # builtin function
676 s, matches = c.complete(None, "min(k, k")
676 s, matches = c.complete(None, "min(k, k")
677 self.assertIn("key=", matches)
677 self.assertIn("key=", matches)
678
678
679 def test_default_arguments_from_docstring(self):
679 def test_default_arguments_from_docstring(self):
680 ip = get_ipython()
680 ip = get_ipython()
681 c = ip.Completer
681 c = ip.Completer
682 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
682 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
683 self.assertEqual(kwd, ["key"])
683 self.assertEqual(kwd, ["key"])
684 # with cython type etc
684 # with cython type etc
685 kwd = c._default_arguments_from_docstring(
685 kwd = c._default_arguments_from_docstring(
686 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
686 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
687 )
687 )
688 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
688 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
689 # white spaces
689 # white spaces
690 kwd = c._default_arguments_from_docstring(
690 kwd = c._default_arguments_from_docstring(
691 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
691 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
692 )
692 )
693 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
693 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
694
694
695 def test_line_magics(self):
695 def test_line_magics(self):
696 ip = get_ipython()
696 ip = get_ipython()
697 c = ip.Completer
697 c = ip.Completer
698 s, matches = c.complete(None, "lsmag")
698 s, matches = c.complete(None, "lsmag")
699 self.assertIn("%lsmagic", matches)
699 self.assertIn("%lsmagic", matches)
700 s, matches = c.complete(None, "%lsmag")
700 s, matches = c.complete(None, "%lsmag")
701 self.assertIn("%lsmagic", matches)
701 self.assertIn("%lsmagic", matches)
702
702
703 def test_cell_magics(self):
703 def test_cell_magics(self):
704 from IPython.core.magic import register_cell_magic
704 from IPython.core.magic import register_cell_magic
705
705
706 @register_cell_magic
706 @register_cell_magic
707 def _foo_cellm(line, cell):
707 def _foo_cellm(line, cell):
708 pass
708 pass
709
709
710 ip = get_ipython()
710 ip = get_ipython()
711 c = ip.Completer
711 c = ip.Completer
712
712
713 s, matches = c.complete(None, "_foo_ce")
713 s, matches = c.complete(None, "_foo_ce")
714 self.assertIn("%%_foo_cellm", matches)
714 self.assertIn("%%_foo_cellm", matches)
715 s, matches = c.complete(None, "%%_foo_ce")
715 s, matches = c.complete(None, "%%_foo_ce")
716 self.assertIn("%%_foo_cellm", matches)
716 self.assertIn("%%_foo_cellm", matches)
717
717
718 def test_line_cell_magics(self):
718 def test_line_cell_magics(self):
719 from IPython.core.magic import register_line_cell_magic
719 from IPython.core.magic import register_line_cell_magic
720
720
721 @register_line_cell_magic
721 @register_line_cell_magic
722 def _bar_cellm(line, cell):
722 def _bar_cellm(line, cell):
723 pass
723 pass
724
724
725 ip = get_ipython()
725 ip = get_ipython()
726 c = ip.Completer
726 c = ip.Completer
727
727
728 # The policy here is trickier, see comments in completion code. The
728 # The policy here is trickier, see comments in completion code. The
729 # returned values depend on whether the user passes %% or not explicitly,
729 # returned values depend on whether the user passes %% or not explicitly,
730 # and this will show a difference if the same name is both a line and cell
730 # and this will show a difference if the same name is both a line and cell
731 # magic.
731 # magic.
732 s, matches = c.complete(None, "_bar_ce")
732 s, matches = c.complete(None, "_bar_ce")
733 self.assertIn("%_bar_cellm", matches)
733 self.assertIn("%_bar_cellm", matches)
734 self.assertIn("%%_bar_cellm", matches)
734 self.assertIn("%%_bar_cellm", matches)
735 s, matches = c.complete(None, "%_bar_ce")
735 s, matches = c.complete(None, "%_bar_ce")
736 self.assertIn("%_bar_cellm", matches)
736 self.assertIn("%_bar_cellm", matches)
737 self.assertIn("%%_bar_cellm", matches)
737 self.assertIn("%%_bar_cellm", matches)
738 s, matches = c.complete(None, "%%_bar_ce")
738 s, matches = c.complete(None, "%%_bar_ce")
739 self.assertNotIn("%_bar_cellm", matches)
739 self.assertNotIn("%_bar_cellm", matches)
740 self.assertIn("%%_bar_cellm", matches)
740 self.assertIn("%%_bar_cellm", matches)
741
741
742 def test_magic_completion_order(self):
742 def test_magic_completion_order(self):
743 ip = get_ipython()
743 ip = get_ipython()
744 c = ip.Completer
744 c = ip.Completer
745
745
746 # Test ordering of line and cell magics.
746 # Test ordering of line and cell magics.
747 text, matches = c.complete("timeit")
747 text, matches = c.complete("timeit")
748 self.assertEqual(matches, ["%timeit", "%%timeit"])
748 self.assertEqual(matches, ["%timeit", "%%timeit"])
749
749
750 def test_magic_completion_shadowing(self):
750 def test_magic_completion_shadowing(self):
751 ip = get_ipython()
751 ip = get_ipython()
752 c = ip.Completer
752 c = ip.Completer
753 c.use_jedi = False
753 c.use_jedi = False
754
754
755 # Before importing matplotlib, %matplotlib magic should be the only option.
755 # Before importing matplotlib, %matplotlib magic should be the only option.
756 text, matches = c.complete("mat")
756 text, matches = c.complete("mat")
757 self.assertEqual(matches, ["%matplotlib"])
757 self.assertEqual(matches, ["%matplotlib"])
758
758
759 # The newly introduced name should shadow the magic.
759 # The newly introduced name should shadow the magic.
760 ip.run_cell("matplotlib = 1")
760 ip.run_cell("matplotlib = 1")
761 text, matches = c.complete("mat")
761 text, matches = c.complete("mat")
762 self.assertEqual(matches, ["matplotlib"])
762 self.assertEqual(matches, ["matplotlib"])
763
763
764 # After removing matplotlib from namespace, the magic should again be
764 # After removing matplotlib from namespace, the magic should again be
765 # the only option.
765 # the only option.
766 del ip.user_ns["matplotlib"]
766 del ip.user_ns["matplotlib"]
767 text, matches = c.complete("mat")
767 text, matches = c.complete("mat")
768 self.assertEqual(matches, ["%matplotlib"])
768 self.assertEqual(matches, ["%matplotlib"])
769
769
770 def test_magic_completion_shadowing_explicit(self):
770 def test_magic_completion_shadowing_explicit(self):
771 """
771 """
772 If the user tries to complete a shadowed magic, an explicit % prefix should
773 still return the completions.
774 """
774 """
775 ip = get_ipython()
775 ip = get_ipython()
776 c = ip.Completer
776 c = ip.Completer
777
777
778 # Before importing matplotlib, %matplotlib magic should be the only option.
778 # Before importing matplotlib, %matplotlib magic should be the only option.
779 text, matches = c.complete("%mat")
779 text, matches = c.complete("%mat")
780 self.assertEqual(matches, ["%matplotlib"])
780 self.assertEqual(matches, ["%matplotlib"])
781
781
782 ip.run_cell("matplotlib = 1")
782 ip.run_cell("matplotlib = 1")
783
783
784 # Even with matplotlib shadowing the magic in the namespace, the explicit
785 # % prefix should still return the magic as the only option.
786 text, matches = c.complete("%mat")
786 text, matches = c.complete("%mat")
787 self.assertEqual(matches, ["%matplotlib"])
787 self.assertEqual(matches, ["%matplotlib"])
788
788
789 def test_magic_config(self):
789 def test_magic_config(self):
790 ip = get_ipython()
790 ip = get_ipython()
791 c = ip.Completer
791 c = ip.Completer
792
792
793 s, matches = c.complete(None, "conf")
793 s, matches = c.complete(None, "conf")
794 self.assertIn("%config", matches)
794 self.assertIn("%config", matches)
795 s, matches = c.complete(None, "conf")
795 s, matches = c.complete(None, "conf")
796 self.assertNotIn("AliasManager", matches)
796 self.assertNotIn("AliasManager", matches)
797 s, matches = c.complete(None, "config ")
797 s, matches = c.complete(None, "config ")
798 self.assertIn("AliasManager", matches)
798 self.assertIn("AliasManager", matches)
799 s, matches = c.complete(None, "%config ")
799 s, matches = c.complete(None, "%config ")
800 self.assertIn("AliasManager", matches)
800 self.assertIn("AliasManager", matches)
801 s, matches = c.complete(None, "config Ali")
801 s, matches = c.complete(None, "config Ali")
802 self.assertListEqual(["AliasManager"], matches)
802 self.assertListEqual(["AliasManager"], matches)
803 s, matches = c.complete(None, "%config Ali")
803 s, matches = c.complete(None, "%config Ali")
804 self.assertListEqual(["AliasManager"], matches)
804 self.assertListEqual(["AliasManager"], matches)
805 s, matches = c.complete(None, "config AliasManager")
805 s, matches = c.complete(None, "config AliasManager")
806 self.assertListEqual(["AliasManager"], matches)
806 self.assertListEqual(["AliasManager"], matches)
807 s, matches = c.complete(None, "%config AliasManager")
807 s, matches = c.complete(None, "%config AliasManager")
808 self.assertListEqual(["AliasManager"], matches)
808 self.assertListEqual(["AliasManager"], matches)
809 s, matches = c.complete(None, "config AliasManager.")
809 s, matches = c.complete(None, "config AliasManager.")
810 self.assertIn("AliasManager.default_aliases", matches)
810 self.assertIn("AliasManager.default_aliases", matches)
811 s, matches = c.complete(None, "%config AliasManager.")
811 s, matches = c.complete(None, "%config AliasManager.")
812 self.assertIn("AliasManager.default_aliases", matches)
812 self.assertIn("AliasManager.default_aliases", matches)
813 s, matches = c.complete(None, "config AliasManager.de")
813 s, matches = c.complete(None, "config AliasManager.de")
814 self.assertListEqual(["AliasManager.default_aliases"], matches)
814 self.assertListEqual(["AliasManager.default_aliases"], matches)
815 s, matches = c.complete(None, "config AliasManager.de")
815 s, matches = c.complete(None, "config AliasManager.de")
816 self.assertListEqual(["AliasManager.default_aliases"], matches)
816 self.assertListEqual(["AliasManager.default_aliases"], matches)
817
817
818 def test_magic_color(self):
818 def test_magic_color(self):
819 ip = get_ipython()
819 ip = get_ipython()
820 c = ip.Completer
820 c = ip.Completer
821
821
822 s, matches = c.complete(None, "colo")
822 s, matches = c.complete(None, "colo")
823 self.assertIn("%colors", matches)
823 self.assertIn("%colors", matches)
824 s, matches = c.complete(None, "colo")
824 s, matches = c.complete(None, "colo")
825 self.assertNotIn("NoColor", matches)
825 self.assertNotIn("NoColor", matches)
826 s, matches = c.complete(None, "%colors") # No trailing space
826 s, matches = c.complete(None, "%colors") # No trailing space
827 self.assertNotIn("NoColor", matches)
827 self.assertNotIn("NoColor", matches)
828 s, matches = c.complete(None, "colors ")
828 s, matches = c.complete(None, "colors ")
829 self.assertIn("NoColor", matches)
829 self.assertIn("NoColor", matches)
830 s, matches = c.complete(None, "%colors ")
830 s, matches = c.complete(None, "%colors ")
831 self.assertIn("NoColor", matches)
831 self.assertIn("NoColor", matches)
832 s, matches = c.complete(None, "colors NoCo")
832 s, matches = c.complete(None, "colors NoCo")
833 self.assertListEqual(["NoColor"], matches)
833 self.assertListEqual(["NoColor"], matches)
834 s, matches = c.complete(None, "%colors NoCo")
834 s, matches = c.complete(None, "%colors NoCo")
835 self.assertListEqual(["NoColor"], matches)
835 self.assertListEqual(["NoColor"], matches)
836
836
837 def test_match_dict_keys(self):
837 def test_match_dict_keys(self):
838 """
838 """
839 Test that match_dict_keys works on a couple of use cases, returns what is
840 expected, and does not crash.
841 """
841 """
842 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
842 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
843
843
844 keys = ["foo", b"far"]
844 keys = ["foo", b"far"]
845 assert match_dict_keys(keys, "b'", delims=delims) == ("'", 2, ["far"])
845 assert match_dict_keys(keys, "b'", delims=delims) == ("'", 2, ["far"])
846 assert match_dict_keys(keys, "b'f", delims=delims) == ("'", 2, ["far"])
846 assert match_dict_keys(keys, "b'f", delims=delims) == ("'", 2, ["far"])
847 assert match_dict_keys(keys, 'b"', delims=delims) == ('"', 2, ["far"])
847 assert match_dict_keys(keys, 'b"', delims=delims) == ('"', 2, ["far"])
848 assert match_dict_keys(keys, 'b"f', delims=delims) == ('"', 2, ["far"])
848 assert match_dict_keys(keys, 'b"f', delims=delims) == ('"', 2, ["far"])
849
849
850 assert match_dict_keys(keys, "'", delims=delims) == ("'", 1, ["foo"])
850 assert match_dict_keys(keys, "'", delims=delims) == ("'", 1, ["foo"])
851 assert match_dict_keys(keys, "'f", delims=delims) == ("'", 1, ["foo"])
851 assert match_dict_keys(keys, "'f", delims=delims) == ("'", 1, ["foo"])
852 assert match_dict_keys(keys, '"', delims=delims) == ('"', 1, ["foo"])
852 assert match_dict_keys(keys, '"', delims=delims) == ('"', 1, ["foo"])
853 assert match_dict_keys(keys, '"f', delims=delims) == ('"', 1, ["foo"])
853 assert match_dict_keys(keys, '"f', delims=delims) == ('"', 1, ["foo"])
854
854
856
856
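# Reading the assertions above: match_dict_keys returns a 3-tuple of
# (quote character to use, offset in the typed token where the key text
# starts, list of matching keys). A minimal usage sketch with the same
# delims as above:
#
#     quote, start, matched = match_dict_keys(["foo", b"far"], "'f", delims=delims)
#     # quote == "'", start == 1, matched == ["foo"]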
857 def test_match_dict_keys_tuple(self):
857 def test_match_dict_keys_tuple(self):
858 """
858 """
859 Test that match_dict_keys called with an extra prefix works on a couple of
860 use cases, returns what is expected, and does not crash.
861 """
861 """
862 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
862 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
863
863
864 keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ('other', 'test')]
864 keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ('other', 'test')]
865
865
866 # Completion on first key == "foo"
866 # Completion on first key == "foo"
867 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("foo",)) == ("'", 1, ["bar", "oof"])
867 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("foo",)) == ("'", 1, ["bar", "oof"])
868 assert match_dict_keys(keys, "\"", delims=delims, extra_prefix=("foo",)) == ("\"", 1, ["bar", "oof"])
868 assert match_dict_keys(keys, "\"", delims=delims, extra_prefix=("foo",)) == ("\"", 1, ["bar", "oof"])
869 assert match_dict_keys(keys, "'o", delims=delims, extra_prefix=("foo",)) == ("'", 1, ["oof"])
869 assert match_dict_keys(keys, "'o", delims=delims, extra_prefix=("foo",)) == ("'", 1, ["oof"])
870 assert match_dict_keys(keys, "\"o", delims=delims, extra_prefix=("foo",)) == ("\"", 1, ["oof"])
870 assert match_dict_keys(keys, "\"o", delims=delims, extra_prefix=("foo",)) == ("\"", 1, ["oof"])
871 assert match_dict_keys(keys, "b'", delims=delims, extra_prefix=("foo",)) == ("'", 2, ["bar"])
871 assert match_dict_keys(keys, "b'", delims=delims, extra_prefix=("foo",)) == ("'", 2, ["bar"])
872 assert match_dict_keys(keys, "b\"", delims=delims, extra_prefix=("foo",)) == ("\"", 2, ["bar"])
872 assert match_dict_keys(keys, "b\"", delims=delims, extra_prefix=("foo",)) == ("\"", 2, ["bar"])
873 assert match_dict_keys(keys, "b'b", delims=delims, extra_prefix=("foo",)) == ("'", 2, ["bar"])
873 assert match_dict_keys(keys, "b'b", delims=delims, extra_prefix=("foo",)) == ("'", 2, ["bar"])
874 assert match_dict_keys(keys, "b\"b", delims=delims, extra_prefix=("foo",)) == ("\"", 2, ["bar"])
874 assert match_dict_keys(keys, "b\"b", delims=delims, extra_prefix=("foo",)) == ("\"", 2, ["bar"])
875
875
876 # No Completion
876 # No Completion
877 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("no_foo",)) == ("'", 1, [])
877 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("no_foo",)) == ("'", 1, [])
878 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("fo",)) == ("'", 1, [])
878 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("fo",)) == ("'", 1, [])
879
879
880 keys = [('foo1', 'foo2', 'foo3', 'foo4'), ('foo1', 'foo2', 'bar', 'foo4')]
880 keys = [('foo1', 'foo2', 'foo3', 'foo4'), ('foo1', 'foo2', 'bar', 'foo4')]
881 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1',)) == ("'", 1, ["foo2", "foo2"])
881 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1',)) == ("'", 1, ["foo2", "foo2"])
882 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2')) == ("'", 1, ["foo3"])
882 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2')) == ("'", 1, ["foo3"])
883 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2', 'foo3')) == ("'", 1, ["foo4"])
883 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2', 'foo3')) == ("'", 1, ["foo4"])
884 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2', 'foo3', 'foo4')) == ("'", 1, [])
884 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2', 'foo3', 'foo4')) == ("'", 1, [])
885
885
886 def test_dict_key_completion_string(self):
886 def test_dict_key_completion_string(self):
887 """Test dictionary key completion for string keys"""
887 """Test dictionary key completion for string keys"""
888 ip = get_ipython()
888 ip = get_ipython()
889 complete = ip.Completer.complete
889 complete = ip.Completer.complete
890
890
891 ip.user_ns["d"] = {"abc": None}
891 ip.user_ns["d"] = {"abc": None}
892
892
893 # check completion at different stages
893 # check completion at different stages
894 _, matches = complete(line_buffer="d[")
894 _, matches = complete(line_buffer="d[")
895 self.assertIn("'abc'", matches)
895 self.assertIn("'abc'", matches)
896 self.assertNotIn("'abc']", matches)
896 self.assertNotIn("'abc']", matches)
897
897
898 _, matches = complete(line_buffer="d['")
898 _, matches = complete(line_buffer="d['")
899 self.assertIn("abc", matches)
899 self.assertIn("abc", matches)
900 self.assertNotIn("abc']", matches)
900 self.assertNotIn("abc']", matches)
901
901
902 _, matches = complete(line_buffer="d['a")
902 _, matches = complete(line_buffer="d['a")
903 self.assertIn("abc", matches)
903 self.assertIn("abc", matches)
904 self.assertNotIn("abc']", matches)
904 self.assertNotIn("abc']", matches)
905
905
906 # check use of different quoting
906 # check use of different quoting
907 _, matches = complete(line_buffer='d["')
907 _, matches = complete(line_buffer='d["')
908 self.assertIn("abc", matches)
908 self.assertIn("abc", matches)
909 self.assertNotIn('abc"]', matches)
909 self.assertNotIn('abc"]', matches)
910
910
911 _, matches = complete(line_buffer='d["a')
911 _, matches = complete(line_buffer='d["a')
912 self.assertIn("abc", matches)
912 self.assertIn("abc", matches)
913 self.assertNotIn('abc"]', matches)
913 self.assertNotIn('abc"]', matches)
914
914
915 # check sensitivity to following context
915 # check sensitivity to following context
916 _, matches = complete(line_buffer="d[]", cursor_pos=2)
916 _, matches = complete(line_buffer="d[]", cursor_pos=2)
917 self.assertIn("'abc'", matches)
917 self.assertIn("'abc'", matches)
918
918
919 _, matches = complete(line_buffer="d['']", cursor_pos=3)
919 _, matches = complete(line_buffer="d['']", cursor_pos=3)
920 self.assertIn("abc", matches)
920 self.assertIn("abc", matches)
921 self.assertNotIn("abc'", matches)
921 self.assertNotIn("abc'", matches)
922 self.assertNotIn("abc']", matches)
922 self.assertNotIn("abc']", matches)
923
923
924 # check multiple solutions are correctly returned and that noise is not included
925 ip.user_ns["d"] = {
925 ip.user_ns["d"] = {
926 "abc": None,
926 "abc": None,
927 "abd": None,
927 "abd": None,
928 "bad": None,
928 "bad": None,
929 object(): None,
929 object(): None,
930 5: None,
930 5: None,
931 ("abe", None): None,
931 ("abe", None): None,
932 (None, "abf"): None
932 (None, "abf"): None
933 }
933 }
934
934
935 _, matches = complete(line_buffer="d['a")
935 _, matches = complete(line_buffer="d['a")
936 self.assertIn("abc", matches)
936 self.assertIn("abc", matches)
937 self.assertIn("abd", matches)
937 self.assertIn("abd", matches)
938 self.assertNotIn("bad", matches)
938 self.assertNotIn("bad", matches)
939 self.assertNotIn("abe", matches)
939 self.assertNotIn("abe", matches)
940 self.assertNotIn("abf", matches)
940 self.assertNotIn("abf", matches)
941 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
941 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
942
942
943 # check escaping and whitespace
943 # check escaping and whitespace
944 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
944 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
945 _, matches = complete(line_buffer="d['a")
945 _, matches = complete(line_buffer="d['a")
946 self.assertIn("a\\nb", matches)
946 self.assertIn("a\\nb", matches)
947 self.assertIn("a\\'b", matches)
947 self.assertIn("a\\'b", matches)
948 self.assertIn('a"b', matches)
948 self.assertIn('a"b', matches)
949 self.assertIn("a word", matches)
949 self.assertIn("a word", matches)
950 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
950 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
951
951
952 # - can complete on non-initial word of the string
952 # - can complete on non-initial word of the string
953 _, matches = complete(line_buffer="d['a w")
953 _, matches = complete(line_buffer="d['a w")
954 self.assertIn("word", matches)
954 self.assertIn("word", matches)
955
955
956 # - understands quote escaping
956 # - understands quote escaping
957 _, matches = complete(line_buffer="d['a\\'")
957 _, matches = complete(line_buffer="d['a\\'")
958 self.assertIn("b", matches)
958 self.assertIn("b", matches)
959
959
960 # - default quoting should work like repr
960 # - default quoting should work like repr
961 _, matches = complete(line_buffer="d[")
961 _, matches = complete(line_buffer="d[")
962 self.assertIn('"a\'b"', matches)
962 self.assertIn('"a\'b"', matches)
963
963
964 # - when opening quote with ", possible to match with unescaped apostrophe
964 # - when opening quote with ", possible to match with unescaped apostrophe
965 _, matches = complete(line_buffer="d[\"a'")
965 _, matches = complete(line_buffer="d[\"a'")
966 self.assertIn("b", matches)
966 self.assertIn("b", matches)
967
967
968 # need to not split at delims that readline won't split at
968 # need to not split at delims that readline won't split at
969 if "-" not in ip.Completer.splitter.delims:
969 if "-" not in ip.Completer.splitter.delims:
970 ip.user_ns["d"] = {"before-after": None}
970 ip.user_ns["d"] = {"before-after": None}
971 _, matches = complete(line_buffer="d['before-af")
971 _, matches = complete(line_buffer="d['before-af")
972 self.assertIn("before-after", matches)
972 self.assertIn("before-after", matches)
973
973
974 # check completion on tuple-of-string keys at different stage - on first key
974 # check completion on tuple-of-string keys at different stage - on first key
975 ip.user_ns["d"] = {('foo', 'bar'): None}
975 ip.user_ns["d"] = {('foo', 'bar'): None}
976 _, matches = complete(line_buffer="d[")
976 _, matches = complete(line_buffer="d[")
977 self.assertIn("'foo'", matches)
977 self.assertIn("'foo'", matches)
978 self.assertNotIn("'foo']", matches)
978 self.assertNotIn("'foo']", matches)
979 self.assertNotIn("'bar'", matches)
979 self.assertNotIn("'bar'", matches)
980 self.assertNotIn("foo", matches)
980 self.assertNotIn("foo", matches)
981 self.assertNotIn("bar", matches)
981 self.assertNotIn("bar", matches)
982
982
983 # - match the prefix
983 # - match the prefix
984 _, matches = complete(line_buffer="d['f")
984 _, matches = complete(line_buffer="d['f")
985 self.assertIn("foo", matches)
985 self.assertIn("foo", matches)
986 self.assertNotIn("foo']", matches)
986 self.assertNotIn("foo']", matches)
987 self.assertNotIn('foo"]', matches)
987 self.assertNotIn('foo"]', matches)
988 _, matches = complete(line_buffer="d['foo")
988 _, matches = complete(line_buffer="d['foo")
989 self.assertIn("foo", matches)
989 self.assertIn("foo", matches)
990
990
991 # - can complete on second key
991 # - can complete on second key
992 _, matches = complete(line_buffer="d['foo', ")
992 _, matches = complete(line_buffer="d['foo', ")
993 self.assertIn("'bar'", matches)
993 self.assertIn("'bar'", matches)
994 _, matches = complete(line_buffer="d['foo', 'b")
994 _, matches = complete(line_buffer="d['foo', 'b")
995 self.assertIn("bar", matches)
995 self.assertIn("bar", matches)
996 self.assertNotIn("foo", matches)
996 self.assertNotIn("foo", matches)
997
997
998 # - does not propose missing keys
998 # - does not propose missing keys
999 _, matches = complete(line_buffer="d['foo', 'f")
999 _, matches = complete(line_buffer="d['foo', 'f")
1000 self.assertNotIn("bar", matches)
1000 self.assertNotIn("bar", matches)
1001 self.assertNotIn("foo", matches)
1001 self.assertNotIn("foo", matches)
1002
1002
1003 # check sensitivity to following context
1003 # check sensitivity to following context
1004 _, matches = complete(line_buffer="d['foo',]", cursor_pos=8)
1004 _, matches = complete(line_buffer="d['foo',]", cursor_pos=8)
1005 self.assertIn("'bar'", matches)
1005 self.assertIn("'bar'", matches)
1006 self.assertNotIn("bar", matches)
1006 self.assertNotIn("bar", matches)
1007 self.assertNotIn("'foo'", matches)
1007 self.assertNotIn("'foo'", matches)
1008 self.assertNotIn("foo", matches)
1008 self.assertNotIn("foo", matches)
1009
1009
1010 _, matches = complete(line_buffer="d['']", cursor_pos=3)
1010 _, matches = complete(line_buffer="d['']", cursor_pos=3)
1011 self.assertIn("foo", matches)
1011 self.assertIn("foo", matches)
1012 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1012 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1013
1013
1014 _, matches = complete(line_buffer='d[""]', cursor_pos=3)
1014 _, matches = complete(line_buffer='d[""]', cursor_pos=3)
1015 self.assertIn("foo", matches)
1015 self.assertIn("foo", matches)
1016 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1016 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1017
1017
1018 _, matches = complete(line_buffer='d["foo","]', cursor_pos=9)
1018 _, matches = complete(line_buffer='d["foo","]', cursor_pos=9)
1019 self.assertIn("bar", matches)
1019 self.assertIn("bar", matches)
1020 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1020 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1021
1021
1022 _, matches = complete(line_buffer='d["foo",]', cursor_pos=8)
1022 _, matches = complete(line_buffer='d["foo",]', cursor_pos=8)
1023 self.assertIn("'bar'", matches)
1023 self.assertIn("'bar'", matches)
1024 self.assertNotIn("bar", matches)
1024 self.assertNotIn("bar", matches)
1025
1025
1026 # Can complete with longer tuple keys
1026 # Can complete with longer tuple keys
1027 ip.user_ns["d"] = {('foo', 'bar', 'foobar'): None}
1027 ip.user_ns["d"] = {('foo', 'bar', 'foobar'): None}
1028
1028
1029 # - can complete second key
1029 # - can complete second key
1030 _, matches = complete(line_buffer="d['foo', 'b")
1030 _, matches = complete(line_buffer="d['foo', 'b")
1031 self.assertIn("bar", matches)
1031 self.assertIn("bar", matches)
1032 self.assertNotIn("foo", matches)
1032 self.assertNotIn("foo", matches)
1033 self.assertNotIn("foobar", matches)
1033 self.assertNotIn("foobar", matches)
1034
1034
1035 # - can complete third key
1035 # - can complete third key
1036 _, matches = complete(line_buffer="d['foo', 'bar', 'fo")
1036 _, matches = complete(line_buffer="d['foo', 'bar', 'fo")
1037 self.assertIn("foobar", matches)
1037 self.assertIn("foobar", matches)
1038 self.assertNotIn("foo", matches)
1038 self.assertNotIn("foo", matches)
1039 self.assertNotIn("bar", matches)
1039 self.assertNotIn("bar", matches)
1040
1040
1041 def test_dict_key_completion_contexts(self):
1041 def test_dict_key_completion_contexts(self):
1042 """Test expression contexts in which dict key completion occurs"""
1042 """Test expression contexts in which dict key completion occurs"""
1043 ip = get_ipython()
1043 ip = get_ipython()
1044 complete = ip.Completer.complete
1044 complete = ip.Completer.complete
1045 d = {"abc": None}
1045 d = {"abc": None}
1046 ip.user_ns["d"] = d
1046 ip.user_ns["d"] = d
1047
1047
1048 class C:
1048 class C:
1049 data = d
1049 data = d
1050
1050
1051 ip.user_ns["C"] = C
1051 ip.user_ns["C"] = C
1052 ip.user_ns["get"] = lambda: d
1052 ip.user_ns["get"] = lambda: d
1053
1053
1054 def assert_no_completion(**kwargs):
1054 def assert_no_completion(**kwargs):
1055 _, matches = complete(**kwargs)
1055 _, matches = complete(**kwargs)
1056 self.assertNotIn("abc", matches)
1056 self.assertNotIn("abc", matches)
1057 self.assertNotIn("abc'", matches)
1057 self.assertNotIn("abc'", matches)
1058 self.assertNotIn("abc']", matches)
1058 self.assertNotIn("abc']", matches)
1059 self.assertNotIn("'abc'", matches)
1059 self.assertNotIn("'abc'", matches)
1060 self.assertNotIn("'abc']", matches)
1060 self.assertNotIn("'abc']", matches)
1061
1061
1062 def assert_completion(**kwargs):
1062 def assert_completion(**kwargs):
1063 _, matches = complete(**kwargs)
1063 _, matches = complete(**kwargs)
1064 self.assertIn("'abc'", matches)
1064 self.assertIn("'abc'", matches)
1065 self.assertNotIn("'abc']", matches)
1065 self.assertNotIn("'abc']", matches)
1066
1066
1067 # no completion after string closed, even if reopened
1067 # no completion after string closed, even if reopened
1068 assert_no_completion(line_buffer="d['a'")
1068 assert_no_completion(line_buffer="d['a'")
1069 assert_no_completion(line_buffer='d["a"')
1069 assert_no_completion(line_buffer='d["a"')
1070 assert_no_completion(line_buffer="d['a' + ")
1070 assert_no_completion(line_buffer="d['a' + ")
1071 assert_no_completion(line_buffer="d['a' + '")
1071 assert_no_completion(line_buffer="d['a' + '")
1072
1072
1073 # completion in non-trivial expressions
1073 # completion in non-trivial expressions
1074 assert_completion(line_buffer="+ d[")
1074 assert_completion(line_buffer="+ d[")
1075 assert_completion(line_buffer="(d[")
1075 assert_completion(line_buffer="(d[")
1076 assert_completion(line_buffer="C.data[")
1076 assert_completion(line_buffer="C.data[")
1077
1077
1078 # greedy flag
1078 # greedy flag
1079 def assert_completion(**kwargs):
1079 def assert_completion(**kwargs):
1080 _, matches = complete(**kwargs)
1080 _, matches = complete(**kwargs)
1081 self.assertIn("get()['abc']", matches)
1081 self.assertIn("get()['abc']", matches)
1082
1082
1083 assert_no_completion(line_buffer="get()[")
1083 assert_no_completion(line_buffer="get()[")
1084 with greedy_completion():
1084 with greedy_completion():
1085 assert_completion(line_buffer="get()[")
1085 assert_completion(line_buffer="get()[")
1086 assert_completion(line_buffer="get()['")
1086 assert_completion(line_buffer="get()['")
1087 assert_completion(line_buffer="get()['a")
1087 assert_completion(line_buffer="get()['a")
1088 assert_completion(line_buffer="get()['ab")
1088 assert_completion(line_buffer="get()['ab")
1089 assert_completion(line_buffer="get()['abc")
1089 assert_completion(line_buffer="get()['abc")
1090
1090
1091 def test_dict_key_completion_bytes(self):
1091 def test_dict_key_completion_bytes(self):
1092 """Test handling of bytes in dict key completion"""
1092 """Test handling of bytes in dict key completion"""
1093 ip = get_ipython()
1093 ip = get_ipython()
1094 complete = ip.Completer.complete
1094 complete = ip.Completer.complete
1095
1095
1096 ip.user_ns["d"] = {"abc": None, b"abd": None}
1096 ip.user_ns["d"] = {"abc": None, b"abd": None}
1097
1097
1098 _, matches = complete(line_buffer="d[")
1098 _, matches = complete(line_buffer="d[")
1099 self.assertIn("'abc'", matches)
1099 self.assertIn("'abc'", matches)
1100 self.assertIn("b'abd'", matches)
1100 self.assertIn("b'abd'", matches)
1101
1101
1102 if False: # not currently implemented
1102 if False: # not currently implemented
1103 _, matches = complete(line_buffer="d[b")
1103 _, matches = complete(line_buffer="d[b")
1104 self.assertIn("b'abd'", matches)
1104 self.assertIn("b'abd'", matches)
1105 self.assertNotIn("b'abc'", matches)
1105 self.assertNotIn("b'abc'", matches)
1106
1106
1107 _, matches = complete(line_buffer="d[b'")
1107 _, matches = complete(line_buffer="d[b'")
1108 self.assertIn("abd", matches)
1108 self.assertIn("abd", matches)
1109 self.assertNotIn("abc", matches)
1109 self.assertNotIn("abc", matches)
1110
1110
1111 _, matches = complete(line_buffer="d[B'")
1111 _, matches = complete(line_buffer="d[B'")
1112 self.assertIn("abd", matches)
1112 self.assertIn("abd", matches)
1113 self.assertNotIn("abc", matches)
1113 self.assertNotIn("abc", matches)
1114
1114
1115 _, matches = complete(line_buffer="d['")
1115 _, matches = complete(line_buffer="d['")
1116 self.assertIn("abc", matches)
1116 self.assertIn("abc", matches)
1117 self.assertNotIn("abd", matches)
1117 self.assertNotIn("abd", matches)
1118
1118
1119 def test_dict_key_completion_unicode_py3(self):
1119 def test_dict_key_completion_unicode_py3(self):
1120 """Test handling of unicode in dict key completion"""
1120 """Test handling of unicode in dict key completion"""
1121 ip = get_ipython()
1121 ip = get_ipython()
1122 complete = ip.Completer.complete
1122 complete = ip.Completer.complete
1123
1123
1124 ip.user_ns["d"] = {"a\u05d0": None}
1124 ip.user_ns["d"] = {"a\u05d0": None}
1125
1125
1126 # query using escape
1126 # query using escape
1127 if sys.platform != "win32":
1127 if sys.platform != "win32":
1128 # Known failure on Windows
1128 # Known failure on Windows
1129 _, matches = complete(line_buffer="d['a\\u05d0")
1129 _, matches = complete(line_buffer="d['a\\u05d0")
1130 self.assertIn("u05d0", matches) # tokenized after \\
1130 self.assertIn("u05d0", matches) # tokenized after \\
1131
1131
1132 # query using character
1132 # query using character
1133 _, matches = complete(line_buffer="d['a\u05d0")
1133 _, matches = complete(line_buffer="d['a\u05d0")
1134 self.assertIn("a\u05d0", matches)
1134 self.assertIn("a\u05d0", matches)
1135
1135
1136 with greedy_completion():
1136 with greedy_completion():
1137 # query using escape
1137 # query using escape
1138 _, matches = complete(line_buffer="d['a\\u05d0")
1138 _, matches = complete(line_buffer="d['a\\u05d0")
1139 self.assertIn("d['a\\u05d0']", matches) # tokenized after \\
1139 self.assertIn("d['a\\u05d0']", matches) # tokenized after \\
1140
1140
1141 # query using character
1141 # query using character
1142 _, matches = complete(line_buffer="d['a\u05d0")
1142 _, matches = complete(line_buffer="d['a\u05d0")
1143 self.assertIn("d['a\u05d0']", matches)
1143 self.assertIn("d['a\u05d0']", matches)
1144
1144
1145 @dec.skip_without("numpy")
1145 @dec.skip_without("numpy")
1146 def test_struct_array_key_completion(self):
1146 def test_struct_array_key_completion(self):
1147 """Test dict key completion applies to numpy struct arrays"""
1147 """Test dict key completion applies to numpy struct arrays"""
1148 import numpy
1148 import numpy
1149
1149
1150 ip = get_ipython()
1150 ip = get_ipython()
1151 complete = ip.Completer.complete
1151 complete = ip.Completer.complete
1152 ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
1152 ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
1153 _, matches = complete(line_buffer="d['")
1153 _, matches = complete(line_buffer="d['")
1154 self.assertIn("hello", matches)
1154 self.assertIn("hello", matches)
1155 self.assertIn("world", matches)
1155 self.assertIn("world", matches)
1156 # complete on the numpy struct itself
1156 # complete on the numpy struct itself
1157 dt = numpy.dtype(
1157 dt = numpy.dtype(
1158 [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
1158 [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
1159 )
1159 )
1160 x = numpy.zeros(2, dtype=dt)
1160 x = numpy.zeros(2, dtype=dt)
1161 ip.user_ns["d"] = x[1]
1161 ip.user_ns["d"] = x[1]
1162 _, matches = complete(line_buffer="d['")
1162 _, matches = complete(line_buffer="d['")
1163 self.assertIn("my_head", matches)
1163 self.assertIn("my_head", matches)
1164 self.assertIn("my_data", matches)
1164 self.assertIn("my_data", matches)
1165 # complete on a nested level
1165 # complete on a nested level
1166 with greedy_completion():
1166 with greedy_completion():
1167 ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
1167 ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
1168 _, matches = complete(line_buffer="d[1]['my_head']['")
1168 _, matches = complete(line_buffer="d[1]['my_head']['")
1169 self.assertTrue(any(["my_dt" in m for m in matches]))
1169 self.assertTrue(any(["my_dt" in m for m in matches]))
1170 self.assertTrue(any(["my_df" in m for m in matches]))
1170 self.assertTrue(any(["my_df" in m for m in matches]))
1171
1171
1172 @dec.skip_without("pandas")
1172 @dec.skip_without("pandas")
1173 def test_dataframe_key_completion(self):
1173 def test_dataframe_key_completion(self):
1174 """Test dict key completion applies to pandas DataFrames"""
1174 """Test dict key completion applies to pandas DataFrames"""
1175 import pandas
1175 import pandas
1176
1176
1177 ip = get_ipython()
1177 ip = get_ipython()
1178 complete = ip.Completer.complete
1178 complete = ip.Completer.complete
1179 ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
1179 ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
1180 _, matches = complete(line_buffer="d['")
1180 _, matches = complete(line_buffer="d['")
1181 self.assertIn("hello", matches)
1181 self.assertIn("hello", matches)
1182 self.assertIn("world", matches)
1182 self.assertIn("world", matches)
1183
1183
1184 def test_dict_key_completion_invalids(self):
1184 def test_dict_key_completion_invalids(self):
1185 """Smoke test cases dict key completion can't handle"""
1185 """Smoke test cases dict key completion can't handle"""
1186 ip = get_ipython()
1186 ip = get_ipython()
1187 complete = ip.Completer.complete
1187 complete = ip.Completer.complete
1188
1188
1189 ip.user_ns["no_getitem"] = None
1189 ip.user_ns["no_getitem"] = None
1190 ip.user_ns["no_keys"] = []
1190 ip.user_ns["no_keys"] = []
1191 ip.user_ns["cant_call_keys"] = dict
1191 ip.user_ns["cant_call_keys"] = dict
1192 ip.user_ns["empty"] = {}
1192 ip.user_ns["empty"] = {}
1193 ip.user_ns["d"] = {"abc": 5}
1193 ip.user_ns["d"] = {"abc": 5}
1194
1194
1195 _, matches = complete(line_buffer="no_getitem['")
1195 _, matches = complete(line_buffer="no_getitem['")
1196 _, matches = complete(line_buffer="no_keys['")
1196 _, matches = complete(line_buffer="no_keys['")
1197 _, matches = complete(line_buffer="cant_call_keys['")
1197 _, matches = complete(line_buffer="cant_call_keys['")
1198 _, matches = complete(line_buffer="empty['")
1198 _, matches = complete(line_buffer="empty['")
1199 _, matches = complete(line_buffer="name_error['")
1199 _, matches = complete(line_buffer="name_error['")
1200 _, matches = complete(line_buffer="d['\\") # incomplete escape
1200 _, matches = complete(line_buffer="d['\\") # incomplete escape
1201
1201
    def test_object_key_completion(self):
        ip = get_ipython()
        ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])

        _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
        self.assertIn("qwerty", matches)
        self.assertIn("qwick", matches)

    def test_class_key_completion(self):
        ip = get_ipython()
        NamedInstanceClass("qwerty")
        NamedInstanceClass("qwick")
        ip.user_ns["named_instance_class"] = NamedInstanceClass

        _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
        self.assertIn("qwerty", matches)
        self.assertIn("qwick", matches)

    def test_tryimport(self):
        """
        Test that try_import doesn't crash on a trailing dot, and that it imports modules beforehand.
        """
        from IPython.core.completerlib import try_import

        assert try_import("IPython.")

    def test_aimport_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("i", "%aimport i")
        self.assertIn("io", matches)
        self.assertNotIn("int", matches)

    def test_nested_import_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete(None, "import IPython.co", 17)
        self.assertIn("IPython.core", matches)
        self.assertNotIn("import IPython.core", matches)
        self.assertNotIn("IPython.display", matches)

    def test_import_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("i", "import i")
        self.assertIn("io", matches)
        self.assertNotIn("int", matches)

    def test_from_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("B", "from io import B", 16)
        self.assertIn("BytesIO", matches)
        self.assertNotIn("BaseException", matches)

    def test_snake_case_completion(self):
        ip = get_ipython()
        ip.Completer.use_jedi = False
        ip.user_ns["some_three"] = 3
        ip.user_ns["some_four"] = 4
        _, matches = ip.complete("s_", "print(s_f")
        self.assertIn("some_three", matches)
        self.assertIn("some_four", matches)

    def test_mix_terms(self):
        ip = get_ipython()
        from textwrap import dedent

        ip.Completer.use_jedi = False
        ip.ex(
            dedent(
                """
                class Test:
                    def meth(self, meth_arg1):
                        print("meth")

                    def meth_1(self, meth1_arg1, meth1_arg2):
                        print("meth1")

                    def meth_2(self, meth2_arg1, meth2_arg2):
                        print("meth2")
                test = Test()
                """
            )
        )
        _, matches = ip.complete(None, "test.meth(")
        self.assertIn("meth_arg1=", matches)
        self.assertNotIn("meth2_arg1=", matches)

    def test_percent_symbol_restrict_to_magic_completions(self):
        ip = get_ipython()
        completer = ip.Completer
        text = "%a"

        with provisionalcompleter():
            completer.use_jedi = True
            completions = completer.completions(text, len(text))
            for c in completions:
                self.assertEqual(c.text[0], "%")

    def test_dict_key_restrict_to_dicts(self):
        """Test that dict key completion suppresses non-dict completion items"""
        ip = get_ipython()
        c = ip.Completer
        d = {"abc": None}
        ip.user_ns["d"] = d

        text = 'd["a'

        def _():
            with provisionalcompleter():
                c.use_jedi = True
                return [
                    completion.text for completion in c.completions(text, len(text))
                ]

        completions = _()
        self.assertEqual(completions, ["abc"])

        # check that it can be disabled in a granular manner:
        cfg = Config()
        cfg.IPCompleter.suppress_competing_matchers = {
            "IPCompleter.dict_key_matcher": False
        }
        c.update_config(cfg)

        completions = _()
        self.assertIn("abc", completions)
        self.assertGreater(len(completions), 1)

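    # Illustrative sketch (hypothetical snippet, not part of this test module):
    # the granular override exercised above maps onto a user-level profile
    # configuration roughly as follows, assuming the standard traitlets
    # `ipython_config.py` mechanism:
    #
    #     c = get_config()
    #     # Let other matchers keep contributing suggestions even when the
    #     # dict-key matcher has results, instead of suppressing them.
    #     c.IPCompleter.suppress_competing_matchers = {
    #         "IPCompleter.dict_key_matcher": False,
    #     }
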
    def test_matcher_suppression(self):
        @completion_matcher(identifier="a_matcher")
        def a_matcher(text):
            return ["completion_a"]

        @completion_matcher(identifier="b_matcher", api_version=2)
        def b_matcher(context: CompletionContext):
            text = context.token
            result = {"completions": [SimpleCompletion("completion_b")]}

            if text == "suppress c":
                result["suppress"] = {"c_matcher"}

            if text.startswith("suppress all"):
                result["suppress"] = True
                if text == "suppress all but c":
                    result["do_not_suppress"] = {"c_matcher"}
                if text == "suppress all but a":
                    result["do_not_suppress"] = {"a_matcher"}

            return result

        @completion_matcher(identifier="c_matcher")
        def c_matcher(text):
            return ["completion_c"]

        with custom_matchers([a_matcher, b_matcher, c_matcher]):
            ip = get_ipython()
            c = ip.Completer

            def _(text, expected):
                c.use_jedi = False
                s, matches = c.complete(text)
                self.assertEqual(expected, matches)

            _("do not suppress", ["completion_a", "completion_b", "completion_c"])
            _("suppress all", ["completion_b"])
            _("suppress all but a", ["completion_a", "completion_b"])
            _("suppress all but c", ["completion_b", "completion_c"])

            def configure(suppression_config):
                cfg = Config()
                cfg.IPCompleter.suppress_competing_matchers = suppression_config
                c.update_config(cfg)

            # test that configuration takes priority over the run-time decisions

            configure(False)
            _("suppress all", ["completion_a", "completion_b", "completion_c"])

            configure({"b_matcher": False})
            _("suppress all", ["completion_a", "completion_b", "completion_c"])

            configure({"a_matcher": False})
            _("suppress all", ["completion_b"])

            configure({"b_matcher": True})
            _("do not suppress", ["completion_b"])

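    # Illustrative sketch (hypothetical matcher, not part of this test module):
    # a version-2 matcher written outside the test suite follows the same
    # protocol exercised above -- it returns a dict of completions plus optional
    # "suppress" / "do_not_suppress" hints, and a `suppress_competing_matchers`
    # configuration entry still takes precedence over those hints:
    #
    #     @completion_matcher(identifier="exclusive_matcher", api_version=2)
    #     def exclusive_matcher(context: CompletionContext):
    #         # Ask IPython to drop results from every other matcher
    #         # for this completion request.
    #         return {
    #             "completions": [SimpleCompletion("only_me")],
    #             "suppress": True,
    #         }
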
    def test_matcher_disabling(self):
        @completion_matcher(identifier="a_matcher")
        def a_matcher(text):
            return ["completion_a"]

        @completion_matcher(identifier="b_matcher")
        def b_matcher(text):
            return ["completion_b"]

        def _(expected):
            s, matches = c.complete("completion_")
            self.assertEqual(expected, matches)

        with custom_matchers([a_matcher, b_matcher]):
            ip = get_ipython()
            c = ip.Completer

            _(["completion_a", "completion_b"])

            cfg = Config()
            cfg.IPCompleter.disable_matchers = ["b_matcher"]
            c.update_config(cfg)

            _(["completion_a"])

            cfg.IPCompleter.disable_matchers = []
            c.update_config(cfg)

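    # Illustrative sketch (hypothetical snippet, not part of this test module):
    # outside the test suite the same switch can be set from a profile file,
    # assuming the standard `ipython_config.py` mechanism; "b_matcher" stands
    # in for whatever matcher identifier should be turned off:
    #
    #     c = get_config()
    #     c.IPCompleter.disable_matchers = ["b_matcher"]
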
    def test_matcher_priority(self):
        @completion_matcher(identifier="a_matcher", priority=0, api_version=2)
        def a_matcher(text):
            return {"completions": [SimpleCompletion("completion_a")], "suppress": True}

        @completion_matcher(identifier="b_matcher", priority=2, api_version=2)
        def b_matcher(text):
            return {"completions": [SimpleCompletion("completion_b")], "suppress": True}

        def _(expected):
            s, matches = c.complete("completion_")
            self.assertEqual(expected, matches)

        with custom_matchers([a_matcher, b_matcher]):
            ip = get_ipython()
            c = ip.Completer

            _(["completion_b"])
            a_matcher.matcher_priority = 3
            _(["completion_a"])