Add test for test_match_dict...
Corentin Cadiou
@@ -1,2256 +1,2256 @@
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now support a wide variety of completion mechanism both available
7 This module now support a wide variety of completion mechanism both available
8 for normal classic Python code, as well as completer for IPython specific
8 for normal classic Python code, as well as completer for IPython specific
9 Syntax like magics.
9 Syntax like magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends not only can complete your code, but can help
14 IPython and compatible frontends not only can complete your code, but can help
15 you to input a wide range of characters. In particular we allow you to insert
15 you to input a wide range of characters. In particular we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name, or unicode long description. To do so type a backslash follow by the
22 name, or unicode long description. To do so type a backslash follow by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 Ξ±
31 Ξ±
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\GREEK SMALL LETTER ALPHA<tab>
38 \\GREEK SMALL LETTER ALPHA<tab>
39 Ξ±
39 Ξ±
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrow or
42 Only valid Python identifiers will complete. Combining characters (like arrow or
43 dots) are also available, unlike latex they need to be put after the their
43 dots) are also available, unlike latex they need to be put after the their
44 counterpart that is to say, `F\\\\vec<tab>` is correct, not `\\\\vec<tab>F`.
44 counterpart that is to say, `F\\\\vec<tab>` is correct, not `\\\\vec<tab>F`.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometime challenging to know how to type a character, if you are using
51 It is sometime challenging to know how to type a character, if you are using
52 IPython, or any compatible frontend you can prepend backslash to the character
52 IPython, or any compatible frontend you can prepend backslash to the character
53 and press `<tab>` to expand it to its latex form.
53 and press `<tab>` to expand it to its latex form.
54
54
55 .. code::
55 .. code::
56
56
57 \\Ξ±<tab>
57 \\Ξ±<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 ``Completer.backslash_combining_completions`` option to ``False``.
62 ``Completer.backslash_combining_completions`` option to ``False``.
63
63
64
64
65 Experimental
65 Experimental
66 ============
66 ============
67
67
68 Starting with IPython 6.0, this module can make use of the Jedi library to
68 Starting with IPython 6.0, this module can make use of the Jedi library to
69 generate completions both using static analysis of the code, and dynamically
69 generate completions both using static analysis of the code, and dynamically
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
71 for Python. The APIs attached to this new mechanism is unstable and will
71 for Python. The APIs attached to this new mechanism is unstable and will
72 raise unless use in an :any:`provisionalcompleter` context manager.
72 raise unless use in an :any:`provisionalcompleter` context manager.
73
73
74 You will find that the following are experimental:
74 You will find that the following are experimental:
75
75
76 - :any:`provisionalcompleter`
76 - :any:`provisionalcompleter`
77 - :any:`IPCompleter.completions`
77 - :any:`IPCompleter.completions`
78 - :any:`Completion`
78 - :any:`Completion`
79 - :any:`rectify_completions`
79 - :any:`rectify_completions`
80
80
81 .. note::
81 .. note::
82
82
83 better name for :any:`rectify_completions` ?
83 better name for :any:`rectify_completions` ?
84
84
85 We welcome any feedback on these new API, and we also encourage you to try this
85 We welcome any feedback on these new API, and we also encourage you to try this
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
87 to have extra logging information if :any:`jedi` is crashing, or if current
87 to have extra logging information if :any:`jedi` is crashing, or if current
88 IPython completer pending deprecations are returning results not yet handled
88 IPython completer pending deprecations are returning results not yet handled
89 by :any:`jedi`
89 by :any:`jedi`
90
90
91 Using Jedi for tab completion allow snippets like the following to work without
91 Using Jedi for tab completion allow snippets like the following to work without
92 having to execute any code:
92 having to execute any code:
93
93
94 >>> myvar = ['hello', 42]
94 >>> myvar = ['hello', 42]
95 ... myvar[1].bi<tab>
95 ... myvar[1].bi<tab>
96
96
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
98 executing any code unlike the previously available ``IPCompleter.greedy``
98 executing any code unlike the previously available ``IPCompleter.greedy``
99 option.
99 option.
100
100
101 Be sure to update :any:`jedi` to the latest stable version or to try the
101 Be sure to update :any:`jedi` to the latest stable version or to try the
102 current development version to get better completions.
102 current development version to get better completions.
103 """
103 """


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org


import builtins as builtin_mod
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import time
import unicodedata
import uuid
import warnings
from contextlib import contextmanager
from importlib import import_module
from types import SimpleNamespace
from typing import Iterable, Iterator, List, Tuple, Union, Any, Sequence, Dict, NamedTuple, Pattern, Optional

from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.utils import generics
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import Bool, Enum, Int, List as ListTrait, Unicode, default, observe
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
skip_doctest = True

try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False
#-----------------------------------------------------------------------------
# Globals
#-----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be more
# fine-grained but is it worth it for performance? While unicode has characters
# in the range 0, 0x110000, we seem to have names for only about 10% of those
# (131808 as I write this). With the ranges below we cover them all, with a
# density of ~67%; the biggest next gap we consider would only add about 1%
# density, and there are 600 gaps that would need hard coding.
_UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]

# Public API
__all__ = ['Completer','IPCompleter']

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

_deprecation_readline_sentinel = object()


class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in a :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)

@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things() # works

    >>> completer.do_experimental_things() # raises.

    .. note:: Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should
        report a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements
        on any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False
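

# --- Illustrative sketch (added for this review, not part of the module) -----
# A hypothetical check of how has_open_quotes() behaves on a few made-up
# inputs; the helper name below is ours, only the function above is real.
def _example_has_open_quotes():
    assert has_open_quotes('print("hello') == '"'   # one unmatched double quote
    assert has_open_quotes("it's fine") == "'"      # odd number of single quotes
    assert has_open_quotes('say("hi")') is False    # everything is balanced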


def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s


def expand_user(path:str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path)-1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val


def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
    """Does the opposite of expand_user, with its outputs.
    """
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path
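

# --- Illustrative sketch (added for this review, not part of the module) -----
# A hypothetical round trip showing how expand_user() and compress_user() fit
# together; the helper name and the '~/notebooks' value are made up.
def _example_expand_compress_user():
    newpath, expanded, tilde_val = expand_user('~/notebooks')
    # compress_user() restores the original '~'-style spelling.
    assert compress_user(newpath, expanded, tilde_val) == '~/notebooks'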


def completions_sorting_key(word):
    """key for sorting completions

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in the alphabetical order
      by their name
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if not "%" in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if not "%" in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2
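

# --- Illustrative sketch (added for this review, not part of the module) -----
# A hypothetical example of the resulting ordering: magics are alphabetised by
# their bare name, while names starting with underscores sink to the end.
def _example_completions_sorting_key():
    words = ['_private', 'zeta', '%%timeit', 'alpha', '__dunder']
    assert sorted(words, key=completions_sorting_key) == [
        'alpha', '%%timeit', 'zeta', '_private', '__dunder']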


class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0 so should likely be removed for 7.0

    """

    def __init__(self, name):

        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ''
        self._origin = 'fake'

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning:: Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in a proper context manager.

    This acts as a middle ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how
    the code should be run/inspected, PromptToolkit (and other frontends) mostly
    need user-facing information.

    - Which range should be replaced by what.
    - Some metadata (like completion type), or meta information to display to
      the user.

    For debugging purposes we can also store the origin of the completion (``jedi``,
    ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
        warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
                      "It may change without warnings. "
                      "Use in corresponding context manager.",
                      category=ProvisionalCompleterWarning, stacklevel=2)

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
                (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other) -> bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer the type), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing, as it depends on surrounding text, which Completions are not
        aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))
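

# --- Illustrative sketch (added for this review, not part of the module) -----
# A hypothetical demonstration of the equality/hash contract documented in
# Completion.__eq__: the ``type`` field is ignored, so the two objects below
# compare equal and de-duplicate. The helper name and values are made up.
def _example_completion_equality():
    with provisionalcompleter():
        a = Completion(start=0, end=2, text='foo')
        b = Completion(start=0, end=2, text='foo', type='function')
        assert a == b            # type does not take part in equality
        assert len({a, b}) == 1  # nor in hashing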


_IC = Iterable[Completion]


def _deduplicate_completions(text: str, completions: _IC)-> _IC:
    """
    Deduplicate a set of completions.

    .. warning:: Unstable

        This function is unstable, API may change without warning.

    Parameters
    ----------
    text: str
        text that should be completed.
    completions: Iterator[Completion]
        iterator over the completions to deduplicate

    Yields
    ------
    `Completions` objects


    Completions coming from multiple sources may be different but end up having
    the same effect when applied to ``text``. If this is the case, this will
    consider completions as equal and only emit the first encountered.

    Not folded in `completions()` yet for debugging purposes, and to detect when
    the IPython completer does return things that Jedi does not, but it should be
    at some point.
    """
    completions = list(completions)
    if not completions:
        return

    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)

    seen = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            yield c
            seen.add(new_text)


def rectify_completions(text: str, completions: _IC, *, _debug=False)->_IC:
    """
    Rectify a set of completions to all have the same ``start`` and ``end``

    .. warning:: Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in a proper context manager.

    Parameters
    ----------
    text: str
        text that should be completed.
    completions: Iterator[Completion]
        iterator over the completions to rectify


    :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
    the Jupyter Protocol requires them to behave like so. This will readjust
    the completions to have the same ``start`` and ``end`` by padding both
    extremities with surrounding text.

    During stabilisation this should support a ``_debug`` option to log which
    completions are returned by the IPython completer but not found in Jedi, in
    order to make upstream bug reports.
    """
    warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
                  "It may change without warnings. "
                  "Use in corresponding context manager.",
                  category=ProvisionalCompleterWarning, stacklevel=2)

    completions = list(completions)
    if not completions:
        return
    starts = (c.start for c in completions)
    ends = (c.end for c in completions)

    new_start = min(starts)
    new_end = max(ends)

    seen_jedi = set()
    seen_python_matches = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if c._origin == 'jedi':
            seen_jedi.add(new_text)
        elif c._origin == 'IPCompleter.python_matches':
            seen_python_matches.add(new_text)
        yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
    diff = seen_python_matches.difference(seen_jedi)
    if diff and _debug:
        print('IPython.python matches have extras:', diff)
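

# --- Illustrative sketch (added for this review, not part of the module) -----
# A hypothetical run of rectify_completions(): two completions that replace
# different ranges of 'ab.fo' come back sharing start=0 and end=5, padded with
# the surrounding text. The helper name and values are made up.
def _example_rectify_completions():
    with provisionalcompleter():
        text = 'ab.fo'
        completions = [
            Completion(start=3, end=5, text='food'),
            Completion(start=0, end=5, text='ab.foot'),
        ]
        rectified = list(rectify_completions(text, completions))
        assert all(c.start == 0 and c.end == 5 for c in rectified)
        assert [c.text for c in rectified] == ['ab.food', 'ab.foot']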


if sys.platform == 'win32':
    DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
else:
    DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'

GREEDY_DELIMS = ' =\r\n'


class CompletionSplitter(object):
    """An object to split an input line in a manner similar to readline.

    By having our own implementation, we can expose readline-like completion in
    a uniform manner to all frontends. This object only needs to be given the
    line of text to be split and the cursor position on said line, and it
    returns the 'word' to be completed on at the cursor after splitting the
    entire line.

    What characters are used as splitting delimiters can be controlled by
    setting the ``delims`` attribute (this is a property that internally
    automatically builds the necessary regular expression)"""

    # Private interface

    # A string of delimiter characters. The default value makes sense for
    # IPython's most typical usage patterns.
    _delims = DELIMS

    # The expression (a normal string) to be compiled into a regular expression
    # for actual splitting. We store it as an attribute mostly for ease of
    # debugging, since this type of code can be so tricky to debug.
    _delim_expr = None

    # The regular expression that does the actual splitting
    _delim_re = None

    def __init__(self, delims=None):
        delims = CompletionSplitter._delims if delims is None else delims
        self.delims = delims

    @property
    def delims(self):
        """Return the string of delimiter characters."""
        return self._delims

    @delims.setter
    def delims(self, delims):
        """Set the delimiters for line splitting."""
        expr = '[' + ''.join('\\'+ c for c in delims) + ']'
        self._delim_re = re.compile(expr)
        self._delims = delims
        self._delim_expr = expr

    def split_line(self, line, cursor_pos=None):
        """Split a line of text with a cursor at the given position.
        """
        l = line if cursor_pos is None else line[:cursor_pos]
        return self._delim_re.split(l)[-1]


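# --- Illustrative sketch (added for this review, not part of the module) -----
# A hypothetical use of CompletionSplitter: only the text left of the cursor is
# considered, and the last delimiter-separated token is returned. Inputs are
# made up for illustration.
def _example_completion_splitter():
    splitter = CompletionSplitter()
    assert splitter.split_line('run foo.py bar') == 'bar'
    assert splitter.split_line('print(myvar)', cursor_pos=11) == 'myvar'

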
class Completer(Configurable):

    greedy = Bool(False,
        help="""Activate greedy completion
        PENDING DEPRECATION. This is now mostly taken care of with Jedi.

        This will enable completion on elements of lists, results of function calls, etc.,
        but can be unsafe because the code is actually evaluated on TAB.
        """
    ).tag(config=True)

    use_jedi = Bool(default_value=JEDI_INSTALLED,
                    help="Experimental: Use Jedi to generate autocompletions. "
                         "Defaults to True if jedi is installed.").tag(config=True)

    jedi_compute_type_timeout = Int(default_value=400,
        help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
        Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
        performance by preventing jedi from building its cache.
        """).tag(config=True)

    debug = Bool(default_value=False,
                 help='Enable debug for the Completer. Mostly print extra '
                      'information for experimental jedi integration.')\
                      .tag(config=True)

    backslash_combining_completions = Bool(True,
        help="Enable unicode completions, e.g. \\alpha<tab> . "
             "Includes completion of latex commands, unicode names, and expanding "
             "unicode characters back to latex commands.").tag(config=True)


    def __init__(self, namespace=None, global_namespace=None, **kwargs):
        """Create a new completer for the command line.

        Completer(namespace=ns, global_namespace=ns2) -> completer instance.

        If unspecified, the default namespace where completions are performed
        is __main__ (technically, __main__.__dict__). Namespaces should be
        given as dictionaries.

        An optional second namespace can be given. This allows the completer
        to handle cases where both the local and global scopes need to be
        distinguished.
        """

        # Don't bind to namespace quite yet, but flag whether the user wants a
        # specific namespace or to use __main__.__dict__. This will allow us
        # to bind to __main__.__dict__ at completion time, not now.
        if namespace is None:
            self.use_main_ns = True
        else:
            self.use_main_ns = False
            self.namespace = namespace

        # The global namespace, if given, can be bound directly
        if global_namespace is None:
            self.global_namespace = {}
        else:
            self.global_namespace = global_namespace

        self.custom_matchers = []

        super(Completer, self).__init__(**kwargs)

    def complete(self, text, state):
        """Return the next possible completion for 'text'.

        This is called successively with state == 0, 1, 2, ... until it
        returns None. The completion should begin with 'text'.

        """
        if self.use_main_ns:
            self.namespace = __main__.__dict__

        if state == 0:
            if "." in text:
                self.matches = self.attr_matches(text)
            else:
                self.matches = self.global_matches(text)
        try:
            return self.matches[state]
        except IndexError:
            return None

    def global_matches(self, text):
        """Compute matches when text is a simple name.

        Return a list of all keywords, built-in functions and names currently
        defined in self.namespace or self.global_namespace that match.

        """
        matches = []
        match_append = matches.append
        n = len(text)
        for lst in [keyword.kwlist,
                    builtin_mod.__dict__.keys(),
                    self.namespace.keys(),
                    self.global_namespace.keys()]:
            for word in lst:
                if word[:n] == text and word != "__builtins__":
                    match_append(word)

        snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
        for lst in [self.namespace.keys(),
                    self.global_namespace.keys()]:
            shortened = {"_".join([sub[0] for sub in word.split('_')]) : word
                         for word in lst if snake_case_re.match(word)}
            for word in shortened.keys():
                if word[:n] == text and word != "__builtins__":
                    match_append(shortened[word])
        return matches

    def attr_matches(self, text):
        """Compute matches when text contains a dot.

        Assuming the text is of the form NAME.NAME....[NAME], and is
        evaluatable in self.namespace or self.global_namespace, it will be
        evaluated and its attributes (as revealed by dir()) are used as
        possible completions. (For class instances, class members are
        also considered.)

        WARNING: this can still invoke arbitrary C code, if an object
        with a __getattr__ hook is evaluated.

        """

        # Another option, seems to work great. Catches things like ''.<tab>
        m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)

        if m:
            expr, attr = m.group(1, 3)
        elif self.greedy:
            m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
            if not m2:
                return []
            expr, attr = m2.group(1,2)
        else:
            return []

        try:
            obj = eval(expr, self.namespace)
        except:
            try:
                obj = eval(expr, self.global_namespace)
            except:
                return []

        if self.limit_to__all__ and hasattr(obj, '__all__'):
            words = get__all__entries(obj)
        else:
            words = dir2(obj)

        try:
            words = generics.complete_object(obj, words)
        except TryNext:
            pass
        except AssertionError:
            raise
        except Exception:
            # Silence errors from completion function
            #raise # dbg
            pass
        # Build match list to return
        n = len(attr)
        return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]


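# --- Illustrative sketch (added for this review, not part of the module) -----
# A hypothetical look at the base Completer against a made-up namespace: plain
# prefix matching, plus the snake_case-initials shorthand implemented in
# global_matches() above.
def _example_completer_matching():
    c = Completer(namespace={'my_variable': []})
    assert c.complete('my_va', 0) == 'my_variable'
    # snake_case names are also reachable through their initials ("m_v").
    assert c.global_matches('m_v') == ['my_variable']

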
def get__all__entries(obj):
    """returns the strings in the __all__ attribute"""
    try:
        words = getattr(obj, '__all__')
    except:
        return []

    return [w for w in words if isinstance(w, str)]


def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
                    extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
    """Used by dict_key_matches, matching the prefix to a list of keys

    Parameters
    ==========
    keys:
        list of keys in dictionary currently being completed.
    prefix:
        Part of the text already typed by the user. E.g. `mydict[b'fo`
    delims:
        String of delimiters to consider when finding the current key.
    extra_prefix: optional
        Part of the text already typed in multi-key index cases. E.g. for
        `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.

    Returns
    =======

    A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
    ``quote`` being the quote that needs to be used to close the current string.
    ``token_start`` the position where the replacement should start occurring,
    ``matched`` a list of replacement/completion

    """
    prefix_tuple = extra_prefix if extra_prefix else ()
    Nprefix = len(prefix_tuple)
    def filter_by_prefix_tuple(key):
        if len(key) <= Nprefix:
            return False
        for k, pt in zip(key, prefix_tuple):
            if k != pt:
                return False
        else:
            return True

    filtered_keys:List[Union[str,bytes]] = []
    def _add_to_filtered_keys(key):
        if isinstance(key, (str, bytes)):
            filtered_keys.append(key)

    for k in keys:
        if isinstance(k, tuple):
            if filter_by_prefix_tuple(k):
                _add_to_filtered_keys(k[Nprefix])
        else:
            _add_to_filtered_keys(k)

    if not prefix:
        return '', 0, [repr(k) for k in filtered_keys]
    quote_match = re.search('["\']', prefix)
    assert quote_match is not None # silence mypy
    quote = quote_match.group()
    try:
        prefix_str = eval(prefix + quote, {})
    except Exception:
        return '', 0, []

    pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
    token_match = re.search(pattern, prefix, re.UNICODE)
    assert token_match is not None # silence mypy
    token_start = token_match.start()
    token_prefix = token_match.group()

    matched:List[str] = []
    for key in filtered_keys:
        try:
            if not key.startswith(prefix_str):
                continue
        except (AttributeError, TypeError, UnicodeError):
            # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
            continue

        # reformat remainder of key to begin with prefix
        rem = key[len(prefix_str):]
        # force repr wrapped in '
        rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
        rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
        if quote == '"':
            # The entered prefix is quoted with ",
            # but the match is quoted with '.
            # A contained " hence needs escaping for comparison:
            rem_repr = rem_repr.replace('"', '\\"')

        # then reinsert prefix from start of token
        matched.append('%s%s' % (token_prefix, rem_repr))
    return quote, token_start, matched


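# --- Illustrative sketch (added for this review, not part of the module) -----
# Hypothetical calls showing what match_dict_keys() returns, including the
# tuple-key case exercised by the new test; the keys, prefixes and the DELIMS
# argument below are made up for illustration.
def _example_match_dict_keys():
    # Completing `d['f<tab>`: only the string key starting with 'f' survives.
    assert match_dict_keys(['foo', b'bar'], "'f", DELIMS) == ("'", 1, ['foo'])
    # Completing the second element of a tuple key, as in `d['qux', 'qu<tab>`:
    assert match_dict_keys([('qux', 'quux'), ('nope', 'x')], "'qu", DELIMS,
                           extra_prefix=('qux',)) == ("'", 1, ['quux'])

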
845 def cursor_to_position(text:str, line:int, column:int)->int:
845 def cursor_to_position(text:str, line:int, column:int)->int:
846 """
846 """
847
847
848 Convert the (line,column) position of the cursor in text to an offset in a
848 Convert the (line,column) position of the cursor in text to an offset in a
849 string.
849 string.
850
850
851 Parameters
851 Parameters
852 ----------
852 ----------
853
853
854 text : str
854 text : str
855 The text in which to calculate the cursor offset
855 The text in which to calculate the cursor offset
856 line : int
856 line : int
857 Line of the cursor; 0-indexed
857 Line of the cursor; 0-indexed
858 column : int
858 column : int
859 Column of the cursor 0-indexed
859 Column of the cursor 0-indexed
860
860
861 Return
861 Return
862 ------
862 ------
863 Position of the cursor in ``text``, 0-indexed.
863 Position of the cursor in ``text``, 0-indexed.
864
864
865 See Also
865 See Also
866 --------
866 --------
867 position_to_cursor: reciprocal of this function
867 position_to_cursor: reciprocal of this function
868
868
869 """
869 """
870 lines = text.split('\n')
870 lines = text.split('\n')
871 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
871 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
872
872
873 return sum(len(l) + 1 for l in lines[:line]) + column
873 return sum(len(l) + 1 for l in lines[:line]) + column
874
874
875 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
875 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
876 """
876 """
877 Convert the position of the cursor in text (0 indexed) to a line
877 Convert the position of the cursor in text (0 indexed) to a line
878 number(0-indexed) and a column number (0-indexed) pair
878 number(0-indexed) and a column number (0-indexed) pair
879
879
880 Position should be a valid position in ``text``.
880 Position should be a valid position in ``text``.
881
881
882 Parameters
882 Parameters
883 ----------
883 ----------
884
884
885 text : str
885 text : str
886 The text in which to calculate the cursor offset
886 The text in which to calculate the cursor offset
887 offset : int
887 offset : int
888 Position of the cursor in ``text``, 0-indexed.
888 Position of the cursor in ``text``, 0-indexed.
889
889
890 Return
890 Return
891 ------
891 ------
892 (line, column) : (int, int)
892 (line, column) : (int, int)
893 Line of the cursor; 0-indexed, column of the cursor 0-indexed
893 Line of the cursor; 0-indexed, column of the cursor 0-indexed
894
894
895
895
896 See Also
896 See Also
897 --------
897 --------
898 cursor_to_position : reciprocal of this function
898 cursor_to_position : reciprocal of this function
899
899
900
900
901 """
901 """
902
902
903 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
903 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
904
904
905 before = text[:offset]
905 before = text[:offset]
906 blines = before.split('\n') # ! splitlines trims the trailing \n
906 blines = before.split('\n') # ! splitlines trims the trailing \n
907 line = before.count('\n')
907 line = before.count('\n')
908 col = len(blines[-1])
908 col = len(blines[-1])
909 return line, col
909 return line, col
910
910
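# A minimal illustration of the two reciprocal helpers above (example values
# chosen for this note, not taken from the original module):
#
#     cursor_to_position("ab\ncd", 1, 1)   # -> 4  (just before "d")
#     position_to_cursor("ab\ncd", 4)      # -> (1, 1)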
911
911
912 def _safe_isinstance(obj, module, class_name):
912 def _safe_isinstance(obj, module, class_name):
913 """Checks if obj is an instance of module.class_name if loaded
913 """Checks if obj is an instance of module.class_name if loaded
914 """
914 """
915 return (module in sys.modules and
915 return (module in sys.modules and
916 isinstance(obj, getattr(import_module(module), class_name)))
916 isinstance(obj, getattr(import_module(module), class_name)))
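
# For example (illustrative only): _safe_isinstance(obj, 'numpy', 'ndarray')
# is True only when numpy is already present in sys.modules and obj is an
# ndarray; it never imports numpy itself.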
917
917
918 def back_unicode_name_matches(text:str) -> Tuple[str, Sequence[str]]:
918 def back_unicode_name_matches(text:str) -> Tuple[str, Sequence[str]]:
919 """Match Unicode characters back to Unicode name
919 """Match Unicode characters back to Unicode name
920
920
921 This does ``β˜ƒ`` -> ``\\snowman``
921 This does ``β˜ƒ`` -> ``\\snowman``
922
922
923 Note that snowman is not a valid python3 combining character, but it will still be expanded.
923 Note that snowman is not a valid python3 combining character, but it will still be expanded.
924 It will not, however, be recombined back into the snowman character by the completion machinery.
924 It will not, however, be recombined back into the snowman character by the completion machinery.
925
925
926 This will also not back-complete standard escape sequences like \\n, \\b ...
926 This will also not back-complete standard escape sequences like \\n, \\b ...
927
927
928 Returns
928 Returns
929 =======
929 =======
930
930
931 Return a tuple with two elements:
931 Return a tuple with two elements:
932
932
933 - The Unicode character that was matched (preceded by a backslash), or an
933 - The Unicode character that was matched (preceded by a backslash), or an
934 empty string,
934 empty string,
935 - a sequence (of length 1) with the name of the matched Unicode character,
935 - a sequence (of length 1) with the name of the matched Unicode character,
936 preceded by a backslash, or an empty sequence if there is no match.
936 preceded by a backslash, or an empty sequence if there is no match.
937
937
938 """
938 """
939 if len(text)<2:
939 if len(text)<2:
940 return '', ()
940 return '', ()
941 maybe_slash = text[-2]
941 maybe_slash = text[-2]
942 if maybe_slash != '\\':
942 if maybe_slash != '\\':
943 return '', ()
943 return '', ()
944
944
945 char = text[-1]
945 char = text[-1]
946 # no expand on quote for completion in strings.
946 # no expand on quote for completion in strings.
947 # nor backcomplete standard ascii keys
947 # nor backcomplete standard ascii keys
948 if char in string.ascii_letters or char in ('"',"'"):
948 if char in string.ascii_letters or char in ('"',"'"):
949 return '', ()
949 return '', ()
950 try :
950 try :
951 unic = unicodedata.name(char)
951 unic = unicodedata.name(char)
952 return '\\'+char,('\\'+unic,)
952 return '\\'+char,('\\'+unic,)
953 except KeyError:
953 except KeyError:
954 pass
954 pass
955 return '', ()
955 return '', ()
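
# A hypothetical call, assuming a snowman character was just typed:
#
#     back_unicode_name_matches('x = \\β˜ƒ')   # -> ('\\β˜ƒ', ('\\SNOWMAN',))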
956
956
957 def back_latex_name_matches(text:str) -> Tuple[str, Sequence[str]] :
957 def back_latex_name_matches(text:str) -> Tuple[str, Sequence[str]] :
958 """Match latex characters back to unicode name
958 """Match latex characters back to unicode name
959
959
960 This does ``\\β„΅`` -> ``\\aleph``
960 This does ``\\β„΅`` -> ``\\aleph``
961
961
962 """
962 """
963 if len(text)<2:
963 if len(text)<2:
964 return '', ()
964 return '', ()
965 maybe_slash = text[-2]
965 maybe_slash = text[-2]
966 if maybe_slash != '\\':
966 if maybe_slash != '\\':
967 return '', ()
967 return '', ()
968
968
969
969
970 char = text[-1]
970 char = text[-1]
971 # no expand on quote for completion in strings.
971 # no expand on quote for completion in strings.
972 # nor backcomplete standard ascii keys
972 # nor backcomplete standard ascii keys
973 if char in string.ascii_letters or char in ('"',"'"):
973 if char in string.ascii_letters or char in ('"',"'"):
974 return '', ()
974 return '', ()
975 try :
975 try :
976 latex = reverse_latex_symbol[char]
976 latex = reverse_latex_symbol[char]
977 # '\\' replaces the \ as well
977 # '\\' replaces the \ as well
978 return '\\'+char,[latex]
978 return '\\'+char,[latex]
979 except KeyError:
979 except KeyError:
980 pass
980 pass
981 return '', ()
981 return '', ()
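
# An illustrative call (the exact mapping comes from reverse_latex_symbol):
#
#     back_latex_name_matches('x = \\β„΅')   # -> ('\\β„΅', ['\\aleph'])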
982
982
983
983
984 def _formatparamchildren(parameter) -> str:
984 def _formatparamchildren(parameter) -> str:
985 """
985 """
986 Get parameter name and value from Jedi Private API
986 Get parameter name and value from Jedi Private API
987
987
988 Jedi does not expose a simple way to get `param=value` from its API.
988 Jedi does not expose a simple way to get `param=value` from its API.
989
989
990 Parameter
990 Parameter
991 =========
991 =========
992
992
993 parameter:
993 parameter:
994 Jedi's function `Param`
994 Jedi's function `Param`
995
995
996 Returns
996 Returns
997 =======
997 =======
998
998
999 A string like 'a', 'b=1', '*args', '**kwargs'
999 A string like 'a', 'b=1', '*args', '**kwargs'
1000
1000
1001
1001
1002 """
1002 """
1003 description = parameter.description
1003 description = parameter.description
1004 if not description.startswith('param '):
1004 if not description.startswith('param '):
1005 raise ValueError('Jedi function parameter description has changed format. '
1005 raise ValueError('Jedi function parameter description has changed format. '
1006 'Expected "param ...", found %r.' % description)
1006 'Expected "param ...", found %r.' % description)
1007 return description[6:]
1007 return description[6:]
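
# Sketch of the behaviour above (the description strings are assumptions about
# Jedi's private output format, as the docstring notes):
#
#     'param b=1'      -> 'b=1'
#     'param **kwargs' -> '**kwargs'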
1008
1008
1009 def _make_signature(completion)-> str:
1009 def _make_signature(completion)-> str:
1010 """
1010 """
1011 Make the signature from a jedi completion
1011 Make the signature from a jedi completion
1012
1012
1013 Parameter
1013 Parameter
1014 =========
1014 =========
1015
1015
1016 completion: jedi.Completion
1016 completion: jedi.Completion
1017 object that completes a callable (function) type
1017 object that completes a callable (function) type
1018
1018
1019 Returns
1019 Returns
1020 =======
1020 =======
1021
1021
1022 a string consisting of the function signature, with the parenthesis but
1022 a string consisting of the function signature, with the parenthesis but
1023 without the function name. example:
1023 without the function name. example:
1024 `(a, *args, b=1, **kwargs)`
1024 `(a, *args, b=1, **kwargs)`
1025
1025
1026 """
1026 """
1027
1027
1028 # it looks like this might work on jedi 0.17
1028 # it looks like this might work on jedi 0.17
1029 if hasattr(completion, 'get_signatures'):
1029 if hasattr(completion, 'get_signatures'):
1030 signatures = completion.get_signatures()
1030 signatures = completion.get_signatures()
1031 if not signatures:
1031 if not signatures:
1032 return '(?)'
1032 return '(?)'
1033
1033
1034 c0 = signatures[0]
1034 c0 = signatures[0]
1035 return '('+c0.to_string().split('(', maxsplit=1)[1]
1035 return '('+c0.to_string().split('(', maxsplit=1)[1]
1036
1036
1037 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1037 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1038 for p in signature.defined_names()) if f])
1038 for p in signature.defined_names()) if f])
1039
1039
1040
1040
1041 class _CompleteResult(NamedTuple):
1041 class _CompleteResult(NamedTuple):
1042 matched_text : str
1042 matched_text : str
1043 matches: Sequence[str]
1043 matches: Sequence[str]
1044 matches_origin: Sequence[str]
1044 matches_origin: Sequence[str]
1045 jedi_matches: Any
1045 jedi_matches: Any
1046
1046
1047
1047
1048 class IPCompleter(Completer):
1048 class IPCompleter(Completer):
1049 """Extension of the completer class with IPython-specific features"""
1049 """Extension of the completer class with IPython-specific features"""
1050
1050
1051 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1051 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1052
1052
1053 @observe('greedy')
1053 @observe('greedy')
1054 def _greedy_changed(self, change):
1054 def _greedy_changed(self, change):
1055 """update the splitter and readline delims when greedy is changed"""
1055 """update the splitter and readline delims when greedy is changed"""
1056 if change['new']:
1056 if change['new']:
1057 self.splitter.delims = GREEDY_DELIMS
1057 self.splitter.delims = GREEDY_DELIMS
1058 else:
1058 else:
1059 self.splitter.delims = DELIMS
1059 self.splitter.delims = DELIMS
1060
1060
1061 dict_keys_only = Bool(False,
1061 dict_keys_only = Bool(False,
1062 help="""Whether to show dict key matches only""")
1062 help="""Whether to show dict key matches only""")
1063
1063
1064 merge_completions = Bool(True,
1064 merge_completions = Bool(True,
1065 help="""Whether to merge completion results into a single list
1065 help="""Whether to merge completion results into a single list
1066
1066
1067 If False, only the completion results from the first non-empty
1067 If False, only the completion results from the first non-empty
1068 completer will be returned.
1068 completer will be returned.
1069 """
1069 """
1070 ).tag(config=True)
1070 ).tag(config=True)
1071 omit__names = Enum((0,1,2), default_value=2,
1071 omit__names = Enum((0,1,2), default_value=2,
1072 help="""Instruct the completer to omit private method names
1072 help="""Instruct the completer to omit private method names
1073
1073
1074 Specifically, when completing on ``object.<tab>``.
1074 Specifically, when completing on ``object.<tab>``.
1075
1075
1076 When 2 [default]: all names that start with '_' will be excluded.
1076 When 2 [default]: all names that start with '_' will be excluded.
1077
1077
1078 When 1: all 'magic' names (``__foo__``) will be excluded.
1078 When 1: all 'magic' names (``__foo__``) will be excluded.
1079
1079
1080 When 0: nothing will be excluded.
1080 When 0: nothing will be excluded.
1081 """
1081 """
1082 ).tag(config=True)
1082 ).tag(config=True)
1083 limit_to__all__ = Bool(False,
1083 limit_to__all__ = Bool(False,
1084 help="""
1084 help="""
1085 DEPRECATED as of version 5.0.
1085 DEPRECATED as of version 5.0.
1086
1086
1087 Instruct the completer to use __all__ for the completion
1087 Instruct the completer to use __all__ for the completion
1088
1088
1089 Specifically, when completing on ``object.<tab>``.
1089 Specifically, when completing on ``object.<tab>``.
1090
1090
1091 When True: only those names in obj.__all__ will be included.
1091 When True: only those names in obj.__all__ will be included.
1092
1092
1093 When False [default]: the __all__ attribute is ignored
1093 When False [default]: the __all__ attribute is ignored
1094 """,
1094 """,
1095 ).tag(config=True)
1095 ).tag(config=True)
1096
1096
1097 profile_completions = Bool(
1097 profile_completions = Bool(
1098 default_value=False,
1098 default_value=False,
1099 help="If True, emit profiling data for completion subsystem using cProfile."
1099 help="If True, emit profiling data for completion subsystem using cProfile."
1100 ).tag(config=True)
1100 ).tag(config=True)
1101
1101
1102 profiler_output_dir = Unicode(
1102 profiler_output_dir = Unicode(
1103 default_value=".completion_profiles",
1103 default_value=".completion_profiles",
1104 help="Template for path at which to output profile data for completions."
1104 help="Template for path at which to output profile data for completions."
1105 ).tag(config=True)
1105 ).tag(config=True)
1106
1106
1107 @observe('limit_to__all__')
1107 @observe('limit_to__all__')
1108 def _limit_to_all_changed(self, change):
1108 def _limit_to_all_changed(self, change):
1109 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1109 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1110 'value has been deprecated since IPython 5.0, will have no effect, '
1110 'value has been deprecated since IPython 5.0, will have no effect, '
1111 'and will be removed in a future version of IPython.',
1111 'and will be removed in a future version of IPython.',
1112 UserWarning)
1112 UserWarning)
1113
1113
1114 def __init__(self, shell=None, namespace=None, global_namespace=None,
1114 def __init__(self, shell=None, namespace=None, global_namespace=None,
1115 use_readline=_deprecation_readline_sentinel, config=None, **kwargs):
1115 use_readline=_deprecation_readline_sentinel, config=None, **kwargs):
1116 """IPCompleter() -> completer
1116 """IPCompleter() -> completer
1117
1117
1118 Return a completer object.
1118 Return a completer object.
1119
1119
1120 Parameters
1120 Parameters
1121 ----------
1121 ----------
1122
1122
1123 shell
1123 shell
1124 a pointer to the ipython shell itself. This is needed
1124 a pointer to the ipython shell itself. This is needed
1125 because this completer knows about magic functions, and those can
1125 because this completer knows about magic functions, and those can
1126 only be accessed via the ipython instance.
1126 only be accessed via the ipython instance.
1127
1127
1128 namespace : dict, optional
1128 namespace : dict, optional
1129 an optional dict where completions are performed.
1129 an optional dict where completions are performed.
1130
1130
1131 global_namespace : dict, optional
1131 global_namespace : dict, optional
1132 secondary optional dict for completions, to
1132 secondary optional dict for completions, to
1133 handle cases (such as IPython embedded inside functions) where
1133 handle cases (such as IPython embedded inside functions) where
1134 both Python scopes are visible.
1134 both Python scopes are visible.
1135
1135
1136 use_readline : bool, optional
1136 use_readline : bool, optional
1137 DEPRECATED, ignored since IPython 6.0, will have no effect
1137 DEPRECATED, ignored since IPython 6.0, will have no effect
1138 """
1138 """
1139
1139
1140 self.magic_escape = ESC_MAGIC
1140 self.magic_escape = ESC_MAGIC
1141 self.splitter = CompletionSplitter()
1141 self.splitter = CompletionSplitter()
1142
1142
1143 if use_readline is not _deprecation_readline_sentinel:
1143 if use_readline is not _deprecation_readline_sentinel:
1144 warnings.warn('The `use_readline` parameter is deprecated and ignored since IPython 6.0.',
1144 warnings.warn('The `use_readline` parameter is deprecated and ignored since IPython 6.0.',
1145 DeprecationWarning, stacklevel=2)
1145 DeprecationWarning, stacklevel=2)
1146
1146
1147 # _greedy_changed() depends on splitter and readline being defined:
1147 # _greedy_changed() depends on splitter and readline being defined:
1148 Completer.__init__(self, namespace=namespace, global_namespace=global_namespace,
1148 Completer.__init__(self, namespace=namespace, global_namespace=global_namespace,
1149 config=config, **kwargs)
1149 config=config, **kwargs)
1150
1150
1151 # List where completion matches will be stored
1151 # List where completion matches will be stored
1152 self.matches = []
1152 self.matches = []
1153 self.shell = shell
1153 self.shell = shell
1154 # Regexp to split filenames with spaces in them
1154 # Regexp to split filenames with spaces in them
1155 self.space_name_re = re.compile(r'([^\\] )')
1155 self.space_name_re = re.compile(r'([^\\] )')
1156 # Hold a local ref. to glob.glob for speed
1156 # Hold a local ref. to glob.glob for speed
1157 self.glob = glob.glob
1157 self.glob = glob.glob
1158
1158
1159 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1159 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1160 # buffers, to avoid completion problems.
1160 # buffers, to avoid completion problems.
1161 term = os.environ.get('TERM','xterm')
1161 term = os.environ.get('TERM','xterm')
1162 self.dumb_terminal = term in ['dumb','emacs']
1162 self.dumb_terminal = term in ['dumb','emacs']
1163
1163
1164 # Special handling of backslashes needed in win32 platforms
1164 # Special handling of backslashes needed in win32 platforms
1165 if sys.platform == "win32":
1165 if sys.platform == "win32":
1166 self.clean_glob = self._clean_glob_win32
1166 self.clean_glob = self._clean_glob_win32
1167 else:
1167 else:
1168 self.clean_glob = self._clean_glob
1168 self.clean_glob = self._clean_glob
1169
1169
1170 #regexp to parse docstring for function signature
1170 #regexp to parse docstring for function signature
1171 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1171 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1172 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1172 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1173 #use this if positional argument name is also needed
1173 #use this if positional argument name is also needed
1174 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1174 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1175
1175
1176 self.magic_arg_matchers = [
1176 self.magic_arg_matchers = [
1177 self.magic_config_matches,
1177 self.magic_config_matches,
1178 self.magic_color_matches,
1178 self.magic_color_matches,
1179 ]
1179 ]
1180
1180
1181 # This is set externally by InteractiveShell
1181 # This is set externally by InteractiveShell
1182 self.custom_completers = None
1182 self.custom_completers = None
1183
1183
1184 # This is a list of names of unicode characters that can be completed
1184 # This is a list of names of unicode characters that can be completed
1185 # into their corresponding unicode value. The list is large, so we
1185 # into their corresponding unicode value. The list is large, so we
1186 # lazily initialize it on first use. Consuming code should access this
1186 # lazily initialize it on first use. Consuming code should access this
1187 # attribute through the `@unicode_names` property.
1187 # attribute through the `@unicode_names` property.
1188 self._unicode_names = None
1188 self._unicode_names = None
1189
1189
1190 @property
1190 @property
1191 def matchers(self) -> List[Any]:
1191 def matchers(self) -> List[Any]:
1192 """All active matcher routines for completion"""
1192 """All active matcher routines for completion"""
1193 if self.dict_keys_only:
1193 if self.dict_keys_only:
1194 return [self.dict_key_matches]
1194 return [self.dict_key_matches]
1195
1195
1196 if self.use_jedi:
1196 if self.use_jedi:
1197 return [
1197 return [
1198 *self.custom_matchers,
1198 *self.custom_matchers,
1199 self.file_matches,
1199 self.file_matches,
1200 self.magic_matches,
1200 self.magic_matches,
1201 self.dict_key_matches,
1201 self.dict_key_matches,
1202 ]
1202 ]
1203 else:
1203 else:
1204 return [
1204 return [
1205 *self.custom_matchers,
1205 *self.custom_matchers,
1206 self.python_matches,
1206 self.python_matches,
1207 self.file_matches,
1207 self.file_matches,
1208 self.magic_matches,
1208 self.magic_matches,
1209 self.python_func_kw_matches,
1209 self.python_func_kw_matches,
1210 self.dict_key_matches,
1210 self.dict_key_matches,
1211 ]
1211 ]
1212
1212
1213 def all_completions(self, text:str) -> List[str]:
1213 def all_completions(self, text:str) -> List[str]:
1214 """
1214 """
1215 Wrapper around the completion methods for the benefit of emacs.
1215 Wrapper around the completion methods for the benefit of emacs.
1216 """
1216 """
1217 prefix = text.rpartition('.')[0]
1217 prefix = text.rpartition('.')[0]
1218 with provisionalcompleter():
1218 with provisionalcompleter():
1219 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1219 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1220 for c in self.completions(text, len(text))]
1220 for c in self.completions(text, len(text))]
1221
1221
1222 return self.complete(text)[1]
1222 return self.complete(text)[1]
1223
1223
1224 def _clean_glob(self, text:str):
1224 def _clean_glob(self, text:str):
1225 return self.glob("%s*" % text)
1225 return self.glob("%s*" % text)
1226
1226
1227 def _clean_glob_win32(self, text:str):
1227 def _clean_glob_win32(self, text:str):
1228 return [f.replace("\\","/")
1228 return [f.replace("\\","/")
1229 for f in self.glob("%s*" % text)]
1229 for f in self.glob("%s*" % text)]
1230
1230
1231 def file_matches(self, text:str)->List[str]:
1231 def file_matches(self, text:str)->List[str]:
1232 """Match filenames, expanding ~USER type strings.
1232 """Match filenames, expanding ~USER type strings.
1233
1233
1234 Most of the seemingly convoluted logic in this completer is an
1234 Most of the seemingly convoluted logic in this completer is an
1235 attempt to handle filenames with spaces in them. And yet it's not
1235 attempt to handle filenames with spaces in them. And yet it's not
1236 quite perfect, because Python's readline doesn't expose all of the
1236 quite perfect, because Python's readline doesn't expose all of the
1237 GNU readline details needed for this to be done correctly.
1237 GNU readline details needed for this to be done correctly.
1238
1238
1239 For a filename with a space in it, the printed completions will be
1239 For a filename with a space in it, the printed completions will be
1240 only the parts after what's already been typed (instead of the
1240 only the parts after what's already been typed (instead of the
1241 full completions, as is normally done). I don't think with the
1241 full completions, as is normally done). I don't think with the
1242 current (as of Python 2.3) Python readline it's possible to do
1242 current (as of Python 2.3) Python readline it's possible to do
1243 better."""
1243 better."""
1244
1244
1245 # chars that require escaping with backslash - i.e. chars
1245 # chars that require escaping with backslash - i.e. chars
1246 # that readline treats incorrectly as delimiters, but we
1246 # that readline treats incorrectly as delimiters, but we
1247 # don't want to treat as delimiters in filename matching
1247 # don't want to treat as delimiters in filename matching
1248 # when escaped with backslash
1248 # when escaped with backslash
1249 if text.startswith('!'):
1249 if text.startswith('!'):
1250 text = text[1:]
1250 text = text[1:]
1251 text_prefix = u'!'
1251 text_prefix = u'!'
1252 else:
1252 else:
1253 text_prefix = u''
1253 text_prefix = u''
1254
1254
1255 text_until_cursor = self.text_until_cursor
1255 text_until_cursor = self.text_until_cursor
1256 # track strings with open quotes
1256 # track strings with open quotes
1257 open_quotes = has_open_quotes(text_until_cursor)
1257 open_quotes = has_open_quotes(text_until_cursor)
1258
1258
1259 if '(' in text_until_cursor or '[' in text_until_cursor:
1259 if '(' in text_until_cursor or '[' in text_until_cursor:
1260 lsplit = text
1260 lsplit = text
1261 else:
1261 else:
1262 try:
1262 try:
1263 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1263 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1264 lsplit = arg_split(text_until_cursor)[-1]
1264 lsplit = arg_split(text_until_cursor)[-1]
1265 except ValueError:
1265 except ValueError:
1266 # typically an unmatched ", or backslash without escaped char.
1266 # typically an unmatched ", or backslash without escaped char.
1267 if open_quotes:
1267 if open_quotes:
1268 lsplit = text_until_cursor.split(open_quotes)[-1]
1268 lsplit = text_until_cursor.split(open_quotes)[-1]
1269 else:
1269 else:
1270 return []
1270 return []
1271 except IndexError:
1271 except IndexError:
1272 # tab pressed on empty line
1272 # tab pressed on empty line
1273 lsplit = ""
1273 lsplit = ""
1274
1274
1275 if not open_quotes and lsplit != protect_filename(lsplit):
1275 if not open_quotes and lsplit != protect_filename(lsplit):
1276 # if protectables are found, do matching on the whole escaped name
1276 # if protectables are found, do matching on the whole escaped name
1277 has_protectables = True
1277 has_protectables = True
1278 text0,text = text,lsplit
1278 text0,text = text,lsplit
1279 else:
1279 else:
1280 has_protectables = False
1280 has_protectables = False
1281 text = os.path.expanduser(text)
1281 text = os.path.expanduser(text)
1282
1282
1283 if text == "":
1283 if text == "":
1284 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1284 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1285
1285
1286 # Compute the matches from the filesystem
1286 # Compute the matches from the filesystem
1287 if sys.platform == 'win32':
1287 if sys.platform == 'win32':
1288 m0 = self.clean_glob(text)
1288 m0 = self.clean_glob(text)
1289 else:
1289 else:
1290 m0 = self.clean_glob(text.replace('\\', ''))
1290 m0 = self.clean_glob(text.replace('\\', ''))
1291
1291
1292 if has_protectables:
1292 if has_protectables:
1293 # If we had protectables, we need to revert our changes to the
1293 # If we had protectables, we need to revert our changes to the
1294 # beginning of filename so that we don't double-write the part
1294 # beginning of filename so that we don't double-write the part
1295 # of the filename we have so far
1295 # of the filename we have so far
1296 len_lsplit = len(lsplit)
1296 len_lsplit = len(lsplit)
1297 matches = [text_prefix + text0 +
1297 matches = [text_prefix + text0 +
1298 protect_filename(f[len_lsplit:]) for f in m0]
1298 protect_filename(f[len_lsplit:]) for f in m0]
1299 else:
1299 else:
1300 if open_quotes:
1300 if open_quotes:
1301 # if we have a string with an open quote, we don't need to
1301 # if we have a string with an open quote, we don't need to
1302 # protect the names beyond the quote (and we _shouldn't_, as
1302 # protect the names beyond the quote (and we _shouldn't_, as
1303 # it would cause bugs when the filesystem call is made).
1303 # it would cause bugs when the filesystem call is made).
1304 matches = m0 if sys.platform == "win32" else\
1304 matches = m0 if sys.platform == "win32" else\
1305 [protect_filename(f, open_quotes) for f in m0]
1305 [protect_filename(f, open_quotes) for f in m0]
1306 else:
1306 else:
1307 matches = [text_prefix +
1307 matches = [text_prefix +
1308 protect_filename(f) for f in m0]
1308 protect_filename(f) for f in m0]
1309
1309
1310 # Mark directories in input list by appending '/' to their names.
1310 # Mark directories in input list by appending '/' to their names.
1311 return [x+'/' if os.path.isdir(x) else x for x in matches]
1311 return [x+'/' if os.path.isdir(x) else x for x in matches]
1312
1312
1313 def magic_matches(self, text:str):
1313 def magic_matches(self, text:str):
1314 """Match magics"""
1314 """Match magics"""
1315 # Get all shell magics now rather than statically, so magics loaded at
1315 # Get all shell magics now rather than statically, so magics loaded at
1316 # runtime show up too.
1316 # runtime show up too.
1317 lsm = self.shell.magics_manager.lsmagic()
1317 lsm = self.shell.magics_manager.lsmagic()
1318 line_magics = lsm['line']
1318 line_magics = lsm['line']
1319 cell_magics = lsm['cell']
1319 cell_magics = lsm['cell']
1320 pre = self.magic_escape
1320 pre = self.magic_escape
1321 pre2 = pre+pre
1321 pre2 = pre+pre
1322
1322
1323 explicit_magic = text.startswith(pre)
1323 explicit_magic = text.startswith(pre)
1324
1324
1325 # Completion logic:
1325 # Completion logic:
1326 # - user gives %%: only do cell magics
1326 # - user gives %%: only do cell magics
1327 # - user gives %: do both line and cell magics
1327 # - user gives %: do both line and cell magics
1328 # - no prefix: do both
1328 # - no prefix: do both
1329 # In other words, line magics are skipped if the user gives %% explicitly
1329 # In other words, line magics are skipped if the user gives %% explicitly
1330 #
1330 #
1331 # We also exclude magics that match any currently visible names:
1331 # We also exclude magics that match any currently visible names:
1332 # https://github.com/ipython/ipython/issues/4877, unless the user has
1332 # https://github.com/ipython/ipython/issues/4877, unless the user has
1333 # typed a %:
1333 # typed a %:
1334 # https://github.com/ipython/ipython/issues/10754
1334 # https://github.com/ipython/ipython/issues/10754
1335 bare_text = text.lstrip(pre)
1335 bare_text = text.lstrip(pre)
1336 global_matches = self.global_matches(bare_text)
1336 global_matches = self.global_matches(bare_text)
1337 if not explicit_magic:
1337 if not explicit_magic:
1338 def matches(magic):
1338 def matches(magic):
1339 """
1339 """
1340 Filter magics, in particular remove magics that match
1340 Filter magics, in particular remove magics that match
1341 a name present in global namespace.
1341 a name present in global namespace.
1342 """
1342 """
1343 return ( magic.startswith(bare_text) and
1343 return ( magic.startswith(bare_text) and
1344 magic not in global_matches )
1344 magic not in global_matches )
1345 else:
1345 else:
1346 def matches(magic):
1346 def matches(magic):
1347 return magic.startswith(bare_text)
1347 return magic.startswith(bare_text)
1348
1348
1349 comp = [ pre2+m for m in cell_magics if matches(m)]
1349 comp = [ pre2+m for m in cell_magics if matches(m)]
1350 if not text.startswith(pre2):
1350 if not text.startswith(pre2):
1351 comp += [ pre+m for m in line_magics if matches(m)]
1351 comp += [ pre+m for m in line_magics if matches(m)]
1352
1352
1353 return comp
1353 return comp
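
# Hypothetical results (the actual lists depend on the registered magics and
# on names present in the user namespace):
#
#     magic_matches('%ti')    # e.g. ['%%time', '%%timeit', '%time', '%timeit']
#     magic_matches('%%ti')   # cell magics only, e.g. ['%%time', '%%timeit']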
1354
1354
1355 def magic_config_matches(self, text:str) -> List[str]:
1355 def magic_config_matches(self, text:str) -> List[str]:
1356 """ Match class names and attributes for %config magic """
1356 """ Match class names and attributes for %config magic """
1357 texts = text.strip().split()
1357 texts = text.strip().split()
1358
1358
1359 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1359 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1360 # get all configuration classes
1360 # get all configuration classes
1361 classes = sorted(set([ c for c in self.shell.configurables
1361 classes = sorted(set([ c for c in self.shell.configurables
1362 if c.__class__.class_traits(config=True)
1362 if c.__class__.class_traits(config=True)
1363 ]), key=lambda x: x.__class__.__name__)
1363 ]), key=lambda x: x.__class__.__name__)
1364 classnames = [ c.__class__.__name__ for c in classes ]
1364 classnames = [ c.__class__.__name__ for c in classes ]
1365
1365
1366 # return all classnames if config or %config is given
1366 # return all classnames if config or %config is given
1367 if len(texts) == 1:
1367 if len(texts) == 1:
1368 return classnames
1368 return classnames
1369
1369
1370 # match classname
1370 # match classname
1371 classname_texts = texts[1].split('.')
1371 classname_texts = texts[1].split('.')
1372 classname = classname_texts[0]
1372 classname = classname_texts[0]
1373 classname_matches = [ c for c in classnames
1373 classname_matches = [ c for c in classnames
1374 if c.startswith(classname) ]
1374 if c.startswith(classname) ]
1375
1375
1376 # return matched classes or the matched class with attributes
1376 # return matched classes or the matched class with attributes
1377 if texts[1].find('.') < 0:
1377 if texts[1].find('.') < 0:
1378 return classname_matches
1378 return classname_matches
1379 elif len(classname_matches) == 1 and \
1379 elif len(classname_matches) == 1 and \
1380 classname_matches[0] == classname:
1380 classname_matches[0] == classname:
1381 cls = classes[classnames.index(classname)].__class__
1381 cls = classes[classnames.index(classname)].__class__
1382 help = cls.class_get_help()
1382 help = cls.class_get_help()
1383 # strip leading '--' from cl-args:
1383 # strip leading '--' from cl-args:
1384 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1384 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1385 return [ attr.split('=')[0]
1385 return [ attr.split('=')[0]
1386 for attr in help.strip().splitlines()
1386 for attr in help.strip().splitlines()
1387 if attr.startswith(texts[1]) ]
1387 if attr.startswith(texts[1]) ]
1388 return []
1388 return []
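
# Hypothetical, session-dependent examples:
#
#     magic_config_matches('config IPC')              # e.g. ['IPCompleter']
#     magic_config_matches('config IPCompleter.gre')  # e.g. ['IPCompleter.greedy']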
1389
1389
1390 def magic_color_matches(self, text:str) -> List[str] :
1390 def magic_color_matches(self, text:str) -> List[str] :
1391 """ Match color schemes for %colors magic"""
1391 """ Match color schemes for %colors magic"""
1392 texts = text.split()
1392 texts = text.split()
1393 if text.endswith(' '):
1393 if text.endswith(' '):
1394 # .split() strips off the trailing whitespace. Add '' back
1394 # .split() strips off the trailing whitespace. Add '' back
1395 # so that: '%colors ' -> ['%colors', '']
1395 # so that: '%colors ' -> ['%colors', '']
1396 texts.append('')
1396 texts.append('')
1397
1397
1398 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1398 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1399 prefix = texts[1]
1399 prefix = texts[1]
1400 return [ color for color in InspectColors.keys()
1400 return [ color for color in InspectColors.keys()
1401 if color.startswith(prefix) ]
1401 if color.startswith(prefix) ]
1402 return []
1402 return []
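
# For example, assuming the usual colour schemes are registered in
# InspectColors:
#
#     magic_color_matches('colors L')   # e.g. ['Linux', 'LightBG']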
1403
1403
1404 def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str) -> Iterable[Any]:
1404 def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str) -> Iterable[Any]:
1405 """
1405 """
1406
1406
1407 Return a list of :any:`jedi.api.Completions` object from a ``text`` and
1407 Return a list of :any:`jedi.api.Completions` object from a ``text`` and
1408 cursor position.
1408 cursor position.
1409
1409
1410 Parameters
1410 Parameters
1411 ----------
1411 ----------
1412 cursor_column : int
1412 cursor_column : int
1413 column position of the cursor in ``text``, 0-indexed.
1413 column position of the cursor in ``text``, 0-indexed.
1414 cursor_line : int
1414 cursor_line : int
1415 line position of the cursor in ``text``, 0-indexed
1415 line position of the cursor in ``text``, 0-indexed
1416 text : str
1416 text : str
1417 text to complete
1417 text to complete
1418
1418
1419 Debugging
1419 Debugging
1420 ---------
1420 ---------
1421
1421
1422 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1422 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1423 object containing a string with the Jedi debug information attached.
1423 object containing a string with the Jedi debug information attached.
1424 """
1424 """
1425 namespaces = [self.namespace]
1425 namespaces = [self.namespace]
1426 if self.global_namespace is not None:
1426 if self.global_namespace is not None:
1427 namespaces.append(self.global_namespace)
1427 namespaces.append(self.global_namespace)
1428
1428
1429 completion_filter = lambda x:x
1429 completion_filter = lambda x:x
1430 offset = cursor_to_position(text, cursor_line, cursor_column)
1430 offset = cursor_to_position(text, cursor_line, cursor_column)
1431 # filter output if we are completing for object members
1431 # filter output if we are completing for object members
1432 if offset:
1432 if offset:
1433 pre = text[offset-1]
1433 pre = text[offset-1]
1434 if pre == '.':
1434 if pre == '.':
1435 if self.omit__names == 2:
1435 if self.omit__names == 2:
1436 completion_filter = lambda c:not c.name.startswith('_')
1436 completion_filter = lambda c:not c.name.startswith('_')
1437 elif self.omit__names == 1:
1437 elif self.omit__names == 1:
1438 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1438 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1439 elif self.omit__names == 0:
1439 elif self.omit__names == 0:
1440 completion_filter = lambda x:x
1440 completion_filter = lambda x:x
1441 else:
1441 else:
1442 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1442 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1443
1443
1444 interpreter = jedi.Interpreter(text[:offset], namespaces)
1444 interpreter = jedi.Interpreter(text[:offset], namespaces)
1445 try_jedi = True
1445 try_jedi = True
1446
1446
1447 try:
1447 try:
1448 # find the first token in the current tree -- if it is a ' or " then we are in a string
1448 # find the first token in the current tree -- if it is a ' or " then we are in a string
1449 completing_string = False
1449 completing_string = False
1450 try:
1450 try:
1451 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1451 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1452 except StopIteration:
1452 except StopIteration:
1453 pass
1453 pass
1454 else:
1454 else:
1455 # note the value may be ', ", or it may also be ''' or """, or
1455 # note the value may be ', ", or it may also be ''' or """, or
1456 # in some cases, """what/you/typed..., but all of these are
1456 # in some cases, """what/you/typed..., but all of these are
1457 # strings.
1457 # strings.
1458 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1458 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1459
1459
1460 # if we are in a string jedi is likely not the right candidate for
1460 # if we are in a string jedi is likely not the right candidate for
1461 # now. Skip it.
1461 # now. Skip it.
1462 try_jedi = not completing_string
1462 try_jedi = not completing_string
1463 except Exception as e:
1463 except Exception as e:
1464 # many things can go wrong; we are using a private API, just don't crash.
1464 # many things can go wrong; we are using a private API, just don't crash.
1465 if self.debug:
1465 if self.debug:
1466 print("Error detecting if completing a non-finished string:", e, '|')
1466 print("Error detecting if completing a non-finished string:", e, '|')
1467
1467
1468 if not try_jedi:
1468 if not try_jedi:
1469 return []
1469 return []
1470 try:
1470 try:
1471 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1471 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1472 except Exception as e:
1472 except Exception as e:
1473 if self.debug:
1473 if self.debug:
1474 return [_FakeJediCompletion('Oops, Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1474 return [_FakeJediCompletion('Oops, Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1475 else:
1475 else:
1476 return []
1476 return []
1477
1477
1478 def python_matches(self, text:str)->List[str]:
1478 def python_matches(self, text:str)->List[str]:
1479 """Match attributes or global python names"""
1479 """Match attributes or global python names"""
1480 if "." in text:
1480 if "." in text:
1481 try:
1481 try:
1482 matches = self.attr_matches(text)
1482 matches = self.attr_matches(text)
1483 if text.endswith('.') and self.omit__names:
1483 if text.endswith('.') and self.omit__names:
1484 if self.omit__names == 1:
1484 if self.omit__names == 1:
1485 # true if txt is _not_ a __ name, false otherwise:
1485 # true if txt is _not_ a __ name, false otherwise:
1486 no__name = (lambda txt:
1486 no__name = (lambda txt:
1487 re.match(r'.*\.__.*?__',txt) is None)
1487 re.match(r'.*\.__.*?__',txt) is None)
1488 else:
1488 else:
1489 # true if txt is _not_ a _ name, false otherwise:
1489 # true if txt is _not_ a _ name, false otherwise:
1490 no__name = (lambda txt:
1490 no__name = (lambda txt:
1491 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1491 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1492 matches = filter(no__name, matches)
1492 matches = filter(no__name, matches)
1493 except NameError:
1493 except NameError:
1494 # catches <undefined attributes>.<tab>
1494 # catches <undefined attributes>.<tab>
1495 matches = []
1495 matches = []
1496 else:
1496 else:
1497 matches = self.global_matches(text)
1497 matches = self.global_matches(text)
1498 return matches
1498 return matches
1499
1499
1500 def _default_arguments_from_docstring(self, doc):
1500 def _default_arguments_from_docstring(self, doc):
1501 """Parse the first line of docstring for call signature.
1501 """Parse the first line of docstring for call signature.
1502
1502
1503 Docstring should be of the form 'min(iterable[, key=func])\n'.
1503 Docstring should be of the form 'min(iterable[, key=func])\n'.
1504 It can also parse cython docstring of the form
1504 It can also parse cython docstring of the form
1505 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1505 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1506 """
1506 """
1507 if doc is None:
1507 if doc is None:
1508 return []
1508 return []
1509
1509
1510 # care only about the first line
1510 # care only about the first line
1511 line = doc.lstrip().splitlines()[0]
1511 line = doc.lstrip().splitlines()[0]
1512
1512
1513 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1513 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1514 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1514 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1515 sig = self.docstring_sig_re.search(line)
1515 sig = self.docstring_sig_re.search(line)
1516 if sig is None:
1516 if sig is None:
1517 return []
1517 return []
1518 # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
1518 # 'iterable[, key=func]' -> ['iterable[', ' key=func]']
1519 sig = sig.groups()[0].split(',')
1519 sig = sig.groups()[0].split(',')
1520 ret = []
1520 ret = []
1521 for s in sig:
1521 for s in sig:
1522 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1522 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1523 ret += self.docstring_kwd_re.findall(s)
1523 ret += self.docstring_kwd_re.findall(s)
1524 return ret
1524 return ret
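
# Worked example using the docstring's own sample signature:
#
#     self._default_arguments_from_docstring('min(iterable[, key=func])\n')
#     # -> ['key']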
1525
1525
1526 def _default_arguments(self, obj):
1526 def _default_arguments(self, obj):
1527 """Return the list of default arguments of obj if it is callable,
1527 """Return the list of default arguments of obj if it is callable,
1528 or empty list otherwise."""
1528 or empty list otherwise."""
1529 call_obj = obj
1529 call_obj = obj
1530 ret = []
1530 ret = []
1531 if inspect.isbuiltin(obj):
1531 if inspect.isbuiltin(obj):
1532 pass
1532 pass
1533 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1533 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1534 if inspect.isclass(obj):
1534 if inspect.isclass(obj):
1535 # for cython embedsignature=True the constructor docstring
1535 # for cython embedsignature=True the constructor docstring
1536 # belongs to the object itself, not to __init__
1536 # belongs to the object itself, not to __init__
1537 ret += self._default_arguments_from_docstring(
1537 ret += self._default_arguments_from_docstring(
1538 getattr(obj, '__doc__', ''))
1538 getattr(obj, '__doc__', ''))
1539 # for classes, check for __init__,__new__
1539 # for classes, check for __init__,__new__
1540 call_obj = (getattr(obj, '__init__', None) or
1540 call_obj = (getattr(obj, '__init__', None) or
1541 getattr(obj, '__new__', None))
1541 getattr(obj, '__new__', None))
1542 # for all others, check if they are __call__able
1542 # for all others, check if they are __call__able
1543 elif hasattr(obj, '__call__'):
1543 elif hasattr(obj, '__call__'):
1544 call_obj = obj.__call__
1544 call_obj = obj.__call__
1545 ret += self._default_arguments_from_docstring(
1545 ret += self._default_arguments_from_docstring(
1546 getattr(call_obj, '__doc__', ''))
1546 getattr(call_obj, '__doc__', ''))
1547
1547
1548 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1548 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1549 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1549 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1550
1550
1551 try:
1551 try:
1552 sig = inspect.signature(call_obj)
1552 sig = inspect.signature(call_obj)
1553 ret.extend(k for k, v in sig.parameters.items() if
1553 ret.extend(k for k, v in sig.parameters.items() if
1554 v.kind in _keeps)
1554 v.kind in _keeps)
1555 except ValueError:
1555 except ValueError:
1556 pass
1556 pass
1557
1557
1558 return list(set(ret))
1558 return list(set(ret))
1559
1559
1560 def python_func_kw_matches(self, text):
1560 def python_func_kw_matches(self, text):
1561 """Match named parameters (kwargs) of the last open function"""
1561 """Match named parameters (kwargs) of the last open function"""
1562
1562
1563 if "." in text: # a parameter cannot be dotted
1563 if "." in text: # a parameter cannot be dotted
1564 return []
1564 return []
1565 try: regexp = self.__funcParamsRegex
1565 try: regexp = self.__funcParamsRegex
1566 except AttributeError:
1566 except AttributeError:
1567 regexp = self.__funcParamsRegex = re.compile(r'''
1567 regexp = self.__funcParamsRegex = re.compile(r'''
1568 '.*?(?<!\\)' | # single quoted strings or
1568 '.*?(?<!\\)' | # single quoted strings or
1569 ".*?(?<!\\)" | # double quoted strings or
1569 ".*?(?<!\\)" | # double quoted strings or
1570 \w+ | # identifier
1570 \w+ | # identifier
1571 \S # other characters
1571 \S # other characters
1572 ''', re.VERBOSE | re.DOTALL)
1572 ''', re.VERBOSE | re.DOTALL)
1573 # 1. find the nearest identifier that comes before an unclosed
1573 # 1. find the nearest identifier that comes before an unclosed
1574 # parenthesis before the cursor
1574 # parenthesis before the cursor
1575 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1575 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1576 tokens = regexp.findall(self.text_until_cursor)
1576 tokens = regexp.findall(self.text_until_cursor)
1577 iterTokens = reversed(tokens); openPar = 0
1577 iterTokens = reversed(tokens); openPar = 0
1578
1578
1579 for token in iterTokens:
1579 for token in iterTokens:
1580 if token == ')':
1580 if token == ')':
1581 openPar -= 1
1581 openPar -= 1
1582 elif token == '(':
1582 elif token == '(':
1583 openPar += 1
1583 openPar += 1
1584 if openPar > 0:
1584 if openPar > 0:
1585 # found the last unclosed parenthesis
1585 # found the last unclosed parenthesis
1586 break
1586 break
1587 else:
1587 else:
1588 return []
1588 return []
1589 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1589 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1590 ids = []
1590 ids = []
1591 isId = re.compile(r'\w+$').match
1591 isId = re.compile(r'\w+$').match
1592
1592
1593 while True:
1593 while True:
1594 try:
1594 try:
1595 ids.append(next(iterTokens))
1595 ids.append(next(iterTokens))
1596 if not isId(ids[-1]):
1596 if not isId(ids[-1]):
1597 ids.pop(); break
1597 ids.pop(); break
1598 if not next(iterTokens) == '.':
1598 if not next(iterTokens) == '.':
1599 break
1599 break
1600 except StopIteration:
1600 except StopIteration:
1601 break
1601 break
1602
1602
1603 # Find all named arguments already assigned to, so as to avoid suggesting
1603 # Find all named arguments already assigned to, so as to avoid suggesting
1604 # them again
1604 # them again
1605 usedNamedArgs = set()
1605 usedNamedArgs = set()
1606 par_level = -1
1606 par_level = -1
1607 for token, next_token in zip(tokens, tokens[1:]):
1607 for token, next_token in zip(tokens, tokens[1:]):
1608 if token == '(':
1608 if token == '(':
1609 par_level += 1
1609 par_level += 1
1610 elif token == ')':
1610 elif token == ')':
1611 par_level -= 1
1611 par_level -= 1
1612
1612
1613 if par_level != 0:
1613 if par_level != 0:
1614 continue
1614 continue
1615
1615
1616 if next_token != '=':
1616 if next_token != '=':
1617 continue
1617 continue
1618
1618
1619 usedNamedArgs.add(token)
1619 usedNamedArgs.add(token)
1620
1620
1621 argMatches = []
1621 argMatches = []
1622 try:
1622 try:
1623 callableObj = '.'.join(ids[::-1])
1623 callableObj = '.'.join(ids[::-1])
1624 namedArgs = self._default_arguments(eval(callableObj,
1624 namedArgs = self._default_arguments(eval(callableObj,
1625 self.namespace))
1625 self.namespace))
1626
1626
1627 # Remove used named arguments from the list, no need to show twice
1627 # Remove used named arguments from the list, no need to show twice
1628 for namedArg in set(namedArgs) - usedNamedArgs:
1628 for namedArg in set(namedArgs) - usedNamedArgs:
1629 if namedArg.startswith(text):
1629 if namedArg.startswith(text):
1630 argMatches.append("%s=" %namedArg)
1630 argMatches.append("%s=" %namedArg)
1631 except:
1631 except:
1632 pass
1632 pass
1633
1633
1634 return argMatches
1634 return argMatches
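
# Illustrative sketch with a hypothetical function (not from this module):
# completing 'foo(1, bar=2, ba' with text == 'ba' against a signature
# foo(x, bar=None, baz=1) would suggest ['baz='] -- 'bar' is omitted because
# it has already been assigned in the call.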
1635
1635
1636 @staticmethod
1636 @staticmethod
1637 def _get_keys(obj: Any) -> List[Any]:
1637 def _get_keys(obj: Any) -> List[Any]:
1638 # Objects can define their own completions by defining an
1638 # Objects can define their own completions by defining an
1639 # _ipython_key_completions_() method.
1639 # _ipython_key_completions_() method.
1640 method = get_real_method(obj, '_ipython_key_completions_')
1640 method = get_real_method(obj, '_ipython_key_completions_')
1641 if method is not None:
1641 if method is not None:
1642 return method()
1642 return method()
1643
1643
1644 # Special case some common in-memory dict-like types
1644 # Special case some common in-memory dict-like types
1645 if isinstance(obj, dict) or\
1645 if isinstance(obj, dict) or\
1646 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1646 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1647 try:
1647 try:
1648 return list(obj.keys())
1648 return list(obj.keys())
1649 except Exception:
1649 except Exception:
1650 return []
1650 return []
1651 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1651 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1652 _safe_isinstance(obj, 'numpy', 'void'):
1652 _safe_isinstance(obj, 'numpy', 'void'):
1653 return obj.dtype.names or []
1653 return obj.dtype.names or []
1654 return []
1654 return []
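
# The _ipython_key_completions_ protocol illustrated with a hypothetical user
# class (not part of IPython itself):
#
#     class Catalogue:
#         def __init__(self, entries):
#             self._entries = dict(entries)
#         def __getitem__(self, key):
#             return self._entries[key]
#         def _ipython_key_completions_(self):
#             return list(self._entries)
#
# An instance would then offer its entry names when the user types cat["<tab>.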
1655
1655
1656 def dict_key_matches(self, text:str) -> List[str]:
1656 def dict_key_matches(self, text:str) -> List[str]:
1657 "Match string keys in a dictionary, after e.g. 'foo[' "
1657 "Match string keys in a dictionary, after e.g. 'foo[' "
1658
1658
1659
1659
1660 if self.__dict_key_regexps is not None:
1660 if self.__dict_key_regexps is not None:
1661 regexps = self.__dict_key_regexps
1661 regexps = self.__dict_key_regexps
1662 else:
1662 else:
1663 dict_key_re_fmt = r'''(?x)
1663 dict_key_re_fmt = r'''(?x)
1664 ( # match dict-referring expression wrt greedy setting
1664 ( # match dict-referring expression wrt greedy setting
1665 %s
1665 %s
1666 )
1666 )
1667 \[ # open bracket
1667 \[ # open bracket
1668 \s* # and optional whitespace
1668 \s* # and optional whitespace
1669 # Capture any number of str-like objects (e.g. "a", "b", 'c')
1669 # Capture any number of str-like objects (e.g. "a", "b", 'c')
1670 ((?:[uUbB]? # string prefix (r not handled)
1670 ((?:[uUbB]? # string prefix (r not handled)
1671 (?:
1671 (?:
1672 '(?:[^']|(?<!\\)\\')*'
1672 '(?:[^']|(?<!\\)\\')*'
1673 |
1673 |
1674 "(?:[^"]|(?<!\\)\\")*"
1674 "(?:[^"]|(?<!\\)\\")*"
1675 )
1675 )
1676 \s*,\s*
1676 \s*,\s*
1677 )*)
1677 )*)
1678 ([uUbB]? # string prefix (r not handled)
1678 ([uUbB]? # string prefix (r not handled)
1679 (?: # unclosed string
1679 (?: # unclosed string
1680 '(?:[^']|(?<!\\)\\')*
1680 '(?:[^']|(?<!\\)\\')*
1681 |
1681 |
1682 "(?:[^"]|(?<!\\)\\")*
1682 "(?:[^"]|(?<!\\)\\")*
1683 )
1683 )
1684 )?
1684 )?
1685 $
1685 $
1686 '''
1686 '''
1687 regexps = self.__dict_key_regexps = {
1687 regexps = self.__dict_key_regexps = {
1688 False: re.compile(dict_key_re_fmt % r'''
1688 False: re.compile(dict_key_re_fmt % r'''
1689 # identifiers separated by .
1689 # identifiers separated by .
1690 (?!\d)\w+
1690 (?!\d)\w+
1691 (?:\.(?!\d)\w+)*
1691 (?:\.(?!\d)\w+)*
1692 '''),
1692 '''),
1693 True: re.compile(dict_key_re_fmt % '''
1693 True: re.compile(dict_key_re_fmt % '''
1694 .+
1694 .+
1695 ''')
1695 ''')
1696 }
1696 }
1697
1697
1698 match = regexps[self.greedy].search(self.text_until_cursor)
1698 match = regexps[self.greedy].search(self.text_until_cursor)
1699
1699
1700 if match is None:
1700 if match is None:
1701 return []
1701 return []
1702
1702
1703 expr, prefix0, prefix = match.groups()
1703 expr, prefix0, prefix = match.groups()
1704 try:
1704 try:
1705 obj = eval(expr, self.namespace)
1705 obj = eval(expr, self.namespace)
1706 except Exception:
1706 except Exception:
1707 try:
1707 try:
1708 obj = eval(expr, self.global_namespace)
1708 obj = eval(expr, self.global_namespace)
1709 except Exception:
1709 except Exception:
1710 return []
1710 return []
1711
1711
1712 keys = self._get_keys(obj)
1712 keys = self._get_keys(obj)
1713 if not keys:
1713 if not keys:
1714 return keys
1714 return keys
1715
1715
1716 extra_prefix = eval(prefix0) if prefix0 != '' else None
1716 extra_prefix = eval(prefix0) if prefix0 != '' else None
1717
1717
1718 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
1718 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
1719 if not matches:
1719 if not matches:
1720 return matches
1720 return matches
1721
1721
1722 # get the cursor position of
1722 # get the cursor position of
1723 # - the text being completed
1723 # - the text being completed
1724 # - the start of the key text
1724 # - the start of the key text
1725 # - the start of the completion
1725 # - the start of the completion
1726 text_start = len(self.text_until_cursor) - len(text)
1726 text_start = len(self.text_until_cursor) - len(text)
1727 if prefix:
1727 if prefix:
1728 key_start = match.start(3)
1728 key_start = match.start(3)
1729 completion_start = key_start + token_offset
1729 completion_start = key_start + token_offset
1730 else:
1730 else:
1731 key_start = completion_start = match.end()
1731 key_start = completion_start = match.end()
1732
1732
1733 # grab the leading prefix, to make sure all completions start with `text`
1733 # grab the leading prefix, to make sure all completions start with `text`
1734 if text_start > key_start:
1734 if text_start > key_start:
1735 leading = ''
1735 leading = ''
1736 else:
1736 else:
1737 leading = text[text_start:completion_start]
1737 leading = text[text_start:completion_start]
1738
1738
1739 # the index of the `[` character
1739 # the index of the `[` character
1740 bracket_idx = match.end(1)
1740 bracket_idx = match.end(1)
1741
1741
1742 # append closing quote and bracket as appropriate
1742 # append closing quote and bracket as appropriate
1743 # this is *not* appropriate if the opening quote or bracket is outside
1743 # this is *not* appropriate if the opening quote or bracket is outside
1744 # the text given to this method
1744 # the text given to this method
1745 suf = ''
1745 suf = ''
1746 continuation = self.line_buffer[len(self.text_until_cursor):]
1746 continuation = self.line_buffer[len(self.text_until_cursor):]
1747 if key_start > text_start and closing_quote:
1747 if key_start > text_start and closing_quote:
1748 # quotes were opened inside text, maybe close them
1748 # quotes were opened inside text, maybe close them
1749 if continuation.startswith(closing_quote):
1749 if continuation.startswith(closing_quote):
1750 continuation = continuation[len(closing_quote):]
1750 continuation = continuation[len(closing_quote):]
1751 else:
1751 else:
1752 suf += closing_quote
1752 suf += closing_quote
1753 if bracket_idx > text_start:
1753 if bracket_idx > text_start:
1754 # brackets were opened inside text, maybe close them
1754 # brackets were opened inside text, maybe close them
1755 if not continuation.startswith(']'):
1755 if not continuation.startswith(']'):
1756 suf += ']'
1756 suf += ']'
1757
1757
1758 return [leading + k + suf for k in matches]
1758 return [leading + k + suf for k in matches]
1759
1759
1760 @staticmethod
1760 @staticmethod
1761 def unicode_name_matches(text:str) -> Tuple[str, List[str]] :
1761 def unicode_name_matches(text:str) -> Tuple[str, List[str]] :
1762 """Match Latex-like syntax for unicode characters base
1762 """Match Latex-like syntax for unicode characters base
1763 on the name of the character.
1763 on the name of the character.
1764
1764
1765 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
1765 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
1766
1766
1767 Works only on valid python 3 identifiers, or on combining characters that
1767 Works only on valid python 3 identifiers, or on combining characters that
1768 will combine to form a valid identifier.
1768 will combine to form a valid identifier.
1769 """
1769 """
1770 slashpos = text.rfind('\\')
1770 slashpos = text.rfind('\\')
1771 if slashpos > -1:
1771 if slashpos > -1:
1772 s = text[slashpos+1:]
1772 s = text[slashpos+1:]
1773 try :
1773 try :
1774 unic = unicodedata.lookup(s)
1774 unic = unicodedata.lookup(s)
1775 # allow combining chars
1775 # allow combining chars
1776 if ('a'+unic).isidentifier():
1776 if ('a'+unic).isidentifier():
1777 return '\\'+s,[unic]
1777 return '\\'+s,[unic]
1778 except KeyError:
1778 except KeyError:
1779 pass
1779 pass
1780 return '', []
1780 return '', []
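
# For example (mirrors the docstring above):
#
#     unicode_name_matches('\\GREEK SMALL LETTER ETA')
#     # -> ('\\GREEK SMALL LETTER ETA', ['Ξ·'])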
1781
1781
1782
1782
1783 def latex_matches(self, text:str) -> Tuple[str, Sequence[str]]:
1783 def latex_matches(self, text:str) -> Tuple[str, Sequence[str]]:
1784 """Match Latex syntax for unicode characters.
1784 """Match Latex syntax for unicode characters.
1785
1785
1786 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
1786 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
1787 """
1787 """
1788 slashpos = text.rfind('\\')
1788 slashpos = text.rfind('\\')
1789 if slashpos > -1:
1789 if slashpos > -1:
1790 s = text[slashpos:]
1790 s = text[slashpos:]
1791 if s in latex_symbols:
1791 if s in latex_symbols:
1792 # Try to complete a full latex symbol to unicode
1792 # Try to complete a full latex symbol to unicode
1793 # \\alpha -> Ξ±
1793 # \\alpha -> Ξ±
1794 return s, [latex_symbols[s]]
1794 return s, [latex_symbols[s]]
1795 else:
1795 else:
1796 # If a user has partially typed a latex symbol, give them
1796 # If a user has partially typed a latex symbol, give them
1797 # a full list of options \al -> [\aleph, \alpha]
1797 # a full list of options \al -> [\aleph, \alpha]
1798 matches = [k for k in latex_symbols if k.startswith(s)]
1798 matches = [k for k in latex_symbols if k.startswith(s)]
1799 if matches:
1799 if matches:
1800 return s, matches
1800 return s, matches
1801 return '', ()
1801 return '', ()
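For illustration, a sketch of both behaviours described in the docstring above, assuming ``c`` is an ``IPCompleter`` instance:

.. code::

    c.latex_matches("\\alpha")   # -> ("\\alpha", ["Ξ±"])            full symbol
    c.latex_matches("\\al")      # -> ("\\al", ["\\aleph", "\\alpha", ...])  partial symbol
    c.latex_matches("no_slash")  # -> ("", ())                      no backslash, no match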
1802
1802
1803 def dispatch_custom_completer(self, text):
1803 def dispatch_custom_completer(self, text):
1804 if not self.custom_completers:
1804 if not self.custom_completers:
1805 return
1805 return
1806
1806
1807 line = self.line_buffer
1807 line = self.line_buffer
1808 if not line.strip():
1808 if not line.strip():
1809 return None
1809 return None
1810
1810
1811 # Create a little structure to pass all the relevant information about
1811 # Create a little structure to pass all the relevant information about
1812 # the current completion to any custom completer.
1812 # the current completion to any custom completer.
1813 event = SimpleNamespace()
1813 event = SimpleNamespace()
1814 event.line = line
1814 event.line = line
1815 event.symbol = text
1815 event.symbol = text
1816 cmd = line.split(None,1)[0]
1816 cmd = line.split(None,1)[0]
1817 event.command = cmd
1817 event.command = cmd
1818 event.text_until_cursor = self.text_until_cursor
1818 event.text_until_cursor = self.text_until_cursor
1819
1819
1820 # for foo etc, try also to find completer for %foo
1820 # for foo etc, try also to find completer for %foo
1821 if not cmd.startswith(self.magic_escape):
1821 if not cmd.startswith(self.magic_escape):
1822 try_magic = self.custom_completers.s_matches(
1822 try_magic = self.custom_completers.s_matches(
1823 self.magic_escape + cmd)
1823 self.magic_escape + cmd)
1824 else:
1824 else:
1825 try_magic = []
1825 try_magic = []
1826
1826
1827 for c in itertools.chain(self.custom_completers.s_matches(cmd),
1827 for c in itertools.chain(self.custom_completers.s_matches(cmd),
1828 try_magic,
1828 try_magic,
1829 self.custom_completers.flat_matches(self.text_until_cursor)):
1829 self.custom_completers.flat_matches(self.text_until_cursor)):
1830 try:
1830 try:
1831 res = c(event)
1831 res = c(event)
1832 if res:
1832 if res:
1833 # first, try case sensitive match
1833 # first, try case sensitive match
1834 withcase = [r for r in res if r.startswith(text)]
1834 withcase = [r for r in res if r.startswith(text)]
1835 if withcase:
1835 if withcase:
1836 return withcase
1836 return withcase
1837 # if none, then case insensitive ones are ok too
1837 # if none, then case insensitive ones are ok too
1838 text_low = text.lower()
1838 text_low = text.lower()
1839 return [r for r in res if r.lower().startswith(text_low)]
1839 return [r for r in res if r.lower().startswith(text_low)]
1840 except TryNext:
1840 except TryNext:
1841 pass
1841 pass
1842 except KeyboardInterrupt:
1842 except KeyboardInterrupt:
1843 """
1843 """
1844 If a custom completer takes too long,
1844 If a custom completer takes too long,
1845 let the keyboard interrupt abort it and return nothing.
1845 let the keyboard interrupt abort it and return nothing.
1846 """
1846 """
1847 break
1847 break
1848
1848
1849 return None
1849 return None
1850
1850
1851 def completions(self, text: str, offset: int)->Iterator[Completion]:
1851 def completions(self, text: str, offset: int)->Iterator[Completion]:
1852 """
1852 """
1853 Returns an iterator over the possible completions
1853 Returns an iterator over the possible completions
1854
1854
1855 .. warning:: Unstable
1855 .. warning:: Unstable
1856
1856
1857 This function is unstable, API may change without warning.
1857 This function is unstable, API may change without warning.
1858 It will also raise unless used in a proper context manager.
1858 It will also raise unless used in a proper context manager.
1859
1859
1860 Parameters
1860 Parameters
1861 ----------
1861 ----------
1862
1862
1863 text:str
1863 text:str
1864 Full text of the current input, as a multi-line string.
1864 Full text of the current input, as a multi-line string.
1865 offset:int
1865 offset:int
1866 Integer representing the position of the cursor in ``text``. Offset
1866 Integer representing the position of the cursor in ``text``. Offset
1867 is 0-based indexed.
1867 is 0-based indexed.
1868
1868
1869 Yields
1869 Yields
1870 ------
1870 ------
1871 :any:`Completion` object
1871 :any:`Completion` object
1872
1872
1873
1873
1874 The cursor in a text can either be seen as being "in between"
1874 The cursor in a text can either be seen as being "in between"
1875 characters or "on" a character, depending on the interface visible to
1875 characters or "on" a character, depending on the interface visible to
1876 the user. For consistency, the cursor being "in between" characters X
1876 the user. For consistency, the cursor being "in between" characters X
1877 and Y is equivalent to the cursor being "on" character Y, that is to say
1877 and Y is equivalent to the cursor being "on" character Y, that is to say
1878 the character the cursor is on is considered as being after the cursor.
1878 the character the cursor is on is considered as being after the cursor.
1879
1879
1880 Combining characters may span more than one position in the
1880 Combining characters may span more than one position in the
1881 text.
1881 text.
1882
1882
1883
1883
1884 .. note::
1884 .. note::
1885
1885
1886 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
1886 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
1887 fake Completion token to distinguish completion returned by Jedi
1887 fake Completion token to distinguish completion returned by Jedi
1888 and usual IPython completion.
1888 and usual IPython completion.
1889
1889
1890 .. note::
1890 .. note::
1891
1891
1892 Completions are not completely deduplicated yet. If identical
1892 Completions are not completely deduplicated yet. If identical
1893 completions come from different sources, this function does not
1893 completions come from different sources, this function does not
1894 ensure that each completion object will only be present once.
1894 ensure that each completion object will only be present once.
1895 """
1895 """
1896 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
1896 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
1897 "It may change without warnings. "
1897 "It may change without warnings. "
1898 "Use in corresponding context manager.",
1898 "Use in corresponding context manager.",
1899 category=ProvisionalCompleterWarning, stacklevel=2)
1899 category=ProvisionalCompleterWarning, stacklevel=2)
1900
1900
1901 seen = set()
1901 seen = set()
1902 profiler:Optional[cProfile.Profile]
1902 profiler:Optional[cProfile.Profile]
1903 try:
1903 try:
1904 if self.profile_completions:
1904 if self.profile_completions:
1905 import cProfile
1905 import cProfile
1906 profiler = cProfile.Profile()
1906 profiler = cProfile.Profile()
1907 profiler.enable()
1907 profiler.enable()
1908 else:
1908 else:
1909 profiler = None
1909 profiler = None
1910
1910
1911 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
1911 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
1912 if c and (c in seen):
1912 if c and (c in seen):
1913 continue
1913 continue
1914 yield c
1914 yield c
1915 seen.add(c)
1915 seen.add(c)
1916 except KeyboardInterrupt:
1916 except KeyboardInterrupt:
1917 """if completions take too long and users send keyboard interrupt,
1917 """if completions take too long and users send keyboard interrupt,
1918 do not crash and return ASAP. """
1918 do not crash and return ASAP. """
1919 pass
1919 pass
1920 finally:
1920 finally:
1921 if profiler is not None:
1921 if profiler is not None:
1922 profiler.disable()
1922 profiler.disable()
1923 ensure_dir_exists(self.profiler_output_dir)
1923 ensure_dir_exists(self.profiler_output_dir)
1924 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
1924 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
1925 print("Writing profiler output to", output_path)
1925 print("Writing profiler output to", output_path)
1926 profiler.dump_stats(output_path)
1926 profiler.dump_stats(output_path)
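As the warning above states, this provisional API should be driven from inside the ``provisionalcompleter()`` context manager. A minimal sketch, mirroring how the test suite further down calls it (it assumes a running IPython instance):

.. code::

    from IPython import get_ipython
    from IPython.core.completer import provisionalcompleter

    ip = get_ipython()                    # assumes IPython is running
    with provisionalcompleter():          # silences ProvisionalCompleterWarning
        comps = list(ip.Completer.completions("pri", 3))  # cursor after "pri"
    # each item is a Completion with .start, .end, .text, .type and .signature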
1927
1927
1928 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
1928 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
1929 """
1929 """
1930 Core completion module. Same signature as :any:`completions`, with the
1930 Core completion module. Same signature as :any:`completions`, with the
1931 extra ``_timeout`` parameter (in seconds).
1931 extra ``_timeout`` parameter (in seconds).
1932
1932
1933
1933
1934 Computing jedi's completion ``.type`` can be quite expensive (it is a
1934 Computing jedi's completion ``.type`` can be quite expensive (it is a
1935 lazy property) and can require some warm-up, more warm-up than just
1935 lazy property) and can require some warm-up, more warm-up than just
1936 computing the ``name`` of a completion. The warm-up can be:
1936 computing the ``name`` of a completion. The warm-up can be:
1937
1937
1938 - A long warm-up the first time a module is encountered after an
1938 - A long warm-up the first time a module is encountered after an
1939 install/update: the parse/inference tree is actually built.
1939 install/update: the parse/inference tree is actually built.
1940
1940
1941 - The first time the module is encountered in a session: the tree is
1941 - The first time the module is encountered in a session: the tree is
1942 loaded from disk.
1942 loaded from disk.
1943
1943
1944 We don't want to block completions for tens of seconds, so we give the
1944 We don't want to block completions for tens of seconds, so we give the
1945 completer a "budget" of ``_timeout`` seconds per invocation to compute
1945 completer a "budget" of ``_timeout`` seconds per invocation to compute
1946 completion types; the completions that have not yet been computed will
1946 completion types; the completions that have not yet been computed will
1947 be marked as "unknown" and will have a chance to be computed on the next
1947 be marked as "unknown" and will have a chance to be computed on the next
1948 round, as things get cached.
1948 round, as things get cached.
1949
1949
1950 Keep in mind that Jedi is not the only thing processing the completions,
1950 Keep in mind that Jedi is not the only thing processing the completions,
1951 so keep the timeout short-ish: if we take more than 0.3 seconds we still
1951 so keep the timeout short-ish: if we take more than 0.3 seconds we still
1952 have lots of processing to do.
1952 have lots of processing to do.
1953
1953
1954 """
1954 """
1955 deadline = time.monotonic() + _timeout
1955 deadline = time.monotonic() + _timeout
1956
1956
1957
1957
1958 before = full_text[:offset]
1958 before = full_text[:offset]
1959 cursor_line, cursor_column = position_to_cursor(full_text, offset)
1959 cursor_line, cursor_column = position_to_cursor(full_text, offset)
1960
1960
1961 matched_text, matches, matches_origin, jedi_matches = self._complete(
1961 matched_text, matches, matches_origin, jedi_matches = self._complete(
1962 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column)
1962 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column)
1963
1963
1964 iter_jm = iter(jedi_matches)
1964 iter_jm = iter(jedi_matches)
1965 if _timeout:
1965 if _timeout:
1966 for jm in iter_jm:
1966 for jm in iter_jm:
1967 try:
1967 try:
1968 type_ = jm.type
1968 type_ = jm.type
1969 except Exception:
1969 except Exception:
1970 if self.debug:
1970 if self.debug:
1971 print("Error in Jedi getting type of ", jm)
1971 print("Error in Jedi getting type of ", jm)
1972 type_ = None
1972 type_ = None
1973 delta = len(jm.name_with_symbols) - len(jm.complete)
1973 delta = len(jm.name_with_symbols) - len(jm.complete)
1974 if type_ == 'function':
1974 if type_ == 'function':
1975 signature = _make_signature(jm)
1975 signature = _make_signature(jm)
1976 else:
1976 else:
1977 signature = ''
1977 signature = ''
1978 yield Completion(start=offset - delta,
1978 yield Completion(start=offset - delta,
1979 end=offset,
1979 end=offset,
1980 text=jm.name_with_symbols,
1980 text=jm.name_with_symbols,
1981 type=type_,
1981 type=type_,
1982 signature=signature,
1982 signature=signature,
1983 _origin='jedi')
1983 _origin='jedi')
1984
1984
1985 if time.monotonic() > deadline:
1985 if time.monotonic() > deadline:
1986 break
1986 break
1987
1987
1988 for jm in iter_jm:
1988 for jm in iter_jm:
1989 delta = len(jm.name_with_symbols) - len(jm.complete)
1989 delta = len(jm.name_with_symbols) - len(jm.complete)
1990 yield Completion(start=offset - delta,
1990 yield Completion(start=offset - delta,
1991 end=offset,
1991 end=offset,
1992 text=jm.name_with_symbols,
1992 text=jm.name_with_symbols,
1993 type='<unknown>', # don't compute type for speed
1993 type='<unknown>', # don't compute type for speed
1994 _origin='jedi',
1994 _origin='jedi',
1995 signature='')
1995 signature='')
1996
1996
1997
1997
1998 start_offset = before.rfind(matched_text)
1998 start_offset = before.rfind(matched_text)
1999
1999
2000 # TODO:
2000 # TODO:
2001 # Suppress this, right now just for debug.
2001 # Suppress this, right now just for debug.
2002 if jedi_matches and matches and self.debug:
2002 if jedi_matches and matches and self.debug:
2003 yield Completion(start=start_offset, end=offset, text='--jedi/ipython--',
2003 yield Completion(start=start_offset, end=offset, text='--jedi/ipython--',
2004 _origin='debug', type='none', signature='')
2004 _origin='debug', type='none', signature='')
2005
2005
2006 # I'm unsure if this is always true, so let's assert and see if it
2006 # I'm unsure if this is always true, so let's assert and see if it
2007 # crashes
2007 # crashes
2008 assert before.endswith(matched_text)
2008 assert before.endswith(matched_text)
2009 for m, t in zip(matches, matches_origin):
2009 for m, t in zip(matches, matches_origin):
2010 yield Completion(start=start_offset, end=offset, text=m, _origin=t, signature='', type='<unknown>')
2010 yield Completion(start=start_offset, end=offset, text=m, _origin=t, signature='', type='<unknown>')
2011
2011
2012
2012
2013 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2013 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2014 """Find completions for the given text and line context.
2014 """Find completions for the given text and line context.
2015
2015
2016 Note that both the text and the line_buffer are optional, but at least
2016 Note that both the text and the line_buffer are optional, but at least
2017 one of them must be given.
2017 one of them must be given.
2018
2018
2019 Parameters
2019 Parameters
2020 ----------
2020 ----------
2021 text : string, optional
2021 text : string, optional
2022 Text to perform the completion on. If not given, the line buffer
2022 Text to perform the completion on. If not given, the line buffer
2023 is split using the instance's CompletionSplitter object.
2023 is split using the instance's CompletionSplitter object.
2024
2024
2025 line_buffer : string, optional
2025 line_buffer : string, optional
2026 If not given, the completer attempts to obtain the current line
2026 If not given, the completer attempts to obtain the current line
2027 buffer via readline. This keyword allows clients which are
2027 buffer via readline. This keyword allows clients which are
2028 requesting text completions in non-readline contexts to inform
2028 requesting text completions in non-readline contexts to inform
2029 the completer of the entire text.
2029 the completer of the entire text.
2030
2030
2031 cursor_pos : int, optional
2031 cursor_pos : int, optional
2032 Index of the cursor in the full line buffer. Should be provided by
2032 Index of the cursor in the full line buffer. Should be provided by
2033 remote frontends where kernel has no access to frontend state.
2033 remote frontends where kernel has no access to frontend state.
2034
2034
2035 Returns
2035 Returns
2036 -------
2036 -------
2037 Tuple of two items:
2037 Tuple of two items:
2038 text : str
2038 text : str
2039 Text that was actually used in the completion.
2039 Text that was actually used in the completion.
2040 matches : list
2040 matches : list
2041 A list of completion matches.
2041 A list of completion matches.
2042
2042
2043
2043
2044 .. note::
2044 .. note::
2045
2045
2046 This API is likely to be deprecated and replaced by
2046 This API is likely to be deprecated and replaced by
2047 :any:`IPCompleter.completions` in the future.
2047 :any:`IPCompleter.completions` in the future.
2048
2048
2049
2049
2050 """
2050 """
2051 warnings.warn('`Completer.complete` is pending deprecation since '
2051 warnings.warn('`Completer.complete` is pending deprecation since '
2052 'IPython 6.0 and will be replaced by `Completer.completions`.',
2052 'IPython 6.0 and will be replaced by `Completer.completions`.',
2053 PendingDeprecationWarning)
2053 PendingDeprecationWarning)
2054 # potential todo: fold the 3rd, throwaway argument of _complete
2054 # potential todo: fold the 3rd, throwaway argument of _complete
2055 # into the first 2.
2055 # into the first 2.
2056 return self._complete(line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0)[:2]
2056 return self._complete(line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0)[:2]
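A sketch of the older, pending-deprecation call style, which returns the matched text and the list of matches (it assumes ``ip = get_ipython()`` as in the earlier sketch, and will emit the PendingDeprecationWarning above):

.. code::

    text, matches = ip.Completer.complete(line_buffer="print(sys.pa")
    # text is the token actually completed (e.g. "sys.pa");
    # matches is the list of candidate completions.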
2057
2057
2058 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2058 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2059 full_text=None) -> _CompleteResult:
2059 full_text=None) -> _CompleteResult:
2060 """
2060 """
2061
2061
2062 Like complete but can also return raw jedi completions, as well as the
2062 Like complete but can also return raw jedi completions, as well as the
2063 origin of the completion text. This could (and should) be made much
2063 origin of the completion text. This could (and should) be made much
2064 cleaner but that will be simpler once we drop the old (and stateful)
2064 cleaner but that will be simpler once we drop the old (and stateful)
2065 :any:`complete` API.
2065 :any:`complete` API.
2066
2066
2067
2067
2068 With the current provisional API, ``cursor_pos`` acts (depending on the
2068 With the current provisional API, ``cursor_pos`` acts (depending on the
2069 caller) both as the offset in ``text`` or ``line_buffer``, and as the
2069 caller) both as the offset in ``text`` or ``line_buffer``, and as the
2070 ``column`` when passing multiline strings. This could/should be renamed,
2070 ``column`` when passing multiline strings. This could/should be renamed,
2071 but that would add extra noise.
2071 but that would add extra noise.
2072
2072
2073 Return
2073 Return
2074 ======
2074 ======
2075
2075
2076 A tuple of N elements which are (likely):
2076 A tuple of N elements which are (likely):
2077
2077
2078 matched_text: ? the text that the complete matched
2078 matched_text: ? the text that the complete matched
2079 matches: list of completions ?
2079 matches: list of completions ?
2080 matches_origin: ? list of the same length as matches, saying where each completion came from
2080 matches_origin: ? list of the same length as matches, saying where each completion came from
2081 jedi_matches: list of Jedi matches, which has its own structure.
2081 jedi_matches: list of Jedi matches, which has its own structure.
2082 """
2082 """
2083
2083
2084
2084
2085 # if the cursor position isn't given, the only sane assumption we can
2085 # if the cursor position isn't given, the only sane assumption we can
2086 # make is that it's at the end of the line (the common case)
2086 # make is that it's at the end of the line (the common case)
2087 if cursor_pos is None:
2087 if cursor_pos is None:
2088 cursor_pos = len(line_buffer) if text is None else len(text)
2088 cursor_pos = len(line_buffer) if text is None else len(text)
2089
2089
2090 if self.use_main_ns:
2090 if self.use_main_ns:
2091 self.namespace = __main__.__dict__
2091 self.namespace = __main__.__dict__
2092
2092
2093 # if text is either None or an empty string, rely on the line buffer
2093 # if text is either None or an empty string, rely on the line buffer
2094 if (not line_buffer) and full_text:
2094 if (not line_buffer) and full_text:
2095 line_buffer = full_text.split('\n')[cursor_line]
2095 line_buffer = full_text.split('\n')[cursor_line]
2096 if not text: # issue #11508: check line_buffer before calling split_line
2096 if not text: # issue #11508: check line_buffer before calling split_line
2097 text = self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ''
2097 text = self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ''
2098
2098
2099 if self.backslash_combining_completions:
2099 if self.backslash_combining_completions:
2100 # allow deactivation of these on windows.
2100 # allow deactivation of these on windows.
2101 base_text = text if not line_buffer else line_buffer[:cursor_pos]
2101 base_text = text if not line_buffer else line_buffer[:cursor_pos]
2102
2102
2103 for meth in (self.latex_matches,
2103 for meth in (self.latex_matches,
2104 self.unicode_name_matches,
2104 self.unicode_name_matches,
2105 back_latex_name_matches,
2105 back_latex_name_matches,
2106 back_unicode_name_matches,
2106 back_unicode_name_matches,
2107 self.fwd_unicode_match):
2107 self.fwd_unicode_match):
2108 name_text, name_matches = meth(base_text)
2108 name_text, name_matches = meth(base_text)
2109 if name_text:
2109 if name_text:
2110 return _CompleteResult(name_text, name_matches[:MATCHES_LIMIT], \
2110 return _CompleteResult(name_text, name_matches[:MATCHES_LIMIT], \
2111 [meth.__qualname__]*min(len(name_matches), MATCHES_LIMIT), ())
2111 [meth.__qualname__]*min(len(name_matches), MATCHES_LIMIT), ())
2112
2112
2113
2113
2114 # If no line buffer is given, assume the input text is all there was
2114 # If no line buffer is given, assume the input text is all there was
2115 if line_buffer is None:
2115 if line_buffer is None:
2116 line_buffer = text
2116 line_buffer = text
2117
2117
2118 self.line_buffer = line_buffer
2118 self.line_buffer = line_buffer
2119 self.text_until_cursor = self.line_buffer[:cursor_pos]
2119 self.text_until_cursor = self.line_buffer[:cursor_pos]
2120
2120
2121 # Do magic arg matches
2121 # Do magic arg matches
2122 for matcher in self.magic_arg_matchers:
2122 for matcher in self.magic_arg_matchers:
2123 matches = list(matcher(line_buffer))[:MATCHES_LIMIT]
2123 matches = list(matcher(line_buffer))[:MATCHES_LIMIT]
2124 if matches:
2124 if matches:
2125 origins = [matcher.__qualname__] * len(matches)
2125 origins = [matcher.__qualname__] * len(matches)
2126 return _CompleteResult(text, matches, origins, ())
2126 return _CompleteResult(text, matches, origins, ())
2127
2127
2128 # Start with a clean slate of completions
2128 # Start with a clean slate of completions
2129 matches = []
2129 matches = []
2130
2130
2131 # FIXME: we should extend our api to return a dict with completions for
2131 # FIXME: we should extend our api to return a dict with completions for
2132 # different types of objects. The rlcomplete() method could then
2132 # different types of objects. The rlcomplete() method could then
2133 # simply collapse the dict into a list for readline, but we'd have
2133 # simply collapse the dict into a list for readline, but we'd have
2134 # richer completion semantics in other environments.
2134 # richer completion semantics in other environments.
2135 completions:Iterable[Any] = []
2135 completions:Iterable[Any] = []
2136 if self.use_jedi:
2136 if self.use_jedi:
2137 if not full_text:
2137 if not full_text:
2138 full_text = line_buffer
2138 full_text = line_buffer
2139 completions = self._jedi_matches(
2139 completions = self._jedi_matches(
2140 cursor_pos, cursor_line, full_text)
2140 cursor_pos, cursor_line, full_text)
2141
2141
2142 if self.merge_completions:
2142 if self.merge_completions:
2143 matches = []
2143 matches = []
2144 for matcher in self.matchers:
2144 for matcher in self.matchers:
2145 try:
2145 try:
2146 matches.extend([(m, matcher.__qualname__)
2146 matches.extend([(m, matcher.__qualname__)
2147 for m in matcher(text)])
2147 for m in matcher(text)])
2148 except:
2148 except:
2149 # Show the ugly traceback if the matcher causes an
2149 # Show the ugly traceback if the matcher causes an
2150 # exception, but do NOT crash the kernel!
2150 # exception, but do NOT crash the kernel!
2151 sys.excepthook(*sys.exc_info())
2151 sys.excepthook(*sys.exc_info())
2152 else:
2152 else:
2153 for matcher in self.matchers:
2153 for matcher in self.matchers:
2154 matches = [(m, matcher.__qualname__)
2154 matches = [(m, matcher.__qualname__)
2155 for m in matcher(text)]
2155 for m in matcher(text)]
2156 if matches:
2156 if matches:
2157 break
2157 break
2158
2158
2159 seen = set()
2159 seen = set()
2160 filtered_matches = set()
2160 filtered_matches = set()
2161 for m in matches:
2161 for m in matches:
2162 t, c = m
2162 t, c = m
2163 if t not in seen:
2163 if t not in seen:
2164 filtered_matches.add(m)
2164 filtered_matches.add(m)
2165 seen.add(t)
2165 seen.add(t)
2166
2166
2167 _filtered_matches = sorted(filtered_matches, key=lambda x: completions_sorting_key(x[0]))
2167 _filtered_matches = sorted(filtered_matches, key=lambda x: completions_sorting_key(x[0]))
2168
2168
2169 custom_res = [(m, 'custom') for m in self.dispatch_custom_completer(text) or []]
2169 custom_res = [(m, 'custom') for m in self.dispatch_custom_completer(text) or []]
2170
2170
2171 _filtered_matches = custom_res or _filtered_matches
2171 _filtered_matches = custom_res or _filtered_matches
2172
2172
2173 _filtered_matches = _filtered_matches[:MATCHES_LIMIT]
2173 _filtered_matches = _filtered_matches[:MATCHES_LIMIT]
2174 _matches = [m[0] for m in _filtered_matches]
2174 _matches = [m[0] for m in _filtered_matches]
2175 origins = [m[1] for m in _filtered_matches]
2175 origins = [m[1] for m in _filtered_matches]
2176
2176
2177 self.matches = _matches
2177 self.matches = _matches
2178
2178
2179 return _CompleteResult(text, _matches, origins, completions)
2179 return _CompleteResult(text, _matches, origins, completions)
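The private return value is the 4-field ``_CompleteResult``; the test suite below unpacks it positionally, roughly like this (a sketch, with the same ``ip`` assumption as above):

.. code::

    text = 'open("foo'
    matched_text, matches, origins, jedi = ip.Completer._complete(
        cursor_line=0, cursor_pos=len(text), full_text=text
    )
    # the tests often index the result instead, e.g. ``[1]`` to get just ``matches``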
2180
2180
2181 def fwd_unicode_match(self, text:str) -> Tuple[str, Sequence[str]]:
2181 def fwd_unicode_match(self, text:str) -> Tuple[str, Sequence[str]]:
2182 """
2182 """
2183
2183
2184 Forward match a string starting with a backslash with a list of
2184 Forward match a string starting with a backslash with a list of
2185 potential Unicode completions.
2185 potential Unicode completions.
2186
2186
2187 Will compute the list of Unicode character names on first call and cache it.
2187 Will compute the list of Unicode character names on first call and cache it.
2188
2188
2189 Return
2189 Return
2190 ======
2190 ======
2191
2191
2192 A tuple with:
2192 A tuple with:
2193 - matched text (empty if no matches)
2193 - matched text (empty if no matches)
2194 - list of potential completions (empty tuple if there are none)
2194 - list of potential completions (empty tuple if there are none)
2195 """
2195 """
2196 # TODO: self.unicode_names is here a list of ~100k elements that we traverse on each call.
2196 # TODO: self.unicode_names is here a list of ~100k elements that we traverse on each call.
2197 # We could do a faster match using a Trie.
2197 # We could do a faster match using a Trie.
2198
2198
2199 # Using pygtrie the following seems to work:
2199 # Using pygtrie the following seems to work:
2200
2200
2201 # s = PrefixSet()
2201 # s = PrefixSet()
2202
2202
2203 # for c in range(0,0x10FFFF + 1):
2203 # for c in range(0,0x10FFFF + 1):
2204 # try:
2204 # try:
2205 # s.add(unicodedata.name(chr(c)))
2205 # s.add(unicodedata.name(chr(c)))
2206 # except ValueError:
2206 # except ValueError:
2207 # pass
2207 # pass
2208 # [''.join(k) for k in s.iter(prefix)]
2208 # [''.join(k) for k in s.iter(prefix)]
2209
2209
2210 # But it needs to be timed, and it adds an extra dependency.
2210 # But it needs to be timed, and it adds an extra dependency.
2211
2211
2212 slashpos = text.rfind('\\')
2212 slashpos = text.rfind('\\')
2213 # if text contains a backslash
2213 # if text contains a backslash
2214 if slashpos > -1:
2214 if slashpos > -1:
2215 # PERF: It's important that we don't access self._unicode_names
2215 # PERF: It's important that we don't access self._unicode_names
2216 # until we're inside this if-block. _unicode_names is lazily
2216 # until we're inside this if-block. _unicode_names is lazily
2217 # initialized, and it takes a user-noticeable amount of time to
2217 # initialized, and it takes a user-noticeable amount of time to
2218 # initialize it, so we don't want to initialize it unless we're
2218 # initialize it, so we don't want to initialize it unless we're
2219 # actually going to use it.
2219 # actually going to use it.
2220 s = text[slashpos+1:]
2220 s = text[slashpos+1:]
2221 candidates = [x for x in self.unicode_names if x.startswith(s)]
2221 candidates = [x for x in self.unicode_names if x.startswith(s)]
2222 if candidates:
2222 if candidates:
2223 return s, candidates
2223 return s, candidates
2224 else:
2224 else:
2225 return '', ()
2225 return '', ()
2226
2226
2227 # if text does not contain a backslash
2227 # if text does not contain a backslash
2228 else:
2228 else:
2229 return '', ()
2229 return '', ()
2230
2230
2231 @property
2231 @property
2232 def unicode_names(self) -> List[str]:
2232 def unicode_names(self) -> List[str]:
2233 """List of names of unicode code points that can be completed.
2233 """List of names of unicode code points that can be completed.
2234
2234
2235 The list is lazily initialized on first access.
2235 The list is lazily initialized on first access.
2236 """
2236 """
2237 if self._unicode_names is None:
2237 if self._unicode_names is None:
2238 names = []
2238 names = []
2239 for c in range(0,0x10FFFF + 1):
2239 for c in range(0,0x10FFFF + 1):
2240 try:
2240 try:
2241 names.append(unicodedata.name(chr(c)))
2241 names.append(unicodedata.name(chr(c)))
2242 except ValueError:
2242 except ValueError:
2243 pass
2243 pass
2244 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2244 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2245
2245
2246 return self._unicode_names
2246 return self._unicode_names
2247
2247
2248 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
2248 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
2249 names = []
2249 names = []
2250 for start,stop in ranges:
2250 for start,stop in ranges:
2251 for c in range(start, stop) :
2251 for c in range(start, stop) :
2252 try:
2252 try:
2253 names.append(unicodedata.name(chr(c)))
2253 names.append(unicodedata.name(chr(c)))
2254 except ValueError:
2254 except ValueError:
2255 pass
2255 pass
2256 return names
2256 return names
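This helper is exercised directly by ``test_unicode_range`` in the test file below; roughly:

.. code::

    from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES

    expected_list = _unicode_name_compute([(0, 0x110000)])   # every assigned name
    test = _unicode_name_compute(_UNICODE_RANGES)            # only the kept ranges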
@@ -1,1177 +1,1206 b''
1 # encoding: utf-8
1 # encoding: utf-8
2 """Tests for the IPython tab-completion machinery."""
2 """Tests for the IPython tab-completion machinery."""
3
3
4 # Copyright (c) IPython Development Team.
4 # Copyright (c) IPython Development Team.
5 # Distributed under the terms of the Modified BSD License.
5 # Distributed under the terms of the Modified BSD License.
6
6
7 import os
7 import os
8 import sys
8 import sys
9 import textwrap
9 import textwrap
10 import unittest
10 import unittest
11
11
12 from contextlib import contextmanager
12 from contextlib import contextmanager
13
13
14 import nose.tools as nt
14 import nose.tools as nt
15
15
16 from traitlets.config.loader import Config
16 from traitlets.config.loader import Config
17 from IPython import get_ipython
17 from IPython import get_ipython
18 from IPython.core import completer
18 from IPython.core import completer
19 from IPython.external import decorators
19 from IPython.external import decorators
20 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
20 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
21 from IPython.utils.generics import complete_object
21 from IPython.utils.generics import complete_object
22 from IPython.testing import decorators as dec
22 from IPython.testing import decorators as dec
23
23
24 from IPython.core.completer import (
24 from IPython.core.completer import (
25 Completion,
25 Completion,
26 provisionalcompleter,
26 provisionalcompleter,
27 match_dict_keys,
27 match_dict_keys,
28 _deduplicate_completions,
28 _deduplicate_completions,
29 )
29 )
30 from nose.tools import assert_in, assert_not_in
30 from nose.tools import assert_in, assert_not_in
31
31
32 # -----------------------------------------------------------------------------
32 # -----------------------------------------------------------------------------
33 # Test functions
33 # Test functions
34 # -----------------------------------------------------------------------------
34 # -----------------------------------------------------------------------------
35
35
36 def recompute_unicode_ranges():
36 def recompute_unicode_ranges():
37 """
37 """
38 Utility to recompute the largest unicode range containing no named characters.
38 Utility to recompute the largest unicode range containing no named characters.
39
39
40 Use it to recompute the gap in the global _UNICODE_RANGES of completer.py.
40 Use it to recompute the gap in the global _UNICODE_RANGES of completer.py.
41 """
41 """
42 import itertools
42 import itertools
43 import unicodedata
43 import unicodedata
44 valid = []
44 valid = []
45 for c in range(0,0x10FFFF + 1):
45 for c in range(0,0x10FFFF + 1):
46 try:
46 try:
47 unicodedata.name(chr(c))
47 unicodedata.name(chr(c))
48 except ValueError:
48 except ValueError:
49 continue
49 continue
50 valid.append(c)
50 valid.append(c)
51
51
52 def ranges(i):
52 def ranges(i):
53 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
53 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
54 b = list(b)
54 b = list(b)
55 yield b[0][1], b[-1][1]
55 yield b[0][1], b[-1][1]
56
56
57 rg = list(ranges(valid))
57 rg = list(ranges(valid))
58 lens = []
58 lens = []
59 gap_lens = []
59 gap_lens = []
60 pstart, pstop = 0,0
60 pstart, pstop = 0,0
61 for start, stop in rg:
61 for start, stop in rg:
62 lens.append(stop-start)
62 lens.append(stop-start)
63 gap_lens.append((start - pstop, hex(pstop), hex(start), f'{round((start - pstop)/0xe01f0*100)}%'))
63 gap_lens.append((start - pstop, hex(pstop), hex(start), f'{round((start - pstop)/0xe01f0*100)}%'))
64 pstart, pstop = start, stop
64 pstart, pstop = start, stop
65
65
66 return sorted(gap_lens)[-1]
66 return sorted(gap_lens)[-1]
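The returned tuple is consumed by ``test_unicode_range`` below when the ranges need refreshing; a sketch of the unpacking:

.. code::

    size, start, stop, prct = recompute_unicode_ranges()
    # size: length of the biggest gap; start/stop: its hex bounds;
    # prct: the gap as a percentage of the scanned range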
67
67
68
68
69
69
70 def test_unicode_range():
70 def test_unicode_range():
71 """
71 """
72 Test that the ranges we test for unicode names give the same number of
72 Test that the ranges we test for unicode names give the same number of
73 results as testing the full range.
73 results as testing the full range.
74 """
74 """
75 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
75 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
76
76
77 expected_list = _unicode_name_compute([(0, 0x110000)])
77 expected_list = _unicode_name_compute([(0, 0x110000)])
78 test = _unicode_name_compute(_UNICODE_RANGES)
78 test = _unicode_name_compute(_UNICODE_RANGES)
79 len_exp = len(expected_list)
79 len_exp = len(expected_list)
80 len_test = len(test)
80 len_test = len(test)
81
81
82 # do not inline the len() or on error pytest will try to print the 130 000 +
82 # do not inline the len() or on error pytest will try to print the 130 000 +
83 # elements.
83 # elements.
84 message = None
84 message = None
85 if len_exp != len_test or len_exp > 131808:
85 if len_exp != len_test or len_exp > 131808:
86 size, start, stop, prct = recompute_unicode_ranges()
86 size, start, stop, prct = recompute_unicode_ranges()
87 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
87 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
88 likely due to a new release of Python. We've found that the biggest gap
88 likely due to a new release of Python. We've found that the biggest gap
89 in unicode characters has reduced in size to {size} characters
89 in unicode characters has reduced in size to {size} characters
90 ({prct}), from {start} to {stop}. In completer.py, likely update to
90 ({prct}), from {start} to {stop}. In completer.py, likely update to
91
91
92 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
92 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
93
93
94 And update the assertion below to use
94 And update the assertion below to use
95
95
96 len_exp <= {len_exp}
96 len_exp <= {len_exp}
97 """
97 """
98 assert len_exp == len_test, message
98 assert len_exp == len_test, message
99
99
100 # fail if new unicode symbols have been added.
100 # fail if new unicode symbols have been added.
101 assert len_exp <= 137714, message
101 assert len_exp <= 137714, message
102
102
103
103
104 @contextmanager
104 @contextmanager
105 def greedy_completion():
105 def greedy_completion():
106 ip = get_ipython()
106 ip = get_ipython()
107 greedy_original = ip.Completer.greedy
107 greedy_original = ip.Completer.greedy
108 try:
108 try:
109 ip.Completer.greedy = True
109 ip.Completer.greedy = True
110 yield
110 yield
111 finally:
111 finally:
112 ip.Completer.greedy = greedy_original
112 ip.Completer.greedy = greedy_original
113
113
114
114
115 def test_protect_filename():
115 def test_protect_filename():
116 if sys.platform == "win32":
116 if sys.platform == "win32":
117 pairs = [
117 pairs = [
118 ("abc", "abc"),
118 ("abc", "abc"),
119 (" abc", '" abc"'),
119 (" abc", '" abc"'),
120 ("a bc", '"a bc"'),
120 ("a bc", '"a bc"'),
121 ("a bc", '"a bc"'),
121 ("a bc", '"a bc"'),
122 (" bc", '" bc"'),
122 (" bc", '" bc"'),
123 ]
123 ]
124 else:
124 else:
125 pairs = [
125 pairs = [
126 ("abc", "abc"),
126 ("abc", "abc"),
127 (" abc", r"\ abc"),
127 (" abc", r"\ abc"),
128 ("a bc", r"a\ bc"),
128 ("a bc", r"a\ bc"),
129 ("a bc", r"a\ \ bc"),
129 ("a bc", r"a\ \ bc"),
130 (" bc", r"\ \ bc"),
130 (" bc", r"\ \ bc"),
131 # On posix, we also protect parens and other special characters.
131 # On posix, we also protect parens and other special characters.
132 ("a(bc", r"a\(bc"),
132 ("a(bc", r"a\(bc"),
133 ("a)bc", r"a\)bc"),
133 ("a)bc", r"a\)bc"),
134 ("a( )bc", r"a\(\ \)bc"),
134 ("a( )bc", r"a\(\ \)bc"),
135 ("a[1]bc", r"a\[1\]bc"),
135 ("a[1]bc", r"a\[1\]bc"),
136 ("a{1}bc", r"a\{1\}bc"),
136 ("a{1}bc", r"a\{1\}bc"),
137 ("a#bc", r"a\#bc"),
137 ("a#bc", r"a\#bc"),
138 ("a?bc", r"a\?bc"),
138 ("a?bc", r"a\?bc"),
139 ("a=bc", r"a\=bc"),
139 ("a=bc", r"a\=bc"),
140 ("a\\bc", r"a\\bc"),
140 ("a\\bc", r"a\\bc"),
141 ("a|bc", r"a\|bc"),
141 ("a|bc", r"a\|bc"),
142 ("a;bc", r"a\;bc"),
142 ("a;bc", r"a\;bc"),
143 ("a:bc", r"a\:bc"),
143 ("a:bc", r"a\:bc"),
144 ("a'bc", r"a\'bc"),
144 ("a'bc", r"a\'bc"),
145 ("a*bc", r"a\*bc"),
145 ("a*bc", r"a\*bc"),
146 ('a"bc', r"a\"bc"),
146 ('a"bc', r"a\"bc"),
147 ("a^bc", r"a\^bc"),
147 ("a^bc", r"a\^bc"),
148 ("a&bc", r"a\&bc"),
148 ("a&bc", r"a\&bc"),
149 ]
149 ]
150 # run the actual tests
150 # run the actual tests
151 for s1, s2 in pairs:
151 for s1, s2 in pairs:
152 s1p = completer.protect_filename(s1)
152 s1p = completer.protect_filename(s1)
153 nt.assert_equal(s1p, s2)
153 nt.assert_equal(s1p, s2)
154
154
155
155
156 def check_line_split(splitter, test_specs):
156 def check_line_split(splitter, test_specs):
157 for part1, part2, split in test_specs:
157 for part1, part2, split in test_specs:
158 cursor_pos = len(part1)
158 cursor_pos = len(part1)
159 line = part1 + part2
159 line = part1 + part2
160 out = splitter.split_line(line, cursor_pos)
160 out = splitter.split_line(line, cursor_pos)
161 nt.assert_equal(out, split)
161 nt.assert_equal(out, split)
162
162
163
163
164 def test_line_split():
164 def test_line_split():
165 """Basic line splitter test with default specs."""
165 """Basic line splitter test with default specs."""
166 sp = completer.CompletionSplitter()
166 sp = completer.CompletionSplitter()
167 # The format of the test specs is: part1, part2, expected answer. Parts 1
167 # The format of the test specs is: part1, part2, expected answer. Parts 1
168 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
168 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
169 # was at the end of part1. So an empty part2 represents someone hitting
169 # was at the end of part1. So an empty part2 represents someone hitting
170 # tab at the end of the line, the most common case.
170 # tab at the end of the line, the most common case.
171 t = [
171 t = [
172 ("run some/scrip", "", "some/scrip"),
172 ("run some/scrip", "", "some/scrip"),
173 ("run scripts/er", "ror.py foo", "scripts/er"),
173 ("run scripts/er", "ror.py foo", "scripts/er"),
174 ("echo $HOM", "", "HOM"),
174 ("echo $HOM", "", "HOM"),
175 ("print sys.pa", "", "sys.pa"),
175 ("print sys.pa", "", "sys.pa"),
176 ("print(sys.pa", "", "sys.pa"),
176 ("print(sys.pa", "", "sys.pa"),
177 ("execfile('scripts/er", "", "scripts/er"),
177 ("execfile('scripts/er", "", "scripts/er"),
178 ("a[x.", "", "x."),
178 ("a[x.", "", "x."),
179 ("a[x.", "y", "x."),
179 ("a[x.", "y", "x."),
180 ('cd "some_file/', "", "some_file/"),
180 ('cd "some_file/', "", "some_file/"),
181 ]
181 ]
182 check_line_split(sp, t)
182 check_line_split(sp, t)
183 # Ensure splitting works OK with unicode by re-running the tests with
183 # Ensure splitting works OK with unicode by re-running the tests with
184 # all inputs turned into unicode
184 # all inputs turned into unicode
185 check_line_split(sp, [map(str, p) for p in t])
185 check_line_split(sp, [map(str, p) for p in t])
186
186
187
187
188 class NamedInstanceMetaclass(type):
188 class NamedInstanceMetaclass(type):
189 def __getitem__(cls, item):
189 def __getitem__(cls, item):
190 return cls.get_instance(item)
190 return cls.get_instance(item)
191
191
192
192
193 class NamedInstanceClass(metaclass=NamedInstanceMetaclass):
193 class NamedInstanceClass(metaclass=NamedInstanceMetaclass):
194 def __init__(self, name):
194 def __init__(self, name):
195 if not hasattr(self.__class__, "instances"):
195 if not hasattr(self.__class__, "instances"):
196 self.__class__.instances = {}
196 self.__class__.instances = {}
197 self.__class__.instances[name] = self
197 self.__class__.instances[name] = self
198
198
199 @classmethod
199 @classmethod
200 def _ipython_key_completions_(cls):
200 def _ipython_key_completions_(cls):
201 return cls.instances.keys()
201 return cls.instances.keys()
202
202
203 @classmethod
203 @classmethod
204 def get_instance(cls, name):
204 def get_instance(cls, name):
205 return cls.instances[name]
205 return cls.instances[name]
206
206
207
207
208 class KeyCompletable:
208 class KeyCompletable:
209 def __init__(self, things=()):
209 def __init__(self, things=()):
210 self.things = things
210 self.things = things
211
211
212 def _ipython_key_completions_(self):
212 def _ipython_key_completions_(self):
213 return list(self.things)
213 return list(self.things)
214
214
215
215
216 class TestCompleter(unittest.TestCase):
216 class TestCompleter(unittest.TestCase):
217 def setUp(self):
217 def setUp(self):
218 """
218 """
219 We want to silence all PendingDeprecationWarning when testing the completer
219 We want to silence all PendingDeprecationWarning when testing the completer
220 """
220 """
221 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
221 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
222 self._assertwarns.__enter__()
222 self._assertwarns.__enter__()
223
223
224 def tearDown(self):
224 def tearDown(self):
225 try:
225 try:
226 self._assertwarns.__exit__(None, None, None)
226 self._assertwarns.__exit__(None, None, None)
227 except AssertionError:
227 except AssertionError:
228 pass
228 pass
229
229
230 def test_custom_completion_error(self):
230 def test_custom_completion_error(self):
231 """Test that errors from custom attribute completers are silenced."""
231 """Test that errors from custom attribute completers are silenced."""
232 ip = get_ipython()
232 ip = get_ipython()
233
233
234 class A:
234 class A:
235 pass
235 pass
236
236
237 ip.user_ns["x"] = A()
237 ip.user_ns["x"] = A()
238
238
239 @complete_object.register(A)
239 @complete_object.register(A)
240 def complete_A(a, existing_completions):
240 def complete_A(a, existing_completions):
241 raise TypeError("this should be silenced")
241 raise TypeError("this should be silenced")
242
242
243 ip.complete("x.")
243 ip.complete("x.")
244
244
245 def test_custom_completion_ordering(self):
245 def test_custom_completion_ordering(self):
246 """Test that errors from custom attribute completers are silenced."""
246 """Test that errors from custom attribute completers are silenced."""
247 ip = get_ipython()
247 ip = get_ipython()
248
248
249 _, matches = ip.complete('in')
249 _, matches = ip.complete('in')
250 assert matches.index('input') < matches.index('int')
250 assert matches.index('input') < matches.index('int')
251
251
252 def complete_example(a):
252 def complete_example(a):
253 return ['example2', 'example1']
253 return ['example2', 'example1']
254
254
255 ip.Completer.custom_completers.add_re('ex*', complete_example)
255 ip.Completer.custom_completers.add_re('ex*', complete_example)
256 _, matches = ip.complete('ex')
256 _, matches = ip.complete('ex')
257 assert matches.index('example2') < matches.index('example1')
257 assert matches.index('example2') < matches.index('example1')
258
258
259 def test_unicode_completions(self):
259 def test_unicode_completions(self):
260 ip = get_ipython()
260 ip = get_ipython()
261 # Some strings that trigger different types of completion. Check them both
261 # Some strings that trigger different types of completion. Check them both
262 # in str and unicode forms
262 # in str and unicode forms
263 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
263 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
264 for t in s + list(map(str, s)):
264 for t in s + list(map(str, s)):
265 # We don't need to check exact completion values (they may change
265 # We don't need to check exact completion values (they may change
266 # depending on the state of the namespace), but at least no exceptions
266 # depending on the state of the namespace), but at least no exceptions
267 # should be thrown and the return value should be a pair of (text, list)
267 # should be thrown and the return value should be a pair of (text, list)
268 # values.
268 # values.
269 text, matches = ip.complete(t)
269 text, matches = ip.complete(t)
270 nt.assert_true(isinstance(text, str))
270 nt.assert_true(isinstance(text, str))
271 nt.assert_true(isinstance(matches, list))
271 nt.assert_true(isinstance(matches, list))
272
272
273 def test_latex_completions(self):
273 def test_latex_completions(self):
274 from IPython.core.latex_symbols import latex_symbols
274 from IPython.core.latex_symbols import latex_symbols
275 import random
275 import random
276
276
277 ip = get_ipython()
277 ip = get_ipython()
278 # Test some random unicode symbols
278 # Test some random unicode symbols
279 keys = random.sample(latex_symbols.keys(), 10)
279 keys = random.sample(latex_symbols.keys(), 10)
280 for k in keys:
280 for k in keys:
281 text, matches = ip.complete(k)
281 text, matches = ip.complete(k)
282 nt.assert_equal(text, k)
282 nt.assert_equal(text, k)
283 nt.assert_equal(matches, [latex_symbols[k]])
283 nt.assert_equal(matches, [latex_symbols[k]])
284 # Test a more complex line
284 # Test a more complex line
285 text, matches = ip.complete("print(\\alpha")
285 text, matches = ip.complete("print(\\alpha")
286 nt.assert_equal(text, "\\alpha")
286 nt.assert_equal(text, "\\alpha")
287 nt.assert_equal(matches[0], latex_symbols["\\alpha"])
287 nt.assert_equal(matches[0], latex_symbols["\\alpha"])
288 # Test multiple matching latex symbols
288 # Test multiple matching latex symbols
289 text, matches = ip.complete("\\al")
289 text, matches = ip.complete("\\al")
290 nt.assert_in("\\alpha", matches)
290 nt.assert_in("\\alpha", matches)
291 nt.assert_in("\\aleph", matches)
291 nt.assert_in("\\aleph", matches)
292
292
293 def test_latex_no_results(self):
293 def test_latex_no_results(self):
294 """
294 """
295 forward latex should really return nothing in either field if nothing is found.
295 forward latex should really return nothing in either field if nothing is found.
296 """
296 """
297 ip = get_ipython()
297 ip = get_ipython()
298 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
298 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
299 nt.assert_equal(text, "")
299 nt.assert_equal(text, "")
300 nt.assert_equal(matches, ())
300 nt.assert_equal(matches, ())
301
301
302 def test_back_latex_completion(self):
302 def test_back_latex_completion(self):
303 ip = get_ipython()
303 ip = get_ipython()
304
304
305 # do not return more than 1 match for \beta, only the latex one.
305 # do not return more than 1 match for \beta, only the latex one.
306 name, matches = ip.complete("\\Ξ²")
306 name, matches = ip.complete("\\Ξ²")
307 nt.assert_equal(matches, ['\\beta'])
307 nt.assert_equal(matches, ['\\beta'])
308
308
309 def test_back_unicode_completion(self):
309 def test_back_unicode_completion(self):
310 ip = get_ipython()
310 ip = get_ipython()
311
311
312 name, matches = ip.complete("\\β…€")
312 name, matches = ip.complete("\\β…€")
313 nt.assert_equal(matches, ("\\ROMAN NUMERAL FIVE",))
313 nt.assert_equal(matches, ("\\ROMAN NUMERAL FIVE",))
314
314
315 def test_forward_unicode_completion(self):
315 def test_forward_unicode_completion(self):
316 ip = get_ipython()
316 ip = get_ipython()
317
317
318 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
318 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
319 nt.assert_equal(matches, ["β…€"] ) # This is not a V
319 nt.assert_equal(matches, ["β…€"] ) # This is not a V
320 nt.assert_equal(matches, ["\u2164"] ) # same as above but explicit.
320 nt.assert_equal(matches, ["\u2164"] ) # same as above but explicit.
321
321
322 @nt.nottest # now we have a completion for \jmath
322 @nt.nottest # now we have a completion for \jmath
323 @decorators.knownfailureif(
323 @decorators.knownfailureif(
324 sys.platform == "win32", "Fails if there is a C:\\j... path"
324 sys.platform == "win32", "Fails if there is a C:\\j... path"
325 )
325 )
326 def test_no_ascii_back_completion(self):
326 def test_no_ascii_back_completion(self):
327 ip = get_ipython()
327 ip = get_ipython()
328 with TemporaryWorkingDirectory(): # Avoid any filename completions
328 with TemporaryWorkingDirectory(): # Avoid any filename completions
329 # single ascii letters that don't yet have completions
329 # single ascii letters that don't yet have completions
330 for letter in "jJ":
330 for letter in "jJ":
331 name, matches = ip.complete("\\" + letter)
331 name, matches = ip.complete("\\" + letter)
332 nt.assert_equal(matches, [])
332 nt.assert_equal(matches, [])
333
333
334 class CompletionSplitterTestCase(unittest.TestCase):
334 class CompletionSplitterTestCase(unittest.TestCase):
335 def setUp(self):
335 def setUp(self):
336 self.sp = completer.CompletionSplitter()
336 self.sp = completer.CompletionSplitter()
337
337
338 def test_delim_setting(self):
338 def test_delim_setting(self):
339 self.sp.delims = " "
339 self.sp.delims = " "
340 nt.assert_equal(self.sp.delims, " ")
340 nt.assert_equal(self.sp.delims, " ")
341 nt.assert_equal(self.sp._delim_expr, r"[\ ]")
341 nt.assert_equal(self.sp._delim_expr, r"[\ ]")
342
342
343 def test_spaces(self):
343 def test_spaces(self):
344 """Test with only spaces as split chars."""
344 """Test with only spaces as split chars."""
345 self.sp.delims = " "
345 self.sp.delims = " "
346 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
346 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
347 check_line_split(self.sp, t)
347 check_line_split(self.sp, t)
348
348
349 def test_has_open_quotes1(self):
349 def test_has_open_quotes1(self):
350 for s in ["'", "'''", "'hi' '"]:
350 for s in ["'", "'''", "'hi' '"]:
351 nt.assert_equal(completer.has_open_quotes(s), "'")
351 nt.assert_equal(completer.has_open_quotes(s), "'")
352
352
353 def test_has_open_quotes2(self):
353 def test_has_open_quotes2(self):
354 for s in ['"', '"""', '"hi" "']:
354 for s in ['"', '"""', '"hi" "']:
355 nt.assert_equal(completer.has_open_quotes(s), '"')
355 nt.assert_equal(completer.has_open_quotes(s), '"')
356
356
357 def test_has_open_quotes3(self):
357 def test_has_open_quotes3(self):
358 for s in ["''", "''' '''", "'hi' 'ipython'"]:
358 for s in ["''", "''' '''", "'hi' 'ipython'"]:
359 nt.assert_false(completer.has_open_quotes(s))
359 nt.assert_false(completer.has_open_quotes(s))
360
360
361 def test_has_open_quotes4(self):
361 def test_has_open_quotes4(self):
362 for s in ['""', '""" """', '"hi" "ipython"']:
362 for s in ['""', '""" """', '"hi" "ipython"']:
363 nt.assert_false(completer.has_open_quotes(s))
363 nt.assert_false(completer.has_open_quotes(s))
364
364
365 @decorators.knownfailureif(
365 @decorators.knownfailureif(
366 sys.platform == "win32", "abspath completions fail on Windows"
366 sys.platform == "win32", "abspath completions fail on Windows"
367 )
367 )
368 def test_abspath_file_completions(self):
368 def test_abspath_file_completions(self):
369 ip = get_ipython()
369 ip = get_ipython()
370 with TemporaryDirectory() as tmpdir:
370 with TemporaryDirectory() as tmpdir:
371 prefix = os.path.join(tmpdir, "foo")
371 prefix = os.path.join(tmpdir, "foo")
372 suffixes = ["1", "2"]
372 suffixes = ["1", "2"]
373 names = [prefix + s for s in suffixes]
373 names = [prefix + s for s in suffixes]
374 for n in names:
374 for n in names:
375 open(n, "w").close()
375 open(n, "w").close()
376
376
377 # Check simple completion
377 # Check simple completion
378 c = ip.complete(prefix)[1]
378 c = ip.complete(prefix)[1]
379 nt.assert_equal(c, names)
379 nt.assert_equal(c, names)
380
380
381 # Now check with a function call
381 # Now check with a function call
382 cmd = 'a = f("%s' % prefix
382 cmd = 'a = f("%s' % prefix
383 c = ip.complete(prefix, cmd)[1]
383 c = ip.complete(prefix, cmd)[1]
384 comp = [prefix + s for s in suffixes]
384 comp = [prefix + s for s in suffixes]
385 nt.assert_equal(c, comp)
385 nt.assert_equal(c, comp)
386
386
387 def test_local_file_completions(self):
387 def test_local_file_completions(self):
388 ip = get_ipython()
388 ip = get_ipython()
389 with TemporaryWorkingDirectory():
389 with TemporaryWorkingDirectory():
390 prefix = "./foo"
390 prefix = "./foo"
391 suffixes = ["1", "2"]
391 suffixes = ["1", "2"]
392 names = [prefix + s for s in suffixes]
392 names = [prefix + s for s in suffixes]
393 for n in names:
393 for n in names:
394 open(n, "w").close()
394 open(n, "w").close()
395
395
396 # Check simple completion
396 # Check simple completion
397 c = ip.complete(prefix)[1]
397 c = ip.complete(prefix)[1]
398 nt.assert_equal(c, names)
398 nt.assert_equal(c, names)
399
399
400 # Now check with a function call
400 # Now check with a function call
401 cmd = 'a = f("%s' % prefix
401 cmd = 'a = f("%s' % prefix
402 c = ip.complete(prefix, cmd)[1]
402 c = ip.complete(prefix, cmd)[1]
403 comp = {prefix + s for s in suffixes}
403 comp = {prefix + s for s in suffixes}
404 nt.assert_true(comp.issubset(set(c)))
404 nt.assert_true(comp.issubset(set(c)))
405
405
406 def test_quoted_file_completions(self):
406 def test_quoted_file_completions(self):
407 ip = get_ipython()
407 ip = get_ipython()
408 with TemporaryWorkingDirectory():
408 with TemporaryWorkingDirectory():
409 name = "foo'bar"
409 name = "foo'bar"
410 open(name, "w").close()
410 open(name, "w").close()
411
411
412 # Don't escape Windows
412 # Don't escape Windows
413 escaped = name if sys.platform == "win32" else "foo\\'bar"
413 escaped = name if sys.platform == "win32" else "foo\\'bar"
414
414
415 # Single quote matches embedded single quote
415 # Single quote matches embedded single quote
416 text = "open('foo"
416 text = "open('foo"
417 c = ip.Completer._complete(
417 c = ip.Completer._complete(
418 cursor_line=0, cursor_pos=len(text), full_text=text
418 cursor_line=0, cursor_pos=len(text), full_text=text
419 )[1]
419 )[1]
420 nt.assert_equal(c, [escaped])
420 nt.assert_equal(c, [escaped])
421
421
422 # Double quote requires no escape
422 # Double quote requires no escape
423 text = 'open("foo'
423 text = 'open("foo'
424 c = ip.Completer._complete(
424 c = ip.Completer._complete(
425 cursor_line=0, cursor_pos=len(text), full_text=text
425 cursor_line=0, cursor_pos=len(text), full_text=text
426 )[1]
426 )[1]
427 nt.assert_equal(c, [name])
427 nt.assert_equal(c, [name])
428
428
429 # No quote requires an escape
429 # No quote requires an escape
430 text = "%ls foo"
430 text = "%ls foo"
431 c = ip.Completer._complete(
431 c = ip.Completer._complete(
432 cursor_line=0, cursor_pos=len(text), full_text=text
432 cursor_line=0, cursor_pos=len(text), full_text=text
433 )[1]
433 )[1]
434 nt.assert_equal(c, [escaped])
434 nt.assert_equal(c, [escaped])
435
435
436 def test_all_completions_dups(self):
436 def test_all_completions_dups(self):
437 """
437 """
438 Make sure the output of `IPCompleter.all_completions` does not have
438 Make sure the output of `IPCompleter.all_completions` does not have
439 duplicated prefixes.
439 duplicated prefixes.
440 """
440 """
441 ip = get_ipython()
441 ip = get_ipython()
442 c = ip.Completer
442 c = ip.Completer
443 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
443 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
444 for jedi_status in [True, False]:
444 for jedi_status in [True, False]:
445 with provisionalcompleter():
445 with provisionalcompleter():
446 ip.Completer.use_jedi = jedi_status
446 ip.Completer.use_jedi = jedi_status
447 matches = c.all_completions("TestCl")
447 matches = c.all_completions("TestCl")
448 assert matches == ['TestClass'], jedi_status
448 assert matches == ['TestClass'], jedi_status
449 matches = c.all_completions("TestClass.")
449 matches = c.all_completions("TestClass.")
450 assert len(matches) > 2, jedi_status
450 assert len(matches) > 2, jedi_status
451 matches = c.all_completions("TestClass.a")
451 matches = c.all_completions("TestClass.a")
452 assert matches == ['TestClass.a', 'TestClass.a1'], jedi_status
452 assert matches == ['TestClass.a', 'TestClass.a1'], jedi_status
453
453
454 def test_jedi(self):
454 def test_jedi(self):
455 """
455 """
456 A couple of issues we had with Jedi
456 A couple of issues we had with Jedi
457 """
457 """
458 ip = get_ipython()
458 ip = get_ipython()
459
459
460 def _test_complete(reason, s, comp, start=None, end=None):
460 def _test_complete(reason, s, comp, start=None, end=None):
461 l = len(s)
461 l = len(s)
462 start = start if start is not None else l
462 start = start if start is not None else l
463 end = end if end is not None else l
463 end = end if end is not None else l
464 with provisionalcompleter():
464 with provisionalcompleter():
465 ip.Completer.use_jedi = True
465 ip.Completer.use_jedi = True
466 completions = set(ip.Completer.completions(s, l))
466 completions = set(ip.Completer.completions(s, l))
467 ip.Completer.use_jedi = False
467 ip.Completer.use_jedi = False
468 assert_in(Completion(start, end, comp), completions, reason)
468 assert_in(Completion(start, end, comp), completions, reason)
469
469
470 def _test_not_complete(reason, s, comp):
470 def _test_not_complete(reason, s, comp):
471 l = len(s)
471 l = len(s)
472 with provisionalcompleter():
472 with provisionalcompleter():
473 ip.Completer.use_jedi = True
473 ip.Completer.use_jedi = True
474 completions = set(ip.Completer.completions(s, l))
474 completions = set(ip.Completer.completions(s, l))
475 ip.Completer.use_jedi = False
475 ip.Completer.use_jedi = False
476 assert_not_in(Completion(l, l, comp), completions, reason)
476 assert_not_in(Completion(l, l, comp), completions, reason)
477
477
478 import jedi
478 import jedi
479
479
480 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
480 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
481 if jedi_version > (0, 10):
481 if jedi_version > (0, 10):
482 yield _test_complete, "jedi >0.9 should complete and not crash", "a=1;a.", "real"
482 yield _test_complete, "jedi >0.9 should complete and not crash", "a=1;a.", "real"
483 yield _test_complete, "can infer first argument", 'a=(1,"foo");a[0].', "real"
483 yield _test_complete, "can infer first argument", 'a=(1,"foo");a[0].', "real"
484 yield _test_complete, "can infer second argument", 'a=(1,"foo");a[1].', "capitalize"
484 yield _test_complete, "can infer second argument", 'a=(1,"foo");a[1].', "capitalize"
485 yield _test_complete, "cover duplicate completions", "im", "import", 0, 2
485 yield _test_complete, "cover duplicate completions", "im", "import", 0, 2
486
486
487 yield _test_not_complete, "does not mix types", 'a=(1,"foo");a[0].', "capitalize"
487 yield _test_not_complete, "does not mix types", 'a=(1,"foo");a[0].', "capitalize"
488
488
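# --- Illustrative sketch, not part of the reviewed diff ---
# What the Completion(start, end, text) triples asserted above mean: for the buffer
# "im" completed to "import", start=0 and end=2 say that buffer[0:2] is to be
# replaced by text.
from IPython.core.completer import Completion

c = Completion(start=0, end=2, text="import")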
489 def test_completion_have_signature(self):
489 def test_completion_have_signature(self):
490 """
490 """
491 Let's make sure Jedi is capable of pulling out the signature of the function we are completing.
491 Let's make sure Jedi is capable of pulling out the signature of the function we are completing.
492 """
492 """
493 ip = get_ipython()
493 ip = get_ipython()
494 with provisionalcompleter():
494 with provisionalcompleter():
495 ip.Completer.use_jedi = True
495 ip.Completer.use_jedi = True
496 completions = ip.Completer.completions("ope", 3)
496 completions = ip.Completer.completions("ope", 3)
497 c = next(completions) # should be `open`
497 c = next(completions) # should be `open`
498 ip.Completer.use_jedi = False
498 ip.Completer.use_jedi = False
499 assert "file" in c.signature, "Signature of function was not found by completer"
499 assert "file" in c.signature, "Signature of function was not found by completer"
500 assert (
500 assert (
501 "encoding" in c.signature
501 "encoding" in c.signature
502 ), "Signature of function was not found by completer"
502 ), "Signature of function was not found by completer"
503
503
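# --- Illustrative sketch, not part of the reviewed diff ---
# Inspecting the signature checked above directly; provisionalcompleter and the
# Completion.signature attribute are assumed to behave as in the rest of this module.
ip = get_ipython()
with provisionalcompleter():
    ip.Completer.use_jedi = True
    first = next(ip.Completer.completions("ope", 3))   # expected to be `open`
    print(first.text, first.signature)                 # signature mentions 'file', 'encoding', ...
    ip.Completer.use_jedi = False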
504 def test_deduplicate_completions(self):
504 def test_deduplicate_completions(self):
505 """
505 """
506 Test that completions are correctly deduplicated (even if ranges are not the same)
506 Test that completions are correctly deduplicated (even if ranges are not the same)
507 """
507 """
508 ip = get_ipython()
508 ip = get_ipython()
509 ip.ex(
509 ip.ex(
510 textwrap.dedent(
510 textwrap.dedent(
511 """
511 """
512 class Z:
512 class Z:
513 zoo = 1
513 zoo = 1
514 """
514 """
515 )
515 )
516 )
516 )
517 with provisionalcompleter():
517 with provisionalcompleter():
518 ip.Completer.use_jedi = True
518 ip.Completer.use_jedi = True
519 l = list(
519 l = list(
520 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
520 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
521 )
521 )
522 ip.Completer.use_jedi = False
522 ip.Completer.use_jedi = False
523
523
524 assert len(l) == 1, "Completions (Z.z<tab>) correctly deduplicate: %s " % l
524 assert len(l) == 1, "Completions (Z.z<tab>) correctly deduplicate: %s " % l
525 assert l[0].text == "zoo" # and not `it.accumulate`
525 assert l[0].text == "zoo" # and not `it.accumulate`
526
526
527 def test_greedy_completions(self):
527 def test_greedy_completions(self):
528 """
528 """
529 Test the capability of the Greedy completer.
529 Test the capability of the Greedy completer.
530
530
531 Most of the tests here do not really show off the greedy completer; as proof,
531 Most of the tests here do not really show off the greedy completer; as proof,
532 each of the cases below now passes with Jedi. The greedy completer is capable of more.
532 each of the cases below now passes with Jedi. The greedy completer is capable of more.
533
533
534 See the :any:`test_dict_key_completion_contexts`
534 See the :any:`test_dict_key_completion_contexts`
535
535
536 """
536 """
537 ip = get_ipython()
537 ip = get_ipython()
538 ip.ex("a=list(range(5))")
538 ip.ex("a=list(range(5))")
539 _, c = ip.complete(".", line="a[0].")
539 _, c = ip.complete(".", line="a[0].")
540 nt.assert_false(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
540 nt.assert_false(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
541
541
542 def _(line, cursor_pos, expect, message, completion):
542 def _(line, cursor_pos, expect, message, completion):
543 with greedy_completion(), provisionalcompleter():
543 with greedy_completion(), provisionalcompleter():
544 ip.Completer.use_jedi = False
544 ip.Completer.use_jedi = False
545 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
545 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
546 nt.assert_in(expect, c, message % c)
546 nt.assert_in(expect, c, message % c)
547
547
548 ip.Completer.use_jedi = True
548 ip.Completer.use_jedi = True
549 with provisionalcompleter():
549 with provisionalcompleter():
550 completions = ip.Completer.completions(line, cursor_pos)
550 completions = ip.Completer.completions(line, cursor_pos)
551 nt.assert_in(completion, completions)
551 nt.assert_in(completion, completions)
552
552
553 with provisionalcompleter():
553 with provisionalcompleter():
554 yield _, "a[0].", 5, "a[0].real", "Should have completed on a[0].: %s", Completion(
554 yield _, "a[0].", 5, "a[0].real", "Should have completed on a[0].: %s", Completion(
555 5, 5, "real"
555 5, 5, "real"
556 )
556 )
557 yield _, "a[0].r", 6, "a[0].real", "Should have completed on a[0].r: %s", Completion(
557 yield _, "a[0].r", 6, "a[0].real", "Should have completed on a[0].r: %s", Completion(
558 5, 6, "real"
558 5, 6, "real"
559 )
559 )
560
560
561 yield _, "a[0].from_", 10, "a[0].from_bytes", "Should have completed on a[0].from_: %s", Completion(
561 yield _, "a[0].from_", 10, "a[0].from_bytes", "Should have completed on a[0].from_: %s", Completion(
562 5, 10, "from_bytes"
562 5, 10, "from_bytes"
563 )
563 )
564
564
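# --- Illustrative sketch, not part of the reviewed diff ---
# What the greedy flag changes: with Completer.greedy enabled (via the greedy_completion
# helper used above), the completer evaluates the object left of the final dot, so
# attribute completion works through a subscript.
ip = get_ipython()
ip.ex("a = list(range(5))")
with greedy_completion():
    ip.Completer.use_jedi = False
    _, matches = ip.complete(".", line="a[0].", cursor_pos=5)
    # matches now include 'a[0].real', which the non-greedy completer refuses above.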
565 def test_omit__names(self):
565 def test_omit__names(self):
566 # also happens to test IPCompleter as a configurable
566 # also happens to test IPCompleter as a configurable
567 ip = get_ipython()
567 ip = get_ipython()
568 ip._hidden_attr = 1
568 ip._hidden_attr = 1
569 ip._x = {}
569 ip._x = {}
570 c = ip.Completer
570 c = ip.Completer
571 ip.ex("ip=get_ipython()")
571 ip.ex("ip=get_ipython()")
572 cfg = Config()
572 cfg = Config()
573 cfg.IPCompleter.omit__names = 0
573 cfg.IPCompleter.omit__names = 0
574 c.update_config(cfg)
574 c.update_config(cfg)
575 with provisionalcompleter():
575 with provisionalcompleter():
576 c.use_jedi = False
576 c.use_jedi = False
577 s, matches = c.complete("ip.")
577 s, matches = c.complete("ip.")
578 nt.assert_in("ip.__str__", matches)
578 nt.assert_in("ip.__str__", matches)
579 nt.assert_in("ip._hidden_attr", matches)
579 nt.assert_in("ip._hidden_attr", matches)
580
580
581 # c.use_jedi = True
581 # c.use_jedi = True
582 # completions = set(c.completions('ip.', 3))
582 # completions = set(c.completions('ip.', 3))
583 # nt.assert_in(Completion(3, 3, '__str__'), completions)
583 # nt.assert_in(Completion(3, 3, '__str__'), completions)
584 # nt.assert_in(Completion(3,3, "_hidden_attr"), completions)
584 # nt.assert_in(Completion(3,3, "_hidden_attr"), completions)
585
585
586 cfg = Config()
586 cfg = Config()
587 cfg.IPCompleter.omit__names = 1
587 cfg.IPCompleter.omit__names = 1
588 c.update_config(cfg)
588 c.update_config(cfg)
589 with provisionalcompleter():
589 with provisionalcompleter():
590 c.use_jedi = False
590 c.use_jedi = False
591 s, matches = c.complete("ip.")
591 s, matches = c.complete("ip.")
592 nt.assert_not_in("ip.__str__", matches)
592 nt.assert_not_in("ip.__str__", matches)
593 # nt.assert_in('ip._hidden_attr', matches)
593 # nt.assert_in('ip._hidden_attr', matches)
594
594
595 # c.use_jedi = True
595 # c.use_jedi = True
596 # completions = set(c.completions('ip.', 3))
596 # completions = set(c.completions('ip.', 3))
597 # nt.assert_not_in(Completion(3,3,'__str__'), completions)
597 # nt.assert_not_in(Completion(3,3,'__str__'), completions)
598 # nt.assert_in(Completion(3,3, "_hidden_attr"), completions)
598 # nt.assert_in(Completion(3,3, "_hidden_attr"), completions)
599
599
600 cfg = Config()
600 cfg = Config()
601 cfg.IPCompleter.omit__names = 2
601 cfg.IPCompleter.omit__names = 2
602 c.update_config(cfg)
602 c.update_config(cfg)
603 with provisionalcompleter():
603 with provisionalcompleter():
604 c.use_jedi = False
604 c.use_jedi = False
605 s, matches = c.complete("ip.")
605 s, matches = c.complete("ip.")
606 nt.assert_not_in("ip.__str__", matches)
606 nt.assert_not_in("ip.__str__", matches)
607 nt.assert_not_in("ip._hidden_attr", matches)
607 nt.assert_not_in("ip._hidden_attr", matches)
608
608
609 # c.use_jedi = True
609 # c.use_jedi = True
610 # completions = set(c.completions('ip.', 3))
610 # completions = set(c.completions('ip.', 3))
611 # nt.assert_not_in(Completion(3,3,'__str__'), completions)
611 # nt.assert_not_in(Completion(3,3,'__str__'), completions)
612 # nt.assert_not_in(Completion(3,3, "_hidden_attr"), completions)
612 # nt.assert_not_in(Completion(3,3, "_hidden_attr"), completions)
613
613
614 with provisionalcompleter():
614 with provisionalcompleter():
615 c.use_jedi = False
615 c.use_jedi = False
616 s, matches = c.complete("ip._x.")
616 s, matches = c.complete("ip._x.")
617 nt.assert_in("ip._x.keys", matches)
617 nt.assert_in("ip._x.keys", matches)
618
618
619 # c.use_jedi = True
619 # c.use_jedi = True
620 # completions = set(c.completions('ip._x.', 6))
620 # completions = set(c.completions('ip._x.', 6))
621 # nt.assert_in(Completion(6,6, "keys"), completions)
621 # nt.assert_in(Completion(6,6, "keys"), completions)
622
622
623 del ip._hidden_attr
623 del ip._hidden_attr
624 del ip._x
624 del ip._x
625
625
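# --- Illustrative sketch, not part of the reviewed diff ---
# The three omit__names levels exercised above, as they would appear in an IPython
# configuration file (values per the IPCompleter.omit__names traitlet).
c = get_config()
c.IPCompleter.omit__names = 0   # complete everything, including __dunder__ names
c.IPCompleter.omit__names = 1   # hide __dunder__ names only
c.IPCompleter.omit__names = 2   # hide every name that starts with an underscore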
626 def test_limit_to__all__False_ok(self):
626 def test_limit_to__all__False_ok(self):
627 """
627 """
628 Limit to __all__ is deprecated; once we remove it, this test can go away.
628 Limit to __all__ is deprecated; once we remove it, this test can go away.
629 """
629 """
630 ip = get_ipython()
630 ip = get_ipython()
631 c = ip.Completer
631 c = ip.Completer
632 c.use_jedi = False
632 c.use_jedi = False
633 ip.ex("class D: x=24")
633 ip.ex("class D: x=24")
634 ip.ex("d=D()")
634 ip.ex("d=D()")
635 cfg = Config()
635 cfg = Config()
636 cfg.IPCompleter.limit_to__all__ = False
636 cfg.IPCompleter.limit_to__all__ = False
637 c.update_config(cfg)
637 c.update_config(cfg)
638 s, matches = c.complete("d.")
638 s, matches = c.complete("d.")
639 nt.assert_in("d.x", matches)
639 nt.assert_in("d.x", matches)
640
640
641 def test_get__all__entries_ok(self):
641 def test_get__all__entries_ok(self):
642 class A:
642 class A:
643 __all__ = ["x", 1]
643 __all__ = ["x", 1]
644
644
645 words = completer.get__all__entries(A())
645 words = completer.get__all__entries(A())
646 nt.assert_equal(words, ["x"])
646 nt.assert_equal(words, ["x"])
647
647
648 def test_get__all__entries_no__all__ok(self):
648 def test_get__all__entries_no__all__ok(self):
649 class A:
649 class A:
650 pass
650 pass
651
651
652 words = completer.get__all__entries(A())
652 words = completer.get__all__entries(A())
653 nt.assert_equal(words, [])
653 nt.assert_equal(words, [])
654
654
655 def test_func_kw_completions(self):
655 def test_func_kw_completions(self):
656 ip = get_ipython()
656 ip = get_ipython()
657 c = ip.Completer
657 c = ip.Completer
658 c.use_jedi = False
658 c.use_jedi = False
659 ip.ex("def myfunc(a=1,b=2): return a+b")
659 ip.ex("def myfunc(a=1,b=2): return a+b")
660 s, matches = c.complete(None, "myfunc(1,b")
660 s, matches = c.complete(None, "myfunc(1,b")
661 nt.assert_in("b=", matches)
661 nt.assert_in("b=", matches)
662 # Simulate completing with cursor right after b (pos==10):
662 # Simulate completing with cursor right after b (pos==10):
663 s, matches = c.complete(None, "myfunc(1,b)", 10)
663 s, matches = c.complete(None, "myfunc(1,b)", 10)
664 nt.assert_in("b=", matches)
664 nt.assert_in("b=", matches)
665 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
665 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
666 nt.assert_in("b=", matches)
666 nt.assert_in("b=", matches)
667 # builtin function
667 # builtin function
668 s, matches = c.complete(None, "min(k, k")
668 s, matches = c.complete(None, "min(k, k")
669 nt.assert_in("key=", matches)
669 nt.assert_in("key=", matches)
670
670
671 def test_default_arguments_from_docstring(self):
671 def test_default_arguments_from_docstring(self):
672 ip = get_ipython()
672 ip = get_ipython()
673 c = ip.Completer
673 c = ip.Completer
674 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
674 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
675 nt.assert_equal(kwd, ["key"])
675 nt.assert_equal(kwd, ["key"])
676 # with cython type etc
676 # with cython type etc
677 kwd = c._default_arguments_from_docstring(
677 kwd = c._default_arguments_from_docstring(
678 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
678 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
679 )
679 )
680 nt.assert_equal(kwd, ["ncall", "resume", "nsplit"])
680 nt.assert_equal(kwd, ["ncall", "resume", "nsplit"])
681 # white spaces
681 # white spaces
682 kwd = c._default_arguments_from_docstring(
682 kwd = c._default_arguments_from_docstring(
683 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
683 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
684 )
684 )
685 nt.assert_equal(kwd, ["ncall", "resume", "nsplit"])
685 nt.assert_equal(kwd, ["ncall", "resume", "nsplit"])
686
686
687 def test_line_magics(self):
687 def test_line_magics(self):
688 ip = get_ipython()
688 ip = get_ipython()
689 c = ip.Completer
689 c = ip.Completer
690 s, matches = c.complete(None, "lsmag")
690 s, matches = c.complete(None, "lsmag")
691 nt.assert_in("%lsmagic", matches)
691 nt.assert_in("%lsmagic", matches)
692 s, matches = c.complete(None, "%lsmag")
692 s, matches = c.complete(None, "%lsmag")
693 nt.assert_in("%lsmagic", matches)
693 nt.assert_in("%lsmagic", matches)
694
694
695 def test_cell_magics(self):
695 def test_cell_magics(self):
696 from IPython.core.magic import register_cell_magic
696 from IPython.core.magic import register_cell_magic
697
697
698 @register_cell_magic
698 @register_cell_magic
699 def _foo_cellm(line, cell):
699 def _foo_cellm(line, cell):
700 pass
700 pass
701
701
702 ip = get_ipython()
702 ip = get_ipython()
703 c = ip.Completer
703 c = ip.Completer
704
704
705 s, matches = c.complete(None, "_foo_ce")
705 s, matches = c.complete(None, "_foo_ce")
706 nt.assert_in("%%_foo_cellm", matches)
706 nt.assert_in("%%_foo_cellm", matches)
707 s, matches = c.complete(None, "%%_foo_ce")
707 s, matches = c.complete(None, "%%_foo_ce")
708 nt.assert_in("%%_foo_cellm", matches)
708 nt.assert_in("%%_foo_cellm", matches)
709
709
710 def test_line_cell_magics(self):
710 def test_line_cell_magics(self):
711 from IPython.core.magic import register_line_cell_magic
711 from IPython.core.magic import register_line_cell_magic
712
712
713 @register_line_cell_magic
713 @register_line_cell_magic
714 def _bar_cellm(line, cell):
714 def _bar_cellm(line, cell):
715 pass
715 pass
716
716
717 ip = get_ipython()
717 ip = get_ipython()
718 c = ip.Completer
718 c = ip.Completer
719
719
720 # The policy here is trickier, see comments in completion code. The
720 # The policy here is trickier, see comments in completion code. The
721 # returned values depend on whether the user passes %% or not explicitly,
721 # returned values depend on whether the user passes %% or not explicitly,
722 # and this will show a difference if the same name is both a line and cell
722 # and this will show a difference if the same name is both a line and cell
723 # magic.
723 # magic.
724 s, matches = c.complete(None, "_bar_ce")
724 s, matches = c.complete(None, "_bar_ce")
725 nt.assert_in("%_bar_cellm", matches)
725 nt.assert_in("%_bar_cellm", matches)
726 nt.assert_in("%%_bar_cellm", matches)
726 nt.assert_in("%%_bar_cellm", matches)
727 s, matches = c.complete(None, "%_bar_ce")
727 s, matches = c.complete(None, "%_bar_ce")
728 nt.assert_in("%_bar_cellm", matches)
728 nt.assert_in("%_bar_cellm", matches)
729 nt.assert_in("%%_bar_cellm", matches)
729 nt.assert_in("%%_bar_cellm", matches)
730 s, matches = c.complete(None, "%%_bar_ce")
730 s, matches = c.complete(None, "%%_bar_ce")
731 nt.assert_not_in("%_bar_cellm", matches)
731 nt.assert_not_in("%_bar_cellm", matches)
732 nt.assert_in("%%_bar_cellm", matches)
732 nt.assert_in("%%_bar_cellm", matches)
733
733
734 def test_magic_completion_order(self):
734 def test_magic_completion_order(self):
735 ip = get_ipython()
735 ip = get_ipython()
736 c = ip.Completer
736 c = ip.Completer
737
737
738 # Test ordering of line and cell magics.
738 # Test ordering of line and cell magics.
739 text, matches = c.complete("timeit")
739 text, matches = c.complete("timeit")
740 nt.assert_equal(matches, ["%timeit", "%%timeit"])
740 nt.assert_equal(matches, ["%timeit", "%%timeit"])
741
741
742 def test_magic_completion_shadowing(self):
742 def test_magic_completion_shadowing(self):
743 ip = get_ipython()
743 ip = get_ipython()
744 c = ip.Completer
744 c = ip.Completer
745 c.use_jedi = False
745 c.use_jedi = False
746
746
747 # Before matplotlib is defined in the namespace, the %matplotlib magic should be the only option.
747 # Before matplotlib is defined in the namespace, the %matplotlib magic should be the only option.
748 text, matches = c.complete("mat")
748 text, matches = c.complete("mat")
749 nt.assert_equal(matches, ["%matplotlib"])
749 nt.assert_equal(matches, ["%matplotlib"])
750
750
751 # The newly introduced name should shadow the magic.
751 # The newly introduced name should shadow the magic.
752 ip.run_cell("matplotlib = 1")
752 ip.run_cell("matplotlib = 1")
753 text, matches = c.complete("mat")
753 text, matches = c.complete("mat")
754 nt.assert_equal(matches, ["matplotlib"])
754 nt.assert_equal(matches, ["matplotlib"])
755
755
756 # After removing matplotlib from namespace, the magic should again be
756 # After removing matplotlib from namespace, the magic should again be
757 # the only option.
757 # the only option.
758 del ip.user_ns["matplotlib"]
758 del ip.user_ns["matplotlib"]
759 text, matches = c.complete("mat")
759 text, matches = c.complete("mat")
760 nt.assert_equal(matches, ["%matplotlib"])
760 nt.assert_equal(matches, ["%matplotlib"])
761
761
762 def test_magic_completion_shadowing_explicit(self):
762 def test_magic_completion_shadowing_explicit(self):
763 """
763 """
764 If the user tries to complete a shadowed magic, an explicit % prefix should
764 If the user tries to complete a shadowed magic, an explicit % prefix should
765 still return the completions.
765 still return the completions.
766 """
766 """
767 ip = get_ipython()
767 ip = get_ipython()
768 c = ip.Completer
768 c = ip.Completer
769
769
770 # Before matplotlib is defined in the namespace, the %matplotlib magic should be the only option.
770 # Before matplotlib is defined in the namespace, the %matplotlib magic should be the only option.
771 text, matches = c.complete("%mat")
771 text, matches = c.complete("%mat")
772 nt.assert_equal(matches, ["%matplotlib"])
772 nt.assert_equal(matches, ["%matplotlib"])
773
773
774 ip.run_cell("matplotlib = 1")
774 ip.run_cell("matplotlib = 1")
775
775
776 # Even though matplotlib now shadows the magic in the user namespace, an
776 # Even though matplotlib now shadows the magic in the user namespace, an
777 # explicit % prefix should still return the magic as the only option.
777 # explicit % prefix should still return the magic as the only option.
778 text, matches = c.complete("%mat")
778 text, matches = c.complete("%mat")
779 nt.assert_equal(matches, ["%matplotlib"])
779 nt.assert_equal(matches, ["%matplotlib"])
780
780
781 def test_magic_config(self):
781 def test_magic_config(self):
782 ip = get_ipython()
782 ip = get_ipython()
783 c = ip.Completer
783 c = ip.Completer
784
784
785 s, matches = c.complete(None, "conf")
785 s, matches = c.complete(None, "conf")
786 nt.assert_in("%config", matches)
786 nt.assert_in("%config", matches)
787 s, matches = c.complete(None, "conf")
787 s, matches = c.complete(None, "conf")
788 nt.assert_not_in("AliasManager", matches)
788 nt.assert_not_in("AliasManager", matches)
789 s, matches = c.complete(None, "config ")
789 s, matches = c.complete(None, "config ")
790 nt.assert_in("AliasManager", matches)
790 nt.assert_in("AliasManager", matches)
791 s, matches = c.complete(None, "%config ")
791 s, matches = c.complete(None, "%config ")
792 nt.assert_in("AliasManager", matches)
792 nt.assert_in("AliasManager", matches)
793 s, matches = c.complete(None, "config Ali")
793 s, matches = c.complete(None, "config Ali")
794 nt.assert_list_equal(["AliasManager"], matches)
794 nt.assert_list_equal(["AliasManager"], matches)
795 s, matches = c.complete(None, "%config Ali")
795 s, matches = c.complete(None, "%config Ali")
796 nt.assert_list_equal(["AliasManager"], matches)
796 nt.assert_list_equal(["AliasManager"], matches)
797 s, matches = c.complete(None, "config AliasManager")
797 s, matches = c.complete(None, "config AliasManager")
798 nt.assert_list_equal(["AliasManager"], matches)
798 nt.assert_list_equal(["AliasManager"], matches)
799 s, matches = c.complete(None, "%config AliasManager")
799 s, matches = c.complete(None, "%config AliasManager")
800 nt.assert_list_equal(["AliasManager"], matches)
800 nt.assert_list_equal(["AliasManager"], matches)
801 s, matches = c.complete(None, "config AliasManager.")
801 s, matches = c.complete(None, "config AliasManager.")
802 nt.assert_in("AliasManager.default_aliases", matches)
802 nt.assert_in("AliasManager.default_aliases", matches)
803 s, matches = c.complete(None, "%config AliasManager.")
803 s, matches = c.complete(None, "%config AliasManager.")
804 nt.assert_in("AliasManager.default_aliases", matches)
804 nt.assert_in("AliasManager.default_aliases", matches)
805 s, matches = c.complete(None, "config AliasManager.de")
805 s, matches = c.complete(None, "config AliasManager.de")
806 nt.assert_list_equal(["AliasManager.default_aliases"], matches)
806 nt.assert_list_equal(["AliasManager.default_aliases"], matches)
807 s, matches = c.complete(None, "config AliasManager.de")
807 s, matches = c.complete(None, "config AliasManager.de")
808 nt.assert_list_equal(["AliasManager.default_aliases"], matches)
808 nt.assert_list_equal(["AliasManager.default_aliases"], matches)
809
809
810 def test_magic_color(self):
810 def test_magic_color(self):
811 ip = get_ipython()
811 ip = get_ipython()
812 c = ip.Completer
812 c = ip.Completer
813
813
814 s, matches = c.complete(None, "colo")
814 s, matches = c.complete(None, "colo")
815 nt.assert_in("%colors", matches)
815 nt.assert_in("%colors", matches)
816 s, matches = c.complete(None, "colo")
816 s, matches = c.complete(None, "colo")
817 nt.assert_not_in("NoColor", matches)
817 nt.assert_not_in("NoColor", matches)
818 s, matches = c.complete(None, "%colors") # No trailing space
818 s, matches = c.complete(None, "%colors") # No trailing space
819 nt.assert_not_in("NoColor", matches)
819 nt.assert_not_in("NoColor", matches)
820 s, matches = c.complete(None, "colors ")
820 s, matches = c.complete(None, "colors ")
821 nt.assert_in("NoColor", matches)
821 nt.assert_in("NoColor", matches)
822 s, matches = c.complete(None, "%colors ")
822 s, matches = c.complete(None, "%colors ")
823 nt.assert_in("NoColor", matches)
823 nt.assert_in("NoColor", matches)
824 s, matches = c.complete(None, "colors NoCo")
824 s, matches = c.complete(None, "colors NoCo")
825 nt.assert_list_equal(["NoColor"], matches)
825 nt.assert_list_equal(["NoColor"], matches)
826 s, matches = c.complete(None, "%colors NoCo")
826 s, matches = c.complete(None, "%colors NoCo")
827 nt.assert_list_equal(["NoColor"], matches)
827 nt.assert_list_equal(["NoColor"], matches)
828
828
829 def test_match_dict_keys(self):
829 def test_match_dict_keys(self):
830 """
830 """
831 Test that match_dict_keys works on a couple of use cases, returns what is
831 Test that match_dict_keys works on a couple of use cases, returns what is
832 expected, and does not crash.
832 expected, and does not crash.
833 """
833 """
834 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
834 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
835
835
836 keys = ["foo", b"far"]
836 keys = ["foo", b"far"]
837 assert match_dict_keys(keys, "b'", delims=delims) == ("'", 2, ["far"])
837 assert match_dict_keys(keys, "b'", delims=delims) == ("'", 2, ["far"])
838 assert match_dict_keys(keys, "b'f", delims=delims) == ("'", 2, ["far"])
838 assert match_dict_keys(keys, "b'f", delims=delims) == ("'", 2, ["far"])
839 assert match_dict_keys(keys, 'b"', delims=delims) == ('"', 2, ["far"])
839 assert match_dict_keys(keys, 'b"', delims=delims) == ('"', 2, ["far"])
840 assert match_dict_keys(keys, 'b"f', delims=delims) == ('"', 2, ["far"])
840 assert match_dict_keys(keys, 'b"f', delims=delims) == ('"', 2, ["far"])
841
841
842 assert match_dict_keys(keys, "'", delims=delims) == ("'", 1, ["foo"])
842 assert match_dict_keys(keys, "'", delims=delims) == ("'", 1, ["foo"])
843 assert match_dict_keys(keys, "'f", delims=delims) == ("'", 1, ["foo"])
843 assert match_dict_keys(keys, "'f", delims=delims) == ("'", 1, ["foo"])
844 assert match_dict_keys(keys, '"', delims=delims) == ('"', 1, ["foo"])
844 assert match_dict_keys(keys, '"', delims=delims) == ('"', 1, ["foo"])
845 assert match_dict_keys(keys, '"f', delims=delims) == ('"', 1, ["foo"])
845 assert match_dict_keys(keys, '"f', delims=delims) == ('"', 1, ["foo"])
846
846
847 match_dict_keys
847 match_dict_keys
848
848
849 def test_match_dict_keys_tuple(self):
850 """
851 Test that match_dict_keys called with an extra prefix works on a couple of use cases,
852 returns what is expected, and does not crash.
853 """
854 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
855
856 keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ('other', 'test')]
857
858 # Completion on first key == "foo"
859 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("foo",)) == ("'", 1, ["bar", "oof"])
860 assert match_dict_keys(keys, "\"", delims=delims, extra_prefix=("foo",)) == ("\"", 1, ["bar", "oof"])
861 assert match_dict_keys(keys, "'o", delims=delims, extra_prefix=("foo",)) == ("'", 1, ["oof"])
862 assert match_dict_keys(keys, "\"o", delims=delims, extra_prefix=("foo",)) == ("\"", 1, ["oof"])
863 assert match_dict_keys(keys, "b'", delims=delims, extra_prefix=("foo",)) == ("'", 2, ["bar"])
864 assert match_dict_keys(keys, "b\"", delims=delims, extra_prefix=("foo",)) == ("\"", 2, ["bar"])
865 assert match_dict_keys(keys, "b'b", delims=delims, extra_prefix=("foo",)) == ("'", 2, ["bar"])
866 assert match_dict_keys(keys, "b\"b", delims=delims, extra_prefix=("foo",)) == ("\"", 2, ["bar"])
867
868 # No Completion
869 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("no_foo",)) == ("'", 1, [])
870 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("fo",)) == ("'", 1, [])
871
872 keys = [('foo1', 'foo2', 'foo3', 'foo4'), ('foo1', 'foo2', 'bar', 'foo4')]
873 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1',)) == ("'", 1, ["foo2", "foo2"])
874 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2')) == ("'", 1, ["foo3"])
875 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2', 'foo3')) == ("'", 1, ["foo4"])
876 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2', 'foo3', 'foo4')) == ("'", 1, [])
877
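# --- Illustrative sketch, not part of the reviewed diff ---
# Shape of the value match_dict_keys returns, using the same delims as the two tests
# above (match_dict_keys is assumed to be the helper imported from IPython.core.completer).
delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
quote, token_start, matched = match_dict_keys(["foo", b"far"], "'f", delims=delims)
# quote == "'"        : the quote character the user opened
# token_start == 1    : offset of the key text inside the typed token (just after the quote)
# matched == ["foo"]  : only str keys match a plain quote; bytes keys need a b'' prefix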
849 def test_dict_key_completion_string(self):
878 def test_dict_key_completion_string(self):
850 """Test dictionary key completion for string keys"""
879 """Test dictionary key completion for string keys"""
851 ip = get_ipython()
880 ip = get_ipython()
852 complete = ip.Completer.complete
881 complete = ip.Completer.complete
853
882
854 ip.user_ns["d"] = {"abc": None}
883 ip.user_ns["d"] = {"abc": None}
855
884
856 # check completion at different stages
885 # check completion at different stages
857 _, matches = complete(line_buffer="d[")
886 _, matches = complete(line_buffer="d[")
858 nt.assert_in("'abc'", matches)
887 nt.assert_in("'abc'", matches)
859 nt.assert_not_in("'abc']", matches)
888 nt.assert_not_in("'abc']", matches)
860
889
861 _, matches = complete(line_buffer="d['")
890 _, matches = complete(line_buffer="d['")
862 nt.assert_in("abc", matches)
891 nt.assert_in("abc", matches)
863 nt.assert_not_in("abc']", matches)
892 nt.assert_not_in("abc']", matches)
864
893
865 _, matches = complete(line_buffer="d['a")
894 _, matches = complete(line_buffer="d['a")
866 nt.assert_in("abc", matches)
895 nt.assert_in("abc", matches)
867 nt.assert_not_in("abc']", matches)
896 nt.assert_not_in("abc']", matches)
868
897
869 # check use of different quoting
898 # check use of different quoting
870 _, matches = complete(line_buffer='d["')
899 _, matches = complete(line_buffer='d["')
871 nt.assert_in("abc", matches)
900 nt.assert_in("abc", matches)
872 nt.assert_not_in('abc"]', matches)
901 nt.assert_not_in('abc"]', matches)
873
902
874 _, matches = complete(line_buffer='d["a')
903 _, matches = complete(line_buffer='d["a')
875 nt.assert_in("abc", matches)
904 nt.assert_in("abc", matches)
876 nt.assert_not_in('abc"]', matches)
905 nt.assert_not_in('abc"]', matches)
877
906
878 # check sensitivity to following context
907 # check sensitivity to following context
879 _, matches = complete(line_buffer="d[]", cursor_pos=2)
908 _, matches = complete(line_buffer="d[]", cursor_pos=2)
880 nt.assert_in("'abc'", matches)
909 nt.assert_in("'abc'", matches)
881
910
882 _, matches = complete(line_buffer="d['']", cursor_pos=3)
911 _, matches = complete(line_buffer="d['']", cursor_pos=3)
883 nt.assert_in("abc", matches)
912 nt.assert_in("abc", matches)
884 nt.assert_not_in("abc'", matches)
913 nt.assert_not_in("abc'", matches)
885 nt.assert_not_in("abc']", matches)
914 nt.assert_not_in("abc']", matches)
886
915
887 # check that multiple matching keys are returned and that non-matching keys are not
916 # check that multiple matching keys are returned and that non-matching keys are not
888 ip.user_ns["d"] = {
917 ip.user_ns["d"] = {
889 "abc": None,
918 "abc": None,
890 "abd": None,
919 "abd": None,
891 "bad": None,
920 "bad": None,
892 object(): None,
921 object(): None,
893 5: None,
922 5: None,
894 }
923 }
895
924
896 _, matches = complete(line_buffer="d['a")
925 _, matches = complete(line_buffer="d['a")
897 nt.assert_in("abc", matches)
926 nt.assert_in("abc", matches)
898 nt.assert_in("abd", matches)
927 nt.assert_in("abd", matches)
899 nt.assert_not_in("bad", matches)
928 nt.assert_not_in("bad", matches)
900 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
929 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
901
930
902 # check escaping and whitespace
931 # check escaping and whitespace
903 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
932 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
904 _, matches = complete(line_buffer="d['a")
933 _, matches = complete(line_buffer="d['a")
905 nt.assert_in("a\\nb", matches)
934 nt.assert_in("a\\nb", matches)
906 nt.assert_in("a\\'b", matches)
935 nt.assert_in("a\\'b", matches)
907 nt.assert_in('a"b', matches)
936 nt.assert_in('a"b', matches)
908 nt.assert_in("a word", matches)
937 nt.assert_in("a word", matches)
909 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
938 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
910
939
911 # - can complete on non-initial word of the string
940 # - can complete on non-initial word of the string
912 _, matches = complete(line_buffer="d['a w")
941 _, matches = complete(line_buffer="d['a w")
913 nt.assert_in("word", matches)
942 nt.assert_in("word", matches)
914
943
915 # - understands quote escaping
944 # - understands quote escaping
916 _, matches = complete(line_buffer="d['a\\'")
945 _, matches = complete(line_buffer="d['a\\'")
917 nt.assert_in("b", matches)
946 nt.assert_in("b", matches)
918
947
919 # - default quoting should work like repr
948 # - default quoting should work like repr
920 _, matches = complete(line_buffer="d[")
949 _, matches = complete(line_buffer="d[")
921 nt.assert_in('"a\'b"', matches)
950 nt.assert_in('"a\'b"', matches)
922
951
923 # - when the opening quote is ", it is possible to match an unescaped apostrophe
952 # - when the opening quote is ", it is possible to match an unescaped apostrophe
924 _, matches = complete(line_buffer="d[\"a'")
953 _, matches = complete(line_buffer="d[\"a'")
925 nt.assert_in("b", matches)
954 nt.assert_in("b", matches)
926
955
927 # need to not split at delims that readline won't split at
956 # need to not split at delims that readline won't split at
928 if "-" not in ip.Completer.splitter.delims:
957 if "-" not in ip.Completer.splitter.delims:
929 ip.user_ns["d"] = {"before-after": None}
958 ip.user_ns["d"] = {"before-after": None}
930 _, matches = complete(line_buffer="d['before-af")
959 _, matches = complete(line_buffer="d['before-af")
931 nt.assert_in("before-after", matches)
960 nt.assert_in("before-after", matches)
932
961
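# --- Illustrative sketch, not part of the reviewed diff ---
# The dict-key behaviour exercised above, seen through the public complete() call.
ip = get_ipython()
ip.user_ns["d"] = {"abc": None, "a word": None}
_, matches = ip.Completer.complete(line_buffer="d['a")
# matches include 'abc' and 'a word'; no closing quote or bracket is appended, so the
# user can keep typing inside the subscript.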
933 def test_dict_key_completion_contexts(self):
962 def test_dict_key_completion_contexts(self):
934 """Test expression contexts in which dict key completion occurs"""
963 """Test expression contexts in which dict key completion occurs"""
935 ip = get_ipython()
964 ip = get_ipython()
936 complete = ip.Completer.complete
965 complete = ip.Completer.complete
937 d = {"abc": None}
966 d = {"abc": None}
938 ip.user_ns["d"] = d
967 ip.user_ns["d"] = d
939
968
940 class C:
969 class C:
941 data = d
970 data = d
942
971
943 ip.user_ns["C"] = C
972 ip.user_ns["C"] = C
944 ip.user_ns["get"] = lambda: d
973 ip.user_ns["get"] = lambda: d
945
974
946 def assert_no_completion(**kwargs):
975 def assert_no_completion(**kwargs):
947 _, matches = complete(**kwargs)
976 _, matches = complete(**kwargs)
948 nt.assert_not_in("abc", matches)
977 nt.assert_not_in("abc", matches)
949 nt.assert_not_in("abc'", matches)
978 nt.assert_not_in("abc'", matches)
950 nt.assert_not_in("abc']", matches)
979 nt.assert_not_in("abc']", matches)
951 nt.assert_not_in("'abc'", matches)
980 nt.assert_not_in("'abc'", matches)
952 nt.assert_not_in("'abc']", matches)
981 nt.assert_not_in("'abc']", matches)
953
982
954 def assert_completion(**kwargs):
983 def assert_completion(**kwargs):
955 _, matches = complete(**kwargs)
984 _, matches = complete(**kwargs)
956 nt.assert_in("'abc'", matches)
985 nt.assert_in("'abc'", matches)
957 nt.assert_not_in("'abc']", matches)
986 nt.assert_not_in("'abc']", matches)
958
987
959 # no completion after string closed, even if reopened
988 # no completion after string closed, even if reopened
960 assert_no_completion(line_buffer="d['a'")
989 assert_no_completion(line_buffer="d['a'")
961 assert_no_completion(line_buffer='d["a"')
990 assert_no_completion(line_buffer='d["a"')
962 assert_no_completion(line_buffer="d['a' + ")
991 assert_no_completion(line_buffer="d['a' + ")
963 assert_no_completion(line_buffer="d['a' + '")
992 assert_no_completion(line_buffer="d['a' + '")
964
993
965 # completion in non-trivial expressions
994 # completion in non-trivial expressions
966 assert_completion(line_buffer="+ d[")
995 assert_completion(line_buffer="+ d[")
967 assert_completion(line_buffer="(d[")
996 assert_completion(line_buffer="(d[")
968 assert_completion(line_buffer="C.data[")
997 assert_completion(line_buffer="C.data[")
969
998
970 # greedy flag
999 # greedy flag
971 def assert_completion(**kwargs):
1000 def assert_completion(**kwargs):
972 _, matches = complete(**kwargs)
1001 _, matches = complete(**kwargs)
973 nt.assert_in("get()['abc']", matches)
1002 nt.assert_in("get()['abc']", matches)
974
1003
975 assert_no_completion(line_buffer="get()[")
1004 assert_no_completion(line_buffer="get()[")
976 with greedy_completion():
1005 with greedy_completion():
977 assert_completion(line_buffer="get()[")
1006 assert_completion(line_buffer="get()[")
978 assert_completion(line_buffer="get()['")
1007 assert_completion(line_buffer="get()['")
979 assert_completion(line_buffer="get()['a")
1008 assert_completion(line_buffer="get()['a")
980 assert_completion(line_buffer="get()['ab")
1009 assert_completion(line_buffer="get()['ab")
981 assert_completion(line_buffer="get()['abc")
1010 assert_completion(line_buffer="get()['abc")
982
1011
983 def test_dict_key_completion_bytes(self):
1012 def test_dict_key_completion_bytes(self):
984 """Test handling of bytes in dict key completion"""
1013 """Test handling of bytes in dict key completion"""
985 ip = get_ipython()
1014 ip = get_ipython()
986 complete = ip.Completer.complete
1015 complete = ip.Completer.complete
987
1016
988 ip.user_ns["d"] = {"abc": None, b"abd": None}
1017 ip.user_ns["d"] = {"abc": None, b"abd": None}
989
1018
990 _, matches = complete(line_buffer="d[")
1019 _, matches = complete(line_buffer="d[")
991 nt.assert_in("'abc'", matches)
1020 nt.assert_in("'abc'", matches)
992 nt.assert_in("b'abd'", matches)
1021 nt.assert_in("b'abd'", matches)
993
1022
994 if False: # not currently implemented
1023 if False: # not currently implemented
995 _, matches = complete(line_buffer="d[b")
1024 _, matches = complete(line_buffer="d[b")
996 nt.assert_in("b'abd'", matches)
1025 nt.assert_in("b'abd'", matches)
997 nt.assert_not_in("b'abc'", matches)
1026 nt.assert_not_in("b'abc'", matches)
998
1027
999 _, matches = complete(line_buffer="d[b'")
1028 _, matches = complete(line_buffer="d[b'")
1000 nt.assert_in("abd", matches)
1029 nt.assert_in("abd", matches)
1001 nt.assert_not_in("abc", matches)
1030 nt.assert_not_in("abc", matches)
1002
1031
1003 _, matches = complete(line_buffer="d[B'")
1032 _, matches = complete(line_buffer="d[B'")
1004 nt.assert_in("abd", matches)
1033 nt.assert_in("abd", matches)
1005 nt.assert_not_in("abc", matches)
1034 nt.assert_not_in("abc", matches)
1006
1035
1007 _, matches = complete(line_buffer="d['")
1036 _, matches = complete(line_buffer="d['")
1008 nt.assert_in("abc", matches)
1037 nt.assert_in("abc", matches)
1009 nt.assert_not_in("abd", matches)
1038 nt.assert_not_in("abd", matches)
1010
1039
1011 def test_dict_key_completion_unicode_py3(self):
1040 def test_dict_key_completion_unicode_py3(self):
1012 """Test handling of unicode in dict key completion"""
1041 """Test handling of unicode in dict key completion"""
1013 ip = get_ipython()
1042 ip = get_ipython()
1014 complete = ip.Completer.complete
1043 complete = ip.Completer.complete
1015
1044
1016 ip.user_ns["d"] = {"a\u05d0": None}
1045 ip.user_ns["d"] = {"a\u05d0": None}
1017
1046
1018 # query using escape
1047 # query using escape
1019 if sys.platform != "win32":
1048 if sys.platform != "win32":
1020 # Known failure on Windows
1049 # Known failure on Windows
1021 _, matches = complete(line_buffer="d['a\\u05d0")
1050 _, matches = complete(line_buffer="d['a\\u05d0")
1022 nt.assert_in("u05d0", matches) # tokenized after \\
1051 nt.assert_in("u05d0", matches) # tokenized after \\
1023
1052
1024 # query using character
1053 # query using character
1025 _, matches = complete(line_buffer="d['a\u05d0")
1054 _, matches = complete(line_buffer="d['a\u05d0")
1026 nt.assert_in("a\u05d0", matches)
1055 nt.assert_in("a\u05d0", matches)
1027
1056
1028 with greedy_completion():
1057 with greedy_completion():
1029 # query using escape
1058 # query using escape
1030 _, matches = complete(line_buffer="d['a\\u05d0")
1059 _, matches = complete(line_buffer="d['a\\u05d0")
1031 nt.assert_in("d['a\\u05d0']", matches) # tokenized after \\
1060 nt.assert_in("d['a\\u05d0']", matches) # tokenized after \\
1032
1061
1033 # query using character
1062 # query using character
1034 _, matches = complete(line_buffer="d['a\u05d0")
1063 _, matches = complete(line_buffer="d['a\u05d0")
1035 nt.assert_in("d['a\u05d0']", matches)
1064 nt.assert_in("d['a\u05d0']", matches)
1036
1065
1037 @dec.skip_without("numpy")
1066 @dec.skip_without("numpy")
1038 def test_struct_array_key_completion(self):
1067 def test_struct_array_key_completion(self):
1039 """Test dict key completion applies to numpy struct arrays"""
1068 """Test dict key completion applies to numpy struct arrays"""
1040 import numpy
1069 import numpy
1041
1070
1042 ip = get_ipython()
1071 ip = get_ipython()
1043 complete = ip.Completer.complete
1072 complete = ip.Completer.complete
1044 ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
1073 ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
1045 _, matches = complete(line_buffer="d['")
1074 _, matches = complete(line_buffer="d['")
1046 nt.assert_in("hello", matches)
1075 nt.assert_in("hello", matches)
1047 nt.assert_in("world", matches)
1076 nt.assert_in("world", matches)
1048 # complete on the numpy struct itself
1077 # complete on the numpy struct itself
1049 dt = numpy.dtype(
1078 dt = numpy.dtype(
1050 [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
1079 [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
1051 )
1080 )
1052 x = numpy.zeros(2, dtype=dt)
1081 x = numpy.zeros(2, dtype=dt)
1053 ip.user_ns["d"] = x[1]
1082 ip.user_ns["d"] = x[1]
1054 _, matches = complete(line_buffer="d['")
1083 _, matches = complete(line_buffer="d['")
1055 nt.assert_in("my_head", matches)
1084 nt.assert_in("my_head", matches)
1056 nt.assert_in("my_data", matches)
1085 nt.assert_in("my_data", matches)
1057 # complete on a nested level
1086 # complete on a nested level
1058 with greedy_completion():
1087 with greedy_completion():
1059 ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
1088 ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
1060 _, matches = complete(line_buffer="d[1]['my_head']['")
1089 _, matches = complete(line_buffer="d[1]['my_head']['")
1061 nt.assert_true(any(["my_dt" in m for m in matches]))
1090 nt.assert_true(any(["my_dt" in m for m in matches]))
1062 nt.assert_true(any(["my_df" in m for m in matches]))
1091 nt.assert_true(any(["my_df" in m for m in matches]))
1063
1092
1064 @dec.skip_without("pandas")
1093 @dec.skip_without("pandas")
1065 def test_dataframe_key_completion(self):
1094 def test_dataframe_key_completion(self):
1066 """Test dict key completion applies to pandas DataFrames"""
1095 """Test dict key completion applies to pandas DataFrames"""
1067 import pandas
1096 import pandas
1068
1097
1069 ip = get_ipython()
1098 ip = get_ipython()
1070 complete = ip.Completer.complete
1099 complete = ip.Completer.complete
1071 ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
1100 ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
1072 _, matches = complete(line_buffer="d['")
1101 _, matches = complete(line_buffer="d['")
1073 nt.assert_in("hello", matches)
1102 nt.assert_in("hello", matches)
1074 nt.assert_in("world", matches)
1103 nt.assert_in("world", matches)
1075
1104
1076 def test_dict_key_completion_invalids(self):
1105 def test_dict_key_completion_invalids(self):
1077 """Smoke test cases dict key completion can't handle"""
1106 """Smoke test cases dict key completion can't handle"""
1078 ip = get_ipython()
1107 ip = get_ipython()
1079 complete = ip.Completer.complete
1108 complete = ip.Completer.complete
1080
1109
1081 ip.user_ns["no_getitem"] = None
1110 ip.user_ns["no_getitem"] = None
1082 ip.user_ns["no_keys"] = []
1111 ip.user_ns["no_keys"] = []
1083 ip.user_ns["cant_call_keys"] = dict
1112 ip.user_ns["cant_call_keys"] = dict
1084 ip.user_ns["empty"] = {}
1113 ip.user_ns["empty"] = {}
1085 ip.user_ns["d"] = {"abc": 5}
1114 ip.user_ns["d"] = {"abc": 5}
1086
1115
1087 _, matches = complete(line_buffer="no_getitem['")
1116 _, matches = complete(line_buffer="no_getitem['")
1088 _, matches = complete(line_buffer="no_keys['")
1117 _, matches = complete(line_buffer="no_keys['")
1089 _, matches = complete(line_buffer="cant_call_keys['")
1118 _, matches = complete(line_buffer="cant_call_keys['")
1090 _, matches = complete(line_buffer="empty['")
1119 _, matches = complete(line_buffer="empty['")
1091 _, matches = complete(line_buffer="name_error['")
1120 _, matches = complete(line_buffer="name_error['")
1092 _, matches = complete(line_buffer="d['\\") # incomplete escape
1121 _, matches = complete(line_buffer="d['\\") # incomplete escape
1093
1122
1094 def test_object_key_completion(self):
1123 def test_object_key_completion(self):
1095 ip = get_ipython()
1124 ip = get_ipython()
1096 ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])
1125 ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])
1097
1126
1098 _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
1127 _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
1099 nt.assert_in("qwerty", matches)
1128 nt.assert_in("qwerty", matches)
1100 nt.assert_in("qwick", matches)
1129 nt.assert_in("qwick", matches)
1101
1130
1102 def test_class_key_completion(self):
1131 def test_class_key_completion(self):
1103 ip = get_ipython()
1132 ip = get_ipython()
1104 NamedInstanceClass("qwerty")
1133 NamedInstanceClass("qwerty")
1105 NamedInstanceClass("qwick")
1134 NamedInstanceClass("qwick")
1106 ip.user_ns["named_instance_class"] = NamedInstanceClass
1135 ip.user_ns["named_instance_class"] = NamedInstanceClass
1107
1136
1108 _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
1137 _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
1109 nt.assert_in("qwerty", matches)
1138 nt.assert_in("qwerty", matches)
1110 nt.assert_in("qwick", matches)
1139 nt.assert_in("qwick", matches)
1111
1140
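# --- Illustrative sketch, not part of the reviewed diff ---
# The protocol behind the two tests above: KeyCompletable and NamedInstanceClass
# (defined earlier in this module) are assumed to expose their keys through the
# documented _ipython_key_completions_ hook, which any object can implement.
class MyContainer:
    def __init__(self, keys):
        self._keys = list(keys)

    def __getitem__(self, key):
        return self._keys.index(key)

    def _ipython_key_completions_(self):
        # strings returned here are offered when completing obj['<tab>
        return self._keys

# get_ipython().user_ns["obj"] = MyContainer(["qwerty", "qwick"])
# completing "obj['qw" would then offer 'qwerty' and 'qwick'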
1112 def test_tryimport(self):
1141 def test_tryimport(self):
1113 """
1142 """
1114 Test that try_import doesn't crash on a trailing dot, and imports the modules before it.
1143 Test that try_import doesn't crash on a trailing dot, and imports the modules before it.
1115 """
1144 """
1116 from IPython.core.completerlib import try_import
1145 from IPython.core.completerlib import try_import
1117
1146
1118 assert try_import("IPython.")
1147 assert try_import("IPython.")
1119
1148
1120 def test_aimport_module_completer(self):
1149 def test_aimport_module_completer(self):
1121 ip = get_ipython()
1150 ip = get_ipython()
1122 _, matches = ip.complete("i", "%aimport i")
1151 _, matches = ip.complete("i", "%aimport i")
1123 nt.assert_in("io", matches)
1152 nt.assert_in("io", matches)
1124 nt.assert_not_in("int", matches)
1153 nt.assert_not_in("int", matches)
1125
1154
1126 def test_nested_import_module_completer(self):
1155 def test_nested_import_module_completer(self):
1127 ip = get_ipython()
1156 ip = get_ipython()
1128 _, matches = ip.complete(None, "import IPython.co", 17)
1157 _, matches = ip.complete(None, "import IPython.co", 17)
1129 nt.assert_in("IPython.core", matches)
1158 nt.assert_in("IPython.core", matches)
1130 nt.assert_not_in("import IPython.core", matches)
1159 nt.assert_not_in("import IPython.core", matches)
1131 nt.assert_not_in("IPython.display", matches)
1160 nt.assert_not_in("IPython.display", matches)
1132
1161
1133 def test_import_module_completer(self):
1162 def test_import_module_completer(self):
1134 ip = get_ipython()
1163 ip = get_ipython()
1135 _, matches = ip.complete("i", "import i")
1164 _, matches = ip.complete("i", "import i")
1136 nt.assert_in("io", matches)
1165 nt.assert_in("io", matches)
1137 nt.assert_not_in("int", matches)
1166 nt.assert_not_in("int", matches)
1138
1167
1139 def test_from_module_completer(self):
1168 def test_from_module_completer(self):
1140 ip = get_ipython()
1169 ip = get_ipython()
1141 _, matches = ip.complete("B", "from io import B", 16)
1170 _, matches = ip.complete("B", "from io import B", 16)
1142 nt.assert_in("BytesIO", matches)
1171 nt.assert_in("BytesIO", matches)
1143 nt.assert_not_in("BaseException", matches)
1172 nt.assert_not_in("BaseException", matches)
1144
1173
1145 def test_snake_case_completion(self):
1174 def test_snake_case_completion(self):
1146 ip = get_ipython()
1175 ip = get_ipython()
1147 ip.Completer.use_jedi = False
1176 ip.Completer.use_jedi = False
1148 ip.user_ns["some_three"] = 3
1177 ip.user_ns["some_three"] = 3
1149 ip.user_ns["some_four"] = 4
1178 ip.user_ns["some_four"] = 4
1150 _, matches = ip.complete("s_", "print(s_f")
1179 _, matches = ip.complete("s_", "print(s_f")
1151 nt.assert_in("some_three", matches)
1180 nt.assert_in("some_three", matches)
1152 nt.assert_in("some_four", matches)
1181 nt.assert_in("some_four", matches)
1153
1182
1154 def test_mix_terms(self):
1183 def test_mix_terms(self):
1155 ip = get_ipython()
1184 ip = get_ipython()
1156 from textwrap import dedent
1185 from textwrap import dedent
1157
1186
1158 ip.Completer.use_jedi = False
1187 ip.Completer.use_jedi = False
1159 ip.ex(
1188 ip.ex(
1160 dedent(
1189 dedent(
1161 """
1190 """
1162 class Test:
1191 class Test:
1163 def meth(self, meth_arg1):
1192 def meth(self, meth_arg1):
1164 print("meth")
1193 print("meth")
1165
1194
1166 def meth_1(self, meth1_arg1, meth1_arg2):
1195 def meth_1(self, meth1_arg1, meth1_arg2):
1167 print("meth1")
1196 print("meth1")
1168
1197
1169 def meth_2(self, meth2_arg1, meth2_arg2):
1198 def meth_2(self, meth2_arg1, meth2_arg2):
1170 print("meth2")
1199 print("meth2")
1171 test = Test()
1200 test = Test()
1172 """
1201 """
1173 )
1202 )
1174 )
1203 )
1175 _, matches = ip.complete(None, "test.meth(")
1204 _, matches = ip.complete(None, "test.meth(")
1176 nt.assert_in("meth_arg1=", matches)
1205 nt.assert_in("meth_arg1=", matches)
1177 nt.assert_not_in("meth2_arg1=", matches)
1206 nt.assert_not_in("meth2_arg1=", matches)